add all my posts

master
Rachel Lambda Samuelsson 2022-01-27 19:44:11 +01:00
parent 14218bbce5
commit a21faac1e0
10 changed files with 723 additions and 1 deletions

View File

@ -0,0 +1,13 @@
<figure>
{% if include.url %}
<a href="{{ include.url }}" title="{{ include.title }}" target="_blank">
{% endif %}
<img src="/assets/img/{{ include.name }}" alt="{{ include.title }}"/>
{% if include.url %}
</a>
{% endif %}
{% if include.caption %}
<figcaption>{{ include.caption }}</figcaption>
{% endif %}
</figure>

View File

@ -7,7 +7,7 @@ layout: centered
</article>
<footer>
<hr>
<h3>Info</h3>
<h2>Info</h2>
<ul>
<li>Published {{ page.date | date_to_long_string }}</li>
<li>Last compiled {{ site.time | date_to_long_string }}</li>

View File

@ -0,0 +1,35 @@
---
layout: post
title: "Librebooting the macbook 2,1"
---
{% include image.html title="libreboot boot screen" name="clibreboot.png" caption="My macbook showing its libreboot boot screen" %}
<!--more-->
## What and why?
Recently I've taken an interest in [libreboot](https://www.libreboot.org), an open source BIOS alternative. The main reasons for my interest in libreboot are privacy and it being open source. Now, how could switching your BIOS affect your privacy? Well, on all modern systems there is a backdoor of sorts put there by the CPU manufacturer. Intel, whose CPUs most laptops use, has the Intel Management Engine, or IME. The IME is a separate chip constantly running on your computer (even when it's turned off). It has access to your entire system, including the internet, a clear invasion of privacy.
## The installation process
Unlike most of the computers that libreboot supports, you don't need to physically open the macbook 2,1 to flash libreboot; instead you simply run an installation script from the computer and then reboot. However, installing linux itself on the macbook was a pain.
At first I tried installation media I had lying around, but neither debian nor arch would boot on the mac. Since mac OS comes on a disc I thought that the reason my installation media wasn't booting was that it was booted from a USB stick and not the CD/DVD drive. Luckily this wasn't the case, and after doing some digging I found [this](https://wiki.debian.org/InstallingDebianOn/Apple/MacBook/2-1) debian wiki page which explained that the computer only boots i386-EFI/32-bit UEFI partitions. Since I knew that after installing libreboot I'd be able to boot other devices, I simply searched for a 32-bit UEFI live distro and found a live image of fedora which I could boot on the macbook.
After this it ran pretty smoothly, the process that is, not fedora, which ran slow as syrup even though I was only working in the tty. Slow speeds are to be expected with live USBs though, so I couldn't really complain. Next up I downloaded the libreboot utils and roms using wget from the tty and ran the installation script, which immediately returned an error message. I asked around on [the libreboot irc](https://webchat.freenode.net/#libreboot) and they referred me to [this](https://libreboot.org/faq.html#flashrom-complains-about-devmem-access) part of the libreboot FAQ, and after following the instructions the installer ran flawlessly. After a while of running it showed me the magical message I'd been waiting for: `Verifying flash... VERIFIED.`
After rebooting my computer I quickly noticed an issue: it didn't boot. At first I thought that I had bricked the mac, which made me feel pretty bummed out. However, I once again asked for help in [the libreboot irc](https://webchat.freenode.net/#libreboot) and was advised to try moving the RAM sticks around. Luckily Apple hadn't yet decided to hide the RAM sticks from the user back in 2007, so I quickly got to testing. Trying the different combinations and seeing them repeatedly fail made me think that I really had bricked it, but the very last combination worked and the computer booted. Seeing the familiar picture of Tux and GNU felt amazing. After this I installed [parabola](https://en.wikipedia.org/wiki/Parabola_GNU/Linux-libre) on the mac, a 100% free (open source) linux distro.
## Making the macbook a bit more usable
As I mentioned before, the macbook is far from a dream laptop, but I've tried to make it slightly more usable. The computer is quite slow, though this doesn't really affect me since I use i3 in a minimal distro. Something that does bother me is the battery life. I tried installing [tlp](https://wiki.archlinux.org/index.php/TLP), a power manager for linux (and one of the best programs ever), but even with tlp the battery only lasts around two and a half hours. Another problem with the macbook is how hot it gets. To help with this I drilled holes through the bottom in an attempt to improve the airflow.
{% include image.html title="macbook 2,1 with holes" name="clibreboot-holes.png" caption="Bottom side of my macbook showing the holes" %}
To drill the holes I opened the computer (Apple still let you do that in 2007) and removed the CD/DVD drive, since I'd never use it and it makes an annoying noise on startup. Then I drilled holes where the drive used to be. This helped a bit with the heating issue; however, the drive was of course not what generated the heat. To get a better effect I would have to drill holes under the CPU, but removing the motherboard is a bit too tedious for me to consider at the moment.
## Afterthoughts
In hindsight I made a lot of stupid mistakes that ultimately cost me a lot of time, but I think the process as a whole has been a valuable experience. The macbook has horrible battery life and heats up really fast, so I don't think I'll use it all that much, but I already knew all that when I bought it. It's been fun learning about the libreboot project, and I plan to buy a thinkpad x200 in the future and libreboot that too, since I'd actually be able to use it as my day-to-day laptop.
Last edited: 2019-12-02 12:14

View File

@ -0,0 +1,62 @@
---
layout: post
title: "Go download a car"
---
"Piracy, It's a crime \[...\] You wouldn't steal a car." These are the words of an anti-piracy campaign launched by the Motion Picture Association, [an organization which works for multi-billion-dollar corporations such as Disney, Netflix, Paramount, Sony, Universal, and Warner Bros](https://www.motionpictures.org/who-we-are/). Entertaining as these words may be, they also excellently illustrate how ludicrous the arguments against piracy can be.
<!--more-->
## There is no value in a digital copy
Why is it that comparing piracy and car theft is so silly? It is a question with an obvious answer: a car is a physical object, and you cannot just download physical objects. To produce each car, heaps of labour are required to extract and refine raw materials, eventually resulting in a car. It is in all of this labour that the value of the car lies. This is in stark contrast to the digital media which the campaign directly compares to said car. Digital files require no additional labour to be produced, one simply copies the file; therefore each copy of the file holds no value. This in turn means that one cannot compare piracy to theft, as there is nothing being stolen. During piracy no one loses anything, a new copy is simply created.
## Piracy does not directly correspond to lost sales
When discussing piracy it is commonplace for the opposition to bring forth statistics of massive revenue loss, citing an estimated number of pirated copies multiplied by the manufacturer's suggested retail price (MSRP). This logic is faulty, however, as it assumes that each instance of piracy corresponds to a lost sale. To claim that all, or even most, people who downloaded the full discography of the Rolling Stones off Napster would have purchased all these records had Napster not been available is unreasonable. For this argument to be sound, everyone who would be willing to receive something for free would also have to be willing to pay for it.
The truth is that most pirates would not choose to pirate if they were willing to pay the price of a product. This view is reflected in the following quote from Gabe Newell, co-founder of software giant Valve:
> The easiest way to stop piracy is not by putting antipiracy technology to work. It is by giving those people a service that is better than what they are receiving from the pirates.
In other words, the best way to stop pirates is not to "fight piracy", but rather to provide something which is worth paying for.
## Disney does not need your money
As a response to the first section some readers might have thought something akin to "Well, there is still value in the work put into creating the original file", which is true. Even though digital copies require no labour to create, the original file does. Still, one must recognise the difference between stealing a physical object and copying a file. In the first case the creator loses all the work put into the object; in the case of piracy nothing is lost, even if the creator isn't compensated.
The original work does still warrant value, though, and it is important that those who produce enjoyable media are properly compensated for their work. So please, if you like a product and you have the money, support independent creators. Artists, indie bands and indie game developers are hardworking people who need their money.
That being said, Disney does not need your money. Neither does Netflix, Sony, Paramount or any of the other corporations which the Motion Picture Association represents. These are multi-billion-dollar corporations which underpay their workers and bring in massive amounts of revenue. Even if this article were successful in convincing thousands to become pirates, Disney's marketing teams would not even notice.
## A quick guide to pirating
This is going to be a quick guide to where you can pirate software as well as where you can support independent creators.
First of all, if you don't already have one, download a torrenting client. If you're not terribly "techy" I recommend the free and open source qBittorrent. If you are "techy", I won't tell you what to do.
### Pirate sites
* [fitgirl](https://fitgirl-repacks.site/) - Games
* [z-library](https://z-lib.org/) - Ebooks and Publications
* [libgen](https://libgen.nl/) - Ebooks and Publications
* [sci-hub](https://sci-hub.scihubtw.tw/) - Publications
* [yts](https://yts.mx/) - Movies
* [Soulseek](https://nicotine-plus.org/) - Music
* [nyaa.si](https://nyaa.si) - Anime, Manga, etc
* [1337X](https://1337x.gd/) - Everything
* [The Pirate Bay](http://thepiratebay.org/) - Everything
## Support independent creators!
If you find music you like a great way to support independent creators is to purchase their music on [bandcamp](https://bandcamp.com)!
A great website for finding and supporting indie game developers is [itch.io](https://itch.io/)!

View File

@ -0,0 +1,280 @@
---
layout: post
title: "Writing portable Makefiles"
---
Makefiles are a great, portable tool for compiling your programs which any POSIX-compliant system will provide. However, a lot of Makefiles are GNU Makefiles, which are less portable. This blog post will not only explain how to write Makefiles, but also how to emulate GNU make features.
<!--more-->
## An introduction to Makefiles
In a Makefile, you can specify variables, or macros as they are called in the world of Makefiles. All environment variables are also available as macros.
```make
half_name=hello
in_files=hello.o world.o
```
These can be referenced later by using `$(NAME)` or `${NAME}`.
```make
full_name=$(half_name)_world
```
You're also able to write comments by starting lines with a `#`.
```make
# This line is a comment which will be ignored
```
Targets are used to specify how files are made. A target is defined by the following syntax:
```make
filename: depends
action
action2
action3
```
This tells make that the file "filename" needs the file "depends" to be present in order to be made. To make "filename", make runs the action lines in the system shell, each line in a separate shell instance. Macros can be used in target names, depends, and actions. Much like in shell, macros which contain spaces will be split when evaluated, meaning a list of depends can be stored in a macro.
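Because each action line gets its own shell, state such as the current directory does not carry over between lines. A minimal sketch (target name hypothetical):

```make
# each action line runs in its own shell,
# so the cd below does not affect the pwd line
show:
	cd /tmp
	pwd
```

Running `make show` prints the directory make was invoked from, not `/tmp`.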
We can now use the macros we defined earlier to specify how to compile our hello world program:
```make
$(full_name): $(in_files)
$(CC) $(CFLAGS) -o $@ $(in_files)
```
Here I used some special macros. The `$(CC)` macro lets users specify their preferred C compiler in the `CC` environment variable; if it is not set it defaults to `cc` or `c99`. This is preferable to hard coding `gcc`, `clang`, or whatever compiler you might use, since not everyone uses the same tools as you. The `$(CFLAGS)` macro is added to let the user specify flags for the C compiler in their `CFLAGS` environment variable. Lastly, and maybe most importantly, the `$@` macro evaluates to the name of the current target, which in this case is `$(full_name)`, which in turn evaluates to `hello_world`. Ultimately this target will run the following shell command:
```
cc -o hello_world hello.o world.o
```
But hold on, what about our hello.o and world.o files? Well, if they're already present make will happily use them; however, since we want to automate our build process, we should specify how to build them too.
```make
hello.o: hello.c
$(CC) -c $(CFLAGS) -o $@ hello.c
world.o: world.c
$(CC) -c $(CFLAGS) -o $@ world.c
```
Now running make will run the following:
```
cc -c -o hello.o hello.c
cc -c -o world.o world.c
cc -o hello_world hello.o world.o
```
Now make knows how to make our full `hello_world` program, but why do we specify that `hello.o` depends on `hello.c`? We write the source ourselves, so surely there is no need to tell make this? Well, the beauty of make is that it checks the last-modified dates of depends to decide whether to rebuild targets. In other words, if we edit hello.c and rerun make it will only run the following:
```
cc -c -o hello.o hello.c
cc -o hello_world hello.o world.o
```
This is because the `world.o` target is already up to date.
It's also worth mentioning that you can choose which target to make by running `make TARGET_NAME`, and that by default the first defined target is the one run.
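For example, it is common to add a `clean` target for removing build products; a sketch matching our hello world project:

```make
# run explicitly with `make clean`; never run by default
# since it is not the first target
clean:
	rm -f hello_world hello.o world.o
```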
We now have a complete Makefile for our hello world project!
```make
# Hello world Makefile
half_name=hello
in_files=hello.o world.o
full_name=$(half_name)_world
# main target
$(full_name): $(in_files)
$(CC) $(CFLAGS) -o $@ $(in_files)
# depends
hello.o: hello.c
$(CC) -c $(CFLAGS) -o $@ hello.c
world.o: world.c
$(CC) -c $(CFLAGS) -o $@ world.c
```
Of course, this isn't a complete guide to writing Makefiles but it should give you enough of a grasp on it to understand the rest of this post.
## A powerful GNU Makefile
The following is a Makefile written by a now-graduated senior of mine, slightly modified by me to show more non-standard syntax. It is very powerful, and I used it as a template for writing my own Makefiles for a long time. However, it uses a lot of GNU extensions, making it a perfect example of how to replace them. I've added some comments throughout the file which explain the nonstandard syntax.
```make
# GNU make uses ?= to define a macro if not already set by an environment
# variable. It also allows spaces between keys = and values.
TARGET_EXEC ?= a.out
BUILD_DIR ?= ./build
SRC_DIRS ?= ./src
# GNU make uses := to assign variable immediately instead of when needed.
# GNU make adds the syntax $(shell command) to assign a macro the output
# of a shell command.
SRCS := $(shell find $(SRC_DIRS) -name *.cpp -or -name *.c -or -name *.s)
# GNU make allows for string substitution in variables. Here each word in
# SRCS is represented by %, which is replaced by $(BUILD_DIR)/%.o
# meaning we get a full list of object files from our list of source files.
OBJS := $(SRCS:%=$(BUILD_DIR)/%.o)
INC_DIRS := $(shell find $(SRC_DIRS) -type d)
# GNU make adds the syntax $(addprefix prefix,words) which is used to
# add a prefix to the beginning of each word in words.
INC_FLAGS := $(addprefix -I,$(INC_DIRS))
CPPFLAGS ?= $(INC_FLAGS) -MMD -MP -pg -ggdb -std=c99 -pedantic -O2
LDFLAGS = -lm -lSDL2 -lSDL2_gfx -lSDL2_image -lSDL2_ttf -pg
CC = gcc
CXX = g++
# GNU make evaluates $^ to the full list of depends for the current target
$(BUILD_DIR)/$(TARGET_EXEC): $(OBJS)
$(CC) $^ -o $@ $(LDFLAGS)
# GNU make allows for generic targets matching patterns, here all files
# in $(BUILD_DIR) with the extension .s.o are targeted.
# assembly source
$(BUILD_DIR)/%.s.o: %.s
# GNU make adds the syntax $(dir file) used for getting the name
# of the directory a file resides in.
$(MKDIR_P) $(dir $@)
# GNU make adds the $< macro to all targets. This macro evaluates
# to the first dependency. In standard make this macro is only
# defined for inference rules (more on that later).
$(AS) $(ASFLAGS) -c $< -o $@
# c source
$(BUILD_DIR)/%.c.o: %.c
$(MKDIR_P) $(dir $@)
$(CC) $(CPPFLAGS) $(CFLAGS) -c $< -o $@
# c++ source
$(BUILD_DIR)/%.cpp.o: %.cpp
$(MKDIR_P) $(dir $@)
$(CXX) $(CPPFLAGS) $(CXXFLAGS) -c $< -o $@
# The special target .PHONY is standard and tells make that one or more
# targets do not correspond to a file. If this was not here make would
# not run the clean target if a file named clean existed.
.PHONY: clean profile stacktrace
clean:
$(RM) -r $(BUILD_DIR)
# In GNU make adding macro definitions after a target name defines these
# macros for that particular target.
profile: CPPFLAGS += -pg
profile: LDFLAGS += -pg
profile: $(BUILD_DIR)/$(TARGET_EXEC)
stacktrace: CPPFLAGS += -v -da -Q
stacktrace: $(BUILD_DIR)/$(TARGET_EXEC)
```
This Makefile will compile or assemble all C, C++, and assembly source in the source directory and then link it, making it easy to reuse since there is no need to specify individual files.
### Special targets
Before we start translating this to standard make I'm going to explain two special targets. `.PHONY` is one of many special targets; for a full list and their purposes refer to the POSIX make definition. The two we're interested in here are `.POSIX` and `.SUFFIXES`. `.POSIX` is a target that should be defined before anything else in a standard Makefile; it tells make not to use any extensions which might collide with the standard. The `.SUFFIXES` target is used to specify the file extensions (suffixes) which make should recognize.
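Putting these together, a portable Makefile for a C project would typically begin with a preamble like this (a sketch, not taken from the file above):

```make
.POSIX:
# declare the suffixes our inference rules will handle
.SUFFIXES: .c .o
```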
### Generic targets and suffix rules
The generic rules that GNU make provides can be very useful. Thankfully, we can recreate them without any non-standard extensions. Let's look at a generic GNU make target:
```make
%.o: %.c
$(CC) -c $(CFLAGS) -o $@ $<
```
In standard make we can use the `.SUFFIXES` target to ensure make knows of our file extensions.
```make
.SUFFIXES: .o .c
.c.o:
$(CC) -c $(CFLAGS) -o $@ $<
```
Unfortunately it is not possible to specify a different output directory as in the GNU make example; however, this is rarely necessary.
### Internal macros
In the GNU make example we had this target:
```make
$(BUILD_DIR)/$(TARGET_EXEC): $(OBJS)
$(CC) $^ -o $@ $(LDFLAGS)
```
This uses the non-standard `$^` macro; instead of this we could reuse `$(OBJS)`.
```make
$(BUILD_DIR)/$(TARGET_EXEC): $(OBJS)
$(CC) $(OBJS) -o $@ $(LDFLAGS)
```
If the depends are not already a macro you'd have to type them out manually. In general, I try to keep dependencies in macros to make this process easy.
### Shell and substitution macros
Standard make cannot replicate these features by itself, but with a shell script and an include line the behavior can be recreated.
For example, to emulate the following behavior:
```make
BUILD_DIR ?= ./build
SRC_DIRS ?= ./src
SRCS := $(shell find $(SRC_DIRS) -name *.cpp -or -name *.c -or -name *.s)
OBJS := $(SRCS:%=$(BUILD_DIR)/%.o)
INC_DIRS := $(shell find $(SRC_DIRS) -type d)
INC_FLAGS := $(addprefix -I,$(INC_DIRS))
```
We could put this in our Makefile:
```make
include config.mk
```
and create a script containing this:
```sh
#!/bin/sh
BUILD_DIR="${BUILD_DIR:-./build}"
SRC_DIR="${SRC_DIRS:-./src}"
SRCS="$(find "$SRC_DIR" -name '*.cpp' -or -name '*.c' -or -name '*.s')"
OBJS="$(for SRC in $SRCS; do printf '%s ' "$BUILD_DIR/$SRC.o"; done; printf '\n')"
INC_DIRS="$(find "$SRC_DIR" -type d)"
INC_FLAGS="$(for DIR in $INC_DIRS; do printf '%s ' "-I$DIR"; done; printf '\n')"
cat > config.mk <<-EOF
SRC_DIR=$SRC_DIR
SRCS=$SRCS
OBJS=$OBJS
INC_DIRS=$INC_DIRS
INC_FLAGS=$INC_FLAGS
EOF
```
Then, after creating new files, one would rerun the script to recreate the configuration. This requires some manual work, but it also removes the need for make to run these shell commands on every invocation.
## Afterword
I hope that after reading this you, just like me, have realized that writing standard Makefiles is easy and that they can be just as powerful as GNU Makefiles.
## See also
[POSIX make definition](https://pubs.opengroup.org/onlinepubs/9699919799/utilities/make.html)
[A tutorial on portable Makefiles by Chris Wellons](https://nullprogram.com/blog/2017/08/20/)

View File

@ -0,0 +1,111 @@
---
layout: post
title: "Running Linux on the Thinkpad X13 Yoga (Gen 2)"
---
This is a rather short blog post on how to get the Thinkpad X13 Yoga (Gen 2) working properly on Linux. This includes making audio work, removing screen tears, making the screen auto rotate, and making the desktop a comfortable size. <!--more-->I'm writing this because there isn't much info out there on this system, and hopefully this prevents duplicated effort.
# Stylus and Touch
They work out of the box! No, seriously! If they don't work for you, check whether your distribution has a Wacom driver package and install that.
# Audio
Out of the box ALSA did not detect any devices whatsoever, even though `lspci` showed that the sound card had the Intel snd drivers loaded. The solution was to simply install the sof-firmware package; on Ubuntu the package is called firmware-sof-signed.
# Video stutters/lockups/cursor lag
Video mostly worked well for me, though occasionally the cursor would lag behind, creating ghost copies of itself, or terminals would freeze up for a second. I never had any issues with hardware acceleration though. This is solved by adding "i915.enable_psr=0" to the kernel command line. If you're using GRUB (if you don't know what this means, then you are) you need to append this inside the quotes of the `GRUB_CMDLINE_LINUX_DEFAULT` line in the file `/etc/default/grub`. After saving your changes run `update-grub`. If you're not using GRUB, or running a distro without `update-grub`, I'm going to assume you know how to do this.
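For instance, on an otherwise default configuration the edited line in `/etc/default/grub` might look like this ("quiet splash" stands in for whatever options your file already has):

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash i915.enable_psr=0"
```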
# HiDPI
I've got the 2K version of the X13 because it was cheaper (the more decked-out one was on sale). Needless to say, that many pixels on a 13" screen can cause some visibility issues.
To solve this you just need to adjust the DPI setting in the display options.
If you're not using a desktop environment, this is fixed by adding the following line to your ~/.Xresources.
```
Xft.dpi: 144
```
If you're using xinit/startx you'll also want to add this to your xinitrc (if it's not already there).
```
xrdb -merge "$HOME/.Xresources"
```
# Making the screen auto rotate
Note that you might not need this as some desktop environments handle this automatically.
Long story short, many other people have written bad, half-working scripts. I've hacked these together into a less bad, working script.
```
#!/bin/sh -e
# Auto rotate screen based on device orientation
# Might be necessary depending on your environment
export DISPLAY=:0
# put your display name here
DNAME=eDP1
# may change grep to match your touchscreen(s)
INDEVS="$(xinput | awk '/Wacom/ { split($8,a,"="); print a[2] }')"
rotate () {
ORIENTATION=$1
NEW_ROT="normal"
CTM="1 0 0 0 1 0 0 0 1"
# Set the actions to be taken for each possible orientation
case "$ORIENTATION" in
normal)
NEW_ROT="normal"
CTM="1 0 0 0 1 0 0 0 1"
;;
bottom-up)
NEW_ROT="inverted"
CTM="-1 0 1 0 -1 1 0 0 1"
;;
right-up)
NEW_ROT="right"
CTM="0 1 0 -1 0 1 0 0 1"
;;
left-up)
NEW_ROT="left"
CTM="0 -1 1 1 0 0 0 0 1"
;;
esac
xrandr --output "$DNAME" --rotate "$NEW_ROT"
for INDEV in $INDEVS; do
xinput set-prop "$INDEV" 'Coordinate Transformation Matrix' $CTM
done
}
monitor-sensor | while read -r line; do
case "$line" in
*orientation*)
set -- $line
rotate "$4"
;;
esac
done
```
To use this make sure to install iio-sensor-proxy, and then edit this line
```
DNAME=eDP1
```
to the name listed as "connected" in the output of the `xrandr` command (e.g. `xrandr | grep ' connected'`).
Then mark the file as executable and add it to your autostart.
If your wallpaper looks weird after rotating you can edit the script, adding a line which re-sets your wallpaper after this line.
```
xrandr --output "$DNAME" --rotate "$NEW_ROT"
```

View File

@ -0,0 +1,212 @@
---
layout: post
title: "A first look at SECD machines"
---
The SECD machine is a virtual machine designed to evaluate lambda expressions. Its purpose is to be a more natural target for compilers than assembly while maintaining reasonable execution speed. In this blog post you will get a quick intro to SECD machines and an overview of a simple implementation.
<!--more-->
## The SECD Machine
SECD stands for Stack, Environment, Control, Dump, all of which but the environment are stacks in the SECD machine. The machine operates by reading instructions from the control stack, which operate on it and the other stacks. A lambda is implemented as its body coupled with its environment. Application is done with the "apply" instruction, which pops a lambda off the stack and adds the next element of the stack to the lambda's environment, binding the variable of the lambda. The previous stacks are then pushed onto the dump stack. When the lambda has been fully reduced, the return instruction saves the top of the stack, the result of the reduction, and restores the stacks from the dump stack.
## The Modern SECD Machine
This approach, while sound, suffers from some performance issues. Thankfully, there are a number of improvements which can be made. The first of these is the use of De Bruijn indices. In De Bruijn index notation each variable is a number indexing the lambda it was bound at (counting outwards). Here are some examples:
$\lambda x.x \xRightarrow[]{\text{De Bruijn}} \lambda \#0$
$\lambda f.\lambda g.\lambda x. f(gx) \xRightarrow[]{\text{De Bruijn}} \lambda\lambda\lambda \#2(\#1\#0)$
The benefit of using De Bruijn notation is that, rather than a balanced binary tree, a stack may be used for the environment. A variable is then an index into the environment. An additional benefit is that the machine does not need to concern itself with alpha equivalence. As a side note, counting conventionally starts at $\#1$, not $\#0$; in a virtual machine, however, counting from $\#0$ is of course much more natural.
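To make the conversion concrete, here is a small sketch of De Bruijn conversion in Haskell (this converter is not part of the machine below; the type and function names are my own):

```haskell
import Data.List (elemIndex)

-- named lambda terms and their De Bruijn counterparts
data Named = Var String | Lam String Named | App Named Named
data DB    = DVar Int   | DLam DB          | DApp DB DB
  deriving (Eq, Show)

-- the environment lists binders innermost-first, so the index of a
-- variable is the number of lambdas between its use and its binder
toDB :: [String] -> Named -> DB
toDB env (Var x)   = maybe (error "free variable") DVar (elemIndex x env)
toDB env (Lam x b) = DLam (toDB (x : env) b)
toDB env (App f a) = DApp (toDB env f) (toDB env a)
```

For example, `toDB [] (Lam "x" (Var "x"))` yields `DLam (DVar 0)`, matching $\lambda \#0$ above.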
The second improvement to be made is to "dump" the dump stack. Rather than using the dump stack the current state will be pushed onto the stack as a closure which the return instruction can then restore when exiting a lambda.
The third improvement is to introduce a new, static "stack". The rationale for this is that many programs will include top level definitions which are in scope throughout the program. Including these definitions in the environment whenever a closure is created is a waste of both memory and processing power.
The fourth improvement is allowing multi-variable lambdas. In the case of the SECD machine using a multi-variable lambda rather than multiple single variable lambdas corresponds to not nesting closures, which would lead to redundant copies of the same environment being made. Note that this does not sacrifice currying, as will become clear in later sections.
## An implementation of a simple SECD machine
In this section a SECD machine implemented in Haskell will be introduced bit by bit. This machine is neither an efficient nor a powerful implementation; it serves only to demonstrate the core concepts of the SECD machine.
### Data structures
The machine will only be able to recognise two types of values: integers and closures.
```haskell
data Val
= I Int
| Clo Int [Val] [Inst] -- number of arguments, environment, instructions
deriving (Eq, Show)
```
The machine comes with the following instruction set.
```haskell
data Inst
= Const Int
| Global Int
| Local Int
| Closure Int Int -- number of args, number of instructions
| App
| Ret
| Dup
| Add
deriving (Eq, Show)
```
### The instructions
* Const pushes a constant onto the stack.
* Global pushes a global variable onto the stack.
* Local pushes a local variable onto the stack.
* Closure creates a closure taking a set number of arguments and consuming a set amount of instructions from the control stack. The resulting closure is pushed onto the stack.
* App pops a closure and an argument from the stack. If the closure takes more than one argument, the argument is pushed onto the closure's environment and the closure is pushed back onto the stack. If the closure takes only one argument, the closure will, rather than being pushed onto the stack, replace the current state: its instructions are placed in the control stack and its environment in the environment stack. A closure is then formed from the old control and environment stacks and pushed onto the stack.
* Ret pops the closure at the second position of the stack and installs it as the current control and environment stacks. The element at the top of the stack remains untouched, yielding the result of the application.
* Dup duplicates the element at the top of the stack. It's only included as a matter of convenience.
* Add pops the top two elements off the stack, adds them, and pushes the result back onto the stack.
#### Instruction Table
All stacks grow from right to left; that is, the leftmost element is at the top of the stack.
<table>
<tr>
<th align=center colspan="3">Before</th><th align=center colspan="3">After</th>
<tr>
<th>Control</th><th>Env</th><th>Stack</th><th>Control</th><th>Env</th><th>Stack</th>
</tr>
<tr>
<td>Const i : c</td><td>e</td><td>s</td><td>c</td><td>e</td><td>i : s</td>
</tr>
<tr>
<td>Global i : c</td><td>e</td><td>s</td><td>c</td><td>e</td><td>Globals[i] : s</td>
</tr>
<tr>
<td>Local i : c</td><td>e</td><td>s</td><td>c </td><td> e</td><td>e[i] : s</td>
</tr>
<tr>
<td>Closure n a : $c_1$ ... $c_n$ : c</td><td>e</td><td>s</td><td>c</td><td>e</td><td>Closure a {e} [$c_1$ ... $c_n$] : s</td>
</tr>
<tr>
<td>App : c</td><td>e</td><td>Closure {e'} [c'] : v : s</td><td>c'</td><td>v : e'</td><td>Closure$^0$ {e} [c] : s</td>
</tr>
<tr>
<td>App : c</td><td>e</td><td>Closure$^n$ {e'} [c'] : v : s</td><td>c</td><td>e</td><td>Closure$^{n - 1}$ {v : e'} [c'] : s</td>
</tr>
<tr>
<td>Ret : c </td><td>e</td><td>v : Closure$^0$ {e'} [c'] : s </td><td> c'</td><td>e' </td><td> v : s</td>
</tr>
<tr>
<td>Dup : c</td><td>e</td><td>v : s</td><td>c</td><td>e</td><td>v : v : s</td>
</tr>
<tr>
<td>Add : c </td><td>e</td><td>$v_1$ : $v_2$ : s</td><td>c</td><td>e</td><td>$v_1$ + $v_2$ : s</td>
</tr>
</table>
### An example
Consider the following lambda expression:
$(\lambda f.\lambda g.\lambda x. f(gx)) (\lambda x. x + 1) (\lambda x. x + 1) 0$
It composes a lambda which increments its input by one with an identical lambda, and then applies the result to 0. In order to run this on our SECD machine it must first be rewritten in De Bruijn index notation.
$(\lambda \lambda \lambda \#2(\#1\#0)) (\lambda 1 + \#0) (\lambda 1 + \#0)0$
Now, in order to create an instruction tape for the SECD machine, it must be rewritten using Closure for lambdas and Local $\#i$ for variables, keeping in mind to explicitly apply to and return from lambdas. Additionally it will have to be written in postfix form due to the nature of stack-based machines. After performing these steps the expression has been transformed into the following tape:
$Const 0 : Closure [ Const 1 : Local \#0 : Add : Ret ] : Closure [ Const 1 : Local \#0 : Add : Ret ] :$
$Closure^3 [ Local \#0 : Local \#1 : App : Local \#2 : App : Ret ] : App : App : App$
Reading this (or one of the previous lambda expressions) backwards you should be able to convince yourself of the validity of this translation.
This can now be translated almost directly into a list of instructions for the Haskell machine implementation; the only liberty taken is using Dup to duplicate the increment closure rather than constructing it twice.
```haskell
[ Const 0, Closure 1 4, Const 1, Local 0, Add, Ret, Dup
, Closure 3 6, Local 0, Local 1, App, Local 2, App, Ret
, App, App, App]
```
Which can then be evaluated in order to yield the normalized expression.
```
λ eval [] [ Const 0, Closure 1 4, Const 1, Local 0, Add, Ret, Dup
, Closure 3 6, Local 0, Local 1, App, Local 2, App, Ret
, App, App, App]
[I 2]
```
As expected, the expression reduces to 2.
## What's next?
In the future I plan to create a specification for an instruction set which allows for meaningful computations as well as IO. Then I wish to define an instruction encoding and binary object file format. Once all this is done an efficient C implementation could be written. Ideally I would also write a compiler to said binary format from something a bit more user friendly, perhaps an untyped or simply typed lambda calculus.
## The Haskell source
The implementation itself is trivial: little more than a Haskell translation of the instruction table above. It's ugly and does no error handling; it is only a demonstration piece.
```haskell
module SECD where
data Inst
= Const Int
| Global Int
| Local Int
| Closure Int Int -- args, scope
| App
| Ret
| Dup
| Add
deriving (Eq, Show)
data Val
= I Int
| Clo Int [Val] [Inst] -- args env code
deriving (Eq, Show)
vadd :: Val -> Val -> Val
vadd (I x1) (I x2) = I (x1 + x2)
vadd _ _ = error "attempted addition with closure"
data SEC = SEC [Val] [Val] [Inst] [Val] -- stack, environment, control, global
deriving Show
-- | Takes a global env, a list of instructions and returns and int
eval :: [Val] -> [Inst] -> [Val]
eval globals insts = go (SEC [] [] insts globals)
where
go :: SEC -> [Val]
go sec@(SEC stack env control global) = case control of
[] -> stack
(x:xs) -> case x of
Const i -> go (SEC (I i:stack) env xs global)
Global i -> go (SEC (global !! i:stack) env xs global)
Local i -> go (SEC (env !! i:stack) env xs global)
Closure a s -> go (SEC (Clo a env (take s xs):stack) env (drop s xs) global)
Ret -> let (v:(Clo 0 e c):st) = stack in go (SEC (v:st) e c global)
App -> case stack of
(Clo 1 e c:v:st) -> go (SEC (Clo 0 env xs:st) (v:e) c global)
(Clo n e c:v:st) -> go (SEC (Clo (n-1) (v:e) c:st) env xs global)
Dup -> let (v:st) = stack in go (SEC (v:v:st) env xs global)
Add -> let (x1:x2:st) = stack in go (SEC (vadd x1 x2:st) env xs global)
```
## Licensing Information
Feel free to use any of the code or concepts here in your own projects! The code can be considered free of copyright and all notions of liability and/or warranty.
## See Also
[Modern SECD Machine, University of Calgary](https://pages.cpsc.ucalgary.ca/~robin/class/521/lectures_lambda/secd.pdf) - Gave me the idea to scrap the D stack.
[SECD machine, Wikipedia](https://en.wikipedia.org/wiki/SECD_machine)

View File

@ -49,6 +49,15 @@ body {
background-color: $background;
}
img {
max-width: 100%;
}
figcaption {
text-align: center;
color: $text;
}
@import url(https://cdn.jsdelivr.net/npm/firacode@6.2.0/distr/fira_code.css);
code, .Agda { font-family: 'Fira Code', monospace; }

Binary file not shown.

After

Width:  |  Height:  |  Size: 28 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 41 KiB