Low-Fidelity Data Compression

Having data distributed spatially is advantageous in a robust framework like the Movable Feast Machine; however, it makes that data very difficult to transport when it is needed elsewhere in the system. Consider a situation in which some spatially sorted data, spread across a full tile, is needed three or four tiles away from where it currently sits. I can't think of a way to successfully move that much data as a whole when there are no global coordinates and the data has no knowledge of where it belongs relative to the rest of the sorted body of data.

The idea behind data compression is to create a dense and compact signature of what the data looks like and ship that much smaller piece in the direction it needs to go. The data can then be 'unpacked' with the signature and used in whatever manner is required on the receiving end. This project focuses on the compression part of the problem and tries to come up with a procedure that does just that.

Sketch of Required Elements

Four elements have been identified so far:


Attractor

The Attractor is a modified version of the original neuron element from the more realistic brain. These atoms transfer signals and generalize the input received by the Data Reader element. They have two states:

  1. Passive State:
    • Random diffusion
    • Create new Attractors from any Res encountered
  2. Active (broadcasting) State:
    • Output signal dissipates every activation
    • Broadcast signal to all other Attractors located strictly to the right
    • Select a random Attractor to the left (input) and one to the right (output) and move into their average y position to strengthen the signal between the two
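The two-state behavior above can be sketched in Python. This is only an illustration of the rules as described, not the actual element code; the class name, the signal threshold for "active", and the dissipation factor are all assumptions not given on this page.

```python
import random

DISSIPATION = 0.9  # assumed decay factor per activation (not specified here)

class Attractor:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.signal = 0.0  # zero signal -> passive state (assumed)

    def is_active(self):
        return self.signal > 0.0

    def step(self, neighbors, res_sites):
        """One event. Returns any newly created Attractors."""
        if not self.is_active():
            # Passive: random diffusion, and convert encountered Res into Attractors
            self.x += random.choice((-1, 0, 1))
            self.y += random.choice((-1, 0, 1))
            return [Attractor(rx, ry) for rx, ry in res_sites]
        # Active: broadcast to all Attractors strictly to the right
        for n in neighbors:
            if n.x > self.x:
                n.signal += self.signal
        # Pick one input (left) and one output (right) and move to their average y
        left = [n for n in neighbors if n.x < self.x]
        right = [n for n in neighbors if n.x > self.x]
        if left and right:
            self.y = (random.choice(left).y + random.choice(right).y) // 2
        # Output signal dissipates every activation
        self.signal *= DISSIPATION
        return []
```

With a single left and right neighbor the choice is deterministic: an active Attractor at y=0 between neighbors at y=4 and y=8 moves to y=6, raises only the right neighbor's signal, and decays its own.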

Data Reader

This is a vertically reproducing element. A Data Reader atom consumes any atom on its left and broadcasts a signal to an Attractor atom on its right. The broadcast signal is calculated from how closely the consumed atom's bit values match a bitmask supplied by the user. Atoms are consumed regardless of the type encountered.
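The page does not spell out the exact scoring function, but "how close the atom's bit values match the bitmask" can plausibly be read as counting matching bit positions. A minimal sketch, with the function name and 64-bit default width as assumptions:

```python
def read_signal(atom_bits: int, bitmask: int, width: int = 64) -> int:
    """Signal strength = number of bit positions where the atom matches the mask.

    XOR marks the mismatching bits; complementing (within `width` bits)
    marks the matching ones, which we then count.
    """
    matches = ~(atom_bits ^ bitmask) & ((1 << width) - 1)
    return bin(matches).count("1")
```

An exact match yields the maximum signal (`width`), a complemented atom yields zero, and partial matches fall in between.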

Output Sink

This atom collects signals from Attractor atoms to its left. Every so often the top-most Output Sink atom spawns a Data Block. If any Output Sink sees a Data Block in the upper half of its event window, it updates the Data Block with its current charge and moves the block to the bottom half of its event window. The Output Sink then resets its collected signal to zero.
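The collect-then-flush cycle described above can be sketched as follows. The class and method names are placeholders, and the spatial part (moving the block downward through the event window) is reduced to a comment since it depends on the MFM grid machinery:

```python
class OutputSink:
    def __init__(self):
        self.charge = 0  # accumulated signal from Attractors to the left

    def collect(self, signal):
        self.charge += signal

    def update_block(self, block):
        """Write the current charge into a passing Data Block, then reset.

        In the actual element the block would also be moved to the bottom
        half of the event window; here we just record the charge.
        """
        block.append(self.charge)
        self.charge = 0
        return block
```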

Data Block

The element that represents the culmination of the data compression process. Each Data Block atom expects to be part of a 3×3 square of other Data Block atoms and keeps track of its relative (x,y) coordinates within the square. Each row of the square is made up of two 64-bit data atoms and one 64-bit XOR of the other two. If at any time an atom detects that only one Data Block atom remains in a row, it blows up and deletes its cooperating Data Block atoms, removing the square from the world. If only a single Data Block atom is missing from a row, the row is reconstructed via the XOR of the other two atoms.
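The row-level redundancy is standard XOR parity: with two of the three atoms surviving, the third is the XOR of those two. A sketch of one row's recovery logic, assuming a missing atom is represented as None (the function name is illustrative, not from the element code):

```python
def reconstruct_row(row):
    """Reconstruct one missing atom in a 3-atom row (two data + one XOR parity).

    `row` is a list of three 64-bit values with at most one entry None.
    With one atom missing, XOR of the surviving two recovers it; with two
    or more missing the row, and hence the square, is unrecoverable.
    """
    missing = [i for i, v in enumerate(row) if v is None]
    if not missing:
        return row
    if len(missing) > 1:
        raise ValueError("row unrecoverable: the whole square must be deleted")
    a, b = (v for v in row if v is not None)
    row[missing[0]] = a ^ b
    return row
```

Because XOR is its own inverse, the same expression recovers a data atom or the parity atom, so the element does not need to distinguish which position in the row was lost.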


These blocks are meant to be the low-fidelity signature of the incoming data. This is a small step towards higher-fidelity compression, and hopefully the start of better methods for data compression and/or transportation of data in the MFM.

Figure 1

people/taylor_berger/low_fidelity_data_compression.txt · Last modified: 2014/11/10 23:30 by tberge01