people:joby_elliott:log · Last modified: 2017/11/27 16:48 by jelliott
===== 2017 =====
+ | |||
+ | ==== November ==== | ||
+ | |||
+ | === 2017-11-02 === | ||
+ | * Not much to report code-wise. I've mostly just been running experiments and working on my paper. | ||
+ | * Tonight I'm probably going to actually make one tiny change to my code -- I need to so I can get a higher resolution on one of my independent variables. Something interesting might be happening very close to zero. | ||
+ | |||
=== 2017-11-02 ===
  * The script that runs through various parameter values and calls a given number of mfms runs with each value now works
  * Unsurprisingly, …
  * The scripts for gathering data are fairly flexible -- I should be able to use them for counting individual occurrences in the log files, too.
    * That might be useful for quantifying something about how swarms work, because I'm still interested in building mobile swarms of beacons.
  * Now that my data-gathering tooling is working I'll be getting back to aggressively expanding what my actual project code can do, and hopefully soon find a U-shaped curve somewhere. I had kind of put my actual ULAM code by the wayside to work on this stuff, because I wanted to have thorough data for Monday's …
  * I should be able to iterate and quantify at a fairly fast pace now, since I can use these scripts to get quantitative data out of some thousands of AEPS/…
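
In sketch form, the sweep script's shape is a nested loop: step through parameter values, do a fixed number of runs per value. Here ''runOne'' is a stand-in for the real set-parameter/compile/launch-mfms step, not the actual script:

```javascript
// Sketch of a parameter sweep: for each value of the independent
// variable, do a fixed number of runs. runOne() is a stand-in for the
// real "set parameter, recompile, launch mfms" step -- here it just
// returns a label so the control flow is visible.
function sweep(values, runsPerValue, runOne) {
  const results = [];
  for (const value of values) {
    for (let run = 0; run < runsPerValue; run++) {
      results.push({ value, run, result: runOne(value, run) });
    }
  }
  return results;
}

// 3 parameter values x 2 runs each = 6 records
const records = sweep([0.0, 0.5, 1.0], 2, (v, r) => `mfms run ${r} @ ${v}`);
console.log(records.length);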
+ | |||
+ | === 2017-11-01 === | ||
+ | * Parsing log files to make CSV data files of AEPS to completion from a series of experiments ... [drumroll] ... works | ||
+ | * [this is me high-fiving a million angels] | ||
+ | * Now I need to finish the bits that will step through multiple parameter values, running a given number of experimental runs for each one | ||
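
A minimal sketch of the log-to-CSV step. The log format assumed here ("AEPS <n>" appearing somewhere in the output, with the last mention taken as the count at completion) is purely illustrative -- real mfms log lines look different:

```javascript
// Sketch: pull an AEPS-to-completion number out of log text and emit
// CSV rows. The "AEPS <n>" pattern is an assumption for illustration.
function finalAEPS(logText) {
  // Use the last "AEPS <n>" mention as the AEPS count at completion.
  const matches = logText.match(/AEPS\s+(\d+)/g);
  if (!matches) return null;
  return Number(matches[matches.length - 1].replace(/AEPS\s+/, ""));
}

function toCSV(rows) {
  // rows: [{ param, run, aeps }] -> CSV text with a header line
  return ["param,run,aeps", ...rows.map(r => `${r.param},${r.run},${r.aeps}`)].join("\n");
}

// Two fake runs' worth of log text
const rows = [
  { param: 0.1, run: 0, aeps: finalAEPS("AEPS 100 ... AEPS 2500 grid full") },
  { param: 0.1, run: 1, aeps: finalAEPS("AEPS 90 ... AEPS 2700 grid full") },
];
console.log(toCSV(rows));
```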
==== October ====
+ | |||
+ | === 2017-10-27 === | ||
+ | * Scripting simulator runs is totally working! | ||
+ | * I have a Node script that sets a parameter in an ExperimentParameters quark, compiles everything, and spawns/ | ||
+ | * Has arguments for declaring what parameter to set, what to set it to, how many runs to do, and how many processes to run concurrently | ||
+ | * mfms is set to end when the grid is full, so runs decide when to end by having some sort of monitor atoms that decide it's done and turn into a cancer | ||
+ | * Decides on a name for this particular experiment instance and tells mfms to put all the logs and whatnot in a folder with that name in tmp | ||
+ | * Apparently the multi-threading stuff that AMD got sued over for advertising my processor as 8 core when it's *really* 4 works well enough in this case. Running 4x4 tile simulations with two going simultaneously is **almost** twice as fast as running them consecutively. It takes like 53% as long. | ||
+ | * Actually, it turns out running 6 at once is even faster. I'll have to see how many I can run at once before it starts to turn around. That's going to be a whole exercise in finding a u-shaped curve all its own. | ||
+ | * Next I need to make a script for this one to call, that searches the log files for the data I'm logging and saves it into a simple data file. | ||
+ | * Then I need a master script that iterates through my independent variable, calling this one a bunch of times. If I do like 100 runs per parameter value, and my parameter has 100 values, this could easily take 24 hours per experimental run -- but will generate a lot of data in that time. 10,000 data points! It's a good thing hard drives are big in 2017, the whole mfms logging parts will probably fill my temp directory with about 20 gigabytes of logs per full experiment. | ||
+ | * I also want to make what mfs file to use as the starting environment an argument for this script. That way I can keep multiple experiment-starting mfs files around, not have to call them all " | ||
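
The "several runs in flight at once" part can be sketched as a worker pool. Each job below is a stand-in async function; in the real script it would wrap ''child_process.spawn'' of a headless mfms run in a Promise:

```javascript
// Sketch of keeping at most `limit` simulations in flight at once.
// jobs is an array of async functions; in the real script each would
// wrap child_process.spawn('mfms', ...) in a Promise.
async function runPool(jobs, limit) {
  const results = new Array(jobs.length);
  let next = 0;
  async function worker() {
    // Each worker pulls the next unstarted job until none remain.
    while (next < jobs.length) {
      const i = next++;
      results[i] = await jobs[i]();
    }
  }
  const workers = Array.from({ length: Math.min(limit, jobs.length) }, worker);
  await Promise.all(workers);
  return results;
}

// 6 fake "runs", at most 2 in flight at once
const jobs = Array.from({ length: 6 }, (_, i) => async () => `run ${i} done`);
runPool(jobs, 2).then(results => console.log(results.join("\n")));
```

Finding the point where adding more concurrent runs stops helping is then just a matter of timing this pool at different ''limit'' values.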
+ | |||
=== 2017-10-24 ===
  * Slightly belated log update covering mostly what I did last week/weekend, and the tiny amount I got done yesterday
  * Figured out the workflow for making recordings with mfms
  * Made an extremely simple Seeker/Nest experiment and collected data from 25 runs of it for a graph. It was not a particularly interesting graph, in the end.
  * Learned how to run mfms with no GUI, and started working on a plan to script the setting of parameters, compiling, and execution of experimental runs. The basic script process will be:
    * Set the parameter to what it needs to be in some source file
    * Compile
    * Run experiments with GUI-less mfms -- since the time it takes will be variable, I'm planning on terminating experiments programmatically using a super cancer and the flag for mfms to automatically stop when the board is full.
    * Parse log files to figure out how long things took. This should be pretty easy, since most of my experiments are going to be simple AEPS counts of how long it takes a certain number of things to find their way to some other thing. Although logging whenever a "…
    * Record all that data somewhere.
  * Did some refactoring and organizing of my source code toward the ends above. Set up a new working directory that's focused on my first potential experiment, and pulled in only the code needed for that experiment. Also refactored to move all my potential independent-variable parameters into one file, so it will be easy to script changing them.
  * Today I'm probably just going to take a leisurely walk through the gnuplot documentation, …
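
The five steps above can be sketched as one loop with each stage as a pluggable function. Every stage body below is a placeholder -- the real parameter file, compile command, and log format aren't pinned down yet:

```javascript
// The set-parameter / compile / run / parse / record loop, with every
// stage stubbed out. Real versions would edit a source file, shell out
// to the compiler and to GUI-less mfms, and grep the logs.
function runExperiment(value, stages) {
  stages.setParam(value);          // 1. set parameter in a source file
  stages.compile();                // 2. compile
  const log = stages.run();        // 3. run headless, stop on full board
  const aeps = stages.parse(log);  // 4. parse log for time-to-completion
  stages.record(value, aeps);      // 5. record the data point
  return aeps;
}

const recorded = [];
const stubStages = {
  setParam: v => {},
  compile: () => {},
  run: () => "AEPS 3141 board full", // fake log text
  parse: log => Number(log.match(/AEPS (\d+)/)[1]),
  record: (v, aeps) => recorded.push([v, aeps]),
};

console.log(runExperiment(0.25, stubStages));
console.log(recorded.length);
```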
=== 2017-10-15 ===