project_people:josh_donckels

Last modified: 2017/12/10 20:01 by jdonckels
===== Project Concept =====
Implement a convolutional neural network (CNN) that will be able to classify an input pattern (a 3x3 window on the MFM).  It will use layers from a very basic CNN: a convolution layer and a fully-connected layer.  This will have to start off static in the MFM, as passing data would be very difficult otherwise.  This will require five elements that I will go into detail on further down this page.
  
===== Elements =====
  * Neuron:
    * Will contain the weights from the pre-trained network that it will be using to classify the image
    * Deals with all of the computations for the convolution
    * Will update the neighboring pixels based on the output from the computation
  * Init_Layer:
    * This is the middle Neuron, which will initialize, reset, and repair all of the other Neurons in the specific filter
    * This will also contain its own weight and bias, which will be summed into the total for the filter
  * FC_Neuron:
    * Neurons for the fully-connected layer, which will use fixed-point multiplication to calculate their values
    * There will be four layers of 12, which will each represent a pattern to be classified
  * FC_Init_Layer:
    * Will initialize, reset, and repair the FC_Neurons in each layer
  * Label:
    * Will be used to classify the network once it is complete, meaning the FC_Layer_Twelve will create this once the classification values are complete
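Since the FC_Neurons rely on fixed-point multiplication, here is a minimal sketch of that idea in Python; the 8 fractional bits and helper names are illustrative assumptions, not the bit widths or fields the project's ulam elements actually use:

```python
# Fixed-point sketch: reals stored as integers scaled by 2**FRAC_BITS.
# The 8 fractional bits are an assumed format, not the project's actual one.
FRAC_BITS = 8
SCALE = 1 << FRAC_BITS  # 256

def to_fixed(x: float) -> int:
    """Encode a real number as a scaled integer."""
    return round(x * SCALE)

def fixed_mul(a: int, b: int) -> int:
    """Multiply two fixed-point values; shift once to restore the scale."""
    return (a * b) >> FRAC_BITS

def to_float(a: int) -> float:
    """Decode back to a real number."""
    return a / SCALE

w = to_fixed(0.75)  # a weight
x = to_fixed(2.0)   # an input
print(to_float(fixed_mul(w, x)))  # 1.5
```

The single shift after each multiply is what keeps every intermediate value in the same fixed-width format an MFM element can store.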

===== Weekly Logs =====
  
Week 7 Update:
   * Also will work on fixing a size for the CNN based around a smallish image
  
Week 8 Update:
   * Updated project page to more clearly state what my project is and my goals
   * Messed around with trying to get my own images as inputs for tiny_dnn
   * Researched how small my input images could be for a CNN

Week 9 Update:
   * Created demo video for progress on project
   * New title: Object Classification in the Movable Feast Machine
   * Scaled back project a lot to 3x3 hand-made images without padding, so they represent objects where the pixel values can only be 0 or 1
     * These 3x3 images can contain a dot (1) or a 2-length line (0)
     * I get an 80% successful classification rate in tiny_dnn with the new CNN structure and these input images
   * Researched CNN structures and tested them in tiny_dnn with hand-made images of 9x9, 7x7, 5x5, and 3x3, with padding and without
   * New CNN structure is a 12-filter convolution layer that goes into a fully-connected layer that takes in 12 inputs and outputs 2 results
   * Implemented working convolution layer in MFM, which takes pixels from an image element and calculates the output from this layer
   * Created shell for the fully-connected layer in MFM; it contains weights and biases, but nothing else yet
   * Scrapped the communication concept, as I can place all layers in a sort of column fashion, so there is no need for it
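The structure above (3x3 kernels over an unpadded 3x3 image reduce to one value per filter, feeding a fully-connected layer with 12 inputs and 2 outputs) can be sketched in plain Python. The weights here are random placeholders standing in for the tiny_dnn pre-trained values:

```python
import random

random.seed(0)  # reproducible placeholder weights

N_FILTERS, N_CLASSES = 12, 2

# Random placeholders -- the project itself uses weights pre-trained in tiny_dnn.
conv_w = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(N_FILTERS)]
conv_b = [random.uniform(-1, 1) for _ in range(N_FILTERS)]
fc_w = [[random.uniform(-1, 1) for _ in range(N_FILTERS)] for _ in range(N_CLASSES)]
fc_b = [random.uniform(-1, 1) for _ in range(N_CLASSES)]

def forward(image):
    """image: 9 binary pixels (a flattened 3x3 window). Returns 2 class scores."""
    # A 3x3 kernel over an unpadded 3x3 image yields one scalar per filter.
    conv_out = [sum(w * p for w, p in zip(conv_w[f], image)) + conv_b[f]
                for f in range(N_FILTERS)]
    # Fully-connected layer: 12 inputs -> 2 outputs.
    return [sum(w * a for w, a in zip(fc_w[c], conv_out)) + fc_b[c]
            for c in range(N_CLASSES)]

dot = [0, 0, 0,
       0, 1, 0,
       0, 0, 0]                    # a single centered dot
scores = forward(dot)
label = scores.index(max(scores))  # predicted pattern class
```

Because the conv output collapses to 12 scalars, every layer boundary is a fixed-size vector, which is what makes a static column layout in the MFM feasible.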

Week 10 Update:
   * Implemented the fully-connected layer
   * Compared the weights and results from each layer with the tiny_dnn output, and they matched well
     * The results weren't exact, but I found a bug where the values were changing slightly because certain Neurons were exceeding their stages and skipping certain steps
   * Found some major problems in the weights I was using after testing every single possible input pattern
     * Will find better weights in week 11
   * Forgot to put my week 10 update on week 10....

Week 11 Update:
   * Found better patterns and shapes to use together; am using a 2x2 box and a horizontal 2-length line
     * Will look into adding more objects that can still be successfully classified
     * Implemented these weights and biases into the project, and it classifies 8/10 possible patterns (6 from the horizontal 2-length line, and 4 from the 2x2 box)
   * Does not pull the image (pixels) from an element anymore; it can now be hand-drawn by the user
   * Tested hitting the network with "radiation", and it failed horribly every time
   * Working towards getting a reset to work
     * The Neurons in the convolution already had their weights stored separately from their output values, while the fully-connected layer did not
       * I have successfully separated the weights and the output values for the fully-connected layer, and it works perfectly
   * Added a reset element; when the Neurons see it, they will reset the network
   * Created a presentation for this project
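The weight/output separation described above can be sketched as a small class; the field and method names are illustrative, not the actual data members of the ulam element:

```python
class FCNeuron:
    """Sketch: a fully-connected neuron whose weights survive a reset.

    Illustrative names only -- the real element stores these in bit fields.
    Weights are fixed (pre-trained); the output value is transient and is
    the only thing a reset clears.
    """

    def __init__(self, weights, bias):
        self.weights = list(weights)  # pre-trained, never cleared
        self.bias = bias
        self.output = 0.0             # transient state

    def compute(self, inputs):
        self.output = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return self.output

    def reset(self):
        # Clear only the transient output; weights stay intact, so the
        # network can be re-run after the reset element fires.
        self.output = 0.0

n = FCNeuron([0.5, -0.25], bias=0.1)
n.compute([1.0, 2.0])
n.reset()  # output cleared, weights untouched
```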

Week 12 Update:
   * Fixed bug with the fully-connected layer where the simulation was going too fast for it to correctly pass the weights
   * Created scripts to get data from modifying training parameters in tiny_dnn, which I will then compare against the results from my pre-trained model
   * Added different objects, so it can classify up to 5 now, with a much worse percentage...
   * Wrote the abstract for this project

Week 13 Update:
   * Added more rotations and shifts of each shape, and this increased the classification rates for every number of classification patterns
     * Can now only classify 4, but with much better percentages, up by about 20-30% from the previous version
   * Also found a problem with the version I was using in tiny_dnn, but was able to fix it; it had to do with the rescaling of the outputs
   * Created more plots with the improved numbers and placed them in my paper
   * Added a lot more to my paper, including introduction, methods, some results, and some sort of conclusion/discussion
   * Corrected my abstract
   * Fixed my CNN in the MFM to be more robust: when a big chunk is removed it will repair itself, and when a reset is used it can be re-run
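Adding rotations of each shape, as described above, is a simple form of training-set augmentation; a sketch for 3x3 binary patterns (the clockwise-rotation helper is my own illustration, not the project's code):

```python
def rotate90(p):
    """Rotate a 3x3 pattern (list of 3 rows) 90 degrees clockwise."""
    return [[p[2 - c][r] for c in range(3)] for r in range(3)]

def rotations(p):
    """The pattern plus its three further 90-degree rotations."""
    out = [p]
    for _ in range(3):
        out.append(rotate90(out[-1]))
    return out

# A horizontal 2-length line (one of the shapes from the logs).
line = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
variants = rotations(line)  # horizontal and vertical placements
```

Training on every rotation of a shape is what lets one set of weights recognize the shape regardless of its orientation in the 3x3 window.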

Week 14 Update:
   * Found a few bugs in the repairing and was able to improve it
   * Redesigned the fully-connected layer so it can classify four different patterns at any rotation
     * Box, two-length line, L shape, and three-length line; I get a global max of 77.5% classification rate
     * Went back to three, as I was running into problems, and I get a consistent 85% classification rate
   * Improved paper; still need more plots for results and an improved discussion
     * Finished paper!

Week 15 Update:
   * Improved slides, and practiced presenting
   * Finished everything and turned it in
  