Project Idea
Implement a convolutional neural network (CNN) that can classify an input image. It will use the layers of a very basic CNN: a convolution layer, a pooling layer, a rectifier layer, and a fully-connected layer. This will have to start off static in the MFM, as passing data dynamically would be very difficult otherwise. It will require five elements, which I describe in detail further down this page.
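To make the data flow concrete, here is a minimal stand-alone C++ sketch of what these four layer types compute on a tiny image. All sizes, weights, and function names are made up for illustration; this is not the MFM implementation.

  #include <algorithm>
  #include <cstddef>
  #include <iostream>
  #include <vector>

  using Grid = std::vector<std::vector<float>>;

  // Convolution layer: slide a 3x3 kernel over the image (no padding).
  Grid convolve3x3(const Grid& img, const Grid& kernel, float bias) {
      std::size_t outH = img.size() - 2, outW = img[0].size() - 2;
      Grid out(outH, std::vector<float>(outW, bias));
      for (std::size_t y = 0; y < outH; ++y)
          for (std::size_t x = 0; x < outW; ++x)
              for (std::size_t i = 0; i < 3; ++i)
                  for (std::size_t j = 0; j < 3; ++j)
                      out[y][x] += img[y + i][x + j] * kernel[i][j];
      return out;
  }

  // Rectifier (ReLU) layer: clamp negative values to zero.
  void relu(Grid& g) {
      for (auto& row : g)
          for (auto& v : row)
              v = std::max(0.0f, v);
  }

  // Pooling layer: 2x2 max pooling with stride 2.
  Grid maxPool2x2(const Grid& g) {
      Grid out(g.size() / 2, std::vector<float>(g[0].size() / 2));
      for (std::size_t y = 0; y < out.size(); ++y)
          for (std::size_t x = 0; x < out[0].size(); ++x)
              out[y][x] = std::max({g[2*y][2*x],     g[2*y][2*x + 1],
                                    g[2*y + 1][2*x], g[2*y + 1][2*x + 1]});
      return out;
  }

  // Fully-connected layer: flatten, then one weighted sum per class.
  std::vector<float> fullyConnected(const Grid& g, const Grid& w, const std::vector<float>& b) {
      std::vector<float> flat;
      for (const auto& row : g) flat.insert(flat.end(), row.begin(), row.end());
      std::vector<float> out(b);
      for (std::size_t c = 0; c < out.size(); ++c)
          for (std::size_t i = 0; i < flat.size(); ++i)
              out[c] += w[c][i] * flat[i];
      return out;
  }

  int main() {
      Grid img(6, std::vector<float>(6, 0.0f));         // made-up 6x6 grayscale image
      img[2][2] = img[2][3] = 1.0f;
      Grid kernel = {{0, 1, 0}, {1, 1, 1}, {0, 1, 0}};  // made-up "pre-trained" weights
      Grid conv = convolve3x3(img, kernel, -0.5f);      // 6x6 -> 4x4
      relu(conv);
      Grid pooled = maxPool2x2(conv);                   // 4x4 -> 2x2
      Grid w(2, std::vector<float>(4, 0.25f));          // 2 classes x 4 flattened inputs
      std::vector<float> scores = fullyConnected(pooled, w, {0.0f, 0.1f});
      std::cout << "class scores: " << scores[0] << " " << scores[1] << "\n";
  }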
Elements
- Neuron
- Will contain the weights from the pre-trained network that it will be using to classify the image
- Handles all of the computations for the convolution, pooling, normalization, and fully-connected layers
- Will update the neighboring pixels based on the output from the computation
- Pixel
- Holds the grayscale values for 7 pixels, plus a couple of spare bits for future flags
- Router
- Will hold routing information that tells a packet which way to go based on its destination
- Path
- Will help guide the packets to the specified Neuron cluster
- Packet
- Will hold the values of 7 pixels, and will follow the Path and Router elements to its destination to deliver them (a rough sketch of these element states follows this list)
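The exact bit layouts are still to be decided. As a rough illustration only, the data each element might carry could look like the C++ mock-up below; the field names and widths are my own placeholders, not the eventual ULAM code, and the Path element is omitted since it carries no data beyond marking the route.

  #include <array>
  #include <cstdint>

  struct Pixel {                       // grayscale values for 7 pixels + spare flag bits
      std::array<uint8_t, 7> gray;
      uint8_t flags;                   // reserved for future use
  };

  struct Packet {                      // carries 7 pixel values toward a Neuron cluster
      std::array<uint8_t, 7> gray;
      uint8_t destination;             // placeholder id of the target Neuron cluster
  };

  struct Router {                      // forwards packets toward their destination
      std::array<uint8_t, 4> nextHop;  // placeholder: which way to send each destination
  };

  struct Neuron {                      // holds pre-trained weights and does the math
      std::array<int8_t, 9> weights;   // e.g. a 3x3 kernel, fixed-point in this sketch
      int8_t bias;
      int16_t accumulator;             // partial sum while pixel packets arrive
  };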
Goals
- Main Goal: Run a simple MNIST example with a smaller training set of 0's, at an image size of 18×18(?)
- First Step: Get a layout that will work for the CNN (X)
- Second Step: Get 7 packets sent from one cluster of pixels to another cluster of pixels ( )
- Third Step: Get pixel packets from all other clusters of pixels to one Neuron cluster, then run that specific convolution layer over all pixels from the image ( )
- Fourth Step: Get the above goal to work with all clusters of Neurons ( )
- Fifth Step: Implement the pooling step ( )
- Sixth Step: Set up the rectifier layer ( )
- Seventh Step: Create the fully-connected outcome ( )
- Eighth Step: Classify the image ( )
Weekly Logs
Week 7 Update:
- Now understand the concept of a CNN better through messing around with tiny_dnn
- Found a “final” design for the blocks of the CNN
- Will now work on communication between the blocks in the CNN
- Will also work on fixing a size for the CNN based around a smallish image
Week 8 Update:
- Updated the project page to more clearly state what my project is and what my goals are
- Messed around with trying to get my own images as inputs for tiny_dnn (see the input sketch after this list)
- Researched how small my input images could be for a CNN
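For reference, a minimal sketch of how a hand-made image could be handed to tiny_dnn, assuming its standard vec_t input type; the network object and image here are placeholders, not my actual code.

  #include "tiny_dnn/tiny_dnn.h"
  #include <vector>

  // Flatten a small grayscale image (row-major) into tiny_dnn's input vector type.
  tiny_dnn::vec_t toInput(const std::vector<std::vector<int>>& img) {
      tiny_dnn::vec_t v;
      for (const auto& row : img)
          for (int p : row)
              v.push_back(static_cast<tiny_dnn::float_t>(p));
      return v;
  }

  // Usage sketch (names are placeholders): with a trained network 'net',
  //   tiny_dnn::vec_t scores = net.predict(toInput(myImage));
  // the index of the largest score is the predicted class.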
Week 9 Update:
- Created Demo video for progress on project
- New Title: Object Classification in the Movable Feast Machine
- Scaled the project back a lot, to 3×3 hand-made images without padding, so they represent objects with pixel values that can only be 0 or 1
- These 3×3 images can contain a dot (1) or a 2-length line (0).
- I get an 80% successful classification rate in tiny_dnn with the new CNN structure and these input images
- Researched CNN structures and tested them in tiny_dnn with hand-made images of 9×9, 7×7, 5×5, and 3×3, both with and without padding
- The new CNN structure is a convolution layer with 12 feature maps that feeds into a fully-connected layer taking 12 inputs and producing 2 outputs
- Implemented a working convolution layer in the MFM; it takes pixels from an image element and calculates the output of this layer (a sketch of this computation follows this list)
- Created a shell for the fully-connected layer in the MFM; it contains weights and biases, but nothing else yet
- Scrapped the communication concept, since I can place all layers in a column arrangement, so there is no need for routing
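In sketch form (my own C++ mock-up with placeholder weights, not the ULAM code), the convolution step for this structure reduces to 12 dot products over the whole 3×3 image, since a 3×3 kernel with no padding fits exactly once.

  #include <array>

  using Image3x3  = std::array<std::array<float, 3>, 3>;
  using Kernel3x3 = std::array<std::array<float, 3>, 3>;

  // 12 kernels over a 3x3 input with no padding: each feature map is one number.
  std::array<float, 12> convLayer(const Image3x3& img,
                                  const std::array<Kernel3x3, 12>& kernels,
                                  const std::array<float, 12>& biases) {
      std::array<float, 12> out{};
      for (int k = 0; k < 12; ++k) {
          float sum = biases[k];
          for (int i = 0; i < 3; ++i)
              for (int j = 0; j < 3; ++j)
                  sum += img[i][j] * kernels[k][i][j];
          out[k] = sum;               // these 12 values feed the fully-connected layer
      }
      return out;
  }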
Week 10 Update:
- Implemented the fully-connected layer (a sketch of its computation follows this list)
- Compared the weights and per-layer results against the tiny_dnn output; they matched well
- The results weren't exact, but I found a bug where the values changed slightly because certain Neurons exceeded their stages and skipped certain steps
- Found some major problems in the weights I was using after testing every single possible input pattern
- Will find better weights in week 11
- Forgot to put my week 10 update on week 10….
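In sketch form (again a C++ mock-up with placeholder weights, not the ULAM code), the fully-connected step and the final classification look like this:

  #include <array>

  // Fully-connected layer for the 12 convolution outputs: one weighted sum
  // per class plus a bias; the larger value is the classification.
  std::array<float, 2> fullyConnected(const std::array<float, 12>& in,
                                      const std::array<std::array<float, 12>, 2>& w,
                                      const std::array<float, 2>& b) {
      std::array<float, 2> out{b[0], b[1]};
      for (int c = 0; c < 2; ++c)
          for (int i = 0; i < 12; ++i)
              out[c] += w[c][i] * in[i];
      return out;
  }

  int classify(const std::array<float, 2>& scores) {
      return scores[0] >= scores[1] ? 0 : 1;   // index of the larger score
  }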
Week 11 Update:
- Found better patterns and shapes to use together; I am using a 2×2 box and a horizontal 2-length line
- Will look into adding more objects that can still be successfully classified
- Implemented these weights and biases in the project, and it correctly classifies 8 of the 10 possible patterns (6 placements of the horizontal 2-length line and 4 of the 2×2 box; see the enumeration sketch after this list)
- The image (pixels) is no longer pulled from an element; it can now be hand-drawn by the user
- Tested hitting the network with “radiation”, and it failed horribly every time
- Working towards getting a reset to work
- The Neurons in the convolution already had their weights stored separately from their output values, while the fully-connected layer did not
- I have successfully separated the weights and the output values for the fully-connected layer, and it works perfectly
- Added a reset element; when the Neurons see it, they reset the network
- Created a presentation for this project
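For reference, the 10 possible input patterns come from counting placements in the 3×3 grid: the horizontal 2-length line fits in 3 rows × 2 columns = 6 positions, and the 2×2 box fits in 2 × 2 = 4 positions. A small C++ sketch that enumerates them (illustrative only):

  #include <array>
  #include <vector>

  using Image3x3 = std::array<std::array<int, 3>, 3>;

  // Every placement of the horizontal 2-length line: 3 rows x 2 start columns = 6.
  std::vector<Image3x3> linePatterns() {
      std::vector<Image3x3> out;
      for (int r = 0; r < 3; ++r)
          for (int c = 0; c < 2; ++c) {
              Image3x3 img{};                  // all zeros
              img[r][c] = img[r][c + 1] = 1;   // the 2-length horizontal line
              out.push_back(img);
          }
      return out;
  }

  // Every placement of the 2x2 box: 2 x 2 anchor positions = 4.
  std::vector<Image3x3> boxPatterns() {
      std::vector<Image3x3> out;
      for (int r = 0; r < 2; ++r)
          for (int c = 0; c < 2; ++c) {
              Image3x3 img{};
              img[r][c] = img[r][c + 1] = img[r + 1][c] = img[r + 1][c + 1] = 1;
              out.push_back(img);
          }
      return out;
  }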
Week 12 Update:
- Fixed a bug with the fully-connected layer where the simulation was going too fast for it to correctly pass the weights
- Created scripts to gather data while modifying training parameters in tiny_dnn; I will then compare the results to what I get from my pre-trained model
- Added different objects, so it can classify up to 5 now, though with a much worse success percentage…
- Wrote the abstract for this project