An Indefinitely Scalable Brain: Implicit Neural Networks in a Spatially Distributed System
Abstract: Mathematical models of neural networks can be highly sensitive to perturbations such as node deletions or corrupted weight vectors. We define an implicit, spatially distributed neural network and show that pattern recognition with it is not only viable but robust on classification tasks in volatile systems. We use the Moveable Feast Machine architecture to investigate a neural network whose connections between neurons are implied by their proximity. We show that this type of neural network can be scaled indefinitely and learns arbitrary patterns despite adverse learning conditions. Finally, we show that this neural network is a viable option for a two-class classification task.
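The abstract does not specify how proximity implies a connection. As an illustration only, one way such implicit connectivity could work on a 2D grid is to recompute each neuron's neighborhood from positions rather than storing edge lists; the function name, layout, and radius below are all hypothetical assumptions, not details from the paper:

```python
# Hypothetical sketch of proximity-implied connectivity. The radius,
# coordinate layout, and function name are illustrative assumptions;
# the paper itself does not specify the rule.
from math import hypot

def implicit_neighbors(neurons, radius=1.5):
    """Map each neuron id to the ids of neurons within `radius`.

    `neurons` maps id -> (x, y) grid coordinates. Connections are never
    stored explicitly; they are derived from positions on demand, so
    deleting a neuron simply removes it from every neighborhood without
    leaving dangling weights behind.
    """
    return {
        nid: [other for other, (ox, oy) in neurons.items()
              if other != nid and hypot(ox - x, oy - y) <= radius]
        for nid, (x, y) in neurons.items()
    }

neurons = {0: (0, 0), 1: (1, 0), 2: (3, 3)}
links = implicit_neighbors(neurons)
# Neurons 0 and 1 see each other; neuron 2 is isolated by distance.
```

Because the topology is a function of position rather than stored state, this kind of rule is what lets node deletion degrade the network gracefully instead of corrupting an explicit weight structure.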
people/taylor_berger/infinite_brain_paper.1413174560.txt.gz · Last modified: 2014/10/13 04:29 by tberge01