Blockchains and Neural Networks: Catching Up With The Hype Train

I’ve always avoided hyped technologies. But now that I’ve run out of shiny cool things to play with- and I can mess around with bytes without crapping myself- I’ve taken a committed step towards learning about and applying these new hot topics.

Blockchain. It’s the thing that most investors don’t understand, but will throw money at like it’s Stripper of the Year. Since the Bitcoin bubble popped in 2018, I haven’t cared much about cryptocurrency. The nice thing about crypto is that there are no banks involved. The bad thing about crypto is that… there are no banks involved. Once you decide to cash out into a bank account, don’t expect Wells Fargo and Uncle Sam to be very nice about it.

But cryptocurrency is only a subset of blockchain, not its entirety. A blockchain- as I currently understand it- is really just a data structure where node N contains a hash generated from its own data plus the hash of node N-1. Each node can be validated by recalculating and comparing its hash, and every node is intrinsically tied to the one before it, hence the “chain” part.
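
To make that concrete, here’s a minimal toy sketch in Python (my own illustration of the idea, not code from the chainz repo- the block layout and names are mine):

```python
import hashlib
import json

def block_hash(data, prev_hash):
    # Hash this block's data together with the previous block's hash,
    # so changing any earlier block invalidates every block after it.
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

GENESIS = "0" * 64  # placeholder hash for the first block

chain = []
prev = GENESIS
for record in ["alice pays bob 5", "bob pays carol 2"]:
    h = block_hash(record, prev)
    chain.append({"data": record, "prev": prev, "hash": h})
    prev = h
```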

The act of producing an acceptable hash is referred to as Proof of Work. Ideally, it should be computationally expensive- like any good password hashing algorithm- so that rewriting the entire blockchain would be prohibitively expensive.
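
One common way to make it expensive is the Bitcoin-style approach: brute-force a nonce until the hash meets a difficulty target. A rough sketch (again my own toy code, with a made-up difficulty):

```python
import hashlib

def proof_of_work(block_bytes, difficulty=4):
    # Brute-force a nonce until the hash starts with `difficulty` zero
    # hex digits. Each extra digit multiplies the expected work by 16.
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_bytes + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = proof_of_work(b"alice pays bob 5")
```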

I’m interested in the integrity aspect of blockchain. If you need to log non-trivial data like financial transactions, you want assurance that each record is valid and no records are missing or modified. Any linear dataset can be made into a blockchain by adding a chained Proof of Work hash to each entry. It seems like a nice auditing feature.
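
Picking up the block layout from the earlier sketch (verify_chain is my own hypothetical helper), an audit is just a linear walk that re-hashes everything:

```python
import hashlib
import json

def verify_chain(chain, genesis_hash="0" * 64):
    # Walk the chain, recomputing each block's hash. Any missing,
    # reordered, or edited record breaks verification from that point on.
    prev = genesis_hash
    for block in chain:
        payload = json.dumps({"data": block["data"], "prev": prev}, sort_keys=True)
        expected = hashlib.sha256(payload.encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True
```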

Beyond that, I’m still learning how decentralization works with a blockchain. I see a few challenges in achieving it, such as peer discovery, peer authentication, delegation, and possible race conditions. Here’s the GitHub project I’m using for blockchain experiments: https://github.com/ryanbennettvoid/chainz

Neural Networks. Last year, I took my first plunge into machine learning. I learned that the big picture of ML is to train a model (algorithm) with inputs and correct outputs, so that when given a new input, it can provide an accurate output. In school, we learned the formula y = mx + b and produced outputs from it. In the real world, we often have outputs, but no formula. Machine learning is about finding formulas- or in other words- identifying patterns.

I did my first “machine learning” project using linear regression, which is the granddaddy of all machine learning techniques. It would draw a line through historical movie ratings to predict future ones. https://github.com/ryanbennettvoid/machine-learning-movies
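
The idea in miniature (with made-up numbers, not my actual movie data): find the m and b that best fit the points you already have, then plug in a new x.

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = m*x + b: choose the slope and
    # intercept that minimize the squared error against the data.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - m * mean_x
    return m, b

# Fabricated average ratings by year, just to show the shape of the problem
m, b = fit_line([2010, 2012, 2014, 2016], [6.1, 6.4, 6.8, 7.0])
prediction_2018 = m * 2018 + b
```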

Neural Networks- at the core- allow us to train a model on multi-featured inputs and classify them. In the following example, the input is a photo of a banana. A “feature” of the input is a pixel value. The output is an array of weights (or confidences), one for every possible class.

[Illustration of a neural network’s example input and output arrays]

The above array is an example of what a neural network takes at the input layer and returns at the output layer. The input is a 1-dimensional array of pixel values representing a photo of a fruit. We humans see pictures as 2-dimensional objects, but the neural network is fine with a 1-D representation, as long as all the images are normalized to the same size.

The output layer is an array covering all possible classes the input photo might be, where each value is the weight (or confidence) of that class. This is awesome because it aligns with how humans think. If we see an object we’ve never seen before, we don’t always know exactly what it is, but we can deduce an answer by weighing multiple possibilities. At the end of the day, we usually choose the best answer (the class with the highest weight) and move on. In this case, it’s the banana.
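
For example (made-up class names and confidences, just to show the shape of the output):

```python
classes = ["apple", "banana", "orange", "pear"]
confidences = [0.05, 0.80, 0.10, 0.05]  # fabricated output layer values

# Rank the classes by confidence: best guess first, runner-up second.
ranked = sorted(zip(classes, confidences), key=lambda pair: pair[1], reverse=True)
best, runner_up = ranked[0], ranked[1]  # ("banana", 0.8), ("orange", 0.1)
```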

But sometimes it’s also useful to know the 2nd-best answer, because it might turn out to be correct. A neural network is flexible enough to give multiple ranked answers to a single question, which, in my opinion, is why it’s a big deal and something I’ll be focusing on for the foreseeable future. I made a program that recognizes handwritten characters using a neural network: https://github.com/ryanbennettvoid/recog

Moving forward, I’ve found that there are numerous libraries to help with machine learning, such as TensorFlow and scikit-learn. Personally, I enjoy doing ML in C/C++, but I’ll inevitably need to get comfortable with Python if I’m going to take data science seriously.

My biggest challenges thus far:
– Figuring out dope problems to solve with machine learning
– Finding good data sources (I remember a friend told me to check out Kaggle)
– Formatting data to play nice with neural networks (OpenCV seems decent at this- see the sketch below)
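
On that last point, here’s roughly how I’d normalize images with OpenCV- a sketch, where image_to_input_vector and the 28x28 size are my own choices, not from any particular project:

```python
import cv2  # pip install opencv-python
import numpy as np

def image_to_input_vector(path, size=(28, 28)):
    # Load as grayscale, resize so every image has the same dimensions,
    # then flatten the 2-D pixel grid into the 1-D array the network expects.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, size)
    return img.astype(np.float32).flatten() / 255.0  # scale pixels to [0, 1]
```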

Machine learning is super fun and I look forward to posting more of my experiences with it.
