TL;DR: Quick and easy pipelines that auto-detect duplicate runs and reuse intermediate results. Here's the relevant link:
Source Code: https://github.com/shashank-yadav/fastpipeline
If you're a data scientist, you've probably used sklearn pipelines at some point. Pipelines make your workflow easy to read and understand: they divide a big task into logical chunks while keeping your work reproducible. However, I ran into several issues when using sklearn pipelines in my workflow:
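For context, a typical sklearn pipeline looks something like this. A minimal sketch; the step names and dataset here are placeholders, but `Pipeline`, `StandardScaler`, `PCA`, and `LogisticRegression` are the standard scikit-learn APIs:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy data standing in for a real dataset
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Each step is a (name, transformer) pair; the final step can be an estimator.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=5)),
    ("clf", LogisticRegression()),
])

pipe.fit(X, y)
print(pipe.score(X, y))
```

Every call to `fit` re-runs all steps from scratch, even if only the last one changed, which is one of the pain points a caching pipeline aims to address.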
Augmented reality is one of those technologies that has been slowly gaining steam over the years, and it aims to completely eliminate the boundary between real and virtual. Both Apple and Google released their AR SDKs quite some time ago (ARKit and ARCore respectively). One of the best real-world examples of this technology is IKEA Place, which lets you see how a piece of furniture will look in your house.
This is quite a useful application and seemed like something worth building for learning purposes. So that's what I did, for Android!
From my most recent escapade into the deep learning literature, I present to you this paper by Oord et al., which introduces the idea of using discrete latent embeddings for variational autoencoders. The proposed model is called the Vector Quantized Variational Autoencoder (VQ-VAE). I really liked the idea and the results that came with it, but found surprisingly few resources for developing an understanding. Here's an attempt to help others who might venture into this domain after me.
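The core trick in VQ-VAE is the quantization step: each continuous encoder output is snapped to its nearest vector in a learned codebook. A minimal NumPy sketch of that step alone, with random stand-ins for the encoder outputs and the codebook (sizes are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

K, D = 8, 4                           # codebook size, embedding dimension (arbitrary)
codebook = rng.normal(size=(K, D))    # learnable embeddings e_1..e_K

# Stand-ins for encoder outputs: N continuous latent vectors z_e(x)
z_e = rng.normal(size=(5, D))

# Quantize: replace each z_e with its nearest codebook entry (the VQ step)
dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
indices = dists.argmin(axis=1)        # discrete codes
z_q = codebook[indices]               # quantized latents fed to the decoder

print(indices)
```

The discrete `indices` are what make the latent space categorical; in the full model, gradients are passed around the non-differentiable `argmin` with a straight-through estimator.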
The attention mechanism for sequence modelling was first introduced in the paper "Neural Machine Translation by Jointly Learning to Align and Translate" (Bengio et al., ICLR 2015). Even though the paper itself mentions the word "attention" sparingly (3 times total, in 2 consecutive lines!), the term has caught on, and a lot of prominent work that came later uses the same naming convention. (Well, I for one think it's more of a "soft memory" than "attention".)
This post focuses on Bengio et al. 2015 and tries to give a step-by-step explanation of the (attention) model explained in their…
Works on the Core ML team at Goldman Sachs.