TLDR; Quick and easy pipelines that autodetect duplicate runs and reuse intermediate results. Here are the relevant links:

Documentation: https://shashank-yadav.github.io/fastpipeline/

Source Code: https://github.com/shashank-yadav/fastpipeline

If you’re a data scientist, you might have used sklearn pipelines at some point. Using pipelines makes your workflow easy to read and comprehend: they divide a big task into logical chunks while making your work reproducible. However, I found several issues when using sklearn pipelines in my workflow:

  1. Restrictive API: a sklearn pipeline consists of transformers; you can use the existing ones or create your own. However, it forces you to work only with array-like data…
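
The documentation linked above shows fastpipeline's actual API; as an illustration of the underlying idea only, here is a minimal sketch, in plain Python, of how a pipeline step can fingerprint its inputs and reuse a stored result when a duplicate run is detected. The `cached_step` helper and the `.pipeline_cache` directory are hypothetical names for this sketch, not part of the library:

```python
import hashlib
import pickle
from pathlib import Path

CACHE_DIR = Path(".pipeline_cache")  # hypothetical cache location
CACHE_DIR.mkdir(exist_ok=True)

def cached_step(step_fn, data):
    """Run step_fn(data), reusing a cached result if the same
    (function, input) combination has been executed before."""
    # Hash the step's bytecode and its input to fingerprint this run
    # (a simplification; a robust version would also hash constants
    # and dependencies of step_fn)
    key = hashlib.sha256(
        step_fn.__code__.co_code + pickle.dumps(data)
    ).hexdigest()
    cache_file = CACHE_DIR / f"{key}.pkl"
    if cache_file.exists():
        # Duplicate run detected: reuse the intermediate result
        return pickle.loads(cache_file.read_bytes())
    result = step_fn(data)
    cache_file.write_bytes(pickle.dumps(result))
    return result

# Unlike sklearn transformers, the data here can be any picklable
# object, e.g. a dict of documents, not just array-like input.
def tokenize(batch):
    return {doc_id: text.lower().split() for doc_id, text in batch.items()}

tokens = cached_step(tokenize, {"doc1": "Hello World", "doc2": "Fast Pipeline"})
```

Running the same step twice on the same input hits the cache the second time, which is the "autodetect duplicate runs" behaviour described in the TLDR.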

Augmented reality is one of the things that’s been slowly gaining steam over the years, and it aims to completely eliminate the boundaries between the real and the virtual. Both Apple and Google released their AR SDKs (ARKit and ARCore respectively) quite some time ago. One of the best real-world examples of this technology is IKEA Place, which lets you see how a piece of furniture will look in your house.

IKEA Place on the App Store

This is quite a useful application and sounded like something worth building for learning purposes. So that’s what I did, for Android!

TLDR;

For the feisty ones, here’s a link to the…


From my most recent escapade into the deep learning literature, I present to you this paper by van den Oord et al., which introduces the idea of using discrete latent embeddings for variational autoencoders. The proposed model is called the Vector Quantized Variational Autoencoder (VQ-VAE). I really liked the idea and the results that came with it, but found surprisingly few resources for developing an understanding. Here’s an attempt to help others who might venture into this domain after me.

Like numerous other people, I pick Variational Autoencoders (VAEs) as my generative model of choice. Unlike GANs, they are easier to train and reason…
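
For a taste of the core mechanism, here is a minimal numpy sketch of the vector-quantization step at the heart of VQ-VAE: each encoder output is snapped to its nearest codebook embedding. The sizes and the `quantize` helper are illustrative choices of mine, not the paper's reference code:

```python
import numpy as np

rng = np.random.default_rng(0)

K, D = 512, 64                        # codebook size and embedding dimension (illustrative)
codebook = rng.normal(size=(K, D))    # the discrete latent embeddings e_1..e_K

def quantize(z_e):
    """Map each encoder output vector in z_e (shape (N, D)) to its
    nearest codebook entry: z_q = e_k with k = argmin_j ||z_e - e_j||."""
    # Pairwise squared distances between encoder outputs and codebook
    dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)    # discrete codes
    z_q = codebook[indices]           # quantized latents fed to the decoder
    return z_q, indices

z_e = rng.normal(size=(8, D))         # stand-in for encoder outputs
z_q, codes = quantize(z_e)
# During training, gradients are copied straight through the
# quantization step: z_q is treated as z_e + stop_gradient(z_q - z_e).
```

The argmin makes the latent space discrete, and the straight-through trick in the final comment is what lets the encoder still receive gradients despite that discreteness.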


The attention mechanism for sequence modelling was first introduced in the paper Neural Machine Translation by Jointly Learning to Align and Translate (Bahdanau et al., ICLR 2015). Even though the paper itself mentions the word “attention” only sparingly (3 times total, in 2 consecutive lines!), the term has caught on, and a lot of prominent work that came later uses the same naming convention. (Well, I for one think it’s more of a “soft memory” than “attention”.)

This post focuses on Bahdanau et al. (2015) and tries to give a step-by-step explanation of the attention model presented in their…
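
As a preview of what the post walks through, here is a small numpy sketch of the paper's additive alignment model, e_ij = vᵀ tanh(W s_{i-1} + U h_j), followed by a softmax over source positions and a weighted sum producing the context vector. The dimensions below are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

T, dh, ds, da = 6, 16, 16, 32    # source length and (assumed) layer sizes
h = rng.normal(size=(T, dh))     # encoder annotations h_1..h_T
s_prev = rng.normal(size=(ds,))  # previous decoder state s_{i-1}

# Parameters of the alignment model a(s_{i-1}, h_j) = v^T tanh(W s + U h)
W = rng.normal(size=(da, ds))
U = rng.normal(size=(da, dh))
v = rng.normal(size=(da,))

# Scores e_ij for every source position j, shape (T,)
e = np.tanh(W @ s_prev + h @ U.T) @ v

# Alignment weights alpha_ij via a (numerically stable) softmax
alpha = np.exp(e - e.max())
alpha /= alpha.sum()

# Context vector c_i: expected annotation under the alignment weights
c = alpha @ h                    # shape (dh,)
```

The softmax weights say how much each source annotation contributes to the next decoder step, which is exactly the "soft memory" reading mentioned above.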

Shashank Yadav

Works on the Core ML team at Goldman Sachs
