Deriving Expectation-Maximization

Deriving the expectation-maximization algorithm, and the beginnings of its application to LDA. Once the derivation is finished, its intimate connection to variational inference becomes apparent.
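As a hedged sketch of the bound at the derivation's core (my notation, not necessarily the post's): for any distribution q(z) over the latent variables,

    \log p(x \mid \theta) \;\ge\; \mathbb{E}_{q(z)}\big[\log p(x, z \mid \theta) - \log q(z)\big],

where the E-step tightens the bound by setting q(z) = p(z \mid x, \theta^{\text{old}}), and the M-step maximizes the right-hand side over \theta.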


Additional Strategies for Confronting the Partition Function

Stochastic maximum likelihood, contrastive divergence, noise contrastive estimation and negative sampling for improving or avoiding the computation of the gradient of the log-partition function. (Oof, that's a mouthful.)


A Thorough Introduction to Boltzmann Machines

A pedantic walk through Boltzmann machines, with a focus on the computational thorn in their side: the partition function.


From Gaussian Algebra to Gaussian Processes, Part 2

Introducing the RBF kernel, and motivating its ubiquitous use in Gaussian processes.


From Gaussian Algebra to Gaussian Processes, Part 1

A thorough, straightforward, un-intimidating introduction to Gaussian processes in NumPy.
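As a rough taste of where the two posts land (a minimal NumPy sketch; the grid, kernel hyperparameters and variable names are illustrative assumptions, not the posts' code):

    import numpy as np

    def rbf_kernel(x1, x2, length_scale=1.0, variance=1.0):
        # Squared-exponential (RBF) covariance between two sets of 1-D inputs.
        sq_dists = (x1[:, None] - x2[None, :]) ** 2
        return variance * np.exp(-0.5 * sq_dists / length_scale ** 2)

    x = np.linspace(-5, 5, 100)
    K = rbf_kernel(x, x) + 1e-8 * np.eye(len(x))  # jitter keeps the covariance positive-definite
    prior_draws = np.random.multivariate_normal(np.zeros(len(x)), K, size=3)  # three functions from the GP prior

Each row of prior_draws is one function sampled from the GP prior over the grid; the RBF length scale governs how quickly those functions vary.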


A Practical Guide to the "Open-Source Machine Learning Masters"

The higher education paradigm is changing. Motivation, logistics and strategic insight re: designing the "Open-Source Masters" for yourself.


Joining ASAPP

I'm joining ASAPP, Inc. as a Machine Learning Engineer.


Neurally Embedded Emojis

Convolutional variational autoencoders for emoji generation and Siamese text-question-emoji-answer models. Keras, bidirectional LSTMs and snarky tweets @united within.


Random Effects Neural Networks in Edward and Keras

Coupling nimble probabilistic models with neural architectures in Edward and Keras: "what worked and what didn't," a conceptual overview of random effects, and directions for further exploration.


Further Exploring Common Probabilistic Models

Exploring generative vs. discriminative models, and sampling and variational methods for approximate inference through the lens of Bayes' theorem.

