Convolutional variational autoencoders for emoji generation and Siamese text-question-emoji-answer models. Keras, bidirectional LSTMs and snarky tweets @united within.
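As a taste of the toolkit involved, here is a minimal Keras bidirectional LSTM text encoder; the vocabulary size and layer widths are placeholder assumptions, not the post's actual values:

```python
from tensorflow.keras import layers, models

# Encode a sequence of token ids into a fixed-size vector by reading
# it in both directions with an LSTM. Sizes below are illustrative.
encoder = models.Sequential([
    layers.Embedding(input_dim=10_000, output_dim=64),  # token ids -> vectors
    layers.Bidirectional(layers.LSTM(32)),              # read text both ways
    layers.Dense(16, activation="relu"),                # fixed-size encoding
])
```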
Coupling nimble probabilistic models with neural architectures in Edward and Keras: "what worked and what didn't," a conceptual overview of random effects, and directions for further exploration.
Exploring generative vs. discriminative models, and sampling and variational methods for approximate inference through the lens of Bayes' theorem.
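As a quick reference for that lens: Bayes' theorem in its standard form, where the evidence term $p(x)$ is the integral that sampling and variational methods exist to approximate:

$$
p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{p(x)}, \qquad p(x) = \int p(x \mid \theta)\, p(\theta)\, d\theta.
$$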
Statistical underpinnings of the machine learning models we know and love. A walk through random variables, entropy, exponential family distributions, generalized linear models, maximum likelihood estimation, cross entropy, KL-divergence, maximum a posteriori estimation and going "fully Bayesian."
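A few of those quantities tie together in one standard identity (notation assumed here, not necessarily the post's): cross entropy decomposes into entropy plus KL divergence,

$$
H(p, q) = -\sum_x p(x) \log q(x) = H(p) + D_{\mathrm{KL}}(p \,\|\, q),
$$

which is why minimizing cross entropy against a fixed $p$ is equivalent to minimizing the KL divergence.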
Autoencoding airports via variational autoencoders to improve flight delay prediction. Additionally, a principled look at variational inference itself and its connections to machine learning.
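For reference, the evidence lower bound (ELBO) that variational inference, and with it the variational autoencoder, maximizes, written in standard notation:

$$
\log p(x) \;\geq\; \mathbb{E}_{q(z \mid x)}\big[\log p(x \mid z)\big] - D_{\mathrm{KL}}\big(q(z \mid x)\,\|\,p(z)\big).
$$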
Deriving the softmax from first principles of conditional probability, and how this framework extends naturally to softmax regression, conditional random fields, naive Bayes and hidden Markov models.
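The softmax in question, for class $k$ given features $x$ and per-class weights $w_k$ (notation assumed here):

$$
p(y = k \mid x) = \frac{\exp(w_k^\top x)}{\sum_{j} \exp(w_j^\top x)}.
$$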
In this post, we look to beat the performance of Implicit Matrix Factorization on a recommendation task using 5 different neural network architectures.
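A minimal sketch, with invented dimensions, of the simplest such architecture: user and item embeddings scored with a dot product, i.e. matrix factorization rewritten as a neural network:

```python
from tensorflow.keras import layers, Model

n_users, n_items, dim = 1_000, 500, 32  # placeholder sizes

user_in = layers.Input(shape=(1,), name="user_id")
item_in = layers.Input(shape=(1,), name="item_id")
u = layers.Flatten()(layers.Embedding(n_users, dim)(user_in))
v = layers.Flatten()(layers.Embedding(n_items, dim)(item_in))
score = layers.Dot(axes=1)([u, v])  # predicted user-item affinity

model = Model([user_in, item_in], score)
model.compile(optimizer="adam", loss="mse")
```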
A follow-up to Erik Bernhardsson's post "More MCMC – Analyzing a small dataset with 1-5 ratings" using ordered categorical generalized linear models.
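The cumulative-link form of such a model, in standard notation (ordered thresholds $\alpha_1 < \cdots < \alpha_{K-1}$ and a single coefficient vector $\beta$ are assumptions of this sketch, not necessarily the post's parameterization):

$$
P(Y \le k \mid x) = \sigma(\alpha_k - \beta^\top x), \qquad k = 1, \dots, K - 1.
$$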
Simple intercausal reasoning on a 3-node Bayesian network.
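A hand-rolled illustration of the idea, with invented probabilities: Rain and Sprinkler are independent causes of WetGrass, and observing rain "explains away" the sprinkler:

```python
from itertools import product

P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.3, False: 0.7}
P_wet = {  # P(WetGrass=True | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.05,
}

def joint(r, s, w):
    pw = P_wet[(r, s)]
    return P_rain[r] * P_sprinkler[s] * (pw if w else 1 - pw)

def p_sprinkler_given(wet=True, rain=None):
    """P(Sprinkler=True | WetGrass=wet [, Rain=rain]) by enumeration."""
    num = den = 0.0
    for r, s in product([True, False], repeat=2):
        if rain is not None and r != rain:
            continue  # discard worlds inconsistent with the evidence
        p = joint(r, s, wet)
        den += p
        if s:
            num += p
    return num / den

print(p_sprinkler_given())           # ~0.62: wet grass suggests sprinkler
print(p_sprinkler_given(rain=True))  # ~0.32: rain explains it away
```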
A toy, hand-rolled Bayesian model, optimized via simulated annealing.
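A minimal sketch of a simulated-annealing loop of this flavor, with an invented objective and cooling schedule:

```python
import math
import random

def objective(x):
    return (x - 3) ** 2 + 2  # minimized at x = 3

x, temp = 0.0, 1.0
for step in range(10_000):
    candidate = x + random.gauss(0, 0.5)           # propose a local move
    delta = objective(candidate) - objective(x)
    # Always accept improvements; accept worse moves with prob e^(-delta/T),
    # so early (hot) steps explore and late (cold) steps settle.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = candidate
    temp = max(temp * 0.999, 1e-3)                 # geometric cooling

print(round(x, 2))  # converges near 3
```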