In this post, I explore the evolving world of large language models (LLMs), considering how they learn, the future of human-LLM conversations, the hallucination problem, compensating data providers, the potential lucrativeness of data annotation, and the advent of a new Marxist struggle.
A survey of how neural networks are currently being used in simulation-based inference routines.
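As a toy illustration of the idea (my own sketch, not code from the survey), one of the simplest neural approaches is to learn an amortized mapping from simulated data back to the parameter that generated it; the `simulator`, the prior, and the scikit-learn regressor below are all assumptions made for the example.

```python
# Toy amortized simulation-based inference: regress the parameter from simulated data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def simulator(theta, n=50):
    """Toy simulator: n Gaussian draws with unknown mean `theta`."""
    return rng.normal(loc=theta, scale=1.0, size=n)

# 1. Draw parameters from the prior and run the simulator.
thetas = rng.uniform(-5, 5, size=5000)
X = np.stack([np.sort(simulator(t)) for t in thetas])  # sorted draws as a crude summary

# 2. Train a neural network to "invert" the simulator -- a point-estimate stand-in
#    for the richer posterior estimators a full treatment would cover.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
net.fit(X, thetas)

# 3. Infer the parameter behind some observed data.
x_obs = np.sort(simulator(theta=2.0))
print(net.predict(x_obs[None, :]))  # should land near 2.0
```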
A detailed derivation of Mean-Field Variational Bayes, its connection to Expectation-Maximization, and how it implicitly motivates the "black-box variational inference" methods born in recent years.
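For orientation, and in my notation rather than necessarily the post's: mean-field variational Bayes maximizes the evidence lower bound (ELBO) under a factorized approximation $q(z) = \prod_j q_j(z_j)$, and coordinate ascent gives each factor a closed-form update.

$$
\log p(x) = \underbrace{\mathbb{E}_{q(z)}\big[\log p(x, z) - \log q(z)\big]}_{\text{ELBO}} + \mathrm{KL}\big(q(z)\,\|\,p(z \mid x)\big),
\qquad
q_j^*(z_j) \propto \exp\Big(\mathbb{E}_{q_{-j}}\big[\log p(x, z)\big]\Big)
$$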
Deriving the expectation-maximization algorithm, and the beginnings of its application to LDA. Once the derivation is finished, its intimate connection to variational inference becomes apparent.
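Sketched in my own notation, the two alternating steps are:

$$
\text{E-step:}\quad q^{(t)}(z) = p(z \mid x; \theta^{(t)}),
\qquad
\text{M-step:}\quad \theta^{(t+1)} = \arg\max_{\theta}\; \mathbb{E}_{q^{(t)}(z)}\big[\log p(x, z; \theta)\big]
$$

Setting $q$ to the exact posterior is what closes the KL gap in the ELBO above, which is the bridge to variational inference the post builds toward.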
Stochastic maximum likelihood, contrastive divergence, noise-contrastive estimation and negative sampling for improving or avoiding the computation of the gradient of the log-partition function. (Oof, that's a mouthful.)
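All four techniques target the same troublesome term. For an energy-based model $p(x; \theta) = \exp(-E_\theta(x)) / Z(\theta)$ (my notation), the log-likelihood gradient splits into a tractable "positive phase" on the data and an intractable "negative phase" under the model:

$$
\nabla_\theta \log p(x; \theta) = -\nabla_\theta E_\theta(x) + \mathbb{E}_{x' \sim p(x'; \theta)}\big[\nabla_\theta E_\theta(x')\big]
$$

Stochastic maximum likelihood and contrastive divergence approximate the negative-phase expectation with samples from persistent or short-run Markov chains, while noise-contrastive estimation and negative sampling sidestep it by recasting estimation as classification against noise samples.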
A pedantic walk through Boltzmann machines, with a focus on the computational thorn in their side: the partition function.
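As a reference point (my notation; conventions for the energy vary), a Boltzmann machine over binary units $x \in \{0, 1\}^d$ assigns

$$
p(x) = \frac{\exp\big(-E(x)\big)}{Z},
\qquad
E(x) = -x^\top W x - b^\top x,
\qquad
Z = \sum_{x' \in \{0,1\}^d} \exp\big(-E(x')\big)
$$

The sum defining $Z$ runs over all $2^d$ configurations, which is exactly the computational thorn in question.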
Introducing the RBF kernel, and motivating its ubiquitous use in Gaussian processes.
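For reference, the RBF (squared-exponential) kernel with signal variance $\sigma_f^2$ and length-scale $\ell$:

$$
k(x, x') = \sigma_f^2 \exp\!\left(-\frac{\lVert x - x' \rVert^2}{2\ell^2}\right)
$$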
A thorough, straightforward, un-intimidating introduction to Gaussian processes in NumPy.
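A minimal sketch of what such an introduction builds up to, drawing functions from a GP prior with an RBF kernel, assuming only NumPy. This is my own toy example, not code from the post.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel: variance * exp(-||x - x'||^2 / (2 * lengthscale^2))."""
    sq_dists = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dists / lengthscale ** 2)

rng = np.random.default_rng(0)
X = np.linspace(-5, 5, 100)

# Covariance of the GP prior at the test inputs, plus jitter for numerical stability.
K = rbf_kernel(X, X) + 1e-6 * np.eye(len(X))

# Draw three functions from the prior: f ~ N(0, K).
samples = rng.multivariate_normal(mean=np.zeros(len(X)), cov=K, size=3)
print(samples.shape)  # (3, 100)
```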
Convolutional variational autoencoders for emoji generation and Siamese text-question-emoji-answer models. Keras, bidirectional LSTMs and snarky tweets @united within.
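As a plain-NumPy illustration of the variational piece (my own sketch; the post's models are convolutional and written in Keras), the encoder's outputs feed a reparameterized sample and a KL penalty toward the standard-normal prior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Suppose the encoder produced these for one image: a mean and log-variance per latent dim.
mu = np.array([0.3, -1.2])
log_var = np.array([-0.5, 0.1])

# Reparameterization trick: sample z differentiably as mu + sigma * eps, with eps ~ N(0, I).
eps = rng.standard_normal(mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

# KL divergence between N(mu, sigma^2) and the standard-normal prior, summed over dimensions.
kl = -0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var))

# The full VAE loss adds a reconstruction term, e.g. binary cross-entropy between
# the decoder's output and the input image.
print(z, kl)
```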
Coupling nimble probabilistic models with neural architectures in Edward and Keras: "what worked and what didn't," a conceptual overview of random effects, and directions for further exploration.
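For orientation on the random-effects piece (my notation, not the post's), the canonical setup is a hierarchy in which group-level intercepts share a common distribution:

$$
y_i = \alpha_{j[i]} + \beta^\top x_i + \varepsilon_i,
\qquad
\varepsilon_i \sim \mathcal{N}(0, \sigma_y^2),
\qquad
\alpha_j \sim \mathcal{N}(\mu_\alpha, \sigma_\alpha^2)
$$

Swapping a neural network in for the linear term $\beta^\top x_i$ while keeping the probabilistic group-level structure is one natural version of the coupling described above.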