Our Future with LLMs

 — 

In this post, I explore the evolving world of large language models (LLMs), considering how they learn, the future of human-LLM conversations, the hallucination problem, compensating data providers, the potential lucrativeness of data annotation, and the advent of a new Marxist struggle.


Neural Methods in Simulation-Based Inference

 — 

A survey of how neural networks are currently being used in simulation-based inference routines.
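
For a taste of the core idea: simulate (parameter, data) pairs, then train a network to map data back to parameters. A minimal sketch follows; the toy simulator, uniform prior, and mean summary statistic are stand-ins of my own, not anything from the survey.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def simulator(theta, n=50):
    # Toy simulator: n Gaussian draws per parameter, reduced to their mean.
    x = rng.normal(loc=theta[:, None], scale=1.0, size=(len(theta), n))
    return x.mean(axis=1, keepdims=True)

theta = rng.uniform(-5, 5, size=10_000)  # draws from a uniform prior
x = simulator(theta)                     # simulated summary statistics

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
net.fit(x, theta)                        # amortized inverse of the simulator

x_obs = simulator(np.array([2.0]))       # pretend "observed" data
print(net.predict(x_obs))                # should land near 2.0
```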


Deriving Mean-Field Variational Bayes

 — 

A detailed derivation of Mean-Field Variational Bayes, its connection to Expectation-Maximization, and how it implicitly motivates the "black-box variational inference" methods developed in recent years.
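
For orientation, the two standard ingredients the derivation turns on, in notation that may differ from the post's:

```latex
% Mean-field assumption: the approximate posterior factorizes.
q(\mathbf{z}) = \prod_j q_j(z_j)

% Coordinate-ascent update: each factor is set to the exponentiated
% expected log joint, holding the other factors fixed.
\log q_j^*(z_j) = \mathbb{E}_{q_{-j}}\left[ \log p(\mathbf{x}, \mathbf{z}) \right] + \text{const}
```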


Deriving Expectation-Maximization

 — 

A derivation of the expectation-maximization algorithm, and the beginnings of its application to LDA. Once the derivation is complete, its intimate connection to variational inference becomes apparent.
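
For reference, the two alternating steps the derivation arrives at (standard form; the post's notation may differ):

```latex
% E-step: set q to the posterior over latent variables under the
% current parameters.
q^{(t)}(\mathbf{z}) = p(\mathbf{z} \mid \mathbf{x};\, \theta^{(t)})

% M-step: maximize the expected complete-data log-likelihood under q.
\theta^{(t+1)} = \operatorname*{arg\,max}_\theta\; \mathbb{E}_{q^{(t)}}\left[ \log p(\mathbf{x}, \mathbf{z};\, \theta) \right]
```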


Additional Strategies for Confronting the Partition Function

 — 

Stochastic maximum likelihood, contrastive divergence, noise contrastive estimation, and negative sampling for improving or avoiding the computation of the gradient of the log-partition function. (Oof, that's a mouthful.)
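
All four techniques orbit one identity (standard energy-based-model algebra, not specific to this post): writing the model as p(x; θ) = p̃(x; θ) / Z(θ), the troublesome gradient is an expectation under the model itself,

```latex
\nabla_\theta \log Z(\theta)
  = \mathbb{E}_{x \sim p(x;\,\theta)}\left[ \nabla_\theta \log \tilde{p}(x;\, \theta) \right]
```

so the log-likelihood gradient splits into a data-driven "positive phase" and a model-sample-driven "negative phase"; the methods above differ in how they approximate (or avoid) the latter.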


A Thorough Introduction to Boltzmann Machines

 — 

A pedantic walk through Boltzmann machines, with a focus on the computational thorn in its side: the partition function.
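
For the uninitiated, the thorn itself (standard notation): the partition function Z sums over every binary configuration, i.e. 2^n terms for n units.

```latex
E(\mathbf{x}) = -\mathbf{x}^\top W \mathbf{x} - \mathbf{b}^\top \mathbf{x}, \qquad
p(\mathbf{x}) = \frac{e^{-E(\mathbf{x})}}{Z}, \qquad
Z = \sum_{\mathbf{x} \in \{0, 1\}^n} e^{-E(\mathbf{x})}
```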


From Gaussian Algebra to Gaussian Processes, Part 2

 — 

Introducing the RBF kernel, and motivating its ubiquitous use in Gaussian processes.
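
A minimal NumPy sketch of the kernel itself (the standard squared-exponential form; the hyperparameter names are mine, not necessarily the post's):

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Pairwise squared Euclidean distances between rows of X1 and X2.
    sq_dists = (
        np.sum(X1**2, axis=1)[:, None]
        + np.sum(X2**2, axis=1)[None, :]
        - 2.0 * X1 @ X2.T
    )
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

X = np.linspace(-3, 3, 5)[:, None]
print(rbf_kernel(X, X).round(2))  # near-1 on the diagonal, decaying off it
```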


From Gaussian Algebra to Gaussian Processes, Part 1

 — 

A thorough, straightforward, un-intimidating introduction to Gaussian processes in NumPy.
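
As a taste of what the post builds toward, a minimal sketch of drawing functions from a GP prior in NumPy (illustrative, not the post's verbatim code; the jitter term is a standard numerical-stability trick):

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    sq_dists = (
        np.sum(X1**2, axis=1)[:, None]
        + np.sum(X2**2, axis=1)[None, :]
        - 2.0 * X1 @ X2.T
    )
    return np.exp(-0.5 * sq_dists / lengthscale**2)

X = np.linspace(-3, 3, 100)[:, None]
K = rbf_kernel(X, X) + 1e-8 * np.eye(len(X))  # jitter for numerical stability

rng = np.random.default_rng(0)
f = rng.multivariate_normal(np.zeros(len(X)), K, size=3)
# Each row of `f` is one smooth random function evaluated at the inputs.
```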


Neurally Embedded Emojis

 — 

Convolutional variational autoencoders for emoji generation and Siamese text-question-emoji-answer models. Keras, bidirectional LSTMs and snarky tweets @united within.
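
For the curious, the objective behind the variational-autoencoder half (the standard ELBO, nothing emoji-specific): maximize a reconstruction term minus a KL penalty pulling the encoder toward the prior.

```latex
\mathcal{L}(\theta, \phi;\, \mathbf{x}) =
  \mathbb{E}_{q_\phi(\mathbf{z} \mid \mathbf{x})}\left[ \log p_\theta(\mathbf{x} \mid \mathbf{z}) \right]
  - D_{\mathrm{KL}}\left( q_\phi(\mathbf{z} \mid \mathbf{x}) \,\|\, p(\mathbf{z}) \right)
```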


Random Effects Neural Networks in Edward and Keras

 — 

Coupling nimble probabilistic models with neural architectures in Edward and Keras: "what worked and what didn't," a conceptual overview of random effects, and directions for further exploration.
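
To make "random effects" concrete, a generic Keras sketch of per-group intercepts on top of a shared network (an illustration of the idea only; the post's actual models are built in Edward, and a fully Bayesian treatment would place a prior on the intercepts):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

n_groups, n_features = 10, 5  # illustrative sizes

x_in = layers.Input(shape=(n_features,), name="features")
group_in = layers.Input(shape=(1,), dtype="int32", name="group_id")

# Shared ("fixed effects") network.
hidden = layers.Dense(32, activation="relu")(x_in)
fixed = layers.Dense(1)(hidden)

# One learned scalar per group, analogous to a random intercept
# (without the partial pooling a prior would provide).
intercept = layers.Flatten()(layers.Embedding(n_groups, 1)(group_in))

y_out = layers.Add()([fixed, intercept])
model = Model([x_in, group_in], y_out)
model.compile(optimizer="adam", loss="mse")
```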

