hi, i'm matthew

and I'm interested in everything.

Written by Matthew Russell who follows Jesus, studies machine learning at the University of Kentucky, and interns at NASA. Get to know him or check out his projects on GitHub.

Peering Into the Void

October 30, 2021 #astronomy #black-holes

Radio telescopes fascinate me. When someone explains how they work, you can’t help but think: “Wait, you can actually do that?” On one hand, the mathematical theory behind them simply exudes elegance; on the other, the implementation challenges seem impossible to overcome in practice. We are clearly dealing with some absurdities when typical solutions involve maser atomic clocks and keeping equipment at temperatures only a handful of degrees above absolute zero, which, if you had forgotten, is the…

Let's Talk About Variational Inference

October 16, 2021 #math #probability #machine-learning

Previously on this blog I’ve discussed Markov chain Monte Carlo (MCMC) and how we can use it to estimate a complex posterior distribution we cannot directly solve for. I’ll recap some of the motivations for this and then introduce how variational inference can help us solve the same problem. True to form, I’ll stick to hand-wavy, math-free explanations before introducing a more technical description. Problem Definition In the real world, we often encounter uncertainty. There are things…
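For a taste of what “variational” means here, a toy sketch (my own illustrative construction, not code from the post): approximate an unnormalized 1-D target with a Gaussian q(x; μ, σ) by maximizing the ELBO — the expected log unnormalized density under q plus the entropy of q — over a small grid of parameters.

```python
import numpy as np

# Toy variational inference: the target is an unnormalized N(2, 1),
# and we search a grid of Gaussians q(x; mu, sigma) for the highest
# Monte Carlo estimate of the ELBO.
log_p_tilde = lambda x: -0.5 * (x - 2.0) ** 2

rng = np.random.default_rng(0)
best, best_elbo = None, -np.inf
for mu in np.linspace(-4, 4, 81):
    for sigma in np.linspace(0.2, 3.0, 29):
        x = mu + sigma * rng.normal(size=2000)          # samples from q
        entropy = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
        elbo = log_p_tilde(x).mean() + entropy          # Monte Carlo ELBO
        if elbo > best_elbo:
            best, best_elbo = (mu, sigma), elbo
```

The winning (μ, σ) lands near (2, 1), the mean and standard deviation of the target, because maximizing the ELBO is equivalent to minimizing the KL divergence from q to the true posterior.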

You Should Use ISO 8601

April 24, 2021

You've been writing the date wrong. A message from Geneva. How to win friends and influence people.

Connecting the Dots of Monte Carlo

February 20, 2021

Monte Carlo ad nauseam. Predicting diseases from known genes. Estimating genes from observed diseases. Blatant abuse of function notation.

Monte Carlo Simulations

February 19, 2021

An intractable coffee problem. Your model is garbage, but very precisely so. The Tampa Bay Rays don't understand autoregressive processes. My reading goal is a bit lofty.

Making Sense of the Embedded Landscape

December 25, 2020 #iot #arduino

The world of embedded hardware and firmware is confusing. If you’re already confused, “firmware” just means “software that runs very close to the hardware.” And “very close to the hardware” means that you have no operating system to work with. The code you run is the only code there is. Nothing’s magic, and that’s important to remember, probably in general. I’d guess the culprit is the variation among manufacturers, which becomes accentuated on embedded platforms since you are working so close…

Reviewing the reMarkable 2

December 20, 2020 #review

Back in April I pre-ordered the new reMarkable 2 e-paper tablet. Shipping took several months, but I finally received my unit in November. In the days leading up to its arrival, I heard mixed but overall positive reviews of the new device. After working with it for the last month, I wanted to add my thoughts to the mix. The basics are there: palm rejection is phenomenal since the pen uses different technology than the touchscreen, writing is fluid with no noticeable delay…

Estimating Baseball Event Probabilities With log5

December 03, 2020 #baseball #probability #math

In his 1981 and 1983 Baseball Abstracts, pioneering sabermetrician Bill James proposed the log5 method for combining event probabilities, which is similar to metrics used in other fields. Here are two motivating scenarios: Team A has one winning percentage and Team B has another. What is the expected winning percentage of Team A against Team B? A pitcher strikes out 20% of batters he faces; a batter strikes out 10% of the time. What is the expected strikeout rate in this matchup? Winning…
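For the curious, the head-to-head form of the formula is short enough to sketch here (a hedged illustration; the post’s own derivation and notation may differ):

```python
def log5(p_a, p_b):
    """Bill James' log5: expected probability that A beats B, given
    each side's standalone success rate against average opposition."""
    return p_a * (1 - p_b) / (p_a * (1 - p_b) + p_b * (1 - p_a))

# A .600 team facing a .400 team is expected to win about 69% of the time.
matchup = log5(0.600, 0.400)
```

The batter-vs-pitcher scenario usually folds in the league-average rate as a third input, which the two-argument form above omits.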

Disentangling VAEs, KL Divergence, and Mutual Information

September 24, 2020 #probability #math #machine-learning #jointvae

I’ve recently been reading about the JointVAE model proposed by Emilien Dupont in the paper Learning Disentangled Joint Continuous and Discrete Representations. The paper builds on the development of variational autoencoders (VAEs). As a quick overview, autoencoders are neural networks that take an input, generate some “secret” representation of it, then try to use that “secret” code to reconstruct the input. If it can do that well, then we have found a secret code that summarizes the important…
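As a minimal illustration of that encode/decode loop (my sketch, not the JointVAE model): a linear autoencoder, which is equivalent to PCA, compresses data to a low-dimensional “secret” code and reconstructs from it.

```python
import numpy as np

# Linear autoencoder via SVD: project onto the top-k principal
# directions (encode), then project back (decode).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X -= X.mean(axis=0)

_, _, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
encode = lambda x: x @ Vt[:k].T      # the "secret" k-dim code
decode = lambda z: z @ Vt[:k]        # best linear reconstruction

Z = encode(X)
X_hat = decode(Z)
```

A VAE swaps the deterministic code for a probability distribution and adds a KL penalty, which is where the disentangling story begins.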

Markov Chain Monte Carlo Sampling

September 23, 2020 #probability #math

One of my courses recently introduced Markov Chain Monte Carlo (MCMC) sampling, which has a lot of applications. I’d like to dive into those applications in a future post, but for now let’s take a quick look at Metropolis-Hastings MCMC. A Brief Prologue Let’s say we have a probability distribution function (within a multiplicative constant) that is very complex. We have an equation, but maybe it is impossible to integrate. Somehow, we’d like to draw samples from this distribution to estimate…
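A random-walk Metropolis sampler really is just a few lines (a sketch assuming a symmetric Gaussian proposal, which is the special case of Metropolis-Hastings the post builds from):

```python
import math, random

def metropolis(log_unnorm, x0, n, step=1.0, seed=0):
    """Sample from a density known only up to a multiplicative
    constant, using a symmetric Gaussian random-walk proposal."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, p(proposal) / p(x)),
        # computed in log space for numerical safety.
        if math.log(rng.random()) < log_unnorm(proposal) - log_unnorm(x):
            x = proposal
        samples.append(x)
    return samples

# Target: a standard normal, pretending we only know it unnormalized.
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
```

After discarding burn-in, the sample mean and variance should hover near 0 and 1 — no normalizing constant required.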

Belief Propagation

September 16, 2020 #probability #math

Let’s say that you and I are roommates, and I notice you’ve been gone the last two Friday nights. This is not necessarily unusual, and sometimes your Friday night excursion is a date. However, you don’t communicate well, so I have no idea if you had a date or not. As you prepare to go out for the third Friday in a row, I wonder if this Friday you have a date. You won’t spill the beans, but I have a mind-reading superpower — *pause for dramatic effect* — math 🔥. I make an educated guess that if…
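The flavor of that educated guess can be previewed with a plain Bayes’-rule update (simpler than the full belief propagation the post develops, and with made-up numbers for every probability):

```python
# Hypothetical numbers for the roommate story: how much should three
# Friday-night absences in a row shift my belief that you're seeing
# someone? Sequential Bayes updates on one fixed hypothesis.
prior = 0.3             # assumed P(seeing someone)
p_out_if_yes = 0.9      # assumed P(out on Friday | seeing someone)
p_out_if_no = 0.4       # assumed P(out on Friday | not)

posterior = prior
for _ in range(3):      # three observed Friday absences
    num = p_out_if_yes * posterior
    posterior = num / (num + p_out_if_no * (1 - posterior))
```

Each absence multiplies the odds by 0.9/0.4 = 2.25, so the belief climbs from 0.3 to about 0.83; belief propagation generalizes this bookkeeping to whole networks of variables.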

Initial Commit

December 12, 2019 #markdown

I've considered starting a blog more than once over the past few years. Also, markdown is the best.