# Writing

I post occasionally about topics that interest me (usually math…)

# Chess on a Klein Bottle

Today I came across this 2006 article by Timothy Chow in The Mathematical Intelligencer that introduces a variant of chess played on a Klein bottle. In Chow's game the surface is constructed in the canonical way: the a and h files are joined with an inversion of direction.
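The identification itself fits in a few lines. The sketch below is my own illustration, not code from Chow's article; the conventions (0-indexed files and ranks, ranks wrapping straight to complete the bottle) are assumptions for the demo:

```python
def normalize(file, rank, n=8):
    """Map an off-board (file, rank) back onto the n x n Klein-bottle board.

    Crossing the a/h seam inverts the rank direction; wrapping the ranks
    is straight. Both conventions are illustrative assumptions.
    """
    crossings = file // n      # floor division counts seam crossings (handles negatives)
    if crossings % 2:          # an odd number of crossings flips the rank direction
        rank = n - 1 - rank
    return file % n, rank % n
```

For example, a piece sliding one square past the h-file on the bottom rank re-enters on the a-file at the top rank: `normalize(8, 0)` gives `(0, 7)`.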

# NL = co-NL

In 1988, Neil Immerman (as of 2024, Professor Emeritus at UMass Amherst) published a famous paper in which he proved NL = co-NL; in the same year, Róbert Szelepcsényi independently proved the same result. I first learned their theorem (now widely known as Immerman–Szelepcsényi) in an undergraduate course that covered, among other topics, introductory complexity theory. We discussed the proof idea only briefly in class, since we were more interested in the result itself, but a number of interesting results about alternation follow from the proof. This post discusses the idea behind Immerman–Szelepcsényi.
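The heart of the proof is inductive counting: knowing the exact number of vertices reachable from the start vertex in at most i steps lets a nondeterministic machine certify *non*-reachability. As a deterministic toy (just the counts, not the log-space nondeterministic procedure of the actual proof), one can compute that sequence directly:

```python
def reachable_counts(adj, s):
    """Return c[0], c[1], ..., where c[i] is the number of vertices
    reachable from s in at most i steps.

    A deterministic illustration of the quantity the inductive-counting
    proof of NL = co-NL computes; the real proof guesses witnesses
    nondeterministically in logarithmic space.
    """
    n = len(adj)
    reach = {s}
    counts = [1]
    for _ in range(n - 1):
        reach = reach | {w for v in reach for w in adj[v]}
        counts.append(len(reach))
    return counts
```

With c[i] in hand, a verifier can check a claim "t is unreachable in i+1 steps" by enumerating all c[i] reachable vertices (guessing a path to each) and confirming none has an edge to t.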

# Message-Passing Networks

Graph Neural Networks (GNNs) have been cropping up everywhere in the literature. They're a pretty neat concept, after all: graphs can encode an enormously wide range of real-world data, so GNNs should, in theory, be very versatile models. But what exactly do they *do*? Why do they work? These are the questions we will explore in this post.
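As a taste of what one round of message passing does, here is a minimal dense sketch (sum aggregation over neighbors followed by a shared linear map and ReLU); the particular aggregation and nonlinearity are illustrative choices of mine, not taken from the post:

```python
import numpy as np

def message_pass(A, X, W):
    """One round of message passing.

    A: (n, n) adjacency matrix; X: (n, d) node features; W: (d, k) weights.
    Each node sums its neighbors' feature vectors, then applies a shared
    linear transform and a ReLU nonlinearity.
    """
    messages = A @ X                  # row v = sum of v's neighbors' features
    return np.maximum(messages @ W, 0.0)
```

Stacking several such rounds lets information propagate along paths in the graph, one hop per round.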

# Polynomial Interpolation

Cursory introductions to AI/ML often invoke the black box—that is, a function whose inner workings are uninterpretable—when describing machine learning models. They're not wrong: we genuinely don't understand how or why these models work, and not without reason.
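By contrast, the classical technique the title refers to is fully interpretable. A minimal sketch of Lagrange interpolation (my own illustration, not code from the post): given n+1 points with distinct x-coordinates, the unique polynomial of degree at most n through them can be evaluated directly from the Lagrange basis.

```python
def lagrange_eval(xs, ys, x):
    """Evaluate at x the unique polynomial of degree <= len(xs) - 1
    passing through the points (xs[i], ys[i]).

    Uses the Lagrange basis: L_i(x) = prod_{j != i} (x - xs[j]) / (xs[i] - xs[j]).
    """
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total
```

Every term here has a clear meaning, which is precisely what the black-box framing says we lack for modern models.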

# Hello World

This is a placeholder post for myself to use while I design the page style. It exercises a simple equation, an equation with more parts, a matrix, a Python code block with syntax highlighting, and plain body text. Get ready for a lot of lipsum! Lorem ipsum dolor sit amet, consectetur adipiscing elit.