# Polynomial Interpolation

Cursory introductions to AI/ML often reference the black box, that is, the notion of a function whose inner workings are uninterpretable, in describing machine learning models. They're not wrong; in many cases we genuinely don't understand how or why these models work, and not without reason: real-world data is often high-dimensional and embeds complex relationships that can't be articulated using conventional statistics.

So there's a reason we turn to the black-box model. It has clear advantages, of course; we never could've made so much progress in ML research if we hadn't embraced the "it works because it works" mentality. But there are also serious drawbacks, especially when ML meets decision-making. When we turn to computers to make decisions, we suddenly hesitate to trust their judgment. That's because, when real-world consequences are involved, it's not enough to say "the computer made me do it" to justify our actions; clear, well-founded reasoning must follow. The black box isn't going to cut it for high-stakes usage.

So how do we begin to understand the models we've spent years building? It's a daunting task, but, as good scientific practice suggests, we could begin by simplifying the problem and solving it with a foundation of solid mathematical principles. Enter the interpolation problem, a "predecessor" of sorts to machine learning. First, some notation:

Let $x_0 < x_1 < \cdots < x_n$ be $n + 1$ distinct points in an interval $[a, b]$.

Now, suppose a function $f : [a, b] \to \mathbb{R}$ is known only through its sampled values $y_i = f(x_i)$ at these points.

The problem arises: how do we construct a tractable interpolant $p$ satisfying $p(x_i) = y_i$ for each $i = 0, \dots, n$?

A reasonable first approach leverages the fact that there exists a unique polynomial of degree at most $n$ passing through the $n + 1$ points $(x_i, y_i)$.
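To make the uniqueness claim concrete, here's a minimal sketch (not from the original post) that recovers a cubic's coefficients by solving the Vandermonde linear system the points define:

```python
import numpy as np

# Illustrative sketch: the coefficients of the unique degree-n polynomial
# through n+1 distinct points solve a Vandermonde linear system.
x = np.array([-1.0, 0.0, 1.0, 2.0])   # 4 distinct nodes -> unique cubic
y = x**3 - x                          # samples of f(x) = x^3 - x
V = np.vander(x, increasing=True)     # V[i, j] = x_i ** j
c = np.linalg.solve(V, y)             # coefficients c_0 + c_1 x + c_2 x^2 + c_3 x^3
print(c)                              # recovers the coefficients of x^3 - x
```

Because the nodes are distinct, $V$ is invertible, which is exactly why the interpolating polynomial exists and is unique.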

Lagrange interpolation enables us to find explicit coefficients. Consider the following construction of a polynomial:

$$p(x) = \sum_{i=0}^{n} y_i \, \ell_i(x), \qquad \ell_i(x) = \prod_{\substack{0 \le j \le n \\ j \neq i}} \frac{x - x_j}{x_i - x_j}.$$

Each basis polynomial $\ell_i$ satisfies $\ell_i(x_j) = 1$ when $i = j$ and $0$ otherwise, so $p$ passes through every sample point.
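The construction translates almost line-for-line into code. This is a sketch, and `lagrange_eval` is a hypothetical helper name rather than anything from a library:

```python
import numpy as np

def lagrange_eval(x_nodes, y_nodes, t):
    """Evaluate the Lagrange form p(t) = sum_i y_i * l_i(t) directly."""
    t = np.asarray(t, dtype=float)
    p = np.zeros_like(t)
    for i, (xi, yi) in enumerate(zip(x_nodes, y_nodes)):
        # l_i(t) = prod over j != i of (t - x_j) / (x_i - x_j)
        li = np.ones_like(t)
        for j, xj in enumerate(x_nodes):
            if j != i:
                li *= (t - xj) / (xi - xj)
        p += yi * li
    return p

x = np.array([0.0, 1.0, 2.0])
y = x**2                       # interpolate f(x) = x^2 at three nodes
print(lagrange_eval(x, y, np.array([0.5, 1.5])))  # exact: [0.25 2.25]
```

Since $x^2$ has degree 2 and we use three nodes, the interpolant reproduces it exactly at every point, not just at the nodes.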

So how well does our interpolant perform? Let's put it to the test. I begin by sampling 16 evenly spaced points on $[-1, 1]$ from the degree-3 Legendre polynomial $P_3$:

```
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import lagrange
from scipy.special import legendre

f = legendre(3)                      # degree-3 Legendre polynomial P_3
x = np.linspace(-1, 1, 16)           # 16 evenly spaced sample points
y = f(x)
p = lagrange(x, y)                   # Lagrange interpolant through (x, y)

fig, ax = plt.subplots(1, 2, figsize=[10, 3])
for i in (0, 1):
    ax[i].set_xlim([-1.25, 1.25])
    ax[i].set_ylim([-1.25, 1.25])
    ax[i].scatter(x, y)              # left: samples only; right: samples + fit
d = np.linspace(-1.25, 1.25, 500)
ax[1].plot(d, p(d), 'g')
plt.show()
```
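As a quick sanity check (a sketch reusing the same setup, reduced to a single number), the interpolant should reproduce the cubic across the whole interval up to floating-point error:

```python
import numpy as np
from scipy.interpolate import lagrange
from scipy.special import legendre

# Sanity check: interpolating a degree-3 polynomial should reproduce it
# (near) exactly, since the true function lies in the interpolant's space.
f = legendre(3)
x = np.linspace(-1, 1, 16)
p = lagrange(x, f(x))
d = np.linspace(-1, 1, 500)
err = np.max(np.abs(p(d) - f(d)))
print(err)   # tiny: f is recovered up to floating-point error
```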

Here, Lagrange interpolation works like a charm. But that's to be expected; we're using a polynomial to interpolate a polynomial. What if we test our method on a harder task, like interpolating a transcendental function?

```
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import lagrange

f = lambda x: 1 / np.cosh(5 * x)     # transcendental target: sech(5x)
x = np.linspace(-1, 1, 16)
y = f(x)
p = lagrange(x, y)

fig, ax = plt.subplots(1, 2, figsize=[10, 3])
for i in (0, 1):
    ax[i].set_xlim([-1.25, 1.25])
    ax[i].set_ylim([-0.25, 1.25])
    ax[i].scatter(x, y)
d = np.linspace(-1.25, 1.25, 500)
ax[1].plot(d, p(d), 'g')             # the fit oscillates near the endpoints
plt.show()
```

This time the interpolant oscillates wildly near the endpoints of the interval: the classic Runge phenomenon, a known failure mode of high-degree interpolation at equispaced nodes. The key takeaway is that, as in machine learning, unexpected behavior in numerical analysis can arise as a natural consequence of the method itself. By understanding the mathematical basis for these errors, we can develop robust mitigation techniques.
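One such mitigation, sketched below under the assumption that we're free to choose where to sample, is to interpolate at Chebyshev nodes, which cluster toward the endpoints and suppress the oscillations:

```python
import numpy as np
from scipy.interpolate import lagrange

f = lambda x: 1 / np.cosh(5 * x)
d = np.linspace(-1, 1, 500)

# Equispaced nodes: prone to Runge-style oscillation near the endpoints.
x_eq = np.linspace(-1, 1, 16)
err_eq = np.max(np.abs(lagrange(x_eq, f(x_eq))(d) - f(d)))

# Chebyshev nodes of the first kind: x_k = cos((2k + 1) * pi / (2n)),
# clustered toward the endpoints of [-1, 1].
k = np.arange(16)
x_ch = np.cos((2 * k + 1) * np.pi / 32)
err_ch = np.max(np.abs(lagrange(x_ch, f(x_ch))(d) - f(d)))

print(err_eq, err_ch)   # the Chebyshev error is far smaller
```

The same 16 samples, placed more carefully, turn a divergent fit into a convergent one; that's the kind of mathematically grounded fix the takeaway above is pointing at.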

- Aaron