## Wednesday, November 20, 2013

### Keeping punters log-happy: Some properties of a "pristine parimutuel" market clearing mechanism

Is there such a thing as a perfect probabilistic paradise for punters? A place where participants' prior probabilistic prophecies are pulled apart and placed into a philosophically pristine parimutuel posterior, without painful departure from previously perceived pseudo-optimal particulars? I believe the platonic possibility is promising (though to provoke, in practice we puritans may be prevented from partaking in this powerful portal whilst we are paralyzed by the paranoid parlance of the present day).

A pristine market

Well now that I'm out of spit, here are some characteristics of a market for mutually exclusive, exhaustive outcomes that I would like to see:
1. Every participant is forced to invest in a manner that optimizes their long run wealth. They will allocate their wealth to maximize $$E^P[\log {\rm wealth}]$$ with respect to their best estimate of real world probability $$P$$.
2. Every participant's estimate of probability, on which 1. is evidently based, must include both private information (from which they might derive a probability measure $$R$$ say) and the market probabilities themselves (call them $$Q$$). In contrast, to simply equate $$P$$ with $$R$$ in 1. is a dreadful, though rather common, mistake.
3. Criterion 2. is achieved without the need for participants to monitor and respond to price feedback. In contrast, the usual mechanism for including market information, as we find in racetrack parimutuels, is to provide provisional estimates of $$Q$$ in realtime so that punters can react.
The last requirement motivated my ponderings on this topic. In our perceived utopia participants are given incentive to provide their own private probabilities $$R$$ well before "race time" (as it were). They can then put their feet up and relax, safe in the knowledge that 1. will be satisfied automatically.

This is non-trivial because even investors whose bets are tiny with respect to the overall market volume (and thus have negligible price impact) must quite rationally react to partial price discovery, even though in "theory" they should simply bet proportionately irrespective of the odds on offer (a lovely accident mentioned below). I use scare quotes because unless the odds are truly known, $$Q$$ should really enter the optimization via the back door: an update to $$P$$ acknowledging the market's ability to aggregate information.

In this setup the market clearing mechanism does the heavy lifting normally performed iteratively and imperfectly. So long as there is a rule governing the manner in which "$$R + Q = P$$" (figuratively speaking) we can anticipate every player's estimate of $$P$$ given $$Q$$, and thereby solve simultaneously for $$Q$$ and the log-optimal allocations that in turn influence $$Q$$. Here we consider the simplest rule of that sort: we shall assume $$P$$ is a convex combination of $$R$$ and $$Q$$ with fixed coefficients.

Also for simplicity we consider the case where the market is not subsidized (though that might be an interesting direction for generalization). Then linear pricing forces us to adopt the most straightforward parimutuel clearing mechanism once we know the allocations: divide all the money wagered amongst those choosing the winning horse, in proportion to their bets.

Necessary optimality conditions

Suppose market participant $$k$$ allocates all his wealth $$W^k$$ across $$I$$ mutually exclusive outcomes. Suppose his estimate of probability for the $$i$$'th state is given by $$p^k_i = \eta^k r^k_i + (1- \eta^k) q_i$$ where $$r^k$$ is his best estimate using only private information, and $$q_i$$ is the market implied probability arrived at by means to be discussed.

As noted this equation is a statement of a seemingly rational philosophy, independent of how the market operates. Investor $$k$$ might have noticed in the past that his private information adds some explanatory power to the market, but he probably shouldn't ignore the market prices altogether in arriving at the best estimate of real world probability.

We shall further suppose, in what follows, that all $$K$$ participants are rational in another sense. They wish to optimize the log of their posterior wealth. Now it is well appreciated that if $$p^k_i$$ are considered fixed this log-optimality leads simply to proportional betting, but that is not the case here. Only $$r^k_i$$ and $$\eta^k$$'s are fixed, and we shall attempt to construct allocations $$u = \{u^k\}$$ and clearing prices $$q_i$$ that overtly depend on the investments made by participants.

To that end let $$u^k_i$$ denote the fraction of wealth investor $$k$$ invests in the $$i$$'th state. Suppose that the market clears in parimutuel fashion, meaning that all participants receive the same price. The market probability for the $$i$$'th state must be $$q_i = \frac{ \sum_k u^k_i W^k } { \sum_{k=1}^K W^k } = \sum_k u^k_i W^k$$ since we might as well suppose w.l.o.g. that $$\sum_k W^k = 1$$.

The question then arises, is there a choice of $$\{u^k_i \}_{i,k}$$ such that investment by each participant is log-optimal? Intuitively one would expect a market equilibrium provided $$\eta^k$$'s are strictly between $$0$$ and $$1$$.

The utility function for the $$k$$'th investor is \begin{eqnarray} U^k( u^k ) & = & E^{p^k}\left[ \log\left( \frac{u^k_i}{q_i} \right) \right] \\ & = & \sum_{i=1}^I p^k_i \log\left( \frac{u^k_i}{ \sum_k u^k_i W^k } \right) \\ & = & \sum_{i=1}^I ( \eta^k r^k_i + (1- \eta^k) q_i ) \log\left( \frac{u^k_i}{ \sum_k u^k_i W^k } \right) \end{eqnarray} and by definition of $$u^k_i$$ the constraints are $$\sum_i u^k_i = 1$$. Or for every $$k$$ we might write $$g(u^k)=0$$ where $$g(u^k) = \sum_i u^k_i - 1$$. This sets up the first order Lagrangian equations for $$\Lambda(u,\lambda) = U(u) - \lambda \cdot g( u )$$ where we collect the components $$k=1,\dots,K$$. As usual for these problems we set $$\nabla \Lambda = 0$$ because for optimality the derivative of the objective function must be proportional to the derivative of the constraint function. This leads to equations of the form \begin{eqnarray} 0 & = & p^k_i( q_i(u)) \left( \frac{1}{u^k_i} - \frac{W^k}{q_i(u)} \right) + (1-\eta^k) W^k \log \left( \frac{u^k_i}{q_i(u)} \right) - \lambda^k, \ \ \ {\rm and} \\ 0 & = & \sum_i u^k_i - 1 \end{eqnarray} relating the allocations $$u^k_i$$ (the log term arises because $$p^k_i$$ itself depends on $$u$$ through $$q_i$$). Notice there are $$KI+K$$ equations and $$KI+K$$ free variables, including both the $$u^k_i$$'s and the $$\lambda^k$$ multipliers (the $$W^k$$ are fixed parameters and we have $$q_i(u) = \sum_k u^k_i W^k$$ as noted above). The solution to this system of non-linear equations defines a pristine parimutuel clearing mechanism.
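To make the clearing computation concrete, here is a minimal Scala sketch (all names mine). Rather than a full Newton solve of the system above, it makes the small-bettor approximation (ignoring each participant's price impact, i.e. the $$W^k/q_i$$ and log terms above), so each participant simply bets $$u^k = \eta^k r^k + (1-\eta^k) q$$ at the anticipated prices, and we search for a self-consistent $$q$$ by damped fixed-point iteration.

```scala
object PristineSketch {
  // r(k)(i): private probabilities; eta(k): weight on private info;
  // w(k): wealth fractions summing to one.
  def clear(r: Array[Array[Double]], eta: Array[Double], w: Array[Double],
            iters: Int = 200, damp: Double = 0.5): Array[Double] = {
    val states = r.head.length
    var q = Array.fill(states)(1.0 / states)        // start at uniform prices
    for (_ <- 1 to iters) {
      val qNew = new Array[Double](states)          // q_i = sum_k u^k_i W^k
      for (k <- r.indices; i <- 0 until states)
        qNew(i) += w(k) * (eta(k) * r(k)(i) + (1 - eta(k)) * q(i))
      q = q.indices.map(i => (1 - damp) * q(i) + damp * qNew(i)).toArray
    }
    q
  }

  def main(args: Array[String]): Unit = {
    // two players, two horses, both weighting private information at 0.8
    val q = clear(Array(Array(0.6, 0.4), Array(0.3, 0.7)),
                  eta = Array(0.8, 0.8), w = Array(0.5, 0.5))
    println(q.mkString(", "))                       // clearing prices
  }
}
```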

Comparison to proportional betting

In contrast if we imagine that $$q_i$$ are not a function of allocations $$u^k_i$$ but fixed, and further suppose $$\eta^k = 1$$, then we return to the overly stylized world where participants don't take market prices into account, preferring to believe their own homework is a sufficient statistic. The optimality conditions are instead: \begin{eqnarray} 0 & = & \frac{r^k_i}{u^k_i} - \lambda^k, \ \ \ {\rm and} \\ 0 & = & \sum_i u^k_i - 1 \end{eqnarray} It is apparent from the first equation that $$u^k_i \propto r^k_i$$ and then, from the second, that $$u^k_i = r^k_i$$. We recover the remarkable accident referred to as proportional betting, where the price $$q_i$$ does not enter the picture. This works perfectly well for blackjack where $$r^k = p^k$$ but not most real world games, where markets inevitably supplement one's private information.

Critique

While I have couched this discussion in market language, it should be apparent that we have derived an algorithm for combining multiple probabilistic forecasts into a single probability distribution $$Q = \{q_i\}$$, for which there is a large literature. See the references to Ranjan and Gneiting for example.

I shall concede that what is philosophically inconsistent is my use of simple linear pooling to arrive at the individual's subjective probability estimates $$p^k = \{p^k_i\}_{i=1}^I$$ based on their prior $$r^k$$ and the final market price $$q_i$$, given that we then derive a more sophisticated combination scheme for meshing between individuals. Why not use the "better" scheme to combine $$Q$$ and $$R$$?

I suppose a flippant response is "why not put a picture of a boy holding a cereal box on the cereal box?". Let me think about a more satisfying answer and get back to you.

## Thursday, October 31, 2013

### Hooke's Law and the Kalman filter. A little "spring theory" emphasizing the connection between statistics and physics.

Readers will be familiar with statistical mechanics. But what about mechanical statistics?

This post is purely expository and concerns the simplest one dimensional Kalman filter. You'll recall the setup. We periodically observe Brownian motion subject to gaussian measurement error and attempt to infer the ground truth from those measurements. But rather than dive into a bunch of conditional expectations for gaussian linear systems we'll look for some physical intuition. We'll pretend that we've been robbed of our computers and are forced to create analog varieties just as in the good old days.

We make an observation that isn't always stressed up front in the statistical literature (Harrison and West for example) or from the control systems perspective (such as you will find at the wikipedia entry for "Kalman filter", for example). Then we pursue the analogy between statistics and physics a little further, and show how the updating of a location estimate of a gaussian distribution amounts to a combination of center of mass and reduced mass calculations.

The perspective we arrive at is that most statistical calculations of a filtering variety are directly mapped to a corresponding physical system containing both fixed and free masses. Resolving these physical systems requires a change of reference frame. The calculations are therefore, somewhat unsurprisingly, precisely "center of mass" and "reduced mass" calculations from physics.

Kalman filter equations are just a center of mass calculation

Suppose the prior estimate of location for a particle is $$m$$ and the prior covariance is $$P$$. Suppose we make an observation $$y$$ with error variance $$R$$. Our posterior belief is gaussian with location $$m'$$ say and variance $$P'$$. The update is usually written \begin{eqnarray} m' & = & m + K ( y - m ) \\ P' & = & P(1-K), \ \ {\rm where} \\ K & = & \frac{P}{P+R} \end{eqnarray} However it is in many ways more natural to use the inverses of covariances instead. If we write $$\varphi = 1/R$$, $$p = 1/P$$ and $$p' = 1/P'$$ and multiply through by $$\frac{P+R}{PR}$$ we notice that the Kalman filter update is merely a center of mass calculation: \begin{eqnarray} m' & = & \frac{m/P + y/R} { 1/R + 1/P } = \frac{ pm + \varphi y }{ \varphi + p } \\ p' & = & \frac{1}{P'} = \frac{P+ R}{PR} = \frac{1}{P} + \frac{1}{R} = \varphi + p \end{eqnarray} The analogy works if we treat precision as mass. And in what follows we'll be equally interested in the analogy between force and the derivative of the negative log likelihood function.
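In code the center of mass form is a one-liner. A Scala sketch (names mine):

```scala
// Precision-form Kalman update, treating precision as mass.
// m, p: prior mean and precision; y, phi: observation and its precision.
def kalmanUpdate(m: Double, p: Double, y: Double, phi: Double): (Double, Double) =
  ((p * m + phi * y) / (p + phi), // posterior mean: center of mass
    p + phi)                      // posterior precision: total mass
```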

The correspondence so far:

| Statistics | Physics |
|---|---|
| precision | mass |
| negative log-likelihood | energy |
| derivative of negative log-likelihood | force |

This table suggests that in a gaussian world force is linear in distance. And it is true that we can construct an analogue Kalman smoother with rods and springs as follows:

 An "analogue" gaussian smoother using perfect Hookean springs
Minimizing energy corresponds to maximizing log-likelihood. And setting the derivative of log-likelihood to zero corresponds to finding the equilibrium where forces cancel out.

Furthermore the fact that combining two pieces of evidence for one latent variable can sometimes be as simple as merging the two observations at their "center of precision" corresponds to a nice accident when forces grow linearly with distance: the impact of two masses on a third is unchanged if they coalesce at their center of mass.

But there is more to the story...

Reading a "spring diagram" in a Hookean universe

To demonstrate a richer physical analogy we consider next a gaussian distribution whose location is unknown, and itself gaussian.

 Figure 1. Hierarchical model where location of a gaussian distribution is itself gaussian

Suppose our prior is \begin{eqnarray} P( x | \mu ) & \propto & e^{-\rho(x-\mu)^2} \\ P(\mu) & \propto & e^{-p(\mu-m)^2} \end{eqnarray} where this time $$m$$ represents our guess as to the location of the center of the distribution. Symbolically we might represent the prior with the following diagram.

Figure 2. Spring diagram representing prior knowledge of the location of a gaussian distribution

Here the tuples represent location and precision, or if you prefer, position and mass. It isn't merely an analogy. If we assume attractive forces vary as a Hookean law, which is to say proportional to the product of masses and distance: $$Force \propto M_1 M_2 d$$ then in the example above the yellow and green masses witness attractive force and potential energy given by \begin{eqnarray*} {\rm Force} & = & \frac{p}{\rho} \rho |m - \mu| = p |m - \mu |,\ \ {\rm and\ thus} \\ {\rm Energy} & = & \frac{1}{2} p (m - \mu )^2\\ \end{eqnarray*} Since energy is the negative log likelihood, we "read" this spring as $$P(\mu) \propto e^{-p(\mu-m)^2}$$. Similarly the spring between the yellow mass and red "test particle" is read $$P( x | \mu ) \propto e^{-\rho(x-\mu)^2}$$. The system therefore represents the hierarchical model and indeed, it is an exact physical analogue. So in what follows we will ask the following question: what force does the test particle feel as we add other masses to the picture?

Simplifying  a spring diagram using reduced mass

The game begins in earnest when we introduce noisy evidence of the unknown location parameter $$\mu$$ of our mysterious distribution. Suppose we take a draw $$x_2$$ from said distribution. Suppose we don't observe $$x_2$$ itself but instead a noisy measurement $$y$$ whose precision (or "mass", if you will) is $$\varphi$$. The noisy measurement's distribution conditional on $$x_2$$ is $$P(y|x_2) \propto e^{-\varphi(y-x_2)^2}$$ and corresponds to the following spring diagram.

 Figure 3. Spring diagram representing noisy evidence
As suggested in the diagram, we will attach the evidence to the latent variable $$\mu$$ as follows:

 Figure 4. Prior location belief plus a noisy measurement
Our task is simplification of this system until, from the perspective of the test mass, it resembles the form of the prior we began with. We should think of the observed evidence (green rectangles) as fixed points whereas the yellow circles represent unknown quantities that are free to move.

We ought to recall here the rules for combining springs in series, or to be more direct, the "reduced mass" trick for replacing a three body problem with a two body problem. In either situation physics reminds us that the combined action of the rightmost two masses can be simplified:

 Figure 5. Prior belief plus a noisy measurement simplified using reduced mass

We replace the mass $$\varphi$$ with a reduced mass $$\frac{\varphi}{\varphi+\rho}$$ because the intermediating unit mass reduces the pull. Since it is well covered elsewhere I will not derive the reduced mass expression, but notice why the reduced mass makes sense in the limits. If $$\varphi \rightarrow 0$$ the relative size of the yellow unit mass is huge and so the mass at $$\mu$$ hardly feels the pull from the green mass at $$y$$ at all. In the other extreme case, when $$\varphi \rightarrow \infty$$, the unit mass is sucked into the green mass and is, for all intents and purposes, stationary. Thus it acts like a fixed unit mass pulling the mass at $$\mu$$ rather than a floating one.

We proceed to the final simplification of the diagram. This is pretty easy as the two green masses are inertial. Their impact on the yellow mass is equivalent to a single inertial mass at their center of mass. Thus:

 Figure 6. Simplification of Figure 4 by reduced mass and center of mass calculation.
Apologies here because the diagram contains a small error. The denominators should read $$\varphi + \rho$$, not $$\varphi + p$$. Evidently this takes the form of the prior system in Figure 2 and can be easily read. It states that our posterior belief about the location parameter $$\mu$$ is gaussian with mean $$m'$$ say and combined green mass $$p'$$ say, where \begin{eqnarray*} m' & = & \frac{ m \frac{p}{\rho} + y \frac{\varphi}{\varphi+\rho} } { \frac{p}{\rho} + \frac{\varphi}{\varphi + \rho} } \\ p' & = & \frac{p}{\rho} + \frac{\varphi}{\varphi + \rho} \end{eqnarray*} Recalling from Figure 2 that a green mass $$M$$ pulling on the yellow mass $$\rho$$ encodes a precision $$\rho M$$, the posterior precision of $$\mu$$ is $$\rho p' = p + \frac{\varphi \rho}{\varphi + \rho}$$, which one can check against the Bayesian posterior directly.
This closes the loop and demonstrates how updating can be performed for the hierarchical model in Figure 1.

Recovering the Kalman filter update

As a parting note, we see that the limit $$\rho \rightarrow \infty$$ leads to update equations \begin{eqnarray} \rho p' & = & p + \frac{\varphi \rho}{\varphi +\rho} \rightarrow \varphi + p \\ m' & = & \frac{ m \frac{p}{\rho} + y \frac{\varphi}{\varphi+\rho} } { \frac{p}{\rho} + \frac{\varphi}{\varphi + \rho} } = \frac{ pm + \frac{\varphi \rho}{\varphi+\rho} y }{ p + \frac{\varphi \rho}{\varphi + \rho} } \rightarrow \frac{ pm + \varphi y }{ \varphi + p } \end{eqnarray} which is the Kalman update as before. This is to be expected, since in the limit $$\rho \rightarrow \infty$$ the problem of locating a distribution with unknown location (noisily observed) shrinks down to the problem of locating a point mass with unknown location (noisily observed).
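A quick numerical check, in Scala (helper names mine, and using the Figure 2 convention that a green mass $$M$$ encodes precision $$\rho M$$):

```scala
// Posterior read off Figure 6, converting green mass back to precision.
def hierarchical(m: Double, p: Double, y: Double, phi: Double, rho: Double)
    : (Double, Double) = {
  val priorMass = p / rho            // green mass carrying the prior
  val obsMass = phi / (phi + rho)    // reduced mass of the noisy evidence
  ((m * priorMass + y * obsMass) / (priorMass + obsMass), // center of mass
    rho * (priorMass + obsMass))                          // posterior precision
}

// As rho grows we recover the plain Kalman step, here (1.6, 5.0),
// for m = 0, p = 1, y = 2, phi = 4:
for (rho <- List(1.0, 100.0, 1e6))
  println(hierarchical(0.0, 1.0, 2.0, 4.0, rho))
```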

## Sunday, September 15, 2013

### The Horse Race Problem: A Subspace Solution

The "horse race problem" asks for a reasonable estimate of the relative ability of entrants in a competition given that we know the probabilities of winning. The question is vague, but implies the competition awards a win to the contestant with the highest (or lowest) score, and perhaps further implies that the distribution of contestant scores is a translation family where "ability" is a location parameter.  This post formalizes the problem and provides a simple practical solution.

 Geelong Cup Photo Finish 2005

Inferring running time distributions from win probabilities

The question is hardly new. As noted in Probability models for horse race outcomes by Mukhtar M. Ali, one way to estimate race outcome probabilities given a liquid market for the winner is to calibrate distributions of their running times. For example, one might assume the distribution of the $$i$$'th horse's running time $$\tau_i$$ is gaussian, with known variance, and that the location of each horse's distribution is a free parameter to be estimated. More generally we assume all running time distributions belong to a translation family where $$f$$ is fixed and a single parameter $$a_i$$ characterizes each horse via its running time density $$f_i(x) = f(x;a_i) = f(x-a_i)$$. We denote the corresponding probability that the $$i$$'th horse's running time $$\tau_i$$ is less than $$x$$ by $$F(x - a_i) = Prob( \tau_i < x).$$ Under these assumptions the probability $$p_i$$ that the $$i$$'th horse wins the race is $$p_i = \int f(x;a_i) \prod_{j \not= i}^k \left(1-F\left(x;a_j\right) \right) dx.$$ If we arbitrarily fix $$a_1=0$$ we can presumably solve for $$\{a_i\}_{i>1}$$. More on that in a moment.
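For concreteness, here is a rough Scala sketch of the forward map from abilities to win probabilities in the gaussian case. The grid, names and the Abramowitz and Stegun approximation to erf are my own choices; any quadrature and normal cdf will do.

```scala
def erf(x: Double): Double = {                 // Abramowitz-Stegun 7.1.26
  val t = 1.0 / (1.0 + 0.3275911 * math.abs(x))
  val y = 1.0 - (((((1.061405429 * t - 1.453152027) * t) + 1.421413741) * t
    - 0.284496736) * t + 0.254829592) * t * math.exp(-x * x)
  math.signum(x) * y
}
def pdf(x: Double): Double = math.exp(-x * x / 2) / math.sqrt(2 * math.Pi)
def cdf(x: Double): Double = 0.5 * (1.0 + erf(x / math.sqrt(2.0)))

// p_i = integral of f(x - a_i) prod_{j != i} (1 - F(x - a_j)) dx:
// horse i posts time x while every other horse is slower.
def winProbs(a: Array[Double]): Array[Double] = {
  val h = 0.01
  val xs = (-800 to 800).map(_ * h)            // grid should cover the a_i
  a.indices.map { i =>
    xs.map { x =>
      h * pdf(x - a(i)) *
        a.indices.filter(_ != i).map(j => 1.0 - cdf(x - a(j))).product
    }.sum
  }.toArray
}

// three equally able horses come out near 1/3 each:
winProbs(Array(0.0, 0.0, 0.0)).foreach(println)
```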

Motivation (or lack thereof)

One application of the horse race problem is the estimation of so-called exotic probabilities: the probabilities of relevance for racegoers betting on quinellas, exactas (forecast) and trifectas. This presupposes that the market for horses winning a race is more efficient than the markets for various permutations of finishers. There is a small literature aimed at exploiting inefficiencies between the win and place (show) markets at racetracks.

The evidence is not altogether compelling, incidentally. It has been argued here that place prices on at least one betting exchange are roughly as efficient as the win market. And the ability for professional punters to beat the track take is greater for trifectas than the win market (because knowledge of two or three horses can be multiplied, as it were), suggesting that the smart money might be in trifectas instead. In a past life I gathered a fair amount of evidence to suggest just that, so don't rush off to the Dapto dogs armed only with a quaint assumption. Still, the horse race problem as stated remains an interesting mathematical problem and has motivated some people to study order statistics.

Normal running times

As noted by Ali, in the case where $$f = \phi$$ is the normal distribution an approximate solution for the $$a_i$$ given known $$p_i$$ is given by Henery $$a_i = \frac{ (k-1) \phi\left( \Phi^{-1}(\frac{1}{k}) \right) \left(\Phi^{-1}(p_i)- \Phi^{-1}(\frac{1}{k})\right) }{ \Phi^{-1}\left( \frac{i-\frac{3}{8}}{k + \frac{3}{4} } \right) }$$ Some other approximations are mentioned in this presentation by Victor Lo who summarizes some empirical findings.

Gamma running times

I leave it to the reader to explore another territory where analytic solutions are available. For a treatise on ranking models with gamma processes see Gamma processes, paired comparisons, and ranking by Hal Stern. (Related: Generalized Bradley-Terry models).

Arbitrarily distributed running times by brute force optimization

Here I restrict myself to some observations on the problem where $$f$$ is arbitrary. As mentioned above we can set $$a_1=0$$ and throw $$\{a_i\}_{i>1}$$ to a solver for $$F=0$$. Here, overloading notation slightly, $$F = F\left(\{a_i\}_{i\not=1}\right)$$ is an $$n$$ dimensional vector function taking $$n-1$$ arguments, and the $$i$$'th entry of $$F$$ is the difference between the computed ($$q_i$$) and desired probabilities: $$F_i = \overbrace{\int f(x;a_i) \prod_{j \not= i}^k \left(1-F\left(x;a_j\right) \right) dx}^{q_i} - p_i.$$ This works fine for moderate $$n$$, but for $$n>100$$ it is time consuming, as can be seen in the following figure.

 Computation time in seconds (left) as a function of the number of runners in the race. Log scale on right. Matlab's implementation of Levenberg-Marquardt optimizer was used.
Solving a race with 200 runners takes on the order of one hour.

A novelty? Arbitrarily distributed running times by subspace optimization

We now introduce a method that cuts the computation time dramatically while retaining a high level of accuracy in the solution. The idea is to restrict the $$n-1$$ dimensional search to a subspace of dimension $$m < n-1$$. In particular if $$m$$ is fixed (say $$m=15$$) then we can compute the solution in approximately linear time. A one hour computation for a large race becomes a one minute computation (not as cool as detexity, but not bad).

Obviously the subspace must be chosen with some care, lest it lie nowhere near the best solution. But before getting to that it also helps to introduce a scaled measure of discrepancy between the desired vector $$p$$ and the one computed. We use the log-odds discrepancy:
$$D(p_i, q_i) = \log(p_i)-\log(1-p_i)-\log(q_i)+\log(1-q_i)$$ rather than the naked difference of probabilities in the equation above.

Now to optimize over a subspace we define a map $$\phi(z \in R^m) \mapsto a \in R^n$$ as follows. We assume the probabilities are sorted from highest to lowest. We let $$\{c_k\}_{k=1}^{m-1}$$ be a collection of centroids obtained by k-means clustering of the differenced log-probabilities $$\{\log(p_1)-\log(p_i)\}$$ into $$m-1$$ clusters. We then enlarge the collection by adding $$0$$ and $$\log(p_1)-\log(p_n)$$. After sorting the enlarged list of cluster centroids (padded with extreme points) we have a collection $$c = \{0, \dots, \log(p_1)-\log(p_n)\}$$ of cardinality $$m+1$$. Then we define an $$n$$ by $$m+1$$ matrix $$A$$ by setting $$A_{1j} = \delta_{1j}$$ and $$A_{n,j} = \delta_{m+1,j}$$, where $$\delta$$ is the Kronecker delta. For rows $$2 \le i \le n-1$$ we choose the convex combination of the two nearest cluster points that recovers $$\log(p_1)-\log(p_i)$$. Thus there are two non-zero entries of $$A_{ij}$$ for all but the first and last rows (unless a $$p_i$$ coincides with a centroid). With slight abuse of notation we regard $$A$$ as a linear transformation from $$R^{m+1}$$ to $$R^n$$ satisfying $$\ell = A(c)$$, where $$\ell_i = \log(p_1)-\log(p_i)$$ is the vector of differenced log-probabilities.

The map $$\phi(z \in R^m) \mapsto a \in R^n$$ is defined by composition:
$$\phi = A \cdot \iota$$ where in matlab notation we would write $$\iota(z) = [0;z]$$. That is to say $$\iota$$ is the trivial map from $$R^m$$ to $$R^{m+1}$$ that prepends a zero first coordinate (pinning $$a_1=0$$). Finally then, we pass to a solver the vector function $$\tilde{F}$$ taking $$m$$ arguments. The $$i$$'th coordinate of $$\tilde{F}$$ is given by
$$\tilde{F}_i = D\left( q_i\left( \phi\left(z\right)\right), p_i \right)$$
This works surprisingly well. The intuition is that the sorted abilities $$\{a_i\}$$ can be interpolated reasonably (and very roughly, vary with the $$\log$$ of $$p_i$$) so long as we have enough knot points. It helps to place the knot points near population centers to reduce the amount of interpolation. As we vary $$z$$ we are varying the locations of the knot points and forcing the $$a_i$$'s to move in a subspace, but since the interpolation is pretty good the subspace lies quite close to the solution.
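A rough Scala sketch of the construction (all helper names mine, with a crude one dimensional k-means standing in for a library routine; the matrix $$A$$ is applied implicitly as linear interpolation between knots):

```scala
// Returns phi: R^m -> R^n. p must be sorted in decreasing order;
// z holds candidate abilities at the m free knot points.
def subspaceMap(p: Array[Double], m: Int): Array[Double] => Array[Double] = {
  val ell = p.map(pi => math.log(p(0)) - math.log(pi))  // differenced log-probs
  var cs = Array.tabulate(m - 1)(j => ell.last * (j + 1) / m.toDouble)
  for (_ <- 1 to 50) {                                  // crude 1-d k-means
    val assign = ell.map(x => cs.indices.minBy(j => math.abs(x - cs(j))))
    cs = cs.indices.map { j =>
      val pts = ell.zip(assign).collect { case (x, c) if c == j => x }
      if (pts.isEmpty) cs(j) else pts.sum / pts.length
    }.toArray
  }
  val knots = 0.0 +: cs.sorted :+ ell.last              // m + 1 knot positions
  def interp(zFull: Array[Double]): Array[Double] = ell.map { x =>
    val found = knots.indexWhere(_ >= x)                // two nearest knots
    val hi = if (found < 0) knots.length - 1 else math.max(found, 1)
    val (a, b) = (knots(hi - 1), knots(hi))
    val lam = if (b == a) 0.0 else (x - a) / (b - a)
    (1 - lam) * zFull(hi - 1) + lam * zFull(hi)
  }
  (z: Array[Double]) => interp(0.0 +: z)                // iota pins a_1 = 0
}
```

The solver then searches over $$z \in R^m$$, evaluating $$\tilde{F}_i = D\left( q_i\left( \phi(z)\right), p_i \right)$$ at each trial point.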

And as noted, it is much faster. The figure below shows computation time as a function of the number of starters when we use subspaces of dimension $$5$$ and $$15$$. No code optimization has been attempted and undoubtedly, better numbers could be achieved in any number of ways. But the point is that using a subspace search alone results in a $$50$$x improvement or so for $$n \approx 200$$. The improvement would of course be much greater, in relative terms, for $$n \gg 200$$.

 Computation time as a function of race size when using a subspace search
The error introduced by the subspace approximation will vary, of course, but as a rough guide the ratio of $$p_i$$ to $$q_i$$ differs from unity by about half a percent when using a subspace of dimension five, even when there are $$200$$ runners. So a horse at 100/1 might be calibrated at 99.5/1. This error drops by another order of magnitude when using a subspace of dimension $$15$$, whereupon the calibrated and desired probabilities are, as a practical matter, indistinguishable.

Possible improvements

For known distributions we ought to be able to use more accurate interpolation. Here I merely assumed that probabilities fall off as the exponential of ability difference. For very large races we should use the stability postulate instead, at least for those in the tail.

A generalization?

It's usually around this point that I start to suspect there can't be anything new here. After all, any similar optimization problem where we have an ounce of intuition as to the solution could be similarly addressed (i.e. by minimizing interpolation error with the help of clustering). Perhaps somebody could point me to method XYZ and conclude that the horse race solution presented here is merely an application of the same.

## Monday, September 2, 2013

### Quasi-random sampling subject to linear and moment constraints (or: "how to bypass model calibration")

It's a rainy long weekend in New York and I've been dusting off an old experiment while listening to the thunder at a friend's place. I was particularly impressed by the fact they had an outlet on their porch positioned right by a comfy chair.

Here's the motivation. In finance we sometimes find ourselves attempting to calibrate a probabilistic model for an underlying random variable (such as a stock price) subject to known constraints (such as the price of an option on the same). Often, we then draw samples from the same model for the purpose of pricing other securities, performing risk calculations or satisfying some other need.

$$observed\ prices \rightarrow calibrated\ model \rightarrow random\ samples$$ This has several advantages including the built in regularization we get by using a parametrized model. There is one obvious drawback however: one has to come up with the model!

A while ago I was motivated to skip the calibration step. I was inspired by the herding literature in machine learning, and I started playing with a more direct path: $$observed\ prices \rightarrow random\ samples$$. The idea is that we devise quasi-random sampling schemes that automatically achieve the calibration while striving to fill out the space in a way that achieves some of the objectives of a calibrated model.

Best described by an example I feel. And in this one we consider samples on a discrete lattice where smoothness of the sampled distribution is controlled using the discrete Laplacian.

Example

I will lead with the picture. The middle plot demonstrates that we are able to generate samples whose histogram is quite smooth, yet satisfy the constraints provided. In this case we've chosen a two dimensional state space, as might commonly occur when (strike, maturity) or say (attachment point, maturity) axes are in play.

 Discrete Laplacian minimizing quasi-random samples (middle) approximating the true distribution (left). For comparison we also show a minimal surface (right) generated using the same constraints with direct optimization.
We've started with some fake distribution on the left, created some functions of the same and used those functions as constraints.

Aspirations and notation

We allow a lattice to be determined by axes $$x^{(1)}=\{x^{(1)}_1 >...> x^{(1)}_{n_1}\}$$ and $$x^{(2)}=\{x^{(2)}_1 >... > x^{(2)}_{n_2}\}$$. We suppose that $$F = \{f_k\}_{k \in K}$$ is a collection of functions on the implied lattice whose means are known constants $$\mu_k$$. That is,

$$\mu_k = E[f_k], \ \ \ \ k \in K$$

with respect to some density that we wish to sample from. Specifically we want to draw samples $$z=\{z^i\}_{i=1}^I = \{(z^i_1,z^i_2)\}_{i=1}^I$$ such that

$$\hat{f_k} := \frac{1}{I}\sum_{i=1}^I f_k(z^i)$$

satisfies approximate equalities

$$\hat{f_k}(z) \approx \mu_k, \ \ k \in K.$$

It will turn out that using equalities here involves no loss of generality. Indeed a cute feature of the algorithm exhibited here is that inequalities are treated in the same manner as equalities: we simply use the same function twice with the same weight $$w$$ but different $$\mu$$'s. The lower $$\mu$$ is interpreted as a lower bound and the higher as an upper bound.

Now for notational convenience we might as well consider $$f_k$$ to be functions defined on the integer lattice $$\{1..n_1\} \times \{1..n_2\}$$, identifying $$f(r,s) := f(x^{(1)}_r,x^{(2)}_s)$$. We also identify samples $$(z^{(1)}_r,z^{(2)}_s)$$ with their integer pairs $$(r,s)$$. We denote by $$Z(r,s)$$ the number of samples falling on the lattice point $$(r,s)$$.

A fast algorithm for generating quasi-random samples

In what follows we will use a prior probability $$\phi(r,s)$$ defined on the lattice. It might be determined by minimization of the square of the discrete Laplacian subject to the equality and inequality constraints given, or by other means.

The idea is to start lobbing samples on the lattice (like stackable Go stones) and watch the distribution emerge. By monitoring the discrete Laplacian we can try to coerce the samples to fill in holes - smoothing things out as it were. By monitoring the expectation of the constraint functions $$f_k$$ with respect to the pseudo-sample distribution thus far, we can correct for bias.

For this to be efficient we need a very efficient means of bookkeeping.

Initialization:

At the outset we fix a constant lattice function $$A_k(r,s) = signum \left( \mu_k - f_k(r,s) \right)$$ for $$k \in K$$. We initialize the Laplacian $$D(r,s)=0$$ for all lattice points $$(r,s)$$. We initialize counters $$S^k = 0$$ for $$k \in K$$.

Selecting the next quasi-random sample:

Beginning with quasi-random sample $$i=1$$ we perform the following steps.

1. Set urgency $$\nu(r,s) = D(r,s)$$ then for $$k \in K$$ update it by $$\nu(r,s) \rightarrow \nu(r,s) + w_k A_k(r,s) signum( S^k - i\mu_k )$$
2. Choose the most urgent lattice point after weighting: $$(r_i,s_i) = \arg \max \nu(r,s) \phi(r,s)$$
3. Increment the sample count for the lattice point:  $$Z(r_i,s_i) \rightarrow Z(r_i,s_i) + 1$$
4. Decrement $$D(r_i,s_i) \rightarrow D(r_i,s_i) -1$$ then increment $$D(r^*,s^*) \rightarrow D(r^*,s^*) + \frac{1}{4}$$ for all lattice points $$r^*,s^*$$ neighbouring $$r_i,s_i$$.
5. Update $$S^k \rightarrow S^k + f^k(r_i,s_i)$$.

What we are doing here is selecting a new point that improves as many of the biases as possible, subject to the discrete Laplacian not getting out of whack.

Selecting the next quasi-random sample (slightly more efficient):

It turns out we can be a little sneakier and initialize $$\nu(r,s) = \phi(r,s)$$. Then the following algorithm  is equivalent and somewhat more efficient.
1. Choose the most urgent lattice point after weighting: $$(r_i,s_i) = \arg \max \nu(r,s) \phi(r,s)$$
2. Increment the sample count for the lattice point:  $$Z(r_i,s_i) \rightarrow Z(r_i,s_i) + 1$$
3. Update $$S^k = S^k_{prev} + f^k(r_i,s_i)$$
4. For $$k \in K$$ update $$\nu(r,s) \rightarrow \nu(r,s) + w_k A_k(r,s) \left( signum(S^k-i \mu_k) - signum(S^k_{prev} - (i-1) \mu_k) \right)$$
5. Decrement $$\nu(r_i,s_i) \rightarrow \nu(r_i,s_i) -1$$ then for all four neighbours $$(r^*,s^*)$$ of $$(r_i,s_i)$$ increment $$\nu(r^*,s^*) \rightarrow \nu(r^*,s^*) + \frac{1}{4}$$.
6. Set $$S^k_{prev} = S^k$$.
Notice that we don't really need to consider the discrete Laplacian and the moment terms separately, since we only care about their direct tradeoff.
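For concreteness, here is a direct and unoptimized Scala sketch of the first version of the loop (helper names and signatures are mine):

```scala
// fs: constraint functions on lattice points; mus: their target means;
// ws: weights w_k; prior: the lattice prior phi. Returns counts Z(r,s).
def sample(n1: Int, n2: Int,
           fs: Array[(Int, Int) => Double], mus: Array[Double],
           ws: Array[Double], prior: (Int, Int) => Double,
           nSamples: Int): Array[Array[Int]] = {
  val z = Array.fill(n1, n2)(0)                  // sample counts Z(r,s)
  val d = Array.fill(n1, n2)(0.0)                // running Laplacian D(r,s)
  val a = fs.zip(mus).map { case (f, mu) =>      // fixed sign fields A_k(r,s)
    Array.tabulate(n1, n2)((r, s) => math.signum(mu - f(r, s)))
  }
  val sums = Array.fill(fs.length)(0.0)          // running sums S^k
  for (i <- 1 to nSamples) {
    var (best, bR, bS) = (Double.NegativeInfinity, 0, 0)
    for (r <- 0 until n1; s <- 0 until n2) {     // steps 1 and 2: urgency
      var nu = d(r)(s)
      for (k <- fs.indices)
        nu += ws(k) * a(k)(r)(s) * math.signum(sums(k) - i * mus(k))
      if (nu * prior(r, s) > best) { best = nu * prior(r, s); bR = r; bS = s }
    }
    z(bR)(bS) += 1                               // step 3
    d(bR)(bS) -= 1                               // step 4: Laplacian bookkeeping
    for ((dr, ds) <- List((1, 0), (-1, 0), (0, 1), (0, -1))) {
      val (r2, s2) = (bR + dr, bS + ds)
      if (r2 >= 0 && r2 < n1 && s2 >= 0 && s2 < n2) d(r2)(s2) += 0.25
    }
    for (k <- fs.indices) sums(k) += fs(k)(bR, bS) // step 5
  }
  z
}
```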

A minor practical tweak

It seems to help to use a tuning parameter $$\lambda$$ that weighs the relative importance of satisfying the constraints $$f_k$$ versus minimizing the discrete Laplacian. Left as an exercise for the interested reader.

## Sunday, September 1, 2013

### Two Kelly betting rules: one of which I might actually remember

I was having a conversation the other day involving proportional betting and the Kelly criterion, and was embarrassed that I couldn't remember off the top of my head what some of the numbers came out to be. So I decided to try to make this really simple for myself by supplying two rules. The first applies to betting on an even money favorite. The second applies to betting on longshots. Both assume you wish to maximize the log of your wealth.

RULE 1: If "heads" is ten percent more likely than usual, bet ten percent of your wealth.

Here's a plot of $$E[log(1+fX)]$$ where $$X$$ is a binary random variable taking value $$1$$ or $$-1$$ with probability $$0.55$$ and $$0.45$$ respectively. The x-axis shows $$f$$, the fraction of one's wealth one decides to put on the coin flip.

 Percentage of one's wealth to wager on a coin flip with p=0.55

The y-axis can be interpreted as the average return, if we assume we get paid even money even though we know the true probability is different. Clearly we want to bet about ten percent of our wealth to maximize this.

But is the scale wrong? Since we have a ten percent edge we should, on average, be making ten percent of ten percent return, or one percent return, right?

Well no. You only get half that. If you want to remember why you only get half a percent, not a one percent return, notice that you are approximately maximizing a linear function (the return you expect, not taking into account the $$log$$) subject to a quadratic penalty (the concavity of $$log$$). To be more precise we observe the expansion around  $$f=0$$ of

$$E[\log(1+fX)] = 0.55 \log(1+f) + 0.45 \log(1-f)$$

which is given by

$$0.55 \log(1+f) + 0.45 \log(1-f) = \overbrace{0.1 f}^{naive} - \overbrace{\frac{1}{2} f^2}^{concavity} + O(f^3)$$

The next two terms are $$+ \frac{1}{30} f^3 - \frac{1}{4} f^4$$ incidentally. But as might be surmised from the plot we are pretty close to a parabola. The marginal return goes to zero just as the parabolic term starts to match the linear one (i.e. its rate) and the absolute return goes all the way to zero when the parabolic term wipes out the linear one completely. So it must be that we optimize half way down, as it were.

Another way to come at this is that the fraction of wealth you should invest is simply $$p-q$$. That's surprising at first. But since $$E[\log(1+fX)] = p \log(1+f) + q \log(1-f)$$ in general, we note that

$$\frac{\partial}{\partial f} E[\log(1+fX)] = \frac{p}{1+f} - \frac{q}{1-f}$$

which is zero for

$$f = p - q$$

Incidentally your return grows slightly faster than the square of your edge. As the amount we bet is linear in the edge, we'd expect the return to grow quadratically as we vary $$p$$, more or less. And not forgetting the factor of one half it ought to be half the square of our edge. In fact it grows slightly faster, as can be seen by substituting the optimal $$f= p-q$$ back into the return equation.

 Average return for an even money coin flip.

The actual return is the red line. The quadratic approximation is the blue. Note that the percentage edge is $$2(p-0.5)$$, not $$p-0.5$$.
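The numbers above are easy to check. A Scala snippet (names mine):

```scala
// Expected log growth for an even-money bet and the Kelly fraction f = p - q.
def growth(p: Double, f: Double): Double =
  p * math.log(1 + f) + (1 - p) * math.log(1 - f)

val p = 0.55
val fStar = p - (1 - p)                    // bet ten percent of wealth
println(growth(p, fStar))                  // ~0.00501: half the naive 1%...
println(0.5 * math.pow(2 * (p - 0.5), 2))  // ...and slightly above (edge^2)/2
```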

So much for betting on the favourite, as it were, but what about longshots? The math doesn't change but our intuition might. Instead of fixing the outcome at even money and varying the true probabilities of an outcome, let's instead fix the true probability at $$p = 0.01$$ and vary the odds we receive. All being fair we should receive $100 back, including our stake, for every $1 bet. We'll assume instead we get a payout of $$\alpha$$ times the fair payout. Then we maximize

$$E[\log(1+fX)] = p \log\left(1+ f \alpha \left( \frac{1}{p}-1\right) \right) + (1-p) \log(1-f )$$

This brings us to the second easy to remember betting rule.

RULE 2. If a longshot's odds are ten percent better than they should be, bet to win ten percent of your wealth

Now I know what you are thinking: why do we need rule #1 if we have rule #2? Well you probably don't, actually, but evidently I might because I have managed to forget rule #2 in the past. We'll call it a senility hedge.

But rule 2 is quite reasonable on a larger domain - though eventually it too breaks down. If you can get ridiculously good odds on a longshot (let's say ten times the fair odds) don't go absolutely nuts. Only bet the amount that would double your wealth if the odds were fair. Diminishing returns set in if you are set to win so much that the concavity of $$log$$ kills the fun. You can see that in the picture below.

 Average return on a 100/1 horse when we are offered 1000/1 and 2000/1 respectively.

Notice that we bet only marginally more if we get 20 times the fair odds versus 10 times. We do win twice as much, however.
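Rule 2 is likewise easy to check numerically. A Scala sketch using a grid search (names mine):

```scala
// Log growth when the true win probability is p but the payout is alpha
// times fair; f is the fraction of wealth staked.
def logGrowth(p: Double, alpha: Double, f: Double): Double =
  p * math.log(1 + f * alpha * (1 / p - 1)) + (1 - p) * math.log(1 - f)

val (p, alpha) = (0.01, 1.1) // a 100/1 shot at odds ten percent better than fair
val fStar = (1 to 9999).map(_ / 1e6).maxBy(f => logGrowth(p, alpha, f))
println(fStar)                           // ~0.0009 of wealth staked
println(fStar * alpha * (1 / p - 1))     // ~0.10: we bet to *win* ten percent
```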

## Tuesday, July 23, 2013

### The Boy or Girl Paradox: A Martingale Perspective.

Mr Smith always tells the truth.

Interviewer: Mr Smith, how many children do you have?
Mr Smith: I have two.
Interviewer: Mr Smith, do you have a son?
Mr Smith: Yes

Q1: What is the probability that Mr Smith's other child is also a boy?

This is a clean version of the boy or girl paradox. The correct answer is 1 in 3, as a simple application of Bayes' Rule establishes. Given forty families, thirty will have at least one boy. Of those thirty, ten will have two boys.

Slightly different evidence would result in an answer of 1 in 2. For example:

Interviewer: Mr Smith, how many children do you have?
Mr Smith: I have two.
Interviewer: Mr Smith, is your oldest child a boy?
Mr Smith: Yes

Q2: What is the probability that Mr Smith's other child is also a boy?

The answer to the second question is clearly 1 in 2.
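Both answers can be verified by brute force enumeration over the four equally likely families:

```scala
// (oldest, youngest) pairs, each with probability 1/4
val families = for (a <- List("B", "G"); b <- List("B", "G")) yield (a, b)
val bb = ("B", "B")
val atLeastOneBoy = families.filter { case (a, b) => a == "B" || b == "B" }
val oldestIsBoy = families.filter { case (a, _) => a == "B" }
println(atLeastOneBoy.count(_ == bb).toDouble / atLeastOneBoy.size) // 0.333...
println(oldestIsBoy.count(_ == bb).toDouble / oldestIsBoy.size)     // 0.5
```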

The boy or girl paradox rises to the status of a controversy because many people will answer 1 in 2 in circumstances other than this as well (as pointed out by a reader - see comments below). However both Q1 and Q2 are, I think, entirely clear. The first situation results in a lower posterior probability.

Now don't ask me why I was looking at this right now but I got to wondering, is there an even easier way to see the inequality between the two answers?  Bayes Rule is pretty simple when you write it down but humans are notoriously bad at using it semi-consciously, if you know what I mean. Scanning the wikipedia explanation left me a bit unsatisfied. So here's another angle.

The Martingale Argument (Informal)

Suppose you had wagered that Mr Smith had two boys. You paid $1 and you'll receive $4 if he has two boys. It's an investment. What evidence would make you happier about your investment? Learning that at least one of the two children is a boy, or learning that the oldest child is a boy? Since either piece of evidence $$E$$ is certain given two boys, Bayes' Rule gives $$P({\rm two\ boys}|E) = P({\rm two\ boys})/P(E)$$. The latter piece of evidence is a priori less likely, and therefore better news.

## Pricing of simple contracts

Next we consider the pricing of our simple risky annuities $$A$$ and default count bets $$D$$ given a default surface $$S$$ of dimension $$(N+1) \times T$$. We claim these matrices can be computed as follows: \begin{eqnarray} \label{simple} D(\omega) & = & \left(\left(L S W\right)\cdot \left(R\Lambda'W\right)\cdot \left(R\Gamma'G\right)\right)F/10000 \nonumber \\ A(\omega) & = & \left(\left(H\left(\left(1-LS\right)Z\right)W\right)\cdot \left(H\left(R\Lambda'W\right)\right)\right)F \end{eqnarray} where $$\Gamma$$ is a vector of times and $$\Lambda$$ a vector of risk free discount factors.

The remaining matrices are constant. $$L = \left[ \begin{array}{cccc} 1 & 0 &\dots & 0 & \\ 1 & 1 & \dots & 0 & \\ \vdots & \vdots & \ddots & 0 & \\ 1 & 1 & 1 & 1 & \end{array} \right]_{N+1, N+1}$$ $$F = \left[ \begin{array}{cccc} 1 & 1 &\dots & 1 \\ 0 & 1 & \dots & 1 \\ \vdots & \vdots & \ddots & 1 \\ 0 & 0 & 0 & 1 \end{array} \right]_{T+1, T+1}$$ $$H = \left[ \begin{array}{cccc|c} 1 & 0 &\dots & 0 & 0 \\ 0 & 1 & \dots & 0 & 0 \\ \vdots & \vdots & \ddots & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{array} \right]_{N, N+1}$$ $$R = \left[ \begin{array}{c} 1 \\ 1 \\ \vdots \\ 1 \end{array} \right]_{N+1,1}$$ $$G = \left[ \begin{array}{cccc} -1 & 0 &\dots & 0 \\ 1 & -1 & \dots & 0 \\ 0 & 1 & \ddots & 0 \\ \vdots & \vdots & \ddots & -1 \\ 0 & 0 & 0 & 1 \end{array} \right]_{T, T-1}$$ $$W = \left[ \begin{array}{cccc} 1/2 & 0 &\dots & 0 \\ 1/2 & 1/2 & \dots & 0 \\ 0 & 1/2 & \ddots & 0 \\ \vdots & \vdots & \ddots & 1/2 \\ 0 & 0 & 0 & 1/2 \end{array} \right]_{T, T-1}$$ $$Z = \left[ \begin{array}{ccccc} 1 & -1 & 0 & \dots & 0 \\ 0 & 1 & -1 & \dots & 0 \\ 0 & 0 & 1 & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & -1 \\ 0 & 0 & 0 & 0 & 1 \end{array} \right]_{T, T}$$ Combining (\ref{decomp}) and (\ref{simple}) we can price all tranches for all default models using only matrix computations. The $$\theta$$ dependent matrices $$C$$ and $$P$$ in (\ref{decomp}) can be computed offline.

I made a few notes a while back for peeps like myself who are familiar with monads in functional programming but wondered why their definition isn't entirely obvious from the definition of a monad in mathematics, specifically category theory.

I personally made rather slow progress on this front (still don't understand why  arrows are strong monads for example). But perhaps something of the connection between the monad pattern in programming and the monad definition in mathematics can be cleared up here.

I confess that part of the motivation comes from being turned off by articles about monads. Newbies like your author might easily be forgiven for thinking of a monad as an endofunctorial elephant. I nearly went mad after reading twenty random monad blogs without finding a single one that states which category or categories we are in. That's like finding descriptions of "Germany" that include Nazis, white sausages and so forth and yet never mention that Germany is a country.

This lack of precision hardly helps us when we turn to details, like the fact that a monad in Haskell isn't actually a functor in the Haskell hierarchy. Indeed as Heunen and Jacobs point out, both monads and arrows are actually monoids, in mathematical parlance.

So let me state this first: a monad (programming speak) is a monad (math speak) in the category of types. If neither of us get anything more from this exercise, so be it.

Introducing MAP, a built-in functor

Let's start from the programming side. One use of a monad pattern emerges from an everyday programming need: the desire to extend the number system (or some other type) to include an exception, or 'missing' value. This is achieved by creating a new type. In Scala (you'll forgive the bias) this is constructed as Option[something]. When you alter a Double so that it is an Option[Double] you are creating an obvious mapping from one type to another, you might even say "the" obvious mapping. But you are not merely mapping objects to Option[objects] - at least not if you wish to get any work done. You are also dragging along all the nice things you'd like to do to the objects, namely hit them with functions. Your dragging is a functor.

A functor is not just a map from one space to another, we recall, but also a map of all the stuff-you-can-do-to-things in that space to stuff-you-can-do-to-things in the other space. To be more precise I should say we are dragging over not just functions but maybe some other stuff like relations, but that is a fine point for now. In the case of Double => Option[Double] you drag f(x) = x*x+3, say, into a function we might call Some(f) which should act on elements of Option[Double] and take Some(x) to Some(x*x+3).

Or take Double and shove it in Mathematica, metaphorically. It is no longer a Double, but merely an expression, like x*x+3 which is yet to be bound to a specific Double. Thing is, you'd like to do to the expressions what you do to real numbers, so evidently we are dragging morphisms like the one taking 3 into 9 and 5 into 15 over to morphisms on expressions (like the one taking x into 3*x). They ain't the same thing but there is clearly some conservation of structure as we go from real, tangible Double into ephemeral 'x'.

For more examples of "obvious" mappings take a bunch of program fragments and shove them together in a combinator pattern. Or play with state machines, whose composition can be simplified to a single state machine. The notion of an incomplete calculation, an unconsummated something, an unsimplified something, or an extension of a domain are all examples where there is an obvious, rich mapping from one type to another. So rich, in fact, that if we are thinking sloppily we often fail to distinguish between the domain and range of the obvious mapping.

Now let's back up and dot a few i's, because while a monad is a fairly simple pattern in programming we aren't close to recognizing its mathematical definition (well, I'm not). To be a little formal we need categories, functors and natural transformations. Remember that a category is supposed to be a space together with some morphisms obeying laws? It was a long time ago for me too, but the morphisms in two of the examples I mentioned are merely the set of all functions with the signature Double => Double. Again, a morphism is a more general concept than a function, basically obeying associativity, and for that reason is often called an arrow (in mathematics). But let's not worry about the possible deficiency in our morphism collection right now because the set of all Double => Double functions is interesting enough.

Two quick checks. First, it is obvious that composition of functions in the mathematical sense - no side effects - is associative. Second, we need not concern ourselves overly with the existence of an identity morphism - the other category requirement - because constructing the function that takes x and returns x presents little challenge to most programmers! Safe to assume the identity exists.

So jolly good, DOUBLE is a category and, returning to our first example, Some( ) sure looks like it might help make a functor from DOUBLE into something we imaginatively call SOME-DOUBLE. (In general we could call the first category THING and the new category SOME-THING, I suppose). Likewise Symbolic( ), which I just made up, looks like it might help create a functor from DOUBLE into SYMBOLIC-DOUBLE, as we might name it, a category representing a computation but not its answer. Are the images of these maps themselves categories? It seems pretty obvious that they are, because of the triviality of function application associativity and, to be pedantic, the obvious fact that the identity morphism maps to the identity morphisms in both SOME-DOUBLE and SYMBOLIC-DOUBLE.

We focus on the first functor momentarily, whose implementation in Scala comprises the constructor x=>Some(x) and the lifting f => (x => x map f) which uses the 'built-in' map method. I personally think of this built-in functor as a 2-tuple where MAP._1 acts on objects and MAP._2 acts on functions. I'll just use MAP going forward for either though, slightly abusing notation.

MAP  = ( x=>Some(x), f => (x => x map f) )

As an aside, if I had my way Function1 would come with a map method so the morphism mapper f => (x => x map f) would look simpler, but it doesn't matter so long as we remember that morally speaking, the morphism-mapping part of the functor is just 'map'. When I say that map is 'built in' I really mean built in to the collections libraries, incidentally, because map is provided in the scala.collection.Traversable and scala.collection.Iterable traits and all collections are of one of these two types. Thus for all intents and purposes the combination (constructor, map) is an "off the shelf" functor although yes, for the third time, there are indeed plenty of morphisms other than functions - partial orders, paths on graphs et cetera - I'll let it go if you will.

What is the domain of MAP? My guess is that we should extend it to include not just DOUBLE but SOME-DOUBLE. Also SOME-SOME-DOUBLE, SOME-SOME-SOME-DOUBLE and so on ad infinitum. Call that big union the category Omega so that MAP is, in math speak, an endofunctor because it maps the category Omega into itself. We are getting a little closer to the mathematical definition now, I do hope, and here comes the last little jog.

A natural (and obvious) transformation between the built-in functor MAP applied once and the built-in functor MAP applied twice

To humor me apply MAP twice to get MAPMAP = MAP compose MAP, a functor that maps points x => Some(Some(x)) and of course messes with morphisms as well. For example, consider the function f(x) = x*x+3. MAP takes this morphism into another morphism which happens to map Some(x) into Some(x*x+3). On the other hand MAPMAP would take f into a different morphism taking Some(Some(x)) into Some(Some(x*x+3)). This just emphasizes the fact that MAP and MAPMAP are not the same thing, but they are dreadfully close.

 The built-in functor MAP takes points to points and functions to functions

In fact with apologies for my handwriting and the mixing of mathematical and Scala notation on this diagram, MAP and MAPMAP are so similar that when applied to a particular morphism f(x)=x*x+3 they result in very similar looking functions g(x) and h(x). In fact g and h "are" the same Scala function - the very same pattern matching snippet of code I have scrawled in the picture! Needless to say the images of points a and b look quite similar as well. It is awfully tempting to relate MAP and MAPMAP by squishing Some(Some(x)) back to Some(x). The nice thing is, it costs us little to be precise.

In category theory there is a notion of a natural transformation between two functors that respects composition of morphisms. If we were category theorists we'd probably say "oh I suppose there must be a natural transformation from MAPMAP to MAP". I did not, come to think of it, have that exact experience myself, admittedly, because natural transformations are natural in some respects but also an abstract head f@#k the first time you come across them (they relate two functors, each of which in turn map functions to functions). The moral of the story: I think we should talk more about natural transformations to make us better people.

Actually we'll only talk about the one natural transformation mentioned: MAPMAP to MAP. As the definition on wikipedia explains (with F=MAPMAP and G=MAP, and C=D=Omega), this amounts to finding what is known as a component function (math speak) that lifts h=MAP(MAP(f)) onto g=MAP(f). That is straightforward because, translating back to Scala speak,

h andThen flatten =  flatten andThen g

You couldn't hope for a more "obvious" commutative diagram than this - which is a good sign because in category theory many things are supposed to look obvious after untangling of definitions or chasing of diagrams. We only need the top half of the diagram here, incidentally, and I don't think I need to define flatten. Even Scala newbies could implement the bit of it we need via pattern matching: x => x match {Some(y) => y}.

 Flatten is the component realizing the natural transformation from MAP MAP to MAP (top half)  whereas Some is the component realizing the natural transformation from Id to MAP (bottom half)

Thus natural transformations are sometimes quite friendly, it would seem, and there is an even more trivial one lurking here in the bottom half of the diagram, namely the natural transformation that takes the identity functor into MAP. That is realized by a component that happens to translate into a constructor (programming speak, of course) x => Some(x). Evidently:

f andThen Some = Some andThen g

so the bottom half commutes.
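Both halves can be checked directly in the REPL; here `unit` is my name for the constructor component:

```scala
val f: Double => Double = x => x * x + 3
val g: Option[Double] => Option[Double] = _ map f                  // MAP(f)
val h: Option[Option[Double]] => Option[Option[Double]] = _ map g  // MAPMAP(f)
val flatten: Option[Option[Double]] => Option[Double] = _.flatten
val unit: Double => Option[Double] = Some(_)

val x: Option[Option[Double]] = Some(Some(2.0))
println((h andThen flatten)(x) == (flatten andThen g)(x)) // top half: true
println((f andThen unit)(2.0) == (unit andThen g)(2.0))   // bottom half: true
```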

Finally, the connection to mathematical monads

We're done. If you skip down the wikipedia monad page to the formal definition, ignoring the stuff at the start about adjoint functors which is just confusing, you'll see that a monad comprises a functor F and two natural transformations, one taking the identity functor Id to F and the other taking F compose F to F. The natural transformations are defined by their components which are, respectively, the constructor Some and the squishing operation flatten. To summarize, here is how we would relate the definition of a mathematical monad on Wikipedia to the day-to-day notion of a monad in programming Scala

| Mathematical monad ingredients | Example of a common monad |
|---|---|
| Two categories C, D | One category comprising an infinite union of similar types: C = D = Omega = Union(DOUBLE, SOME-DOUBLE, SOME-SOME-DOUBLE, ...) |
| A functor between the categories F: C -> D | The "built-in" functor MAP taking points x => Some(x) and functions f => (x => x map f) |
| A natural transformation taking the identity functor Id to F | A natural transformation taking the identity functor Id to MAP |
| A "component" realizing said natural transformation | A constructor x => Some(x) that "wraps" up x |
| A second natural transformation taking F F -> F | A natural transformation from MAP MAP to MAP |
| A second "component" realizing the second natural transformation | A flatten operation x => x match {Some(y) => y} that unwraps y |

As the mathematicians remind us there are some things to check, known as coherence conditions, but I leave that to the reader.

What's the point?

Now here we are transforming functors when what we are looking for is as elementary as unwrapping a burrito. One might argue that understanding the abstraction is no harder than noticing the essential similarity between any two cases where this pattern arises, and any good programmer should be on the lookout for that sort of thing. They should also be looking to communicate it to those maintaining code long after they have written it, and make that maintenance as simple as possible. Abstraction is next to cleanliness, as they don't really say.

If one is going to talk loosely about "conserving structure" one might as well talk rigorously about it if it isn't all that much harder. And when you do, there are certainly some nontrivial things that fall out of category theory (for another time). No wonder the seemingly arcane area, once considered so obscure and abstract as to be useless, has made a charge up the Google n-gram leaderboard in recent times.

 Popularity of "category theory" and "functional programming"

Still, I think most people would prefer to keep it a little wooly and merely recognize that we have an obvious identification between a doubly constructed type like List(List(3,4)) and one level down List(3,4), and that this identification needs to be carried over to morphisms - sorry functions. That is why you would naturally write flatmap yourself as the need arose, but might not necessarily appreciate the similarity to ComputeThis(ComputeThis(blah ))  or ParseMe(ParseMe( blah )) or  AbstractMe(AbstractMe( blah )) and so on. Actually Option is kind of like a partial computation too. You can think of Some(x) and Some(Some(x)) as calculations in stasis, if you will, although the only thing that "remains" to be calculated is whether x exists or not. My goodness, it always does!

One might wax mathematical in the hope of achieving a higher state of monadicness. Perhaps, for example, we "understand" what MAP is doing because we root our intuition to pattern matching, kind of. If I tell you that g = (Some(x)=>Some(x*x+3)) you say "oh I get it, that is like x=>x*x+3 lifted up into Option[Double]". And if I tell you that h = (Some(Some(x))=>Some(Some(x*x+3))) you say "oh I get it, that is like x=>x*x+3 lifted all the way to Option[Option[Double]]". And you won't complain if I point out that g and h are the same, at least on some domain, because they are rooted in the same thing, the naked function x=>x*x+3. Perhaps the obviousness prevents us from seeing the top half of the second diagram independently of the bottom, however.

Footnote: The built-in flatMap

It may seem odd that we have discussed monads in both programming and mathematics and I made only passing reference to flatMap, the built-in "bind" for Scala collections (defined here and here for Traversable and Iterable traits respectively). For another time, I guess.

### Compositional models and their relation to Bayesian networks

Some time ago I noticed that Radim Jirousek published a foundational paper bringing together results on compositional models. A subclass of compositional models are an alternative formulation of Bayesian networks.

The basic idea goes as follows. Suppose $$\pi$$ is a joint distribution on variables $$x_1$$ and $$x_2$$ specified by the following table (see the linked paper page 620).

| $$\pi$$ | $$x_1=0$$ | $$x_1=1$$ |
|---|---|---|
| $$x_2=0$$ | $$\frac{1}{2}$$ | $$\frac{1}{2}$$ |
| $$x_2=1$$ | $$0$$ | $$0$$ |

and further suppose $$\nu$$ is the uniform distribution on variables $$x_2$$ and $$x_3$$:

| $$\nu$$ | $$x_2=0$$ | $$x_2=1$$ |
|---|---|---|
| $$x_3=0$$ | $$\frac{1}{4}$$ | $$\frac{1}{4}$$ |
| $$x_3=1$$ | $$\frac{1}{4}$$ | $$\frac{1}{4}$$ |

Then we define $$(\pi \rhd \nu)(x_1,x_2,x_3) = \frac{ \pi(x_1,x_2) \nu(x_2, x_3) } { \nu^{\downarrow(2)}(x_2) }$$ where the down arrow indicates a marginal distribution. This division may not always be defined. For example the composition $$\pi \rhd \nu(x)$$ is defined for all combinations of $$x_1$$,$$x_2$$ and $$x_3$$ whereas the composition $$\nu \rhd \pi(x)$$ is defined for only some, as in the table below which can be compared to that on page 630 of Jirousek's paper.

| $$x_1$$ | $$x_2$$ | $$x_3$$ | $$\pi \rhd \nu(x)$$ | $$\nu \rhd \pi(x)$$ |
|---|---|---|---|---|
| false | false | false | Some(0.25) | Some(0.125) |
| false | false | true | Some(0.25) | Some(0.125) |
| false | true | false | Some(0.0) | None |
| false | true | true | Some(0.0) | None |
| true | false | false | Some(0.25) | Some(0.125) |
| true | false | true | Some(0.25) | Some(0.125) |
| true | true | false | Some(0.0) | None |
| true | true | true | Some(0.0) | None |

Conveniently Scala allows function names like |> and <|, and the monadic representation of results. Here None means undefined, and a finer point about the implementation below is that I might have used a ternary operator to resolve 0*0/0=0, as per Jirousek's convention.

Anyway, here's the code
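The original listing didn't survive here, so what follows is a minimal sketch of the two compositions for this example (names mine), returning None on division by zero rather than applying the 0*0/0 = 0 convention:

```scala
// mass functions for the example above
def pi(x1: Boolean, x2: Boolean): Double = if (!x2) 0.5 else 0.0
def nu(x2: Boolean, x3: Boolean): Double = 0.25        // uniform on (x2, x3)

def piMarg(x2: Boolean): Double = pi(false, x2) + pi(true, x2)
def nuMarg(x2: Boolean): Double = nu(x2, false) + nu(x2, true)

// (pi |> nu)(x) = pi(x1,x2) nu(x2,x3) / nu marginalized onto x2
def piNu(x1: Boolean, x2: Boolean, x3: Boolean): Option[Double] =
  if (nuMarg(x2) > 0) Some(pi(x1, x2) * nu(x2, x3) / nuMarg(x2)) else None

// (nu |> pi)(x) = nu(x2,x3) pi(x1,x2) / pi marginalized onto x2
def nuPi(x1: Boolean, x2: Boolean, x3: Boolean): Option[Double] =
  if (piMarg(x2) > 0) Some(nu(x2, x3) * pi(x1, x2) / piMarg(x2)) else None

println(piNu(false, true, false)) // Some(0.0), as in the table
println(nuPi(false, true, false)) // None: pi's marginal vanishes at x2 = true
```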