## Wednesday, October 29, 2014

### Some special cases of the Lambert W function

Suppose $$f(x)$$ is given by the functional relation $$e^{-a(x) f(x) + b(x)} = c(x) f(x) + d(x)$$ Then $$f(x) = \frac{1}{a(x)}W\left( \frac{a(x)}{c(x)} e^{b(x)+\frac{a(x)d(x)}{c(x)} } \right) - \frac{d(x)}{c(x)}$$ where $$W$$ is the Lambert W function. You can click through to Wikipedia for a proof. Here I merely list some special cases which every good citizen should instantly recognize (and which can be mildly tedious to reproduce).
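As a quick numerical sanity check of this closed form, with arbitrary constants standing in for $$a(x)$$, $$b(x)$$, $$c(x)$$, $$d(x)$$ and `scipy.special.lambertw` playing the role of $$W$$:

```python
import numpy as np
from scipy.special import lambertw

def solve_f(a, b, c, d):
    """Closed form for exp(-a f + b) = c f + d via the Lambert W function."""
    return lambertw((a / c) * np.exp(b + a * d / c)).real / a - d / c

a, b, c, d = 1.3, 0.2, 0.7, 0.4           # arbitrary illustrative constants
f = solve_f(a, b, c, d)
# f should satisfy the defining relation exp(-a f + b) = c f + d
assert abs(np.exp(-a * f + b) - (c * f + d)) < 1e-8
```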

### Recovering the definition of $$W(x)$$

Just to check we are on the right planet, set $$a=1$$, $$c=1$$, $$b=\ln(x)$$ and $$d=0$$ to get $$x e^{-f(x)} = f(x)$$ or equivalently $$f(x)e^{f(x)} = x$$ which is the definition of the Lambert W function. Sure enough, those substitutions recover $$f(x) = W(x)$$ as expected. A minor modification uses $$a(x)=1$$, $$c(x)=1$$, $$b(x)=0$$ but leaves $$d(x)$$ general.

### The solution to $$e^{-f(x)} - f(x) = d(x)$$

is $$f(x) = W \left( e^{d(x)} \right) - d(x)$$ Only slightly more generally, set $$a(x)=a$$, $$b=-\ln(g(x)), c=1, d=0$$ for those cases where $$g(x)$$ is on the wrong side, as it were.
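Numerically, with an arbitrary constant standing in for $$d(x)$$, one can check that $$W(e^d) - d$$ satisfies the relation:

```python
import numpy as np
from scipy.special import lambertw

d = 0.8                                   # arbitrary stand-in for d(x)
f = lambertw(np.exp(d)).real - d
# f should satisfy exp(-f) - f = d
assert abs(np.exp(-f) - f - d) < 1e-8
```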

### The solution to $$e^{-af(x)} = f(x) g(x)$$

or equivalently $$f(x) e^{af(x)} g(x) = 1$$ is $$f(x) = \frac{1}{a} W\left( \frac{a}{g(x)} \right)$$ In particular,

### The solution to $$x^k e^{af(x)} f(x) = 1$$

which seems to crop up a fair bit for your author is $$f(x) = \frac{1}{a} W\left( a x^{-k} \right)$$

Similarly, setting $$a=-1$$ and $$g(x)=x$$ ...

### The solution to $$x e^{-f(x)} f(x) = 1$$

(i.e. where $$x$$ is on the wrong side but we otherwise have the Lambert W definition) must be $$f(x) = - W\left( -\frac{1}{x} \right)$$ We might also take $$b = \ln(g(x))$$ and thus
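Two quick numerical checks of these last forms, $$f e^{af} x^k = 1$$ and $$x e^{-f} f = 1$$, with arbitrarily chosen constants:

```python
import numpy as np
from scipy.special import lambertw

# f e^{a f} x^k = 1 should have solution f = (1/a) W(a x^{-k})
a, k, x = 2.0, 3.0, 1.5                   # arbitrary illustrative constants
f = lambertw(a * x ** (-k)).real / a
assert abs(f * np.exp(a * f) * x ** k - 1.0) < 1e-8

# the a = -1, g(x) = x case: x e^{-f} f = 1 should have solution f = -W(-1/x)
x = 4.0                                   # need -1/x >= -1/e for a real principal branch
f = -lambertw(-1.0 / x).real
assert abs(x * np.exp(-f) * f - 1.0) < 1e-8
```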

### The solution to $$g(x) e^{-af(x)} = f(x)$$

or equivalently $$f(x)e^{af(x)} = g(x)$$ is $$f(x) = \frac{1}{a} W\left( a g(x) \right)$$ which reduces to the Lambert W function for $$a=1$$, as we expect. It is also pretty obvious from first principles, because if we multiply both sides by $$a$$ we have $$(af) e^{(af)} = ag$$ and thus $$af = W(ag)$$. Next suppose we want a power of $$f$$ to appear. Let $$b=\beta(x)/k$$, $$c = \gamma^{1/k}$$, $$a = \alpha/k$$ and $$d=0$$. And then raise both sides to the power $$k$$. It follows that...

### The solution to $$e^{-\alpha f(x) + \beta(x)} = \gamma f(x)^k$$

is $$f(x) = \frac{k}{\alpha}W\left( \frac{\alpha e^{\frac{1}{k}\beta(x)} }{k \gamma^{1/k}} \right)$$ and if, in particular, $$\beta(x) = -\ln(g(x))$$ then

### The solution to $$e^{-\alpha f(x)} = g(x) \gamma f(x)^k$$

is $$f(x) = \frac{k}{\alpha}W\left( \frac{\alpha}{k \left( \gamma g(x) \right)^{1/k}} \right)$$ and if we take $$\gamma = \frac{1}{c}$$ and $$g(x)=x$$ and $$\alpha = \frac{s}{2}$$ and $$k=2$$ then

### The solution to $$e^{\frac{s}{2}f(x)} x f(x)^2 = c$$

is $$f(x) = \frac{4}{s}W\left( \frac{s}{4} \sqrt{ \frac{c}{x}} \right)$$
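A final numerical check of this last case, verifying that $$f = \frac{4}{s}W\left(\frac{s}{4}\sqrt{c/x}\right)$$ satisfies $$e^{\frac{s}{2}f} x f^2 = c$$, with arbitrary constants:

```python
import numpy as np
from scipy.special import lambertw

s, x, c = 1.2, 2.0, 0.9                   # arbitrary illustrative constants
f = (4.0 / s) * lambertw((s / 4.0) * np.sqrt(c / x)).real
# f should satisfy exp(s f / 2) * x * f^2 = c
assert abs(np.exp(s * f / 2.0) * x * f ** 2 - c) < 1e-8
```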

## Wednesday, August 13, 2014

### A new use for the number 59575553 (ISDA shortens 20 character Legal Entity Identifiers to 10 character Unique Trade Identifier Prefixes)

The International Swaps and Derivatives Association (ISDA) faced a minor problem recently. In shortening their 20 character Legal Entity Identifiers (LEIs) into 10 character Unique Trade Identifier (UTI) prefixes they ran into collisions.

Somewhat randomly, this came across my desk. The proposal under consideration is an improvement for hashing LEIs into UTI prefixes: the modulus operation, lifted from integers to the space of case-insensitive alphanumeric strings.

You can grab the python code here. Of course it's hardly a new idea.
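For readers who just want the flavor, here is a minimal sketch of what lifting the modulus operation to case-insensitive alphanumeric strings can look like: read the string as a base-36 integer, reduce it modulo a fixed constant, and re-encode the remainder as 10 base-36 characters. The modulus and the sample LEI below are illustrative placeholders only, not the values from the actual proposal; the linked code has the real details.

```python
import string

ALPHABET = string.digits + string.ascii_uppercase   # base-36 digits, case insensitive

def str_to_int(s):
    """Read a case-insensitive alphanumeric string as a base-36 integer."""
    n = 0
    for ch in s.upper():
        n = n * 36 + ALPHABET.index(ch)
    return n

def int_to_str(n, width):
    """Encode a non-negative integer in base 36, zero-padded to the given width."""
    digits = []
    for _ in range(width):
        n, r = divmod(n, 36)
        digits.append(ALPHABET[r])
    return ''.join(reversed(digits))

# Illustrative modulus only: any fixed constant below 36**10 keeps the result at
# ten base-36 digits. The actual proposal's choice may differ.
MODULUS = 36 ** 10 - 17

def uti_prefix(lei):
    """Hash a 20 character LEI down to a 10 character UTI prefix."""
    return int_to_str(str_to_int(lei) % MODULUS, 10)

prefix = uti_prefix('EXAMPLE0LEI000000042')   # hypothetical LEI, not a real one
```

Note that because the string is upper-cased before hashing, `uti_prefix` is insensitive to the case of its input, which is the point of working in a case-insensitive alphabet.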

## Wednesday, May 28, 2014

### Sensitivities of a Kalman Filter estimate with respect to all past observations

Consider a sequence of times $$t_1,...,t_k$$ at which a latent vector process $$x$$ is observed indirectly, via an observation equation $$y_{t_i} = H_i x_{t_i} + \epsilon_i$$ We assume $$\epsilon_i$$ is mean zero multivariate gaussian with covariance $$R_i$$. For brevity we refer to $$y_{t_i}$$ as $$y_i$$, $$x_{t_i}$$ as $$x_i$$ and so forth. We assume the evolution of $$x$$ in between the times specified can be written $$x_{i+1} = A_i x_i + u_i$$ where the $$u_i$$ are also mean zero gaussian. In this linear gaussian system the recursive estimation of $$x_t$$ is achieved by the well known Kalman filter, and the contemporaneous impact of the next observation $$y_{k+1}$$ is also easily computed (it is merely proportional to the Kalman gain).

But less well appreciated is a related computation, the derivatives of the Kalman filter estimate with respect to a past observation $$y_i$$. This note establishes how said computation can be achieved by making two observations. The first is a re-representation of the Kalman estimate in the form of a weighted least squares problem (not dissimilar to the Duncan-Horn representation). The second observation is that sensitivities of any weighted least squares problem can be computed using an adjoint trick.

Step 1: The Kalman filter solution as a (particular) least squares problem

We shall set up a least squares problem involving the current state $$x_k$$ only, and all the previous observations. We argue that the solution to this problem is identical to the Kalman filter. Since the current estimate $$\hat{y}_k$$ is a simple linear function of the current state $$x_k$$, this allows us to compute the derivative of the current estimate with respect to all previous observations.

In the Kalman filter we assume a gaussian prior on the initial state $$x_0$$. This can introduce annoying special cases in what follows, but we can clean up the notation instead by introducing: \begin{eqnarray} y_{-1} & = & H_{-1} x_{-1} + \epsilon_{-1} \\ x_0 & = & A_{-1} x_{-1} + u_{-1} \end{eqnarray} provided $$H_{-1}$$ and $$A_{-1}$$ are identity matrices, $$\epsilon_{-1}$$ is identically zero, $$y_{-1}$$ is set equal to the mean of our prior and $$u_{-1}$$ adopts its covariance. With the boundary conditions cleaned up in this fashion we can invert the dynamical equations, assuming only that $$A$$'s have left inverses $$A^{-1}$$, as follows: $$x_j = A^{-1}_{j}\left( x_{j+1} - u_j \right)$$ and then re-arrange the observation equations so that the only value of $$x_i$$ that appears is $$x_k$$. \begin{eqnarray} y_k & = & H_k x_{k} + \epsilon_k \\ y_{k-1} & = & H_{k-1} x_{k-1} + \epsilon_{k-1} \\ & = & H_{k-1} \left( A^{-1}_{k-1}\left( x_{k} - u_{k-1} \right) \right) + \epsilon_{k-1} \\ & = & H_{k-1} A^{-1}_{k-1} x_{k} - H_{k-1} A^{-1}_{k-1} u_{k-1} + \epsilon_{k-1} \\ y_{k-2} & = & H_{k-2} x_{k-2} + \epsilon_{k-2} \\ & = & H_{k-2} \left( A^{-1}_{k-2}\left( x_{k-1} - u_{k-2} \right) \right) + \epsilon_{k-2} \\ & = & H_{k-2} A^{-1}_{k-2} x_{k-1} - H_{k-2} A^{-1}_{k-2} u_{k-2} + \epsilon_{k-2} \\ & = & H_{k-2} A^{-1}_{k-2} \left( A^{-1}_{k-1}\left( x_{k} - u_{k-1} \right) \right) - H_{k-2} A^{-1}_{k-2} u_{k-2} + \epsilon_{k-2} \\ & = & H_{k-2} A^{-1}_{k-2} A^{-1}_{k-1} x_{k} - H_{k-2} A^{-1}_{k-2} A^{-1}_{k-1} u_{k-1} - H_{k-2} A^{-1}_{k-2} u_{k-2} + \epsilon_{k-2} \\ & \dots & \end{eqnarray} from which it is apparent that if we write $$Y = (y_k, y_{k-1}, y_{k-2},...,y_{-1} )$$ then $$Y = G x_{k} + \eta$$ where $$G$$ is the concatenation of the coefficients of $$x_k$$ given above and $$\eta$$ is the gaussian random variable equal to the sum of the $$u_i$$'s and $$\epsilon_i$$'s (again, with coefficients as above, leading to a non-trivial covariance structure).
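The equivalence claimed in Step 1 is easy to check numerically in a small scalar example (the constants below are arbitrary illustrations, not anything from this note): run the Kalman filter for the mean of the last state, then build $$Y = G x_k + \eta$$ as above, prior pseudo-observation included, and compare the generalized least squares answer.

```python
import numpy as np

# Scalar model: x_{j+1} = A x_j + u_j, u_j ~ N(0,Q); y_j = H x_j + eps_j, eps_j ~ N(0,R);
# Gaussian prior N(m0, P0) on the first observed state. Constants are illustrative.
A, H, Q, R = 0.9, 1.0, 0.3, 0.5
m0, P0 = 1.0, 2.0
N = 6                                        # observations y_1, ..., y_N

rng = np.random.default_rng(7)
xs = [rng.normal(m0, np.sqrt(P0))]
for _ in range(N - 1):
    xs.append(A * xs[-1] + rng.normal(0.0, np.sqrt(Q)))
ys = [H * xj + rng.normal(0.0, np.sqrt(R)) for xj in xs]

# Kalman filter estimate of the last state x_N given y_1..y_N.
m, P = m0, P0
for j, y in enumerate(ys):
    K = P * H / (H * P * H + R)              # Kalman gain
    m, P = m + K * (y - H * m), (1.0 - K * H) * P
    if j < N - 1:
        m, P = A * m, A * P * A + Q          # predict to the next observation time

# Stacked representation Y = G x_N + eta, with the prior as a pseudo-observation
# (row 0) and eta a linear combination of the basic noises
# u_0 ~ N(0,P0), u_1..u_{N-1} ~ N(0,Q), eps_1..eps_N ~ N(0,R).
variances = np.array([P0] + [Q] * (N - 1) + [R] * N)
Y = np.array([m0] + ys)
G = np.zeros(N + 1)
L = np.zeros((N + 1, 2 * N))                 # loadings of each row on the basic noises
for row in range(N + 1):
    j = max(row, 1)                          # the pseudo-row observes x_1 with noise -u_0
    h = H if row > 0 else 1.0
    G[row] = h * A ** (j - N)                # coefficient of x_N after back-substitution
    for mm in range(j, N):
        L[row, mm] = -h * A ** (j - mm - 1)  # u_mm picked up while inverting the dynamics
    if row > 0:
        L[row, N - 1 + row] = 1.0            # eps_row
    else:
        L[row, 0] = -1.0                     # the pseudo-observation noise is -u_0
Sigma = L @ np.diag(variances) @ L.T         # covariance of eta
W = np.linalg.inv(Sigma)
x_gls = float(G @ W @ Y) / float(G @ W @ G)  # generalized least squares for x_N

assert abs(x_gls - m) < 1e-6
```

The generalized least squares estimate agrees with the Kalman filter estimate up to floating point roundoff, which is the content of Step 1.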

Step 2: A review of the adjoint trick

Suppose $$x$$ solves $$Qx = b(y)$$. The adjoint trick can be used to compute the derivative of $$g(x)$$ with respect to $$y$$. In particular, if $$y$$ is the observation and $$x$$ the solution of a generalized least squares problem with error covariance $$R$$ we can cast it in this form by writing: \begin{eqnarray} g(x) & = & H x \\ Q & = & H^T R^{-1} H \\ b(y) & = & H^T R^{-1} y \end{eqnarray} Consider now $$f(x,y) = 0$$ where $$f(x,y) = Q x - b(y)$$. We use derivatives of $$\tilde{g} = g - \lambda^T f(x,y)$$ with respect to $$y$$ as a means of computing derivatives of $$g$$ with respect to $$y$$. Note that $$\frac{\partial \tilde{g}}{\partial y} = \frac{\partial g}{\partial x}\frac{\partial x}{\partial y} - \lambda^T \left( \frac{ \partial f }{\partial x }\frac{\partial x}{\partial y} + \frac{\partial f}{\partial y} \right)$$ and this will simplify if we choose $$\lambda$$ judiciously as a solution of $$\frac{\partial g}{\partial x } = \lambda^T \frac{\partial f}{\partial x}$$ which we call the adjoint equation. For then \begin{eqnarray} \frac{\partial \tilde{g}}{\partial y} & = & \frac{\partial g}{\partial x}\frac{\partial x}{\partial y} - \lambda^T \left( \frac{ \partial f }{\partial x }\frac{\partial x}{\partial y} + \frac{\partial f}{\partial y} \right) \\ & = & -\lambda^T \frac{\partial f}{\partial y} \\ & = & \lambda^T \frac{\partial b}{\partial y} \end{eqnarray} Now specializing to $$g(x) = H x$$ and $$b(y)$$ as above we can solve for this convenient choice of $$\lambda$$ by writing \begin{eqnarray} H & = & \frac{\partial g}{\partial x} \\ & = & \lambda^T \frac{\partial f}{\partial x} \\ & = & \lambda^T Q \\ & = & \lambda^T H^T R^{-1} H \end{eqnarray} where the second equality is the adjoint equation. It should be clear from this how to compute derivatives of $$\tilde{g}$$ with respect to $$y$$, and thereby compute derivatives of $$g$$ with respect to $$y$$.
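Here is the adjoint computation in code for a generic weighted least squares problem (the dimensions and matrices below are arbitrary illustrations), checked against finite differences:

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_state = 5, 3                      # illustrative dimensions
H = rng.normal(size=(n_obs, n_state))
R = np.diag(rng.uniform(0.5, 2.0, size=n_obs))   # observation error covariance
y = rng.normal(size=n_obs)

Rinv = np.linalg.inv(R)
Q = H.T @ Rinv @ H                         # Q = H^T R^{-1} H
b = H.T @ Rinv @ y                         # b(y) = H^T R^{-1} y
x = np.linalg.solve(Q, b)                  # x solves Q x = b(y)
g = H @ x                                  # g(x) = H x, the fitted values

# Adjoint equation: lambda^T Q = dg/dx = H. Since Q is symmetric, lambda = Q^{-1} H^T.
lam = np.linalg.solve(Q, H.T)
# Sensitivities: dg/dy = lambda^T db/dy = lambda^T H^T R^{-1}
dg_dy = lam.T @ H.T @ Rinv

# Finite-difference check (the map y -> g is linear, so this is essentially exact).
eps = 1e-6
fd = np.zeros((n_obs, n_obs))
for i in range(n_obs):
    yp = y.copy()
    yp[i] += eps
    fd[:, i] = (H @ np.linalg.solve(Q, H.T @ Rinv @ yp) - g) / eps
assert np.max(np.abs(fd - dg_dy)) < 1e-4
```

The same pattern applies to the stacked Kalman problem of Step 1: replace $$H$$ and $$R$$ with the stacked $$G$$ and the covariance of $$\eta$$, and the rows of $$\lambda^T \partial b/\partial y$$ give the sensitivities of the current estimate to every past observation.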