
Variance of a function

We still need the variances $v_j$. Since each error $\tilde{\theta}_i$ is uniformly distributed on the interval $[-\epsilon_{\theta_i}/2, \epsilon_{\theta_i}/2]$, the variances of the parameters are given by $v_i = \epsilon_{\theta_i}^2/12$. The variances of the inputs and desired outputs can be assumed to be zero, or they can be assigned nonzero values if there is prior knowledge about, for example, the equipment used to measure them.
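As a quick numerical check (a sketch; the value of the error width and the sample count are illustrative, not from the text), the variance $\epsilon^2/12$ of a uniform error on $[-\epsilon/2, \epsilon/2]$ can be verified by sampling:

```python
import random

# Sketch: empirically check that an error uniformly distributed on
# [-eps/2, eps/2] has variance eps^2 / 12 (eps chosen for illustration).
random.seed(0)
eps = 0.1
n = 200_000
samples = [random.uniform(-eps / 2, eps / 2) for _ in range(n)]
mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n

print(var)           # close to eps**2 / 12
print(eps**2 / 12)   # 0.000833...
```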

In order to compute the variance of the function $f_i$, we approximate it with a first-order Taylor series expansion.
\begin{displaymath}
\xi_i = f_i(\xi_j \vert j \in \mathcal{J}_i) \approx f_i(\mu_j \vert j \in \mathcal{J}_i) + \sum_{j \in \mathcal{J}_i} \frac{\partial f_i}{\partial \xi_j} (\xi_j - \mu_j)
\end{displaymath} (7)
According to this approximation, $\mu_i = E\{\xi_i\} \approx f_i(\mu_j
\vert j \in \mathcal{J}_i)$, which yields
\begin{multline}
v_i = E\{(\xi_i-\mu_i)^2\} \approx \\
\sum_{j \in \mathcal{J}_i} \sum_{k \in \mathcal{J}_i} \frac{\partial f_i}{\partial \xi_j} \frac{\partial f_i}{\partial \xi_k} E\{(\xi_j - \mu_j) (\xi_k - \mu_k)\}
\end{multline}
Again, we can drop the cross terms if all the $\xi_j$ are mutually uncorrelated. This yields our final approximation for the variance of a function:
\begin{displaymath}
v_i \approx \sum_{j \in \mathcal{J}_i} \left( \frac{\partial f_i}{\partial \xi_j} \right)^2 v_j
\end{displaymath} (8)
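Approximation (8) can be sketched in code using central-difference partial derivatives (a minimal illustration under the assumption of mutually uncorrelated inputs; the function name `propagate_variance` and the example values are assumptions, not part of the original text):

```python
def propagate_variance(f, mu, v, h=1e-6):
    """First-order variance propagation as in Eq. (8):
    v_i ~= sum_j (df/dxi_j)^2 * v_j, with the partial derivatives
    evaluated at the means mu by central differences.
    Assumes the inputs xi_j are mutually uncorrelated."""
    var = 0.0
    for j in range(len(mu)):
        up = list(mu); up[j] += h
        dn = list(mu); dn[j] -= h
        d = (f(up) - f(dn)) / (2 * h)   # numerical partial df/dxi_j at mu
        var += d * d * v[j]
    return var

# Example: f(x, y) = x * y with small parameter variances.
mu = [2.0, 3.0]
v = [0.01, 0.04]
result = propagate_variance(lambda x: x[0] * x[1], mu, v)
print(result)  # ~ y^2 * v_x + x^2 * v_y = 9*0.01 + 4*0.04 = 0.25
```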
Notice that when computing the variance, one cannot mix the first- and second-order approximations for $\mu_i$, since that might result in negative values of $v_i$.
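To see why mixing orders is dangerous, consider a small sketch (the function and numbers are illustrative, not from the original): for $f(\xi)=\xi^2$, a second-order mean gives $\mu_i = \mu^2 + v$, while a first-order estimate of $E\{\xi_i^2\}$ is only $f(\mu)^2 = \mu^4$; combining them as $E\{\xi_i^2\} - \mu_i^2$ goes negative, whereas the consistent first-order formula (8) stays non-negative.

```python
# Illustrative only: mixing approximation orders for f(xi) = xi^2.
mu, v = 1.0, 0.04

# Second-order mean (exact for this quadratic f): E{f} = mu^2 + v.
mean_2nd = mu**2 + v
# First-order estimate of E{f^2}: just f(mu)^2.
Ef2_1st = (mu**2) ** 2
# Mixing the two orders yields a negative "variance".
v_mixed = Ef2_1st - mean_2nd**2
print(v_mixed)  # negative!

# The consistent first-order formula (8) stays non-negative:
v_first = (2 * mu) ** 2 * v
print(v_first)  # 0.16
```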



Harri Lappalainen
5/19/1998