
Application to General ICA Algorithms

The procedure for faster convergence derived in the previous sections is a general approach, and FastICA was seen to be a special case. Since the faster convergence was achieved by comparing the re-estimation step to Gaussian noise removal, the approach is valid in any situation where the general noisy ICA model holds with Gaussian noise and linear mixtures. It is not required that the E-step use the approximation $\hat{\mathbf{s}} \approx \mathbf{s}_0 + \sigma^2 f(\mathbf{s}_0)$; instead, it can be any method that computes $\hat{\mathbf{s}}$ from $\mathbf{s}_0$. Denote this estimation by

\begin{displaymath}\hat{\mathbf{s}} = \mathbf{g}(\mathbf{s}_0) \, . \end{displaymath}

Then it is always possible to replace the source with the Gaussianized source $\mathbf{s}_{0G}$ and obtain

\begin{displaymath}\hat{\mathbf{s}}_G = \mathbf{g}(\mathbf{s}_{0G}) \, . \end{displaymath}

Having estimated these two sets of sources, we can apply any method whatsoever to estimate the mixing matrix from each set. This gives two new estimates, $\hat{\mathbf{a}}$ and $\hat{\mathbf{a}}_G$, of the columns of the mixing matrix. The final estimate is obtained as the normalized difference, as above.
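The whole update can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's exact algorithm: the estimator `g`, its Gaussian counterpart `g_gauss`, and the least-squares re-estimation of the mixing vector are placeholder choices standing in for whatever method is used in practice.

```python
import numpy as np

def update_mixing(x, w, g, g_gauss):
    """One stabilized update of a mixing vector (illustrative sketch).

    x       : (n_mixtures, n_samples) observed data
    w       : current demixing vector, shape (n_mixtures,)
    g       : estimator mapping the raw source estimate s0 to s_hat
    g_gauss : the same estimator evaluated as if the source were
              Gaussian (hypothetical choice here: linear shrinkage)
    """
    s0 = w @ x                       # raw source estimate s0
    s_hat = g(s0)                    # re-estimated source
    s_hat_G = g_gauss(s0)            # Gaussianized re-estimate
    # Re-estimate the mixing vector from each source set
    # (least squares, one of many possible methods).
    a_hat = x @ s_hat / (s_hat @ s_hat)
    a_hat_G = x @ s_hat_G / (s_hat_G @ s_hat_G)
    d = a_hat - a_hat_G              # difference of the two estimates
    return d / np.linalg.norm(d)     # normalize to unit length

# Usage with placeholder choices for g and g_gauss:
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 1000))
w0 = np.array([1.0, 0.0, 0.0])
a_new = update_mixing(x, w0, np.tanh, lambda s: 0.5 * s)
```

Taking the normalized difference $\hat{\mathbf{a}} - \hat{\mathbf{a}}_G$ is what removes the Gaussian part of the update, which is the source of the speed-up discussed in the previous sections.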

Harri Lappalainen