Neural Networks

Neural Networks (NNs) have been extensively employed in digital signal processing (DSP) applications (nonlinear system identification, time series prediction, pattern matching and recognition, just to name a few), and the literature offers a wide range of neural architectures and learning algorithms.
Regarding the former, the primary distinction is between recurrent and feedforward neural networks (RNNs and FFNNs), based on the presence or absence of loops, typically due to feedback connections. Recurrent structures allow representing context information and therefore address a more general class of tasks than their feedforward counterparts, even though their training is usually more difficult.
Concerning the learning strategies, we can distinguish between supervised and unsupervised techniques, depending on the availability of target signals during the parameter adaptation phase. An important class of algorithms relies on gradient information related to a cost function, which typically measures how far the network is from optimal performance. Such algorithms are easy to implement but also suffer from relevant drawbacks, such as entrapment in local minima and the vanishing gradient effect (in the case of recurrent network adaptation). A generic template of this gradient-based adaptation is sketched below.
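
As an illustrative aside, the following Python fragment shows the generic template such gradient-based algorithms follow; all names and values here are illustrative, not taken from any specific A3LAB publication.

    import numpy as np

    # Generic gradient-descent template: the parameters theta are repeatedly
    # nudged against the gradient of a cost J(theta) measuring the distance
    # from the desired behaviour.
    def gradient_descent(grad_J, theta0, lr=0.5, steps=100):
        theta = np.asarray(theta0, dtype=float)
        for _ in range(steps):
            theta -= lr * grad_J(theta)  # move downhill on the cost surface
        return theta

    # Toy usage: fit y = w*x in the least-squares sense, J = mean((w*x - y)^2)
    x = np.linspace(0.0, 1.0, 50)
    y = 3.0 * x
    w = gradient_descent(lambda w: np.mean(2.0 * (w * x - y) * x), [0.0])  # -> about 3.0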

This is the scenario in which the A3LAB research work is situated; the main issues addressed are the following:

  • Time Variant Neural Network (TVNN). Leaving RNNs aside (they are often difficult to train and prone to stability problems, see below), a common FFNN is basically able to learn the statistical properties of the system to identify only when the working environment is time-invariant. Unfortunately, the signals we usually deal with are non-stationary, and classic neural structures and learning methods do not suffice to obtain suitable identification performance. That is why some researchers at A3LAB have introduced time-variant neural models, as an attempt to overcome the limited non-stationarity tracking capability of common approaches. The basic idea consists in using synaptic weights that are linear combinations of basis time functions (see the first sketch after this list).
  • Extreme Learning Machine (ELM). An interesting training approach, namely the Extreme Learning Machine, has recently been proposed in the NN literature. The most important peculiarity of standard ELM is that only the NN output weights are adapted, while the weights in the hidden layer remain unchanged, allowing a considerable reduction of training time w.r.t. common learning algorithms like backpropagation (BP). A3LAB is actively working in this field and is particularly interested in applying such an approach to TVNN learning (see the second sketch after this list).
  • Alternative recurrent architectures and Echo State Networks (ESNs). The vanishing gradient problem occurs whenever we use RNNs to obtain information latching and want to train them with gradient-based learning algorithms. Several solutions have been proposed to mitigate this effect: some methods use alternative learning algorithms, others special architectures (additional memory, for example), or combinations of the two. In particular, Echo State Networks have proved to have interesting properties from this perspective, and some studies are currently ongoing to achieve a better understanding of this issue, especially for applications in real-world tasks (see the third sketch after this list).
  • Signal Flow Graph (SFG) approach for learning. SFGs have been employed in several fields to describe complex systems in the discrete-time domain. Moreover, new algorithms for gradient computation in gradient-based linear and nonlinear adaptive systems have been derived by means of SFG techniques, able to take into account also variations of the internal parameters of the original graph at previous time steps. This allows deriving gradient information for dynamical nonlinear system learning through a suitable adjoint network, and therefore in a completely automatic way (see the fourth sketch after this list).
  • NN applications to real-world problems. Neural Networks have many applications in engineering, and in particular in the DSP research area. System identification and pattern classification are the tasks typically addressed by A3LAB members in their projects, with special attention (but not exclusively) to the speech/audio processing field.
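
A minimal sketch of the TVNN idea from the first item, under the assumption of a single synapse trained by an LMS-style rule; the Fourier basis and all names are illustrative choices, not the specific basis used in the A3LAB papers.

    import numpy as np

    # Time-variant synapse: the weight at time step n is a linear combination
    # of K fixed basis time functions f_k(n); only the coefficients a_k are
    # adapted. A Fourier basis is used here purely for illustration.
    def fourier_basis(n, K, period=200):
        return np.array([1.0] + [np.sin(2 * np.pi * k * n / period)
                                 for k in range(1, K)])

    class TimeVariantSynapse:
        def __init__(self, K=4, lr=0.01):
            self.a = np.zeros(K)   # trainable basis-expansion coefficients
            self.lr = lr

        def weight(self, n):
            return self.a @ fourier_basis(n, len(self.a))  # w(n) = sum_k a_k f_k(n)

        def step(self, x, d, n):
            # LMS-style update: dJ/da_k = -e * x * f_k(n) for J = 0.5 * e^2
            y = self.weight(n) * x
            e = d - y
            self.a += self.lr * e * x * fourier_basis(n, len(self.a))
            return y, e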
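
The second sketch shows a standard single-hidden-layer ELM: hidden weights are drawn at random and frozen, and the output weights are obtained in closed form by least squares, which is where the speed advantage over backpropagation comes from. Layer sizes and the tanh activation are assumptions of this sketch.

    import numpy as np

    # Standard single-hidden-layer ELM: random, frozen hidden weights; the
    # output weights beta are the least-squares solution of H @ beta = T.
    def elm_train(X, T, n_hidden=100, rng=np.random.default_rng(0)):
        W = rng.normal(size=(X.shape[1], n_hidden))   # fixed input-to-hidden weights
        b = rng.normal(size=n_hidden)                 # fixed hidden biases
        H = np.tanh(X @ W + b)                        # hidden-layer activations
        beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # closed-form output weights
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta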
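
For the third item, here is a bare-bones Echo State Network: the recurrent reservoir is random and fixed, its spectral radius rescaled below one (a common recipe aimed at the echo state property), and only the linear readout is trained, here by ridge regression. Reservoir size and scaling values are illustrative.

    import numpy as np

    # Bare-bones ESN for a single-input task: a fixed random reservoir with
    # spectral radius rescaled to rho < 1, and a ridge-regression readout.
    def esn_fit(u, d, n_res=200, rho=0.9, ridge=1e-6, rng=np.random.default_rng(0)):
        Win = rng.uniform(-0.5, 0.5, size=n_res)
        W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
        W *= rho / max(abs(np.linalg.eigvals(W)))     # rescale spectral radius
        x, states = np.zeros(n_res), []
        for u_n in u:                                 # drive the reservoir
            x = np.tanh(Win * u_n + W @ x)
            states.append(x.copy())
        S = np.array(states)
        # readout: Wout = (S^T S + ridge*I)^(-1) S^T d; prediction is S @ Wout
        Wout = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ d)
        return Win, W, Wout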
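
Finally, to give the flavour of the fourth item, the toy below propagates the output error through the transposed (adjoint) graph of a first-order IIR section to obtain the gradient of a squared-error cost w.r.t. the filter coefficients. It is a hand-derived special case, not the general SFG algorithm from the papers.

    import numpy as np

    # Forward pass of a first-order IIR section y[n] = a*y[n-1] + b*x[n].
    def iir_forward(x, a, b):
        y = np.zeros(len(x))
        for n in range(len(x)):
            y[n] = a * (y[n - 1] if n > 0 else 0.0) + b * x[n]
        return y

    # Gradient of J = 0.5*sum((y-d)^2) w.r.t. a and b via the adjoint graph:
    # the error is propagated through the transposed system, which runs
    # backwards in time (the delay direction is reversed).
    def iir_gradients(x, d, a, b):
        y = iir_forward(x, a, b)
        e = y - d                         # dJ/dy[n], local term
        lam, grad_a, grad_b = 0.0, 0.0, 0.0
        for n in reversed(range(len(x))):
            lam = e[n] + a * lam          # adjoint state: total dJ/dy[n]
            grad_a += lam * (y[n - 1] if n > 0 else 0.0)
            grad_b += lam * x[n]
        return grad_a, grad_b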

Related publications

Ferroni, G., Bonfigli, R., Principi, E., Squartini, S., Piazza, F. (2015), "A Deep Neural Network approach for Voice Activity Detection in Multi-Room Domestic Scenarios", International Joint Conference on Neural Networks (IJCNN), to appear.

Ferroni, G., Bonfigli, R., Principi, E., Squartini, S., Piazza, F. (2014), "Neural Networks Based Methods for Voice Activity Detection in a Multi-room Domestic Environment", 13th AI*IA Symposium on Artificial Intelligence.

Cingolani, C., Squartini, S., Piazza, F. (2008), "An extreme learning machine approach for training Time Variant Neural Networks", IEEE Asia Pacific Conference on Circuits and Systems (APCCAS 2008), pp. 384-387.

Squartini, S., Cecchi, S., Rossini, M., Piazza, F. (2007), "Echo State Networks for Real-Time Audio Applications", Advances in Neural Networks - ISNN 2007.

Squartini, S., Paolinelli, S., Piazza, F. (2006), "Comparing Different Recurrent Neural Architectures on a Specific Task from Vanishing Gradient Effect Perspective", Proceedings of the 2006 IEEE International Conference on Networking, Sensing and Control (ICNSC '06), pp. 380-385.

Titti, A., Squartini, S., Piazza, F. (2005), "A new time-variant neural based approach for nonstationary and non-linear system identification", Proceedings of the 2005 IEEE International Symposium on Circuits and Systems (ISCAS), vol. 5, pp. 5134-5137.

Arcangeli, A., Squartini, S., Piazza, F. (2004), "Calculation of non-mixed second derivatives in multirate systems through signal flow graph techniques", Proceedings of the 2004 International Symposium on Circuits and Systems (ISCAS '04).

Pomponi, E., Squartini, S., Piazza, F. (2004), "An approach employing signal sparse representation in wavelet domain for underdetermined blind source separation", Proceedings of the 2004 IEEE International Joint Conference on Neural Networks, vol. 3, pp. 2099-2104.

Squartini, S., Hussain, A., Piazza, F. (2003), "Attempting to reduce the vanishing gradient effect through a novel recurrent multiscale architecture", Proceedings of the 2003 International Joint Conference on Neural Networks, vol. 4, pp. 2819-2824.

Squartini, S., Hussain, A., Piazza, F. (2003), "Preprocessing based solution for the vanishing gradient problem in recurrent neural networks", Proceedings of the 2003 International Symposium on Circuits and Systems (ISCAS '03), vol. 5, pp. 713-716.

Squartini, S., Hussain, A., Piazza, F. (2003), "A recurrent multiscale architecture for long-term memory prediction task", Proceedings of the 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03), vol. 2, pp. 789-792.