Authors: Mohammed Awad
Abstract: This paper presents an enhanced method for reducing the computational complexity of function approximation problems by dividing the input data vectors into small groups, thereby mitigating the curse of dimensionality. The computational complexity and memory requirements of an approximation problem grow as the dimensionality of the input data increases. A divide-and-conquer algorithm is used to distribute the input data of the complex problem across divided radial basis function neural networks (Div-RBFNNs). Under this algorithm, the input variables are distributed among different RBFNNs depending on whether the number of input dimensions is odd or even. In this paper, each Div-RBFNN is executed independently, and the resulting outputs of all Div-RBFNNs are combined through a linear combination function. The parameters of each Div-RBFNN (centers, radii, and weights) are optimized using an efficient learning algorithm based on an enhanced clustering algorithm for function approximation, which clusters the centers of the RBFs. Compared to traditional RBFNNs, the proposed methodology reduces the number of executing parameters. It further outperforms traditional RBFNNs not only in execution time but also in the number of executing parameters of the system, yielding better approximation error.
Keywords: Approximation algorithms, computational complexity, divide-and-conquer algorithm, RBF neural networks
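To make the divide-and-conquer scheme in the abstract concrete, below is a minimal Python sketch of a Div-RBFNN-style model. It is a sketch under stated assumptions, not the authors' implementation: it assumes Gaussian RBFs, uses plain k-means as a stand-in for the paper's enhanced clustering algorithm, groups the input dimensions pairwise (with a final singleton group when the dimension count is odd), and fits both the per-network weights and the final linear combination by least squares. The class names `DivRBFNN` and `SubRBFNN` and all hyperparameters are illustrative.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means to place RBF centers (a stand-in for the paper's
    enhanced clustering algorithm, which is not reproduced here)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

class SubRBFNN:
    """One sub-network operating on a subset of the input dimensions."""
    def __init__(self, dims, n_rbf=5):
        self.dims, self.n_rbf = dims, n_rbf

    def _phi(self, X):
        # Gaussian RBF activations for the dimensions this network owns.
        d2 = ((X[:, None] - self.centers) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.radii ** 2))

    def fit(self, X, y):
        Xs = X[:, self.dims]
        self.centers = kmeans(Xs, self.n_rbf)
        # Heuristic radii: mean inter-center distance (an assumption).
        dists = np.linalg.norm(self.centers[:, None] - self.centers, axis=-1)
        self.radii = np.full(self.n_rbf, dists[dists > 0].mean() + 1e-8)
        # Output weights by linear least squares on the activations.
        self.w, *_ = np.linalg.lstsq(self._phi(Xs), y, rcond=None)
        return self

    def predict(self, X):
        return self._phi(X[:, self.dims]) @ self.w

class DivRBFNN:
    """Divide-and-conquer RBFNN: split the input variables into small
    groups, train one sub-network per group independently, and combine
    their outputs through a linear combination fit by least squares."""
    def __init__(self, n_dims, n_rbf=5):
        groups = [list(range(i, min(i + 2, n_dims)))
                  for i in range(0, n_dims, 2)]
        self.nets = [SubRBFNN(g, n_rbf) for g in groups]

    def fit(self, X, y):
        for net in self.nets:
            net.fit(X, y)                 # each sub-network trained independently
        H = np.column_stack([net.predict(X) for net in self.nets])
        self.alpha, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def predict(self, X):
        H = np.column_stack([net.predict(X) for net in self.nets])
        return H @ self.alpha
```

A short usage sketch on synthetic 4-D data (the target function is illustrative, not one of the paper's benchmarks):

```python
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (500, 4))
y = np.sin(np.pi * X).sum(axis=1)
model = DivRBFNN(n_dims=4, n_rbf=8).fit(X, y)
print(np.sqrt(np.mean((model.predict(X) - y) ** 2)))  # training RMSE
```

Note how the pairwise grouping keeps each sub-network low-dimensional, so the total parameter count grows with the number of groups rather than exponentially with the full input dimension.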