A key problem in modular neural networks is finding the optimal aggregation of the different subtasks (or modules) of the problem at hand. Functional networks provide a partial solution, since the inter-module topology is obtained from domain knowledge (functional relationships and symmetries). However, the learning process may be too restrictive in some situations, since the resulting modules (functional units) are assumed to be linear combinations of selected families of functions. In this paper, we present a non-parametric learning approach for functional networks that uses feedforward neural networks to approximate the functional modules of the resulting architecture; we also introduce a genetic algorithm for finding the optimal intra-module topology (an appropriate balance of neurons among the different modules, according to the complexity of their respective tasks). Benchmark examples from nonlinear time-series prediction illustrate the performance of the algorithm in finding optimal modular network architectures for specific problems.
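The genetic search over intra-module topologies can be sketched as follows. This is a minimal illustration, not the paper's implementation: a genome is a tuple of hidden-neuron counts, one per module, and the fitness function is a toy surrogate (rewarding a fixed hypothetical balance) standing in for the validation error obtained after actually training the modular network. The module count, neuron bound, and GA parameters are all assumed for the sketch.

```python
import random

random.seed(0)

N_MODULES = 3      # number of functional units (assumed for illustration)
MAX_NEURONS = 16   # per-module upper bound on hidden neurons (assumed)
POP_SIZE = 20
GENERATIONS = 30

def fitness(genome):
    """Surrogate for (negated) validation error of a network whose i-th
    module has genome[i] hidden neurons.  A real implementation would
    train the modular network and measure prediction error; here a toy
    stand-in rewards one hypothetical balance to keep the sketch
    self-contained and runnable."""
    target = (8, 4, 12)  # hypothetical best allocation
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome):
    # Shift one module's neuron count by +/-1, clipped to valid range.
    i = random.randrange(N_MODULES)
    g = list(genome)
    g[i] = max(1, min(MAX_NEURONS, g[i] + random.choice((-1, 1))))
    return tuple(g)

def crossover(a, b):
    # One-point crossover between two allocations.
    cut = random.randrange(1, N_MODULES)
    return a[:cut] + b[cut:]

def evolve():
    pop = [tuple(random.randint(1, MAX_NEURONS) for _ in range(N_MODULES))
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:POP_SIZE // 2]          # elitism: keep the best half
        children = [mutate(crossover(random.choice(elite),
                                     random.choice(elite)))
                    for _ in range(POP_SIZE - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

Elitism guarantees the best allocation found so far is never lost, so the search monotonically improves; in practice each fitness evaluation would be the expensive step (training and validating one candidate modular network).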