
Noncompact uniform universal approximation.


Abstract

The universal approximation theorem is generalised to uniform convergence on the (noncompact) input space $\mathbb{R}^n$. All continuous functions that vanish at infinity can be uniformly approximated by neural networks with one hidden layer, for all activation functions $\varphi$ that are continuous, nonpolynomial, and asymptotically polynomial at $\pm\infty$. When $\varphi$ is moreover bounded, we exactly determine which functions can be uniformly approximated by neural networks, with the following unexpected results. Let $\overline{\mathcal{N}^{l}_{\varphi}(\mathbb{R}^n)}$ denote the vector space of functions that are uniformly approximable by neural networks with $l$ hidden layers and $n$ inputs. For all $n$ and all $l \ge 2$, $\overline{\mathcal{N}^{l}_{\varphi}(\mathbb{R}^n)}$ turns out to be an algebra under the pointwise product. If the left limit of $\varphi$ differs from its right limit (for instance, when $\varphi$ is sigmoidal), the algebra $\overline{\mathcal{N}^{l}_{\varphi}(\mathbb{R}^n)}$ ($l \ge 2$) is independent of $\varphi$ and $l$, and equals the closed span of products of sigmoids composed with one-dimensional projections. If the left limit of $\varphi$ equals its right limit, $\overline{\mathcal{N}^{l}_{\varphi}(\mathbb{R}^n)}$ ($l \ge 1$) equals the (real part of the) commutative resolvent algebra, a C*-algebra which is used in mathematical approaches to quantum theory. In the latter case, the algebra is independent of $l \ge 1$, whereas in the former case $\overline{\mathcal{N}^{2}_{\varphi}(\mathbb{R}^n)}$ is strictly bigger than $\overline{\mathcal{N}^{1}_{\varphi}(\mathbb{R}^n)}$.

Copyright © 2024. Published by Elsevier Ltd.
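As a reading aid for the abstract above, the following sketch spells out the notation as we understand it; the weights $w_i$, biases $b_i$, and coefficients $c_i$ are illustrative, and the precise definitions are those of the paper. A one-hidden-layer network with activation $\varphi$ and $n$ inputs is a finite sum
\[
  x \;\mapsto\; \sum_{i=1}^{N} c_i\, \varphi(w_i \cdot x + b_i),
  \qquad w_i \in \mathbb{R}^n,\ b_i, c_i \in \mathbb{R},
\]
so that
\[
  \mathcal{N}^{1}_{\varphi}(\mathbb{R}^n)
  \;=\; \operatorname{span}\bigl\{\, x \mapsto \varphi(w \cdot x + b) \;:\; w \in \mathbb{R}^n,\ b \in \mathbb{R} \,\bigr\},
\]
and $\mathcal{N}^{l}_{\varphi}(\mathbb{R}^n)$ is the analogous space for networks with $l$ hidden layers. The bar in $\overline{\mathcal{N}^{l}_{\varphi}(\mathbb{R}^n)}$ denotes closure in the supremum norm $\|f\|_{\infty} = \sup_{x \in \mathbb{R}^n} |f(x)|$, i.e. uniform convergence on all of $\mathbb{R}^n$ rather than merely on compact subsets. In this notation the first result reads
\[
  C_0(\mathbb{R}^n) \;\subseteq\; \overline{\mathcal{N}^{1}_{\varphi}(\mathbb{R}^n)}
\]
for every activation $\varphi$ that is continuous, nonpolynomial, and asymptotically polynomial at $\pm\infty$, where $C_0(\mathbb{R}^n)$ is the space of continuous functions vanishing at infinity.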
