A trained neural network can be understood as a data structure. The weights store a compressed representation of what the network has learned, not as explicit data like vertices or pixels, but as a parameterized function. You query it with an input and it produces an output. A 3D surface, in this form, is a network that maps coordinates to quantities like signed distance or occupancy, rather than a fixed mesh.
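The query interface can be made concrete with a minimal sketch. Here an analytic signed-distance function for a unit sphere stands in for a trained network; a neural SDF would expose the same interface, point in, distance out, with the geometry encoded in learned weights rather than a closed-form expression.

```python
import numpy as np

# A surface as a queryable function: an analytic signed-distance
# function (SDF) for a unit sphere stands in for a trained network.
# Negative inside, zero on the surface, positive outside.
def sdf_sphere(points, radius=1.0):
    points = np.atleast_2d(points)
    return np.linalg.norm(points, axis=-1) - radius

# Query it like a data structure: no mesh, no vertex list.
print(sdf_sphere([0.0, 0.0, 0.0]))  # inside the sphere: -1.0
print(sdf_sphere([2.0, 0.0, 0.0]))  # outside: 1.0
print(sdf_sphere([1.0, 0.0, 0.0]))  # on the surface: 0.0
```

The `sdf_sphere` name and the analytic stand-in are illustrative assumptions; the point is only the shape of the interface.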

The same principle applies to voice. Instead of isolating components from a raw recording, separate surrogates can be trained to approximate distinct aspects of speech (inflection, timing, vocal characteristics) and conditioned independently. The same approach works at inference on a live stream: a trained model applies its learned function to incoming audio in real time, filtering or modifying aspects of the signal as it arrives.
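Streaming inference of this kind can be sketched in a few lines. An exponential smoother stands in here for a trained surrogate; the class name, `alpha` parameter, and smoothing rule are all illustrative assumptions. What carries over to a real model is the shape of the loop: state is held across chunks, and each chunk is transformed as it arrives.

```python
import numpy as np

# Streaming inference sketch: the model is queried chunk by chunk as
# audio arrives. An exponential smoother stands in for a trained
# surrogate; a real model would apply its learned function the same way.
class StreamingSurrogate:
    def __init__(self, alpha=0.5):
        self.alpha = alpha   # smoothing factor (hypothetical parameter)
        self.state = 0.0     # carried across chunks, like recurrent state

    def process(self, chunk):
        out = np.empty_like(chunk)
        for i, x in enumerate(chunk):
            self.state = self.alpha * x + (1 - self.alpha) * self.state
            out[i] = self.state
        return out

model = StreamingSurrogate(alpha=0.5)
for chunk in [np.array([1.0, 1.0]), np.array([1.0, 1.0])]:
    print(model.process(chunk))  # output converges toward the input
```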

The same idea extends to scenes. Rather than storing video as frames, a scene can be represented as functions mapping spatial coordinates, time, and viewing direction to appearance. These representations decompose naturally, with different objects modeled as separate components and combined at render time, each operating in the coordinate frame of another: a hat above a hair representation, above a head. Manipulation happens at the surrogate level, not in the raw data.
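The render-time composition might look like the following sketch, where analytic SDFs stand in for trained per-object networks. The hat is a function over its own local coordinates, queried in the frame of its parent by subtracting a hypothetical offset; the union of components is the minimum of their signed distances.

```python
import numpy as np

# Composing separate surrogates at render time. Each component is a
# function over its own local coordinates; a child component is queried
# in its parent's frame. Analytic SDFs stand in for trained networks.
def head(p):   # unit sphere at the origin
    return float(np.linalg.norm(p) - 1.0)

def hat(p):    # smaller sphere, defined in the hat's own local frame
    return float(np.linalg.norm(p) - 0.3)

def scene(p):
    p = np.asarray(p, dtype=float)
    hat_offset = np.array([0.0, 1.2, 0.0])  # hat frame sits above the head
    # Union of components: the minimum of the signed distances.
    return min(head(p), hat(p - hat_offset))

print(scene([0.0, 0.0, 0.0]))   # deep inside the head: -1.0
print(scene([0.0, 1.2, 0.0]))   # center of the hat: -0.3
```

Moving the hat means changing `hat_offset`, not editing any stored geometry, which is the sense in which manipulation happens at the surrogate level.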

What matters is that none of these are static. Each surrogate is a function that can be queried and conditioned. The parameters are fixed after training, but the outputs vary continuously with the input. Because manipulation happens at the level of the approximation, aspects that would be entangled in the original become independently addressable.

This extends to physical systems. In the case of PDEs, a surrogate approximates the mapping from inputs to solutions rather than storing every state: an approximation to the solution operator itself. This reduces storage and computation at inference while generalizing within the training regime. The process that produces a surrogate is the same class of function as the surrogate itself, which suggests that modeling, compression, and generation are not three separate problems.
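To make the solution-operator idea concrete, here a spectral solver for the 1D periodic heat equation plays the role a trained neural operator would learn: initial condition in, solution at a chosen time out, with no intermediate time steps stored. The function name and the diffusivity value are assumptions for illustration.

```python
import numpy as np

# A solution operator maps problem inputs directly to solutions. This
# exact spectral solver for u_t = kappa * u_xx on a periodic domain
# stands in for a learned surrogate of the same operator.
def heat_operator(u0, t, kappa=0.1):
    n = len(u0)
    freqs = np.fft.fftfreq(n, d=1.0 / n)               # integer wavenumbers
    decay = np.exp(-kappa * (2 * np.pi * freqs) ** 2 * t)
    return np.fft.ifft(np.fft.fft(u0) * decay).real    # damp each mode

x = np.linspace(0.0, 1.0, 64, endpoint=False)
u0 = np.sin(2 * np.pi * x)        # a single Fourier mode as input
u1 = heat_operator(u0, t=0.5)
# The mode keeps its shape and decays by exp(-kappa * (2*pi)^2 * t).
print(np.max(np.abs(u1)))
```

A learned operator trades this solver's exactness for generality: once trained, the same query interface answers for inputs the analytic route would handle only case by case.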

Automating surrogate construction extends this further. A system that can read documentation, identify relevant structure, generate implementation, and adapt as it encounters unfamiliar domains removes the bottleneck that makes surrogate modeling impractical in less studied contexts. In well-characterized domains, neural networks are already deployed for narrow, well-defined tasks on stable distributions: defect detection in factories, quality control pipelines, fixed industrial processes. The models work because the domain does not change in ways they were not designed for. In others (biomechanics of specific sports, hydrodynamics of a particular coastline, acoustics of an unusual material) the barrier is not compute but the effort required to formulate the model at all. In those settings, where the domain is partially observed and conditions vary, the ability to construct and adapt surrogates as new structure is encountered becomes most valuable. That is what automation addresses.
