The Second Derivative of the Mind
by Jesse Kaminsky
This sentence physically alters your brain. Your brain recognizes these symbols and identifies the information associated with them. After you interpret and reconcile these semantic representations with one another, you come to understand the message of the whole sentence. But something interesting occurs throughout this process. Like an ant following its leader’s scent trail and reinforcing it for those further behind, the act of reading triggers neuronal signaling with physical consequences for future interactions among neurons. When a neuron is activated, it releases chemical messengers to linked neurons at interfaces called synapses. A neuron receiving these messengers may then become activated, depending on how strongly that synapse impacts the cell and on how excited the cell already is by its other synapses. Not all synapses are equal.
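To make that last step concrete, here is a minimal sketch of synaptic integration, a standard abstraction rather than anything specific to this article: the receiving neuron fires only if the sum of its inputs, each weighted by its synapse’s strength, crosses a threshold. All numbers are illustrative.

```python
# A minimal sketch of synaptic integration: a neuron fires when the
# weighted sum of its inputs crosses a threshold. Values are illustrative.

def neuron_fires(inputs, synapse_strengths, threshold=1.0):
    """Return True if total synaptic excitation exceeds the threshold."""
    excitation = sum(signal * strength
                     for signal, strength in zip(inputs, synapse_strengths))
    return excitation > threshold

# Three presynaptic neurons fire (1) or stay silent (0); the synapses
# connecting them to our neuron differ in strength -- not all are equal.
inputs = [1, 1, 0]
strengths = [0.9, 0.4, 0.7]
print(neuron_fires(inputs, strengths))  # True: 0.9 + 0.4 = 1.3 > 1.0
```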
Some synapses have a greater ability than others to excite the receiving neuron. Through a process called synaptic plasticity, synapses can physically change over time such that one neuron’s signal becomes more or less potent to the other. At its simplest, this allows synapses that frequently participate in activating a neuron to gain a greater effect than those used less often. Through this complex modulation of the web of synapses connecting neurons in the human brain, information is processed and correlated with other information, semantic units are memorized, and human intelligence is established. How this is orchestrated and how it produces these high-level capabilities are not entirely understood.
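The textbook caricature of this idea is a Hebbian rule: a synapse strengthens whenever the neurons on either side of it are active together. The sketch below is that caricature, not the article’s own model, and the learning rate is arbitrary.

```python
# A minimal Hebbian sketch: synapses that repeatedly participate in
# activating the receiving neuron grow stronger. Textbook abstraction;
# the 0.1 learning rate is arbitrary.

def hebbian_update(strength, pre_active, post_active, rate=0.1):
    """Strengthen the synapse when pre- and postsynaptic neurons co-fire."""
    return strength + rate * pre_active * post_active

strength = 0.5
for _ in range(5):               # five co-activations of the two neurons
    strength = hebbian_update(strength, pre_active=1, post_active=1)
print(strength)                  # 1.0: the signal has become more potent
```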
Further complicating this process is the very interesting phenomenon of metaplasticity, or the plasticity of synaptic plasticity. If the strength of these connections can be modulated, the synapse’s ability to do so may also change. A given synapse may be more plastic than others, and it can adjust its capacity to alter its impact on a connected neuron. If this seems unintuitive, consider an analogy with motion. An object has a position, just as a neuron has a level of influence on another neuron. The velocity of an object is the change in its position with respect to time, just as the plasticity of a synapse is the change in its potency with respect to past activity. The acceleration of an object is the change in its velocity with respect to time, just as the metaplasticity of a synapse is the change in its plasticity with respect to activity. This “calculus” of neural-circuit adaptability has great ramifications for, among other things, the artificial reproduction of human intelligence.
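Written out, the analogy is literally first and second derivatives: if w is a synapse’s strength, plasticity plays the role of dw/da (change in strength per unit of activity) and metaplasticity the role of d²w/da² (change in plasticity per unit of activity). A toy sketch of that bookkeeping, with invented numbers:

```python
# The calculus analogy in code: strength is "position", plasticity (how
# fast strength changes per unit of activity) is "velocity", and
# metaplasticity (how fast plasticity itself changes) is "acceleration".
# All numbers are invented for illustration.

strength = 0.5          # w       : current synaptic strength
plasticity = 0.10       # dw/da   : change in strength per unit activity
metaplasticity = -0.02  # d2w/da2 : change in plasticity per unit activity

for step in range(4):                 # four units of activity
    strength += plasticity            # plasticity moves the strength
    plasticity += metaplasticity      # metaplasticity moves the plasticity
    print(f"activity {step + 1}: strength={strength:.2f}, "
          f"plasticity={plasticity:.2f}")
# The synapse keeps strengthening, but by less each time: its capacity
# to change is itself changing.
```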
This idea of neural webs and synaptic plasticity has already greatly benefited artificial intelligence research through the development and analysis of neural network models. Neural nets are neurobiologically inspired mathematical models in which a set of artificial “neurons” is trained on a dataset to produce a model fitted to it. Just as in the brain, these neurons possess connections that directly affect one another’s activation. By associating certain patterns of activation with outputs of interest, the strengths of these connections can be gradually adjusted to extract more useful information from the input data. Applications range from recognizing handwritten text to predicting economic trends to differentiating between malignant and benign tumors.
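A hedged sketch of that gradual adjustment: a single artificial neuron on an invented toy dataset, with each connection nudged whenever the output disagrees with the desired label (a perceptron-style rule, chosen for brevity rather than taken from any particular system).

```python
# A minimal training sketch: connection strengths are gradually adjusted
# so activation patterns line up with outputs of interest. This is a
# perceptron-style rule on an invented toy dataset, not any specific
# published model.

data = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 1.0], 1), ([0.0, 0.0], 0)]
weights = [0.0, 0.0]
rate = 0.5

for _ in range(10):                              # a few passes over the data
    for inputs, target in data:
        output = 1 if sum(w * x for w, x in zip(weights, inputs)) > 0.5 else 0
        error = target - output                  # how wrong was the network?
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]

print(weights)  # the first connection ends up stronger than the second
```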
Neural nets have even been used to interpret the activation patterns of actual neural circuitry in living human brains from functional magnetic resonance imaging data. Just as neuroscience initially inspired these models, it is now benefiting from their insights. There are also ways to use neural nets to learn about the human brain by exploiting their neurobiological character. How can neural nets be made to reflect human neurobiological function more accurately? In the simplest neural nets, connections are single values that directly and proportionally impact the activation of the next neuron. In reality, synapses are dramatically more complicated, grounded in large networks of proteins and intricate molecular signaling pathways that exist within physical systems subject to chaotic environmental variables. The single value of a connection could instead be considered a function of the whole synaptic process. If a neural net could more faithfully apply true synaptic mechanisms, it might shed light on how these mechanisms generate the aspects of intelligence that neural nets already express.
Just as synaptic plasticity manifests as the alteration of a synapse’s strength, neural net training is the gradual alteration of a network’s connection strengths given some input data. In both cases, the conversion of input into a desired output is shaped by this plasticity. But what if these networks were more grounded in neurobiology? Overall neural net structure has already been adapted into many varieties in an attempt to mimic the large-scale architecture of neural circuitry in the human brain. It is more challenging to consider how the connections themselves might be made more biologically faithful. Adjusting the function of a synapse to account for the mechanisms by which neurotransmitters are communicated, and for the pathways through which this activation can be permanently altered, is no simple task. The mechanisms underlying synaptic plasticity are not yet fully understood. Once they are, one could imagine a neural net connection that contributes not a single number but a set of values, each generated by a function of some aspect of the synapse, some interacting with others, to a given neuron’s decision to activate.
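No one knows yet what those functions should be, so the sketch below is purely speculative scaffolding: a connection whose effective strength is computed from several interacting state variables instead of being stored as one number. The variable names (receptor_density, vesicle_pool, recent_activity) are hypothetical placeholders, not established biological quantities.

```python
# A speculative sketch of a "connection as a function": instead of one
# stored number, the synapse computes its effective strength from several
# state variables standing in for molecular machinery. Names are
# hypothetical placeholders.

class RichSynapse:
    def __init__(self):
        self.receptor_density = 1.0   # stands in for postsynaptic receptors
        self.vesicle_pool = 1.0       # stands in for available transmitter
        self.recent_activity = 0.0    # a trace of recent signaling

    def transmit(self, presynaptic_signal):
        """Effective strength is a function of the whole synaptic state."""
        strength = self.receptor_density * self.vesicle_pool
        out = strength * presynaptic_signal
        # Transmitting depletes resources and leaves an activity trace,
        # so the next transmission is computed from a changed state.
        self.vesicle_pool *= 0.9
        self.recent_activity = 0.8 * self.recent_activity + presynaptic_signal
        return out

syn = RichSynapse()
print([round(syn.transmit(1.0), 3) for _ in range(3)])  # [1.0, 0.9, 0.81]
```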
Only recently have computationalists begun to consider the concept of metaplasticity in their models. Some have developed synaptic models capable of expressing the phenomenon of metaplasticity, while others have taken inspiration from the general concept in designing their networks. What does it mean for a neural net to possess metaplasticity? If plasticity is the ability to change the impact of a neuron on others associated with it, metaplasticity must be the ability to change the degree to which these changes occur. The strengths of the connections in the simplest neural nets are altered based on the final output of the network, changing until they come closer and closer to the desired outcome. Metaplasticity would be a change in this updating process itself as data is provided.
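In the simplest hedged terms: if ordinary training moves a weight by a fixed step, a metaplastic version also moves the step size as activity accumulates. The decay rule below is invented for illustration, not drawn from any published model.

```python
# A minimal metaplasticity sketch: training changes the weight, while the
# weight's own learning rate changes as a function of accumulated activity.
# The specific decay rule (rate shrinks with use) is invented to illustrate
# the idea.

weight = 0.0
rate = 0.5            # plasticity: how strongly each example moves the weight
activity = 0.0        # history that modulates plasticity

for error in [1.0, 1.0, 1.0, 1.0]:     # identical errors on four examples
    weight += rate * error             # plasticity: the weight changes
    activity += 1.0
    rate = 0.5 / (1.0 + activity)      # metaplasticity: the change changes
    print(f"weight={weight:.2f}, rate={rate:.2f}")
# Early examples move the weight a lot; later identical examples move it
# less, because the plasticity itself has been reshaped by experience.
```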
Just as plasticity can be applied to artificial neural nets to generate models possessing optimal connection values for a given dataset, perhaps metaplasticity can be used to generate models possessing optimized plasticity mechanisms. Metaplasticity-inspired models may offer a way to identify novel computational mechanisms of plasticity that improve model performance and provide insight into how the brain functions. The challenge is developing a method for introducing metaplasticity into an overall model. How will metaplasticity guide computational inquiry into the nature of the human brain in the years to come? In his 1977 book The Biological Origin of Human Values, George Pugh credited his father, Emerson Pugh, with the quip that “if the human brain were so simple that we could understand it, we would be so simple that we couldn't.” If this is correct, there is a simple solution: make complicated models to do it for us!