The Multiple Threads of My Interests
The last several months have been really exciting for me. The book project is wrapping up and we should be getting the final manuscript draft to the publisher in early June. But several other areas in which I have been working are beginning to come together, to converge, as it were, in a grand synthesis. I must admit I have been startled by what is happening. I think this is due, in part, to my having stopped obsessing over how the world is falling apart from human stupidity and gone back to my roots in systems science research. In any case, the multiple threads of my interests are starting to weave together in what is, to me, a profound tapestry. I will try to explain.
Back in Feb. 2009 I wrote a blog post inspired by an e-mail question from a reader. I called it “Subjects of Interest.” The reader was puzzled about my penchant for writing about so many seemingly disparate topics such as energy, brains, and economics. I tried to show that these subjects are actually all interrelated and form a single larger system, and that I had used my understanding of systems science to successfully explore each subject from that perspective.
Over the years that I have been writing Question Everything I have delved into a fair number of subjects that were either explicitly systems oriented, or implicitly so. Here is a list of the topics in which I have done some pretty active exploration.
- Brain Science in general, but especially the functions of the prefrontal cortex in consciousness and sapience
- Information/knowledge theory and hierarchical cybernetics
- Auto-organization, emergence, and evolution (esp. the origin of life problem)
- System dynamics and approaches to modeling systems with a new process-based semantics
- Computation (obviously)
- Artificial Autonomous Agents (robots) using brain-like control systems
- Energy flows (as in auto-organization, etc. above), esp. in biology and economics
What ties these all together is Systems Science (my textbook will show this). Up until just recently, however, I have pursued each somewhat independently of the others even while intuiting that somehow they were all part of a larger whole. Now, it seems, my intuition is starting to take more shape in my mind. A recent insight, gained while working on a research project with the artificial agent brain, showed me how these all come together, how they can all be integrated and unified into a single conceptual framework: a system.
In future blogs I will outline how I see this coming about. Right now I will just provide an overview and mention some of the ways to bring it to fruition.
The Brain is a Universal System Model Building and Simulation Machine
For several years I have been working on the design of a new modeling language meant to upgrade what is known as system dynamics, originally developed by Jay Forrester at MIT (also see Donella Meadows' wonderful book, Thinking in Systems). Forrester's language relies on a semantics called “stocks, flows, and controls.” It is true that all dynamic behavior of a system can be simulated by developing a stock and flow model such as in Figure 1 below. The changes in flow rates and stock levels over time constitute the dynamics. These models are discrete and time-based; they are simulated by repeatedly computing the state variables at each time tick. Graphs of the output data then show the dynamic behavior of the system as a whole.
Figure 1. System dynamics (SD) modeling has been based on the use of stocks, flows, and controls as very abstract concepts. This shows the basic semantics of the SD languages. The way the component pieces hook up is the language syntax.
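To make the simulation idea concrete, here is a tiny sketch in Python (my own illustration, not DYNAMO or any actual SD tool) of a single stock with an inflow and an outflow whose rates depend on the stock level; the state is simply recomputed at every time tick. All names and rate constants are invented for the example.

```python
# A minimal stock-and-flow sketch: one stock, two flows whose rates are set by
# the current stock level (the "influences"), stepped forward in discrete time
# ticks. All names and constants here are invented for illustration.

def simulate(stock=100.0, dt=1.0, ticks=50):
    history = []
    for t in range(ticks):
        inflow = 0.05 * stock             # e.g., additions proportional to the stock
        outflow = 0.00002 * stock ** 2    # e.g., losses that grow faster than the stock
        stock += (inflow - outflow) * dt  # the stock integrates net flow over one tick
        history.append((t, stock))
    return history

for t, level in simulate()[:5]:
    print(f"tick {t}: stock = {level:.2f}")
```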
For me this kind of abstraction was useful for many kinds of “simpler” systems. But when I started trying to build more complex system models involving details of energy, matter, and message (information) flows, the lack of specificity in flow designation started to become a problem. A flow of energy and a flow of material (not to mention messages) are subject to somewhat different laws. For one, the Second Law of Thermodynamics requires that with every conversion of energy flow there is an unavoidable loss of some of the energy to waste heat, i.e. less work can be accomplished downstream of the conversion. Material flow, by contrast, is constrained by the conservation principle, and on top of this it often takes additional energy to remove waste materials. I was also unsatisfied with the treatment of messages, which appear only as the thin arrows (called “influence arrows”) representing the flow of information that affects the controls (the valves in the figure).
Lastly, and very importantly, I wanted to be able to abstract all of these flows, stocks, and messages into an object called a “process.” What we see in systems are objects that take inputs and process them into outputs: products and wastes. The products are generally used by some other process, which is what gives the output value and becomes the raison d'être for the supplying process. The abstraction (in the sense of hiding all of the details of a system) would allow the modeler to combine multiple processes into meta-systems. In that way one can construct increasingly complex systems from simpler subsystems. I will provide an overview of this in a future blog.
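As a rough illustration of what I mean by a process abstraction, here is a small sketch (again my own, not the actual language design) of a process object that converts energy and material inputs into a product plus wastes, losing some energy to waste heat as the Second Law demands while conserving material mass, and that can be chained with other processes to form a meta-process. The efficiencies, yields, and names are purely hypothetical.

```python
# An illustrative "process" object: it takes energy and material inputs and turns
# them into a product plus wastes. Some energy is unavoidably lost as waste heat
# (Second Law); material mass is conserved, so what isn't product leaves as waste.
# Efficiencies, yields, and names are hypothetical.

class Process:
    def __init__(self, name, energy_efficiency=0.7, material_yield=0.9):
        self.name = name
        self.energy_efficiency = energy_efficiency  # fraction of input energy ending up in the product
        self.material_yield = material_yield        # fraction of input mass ending up in the product

    def step(self, energy_in, material_in):
        product_energy = energy_in * self.energy_efficiency
        waste_heat = energy_in - product_energy      # unavoidable loss to waste heat
        product_mass = material_in * self.material_yield
        waste_mass = material_in - product_mass      # mass is conserved overall
        return {"product_energy": product_energy, "product_mass": product_mass,
                "waste_heat": waste_heat, "waste_mass": waste_mass}

# Processes compose: the product of one becomes the input of the next, so a
# meta-process is just a chain (or network) of sub-processes with details hidden.
mine = Process("mine", energy_efficiency=0.8, material_yield=0.6)
smelt = Process("smelt", energy_efficiency=0.4, material_yield=0.5)
ore = mine.step(energy_in=100.0, material_in=50.0)
metal = smelt.step(energy_in=ore["product_energy"], material_in=ore["product_mass"])
print(metal)
```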
The limitations of system dynamics semantics definitely showed themselves when I attempted to model a neuron and its synapses using it. My first neuron/synapse models based on system dynamics were a good exercise and provided many useful insights, but I found the restricted semantics difficult. I ended up developing the model in the ‘old’ way: I wrote it in C. Even so, the experience with DYNAMO and system dynamics left me feeling that, with a little more semantic support, a language that allowed the creation of complex dynamic models would be a great “project.” I put it on the back burner because my interest in neural systems as a better approach to artificial intelligence had been piqued. For the next many years I concentrated on developing a more deeply biologically inspired model of a neuron, with particular focus on the multi-time-domain dynamics of synapses and their plasticity, which I had become convinced was the key to a really efficacious method for encoding memory engrams.
Fast forward from the mid-'80s to about five years ago. In 2001 I moved from Western Washington University, where I was making progress on the “Adaptrode” synapse model, using it in neural networks that controlled the behavior of a mobile robot, to the University of Washington Tacoma, where the hoped-for support never materialized and my robot experiments came to a screeching halt. As a result I started casting about for something else to do research in. A few false starts later I went back to the system dynamics modeling language and started to coalesce my ideas into some concrete approaches.
As part of my desperate search for a new research arena, and hoping it might have some relation to my previous work, I had “stumbled” into the psychology of wisdom. I had also been studying the growing threats from global warming, population overshoot, etc.; all of the problems that beset mankind and threatened its very existence. In the psychology of wisdom I found a clue. The reason humans were consistently making stupid mistakes and had adopted a completely fallacious ideology (capitalism, profits for their own sake, and growth) was that they lacked the wisdom to make good judgements and hence good decisions. Somewhere along the line my interest in brains and behavior, along with this insight from psychology, led me to a much deeper investigation of brain functions and their evolution to seek explanations for the human condition.
About the same time my curiosity about the state of our energy systems, namely the concept of “peak oil,” was starting to drag me into a deeper study of that subject, and what I found made me once and for all realize that humanity was doomed by sheer lack of sapience. All fossil fuels are clearly, unambiguously finite resources that, if extracted and used at increasing rates, will run out sooner or later. Moreover, the supposed substitutes (neoclassical economists insist there are always substitutes when the price gets too high), alternatives such as wind and solar, were highly suspect in terms of their capacities to provide the same level of power that our societies had grown accustomed to. I started using my evolving modeling language to try to model the net energy production of photovoltaic generation, given that the energy inputs into the process of producing the cells and installing them seemed to me to be excessive. In the dynamics of such systems, net energy is all that counts, and my preliminary attempts convinced me that chasing these alternatives to keep society going with business as usual was a fool's errand. I discovered the work of Charles Hall at SUNY-ESF on ‘energy return on energy invested’ (EROI) and, given the deep implications for humanity if my model was giving reasonable answers, I decided to study this phenomenon much more closely. I had a sabbatical leave coming up, so I decided to go to SUNY and study EROI with Hall. My objective was to understand the math better, and to see if my new process-based language would be able to produce viable models of energy systems.
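For readers unfamiliar with the idea, the arithmetic behind EROI and net energy is simple; the sketch below uses invented numbers, not figures from my model or from Hall's data.

```python
# Back-of-the-envelope net energy and EROI with invented numbers (not results
# from any actual model or from Hall's published figures).

energy_delivered = 1000.0  # lifetime energy output of a hypothetical PV installation
energy_invested = 250.0    # energy spent manufacturing, installing, and maintaining it

eroi = energy_delivered / energy_invested        # energy return on energy invested
net_energy = energy_delivered - energy_invested  # the surplus society actually gets to use

print(f"EROI = {eroi:.1f}, net energy = {net_energy:.0f}")
# As energy_invested creeps toward energy_delivered, EROI falls toward 1 and the
# net energy (the surplus that runs everything else) falls toward zero.
```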
Long-time readers know the rest. What I found out in this background research settled matters in my mind once and for all: the world of human societies was doomed to collapse. While at SUNY I met and struck up a friendship with Joe Tainter, who had penned the infamous The Collapse of Complex Societies. Joe had been working on the thesis that the collapse of prior civilizations had been greatly influenced by what he described as the declining marginal returns on complexity, that being the result of societies trying to solve problems, with the solutions then leading to more problems. Through his association with Hall et al., Tainter had come to see that the energy flow through a society was related to the complexity issue, recognizing that complexity can only increase and succeed if there is adequate energy flow to support it. The Romans, for example, ran out of energy and could no longer support their complex infrastructure (to put it simply).
After returning from SUNY I was inspired to increase my efforts on the modeling language project. I've had several graduate and undergraduate students working on bits and pieces but the whole concept is, itself, rather complex and requires many pieces working together. We've made some progress, but not enough to go ask for money yet.
Meanwhile I had all but forgotten the neuron modeling as I pursued these other lines of interest. The seeming inability of the NN community to understand my models had more or less left me cold on struggling to pursue the concepts further. Then, roughly two years ago, I came across a research paper that signaled something profound. It seems that in the years I had been away from neural network research, a fair number of researchers had discovered the problem that I had tried to articulate to the neural network community without much success back in the early '90s and had been working on with my Adaptrode model: that of learning and forgetting non-stationary relations. The Adaptrode works by encoding memory traces in multiple time domains (i.e., real-time, short-, intermediate-, and long-term traces). In a future blog I will provide more non-technical details about how the Adaptrode model solves some pretty serious problems in memory trace encoding in the real world.
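I won't describe the Adaptrode equations here (they are more involved and will get their own post), but the general flavor of multi-time-domain trace encoding can be sketched with a toy cascade of traces, each slower level driven by the faster one above it. Everything in this sketch, from the structure to the rate constants, is illustrative only, not the Adaptrode itself.

```python
# A toy cascade of memory traces on four time scales. The fast trace follows the
# input signal; each slower trace follows the trace above it. This is NOT the
# Adaptrode model itself, only an illustration of the multi-time-domain idea;
# all rate constants are invented.

def update_traces(traces, signal, rates=(0.5, 0.1, 0.02, 0.004)):
    """traces: [real-time, short-, intermediate-, long-term], updated in place."""
    drive = signal
    for i, rate in enumerate(rates):
        traces[i] += rate * (drive - traces[i])  # rise toward the driving level, decay otherwise
        drive = traces[i]                        # the next, slower trace is driven by this one
    return traces

traces = [0.0, 0.0, 0.0, 0.0]
for t in range(200):
    signal = 1.0 if t < 50 else 0.0              # a burst of input, then silence
    update_traces(traces, signal)
print([round(x, 3) for x in traces])             # the slower traces retain more of the burst
```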
With that kick I realized I needed to get back in that game. I had the solution to the problem and now that the community was paying attention I might actually be able to make some progress. For the past two years I have been steadily pushing my previous work to incorporate a number of new ideas regarding how to actually simulate a brain of greater complexity than that of a ‘moronic snail’, namely, how to construct a neocortex structure to do much more advanced processing than I had accomplished previously. This was greatly accelerated by reading Jeff Hawkins' amazing book, On Intelligence. Hawkins had independently hit on some of the same principles I had developed in my research years before, which gave me confidence that I had been on the right track. I've revived that research agenda and have several graduate students starting to rebuild my simulation software.
The cerebral cortex (neocortex in mammals) is an amazing structure for capturing causal relations at multiple scales of space and time. It literally is a machine for constructing complex models of the systems it encounters and using those models to anticipate the future. Hawkins and many neuroscientists studying the functions of areas of the cortex in behavior and thinking have all come to very similar conclusions. As yet, however, I have not seen an explicit discussion of the role of multiple-time domain encoding in how these models are constructed and validated (i.e. committed to long-term memory).
Within the last several months it dawned on me that I had been working on the exact same concept all along, just under different guises. Actually I have to confess it came to me during a lucid dream one night! I woke up realizing that my modeling language, my brain model, my attempts at modeling systems' energy flows, and my grasp of higher brain functions were actually all one and the same. If I could successfully build a deep model of a brain (especially the neocortex with a human-like architecture), that simulator would do implicitly what I had been trying to do explicitly with a modeling language. I would be able to let such a device ‘learn’ the world, that is, construct a model of the world from experience. It would be able to simulate the world it had learned in order to project anticipatory scenarios, with the same efficacy as humans. And maybe more so.
It is still unclear to me what value it might have to achieve a breakthrough at this late date. The world of human civilization based on extreme power is coming to an end sooner than we previously thought possible. The latest news on global warming and climate change indicates that the major impacts from this phenomenon are already underway and will accelerate in the years to come. So solving the problem of emulating natural intelligence in machines may simply be coming too late to be very helpful. Even so, my attitude has always been one of curiosity and discovery. As selfish as that may be, I have been motivated by an incredible need to understand things. Some time back I thought that was because I had the conceit to believe that if I understood the way the world worked I could do something to correct the mistakes. When I was younger and naive that seemed like a reasonable thing to think. Now that I am older, recognize the foibles of humanity, and know my own shortcomings, I realize that it doesn't matter what I do. My contributions, if any, will not change the course of our fate. Even so, I am still intensely curious. So I will continue to research the subject. I can now see how systems science, the modeling ‘language’, energy flow, evolution, and everything else come together in the way the brain works, and that will be my focus from here on. I will try to outline these disparate threads in future posts, and show how they all come together in a single consistent understanding.
Wish me luck.