February 11, 2024

Charles Darwin meets Rube Goldberg: a tale of biological convolutedness


Charles Darwin studying a Rube Goldberg Machine (Freepik Generative AI, text-to-image)

For this Darwin Day post (2024), I will discuss the paper Machinery of Biocomplexity [1]. This paper introduces the notion of Rube Goldberg machines as a way to explore biological complexity and non-optimal function. The concept was first highlighted on Synthetic Daisies in 2009 [2], and an earlier version of the paper was discussed on Synthetic Daisies in 2013 [3]. The paper was revised in 2014 to include a number of more advanced computational concepts, following a talk at the Network Frontiers Workshop at Northwestern University in 2013 [4]. 

Figure 1. Block and arrow model of a biological RGM (bRGM) that captures the non-optimal changes resulting from greater complexity. Mutation/Co-option (top) removes the connection between A and B, then establishes a new set of connections with D. Inversion (bottom) flips the direction of the connections between C-B and C-A, while also removing the output. This results in the addition of E and D, which reestablish the output in a circuitous manner.

Biological Rube Goldberg Machines (bRGMs) are defined as a computational abstraction of convoluted, non-optimal mechanisms. Non-optimal biological systems are represented using flexible Markovian box and arrow models that can be mutated and expanded given functional imperatives [5]. Non-optimality is captured through the principle of "maximum intermediate steps": biological systems such as neural pathways, metabolic reactions, and serial interactions do not evolve toward the shortest route but are constrained by (and perhaps even converge to) the largest number of steps. This results in a set of biological traits that functionally emerge as a biological process. Figure 1B shows an example in which the maximal number of steps represents a balance between the path of least resistance and exploration, given constraints on possible interconnections [6]. The paths from A-E, E-B, and C-D are the paths of least resistance given the constraints of structure and function. In the sense that optimality is a practical outcome of physiological function, a great degree of intermediacy can preserve unconventional pathways that are utilized only spontaneously.
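As a rough illustration (my own sketch, not code from the paper), a bRGM can be represented as a directed adjacency list whose edges are rewired by a mutation/co-option operator, with convolutedness measured as the number of steps from input to output. The node names and the operator below are hypothetical:

```python
def path_length(graph, start, end, seen=None):
    """Length (in edges) of the longest simple path from start to end."""
    if seen is None:
        seen = {start}
    if start == end:
        return 0
    best = None
    for nxt in graph.get(start, []):
        if nxt in seen:
            continue
        sub = path_length(graph, nxt, end, seen | {nxt})
        if sub is not None and (best is None or sub + 1 > best):
            best = sub + 1
    return best

# A toy bRGM as a box-and-arrow adjacency list: input A, output E.
brgm = {"A": ["B"], "B": ["E"]}          # direct route: A -> B -> E (2 steps)

def co_opt(graph, src, old_dst, new_node, final_dst):
    """Mutation/co-option: remove src->old_dst and reroute through a new element."""
    graph[src] = [d for d in graph[src] if d != old_dst]
    graph[src].append(new_node)
    graph.setdefault(new_node, []).append(final_dst)

co_opt(brgm, "A", "B", "D", "B")         # A now reaches B only via D

print(path_length(brgm, "A", "E"))       # 3: same function, one more intermediate step
```

Repeated applications of such operators grow the number of intermediate steps while preserving the input-output mapping, which is the sense of "maximum intermediate steps" described above.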

This can be seen in a wide variety of biological systems and is a consequence of evolution. Evolutionary exaptation, the evolution of alternative functions, and serial innovation all result in systems with a large number of steps from input to output. But sometimes convolution is the evolutionary imperative in and of itself. As fitness criteria change over evolutionary time, traces of these historical trajectories can be observed in redundant pathways and other results of subsequent evolutionary neutrality. One example from the paper involves a multiscale model (genotype-to-phenotype) that exploits both tree depth and lateral connectivity to maximize innovation in the production of a phenotype (Figure 2). While our models are based on connections between discrete states, bRGMs can also provide insight into the evolution of looser collections of single traits and even networks, where the sequence of function is bidirectional and hard to follow in stepwise fashion.

Figure 2.  A hypothetical biological RGM representing a multi-scale relationship. Each set of elements (A-F) represents the number of elements at each scale (actual and potential connections are shown with bold and thin lines, respectively). Examples of convolutedness incorporate both loops (as with E5,1 and E5,5) and the depth of the entire network.

The paper also features extensions of the basic bRGM, including massively convoluted architectures and microfluidic implementations. In the former, interconnected networks represent systems that are not only maximal in terms of size or length, but also massively topologically complex [7]. One example of this is cortical folding and the resulting neuronal connectivity in mammalian brains. The latter example is based on fluid dynamics and combinatorial architectures that are more in line with discrete bRGMs (Figure 3). 

Figure 3. A microfluidic-inspired bRGM model that mimics the complexity of biological fluid dynamics (e.g. blood vessel networks). G1, G2, and G3 represent iterations of the system.


References:

[1] Alicea, B. (2014). The "Machinery" of Biocomplexity: understanding non-optimal architectures in biological systems. arXiv, 1104.3559.

[2] Non-razors, unite! January 30, 2009. https://syntheticdaisies.blogspot.com/2009/01/non-razors-unite.html

[3] Maps, Models, and Concepts: July Edition. Synthetic Daisies blog. July 13, 2013. https://syntheticdaisies.blogspot.com/2013/07/maps-models-and-concepts-july-edition.html

[4] Inspired by a visit to the Network's Frontier.... Synthetic Daisies blog. December 16, 2013. https://syntheticdaisies.blogspot.com/2013/12/fireside-science-inspired-by-visit-to.html

[5] When dealing with a large number of steps or in a polygenic context, these types of models can also resemble renormalization groups. For more on renormalization groups, please see: Wilson, K.G. (1975). Renormalization group methods. Advances in Mathematics, 16(2), 170-186.

[6] This balance is as predicted by Constructive Neutral Evolution (CNE). For a relevant paper, please see: Gray et al. (2010). Irremediable Complexity? Science, 330(6006), 920-921.

[7] In the paper, this is referred to as Spaghettification, a term borrowed from the physics of gravitation. See this reference for an interesting implementation of this in soft materials: Bonamassa et al. (2024). Bundling by volume exclusion in non-equilibrium spaghetti. arXiv, 2401.02579.

August 24, 2023

Saturday Morning NeuroSim Discussion Thread: Physical Computing


From the “Macy Conference Redux” feature from our July 1 meeting

Over the past three years, the Saturday Morning NeuroSim group has met weekly on Saturdays (mornings in North America). The Saturday Morning format continues in the tradition of Saturday Morning Physics and covers a wide variety of topics.

One recent lecture/discussion thread is on Physical Computation. Our approach to the topic begins with the debate around the role of computation in Cognitive Science and the Neurosciences. And so we begin in Week 1 with a discussion of the connections between computation, information processing, and the brain, largely focusing on the work of Gualtiero Piccinini and Corey Maley. A starting point for this session is their Stanford Encyclopedia of Philosophy article on “Computation in Physical Systems”. Many current assumptions about computation in the brain stem from the Church-Turing thesis, which often leads to a poor fit between model and experiment. Piccinini and Maley propose that the Church-Turing-Deutsch thesis is preferable when talking about systems that perform non-digital computations. Amanda Nelson pointed out that it makes sense to think of evolved biological systems (brains) as instances of analogue computers. Another interesting point from the session is the distinction between digital (von Neumann) computers and alternatives such as “physical” or “analog” computation, which would be picked up in the next session.

Physical Computation Session I from June 24 (roughly one hour in length).

The second session focused on physical computation, and led us to discuss the idea of pancomputationalism. While pancomputationalism is the fundamental assumption behind the phrase “the brain is a computer” [1], we were also introduced to pancomputationalism in ferrofluidic systems and mycelial networks. We discussed the works of Richard Feynman (Feynman Lectures on Computation) and Edward Fredkin (Digital Physics), which helped us form an epistemic framework for computation in nature [2]. We also discussed Andrew Adamatzky’s work on unconventional computation, particularly his work on Reaction-Diffusion (R-D) Automata, which, while discrete in nature, have connections to excitable (e.g. neural) systems via the FitzHugh-Nagumo model.
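The FitzHugh-Nagumo connection can be made concrete with a few lines of forward-Euler integration. The parameter values below are conventional textbook choices for the excitable/oscillatory regime, not values taken from the works discussed:

```python
# Forward-Euler integration of the FitzHugh-Nagumo model, the standard
# two-variable reduction of Hodgkin-Huxley excitable membrane dynamics:
#   dv/dt = v - v**3/3 - w + I
#   dw/dt = (v + a - b*w) / tau
a, b, tau, I = 0.7, 0.8, 12.5, 0.5   # common textbook parameters
v, w, dt = -1.0, -0.5, 0.01          # initial state and step size

trace = []
for _ in range(50_000):              # 500 time units
    dv = v - v**3 / 3 - w + I
    dw = (v + a - b * w) / tau
    v, w = v + dt * dv, w + dt * dw
    trace.append(v)

# With this level of injected current the fixed point is unstable,
# so the membrane variable v repeatedly spikes above 1.
print(max(trace) > 1.0)
```

The same fast-slow structure is what R-D automata discretize, which is why such discrete models can still exhibit neural-like excitability.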

Physical Computation Session II from July 1 (roughly one hour in length)

After taking a break from the topic, our July 15 meeting featured an alternative viewpoint on pancomputationalism. This was made manifest in a shorter discussion on physical computation, with views from Tommaso Toffoli and Stephen Wolfram. We covered Toffoli’s paper “Action, or the fungibility of computation”, which connects physical entropy, information, action, and the amount of computation performed by a system. This paper is of great interest to the group in light of our work and discussions on 4E (embodied, embedded, enactive, and extended) cognition [3]. Toffoli makes some provocative arguments herein, including the notion of computation as “units of action”. A concrete example of this is a 10-speed bicycle, which is not a conventional computer but nonetheless has linkages to perception and action. Amanda Nelson found the notion of transformation from one unit into another particularly salient to the distinction between analogue and digital computation. The physical basis of all forms of computation can also be better defined by revisiting “A New Kind of Science” [4], in which Wolfram sketches out the essential components and analogies of a computational system with a physical substrate. We can then compare some of the more abstract aspects of a physical computer with neural systems. This is particularly relevant to engineered systems that include select components of biological networks.

Physical Computation Session III from July 15 (about 15 minutes in length)

The next session followed up on computation in natural systems as well as Wolfram’s notion of universality, particularly in terms of computational models. In particular, Wolfram argues that cellular automata models can characterize universality, which is related to pancomputationalism. Universality suggests that any one computational model can capture system behavior across a wide variety of domains; in this sense, context is not important. Rule 30 produces an output that resembles pattern formation in biological phenotypes (the shell of the cone snail Conus textile), but can also be used as a pseudo-random number generator [5]. In “A Framework for Universality in Physics, Computer Science, and Beyond”, this perspective is extended to understand the connections between computation defined by the Turing machine and a class of models called Spin Models. This provides a framework for universality that is useful for defining computation across the various levels of neural systems, but also gives rise to understanding what is uncomputable. This session’s natural-systems examples featured computation among bacterial colonies embedded in a colloidal substrate, along with computation in granular matter itself. The latter is an example of non-silicon-based polycomputation [6].
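The Rule 30 pseudo-random number generator mentioned above can be sketched directly: iterate the rule from a single seeded cell and read off the center column, whose bits pass many statistical randomness tests. A minimal sketch:

```python
def rule30_bits(n_bits, width=257):
    """Center-column bits of Rule 30, started from a single seeded cell.

    Rule 30's update is: new_cell = left XOR (center OR right).
    The width is chosen large enough that the growing light cone
    never wraps around during n_bits steps.
    """
    row = [0] * width
    row[width // 2] = 1                       # single seed in the middle
    bits = []
    for _ in range(n_bits):
        bits.append(row[width // 2])          # read the center column
        row = [row[(i - 1) % width] ^ (row[i] | row[(i + 1) % width])
               for i in range(width)]
    return bits

bits = rule30_bits(100)
print(bits[:5])   # the known opening of the center column: [1, 1, 0, 1, 1]
```

The center column looks statistically random even though the rule is fully deterministic, which is what makes it usable as a pseudo-random bit source.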

Physical Computation Session IV from July 22 (about 12 minutes in length).

After taking a more extended break from the topic, we returned to this discussion four weeks later, in our August 19 meeting. This sixth (VI) session covered three topics: physical computation and topology, morphological computation, and RNA computing/Molecular Biology as a universal computer.

We have discussed category theory before in our discussions on Symbolic Systems and Causality. In this section, we revisited the role of category theory, but this time with reference to Physical Computation. John Carlos Baez and Mike Stay give a tour of category theory’s role in computation via topology. The idea is that category theory forms analogies with computation, which can be expressed on a topological surface/space.

Computable Topology, Wikipedia.

Baez, J. and Stay, M. (2009). Physics, Topology, Logic and Computation: A Rosetta Stone. arXiv, 0903.0340.

Mapping category theory operators to a topological description.

We also covered the role of Morphological Computation by reviewing three papers on this form of physical computation, which intersects with digital computational representations. Morphological Computation is the role of the body in the notion that “cognition is computation”. One idea critiqued in these papers is offloading from the brain to the body. Offloading is moving computational capacity from the central nervous system to the periphery. If you grab a ball with your hand, your brain recognizes the ball and sends commands to grasp it, but you must grasp and otherwise manipulate the object to fully compute it. Thus, this capacity is said to be offloaded to the hand or peripheral nervous system.

Interestingly, offloading and embodiment are integral parts of 4E (Embodied, Embedded, Enactive, and Extended) Cognition, which itself critiques the brain-as-computation idea. But as an analytical tool, morphological computation is much more utilitarian than Cognitive Science theory, and is concerned with how robotic bodies and other mechanical systems interact with an intelligent controller. In non-embodied robotics, body dynamics are treated as noise. But in morphological computation, body dynamics play an integral role in the intelligent system and contribute to a dynamical system.

Muller, V.C. and Hoffmann, M. (2017). What Is Morphological Computation? On How the Body Contributes to Cognition and Control. Artificial Life, 23, 1–24.

Fuchslin, R.M., Dzyakanchuk, A., Flumini, D., Hauser, H., Hunt, K.J., Luchsinger, R.H., Reller, B., Scheidegger, S., and Walker, R. (2013). Morphological Computation and Morphological Control: Steps Toward a Formal Theory and Applications. Artificial Life, 19, 9–34.

Milkowski, M. (2018). Morphological Computation: Nothing but Physical Computation. Entropy, 20, 942.

The three insights from our morphological computational discussion.

While these papers do not go too deeply into the role of pancomputation in Morphological Computation, it is implicit and plays a central role in our last topic: RNA computing and Molecular Biology. For more information, see this talk on YouTube and the paper below. Basically, while the pancomputationalism perspective is largely missing from biology, the structure and potential function of DNA and RNA provide a route to physical computation.

Akhlaghpour, H. (2022). An RNA-based theory of natural universal computation. Journal of Theoretical Biology, 537, 110984.

Bringing pancomputationalism into biology? What is its value?

Thanks to Morgan Hough for joining us from Hawaii (4:00 am!) on August 19.

References

[1] Richards, B.A. and Lillicrap, T.P. (2022). The Brain-Computer Metaphor Debate Is Useless: A Matter of Semantics. Frontiers in Computer Science, 4, 810358.

Should we just simply “shut up and calculate”, or debate some more?

[2] Fredkin, E. (2003). An Introduction to Digital Philosophy. International Journal of Theoretical Physics, 42(2), 189.

This work is the Rosetta Stone for many comparisons between modern AI systems and human-like intelligence, at least in terms of computation.

[3] Newen, A., DeBruin, L., and Gallagher, S. (2018). The Oxford Handbook of 4E Cognition. Oxford University Press.

[4] Wolfram, S. (2002). A New Kind of Science. Wolfram Media.

This is a link to the 20th Anniversary edition, with a full set of Cellular Automata rules, defined by number.

[5] Zenil, H. (2016). How can I generate random numbers using the Rule 30 Cellular Automaton? Quora post.

[6] Bongard, J. and Levin, M. (2023). There’s Plenty of Room Right Here: Biological Systems as Evolved, Overloaded, Multi-Scale Machines. Biomimetics, 8(1), 110.

Saturday Morning NeuroSim Discussion Thread: Causality

Over the past three years, the Saturday Morning NeuroSim group has met weekly on Saturdays (mornings in North America). The Saturday Morning format continues in the tradition of Saturday Morning Physics and covers a wide variety of topics.

Our discussion thread on causality begins with Causality and Circles on May 13. From a Mastodon post by Yohan John, we considered how spatialized diagrams are confused with temporal sequences in a feedback loop. We also covered three papers in this session.


Vernon, D., Lowe, R., Thill, S., and Ziemke, T. (2015). Embodied cognition and circular causality: on the role of constitutive autonomy in the reciprocal coupling of perception and action. Frontiers in Psychology, 6, 1660.

Raginsky, M. (2023). Directed Information and Pearl’s Causal Calculus. arXiv, 1110.0718.

Laland, K.N., Odling-Smee, J., Hoppitt, W., and Uller, T. (2013). More on how and why: cause and effect in biology revisited. Biology & Philosophy, 28, 719–745.

Our conversation continued after the last week of Neuromatch Academy, when the NMA curriculum featured causal networks. Our July 29 meeting featured a collection of references on Bayesianism, Probabilistic Graphical Models, methods of integration, time-series applications, and more. Some core readings are given below.

Stanford Encyclopedia of Philosophy: causal models. This article takes an epistemological approach and provides us with a baseline for structural equation models, graphical probabilistic models, and other statistical formulations of causal relationships.

Daphne Koller’s Probabilistic Graphical Models course. Hosted on Stanford University’s Open Classroom platform, this course includes units on representation, inference, learning, and causation. The causation unit covers decision theory, utility functions, influence diagrams, and the notion of perfect information.

Pearl, J. (2000). Causality. Cambridge University Press, Cambridge, UK. This classic book by Judea Pearl builds from a theory of inferred causation, starting with causal diagrams and continuing through direct effects, indirect effects, confounds, counterfactuals, bounding effects, and probabilities. The book also covers structural models, decision analysis, and Simpson’s Paradox as the basis for methods for detecting causal relationships.

Scholkopf, B. (2019). Causality for Machine Learning. arXiv, 1911.10500.

Heckman, J.J. (2005). The Scientific Model of Causality. Sociological Methodology, 35, 1–97. Causality from an econometrics point of view. Counterfactuals are a set of possible outcomes generated by determinants. A causal effect is defined as the change in the outcome when one factor is manipulated while all other factors are held constant.

Taskesen, E. (2021). A step-by-step guide in detecting causal relationships using Bayesian structure learning in Python. Towards Data Science, September 7.

Which variables have a direct causal effect on a target variable? Hint: association and correlation are not equivalent to causation.

Bayesian Models:

Neuberg, L.G. (2003). Causality: models, reasoning, and inference. Econometric Theory, 19, 675–685.

Pearl, J. (2001). Bayesianism and causality, or, why I am only a half-Bayesian. In “Foundations of Bayesianism”, pgs. 19–36. Kluwer Press.

Methods of Interaction: networks and undirected graphs, as opposed to directed acyclic graphs (DAGs), require a different set of considerations. The methods below cover highly interacting systems like graphs and how change over time can be properly interpreted as causal.

Leng, S., Ma, H., Kurths, J., Lai, Y-C., Lin, W., Aihara, K., and Chen, L. (2020). Partial cross mapping eliminates indirect causal influences. Nature Communications, 11, 2632.

Park, S.H., Ha, S., and Kim, J.K. (2023). A general model-based causal inference method overcomes the curse of synchrony and indirect effects. Nature Communications, 14, 4287.

From the Granger Causality Wikipedia entry.

Time-series using Granger Causality: the first two references apply Granger Causality to time-series datasets. In such cases, the datapoints are dependent with respect to time. Given two time series x and y, x is said to Granger-cause y if past values of x (lagged over a certain time interval) improve the prediction of the current value of y, given prior values of y. This is in comparison with simply predicting the current value of y from previous values of y alone, which serves as the counterfactual case.
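The nested-model comparison at the heart of the Granger idea can be sketched on simulated data (my own toy process and coefficients, not drawn from the cited papers): fit one model that predicts y from its own past, another that adds lagged x, and compare their residual errors.

```python
import random

random.seed(0)

# Simulate a pair of series where x drives y with a one-step lag:
#   x[t] ~ noise,   y[t] = 0.8*x[t-1] + 0.2*y[t-1] + small noise
n = 5000
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.0]
for t in range(1, n):
    y.append(0.8 * x[t - 1] + 0.2 * y[t - 1] + random.gauss(0, 0.1))

def rss(targets, preds):
    """Residual sum of squares."""
    return sum((t - p) ** 2 for t, p in zip(targets, preds))

def fit2(u, v, target):
    """Least squares for target ~ b1*u + b2*v via 2x2 normal equations."""
    Suu = sum(a * a for a in u); Svv = sum(a * a for a in v)
    Suv = sum(a * b for a, b in zip(u, v))
    Sut = sum(a * b for a, b in zip(u, target))
    Svt = sum(a * b for a, b in zip(v, target))
    det = Suu * Svv - Suv * Suv
    return ((Sut * Svv - Svt * Suv) / det,
            (Svt * Suu - Sut * Suv) / det)

y_t, y_lag, x_lag = y[1:], y[:-1], x[:-1]

# Restricted (counterfactual) model: predict y from its own past only.
b = sum(a * c for a, c in zip(y_lag, y_t)) / sum(a * a for a in y_lag)
rss_restricted = rss(y_t, [b * a for a in y_lag])

# Unrestricted model: add lagged x as a predictor.
b1, b2 = fit2(y_lag, x_lag, y_t)
rss_full = rss(y_t, [b1 * a + b2 * c for a, c in zip(y_lag, x_lag)])

# x Granger-causes y if adding lagged x sharply reduces prediction error.
print(rss_full < 0.5 * rss_restricted)
```

In practice one would use many lags and an F-test on the two models (e.g. via an established statistics library) rather than the ad-hoc 50% threshold used here for illustration.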

The final paper in the group (Stokes and Purdon) critiques Granger Causality from a Neuroscience perspective.

Carlos-Sandberg, L. and Clack, C.D. (2021). Incorporation of causality structures to complex network analysis of time-varying behaviour of multivariate time series. Scientific Reports, 11, 18880.

Runge, J., Nowack, P., Kretschmer, M., Flaxman, S., Sejdinovic, D. (2019). Detecting and quantifying causal associations in large nonlinear time series datasets. Science Advances, 5(11), aau4996.

Stokes, P.A. and Purdon, P.L. (2017). A study of problems encountered in Granger causality analysis from a neuroscience perspective. PNAS, 114(34), E7063–E7072.

The third session (August 5) focused on causality specifically as it is treated in Neuroscience. This session followed up on a Twitter debate between Kording Lab and Earl Miller about the role of causality in neuroscience. The consensus answer to the question “Why is Neuroscience so into causality?” was that it provides a means to identify mechanisms for function. Causality in neuroscience differs from philosophical discussions of causality in that Neuroscience must infer causality from data, while philosophers (and statisticians) do the work of proving causality.



One interesting point from Kording Lab is that there is a difference between proximate causes and ultimate causes. In some fields, causality is obvious and so causal methods are not always necessary. But Neuroscience is partially about the behavioral substrate, and so we can turn to Niko Tinbergen’s four questions. The four questions concern 1) how a trait arose in development (proximate, dynamic), 2) how a trait arose in evolution (ultimate, dynamic), 3) what is the mechanism or structure of a trait (proximate, static), and 4) what is the adaptive value or function of a trait (ultimate, static).

You can read more about Tinbergen’s four questions and their causal implications in the following papers.

Beer, C. (2020). Niko Tinbergen and questions of instinct. Animal Behaviour, 164, 261–265.

Nesse, R.M. (2019). Tinbergen’s four questions: two proximate, two evolutionary. Evolution, Medicine, and Public Health, 2, doi:10.1093/emph/eoy035.

Mayr, E. (1961). Cause and effect in biology. Science, 134, 1501–1506.

The other papers from this session focused on mental representations and causal functional connectivity in the brain, respectively.

Sloman, S.A. and Lagnado, D. (2015). Causality in Thought. Annual Review of Psychology, 66, 223–247.

While Bayesian approaches are good for theory-building, they are an incomplete account of what goes on in the cognitive world.

Biswas, R. and Shlizerman, E. (2022). Statistical perspective on functional and causal neural connectomics: The Time-Aware PC algorithm. PLoS Computational Biology, 18(11), e1010653.

The fourth session (August 19) picked up on a point covered in the second session, namely how causality can be inferred from network data. This covered the related ideas of transitivity, weak interactions, and anti-causal models. Papers for this session included networks in ecology, anticipative and non-anticipative control theory, and anti-causal systems.

Typology of causal models for past, present, and future events.

Naghshtabrizi, P. and Hespanha, J.P. (2006). Anticipative and non-anticipative controller design for network control systems. Lecture Notes in Control and Information Science, 331.

Sugihara, G., May, R., Ye, H., Hsieh, C-H., Deyle, E., Fogarty, M., Munch, S. (2012). Detecting Causality in Complex Ecosystems. Science, 338, 496–500.

Chattopadhyay, I. (2014). Causality Networks. arXiv, 1406.6651.

Anticausal System, Wikipedia.

McCurdy, T. (2007). Causal Systems: understanding the basics. Physics Forums, September 23.

From the Necessity and Sufficiency Wikipedia entry.

Finally, some fields (cell and molecular biology) have working models of causation that, while useful, are not particularly illuminating. In the cell and molecular biology example, the traditional model of necessity and sufficiency (a mechanism being necessary but not sufficient) can be criticized for not incorporating counterfactuals or multiple potential causes. See this paper for more information:

Bizzarri, M., Brash, D.E., Briscoe, J., Grieneisen, V.A., Stern, C.D., and Levin, M. (2019). A call for a better understanding of causation in cell biology. Nature Reviews Molecular Cell Biology, 20, 261–262.
