The ideas I intend to present in this Substack evolved from a unique conglomeration of personal interests that have ensnared me over the past decade. The origin story begins with my Space Systems Engineering degree at the University of Michigan. Systems Engineering is the study of how the lifecycle (design, build, operate, decommission) of an arbitrary “System” can go as well as possible. In this case, a “System” is an engineered construction designed and built by a large, heterogeneous group of discipline specialists. For Space Systems those specialists design and build rockets and satellites, but there are many different types of systems engineers out there.
At SpaceX, I used this degree to help ensure the software that flies the rockets and satellites was built quickly, efficiently, and to an exacting standard of quality. Generally, what systems engineers actually do is find and solve a dizzying array of coordination problems. Each sub-discipline involved in system development is resource-constrained in terms of time, monetary budgets, and engineering budgets like power and mass. Because of these pressures, sub-disciplines naturally start to develop zero-sum thinking when they work with one another.
To solve this type of issue, most standard industry practices boil down to injecting a third party into the mix who is notionally responsible for resolving the coordination difficulty – i.e. someone to coordinate a shared energy budget, or someone to independently review the safety of any design proposals. This type of solution tends to destroy morale, sapping the energy and agency from everyone involved. I think you’ll agree that such effects are counterproductive to solving the coordination issue that brought about the “solution” in the first place.
At SpaceX I started thinking in earnest about whether there are general lessons, or ways to frame the natural business of engineering, that help navigate these coordination problems more effectively. How can we preserve or even enhance the agency of the domain experts while still ensuring interdisciplinary issues are analyzed with the utmost care and rigor? Given modern software tools, better solutions seem possible than the tried-and-true “insert a third party” approach. On top of that, the promise of software is that it enables a high-fidelity record of each interaction, so that decisions can be reviewed retroactively. This works as a training and coaching feedback mechanism for the experts themselves, as well as a method to identify, isolate, and recover from poor decisions after the fact.
In summary, at SpaceX I worked to remove myself from the product workflow by creating the protocols and interfaces by which products were designed and built. I didn’t want to single-handedly act as an agent in each and every coordination problem, but I did want to prove that each type of coordination problem was rigorously solved by the discipline experts themselves.
After I left SpaceX, from 2018 to 2023 I was employed as a particular type of software developer: a “software release engineer”. In this role I designed the software infrastructure that acts as an automated factory of sorts for building and testing my employer’s application software. Here I built the tools whose sole job is to make those aforementioned coordination problems less tricky to navigate.
Alongside all of this, the nature of mind and matter has always fascinated me, and I love what you might call pop philosophy — Stephen West’s Philosophize This! podcast is one of my favorites — but I don’t have any real philosophical education. Nature and mind show up in other interests of mine, such as how I love to learn whatever I can from my wife, Dr. Brandalyn Riedel, a Neuroscience PhD, or how I avidly follow the latest advances in both Machine Learning and the boundaries of math and physics (mostly by reading Quanta, Nautilus, and MIT Technology Review).
In 2022, these threads culminated in a few notable developments. First, I started listening to Prof. John Vervaeke’s “Awakening from the Meaning Crisis”. He introduced me to the question of meaning itself, as well as a few terms he has popularized that really resonate with me, such as “Relevance Realization”, “Salience Landscape”, “Transjectivity”, and the Four Ways of Knowing. Entering this intellectual orbit also introduced me to Tyson Yunkaporta’s work Sand Talk. This, plus earlier ruminations I had about the power of viewing the self in terms of interrelated independent agents (heavily inspired by the sequence Multiagent Models of Mind by Kaj Sotala), brought clearly into focus the timeless wisdom of animist ways of thinking. As profoundly social creatures, our minds are primed for solving social problems, so why not harness that capacity by inserting it into the most basic elements of our ontology? In general, this led me to become convinced that we as a society need to radically embrace pluralism as a core axiom within our thought and social structures. The clearest exposition of this inherently political stance is the essay Why I Am a Pluralist, by E. Glen Weyl.
The other intellectual flâneuring thread that connects to this work is my perverse fascination with the social and technological developments related to machine learning and the pursuit of Artificial General Intelligence. Last year (2022), my wife and I were finalists in a worldbuilding contest hosted by the Future of Life Institute. The objective of this contest was to imagine a generally optimistic scenario of what the world could look like in the year 2045 within the constraints provided by the contest, the most notable of which is that in 2045 AGI is real and plays an important role in how the world works. As part of this exercise I read extensively on the latest arguments about AI Alignment, such as those put forward in Nick Bostrom’s 2014 book Superintelligence or in posts to alignmentforum.org.
In case you have not followed this discourse, the concern around Alignment goes something like this: We are designing intelligence, and we know that the substrates we’re designing that intelligence upon are much more fungible than a human brain, so it can expand its available ‘computing power’ in a manner no human mind is capable of. We have no idea of the fundamental limits of intelligence, nor of any inherent limits to recursive self-improvement. Therefore we cannot rule out the possibility of this intelligence bootstrapping itself and reaching a truly godlike ability to manipulate us and our world. In addition, we have no reason to trust that if such an intelligence did explode in capability it would in any way care about or value human (or planetary) well-being. The study of AI Alignment is then the study of this problem: finding ways to understand and control the potential risk posed by the continuing advancement of the frontiers of Artificial Intelligence research.
On one hand I cannot argue with the premises behind “Alignment”, but on the other something feels off. There is a strong set of implicit assumptions embedded in the worldview behind the very premise of the problem. AI is developed and analyzed as a tool to further “our” aims. This framing treats intelligence and agency as independent variables, without any acknowledgement that neither has any meaning without the other. To a large degree, Animistic Agency and other ideas I present here are my attempts at reframing the very premise of the Alignment problem into a more tractable set of concepts. I believe I have made progress, but I’m too isolated in my thinking to know whether that is the case. Hence, think of this Substack as my attempt to broadcast that reframing in order to dialogue with those of you who are also interested in this problem domain.
-Andrew Lyjak