Everyday Chaos by David Weinberger is a hard book to pin down.

It's partly about the future, and computers, and AI, and how we navigate an ever-changing world filled with uncertainty. How much control are we going to cede to algorithms? How are we going to judge them? Who is going to get to make these decisions?

But in the middle of reading it a few days ago, I said to my friend, "It's about... um..." and I stared off into space trying to finish the sentence.

Because it's also about the nature of reality, and our ability to make predictions, and the differences among models, and simulations, and scientific theory.

It made me uncomfortable.

I kept thinking, "No, wait... that can't be right." But I kept reading, because somebody who has bothered to write a book probably has some insights, and just because those insights make me uncomfortable doesn't mean that they are wrong.

The parts that make me uncomfortable point at a decreased relevance of what I think of as science, and I'm terribly fond of science.

But Is It Science?

I have been contemplating this for some time, studying Data Science while comparing it to my past work in physics. Physics, I think, is still the prototypical "science," with theory and experiment informing one another. It has (in principle) testable hypotheses and falsifiable models, and makes predictions about the world that we then check by measuring them. You know... science.

I know that that is only one way of defining science. Categorizing, perceiving patterns and describing them... these are all processes of investigation that proceed without the tidiness of "F = ma." Yet they still proceed apace, and they expand our knowledge in clearly scientific ways.

My father is a biologist, and I eventually became a mostly-sociologist; I'm not here to disparage the other sciences.

Aside: If you happen to have 24 spare hours or a long commute, CBC radio made a fabulous series called, "How to Think About Science." I highly recommend it.

Back to the Book at Hand

In this book Weinberger hints at a coming paradigm shift away from the idea of explicability and these sorts of theoretical models: away from the idea that there are simple underlying mechanisms that we can discover, if only we think about it long enough, and hard enough, and dedicate enough resources to the pursuit, and towards the acceptance of good-enough simulated AI-based models as being... well... good enough.

This is an uncomfortable possibility to entertain as a thinker-about-science. I want to know why things are the way they are, not just what things are likely to be. I don't want to give up mechanisms in favour of pure empiricism. That's why I left engineering to become a scientist in the first place. I have... worries.

Some concerns

This book hints at chaos and complexity, but steers clear of the math, and I think that there is some important stuff in the math. (1)

The thing about chaos and complexity is that things are predictably unpredictable. You can observe and model chaotic systems and make claims about how they are likely to wind up: for example, what the attractors (the states or cycles the system settles into) are, and how rapidly trajectories starting from slightly different initial conditions diverge. (2)
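The book steers clear of the math, but that divergence rate is a computable quantity. As a sketch (my example, not the book's), here is a numerical estimate of the Lyapunov exponent, the average exponential rate at which nearby trajectories separate, for the logistic map, a textbook chaotic system. A positive exponent means chaos; a negative one means nearby trajectories converge onto a stable attractor.

```python
import math

def lyapunov_logistic(r, x0=0.3, n=100_000, burn_in=1_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x).

    The exponent is the long-run average of log|f'(x)| along the orbit,
    where f'(x) = r*(1 - 2x). Positive: nearby trajectories diverge
    exponentially (chaos). Negative: they converge (a stable attractor).
    """
    x = x0
    for _ in range(burn_in):              # discard the initial transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))   # log of local stretching
        x = r * x * (1 - x)
    return total / n

# r = 4.0 is fully chaotic; the exact exponent is ln 2, about 0.693
print(lyapunov_logistic(4.0))
# r = 2.9 settles onto a fixed point; the estimate comes out negative
print(lyapunov_logistic(2.9))
```

The parameter values and sample sizes here are arbitrary choices for illustration; the point is only that "how fast do nearby trajectories fly apart?" has a precise, measurable answer.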

And I did take issue with at least one part. In a chapter titled, "Beyond Causality," Weinberger quotes Daniel Hillis:

Causal explanations "do not exist in nature" and are "just our feeble attempts to force a storytelling framework onto systems that do not work like stories."

But one of the weirdest things about genuinely chaotic systems is that they are deterministic. Not random, not probabilistic, but step-by-step governed by the previous state of the system, at least the way we currently understand and define them. (3)

With identical starting conditions, the system will proceed to the same end state via the same 'trajectory.' The problem for prediction is that tiny changes in the starting conditions (out in the tail of the decimal points) send the system off down a completely different trajectory.

But there is a big difference between saying (probably correctly) that we can never have sufficient precision to predict the trajectory and outright throwing out the idea of causality.
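To make that determinism concrete, here is a minimal sketch (again mine, not Weinberger's) using the logistic map: identical starting conditions reproduce the identical trajectory every time, while a perturbation out in the tail of the decimal points eventually sends the orbit somewhere completely different.

```python
def logistic_orbit(x0, steps, r=4.0):
    """Iterate x -> r*x*(1-x); at r = 4.0 the map is chaotic."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Deterministic: the same start gives exactly the same trajectory, every run.
a = logistic_orbit(0.2, 50)
b = logistic_orbit(0.2, 50)
assert a == b

# Sensitive: perturb the twelfth decimal place and watch it grow.
c = logistic_orbit(0.2 + 1e-12, 50)
print(abs(a[10] - c[10]))   # still minuscule after 10 steps
print(abs(a[50] - c[50]))   # typically order one by step 50
```

Nothing random happens anywhere in that code; the unpredictability comes entirely from our finite precision about the starting point.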

The "Beyond Causality" chapter also uses a weird discursive approach, going back and forth between science as a practice and science as a metaphor... in a section titled, "Action at any distance," the author compares collisions and gravity, and then claims that the hyperlink is an example of a technology whose "gravity of interoperability does not diminish over distance."

Um. No. The hyperlink (as far as we know) is confined to planet Earth, and is still subject to the laws of physics and the speed of light. I just spent a whole day reading about latency problems and network performance, and how to situate your data centres and maintain your TCP connections to minimize lag. (That's an amazing book, entirely by the way.)

The hyperlink operates faster than a series of phone calls, and can duplicate information many orders of magnitude faster than a printing press, but it is not a special thing that is no longer subject to the laws of causality. It's just that gravity operates over scales of light years, and hyperlinks are all on the surface of the same 13,000 km sphere. (4)

More to the point, we also have (in physics again... sorry, it's my baby) the idea of correlation distances (and relaxation times). Yes, hyperlinks create astonishing levels of interconnection, but we can study the networks thus formed, and they still show predictable patterns: the correlations diminish over time and space. Hyperlinks go stale. Viral videos run their course. People stop passing them on.

The fact that we haven't been able to (and may not be able to) perceive the causal links doesn't mean that they don't exist. There's a leap in logic there, and I don't think he justified it.

To quote a sociologist friend of mine: "But/And"

Our existing processes and structures are not explicable, and... it is (perhaps?) unfair to demand that AI be more transparent than what we already have.

At the same time, I found myself nodding at many other sections, and thinking, "Hrm," a lot. At one point, he made the not-unreasonable argument that our existing processes and structures are not explicable, and that it is (perhaps?) unfair to demand that AI be more transparent than what we already have.

Picture me gazing off into the distance and contemplating.

Is it, then?

It seems like a good point, but I still find myself wondering about how complex we can allow systems to get and still expect people to be able to function in them. If you are allowing an AI to make hiring decisions, how is anybody ever going to be able to navigate it? If the model changes every few months (or days, or minutes), and you are trying to accomplish something... how are you ever going to guess what the process is to do that?

But/and in the other direction (on the gripping hand, as it were), it occurs to me that the same lack of transparency, the mysterious and arcane processes by which (for example) people are admitted to Ivy League colleges, is exactly what leads to grade inflation, SAT prep courses, and the Admissions Scandal... people are trying to manipulate and game a system, and (presumably) they perceive something about how the "game" (5) works that the rest of us don't have access to. Or at least they think they do.

Interestingly, this very point about interactions between people and the systems they are trying to navigate becomes key to the end of the book.

The structural and the systematic come into contact with, impact, and are built from the particular

As Weinberger points out, the structural and the systematic come into contact with, impact, and are built from the particular. They are emergent properties that cannot be predicted from the underlying components. Order will arise from the chaos, but we have no way of knowing exactly what, or when, or what the precipitating event will be.

Which, uncomfortably, lends more weight to his argument that we need to give up the illusion that we can predict the future. Oh, no... my poor science-loving, predictable-world-wanting heart.

"Make. More. Meaning."

The ending of the book, though, warms the cockles of my heart. The final chapter is titled, "Make. More. Meaning." I'm tempted to quote a whole paragraph out of it, and add a [Spoilers] tag. (6)(7)

The most surprising turn in this book was towards the Ethic of Caring, which I first encountered in a course on Feminist Pedagogy. In this section, he says that we do need to concern ourselves with the particular when we reason about the ethical implications of technology and the systems and structures that it creates. This section of the book is not the most challenging for me, because I've tripped over these ideas and thought about them before.

There was a lot to chew on in this book, and I'm quite sure that I'm not finished with it. (I'm still grappling with several of the thinkers from that CBC series, and I listened to that years ago.) I highly recommend that you read it if you are in any way interested in these issues. Then send me an email; I'd love to discuss it more!

_________________________

  1. I looked up Lyapunov exponents in the index, for example, and they were not there.
  2. The rapidity of that divergence is what the Lyapunov exponents describe.
  3. I did stop in the middle of this sentence to consult with another physicist about whether he thinks this is an artifact of the mathematics or whether it reflects a deeper reality. We're still attached to the deeper reality.
  4. Ok. Let's include the satellites through which they are relayed... it's still a very very small part of the universe.
  5. That is, the model that is embedded in the structures, rather than in a computer algorithm.
  6. Can you spoil a non-fiction book? Are arguments like narrative? Do we need surprises at the end?
  7. You're lucky. I had to return the book to the library and now I can't spoil the ending for you. Now you get to read it yourself!!! :)

This was originally published on LinkedIn.