Jasun Horsley takes us on a tour of our AI future

Summary: The rise of semi-intelligent machines – powered by algorithms – will change the world in ways we cannot even imagine. Only looking at the biggest picture with the most open mind will let us see the future. Here artist Jasun Horsley gives us different perspectives on our AI future.

Maria in Fritz Lang’s “Metropolis” (1927).

The Lost Language of the Body

By Jasun Horsley at his website, Auticulture.
Posted with his generous permission. Graphics and links added.

Algorithms, Occultism, & the Limits of Knowledge.

Science, Religion, Dogma.

“This figure of the algorithm as a quasi-mystical structure of implemented knowledge is both pervasive and poorly understood. We have never been closer to making the metaphor of fully implemented computational knowledge real than we are today, when an explosion of platforms and systems is reinventing cult practice and identity, often by implementing a me downloaded as an app or set up as an online service.”

What Algorithms Want: Imagination in the Age of Computing by Ed Finn (2017, The MIT Press).

There is a joke among computer coders: “Software and cathedrals are much the same – first we build them, then we pray.” The joke has more than a kernel of truth. In a similar way to religion, reliance upon computer code, software, and algorithms is an act of faith.

It’s only in recent years that ordinary people – end-users – have become fully cognizant of this, as the architecture of algorithm-directed technology has steadily encroached into our inner realms. For “The architecture of code relies on a structure of belief as well as a logical organization of bits” (Finn, p. 6). We appear to be locked into a symbiotic relationship, one between our consciousness and our technology. With culture – which is at the root of worship [1] – as the binding medium.

  1. The term “cult” first appeared in English in 1617, derived from the French culte, meaning “worship,” which in turn originated from the Latin word cultus meaning “care, cultivation, worship.” See Wikipedia.

More and more with each passing day, just as we once did with religion, we are placing our faith and trust in algorithms to determine our decisions. At the same time, it’s not entirely clear which is the original model here – science or religion – because, if we go back to ancient Egypt, there is evidence for both a “sacred science” and a scientistic kind of religion. As it was at the beginning, so it will be at the end, perhaps, because this seems to be the sort of society we are turning into. Finn writes …

“{T}he house of God that exists beyond physical reality: transubstantiation, relics, and ceremonies are all part of the spectacle of the cathedral that reflect the invisible machinery of faith. Yet most of that machinery inevitably remains hidden: schisms, budgets, scandals, doctrinal inconsistencies, and other elements of what a software engineer might call the “back-end” of the cathedral are not part of the physical or spiritual facade presented to the world.” (p. 7)

The perilous intersection between science and religion is called “scientism.” In odd, perhaps surprising, ways, these supposed enemies make quite cozy bedfellows. Both religion and science offer an interpretation of reality that claims to be absolute and final, even while acknowledging a degree of incompleteness. For Christianity, there’s still a “revelation” to come, things yet to unfold. And so it is with science, in which there is (generally) an admission of things still to be worked out. Yet both offer an all-encompassing interpretation of reality, along with the promise that their method – and this is the key – is sound and valid, and provides all that’s required to fully understand existence.

China’s Sunway TaihuLight Supercomputer, third most powerful in the world.

A Computational Theocracy.

Returning to Finn’s book …

“A cathedral is a space for collective belief, a structure that embodies a framework of understandings about the world, some visible and some not. [W]e have fallen into a ‘computational theocracy’ that replaces God with the algorithm: ‘Our supposedly algorithmic culture is not a material phenomenon so much as a devotional one, a supplication made to the computers people have allowed to replace gods in their minds, even as they simultaneously claim that science has made us impervious to religion.’ We have …adopted a faith-based relationship with the algorithmic culture machines that navigate us through city streets, recommend movies to us, and provide us with answers to search queries.” (p. 7)

The more we get into this algorithmic state of consciousness, the more we are replacing a direct sensory experience of our physical environment with a technologically mediated one. Eventually, there will be no need to refer directly to organic reality at all. (I had to replace the word “physical” with “organic,” since even a virtual realm has some physical aspects.)

As far as I know, members of the intelligentsia who claim to believe we’re living in a simulation generally don’t have a hypothesis about where our real bodies are. I presume this is partially because, if they started to try and hypothesize where their real bodies are, they would start to sound like idiots. If we’re in a simulation, either we are code that is also simulated, in which case it’s all irrelevant, game over; or, our bodies are somewhere else, and we have to figure out how to get back to them.

Probably, simulation theory is so compelling because it works as a metaphor, and metaphors have enormous power over our consciousness. The metaphor in question seems to have to do with how both scientific and religious dogma, when too heavily relied upon, become traps; and maybe this is due to how, at a certain point, they renege on their own principles? Scientism happens when science betrays itself by raising up the scientific method to the apex of a pyramid that is supposed to represent all of existence. A truly rigorous scientific method has to leave space for things that cannot be understood through the scientific method – in other words, for “divine revelation.”

In the same way, religion betrays itself by turning divine revelation into dogma, which breaks the covenant of divine revelation. In order to know anything, we need divine revelation – reference to God; but in order to know that, we need to refer to a scripture that has been received through divine revelation. This means that holy scripture is telling us that, essentially, we can’t trust holy scripture. The Bible doesn’t say this, of course. It doesn’t say “You cannot trust this book,” because this would be both self-contradictory and self-sabotaging. It’s the cosmological equivalent of the Cretan warning that “all Cretans are liars.”

Enter Occultism.

There is another ideological framework (besides scientism) that has often been described as a synthesis of religion and science, and that is occultism. In Charles Upton’s 2018 book, Dugin Against Dugin: A Traditionalist Critique of the Fourth Political Theory, Upton describes a kind of magical “creative visualization” that either rejects “an objective metaphysical order” entirely or is blind to the need to conform to that order as “the precondition for any spiritually-based action.” He argues that magical thinking of this sort has become “a central praxis in a post-structuralist world.”

“And the notion that belief is a tool,” he continues, “that the use of words is not primarily to express truth but rather to make things happen, is obviously also an integral part not only of the craft of magic but of the practice of politics – right, left or center, green, red or blue in today’s world.”

This is also a good description of computing and of the function of code – not exactly “first build it, then pray,” but rather that prayer is an essential component in the building of (the ritual of entering) these virtual realms. Just so, computer code doesn’t actually describe or express anything real, but it’s becoming more and more efficient at causing things to happen (HTML code, CGI, and so on). If it can be made operational, it will bring about changes in what we recognize as “reality.” If we are living in a “post-truth” world, it is because belief has become a tool to generate artificial realities rather than a conduit to understanding objective reality, which is accordingly rendered obsolete, like God and Patriarchy. Truth then becomes nothing more than whatever people can be persuaded to believe it is.

There is a curious void at the center of this circle. Belief in magic is necessary to make magic effective. Magic is a tool, or a method, for manipulating perception that can thereby “restructure reality.” Yet a reality that can be restructured by human whim throws into doubt the very possibility of objective reality. This ideology is self-confirming; but also self-contradicting. It depends on affirming the belief that there is no objective, eternal reality, that there is no higher spiritual principle outside of the temporary and the subjective.

In occultism, these are the psychic realms, inter-subjective realms that are susceptible to influence by our own will and belief, yet which also allow us to affect other people’s subjective experience. For this reason, they provide us with the feeling of power – to alter and even generate reality by convincing others to submit to or enter into our own dream-state.

Both religion and science claim to offer a universal route to truth, a claim which rests on the assertion of an objective reality. Occultism – like postmodernism and its offspring, identity politics – seems to wish to trump both by making such an assertion both obsolete and unnecessary. If so, the idea of occultism as the synthesis of religion and science doesn’t hold up to closer inspection: a more accurate description would be that occultism has co-opted science, in order to turn it into a new religion. And that it has reformatted religion, to create a kind of pseudo-science.

It may even be (since Newton and many other pioneers of Western science were alchemists and astrologers) that occultism has created what we think of as Western science, as a Trojan Horse for itself.


The Apple of Knowledge.

How does all this relate to algorithms? One way to define algorithms is as a set of symbols that function to interpret reality, combined with a computational model that will measure the changes in reality. And magic is “the Science and Art of causing Change to occur in conformity with Will,” as Aleister Crowley defined it in his system of Thelema.
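Taken playfully literally, that definition can be sketched in a few lines of Python: a toy “algorithm” made of (1) a symbol set that interprets raw observations and (2) a computation that measures how the resulting model of reality changes. Everything here – the symbols, the rules, the temperature example – is invented for illustration, not drawn from any real system:

```python
# A toy illustration of "algorithm" as defined above: a symbol set that
# interprets reality, plus a computation measuring change in the model.
# All names and rules here are invented for the sake of the example.

SYMBOLS = {
    "hot": lambda t: t > 25,   # interpretive lens: what counts as "hot"
    "cold": lambda t: t < 10,  # what counts as "cold"
}

def interpret(temperature):
    """Reduce a raw observation to whatever symbols the lens can see."""
    return {name for name, rule in SYMBOLS.items() if rule(temperature)}

def measure_change(before, after):
    """Count how much the symbolic picture of the world has shifted."""
    return len(before.symmetric_difference(after))

state = interpret(30)       # yields {"hot"}
new_state = interpret(5)    # yields {"cold"}
print(measure_change(state, new_state))  # prints 2: "hot" lost, "cold" gained
```

Note what happens to a temperature of 15: it interprets to the empty set. Whatever the lens cannot symbolize simply vanishes from the model – the map-without-territory problem, discussed below, in miniature.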

Occultism, in part at least, is about gathering knowledge – which is to say a set of symbolic beliefs – in such a way that it can be used to effect change, by reinterpreting the world through that lens. Finn writes …

“Through black boxes, cleanly designed dashboards, and obfuscating application program interfaces, we are asked to take this computation on faith. …And we believe it because we have lived with this myth of the algorithm for a long time, much longer than computational pioneers Alan Turing or even Charles Babbage and their speculations about Thinking Machines. The cathedral is a pervasive metaphor here, because it offers an ordering logic, a super structure or ontology, for how we organize meaning in our lives.”

The creation of a system of knowledge that synthesizes all symbols is akin to “the one-world religion” of scientism so feared (not wrongly) by Christian conspiracists. It can be traced back at least to the Enlightenment, but presumably further. Today it is taking a concretized, manifest form through the computerized superstructure of “the global village.” The ascended algorithm is the new totem and taboo that regulate our thoughts, perceptions, and behaviors.

“The problem we are struggling with today is not that we have turned computation into a cathedral, but that computation has increasingly replaced a cathedral that was already here. This is the Cathedral of the Enlightenment’s ambition for a universal system of knowledge. When we juxtapose the two, we invest our faith into a series of implemented systems that promise to do the work of rationalism on our behalf, from the automated factory to automated science. Computation offers a pathway for consilience, or the unification of all fields of knowledge into a single tree, an ontology of information, founded on the idea that computation is a universal solvent that can untangle any complex system, from human consciousness to the universe itself.”

It’s not simply that we are seeing algorithms in action, then, but that we’re becoming algorithmic ourselves. When we create a system of knowledge, and believe it’s complete or wholly accurate (when it isn’t), we effectively surrender all aspects of our experience that can’t be explained by that knowledge set over to it. It is like creating a map and then referring to it so blindly that we cease checking it against the territory: we end up lost. Worse, we end up compounding the error because our faith in the map (the algorithm cathedral) is so unshakeable that we no longer trust our senses to course-correct. We end up pretending that there is no territory, at all, and that the map is all we need.

The simplest way to understand this is by referring to the bodily senses. Our sensory experience in any given moment far outstrips the capacity of our minds to flatten it out into a linear narrative. Think about (!) trying to describe, mentally, all of the sensory data we are receiving and processing via our bodies – both internally and externally – in any moment, and perform this fast enough that we never fall behind. We may as well try and count snowflakes in a blizzard.

The more we try and process our lived experience through algorithms of knowledge, mind, and technology, through social media and phone apps, the less we are able to experience the living reality unfolding outside the confines of our minds. Of course, the conceptual realm comes up with an endless menu of reasons to stay plugged in, all driven by FOMO – the Fear Of Missing Out. By such subterfuges, our thoughts about snow become more compelling than snow itself, and our smartphone interactions become more appealing than face-to-face encounters. Once the mind-tech has us, the supposedly essential data it is providing becomes secondary, even irrelevant, to the buzz provided by the tech itself. The medium has become the message, and it is we who are being mediated.

Eventually, we may decide never to leave the techno-mind realm. We may start to believe it is all there is, that there is no outer reality being referred to, because outside, where the blizzard rages, reality has become overwhelming to us. As we move further and further away from our bodies, we may end up telling ourselves they don’t exist, that we’re just consciousness, flying free and forever young like Peter Pan, inside a simulated dream realm of endless permutations.

The paradox about systems of knowledge is that, like simulations, they’re designed to help us navigate our experience, to understand better so we can live better lives. They’re designed to help us get free of whatever oppresses us, to solve problems and improve our circumstances. But the more immersed we become inside any system of knowledge, the more we convince ourselves it’s infallible, complete, and all-contained, the more trapped we become by it.

If such progress is allowed to continue indefinitely, we may regress to a literally infantile state, in which we need our technology to feed us, wash us, clothe us, and remove our bodily wastes. We will have been assimilated.

The best AI film ever.

Colossus - The Forbin Project
Available at Amazon.

Outside the Black Box.

So is there a way out of this trap, when we can’t even have a conversation without referring to a system of knowledge?

If knowledge – perceptual experience that coagulates into code – is what has ensnared us, over and over throughout the ages, is there a way to use this awareness to break the pattern and sneak past the ancient algorithms imprinted onto our souls, to freedom? Can we use a nail to drive out a nail? In other words, is there some way of approaching systems of knowledge that leads us away from reliance upon them rather than to increased dependence, without rejecting the systems outright? Can we apply knowledge in such a way that we can see the limits of our knowledge, without reifying the knowledge we’re using to see those limits?

To ponder this might be our opportunity, as techno-mediated humans, to experience an almost literal case of the head-fuck. And it’s no doubt fitting, if ironic, that such an anti-Promethean task seems akin to a kind of self-deprogramming. (We must know our enemy in order to know ourselves.)  Just as the programmer is not the program, truth is not located in any knowledge set, but in the consciousness that assembled it – ours.

We are left like the heroine of many myths, surrounded by seeds – endlessly streaming digital code – with barely a clue of how to ever sort the ones from the zeroes. The only hope seems to be – if we can crack open enough of those data bytes to rediscover the original language-transmission (pre-Tower of Babel) hiding, like a nut inside a shell, inside them – we may start to remember, dimly but with a growing sense of excitement, that the signal we are seeking is within ourselves.

Simply stated: what if the body is the only algorithm we need to locate our souls?

—————————————–

“Unless you expect the unexpected you will never find truth, for it is difficult to discover.”
— Heraclitus, the pre-Socratic “Weeping Philosopher” of Ionia (Wikipedia).

Jasun Horsley

About the author

Jasun Horsley is a writer, filmmaker, artist, and musician. He is on the autism spectrum and sees creativity, spirituality, and the autistic experience (in its purest form) as synonymous: a going inward in order to make sense of what’s outside, and vice versa. See his website, AutiCulture, and his bio.

See his Twitter feed @JaKephas. See his YouTube channel. Here are some of his books …

The Lucid View: Investigations into Occultism, Ufology, and Paranoid Awareness: 2013 Edition (2013) – “It traces parallels between paranoid awareness, magikal thinking, and Surrealism, and finds they all share a common goal: to forge new perspectives out of the raw material of the unconscious, and to shatter the frozen lake of the consensus with the axe of the impossible.” Published as Aeolus Kephas.

Seen and Not Seen: Confessions of a Movie Autist (2015) – “What’s the difference between entertainment, instruction, and ideology?”

Prisoner of Infinity: UFOs, Social Engineering, and the Psychology of Fragmentation (2018) – “It examines modern day accounts of UFOs, alien abductions, and psychism to uncover a century-long program of psychological fragmentation, collective indoctrination, and covert cultural, social, and mythic engineering.”

The Vice of Kings: How Socialism, Occultism, and the Sexual Revolution Engineered a Culture of Abuse (2019) – “It uncovers an alarming body of evidence that organized child abuse is not only the dark side of occultism, but the shadowy secret at the heart of culture, both ancient and modern.”

For More Information

Ideas! For shopping ideas, see my recommended books and films at Amazon.

The new industrial revolution has begun. New research shows more robots = fewer jobs. Also see the famous book by Wassily Leontief (Nobel laureate in economics), The Future Impact of Automation on Workers (1986). Also see the Frequently Asked Questions page at the website of the Machine Intelligence Research Institute.

If you liked this post, like us on Facebook and follow us on Twitter. See all posts about robots and automation, and especially these…

  1. How Robots & Algorithms Are Taking Over.
  2. Economists show the perils and potential of the coming robot revolution.
  3. Three visions of our future after the robot revolution.
  4. Films show us how smart machines will reshape the world.
  5. Machines take another big step to superintelligence.

Books about the coming of smart machines

Rise of the Robots: Technology and the Threat of a Jobless Future by Martin Ford (2015).

The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies by Erik Brynjolfsson and Andrew McAfee (2014).


30 thoughts on “Jasun Horsley takes us on a tour of our AI future”

    1. Isaac,

      (1) That’s a powerful insight! We’re living in an episode of “Black Mirror.” More apropos than my usual “Twilight Zone” analogy.

(2) Lesson for Palmer: don’t write after that second line. Or fourth shot. Or whatever. It’s a rant, with the level of illogic typical of rants. Palmer castigates Greenwald for writing about what interests Greenwald, not what interests Palmer (I get such comments occasionally, and mock them). He condemns Greenwald for saying bad things, but never explains Greenwald’s reasoning – or gives a rebuttal.

      It reads like the script of a two-minute hate.

      1. Isaac Gaston

Okay, sorry about that. I thought the article was somewhat interesting, as I have seen conflicts of this sort back and forth over the years. For example, Sam Harris and Greenwald’s feuding.

        I can send you a link if you wish.

        PS despite the irrationality of the article, was there any insight in it?

    2. I like Glenn Greenwald and generally think he is an excellent and principled journalist. However I also think he was/is wrong about Charlie Hebdo and ought to rethink his position.

  1. Isaac Gaston

Thank you. I brought this to your attention because I have noticed similar kerfuffles over the years, for example the conflict between Sam Harris and Greenwald. If you want, I can send you links.

    PS was there anything in the article that warrants attention? Also, could you be a bit more specific as to its problems? I know that it seems to be begging the question, but I am curious.

    1. Isaac,

Greenwald has become persona non grata for many on the Left, reflecting their new total intolerance for dissent. Hence the frequency with which they are mocked, with reason, as NPCs (i.e., non-player characters in computer games).

I don’t bother reading rants, sifting through the gibberish for insights. I have a long list of high-quality material to read, more than I could do in 48-hour-long days.

      1. Isaac Gaston

That is understandable. Sam Harris and other members of the proverbial Intellectual Dark Web have had the same thing happen to them too, with Harris himself being labeled as alt-right.

The kerfuffle between Greenwald and Harris cropped up when Greenwald accused Harris of being a racist, due to his critiques of Islam. While Harris is a member of the New Atheist movement, this critique seems somewhat unjust, seeing as Harris critiques pretty much every religion there has ever been. They exchanged some emails and have had bad blood ever since.

        Thank you for this discussion.

PS: Sorry about the double post. I am really bad with technology: another reason not to like our Black Mirror world.

  2. Afraid this is several pages of pure nonsense. As an example:

    If knowledge – perceptual experience that coagulates into code – is what has ensnared us, over and over throughout the ages, is there a way to use this awareness to break the pattern and sneak past the ancient algorithms imprinted onto our souls, to freedom? Can we use a nail to drive out a nail? In other words, is there some way of approaching systems of knowledge that leads us away from reliance upon them rather than to increased dependence, without rejecting the systems outright? Can we apply knowledge in such a way that we can see the limits of our knowledge, without reifying the knowledge we’re using to see those limits?

How does personal experience coagulate into code? What does it look like? How exactly has knowledge ensnared us? The next phrases, “breaking the pattern” and “sneak past”… And then the wonderful phrase, “in other words”… Whatever he was trying to say in the previous phrases, the idea that the incomprehensible next passage is in some way a restatement of it is truly bizarre.

    The author seems to have spent too much time on the Left Bank in the Parisian nonsense factory, perhaps after taking a long draught of Hegel, and needs to come back to Planet Earth and take Analytical Philosophy 101. Learn to use language to say things.

    I guess it must be a long prose poem perhaps? But a very bad one.

    1. henrik,

      Don’t judge what you don’t understand. I lived for 30 years in the San Francisco Bay Area. Computer guys often talk like this.

      “How does personal experience coagulate into code?”

Coding is creating a representation or model of knowledge and experience. This is a commonplace insight in computer science, and a foundational insight in AI. AI Magazine, produced by the AAAI, has many, many articles discussing this.

Larry, it’s the word ‘coagulate’. What on earth does this mean? How do we tell if it’s happened? This is a way of talking that is completely unhelpful. It does not lead to understanding, because it’s empty of clear and specific meaning. It will turn out when you probe that the metaphor adds nothing, and that what is being said is either false or trivial.

        Perhaps what he is trying to say is, when we write code (and I have) we draw on our personal experience? Yes, of course we do. When we do almost anything we draw on personal experience. So what?

        He says that knowledge has ensnared us. What does this mean? How can knowledge do any such thing? Is he saying that sometimes we do not know what we think we know? Obviously.

        Is he saying that sometimes what we think we know gets in the way of recognizing counter examples? Yes, of course.

        But neither one has any profound implications, or any that are specific to AI.

        What I am judging is that there is nothing there to understand. I don’t know if the author has any ideas about how AI is going to evolve and affect us. But if he has, he hasn’t put them in terms which are understandable enough to argue about.

You notice in this thread no one is coming and saying they think he is wrong on this or that particular point. That’s because it’s impossible to extract anything specific to agree or disagree with.

      2. Henrik,

I suggest reading some of the literature about AI and related subjects: books by Arthur Koestler, Ray Kurzweil, Nick Bostrom, Max Tegmark, and others. Much of it is in the same vein as Jasun’s article.

        Just because you don’t like it or gain much from it, does not mean it has no value.

      3. I’ve profited from your reading suggestions in the past and will follow up on these. As you say, the problem is having too many important and interesting things on the to read list. But I will have a go.

I would say, to be more constructive, that I agree with Larry that AI is going to make very profound differences to how we live and even how we feel. I am not sure yet whether we will ever – or, more important, any time soon – get to the point where we find we are dealing with entities which appear to be on an equal level with us: similarly autonomous, and predictable or unpredictable in the same way we are.

    But it does seem clear that even in a much more restricted context, of machines designed to self learn and perform tasks which are increasingly ones central to how we live, their arrival and particularly self learning will have profound effects.

I don’t feel, however, that the post helps us get any grasp or insight into how, to what extent, or when this is going to happen. This is surely best approached not with rhetoric and metaphor, but with quite concrete imagining of specific scenarios. It’s very hard, but this is because it’s predicting the future. First, what will happen? And this is hard enough. And then, how will our society and our way of thinking and feeling change in response? And this is even harder.

    If we recall how poorly people did predicting the evolution and impact of computers, we see the difficulty. I recall once being told that a senior group of IBMers in the heyday were given a demo of the early PCs.

At last one of them got his head around this interesting oddity. Ah, he said, now I see what you’re showing us. It’s just local computing.

    Right. That is all it was. A very smart terminal, very smart indeed. And yet this was one of those occasions when such an apparently small thing was a revolution.

      1. Jasun,

        Most people have no imagination. It’s not really needed. Sometimes I wonder if it is an asset for most.

More broadly, in my experience (which is considerable, after 16 years at this gig) the prescient posts – those looking beyond the consensus view – receive mostly critical responses. That’s not a guess. I track my predictions – hits and misses. Look at the comments to the hits. Everyone knew our mad wars in Iraq and Afghanistan would be certain victories, that terrorists with MANPADS (e.g., Stingers) and EMPs were an imminent threat, and that Iran was a few years from having a bomb. My posts saying otherwise were condemned as folly.

        I suspect my posts about the new industrial revolution now beginning will prove equally true.

      2. “Most people have no imagination. It’s not really needed. Sometimes I wonder if it is an asset for most.”
        Well, according to some very bright people, imagination could be a measure of intelligence. In America, that would suffice to explain why the education system has been such a failure. Perhaps the One Percent doesn’t need any second guessing or an intelligent resistance and criticism to / of its blunders.
        Is there any other explanation short of “redefining the term of ‘imagination’?”

      3. Jako,

        “Perhaps the One Percent doesn’t need any second guessing or an intelligent resistance and criticism to / of its blunders.”

        Take the tin foil hat off.

As much as I liked the previous post by this author (about liminality), I do have some problems with this one (for the sake of brevity, I will point out just the first one):
    “Both religion and science offer an interpretation of reality that claims to be absolute and final, even while acknowledging a degree of incompleteness.”
It is quite an eye-opening realization what “science” may have degraded into, not in essence, but in popular perception of it (Popper is turning in his grave anyway) — certain “scientific bodies” impersonate the real science and that lends the real science a bad name. The paradigm of science was supposed to be the fallibility of any thesis brought up to scrutiny (aka peer review) and not a “consensus” of a board or any pose of a “fair judgment” of a “court of peers.”

    1. Jako,

      “certain “scientific bodies” impersonate the real science and that lends the real science a bad name.”

Standard right-wing nonsense. Science – whether individuals or the opinions of “bodies” – has often been wrong. Also, there is no such thing as “real science” – unless you are a believer in Platonic philosophy. There is just the process of science, conducted by fallible people.

“The paradigm of science was supposed to be the fallibility of any thesis brought up to scrutiny (aka peer review) and not a ‘consensus’ of a board or any pose of a ‘fair judgment’ of a ‘court of peers.’”

      That’s very confused. The use of a consensus opinion of experts is as an input to the public policy process – not to guide the process of science. You would know this if you had ever even glanced at any of the IPCC’s reports.

      1. “Also, there is no such things as “real science” – unless you are a believer in Platonic philosophy. There is just the process of science, conducted by fallible people.”

        Exactly; ironically, the belief in science as some pure principle is exactly the point I was trying to make: the point at which the scientific method becomes scientism, which of course claims to be Science – or rather, so do the people practicing it.

        Even the notion that scientific method is inherently superior to religious inquiry itself seems to me to be unscientific, ironically. At most, surely a truly scientific method can only say, this is beyond the jurisdiction of the scientific method?

        Useful to get a better sense of what readers aren’t receptive to, though.

      2. Jasun,

        “Useful to get a better sense of what readers aren’t receptive to, though.”

        Look instead at what they want. The FM website specializes in making readers uncomfortable. We have gone through several cycles of pursuing themes that gain a large audience on the Left or Right, then switching to themes that attack their tribal truths – and they flee in horror, like vampires at dawn.

        Modern Americans want to be flattered – that we are bold and strong. Hence the popularity of stories about The Great Day In The Indefinite Future When We Rise Up And Smite Our Foes (big biz, fascists, whites, blacks, conservatives, liberals, etc). We want to be told our tribal truths are the only truth, and that are problems results from the evil others – not ourselves. We ant simple stories about our side (angels) and the evil others.

        Do not mention that the Others have insights we can learn from. Above all, do not mention the “R” word: responsibility. Do not remind them that America was built by people willing to sacrifice their “lives, fortunes, and sacred honor.”

        Most of the major websites about politics follow this formula religiously.

      3. “Even the notion that scientific method is inherently superior to religious inquiry itself seems to me to be unscientific, ironically. At most, surely a truly scientific method can only say, this is beyond the jurisdiction of the scientific method?”

        It’s a confused and too general way of approaching the question: unless we specify what we mean by ‘religious inquiry’, it’s impossible to say whether it’s better or worse than the ‘scientific method’. Impossible even to say whether there is such a thing, and whether it’s even different.

        We probably know well enough what the scientific method is: it’s the method of generating hypotheses and testing them for correctness. Does this thing behave in the way our theory says it should, and if not, why not? Popper gives one illuminating account of it, but whether or not you accept his account completely, the basic element of testing a theory against observations is the essence of the scientific method.

        What is ‘religious inquiry’ in similar terms? What do we have to do to conduct one? How do we know when it has yielded something which we should regard as true? How do we test propositions we arrive at as a result of it?

        The last sentence of the quoted passage is a non sequitur. Why is any proposition beyond the jurisdiction of the scientific method? And what is the force of the metaphorical claim? What does it mean to be ‘beyond the jurisdiction’ of the method? Does it mean that you cannot validly apply the scientific method to some propositions or theories?

        Show me one, then.

        As to what the scientific method says or doesn’t say: that is meaningless. It’s not the kind of thing that can say anything. It’s a thing that we apply to test the truth or falsity of propositions and theories. It’s a method. Methods don’t ‘say’ anything.

        To make any progress in thinking about these matters we have to use language much more carefully, generalize less, get more specific, stop using metaphors which may feel like they illuminate, but which in fact simply confuse or give the illusion of saying something clear when they do not.

        I am going to read at least some of Larry’s book recommendations, but my problem with the piece and the approach is not that I disagree with it; it’s that I don’t think it’s saying anything that I or anyone else can agree or disagree with. I’m not finding anything I can extract to disagree with. The author may have interesting substantive ideas about AI and other topics, but if so, none seem to have found any clear expression in this piece.

  5. To return to AI, may I recommend looking at the chess games of AlphaZero? A selection with notes by Matthew Sadler can be found in the book Game Changer:

    https://www.amazon.co.uk/Game-Changer-AlphaZeros-Groundbreaking-Strategies/dp/9056918184

    You will have to be at least Expert strength to understand them. They are simply astonishing. Here we have an AI program outplaying the previous best, Stockfish, which is itself unbeatable by any current human player, and doing it while playing in a human fashion: that is, examining far fewer possible moves and variations, and finding really deep insights into positions.

    Stockfish is to a large extent a guided brute-force approach. AlphaZero is something entirely different. One very interesting and thought-provoking question it raises is this: we thought that the style of Anderssen and the 19th-century combinational players was refuted and unsound. But when we look at AlphaZero playing in something reminiscent of that style, we have to wonder whether some positions, which we accept as lost on Stockfish’s evaluation, are in fact winning.

    It’s just that humans cannot see their way through them against Stockfish.

    I find it hard to know what to make of it. One counter-argument to astonishment would be: this is just a computer-aided method of finding the objective nature of the position, which in the end consists of a series of variations. All we have done is produce a better algorithm, if you like, than what we had before.

    I see the force of that argument. It’s along the same lines as the argument that modern factory automation is continuous with the early 19th-century use of machines to replace human manual processes, with nothing very new or revolutionary about it.

    On the other hand, when confronted with a performance similar to AlphaZero’s in a more social context, in areas we consider distinctively human, I think the argument would start to seem very dubious. What if, for instance, we found musical works that impressed us as being worthy of Bach being produced by a program? Or a humanoid robot talking and interacting in ways that we could not really tell apart from a human?
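The contrast drawn in the comment above, Stockfish’s guided brute force versus AlphaZero’s far more selective, policy-guided search, can be sketched in miniature. Everything below is an invented toy, not either engine: the game tree, the `leaf_value` evaluation, the `policy` ranking, and the constants `BRANCHING`, `DEPTH`, and `WIDTH` are all illustrative stand-ins. Only the gap in node counts is the point.

```python
# Toy sketch: exhaustive minimax (brute-force style) versus a selective
# search that expands only the moves a "policy" ranks highest.
import random

BRANCHING, DEPTH, WIDTH = 5, 4, 2  # moves per position, search depth, policy cutoff

def leaf_value(path):
    """Deterministic pseudo-random evaluation of a position (a stand-in)."""
    return random.Random(hash(path)).uniform(-1.0, 1.0)

def minimax(path=(), depth=DEPTH, maximizing=True, counter=None):
    """Exhaustive search: every move at every node is examined."""
    counter[0] += 1
    if depth == 0:
        return leaf_value(path)
    children = [minimax(path + (m,), depth - 1, not maximizing, counter)
                for m in range(BRANCHING)]
    return max(children) if maximizing else min(children)

def policy(path):
    """Stand-in for a learned policy: ranks the candidate moves."""
    return sorted(range(BRANCHING),
                  key=lambda m: leaf_value(path + (m,)), reverse=True)

def selective(path=(), depth=DEPTH, maximizing=True, counter=None):
    """Selective search: only the top WIDTH policy moves are expanded."""
    counter[0] += 1
    if depth == 0:
        return leaf_value(path)
    children = [selective(path + (m,), depth - 1, not maximizing, counter)
                for m in policy(path)[:WIDTH]]
    return max(children) if maximizing else min(children)

brute, narrow = [0], [0]
minimax(counter=brute)
selective(counter=narrow)
print(brute[0], narrow[0])  # 781 positions examined vs 31
```

Real AlphaZero couples Monte Carlo tree search to a neural network’s policy and value outputs rather than a fixed cutoff, but the node-count gap (781 positions against 31 in this toy) is the shape of the “examines far fewer variations” observation.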

  6. “I am going to read at least some of Larry’s book recommendations, but my problem with the piece and the approach is not that I disagree with it; it’s that I don’t think it’s saying anything that I or anyone else can agree or disagree with. I’m not finding anything I can extract to disagree with. The author may have interesting substantive ideas about AI and other topics, but if so, none seem to have found any clear expression in this piece.”

    And yet this doesn’t stop you from commenting voluminously on this post, even while you have nothing constructive to add (understandably enough, since you claim to have found nothing substantive in the piece). This raises the question of why you are commenting at all, as opposed to finding an article you do have a useful response to, writing your own article, or just going offline and doing something more useful with your time.

    This is one of the recurring problems with internet forums: everyone wants to express an opinion, but few want to take the time to try and understand what they are opining about.

    To blame the article because you don’t understand it and have no meaningful response to it strikes me as a narcissist’s commentary, and your comments seem only meant to show off your intellectual superiority (at which they fail, but anyway). I also find nothing of any substance in your comments, but I have less of a luxury of ignoring them: they take up the space where readers who do find something meaningful in my article (of which I know there are some, though possibly not here) might otherwise be commenting, if they weren’t put off by this empty posturing.

    1. Jasun,

      Be happy that Henrik closely read the post. That’s a compliment, as is his engagement. That’s a big win in this game.

      Some large fraction of comments are by people who read just the title, and sometimes the summary. Pretty much everyone running active websites is driven crazy by comments, as shown by these quotes.

      Just relax when reading comments. If you reply, do so to the specific points they raise.

      1. He didn’t raise any that I could see; that was my point.

        His response indicated to me the sort of algorithmic mindset the piece was discussing, one that can only see meaning within a very narrow bandwidth (hence he saw it as a bad poem, where others have found profound meaning in it).

        So when you say “be grateful he read the piece,” my response is that his eyes may have passed over every sentence, but he didn’t actually absorb it, because, by his own admission, he found no substance in it even though substance is there to be found.

        A normal response would be to admit a failure to understand and ask for clarification, as you did in your own initial response to Henrik.

        It’s curious that you now seem to want to police me, the author of the article, rather than question the motives, or value, of Henrik’s “points.”

        Telling me to relax is patronizing, and telling me how to respond is censorious. Not that I am opposed to censorship, but in this case it seems misdirected.
