When I set out to build up this idea I labeled ‘Digitally Intelligent Neohuman,’ I made a conscious effort not to make loud declarations.
I’ve tried making big commitments before, and they didn’t work out, because setting such a large goal for myself inevitably kicked my perfectionism into high gear. And when I couldn’t deliver on the ideal I had in my head, I became discouraged and abandoned the whole thing.
I’ve decided to take the slow approach, to build it up, to tell the story of what I think about these topics, and let that become the larger idea.
Then I listened to and read Ezra Klein’s interview with Tristan Harris.
I think that my response below pretty much sums up the foundation of what I’m trying to accomplish — which is to highlight the fundamental problems with how we tend to judge technology.
It’s kind of a manifesto, in the sense of the word that I tried to avoid. At the same time, it is a good starting point, and I’ve done everything I could to keep it from being overly grandiose. (Heh.)
“Video killed the radio star” — no, it did not
Let’s start with what I see as the most obvious problem with the reasoning we hear from critics of technology. Which is:
When you judge change from the perspective of the changed, it necessarily attaches a negativity to it.
Tristan and Ezra are right: outrage tends to spread a lot faster than positivity. But outrage doesn’t happen because of technology.
Outrage brews inside people over time. And when it reaches critical mass, it finds a way to be expressed: whether that’s through a revolution like France’s in 1789, the election of a far-right politician in Germany in 1932, or women marching in the streets of the United States in 2017. Fundamentally, none of these events is different from the others.
Outrage can even be a good thing. Sure it wasn’t in Germany, historically, but was it good or bad in 2017? Depends on who you ask. Without debating where anyone’s personal preferences and perspectives are, we can all agree that change is always bad from the viewpoint of those who are changed.
Which is why it’s extremely dangerous to base a movement on keeping things the same. (Or even on going a couple of steps back. More on that later.)
Technology doesn’t have morality
Technology isn’t inherently good or bad: it’s a tool. It has no moral attributes. People do.
We’re anthropomorphizing technology and technology companies as if they were cartoon animals. The problem is, while we understand what we’re doing when it’s cartoons, we accept it as truth and reality with technology.
What is the human condition if not overcoming the human condition?
The most interesting paradox I see with those who criticize today’s technology is how they seem to always focus on yesterday’s.
Neither Tristan nor anyone else who voices these thoughts ever wants to go back two technological paradigms. Nobody wants to go back to before the Civil War. Nobody wants to go back to the Dark Ages or to ancient times. Everybody wants to go back to their own technological paradigm.
Sentimentalism is the worst enemy of humanity.
I mean that. People’s sentimentalism holds back innovation. That’s nothing new, of course (think Luddites), but as technology’s progress speeds up, the effect scales up and seems a lot more damaging than before.
And that’s because technological progress is no longer filtered through industrial or economic layers: today’s technology changes communication itself, directly affecting people’s lives. Without a buffer to soften the effects of change, it looks more alarming, even though it’s the very same process every time it happens.
For my part, I do think that the point of human existence is to constantly change. I don’t necessarily have an answer as to why, but for thousands of years it has been happening — and if there’s one takeaway from history it’s that whenever the old status quo tried to keep things from changing, there was blood on the streets. (Speaking both metaphorically and literally.)
This is why, when Tristan talked about “defining the dimensions and limits of humanity,” I didn’t understand. The dimensions and limits of humanity are only useful insofar as they mark whatever we’ll overcome next.
Humanity isn’t a thing; it’s progress itself
I really do believe that. Setting arbitrary standards based on observations made at arbitrary points in time is… useless.
My main problem with such an approach is that it generates conflict; and it’d be so easy to avoid those conflicts by changing our mindset. Accepting the natural flow of change means there are no right or wrong values assigned to what was and what’s to come. One period isn’t better or worse than the next or the previous one.
I’m a big supporter of the idea of “mind uploading” because it eliminates an important limitation of the human mind. Without the constraints of a physical body, the mind can, in theory at least (there’s no way of testing it yet), expand. It could become possible to understand concepts that aren’t limited to the space-time continuum our bodies confine us to.
It also eliminates a host of other friction within society: bits and bytes have no skin color, no gender, no sexual orientation.
It’s interesting to me that virtual reality keeps sticking to physical constraints. On the one hand, I understand: in a world of new concepts, this is perhaps the newest one. It takes time for our brains to adjust to the idea of not having limits. (By the way, Minecraft went through a similar adjustment period.)
In science-fiction, the concept of VR is often at odds with the physical reality through the lack of limitations. The question of “Why would anyone want to live in the physical world when there’s a limitless digital one?” is moot. They wouldn’t. That’s not a bad thing. A strange one, perhaps.
But as strange as living as a purely digital entity may seem to us, how strange do you think the world we live in today would look to someone from Ancient Rome?
Speed is not a problem
There was a lot of talk about mindfulness, which is a good thing. We should be more mindful in our decision-making, we should take time to make choices that have depth, instead of simply reacting. No argument from me, however…
… mindfulness isn’t the antithesis of speed. It’s only associated with spending more time on decisions because, historically, there was no other way.
We’re heading in a direction where speed and depth are both possible, at the same time, through technology.
They don’t have to be mutually exclusive; in fact, the work we put into AI isn’t (or shouldn’t be) aimed at taking over the human decision-making process, but at augmenting it.
Can’t have it both ways
What I also find fascinating about this debate, and its participants in particular, is the double standard.
On the one hand, people complain that social media (mostly the visually oriented kind, such as Instagram) creates a skewed perception of reality, making staged compositions look like everyday snapshots. On the other hand, there’s also a complaint that technology exposes our “dark side.”
Which is it?
Both are valid concerns. Perception is a powerful thing, and it’s important to view the world — including social media posts — with that in mind. And it’s also true that online anonymity can be a concern, because people aren’t afraid of retribution.
Both are temporary phenomena, though. I think both come from unfamiliarity with the tools and their capabilities. No matter how long it seems they’ve been around, these things are extremely young. Most of what we now take for granted didn’t exist ten years ago.
Would it be better if they didn’t come with such problems? Sure. But it’s impossible to prevent them: you can’t predict how people are going to misuse your product. You can try and minimize the chances, but people are people, they’ll find a way to test a tool’s limits, in ways the creators couldn’t even fathom beforehand.
Still, the answer isn’t to limit the scope of innovation, either.
Taming the “human animal”
One phrase that kept coming up during the interview was “human animal,” and how a solution to the crisis Tristan thinks we live in needs to account for the instinctive behavior of the human animal.
While it may seem innocent and beneficial on the surface, the logic doesn’t stop there. We’d be undoing thousands of years of evolution, because by the same logic there’d always be another step back. Until the point where we actually become animals again.
I’ve said this many times, but here it is again: humanity has survived through technology, and nothing else. Our entire evolutionary path was made possible by external devices, from the tools with which we first made fire, to the wheel, to the printing press, to the industrial revolution, to the internet.
Take technology away, and we are, indeed, animals. Extinct animals, too, because we’re weaker in almost all respects than virtually anything else.
But you can’t take it away. Technology has carried us through the millennia. And it’ll keep carrying us, because that’s what we do. If anything, what makes us human is technology.
Technology can also augment us, our minds, to move from a binary world view to one that can recognize, account for, and deal with shades between polarities.
This is a good time to talk Trump, Russia, and how the US had it coming
Did the Russians meddle in the 2016 election? They would’ve been stupid not to. It’s one of those rare, rare occasions where I agree with Trump’s words (but not his logic or intention): Putin is a smart cookie.
Of course Russia meddled in the election. It’s in their best interest to ensure that whoever leads one of the few countries that can rival them is someone they can manipulate. Not (necessarily) directly, but still.
The question isn’t whether or not they did it. (And there are plenty of examples, from every point in history, of the same thing occurring. It’s not like it’s new.) The question is how they did it.
It’s simply that the Russians (I’m using that term collectively to refer to the ones who did the election meddling, regardless of nationality) were smarter at adapting to the capabilities of technology.
The fact that the Russians used Facebook to fool people with fake news doesn’t mean people can simply blame Facebook. The only ones to blame are those who believed them.
It’s that simple. And the reaction shouldn’t be over-regulating Facebook. It should be to educate people.
They can spend millions, if not billions, of dollars developing firewalls and tools to prevent the 2016 election meddling from being repeated. (And, for the record, they should.) It doesn’t matter, because next time they’ll be doing something different.
As the saying goes, “give a man a fish and you feed him for a day; teach him how to fish and he’s fed for a lifetime.” Yes, addressing what has happened works for what has happened, but won’t prevent something else — with the same goal — happening again.
However, if you teach people to fish, as in: educate them in the ways of logic and common sense, you’ll protect them against manipulation for a lifetime. (And throw in a couple of million bucks to buy some proper hardware and software, for God’s sake. You’d think a global superpower’s democratic institutions would deserve at least that much.)
Quis custodiet ipsos custodes?
One of my favorite parts of the interview was when Ezra asked about Tristan’s thoughts on regulation. Which is basically asking: “Alright, you’ve made your point about the problem; now, what is your solution?”
I liked this part because it betrays the fundamental flaw in how Tristan and other people like him view the world.
First of all, the phrase ‘Silicon Valley’ was said so often it was starting to get annoying. It may have been true once, but tech innovation is increasingly spread out. I acknowledge the brain power that’s concentrated in Silicon Valley, but in return I expect others to acknowledge that it’s not just about one place.
Moreover, it’s arrogant to think that the US is the only player in this game. It’s even arrogant to think that it’s a leading player, as the establishment crumbles around tech companies and will severely damage them once it falls.
You can’t expect to regulate on a global scale using local tools.
Let’s take Facebook: even assuming every single person in the US were on Facebook (which is not true), Americans would make up a small minority of its users: roughly 300 million versus well over a billion and a half others. Why would Facebook be forced to accommodate the cultural and legal context of the US if the majority of its users aren’t there?
And it’s the same everywhere. And even if it weren’t, who decides that a technology with a possibly global reach and effect must adhere to the cultural expectations of a small minority?
This is why the whole concept of nations and national regulations becomes problematic. We’re not there yet, of course, but we’re seeing the earliest warning signs.
Technology is like the superheroes in Marvel’s Civil War: its power and responsibility go beyond national borders. So how do you keep it in check? Because I agree* with Tony Stark (surprise! 😛) that superpowers need to be held accountable. But by whom?
*: for what it’s worth, I also agree in a way with Captain America, in not trusting any traditional human legislative oversight. (Which leads back to agreeing with Tony Stark again about why he created Ultron, but let’s not go down that rabbit hole for now.)
Speaking of: a word on democracy and the US Congress
Putting aside the illogicality of a national government regulating global culture through technology, it's very clear that whatever the solution will be, it simply cannot be how we used to do things.
We're dealing with issues that our species has never encountered before. While we've seen changes that fundamentally shifted power and wealth, we've never seen them shifted onto the people themselves.
Democracy is a flawed concept of government. That's nothing new, Aristotle clearly didn't like it, and Churchill famously said "democracy is the worst form of government"...
(... which quote ends with: "except for all the others." Lots of people forget about that. Anyway.)
What I see as the largest problem with democracy is that it isn't scalable. What may have worked, barely, for a population of a city-state with limited means of mobility and communication simply cannot scale to a world of global entities and instantaneous communications.
I don't have a solution for it. Wish I had. But I do know this: the democratic establishments of the world have moved Heaven and Earth to stay where they are. But the world left them behind long ago.
Are we seriously going to trust people who, beyond clearly serving power rather than those who gave it to them, cannot even begin to understand technology? I'm not talking about the scope of technological innovation, the power of global communications systems, or the repercussions of mishandling said systems. I'm talking about how they have no idea how to use Facebook, for goodness' sake!
These are the people we're going to trust with decisions that will, not just possibly but very likely, shape the future of human civilization for centuries to come?
In the spirit of the legendary Archer: Do you want societies to fall apart? Because that's how they fall apart!
Summa: let’s keep talking
It was interesting to dive into these topics. All of them will come back time and again, and with more detail. I’m not claiming I could solve these problems, or that Tristan is wrong and I’m right. (Although from where I’m standing he is and I am. 😎)
I’m a culprit the same way Tristan is: my fundamental perspective on the world, and this world in particular, leads me to truths that may not hold up if put into his context. Truth isn’t universal.
What we do agree on, I think, is the urgent need to conduct discussions and thought experiments on the subject.
I knew from the beginning that I’d be disagreeing with much of the sentiment of the interview, but I value disagreement. The solution is very likely not in either of our approaches, but somewhere in the middle that neither of us has thought about.
Without debate, without multiple perspectives to challenge each other, we’ll never know.
In the spirit of the AV Club’s regular recaps of TV shows, I figured it’d be a nice idea to collect some responses to specific things that struck me as odd in the interview and article.
In no particular order:
- regarding social media distortion: we’re neither here nor there; analytics and vanity metrics are in fact misleading, but the correct response isn’t declaring them broken, it’s bringing the two platforms, human and tech, together to the point where they start working
- if tech is able to tear apart families, and make it more difficult for people to connect to each other, then the problem isn’t with the tech: the problem is the users’ inability to use it (and the answer isn’t to limit technology but to expand human capability)
- how can we say tech people cannot go inward because they never sat in silence for seven hours? How did that arbitrary practice become the only way of attaining mindfulness?
- on that note: for what it’s worth, creativity always comes from within; and creative technology, because of that, always carries mindfulness and inward perspective at its core
- the phrase “destroying the fabric of society” is meaningless; the industrial revolution did it, and social/cultural changes do it on an almost daily basis…
- examining where our choices come from and how they happen is useful, but it shouldn’t be used to model future decision-making processes, simply because it may be wrong; it’s definitely “old,” which alone is reason enough not to build everything around it
- Twitter isn’t algorithmic: it’s chronological, with an optional (!) feature to help users catch up. Facebook and Instagram are algorithmic, and that leads to a host of problems Ezra and Tristan correctly recognized. (Although mistakenly analyzed, but that's neither here nor there.)
- the example of Snapchat streaks is also a fascinating thing: when you really think about it, making it a point to keep in touch with other human beings is now considered bad?!?
Phew. Am I right? Am I wrong? A little wrong? A little right? Do let me know what you think!