How Do You Regulate What Can Outthink You? | Gregg Hurwitz
Table of contents
- Equality of opportunity isn't just about fairness; it's about unlocking the potential of the most talented individuals to benefit society as a whole.
- Freedom of speech is vital, but it doesn't mean freedom to break the law or hide behind anonymity to spread hate. We need transparency in online discourse and accountability for actions, both virtual and real.
- Human intervention is essential in a world where not everything can be automated; consciousness thrives in the gray areas that algorithms can't touch.
Equality of opportunity isn't just about fairness; it's about unlocking the potential of the most talented individuals to benefit society as a whole.
One of the things we should focus on, briefly, with regard to equality of opportunity is that opening the door to opportunity for everyone is very good for the individuals involved. However, you could make a sociological case that that's not the fundamental issue. The fundamental issue is that you want to open the doors on the equality-of-opportunity side because you want the broad culture to be able to benefit from the specific contributions of the most able people. Any arbitrary barriers are going to work against that.
The reason that you want extremely intelligent, hardworking, and creative kids at Harvard isn't solely so they can have stellar careers, though that's part of it and good for them. The real issue is that you want to educate those people like mad because they're going to produce things that are enormously useful for everyone else. If those particular people pick up a few privileges along the way, that's just fine. Equality of opportunity is therefore the best sociological solution as well as the best psychological solution.
When I was there as an undergraduate, one of the first things they did was gather everyone in Sever Hall and say, "You're going to learn more here from your classmates than from your professors." I thought that was a silly old saw, but it is absolutely true. That cohort included people from all over the country and all over the world, which was incredible for strengthening one's mind. It allowed us to see people from every reach of America and beyond, all trained under a joint narrative, and it fostered continuing friendships across very different states of being.
Merit-based selection is the best way to ensure that opportunity is distributed fairly. We know, for example, that the historical alternative to merit-based selection has been dynasty and nepotism. There is no productivity in dynasty and nepotism, because they mean that your right to a position is determined by your birth, not by your competence.
In this context, we should focus on gratitude, not grievance, uphold the rule of law, and pursue truth. Reality is where ideology goes to die—that's something I wrote and taped to my wall. When solving problems, we should look for measurable outcomes. A lot of libertarianism has crept into my worldview as I have pursued these ideas. Measure something not by its intentions but by its outcomes; in a way, everything else is irrelevant. I don't care what your intentions are. This is particularly true on the social intervention side because you have to ensure that your intervention is producing the consequences that you desire. It’s very unlikely that this will happen, as there are a million ways things can go wrong and generally only one or two ways they can go right.
Concrete steps to take include upholding free speech and prosecuting illegal actions.
Freedom of speech is vital, but it doesn't mean freedom to break the law or hide behind anonymity to spread hate. We need transparency in online discourse and accountability for actions, both virtual and real.
It's fairly easy to understand that if people are breaking laws—such as throwing bottles at police officers, blocking traffic, making true threats against individuals, and vandalizing buildings and people's houses—they can be arrested and actually prosecuted. We don't need to make exceptions for them, any more than exceptions were made for Harvey Milk or the leaders of the Civil Rights Movement. However, people are allowed to have their opinions; they are allowed to criticize any state, including Israel, and any leadership, which includes Netanyahu. They are allowed to peacefully protest and compete in a free marketplace of ideas—no problem. But we must remember that breaking the law is not allowed, and this applies to both sides of the fence.
We have fringes who engage in illegal activities on both sides. For example, face coverings and masks at protests, if used to menace and terrorize, should be illegal. That's the purview of the KKK; that's not what we do. You can't cover your face to commit illegal acts or to terrorize people.
The algorithms in social media present another significant issue. There are ways to maintain freedom of speech, but that doesn't mean freedom of reach for profit. If I say the most outrageous, misogynistic, anti-Semitic, or insane thing, the algorithms should not amplify it for corporate profit. And when the algorithms are hidden behind corporate firewalls, we don't even know who we're talking to or whether they're American. We should also be cautious about presuming that the tech people themselves can solve these problems.
As we've discussed, even figures like Zuckerberg and Musk, who may be on opposite sides of the political spectrum, face the same issue: no one knows how to regulate online discourse to bring the rule of law and order to it. Half of online activity is criminal across the board—ranging from pornography to outright crime, and then there are quasi-crimes that constitute trolling. No one knows how to regulate that, and we shouldn't expect tech engineers to manage it without assistance.
However, there are concrete steps we can take. One of them is the need for transparent algorithms. For instance, if 60% of the people screaming about anti-Semitism and encouraging it are Russian bots, that's a good thing to know. We must distinguish between human actors and non-human actors. If someone is anonymous and doesn't want to stand behind their words online, they don't necessarily need to be censored. There is a legitimate whistleblower issue, but their comments could certainly be placed in a second tier, below those of people who are willing to stand behind their words.
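As a purely illustrative sketch, and not anything the speakers or any real platform have specified, a two-tier ordering like the one described above could be as simple as sorting comments from named, verified accounts above anonymous ones without removing anything; the `Comment` fields and scoring here are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    verified_identity: bool  # the author has publicly stood behind their name
    score: float             # whatever quality/engagement score the platform already uses

def rank_comments(comments: list[Comment]) -> list[Comment]:
    """Two-tier ordering: nothing is censored or deleted.

    Tier 1: commenters willing to stand behind their words (verified identity).
    Tier 2: anonymous commenters, shown below the first tier.
    Within each tier, comments keep their normal score-based ordering.
    """
    return sorted(comments, key=lambda c: (not c.verified_identity, -c.score))

# Example: the anonymous comment is not removed, just placed in the second tier.
thread = [
    Comment("anon_1234", "hot take", verified_identity=False, score=9.0),
    Comment("Jane Doe", "signed opinion", verified_identity=True, score=2.0),
]
for c in rank_comments(thread):
    print(c.author, "-", c.text)
```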
This is not much different than stopping masking, as online anonymity is the virtual equivalent of masking.
Human intervention is essential in a world where not everything can be automated; consciousness thrives in the gray areas that algorithms can't touch.
The other thing that you pointed to quite sanely is that everything, to some extent, needs some degree of human intervention. That's okay. Whether it's a Tesla factory or any other automated process, not everything can be automated. You can't automate the edge cases, and that's what consciousness itself is for. Once we can transform something into an algorithm, neurologically speaking, we become unconscious of it. For example, we have transformed regulating our heartbeat into an algorithm; you're never conscious of that—it runs on its own. Once you've got something down, it should run on its own, but there's always an edge of transformation.
This edge of transformation can't be algorithmized, and it is very tightly associated with free speech. Thought is internalized speech, and the way that consciousness navigates that transformative edge, which can't be transformed into an algorithm, is through the mechanism of free discourse. There has to be a wide variety of opinions because we don't know how to navigate these complexities.
Moreover, on the algorithmic side, one could go into a private company and simply say, "Look, we've identified 150 to 250 people who are clearly bent on sowing chaos and terrorizing America. Here they are, and here are the processes we have undertaken." Those processes should be completely transparent, and that doesn't necessarily mean you even deplatform them. However, could you perhaps turn down the reach that you are amplifying for profit? This is crucial because they are driving outrage and hatred, and more and more people are being pulled into a swirl of hatred. That is not a good long-term strategy for any company or any country, not unless it wants to be overrun by manipulative psychopaths.
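One hypothetical way to picture "turning down reach" rather than deplatforming is a ranking step that applies a documented down-weighting factor to accounts flagged through a transparent process, and logs every such decision where anyone can inspect it. This is only a sketch of the idea; the account names, the penalty value, and the logging format are all invented for illustration and do not describe any real platform's system:

```python
import json
from datetime import datetime, timezone

# Hypothetical, publicly documented list of flagged accounts and the reason each was flagged.
FLAGGED_ACCOUNTS = {
    "chaos_agent_01": "coordinated harassment campaign (case #17)",
}

REACH_PENALTY = 0.2  # flagged accounts keep 20% of their normal algorithmic reach

def adjusted_reach(account: str, base_reach: float, audit_log: list[dict]) -> float:
    """Down-weight amplification for flagged accounts instead of removing them,
    and record the decision so the process stays transparent."""
    if account in FLAGGED_ACCOUNTS:
        audit_log.append({
            "account": account,
            "reason": FLAGGED_ACCOUNTS[account],
            "penalty": REACH_PENALTY,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
        return base_reach * REACH_PENALTY
    return base_reach

log: list[dict] = []
print(adjusted_reach("chaos_agent_01", base_reach=100_000, audit_log=log))  # 20000.0
print(json.dumps(log, indent=2))  # the audit trail anyone could inspect
```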
Any platform will become rife with this issue, and people will leave. You, Mikhaila, and Jordan Fuller have solved this with the Peterson Academy. Everyone has to use their name, and there's a social board. People pay a reasonable but low price of entry for access to the classes, and the discourse there is entirely sane, operating on something approximating an honor code. That means that if you act like a jerk, you get your money back and you leave. You might ask, "Who decides that?" The answer, at the moment, is twofold: the community itself is deciding, but we are watching too. We've identified three people out of 30,000 who've caused trouble—three people.
As the discourse builds out, we can place limits on these interfaces, just as Jonathan Haidt is suggesting limitations on when kids can have their phones. Is there any reason we need to have access to them at 8:00 a.m. or 4:00 in the morning just because a tweet alert comes in? Private companies can also set limits on how they want to conduct their marketplace of ideas. After all, if one person in a classroom is having a constant temper tantrum, nobody can learn.
I was trying to distinguish the other day between a referee and a censor. There are game rules by which civilized discourse has to proceed. A referee makes sure that the rules are being applied fairly and across the board; everyone knows what they are. A censor, on the other hand, is someone who makes arbitrary behind-the-scenes decisions. I believe we can discriminate between censors and referees, especially if we do it early and set the ground rules.
This brings us to the fascinating statistic that 3.5 times more Americans believe that American news organizations and social media platforms should be owned by U.S. entities to prevent the spread of foreign propaganda and disinformation. Would Iran allow us to run a major social media network that reaches their entire populace? Would China, Russia, or Brazil allow us to do that? They most certainly would not.
Thus, it is perfectly acceptable to understand that America is allowed to have a national identity, one that is shared and creates a lot of space for people of different groups to compete. However, we still have a lot of obstacles to equality of opportunity that we need to address, and those obstacles are driving a lot of these problems. The more we can focus on solving those real problems, the more we can ensure that ownership of who educates our kids and who drives our discourse remains in the hands of Americans.