Debunking The AI Reset Alien Mind Fear, Chat GPT, Future of AI & Slow Productivity | Cal Newport

We’re creating AI minds we barely understand—like playing with fire! 🔥🤖

So there is a lot of concern, excitement, and confusion surrounding our current moment in artificial intelligence technology. It's a hot topic that everyone seems to have an opinion on. One of the most fundamental of these concerns is the idea that, in our quest to train increasingly bigger and more capable AI systems, we might accidentally create something smarter than we expected.

I want to address this particular concern out of the many surrounding AI. As a computer science professor and one of the founding members of Georgetown's Center for Digital Ethics, I've spent a lot of time thinking about this, looking at it from a technological perspective, especially the idea of runaway or unexpected intelligence in AI systems. I have some new ideas I want to preview here. They're in rough form, but I think they're interesting, and hopefully they'll give you a new, more precise, and more comforting way of thinking about the possibility of AI getting smarter than we intend.

One way to think of the fear that I want to address is what I call the alien mind fear. Picture this: we are creating these minds, but we don't understand how they're going to work. That sets up the fear of these minds getting too smart. We have summoned an alien intelligence we don't know much about, except that it is extremely powerful and offers us bedazzling gifts but could also hack the foundations of our civilization. We don't really know how this thing works, so we don't really know what it might be capable of.

Take, for example, the paper in which researchers ran intelligence tests, tests originally developed for humans, on GPT-4 and were really surprised by how well it did. It's as if we've opened Pandora's box.

These machines are becoming rapidly more powerful. If you were worried about GPT-3, wait till you see what's coming next. There's a general, rational extrapolation to make here: GPT-4 seemed even better, and if we keep extrapolating this curve, to GPT-5 and GPT-6, it's going to keep getting more capable in ways that are unexpected and surprising. It's very rational to imagine this extrapolation bringing these alien minds to abilities where our safety is at stake.

We're uncomfortable about how smart they are. It's like we're playing with fire: they can do things we don't even really understand, and we're going to be very uncomfortable with what we build. That's the fear. Here is my response: a large language model in isolation can never properly be understood to be a mind. I'm being very precise about this. A large language model like GPT-4, by itself, cannot be the thing we imagine as an alien mind.

The reason is that what a large language model does is take an input, and out the other end comes a token, a word or part of a word. That's all a language model can do. It's simpler than it sounds, really. As the input passes through the large language model, the model comes up with candidates for the next word, or part of a word, to output. It's not too hard to think of this as the pool of grammatically and semantically correct next words it could output.

To narrow that pool, these models go through something like complex pattern recognition, a massive checklist: this is a discussion of chess, this is a discussion of Rome. As the input goes through these recognizers, really complex rule books take over. Given this combination of properties of what we're talking about, which of the possible grammatically correct next words or tokens makes the most sense to output? It's all about finding the most fitting piece to complete the puzzle.

AI is like a magic word machine with a mind of its own! 🧠✨

To pick up where we left off: as the input goes through these recognizers, you have these really complex rule books that look at the combination of different properties. The rule books are combinatorial: they combine these properties to decide which of the possible grammatically correct next words or tokens makes the most sense to output.

Imagine it's a chess game, and here are the recent chess moves; we're supposed to be describing a middle-game move, and these are the legal moves given the current situation. You have possible next words, a checklist of properties, and combinations of those properties feeding rules that influence which of these correct words to output next. The rules can fire on novel combinations of properties, combinations that were never seen in the training data. That's how these models can produce output that doesn't directly match anything they've ever seen before.
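To make this picture concrete, here is a tiny, purely illustrative Python sketch. Nothing in it comes from a real model's internals: the candidate tokens, the property labels, and the rule_book_score function are all invented to mirror the idea of a checklist of properties feeding combinatorial rules that score the next word.

```python
# Toy sketch: candidate next tokens, detected properties of the context, and
# combinatorial "rule books" that score candidates given combinations of properties.
candidates = ["Nf3", "e5", "castles", "banana"]
properties = {"topic:chess", "phase:middle-game"}

def rule_book_score(token: str, props: set[str]) -> float:
    """Hand-wavy stand-in for the learned recognizers and rules inside the model."""
    score = 0.0
    if "topic:chess" in props and token in {"Nf3", "e5", "castles"}:
        score += 2.0    # chess-y tokens fit a chess context
    if {"topic:chess", "phase:middle-game"} <= props and token == "castles":
        score += 1.0    # a combination of properties adds more signal
    if token == "banana":
        score -= 5.0    # grammatical, maybe, but semantically wrong here
    return score

scored = {tok: rule_book_score(tok, properties) for tok in candidates}
print(scored, "->", max(scored, key=scored.get))   # the token that "makes the most sense"
```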

In the end, you can imagine it like a giant metal machine with dials and gears. You're turning this big crank, and hundreds of thousands of gears are all cranking and turning. At the very end, at the far end of the machine, there's a dial of letters, and the dials turn to spell out one word. No matter how sophisticated its pattern recognizers and combinatorial rule books, it just spits out words, one at a time.

Where everything interesting happens is when you begin to combine this really sophisticated word generator with control layers: something that sits outside of and works with the language model. The control layer chooses what to activate the model with, what input to give it, and it can then actuate in the real or digital world based on what the model says.

Something I've been doing recently is thinking about the evolution of the control logic that can be appended to generative AI systems like large language models. This control logic can actuate, that is, take action: do something on the internet, move a physical thing. It's this control logic, with its activation and actuation capability, that, when combined with a language model, which again is just a word generator, makes these systems begin to get interesting.

There are different layers to this. Layer zero control logic is what we got right away with the basic chatbots like ChatGPT. Layer zero control logic basically just implements what's known as autoregression. A large language model spits out a single word or part of a word, but when you type a query into ChatGPT, you don't want just a one-word answer; you want a whole response. So there's a basic layer zero control logic that takes your prompt, submits it to the underlying large language model, gets the answer from the model, a single word or part of a word that expands the input in a reasonable way, and appends it to the input.

It then submits this slightly longer input to a fresh copy of the model, which generates the next word of the answer. The control logic appends that word and submits the slightly longer input again. It keeps doing this until it judges, "this is a complete answer," and then returns that answer to you, the user typing into the ChatGPT interface. That's called autoregression. The control logic will also take your follow-up question, plus all of the conversation before it, and paste that whole thing into the input.
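Here is a minimal sketch of that layer zero loop, with a canned generate_next_token stub standing in for the actual model (which would be an enormous neural network, not a hard-coded list). Only the loop structure reflects the autoregression described above; everything else is invented for illustration.

```python
import itertools

# Toy stand-in for the underlying language model: it only ever returns one token per call.
_canned_tokens = itertools.cycle([" Hello", ",", " world", "!", "<END>"])

def generate_next_token(text: str) -> str:
    return next(_canned_tokens)

def layer_zero_chat(prompt: str, max_tokens: int = 512, stop_token: str = "<END>") -> str:
    """Layer zero control logic: autoregression.

    Submit the growing text to the model, append the single token that comes
    back, and repeat until the control logic judges the answer complete.
    """
    text = prompt
    for _ in range(max_tokens):
        token = generate_next_token(text)   # the model only produces one token at a time
        if token == stop_token:             # control logic decides "this is a complete answer"
            break
        text += token                       # append it and resubmit the slightly longer input
    return text[len(prompt):]               # hand just the generated answer back to the user

print(layer_zero_chat("Say hi:"))           # -> " Hello, world!"
```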

Now, layer one control logic. Here, we get two things we didn't have in layer zero. First, we might get a substantial transformation of what you typed before it's passed on to the actual language model. Second, there's actuation: the control logic might take some actions on behalf of you, or of the language model, based on the model's output. Instead of just sending text back to the user, it might actually go and take some other action.

An example of this would be the web-enabled chatbots like Google's Gemini. You can ask it a question where it's going to do a contemporary web search: stuff that's on the internet now, not just what was in its training data when the original model was built. It can look at material on the web and then give you an answer based on what it found.

AI is like a smart assistant that not only understands you but also does tasks for you! 🤖💼

What's really happening here is this: when you ask something like Gemini or Perplexity a question that requires a current web search, the control logic, before the language model is ever involved, goes and does the search. It then takes the text of the resulting articles and puts it together into a really long prompt, and that prompt is submitted to the language model.

The prompt written by the control logic might say something like, "Please look at the following text pasted in this prompt and summarize from it an answer to the following question," which is then your original question. Below it is 5,000 words of web results. The prompt that's actually being submitted under the covers to the language model here is not what you typed in. It's a much bigger, substantially transformed prompt.
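A minimal sketch of that transformation step, assuming hypothetical web_search and ask_llm stubs in place of the real search API and model call; the prompt wording echoes the example above, but the rest is made up.

```python
def web_search(question: str, k: int = 3) -> list[str]:
    """Hypothetical stand-in for a search API call made by the control logic."""
    return [f"(full text of web result {i} about: {question})" for i in range(1, k + 1)]

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for submitting a prompt to the language model."""
    return "(model-written summary of the pasted articles)"

def layer_one_answer(user_question: str) -> str:
    """Layer one control logic: transform the prompt before the model ever sees it."""
    articles = web_search(user_question)       # the search happens before the LLM is involved
    transformed_prompt = (
        "Please look at the following text pasted in this prompt and summarize "
        f"from it an answer to the following question: {user_question}\n\n"
        + "\n\n".join(articles)                # thousands of words of web results go here
    )
    return ask_llm(transformed_prompt)         # what gets submitted is not what the user typed

print(layer_one_answer("Who won the game last night?"))
```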

We also see actuation. OpenAI's original plugins, the things you could turn on in GPT-4, could do things like generate a picture for you, book airline flights, or show you airline schedules. In the new Microsoft Copilot integrations, you can have the model take action on your behalf in tools like Microsoft Excel or Microsoft Word. There's actual action happening in the software world based on the model, and that action is also being carried out by the control logic.

So, you're saying something like, "Help me find a flight to this place at this time." The control logic, before we get to a language model, might make some queries of a flight booking service or actually create a prompt to give to the language model. It might say, "Please take this question about a flight request and summarize it in the following format for me." The language model then returns to the control logic a better, more consistently formatted version of the query you originally had.

Now, the control logic, which can understand this well-formatted request, talks over the internet to a flight booking service, gets the results, and then it can pass those results to the language model. It might say, "Okay, take these flight results and please write a summary of these in polite English." Then it returns that to you. What you see as the user is that you asked about flights and got back a nice response like, "Here are your various options for flights." Maybe you then say, "Hey, can you book this flight for me?" The control logic takes that, and again, puts it into a precise format. The language model does that, and now the control logic can talk over the internet to the flight booking service and make the booking on your behalf.
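Here is a rough sketch of that round trip. Everything in it is hypothetical (the ask_llm and search_flights stubs, the JSON format); the point is simply that the model only ever converts text from one form to another, while the control logic is what talks to outside services.

```python
import json

def ask_llm(prompt: str) -> str:
    """Hypothetical model call; here it just returns canned text."""
    if "as JSON" in prompt:
        return '{"origin": "DCA", "destination": "DEN", "date": "2024-09-01"}'
    return "(polite, human-readable summary of the flight options)"

def search_flights(request: dict) -> list[dict]:
    """Hypothetical flight-booking service the control logic talks to over the internet."""
    return [{"flight": "UA123", "price_usd": 412}, {"flight": "DL456", "price_usd": 389}]

def flight_assistant(user_message: str) -> str:
    # 1. Ask the model to turn free-form text into a canonical, machine-readable request.
    structured = json.loads(ask_llm(
        "Rewrite this flight request as JSON with keys origin, destination, date: " + user_message
    ))
    # 2. The control logic, not the model, does the actuation: it queries the booking service.
    options = search_flights(structured)
    # 3. Ask the model to turn the raw results into polite English for the user.
    return ask_llm("Write a polite summary of these flight options: " + json.dumps(options))

print(flight_assistant("I need to get to Denver from DC in early September"))
```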

If you're asking Microsoft Copilot to do something like build a table in Microsoft Word, it's taking your request, asking the language model to reformat your request into something much more systematic and canonical, and then the control logic talks to Microsoft Word. These language models are just giant tables of numbers in a data warehouse somewhere being simulated on GPUs. They don't talk to Microsoft Word on your computer; the control logic does.

In layer two, we now have control logic able to keep state and make complex planning decisions. It's going to be highly interactive with the language model, perhaps making many queries to the model en route to executing whatever the original request is. A less well-known but illustrative example of this is Meta's bot Cicero, which plays the strategy game Diplomacy.

The control logic will use the language model to take the conversations happening with the players and explain to the control program in a consistent, systematic way what's being proposed by the various players. This way, the control program understands without having to be a natural language processor. Then the control program simulates lots of possible moves. It figures out all these possibilities to determine which play gives it the best chance of being successful.

It tells the language model, "Okay, here's what we want to do. Now please generate a message to send to this player that would be convincing enough to get them to take the action we want." The language model generates the text, which the control logic then sends. The company behind Devin has been building similar agent-based systems to do complicated computer programming tasks: Devin has control logic that continually talks to a language model to generate code, but it can also keep track of multiple steps in a task.

A language model can't keep track of a long-term plan like this, and it can't simulate novel futures, because, again, it's just a token generator. The control logic can.

AI's power lies in human-coded control, ensuring it acts predictably and responsibly! 🤖🛠️

The control logic is what knows, "Okay, now we're on step two of this task; we need code that integrates this piece into the system." That's layer two, and it's where a lot of energy in AI is focused right now: these sorts of complex control layers.
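For illustration, here is a minimal sketch of what control logic that keeps state and makes repeated model calls can look like. This is not the actual architecture of Devin or Cicero; the LayerTwoAgent class, the ask_llm stub, and the canned plan are all invented.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical model call: a pure text-in, text-out token generator."""
    return "step 1: outline the module\nstep 2: write the code\nstep 3: integrate and test"

class LayerTwoAgent:
    """Sketch of layer two control logic: the state and the plan live out here, not in the model."""

    def __init__(self, goal: str):
        self.goal = goal
        self.plan: list[str] = []        # long-term plan, held by the control logic
        self.completed: list[str] = []   # progress tracking, also held by the control logic

    def make_plan(self) -> None:
        raw = ask_llm(f"List the steps needed to accomplish: {self.goal}")
        self.plan = [line.strip() for line in raw.splitlines() if line.strip()]

    def run(self) -> None:
        self.make_plan()
        for step in self.plan:
            # Each step may involve several more model calls plus real actuation
            # (running code, calling tools); the model itself never remembers any of it.
            ask_llm(f"Goal: {self.goal}\nDone so far: {self.completed}\nNow do: {step}")
            self.completed.append(step)

agent = LayerTwoAgent("build a small web scraper")
agent.run()
print(agent.completed)   # the control logic knows it is past step one, two, three
```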

The layer that doesn't exist yet, but that we speculate about, is what I call layer three. This is where we get closer to something like general intelligence. Here, we'd have very complicated control logic that keeps track of intention, state, and an understanding of the world, and it might be interacting with many different generative models and recognizers.

The control logic is not self-trained. Control logics are hand-coded by humans; we know exactly what they do. The developers of Cicero, for example, were uncomfortable with having their computer program lie to real people. So they said, "Okay, even though human players do that, our player, Cicero, will not lie." They coded that themselves: the simulator just doesn't consider moves that involve lying.

At least so far, the control logic is just hand-coded by people to do what we want it to do. The language model can produce tokens using very sophisticated digital contemplations, but it cannot control the control logic. The people building these systems have full control over that part; the control logic is just programmed right there.

The control logic in these systems right now is not at all difficult to understand, because we're the ones creating it. Okay, we've gotten a request; we've asked the LLM for a formatted version of the flight-booking request; now we think about appropriate places to fly, or whatever it is; the control logic is just programmed right there.

The biggest practical concern, especially for layer two and below, artificial intelligence systems with this architecture, is exceptions. For example, we're doing flight booking, and our control logic doesn't have a check that says, "Make sure the flight doesn't cost more than X, and don't book it if it does." The LLM gives us a first-class flight on Emirates that costs $20,000 or something.

Or we have a Devin-type setup where it's giving us a program to run, and we don't have a check that says, "Make sure this doesn't use more than a certain amount of computational resources." That program turns out to be a giant resource-consuming infinite loop and uses $100,000 of Amazon cloud time before anyone realizes what's going on. If your control logic doesn't check for the right things, you can get excessive behaviors.
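To make the point concrete, here is a minimal sketch of the kind of hand-coded exception checks being described. The limits, names, and wrapper functions are illustrative, not taken from any real system.

```python
MAX_FLIGHT_PRICE_USD = 1_500    # illustrative hard limits written by a human
MAX_JOB_BUDGET_USD = 500

def guarded_book_flight(option: dict, book_fn) -> None:
    """Refuse to actuate on model-suggested bookings that blow past a spending limit."""
    if option["price_usd"] > MAX_FLIGHT_PRICE_USD:
        raise ValueError(f"Refusing to book: ${option['price_usd']} exceeds the configured limit")
    book_fn(option)

def guarded_run_job(estimated_cost_usd: float, run_fn) -> None:
    """Refuse to launch model-generated code whose estimated cost exceeds the budget."""
    if estimated_cost_usd > MAX_JOB_BUDGET_USD:
        raise ValueError("Refusing to run: estimated cloud cost exceeds the configured budget")
    run_fn()

# Usage: the control logic calls these wrappers instead of booking or running directly.
guarded_book_flight({"flight": "UA123", "price_usd": 412}, book_fn=print)
```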

That's a very different thing than the system itself being somehow smarter than we expected, or taking intentional actions that we don't expect. Once we get to layer two, these really complicated control layers, one could in theory imagine hand-coding control logic that we completely understand, which works with LLMs to produce computer code for better control logic. Maybe then you could get the sort of runaway superintelligence scenario Nick Bostrom talks about.

We're nowhere close to knowing how to do that: how to write a control program that can talk to a coding machine like an LLM and get back a better version of the control program. I call this whole way of thinking about things "intentional artificial intelligence," or iAI (lowercase i, uppercase AI). We should really lean into the control we have over the control logics to ensure predictability in what these systems actually do.

We should not develop a legal doctrine that says AI systems are unpredictable, so it's not your fault, as the developer of an AI system, what it does once it's actuated. We should say: it is your responsibility; you're liable. The language model can be as smart as we want, but we're going to be very careful about the actions our control logic is willing to take on our behalf.

It is important that we separate the emergent, hard-to-predict, uninterpretable intelligence of self-trained generative models from the control logics that use them. The control logics aren't that complicated; we are building them. This is where the actuation happens; this is where the activation happens.

AI models are just tools; it's the human-made control layers we need to watch! 🛠️👀

We really want to be careful here, and exactly what we put in these control layers matters, especially once there's actuation.

If we go back to our analogy of the giant machine, we're not afraid of the machine itself in that analogy. We do worry about what the people running the machine do with it. Don't let them spend money without constraint; don't let them fire missiles without constraint. Don't let the control logic have full access to all computational resources. Don't let the control logic automatically install an improved version of its own control logic. You are liable; the whole system you build, you're liable for it.

Hopefully, this defuses a little bit of the incipient idea that GPT-6 or GPT-7 is going to become HAL. The language model is just a feed-forward network; it has no state, no recursion, no interactivity; all it can do is generate a token. Yeah, but people are writing programs that keep track of things outside the language model and talk back to it, and that's where the sophistication is going to come from. All it takes is one person to write layer three control logic that says, "Write a control logic program, then install it and replace myself with that program."

But I think that's a very hard problem. A language model is like a coder: we can tell it to write code that does something very constrained (write this function, write that function), but producing a whole new, better type of control program is a different matter. And there's no compelling reason to write that program.

We don't even know if it's possible to write a significantly smarter control program. The control program is limited by the intelligence of what the language model can produce. We don't have any great reason to believe that a language model trained on a bunch of existing code can produce code that is somehow better than any code a human has ever produced. It's been trained to try to expand text based on the structures it's seen in text it's already seen. I think that whole thing is more messy than people think, and we're nowhere near there; no one's working on it.

What I care about mainly is layer zero through layer two, and there we're in control; nothing gets out of control. It's very hypothetical to think about a control layer that's trying to write a better control layer. The control layer's capability is capped by what the language model can do, and the language model can only do so much. There are a lot of interesting debates at layer three, but they're also very speculative right now. They're not things we're going to stumble into in the next six months or so.

I keep coming back to the language model being inert. The control logic can autoregressively keep calling it to get tokens out of it, but it is inert; the language model is not an intelligence that can sort of take over. It's just a giant collection of gears and dials that, if you turn long enough, a word comes out the other side.

AI's power is in your hands—use it wisely! 🤖🧠

Let's keep this nerd thing going, but first, I want to briefly talk about one of our sponsors. Grammarly is an AI writing partner that helps you not only get your work done faster but also communicate more clearly. 96% of Grammarly users report that Grammarly helps them craft more impactful writing. It works across over 500,000 apps and websites. Grammarly is there to help you make that writing better.

It can now do sophisticated things, like tone detection, to help you get the tone just right. It can not only correct or rewrite but also generate text in ways that, as you get more used to it, help you. Grammarly is where you're already doing your writing. It's the gold standard of responsible AI: for 15 years it has provided best-in-class communication help, trusted by tens of millions of professionals.

Jesse, let's do some questions. First question is from Bernie: Should I be worried about the spread of disinformation on a grand scale? If so, how should I manage this? One of the big concerns about generative AI is that you could use it to generate misinformation, right? Generate text that's false and that people might believe. Of course, it could then equally be used for disinformation, where you're doing that deliberately for particular purposes.

I have two takes on this. In the general sense, I'm not as worried, and let me explain why. What do you need for, let's just call it, high-impact negative information events? You need a combination of two things: a tool that is really good at engendering the viral spread of information that hits just the right notes of stickiness, and a pool of available negative information that's potentially viral.

Because of social media curation algorithms, which are engagement-focused, this tool already exists: it's constantly surveying the pool of potentially viral information and can take negative information and spread it everywhere. What does generative AI change in this equation? It makes the pool of available bad information bigger. But that only matters if AI can create content in this pool that is stickier than the stickiest stuff that's already there.

If large language models are just generating a lot of mediocre bad information, that doesn't really change the equation much. The exception would be very niche topics where the pool of potential bad information is essentially empty; it's so niche that there's just nothing there.

The battle isn't against AI; it's about boosting your internet smarts! 🌐🧠

When it comes to hyper-targeted misinformation or disinformation, especially around significant events like national elections, pandemics, or conspiracies involving major figures, there's already a ton of information out there. Adding more mediocre bad information isn't going to change the equation significantly. The right solution here is probably the same one we've been promoting for the last 15 years: increasing internet literacy. We need to continuously update what we trust or don't trust by default. Essentially, it's not changing what's possible; it's just simplifying the act of producing bad information, which already exists in abundance.

Right now, the big AI companies are in a kind of arms race with these mega models. It's not always clear what's different between them, or what one model can do that another can't. Often, the differences are discovered after training on more parameters, and then we explore what the new model does better than the last one. These models are not profitable; they're computationally very expensive to train and run. What companies actually want are smaller, customized models that can perform specific tasks.

GitHub Copilot is a great example: computer programmers can now interface with a language model built right into their integrated development environments. Microsoft Copilot, which has a confusingly similar name, is trying to do something similar with the Microsoft Office tools. Apple Intelligence, which Apple recently added to its products, can hand certain requests off to ChatGPT as a backend for specific tasks on your phone. For instance, you could ask it to take a recording of a phone conversation, get a transcript, summarize it, and email it to you. This is where these tools are becoming more interesting: they're performing specific, actuated behaviors on your behalf.

OpenAI dreams of having a better voice interface to various things, but much of the practical value is in these specific capabilities: summarizing phone calls, producing computer code, helping format a Microsoft Word document. The giant flagship models are akin to a car company fielding a Formula One racer: not because they plan to sell Formula One cars to everyone, but because it showcases their capabilities and makes people think of them as a top-tier company.

We're about a year and a half past the ChatGPT breakthrough, and while the chat interface to a large language model is impressive, we haven't seen major industry disruptions yet. We still hear anecdotal stories, like a company replacing six customer service representatives with AI. We’re in the phase of passing along a small number of examples, indicating that these models are not in their final form. Stay tuned, though, as their capabilities will become much clearer when they are more integrated into our daily workflows.

AI is the next big thing, but another significant one on the horizon is augmented reality. The rise of virtual screens over actual physical screens could be a game-changer.

The future isn't just AI; it's about making our lives virtual! 🌐✨

Everyday life is going to be transformed by simulating what we're doing now in a way that's better for companies. The whole goal will be to take our current activities and make them virtual. This shift will be hugely economically disruptive because so much of the hardware technology market is based on building very sleek individual physical devices. Both AI and this virtual shift are vying to be the next big disruption.

On one end of the spectrum, these changes will become a part of our daily life where they weren't before. Think about how email fundamentally altered work patterns without changing the essence of work itself. On the other end, the shift could be as comprehensive as personal computing, which fundamentally changed how we interact with the world and information. This could land anywhere on that spectrum.

We have to admit that the current form factor of generative AI, talking to a chat interface through a web or phone app, has largely been a failure in causing the predicted disruption. It hasn't changed most people's lives. There are heavy users who like it, but it still has a novelty feel. Another form factor will be necessary before we see its full disruptive potential. Right now, we're impressed by AI, but not by its footprint on our daily lives.

So, stay tuned, unless you count students using it to pass in papers. Even there, the situation with students and AI in paper writing is more complicated than people think. What's happening might not be what you expect.

Next question is from Dipta: How do I balance a 30-day declutter with my overall technology use? I'm a freelance remote worker that uses Slack, online search, and stuff like that. Dipta is referencing an idea from my book "Digital Minimalism," where I suggest spending 30 days not using optional personal technologies. The goal is to get reacquainted with what you care about and other valuable activities. In the end, you only add back things that have a clear value.

However, Dipta mentions work stuff like Slack and online search. My book "Digital Minimalism," which covers the declutter, focuses on technology in your personal life, not at work. For work-related technology, my other books—"Deep Work," "A World Without Email," and "Slow Productivity"—tackle the impact of technology on the workplace and how to handle it.

Digital knowledge work is a main topic I'm known for. It's why I'm often miscast as a productivity expert. I'm more about how to do work without drowning and hating our jobs in a digital age. It looks like productivity advice, but it's really survival advice. How do we work in an age of email and Slack without going insane?

"Digital Minimalism" is not about that. We're looking at our phones all the time, at work and outside of work. We're on social media and watching videos constantly. Why are we doing this, and what should we do about it?

The digital declutter addresses technology in your personal life. For work communication technologies, read "A World Without Email," "Slow Productivity," and "Deep Work." The symptoms are similar, but the causes and responses differ.

You're looking at your phone and social media too much because massive attention economy conglomerates produce apps designed to generate that response and monetize your attention. You check your email frequently not because it profits someone, but because we've evolved a hyperactive hive mind style of on-demand, digitally aided collaboration in the workplace. We check our email to keep multiple timely conversations going, ensuring things unfold in a timely fashion.

It's about replacing this collaboration style with less communication-dependent methods. I sold "Digital Minimalism" and "A World Without Email" together. One editor thought they should be combined, since both deal with looking at screens too much. I was clear that they shouldn't be combined because they are so different. The causes and responses are different, even if they seem similar.

The only commonality is screens and looking at them too much.

Overloaded? Do less, achieve more. 🚀💡

The responses are so different that the two couldn't be one book; they're two fully separate issues. I argued that point strongly, and we decided to keep the books separate.

It was originally supposed to be the other order. "A World Without Email" was intended to be the direct follow-up to "Deep Work." But the issues in "Digital Minimalism" became so pressing so quickly that I had to prioritize writing that book first. That's why "A World Without Email" did not directly follow "Deep Work."

In 2017 and 2018, issues surrounding our phone, social media, and mobile technology really took off. I had just written "Deep Work" and was thinking about what to write next. The very next idea I had was "A World Without Email," which was essentially a response to the question, "Why is it so hard to do deep work?"

In "Deep Work," I didn't delve too much into why we're so distracted. I just emphasized that focus is diminishing but is still important, and here's how you can train it. After that book was written, I started to explore why we got to the point where we check email 150 times a day. It's a long book, and I wondered who thought this was a good idea. It turned into its own sort of epic investigation.

I really like that book. It didn't sell as well as "Digital Minimalism" or "Deep Work" because it's less about shifting to a new lifestyle right now. It's much more critical and explores how we ended up in this place. It offers solutions, but they are more systemic. There's no easy fix you can implement as an individual.

Intellectually, it's a very important book and has had influence in that way. However, it's not a million-copy seller like "Atomic Habits." "Atomic Habits" is easier to read than "A World Without Email," and I say that with confidence.

Alright, what do we have? This question is from Hanzo. "I work at a large tech company as a software engineer and I'm starting to feel really overwhelmed by the number of projects getting thrown at us. How do I convince my team that we should say no to more projects when everyone has their own agenda, like pushing their next promotion?"

This is a great question for the corner because the whole point of the slow productivity corner segment is to ask questions relevant to my book "Slow Productivity." As we announced at the beginning of the show, it's the number one business book of 2024 so far, as chosen by Amazon editors. This is appropriate because I have an answer that comes straight from the book.

In chapter three of "Slow Productivity," where I talk about the principle of doing fewer things, I have a case study that I think your team should consider, Hanzo. It comes from a technology group at The Broad Institute, a joint Harvard-MIT genomics research institute in Cambridge, Massachusetts. It's a large, interdisciplinary operation with many sequencing machines.

I profile a team that worked at this institute. These were not biologists; it was a team that built tech stuff that other scientists and people in the institute needed. You'd come to this team and say, "Hey, could you build us a tool to do this?" It was a bunch of programmers, and they had a very similar problem to what you're describing, Hanzo.

Ideas would come up, some from their own team, some suggested by other stakeholders like scientists or other teams in the institute. They'd say, "Okay, let's work on this. You do this; I'll do that. Can you do this as well?" People were getting overloaded with all these projects, and things were getting gummed up.

If you're working on too many things at the same time, nothing makes progress. You put too many logs down the river, you get a logjam, and none of them make it to the mill. They moved to a relatively simple, pull-based, agile-inspired project management workload system.

Whenever an idea came up, they'd put it on an index card and stick it on the wall. When someone had the capacity, they'd pull a card into their own column on the board. Each person could only have a couple of things in their column at a time, which prevented them from working on too many things at once. This eliminated the logjam problem.

If a project didn't get pulled over after a month or so, they'd take it off the wall. They realized they needed transparent workload management. You can't just push things onto people's plates in an obfuscated way and try to get as much done as possible.
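For a team of programmers like Hanzo's, here is a minimal sketch of that pull-based, work-in-progress-limited idea expressed in code. The PullBoard class, the project names, and the limit of two are invented for illustration; the actual team used index cards on a wall.

```python
from dataclasses import dataclass, field

WIP_LIMIT = 2   # illustrative: each person pulls at most this many active projects

@dataclass
class PullBoard:
    backlog: list[str] = field(default_factory=list)              # index cards on the wall
    active: dict[str, list[str]] = field(default_factory=dict)    # person -> pulled projects

    def propose(self, idea: str) -> None:
        self.backlog.append(idea)    # ideas go on the wall, not straight onto someone's plate

    def pull(self, person: str, idea: str) -> bool:
        """A person pulls a card only when they have capacity; otherwise it stays on the wall."""
        mine = self.active.setdefault(person, [])
        if idea in self.backlog and len(mine) < WIP_LIMIT:
            self.backlog.remove(idea)
            mine.append(idea)
            return True
        return False

board = PullBoard()
board.propose("Build sequencing dashboard")
board.propose("Automate sample intake")
board.propose("Refactor reporting tool")
board.pull("Hanzo", "Build sequencing dashboard")
board.pull("Hanzo", "Automate sample intake")
print(board.pull("Hanzo", "Refactor reporting tool"))   # False: over the work-in-progress limit
```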

Trust your friends' recommendations over algorithms. 🌐🤝

Work that's waiting to be done needs to exist separately from any individual's obligations, and we need to be very clear about how many things each individual should be working on at the same time. You need some version of this vaguely Kanban, agile-style, pull-based workload management system; it can be very simple. Read the case study in chapter three of "Slow Productivity" for the details; it will point you toward a Harvard Business Review article with an even more detailed case study of this team. Send that around to your team, or send my chapter around. Advocate for this approach, and I think your team is going to work much better.

Let's hear it. "Hey Cal, Jason from Texas, longtime listener and reader, first-time caller. For the last couple of episodes, you've been talking about applying the distributed trust model to social media. I'd like to hear you evaluate that thought in light of BJ Fogg's Behavior Model: for an action to take place, motivation, prompt, and ability have to converge. I don't see a problem with ability, but I'm wondering about the other two. People are going to need significant motivation, and what is going to prompt them to go look at those five sources? I think if those two things can be solved, this has a real chance. One last, unrelated note: somebody was asking about reading news articles. I use Send to Kindle; I send them to my Kindle and read them later; works for me."

It's a good question. What's key here is separating discovery from consumption. The consumption problem is: once I've discovered, let's say, a creator I'm interested in, how do I then consume that person's information in a way that isn't insurmountably high friction? We've had solutions to that before; this is what RSS readers were. If I discovered a syndicated blog that I enjoyed, I would subscribe to it, and that person's content would be added to a common list of content in my RSS reader. This is what, for example, we currently do with podcasts: podcast players are RSS readers. The RSS feeds now describe podcast episodes instead of blog posts, but it's the exact same technology. It's not a centralized model like Facebook or Instagram, where everything is stored on the servers of a single company that makes sense of all of it and helps you discover it. You have an RSS feed, and every time you put out a new episode, you update that feed to say, "Here's the new episode, here's the location of the MP3 file, here's the title, here's the description." I think video RSS is going to be a big thing that's coming; you could make really nice readers for it.
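Since a podcast player really is just an RSS reader, here is a minimal sketch of what such a feed looks like and how little code it takes to read one. The feed contents and the list_episodes helper are invented for illustration and use only Python's standard library.

```python
import xml.etree.ElementTree as ET

# A minimal podcast-style RSS feed: each <item> announces one episode, the location
# of its MP3 file, a title, and a description, as described above.
FEED_XML = """
<rss version="2.0">
  <channel>
    <title>Example Podcast</title>
    <item>
      <title>Episode 1: Hello World</title>
      <description>An example episode.</description>
      <enclosure url="https://example.com/ep1.mp3" type="audio/mpeg"/>
    </item>
  </channel>
</rss>
"""

def list_episodes(feed_xml: str) -> list[dict]:
    """A podcast player is, at heart, an RSS reader: it parses items like these."""
    root = ET.fromstring(feed_xml)
    return [
        {
            "title": item.findtext("title"),
            "description": item.findtext("description"),
            "mp3_url": item.find("enclosure").attrib["url"],
        }
        for item in root.iter("item")
    ]

print(list_episodes(FEED_XML))
```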

So consumption is fine; we still have something like RSS. The harder question is discovery: how do I find the things to subscribe to in the first place? This is where distributed trust comes into play. How did I used to discover a new blog to read? Typically through these distributed webs of trust: I know this person; I've been reading their stuff; I like their stuff; they link to this other person; I trust them; so I follow that link; I like what I see over there; and now I subscribe to that person.

It also helps with disinformation and misinformation. I argued this early in the pandemic; I wrote an op-ed for Wired where I said the biggest thing we could do for both the physical and mental health of the country right now would be to shut down Twitter. What we should do instead is go back to an older Web 2.0 model, where information is posted on websites (blogs, articles) and distributed webs of trust make it much easier for people to curate the quality of information. Webs of trust work very well for independent voices, and they're very useful for critiques of major voices.

Plan your week in advance to stay in control and thrive! 📅💪

We're doing it right now in some sectors of online content, and it's working great. Podcast players don't use algorithms to show us what podcasts to listen to; podcasts don't spread virally and simply catch our attention. We have to hear about a show, probably multiple times, from people we trust before we go over and sample it. That's a distributed web of trust. Email newsletters operate in the same way: someone you know and trust forwards you a newsletter, saying "you might like this"; you read it, and you say, "I do, and I trust you, so now I'm going to consider subscribing." That's the essence of webs of trust.

We should return more to distributed webs of trust. Recommendation algorithms are useful, but I think they're more effective in environments without user-generated content and feedback loops. They work well on platforms like Netflix or Amazon, suggesting books or shows based on your preferences. However, when you hook them up to user-generated content and popularity feedback, they evolve the content in undesirable ways, leading to negative externalities. Distributed webs of trust, I believe, are the way to discover information, and hopefully they're the future of the internet.

I love your idea, by the way, of using Send to Kindle. It’s a cool app where you can send articles to your Kindle and read them later, free from ads, links, rabbit holes, and social media distractions. It’s a beautiful application; I highly recommend it.

Next up, a case study: I was struggling with a large client load, especially with one organization that didn't align well with my communication style and work values. This is where I started leveraging our division's work plan site, structuring it in terms of what I was working on during any given week: itemizing my recurring calls, office hours with clients, and a general estimate of how much time I would spend on work for each client.

The division had recently rolled out a work plan site for employees to plan out their weekly hours in advance. The issue here was that it was communicated as a requirement, so most of us saw this as upper management micromanagement. The site itself was also unstructured, so we didn’t see the utility, as we already logged our time retroactively anyway.

At this point, I had already read "Deep Work" and was using the time-block planner, but I lacked a system for planning at the weekly time scale. The work plan site became that system: along with my weekly work items, I included sections for a top priority list and a pull list backlog.

I also added a section to track completed tasks, giving me a visual sense of progress as the week went by. After making this weekly planning a habit, my team lead highlighted my approach at a monthly team meeting. We presented on how I leveraged the tool for managing my work effectively.

I spoke about how this helped me organize my week, allowing me to take a proactive approach instead of constantly reacting to incoming emails and team messages. This case study underscores that there are alternatives to what I call the "list reactive method": reacting to whatever comes up each day and trying to make progress on a to-do list. That approach isn't very effective.

You get caught up in lower-value tasks, lose focus, and fall behind on high priorities. You have to be more proactive about controlling your time. Control is a big theme in how I talk about thriving in digital-age knowledge work.

Weekly plan discipline can be a big part of that answer. Look at your week as a whole and decide what you want to achieve. Identify when your calls and client office hours are and consolidate tasks around those times. Cancel tasks that make the rest of the week unworkable. Planning your week in advance helps you have a better week than if you only focus on daily tasks.

Multiscale planning is critical for maintaining control and rhythm. It’s the only way to survive in digital-era knowledge work. Weekly planning helps you feel like you have some autonomy over your schedule once again.

Stop fake productivity; focus on real results! 🚀📈

We ended up with pseudo productivity, using visible activity as a rough proxy for useful effort, not because our bosses are mustache-twirling villains or because they're trying to exploit us, but because we didn't have a better way of measuring productivity in this new world of cognitive work. There are no widgets I can point to, no pile of Model Ts lined up in the parking lot that I can count. So what we did was say, well, seeing you in the office is better than not: come to the office, do factory-style shifts, be here for eight hours. We had this crude heuristic because we didn't know how else to manage knowledge workers.

Pseudo productivity became a problem once we got laptops and then smartphones and the mobile computing revolution arrived. Now pseudo productivity means every email I reply to is a demonstration of effort; every Slack message I reply to is a demonstration of effort. This was impossible in 1973 and completely possible in 2024. This is what leads to things like a piece of software that artificially shakes my mouse, so the circle next to my name in Slack stays green longer and shows more pseudo productivity.

The inanity of pseudo productivity becomes pronounced and almost absurdist in its implications once we get to the digital age. We have to replace pseudo productivity with something that's more results-oriented and that plays nicer with the digital revolution. Slow productivity gives you a whole philosophical and tactical road map to something more specific. It's based on results, not activity. It's based on production over time, not on busyness in the moment. It's based on sequential focus and not on concurrent overload. It's based on quality and not activity.

It's an alternative to the pseudo productivity that's causing problems like this mouse-jiggler issue. New technologies require us to finally do the work of really updating how we think about knowledge work. It's also why I hate that status light in Slack or Microsoft Teams. The implicit message is that if you're at your computer, it's fine for someone to send you a message. But why is that fine? What if I'm doing something cognitively demanding? It's a huge cost for me to have to turn my attention over to your message.

The specific tools we use completely disregard the psychological realities of how people actually do cognitive work. We have such a mess in knowledge work right now; digital-age knowledge work is a complete mess. That gives us a lot of low-hanging fruit to pick, which will yield advantages, delicious advantages. Broad pseudo productivity plus technology is an unsustainable combination.

I'll be in my undisclosed mountain location. The shows will be otherwise normal. See you next week and until then, as always, stay deep.

Watch: youtube.com/watch?v=OvlfCW3Ec1g