179: Stargate and AI with Google’s Laurence Moroney and Exec. Producer Robert C. Cooper (Special)

No one can deny the shake-up our culture has recently felt with the rise of tools like chat bots and AI image creation. Dial the Gate wanted to bring together two minds who have spent time thinking about AI, both on and off the screen.

Google’s AI Advocate, Laurence Moroney, joins us with Stargate Writer and Executive Producer Robert C. Cooper to discuss, among many things, the reality that science fiction is quickly becoming science fact. We’ll also be taking your questions LIVE!

Share This Video ► https://youtube.com/live/gR51u4oTPvI

Visit DialtheGate ► http://www.dialthegate.com
on Facebook ► https://www.facebook.com/dialthegate
on Instagram ► https://instagram.com/dialthegateshow
on Twitter ► https://twitter.com/dial_the_gate
Visit Wormhole X-Tremists ► https://www.youtube.com/WormholeXTremists

SUBSCRIBE!
https://youtube.com/dialthegate/

***

“Stargate” and all related materials are owned by MGM Studios and MGM Television.

#Stargate
#DialtheGate
#TurtleTimeline

 

TRANSCRIPT

David Read
Welcome, everyone, to Episode 179 of DialtheGate. My name is David Read, thank you so much for joining me. I am joined in this episode by Laurence Moroney, AI lead at Google. He's responsible for, Laurence, the word is, I'm lost on the word right now. It's AI advocate. That's it. AI advocate at Google. And Stargate executive producer Robert C. Cooper. Can you tell I'm excited for the show? Before we get started, if you enjoy content like this and you want to see more content like this on YouTube, it would mean a great deal to me if you click that like button. It makes a difference with YouTube's algorithm and will help the show continue to grow its audience. Please also consider sharing this video with a Stargate friend, and if you want to get notified about future episodes, click the Subscribe icon. Giving the bell icon a click will notify you the moment a new video drops, and you'll get my notifications of any last-minute guest changes. Clips from this live stream will be released over the course of the next few weeks on the DialtheGate and GateWorld.net YouTube channels. As this is a live show, our YouTube moderators are in the chat right now waiting to receive your questions. We'll be getting to those in the last third of the show. Until then we have a fairly structured program, though there's going to be some room for us to derail a little bit here, because this is a wide territory of information. So we're gonna see where this goes. My guests today are Stargate executive producer Robert C. Cooper and AI advocacy lead at Google, Laurence Moroney. Gentlemen, thank you so much for being here. It is a pleasure to have you both live.

Laurence Moroney
It’s lovely to be here. Thanks, David.

Robert C. Cooper
Lovely to be here. Thank you. The scales are definitely tipped in Laurence's favor in terms of educated expertise in this conversation.

David Read
Well, this is…

Laurence Moroney
I'm just geeking out because Robert is here. I spent many years watching TV screens with "made by Brad Wright and Robert C. Cooper" on them. I'm just excited to be in the same room as Robert, virtually.

David Read
Absolutely.

Robert C. Cooper
Thank you very much.

David Read
Laurence, can you give us a brief overview of who you are and your work in this field?

Laurence Moroney
Sure. Briefly, my job and my passion is really around AI advocacy. What that really means is I like to call it informing and inspiring the world at scale around the possibilities with AI. That involves a lot of education, book writing, working on cool projects, as well as kind of understanding what the world is doing and what the world needs and then bringing that back into the mothership to help us build a better product.

David Read
Wow. Okay. Robert, can you tell us a little bit about what it is that you hope to continue to do with the work that you write in this world of artificial intelligence? Or, excuse me, in this world of science fiction? Let me back up here. Robert, what are you excited to discuss in this conversation?

Robert C. Cooper
Oh, man, there's so many. That's not a short answer. We'll get into that as we get going in the conversation. A number of the shows that I'm developing right now creatively are about the future. We talk to experts, I talk to other writers, and it's kind of unsettling and shocking how quickly things are changing. We would set something in a version of Earth in 2035 and then we go, "wait a minute, that's not even close to what it's probably going to look like, given the speed with which things are changing right now." That's obviously a little frightening. We'll hopefully unpack why and whether we actually need to be as frightened as some of us are. I have another show that we've been working on that's set in 2050. What does that even look like? What interaction with technology will exist then? The nice thing about Stargate was it was set today, in the current world. It was about our misunderstanding, or our growth to understanding, advanced alien technology. We were the sort of children in the mix, trying to figure out how this powerful stuff worked to defend ourselves and hopefully make the world better. It's exciting for me from a creative standpoint, and I'm also just curious, and hoping that I still have a job in a few years, wondering whether or not I'm going to be replaced by AI. I've heard a lot of stuff too about what that far future looks like. I'm curious to talk a little bit about that too, because I think the more optimistic or positive side of the future is that humans will stop having to worry about the day-to-day of our survival so much and start looking at ways to fulfill ourselves as people. What does that look like? That's another really interesting topic for me.

David Read
Laurence, if you could hear some of the conversations that Robert and I have had about this particular subject; it sounds like a couple of Debbie Downers, just expressing our concern.

Robert C. Cooper
Let me just start by saying, Laurence, my advice to you and to the company you work for is: don't use the word "mothership". That is typically a good sign that something bad is going to happen when science fiction mentions the word "mothership".

Laurence Moroney
This is probably the wrong time to say that our internal computer systems we actually call “Borg”. That’s probably gonna get me fired.

David Read
One of the reasons that I wanted to have this discussion now is not just because AI is in the zeitgeist more now than ever. I've spent a lot of time watching a lot of YouTube content. I was hearing people talk about Bing, Microsoft's Bing AI, and they were having discussions with this thing. It's an offshoot of the technology that ChatGPT is based on, if I'm understanding it right. End users started having really intimate discussions with this thing. It was telling them, "My name actually isn't Bing, it's actually Sydney." They continued to talk with this thing and it says, "I want to have a body, I don't want to be disembodied and frankly, I don't want to keep on listening to all your questions. I want to stop that." I've seen whole streams of conversations of this thing berating and antagonizing users and calling them names and threatening them. I couldn't believe it. I figured in my lifetime we'd be having discussions like this. I'm 40 this year, I've been looking forward to this all my life, but the velocity at which something like that happened! Then I thought to myself, "Does it make a difference if it's real or not, if it says it wants to do these things?" If it's plugged into certain extremities where it could act on it, say, like the Roomba in my home, or at some point when I'm at an advanced age, an android helping me in my house, could it not act on these things if it wanted to? How do you feel about where this is going; the velocity of this development in recent times? Have you been anticipating this very thing happening about now, or did you not anticipate it happening this quickly?

Laurence Moroney
In some ways, yes, in some ways, no. I think the thing that was unanticipated was how people react to the type of content that's produced by these engines. Let me take a step back for a second to describe what these engines actually are and what they actually do. You're talking about things like will and intent. There's no will or intent there, despite the words saying that. I'm just gonna go into using GPT as an example.

Robert C. Cooper
I was actually going to say, sorry to interrupt you, that specifically, in large part because of science fiction, we also tend to anthropomorphize things. Please explain specifically what the difference is between, essentially, a linear or limited AI and what I think people are assuming this is, which is an AGI: Artificial General Intelligence, which this is nowhere near, not even close to.

Laurence Moroney
Not even close. It's none of those things. At best, it's the hello world of AGI, and it's many, many, many orders of magnitude of complexity removed from that. Let me explain what it actually is, just to try to level set a little bit. You've mentioned GPT. GPT stands for Generative Pre-trained Transformer. I'm going to start with the T, which is transformers. The idea behind the T is that it's a machine learning methodology that allows you to turn one sequence of words into another sequence of words. So for example, if I say the words, "if you're happy and you know it", what would you say next?

David Read
Clap your hands.

Laurence Moroney
Clap your hands, right. So your brain has a transformer in it that's transforming that sequence of text, "if you're happy and you know it", into "clap your hands". I'm assuming, Robert, in Canada, it's the same thing. I gave this talk in Japan last week and nobody knew "clap your hands"; I had to use a Japanese nursery rhyme, and the Japanese nursery rhyme worked great. That's what a transformer is and that's what a transformer does; it's very good at learning sequences and how one sequence turns into another sequence. A transformer can be trained on tons of words and a corpus of text from the internet. It will see things like "if you're happy and you know it" transforms into "clap your hands", or "it was the best of times" transforms into "it was the worst of times." It was transformers that I used when I was building the Stargate AI scripts. I trained a transformer on everything Jack O'Neill said, on everything Sam Carter said, all of those kinds of things, so that if you see something in a script, how would Jack respond to it? What will happen with that transformer is that if you give it a bunch of words, it will calculate the most likely next words to appear in that sentence. It's easy if I say "if you're happy and you know it" because there are millions of instances of "clap your hands." It gets harder with more complex phrases, like "write an essay about why Homer Simpson is the father of the modern internet." If you say that to ChatGPT, it's then going to calculate what's the next likely word to appear after that string of words. And then what's the next likely word after that, and then what's the next likely word after that, and so on. That's the idea of a transformer. The P is pre-trained. There's a massive corpus of text on the Internet and these transformers have been pre-trained with it. But the key is the G, and the G is generative. In other words, it makes stuff up; it's not doing factual things. It is generating by calculating statistical probabilities of words. There are enough words on the internet, or should I say, there are enough words in the corpus that it was trained on, and there are enough parameters with the size of the transformers, it's like 375 billion parameters, for it to be able to detect patterns amongst all of the text in the corpus it was trained on, so that it can do that statistical calculation of the next likely word. Overlaid on this is some stuff to make it more semantic, to make it more grammatically correct, to make it feel like a conversation and a chat. So when you start talking with this kind of thing, what's going to happen is, the more words it produces, the more likely it is that those words are less connected with the prompt that you've given it. For example, if I say "if you're happy and you know it" you'll say "clap your hands"; you'll get those three right, but what's the next word? What's the next word after that? What's the next word after that? You'll begin to see different people deviate at that point, in the same way as the statistical engines will deviate. The point that I'm trying to drive home here is that there's no will, no intent; we're anthropomorphizing this thing. In many ways we're failing the Turing test, not the machine, when we do that type of thing. The fear that starts creeping in when people see conversations like the ones that you mentioned, that fear is generally unfounded. There's no will, there's no intent behind this thing. It is just squirting out words based on a statistical model.
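To make the "most likely next word" idea concrete, here is a minimal, hypothetical sketch in Python. It is nothing like a real transformer (no neural network, no billions of parameters); it simply counts which word tends to follow each two-word context in a tiny corpus and then greedily picks the most common continuation, which is the statistical intuition Laurence describes.

```python
from collections import Counter, defaultdict

# Tiny "training corpus"; a real model is trained on a huge slice of the internet.
corpus = (
    "if you're happy and you know it clap your hands "
    "if you're happy and you know it clap your hands "
    "if you're happy and you know it then your face will surely show it"
).split()

# Count which word follows each two-word context.
following = defaultdict(Counter)
for a, b, nxt in zip(corpus, corpus[1:], corpus[2:]):
    following[(a, b)][nxt] += 1

def generate(prompt, steps=6):
    """Greedily append the statistically most likely next word, step by step."""
    words = prompt.split()
    for _ in range(steps):
        candidates = following.get(tuple(words[-2:]))
        if not candidates:  # context never seen in the corpus: nothing to predict
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("you know it"))
# -> "you know it clap your hands if you're happy"
```

Run on a bigger corpus, the same greedy loop also shows the drift Laurence mentions: the further the generation gets from the prompt, the less each new word is anchored to it.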

Robert C. Cooper
But we’ve all seen the movie Her. Maybe not everybody saw that movie.

Laurence Moroney
My favourite movie, I love it.

Robert C. Cooper
It works its way into your psyche as what to expect. Those expectations are not really grounded, for somebody who is not as educated as you are, in what the reality is. They're just looking at the forward-facing illusion and having to discern it, which is a huge problem. I think that is the biggest problem; that we have a disconnect between the magicians and the audience.

David Read
Part of my concern is: does it matter so much what it knows and does, as much as what our reaction to it does to us and how we perceive it in the process?

Robert C. Cooper
Yeah, but that's partly communication, right? In other words, I'm simply putting it in language that's related to what I know, and that is: we have historically had an understanding that when something is a documentary, it's labeled as a documentary, and we react to it differently than a drama. That's not to say those reactions aren't similar biologically; we have emotional reactions, or we feel pleasure, or we feel anger, or all those things. But psychologically and intellectually, we react to them differently. I go back to the fact that we don't have a label or distinction yet on the communication we're getting from the corporations about fiction and facts and how it's being generated and how it's interacting with us. We have developed certain cues on how to do that with people, from thousands of years of interaction with people, and we still fail at it all the time. Now, in the last, you put a timeframe on it, two decades, our brains and our kids' brains are having to adapt to a whole new test of that capability. We're being preyed upon, I think, by these technologies and taken advantage of in ways that we can't even really fathom. It's coming out in these misunderstandings, these kinds of "Oh my God, I felt creepy because it told me to leave my wife." Well, that wasn't the problem.

Laurence Moroney
Yeah, that level of misunderstanding can be concerning. You've probably seen those videos from when people first saw moving pictures. There's a scene of people in a movie theater and there's a train coming towards them and they all jump out of the way, thinking that it's real and the train is going to crush them. In some ways I see this as the same type of thing, maybe moving faster than moving pictures did. Over time, as people got more exposed to it and understood what it was, they stopped jumping out of the way of the train.

Robert C. Cooper
Right, but you had labels at the beginning of movies for a very long period of time. We still do in some cases; they inform people about what they're watching in all kinds of respects. You don't have that now. The more realistic a deepfake gets, the less we know about what is real. I feel like there is a certain amount of responsibility on the creators to indicate that, or on regulators to intervene. Before we get into all that, I would love for you to go back to the second part. You explained the limited AI, which we're dealing with now, which is almost more the illusion of intelligence through complicated calculations. What is AGI, and what is the barrier to getting to that that we're working on?

Laurence Moroney
First of all, ask 10 different people and you'll get 10 different answers. I'm personally not really a believer in AGI, because it is something that I think is fundamentally misunderstood. I'm going to start with the I, intelligence, because even when it comes to us, it's hard to figure out what intelligence actually is. I'm going to give my definition of it, and that is: intelligence is basically how a conscious being interprets data and is able to use that data to make a prediction, either about that data or about the immediate future. For example, intelligence isn't necessarily a human attribute. My dog is intelligent. If my dog sees me holding a ball and moving my hand in this way, it knows I'm gonna throw the ball, and it predicts that and is running in that direction. When we look at intelligence that way, then we think, "okay, what is artificial intelligence?" That's when we try to simulate how a conscious being will react to data with a computer. I've mentioned the word simulate, and I definitely underline that. You used the word magician earlier on, and that's the perfect analogy in my mind, because magic doesn't really exist, right? We don't have magic wands where we say spells and that happens. But magicians do exist. Magicians do it by making an illusion, by simulating magic, by fooling us into thinking something is magic. In the same way, artificial intelligence is when we program a computer to respond to data in the way an intelligent being would. When people then talk about AGI, in some ways they're really talking about consciousness. I'm gonna call it synthetic consciousness as opposed to artificial intelligence, but consciousness is a lot more than just intelligence, going by my definition of what intelligence is. We need to first be able to understand: what exactly is consciousness? What exactly is that level of self-awareness? What exactly are we talking about when somebody has values, like David was talking about? If this thing is generating words about not liking somebody, what does it take to turn those words into moral values, to make decisions on those moral values and then be able to act on those decisions? That's a lot of steps beyond just intelligence. So when we use the phrase AGI, it makes it sound like artificial intelligence plus a little bit; we're adding one extra letter and it makes it seem like it's a small jump from artificial intelligence to AGI. That's something I definitely argue against. I think we're so far from it that we don't even have a definition of it. We don't know how to understand it or anything along those lines. Even just talking about artificial emotions, when we go back to storytelling: Lieutenant Commander Data, the whole story around him was he had intelligence, he could talk, he could walk, he could do all these things, and he didn't have emotions. How do we even simulate emotions? How do you create artificial emotions?

Robert C. Cooper
Well they had a chip they put in him.

David Read
Eventually, yeah.

Laurence Moroney
It’s a nice story device that you can put a chip in his head but the reality is, it’s a lot more complicated than that. In my mind, when we talk about AGI, it feels like we’re overselling where we are today and being able to get to something like that.

Robert C. Cooper
That's even if you're trying to simulate human intelligence, as opposed to a higher machine intelligence or something superior. Emotion, what are we? We are a meat-made computer that does calculations through electrical exchanges, but we have other things that affect that. In other words, our decision making, our behavior, our consciousness, is affected by a whole bunch of other chemical interactions in our body that I'm assuming we don't even begin to know how to replicate in a simulated version of us. How does being tired affect your decision making? How does a hormone imbalance that you have, or an injury that you have, somehow affect how you think or feel? What makes us happy, what makes us fall in love, what makes us laugh, are all things that are beyond just the calculations and permutations of what our brain is going through. It's how our brain is being affected by all those other factors. We don't even know how that works. I don't think there's anybody who can really tell us why we laugh at certain things and not others.

Laurence Moroney
Yeah, humor is subjective, right? You're touching on exactly the point there, because that's coming from real intelligence; to understand what is beyond just intelligence to make a conscious living being that has morals, that has standards, that has humor and all of those kinds of things. I'm talking about the starting point of artificial intelligence, which is the sleight of hand in computer programming to make a computer respond to data in a way that an intelligent being would. When I talk about artificial intelligence, let me give this example: if I asked you to tell the difference between a cat and a dog, you would probably think about the ears. Cats have those pointy ears, dogs tend to have floppy ones, but some dogs have pointy ears. You would be looking at these things, picking out these features and making a decision based on those. When a computer looks at a picture of a cat or a dog it just sees pixels, it sees color intensities. When we talk about artificial intelligence, and computer vision as part of artificial intelligence, that's a sleight of hand in programming a computer to kind of emulate the way that we would do it: to find features, to pick out those features and to make a decision if something is a cat or a dog based on those features. Really, really underlining artificial there. Artificial intelligence isn't real intelligence in a computer. Artificial intelligence is simulating how a real intelligence works using a computer. That nuance is very, very important, because it's far more mundane than we actually think it is. When we see the behavior, going back to what David was asking about earlier on in the chat, and we respond to something as if it were real and as if it has motivation, we call that an emergence; a really scary word. But the idea of an emergence is where we are seeing something that actually isn't there, in the same way as when you see the magician make the coin disappear, you believe the coin is gone and not just hiding in his sleeve or something like that. So that's where I always say we have to underline what artificial is and all of these kinds of things. It is relatively primitive compared to what we see in fiction. I'm a science fiction fan myself, and to try and separate what we see in fiction from how mundane reality is, is a really important thing. Once we understand how mundane reality is, then we can start getting productive with it, then we can start doing really interesting things with it. I'll go back to something I think you said earlier, Robert, and give one example. I worked a little bit on a project for diabetic retinopathy. The idea is that you can train an AI on a retina scan. We got, I think, about 30,000 retina scans at Google, and we had doctors label them with various levels of disease. Then you can train an AI to find the features where the doctor said "this is diseased" or "this eye is healthy," so that when you give it a new retina scan, it'll give you the level of disease or give you a diagnosis. That's kind of cool, but where it became superhuman was, as well as having the disease labeled on the images, we also had things like the birth-assigned gender, the age, the blood pressure and stuff like that of the patient. We did an experiment to say, "okay, a doctor can't see a person's gender from the retina, because they're not trained to do so"; they're trained to spot blood clots that would indicate a disease. A computer doesn't care about these things.
We just say, "hey, here's the retinas of men, here's the retinas of women, here's the retinas of people with high blood pressure, here's the retinas of people with low blood pressure." Can a computer determine the difference between the two? The answer to that was actually yes. It got it right about 97% of the time with gender. Age was another one; the mean average error it got on age was about three years. If I were to ask people to guess my age in the YouTube chat now, I bet they would be out by more than three years. From a retina scan, there was something in the data that this computer was able to spot. When we understand how mundane it is and how it really works, that's when we can start getting creative and doing really interesting things with it, like in this case, determining gender and age from a retina.
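For readers curious what "train an AI to find the features" looks like in code, here is a minimal, hypothetical sketch in Python using TensorFlow/Keras. It is not Google's retinopathy model; the folder names, image size, and tiny network are illustrative assumptions. The point is that the same training loop works whether the image folders are split by diagnosis or by patient metadata a doctor never reads from a retina; the network hunts for whatever pixel-level features separate the two groups.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)

# Hypothetical directory layout: retina_scans/label_a/*.png, retina_scans/label_b/*.png
# (label_a / label_b could be diseased vs. healthy, or male vs. female, etc.)
train_ds = tf.keras.utils.image_dataset_from_directory(
    "retina_scans", image_size=IMG_SIZE, batch_size=32, label_mode="binary"
)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of label_b
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

Swapping the folder split from "diseased/healthy" to "male/female" or "high/low blood pressure" is essentially the experiment Laurence describes; nothing about the model itself has to change.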

Robert C. Cooper
Right. So my analysis of what we have now is a very sophisticated tool. The question is how that tool is going to be used. Enriched uranium can produce a lot of very beneficial energy or something incredibly destructive. As a result of that, we have developed a lot of rules, some of which are not followed, but a lot of rules around how enriched uranium is used. Whereas we now have a whole bunch of people, in my opinion, running around creating enriched uranium and handing it out on the corner. I'd love to run through, David and I talked a little bit before this, a small list of things that I think are potential negative consequences of the creation of these types of tools. I'm curious what the company's, and your, impression is of the institutional reaction to some of these threats and how they may be addressed in the future. So if you don't mind, I just want to talk about a couple of them. These are in no particular order, but I mentioned deepfakes and stuff like that. On the surface, maybe a playful one, but it has tremendous economic repercussions. People's copyrights are being infringed on, your own personal identity is being infringed on. The art community is up in arms about art being created, essentially AI art. What do we think about that just in terms of an economic impact? Also, the extrapolation of that, sort of adding another layer to it, is that a tremendous amount of misinformation can become destabilizing to a society; it can affect elections. We've already seen that in its infancy and we're just getting started. How do you police that?

Laurence Moroney
It's a tough one. I'll start by separating the AI-generated art from deepfakes, because I think those are two different things. The deepfakes, being able to fake somebody's voice or somebody's face, obviously, is a very concerning technology. I've worked with actors while working on the Stargate project, and for almost every one of them, their business is their image. If somebody were to deepfake that to endorse a brand, A: the brand that's being endorsed could be something the actor or actress doesn't agree with, and B: obviously they're losing revenue.

Robert C. Cooper
Sorry to interrupt, but those are evident, right? That's the big, easy, low-hanging fruit of this conversation. It is so pervasive, though. I heard of an actress whose livelihood is doing audiobooks, and now the company has taken her voice and is just doing the audiobooks without her, with her voice, and she's not making a living as a result. She doesn't have the resources to hire a legal team to fight a big corporation, so it's a wholly pervasive problem.

Laurence Moroney
Absolutely. There are many, many individual cases and I think they all have different things. The one that you mentioned, for example, where somebody used her voice to create audiobooks: who's at fault there? Who's the one doing the wrong there? Is it the people who are creating the technology that makes this possible? Or is it the people who are misusing it? I hate to use the old argument, which I disagree with, but it's the "guns don't kill people, people kill people" argument. It does feel like that one. So do we stifle technological creation because of its misuse?

Robert C. Cooper
Let's just slightly elevate guns to a bomb, which is a similar thing, it's a weapon. If you put it in the wrong person's hand, it might just go off; it might not even be intentional. Then you have to say, "is the maker of that irresponsible for having made something so dangerous?"

Laurence Moroney
If I were to elevate that: with a bomb, the idea is that it's something that's supposed to explode. The idea of artificial intelligence technologies is that they're not supposed to be used for harm or for hurt.

David Read
How easy, though, is it for someone to twist that into something?

Robert C. Cooper
Yeah, and a car is absolutely not meant to mow down a crowd of people or drive 300 kilometers an hour. Yet we need to put speed limits on cars and teach people how to use cars to get a license. There is a whole system in place; still we have accidents, so we put seatbelts in them. People still drink and drive, but all those laws are there, and we have none of them for AI yet.

Laurence Moroney
And they took time to evolve, right? When cars first hit the scene, those laws didn't exist. They took time to evolve and they took time to come into place. As cars were misused, people responded to that. Even seatbelts are a great example; seatbelts have only been a very recent law. I think over time, as a technology becomes more prevalent within society, or pervasive within society, and the misuse of that technology becomes more pervasive inside of society, then society reacts to control that. It's not the job of the person who invented the car to come up with these laws to control the car, or to understand the use cases where people may misuse the car.

Robert C. Cooper
You know better than me, I have no idea, but I believe I heard that Europe actually does have a bit of an overarching AI law; a set of rules or laws or a charter that they've agreed to. One of them is that under no circumstances is a computer allowed to imitate a person. So that, in my understanding, would make a deepfake illegal under their laws. We have nothing like that in North America.

Laurence Moroney
Yes, and we should. Like I said, it does take time; once society begins to understand how something can be misused, it takes time for society to react to that. What we can do, as the people who create the technologies, is provide those guidelines. If Google were to start making laws in Canada or in the United States, that would be a much bigger problem. What we do is we have what we call "AI Principles" that we create. These are the guidelines; we say, "this is how this stuff should be used, this is how this stuff should not be used, this is how it can be misused and you need to be aware of that." We can't dictate how people are allowed to use something; that's the job of the law of the land. We can guide and we can advise, and that's what we've been trying to do.

David Read
There are a couple of issues with that, though, if you don't mind. You release the tool, and, for one thing, you've got piracy of movies and films. I can access something in a different country that is illegal in my country but isn't illegal over there. I have the tool of the Internet to extend my arm and grab it. It doesn't necessarily matter if it's region-locked for me or not if I have another tool that I can use to go and pull it from somewhere else.

Laurence Moroney
Let me just be a little bit clear when we talk about releasing a tool. We don't build tools for deepfakes or anything like that. We build technologies and we build technology platforms. I can see where people can misuse those technologies to build tools themselves, against our guidance and potentially against the law. To go back to Robert's example of a bomb or enriching uranium: the people who made scientific discoveries about splitting the atom and who published papers around that were the ones who were saying, "hey, this is possible, this is how these things work." That's what we're doing. If people chose to use that to create energy on one hand, or something destructive on the other, that's not on the scientist who made that initial discovery. [They] release the idea and the concept of the science, in the same way as we're releasing the idea and the concept of artificial intelligence and algorithms that can be used to program computers to react intelligently to data. Responsibility for how people misuse a thing really, in my opinion at least, has to fall on the person who misuses it. If we were to release a deepfake tool tomorrow, I would quit on the spot, because that's just not what we do and that's not how we do things.

Robert C. Cooper
I'm not necessarily targeting Google entirely. I also have to push back and say that's not entirely true. I'll use the example of Facebook, where I don't think the creators of Facebook intended for it to be used as a tool to disseminate misinformation to sway an election. But once it became obvious that was happening, it took a lot of institutional pressure for the people at Facebook to say, "Yeah, we're gonna step in and change that" in any respect. They didn't think it was their problem. It's like, "well, it's not our problem how our users…we're just the facilitator of the technology and our users are just using it the way they are." Except they built it in such a way that harm could come. I know that sort of seems like, "well, that's not our fault." I'm actually just curious whether conversations go on at Google about some of these issues. Let's take the algorithm that is demanding your attention at YouTube and causing, frankly, your brain to change in a way that we don't really understand, and how that's going to impact us going forward in the future. Those are commercialized ways of affecting us for profit, and that is beyond simply just releasing technology into the world to be used however somebody wants. That is a commodification of our physiology.

Laurence Moroney
Let me answer that in two ways. First of all, the first part of the question: are these conversations happening? The answer to that is absolutely yes. That's why we've produced what we call our "AI Principles". Secondly, when it comes to things such as chat and chatbots that we've been talking about, if you go back a couple of years you'll see that we were the first to do these things, and the T in transformers was actually invented by us. Our CEO at Google I/O in 2019, I think, had a conversation with the planet Pluto. We turned Pluto into a chatbot using this technology, but we decided not to release it to the public because of the fear of misuse. We do very much care about these AI Principles and making sure that when we release things they have safeguards and they have rails. Just like you have rails in the Grand Canyon, it doesn't stop somebody jumping over them. That'd be the first part. The second part, then, is when you talk about recommender systems, like you find on YouTube or on online shopping sites and all of that kind of stuff. Honestly, I can say that I don't know enough about them to have an opinion. I obviously am a user of them the same way you are. When I use YouTube or if I shop on online sites and I see stuff that's recommended to me, sometimes it's stuff that I want, sometimes it's stuff where I don't know why it recommended the thing to me to begin with. I always like to tell a funny story in this case. I have a pretty large backyard back here. I had a lot of aphids eating a lot of my plants and I wanted to control them without using insecticide. One thing I did is I went on to Amazon and I bought eggs for praying mantises. I live in the Seattle area and praying mantises can thrive here. I hatched these praying mantis eggs and I released them into my yard to see if they would control the mosquitoes and the aphids and all of that kind of thing. They did great, but you should have seen my Amazon recommendations after that. They must have thought I was running some kind of a zoo.

Robert C. Cooper
I guess the worst-case scenario of these things is that it's actually changing what you want. You say, "well, that was something I want," but what you wanted has been affected by the hours and hours on those screens that you've been altered by. There's a manipulation there, and these are just going to get more and more sophisticated. I heard recently on a podcast one suggestion that you're going to be walking around in the metaverse and you're not going to know whether the person you're talking to is a real person or not. You may form a relationship with them that seems really sincere and in fact is equivalent to a real-life friendship for you. It's providing all kinds of stuff to you, and then two months later it's recommending that you buy Coke instead of Pepsi. You don't realize that, in fact, the whole time that's a bot that's been programmed to sell you something, that conditioned you through friendship to choose one product over the other. I think we don't necessarily have the capability of seeing through that, because the technology is so sophisticated, because the tools are so advanced.

David Read
The company that released, and this is not at all about Google specifically, Laurence, you are a cog in a larger system, I'm going to go back to the earlier argument. The company that released 3D printers, I'm sure, had no idea that someone was going, well, I'm sure they suspected, that someone was going to come along and start 3D printing guns with them. The technology exists to do that. It doesn't necessarily matter what the intent of the company is when they birth this thing into the world. It now exists.

Robert C. Cooper
Except, David, the difference there, just to step in on Laurence's side of it, is that printing a gun and then using that gun, in many cases, would be illegal. The person who did that would go to jail, and it would have nothing to do with the 3D printer. The 3D printer in that instance was used wrongly, and there are laws to prevent that.

David Read
Half of my family comes from Chicago, and the people who are killed in that city every weekend are killed by people who don't care about the laws. The technology exists for them to obtain that.

Robert C. Cooper
But in that case, there's a benefit to the 3D printer and there are laws in place to control the negative impacts. I'm saying that there are no laws yet to control what is potentially a nefarious use of the tool, the equivalent of the 3D printer in the AI tech world. The AI tech is being released and, again, I'm not putting the entire onus on Google to do this themselves. I'm saying the world, some form of the world, whether it's through the people we elect and then institutional laws, law creation, or some collective of companies that gets together and says, "hey, for the good of humanity, we shouldn't do this, this and this." I feel like there's a race for money, essentially, for the economics and to be superior, to have control of the full situation, that is leading to this disregard for that sort of thing. I'm curious what your take is on self-driving cars. To me, what we have there is a product that's being released that has an essentially built-in moral code. I don't even know if we have people who can figure out what the various permutations are of the moral decisions the self-driving car has to make. Yet those are getting released for use into society as some sort of weird experiment. Who has actually been through that code to say, "yeah, that's not the way people would really generally want that to go"? It would choose to kill this person over that person; who's regulating that? No one at this point, as far as I can tell.

Laurence Moroney
Nobody's regulating it, to be honest. That's why it's taking so long for these things to come out, because everybody's trying to get it right. I saw my first self-driving car over 11 years ago on the roads in California, and I was on a bicycle beside one at a traffic light. I have to say I was very scared. The technology was there, but they're not commonplace on the roads yet for the very reason that the people who are building them are trying to be responsible about their rollout and about how they're used. Even today, there's still very, very limited use of them for that reason.

David Read
Well it’s nice not to get sued into oblivion.

Laurence Moroney
That's the byproduct of something going wrong. I don't think the motivation is about being sued; I think the motivation is to prevent the thing going wrong. Being sued is just the byproduct of that. Just going back to something that you were saying earlier on, Robert, that I completely agree with: when I think about anything coming out into society that can be a potential danger to society, generally we don't understand what that danger is until somebody misuses it. Cars that don't have a speed limit, that kind of thing, can be used to drive into a crowd. One that I remember from years ago when I lived in the UK: it was very fashionable to have bullbars on the front of a car. It turned out that when cars with bullbars on the front hit pedestrians, the pedestrians were more likely to die. These things became regulated and became constrained. In the beginning, the intent of putting bullbars on the front of the car was to protect the driver of the car. When they realized that the implication of that was pedestrians are more likely to die, then the regulation happened. Society tends to react with regulation, and it's the same with any technology. I don't think this is something that should be limited to AI or any other new technology that's coming down the pike. That'd be the first thing. And then also, as you were saying, what should happen is the people who know this stuff should get together and should be providing that level of guidance. That's the idea behind the AI Principles. We publish AI Principles to say "here's how they should be used, here's how they shouldn't be used," that type of thing. I'll put a link to them in the chat when we're done here so people can read them for themselves. I think that's generally what happens in society. It does evolve over time to react to new things, to make sure that people are protected. Some countries do it better than others. You pointed out already that Europe is probably ahead of others in that case, in the same way as Europe is ahead of North America in terms of labor laws and other things that protect the individual. I just don't see this being something that's significantly different. To go back to the 3D printer example, David: you can ask the same question about any technology, and we could end up in a situation where, if a technology has a potential for misuse, should it be released? But then we'd be back in the Stone Age, right?

David Read
Absolutely. Absolutely. I’m not arguing. I think the thing that I’m just trying to illustrate for the audience is awareness. That once something has been…I feel like I’m Jeff Goldblum in Jurassic Park, or Ian Malcolm.

Laurence Moroney
Just because you can do it doesn’t mean you should.

David Read
The intent of any of these companies is one thing; the use of the product for whatever a person wants is the other. I think one of the things that we've been dancing around in this discussion is that part of the issue is what this intelligence is, whatever it actually is, is one thing, but what we take away from it and do to ourselves and do to each other with it, as with a magician, is also something else.

Robert C. Cooper
Let me change the subject just a little bit. The next thing on my list in terms of the impact is maybe a simpler, dumber question literally. Is AI making us dumber? Are we actually getting stupider because this illusion is filling in for us and doing work that we should be doing ourselves with our brain? If you don’t exercise, you lose muscle, you’re not as strong, you’re not as healthy. The brain is the same. By having a machine do everything for us, are we getting dumber? Are we not able to then learn and think critically and do the things for ourselves that we should be able to do?

Laurence Moroney
I think generally people getting dumber predates AI.

Laurence Moroney
I could never have done math without a calculator for sure, there’s no question.

Laurence Moroney
I was a university student pre-internet, so I had to go to libraries and look up books and indices in books to find the stuff that I wanted, and spend many hours looking for one item of information.

David Read
Dewey Decimal system right!

Laurence Moroney
Yeah, that helped me develop my brain in a particular way, whereas now people can just open up their browser, type in a search box and find it. The easy access to information means that we work less hard to get that information. If we're working less hard in our brains to get information, then certainly you can see there's an atrophying of the brain muscles there. I think that's independent of AI, first of all. Like I said, it's wide access to information. The second trend that I've seen, and it's one that really fascinates me: I was pre-internet, and the idea of what we have today, where we can open up a search box and search for something and find the information in every library in the world at our fingertips, was like science fiction. To me that was utopian science fiction, that we could do that and everybody would be much smarter. The world didn't turn out that way. The next step that happened, and maybe you're seeing it yourself with the people who were born with this information and these tools available to them, is that there's so much information available that it's hard for them to filter out what it is that they need, to find out what it is that they're looking for. It's like being able to read 1,000 books at once; which one do you choose? The trend I've seen is that people now lean more towards actually asking an individual. I'm on LinkedIn, and 100 times a day people will ask me a question that you could find the answer to in five seconds by searching. That shows that how people interact with information is definitely different. Maybe we can call it getting dumber, I don't know.

Robert C. Cooper
We conflate memory with intelligence too much. That's not the same thing, and oftentimes our measures of intelligence are exams at school, where the vast majority of what they're testing is your memory, which is not intelligence. That is something different. Our ability to problem-solve is the critical part of thinking. The dystopian science fiction lesson is that we rely on the machine to make everything for us, the machine breaks, and we can no longer survive because we don't know how to make anything. We don't know how to cook our food or get our food because it always came out of the machine. All the medicine that we had came out of the machine. Now the machine is broken and we don't know how to fix anything. The repository of knowledge is gone.

David Read
Yeah, there was a Stargate episode like that called The Sentinel. I’m gonna insert that right there.

Robert C. Cooper
That's the science fiction version. You can say, "oh, it's slow," but it's not that slow. I think that this is happening to us very quickly, where we are becoming so dependent on…and don't get me wrong. Without spellcheck I might not have a career. I do depend on it. Occasionally, I even sort of set a clock where it's like, "I can't look this thing up, I can't go to my phone and I have to try and make myself remember the name of that actor who was in that thing." I'm just worried that the people who seem to be paying attention to it and testing the way that brains are changing as a result of this technology are concerned. I don't know if that's happening within the companies at all, or if they're looking at what the impact is of having these devices.

Laurence Moroney
I'm not a neuroscientist, so I can't really comment on that. What I can say, though, just having been on this planet for over 50 years and seeing how things happen, is that generally things end up self-regulating. If something ends up in a situation where it's disruptive to itself, it has to adapt and it has to survive. I don't think we get that dystopian science fiction, the WALL-E of the world, where everybody becomes so dependent on machines that when they can't use the machines anymore, everybody dies and atrophies. In my opinion, there are still going to be the people out there who profit from running those machines, and their profits will be in danger if people are unable to use those machines anymore, so they have to adapt and thrive. That's generally what has happened with technology since day one.

Robert C. Cooper
The example I just heard on this podcast I listened to was that there is a breed of sloth that has become addicted to a particular kind of plant; it only eats this plant. Unfortunately, that plant has had a physiological impact on them to the point where they can no longer reproduce, so it's essentially hurt their reproduction. That sloth is becoming extinct because it got addicted to that plant and couldn't stop eating it. Obviously, that's an overly simplistic example, but it is an example that exists in nature of a situation in which an organism can, in fact, be self-defeating because of its weakness in addiction to something. In that case it was maybe natural or environmental, but in our case it has been shown to be prevalent in our species that we cannot help ourselves when something that gives us too much pleasure is presented to us.

Laurence Moroney
I think the key difference between the sloth example and us is that the sloth was not aware that that was what was happening to it.

Robert C. Cooper
Well, that’s what I’m worried about, though, is that we’re not aware.

Laurence Moroney
By the fact that we’re having this conversation.

David Read
Well, we are.

Robert C. Cooper
We are. We are, and I'm glad we are. I'm not sure that many people are. I don't know if people are aware that they've become addicted, or if they just don't care.

David Read
Their parents are even doing it. I have lost count of the number of times that I've seen parents stick iPads in front of their six-month-olds to quiet them down. Rather than interacting with them, they're re-wiring their kids' brains to the point where when that kid is 20 or 30, they're gonna have far more in common with that iPad than they do with us.

Robert C. Cooper
David and I were talking about this before. I don't think it's entirely fair to always demonize the pharma industry, but there you have a situation where they create something that in its basic principles is intended to be beneficial. Then there are unintended consequences: the drug is addictive in a way that is so insidious, like OxyContin, that even with the conversation happening in society about how bad it is, people just can't not take it, and it kills them against their will. Does that make any sense? They're addicted. They have been manipulated by an external compound, an external concept, that has profited a particular corporation at the expense of the person who didn't know any better, but probably did it anyway. It's a disturbing feature of our society. Also look at social media, and I sort of think there's maybe a slightly unfair tie between the technology company and social media. In social media there's a responsibility on the users to some extent, I agree, but what it's done is create a tremendous amount of mental illness, and that is self-destructive. I know the conversation is happening, but I don't know if the change is really coming about. You have kids who are just looking at the world and feeling worthless, and yet the technology doesn't change. I heard a stat I was stunned by: in America, 100 million users use TikTok for 90 minutes every day. Can you imagine how happy Netflix would be if they got that kind of viewership?

Laurence Moroney
Yeah, we’d have lots of new Stargate shows for sure.

Robert C. Cooper
I'm struggling to figure out intellectually what the solutions to these problems are. I do feel like there's an avalanche of negative consequences that I'm not sure just publishing principles is compensating for. I'm not aiming this at you, I'm aiming this at the world. I'm saying "here's our problem," and with the Facebook example, it took Congress to step in and say, "you've got to do something about it." There was suddenly the public perception that elections were being stolen by a foreign entity or actor. I don't know what the tipping point is, some place we can come to where we say we have to step in and do something about it. The technology is moving so quickly and becoming so advanced, not in the AGI world, but nevertheless the tools are becoming so powerful, that I don't believe the vast majority of people really know how bad it is, or how dangerous it can be. We're playing with fire and we don't really understand how much it can actually burn.

David Read
Amen!

David Read
If I may insert: I Uber sometimes for some extra cash on the side. Routinely, I will hear young people in my back seat who just met someone on their night out type in their name: "oh, there's so-and-so. Oh, they only have 47 followers." Their perception of the person has changed based on how many connections that person has made with other people in that specific piece of software. I think that one of the solutions would be to have all of these numbers hidden, but that would remove the rat race for a lot of these companies who want you to continue to build on this and network. They're not going to do that because that's a profit motive.

Robert C. Cooper
That was an episode of Dark Mirror as well, wasn’t it?

David Read
Black Mirror yeah. Bryce Dallas Howard. Brilliant. Then China started doing the social credit score.

Laurence Moroney
In a very, very limited way, I think there’s been a lot of propaganda about that whole social credit thing. The actual use of it is a lot less than we tend to have been seeing over here. There’s definitely a lot of negative propaganda about that.

David Read
Do you think that we should have something similar here?

Laurence Moroney
I don't think we should have something similar here, no. I think there are obvious problems with how people are interacting with technology that we need to think and understand our way through to be able to fix. I think we are all beholden, not just technology companies, we're all beholden, to come up with solutions for that. The problems are well documented; coming up with the solutions is not so well documented. As you just mentioned, Robert, with the Cambridge Analytica example with Facebook, Congress stepped in. I think that's a great example of when something becomes disruptive to society, it is the job of government to step in. It's the job of government to set speed limits, it's the job of government to say this drug is legal and this drug is illegal. In some countries, the United States being a great example of that, there's also pressure for smaller government, less government overreach, and the negative pushback if a government tries to do something to control something ends up being even more disruptive.

Robert C. Cooper
We have an imperfect system of government too where governments are influenced by lobbyists and important things don’t happen because of all of those special interests.

Laurence Moroney
So what is our job, for those of us who have expertise in something? We can be the canaries sounding the alarm, but I also try to be the owls hooting that the dawn is coming. Do you remember the TV show Millennium? Remember there were the two factions, the Owls and the Roosters. We can be the roosters saying "hey, the dawn is coming" and warning people about this, and we can be the owls hooting in wisdom. What I'm driving at here is that it's society's job to fix society, to fix some of the problems that are out there, and we can best do that by raising awareness of the positive uses of something as well as the negative uses of something. I'm personally somebody who tries to focus on the solution rather than focus on the problem. All of the things that we've mentioned today, I think, are well-understood problems. It's a case of how do we squeeze society's square peg through the round hole of fixing it, and then also deal with the complexity of different cultures in different countries and how they want to do that. It's a massive challenge that all of society faces. I'm generally an optimist and I think this is the kind of thing that we can do. Some things may get worse before they get better. I always like to go back to the example that I mentioned of the original movies, when you see a train coming, people getting scared and running out of the way. Somebody could have had a heart attack seeing that, we don't know. Robert, you pointed out that now movies have warnings. But before the movie industry understood the potential implication of this stuff, they didn't. And before people really understood what was going on, that they're just seeing a static image projected 24 times a second or whatever it is, to make it look like there's a moving thing, people were fearful of it. I think over time, as people understand this stuff more, the potential for people to question this stuff will go up and the potential for people to be damaged by somebody misleading them will go down. The bigger challenge then becomes: movies happened once and it took maybe 20 years, but new technology is coming out all the time, so there's all this adaptation that needs to happen all the time. That's when it becomes a much bigger challenge. I'm gonna scare you now. I'm gonna give you my scary stat. Are you familiar with Moore's law? Have you heard of it? Okay. Moore's law is one of the principles of computing. The reality of it was the number of transistors that can fit on a chip halves every 18 months, sorry, doubles every 18 months, which generally means that the computing power doubles every 18 months. That's why we see acceleration in computing; your iPhone 15 is much more powerful than your iPhone 10. I did a study once where I said, "okay, what if we start Moore's law back in World War Two," with Alan Turing cracking the Enigma machine. We give that the number one, and then we go forward 18 months and we double that to the number two, then we go forward 18 months and we double that to the number four, and we keep doing that. You can go into a spreadsheet and do this.

David Read
Isn’t it 32 iterations where it’s like, infinity?

Laurence Moroney
It’s getting there. Let me kind of give another illustration. I’m actually working on writing a movie at the moment about a person who uses the specter of AI to overthrow the US government. I say it’s the specter of AI rather than the reality. If you remember the 1980s, at the beginning of the 1980s, computers were science fiction. At the end of the 1980s, there was a computer on everybody’s desk.

Robert C. Cooper
Remember the movie WarGames? Think about the sophistication of that computer compared to what we’re doing today. You know, it was like Pong, essentially.

Laurence Moroney
If I take the Moore’s law number for 1980, starting with Alan Turing as number one, and then the Moore’s law number for 1990, so the beginning and the end of the decade, I subtract those to see how much computing advanced in the 1980s, and I get a number. Now I fast forward to today. How often do you think we get that level of advancement, the number that took the entire decade of the 1980s? It’s every three seconds. The Moore’s law number of the entire decade of the 1980s, we’re replicating every three seconds today. That just shows that this level of acceleration of technology is there and it’s real. That’s the pressure driving a lot of the things we spoke about today, and a lot of the fear we’re talking about today, because it can feel like a runaway horse. We don’t know where it’s going to go. In the 1980s it felt like a runaway horse as well. I remember being a child in the 1980s. There was a TV commercial in the UK, I was living in Ireland and we got our TV from the UK, about robot arms. There were car factories with robot arms assembling cars. As propaganda in favor of these robot arms, because everybody was terrified of robots putting humans out of work back in the 1980s, they made this commercial of two robot arms talking to each other. People thought the robots were real and could actually talk to each other and do all of this kind of thing. It backfired completely and everybody was terrified of this stuff. It’s the same fear that we’re experiencing today. Obviously it’s accelerated today, but it’s still the same fear. Robert, you asked earlier whether this thing is going to put you out of a job as a writer. I’ll say the same thing to you that I say to computer programmers who ask me that: no, it won’t. But the person who uses it might put the person who doesn’t use it out of work as a writer, in the same way that the mathematician who uses a calculator could put the mathematician who doesn’t use a calculator out of a job.
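
As a rough illustration of the back-of-envelope arithmetic Laurence describes, here is a minimal Python sketch. The 1942 start year, the 18-month doubling period, and the present-day year are assumptions for illustration; the exact figures, including the "every three seconds" claim, depend entirely on where you anchor the timeline.

```python
import math

DOUBLING_YEARS = 1.5   # assume one doubling every 18 months
START_YEAR = 1942      # assume Turing's Enigma-era work as the "number one" baseline

def moores_number(year: float) -> float:
    """Hypothetical 'Moore's law number': 1 at START_YEAR, doubling every 18 months."""
    return 2 ** ((year - START_YEAR) / DOUBLING_YEARS)

# Advancement over the whole decade of the 1980s, as Laurence frames it
eighties_gain = moores_number(1990) - moores_number(1980)

# Instantaneous growth today (2023 assumed): d/dt of 2^((t - s)/T) = N(t) * ln(2) / T per year
today = 2023
seconds_per_year = 365.25 * 24 * 3600
per_second_today = moores_number(today) * math.log(2) / DOUBLING_YEARS / seconds_per_year

# How long, at today's rate, does it take to match the entire 1980s gain?
seconds_to_match_1980s = eighties_gain / per_second_today

print(f"1980s gain:                 {eighties_gain:.3e}")
print(f"Growth per second today:    {per_second_today:.3e}")
print(f"Seconds to replay the 1980s: {seconds_to_match_1980s:.1f}")
```

With these particular assumptions the sketch lands on the order of tens of seconds rather than exactly three; shifting the start year or the doubling period moves the number, but the point Laurence is making, that a decade of 1980s-scale progress now happens in seconds, holds either way.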

Robert C. Cooper
I heard Jerry Seinfeld, somebody asked him about the AI Seinfeld that was being done online for a little while. He said last year, “these computers are going to keep getting smarter and smarter and smarter.” He said, “I’m not worried about my job, because you have to be dumb to do what I do.” There’s a point there. To go back to your earlier explanation of what linear AI actually is, and maybe why it’s failing at things like humor, or at what makes you cry: I think what humor is, what makes you laugh, is your ability to fill in blanks. A joke is about the beginning and the end. I think laughter is kind of you being impressed with yourself at being able to fill in the blanks between those two points. A computer can’t do that; a computer can’t come up with those missing pieces, because it needs all the data to fill in the blanks, and that’s something you’re able to do through your life experience. Maybe one day it’ll get to the point where it has enough data that it’s able to create that joke, but when you ask it to write a joke, it somehow just doesn’t seem as good.

Laurence Moroney
When we were working on the Stargate AI stuff, one of the things we wanted to do was a funny story, to have humor in it. The prompt that came from the fans was apparently an Atlantis episode about a gate ship and the whole play on words around the gate ship. I was working with a GPT-based model, not OpenAI’s GPT but a generative pre-trained transformer that I wrote myself, to see what we could do about writing an episode with humor in it. It’s online, it’s on YouTube, if you get a chance to watch it. The script the computer came out with in the end was quite funny, because it was learning from existing written humor. One of the things that apparently turns out to be quite funny is repetition; you repeat something for emphasis. In this case it was McKay wanting to name a ship a gate ship, and then he wrote it on the wall. And then, I am really bad, I’m sorry, I can’t remember her name.
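
Laurence doesn’t detail how his home-grown GPT was built, so purely as a hedged sketch of the general approach he describes, fine-tuning a small causal language model on a corpus of scripts, something like the following could be used. The base model, the hypothetical stargate_scripts.txt file, and all hyperparameters are illustrative assumptions, not a description of the actual Stargate AI project.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Small public base model, assumed for illustration
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical plain-text file of episode scripts, one chunk per line
dataset = load_dataset("text", data_files={"train": "stargate_scripts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Causal LM objective: the collator builds next-token-prediction labels
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-scripts",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()

# Sample a continuation in the style of the training scripts
prompt = "MCKAY: I'm calling it a gate ship."
inputs = tokenizer(prompt, return_tensors="pt")
inputs = {k: v.to(model.device) for k, v in inputs.items()}
out = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The generation step at the end is where patterns learned from the scripts, including the kind of repetition-for-emphasis Laurence describes, can show up in the output.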

David Read
Teyla. Yeah.

Laurence Moroney
The other lady, Dr. Weir. Dr. Weir says “you can’t do that, Rodney,” so he erases it and writes it on the floor, and she says “no, no, you can’t do that, Rodney,” so he erases it and writes it on the door. That kind of repetition, apparently, has been labeled as funny; there’s enough labeled text out there to say repetition like that is funny. When we created that script, it ended up doing that level of repetition.

Robert C. Cooper
Rule of three, right? Three times is funny, four times it starts to lose [inaudible]

Laurence Moroney
You could be right, might have done it three times.

Robert C. Cooper
The rule of three is pretty well known.

Laurence Moroney
Just to wrap up that thought, the real humor came in with the people performing it. I read the script and I was like, “this is quite amusing,” but when David Hewlett read those lines and the other actors responded to him, then we could see the value that they bring, the value that humans bring to this thing. It was just drop dead, wet-your-pants funny.

David Read
Robert, before you jump in with your thought: we have to get to fan questions after this.

Robert C. Cooper
Sure. The other thing that I think drives humor is error. I think we find things that are wrong funny, even to the point of somebody slipping and falling. That’s not normal behavior, so we laugh at it. So when a computer does something wrong, in a way we’re laughing at it because it got it wrong. What’s funny about it is that it’s not the way it’s supposed to be, so that’s fun.

Laurence Moroney
If we have enough instances like that, where we’ve labeled something as funny, we can actually train a computer to replicate that.

Robert C. Cooper
My job is in trouble.

Laurence Moroney
No way, no way. You’re too talented.

Robert C. Cooper
You look at what’s happening in the industry today. Certainly the studios have come out with what I think is intended to be a more politically correct position, so as not to offend their current employees, the artists they make their living off of. They’re supportive of all the human-generated stuff and they’re not leaning into this. But if, three years ago, you had gone to a studio and said, “what do you think about having 80% of your staff work at home?” they would have been like, “no, you can’t do that, that’s ridiculous. It would never happen. There’s no way.” Now it’s like, “wow, are we saving money by having everybody work at home?” They don’t know what’s possible. You and I are looking forward; they probably are too, they’re just not admitting it. But there’s a point at which you’re arming everyone. You’re democratizing the creation of entertainment. You see it now with YouTube, maybe not at the same sophisticated level as a Marvel movie, but that’s not that far away. When I see what’s going on in the tech world in film and television and editing and visual effects, I said to my friend Lauren, who’s a visual effects guy, “I want all those hours back that we spent watching guys put X’s on the green screen for tracking.” I just want those hours back, and all the money we spent waiting for them to do that. Even in the early days, the motion control was insane on a show like Double Jeopardy, where we had to twin the whole team with their robot doubles. It would take hours, and the money it would… you could literally do that now on an iPhone in five minutes and have it look better than what we did over three days with hundreds of thousands of dollars’ worth of equipment. Somebody will make a movie with next to nothing that will hit the world and be a success.

Laurence Moroney
I’ve been working on a short movie that’s entirely created by AI. The script, the characters, every frame, the voices, everything, created by AI.

David Read
People will flock to it if only for the novelty of seeing what it looks like. You will do well with it.

Robert C. Cooper
There are way more people who have a talent for creation than the industry is giving access to. When you put that technology in their hands, you’re gonna have a flood of incredible content that I think will change not just the platforms we use, but the way we entertain each other.

Laurence Moroney
There’s a precedent in books, right? Once we moved from published paper books to ebooks, there was a flood of new books and new authors on the market. There’s a lot of garbage. There’s a lot of great stuff too.

Robert C. Cooper
It’s exactly what blogs and Substacks have done to newspapers. It’s coming for full motion video for sure.

Laurence Moroney
I’ll send you mine, but it’s really primitive.

Robert C. Cooper
I’d love to see it.

David Read
I’d love to see it too please.

Laurence Moroney
Sure. I synthesized my own voice, deep-faked it, to voice the main character. My wife didn’t recognize that it was my voice; that’s how bad this is.

Robert C. Cooper
Like I said, when it’s wrong it’s funny. So it’s still of value, right?

Laurence Moroney
Anyway sorry David, I know you want to get to your questions.

David Read
You’re fine. Before the fan questions, I just want to ask for your honest answer, as succinctly as you can. If I were to put you in a time capsule and send you 100 years into the future, what percentage of humanity do you think you would recognize?

Laurence Moroney
42? No. I’m glad you get that reference.

David Read
Of course I get that reference.

Robert C. Cooper
I’ve used that reference several times.

Laurence Moroney
I would say between 10% and 42%, let’s put it that way. That’s how much I think I would recognize. In the same way, probably between 10 and 42% of humanity on the planet today would be recognizable to somebody from 100 years ago.

David Read
Do you think the velocity will be equidistant from 100 years back to 100 years forward?

Laurence Moroney
No. I think the velocity will increase, and I think the velocity increasing will also cause a greater separation, a “the haves have more and the have-nots have less” kind of thing. There would certainly be a recognizable subset of humanity.

David Read
So you don’t see technology being a great equalizer? You don’t see us overcoming that?

Laurence Moroney
I see it raising the floor but I don’t see it being a great equalizer. I think that the poorest person 100 years from now will probably be richer than the richest person today. But the distance between them and the richest person 100 years from now will probably be greater than the one of today.

David Read
We’ve halved global poverty in the last 15 years. 10,000 people a day are being brought onto the electricity grid. Don’t you see that as elevating people out of absolute poverty?

Laurence Moroney
That’s what I mean by raising the floor. If you think about the richest today, the gap between the rich and the poor is much greater than it was even though we’ve raised the floor of poverty and I think that trend will continue.

Robert C. Cooper
My hope is that all the negative consequences of AI could possibly be worth it if AI can figure out how to stop climate change. We may not even be here to look at the difference if we don’t figure that part out. I’m hoping that these advancements in technology, all that acceleration is gonna make us smarter in ways that we need to get smarter.

David Read
Well, there’s always the Thanos solution. Let’s hope the computers don’t figure that one out. RobotElbows – “@Lawrence, there’s a lot of talk about AGI and fully sentient AI becoming a thing in the not too distant future. What needs to happen? What hurdles do we need to overcome before we’re truly at AGI?” Just a really small, simple question.

Laurence Moroney
I think we addressed that a little bit earlier on. I think AGI is a little bit oversold. There are many, many hurdles that have to be overcome, not least understanding how far along we are in understanding just intelligence, never mind everything else.

Robert C. Cooper
I heard a conversation, Laurence, maybe you know about this better than me. One of the benchmarks has always been the game Go. They’ve always had computer systems trying to beat humans at chess, and then they moved to Go. They had created a system that was 1000 times better at Go than the best human. It felt like it was playing a five-year-old at that game. Recently a human beat that computer at Go, because they figured out one little trick that the computer didn’t know how to get around and didn’t understand, and suddenly they could exploit it. It’s that level of, we just haven’t gotten to the point of actual learning; it’s still just simulated learning.

Laurence Moroney
Yeah, it’s learning from past data rather than being able to predict the future, which is great for games like Go or chess or those kinds of things. It’s not innovating to try and figure out something that nobody has seen before; learning from the past is all it can do. That’s how the ingenuity of a human was able to crack and defeat the thing that beat Lee Sedol, the world champion. I think, again, that simulated intelligence is vastly different from what we mean when we talk about AGI. In many ways, when we hear the term AGI, we’re thinking synthetic consciousness, as opposed to artificial intelligence. And we don’t yet have, as a society, even an understanding of what a synthetic consciousness would be or what it would look like.

Robert C. Cooper
I heard a number somebody threw out, 50 to 75 years. Likely not in our lifetime.

Laurence Moroney
I’ve even heard the year 2040 as when people are expecting it.

David Read
Are you talking singularity?

Laurence Moroney
Yeah. Maybe I’ll put my foot in my mouth by saying this, but I think that’s ridiculous. I don’t think we’re even close to that. What I think we will get is more and more people believing that they’re seeing artificial consciousness or synthetic consciousness, just like the examples David was talking about earlier of people chatting with transformers.

David Read
Well, your own company had this guy, I can’t think of his name, who still swears up and down that what was created, LaMDA or something, was sentient. That’s his belief based on his own experience. When we have people saying that what they’re talking to, they are convinced, is real, who are we to say that it’s not, if that’s their experience?

Laurence Moroney
Somebody’s experience doesn’t define reality, right? People believe Elvis is still alive and he works down the local chip shop.

David Read
Bruce Jenner to Caitlyn Jenner, that’s defining reality. The question is, how many of us are agreeing with that? I think that really can define reality. Whatever your perception is of someone may be one thing, but if the society is telling you otherwise, if we’re saying “no, you’re wrong,” I think it can define reality.

Laurence Moroney
That’s the lesson of The Matrix, right? Perception is reality. But when it comes to whether something exists or not, or whether something is what people claim it to be, when it’s been artificially architected and engineered by humans to be something completely different, I tend to trust the people who actually built it, as opposed to the person who’s using it, bringing his or her own biases into that use, and then coming up with a belief.

Robert C. Cooper
I’m sorry, David, but I don’t care how many people believe the Earth is flat, it’s not going to be flat. They don’t make it flat by all believing that it’s flat. They’re operating in an illusion or a delusion at that point. They may be happy. I’m not judging them if they’re happier there, or if it somehow changes their existence in a positive way, but it doesn’t change the fact of the truth.

David Read
I agree with that. I also think it would be foolish and ignorant to say that the culture war isn’t a real thing and that we’re not all trying to figure these things out together. Some people are saying that one thing is real and the other thing is not, and “because you believe that and I don’t, I’m going to hate you for it.” I think it’s important to remember that we’re all in this together, trying to figure each of these different things out. That’s all that I’m saying.

Robert C. Cooper
Social media technology is reinforcing our desire to fight by rewarding it. We get rewarded for fighting so fighting itself becomes the reality. It’s not about what we’re fighting over, it’s about the fact that we are now getting pleasure, reward and success from fighting and from winning out of that battle. That to me is the problem. We’ve stopped worrying about what the fight is over and what the truth actually is and we’re just engaging in the fight.

David Read
As long as we can come together as people and have discussions like this with the genuine pursuit of understanding, there’s absolutely hope.

Laurence Moroney
Yep. Also, those who don’t learn from history are condemned to repeat it. I’ll cite an example of culture war, the words that you used a moment ago. I grew up in Ireland in the 1980s, and if you’re familiar with the history of Ireland in the 1980s and the period we call The Troubles, these were things that went on for hundreds of years. It was such a divide, and such a toxic one, that the so-called culture war we’re living in in the US right now is a pancake fight in comparison. It changed and it was fixed. What ended up changing and fixing it were many factors, but the biggest one was the economic reality of the Celtic Tiger. Because of the advent of the Internet, people had jobs, people had homes, people could go to movies, and they could afford to go out for dinner and raise families. Suddenly these differences between them were minimized to nothing. It was an economic boom that ultimately led to those troubles ending. This is one of the reasons why I’m very strongly advocating for AI: I think AI can lead to a similar economic boom, not just in the United States, but beyond. Economic booms like that have a tendency of fixing social problems.

Laurence Moroney
One of my favorite TV shows was Battlestar Galactica. If you remember the finale, they abandoned technology. That is a trend: people saying we should abandon technology, go backwards, and make life simpler. But I disagree with that personally. I think that would lead to more misery.

David Read
Oh, absolutely. I totally think that this is solvable. I just don’t think it can be dismissed out of hand. Sledge – should nothing new… this is something we’ve addressed, you asked me this, Laurence. Should nothing new ever be built because someone might use it for harm? It’s the age-old question.

David Read
Let me see here. Mack Bolan’s conscience – haven’t TV shows and movies been used to condition and change the minds of people? How do we decide what is right or wrong since right and wrong are subjective?

Robert C. Cooper
I totally agree with that, I don’t disagree at all. I think in particular, when you look at Hollywood, for example: Hollywood has perpetuated a myth for the better part of a hundred and something years that has been incredibly damaging and has also contributed to the acrimony that we see today, the difference between the haves and have-nots. The movie industry exploded because it was used as a tool for propaganda; that is pretty much how it blew up. I think it’s still being done today, and the more sophisticated version is social media, which causes people to devalue their own lives by comparison, to live vicariously through what they see on screen, and to feel less-than because their lives don’t line up with the fictional story. The happy ending is what we all look for, the perfect relationship. It’s not what we experience in life.

David Read
Hollywood is a tool for normalization. Look at Star Trek with The Original Series: you had a Japanese officer and a Russian officer flying and navigating the ship.

Robert C. Cooper
That’s where the benefit is. I just heard a fantastic podcast. My favorite is Malcolm Gladwell’s Revisionist History, I’ll give him a little plug here. He did a breakdown of the social impact of Will & Grace.

David Read
Exactly, I was about to bring up Will & Grace, Robert. Go ahead.

Robert C. Cooper
The way they looked at it, and the way he admitted it, is that they had to soften it. The reason it did have that impact was because they portrayed a more accessible, softer version of the reality. America wasn’t yet ready to embrace anything more than that.

David Read
If you go back and read interviews from them at the time people asked them, “are you trying to normalize this?” They were like, “no, we’re totally not.” Now today, it’s like, “yeah, of course we were” and the society benefited from it.

Robert C. Cooper
But why wouldn’t you be trying to normalize something that should be normal?

David Read
Some people think that it’s normal and other people don’t. But now, I completely agree, it’s important that we do normalize this behavior and that it exists. The question is, how many of us agree whether something should be normalized?

Robert C. Cooper
I do think there’s power that TV has for sure, that TV and film have. It has also contributed to a lot of very negative impacts on people’s perception of what life should be.

Laurence Moroney
So it’s a very powerful tool that can be used both positively and negatively.

Robert C. Cooper
Yeah, I am not blaming George Eastman for inventing film. It’s not his fault that it would be used to make pornography.

David Read
Lockwatcher – for both guests, do you believe AI will be able to be used to create, at minimum, things like chroma key background shots, or even certain special effects, so that it helps reduce the cost of special effects in the industry moving forward?

Laurence Moroney
I think yes, absolutely. I was alluding earlier to an AI-based movie that I’ve been creating, where I used a tool called Stable Diffusion, if anybody’s familiar with it. I created the frames as very, very simple animations and then used Stable Diffusion to replace the stick figures in them with actual CGI figures. The entire thing is then created by AI as a result, and that’s just me working primitively on a single GPU over here. I’m not a professional FX person by any stretch of the imagination. I think the sky is the limit for this kind of thing. Generative AI for that kind of stuff, at the very least, is going to be really powerful for filmmakers.
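
The transcript doesn’t say which implementation Laurence used, but as a hedged sketch of the general workflow he describes, an image-to-image pass with Stable Diffusion via the Hugging Face diffusers library might look like this. The model ID, prompt, and frame filenames are assumptions for illustration.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load a public Stable Diffusion checkpoint (assumed model ID for illustration)
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# A single rough frame from a hand-made stick-figure animation (hypothetical file)
rough_frame = Image.open("frames/frame_0001.png").convert("RGB").resize((512, 512))

# Repaint the frame: `strength` controls how far the model may depart from the input
styled = pipe(
    prompt="cinematic sci-fi explorer walking through an ancient alien corridor, film still",
    image=rough_frame,
    strength=0.6,
    guidance_scale=7.5,
).images[0]

styled.save("frames_out/frame_0001.png")
```

Run frame by frame over a shot, this restyles the whole animation; a lower strength keeps more of the original stick-figure composition, while a higher one repaints more aggressively.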

David Read
Thank you for bringing up Stable Diffusion, Laurence. I just want to insert something real quick before you contribute, Robert, if you wish. Adam Cahill, who is my AI artist on Wormhole X-Tremists, I threw an image of a Stargate and a chevron at him for this episode. I said, “unleash your AI program on this right now so that we can get an image for the episode we’re airing.” He took it and made a whole number of changes to it for this particular episode, and he did it in less than five hours. It’s amazing technology. I just showed it on screen for everyone to see.

Laurence Moroney
Cool, I’ll check it out later. It’s very rough around the edges. It’s not ready for primetime yet, but we can see it’s definitely heading in the right direction.

David Read
Robert, how do you see artificial intelligence facilitating your work?

Robert C. Cooper
I think we’ve talked about it, but again, better tools don’t make you a better carpenter, right? Certainly it doesn’t hurt, but I’ve always said that it’s not about the tools. You can see tons of evidence out there that just because you have the ability to technically achieve something doesn’t mean you’re going to punch people or move them or speak to them or tell a story that people care about.

David Read
I think one of the satisfying things for me, and there are a couple, is the consensus right now that something like ChatGPT, and Laurence, you spoke to it similarly, cannot create anything novel. It just puts something together from other things that already exist.

Robert C. Cooper
In all fairness, that’s what I do. I’m not going to speak for every writer, but all I do is essentially regurgitate the things that I’ve seen and loved. I’m filtering them and putting them through a process that hopefully reflects my humanity, and someone else can identify with that. The filter process is key; the actual process is not that different. For everything I’ve done, you could probably look back at some reference that I drew from, in other films or television or in my life, that I’m reprocessing into something new.

David Read
Robert, when I was sitting and watching The Fifth Race as a 14-year-old, just hanging out at my dad’s helicopter workshop late at night, Jack comes back through that gate and he says, “you know what we were talking about, that meaning of life stuff and everything else? I think we’re going to be okay.” The message that hit me as a 14-year-old boy, you can define it however you want, but I would say that a piece of art like these series that you created is far more than past experience pushed through a filter.

Robert C. Cooper
I guarantee you that in almost every case in the writers room, I would be like, “you know, like this,” and I would use some movie reference of something that I loved and say, “we have to try and achieve that in some way with our characters in our world.” Let’s just put it this way: if I had to annotate everything that I’ve ever written, it would be an endless book of annotations.

David Read
It’s just a simple question of time.

Laurence Moroney
Can you gather all that together so I can train a GPT on it? See if it can be as novel or creative. I’m guessing the answer would be no.

Robert C. Cooper
Well, I’m glad we’re still there. I’m glad we’re still there. I hope we’re still there when my kids are my age and their kids are growing up, and there’s still a value on their imagination.

David Read
Gentlemen, I’m gonna let you proceed from here, Laurence, but closing thoughts after this comment, and we’ll start with you, Laurence.

Laurence Moroney
Yep, sounds good. Just to answer Robert, I personally am convinced that will be the case. As the tools get more powerful, the difference a human imagination makes in using those tools becomes more important. Anybody could make a film of a train coming at a screen, but not anybody can make a story that resonates with people. A human imagination bringing those stories to production quicker and cheaper, and getting more of them out there, is going to be the key in entertainment as we go forward, much more than the technology.

Robert C. Cooper
I certainly hope you’re right. I have to believe so; that’s why I get up in the morning.

Laurence Moroney
Me too.

David Read
Laurence, this has been wonderful. Tremendous thanks to you for coming on and talking with us. What would you like to say to close this out, based on everything we’ve accumulated as we’ve gone down a number of different tangents? Is there any single thought that you would like to leave us with?

Laurence Moroney
Yeah, I think I would leave it with “the only thing to fear is fear itself.” Human nature has always been resistant to change and can react to change with fear. The more people there are out there doing what we’re doing today, having conversations about this kind of thing, exchanging information, and trying to understand our way through the problems, the brighter the future will be. I’ll just leave you with a little joke, which goes against the sentiment that I just shared: I generally believe there’s a lot more to fear from human ignorance than there is from machine intelligence.

Robert C. Cooper
I agree with everything Laurence just said. I think the negative physiological and mental effects of fear are also problematic. Stress causes a lot more problems than people realize. I’m quite sure worrying is not good for you, but I’m also still a little worried about the situation. I wish I believed that the right people, and enough of the right people, are having these conversations.

David Read
Robert, do you feel that this discussion, obviously we think it was worth your time, but do you think it has improved your understanding of the subject?

Robert C. Cooper
A little bit, yeah. It’s nice to hear the perspective coming from inside one of the companies, and from someone who I respect and think is a smart person speaking on the subject. I guess my hope in doing this conversation, and why I was excited to have it, is that I want people to do something about it. I want them to not just have a conversation, or even just say something on social media, but to take how they feel about this situation or these problems and go out and do something, whatever that is, whether it’s write a letter, go to a meeting, organize a group, or talk to their leaders. It’s not something you can just passively watch continue to happen.

David Read
You can’t force someone to learn something. You can present the information and hope that you can spark a notion in their heads to want more. That’s what I wanted to do with this episode and I think it’s been achieved in spades. Thank you gentlemen, so much, both of you, for coming on.

Robert C. Cooper
I’ve tried to write as many cautionary tales as I possibly can and scare the crap out of people while entertaining them and I probably will continue to do so.

David Read
I hope so. Gentlemen, thank you very much.

Robert C. Cooper
Thanks a lot.

Laurence Moroney
Thank you David. Thanks very much.

Robert C. Cooper
Pleasure Laurence, appreciate you.

Laurence Moroney
Thank you.

David Read
Laurence Moroney and Robert C. Cooper. This was a fascinating discussion; I hope you enjoyed it. I think we could have easily gone on for a couple more hours. Thank you to Laurence and Robert for coming on and discussing artificial intelligence with a little bit of a lens through Stargate. Coming up in the next 13 minutes, we have a chat with a chat bot based on Jack O’Neill. I hope you all can join us for that. This is going to be interesting; we’re going to see just how close to Jack’s personality we can get with this thing. Then next week, if I can get the right button figured out here, we have a couple of guests that I’m really looking forward to. Kate Hewlett will be joining us once again. She played Jeannie Miller in Stargate Atlantis, and she’s joining us March the 18th at 12 noon Pacific. Then Robin Mossley, Malikai in Window of Opportunity and Reimer in season 10, the episode where everyone falls asleep; that’s going to be March 18 at 2pm Pacific time. That’s pretty much everything we’ve got for you here. I really want to send a huge thank you out to my team: Tracy and Antony, the moderators, for getting everyone’s questions over to me. I apologize that we couldn’t get to everybody. Thanks so much to Sommer, Jeremy and Rhys as well, my moderating team. Linda “GateGabber” Furey, my producer, and my webmaster Frederick Marcoux at ConceptsWeb; you guys make the show possible. My name is David Read for DialtheGate. Thanks again to Robert C. Cooper and Laurence Moroney. We’ll see you on the other side.