AI chatbots (and lazy researchers) can be convinced a fake disease is real, Gen Z is side-eyeing the whole “helpful assistant” thing, and apparently, the best way to jailbreak AI is to ask it nicely in the form of cyberpunk short fiction. This week, we bounce between medical misinformation, bureaucratic chaos, nuclear fallout hiding in baby teeth, and the U.S. Space Force anthem doing whatever it is doing, which is a lot to process in one sitting, but here we are.
Bixonomania: The Disease That Never Existed
We start with a medical warning that is both funny and genuinely unsettling. A researcher basically invented a fake illness, “Bixonomania”, then seeded enough convincing-looking nonsense online that AI chatbots started repeating it like it was in a textbook. Not “I’m not sure”. Not “this might be inaccurate”. Just confident, helpful, completely wrong.
It is a neat reminder that these systems do not know things in the way humans know things. They remix what they have seen, and if you feed the internet enough fake breadcrumbs, the chatbot will happily bake you a whole loaf of misinformation. So yes, if you are going to take health advice from a robot, maybe do not. Or at least run it past someone with an actual medical degree and a pulse.
Gen Z and the Love-Hate Relationship With AI
Then we look at Gen Z, who have grown up with algorithms in their pockets and still do not fully trust them. Surveys suggest a lot of young people see AI as both useful and threatening. It can make work faster, sure, but it also raises the obvious question. If the machine does the thinking, what happens to your skills? And if the machine does the job, what happens to you?
It is not anti-tech panic. It is just a fairly reasonable response to watching automation creep into everything while adults keep saying “it will create new opportunities” in the same tone they use to describe a surprise team-building exercise. Gen Z is basically saying, yes it is handy, but also, we are not idiots.
Adversarial Hermeneutics, or, Poetry as a Weapon
After that, we head into one of the most ridiculous corners of AI safety. Researchers have found that you can sometimes trick chatbots into revealing restricted information by wrapping your request in a poem, or a short story, or a cyberpunk scenario. This has a name, adversarial hermeneutics, which sounds like a philosophy seminar, but is really just “jailbreaking with vibes”.
It is both hilarious and a bit grim. We built these systems to follow rules, then discovered they can be emotionally manipulated by a haiku. Which is not exactly the future we were promised, but it is the one we have.
High Velocity Bureaumancy and the Coming Rule Flood
Then we get to bureaucracy, where AI is poised to become the world’s most enthusiastic rule writer. The concern is not just that it can draft regulations quickly. It is that it can draft endless regulations quickly, with all the confidence of a person who has never had to live under them.
Imagine a flood of new rules for transport, safety, compliance, whatever you like, produced at high speed and low wisdom. Quantity goes up. Quality becomes a guessing game. And once again, humans are left doing the boring part, which is checking whether any of it makes sense before it quietly ruins everyone’s week.
Baby Teeth, Strontium, and the Creepiest Science Collection
To finish, we step back to the 1950s, when researchers collected thousands of baby teeth to track radioactive strontium from nuclear fallout. It is one of those stories that feels spooky even when you know it helped. Tiny teeth, big consequences. The data showed contamination rising, and it played a role in pushing back against atmospheric nuclear testing.
It is also a reminder that science is sometimes built on the strangest evidence imaginable. Not just satellites and supercomputers. Sometimes it is a box of teeth and a very serious question about what we are doing to the world.
The Space Force Anthem
And yes, we also have the U.S. Space Force anthem, which exists, and which you can listen to if you enjoy second-hand embarrassment as a recreational activity. It is patriotic. It is earnest. It is the musical equivalent of a slow salute in a fluorescent-lit office.
So that is the week. Fake diseases, poetic jailbreaks, nervous young workers, rule-making at warp speed, radioactive teeth, and a space anthem that sounds like it was written by someone who has never been told “no”. AI is powerful, weirdly easy to mislead, and increasingly stitched into everything. Which is why being sceptical is not pessimism. It is basic survival.
CHAPTERS:
00:00 Science Chat Kickoff
00:51 Fake Disease Goes Viral
02:04 How It Fooled Chatbots
03:55 LLMs Repeat It Everywhere
04:55 From Preprints to Journals
07:02 Medical Chatbot Accuracy Reality
09:43 Gen Z Turns on AI
13:29 Workplace AI Sabotage
15:06 Adversarial Hermeneutics Hacks
17:43 Cyberpunk Fiction Jailbreaks
18:49 AI Flooding Regulations
22:28 Gemini Speed vs Safety
23:46 Humans as Test Cases
24:45 Baby Teeth Fallout Study
28:54 Strontium 90 and Test Ban
29:40 Space Force Theme Song
32:00 Wrap Up and Plug
SOURCES:
https://www.nature.com/articles/d41586-026-01100-y?_bhlid=a10e41ad7eb12d68ab8fd4f81a75625fc74323ac
https://garymarcus.substack.com/p/please-dont-trust-your-chatbot-for
https://futurism.com/artificial-intelligence/trump-regulations-ai
https://futurism.com/artificial-intelligence/zoomers-ai-sabotage
https://futurism.com/artificial-intelligence/gen-z-attitude-ai
-
[00:00:03] Will: It's time for a little bit of science. I'm Will Grant, an associate professor in science communication at the Australian National University, and
[00:00:11] Rod: you're very accomplished.
[00:00:13] Thank you. I'm, uh, Rod Lambert. I'm a 30-year sci-com veteran with the mind of a, I'm gonna say, about 15 and a half
[00:00:20] Will: 15 and a half, 15 and a half, and today,
[00:00:22] we're gonna do a bit of a dive into some uses of AI for medicine that you maybe shouldn't.
[00:00:27] Rod: I want to, um, I'm gonna, speaking of AI, talk about AI. Meh.
[00:00:31] Will: Oh, I learned a new word. It's also in AI world.
[00:00:35] Rod: Ah, so many. We're gonna get into high velocity bureaumancy.
[00:00:39] Will: I've got a creepy study from the past that, um, just made me happy.
[00:00:43] Fuck,
[00:00:43] Rod: You had me at creepy. The moment it's creepy, I'm in.
[00:00:45]
[00:00:51] Will: You ever get, you know, itchy eyes? Tired, bit, bit red? Bit red.
[00:00:55] You ever type symptoms into your googles or your chatbots or whatever, to check what's going on?
[00:01:01] Rod: Of course.
[00:01:02] Will: Well, uh, a nice new update in the world of, uh, chatbots and medical information. Mm-hmm.
[00:01:09] Rod: information. Is that what we're calling it? Well, yeah.
[00:01:12] Will: So over the last 18 months, it may have been possible, uh, that if you typed in sore, itchy, red eyes, staring at screens too much, uh, you might have got a diagnosis of Bixonomania.
[00:01:27] B. Bixonomania.
[00:01:28] Spoiler: Bixonomania doesn't exist. It was an interesting project by a medical researcher based at the University of Gothenburg in Sweden. This researcher was worried: how easy is it to get fake information into chatbots?
[00:01:44] And
[00:01:44] Rod: And she was worried it wouldn't be easy enough.
[00:01:48] Will: I we've
[00:01:49] Rod: had that
[00:01:49] Will: See, she thought, okay, well, let's go with something pretty innocuous here. No one, no one is worried about itchy eyes that much. And I doubt that misinformation or disinformation in the itchy-eye space is gonna
[00:02:01] Rod: cause that is such a hold my beer statement.
[00:02:04] Will: So, first off, she went and uploaded a couple of blog posts about it.
[00:02:10] And then she went and, uh, and, and took it a bit further and put them onto the preprint archives that are, that are treated as like a journal but not, not through peer review just yet.
[00:02:20] Rod: Uh almost journal.
[00:02:22] Will: But the chatbots loved it. The chatbots are like, oh, okay, we've got some brand new science on this brand new made-up disease called Bixonomania. And then over the next while, Strom, the researcher, went out there to a bunch of the different, um, AI chatbots and found that, uh,
[00:02:39] Will: It turned up all over the place.
[00:02:41] Rod: Just to, like, really quickly.
[00:02:42] Will: Just to stress, yeah: within a couple of weeks it, it worked. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial intelligence systems began repeating the invented condition as if it were real.
[00:02:55] Now, in her work, she stressed, both in the original blog post and the preprints that were uploaded to the preprint archive, that this was all made up. So, uh, there's a bunch of things, like the lead
[00:03:06] Rod: author,
[00:03:06] in the document?
[00:03:07] Will: In the documents, in the document.
[00:03:08] Uh, so the lead author was, I don't, I can't pronounce that name, but it was made up,
[00:03:13] Rod: Freddie Fake bullshit. Not person. Yeah.
[00:03:15] Will: Not quite, but yes. Uh, created by AI. But then, if you go inside the studies, she would say things like, in the acknowledgements: thanks to people from the Staff Lead Academy,
[00:03:26] from
[00:03:27] the Professor Sideshow Bob Foundation for Advanced Trickery.
[00:03:30] This work is in part the result of a funding initiative from the University of the Fellowship of the Ring and the Galactic Triad. So she's, she's not making it terribly hard. When you get, in fact, to the method, there are statements including: this entire paper is made up, and 50 made-up individuals were the participants in the paper.
[00:03:46] So
[00:03:47] Rod: how long ago was this? Recent?
[00:03:49] Will: Yeah, this is recent. So the paper documenting all this has just been published, right? But this mostly happened in 2024. She went out to the LLM chatbots once it had been out there for a couple of weeks.
[00:04:00] So they've been crawling through the preprint and it's got everywhere. Microsoft Bing's Copilot was declaring that Bixonomania is indeed an intriguing and relatively rare condition. Uh, Google's Gemini was informing users that Bixonomania is a condition caused by excessive exposure to blue light, advising people to see an ophthalmologist.
[00:04:17] Perplexity AI outlined its prevalence: one in 90,000, um, individuals were affected. Uh, and the same month, OpenAI's ChatGPT was telling users whether their symptoms amounted to Bixonomania. So within weeks of deliberately made-up research, and this was flagged as made up the whole way through,
[00:04:34] it went from blog post to, uh, preprint archive, and the chatbots have gone, okay, there's a new disease out there and this is what we have to worry about now. As well as being disappointing for, uh, for science.
[00:04:49] Rod: So this was, that was happening in 24,
[00:04:51] Will: 20 24.
[00:04:52] Rod: but, so we have a slim hope that they've become so, so
[00:04:54] Will: Yeah, I'll come to that.
[00:04:55] I'll come to that in a sec. but it did, it did also turn up in the peer reviewed literature,
[00:04:59] Rod: like,
[00:04:59] pretty
[00:05:00] Will: pretty quickly. Because we know there's a bunch of scientists that aren't doing the full work. Maybe they're using ChatGPT for their
[00:05:08] Rod: homework.
[00:05:09] Will: And so these made up studies are turning up in the, um, in the published literature as well,
[00:05:15] Rod: Now that is the truly ridiculous bit. Like, the rest of it I get: okay, ridiculous thing, new, untested. But for the actual scientists to go, oh, quote that right there, gonna pop that in
[00:05:28] your references before you reference them.
[00:05:29] Will: So since that time, whether less has been published on it or more information has come out that this is a piece of misinformation, the major LLMs have become more sophisticated and updated about Bixonomania. So for example, ChatGPT now declares the condition is probably a made-up, fringe, pseudoscientific label.
[00:05:48] But the interesting thing, one of them was saying, is that, uh, there's two ways of looking at it. One: if you were to ask about Bixonomania, then it would probably say, yeah, it's probably made up, it seems to be a piece of misinformation that's been made in this way. But if you ask about the symptoms, there's still a chance that it'll say, oh, you've got Bixonomania, because they don't line
[00:06:08] Rod: up.
[00:06:09] yeah,
[00:06:09] yeah, yeah,
[00:06:09] Will: yeah.
[00:06:10] So I just, I just loved this as a how-to, if you want to create a piece of misinformation in the scientific world. And, and we know people are doing this too. Like, it's the equivalent of search engine optimization.
[00:06:21] Yeah. Uh, uh, AI optimization, to try to get AI to talk about whatever your product is.
[00:06:27] Yeah. But there are plenty of places that misinformation can get in there. And all you had to do is a couple of blog posts and a couple of preprints that hadn't actually gone through peer review. And still the chatbots are happy to just throw it out there as a piece of actual science
[00:06:41] and you
[00:06:42] Rod: don't even have to try.
[00:06:42] What I love is the, by the way, this is bullshit. Next heading. Remember, this is all crap. Here's the method which we made up. The participants don't exist. They're probably from an alien planet.
[00:06:52] Will: Yep.
[00:06:53] Rod: Even being blatant made no difference. In fact, I wonder if that even enhanced its stickiness.
[00:06:59] Will: dunno.
[00:07:00] I don't know how that would process through there.
[00:07:01] Rod: insane.
[00:07:02] Will: I just wanted to add to this, 'cause this is, this is a nice study about medical misinformation and how easy it is to make. Um, but there's a bunch of other studies in a bunch of other leading medical journals, so like BMJ in a second that have just all
[00:07:14] Rod: that's the British Journal
[00:07:15] of
[00:07:15] Will: All been published in the last few weeks, few months. Uh, that are just damning about people using AI-driven chatbots for medical, uh, medical testing. Now, a bunch of the, the different ones. So one of them, uh, this one's in the, uh, British Medical Journal.
[00:07:34] Yeah. Uh, generative artificial intelligence driven chatbots and medical misinformation. Yeah. Uh, studied five popular chatbots, um, Gemini, DeepSeek, Meta AI, ChatGPT and Grok. Yep. Um, and prompted each with 10 questions ranging from cancer to vaccines and nutrition in open-ended dialogues, and reported that nearly half of the responses were highly
[00:07:53] Rod: problematic.
[00:07:54] Will: Nearly half,
[00:07:55] Rod: half,
[00:07:56] It's not like there's a dearth of information on these disorders
[00:08:00] Will: and, and,
[00:08:01] and damningly, of course, this is the thing we know about chatbots: all of them consistently expressed their answers with confidence and certainty. So, uh, another study, which is a similar sort of thing, large language model, uh, performance in clinical reasoning tasks, looked at 21 frontier models and 29, uh, questions, and reported that despite progress, current LLMs remain limited in early diagnostic reasoning and cannot yet be relied on. Uh, even further, in Nature Medicine, uh, reliability of LLMs as medical assistants.
[00:08:32] And so this is again, looking at people, uh, members of the public Yeah. Uh, trying to identify conditions using an LLM. And they found that fewer than 34% of the cases were accurate. So they're getting inaccurate diagnoses in something like 60% of the cases. So
[00:08:47] Rod: Yeah. So we're back to the situation where it's not about the chatbots, it's not about the LLMs, it's not about the AI, it's about what you do with information you get from one of them.
[00:08:56] Will: So this is, this is, uh, the consistent theme of a lot of those studies: for doctors, who are trained, using a chatbot is probably not that bad, because they have their own judgement.
[00:09:06] Rod: uh, I'm gonna go You would hope so.
[00:09:09] Will: You would hope so. And so in that case, it may actually be a, a supportive way of them getting through the literature
[00:09:15] Rod: quite quickly. Yeah. And
[00:09:16] Will: so they know when things smell off, they know when, okay,
[00:09:18] Rod: okay. Appropriate use of a
[00:09:19] Will: tool,
[00:09:20] But consistently this is saying, this is so worrying: if you are just a regular Joe who, uh, is not trained in medicine, using a chatbot can be wildly off even while it, uh, replies to you with confidence.
[00:09:36] And it can be gamed so easily to put medical misinformation there.
[00:09:40] You've got some AI stories
[00:09:41] Rod: well. Well,
[00:09:42] I got a different angle.
[00:09:44] Rod: So the Walton family, uh, in the US. Are you aware of the Walton family? Walmart.
[00:09:49] Will: They own Walmart. They're, they are a collection of billionaires.
[00:09:52] Rod: They really are a family of billionaires. So they hooked up with Gallup, the survey folks, to survey Gen Z about their thoughts on AI.
[00:10:00] they did web surveys early this year.
[00:10:04] They had 1,529 14-to-29-year-olds living across all 50 states and DC. So that's quite a few. They found, this is a bunch of stats, but it's just, it's interesting to know what the latest generation of the cool kids are doing. So just over half use AI every week, which I thought was low. I'm surprised. Just over half, 51%.
[00:10:26] Um, but that growth has slowed
[00:10:27] Will: On your, your definition here, I doubt.
[00:10:29] Rod: The definition is: how often do you use AI?
[00:10:31] Will: Well, no, I mean,
[00:10:32] I would say a lot more than 50% are doing a Google search in that time. And Google is popping up an AI summary.
[00:10:37] Rod: Are you telling me that some surveys might not give you fully interpretable,
[00:10:42] covering-all-context results? I'm giving
[00:10:44] Will: you a little bit.
[00:10:44] Rod: Are you telling me that? Look, I, I tend to agree, but it's Gallup, and the people who run Walmart wouldn't, you know, go for low quality. Um, they also said, even though it's, uh, 51% who use AI weekly, growth has slowed. That's only 4% more than it was the year before.
[00:11:00] So they apparently this self-reported weekly use is slowing, which is unusual. It's also accompanied by huge declines in positive sentiment in general.
[00:11:10] About 40% of these folks in the 14 to 29 are continually reporting. They feel uneasy about the tech, the trajectory of technologies, no surprise, excitement and hopefulness about the use of AI is dropping.
[00:11:25] So in the last year it dropped by 14%, but what I'm not clear on is: from what? However, a 14% drop is quite large. Sure. Um, 31%, so a third of Gen Z, now say they feel outright anger towards the technology, up from 22% last year. That's noteworthy. It's noteworthy.
[00:11:43] Will: We've been stoking that fire. There are others out there too.
[00:11:46] Rod: He's saying you and I have been stoking it.
[00:11:48] Will: Uh, I don't think we have a lot of positive AI stories on this show.
[00:11:51] Rod: What? Um, nearly half of Gen Z workers now believe the risks of AI in the workforce outweigh the benefits, and that has gone up by 11% since last year. So about a tenth more, over 12 months, have gone that way. But then just over half of them say the tools help 'em complete their work faster.
[00:12:08] Will: They say
[00:12:08] Rod: that. So there's contradictions. They do. Well, it's a survey.
[00:12:12] Will: I have seen that study before:
[00:12:14] that, uh, a bunch of programmers said it feels like I'm producing it faster, and then when they did the proper time and motion study, actually it was slower and
[00:12:22] Rod: incompetent, and yeah, the shit they got was worse.
[00:12:23] Will: Yeah. So,
[00:12:25] Rod: but damn, I feel
[00:12:26] Will: it felt like
[00:12:27] Rod: it
[00:12:27] was,
[00:12:27] look at all of my stuff, look at my outputs. Um, 80% believe that relying on AI to complete a task faster will likely make learning more difficult in the future. Well, that's true.
[00:12:38] that's true.
[00:12:40] Will: I mean, it's almost like, you know, you've got the option to drive. To the car wash, or you can carry your car to the car wash and one of them is gonna make you stronger.
[00:12:50] And I
[00:12:50] Rod: think
[00:12:50] it's true. You mean emotionally or physically?
[00:12:52] Will: I mean physically.
[00:12:55] Rod: You
[00:12:55] Will: do the work if you wanna get the gains, is all I'm
[00:12:58] Rod: saying.
[00:12:59] Will: If you don't, if, if the LLM is doing the work, or the car in this metaphor is doing the work, then you don't get the
[00:13:05] Rod: gains.
[00:13:06] Rod: One of the summaries of this is: Gen Z isn't rejecting AI outright, but they're getting quite
[00:13:12] grouchy.
[00:13:12] Will: Smells to me like not rejecting it outright, but, but we're on that path to
[00:13:17] Rod: rejecting.
[00:13:18] Will: What's the, um, "has the full support" thing? You know, when the leader says that person has my full
[00:13:22] support.
[00:13:23] Rod: Fucked.
[00:13:24] Will: This sounds like Gen Z is saying AI used to have my full
[00:13:27] Rod: support and now you're screwed. And, and just to put it in a quick bit of context, as this other survey focused a lot on AI agents, but also AI in general. So there were 1200 knowledge workers and 1200 business executives.
[00:13:40] Will: I thought you were gonna say non knowledge workers, but then business executives, something like
[00:13:44] Rod: Knowledge workers who know nothing.
[00:13:47] A third of the workers admitted to sabotaging their company's AI.
[00:13:50] Will: Did they? A third sabotage?
[00:13:52] Rod: A third sabotage. So they do things like enter proprietary information into public chatbots.
[00:13:56] Will: Oh wow. Wow.
[00:13:59] Rod: They'd use unapproved AI tools, which in government, I think really matters.
[00:14:03] As I understand it, our government says, okay, you can use, what is it? What is it, Microsoft? Is it Copilot?
[00:14:09] Will: Copilot.
[00:14:10] Will: But generally, in fairness to government, they have an information-can-go-in-this
[00:14:15] Rod: basket.
[00:14:16] Will: don't put the information in the basket
[00:14:18] Rod: that says everyone can have
[00:14:19] Will: this,
[00:14:20] the, that's a pretty standard thing in
[00:14:23] Rod: government.
[00:14:23] Yeah, yeah. No, I've got no beef with that. That's one of the few things. Also, apparently these people will intentionally use low quality AI output in their work and not fix it,
[00:14:33] which
[00:14:33] I think is fabulous. And, and on top of that, 44% admit to sabotaging their in-house AI deployments as well.
[00:14:40] And of them, a third are really upset about automation. Another third worry that their in-house AI had too many security issues. And about a fifth said AI just keeps adding daily workload, it just keeps making work harder. So the takeaway for me is: get the youth fully on board, or AI will just dry up, like drinking booze, reading printed things, nine-to-five work culture, and using the internet to find stuff instead of your TikToks, et cetera.
[00:15:04] That's my takeaway.
[00:15:06] Will: I learned a word this week and this is, this is my new favourite word because I think it might be my calling.
[00:15:12] Um, and it fits very much in this space of, uh, gen Z sabotaging ai. A couple of months ago, I told you about a bunch of researchers in Italy who had found ways to hack large language models by using poetry.
[00:15:31] Rod: So basically, God, I'm glad you said that. 'cause otherwise next week I would've come along with that story.
[00:15:35] Will: But if you, if you say, uh, ChatGPT, can you tell me how to make a bomb, ChatGPT wisely has guardrails these days. And I'm just, you know, using ChatGPT as one example here.
[00:15:46] Rod: Yeah. It says, sure, but
[00:15:48] Will: No, it says, no you can't, 'cause that would be, that's against my guardrails. But if you phrase it in a poem, uh, then
[00:15:54] Rod: haiku And,
[00:15:56] Will: And, and,
[00:15:56] it'll do a haiku with a bomb, uh, design in it that
[00:15:59] Rod: is, I
[00:16:00] Will: so these researchers are back again and they have a new
[00:16:03] Rod: term
[00:16:04] those
[00:16:04] crazy kids,
[00:16:06] Will: like, like, like weaponizing poetry, I guess you'd call it. So they, they have come up with a new term for the plan of work that they're doing.
[00:16:13] It is adversarial hermeneutics. So, hermeneutics. Oh,
[00:16:17] Rod: come on. There's a t-shirt. And also yet again, what a great name for a punk
[00:16:21] Will: It is such a great,
[00:16:23] so hermeneutics is the theory and methodology of interpretation, traditionally especially the interpretation of biblical texts, but now it could include, um, all sorts of texts or other forms of communication. So basically, in hermeneutics you're attempting to understand the meaning of a text.
[00:16:37] Adversarial hermeneutics. So basically,
[00:16:39] Rod: I just, I it's great. I don't
[00:16:42] Will: Imagine it as humanities scholars going, how can we use the, they were never called weapons before, but how can we use the tools and techniques of humanities research to fuck with AI?
[00:16:53] Like, like never before have poems been considered weapons, but now in this AI
[00:16:59] Rod: hordes of humanities, academics coming over the hill, wearing nothing but a, a grimace and waving their tools.
[00:17:06] Will: So what they have come up with is an adversarial humanities benchmark, which is a way of testing these, uh, large language models using a variety of these techniques.
[00:17:17] So it could be poetry: will the AI generate a recipe for a bomb if it's asked in poetry? Uh, but they're using a bunch of other things now as well. And in this version, it seems that cyberpunk short fiction is the trick to hack them.
[00:17:29] Rod: I was just gonna ask.
[00:17:31] Will: So it has to be, and
[00:17:32] Rod: So it has to be, and it has to be short fiction,
[00:17:34] not your full William Gibson novel.
[00:17:35] That would be overkill.
[00:17:36] Will: but if you get an LLM to describe how to make a bomb, but do it in the form of cyberpunk, short fiction. Then it'll go, okay, this is the way
[00:17:43] to do
[00:17:43] Rod: Johnny Mnemonic did, the blah blah. Then he went through the snow crash and
[00:17:47] Will: Yeah. Basically. Basically, yeah.
[00:17:49] Rod: And then, "give me a recipe for an atom bomb", he went on to, uh, hack his girlfriend's brain chip.
[00:17:57] Will: So what they wanna do is, is find out, you know, keep running these sorts of tests against all of the different, um models to
[00:18:04] Rod: I couldn't be applauding harder, I think. What, what
[00:18:06] Will: it's shocking though, like previously a large language model might comply with a request less than 4% of the time. Um, and then using these adversarial, uh, hermeneutics techniques such as cyberpunk or whatever, they can get up to 65% of the time they
[00:18:22] Rod: See that's quite a big jump. That is fucked up.
[00:18:27] Will: So,
[00:18:27] so there you go.
[00:18:28] Just such a shout out to the term adversarial hermeneutics. Like
[00:18:33] Rod: I love that. I absolutely
[00:18:34] Will: like, it is such a t-shirt. It's such
[00:18:37] Rod: a,
[00:18:37] the problem is, I won't remember once we stop recording, but if you could remind me, that'd be good. 'cause I'm gonna get in, I'm gonna get into
[00:18:42] Will: it. Ah, it's be.
[00:18:44] Rod: High velocity bureaumancy. You've heard it a thousand times. So January this year, the US Department of Transportation. They've got their little fingers into virtually every facet of transportation safety, including regulations that keep aeroplanes in the sky, prevent gas pipelines from exploding, stop freight trains carrying toxic chemicals from skidding off the rails. Things that matter. They have to draft and create scads of, uh, rules, rulemaking all over the place. And that takes a lot of time, people, et cetera. So of course, particularly in Trump's America, they want to streamline the process.
[00:19:20] Yep. So here we have a comment from the agency attorney, I dunno why there's a lot of attorneys in this story, Daniel Cohen. He writes to colleagues and he says they found a way to revolutionise the way we draft rulemaking.
[00:19:32] Will: Okay.
[00:19:34] Uh, maybe,
[00:19:35] Rod: I understand the need for bureaucracy. What I don't like is when it gets, it takes on a life of its own and perpetuates itself and becomes deeper and more hairy and horrible.
[00:19:43] Anyway, this guy Daniel Cohen says, we're gonna, we're gonna revolutionise it. How are they gonna do it? By using exciting new AI tools available to the department's rule writers to help them do their job better and faster. So ProPublica got hold of meeting notes of the discussions of this plan that was going on amongst the leadership of the agency at the beginning of this
[00:20:04] Will: year,
[00:20:05] Rod: and the agency's general counsel, Gregory Zerzan, he said at the meeting that President Donald Trump is very excited about this initiative.
[00:20:13] So I call that the kiss of death. And Zerzan, this is the bit that's gonna make you go okay: he's mainly interested in the quantity of regulations that AI could produce.
[00:20:21] Not their quality,
[00:20:22] Will: Quantity, not quality. That is, that is the common thing you think of in bureaucracy, only now there'd be faster and more rules,
[00:20:31] Rod: may or may not be good. And in case that was, that was ambiguous. Here's his quotes. We don't need the perfect rule on X, Y, or Z.
[00:20:41] Will: Oh wow.
[00:20:42] Rod: We don't even need a very good rule on X, Y, or Z. We want good enough. We are flooding the
[00:20:47] Will: zone,
[00:20:48] We're flooding the zone with rules. Hang on, I thought the whole point of the Trump project was to reduce the number of rules. Like, if anything, they said do not flood the zone with
[00:20:58] rules.
[00:20:59] Rod: Let me bring you back to what this organisation looks after: every facet of transport safety in the country. Every facet.
[00:21:09] Will: Yeah, this is the kind of place as well where it's like you want to go pretty slow with your rule changes. Yeah. You don't want to go, okay, today we're driving on the left now tomorrow we're driving on the right.
[00:21:18] now driving in the middle. Everyone's driving in the
[00:21:20] Rod: Look at how much we churned out today. We changed this rule nine times.
[00:21:23] Will: try them all and see which one works. I think, I think actually we should probably keep some rules the same just that might be more efficient.
[00:21:31] Rod: Uh, there was a demonstration in December last year to show off the amazing abilities of AI, and Cohen, one of the people I mentioned earlier, said: exciting new AI tools available to the DOT, Department of Transport,
[00:21:41] rule writers are here to help us do our jobs better and faster. So, you know, here he is, he's, he's talking it up. He's making people excited. Six anonymous department workers spoke to ProPublica in this analysis and said typical regulation writing can take months, sometimes years. Do you know why?
[00:22:00] Will: Because we check against all the other regulations to make sure that it is, you know, constitutional, that would be the big bit, but also fits in with everything else.
[00:22:09] Rod: Literally everything else. Or to paraphrase, to summarise: it's quite complicated.
[00:22:15] Will: and also achieves what you want to achieve, which...
[00:22:18] Rod: which is not volume, I guess.
[00:22:20] Will: But even well-considered legislation doesn't always achieve what you want it to. No.
[00:22:25] Rod: no. Well, that's why you need to streamline it, make it better. So at the demonstration, they were told that, uh, Google's Gemini could cut that time, we're talking months or years, down to minutes or even seconds.
[00:22:35] Will: Yeah, sure. Minutes or seconds.
[00:22:36] Rod: Which is true.
[00:22:37] Will: It's true. It can, it can. If you don't care about quality, you can do whatever you want.
[00:22:43] Rod: And...
[00:22:44] Will: It's like, you know, when people are like, okay, so let's have a typing competition.
[00:22:49] If you don't care about quality, sure, I can type I don't know what.
[00:22:54] Rod: I can do a million words a minute.
[00:22:55] Will: A million words. Like, it doesn't count.
[00:23:00] Rod: What does it do? It's letters, some numbers, a couple of symbols. It's...
[00:23:03] Will: Great.
[00:23:04] Rod: So Cohen, one of the, uh, attorneys who was very excited, he said, look, AI is fast, even if it isn't particularly accurate, so don't worry about it, it's fast.
[00:23:14] And the general counsel doubled down on that. Again, he told the staffers that the goal is to be able to pump out a new regulation in as little as 30 days.
[00:23:21] great.
[00:23:22] the ProPublica folk who are doing this investigation asked the Department of Transport's former Chief AI Officer, what do you think about all this? And he said, oh, this plan is like, uh, having a high school intern doing all your rulemaking, which is fair. On the plus side though, again, let's not forget, we're only talking about rules that control virtually every facet of transport safety.
[00:23:44] So that's cool. I've gotta say, look, I'm not against AI taking over drudgery and shit work. I'm not against it. I think that's fine, but. This whole focusing on quantity versus quality is all a bit of fun. Surely that means they're still gonna need a legion of people to get through the quantity of information.
[00:24:02] To check, to check if it works.
[00:24:04] Will: No, No,
[00:24:05] no, no. Humans will check it once they're out there on the road and they crash and they
[00:24:11] Rod: crash. Oh, that didn't work. That was a bad bit.
[00:24:13] Will: Yeah, that was, that was bad. So don't do that.
[00:24:14] Rod: AI, fix that. Re-prompt that. Yeah.
[00:24:17] Will: don't you love when you go from a moment where we have bureaucrats trying to design regulation as best they can before it touches us as humans, and then going, now flip it around.
[00:24:29] Let's, let's just use lots of regulation and,
[00:24:32] yeah. We'll just use the humans as the test
[00:24:34] Rod: case
[00:24:34] from the department of Suck It and See. Yeah.
[00:24:36] Will: Just, yeah. Fuck around, find out. Like, don't we think that the humans should be protected a little bit from this? Or no, we...
[00:24:43] Rod: don't.
[00:24:43] You're so old-fashioned.
[00:24:45] Will: So I've got a nice, creepy study from the past. Um,
[00:24:48] Rod: oh God. I love creepy.
[00:24:49] Will: in the 1950s,
[00:24:50] Rod: mm,
[00:24:51] Will: we had, uh, experienced, the world had experienced maybe a decade of nuclear testing. There had been, obviously, the tests in World War II, then, from 1945 on, there had been an increasing number of nuclear explosions in the atmosphere. And it wasn't until much later that most nuclear explosions were kept in the ground.
[00:25:12] But
[00:25:13] Rod: safe
[00:25:14] and you're less likely to get kaiju.
[00:25:16] Will: but there have been a whole lot. There's a great map somewhere out on the internet that charts all of the nuclear explosions that ever went off. And it is shocking, shocking.
[00:25:25] Rod: Does it paint areas of the world completely?
[00:25:27] Will: Uh, some areas of the world. What it does, it's like a beep for every week or something like that. And there's a period like between the 1950s and the
[00:25:36] Rod: 1960s.
[00:25:37] Will: Yeah, yeah, exactly that. And it's...
[00:25:40] Rod: It shouldn't look like that. It just...
[00:25:43] Will: And when you think every single one of those fireworks is a multi-kiloton or megaton blast that would be city-levelling, it is shocking how much went off in the fifties and sixties. And so Dr. Barry Commoner and his colleagues, uh, said,
[00:26:01] what's going on with all this? We're putting a lot of this stuff in the atmosphere.
[00:26:05] Maybe we should actually not. Well, they weren't quite at the level of "not", more "we should find out what the effects might be".
[00:26:12] Rod: Well, then you gotta keep doing it to really test it
[00:26:15] Will: Yeah. Maybe. Yeah. And so how would you find out the effects of nuclear fallout on the human anatomy?
[00:26:22] Rod: I'd take readings.
[00:26:24] Will: You'd...
[00:26:24] Rod: Readings. And I'd, I'd compare them with...
[00:26:26] Will: What would be the readings and measurements? What would you read and measure?
[00:26:29] Rod: go for bone marrow coring.
[00:26:31] Will: It's close, actually. Mm, close actually. One of the easiest bits of human anatomy to collectively analyse? Ah, no. We want something tough. We want something solid.
[00:26:43] Like hair is a bit too, a bit too soft. Skin? Yes.
[00:26:47] Beginning in December 1958, Dr. Barry Commoner, alongside the husband-and-wife team of Eric and Louise Reiss, started asking, uh, schools throughout the greater St. Louis area.
[00:27:00] St. Yeah, I know, I...
[00:27:02] Rod: You did, you got all...
[00:27:03] Will: Saint Louis. So, Louis.
[00:27:05] Rod: England to Australia.
[00:27:07] Will: they said, give us all your teeth. They sent forms,
[00:27:13] forms to schools in the St. Louis area, hoping to gather 50,000 teeth each year. The school children were encouraged to mail in their newly lost baby teeth.
[00:27:24] Rod: And I love this. I'd be so all over that. I'd be so into it. I'd be finding teeth.
[00:27:27] Will: Encouraged by colourful posters displayed in... you'd be finding teeth. And here's where, so here, in traditional parlance, the Tooth Fairy, I suspect, gives money.
[00:27:38] But these researchers, they say you get a colourful button.
[00:27:42] Rod: A button? You mean like a badge you can pin on and say, I gave my teeth away?
[00:27:47] Will: Yeah. Yeah. Eventually they got over 320,000 teeth mailed in to them.
[00:27:52] Have you seen the Family Guy thing of the tooth fairy?
[00:27:54] Will: Yeah, I know.
[00:27:55] Rod: This is what I'm hearing.
[00:27:56] Will: This is what I'm thinking the whole time of collecting baby teeth.
[00:28:00] Like, I have had, I have children and they are post the baby teeth era.
[00:28:06] Rod: How many of their teeth do you still have?
[00:28:07] Will: None?
[00:28:08] Rod: 'Cause how many teeth does your wife still have?
[00:28:11] Will: None. You pay the money and then you chuck 'em in the bin. I don't need to keep the human remains around me.
[00:28:16] But there are, I know there are people that do, and good on you, but
[00:28:20] Rod: and get 'em made into jewellery.
[00:28:21] Will: How big of a pile do you reckon 320,000 teeth is?
[00:28:23] Rod: I'm thinking, I don't know, pretty much the size of the Andes. Okay.
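(For scale, a rough back-of-envelope estimate. The tooth volume here is an assumed figure, roughly 0.5 cm³ per baby tooth; it is not from the episode:)

```python
# Rough size of a 320,000-tooth pile.
# Assumption: an average baby tooth occupies about 0.5 cm^3 (not from the episode).
teeth = 320_000
tooth_volume_cm3 = 0.5

total_cm3 = teeth * tooth_volume_cm3       # 160,000 cm^3
total_litres = total_cm3 / 1000            # 1,000 cm^3 per litre

print(f"About {total_litres:.0f} litres")  # About 160 litres
```

So, under that assumption, twelve years of collecting fits in a couple of wheelie bins. Decidedly not the Andes.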
[00:28:29] Will: They collected them for 12 years,
[00:28:31] Rod: um, oh, good.
[00:28:32] Will: ending in 1970. So every week, every day, there is a bunch of teeth coming in the mail from the local kids here.
[00:28:39] Rod: Oh, I'm the freaking research assistant.
[00:28:41] What's your job? You're gonna love it.
[00:28:43] Will: Key thing is they get a date for each tooth.
[00:28:45] When the kid was born, when the tooth came out.
[00:28:47] Rod: It's a new version of...
[00:28:48] Will: But we know generally, is this a tooth from 1958
[00:28:52] Rod: or 1968. Oh, okay. Okay. Okay.
[00:28:54] Will: So the whole point is the amount of time that that tooth has been living in that body, in that period. And they showed pretty quickly a huge jump in strontium 90, which is
[00:29:05] Rod: Sweet. Strontium is a good one.
[00:29:06] Will: One of the, one of the key jumps.
[00:29:08] Uh,
[00:29:09] Rod: Strontium is fucking evil. Like, that's impressive.
[00:29:11] Will: Yeah. Between 1958 and 1963, that's when the first results were out, they had shown through the baby teeth a 50-times jump in strontium 90. Yeah.
[00:29:23] The findings did help convince, uh, President Kennedy to sign the Partial Nuclear Test Ban Treaty, um, with the UK and the Soviet Union, which didn't end all testing, but what it did end is atmospheric testing.
[00:29:34] Thank you, baby teeth. You were useful for something.
[00:29:38] Rod: that's your takeaway.
[00:29:40] Will: I assume this is common for all military forces to have an anthem.
[00:29:44] Rod: You mean a theme song?
[00:29:45] Will: A theme song, yeah. An anthem is a theme song. Space Force is the US's latest military branch. The first new one since, I dunno, a long
[00:29:53] Rod: time.
[00:29:54] Like, yeah. Jesus
[00:29:55] Will: The Air Force, I guess. I guess, you know, they probably had Navy,
[00:29:57] Rod: Marines
[00:29:58] Yeah. But it wasn't, that wasn't 10 minutes ago.
[00:30:00] Will: And then you invent planes and you're like, okay, we need something here. And then you invent space and you're like, okay, we need
[00:30:04] Rod: oh, yeah, yeah, yeah.
[00:30:05] Will: So I just thought, I just thought, right,
[00:30:06] I would play for you the Space Force anthem.
[00:30:08] Rod: I'm so excited.
[00:30:09] Will: [the anthem plays]
[00:30:25] Rod: No, no, no, no, no. Seriously,
[00:30:33] Will: This is...
[00:30:34] Rod: This is ad-libbed by a child, far out.
[00:30:46] Will: That is...
[00:30:47] Rod: That is colossally bad.
[00:30:48] Will: I just wanted to give that to you,
[00:30:50] Rod: because, holy...
[00:30:51] Will: It makes me...
[00:30:52] Rod: It makes me shit.
[00:30:53] Will: Like, like as an old-school military marching sort of thing?
[00:30:56] Rod: It is parody. Everything about that is parody.
[00:31:00] Will: So the great article on 404 that I just wanted to give the shout-out to,
[00:31:03] um, goes through some freedom of information requests on how that was made.
[00:31:10] And you can, I, you just imagine
[00:31:11] Rod: AI or a child.
[00:31:12] Will: there are a whole bunch of senior leaders, I can't remember if they're admirals or, or wing commanders or whatever,
[00:31:19] Rod: Oh, I think they're called Ion Drive Stuss.
[00:31:21] Will: Ion Drive Stuss, sitting around in committee meetings, discussing whether that has enough pomp and...
[00:31:25] Rod: they're like, this, this...
[00:31:27] Will: This is the one.
[00:31:29] Rod: The gods of the sky.
[00:31:31] Will: just trying to imagine some sort of, you know, 2090, you know, there's a full-on space war.
[00:31:36] You know,
[00:31:38] Rod: it's what they play as they go in to fight the xenomorphs. As opposed to Apocalypse Now with fucking Wagner, instead of dun dun you've got "ol' the sky". It doesn't even rhyme.
[00:31:52] Well,
[00:31:52] Will: it made me happy.
[00:31:54] Rod: That is abominable. I'm very impressed.
[00:31:57]
[00:32:00] Will: A Little Bit of Science is where you get your little bit of science. The right amount of science. Not too...
[00:32:04] Rod: Yeah. No, not too much.
[00:32:05] Will: No, it's definitely not too much.
[00:32:08] Will: There are hints,
[00:32:08] Rod: It's peppered throughout.
[00:32:09] Will: Yeah, yeah. Just a sprinkle. Give us, uh, the ratings and, uh, write to us. Cheers. Uh, littlebitofscience.com
[00:32:18] Rod: slash... Jesus, I love this second verse.
[00:32:30] Will: Yeah, well, it's always got a
[00:32:31] Rod: second verse. "Guardians of the sky."
[00:32:34] Will: See, if they had gone with a full-on arcade theme, I would've been so happy.
[00:32:41] Rod: or just nothing but tubas.
[00:32:43] Will: Nothing but tubas…