
Transcript: Tree of Knowledge

Sulaiman: Welcome to this week's episode of the Tree of Knowledge podcast! For any newcomers, this is the podcast where Adam, Omer, and I, Sulaiman, research different subjects around a specific theme and come together to speak about them. This week the theme was misinformation in science, and our projects aim to answer two questions.

Where is the communication going wrong within our subjects? And what can we do about it?

 

Each of us will give an overview of our subject and findings, then we'll build on each subject further, and we'll end broadly by discussing possible solutions. Take it away, Adam.

​

Adam: So, hi everybody. My name's Adam, and I'm delighted to be discussing my research on nuclear energy and governmental communication with you today. Personally, it's fascinating to explore why nuclear energy, despite its potential as a solution to the climate crisis, faces significant resistance. Over the past few years, I've delved into this topic seeking to understand the factors shaping public perception.

From examining public understanding of science to dissecting governmental communication strategies, my research has revealed the pivotal role these elements play in shaping attitudes towards nuclear energy. Join me as we uncover the intricacies of this complex issue and explore the narrative surrounding nuclear power.

 

Omer: All right. So hi, my name is Omer, and I'm going to be talking about AI and its limitations and bias. I'm sure you've seen all the hype about AI, like ChatGPT and other, uh, LLMs put out by Google, Meta, and all the hype around social media too. I mean, just recently OpenAI released their new Sora app, right?

And there's been a lot of hype around it, but there's a problem, because there are also a lot of limitations to it. And because of that, there's a lot of misinformation being spread across the internet. There are a lot of TikTok videos and Instagram reels that show, uh, videos taken in real life, but claim that they're from Sora.

So there's a lot of misinformation going out there. And from a more technical point of view, there's some bias, right? From both directions: from companies caring too much and implementing too much, uh, bias resistance, or not doing it at all. Either way, they can really, uh, skew what the AI gives to the viewer.

And I think the solution to that is really just educating people, from all points of view. I'll talk more in depth about this later on. But yeah, that's what I'm working on.

 

Sulaiman: And once again, my name is Sulaiman. My research has been focused on the communication of mental health information and services, especially where young men are concerned.

So young men are suffering high rates of mental illness, but very few seek mental health services. One possible factor could be the way mental health is communicated. It's often put into the frame of weakness. It's okay to be weak is something we hear quite a bit. As far as mental health is concerned, no young man wants to be considered weak.

Also, most current therapeutic approaches are heavily focused on speaking, something that may be deemed “unmanly”. In my paper, Masculine Mental Health, which will be linked in the transcript, I suggest providing young men with adequate information surrounding mental health, and also reframing help-seeking, or the communication of help-seeking, as a sign of strength.

Doing so may increase the uptake of mental health services by this group. Now, since Adam is interested in giving every country the potential to build nukes, and his name also conveniently starts with A, I'd like to begin by asking him this: what is nuclear fission, and why would I ever want something nuclear in my country?

 

Adam: Okay, so I'm kind of glad that you asked this question first, actually, because this is a really big talking point. Why would I want it, and you know, what's the risk with nukes? Those are two really big things. And so I think first we're going to start off with why would we ever even want nuclear fission, right?

And feel free to interrupt me at any point if you have any questions. So think about it like this, right? Climate change has been a massive issue for God knows how many years, and it's going to continue being an issue. And right now there aren't really any foreseeable permanent solutions. The closest things we have are technologies like nuclear fusion, for example, right?

But those technologies, we're not expecting them to come out anytime before 2065, or at least to be useful before then. Same thing with carbon capture. So right now we need sort of an intermediary technology that will get us through these next 40-ish years so that we can reduce our carbon output before we get these technologies.

And so when you look at our choices that are available right now, you end up with a very short list of renewable energies, right? You have solar, wind, hydro, and other than that, you've got some niche technologies here or there. And so there are problems with these technologies. For example, solar and wind are famously uneconomical.

 

Sulaiman: What do you mean by that? I thought solar was going to save the world, bro.

 

Adam: Yeah, so, solar is in fact not going to save the world. It's great, don't get me wrong. But it has a lot of problems, not the least of which is the fact that it's super inconsistent, right? And we can't even implement it everywhere. If you go somewhere like Britain, where it's raining 24/7, you're not going to get much use out of solar.

 

Right? And just in general, with the amount of solar panels you'd need in order to generate the world's supply of electricity, it's really hard to justify that much money. And the same thing goes for wind. So what are you left with? Hydro, geothermal, and nuclear. Hydro, again, can't be applied everywhere, right?

Not everywhere has access to gushing rivers and waterfalls and whatnot. And for geothermal, same thing, it's just not accessible everywhere. So what you're left with is nuclear energy. It's a relatively cheap resource, okay? Like, once you actually have a power plant, it's relatively cheap to keep running.

It generates tons of electricity. It's multiple times more efficient than any fossil fuel we have right now, and the fuel lasts a while, right? And it's super economical too. So nuclear fission is by far the best candidate we have right now for helping us fight the climate crisis. And so the question you might have then is: why haven't we started implementing it already?

And a big part of that is the whole nuke stigma, right? And so here's the thing. A lot of people think that as soon as we give a country access to nuclear energy, immediately they're going to figure out how to start building nuclear bombs.

Adam: It seems natural, right? Like, this is nuclear, this is nuclear, therefore, you know, it correlates, right?

But really, once you start looking into it, you realize that this is extremely impractical. So, there's two main types of nuclear fission fuel, right? There's uranium and plutonium, okay? The thing about uranium is that when you're using it for nuclear energy production, you use uranium that's only about 5 percent enriched, and that's way too little for actually building nuclear bombs.

Nuclear bombs need at least like 20%.

​

Sulaiman: I was going to ask about that. I was going to ask if you saw “Oppenheimer” where they're constantly enriching the uranium.

​

Adam: Yeah, like it takes a lot of effort to enrich uranium, and the stuff that you're using for nuclear fuel just isn't good enough, right? And so it becomes impractical to use that to build nuclear bombs.

And then, if you use plutonium for nuclear energy, right, the thing about plutonium is that it's a lot harder to build nuclear bombs with it. It's just far more difficult. And so again, it becomes impractical, right? It's just much easier to get access to uranium that's enriched enough and to build your nuclear bombs from there.

 

Omer: And, uh, hydrogen, because I know there's hydrogen bombs made from that too. So is there any energy in that?

 

Adam: No, absolutely. But then again, it's difficult. Because, like, for example, right? Say that I wanted to pickpocket Omer's phone, right?

 

Sulaiman: Yeah.

​

Adam: What's going to happen is that if you weren't even thinking about it before, had I not mentioned this, it would have been kind of easy, right?

You're not paying attention. You're not even considering that it's a possibility. I just take your phone from your pocket. There you go. But now that I've mentioned this out loud, that I might have an interest in your phone, or even if, for example, I said like, “Hey, can you give me your phone? I want to search something up.”

Now that you're cognizant that there's a connection between me and your phone, it's going to be a lot harder for me to pickpocket your phone, right?

 

Sulaiman: Yeah.

 

 

Adam: So it kind of goes similarly with nuclear energy, where if a country just has nothing to do with nuclear on the surface, right, and nobody else suspects them of anything, it's going to become much easier for them to smuggle nuclear resources into the country and to start building these capacities for nuclear bombs, right? But as soon as they have eyes on them, right, and the international bodies regulating nuclear energy are watching them, it becomes way harder to do that type of stuff.

 

Sulaiman: But what if they're not watching? Like, I know Afghanistan has problems with terrorism, so what if instead of the country, it's a terror organization that imports this material, or maybe gets their hands on 5% enriched uranium? What would happen then? Like, I know it's a big concern. It's something I've been concerned about.

 

Adam: And that's a good question, but why wouldn't the eyes be on them? Cause if, hold on, I just want to try to understand your question. So are you saying that Afghanistan is smuggling in the uranium or are you saying that they're getting the nuclear energy and then trying to circumvent the law?

 

Sulaiman: If we were to supply Afghanistan with nuclear energy to hit that 2050 target. They are rife with terrorism. I think they're currently run by the Taliban anyway, like the Taliban overthrew their government. So the international body is watching the country, but not necessarily the terror organization.

So let's say they somehow fade away before then. And then the country now has a great government and they start using nuclear energy only for the terrorist organization to come back, take it from them and start building weapons of their own.

 

Adam: Absolutely. So it is a potential risk, but the keyword here being potential, because it's highly unlikely. If you look at the historical evidence, right, there are 20 countries that have gotten nuclear energy so far, they built up this capacity during and after the Cold War, and they did not develop nuclear weapon capabilities.

 

Okay. And so obviously careful analysis has to be done. And in some cases, in the cases of extremely autocratic regimes, right, something like Afghanistan, there might be more risks involved. So you have to be more careful in giving such, uh, you know, such leadership nuclear energy. But historically, creating nuclear energy usually comes second, not first, right?

If a country develops nuclear weaponry, then maybe they go into nuclear energy, but the path from nuclear energy to nuclear weaponry has just not been present ever historically.

Insert: *Autocratic regime means a government where all the power is concentrated in the hands of one person, think the typical ‘Ruler orders and kingdom obeys’ government*

 

Omer: Okay. That's interesting. I also wanted to ask you, how safe is this? I'm asking because a few years ago there was an explosion in Beirut, right? And I was like, that sure looks like a nuclear explosion, right? From poor management. So, how safe is this?

 

Sulaiman: (Interrupts Adam’s thinking) Sorry, but the same thing with Chernobyl. I was gonna ask that question later on. How safe is it?

 

Adam: Okay, so I'm just gonna have to get this out of the way.

 

Beirut was not a nuclear explosion. In fact, what happened in Beirut, uh, this is kind of outside the scope of things right now, but just to clarify: there was a boat at the docks and it was storing, I think it was ammonium nitrate, basically fertilizer material, for way too long.

And it was a massive amount. And then somehow a spark happened, you know, it could be anything really, I'm not sure what the spark was, but in the end that ended up setting off the explosion. So it had nothing to do with nuclear energy. And as for cases like Chernobyl, okay?

 

Sulaiman: Uh, sorry, but I just wanted to interrupt for anybody who doesn't know what happened.

So, what happened there was, there was a nuclear power plant in Ukraine, and it blew up, spreading radiation throughout parts of Europe. It displaced some 350,000 people and caused some 5,000 thyroid cancers, according to the World Nuclear Association.

 

Adam: Okay, so, the thing about Chernobyl, and if you want to, we can bring in the other cases as well, like Fukushima and Three Mile Island.

The thing about these is that there were usually massive critical oversights at a really high level, oversights that we've learned from. Okay. And with Chernobyl especially, it was a lot of negligence on the part of the local government in handling the nuclear power plant and updating it.

 

Chernobyl was multiple years behind, technologically, anything that was available at the time, and they had not fixed a lot of the systems that were in place. So this is less a risk of nuclear energy and more just a risk of management. And this is something that has been improved upon by creating, you know, stricter regulatory bodies, both internationally and nationally, in any country with nuclear energy.

And in the case of Fukushima, it was mostly just failing to account for the height of the tsunamis, okay, that might hit the nuclear reactor. So this has also been a lesson that has been learned from, right? And by tightening a lot of laws, most of these things have been avoided.

And in fact, if you look at how many casualties (nuclear energy) has caused historically, even including the big three disasters that everyone worries about, it is still tied for the safest form of energy out there. And it's tied with solar.

 

Sulaiman: What are the big three safety disasters?

 

Adam: Chernobyl, Fukushima, Three Mile Island.

 

Omer: Okay. And I also want to ask you, is this a really expensive way of getting energy? Obviously for larger countries it may be easy to get it, but what about smaller countries? How easily can they get it?

 

Adam: So that's actually a really good point. And so the main drawback of nuclear energy is the fact that you need a really high capital investment right at the beginning.

Or, in other words, you just need to pay a lot of money upfront in order to build the reactors and build the plants. And so definitely it is a lot harder for countries to just start up in nuclear energy. Usually they need help and they need foreign help and foreign investments. And this usually comes from countries with really good nuclear capacity already, like Russia, China, et cetera.

But once you do start to develop your nuclear capacity, and once you get the ball rolling, it pays off multiple times over economically. I was reading a few research articles, and they were talking about economies that are trying to diversify their electricity grid and make it more renewable, okay?

The best outcome by far is through the usage of nuclear energy. In economies that try not to use nuclear energy and instead use solar, wind, et cetera, electricity usually becomes really expensive, due to a mix of inconsistency and the generally higher upkeep costs of solar and wind per capita.

 

Sulaiman: So in your paper, which will also be linked in the transcript, you mentioned something about governments doing something to uh, promote nuclear energy and make it more popular among the people. How is the government involved in any of this and why would they do that?

 

Adam: That's a great question.

 

And so most countries that have nuclear energy capacities, they're not like… how do I say this?

They're not fully privatized, okay? A lot of it involves the government to some degree. Okay, and that's just because of the regulations I was talking about earlier, in order to make sure that they're safe and to make sure that the materials don't, you know, get turned into nuclear weapons and whatnot.

And so the government would want to promote nuclear energy for the reasons I mentioned earlier, just like really big economic development, right? It encourages economic growth through job growth. And as for what their role in this is: if the government does not take an active role in promoting nuclear energy, then usually the information that's spread around, through the media, is just negative information.

It's stuff like, “Oh, Chernobyl was a big disaster!”, “Fukushima was a big disaster!”, without focusing on why these things happened and the careless mistakes that resulted in them, the ones that just will not be repeated again. And discussing the benefits usually doesn't really happen, because it's not, you know, as clickbaity; it doesn't get as many people's attention.

And so it's the government's job to do two things. One is to educate the public: to teach the public what the science actually says, okay, both the good and the bad, admitting to the faults of nuclear energy while still dispelling the myths and promoting its benefits. And then the second job of the government, and this one is probably even more important than the first, is to make sure that they have open communication, an open dialogue, with the general public.

Because what happens is that if the public is not involved in creating these nuclear capacities, then they start to become afraid that their interests are going to be lost, right? If someone's going to develop a nuclear power plant in my backyard without talking to me, then suddenly I feel like my rights are being infringed upon.

Right. I'm being put at risk without having my opinion considered. And so it's the government's job to create a plan to involve the people. My favorite example is China, when they were developing their nuclear energy capacity. They made a top-down government communication plan, where the federal government came up with the checklist and the ideas and the talking points.

And then as you go down, the municipal governments started involving the local communities. So anyone within, I believe it was a 30 kilometer radius, of a potential nuclear power plant construction site was consulted, and there were hearings so that these people could come and talk about their worries and their concerns.

 

And so this both helps hold the government more accountable, which helps prevent corruption and the mistakes that were made in the big disasters, and it also helps the people feel like their voices are being heard. And so they're less distrustful of these projects and more open to nuclear technology.

 

Sulaiman: Okay, and so let's say that I was living in a space where I was in a 30 kilometer radius of a fission plant. Let's say the government spoke to me and I read about it and Chernobyl doesn't scare me, so I decided to let them. But let's say I also want to spread the word, so I go to my uh, former high school and I try to promote it to a 10th grade class.

Now they're not fans for obvious reasons. What steps would I take to 1) teach them more about this technology, and 2) get them on board, given that it has more potential than solar or wind?

 

Adam: So with the ten year olds, the first thing you gotta do is you gotta play a Subway Surfers clip in the back. [Humorous Tone]

I'm just kidding.

 

Okay, actually though. Entertaining ten year olds, or tenth graders? (Sulaiman & Omer nod after tenth graders)

Yeah, I misheard. Okay, so with 10th graders, I think interactivity is the most important part here, because with education, like, if I'm a 10th grader and you just come and start spitting facts at me, I'm not going to be that interested.

Insert: *Subway Surfers is a popular mobile game where a mischievous main character, who was doing graffiti on a train car, gets chased by a police officer, and you help them get away. Kids in high school, especially the ones sitting at the back of the classroom, are notorious for playing games like these while the teacher is teaching.*

 

Sulaiman: Yeah.

 

Adam: Right?

​

Sulaiman: Subway surfers in the back of the classroom. [Humorous Tone].

 

Adam: Exactly, yeah. And so what might be better than just spitting facts at people is actually making it more interactive. For example, you can take them on a field trip to the local nuclear power plant or to a museum, right? And there they can start asking questions, and they'll be asking these out of curiosity.

And so they'll definitely be listening; you don't have to play the clip (Subway Surfers) in the back. So just being interactive with them, answering their questions, right, and giving them room to ask these questions, I think, is the most important thing.

 

Sulaiman: Really? And in doing so, in allowing them to ask those questions, you think that some of them will be convinced?

 

Adam: Absolutely, right.

 

Because I think nuclear just has really poor marketing right now. If you look at most of the research articles I was reading, whenever they talked about, “Okay, here are our next steps to help promote nuclear energy in our country”, right?

Make the public understand more. [Booming, Condescending Voice]

 

Like that's it. That's the whole thing. Okay. And it's not catchy, it's not interesting. Okay. But with interactivity, right, by allowing people to voice their concerns and by allowing people to have their questions answered, they become a lot more comfortable discussing the topic and they become a lot more comfortable with nuclear energy in general.

So absolutely, I think interactivity is perhaps the most important part that is missing from government communication strategies right now.

 

Sulaiman: Cool. And, you know, with this topic, and given all the misinformation that has been floating around the web, all the doomsday articles, it makes me curious what AI would say about this.

Now, I'm sure everyone here is familiar with artificial intelligence, and Omer talks about it in his “Limitations and Biases of AI” paper, which will be in the transcript. But in that paper, he also mentions things like General AI and Narrow AI. I thought it (AI) was one thing. So could you explain what it is and what type you investigated?

 

Omer: All right. Yeah. So Narrow AI is basically, well, it's been used for many decades now. Take an ad, right? Whenever you get recommended an ad on YouTube or Instagram, or on the web, that's Narrow AI. It's basically using your data, right? Pre-existing data, and it's making a choice based on that.

Narrow AI exists and it's very prominent today. It's not something that's really eye-catching; it's been around for a long time now. And when you think of “real” AI, like ChatGPT, well, ChatGPT is kind of a mix, I'll get into that later on. But when you think of AI in the movies, you know, like “Terminator” or “I, Robot”, right, that's what you call General AI: AI that is, uh, unsupervised. There is some progress being made on it, but it's very much in the future.

And I would go on to say that it's probably at least, uh, 50 years from now. We're pretty far away from it. We are making developments, right, with OpenAI and ChatGPT; we're seeing some progress there. But really, in its essence, General AI comes down to the fact that you don't really need a human, right?

 

It surpasses the intellect of a human, right? It can keep on answering that “why?”. If you keep on asking why, why, why, it can keep on giving you the answer. And to this day we haven't cracked that code yet. So General AI just doesn't exist right now. Although we are making developments toward it, it's a far-off future from now.

But Narrow AI is used everywhere; it's used in so many industries.

 

Adam: Okay, hold on. So I just want to ask a quick question. So what exactly is AI and like what's it doing? Because I want to understand, is it actually thinking right now? Because you're talking about, for example, the Narrow AI thing.

Is my YouTube algorithm actually thinking or is there like some math equation behind it? And then what about ChatGPT? Does it do something different? I don't know.

Omer: Yeah, so with ads on YouTube, well, it uses some supervised learning, and supervised learning uses mathematical formulas, right?

Like gradient descent, and some more complex mathematical formulas. The whole gist is to classify your decisions, right? And then use those classifications to predict future decisions, things you may click on. That's basically how that works. There are other models too, like regression, right? But that's mainly the gist of it.

It takes a large set of data, right? A very large set of data, probably thousands or tens of thousands, or if it's a big company, more like millions, right? It takes that data and puts it into clusters, right? And then, depending on the use case, based on those clusters it predicts what you are likely to choose.

That's how the YouTube ads work, right? And TikTok and stuff. ChatGPT, on the other hand, uses LLMs: large language models. These are basically trained on large collections of text from the internet, right? They take in what people have put out there, right?

And the model learns from it: what people think, what people have posted, right?
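Insert: *For the curious, here is a toy sketch of the kind of supervised learning Omer describes: a tiny "will the user click this ad?" predictor trained with gradient descent. The feature names and numbers are invented for illustration; real recommender systems are vastly more complex.*

```python
import math

# Hypothetical training data: (fraction of time spent on this topic,
# fraction of related videos liked) -> 1 if the user clicked a similar ad.
data = [
    ((0.9, 0.8), 1),
    ((0.8, 0.9), 1),
    ((0.2, 0.1), 0),
    ((0.1, 0.3), 0),
]

w = [0.0, 0.0]  # model weights, one per feature
b = 0.0         # bias term
lr = 0.5        # learning rate: how big each correction step is

def predict(x):
    """Return the model's estimated probability of a click."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

# Gradient descent: repeatedly nudge the weights in the direction
# that shrinks the gap between the prediction and actual behaviour.
for _ in range(1000):
    for x, y in data:
        error = predict(x) - y
        w[0] -= lr * error * x[0]
        w[1] -= lr * error * x[1]
        b -= lr * error

print(predict((0.85, 0.9)) > 0.5)  # heavy watcher: model expects a click
print(predict((0.10, 0.2)) > 0.5)  # light watcher: model does not
```

*The loop is the "classify past decisions, then predict future ones" idea from the conversation, boiled down to two features and four examples.*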

 

Sulaiman: Wait, wait, I'd like to interrupt right there. So you said it learns from the internet, right? Well, in 12th grade English, the teacher got quite upset that kids were using ChatGPT to do their assignments. And the way she could tell was not by using AI-detection software; apart from the supposedly ‘robotic’ way the papers were written, the AI introduced characters that weren't even in the play, or mixed up the genders of the characters in the play!

So isn't ChatGPT just limited to what it finds online?

 

Omer: Yes, in a way, it is limited to what it finds online, because it takes that information, but it adds its own twist to it, right? It adds some nuance to make it different from the information online. And, obviously, it's not perfect right now, so, uh, sometimes there are mishaps that happen.

 

In fact, in my research I talked about this: there were some LLMs that went completely off script and literally just outputted bogus text to my prompt, where I told it to give me a short story of five sentences, and it gave me complete nonsense. So obviously there's a lot of imperfection.

And now, about your second statement about gender: this point directly ties into one of my other research points, bias, right? So sometimes what happens is, and I saw this in LLMs from OpenAI and, uh, I think Meta too, where, to fight bias, a lot of times they go to the other extreme, where they try to be too inclusive, right?

To the point where they end up highlighting the bias instead, right? And, I think-

 

Adam: Could you give us an example of that?

 

Omer: Yeah, sure. So, an example of that could be when, um…

 

Oh yeah, uh, what do you call it? AI image generation, right? When the image generator, and I believe this was actually Google's Gemini, generated images of, uh, like, what was it called?

 

Sulaiman: U. S. Presidents?

 

Omer: Yeah, U.S. Presidents, yeah. And a German soldier, yeah. It depicted them as racial minorities, right? Which is not factually correct, and that ended up blowing up on the internet, fueled by that implicit bias, right? Because the bias was there even though they (the model's creators) weren't trying to do it, and they quickly took it down.

So yeah.

​

Sulaiman: I think that's very interesting. Why would a non-thinking machine, a mathematical machine, try to eradicate bias in the first place? Like, it doesn't feel; it doesn't really care.

 

Omer: Yeah, so that's the thing. It's doing that because of the large amount of investment and funding that's been put in place by governments and companies to reduce this bias.

And I think it's just not that well controlled right now. Obviously, uh, with image generation being new and all, this will probably go down over time, right? Because we're in the first few years of AI image generation, a lot can go wrong. And companies really do care about not discriminating, right, because you can really get counseled for it.

Insert: *Counseled in this case is referring to getting spoken to by a higher up, such as a manager, for racism/discrimination.*

And it's a real big deal in this day and age, right? If you do something racist or something like that, companies really do care about that. And they invest a lot of money into that, but sometimes they take it to the extreme and that can influence the AI model negatively.

 

Adam: It's funny that you mentioned how they're getting canceled because like, I don't know that much about AI, but the one story I do remember is that one racist Microsoft AI chatbot. Though, I don't know, this was a few years ago.

(Checks laptop)

 

Yeah, this was 2016, oh my god. But, basically, it was just some Twitter bot that Microsoft made, and it was meant to be a primitive chatbot. And people started tweeting it all sorts of horrible, horrible phrases and slurs.

And so the AI chatbot started learning these slurs and started insulting quite a few minorities. So while I do think that there's definitely some overcorrection right now, from what you're saying, it's better than that, I guess, right? You know, pick your poison.

 

Omer: Yeah, so, uh, all of this was back in 2016, right, when the AI scene was pretty dull, because there wasn't that much hype, right?

And AI was just in its early stages. And I would go on to say that, what do you call it, the chatbot that people were using was probably based off a supervised learning model, where it was taking data from pre-existing clusters, right, and learning from that data, but not from the general internet. And, oh yeah, also, there wasn't a lot of constraint on the bias at that point, because at that point people were just looking for functionality: “Does it work or not?”.

They weren't really concerned about the bias or discrimination until they figured out that this is a problem later on, so yeah.

 

Sulaiman: So I have one big question.

 

There's a lot of hype about AI, but if it can go as wrong as we've been talking about, why is there so much hype now? And you mentioned that we're probably only going to reach the levels of AI we want, in terms of general AI, far into the future. So why is there so much hype about it now?

 

Omer: The reason there's so much hype is because this is a major milestone. Even in the past decade, we haven't made this kind of progress, right? The AI scene exploded when ChatGPT came out two years ago. And this was major – it was crazy even for computer scientists, just seeing this in action, right?

Seeing this play out, which is beautiful, and on a new level that wasn't even seen before. And it was a real stepping stone for what is possible and what the future is, right?

Because like, if you look at the internet, when the internet came out, right, uh, there was this whole hype about it, right?

And the same thing is happening with AI. The internet, when it first came out, wasn't the big thing it is today; it was a small thing, and it kept developing. AI is like that: it's a small thing, but it keeps developing, and now we can really see its full potential.

 

Sulaiman: Let's say my 13-year-old brother hears about ChatGPT, which he already has. Let's say he tries to use it for an essay – what advice would you give him?

Because I know a lot of kids are using these large language models to write their academic assignments, so much so that you see "No use of AI" everywhere in academic integrity policies.

Insert: *Every academic institution, like schools and universities, has an academic integrity policy: rules to make sure the work we do is done fairly (without cheating, plagiarising, etc.). Here, I mention that these institutions also prohibit the use of AI in their policies.*

 

Omer: Yeah, so in our research, we actually did something like this, where we put essays generated by AI into an AI checker. And a lot of the time they had really high scores. So if you're going to just blatantly use AI to finish your assignment, I wouldn't recommend it, because you're going to get in trouble.

 

Adam: But is a high score a good thing or a bad thing?

 

Omer: A high score in detectability – so it was very detectable, yes. So yeah, I wouldn't recommend using it just for cheating and getting your work over with. I would recommend using it to learn from, right? Putting your existing essay in and telling it, "Hey, how can I improve this? Can you give me some recommendations? Can you give me different viewpoints on how someone would read this?"

Right? And just learning from it. Usually, whenever I write an essay, if I'm not sure about the structure of a sentence, I ask an AI, "Hey, how can I rephrase this?"

"Give me multiple ways I can rephrase this to make it sound better." Maybe I want to sound more elegant, right? Maybe I want to sound more factual, or more down to earth. Then I can tell it that, it can give me different variations, and that saves a lot of time for me. And I can learn from that too.
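Insert: *The "use it to learn, not to cheat" workflow Omer describes is really just careful prompting. Below is a hypothetical sketch we added for illustration, assuming the official OpenAI Python client; the function name, model name, and wording are our own, not anything Omer used.*

```python
# Sketch of the rephrasing workflow: send your OWN sentence plus a specific
# request, instead of asking the model to write the essay for you.

def build_rephrase_messages(sentence: str, tone: str) -> list:
    """Build a chat-style message list asking for rephrasings in a given tone."""
    return [
        {"role": "system",
         "content": "You are a writing tutor. Suggest rephrasings and "
                    "briefly explain why each one works."},
        {"role": "user",
         "content": f"Give me three ways to rephrase this sentence in a more "
                    f"{tone} tone:\n{sentence}"},
    ]

messages = build_rephrase_messages("The results were good.", "factual")

# Hypothetical call with the official OpenAI client (requires an API key):
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(model="gpt-4o-mini",
#                                          messages=messages)
print(messages[1]["content"])
```

*The design choice here is that the student's draft stays the centrepiece of the prompt, so the model critiques and varies their writing rather than replacing it.*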

 

Sulaiman: This is probably the last question on AI in academia, but AI, as far as I know, can't really distinguish between good sources and sources that aren't properly peer reviewed. For anyone who doesn't know, there has been a rise recently of things called predatory journals, which will publish papers that haven't been reviewed, for a fee.

And one of the reasons they've gotten so popular is that academia – like universities – often pushes the professionals there to publish, publish, publish, in order to gain… status, I guess you could call it, in order to move up the ladder. So, could I, or would you, recommend using it for research?

 

Omer: For research, what I would recommend is not relying on it directly, right? You have to have those skills; a researcher has to be able to discern what is good research and what is not. But I would recommend using it, once again, as a helping tool to identify certain research papers. Not letting it disqualify them outright, but putting a large sample of research papers into an AI and telling it, "Which ones would you recommend, which would you not, and why?" – and then going through them yourself and seeing why it does or doesn't recommend each one.

From that, not only will you learn how to use AI better, you'll also learn the faults of the AI, right? Then you can tell its developers too, and improve the AI itself, right?

​

Adam: Well, a lot of people have already been using AI for research, right? And I think one of the scariest things is that AI is already starting to make up some of these facts, but what's scarier is people intentionally using AI to make up facts. I think one of the biggest talking points recently, for example, has been Sora, that new video-generating AI. So, you know, how can we gain the research skills you were talking about, to be able to discern what's real from what's not? Or is it not even a concern?

Sulaiman: What's the issue though? It's just a video creation software.

 

Adam: Well, if the AI can create videos of anything, I could tell it, for example, like, hey, create a video of Sulaiman, uh, I don't know, kicking a puppy, right? [Humorous]

And I could ruin your entire reputation with that. And so that's a really big risk. And so, yeah, so just how do we gain the research skills to tell what's real from what's not?

Or will this not be a concern?

 

Omer: Okay, so since OpenAI is releasing AI video generation, they're most likely going to have a label next to it stating that it's AI generated. But I know you can remove that too. So there are most likely also going to be third-party software tools that offer you the ability to detect whether a video is AI generated, just like how we can check whether an image is AI generated. Obviously, it's not going to be 100 percent accurate or certain, but it'll keep developing, right? Today, the models we have to detect whether a piece of text is AI generated are very, very precise, right? And they also work off AI – they develop over time. So over time, we're going to develop software that can detect fake, AI-generated videos.

Adam: Great. So all the puppies are safe from Sulaiman.

 

Omer: Yes.

 

Sulaiman: You had a question, Adam?

 

Adam: No, that was about it.

 

Sulaiman: Okay. Omer: All right then. Sulaiman: Yeah.

 

Omer: All right. So now we're going to move on to Sulaiman – and your topic was, uh, if I recall correctly, the mental health of men, right?

 

Sulaiman: Yeah, mental health of young men.

 

Omer: Young men.

 

Sulaiman: So a young man, according to the UN definition, which you can find in the paper, is anywhere from 15 to 30. A little bit weird, but it makes the research and its considerations easier, and about as broad as you could get.

 

Omer: Alright, alright. So, please tell me: what were your findings and why is the mental health of young men such a big deal?

 

Sulaiman: So, what I found is that young men are currently facing a mental health crisis. Six million experience depression every year, and when you look at how many are actually actively seeking help, it's only 13 percent of young men aged 16 to 24. All these statistics can be found in my paper.

What is the impact on society, though? Like, why should we care? According to the White House, engagement in crime goes up when depression is experienced in adolescence, and individuals who suffer from mental health issues experience higher rates of unemployment and lack stable housing. The people who have to pay for all the services to accommodate them are, of course, taxpayers. So it would be in society's best interest to help people get themselves out of that hole by improving their mental health, so we avoid this big burden – because the cost of living is already very high.

 

Adam: Well, if it's so important then why haven't we already done anything about it? You know, raising mental health awareness.

 

Sulaiman: Oh, I think we've done an excellent job in actually raising mental health awareness.

 

Pretty much everybody knows what mental health is and why it's important. The unfortunate thing is that it's often framed in this vulnerable, "it's okay to be weak" way. I mean, when I think about mental health, the first thing I picture is sitting in a chair and talking to a therapist.

 

Right? But how many people would actually go and speak to a therapist? Like, if you were experiencing something to do with your mental health, would your first thought be, let me go and speak to this stranger? Or would it be, let's see how well I can cope with this, because I don't want to share it with them, right?

Because mental health is a very personal thing, um, not just for young men, but for everybody.

 

Adam: Yeah, I think you're right. Like, honestly, I think the first place I go to is probably like my parents or my friends. Therapist seems like too much, at least for me, but I guess that might be part of the problem of me thinking that it's too much.

 

Sulaiman: Yeah, for sure. And in an online survey of Australian young men aged 16 to 24, most of them indicated that they'd rather have action-based strategies. So, like what you said – seeking help might mean looking for information online, or going to their parents, rather than wanting their treatment to be entirely talk-based.

 

Omer: Oh, since you mentioned finding information online – I've noticed a lot of men, when they go online, find these red pill communities from influencers like Andrew Tate and take that advice. Would you recommend that advice? And how do you think it impacts men's mentality?

 

Sulaiman: So the red pill offers a great diagnosis of problems but offers horrible solutions.

 

So, take the underlying observation: hey, I don't want to just speak to somebody, I want to do something about it. That's a great diagnosis of the current problem, right? We have this heavy emphasis on going to therapy and talking about it. But the solutions they offer are horrible, like just coping with it and not actually dealing with it.

It's not the best idea. So again, the ways I propose we tackle this problem are: first, signposting young men to information – here is where you can find information, and here are some potential solutions you could try yourself – and second, changing the way we speak about it. I mentioned it's always in this frame of weakness.

But what if we reframed it to be a sign of strength? Because being vulnerable does take strength – a lot more than clamming up and not saying anything.

 

Omer: Okay. Yeah, so I also wanted to ask you: we're focusing a lot on men's mental health, but what about female mental health? How do you think it contrasts, and what are the similarities here?

 

Sulaiman: So across the board, what we notice is that mental health is very personal for both genders. And while I didn't specifically analyze young women, I can tell you straight away that not wanting to open up and not wanting to go to therapy isn't exclusive to young men.

 

I remember in 8th grade, multiple girls were going through stuff mentally, and they really didn't want to go see a therapist, right? They preferred to deal with it on their own. So, again, it's very personal, and that's definitely one thing that stood out to me while I was doing my research.

So, this reframing of mental health would not only help young men, it would also help young women, right? Nobody wants to be seen as weak. Another thing we noticed is that both genders are experiencing mental illness; it just so happens that young men have a higher proportion of completed suicides.

It doesn't mean it's necessarily attempted more by young men.

 

Adam: So I think one thing is that most men's role models typically operate with like the quote unquote, negative masculinity mindset.

​

Sulaiman: Yeah.

​

Adam: Where a lot of film action stars, for example – okay, a disaster happens, and they just sort of stoically take it on, and they're depressed, and they drink it off, and they continue fighting the bad guys, right?

Do you think there's any, like, positive role models that young men can look up to in terms of mental health?

 

Sulaiman: Before we answer that question, one thing I wanted to clarify: one of the problems we see with a lot of the research on men's mental health is this demonizing of men because they're men, or this constant use of terms like "negative masculinity", as if the masculinity is what's bad.

If a man drinks his emotions away and numbs himself, it's "negative masculinity". But if a woman does it, is it "negative femininity"? Why are we always categorizing this by gender? It's just a negative behavior, period. You shouldn't do it if you want to get better, period.

 

Adam: Yeah.

 

Sulaiman: And for positive role models, I'm not going to name any one role model, because it changes from person to person. What I will say, though, is that in order to find a good role model, sometimes you might have to experience what following the advice of a bad role model is like. It's trial and error throughout life.

He tells me to do this, so I'll try it. And if it doesn't work, and now there's suffering – okay, maybe what he's telling me to do is wrong, so maybe I should try to find somebody else. So, I wouldn't say there are bad and good role models; there are only role models who give certain types of advice, and some of that advice maybe isn't the best.

 

And it's up to the young men, at the end of the day, to find out for themselves – using their own thinking, and maybe their experiences, if it's not so apparent that the advice is bad.

 

Adam: Is there a way that we could minimize this trial and error and just, you know, bring them to recovery sooner?

 

Sulaiman: For sure, one of the ways that they could do it straight off the bat is doing their research, right?

Uh, there's a saying that a wise man learns from the mistakes of others. So, definitely doing their own research and looking into how this advice has panned out for others. Reddit is a great example: the red pill community was formed because it was initially just a space for men to share their experiences about things they noticed that went against tradition, or against the conventional wisdom you'd often hear. So: doing their own research.

 

Adam: And what resources would you recommend then?

 

Sulaiman: For resources, I think I would recommend, firstly, looking at the actual advice of people who are qualified to speak about it, and then looking at what a person you trust – or your role model – advises, and comparing the two.

What does somebody who's qualified in this area say? And what does the role model say? And do they line up?

 

Adam: Would qualified just mean therapist then?

 

Sulaiman: It could be anything you saw in research too. Um, but for the most part, generally a professional who works in mental health. So therapist works, yeah.

 

Adam: Okay. So honestly, I'm kind of glad we've had this conversation, because I've realized that a lot of my preconceived notions about these topics, and a lot of my biases, are being challenged. So, in general, I just wanted us all to discuss the underlying theme here: how could we prevent a lot of these biases and preconceived notions from cropping up?

Do you think there's any – maybe not one-size-fits-all, but just any general tip for any institution, be it a government, a university, anything like that? How could they, you know, help prevent these biases and preconceived notions?

 

Omer: I would say education. Educating people about such topics is paramount, and I think it should start in school, right? In elementary and middle school. Having maybe a program once a week where they learn about this – about AI and its limitations, about men's mental health, about nuclear energy – to combat the misinformation out there, right?

 

And they'd also learn about the existence of this misinformation, and be taught why it's wrong and how to avoid it, right?

 

Adam: Maybe they just listen to our podcast for 60 minutes.

 

Omer: Maybe they do.

 

Sulaiman: To combat the misinformation from AI.

 

Omer: Yeah, yes, definitely to combat the misinformation from AI. And I think, even at a corporate level, companies should be training their employees on how to fight misinformation.

So, for example, for AI: any technical employee in a tech company should know how AI works, right? And its limitations. And this doesn't just apply to technology companies – you can apply it to an entire company. You should know how it works and how it can be useful in a business setting.

So that could overall help the company too.

 

Sulaiman: Also, you mentioned education, but with Adam's paper – I was reading it over, and one of the things you notice with education is that its effect goes up and then, at some point, stops increasing. It just plateaus; education is no longer effective.

 

Adam: Yeah, I would like to clarify that actually.

 

It's because, at some point, once you're spitting a lot of facts at me, if I have a deep-rooted belief and the first few facts weren't enough to convince me otherwise, then what's going to happen is that I'm just going to start blocking you out. You know, you're going against something I've entrenched in my mind, so why should I listen to you?

Clearly, you're just not trustworthy. So instead of just educating – which definitely is a big part, but at some point does have diminishing returns – we should also be focusing on earning people's trust, and on engaging with the community and the public. Because with men's mental health, for example, you were talking about the red pill community on Reddit.

 

Sulaiman: Yeah. [Affirmative]

 

Adam: That gave men a space to talk.

 

Sulaiman: In the beginning. Adam: Yeah, well, in the beginning. Sulaiman: In the beginning.

 

Adam: Yeah. And so if we could create more of these spaces for men to talk – you know, not just centralized on Reddit, but, for example, here on campus. Just create spaces for men to talk and be open, without the connotation of being seen as weak.

That would help them start to absorb the concept that, hey, maybe I'm not weak for being vulnerable, right? And if we create spaces for discussion about AI, where people can ask questions rather than just having facts thrown at them, perhaps that would be more useful in answering their curiosities and helping them truly conceptualize the biases in AI.

 

Sulaiman: For sure. Some men have found help groups – where there are other men who have gone through what they've gone through – helpful. But I'd also like to mention not just communication between individuals, but the words used to communicate the ideas themselves. So definitely focusing on those. For example, saying "it's okay to be weak" versus "it takes strength to speak about this matter". Or with nuclear: instead of framing it as though we're treading a line where this thing might blow up, and we're only doing it because it's the only viable route to green energy, framing it as "this is extremely safe". The way the conversation is framed.

 

Adam: Also, I'd actually like to add to that – it was part of my research, and that's a really good point you make. In the case of nuclear energy specifically, I discussed the concept of a socio-technological imaginary, which in plain English is just the idea of a government integrating a technology into the imagination of a society, right?

Because if, for example, we went back to the mid-1940s, right after World War II, and asked any American on the street, "Okay, name a technology that comes to mind as soon as I say 'Manhattan'," the first thing they'd think of is the Manhattan Project – nuclear technology, immediately.

Right. And so countries can associate certain technologies with how the public imagines the country's growth into the future. That's definitely something to focus on – not just for technology, but for other concepts as well. If a country starts integrating these concepts into, for example, public speeches, it can change how society imagines the country moving forward in relation to them, help improve these imaginaries, and thus improve the public's understanding.

 

Sulaiman: So I think one of the technologies entrenched in people's minds – maybe they think about America because of OpenAI's ChatGPT, or maybe they think of India when they think of AI, because of the stereotype of Indian people being great at programming.


Sulaiman: Okay, and now that we've covered the general broad solutions, and the specific solutions to be implemented in each subject, we can finally move on to the part that all of our listeners enjoy the most…

Adam: Hyperbole hourrr! [Excited]

 

So for those of you who don't know, this is the part of the podcast where we just start asking all the questions.

All the absurd questions, right? Some of these you might see as hot takes on Twitter; some of these you might never hear. We're just here to have fun, you know, now that we understand all the topics. So, Sulaiman, off the top?

Sulaiman: Yeah, for sure. One of the concerns I had about AI, specifically moving into the future, is: what if we gave AI control over nuclear weapons?

Would that even happen? And will we all be gone?

 

Omer: I personally do not think that will happen – us giving AI sole control over a nuclear weapon without any human intervention involved. That being said, it is possible in a future where nukes are much more common and used much more frequently, and the consequences of using them are much lower, where a government may just want to automate the process.

Then, yes, an AI could be given full control over it – but only if nukes carry much lower stakes.

​

Sulaiman: And you mentioned that AIs can only do, more or less, what they're programmed to do, right? Omer: Yes. Sulaiman: So what if somebody programmed it to take over the world? Have you seen the Mission Impossible movie that came out?

Where there's this evil AI that's sort of taking over the world, and it appears to this one guy as his friend, and then it's, like, hacking into stock markets and nuclear programs and stuff.

 

Omer: Oh, yeah. So that's definitely an example of a general AI – the type you see in the movies. So that's definitely something far from now.

 

But, uh, to even achieve that, you would need a large set of data, right? I mean, a very big set of data to even achieve that type of knowledge. And to prevent that from happening, safeguards would definitely have to be put in place before its development, because it's such a powerful tool.

 

 

Adam: So it's still more likely a person takes over the world than an AI?

 

Omer: Yes.

 

Adam: Okay, I still got chances. Good, good.

 

Sulaiman: Wait. So, you mentioned that corporations pay to develop this AI, right?

 

Omer: Yes.

​

Sulaiman: Governments do, too. But why would OpenAI make it free? Is it taking my data like Facebook?

 

Omer: Yes, to some degree. They do take your data and use it, and sometimes they also give it to the government, because these are US companies and they have to abide by the government's rules. But a lot of the time, they use that data to make your user experience better and to make the AI itself better – so it can be more human-like, you know.

 

Adam: On the topic of AI, actually – you know what, I'm kind of curious. Sulaiman, do you think we could ever have, like, AI therapists? Could we just program a chatbot to give me therapy, so I don't have to drive all the way to the therapist?

 

Sulaiman: That was one of the problems cited in the Alice et al study: one young man said it was tiring to actually go back and forth, and I think a lot of people feel this. As Omer said, it depends on the data set. Maybe we could see something like that, but generally for not-very-severe symptoms – shallow, or light, symptoms. And then, if it was more severe and this therapy wasn't working, we'd probably have to seek higher forms of intervention.

 

Omer: I'd like to add to that. There's a company called Neuralink that is developing chips you can implant into your brain. And it's been said that once this technology is fully developed, all therapists will be out of jobs. Simple as that.

 

Sulaiman: Let me ask something. Was this company made by the same guy whose rocket blew up?

 

Omer: Yes.

 

Adam: I would also like to point out the fact that if the chip could magically cure my depression, couldn't it also magically cause it?

 

Omer: It definitely could.

 

Adam: I'm good, thanks.

 

Omer: Obviously when it first releases, there's going to be a lot of like, tests, right?

 

And it won't be that popular at first; it'll require a lot more development, a lot more refinement, to make it better.

 

Sulaiman: I remember watching a YouTube video, and apparently Tesla isn't really a car company – it's a data mining, or data collection, company, right? So I don't think your thoughts are safe from Tesla, or from Elon Musk, anymore.

 

Omer: Well, it uses our data for its AI car software, right? It collects data about how cars are moving, where the different sides of the road are, where pylons are, the nature of how drivers drive, and how you drive too. It collects data on that to make the user experience better.

 

Sulaiman: So basically – okay, I have one question about that. Just one question. Does it collect information on where I go and where I live?

 

Omer: Yes, it does.

 

Sulaiman: Okay, well I think that's a very comforting thought for everybody that plans on buying a Tesla in the future.

 

Omer: But it's used for your safety, right? It's used for you to directly tell the car to go to where you want it to go.

 

And it's used for user experience. So, if you're not comfortable with that, then that would definitely be something that hinders your decision to buy one.

 

Adam: Okay, I'm getting a lot of dystopian nightmares from you guys, so I think I'm just gonna end the podcast here. Alright, thank you all so much for tuning in. We hope you enjoyed this week's episode. Next week, we'll discuss misinformation around the Moderna and Pfizer vaccines, a story about New York vaccinations, as well as climate change. We hope to see you then.

​
