Transcript: Misinformation in the Information Age
*Intro Music: a high-energy blend of electronic and metal music, featuring a rapid piano melody and synth elements in the background*
Sania Khurshid: Hello listeners, this is Sania Khurshid and I’ve joined my co-hosts to explain a current topic of interest which I have spent significant time researching. My topic is the use of AI within medicine, and how its use has been communicated to the public.
Leena Alorbani: My name is Leena Alorbani and my topic is the communication of research on stem cell therapy as a treatment for Alzheimer's disease.
Raweeha Raza: And hi, I’m Raweeha Raza, and in this podcast I’ll be talking about research regarding the rise of AI and public perceptions. In particular, the role of AI in job displacement, and whether or not AI will really take over your job.
Sania Khurshid: So Leena, you've been talking about stem cells and Alzheimer’s within your research, can you tell us a little more about that?
Leena Alorbani: Yeah, so, stem cell therapy is a promising treatment for the neurodegenerative disease Alzheimer's. So Alzheimer's is a disease that impacts the cognitive function of a patient, and is particularly prevalent in older people. Stem cells are undifferentiated cells, meaning they do not yet serve a specific role, and they can differentiate into many cell types. The idea behind stem cell therapy for neurodegenerative diseases like Alzheimer's is that these stem cells can replace or supplement dead or damaged brain cells. Currently, there is no cure for Alzheimer's, only symptomatic - so, treatments that directly treat symptoms - and partial treatments. For a disease that is this impactful on the lives of patients and those around them, the research on a possible treatment is extremely important. So animal trials have shown, uh, stem cells to be effective and safe, but any trials on humans are relatively new, with the first one having been done in 2011, and they have not shown as strong of a result and have not yet really been able to progress. For such an important topic, this therefore raises the question, “What are the benefits and limitations to the progression of stem cell therapy research for Alzheimer's disease to further clinical trials?” I examined this question by looking at two different aspects: one is the gap that exists between animal trials that have already been done and clinical trials on humans, and two is the gap in communication between the scientists and their research and the general public. So for the first aspect, the gap between pre-clinical and clinical trials, I examined two clinical trials: one being the first one that was ever done, in 2011 in Korea, and the other one that is currently being done, with the head scientist, neurosurgeon Christopher Duma, having done a not fully FDA-approved clinical trial in the past.
Both of these were stage one clinical trials, meaning they were testing for safety, and both of them found that the use of stem cells is relatively safe, with the biggest open questions being the best transplantation method and the side effects of the surgery. There was a lot of progress made in the technology between these two trials; the Duma et al. trial was able to take a slightly different approach using the patient's own extracted stem cells, um, and was as a result able to see much clearer results in terms of efficacy, with the treatment at least stabilising the cognitive decline of patients. The currently ongoing trial has advanced even further, as they got FDA approval, and they now have the technology to supercharge these stem cells. This clearly shows that there is progress being made in stem cell research. The limitations, uh, are not scientific but rather mostly a result of requiring funding, time, and participants. The average clinical trial on stem cells takes 2 to 5 years to complete, and requires significant funding and resources. As well, the clinical trials I researched were limited in participants, with each having only around 10 patients. Even the clinical trial that is currently being done has an estimated enrolment of 18 patients. Even the process of approval requires resources. The most effective way to get funding and participants is for the public to invest and believe in the research, which brings us to the second part of this research. So before I move on, do you guys have any questions?
Raweeha Raza: Um, yeah, actually, I do. So why do you think it's challenging to move from testing stem cell therapy in animals to human clinical trials for Alzheimer’s?
Leena Alorbani: So there are several reasons for that. One is the purely scientific reason, where animal brains are much simpler than human brains, and, uh, for example, mice brains are very similar to one another. Human brains are a lot more individualised, which means that you might need to personalise treatment for every individual person, and you wouldn’t need to do that for animals. The other reason is an ethical problem, where there's much more of an, um, ethical limitation when you’re moving to working with humans, especially when you don’t already know the side effects, than when you’re working with animals.
Raweeha Raza: I’m just curious, Leena, how do researchers actually measure the effectiveness of the stem cell therapy happening?
Leena Alorbani: Yeah, so, Alzheimer’s is a neurodegenerative disease, which means that its major symptoms all impact cognition - so basically, the ability of the patient to function in a lot of things like memory and learning. So the way that you can measure the effectiveness of the stem cell therapy, and the way that the researchers do this, is by doing certain tests to test the function and memory of the patients. Um, and from there, you can figure out whether the cognition is improving or still declining or stabilising - which the, uh, not-FDA-approved trial found: their patients were stabilising with the stem cells.
Raweeha Raza: Oh, I see. Also Leena, I was wondering, because I’m not really a science-ey person so I’m curious about this. Uh, what- like, what are the actual typical improvements or outcomes that researchers are looking for in these trials? Like both short-term and long-term?
Leena Alorbani: Yeah, so, um, mostly the easiest thing to measure is memory, because there are a lot of widespread memory tests, but the other thing is just general function, so for example, um, Dr. Duma in his, uh, YouTube interview, he mentions that one of the measures is- are the patients able to or not able to dress themselves. So like, put on their own jacket, put on their own clothes, and that’s one of the measures of cognition they use.
Raweeha Raza: That’s really interesting.
Sania Khurshid: And just to ask about the, um, first clinical trial, why wasn’t it FDA approved? Like, would that alter how other people understand it? What’s going on with that?
Leena Alorbani: So, uh, basically, that clinical trial was finished in 2015, which was before a time when stem cell therapy was like, really widespread in research. So it was before a time when the FDA was really cracking down on people doing research with stem cells. When more people started doing research with stem cells, the FDA started regulating it, uh, which is why the current trial is FDA-approved.
Sania Khurshid: So does the FDA approval give the, uh, study authority, when you’re looking at it in terms of the first one? Is there a difference between the two due to the authority that the approval gives?
Leena Alorbani: So, the big difference is obviously that the FDA is the one - at least in the United States - that approves all of the- all of the, like, the drugs and treatments for diseases. So, uh, a trial having FDA approval makes it more official and also means that when it goes forward to more clinical trials, it has the possibility of being used for actual treatment. The other thing for this specific trial, though, is that FDA approval gave them, uh, a little bit more funding, in order for them to be able to have higher quality technology and for them to be able to supercharge the stem cells, like I mentioned.
Sania Khurshid: That makes complete sense. But, since this is an older topic, since 2011, why is this *now* the first time where I’m really hearing about it, from you specifically?
Leena Alorbani: So, that actually brings me to the second part of my research, which is the gap between the scientific research and the understanding of the public. So the way that I studied this was using an anonymous survey, and a couple of large things were found. One is that there is, like you mentioned, an extremely large degree of ignorance towards the topic. Even with the sample being completely made up of university students and graduates, and with 88.6% of the 44 participants knowing or having heard about stem cells in general, only 29.5% had heard about stem cell therapy for Alzheimer’s specifically. Participants in many other sections of the survey displayed, and even explicitly expressed, a great degree of uncertainty about the topic overall. The other important thing to note is that the knowledge that the participants did have was mainly from academic sources like school or peer-reviewed research papers, which are not very accessible or easy to understand for the general public. As a result of this, despite having a clear understanding that a treatment for Alzheimer's is needed and not displaying any negative opinions, the participants in my survey - and, it can be assumed, the general public - had a very limited understanding of the benefits of the research. As for the limitations, they did mention safety, risks, and side effects 15 times throughout the survey, um, and this was the most common thing that they mentioned overall. And this is very interesting because the entire purpose of the trials that have taken place - the stage one clinical trials - is to identify and increase the safety of these treatments. Overall, this does display that the public does not have a full understanding of the topic at all, and this is partially if not fully a result of a lack of clear and accessible information.
This, along with how important the general public is to this scientific research, shows how connected these two aspects of the research question are. So bridging the gap between the scientific research and the understanding of the public is essential to bridging the gap between pre-clinical and clinical trials, and therefore for progress in this field of research.
Sania Khurshid: So public fear is understandable within your research. But how significant are preconceived biases in structuring the way people understand this research and their hesitance for funding or volunteering?
Leena Alorbani: So obviously this is a pretty new topic in terms of the public communication of it, and as a result, this fear comes from a lot of, uh, preconceived worry about new science. Um, especially because of the fact that when new science comes out, we don’t have a lot of knowledge on the side effects and general impacts, or even possible benefits, of the treatment itself. Um, and I think part of that is, uh, an issue with people not really understanding the process of how research happens in general. So, stage one clinical trials are meant to combat the safety fears that the public has, but if the public doesn’t already have an understanding of how this research progresses, then it’s very difficult to explain that these clinical trials have to be done in order to, um, mitigate these safety concerns.
Sania Khurshid: So you mentioned that we need to know what's going on within these, uh, clinical trials to fully understand the research, but how achievable is this? What is the basis that we need for this to occur?
Leena Alorbani: Yeah. So that question and my previous findings are actually what shape the next steps. Um, so this next step, and this might answer your question, is inspired by a video by the neurosurgeon, uh, Dr. Duma, where he's explaining to the Alzheimer's Orange County community the research that he and his team are doing. The video is openly accessible, clearly explains what they did in their current and past research, the benefits of it, the improvements made over time, and their need for funding. The language and explanations used in the video are very clear and designed to be easy for everyone to understand. For example, to explain the mechanism of Alzheimer's, he describes the cause of the cognitive symptoms, which is now believed to be neuron death - so neurons are the essential cells in the brain that send and receive signals - and synaptic damage, which is damage to the structures that allow these signals to be sent between neurons. Uh, so he explains that, because of this, symptomatic treatments are not as useful. He uses the metaphor of a broken refrigerator, stating that what needs to be fixed is the fridge itself, rather than continually taking out the rotten food inside. Of course, Duma's audience did already have background knowledge on Alzheimer's, but a similar strategy can be applied to wider audiences, particularly using social media platforms. The basic knowledge on Alzheimer's and stem cells in general can be communicated through these platforms. Things like YouTube videos explaining the topics to the public act as much more accessible means of gaining information than research articles or school. Short-form content on social media such as Instagram or TikTok can also be used to reach an even wider, especially young, audience and either briefly educate or bring awareness of the availability of longer-form content.
Another more organised strategy to do this is a weekly or monthly podcast, kind of like a podcast you may know, This Week in Virology, which is a science podcast that gives weekly updates on viruses and got especially popular during COVID-19. This would allow the scientists to give updates on progress in the research and currently occurring clinical trials. This is a way to keep the public updated on a regular basis on how the research is progressing and what is being done and given attention in that particular period of time. The clearest benefit of this next step is that the public will have a better understanding of the science, which sparks further discussion on the topic and prompts further research. The other benefit is that the general public is the means of funding further research through investments and sponsorships - even just through the conversation created when the treatment of such a life-changing disease enters public discourse.
Sania Khurshid: These are very manageable approaches, but I was wondering, who are the agents that we need to employ to better communicate this research to the public?
Leena Alorbani: So, um, that's a very good question, especially since, uh, it's oftentimes not entirely manageable for the scientists that are doing the research to also do all of the communication on it. Um, ideally, it, it, it would be good for the scientists to do the communication because they have the most full understanding, but of course, not everybody can go on to a Zoom call with everybody like Dr. Duma did. So, as a result of this, it might be a good idea to form kind of, like, coalitions or groups of scientists or, like, public scientists, citizens who are interested in the science. Educate them well, and then have them educate everyone else.
Sania Khurshid: And what are the limitations that come from having more public involvement in this type of communication?
Leena Alorbani: Yeah, so, um, in translating knowledge from scientific articles, there is a significant amount that can be lost, and there is a possibility for misinterpretations. Mass communication, for example, does run the risk of causing overgeneralisations: if the communication is not fully clear, it can be possible for people to take away things that were never actually said. For example, um, the FDA issued a warning in 2019 about how, even though stem cells are your own cells and things have been found to be safe, there are still risks to it. And that's important to communicate clearly. And I think the important thing there is to, in your communication, be fully transparent and communicate everything with equal clarity, so that fully accurate and comprehensive information is being shared.
Sania Khurshid: Definitely.
Raweeha Raza: Right. So, um, Leena, I had a question. I think stem cell therapy research is like really interesting and it definitely needs a lot of time, money, resources, um, to improve the research. And I was wondering how, like, how do you think regular people can contribute to or support stem cell therapy research for Alzheimer's besides donating money?
Leena Alorbani: So the biggest way I think to do that is to like make yourself aware. And this would be especially useful when there's more social media, uh, about it.
Raweeha Raza: Mhm.
Leena Alorbani: So like, by educating yourself on the research, you're able to educate more people or share the educational content that is there, and help to kind of spread the awareness on the topic, which allows other people to fund, participate, or even just understand - which is also an extremely important part.
Raweeha Raza: Yeah, for sure.
Sania Khurshid: And also, Leena, seeing how the two trials were conducted in different countries, do you think global cooperation could be applicable to this topic, to reduce burdens from the limitations that you previously mentioned?
Leena Alorbani: Yeah, I definitely think so. Um, especially since the major limitations are things like funding and time. And I think global cooperation - more people doing the research, and more countries joining in for funding - um, kind of like all global research, means a lot more funding simply because it is global. And if we can do that for this, progress can definitely be much faster.
Sania Khurshid: And do you think that going global with this type of research will cause more issues in terms of communicating?
Leena Alorbani: Well, I think the issue with making communication and research more widespread in general is always that it becomes more difficult to communicate with each other, simply because there are more people and more people working on it. Um, there's always more of a chance that misinformation will slip in and accidentally become the focus, simply because of the magnitude of the thing.
Sania Khurshid: Yeah, so as I can kind of conclude from that: Global cooperation is really good, but there's also many issues with it.
Leena Alorbani: Yeah.
Sania Khurshid: Leena, what do you think the public needs to know right now for the most impact? What should we start with when we're communicating this?
Leena Alorbani: I think the most important thing to communicate is that there are things happening. Like there is research taking place, there are clinical trials, because like as I saw from my survey, the majority of people don't even know that this research is going on at all. And like you even mentioned this in your question.
Sania Khurshid: Mhm.
Leena Alorbani: Um, so, I think the most important thing is to tell them that this research is happening and to explain to them why this research is so important. Like, um, again, the trials that are currently taking place are stage one clinical trials, where the entire purpose of them is to determine the safety, um, of the treatments, uh, so that you can then move on to, uh, more evaluations about specific efficacy, which is how well it works, of the treatments.
Sania Khurshid: And that's a really good start. As someone who had not heard about this topic at all before Leena approached me with it, I do agree that we need to let people know that this is starting. This is something that will be continuing.
Raweeha Raza: Um, so just, I just had another question about, uh, the, limitations in researching on humans themselves, and I was wondering if there could be an alternative that would also provide realistic results, maybe something like artificial intelligence. What- what are your thoughts on that?
Leena Alorbani: I actually didn't come across anything like that in my research, but I do know that technology like that has been used in a lot of medical fields already. Um, so it could possibly be used for this one day.
Raweeha Raza: Mhm.
*Transition music consisting of a Lo-fi Hip Hop Mix*
Sania Khurshid: And honestly, AI is a big debate, especially within medicine during this time. Um, as you might know, GPT-4's release has caused more and more people to investigate how it can be effectively applied within medicine. My research project specifically focused on how the communication of AI in the medical field altered the public's understanding of medicinal expertise and skills, and where AI stands in terms of large multimodal and large language models - LMMs and LLMs. This issue is especially relevant with GPT-4, which was produced by OpenAI. And my research primarily revolved around current cases of misinformation, how the information presented with them influenced public opinion, and the links established in terms of the issues individuals have with AI. The main themes include issues with ethics and security, the public being unaware of the structure in which this technology is implemented, and an overall lack of good research - which kind of connects back to Leena's point, where the public isn't really aware of the applications of this technology. So one of the communicative aspects I looked at in my research was a book, The AI Revolution in Medicine: GPT-4 and Beyond. This was produced by several authors - Carey Goldberg, Isaac Kohane, and Peter Lee - all of whom were given early access to GPT-4 in order to experiment and showcase their thoughts about how it could be applied to medicine. And the contents of the book were really well written. Many people who had an extensive, uh, background in medicine commented that they were able to fully understand what the authors were saying, as the book didn't overstate the ability of GPT-4 or make unrealistic promises. But one of the important aspects of public perception was that people looked mainly at, um, Isaac Kohane, who is one of the authors and is affiliated with Harvard University.
So, they looked at what he was saying instead of the contents of the book, and this is what was reproduced multiple times - especially the key statement that he made, that GPT-4 has worked better than many of the doctors he has observed: a very misleading statement that, with the limited information already available on this topic, creates public confusion about what is going on with GPT-4 and the next steps.
Leena Alorbani: So I actually have a question for you, Sania. Um, you mentioned that the book that was released had an audience of mostly medical professionals. How much do you think the format of that communication has the- has an impact on who the audience is and the understanding of it?
Sania Khurshid: So that's a very significant issue with books in this capacity, because it is not accessible in terms of price. So this is a book that you have to actually purchase, and you cannot access it freely online. So the issue with this is that the people who seek out this book and look at its contents are those that already have a basis in medicine and want to learn more - they're actively purchasing it. And this is what influences how many people are able to access this information, and why certain audiences are able to approach different, uh, communication spaces.
Leena Alorbani: Yeah, that makes sense. So just like as a follow up to that, did you find that the information about this topic was mostly communicated in higher level academic sources that aren't accessible like that book?
Sania Khurshid: So there are a lot of ways that information is communicated. Mainly I looked at news articles, because those were the most accessible and the easiest to find - just a quick search and you can look through them. But of course there are a lot of scientific journals, especially since this is a new technology. There still needs to be more work done in terms of peer-reviewed articles and all of this stuff.
Leena Alorbani: That makes sense.
Raweeha Raza: Also, Sania, um, why do you think Kohane said such a statement, that “AI is better than many doctors I've observed”? Like, do you think there is a motive or any potential bias?
Sania Khurshid: So there could be a couple of reasons. The first thing is, uh, as observed from interviews online and other things, Kohane generally is a person who makes large claims and does not follow through with the communication or the technology at that time. But also, he has a background in more of the computer and information side of science - so he's not really there in terms of healthcare and medicine, which might influence how he sees these sectors. And it just goes to show that authority figures have to be very careful about what they're saying, to minimise the spread of misinformation.
Raweeha Raza: Oh, absolutely. Because that's what people just catch and then get distracted by.
Sania Khurshid: Exactly. It's really easy to reproduce misinformation in this way. The second case study that I looked into in depth is Carbon Health, which is an American healthcare chain that has recently implemented large multimodal models to chart patient data during consultations, using a recording feature. The article that was produced by the company itself lacks significant information when explaining their implementation of AI. So rather than focusing on the technology, they talked more in terms of future impacts. So in their announcement, they state, quote, “the rapid shift towards AI gives our team an opportunity to rethink every aspect of our patient experience and we plan to extend our patient patterning AI capabilities to areas like operations, referrals, and other workflows.” End quote. From here, we can see that they're looking towards the future; they're not really talking about what they're actually doing in the moment. And this has negative effects in reducing public trust in their operations, as many security and privacy concerns were not mentioned, such as how information is obtained by this LMM and is stored or encrypted, or what the limitations of this AI are in general. This has really, uh, significant impacts in changing public perception: in news articles such as The Register's, which showcased the information put out by Carbon Health, individuals had a very negative view, where they thought that this technology would be used specifically for diagnosing, um, illnesses and other things. But this is simply used to chart the information patients provide, so that doctors are able to better facilitate doctor-patient interactions and spend less time typing during medical checkups, with more time focused on the patient. Which in general seems very beneficial, as it allows for greater levels of trust between doctors and patients.
However, its inability to communicate effectively caused individuals to also believe that the implementation of AI will, quote, “remove the care from health care,” end quote. What's interesting to note is that there are forms of media where the process of implementing AI and the security or privacy concerns are addressed. One such example is a podcast, The Revitalised Doctor. And this is by, uh, Andrea Austin, who has an MD, and she interviews one of the Carbon Health members, Dr. Kaylee Dove McGuire, who highlights all of these concerns, stating how this will be used and where the information will stay - which is in house, similar to regular charting. It won't go to OpenAI or all of these other third parties. But the issue with this podcast, which is seen with many of the technologies within the sector, is that its main audience is female physicians experiencing burnout. So it has a lack of public outreach. The real-life implications of this showed in a survey that I conducted with 18 participants, where more than 60 percent of individuals were unsure how medical charting would be used with AI. And when the public is not provided with clear and accessible information, they are left to rely on preconceived notions, which, in the field of AI, I think you could imagine would facilitate a negative attitude toward the practical applications of this technology in medicine. Although it was a relatively small study, it highlighted similar trends to all of the discourse presented online and further emphasised the need for good communication.
Leena Alorbani: So I have a question. How much of an impact do you think the novelty of this technology has on the current quality of communication?
Sania Khurshid: Yeah, so this is very significant, as I found in my research, especially within my literature review, where certain studies that were conducted last year or during this time have lacked significant reproducibility due to them being just so new. One such example is from the New England Journal of Medicine, where they attempted to analyse how well AI could diagnose versus medical readers, except, uh, they lacked significant information within their paper - specifically, how they were able to compare human, um, answers by creating a pseudopopulation of 10,000 generic human participants when the actual polls themselves lacked this many people. And additionally, they were using data that was free to the public, so it wasn't just people who had a basis in medicine; it could have been everyone. So as we can see here, since this is so new, not a lot of people are able to effectively test it out and conduct good research during this time, leading to people being misinformed about how well AI can perform against humans in terms of medicine. So, as we see here, one of the key limitations in communicating is just how new this technology is.
Raweeha Raza: Also, I had a question regarding the patient's concerns with AI in the health care and like how they think it will remove the care that they actually get from humans. And so my question is, how important is it to involve end users? So patients, caregivers, health care providers, in the design and development of AI systems for healthcare.
Sania Khurshid: Of course, it's extremely important that we involve these specific agents, so that we can create a design for the AI where more individuals understand what is going on, especially in terms of patients, and are able to, um, not fear what's going on during this time and understand that we're not removing care from healthcare. But it's not very feasible during this time, especially since they're such different fields. The way of technology and the way of medicine - they're completely separate topics. And so it's quite hard for us to say, okay, we need to involve these key agents when we're designing AI. There might be some way of doing this in the future, but as of now, it's not, um, it's not present.
Raweeha Raza: I see.
Leena Alorbani: As a follow up question to that, I was curious, how important is it to bridge that gap between the field of technology and the field of medicine?
Sania Khurshid: So, good question, and this kind of transitions to my point about next steps, where there are certain suggestions already made in terms of decreasing this gap, as seen in suggestions put up by the American Medical Association Journal of Ethics. There needs to be a subset of actors that are held ethically responsible for issues created by AI. But additionally, when we're looking at medical professionals, their role in this system is to be trained with the program and to understand the biases of this technology, along with, um, what's going on and how they can effectively communicate this to their patients. So this is what we see not happening at Carbon Health - they're not really disclosing the security or privacy or ethics of it. And this is what we need to, uh, go towards; this is what we need to transition to. There are also many other things that this journal recommends in general, such as that medical device companies need to be at the forefront of this system, to ensure that they communicate the technology's abilities to physicians and ensure that they have the basic training. But there are also other steps that I wanted to add to this list, and this is in terms of education. So within my survey, it was quite prominent that most people got their knowledge from social media and news articles, with education being one of the sectors that lacked proper communication on this topic. This is especially prominent with many medical students utilising the features of large language models such as GPT-3.5 and GPT-4 while summarising medical readings or solving diagnostic cases. And it is extremely important that they understand the limitations of this technology to avoid fostering an over-reliance on AI, which many individuals, as seen in prior research, are concerned about when they're looking at healthcare professionals and the usage of AI.
A more minimal solution could involve educators directing students to areas where communication is successful, of which there are many that are overshadowed by articles with captivating titles and data. A more involved approach would be to have guest speakers discuss the usage of this technology in medicine, to allow for more person-to-person interaction and give students the opportunity to ask questions, with one of the most involved solutions being to integrate this information into the course to some degree, such as through class readings or projects. However, it's important to understand that none of these are fixed solutions that will 100% change the way that individuals see AI, and there is always more that needs to be done to effectively communicate these issues to the public. So with everything I've said, do you have any questions?
Raweeha Raza: What- In what ways can AI developers and medical professionals work together to ensure that AI implementation in healthcare is, um, transparent and ethical for patients?
Sania Khurshid: In terms of transparency, there needs to be something that discloses what's going on. So at Carbon Health, they have a written agreement that states up front what's going to happen and how they're going to record the patient's voice, and it allows patients to sign off on it so that they know exactly what's going to happen during their checkup. I think that this is a really good solution, and we need to implement it more in terms of general AI usage. So, beyond charting, when people are using AI to format anything within medicine, there always needs to be a disclosure, especially with developers who integrate this within healthcare systems such as Carbon Health or others.
Leena Alorbani: Yeah, that kind of transparency is really important.
Raweeha Raza: And yeah, that's really cool. I had no idea of that.
Leena Alorbani: I have a question.
Sania Khurshid: Yeah.
Leena Alorbani: Who do you think should be the person doing the communication about your topic?
Sania Khurshid: This is such a good question, just because of how badly it has been communicated in past articles. So, firstly, we need to integrate a communication expert who is able to brief individuals such as scientists, or people working on the technology, on the appropriate way to disclose this information to the public. There have already been many issues with individuals such as Kohane overstating the implications of AI within medicine, so there need to be communication experts on the sidelines alongside the people who are actually doing the talking. And that can be a lot of people: medical professionals using this technology, the developers themselves. It's all feasible.
Leena Alorbani: That makes sense.
Raweeha Raza: Also, what measures do you think should be in place to protect patient data privacy? Because I think that's one thing that patients would really be concerned about.
Sania Khurshid: And this is honestly a great question, because it spirals back to how it is being communicated. So right now there are features in place that protect data, especially at Carbon Health, just going back to this case study. Patients' information is charted in the same way as it would be when a doctor is doing it. It's all going to the same database, so privacy concerns are the same as they would be when a human is charting.
Raweeha Raza: Okay, that's fair.
Leena Alorbani: So I have a question. Your next steps really focused on education. Uh, do you think that there are strategies on the platforms that are already being used, like social media, or something like a podcast like I mentioned, that could also be used to expand this communication to a wider audience, not just an educational audience?
Sania Khurshid: Yeah, although I focused on education, it's very, um, significant that individuals are able to access this information on social media especially, since that's where most of the people in my survey got their information from. So we need to bring the conversation to these specific platforms so that more individuals have an understanding of what's currently going on with AI in medicine and how it will evolve.
Leena Alorbani: So it's the education alongside more social media.
Sania Khurshid: Yeah. Of course, um, education is a good aspect, especially since there's an increase in medical students using AI, but social media is also especially prominent because of how little is known about this topic on a day-to-day, person-to-person basis.
Raweeha Raza: I think with social media especially, there's a lot more room for misinformation. So I like that you mentioned that there's a need for experts to come on the platform and say something. Now how they come on is a different question because, um, you know, some, like, don't really simplify their scientific research and that just ends up confusing, uh, a lot of people.
Leena Alorbani: Yeah, definitely.
Sania Khurshid: And I think you can also agree with this in your topic that there needs to be, uh, communication experts guiding the conversation.
Raweeha Raza: For sure. Like I have an entire, um, section on this *laughs* that I'm about to go into. So I really like enjoyed this question.
Sania Khurshid: Yeah. Go on, tell us.
*Transition music consisting of a Lo-fi Hip Hop Mix*
Raweeha Raza: Okay, so you talked about the perceptions of AI use in the medical field, and my research very briefly touches upon that, but it's mostly geared towards AI's impact on job displacement. So I'm going to delve into the ever-evolving field of artificial intelligence and its impact on our jobs. We're going to explore the fascinating yet daunting topic of AI's influence on employment and the public's perception of job displacement. So the rise of AI has sparked intense discussions and concerns about its potential to replace human jobs, and surveys from past research have shown that a significant 65% of U.S. citizens believe that their jobs could be taken over by robots within the next 50 years. But why do we have such fear? Um, we're going to delve deep into the reasons why people are scared: where are they getting this information from, you know, who's doing the disseminating? How can we improve this so that people are getting the facts instead of unnecessary concern? And I'm going to quickly touch upon what past research and studies have said regarding this entire topic. I can link the research papers discussed in case you guys are interested in reading more about their findings. But basically, in past studies there is definitely an acknowledgement that AI is revolutionary and enhances productivity and efficiency across multiple industries. However, this very transformative power has also fueled anxieties about job security, and people often get their information from various sources. You know, many people are scared because of science fiction movies, which depict AI in a threatening light, emphasizing themes of loss of control and doom. And there's this fear that we're going to create something that'll start creating things itself. This is what generative AI is already doing, like ChatGPT, GPT-4, like you mentioned, albeit on a very small and not too intelligent scale… yet.
And then there also exists that notion that one day the robots are going to turn against us and kill us all for revenge or whatever. You can actually see this in movies, for example Ex Machina, The Matrix, and I, Robot. And it doesn't help that some technology celebrities like Bill Gates and Elon Musk have sounded alarms about potential dark futures for humanity. I actually went ahead and found some of what they've said, and Elon Musk, who is admired by countless individuals, has called AI more dangerous than nukes. And, like, you can see that there's quite a lot of this fear mongering. It's very important for what is said to be measured and responsible, especially because there are so many people that look up to these figures. So, um, the key question of my research is: how does the communication of AI advancements and their perceived impact on employment shape public fears? I think addressing these concerns is crucial for having a balanced understanding and ensuring informed decisions about AI adoption. Previous studies have emphasized the need for experts to engage more actively on social media, like we were just talking about, to demystify AI and counter the misinformation. However, this approach has its challenges, such as time constraints and the rapid spread of misinformation online especially; it's not so easy to build a large following. What studies also found was that, contrary to popular belief, AI's integration into the workforce isn't just about job displacement. It's also about transformation and creation. For instance, while AI may displace some jobs, it's also projected to create millions of new ones by 2025. This aspect often gets overshadowed in public discourse. And what was found, interestingly enough, is that AI and human skills can coexist and play to each other's strengths to achieve more.
It's not about, like, pushing one down so that there's only one option. It was found that AI's strengths lie in tasks requiring mechanical and analytical intelligence, which are also pretty repetitive, while humans were better at tasks that required intuition and empathy. This relationship is evident in fields like healthcare, which you both just talked about, where AI can help in diagnostics but cannot replace the human empathy and cultural understanding. Now, to navigate this AI revolution responsibly, past research has also mentioned some strategies that I'll go into later, such as education, um, you know, emphasising that people need more creative skills that are less likely to be taken over by AI, and also increasing human-AI collaboration just to get rid of that fear and have that starting point. So, to conclude the content of past research on this topic: it highlights the need for informed discussions and actions regarding AI's impact on jobs, and by bridging the gap between expert insights and public perceptions, we can embrace AI's benefits while safeguarding against its potential pitfalls. And now that we've briefly touched upon past research, let me introduce what my research was and what I found. So I'm going to start by breaking down the demographics and initial reactions of our respondents towards AI advancements. I won't include a lot of the numbers I found, but the link to my research paper can also be included in case you're interested in that. So among the 70 respondents surveyed, I found a diverse mix of young adults and older individuals. Interestingly, the majority expressed excitement and optimism about AI's potential, while a big group also had concerns about job displacement. And some participants even highlighted the need for more exposure to AI to understand its full implications.
So they are curious; it's not all "we're terrified, we don't want it."
Leena Alorbani: Yeah, I have a question. Um, so you mentioned that the source of information about this topic is often media that can sensationalise. Since older populations are generally less exposed to this kind of media and younger generations are more exposed, do you think that there is a disparity across ages in their understanding of the communication?
Raweeha Raza: I think so. I think age can definitely play a role in exposure to social media and, consequently, the sources of information accessed. In my demographics, most respondents were young adults between 18 and 25, and the other big chunk were 34 and older. And in my findings you could see a clear pattern: the younger individuals are more optimistic, leaning towards "oh, it's going to create jobs, it's going to be really nice," while the older people are like, "it's going to take away our jobs." They're a lot more pessimistic. But I do think that with social media, older people have begun to learn how to use it more, and they are also slowly changing their thinking towards it. Looking at a wider scale, though, older populations may be less familiar with social media platforms, and they might rely more on traditional media or personal networks for information, which could allow a lot of misinformation to spread. So I think this disparity in information sources can contribute to varying levels of understanding and perception regarding AI and misinformation.
Sania Khurshid: That's actually really interesting, because I would assume that individuals entering the workforce, so those 18-25, would be less optimistic, since they would see it as a lack of opportunities on their part.
Raweeha Raza: Yeah, um, it is interesting. I do think that's why they're leaning that way: they know that they still have time to educate themselves and go into a field that is not likely to be replaced by AI. Whereas older individuals are like, "we spent our entire lives doing this, why are we going to change it now?" And I think that might be a reason for it.
Sania Khurshid: Yeah, no, I agree. And connecting back to my topic, I also saw many terms like "revolutionary": "AI is going to be revolutionary," all of this stuff. And I was wondering, does that terminology, in your opinion, alter how people perceive AI and how it's going to be implemented? Because even in my research paper, I wrote "will this revolutionise…" or something.
Raweeha Raza: Okay, wow. I think that's a really good question that I did not even think of, because "revolutionise"... it does seem to be a term that will make people think, whoa, this is something groundbreaking, completely new, never heard of before, similar to what the industrial revolution was for the workforce. So I do think that using terminology like "revolution" will definitely impact what people think about AI. Which leads to more interesting questions, like: is that a good thing? Because, for now, I do think it will take a few years, maybe even decades, for AI to actually start having an impact on the workforce, and for us to start saying it now - I wonder if that's causing more fear amongst the general public.
Sania Khurshid: I agree. And something I found very interesting while researching is that someone compared this to elevators when they first became automated: people got really scared of being in an elevator alone when it was moving by itself. So it's kind of similar with AI, where we're scared that AI will have a significant impact, but then we'll get used to it.
Raweeha Raza: Oh, that is really interesting. Um, and even with elevators, like you can control it now. So there is some sense of control about where you're going.
Sania Khurshid: Yeah, and this can be applied to this field.
Raweeha Raza: For sure.
Leena Alorbani: I do have a question for both of you guys. Having done research on AI, do you think that there is a revolution coming?
Raweeha Raza: Um- You want to go first, Sania?
Sania Khurshid: *laughs* Okay.
Raweeha Raza: *laughs*
Sania Khurshid: So, of course, there's a lot being done right now, but we can't call it a revolution, since it's still bare bones. We're just getting started. We're just starting to see how AI can actually be implemented. Within healthcare specifically, this is relatively new; we're seeing just how far AI can go in reducing doctors' burden. And this is not very, uh, revolutionary in terms of what constitutes something new, something extravagant.
Raweeha Raza: Okay, I actually really agree with you. Because this is so new, we're not going to be seeing the effects for a long time, many years, maybe even decades. But I do think that a revolution is definitely happening. Maybe it is at its very lowest point right now, and it'll take a while before you actually start seeing the effects take place. I know recently they've started developing some software developer bots. There was this thing I've heard of called Devin AI, which did exactly what a software engineer would do, and that scared many software engineers, because they're thinking, wow, is what we created going to replace us? But I personally don't really think that's the case, based on what past research has said and what current research is saying. I think it's definitely going to take a while, and we're safe for now.
Sania Khurshid: And I could say the same about your topic as well, Leena, where it hasn't progressed to a point where we're integrating all of this knowledge on a day-to-day basis. We're still moving towards that point.
Leena Alorbani: Yeah, um, I think that for, like, all of these topics, there is change happening, but it's not- but it's still slow enough that we are able to adjust to it.
Sania Khurshid: Yeah. And we're able to comment on it, showcase how communication is failing, how it's succeeding, all of this good stuff.
Raweeha Raza: I mean, the fact that we're already having this discussion, this goes to show that we're, we'll be prepared for when it comes.
Leena Alorbani: Yeah.
Raweeha Raza: So now let's explore where people are getting their information about AI and how these discussions are shaping their reactions. What my research revealed was that discussions on AI were prevalent on platforms like social media, in educational institutions - so, you know, school, college, university - and on blogs and online forums. Surprisingly, mainstream media, documentaries and peer-reviewed articles were barely accessed. Peer-reviewed articles had the lowest percentage; practically no one read those. And this disparity raises questions about the reliability and accuracy of the information actually being circulated. Online sources like blogs and discussion boards can be so subjective, and you can't really be sure that what you're absorbing is accurate. So, given that these made up our sources, what varied perspectives did my respondents have regarding AI's potential impact on job displacement? Well, the majority were uncertain about job outcomes, and the younger participants actually leaned towards job creation, but to a limited extent. The most impacted industries would be customer service, computer science, media, and manufacturing, while creative professions and specialised fields like legal services were viewed as less threatened by AI disruption. Now, the role of the media in shaping public perception cannot be overstated, so let's explore how different media channels influence our understanding of AI and job displacement. Media channels such as social media, mainstream outlets, blogs, and movies were identified by my respondents as the key influencers shaping public perception. However, concerns were raised about biases, the rapid dissemination of information, and the portrayal of futuristic themes without balanced representation.
So, wrapping up our discussion, it's clear that effective communication about AI is essential for informed decision making. Here are some of my next steps to bridge the gap between expert insights and public perception; these are supposed to be realistic and actionable steps to address challenges posed by AI advancements and job displacement. So firstly, education and training initiatives have to prioritise dispelling misconceptions about AI's impact on job loss. It is so, so important to promote continuous learning and adaptability, especially in sectors vulnerable to automation. This includes ensuring balanced and accurate representation in the media to foster a deeper understanding of AI's implications for employment. We also severely need more experts online who can show the public the light that advancements can bring - knowledge is power, and power gets rid of unnecessary fear. Secondly, I think policy and regulation play a crucial role in safeguarding employment. Implementing laws to protect workers and advocating for ethical practices are essential steps, because you want to encourage AI to complement rather than replace human jobs. That's what's key. Along with providing government support for AI-related education and opportunities. However, it's also important to acknowledge that this isn't very realistic for some. There are a lot of challenges faced by individuals in various parts of the world who may struggle to afford AI education or to transition from traditional industries. So tailored laws and support systems should be implemented for them to ensure inclusivity and fairness for workers, especially those in sectors like agriculture who may face significant barriers to adaptation. We really just need support mechanisms, because this AI thing is so new to us. We can't expect everything to work smoothly just yet.
Support is crucial for workers facing job displacement due to AI advancements, so offering transition support like severance packages and career counselling can help individuals navigate their career paths effectively amidst the technological disruption. And once we have this support, and tell the public about it, they'll also start feeling like there's actually some regulation taking place and their fears will hopefully dissipate. Um, I also think promoting diversification and adaptation strategies, especially in sectors with more mechanical and analytical tasks, is so crucial. Equipping the workforce with necessary skills for the future job market and encouraging creative thinking and job creation is essential so that we stay resilient. So, like these next steps, they aim to address challenges posed by AI advancements, right, and ensure a more resilient and adaptive workforce. So, by promoting transparent and accessible information about AI, I think stakeholders can empower the general public to make informed decisions and adapt to the evolving technological landscape. So- so that basically wraps it up from my side. Um, do you guys have any questions?
Leena Alorbani: So, Raweeha, you briefly mentioned AI in the medical field. How much of the- of the impact of AI on medicine did you find in your topic? And does it relate in any way to the medical usage of AI communication that Sania spoke about?
Raweeha Raza: So regarding AI's impact on medicine, my research found significant involvement of AI technologies for things like diagnosis and treatment planning, similar to what Sania mentioned with the note taking. But there wasn't much about job displacement itself because we do need humans in the healthcare field. But what was common in our discussion was the need for accurate and transparent communication channels in healthcare AI adoption.
Sania Khurshid: Also, Raweeha, there are many cases where intrigue in a topic is preferred over the truth, where the legitimacy of the topic is sacrificed to make an article more intriguing. How effective are these methods in promoting misinformation?
Raweeha Raza: So there are concerns about how methods prioritising intrigue over truth can promote misinformation, especially in complex topics like AI. But I think there definitely needs to be a balance because you don't want to contribute to the dissemination of misinformation, but you also don't want to bore people to death, and I think there are ways to make this research more intriguing through simplification and models or examples, so people can truly understand what we're trying to convey while not losing too much of the accuracy.
Sania Khurshid: I agree with that completely. Do you believe that it is more important to educate influential public figures, like you mentioned earlier, on how to effectively communicate topics of AI usage, or to position communication agents that are well educated in terms of media literacy? And how feasible is it to implement either option?
Raweeha Raza: Okay, this is a good question. I think actually both educating influential public figures and positioning well educated communication agents are both important strategies. Because educating influencers can amplify accurate messaging and reach wider audiences. And at the same time, having communication agents well versed in media literacy can ensure that accurate information is effectively communicated across various platforms. So implementing both of these strategies requires collaboration among stakeholders and making use of educational programs and promoting ethical communication practices in media and public discourse. So while challenges exist, these efforts are definitely possible with coordinated efforts and, um, a commitment to promoting informed discussions about AI and its impact.
Sania Khurshid: Yeah, and I think what's important to highlight here is that when we're discussing this with influential public figures, we're also expanding this information to a wider audience. Right now there are very limited areas in which this information, especially about AI - I think you could agree - is being navigated. But when public figures have a role in this communication, it opens those doors.
Raweeha Raza: I agree, because when someone that you know really well, or someone you really believe in, says something, you're more likely to believe it. Nobody might really know the communication experts, but public figures can play a big role in changing perceptions. So I do think that they have a responsibility in society to make use of this role and actually spread accurate information.
Leena Alorbani: Yeah, I think that's actually something that makes communication of my topic - which is a lot more medical - difficult, because public figures don't generally communicate medical information and medical research. It's just a lot more complicated and not as much in the public view. I just thought that was interesting.
Sania Khurshid: And I fully agree with this. It is difficult for us to fully educate all of these individuals on such a variety of topics, but it should be done to some degree, and there should always be guidelines in place when we are opening communication to the public or to anyone.
Raweeha Raza: Oh, for sure, I agree. Because I think even with yours, with Alzheimer's - public figures speak about things that don't affect them and make a big deal out of them because they want to spread awareness. And so I think even with Alzheimer's, if we can get public figures to talk about it,
Leena Alorbani: Mhm.
Raweeha Raza: and, and make a big deal out of it, it will be effective in at least spreading, um, awareness. It doesn't really have to be a case of, oh, I'm not related to this topic whatsoever, so I'm just not going to say anything. We can, they should definitely use their platforms to an advantage to spread, um, awareness and information.
Leena Alorbani: Yeah.
Raweeha Raza: So now that we're nearing the end of our podcast, to wrap it up: what are the key things you guys think you learned? For me, the greatest thing I took from this project was the need for the general public, you know, the community, to actively be involved in scientific research. That's truly how humanity can advance. And to do this, we need to lessen the rapid spread of misinformation online.
Sania Khurshid: And I totally agree with that. What I found is that we need to educate people more on what's being done, especially when there is AI at play in medicine, or technology in general in your case.
Raweeha Raza: Mhm.
Sania Khurshid: Without proper communication, misinformation is prone to causing fear or distrust in the sectors that we have talked about, and I think you can agree with this, Raweeha.
Raweeha Raza: For sure.
Leena Alorbani: Yeah, so a big thing that I learned is really how connected research and public understanding of research are. For mine in particular, my biggest takeaway was that bridging the gap in academic research requires that we bridge the gap between scientific communication and public understanding.
Sania Khurshid: And I feel that we can all agree that communication is the biggest aspect that we're looking at right now. Without good communication, these topics are very prone to being distorted in the media or by word of mouth.
Raweeha Raza: Yeah, definitely. Well, thanks guys. This was a lot of fun.
Leena Alorbani: Yeah, I learned a lot.
Sania Khurshid: See you next week with more on Misinformation in the Information Age.
*Outro Music: a high-energy blend of electronic and metal music, featuring a rapid, fast-paced piano melody and synth elements in the background*