Practical Access Podcast

S10 E3: A.I. in Education: Opportunities and Challenges

December 08, 2023 Eric Imperiale Season 10 Episode 3

In this follow-up episode on Artificial Intelligence (AI), Drs. Lisa Dieker and Rebecca Hines explore its multifaceted role. The discussion is framed around a book by Reid Hoffman, written in collaboration with ChatGPT, focusing on the impact and potential of AI in enhancing human capabilities in educational settings.

Link to Reid Hoffman & ChatGPT Discussion:
https://www.youtube.com/watch?v=myWPwwj0THE

This episode highlights the need for educators and students to approach AI as a critical tool that can significantly enhance learning experiences and outcomes when used ethically and creatively. The hosts encourage listeners to engage with AI mindfully, focusing on developing critical thinking skills and using AI to supplement human intellect rather than replace it.


Key Points Discussed:

1. AI as an Educational Tool: The hosts debate the comparison of AI to past educational technologies such as calculators and the internet.

2. AI’s Potential in Problem-Solving: The discussion emphasizes AI as a tool for researchers and educators to solve critical problems by enabling creative thought. Lisa shares a story illustrating the transformative power of internet information, underscoring the need for critical examination of AI-generated content in tools like Khanmigo. Link: https://www.khanacademy.org/khan-labs

3. Critical Thinking and AI: The episode stresses the importance of developing critical thinking skills alongside AI literacy. 

4. Navigating AI's Limitations and Misinformation: Dr. Dieker discusses the 'hallucinations' of AI, including nonsensical responses, plausible but incorrect information, and AI's overreach in claiming capacities it doesn't possess. This leads to a conversation about the importance of using AI responsibly and ethically in classrooms.

5. Empowering Students with AI: The hosts advocate for introducing AI technology to students early, teaching them to seek answers and support their learning independently. They emphasize the role of AI in leveling the playing field, particularly for students from disadvantaged backgrounds.

6. AI and Assessments: The conversation concludes with the potential of AI in designing assessments that analyze and indicate students' critical thinking and creativity, moving beyond traditional fact-based testing.


Lisa Dieker: 
Welcome to Practical Access. I’m Lisa Dieker. 

Rebecca Hines: 
And I'm Rebecca Hines. Lisa, today we are going to be continuing our discussion about AI. What do we have?

Lisa Dieker: 
Yeah, so Becky, I thought I would do something kind of fun; I'm setting you up in a positive way. I have been reading, or just finished reading, a book called "Impromptu: Amplifying Our Humanity Through AI." What's fascinating to me is the book is by Reid Hoffman, who's a venture capitalist, author, and online entrepreneur, but he wrote it with ChatGPT. So, I found the book interesting to read because he would write and then he'd say, "And what does ChatGPT say?" And so, I've taken some clips from the book, and I'm going to read them to you and the listeners, and I'm kind of curious about your response, and then of course, I'll piggyback on whatever you might say. And I think I've picked some that are very practical. So, he writes about Mims, who wrote a column describing ChatGPT as merely another in a series of recent technologies that have altered education, much like how electronic calculators sped up complex calculations, Wikipedia displaced the printed encyclopedia, and online databases diminished the importance of a steel-trap memory for vast amounts of knowledge. Curious about your thoughts on that.

Rebecca Hines: 
Well, you know, I love improv. So, thanks for the format of this. But my thoughts initially, having not read the book: well, first of all, are we challenging that those innovations were not helpful? I'm not sure I understand the point there. I don't know if humanity's working memory needs to be monopolized by trying to retrieve facts from our long-term memory, storing and retrieving these things, when we could be using it in other ways that have more meaning. So, I guess I'm not a valuer of knowing a fact for the sake of knowing a fact in the first place, to be honest. I'm gonna give you a specific example, even about the calculator. I am not a math person, but I test out with a really high capacity for abstract mathematical thinking. So, any ability that I have in the area of technology, or in other areas that count on this abstraction, is going to be hindered if I'm having to do a computation to try out my abstract thinking. So, I'm not sure that AI is harming anything. I think it should be conceived as a great tool for researchers who are trying to improve conditions for mankind. So, I think that leaders need to be the ones who figure out how it's used and how it's used ethically. But I only see it as having potential to solve critical problems by allowing people to use their creative thought instead of computational thought.

Lisa Dieker: 
Yeah. And I'm going to kind of piggyback on the emergence of the Internet. You and I had the privilege of seeing that, but what it did is it really harvested what we knew. But now, a true story. My son and I got the privilege, gosh, he was 10, to do a keynote in Peru. And I'll never forget the woman came up and started kissing Josh. Josh was like, "Make her stop kissing." If you didn't know, he didn't really love people touching him. And so it was interesting, but she didn't know. She thought her child was going to die because they had Tourette's. She thought they would tic to death. I was like, "Wow." But that emergence of the Internet has empowered us all to get lots of information, sometimes misinformation. You Google what you think you have, and now you've got nine diseases. That's not healthy. But I think everything needs a guardrail, and I think AI is going to provide that. I want to give our listeners a really practical one to take a look at. Unfortunately, right now, it costs, and again, I know teachers don't have money, but it might be something to think about for those families who are trying to get their kids college-ready. It's $9 a month, and there is a way to ask for it for free. The tool is Khanmigo. I've always loved Khan Academy, but from a math background, it's a little too much of a talking head. Now, Khanmigo has been built with GPT-4 so that it asks questions. So, it's got that bot nature to it. If you say 6 * 5 is 36, it'll say, "Well, why do you think that?" instead of, "No, it's not," or "You're wrong," which is what a calculator does: it just gives you the answer. I think AI is going to help that brain evolve and create your own thinking because, as you said, you're not going to have to keep so much in your long-term memory, which we know is a problem for many of our wonderful friends and colleagues who might have traumatic brain injury, dyslexia, or dysgraphia, for whom recalling that information is hard.
And I think AI is going to help with that so they can create their own thinking instead of worrying about trying to retrieve those facts.

Rebecca Hines: 
Yeah, I think that retrieval is the key, and I think AI is a retrieval tool. So, I think that it's on us to think about the right questions to ask and the right problems to ask for support in solving. But I don't think that needs to replace the minds of humans. I think we just consider it a tool.

Lisa Dieker: 
Yeah. So, it's interesting, because then I kind of go to later in the book, where he says Mims agreed with another author and said that humans could thrive alongside AI if three things happen. One is that you've got to specialize in asking the best questions, which we have been talking about in the earlier podcast. The second one is learning insights or skills that are not available in the training data used by deep learning networks. So, again, gaining your own insights, taking six pieces of data and doing what the human brain does, which is synthesizing. And the third is turning those insights into actions. And I was kind of curious, because I took the negative side at the beginning, but I wanted to kind of go with: what do you think about that? Do you think that is true? Do you think there are other ways we might need to think about humans and AI living together in classrooms, for both students and teachers?

Rebecca Hines: 
Well, I think my response to the previous question is exactly that. We need to be the actors, we need to be the ones who act on the knowledge. In particular right now, I would argue that increasing human interactions is critical. So, I don't feel that we need more time sitting looking at a computer screen, asking it random questions for no reason. However, if I'm new to an area and I want to support persons with disabilities and I want to find projects to support persons with disabilities, these days I will go to ChatGPT and ask a specific question like, "What places in my community do this, this, and this?" And it's going to give me a nice little synopsis so I can then go and meet with my local United Cerebral Palsy, and then I can type in, "What are good service projects to support people with physical disabilities?" So now I can use it again to figure out what I can do to support people. Whether it's planning a fall festival, which some of my students just did, why wouldn't they use artificial intelligence to help them design things for the festival or to create flyers for the festival or any of those things? At the end of the day, I want them to go and hold a great inclusive fall festival that's better than they could have created because they have the energy and the ideas behind them, but it's still them in person.

Lisa Dieker: 
And it's funny that you use that analogy because yesterday, again, we were thinking about how we create better STEM opportunities on a college campus for people with a range of disabilities to help advance their skills in math, because we know that working with statistics and data is what's going to really be some of the jobs of the future. And so, my colleague put into ChatGPT, "What are the skills that are needed for jobs of the future?" And statistics and analysis of data were the top two. So again, for families, if you've got a kid: I still remember, I put in (this was a web search, but now think about what you could do today), "What are the jobs that pay the most money if somebody just has a high school diploma?" I was really surprised elevator repairman was a good one. And that was something my son was thinking about because he wasn't sure college was an option. So, as you said, there are so many ways to get information, and blending Google searches with ChatGPT and image-based kinds of things can really bring a new level. So, I have two more questions. This one I really pulled because I know it is near and dear to both of our hearts. It came from two human experts, Andreas Schleicher and Sir Michael Barber, who have worked with hundreds of thousands of schools internationally to determine what improves results at scale for poor children. They say the fastest-improving school systems use technology, along with all their other resources and assets, to get instruction into the hands of teachers and, through them, to students. And I just really loved that statement. And I wondered your thoughts, because I know we often see these early adoptions in our schools that have technology, but I think the pandemic gave everybody technology.
So, what are your thoughts about helping us lift up the lives of children like myself, a first-gen college student, and say: here are some things Lisa could do in her school, even with limited budgets?

Rebecca Hines: 
Well, I'm not positive this is gonna answer what you're asking me, but here are my thoughts based on what you've just said. It's all improvised, you guys. So, I did not know that question was coming. Here's my first thought. It always gets back to putting the tool in the hands of the student at an early age so they understand its capacity, because you cannot always reliably count on an adult in your life helping you solve problems. So, as a special ed teacher, I was just talking to some teachers here in Florida last week, and I wasn't talking about technology or AI, but I was talking about thinking, you know, with the end in mind. All the kids we're working with in our classrooms, they grow up. We can't control what's going to happen to them, but we can teach them to start thinking as personal problem solvers, and finding the fastest, easiest tool is the key. So, in this case, in terms of equity, if we don't teach kids, whether it's kids with disabilities or any level of student from any background, to seek answers to specific questions about supporting themselves and their own needs, then we are not giving them equal access. Just like you mentioned earlier about the Internet, there was a huge imbalance, and, you know, there still is. In the 90s, I went and taught a pre-K group at a recreation center in a really impoverished area how to use the computer, because it was the first time they had put their little hands on a computer mouse. And some kids have that at home, and they're on it every day. So, if I get to school and I don't have any skills in how to use technology in general, I'm already at a disadvantage. And I'm not catching up, because the kids who get to use the technology in schools are the ones who finish their work fastest. And the ones who finish their work fastest are the ones who know how to use the available tools.
So, a long, boring answer to your question, but I think introducing the simplest technology, and right now, things like chat bots, AI, those are actually the simplest tools now for a child to use him or herself. And we don't have to do all of it for them, but we have to introduce it at a very early age, in my opinion.

Lisa Dieker: 
Yeah, and so one of the things I love to see teachers do, and I recommend: I have that crazy Tagpacker site, Becky, with a couple of hundred websites. We can put that out for the listeners. And I always say to teachers, "Why are you looking through that site? Make it a homework assignment." Especially for our kids in poverty, there are ways out there to practice the ACT and the SAT. Because again, I didn't know you could take it twice. I didn't know you could get a tutor. I didn't know you could pay, because I couldn't pay twice. Those are the kinds of things I think these two individuals who have done a lot of work were saying: it's the agency of technology that is the game-changer for children of poverty, but it's the agency of the teacher putting it through to the student, not the teacher using it. And I think that's what you said so eloquently there. So, the last one I think will make you laugh. Since we like to laugh on our podcast, I want to talk about the four hallucinations that AI produces, which I thought was really interesting, and just get your thoughts on it. So, the first one he talks about is the nonsensical, where it just gives you stupid stuff. Like, you know, I love the Seeing AI app: you hold it up, it's made for people who are blind, and it tells you whether a person is happy or sad and their age. And our friend Charlie Hughes, a computer scientist, gets mad because it always says he looks grumpy, and he says it's just because his face is old and saggy. So, you know, again, those nonsensical things that technology does, but ChatGPT does it really well. Sometimes you're like, what the heck. And then there's the second thing that we've got to be looking for in these hallucinations. He said the first one's easy, then they get harder and harder.
One is plausible but incorrect, and I always think of that TV show, MythBusters, where they would always say, you know, this is plausible. And I think that's what we also have to do: have a guardrail and say, all right, that sounds right, but do I really believe it? And I think those are great discussions for students, to compare the textbook to that. The third one he mentions is responses where the machine learning seems to claim capacities it doesn't really have. Like, "I believe Becky feels this way." It takes on thoughts where you're like, you're a computer, you don't have that thought. And then the fourth one, I think, is the one that keeps me up at night, and I think it's why we have congressional hearings about this: deliberate and destructive hallucinations, such as accusations against a race, misleading information, or negative media. And I wonder, as a classroom teacher, practically, how do I guard against numbers two, three, and four: the plausible-but-wrong tone, saying something offensive to a child, or deliberate and destructive hallucinations? How do we make sure those are not part of our teaching? And that's my last question.

Rebecca Hines: 
Well, it's a great question. I think it's the one that we're all wondering about right now. I just said in my earlier answer that we need to teach kids from a young age how to use it, but we also have to pair that with a focused effort to increase and improve critical thinking. So in all of this, in all of it, it is about what people can do with the information they find. And I mentioned before also acting on what we find, but how do we first question and validate what we find? And we're not teaching that well. We've been in this, you know, 20-year cycle of just memorizing and hammering these tests, etc., when what we need now more than ever are people who can think critically and question things, because we don't want people to automatically take what they read on any chat site as fact. It's been a problem with the Internet in general, because people have always been able to find misinformation online, but now it's a question of what is information, what data is real, and whether people understand what goes behind a research study. We see this with doctoral scholars; they will cite research, but if you trace it back, you're like, did you even read the actual research study? Because that was not a valid study. So I don't think we can trust every answer. I'll give you an example. Since last year, since I first started talking and thinking about things like ChatGPT, you can go in and put in a question, ask it to write you a five-paragraph essay on any topic, and it's going to write it for you in seconds. And I told my kids in college about it. Probably not one month later, my daughter sent me an assignment she had been given at Boston College where they were told: enter this question into ChatGPT, get the response, and then annotate it with better information and cross out everything that is not factually true in that response.
So they're deliberately teaching people: yes, here's the tool, we know that you know how to use the tool, but can you validate what you find? So if you think about that, the challenge for teachers is to pare that way down, ask a simple question, and model for kids: wow, let's read the response, what do we know about this, let's fact-check this. So I think we have to start teaching those kinds of processes and tie them to critical thinking.

Lisa Dieker: 
And it's funny, because I really think if you think of our work in the field of special education, and you're a classroom teacher, that is the bottom line of what you're trying to do. You're not trying to make a kid with dyslexia a better reader just so they sound prettier; it's so they understand better. You're not getting kids to be better in math fluency just so they're better at telling you as a parent what 6 * 6 is, but so that they can use that math fluency to understand more difficult concepts in algebra, trigonometry, calculus, and so forth. And I think that's where we're at as I look at this; that's what I love. If we really say your job is to be the investigator, and I think we used to do that with textbooks, you know, we used to say, well, whose perception is that written from, who is the author of that book, who is the author of that textbook. And I think the same should be true for AI. If it authored it, it's a computer; it has no emotions, it has no feelings, and it's taken other people's work. Is that accurate? Do you believe in it? And I love that assignment. And I think our assignment should always be to argue why something is right, whether it's my own writing, whether it's from the textbook, whether it's ChatGPT, or whether it's my math answer. And I think that's where we are now. We have to get assessments that get us to that level, because that is the struggle we recognize in teachers: if the assessment doesn't allow me to show my critical thinking and creativity, then we're right back to testing facts. So, well, interesting book, interesting discussion. I'll let you give your thoughts there, and then I'll wrap us up.

Rebecca Hines: 
We both know that assessments, thanks to AI, can be designed to easily analyze, and at least give us an indication of, how students have constructed their responses from a critical-thinking standpoint. And I'm looking forward to an opportunity where kids can truly demonstrate what they know through a variety of means, because there are tools that can assess that.

Lisa Dieker: 
Well, we thank you for joining us. And if you have questions, please tweet them at Access Practical, or post them on our Facebook page. Thanks for joining us!