Episode 58

The Scoop with Nihal Krishan Part 1: ChatGPT

Nihal Krishan, Tech Reporter at FedScoop, joins Carolyn for a special two-part episode to talk about some of the hottest topics in government tech. In Part 1, Nihal gives some eye-opening insight on all things ChatGPT, including security, privacy, and national bans.

Episode Table of Contents

  • [0:25] Introducing Our Guest, Nihal Krishan
  • [7:39] We Need to Upskill
  • [15:45] How the U.S. Government Is Dealing With ChatGPT
  • [23:00] Stanford University Human-Centered Artificial Intelligence Index Report of 2023
  • Episode Links and Resources

Episode Links and Resources

Transcript
Carolyn Ford [:

Welcome to Tech Transforms, sponsored by Dynatrace. I'm Carolyn Ford. Each week, Mark Senell and I talk with top influencers to explore how the U.S. government is harnessing the power of technology to solve complex challenges and improve our lives. Hi, thanks for joining us on Tech Transforms. I'm Carolyn Ford. Today I get to welcome Nihal Krishan. He's a technology reporter at FedScoop. And you guys know I've mentioned many times I love talking to the government reporters. They have this bird's eye view of what's going on in government tech. I feel like they're really good at taking the massive reports that we see coming out of government and boiling them down for us, even helping the government stay accountable to these reports. And so today, Nihal and I are going to talk about ChatGPT. Which, man, is that the hot topic. And TikTok. So welcome to Tech Transforms, Nihal. How are you?

Nihal Krishan [:

Yeah, thanks so much for having me here today. I'm looking forward to this conversation. So just a little bit about me and my background to give your readers a sense of where I'm coming from and my skills. I grew up all over Asia. I was a global nomad. I grew up in South Korea, Saudi Arabia, India, and Singapore before coming to the States to study journalism and politics at American University and at Arizona State's Cronkite School. And then I've been in D.C. for the past eight years as a political reporter. I started off as a Washington correspondent with PBS Arizona. Then I was with Mother Jones, then I was with the Washington Examiner as their big tech reporter, and now most recently with FedScoop. And I do some reporting for CyberScoop as well. So I've been at the nexus of politics and policymaking for almost a decade now, and focused on tech issues in particular for the past four years or so.

Carolyn Ford [:

I'm really excited to talk about AI especially, and specifically I want to talk about ChatGPT. It's kind of taken the world by storm overnight. I mean, we've got some government entities like the National Science Foundation looking at use cases, maybe even adoption use cases. We have some countries like Italy who've just banned it outright. So my son heard me discussing this topic, and what you and I were going to talk about today, last week. He's a Gen Zer, and he's just like, Mom, do you use that? And I said, yeah, I kind of do. And my use case is I use it to help me brainstorm. It's like my AI assistant. So I love it to help me generate new catchy titles. I'll have it rewrite things for me. I use it in a pretty limited way, and I certainly see it as a tool in my toolbox, one where I still have to be a subject matter expert in what I'm feeding it. And I talked to him about that, trying to justify my use of it. And he's like, it's learning from you. I really took that to heart. And I want to know from you why you feel like there's such a divide: some agencies looking to adopt it, countries banning it, my son saying, don't use this thing, it's going to end us all. It kind of feels a little apocalyptic. Okay, so tell me, why is there such a division?

Nihal Krishan [:

Yeah, I think the first thing you have to start off with and reckon with is the fact that everyone has an opinion on this, because it is so incredibly user friendly. And that is the reason we're even talking about this at all right now. We've been talking about AI since the early 2010s, and in sci-fi for decades, but it's really coming to a head right now because it's just so scarily, eerily effective at answering questions and giving us what we want. That's at the core of generative AI, whether it's text or visuals or videos or otherwise. You ask it something, and OpenAI and others have so significantly come up the curve in the past two years that instead of just giving you a bunch of junk or gobbledygook, it actually gives you something that is helpful and concrete. Now, obviously it doesn't always do that. Its track record is nowhere close to 100% or 90%, or maybe not even 50%. But I think the major reason it is polarizing is that, on the one hand, it's helping people plan their vacations better. It's giving people companions if they're going through something difficult emotionally. It's of course helping coders and people who are creating some of the most cutting edge software. And so in all realms of humanity, from the social sciences to entrepreneurs to chefs and writers and poets, it's adding a new element of resources and capabilities to what they're trying to create and do. But on the other hand, it has a darker side, in that this incredibly innovative tool has come about from scraping the entire Internet the way that Google and others have done, and oftentimes it's scraping personal information that people haven't always consented to. So it's built off of some degree of non-consensual use of data and information. And so of course there are concerns around data privacy, about how the tool was built.
But also, when you're inputting information, let's say you're talking about something related to your personal life, or to your government documents, or something at work: how much is it retaining that information and building it into its model, so that other people then have access to the information you feed it? So there's that data privacy element, and then of course there's disinformation, which is.

Carolyn Ford [:

A huge issue, because it's taking everything. Not necessarily facts, right? It's taking everything. And to your point, it says right in its privacy policy that it does pull proprietary information, and that if you use it illegally, that's on you. But it's almost impossible to determine what pieces are proprietary and what aren't. Which is why I stick to a very narrow use case. I use it for stuff that I am an expert in, so that I can go back through and say, yeah, this is wrong, and nuance it myself. But another point my son made is that this is going to make everything, and I'm probably going to use this word wrong, homogeneous, right? The creativity is going to go. And I'm like, yeah, I can kind of see that, because it's so easy.

Nihal Krishan [:

Yeah, it's funny you mention that, actually. I recently hosted an event here in Washington called Art and AI. I brought in a few local artists who are talented and have used AI in some of their work, and then I brought in an Amazon developer who's created AI tools for years, and this is exactly what we were talking about. Many of the artists said they feel that their work is going to be even more valued, because, as your son said, there will be some homogeneity when it comes to lower levels of art, and maybe it'll make it much easier to create certain images and graphic art. But if you are a talented artist who paints or sculpts or does something else, what you do will be valued even more, because it's something that ChatGPT or other tools cannot make. And so I think it will just raise the bar in terms of what people value in a creative sense. And some people will likely lose their jobs, some people will be put out of work: those who create things that ChatGPT now can, certain videos, certain text, certain code. And so people will have to up their skills, they'll have to up their game in order to remain relevant. But I think we're still some time away from a significant number of jobs being impacted. Eventually, though, it will get good enough that that will happen. But yeah, I'm not as worried about the homogeneity that your son has talked about, because it's just like anything else, like the Internet. We thought that love letters would disappear, but people still write things like that. There are all sorts of creative things that still happen even though technology makes it super easy. And I take the optimistic view of it raising us and making us even better, at least in the long run.

Carolyn Ford [:

I agree. And like I said, the way that I use it actually frees me up to be more creative. Because wordsmithing something, there's a lot of creativity in that, don't get me wrong. But there's a lot of things that I write during the day, and I'm certainly not a writer like you, but there's a lot of pieces of communication that I have to create where good enough is good enough, and it doesn't require that level of creativity. So I can use ChatGPT to help me with that, and then use the time that it frees up to do higher value tasks, exactly the things that ChatGPT can't do. But I've tried, believe me, I've asked it to do stuff and it spits out garbage, and I'm like, no, I have to do this myself.

Nihal Krishan [:

Yeah, there's still many really important, critical tasks that human beings are going to have to do themselves. Yeah, exactly. I think even in the government, for example, one of the things that Sam Altman, who's the founder of OpenAI, said is that his favorite part of using ChatGPT is its summarization element. I think that is something for people: if you have hundreds of pages in a document or report, or maybe even just an article from FedScoop or CyberScoop that's long and you don't have the time to read the whole thing, it will summarize it for you. Now, obviously, the accuracy of it is still called into question. I've tried to ask it to summarize many things before, and it does not always get it right.

Carolyn Ford [:

It doesn't. And if you're not an expert, you'll miss that.

Nihal Krishan [:

Yeah, exactly. And as you very rightly pointed out, you do have to have some knowledge and expertise to know whether what you're reading is actually true or it's just bullshit. But I think very much eventually, probably within the next year or two, if not sooner, we will have really powerful summarization tools, and that will allow people in the government, for example, to take large amounts of information and say, okay, just tell me the top five or ten things. Give me a few quotes. And that then allows them to build upon it when they're building software, or they're trying to build policy, or other things. So it's not there yet, but soon we will be at that point. And I think, as you said, the way to view it is as a way to take certain tasks off your checklist that are difficult or maybe time consuming but more menial, and then get to focus that time and energy on things that are more creative and that maybe require collaboration. And I truly think that this is the kind of tool which is inevitable. Inevitability is such an important concept when it comes to AI tools. Either you get on board and you try to see ways in which it can help you personally and help your organization, or you're likely to get left behind in some way. And so it's better to understand it, even if it has problems and it could be considered an enemy, better to know its strengths and weaknesses than to try to ignore it altogether.

Carolyn Ford [:

Well, so what do you think about countries like Italy banning it? I mean, what does that even mean, that they've banned ChatGPT? Like, you can't get to it if you live in Italy?

Nihal Krishan [:

they are going to be getting:

Carolyn Ford [:

How do you foresee the U.S. government dealing with, or not dealing with, ChatGPT? Do you think they're going to come out with any kind of policies, or even lean into it and say, yeah, guys, start using this more?

Nihal Krishan [:

Yeah, definitely. So first of all, I would say the first approach of the government, I mean, I've spoken to folks at the VA, at NIST, in the White House, all across the board, and the first thing I hear is that there is a cautious sense of optimism and excitement over these tools and the effects they can have on the American people and public. But very quickly, and I think more importantly, they say: first, we need to build guardrails and safeguards around these tools before the government can consider using them in any significant fashion. There need to be clear red lines as to what these tools can and can't do. What sort of information are they sucking up? What are the possible negative effects or problematic outcomes that could occur? And so, unlike industry, the government is much more cautious. Your chief data officers and CIOs and CTOs are much more focused on: okay, first of all, what can you not do with this tool? What should we not be allowed to do? And then at some point in the future there will be a focus on how we can use this in interesting and exciting and innovative ways. But first, let's prevent the harm. Which is sort of the flip of what you see on the industry side. There it's first: how can people use this in an exciting way? How can it benefit people? How can they earn more money? How can it benefit their bottom line? And then there is an element of safeguards and safety. But that's a little bit secondary, oftentimes. Not with all AI tools, but much of the time the safeguards are a little bit of a secondary element, because if you don't get users on board in the first place, then who cares whether or not you have all these safeguards in?

Carolyn Ford [:

So with the guardrails, though, you mentioned a story that just dropped, I think on Friday, about AI and general government regulations. Talk to me more about that one. Why is it significant?

Nihal Krishan [:

Yeah, it is significant, because just last week on Wednesday, the Commerce Department, which runs the NTIA, the National Telecommunications and Information Administration, asked the public at large for comment on artificial intelligence regulations and rulemaking. And so basically, it's the very first step, an extremely early step, in the hope and ambition of eventually creating AI rules and regulations that are in the law for federal agencies. And so I think we're still likely months, if not years, away from creating real AI regulations within federal agencies. And there's no element of this in Congress right now. As sources have told me, the chances of AI legislation passing in Congress anytime soon are slim to none. But with the Commerce Department taking this step of asking for a request for comment, with the White House putting out an AI blueprint for rules at the end of last year, with NIST having its own framework for AI guidelines, you can see that the gears and the machinery of the federal government are starting to recognize that this is something that is coming fast and furious down the pipeline, and we have to prepare for its inevitable introduction into the government and the ways in which it's affecting people's lives. And so, yeah, these are serious steps being taken, but there's no clear plan of action when it comes to how the government, in a unified sense, is going to tackle AI and regulating it. Everyone is starting to take action and see what ways they can do so. But what many would like to see, as I hear from sources in the federal government and in the tech world, is hopefully soon a comprehensive approach to artificial intelligence policymaking, the way that we have with cybersecurity, for example, and zero trust: an overarching mandate and set of principles and goals that everybody is relatively clear on, even if everyone's at different stages in trying to achieve it.
And so I think we are quite far away from that comprehensive plan, but each agency trying to find ways to build its own safeguards and rules is a way of starting that process, and hopefully at some point we will have a more comprehensive national or federal strategy for AI.

Carolyn Ford [:

Well, and to my point earlier, we really do rely on you guys, the media, to keep the pressure on so we get to those definitive guidelines rather than just talking about it. Keep us accountable. And you said something interesting. Is this a common practice, to reach out to the public at large and say, what do you think about AI, in order to create policies and regulations? Is this the norm?

Nihal Krishan [:

This is absolutely the norm. Federal agencies across the board, if you look at the EPA and the Clean Water and Clean Air Acts, those go through, like, years of public comment sometimes, or they go through public comment for a significant period. And in many agencies, for a rule to be put in place and to be legally binding, rulemaking takes years, typically, and public comment is a significant part of it, and that typically takes months. So yeah, this is just a concrete step that the Commerce Department has taken, and many people are still sort of trying to figure out why it is that Commerce is leading on this, why they are the ones who've been the first to ask for a request for comment. I think it's just that Commerce has a lot of telecom authority with the NTIA, and of course the CHIPS Act is coming out of the Commerce Department. So they've been appropriated hundreds of billions of dollars for semiconductors and other science and tech. And so there is some sense of why it is that Commerce is taking the lead on this. But obviously you have folks at NIST that are working on AI elements, and the White House is as well. And so it's not entirely clear. So far, Commerce has just decided they're going to ask for a request for comment. Next week another agency could decide to create rulemaking and take requests for comment. So yeah, it's sort of a patchwork of ideas and movement right now, and still to be seen what long term focus or direction it takes.

Carolyn Ford [:

So the Stanford AI:

Nihal Krishan [:

Yeah, certainly. So just for your readers, in case they're searching for it, let's.

Carolyn Ford [:

Get them the right name.

Nihal Krishan [:

Intelligence Index Report of:

Carolyn Ford [:

Tell me just a little bit more about that. What do you mean, the impact on the environment? I have not thought about this.

Nihal Krishan [:

ni, which was a study done in:

Carolyn Ford [:

Thank you, Nihal, for taking the time to share your insight today. You certainly gave me some eye opening perspectives on AI. Listeners, stay tuned for Nihal's next episode, where we continue the conversation on another controversial application, TikTok. Please share and smash the like button, and we will talk to you next week on Tech Transforms. Thanks for joining Tech Transforms, sponsored by Dynatrace. For more Tech Transforms, follow us on LinkedIn, Twitter and Instagram.

About the Podcast

Tech Transformed
Tech Transforms has a new home, visit us here https://techtransforms.fireside.fm/

About your hosts


Carolyn Ford

Carolyn Ford is a passionate leader, doer, adventurer, guided by her father's philosophy: "leave everything and everyone better than you found them."
She brings over two decades of marketing experience to the intersection of technology, innovation, humanity, and the public good.

Carolyn Ford

Carolyn Ford is passionate about connecting with people to learn how the power of technology is impacting their lives and how they are using technology to shape the world. She has worked in high tech and federal-focused cybersecurity for more than 15 years. Prior to co-hosting Tech Transforms, Carolyn launched and hosted the award-winning podcast "To The Point Cybersecurity".