PBS North Carolina Specials
Discussion | Coded Bias Independent Lens Preview
3/2/2021 | 38m 26s | Video has Closed Captions
An engaging virtual discussion with local experts.
Following a preview of the upcoming Independent Lens film, Coded Bias, PBS North Carolina COO Shannon Henry moderates a virtual discussion with our panelists: Dr. Kemafor Ogan, North Carolina State University; Dr. Sarra Alqahtani, Wake Forest University; and Dr. Hai "Helen" Li, Duke University.
How to Watch PBS North Carolina Specials
PBS North Carolina Specials is available to stream on pbs.org and the free PBS App, available on iPhone, Apple TV, Android TV, Android smartphones, Amazon Fire TV, Amazon Fire Tablet, Roku, Samsung Smart TV, and Vizio.
More from This Collection
Discussion - A Town Called Victoria - Independent Lens
The filmmaker and former Victoria residents share their story. (46m 51s)
Discussion - Native America Season 2
Panelists discuss preserving the languages of Native American tribes. (39m 1s)
Sci NC executive producer and host, Frank Graff, chats about upcoming Season 6 of Sci NC. (26m 6s)
Discussion - Southern Storytellers
Author David Joy and others discuss storytelling and their new PBS series. (42m 13s)
Discussion - Mama Bears | Independent Lens
Producer and director Daresha Kyi discusses the film and LGBTQIA+ advocacy. (34m 41s)
Discussion - My Music with Rhiannon Giddens
Discussing the series with producers Will & Deni McIntyre and country artist Rissi Palmer. (39m 56s)
Discussion - Free Chol Soo Lee | Independent Lens
Local lawyers, professors and nonprofit leaders discuss wrongful convictions and reentry. (40m 44s)
Discussion - Stay Prayed Up, Reel South
The filmmakers discuss their journey with Mother Perry and The Branchettes. (45m 4s)
Discussion - Storming Caesars Palace | Independent Lens
Local professors and nonprofit leaders discuss welfare and the social safety net. (33m 2s)
Discussion - Fight the Power: How Hip Hop Changed the World
Local experts discuss the history of hip hop with PBS North Carolina. (59m 43s)
Discussion - Love in the Time of Fentanyl | Independent Lens
Local harm reductionists, therapists and others discuss the opioid crisis and more. (55m 44s)
Discussion | Independent Lens: Move Me
A dancer with blindness and disability advocates discuss adaptable arts programs. (38m 46s)
- Hello, and good evening.
I am Shannon Henry, Chief Operating Officer here at PBS North Carolina, formerly UNC-TV.
We are proud to broadcast the acclaimed PBS documentary series, "Independent Lens," each Monday night and even prouder to bring you a special preview screening of the film you just watched, "Coded Bias."
Thank you to our wonderful audience for joining us virtually.
Now you get to meet a very talented group of women, all computer science professors from local universities.
So without further ado, I would like to introduce you to Dr. Helen Li from Duke University, Dr. Kemafor Ogan from NC State University and Dr. Sarra Alqahtani from Wake Forest University.
I'll serve as host for tonight's conversation with our esteemed panelists.
And we'll begin with a question for Dr. Helen.
So Dr. Helen, early AI developers measured the intelligence of technology by its ability to play games such as chess.
Why might this definition of intelligence be limiting?
- As Shannon mentioned, many popular AI demonstrations were presented by playing games like chess, Jeopardy, board games and so on and so forth.
Basically, why games?
This is because we know these games.
We know how easy or hard they are, and then we can use our own understanding and experience to judge how intelligent the AI systems are compared with their human counterparts.
So what are the common features of these games?
You might say that a game is a single task with well-defined rules.
It usually involves a small number of participants, like two players for chess or a few in Jeopardy.
There's an explicit objective, or what we call a reward hypothesis.
So we know the criteria for winning, and everyone, including the AI systems, wants to win.
So technically, it is not intelligence in the generalized form.
It is a specific example of decision-making problems.
And when we're talking about these games, the major difference among them is the complexity, meaning how hard they are to play.
So what was actually measured through the years is not general AI, but a kind of computational intelligence.
As you can see, advanced algorithms and faster computers can support more functionality and more complicated computations, which makes AI systems successful in more challenging games.
So Shannon, back to your question: the limitation of defining intelligence through games is that it only measures computational intelligence, not other forms.
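To make this point concrete, here is a minimal sketch in Python of what "intelligence as game playing" looks like in code: an exhaustive minimax-style search over a tiny game of Nim (take 1-3 stones; whoever takes the last stone wins). The game and its rules are my own illustrative choices, not anything from the film or the panel; the point is simply that a single task with well-defined rules and an explicit winning condition can be solved by brute computation.

```python
# A tiny, well-defined game solved by exhaustive search: this measures computational
# intelligence on one narrow task, not intelligence in any general sense.

def best_move(stones, take_options=(1, 2, 3)):
    """Return (move, can_force_win) for the player to move in a simple Nim variant:
    players alternately take 1-3 stones, and whoever takes the last stone wins."""
    if stones == 0:
        return None, False            # no stones left: the player to move has already lost
    for take in take_options:
        if take <= stones:
            _, opponent_can_win = best_move(stones - take, take_options)
            if not opponent_can_win:  # this move leaves the opponent in a losing position
                return take, True
    return min(take_options), False   # every legal move loses against perfect play

if __name__ == "__main__":
    for pile in range(1, 9):
        move, winning = best_move(pile)
        print(f"pile of {pile}: take {move}, forced win for mover: {winning}")
```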
- Thank you, Dr. Helen.
So turning to Dr. Sarra.
Studies found racial bias in algorithms used in courts for sentencing and in hospitals for healthcare recommendations, even though the AI, excuse me, did not factor in race data.
How did this happen and what is causing the biased results?
- In my opinion, we have biased or racist models for two reasons.
Either the researchers or the developers of the models start with a wrong hypothesis.
Like we saw in the movie, when they built the sickness model to measure patients' level of sickness for hospital admission, they used billing data.
So they assume patients who spend more money on their health are sicker than other people.
And that's a wrong hypothesis to start with.
There are sick people who don't have health insurance.
So that's one part.
The second part is, AI models actually build hidden correlations.
And that's why we call AI models "black box" because we are still working on understanding why our models make those kinds of decisions.
So even if you remove race, the models themselves will still build racial correlations.
That's for one part.
The other part is that in some systems, like medical systems, you really do need to consider race.
So removing race is not the solution.
The solution is to understand and explain our models, and that's what we call explainable AI.
That's the direction for research nowadays, is to understand why our models behave in that way.
And if we can pinpoint the hidden correlations, why the models correlate certain data together, then we can hopefully systematically remove those kinds of racial correlations.
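As a rough illustration of these hidden correlations, here is a minimal synthetic sketch in Python (my own example with made-up data and scikit-learn, not the models audited in the film). Race is never given to the model, yet a correlated proxy feature lets the model reproduce the disparity baked into the historical labels.

```python
# Hidden correlations: the protected attribute is excluded, but a proxy (a made-up
# ZIP-code-like feature) carries the group signal, so predictions stay skewed by group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                     # protected attribute (never a feature)
zip_feature = group + rng.normal(0, 0.3, n)       # proxy strongly correlated with group
income = rng.normal(50 + 10 * (1 - group), 5, n)  # historical disparity baked into data
label = (income + rng.normal(0, 5, n) > 55).astype(int)  # biased historical outcome

X = np.column_stack([zip_feature, rng.normal(size=n)])   # race itself is excluded
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: positive prediction rate = {pred[group == g].mean():.2f}")
# The gap between the two rates shows the model re-created the group signal via the proxy.
```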
- Thank you.
Dr. Kemafor, besides the potential harms caused by bias, are there other vulnerabilities of AI and technology that pose risk of harm?
And if so, what kinds of vulnerabilities and risks?
- Yes.
So, bias typically implies that there's a disconnect between the way distributions are represented in data, in models, and in reality.
In those cases, it's typically the underrepresented who bear the risk of harm, but there are other kinds of problems, and I'll give some examples.
So I would call this maybe data errors.
So two people live at the same address, or at least are associated with the same address.
One person has a high credit risk and ML algorithms have sort of learned that there's a correlation between behaviors of people that live together.
So basically birds of the same feather flock together.
Then, using such an algorithm, one could impute creditworthiness risk from one person to another just because of their residential association, and that poses an ethics question.
Should we do that?
And this co-location of addresses may be the result of a mistake, and this actually happened to an Oxford University computer science professor.
So there was an anecdote there, and in that case, he got an unfavorable credit application response.
And it turned out he had just moved into his new home, and the former occupant, who was a tenant, was a bad apple, so to speak.
And it does take time for address updates to be propagated everywhere, so very likely what happened was that the address was still linked to both of them, and he inherited the bad credit risk from the other person.
So, that's a problem and that doesn't have anything to necessarily do with bias.
Also, the other kind of thing is what I call missing context.
So let's say an asthmatic patient comes in with pneumonia.
Typically that combination is really bothersome; it's a high-risk situation.
They typically will end up in the ICU, but over time doctors have done well to figure out how to manage them, and the outcomes are good.
So now you have algorithms that are going to learn a correlation between those symptoms and good outcomes, right?
And so the next time such a patient comes in, if they're using automatic decision making to triage, it could decide, "Oh, this is a low risk patient," which misses the context that the low risk, or the good outcome, is because they had good care in the ICU.
And another example, this time of missing context in language: a Palestinian man had posted something, I think on social media, and the phrase meant "good morning" in Arabic.
But it was translated as "attack them" in Hebrew, and he was arrested.
So the context that was missing was the language context.
Finally, the other thing to worry about is generative technologies.
We have technologies that can help generate text, images, videos that are not real but they're very credible looking.
And so we're going to be blurring the line between reality and fiction.
It could be weaponized.
And this is very concerning.
So these are some other examples of potential harms that don't necessarily have to do with bias.
- Thank you, Doctor.
Turning back to Dr. Helen, what other forms of intelligence are an important measure for technology?
- When we're talking about understanding and essentially benchmarking intelligence, the best way perhaps is to think about what human beings can do and what exactly we want AI systems to offer us.
So many other important forms of intelligence immediately pop out, and I can give one particular example here.
Say a person here, right?
We can think, we can speak, we can hear, we can feel, we can move, we can do many, many things.
And essentially we observe information from different resources in different formats.
And similarly, it's very important for the AI systems to process and analyze data in various modalities.
And people are actually putting a lot of effort into enabling this capability.
For example, we can talk about self-driving cars, right?
Self-driving cars, besides navigating, in fact have to create a three-dimensional map of their surroundings based on a lot of different sensors.
Okay.
So I'm not good at driving.
I only know cars through their algorithms and sensors.
Radar sensors, for example, monitor the position of nearby vehicles, and video cameras observe traffic lights and signs.
LIDAR measures distances and detects the road edge.
Ultrasonic sensors then help with parking the car, and so on and so forth.
So then for AI systems in self-driving cars, besides the normal operations, it's particularly important to respond in special situations, like dangerous situations or situations that were not well covered by previous training data.
So this is not directly related to biased code, but it's very similar: situations whose data wasn't fed into the machine learning system and so were not predicted.
You may have heard that there have been some accidents involving self-driving cars, and essentially they all come from situations that were not predicted, that were not actually input into the systems.
So then this will be very important.
Another thing is driving cars in different environments, which have different conditions.
We can imagine that driving a car in Raleigh could be very different from driving in Hong Kong, or highways versus urban environments.
So how do you respond to that?
Training your systems to predict those situations, especially dangerous ones, would be important.
So, this is just one particular situation.
There are many, many more measures of intelligence, like reasoning, like creativity, like cognitive activities.
So when we're talking about those measurements, what is very important is that we link them to the specific tasks, requirements and conditions that we are facing.
And right now people also tend to define and standardize these measurements, for example, for a self-driving car system.
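One simple way to think about "situations not captured by previous data" is out-of-distribution detection: score how far a new sensor reading sits from the training distribution and fall back to a safe behavior when it is too far. The sketch below is my own toy illustration in Python, with made-up feature vectors and a hard-coded threshold, not how any production driving stack actually works.

```python
# Flagging inputs the system was never trained on, via distance to the training data.
import numpy as np

rng = np.random.default_rng(42)

# Pretend these are fused sensor features seen during training (e.g., daytime highway driving).
train_features = rng.normal(loc=0.0, scale=1.0, size=(5000, 4))
mean = train_features.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train_features, rowvar=False))

def mahalanobis(x):
    """Distance of a new feature vector from the training distribution."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

threshold = 4.0  # would be chosen from held-out data in practice; hard-coded for the sketch

familiar = rng.normal(0.0, 1.0, size=4)        # looks like the training conditions
novel = np.array([6.0, -5.5, 7.0, 6.5])        # e.g., an unusual weather/sensor combination

for name, x in [("familiar", familiar), ("novel", novel)]:
    score = mahalanobis(x)
    action = "drive autonomously" if score < threshold else "fall back to safe behavior"
    print(f"{name}: distance={score:.2f} -> {action}")
```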
- Thank you, Dr. Helen.
Dr. Sarra, Joyce says in the film, "Because of the power of these tools, left unregulated, there's really no kind of recourse if they're abused.
We need laws."
Do you agree that AI should be regulated and why or why not?
- That's a very controversial topic.
There are tons of studies about should we regulate AI or not?
So how I see it, AI is still at its early stages.
And I think if we regulate AI without understanding how the AI works, that would just kill it, essentially.
So how I see it, I think we need to support researchers.
The regulations should start from our labs.
And I think the government is going in that direction by creating a robust intelligence program at the NSF, the National Science Foundation.
So that's to study and make AI models more robust.
So I'll give you an example, like self-driving cars or autonomous systems in general.
In that field of research, we have this concept of certification, robustness certification, safety certification.
And then in order to publish a paper and say, "This algorithm is working," the algorithm should pass that kind of certification.
So we use mathematical methods to prove, to guarantee, the safety of the algorithms.
So we need that kind of certification for our AI.
And actually I just read a paper two days ago about how to certify fairness.
So that's the direction to go.
So to answer your question I don't think we need to regulate AI models.
We need to regulate how to collect data.
We need to get permission from users to preserve their privacy, and that has been done in the European Union and in California as well.
So that part we could really regulate, ask people for their consent to use their data.
But for regulating the technology itself, I feel like that's going to kill AI.
The way, as I said, is to fund researchers to verify their technology, their models, their research.
That's how I see it.
But again, it's a very sensitive topic and controversial topic.
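To give a flavor of what auditing or certifying fairness involves, here is a minimal Python sketch that measures two common group-fairness gaps (demographic parity and equal opportunity) on held-out predictions. The data and the deliberately skewed "model" are synthetic, and this audit metric is my own illustrative choice, not the formal certification methods or the specific paper Dr. Alqahtani mentions.

```python
# Measuring group-fairness gaps on held-out predictions.
import numpy as np

def fairness_gaps(y_true, y_pred, group):
    """Return (demographic-parity gap, equal-opportunity gap) between groups 0 and 1."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates, tprs = [], []
    for g in (0, 1):
        mask = group == g
        rates.append(y_pred[mask].mean())                               # P(pred=1 | group=g)
        pos = mask & (y_true == 1)
        tprs.append(y_pred[pos].mean() if pos.any() else float("nan"))  # true positive rate per group
    return abs(rates[0] - rates[1]), abs(tprs[0] - tprs[1])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    group = rng.integers(0, 2, 1000)
    y_true = rng.integers(0, 2, 1000)
    # A deliberately skewed "model" that favors group 0:
    y_pred = ((y_true == 1) & ((group == 0) | (rng.random(1000) < 0.5))).astype(int)
    dp_gap, eo_gap = fairness_gaps(y_true, y_pred, group)
    print(f"demographic parity gap: {dp_gap:.2f}, equal opportunity gap: {eo_gap:.2f}")
```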
- May I chime in just for a quick comment?
- Yeah.
- Yes, of course.
- Sure.
Actually, I just read a news story from The New York Times yesterday about how Massachusetts managed to write rules on facial recognition.
So just to add on to Sarra's comments, perhaps we need to regulate how to use AI but not the AI technology itself.
So the news is talking about how Massachusetts will possibly require police to obtain a judge's permission before running a facial recognition search.
So it's again, who is using the AI systems and how to use it?
Not the technology itself.
- I agree.
- Thank you, Dr. Helen.
Dr. Kemafor, how is the tech industry doing when it comes to promoting inclusion in the workforce?
And what more needs to be done to ensure that everyone has a fair opportunity to work in tech?
- Okay.
So this is a good news, mostly bad news situation.
[laughs] So we'll start with the bad news, right?
So the statistics don't look very good, at least for minorities: Blacks, Hispanics and Native Americans, collectively.
If you're looking at just Silicon Valley, we're talking about 5%.
Overall, it's a little bit better.
It's about 16%.
And that's because there are places like Atlanta that do a much better job.
And it's possible that Silicon Valley has this issue because they focus their hiring from elite universities, which themselves do not have significant representation.
So that just propagates bias from education into the labor sector.
And as bothersome as all that is, I think what is much more sobering is what the indicators of the future suggest.
So we have even fewer minorities included at the decision-making table, because those are the people who could maybe start to change the direction of things.
But if they're not included, and I don't mean just hired, because hiring is one thing, they should be at the decision table.
Then we don't have a lot of optimism there.
And also attrition is a big problem.
So you can hire them and check off the boxes but then a lot of them don't stay.
We have to figure out why those hires didn't become beneficial for minorities.
And then you look at, okay, another segment would be next-generation employers like startups.
There are very few minority startups.
And the VC capital for minorities is about 1%, 1% funding for minority startups.
So we're not going to be getting much from there, and those would at least be places that might hire.
And then what was interesting was looking at an employee survey that asked employees to express how they felt about what their companies were doing about diversity and inclusion.
Roughly half of the respondents were people of color and let me look at numbers here.
So what was interesting was about 15% of the respondents felt the industry was doing too much about equality, which was interesting.
And then 18% said they were doing enough.
So right there you have about a third, 33%, who don't feel like anything needs to be done.
And so they're not motivated to make any extra effort, right?
So we have to rely on the other two thirds.
So that's all the bad news.
But then the good news is the conversations that are continuing.
We have things like this.
And particularly at the educational institutions, in my department, for example, it's something we're very passionate about and we talk about.
So they're continuing and they're broadening.
And so maybe, at least, change begins with a conversation.
So hopefully there is hope.
And then also there was something really cool that I ran into recently, a company called VR Perspectives.
They use virtual reality to create immersive experiences so that people can feel the impact of bias.
So literally put people in other people's shoes using virtual reality.
And there are some Fortune 100 companies, including Facebook and so on, who are using this.
So there are some interesting things helping.
That's a little promising.
- Can I add one thing?
- Yes.
- I think also inclusion and diversity should also start from our classrooms.
And I feel like Wake Forest, we do that.
We encourage, "Let's try to recruit students from minorities," and that's a good sign, especially in a field like CS.
To be honest, in my department I didn't see people of color.
So we are working on that, recognizing the problem, and it motivates us as faculty to work and explore and discuss with our students.
I think that's the first step toward moving our students into the industry.
And we need to move beyond tokenism in big companies.
Like the incident at Google: they have this division of Black researchers, but then they fired a Black researcher because she wrote in one of her papers about racist natural language processing models.
More tokenism.
- Thank you, Dr. Sarra.
We're going to turn over to the OVEE chat to answer questions from the audience.
Now, these questions, unless I call your name specifically, they are directed at all.
Or to all, excuse me.
So first question, who is responsible for inputting the information in for the algorithms?
If the result is biased, isn't it the fault of the computer experts who put in the data in the first place?
Let me know if I need to repeat that.
[laughing] Anyone feel free to chime in.
Would you like me to repeat the question?
- Well, I-- - I think I got the question.
So Helen, I wouldn't want to go, I think-- - Oh, that's okay.
Well, if you've worked in companies before, you may realize that normally your boss asks you to do things and defines the task and the specifications.
Engineering is basically taking what we have and trying to accomplish those tasks.
So whoever defines those tasks, rather than whoever is working explicitly on the AI system technology, perhaps should take the lead in trying to change their mind, their attitude and how to make it work.
[laughs] - I guess the other point is, when you talk about unconscious bias it means it's stuff that you're not conscious about.
Right?
So there are blind spots.
You don't know what you don't know and that's really the problem.
Sometimes you need other people to help shine a light on it, and that's why teams matter; I've seen some companies that are developing frameworks for asking questions about your models, about the attributes you pick, and so on.
And that helps you take a systematic approach to doing this.
Because sometimes it's unintentional and you really don't know what you don't know.
- Yeah.
The other point, actually I'm teaching this semester a class about trustworthy AI.
And in the first lecture, one of the students asked me, "I don't think it's the fault of the developers or programmers.
I think if the data itself is racist, is biased, and we collect data from the real world, so how can we prevent that bias?"
Right?
So if the system already has some biases in it, how can we prevent our model from learning them?
And I think that's, I like this quote from the movie, "We want our models to be ethical more than mathematically correct."
So we don't want to reinforce and recycle the racism, discrimination, from our daily life into our models.
So I think we need to try to break the correlation, even if the data itself is biased.
- Can I talk in a more particular way, from an engineer's perspective? Clearly we have input, we process it, and we receive the results and judge the correctness or incorrectness of those results.
It's a kind of feedback that helps us improve the system.
So without any inputs or any results, in fact, we don't know how the system is going to work and what is eventually going to result.
So from this perspective, I think it's already very good for us to realize that an AI system is not accurate.
It's not accurate not because of the mathematical model, but because of the data collection and the ways of processing data.
So in this, intentionality is already very important.
So that helps us to say, "Okay, we should be more careful and we need more iterations, more evaluations during this process."
- Thank you.
So I have another question here.
The film focused on the negative results and bias in AI.
Can any of you name a few positives that have resulted, or is that list too long?
[laughs] - Every automated task in our life is based on AI, right?
The thing that I like about AI, personally, is when you try to deposit your check.
Just take a picture of your check and then you're done.
You don't need to go to the bank.
So that's AI, that's an NLP model, natural language processing, reading the writing and processing the signature.
So that's one application.
Also, using AI or machine learning to predict cancer from early stages by processing the x-rays.
For environmental issues, we use AI to study remote sensing imagery and detect deforestation in different areas just with a click.
So of course there are positives, and we can't actually limit the list.
But we also care about making these models robust and fair.
- Thank you, Dr. Sarra.
Next question, how can we bring this topic to the general public?
Is the bias that widespread that there's a need for social activism to correct this?
- Well, I think this is an excellent, at least what we're doing now is an excellent exercise.
I think one of the things that even we as researchers, at least I, wasn't aware of was how pervasive this is, and the impact it is having on people's daily lives.
And I think that's the problem.
If you don't know, for the example I gave about the Oxford University computer science professor, he found out only because he's empowered, he's a computer scientist, he knew.
He chased down the credit card company.
And then they referred him to the credit rating agency in the UK.
And they told him, "Well, we have this superior machine learning model.
That was the outcome."
And he said, "Well, I demand an explanation," because at least in the UK and Europe, they have some regulations about what obligations are if you're using personal data.
So he waited two weeks and they got back to him and they said, "We can't really explain it very well but we think it has something to do with where you live."
And so he then started to put two and two together because he knew, he had found out about the person who had lived in the house before him.
So I guess what I'm saying is, if he didn't do all that, he wouldn't know.
He would just get this funny credit application response.
And that would be that.
So now one wonders: you know, when you're on Facebook, the ads and so on, those things are AI.
But one really wonders.
I was very surprised about the teachers, the evaluation of teachers using these algorithms; those things were really surprising to me.
I didn't think anybody would rely on that.
So that is concerning, actually.
- Next question.
Is the court in Pennsylvania still using the program that determines sentencing and parole?
I don't know that, you may not know the answer to this question.
[laughs] - Didn't they mention it in the movie?
They are still using it.
- Yeah.
This is more like a fact.
[laughs] - I'll move on to the next question.
What can be done about this?
Who do we speak to about the fact that they're using this program to determine sentencing and parole?
- Social activism, people need to speak up.
You don't need to wait until you come into that situation yourself, right?
- Yeah.
I guess if people do that. I was watching the chat, and a lot of people were just as surprised about the extent of this.
So I think this was really valuable.
I think everybody goes home with open eyes, and in the classroom, I think we have a duty to at least sensitize the students to the risks involved and the things they should be thinking about.
- Exactly.
We need to discuss with our students, "Should you implement this or not?"
"Think about people."
And I usually say to my students, "Maybe sometimes you don't need to listen to your boss."
- Okay.
So I will ask my final question and it will be directed to all.
So what is your vision?
And we'll start with you, Dr. Li.
What is your vision for AI development in the future and how would you like to see the technology evolve?
- Okay.
I might fall into a very different category than the others.
I'm working in this area, and definitely I wouldn't like people to ban AI systems throughout campuses.
Essentially what I really want is to build AI systems to help us, to make our life easier and more comfortable.
So I guess Sarra and Kemafor can talk more about the algorithm side of those.
What I wanted to say here is that the computing systems and hardware are also important.
So what I mean is, right now our AI algorithms really require clusters of GPUs and huge computing power to drive them.
It's okay for Google, Facebook, to run this.
It will be very difficult for a human being, a person, to attain the same intelligence level without the support of that hardware.
So I think by dedicating work to the hardware systems and trying to make them very small, very efficient and personalized, they will be better for you to use.
And I do think this will be very important.
And eventually I envision human-machine interfaces, like watches or more integrated devices, that will extend the capabilities of the human body.
And this is what I envision and hope that can come true in the future.
- I just want to add a comment.
So yes, this is a very excellent point.
And that's one of AI's side effects, or rather one of its disadvantages.
We have to consume huge amounts of power in order to train one model.
It can take more than seven days, and that eventually hurts the environment.
And there is some research going on trying to reduce the training time to save the environment.
So I will answer your question by, AI is good.
AI is not bad but we want it to be better and better and more robust.
So I think that's good.
People start talking about it.
We have different fields, like fairness, accountability and transparency in AI, which has its own conference every year for that kind of research.
We have Explainable AI.
We have Ethical AI, we have AI for Good.
So people are starting to pay attention to it.
That makes me hopeful.
As I said, NSF supports robust AI, secure AI.
So I'm hopeful.
We are hopefully headed in the right direction to have safe and fair AI.
- I agree.
I absolutely think that there's certainly a lot of value to it.
I think what we all are arguing for is a fuller implementation of intelligence.
So with intelligence, there's the reasoning that you and I do beyond pattern recognition and classification; there are other kinds of reasoning that humans do.
And I think we should capture as many of these as possible and have integrated systems.
So for example, we do logical inferences.
We do deductions from facts and rules in a domain.
That allows us to bring in domain knowledge and expertise, and it allows us to bring in common sense, and so on and so forth.
For example, this is another example I heard about.
You and I could see a picture of a face on a bus, in an ad, and I can tell that it's a face.
It's a picture of another human being.
And I wouldn't have to have seen a million buses to know this.
But it turned out that in China, and I think Helen's laughing because maybe she knows this story, a very popular artist or businesswoman was, I think, on an ad on a bus.
And she got a jaywalking ticket because the bus was in the crosswalk and, yes.
So basically facial recognition recognized her face but failed to recognize that it wasn't an actual person in the crosswalk.
You can't have a human on the side of a bus; a human needs a body and legs and everything to be in a crosswalk.
These are things that you and I do, and we don't have to see a million buses to do it.
So other kinds of reasoning, and Sarra had highlighted some of them that are there.
I think there are projects, like neuro-symbolic reasoning, which are trying to connect connectionist techniques like neural networks with symbolic reasoning of the kind that I'm talking about.
So I think that's where we need to be.
And if we understand that this will never get us perfect deductions, then we have to think about what we have to do with the social and cultural education frameworks, and with accountability and legal frameworks, to encourage and, if need be, force people [laughs] to behave responsibly.
I think it has to be twofold.
Technology on one side, and responsibility on the other.
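As a toy illustration of this neuro-symbolic idea, here is a short Python sketch of my own (the detection fields, labels and thresholds are invented, not from any real system): a symbolic, common-sense rule is applied on top of a pattern recognizer's raw output, so that a face printed on a bus advertisement is never treated as a jaywalking pedestrian.

```python
# A symbolic post-check layered over (hypothetical) neural detections.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str               # what the neural model thinks it sees
    confidence: float
    inside_ad_region: bool   # produced by another detector that finds billboards/bus ads

def is_jaywalking_pedestrian(det: Detection) -> bool:
    """Common-sense rule: a confident 'person' detection inside an ad region is not a pedestrian."""
    if det.label != "person" or det.confidence < 0.8:
        return False
    if det.inside_ad_region:
        return False
    return True

detections = [
    Detection("person", 0.97, inside_ad_region=True),   # face on a bus ad
    Detection("person", 0.91, inside_ad_region=False),  # actual pedestrian
]
for d in detections:
    print(d, "->", "ticket candidate" if is_jaywalking_pedestrian(d) else "ignore")
```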
- I'm so sorry [laughing] - Technology.
- Awkward pause.
This event has been an opportunity to amplify all of your voices and to hear from experts in the field.
And so I thank you for that.
And your passion comes through.
And so thank you so much to each of you for joining us tonight.
A special thank you to our event partners, the RiverRun International Film Festival and the State Library of North Carolina.
And thank you to all who tuned in and for asking such engaging questions.
Don't forget to watch "Coded Bias" in its entirety on Monday, March 22nd at 10:00 PM on PBS NC, your local PBS station, and on the PBS app.
And please keep an eye on your inbox for a link to the recording of tonight's conversation, resource links and an audience survey.
And I just will say thank you all and good night.
- Thank you.
- Thank you, good night.
- Thank you, good night.