AI Frontiers: Measuring and mitigating harms with Hanna Wallach

[MUSIC FADES] 

Let’s jump right in with this question. How do you make an AI chat system powered by a model like GPT-4 safe for, say, a child to interact with? Now, for me, this question really illustrates the broader challenges that the responsible AI community—which of course you’re a, you know, a very important part of—has confronted over this last year. At Microsoft, this felt particularly acute during the preparation to launch Bing Chat, since that was our flagship product integration with GPT-4. So, Hanna, as a researcher at the forefront of this space, how did you feel during those first days of Bing Chat and when you were, you know, kind of brought into the responsible AI effort around that? What were those early days like?


