Last week, Sam Altman, the co-founder and CEO of OpenAI, spoke at an event in Lagos, Nigeria. During his trip, he met with members of the tech community and talked about the prospects for artificial intelligence. Rest of World spoke to Altman after the event. This interview has been edited for clarity and length.

You talked about the potential of Africa’s youth population. What’s the best way to think about this potential in terms of production?

When a new technological revolution comes along, many people pay attention and say we can now build amazing new tools. It happened with computers, it happened with the internet, and it has happened many times before. What I think is going to happen — and certainly what I got to see some of today — is people are going to form new startups, or they’re going to pivot existing startups and say, “I’m going to build something that is either better than what I could build before or is brand-new, something I just couldn’t build before at all.” And the energy here seems great for that. People are pushing the limits of technology and coming up with new ideas we haven’t heard before.

What are you most excited about in terms of AI’s potential for innovation or solving problems in Africa?

I’ve gotten to talk to many startups today that are doing different things, and they all sound amazing. It seems like the people building startups here are convinced that AI will generally have a very positive impact on Africa. They said it’s the most excited they’ve been about any new technology in a long time.

How can we address the issues of bias, fairness, and often racist tendencies in generative AI systems, and what role do you think regulation can play in ensuring equitable outcomes?

We have a technique called RLHF [reinforcement learning from human feedback] that is good at reducing bias in these systems. A paper I saw last week found that by using RLHF on these models, you could make a model with much less implicit bias than humans have. I’m optimistic that we will get to a world where these models can be a force to reduce bias in society, not reinforce it. The early systems, before people figured out these techniques, certainly reinforced bias, but we can now explain to a model that we want it to be unbiased, and it’s pretty good at that.
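For readers unfamiliar with how RLHF works under the hood: the first step is typically to train a reward model on human preference rankings between pairs of model responses. A minimal sketch of the preference loss commonly used for that step (the Bradley-Terry formulation) is below; the reward values are illustrative toy numbers, not anything from OpenAI's systems.

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry loss for training an RLHF reward model.

    The loss is -log(sigmoid(r_chosen - r_rejected)): it shrinks as the
    reward assigned to the human-preferred response rises above the
    reward assigned to the rejected one, pushing the model to agree
    with human rankings.
    """
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A reward model that already scores the human-preferred answer higher
# incurs a small loss; one that scores it lower incurs a large loss.
good = preference_loss(2.0, -1.0)   # preferred answer scored higher
bad = preference_loss(-1.0, 2.0)    # preferred answer scored lower
```

Once the reward model is trained this way, the language model itself is fine-tuned (usually with a policy-gradient method such as PPO) to produce responses the reward model scores highly, which is how instructions like "be unbiased" get baked into behavior.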

How do you strike a balance between encouraging innovation and making sure there’s a responsible use of AI through regulatory measures?

I was in the U.S. Senate talking about this a couple of days ago. I was pleasantly surprised by how well the conversation went. People see this balance. We need to continue to have innovation, but this is a very powerful technology and — maybe in the future — more powerful than other technology we’ve had to grapple with. So we need a sensible framework to avoid the really bad, scary, long-term things that could go wrong. Individual companies like ours also have a lot of responsibility, and we’ve tried to display that in how we’ve deployed these products.