In a discussion between OpenAI Director Shivon Zilis and Ethics and Governance of AI Fund Director Tim Hwang, both shared their perspectives on AI’s progress, its public perception, and how we can help ensure its responsible development going forward.
Hwang brought up the fact that artificial intelligence researchers are, in some ways, “basically writing policy in code” because of how influential the particular perspectives or biases inherent in these systems will be, and suggested that researchers could actually consciously set new cultural norms via their work.
Zilis added that the total number of people setting the tone for incredibly intelligent AI is probably “in the low thousands.”
She added that this means we likely need more crossover discussion between this community and those making policy decisions, and Hwang noted that there is currently “no good way for the public at large to signal” what moral choices should be made around the direction of AI development.
Zilis concluded with three guiding principles for how she thinks about the future of responsible artificial intelligence development:
- First, the tech’s coming no matter what, so we need to figure out how to bend its arc with intent.
- Second, we need to figure out how to get more people involved in the conversation.
- And finally, we need to do our best to front-load the regulation and public discussion needed on the issue, since ultimately it’s going to be a very powerful technology.
Source: TechCrunch