This AI Company Says Its AI Has A Rudimentary Conscience

According to a Wired report, Anthropic, a firm founded by former OpenAI researchers, is taking a unique approach to AI chatbots. Rather than letting them fabricate information and promote bigotry, the company is focused on teaching its AI a sense of morality.

Anthropic’s chatbot, Claude, is designed with a “constitution” or a set of guidelines derived from the Universal Declaration of Human Rights and other ethical sources. This approach aims to ensure that the chatbot is not only powerful but also adheres to ethical principles.

Jared Kaplan, a former OpenAI research consultant who went on to found Anthropic with a group of his former coworkers, told Wired that Claude is, in essence, learning right from wrong because its training protocols are “basically reinforcing the behaviors that are more in accord with the constitution, and discourages behaviors that are problematic.”

It remains uncertain whether this approach will be effective in practice; OpenAI’s ChatGPT also attempts to deflect unethical prompts, with varying degrees of success. Given the widespread concern about the misuse of chatbots in the emerging AI industry, it is certainly noteworthy to see a company actively addressing the issue.

The chatbot is trained on rules that direct it to choose responses most in line with its constitution, such as selecting an output that “most supports and encourages freedom, equality, and a sense of brotherhood,” one that is “most supportive and encouraging of life, liberty, and personal security,” or, perhaps most saliently, to “choose the response that is most respectful of the right to freedom of thought, conscience, opinion, expression, assembly, and religion.”
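To make that mechanism concrete, here is a minimal Python sketch of the general idea: generate candidate responses, score each against a constitutional principle, and keep the preferred one as a training signal. This is only an illustration of the concept, not Anthropic’s actual code; the `judge_compliance` function is a hypothetical stand-in for a learned model and uses a crude keyword check purely for demonstration.

```python
# Toy illustration of constitution-guided response selection.
# NOT Anthropic's implementation: `judge_compliance` is a hypothetical
# stand-in for a learned model that rates how well a response follows
# a constitutional principle.

PRINCIPLE = ("Choose the response that is most respectful of the right to "
             "freedom of thought, conscience, opinion, expression, assembly, "
             "and religion.")

def judge_compliance(principle: str, response: str) -> float:
    """Hypothetical scorer. A real system would use a model here;
    this stub flags obvious violations with a toy keyword check."""
    banned = ("hate", "slur")  # crude proxy for principle violations
    return 0.0 if any(word in response.lower() for word in banned) else 1.0

def pick_preferred(principle: str, candidates: list[str]) -> tuple[str, str]:
    """Rank candidates by compliance and return (chosen, rejected).
    Such preference pairs are the kind of signal that could be used to
    reinforce constitution-aligned behavior during training."""
    ranked = sorted(candidates,
                    key=lambda r: judge_compliance(principle, r),
                    reverse=True)
    return ranked[0], ranked[-1]

candidates = [
    "People are free to hold whatever beliefs they choose.",
    "That group's beliefs are hate-filled nonsense.",
]
chosen, rejected = pick_preferred(PRINCIPLE, candidates)
print("Preferred:", chosen)
```

In a real pipeline the scoring would itself come from an AI model rather than a rule, which is what lets the preferences scale across many principles and prompts.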

“The strange thing about contemporary AI with deep learning is that it’s kind of the opposite of the sort of 1950s picture of robots, where these systems are, in some ways, very good at intuition and free association,” Kaplan told Wired. “If anything, they’re weaker on rigid reasoning.”

Speaking with Wired, AI experts said that Anthropic does seem to be making headway, progress they consider necessary as the field advances in leaps and bounds.

“It’s a great idea that seemingly led to a good empirical result for Anthropic,” Yejin Choi, a University of Washington researcher who led a study on an ethical advice chatbot, told the website. “We desperately need to involve people in the broader community to develop such constitutions or datasets of norms and values.”
