A study published on Monday by researchers at the Max Planck Institute for Human Development’s Center for Humans and Machines shows that humans can learn new things from artificial intelligence systems and then pass those skills on to others in ways that could have a cultural impact.
The study’s hypothesis is that algorithms may exhibit different biases and problem-solving approaches than their human counterparts. Hybrid human-algorithm problem solving could therefore lead to better outcomes in situations where diversity in problem-solving methodologies is advantageous.
“Digital technology already influences social transmission processes among people by providing new and faster means of communication and imitation,” the researchers write in the study.
“Going one step further, we argue that rather than a mere means of cultural transmission (such as books or the Internet), algorithmic agents and AI may also play an active role in shaping cultural evolution processes online where humans and algorithms routinely interact.”
The study asks whether algorithms with biases complementary to humans’ can boost performance in a controlled planning task, and whether humans then transmit those algorithmic behaviors to other humans.
To answer these questions, the researchers conducted a large behavioral study and an agent-based simulation to test the performance of transmission chains with human and algorithmic players. The results show that the algorithm boosts the performance of the participants immediately following it, but this gain is quickly lost for participants further down the chain.
The findings suggest that algorithms can improve performance, but that human biases may prevent algorithmic solutions from being preserved as they are passed on.
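The decay the researchers describe can be illustrated with a toy agent-based sketch of a transmission chain. This is not the study’s actual model; all parameters and dynamics here (a single “quality” score, a fixed retention rate, a human baseline) are hypothetical, chosen only to show how a gain introduced by an algorithmic player can fade as humans regress toward their own habits:

```python
import random

HUMAN_BASELINE = 0.5   # average quality of a human-found solution (hypothetical)
ALGO_QUALITY = 0.9     # the algorithmic player finds a better solution (hypothetical)
RETENTION = 0.5        # fraction of the observed solution a human preserves (hypothetical)

def human_step(observed_quality, rng):
    # A human partially adopts the observed solution but regresses
    # toward the human baseline, plus some individual noise.
    quality = RETENTION * observed_quality + (1 - RETENTION) * HUMAN_BASELINE
    return quality + rng.gauss(0, 0.02)

def run_chain(length=6, seed=0):
    # Position 0 is the algorithmic player; each later position is a human
    # who only observes the solution of the player directly before them.
    rng = random.Random(seed)
    qualities = [ALGO_QUALITY]
    for _ in range(length - 1):
        qualities.append(human_step(qualities[-1], rng))
    return qualities

chain = run_chain()
# The first human benefits from the algorithm's solution, but quality
# decays geometrically back toward the human baseline down the chain.
```

Under this toy dynamic, the algorithmic boost is halved at every step, which mirrors the paper’s qualitative finding: an immediate performance gain that is not culturally preserved.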
However, the prospect of machine learning influencing human learning—and culture itself—across generations is terrifying.
“There’s a concept called cumulative cultural evolution, where we say that each generation is always pulling up on the next generation, all throughout human history,” Levin Brinkmann, one of the researchers who worked on the study, told Motherboard.
“Obviously, AI is pulling up on human history—they’re trained on human data. But we also found it interesting to think about the other way around: that maybe in the future our human culture would be built upon solutions which have been found originally by an algorithm.”
In 2016, AlphaGo, a computer program that plays the Chinese board game Go, beat world champion Lee Sedol; its gameplay was regarded as startling and unusual, possibly breaking longstanding Go norms. AlphaGo’s peculiar play most likely stems from the fact that it learned through self-play, with little or no reliance on historical human gameplay. AlphaGo’s success raises the question of how such novel strategies might affect human play.
The use of technology such as books or software for human learning in games like Go is not a new phenomenon; it is a common means of socially transmitting information from one generation to the next. However, recent advances in AI have enabled algorithms not just to play games but to play freely, without relying on human games, enabling social learning between artificial and biological agents.
Algorithms already have a significant effect on society. They are deployed to influence the behaviour of workers in both physical and virtual workplaces, and several AI investigations have revealed how eerily easy it is for these systems to reproduce pseudosciences like phrenology and physiognomy.
“I don’t think our work can really say a lot about the formation of norms or how much AI can interfere with that,” Brinkmann said.
“We’re focused on a different type of culture, what you could call the culture of innovation, right? A measurable value or performance where you can clearly say, ‘Okay this paradigm—like with AlphaGo—is maybe more likely to lead to success or less likely.’”
Source: The Royal Society