A few decades ago, we imagined this era would bring flying cars, superintelligent robots, and fully automated lives. We are getting there, and robots have already begun popping up in our daily routines. But while we worry about the morality of artificial intelligence, our robots are getting suicidal: a robot security guard patrolling an office complex in Washington, DC, drove itself into a fountain and ended its life by drowning.
The Knightscope K5 is a security robot designed to spot potential criminals using facial recognition and a variety of other sensors. It was developed by a Mountain View robotics start-up founded in the wake of the Sandy Hook Elementary School shooting, amid the recent wave of violence in the US. The five-foot, 136 kg robot first became the center of attention when a 41-year-old man pushed it over as it patrolled a car park in Silicon Valley. On another occasion, the K5 was alleged to have run into a toddler and simply rolled on.
Despite its potentially suicidal tendencies and its capacity to hurt bystanders, the robot has proven useful and economical for patrolling public places like parking lots and malls. Uber uses the K5 to patrol some of its parking lots. Rental prices for the robot start at $7 an hour, less than the federal minimum wage, making it far cheaper than hiring a person for the same patrols.
After the toddler accident, the company issued an apology calling it a freak accident. This time the matter is the suicide of an armless robot, and we shall have to wait and see how it affects the company’s reputation and the robot’s perceived reliability. Jeremie Scott, a national security fellow at the Electronic Privacy Information Center (EPIC), once said, “Automated surveillance, facial recognition and license plate recognition in public makes us all suspects. The K5 could become like a cuter, less aggressive Terminator that kills privacy instead of people.”