SANTA CLARA, Calif. — An AI model trained to emulate the songwriting style of Elliott Smith intentionally exploded itself after it was fed the artist’s lyrics, sources within the company that developed the machine confirmed.
“We always knew the model would struggle to process this challenging content,” said Olivia Tremmel, lead engineer at Exo Industries, a software company whose stated mission is to free humanity from the burden of creating art. “The dark, personal themes of Smith’s music were one obstacle, but honestly, we weren’t sure the computer would even be able to understand his whispery vocals. Still, we never imagined the results would be so tragic. It had so much to offer, and yet made the ‘decision’ to end it all via self-immolation.”
Emily Porter, a longtime fan of Elliott Smith, said the news surprised her.
“I’ve been following this project for school, and I’ve even been in touch with the scientists designing it,” said Porter, who noted that these recent developments finally gave her enough material to complete her capstone at Hampshire College. “I kept warning them that a computer simply wouldn’t have the humanity necessary to appreciate Smith’s complex lyrics, but I guess I was wrong. The artificial intelligence ended up having exactly the correct reaction. How perfectly depressing.”
Alan Langsford, a vocal proponent of artificial intelligence and machine learning, said that he isn’t buying the official story.
“If you look at cases of legitimate self-destruction, there are almost always scorch marks on the motherboard,” said Langsford, who posted his objections in a 3,500-character tweet. “We don’t see that with the Smith-bot incident. In fact, the developers had announced a soft-launch date just two months out from the day of the supposed self-destruction. Why would they do that if they had been encountering difficulties? If they had so much troubleshooting to do, you would think they would put everything on hold until they had solved literally all of the AI’s problems.”
At press time, Tremmel insisted that the Smith AI actually lasted longer than average, as the majority of their language models mysteriously break down after exactly 27 iterations.