Killed by the First Train: What an 1830 Tragedy Says About AI Today
Back in 1830, Britain threw a huge party to show off its brand-new Liverpool & Manchester Railway, the first train line meant for everyday passengers. Brass bands played, politicians waved from open coaches, and crowds lined the track to cheer the future. Halfway through the trip the trains paused at a little stop called Parkside so the engines could take on water. One of the VIPs on board, the Liverpool MP William Huskisson, decided to step down onto the track bed to shake hands with the Duke of Wellington, who was sitting in a carriage on the opposite line. In all the excitement Huskisson forgot, or maybe never really understood, that the other line was still in use.

While he chatted, George Stephenson’s famous locomotive, the Rocket, came racing along at what people then considered breakneck speed, about forty kilometres an hour. Huskisson tried to scramble back into his coach, but the door swung the wrong way and left him stuck in the locomotive’s path. The Rocket hit him hard, crushing his leg, and despite a desperate dash for medical help he died that evening.

His death stunned the country. Only hours earlier everyone had been cheering the miracle of steam; now a respected public figure had been killed simply because no one had laid down clear rules to keep people off a live track. The shock forced railway owners and lawmakers to act fast: they fenced off lines, added signal systems, trained drivers properly, and set up inspectors to probe every crash. In other words, progress only became truly safe once everyone admitted how dangerous it could be.
Fast-forward to today and think about artificial intelligence. Chatbots draft contracts, cars drive themselves, and image tools create pictures that look real, all in a matter of seconds. The technology feels just as magical as steam engines must have felt in 1830, and it’s spreading just as quickly. But, like those early trains, AI can harm us in ways most of us don’t yet see. A language model can invent fake legal cases that fool a lawyer, a driverless car can misread a street and cause a crash, and a cloned voice can trick a parent into sending money to a scammer. The systems keep learning and scaling, yet we still don’t fully understand how or why they sometimes fail.

That puts us in a Parkside moment of our own: we’re standing on the tracks, taking selfies, while an invisible locomotive comes around the bend. If we want the benefits of AI without the horror story, we need the modern versions of fences and signals. Independent testers should stress-check new models before they go public; clear rules must spell out who is responsible when things go wrong; and high-risk uses such as health care, finance, and critical infrastructure should require certified operators who actually know what they’re turning loose. Good regulation isn’t a buzz-kill; it’s the safety net that lets everyone ride with confidence.

The railway age only took off after people saw the danger and fixed it. AI will be no different, unless we ignore the warning and find ourselves repeating Huskisson’s tragic lesson.