It’s now been just over two weeks since women’s Olympic soccer players from New Zealand spotted a drone flying over their training session near Saint-Étienne, France. Police quickly detained the operator as the New Zealanders registered a protest.
Within a day, the International Olympic Committee sent the drone operator and Bev Priestman, head coach of Canada’s women’s team, home. As a penalty, FIFA, soccer’s governing body, deducted six points from the Canadian women’s standing in the Olympic group stage. Meanwhile, Canada Soccer promised:
“(We shall) seek to understand the historical culture of competitive ethics within all our programs.”
On the pitches in France, in the days that followed, the Canadian women played their hearts out – not only to make it back to the medal round, but also to dispel the implication that they were cheaters. I admire their determination and team spirit, but like most of us (despite the contention that drone spying is common in sport) I condemn the practice.
More emphatically, I decry any sector of society that chooses to abuse technology – whether in sport, in government, in business, in banking or even in education. How often have we heard about technological fraud on the stock market, in the shady world of cryptocurrency, or about identity theft, when criminals use technology in “phishing” expeditions to acquire people’s bank account, phone or Social Insurance numbers?
Law enforcement officials say they cannot keep up with this “synthetic fraud.” A security observer in Silicon Valley went further. “The very technology that empowers us,” Nick Shevelyov told ABC News, “may also imperil us.”
At the heart of this synthetic fraud, of course, is what’s called generative AI (artificial intelligence), which enables computers and other machines to simulate human intelligence and solve problems. I was surprised to discover the wide range of AI applications at our fingertips.
They include web search engines such as Google Search, video-streaming platforms such as YouTube and Netflix, interactive speech assistants such as Siri and Alexa, and (perhaps the most advanced and unbridled) creative tools such as Apple Intelligence, AI art generators, and ChatGPT.
While I think (back in the mid-1960s) I was one of the first teenagers to have a transistor radio that fit inside my shirt pocket, and then (in the 1970s) my wife and I eagerly bought one of the first Texas Instruments hand calculators on the market, I still harbour a constant and (I hope) healthy suspicion of technology and the potential for its abuse.
At the top of my suspicious offenders list is ChatGPT. For the uninitiated (and I include myself here), it’s an artificial intelligence program that generates dialogue. It claims to understand human language, spoken and written, learning from the information it’s fed “what to spit back out,” according to OpenAI, the firm that invented it. The OpenAI site offers this example:
“Explain how climate change affects endangered species,” it suggests as a common question. Or, “Write me a poem,” and when it does, it prompts you to request, “Now make it more exciting.”
OK, should I worry that its explanation of climate science is so precise and instantaneous? Probably not – unless, as a teacher, I find (as I did when I taught journalism) stories, quotations and source information presented as original when clearly they’re lifted from someone else’s brain.
However, I become nearly rabid at the thought that ChatGPT can write me a poem and then “make it more exciting.” As a professional writer, broadcaster, author and everyday consumer of information and art, I consider generating rhyme, rhythm and imagery that are NOT yours plain plagiarism. It’s wrong. It’s illegal.
That visceral response probably explains my feelings about other so-called “convenient” technology. I never use the self-checkout; I consider it a way of taking jobs away from people. I rarely, if ever, do my banking online for the same reason; and if I’m known at my bank only by my ATM pass number, who’s going to consider my application as a person when it comes time for a mortgage or a loan?
The other night, we took our family out to dinner at a local chain restaurant. Part of our food was delivered by a robot with trays on wheels. I immediately thought of that deadly scene in 2001: A Space Odyssey, when astronaut Dave Bowman, attempting to re-enter the spacecraft from outside, says: “Open the pod bay doors, Hal.”
And Hal the computer says, “I’m sorry, Dave, I’m afraid I can’t do that. I know you were planning to disconnect me, and that’s something I cannot allow to happen. This conversation can serve no purpose anymore. Goodbye.”
Or, “I’m sorry, Ted, you haven’t paid me yet. I cannot deliver your food. Goodbye.”