At a recent World Travel and Tourism Council (WTTC) webinar, two speakers, Amy and Aidan, summarized some of the group's recent research on AI in travel in collaboration with Microsoft.
“AI is no longer a futuristic concept,” Amy said. “The reality today is that we can transform our industry in exciting and surprising ways. Imagine being able to optimize business operations and revolutionize the way destinations are marketed, sold and promoted.”
Aidan spoke about recent innovations in generative AI.
“The future of travel and tourism is bright and AI is the key to unlocking a world of new possibilities,” Aidan said.
A few paragraphs ago, I referred to Amy and Aidan as “speakers.” That was a deliberate choice of wording: neither of them is human. Both were products of the very generative AI “they” were talking about, created by James McDonald, Director of Travel Transformation at WTTC. He named them “AI-mee” and “AI-dan.”
To create them, McDonald uploaded WTTC's AI reports to an AI assistant and asked it to write a summary of key points and a two-minute script. He then asked the assistant to create image prompts related to the script, which he fed into an AI image generator. Finally, he fed both the script and the images into a voice generator, and AI-mee and AI-dan were born.
It was a cool video, though I thought it was fairly obvious that AI-mee and AI-dan were AI creations. Technology that mimics human mannerisms in a digital environment isn't perfect. Yet.
As generative AI becomes increasingly capable of things like imitating voice and video, the opportunities and threats it poses are growing.
To be clear, McDonald is not trying to fool anyone with AI-mee or AI-dan. Before playing the video presentations, he told us exactly how he had created them.
But the idea that bad actors could use generative AI to perpetrate fraud through, say, deepfakes is a frightening proposition. (Deepfakes are videos, images, audio and other content generated to impersonate real people.)
I have yet to hear of an AI-powered attack on a travel agency, but it's probably only a matter of time. In a report published last year, the Bank of America Institute called deepfakes “one of the most effective and dangerous tools for disinformation” and said deepfakes imitating executives have already been used to target some organizations.
Travel agents are frequent targets for scammers. ARC maintains a page of the latest attempts, such as a scammer posing as Sabre who sends an email asking an advisor to click a link to log into their GDS. Doing so delivers the advisor's Sabre login credentials directly to the scammer.
When it comes to deepfakes in particular, the Bank of America Institute recommends education first, along with cybersecurity best practices and strengthened verification and validation protocols.
The report also offers some practical tips for identifying deepfakes, at least for now; deepfakes will only get better over time. Deepfake audio may include unnaturally long pauses between words or sentences, and the voice may sound flat (“if it sounds weird, it probably is”). In video, look for poor lip syncing, long eye blinks, blurred jawlines and mottled skin tones.
So be careful. WTTC's AI-mee and AI-dan were friendly presenters and a great use of the technology. But it may not be long before we see deepfakes of a more nefarious kind.