As transportation companies look into implementing AI solutions, a critical factor is preparing their workforces to use the technology effectively.
At a recent Trimble media roundtable, executives discussed the significance of developing an AI-focused upskilling program, maintaining clean and reliable data, and implementing role-based training for transportation teams.
“They need to adopt an AI-driven culture and mindset so that AI is just fundamental to the continued growth and success of businesses going forward,” said Eric Lambert, vice president of legal and employment counsel at Trimble.
This has become a core skill that transportation companies look for in their employees, he said. However, he noted that training is not one-size-fits-all; some employees may be more resistant to technology and need extra “gentle help to nudge into the brave new world of AI,” which also guards against the loss of valuable institutional knowledge.
On the other hand, Jonah McIntire, chief platform officer at Trimble, pointed out the importance of interpersonal skills and the ability to communicate effectively with AI systems.
As more systems become automated, McIntire said employees will need stronger relationship-building skills, noting that people are going to interact with others more than they deal with systems.
Similarly, people will need to learn to coach AI systems. With introspection, communication, and interpersonal skills needed, people are “being trained to be AI bosses, rather than [just] filling out forms.”
Shaman Ahuja, deputy CEO at Optym, emphasized that AI is “democratizing access” to expertise that previously required specialized knowledge. Employees can now interact directly with data through AI agents that can run and process information.
However, he noted the real possibility that one person with an AI agent could do the work of four people, potentially leading to job attrition.
Managing data hygiene and hallucinations
AI systems are only as good as the data they accumulate. However, McIntire pointed out that while older AI technologies were highly vulnerable to bad data, current generative AI is fundamentally different and “very resilient to low-quality data.”
Taking a more cautious stance, Lambert said companies should “view data as a core strategic asset, and that means you need to adopt a kind of mindset of data hygiene by design.”
While some applications are less sensitive to data quality, he said that others critically depend on clean data, making it a “good AI governance principle” to ensure employees are working to “cleanse your data.”
The experts also discussed AI accuracy concerns, such as managing hallucinations, in which an AI model generates false or inaccurate details.
McIntire said setting expectations and being explicit are important with AI systems. A common error is providing too little context and leaving things implicit. “These models don’t know who you are. They don’t know what company you’re working for. They don’t know your true [intentions].”
Lambert suggested treating the AI “like an employee on its first day of the job,” as it doesn’t know industry acronyms or have a business background and needs everything explained.
Ahuja warned that AI “loses context over time” in extended conversations, noting that ChatGPT would most likely forget a thread from a few days ago.
As companies adopt more generative AI, Lambert emphasized effective prompting as an essential soft skill. “The prompt is key, and constructing the [parts] of the effective prompt, and knowing how to basically ask the question you want AI to solve.”
Lambert recommended establishing a persona for the question you’re asking—such as, “You’re an experienced marketing professional”—as well as telling the AI what details not to include.
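The prompting advice above can be sketched as a simple template: set a persona, supply the context the model cannot infer (per McIntire, these models “don’t know who you are” or what company you work for), state the task, and list what to leave out. This is an illustrative sketch; the function and field names are assumptions, not from any Trimble or Optym tool.

```python
def build_prompt(persona: str, context: str, task: str, exclusions: list[str]) -> str:
    """Assemble a structured prompt from its parts."""
    lines = [
        f"You are {persona}.",      # establish a persona for the question
        f"Context: {context}",      # be explicit -- the model doesn't know your business
        f"Task: {task}",            # the question you want the AI to solve
    ]
    if exclusions:
        # tell the AI what details NOT to include
        lines.append("Do not include: " + "; ".join(exclusions))
    return "\n".join(lines)

prompt = build_prompt(
    persona="an experienced marketing professional",
    context="We are a regional trucking carrier evaluating AI upskilling programs.",
    task="Draft a one-paragraph summary of AI training benefits for dispatchers.",
    exclusions=["pricing figures", "competitor names"],
)
print(prompt)
```

Treating each field as required mirrors Lambert’s “employee on its first day of the job” framing: nothing is left implicit for the model to guess.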