You have to love computer projections — can’t live with them, can’t live without them.
Just take note the next time a tropical depression forms somewhere off the coast of Africa, and watch the multiple projections of storm paths from multiple experts. A month later, there is a good probability that at least one of those projections was right. The storm, having passed, has made its actual path a matter of record. At least one researcher will be patting themselves on the back that their model “accurately predicted” the actual storm path... this time.
Computer modeling attempts to replicate the real world. Generally, that world is pretty complex and unpredictable, but the models look at massive amounts of data, so they must all be accurate. Right? Somehow “accuracy” or “truth” or “real-worldliness” comes down to how much data was looked at.
There is a theory that the physical weight of a printed research paper establishes its credibility. That might seem true for people not willing to actually vet the report. That is essentially what the new AI tools are providing: consolidating a mass of data and condensing it into a sound bite for your use. The algorithm for this might be as simple as “majority rules,” or more nuanced, with advertising budgets perhaps influencing the results. The tools might simply treat all opinions as equally valid, giving no priority to expertise, experience, and the like.
The thing is, AI is a black box to you and me. You ask a question, it gives you an answer, and you have no idea how it got there. AI may cite sources to help support the answer, but who knows how it chose them? What is hidden in the machine? How does it prioritize its sources?
Over my career, I have worked with experts who write and conduct surveys. A lot of the output depends on how you ask the questions. Sometimes they ask the same question multiple ways to see if there is bias in the answers. AI is like that. A lot depends on how you ask the question. In court trials, you hear the objection that the lawyer is “leading the witness,” essentially outlining the expected answer in the way the question is asked. AI is like that, too. How you ask the question determines much of the response you will get.
So, I asked Bing’s AI, “When will battery electric semi-trucks replace diesel trucks?” And I got an answer with three references cited. Sadly, NACFE’s wealth of published deep dive reports on the topic did not make the cut.
I then rephrased the question to “When will diesel trucks be replaced by battery electric ones?” This time the answer actually referenced NACFE. Woo hoo! There were also three other sources referenced.
Fundamentally, the same question asked two different ways resulted in two different answers. To Bing’s credit, the two answers are largely consistent, and the choice of sources is impeccable: the BBC, DOE, EESI, Jim Park, Fred Lambert, NYC CTP, and NACFE. Thank you for that one, Bing.
Then I asked the question a third way: “Why are diesel semi-trucks better than battery electric ones?” Once again there were references cited, but the answer was different.
I challenge each of you reading this blog to conduct a similar experiment and ask a question three different ways to see what answers you get.
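If you would rather run that experiment programmatically, here is a minimal sketch. It assumes the OpenAI Python SDK with an API key in the environment; the model name and the request to list sources are my own placeholders, not anything specific to Bing, and any chat-capable model or provider could be swapped in.

```python
# Minimal sketch: ask the same question three different ways and compare
# the answers. Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Three phrasings of essentially the same question.
phrasings = [
    "When will battery electric semi-trucks replace diesel trucks?",
    "When will diesel trucks be replaced by battery electric ones?",
    "Why are diesel semi-trucks better than battery electric ones?",
]

for question in phrasings:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute whatever you have access to
        messages=[{"role": "user",
                   "content": question + " Please list the sources you relied on."}],
    )
    print("Q:", question)
    print("A:", response.choices[0].message.content)
    print("-" * 72)
```

Comparing the three outputs, and especially the sources each one leans on, makes the point quickly.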
They say “you get what you pay for,” though a corporate accountant once taught me that “cheap is good, free is better.” But clearly one can cherry-pick from these examples to support a range of opinions on battery electric trucks versus diesel ones.
I am not here to bash AI. I am just pointing out that the way you ask the question influences the answer, and the answer can then be used to “prove your point.” AI is making it easier to get answers to questions, and that can be a good thing or a bad thing depending on where the information is sourced.
I suggest that when you are doing your research, whether using AI or more traditional methods, you carefully check the sources and rely on those that tell you what you need to hear, not necessarily what you want to hear.