How Much Trust Should We Give Generative AI?


Here’s an AI-generated image of me at my desk, pondering big thoughts. I have a lot of work to do with my ChatGPT prompts.


Does generative AI scare you, excite you, or do a little of both? There’s been a ton of talk about it already in fleet publications, webinars, and conferences. 

Fleet managers are beginning to experiment with it, asking for performance metrics sorted by vehicle type, duty cycle, driving style, region, and safety scores. 

Generative AI’s appeal lies in its ability to create. While predictive AI tells us what might happen based on existing data, generative AI can develop entirely new solutions, models, or content. 

Through conversational prompts, generative AI can pull data from multiple sources — maintenance records, fuel prices, vehicle depreciation rates — and generate comprehensive cost analyses. Yes, you could do that before, but the time to completion will shrink from hours to minutes. 
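To make that concrete, here’s a rough sketch of what such a request could look like behind the scenes, written in Python against the OpenAI API. This is purely illustrative, not a recommendation or anyone’s actual workflow; the file names, columns, and model choice are all assumptions.

```python
# Illustrative sketch only: combine a few hypothetical fleet data exports into
# one prompt and ask a generative AI model for a cost analysis.
# Assumes the openai Python package is installed and OPENAI_API_KEY is set.
import csv

from openai import OpenAI


def load_rows(path: str) -> list[dict]:
    """Read a CSV export into a list of row dictionaries."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


# Hypothetical exports -- a real fleet would pull these from its maintenance
# system, fuel card provider, and depreciation/remarketing data.
maintenance = load_rows("maintenance_records.csv")
fuel_prices = load_rows("fuel_prices.csv")
depreciation = load_rows("depreciation_rates.csv")

prompt = (
    "You are a fleet cost analyst. Using the data below, produce a total "
    "cost-of-ownership summary by vehicle class and flag any outliers.\n\n"
    f"Maintenance records: {maintenance}\n"
    f"Fuel prices: {fuel_prices}\n"
    f"Depreciation rates: {depreciation}"
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The point isn’t the code itself; it’s that the gathering-and-summarizing step that used to mean an afternoon of spreadsheet work collapses into a single request.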

Risk of Generative AI Overreliance 

However, while generative AI will be able to do even more amazing things in the future, there’s a risk of overreliance and the temptation to trust its outputs too much. 

I’ll give you a small personal example: I’ve found Grammarly to be one of the best work tools I’ve used in many years. Grammarly has essentially taken the place of a human copy editor, minus the drawn-out back-and-forth exchanges that arrangement requires. It really does fix my grammar and sentences. It does not, however, make the text clearer or more engaging. 

Here’s another thing: Grammarly makes AI suggestions for sentence corrections, and I accept them. What Grammarly says seems right. Am I actually going to do the work to challenge those suggestions? Admittedly, no. 

But when should we start caring about how an AI program gathers data and generates its responses? Grammarly is a benign enough example. But we’re already seeing different AI programs tilt toward results of a certain political stripe based on the data they’re trained on. 

Unchecked AI Data Can Cause Big Problems

We can’t let AI run the show unchecked. If the input data is biased, incomplete, or inaccurate, the AI’s outputs can be equally flawed. Here’s an example based on the article you’re reading: 

I asked ChatGPT to write an article as an opinion piece for fleet managers on the benefits and potential pitfalls of AI. I asked ChatGPT to do this based on my published writing online. 

It spit out this: 

“In one of my articles, I explored how generative AI is being used to dynamically generate the most efficient routes for drivers. These tools don’t just rely on historical data; they actively generate new routing models in response to live traffic patterns, weather conditions, and other variables.”

This is not exactly true. While generative AI is mentioned in that article, the above statement makes it seem like generative AI creates routes. Really, dynamic routing based on traffic and weather is a function of machine learning and first-level predictive AI, not generative AI. 

If I put that statement in this article, I’d look dumb to readers in the know. 

A Long Path to Truly Smart AI

As I’m hearing more and more, “AI is the dumbest it’ll ever be.” In one sense, that’s an exciting statement, meaning we have so many more benefits to reap as AI gets smarter. But getting from 85% to 99.9% smart will take a long time, and in the meantime that potential error gap of 14.9 percentage points will be a big management issue. 

This ability to generate insights from vast, disparate data sources is powerful, saving fleets time and resources. But it isn’t plug-and-play. Are organizations ready to invest in the requisite training and infrastructure to make it work, so fleet staff will be prepared to use it effectively? 

We’ll be getting answers to this question sooner than we think. 

This blog post first appeared in the members-only newsletter for the Automotive Fleet Leasing Association (AFLA).


