TL;DR:
- At Gini’s suggestion, I used generative AI to help build my strategy for my fantasy baseball draft – as a way of learning its capabilities.
- While my season hasn’t panned out as hoped, the experiment succeeded.
- I learned that expertise and precision are critical for getting what you want or need from generative AI.
- I also learned that the human touch is critical to successfully using generative AI.
- An experiment doesn’t have to work to succeed.
How Generative AI Ruined My Fantasy Baseball Season
If you’ve ever had an experiment go off the rails and were still satisfied with the result, this article is for you.
If you know anything about baseball, you know that players are always looking for an edge that can help them be better than the competition. Sometimes, they channel this search negatively – think steroids or banging trash cans to tip off opposing pitchers’ pitches. (Still looking at you, Houston Astros.)
More often, the quest for an advantage takes more benign forms – analytics and statistics, sports psychologists, extra time in the weight room. What’s relevant here is that it’s not at all unusual for a baseball person to look for an advantage.
I’m telling you this as a matter of self-defense. Even though I haven’t played baseball in almost 40 years, I am still an avid fan of the game and an even more avid – my wife might say fanatical – player of fantasy baseball.
And just like the ballplayer I used to be, I am always looking for an advantage over my fellow owners.
I’ve played in the same fantasy league since 1999 with 14 other expert owners who are equally obsessed with the sport, so there’s no sneaking a sleeper past a less observant owner. To be competitive in this league, you have to be a baseball expert who knows the left-handed middle reliever and the backup utility player on every team, even the ones you don’t follow.
I promise this is relevant to communications!
A Fun Use For AI
As I prepared obsessively for this year’s draft in late March, Gini Dietrich heard the details of my copious research and the list of all the baseball podcasts I was listening to, and she made a novel – even insidious – suggestion: What if you asked a GPT for a draft strategy?
The context for this suggestion is that Gini is an early adopter and enthusiastic believer in generative AI. She frequently uses it in our work, has adapted the PESO Model© for the era in which AI dominates search, and has even trained a PESO Model GPT to be an expert in all things PESO. She’s as all-in as you can get.
Me? I had never used it. Not because I was afraid of it (although I do believe self-awareness is coming, and sooner rather than later), but because I was worried about what it means for people in my profession – not just public relations, but specifically those of us whose core strength has always been storytelling and writing.
Executive and internal communications, where I have spent the bulk of the last decade and a half of my career, are particularly reliant on excellence in those areas; if a machine could learn to tell better stories than I, then what good would I be in the final decade of my career?
I thought it was better to continue adding the human touch to my work and being better than most, rather than joining our new robot overlords. And so as the rest of the world experimented and explored generative AI over the past couple of years, I sat comfortably, if quixotically, on the sidelines.
When Gini suggested that I use GPT to help with my fantasy salary cap draft strategy, however, I was too intrigued to be entirely dismissive. Remember that this is an expert league in which the difference between 1st and 9th place often depends on knowing who the [insert name of team you don’t care about here] will call up from the minors to pitch when one of their starters goes down.
If it would help me do better in this fantasy league, I’d turn the reins over to Skynet itself.
Besides, said Gini, this will be a good, non-threatening way for you to start to get used to generative AI; you’ll learn the tool by doing something you enjoy, and it will be less daunting to begin using it for professional reasons. Crafty one, that Gini Dietrich. She knows what she’s doing.
So, with the enthusiasm of a six-year-old learning to ride a bike—excited for the possibility but also a little bit unsure—I started feeding ChatGPT some thoughts and asked it for a winning draft strategy in a league with 15 expert owners and a $260 salary cap.
My Stat-Spewing Soulmate
When the machine started spitting strategies back and asking additional questions, angelic choirs might as well have been singing, with a light shining down on me from heaven above. It made me obsess over my draft strategy even more.
The more it asked, the more I responded; we entered a pas de deux of fantasy baseball strategy and statistical analysis that left me happier than a seagull with a French fry.
The more the engine spat back at me, the wiser I thought it was. When I talked about Player A and his anticipated xwOBA for the year, it would feed me details about why Player B would be a better selection.
It told me which players had BABIPs that indicated a rebound year in store for 2025. It was my stat-spewing baseball soul mate.
I walked into the draft room in central Connecticut (yes, I fly back every year to do the draft in person with all my old East Coast friends) feeling more prepared, enthusiastic, and confident than ever that I was about to slay my draft.
In our league, the top half – the top eight finishers – wins some share of the prize money at season’s end. I have finished in the money seven years running, and in nine of the last ten years, so when I say I was more confident than ever about my draft plan, it should tell you something about how good I felt about ChatOBP (as I had taken to calling it) and the strategy we’d built together.
I’d love to tell you that the plan worked to perfection, that I am running away with first place in a dominant performance my fellow owners could only dream of. I’d love to, but I can’t tell you that.
2025 is by far the worst season I have ever had; this is my 27th year in this league, and my team has never sucked this badly before. I’m in dead last place, and it’s not even a contest; I’m going to finish last by at least 8 points, probably closer to 12, behind the next-worst team. My ChatOBP draft gave me a season for the ages, just not the one I’d hoped for.
And yet, I consider the experiment a success. I’m glad I used ChatOBP to help me build my strategy, even though I’m having a historically bad season and will earn the ignominy of a last-place finish this year. Gini was right; I have learned some lessons about generative AI while skidding into last place.
Generative AI Is ONLY As Good As Your Input
It would be an easy cop-out for me to blame my horrible performance this season on ChatOBP. As it turned out, I initially gave it a lousy data set. By failing to identify the gaps and flaws in the strategy I presented to it, I put the machine at a disadvantage from the beginning.
Yes, it gave me back additional ideas and feedback, but it did so based on the ideas I had given it to start with. It accepted the strategy I was building at face value and shared answers and ideas with me based on that strategy.
My plan this year deviated from what I normally do, and that – not ChatOBP – is responsible for my poor showing.
Generative AI can deliver content, but real human knowledge is needed to evaluate whether that content is accurate, appropriate, or valuable. This reinforces the value of deep expertise in specific fields – something that is sometimes missing when humans use AI.
Precision Matters in Training AI
In baseball, precision matters; a three-inch variance can be the difference between a pitch being called a ball or a strike, or between an outfielder making a spectacular catch and the ball sailing over their head. I learned that precision matters just as much in training and working with generative AI.
When you must explain something succinctly and clearly enough for an AI to understand and act on it, you quickly discover gaps in your thinking. (Or you should, anyway. I didn’t.) This forces you to articulate ideas more precisely, which improves communication with humans, too.
Human Judgment Matters More, Not Less
In discovering that generative AI can’t make chicken salad out of chicken shit, I realized that my concerns about AI replacing human storytellers and writers are overblown.
It’s a fantastic tool, but I haven’t yet seen a bot that can make bad content brilliant; it can only write the bad stuff more effectively. Give it a bad story to tell, and it’ll tell it well.
There’s still a lot of room for the human eye and for the precision of a human storyteller to build emotion, give context, and tell the machine what to do. If the human isn’t doing their job, the machine can’t do its job.
(If I were interested in making cheesy references, I would say that baseball generative AI, like media relations, relies on having a good pitch.)
Taken in the broader context, I’ve realized that as AI handles things like basic writing, humans should develop a sharper sense of what requires human insight, like understanding context, making ethical judgments, or recognizing when something feels “off” even if it’s technically correct.
Asking Better Questions Is A Skill Worth Developing
I got excited when ChatOBP began giving me information I hadn’t asked for or considered. And I stopped asking follow-up questions or trying to process whether the machine was giving me better information than I had on my own. I just accepted that the AI would have better answers.
Instead, I should have considered whether I’d asked it the right questions. I learned that the quality of what you get from AI often depends on how thoughtfully you frame your requests. Taken more broadly, using AI should encourage more strategic thinking about problem-solving in general.
Don’t Be Afraid To Get It Wrong
Like with anything, you can’t be afraid to get things wrong. I’ll use ChatOBP again next season to help me build a draft strategy; this year’s experience was good for learning, and I will have a much better understanding of how to work with AI in developing a winning strategy next year.
More broadly, rather than expecting perfect results immediately, working with AI teaches the human using it the value of refinement through multiple rounds of feedback. This mindset also applies well to creative projects, problem-solving, and learning in general.
Make It Fun
Gini had a method to her madness when suggesting I experiment with AI in developing my strategy. By encouraging me to use the tool for something I’m passionate about, she removed any intimidation I might feel about it; making it fun makes it easier to learn.
If you’re one of the handful of Luddites like me who have avoided generative AI, ease your way into the tool with something you’re interested in and passionate about, and learn the fun way.
Why I’d Still Call It A Win
While being completely out of contention as early as May and having nothing to play for over the final four months of the season isn’t fun – and I will have to wear the loser’s hat at next year’s draft, which is not something I’m looking forward to – I consider the experiment with ChatOBP a successful failure, to borrow the phrase astronaut Jim Lovell coined about the Apollo 13 mission.
My concerns about working with generative AI were overcome, my fears about the importance of human judgment turned out to be overblown, and I got more comfortable with a tool I will need for work.
The AI didn’t win me a championship. But it won me over. And that might matter more.
As for fantasy baseball, I have four words for my fellow owners: Wait ‘til next year!