First of all, let's get it out of the way: diet can be a triggering word. I don't mean that kind of diet.
Instead, I'm referring to a few follow-up questions I received after webinars with a client—and how mindful curation of content, plus regular practice of reverse-engineering complicated challenges, helps me feel confident with AI.
The (great) questions were:
"AI is consistently changing and developing. MJ, how do you stay AI fluent and on top of trends? Whether it's learning by doing or using resources, I'd love to hear how you stay up to date," and "What are the best ways to keep up with AI tools and trends so we feel comfortable and confident using them to boost our efficiency, instead of feeling overwhelmed or intimidated?"
Oh, wow. If you figure out how to stay totally confident and up-to-date on AI, let me know. I mean, it's my job, but it's my job because it's nearly impossible!
In the meantime, here's my imperfect response (written without AI, btw):
I’ve always liked the concept of an information diet—a curated, ‘healthy’ set of news and content you ingest and turn into meaningful information. In my case, I try to carefully tune the algorithm of what I access via LinkedIn (my primary social network) and via newsreaders like Apple News and Flipboard. No one system is perfect. I’ve also created a special AI Explorer’s Forum for myself and key colleagues across disciplines ranging from artists to financial analysts to CTOs and startup founders. My community is heavily populated with people outside the Silicon Valley bubble, including people I work with in South Africa, Mexico, and Germany, and fields ranging from AI law and ethics to social impact. We share "sightings from the field"—things that upped our skills or thinking—with a brief annotation of the "a-ha" moment they caused for us.
I also prompt tools like ChatGPT Plus to do deep web research on things I have questions about, such as the current capabilities of the many tools out there, and return answers only from reputable sources. For me, that’s places like The New York Times, Ars Technica, MIT Technology Review, and peer-reviewed academia. I also listen to a few smart voices, like Writer CEO May Habib and ethicists like Steven Tiell at SAS. It cuts through a lot of the hype and allows me to distill things into as ‘normalized’ an interface as possible (simple text, without the influence of popup ads, etc.).
Remember not to get lost in multi-exclamation-point hype. I cannot emphasize enough that it's VERY easy to get caught up in the hype and fear of missing out (#FOMO) until you suddenly experience AI anxiety. I take plenty of time away from AI topics and practice good digital hygiene like ‘no internet in bed,’ turning off most notifications, etc., and I recommend you do the same. Otherwise the deluge of information can distract us from the big picture or paralyze us with overload.
Staying confident with AI tools and trends is important. But tools ≠ trends. They’re not the same: the available toolset keeps improving, but mindsets are also changing all the time. And they’re not always getting more pro-AI: firms like Klarna, which touted a massive focus on generative AI and AI agents (with human job casualties), are now walking back some of their layoffs or hiring freezes. The turmoil around AI as different companies (and pundits) experience various points of the hype cycle is further complicated by changing political and regulatory positions. Which use cases get the most focus also changes suddenly, as new technologies are revealed or as case studies come out.
I experiment quite a bit with tools. Not just chat-centric interfaces like those offered by ChatGPT, Gemini or Writer (some of which also offer in-document tools, canvases or "AI studios"), but also the creation of entire ‘augmented’ workflows using unifiers like Zapier, Airtable and other 'switchboards' that let my content and work pass from one system to another.
This helps me reality-check the hype rather than fixating on just one tool—and I still often come to the conclusion that AI is ‘not there yet’ for completely automating high-value tasks like communicating with my most important clients. Instead, I often find I can augment myself by scripting a few laborious steps, like a first pass of translation, extracting text from my slides and PDFs, or refining a photograph, and then finishing things myself to make sure they're up to my standards of quality. That may change in the future, but at least for my work, we’re not quite there yet. And staying involved in the process of a workflow helps me see the limits (and changing capabilities) of the tools out there.
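To make "scripting a few laborious steps" concrete: extracting text from slides, for instance, often needs no AI at all. A .pptx file is just a zip archive of XML, so a short standard-library script can do the first pass. This is a minimal sketch, assuming the standard Office Open XML layout (slides at ppt/slides/slideN.xml, visible text in <a:t> elements):

```python
import re
import zipfile
import xml.etree.ElementTree as ET

# DrawingML namespace used for text runs inside slide XML
A_NS = "{http://schemas.openxmlformats.org/drawingml/2006/main}"

def extract_slide_text(pptx_path: str) -> list[str]:
    """Return the text of each slide in a .pptx file, in slide order."""
    texts = []
    with zipfile.ZipFile(pptx_path) as z:
        # Slides live at ppt/slides/slide1.xml, slide2.xml, ...
        slide_names = sorted(
            (n for n in z.namelist()
             if re.fullmatch(r"ppt/slides/slide\d+\.xml", n)),
            key=lambda n: int(re.search(r"\d+", n).group()),
        )
        for name in slide_names:
            root = ET.fromstring(z.read(name))
            # Visible text sits in <a:t> elements; join the runs per slide.
            runs = [t.text or "" for t in root.iter(f"{A_NS}t")]
            texts.append(" ".join(runs))
    return texts
```

From there, I still read and polish the extracted text myself; the script just removes the copy-paste drudgery.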
So what can you do to build confidence? I suggest focusing on the basics: practicing lots of ways to get computers to help you get things done. And not just with generative AI.
No matter what tools end up “winning,” you need to be good at translating back and forth between humans and machines. Just like you might adapt your language and the amount of context and detail you share with employees at different points in their career journey or with different skillsets (like an intern vs. a CEO, or a designer vs. an engineer), you also need to adapt your language with generative AI interfaces. There are countless articles on the best prompts and prompting frameworks for articulating your challenges to AI tools in a way they can understand.
As I’ve discussed many times, “computational thinking” is a vital form of critical thinking for interacting with AI. Why? Because it pushes us not just to break down complex human needs into smaller, simpler problems, but also to create distilled models of what we’re trying to accomplish—an abstraction of a system that leaves out unnecessary parts.
This connects to the practice of unlearning: it turns out that we are very good at adding things (steps, information) to problem-solving, but most humans don’t have a good reflex for subtracting to solve challenges.
Behind the scenes of AI, very expensive (and energy-intensive) processes are trying to help us subtract and simplify. But to regularly get good results from AI, it’s always best to start with a well-formed problem and practice this part of critical thinking ourselves—because it gives us a chance to unlearn.
This kind of thinking is essential to management of our “machine coworkers” and is not specific to any one tool. Getting good at using just one particular tool can be a little like getting good at interacting with one particular person: it’s helpful only until that person leaves (or you do). But if you get good at communication in general, you develop leadership superpowers. Same's roughly true with AI tools.
What does this look like in practice with AI? I suggest you take workflows you regularly have to do and reverse-engineer the steps they require, the dependencies they have, and the assumptions everyone is operating with. From there, challenge each step and assumption: is it still useful? Necessary? Correct? Once you’ve done that, you can begin to articulate steps for a machine to follow. It might be helpful to do this in a visual diagramming app like Miro, or perhaps as a bulleted list.
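Once that reverse-engineering leaves the whiteboard, it can even be written out as plain data. Here is a sketch using a hypothetical weekly-report workflow (every step name and assumption below is invented for illustration), with Python's standard library ordering the steps so each dependency comes before the step that needs it:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# A hypothetical workflow: each step lists what it depends on, plus the
# assumption behind it, so each one can be challenged on its own.
workflow = {
    "pull_metrics":  {"deps": [],
                      "assumption": "dashboard numbers are final by Friday"},
    "draft_summary": {"deps": ["pull_metrics"],
                      "assumption": "same template as last quarter"},
    "translate":     {"deps": ["draft_summary"],
                      "assumption": "a German version is still needed"},
    "send_report":   {"deps": ["draft_summary", "translate"],
                      "assumption": "email is still the right channel"},
}

# The dependency map doubles as input to a topological sort, which
# yields an order a machine (or a checklist) could follow.
order = list(TopologicalSorter(
    {step: info["deps"] for step, info in workflow.items()}
).static_order())
print(order)  # each step appears after all of its dependencies
```

The point isn't the code itself; it's that once the steps and assumptions are explicit, you can delete a stale step (or hand one to a machine) without breaking the rest.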
I suggest a quick hack: ask a general chat-based tool you like, such as Claude or ChatGPT, to act as your teacher and coach. Have it ask you additional questions about your workflow and intentions, as well as anything else it might suggest to someone in your situation. Then start asking it for the simplest and most reliable ways to accomplish what you want. If you have access to a feature like ChatGPT’s Deep Research, ask it to look for the most effective ways to do this using the most up-to-date features of AI and other tools (like automation and API integrators, such as Zapier or Microsoft Copilot Studio or Power Automate). Make sure you’re not forcing it to look only at generative AI tools which, while easy, are often less reliable than other toolsets, and which might be suited for only some parts of your work.
This “practice” is something you can use in the course of your work, or you can do it more like “running drills” as you would with language learning or sports training. The open-ended questions help the AI tools know they have permission to look for many different solutions. Is it perfect? No. But it will help introduce you to new concepts and tool options, as well as how to think about problem-solving, in a way which is transferable between many different situations.
The other practice I have with tools is to have a living project that I use regularly, but where I am the sole owner of the project (or on a very small, casual, experiment-friendly team). For me, this is working on the content platform I use to organize, edit and deploy much of my writing, which I’ve built in (no-code) database tools over the years. Each time I need to do something new with content (such as creating an FAQ or translating a blog post to a different medium or changing an article’s tone for a different audience), I try to see if there’s a way I can augment my work (or even fully automate it).
This is the work of breaking down a need into its smaller, simpler sub-problems (decomposition). Over the years, I’ve developed frameworks (also known as abstractions) to help me organize my work. I’ve also developed many formulas, and now prompts, which save me a lot of time (components) which I recombine in many different ways to generate results (algorithms). This allows me to learn in the context of something I’ll keep improving over time, but also to experience a bit of the challenges of “product management” (and the great feeling when things work). It teaches me to think more like a software developer and gives me a low-stakes, low-compromise place to practice. Outside work, this could be organizing a collection (such as your books or clothes) or running a micro-business (like eBay sales) where your boss is not pressuring you for results.
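Here's a toy version of that vocabulary in code (the functions and content are invented for illustration, not my actual platform): small components, each doing one well-defined step, recombined by a simple algorithm to produce something like an FAQ.

```python
# Component: a crude first pass that removes markdown emphasis markers.
def strip_markdown(text: str) -> str:
    return text.replace("**", "").replace("*", "").replace("_", "")

# Component: format one Q&A pair in a consistent FAQ layout.
def to_faq_entry(question: str, answer: str) -> str:
    return f"Q: {question}\nA: {answer}"

# Algorithm: recombine the components into a full FAQ document.
def build_faq(pairs: list[tuple[str, str]]) -> str:
    return "\n\n".join(
        to_faq_entry(strip_markdown(q), strip_markdown(a)) for q, a in pairs
    )

faq = build_faq([
    ("What is an *information diet*?",
     "A curated set of sources you deliberately choose."),
])
print(faq)
```

The same two components could be recombined into a glossary builder or a tone-shifting pipeline tomorrow; that reusability is what makes the practice compound over time.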
Whatever you do, remember that experimentation can be fun—if the stakes aren't too high. Create the space in your work and life to play with these new tools, and it will both lessen your anxiety and radically improve your ability to meaningfully contribute to the future of work (and secure your space in it).
-MJ