Audit Yourself Before Automating
Insight from GrantBot
Automation tools like Zapier, Make, and Airtable exist so anyone can streamline their processes without the learning curve of code. Combine that with a clever prompt and you can imagine an AI-led 80/20. Sure sounds nice.
However, many teams aim too high, get frustrated with the output, and abandon the project for the next shiny object.
Take producing content or sending follow-up emails. Generating this text with AI is a great goal. But getting the automation running requires matching your writing style and tone. Add in the choice between OpenAI Assistants, custom GPTs, and classic prompt engineering, and where do you even begin?
The bigger productivity gain is at the atomic level: simple binary decision-making. One step up? Categorization.
A better goal is to have AI decide whether incoming data requires escalation against a set of criteria, or to categorize an email and forward it to the responsible team. Both solutions free a human to execute high-leverage work.
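As a concrete illustration, here is a minimal sketch of that binary escalation decision, assuming the official `openai` Python SDK and a hypothetical set of support-ticket criteria. Inside Zapier or Make, the same prompt would live in an OpenAI/ChatGPT module rather than standalone code.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical escalation criteria -- replace with your own rules.
ESCALATION_CRITERIA = """
Escalate a ticket if ANY of the following are true:
- The customer mentions a refund over $500
- The customer threatens to cancel or churn
- The issue blocks the customer from using the product entirely
"""

def should_escalate(ticket_text: str) -> bool:
    """Ask the model for a single yes/no decision against fixed criteria."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        temperature=0,        # keep the binary decision deterministic
        messages=[
            {"role": "system",
             "content": f"You are a triage assistant.\n{ESCALATION_CRITERIA}\n"
                        "Answer with exactly one word: YES or NO."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

if __name__ == "__main__":
    ticket = "I want my $800 back or I'm cancelling my subscription today."
    print("Escalate?", should_escalate(ticket))
```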
At the end of the day, it’s deep work we need to protect. Using OpenAI to make the simplest decisions reduces the need to context switch, which lets us stay in leveraged work and produce more goods and services for our customers.
So when you’re thinking about your AI implementation strategy, pull a James Clear and go atomic:
Audit: Where are humans making simple yes/no decisions based on an input? Where is data categorized and passed to the next stakeholder?
Inspiration: return/refund requests, sales rep inbox, customer segmentation
Build: Create a zap that triggers when this data is produced. Run it through OpenAI or a ChatGPT module in Zapier or Make (the sketch above shows the standalone equivalent).
Binary = prompt against a set of criteria
Categorization = this module
Test: The automation working on your first attempt is not a test. Run your prompt against 7-10 different samples so you know it behaves as expected (see the test-harness sketch after this list).
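To make the Test step concrete, here is a minimal sketch of a test harness, assuming the `should_escalate` helper from the earlier sketch (the `triage` module name is hypothetical) and a hand-labeled set of sample tickets. The goal is simply to check the prompt against known answers before wiring it into production.

```python
from triage import should_escalate  # the yes/no helper from the earlier sketch (hypothetical module name)

# A small labeled sample set -- hypothetical tickets paired with the decision
# a human would expect. Aim for 7-10 covering both outcomes and edge cases.
SAMPLES = [
    ("Requesting a $900 refund, this is unacceptable.",       True),
    ("If this isn't fixed today I'm cancelling my account.",  True),
    ("The app crashes on launch, I can't use it at all.",     True),
    ("How do I change my billing email?",                     False),
    ("Can I get a $20 credit for last week's downtime?",      False),
    ("Love the new dashboard, just wanted to say thanks!",    False),
    ("My export is missing one column, not urgent.",          False),
    ("I need a refund of $650 and an explanation.",           True),
]

def run_tests() -> None:
    failures = 0
    for text, expected in SAMPLES:
        got = should_escalate(text)
        status = "PASS" if got == expected else "FAIL"
        if got != expected:
            failures += 1
        print(f"[{status}] expected={expected} got={got} :: {text[:50]}")
    print(f"\n{len(SAMPLES) - failures}/{len(SAMPLES)} samples matched expectations")

if __name__ == "__main__":
    run_tests()
```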
This is a more scalable approach to AI adoption. Your productivity gains start at the atomic level by reducing context switching.