Saw this in a recent post by Ross Dawson: “All-hands custom agent development - McKinsey has launched an internal platform allowing employees to design their own AI agents using natural-language instructions, enabling tailored solutions for individual productivity needs. While this offers significant efficiency gains, safeguards are essential to prevent unintended risks. A central team ensures all agents comply with cyber, legal, risk, and data policies before deployment. This approach not only enhances productivity but may also shift work from offshored teams to smaller, highly skilled in-house teams empowered by AI agents, delivering greater value through expertise amplified by automation.”
Ross picked this up from a Bloomberg article that covered other uses of AI agents in large firms. Not shockingly, I had a couple of thoughts, especially when it collided with this excellent post from Donald Taylor about “the Beige Wave.” It struck me that one avenue for L&D to look into is helping the org learn. Sounds obvious, right? Here’s the thing - before GenAI hit, low/no-code authoring platforms were already gaining speed; GenAI just multiplies that.

Think about a new hire trying to learn something. Chances are new hires need to know roughly 50% of the same stuff no matter what team they’re on. So on day one, right on that first laptop, we can give them an agent they can chat with - instead of a miasma of old internal wiki links, they actually get an answer. For the other 50% that’s team- or job-specific, we can build agents to help there too. We can also build agents on the backend to make sure all that content stays current. Then we could use something like XCL from Build Capable to track engagement with the content without having to load it into an LMS.
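I haven’t dug into XCL’s internals, so take this as a sketch of the general pattern rather than its actual interface: tracking outside an LMS usually means sending xAPI statements to a Learning Record Store (LRS). Here’s a minimal, hypothetical example of what an onboarding agent could emit each time it answers from a wiki page; the endpoint, credentials, email, and URLs are all placeholders:

```python
# Hypothetical sketch: log agent-to-content interactions as xAPI statements
# sent to a Learning Record Store (LRS), no LMS required. The endpoint,
# credentials, and URLs below are placeholders, not XCL's actual API.
import requests
from datetime import datetime, timezone

LRS_ENDPOINT = "https://lrs.example.com/xapi"   # placeholder LRS
LRS_AUTH = ("lrs_user", "lrs_secret")           # placeholder credentials

def record_agent_interaction(user_email: str, doc_url: str, doc_name: str) -> None:
    """Record that the onboarding agent answered a user from a given doc."""
    statement = {
        "actor": {"objectType": "Agent", "mbox": f"mailto:{user_email}"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/experienced",
            "display": {"en-US": "experienced"},
        },
        "object": {
            "id": doc_url,
            "definition": {"name": {"en-US": doc_name}},
        },
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    resp = requests.post(
        f"{LRS_ENDPOINT}/statements",
        json=statement,
        auth=LRS_AUTH,
        headers={"X-Experience-API-Version": "1.0.3"},
    )
    resp.raise_for_status()

# e.g., the agent just answered a new hire's benefits question from the wiki
record_agent_interaction(
    "new.hire@example.com",
    "https://wiki.example.com/benefits-overview",
    "Benefits overview",
)
```

Statements like that accumulate in the LRS, so L&D can see which content new hires actually lean on, and which pages the backend agents should flag as stale, without routing anything through an LMS.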
I know this sounds like something that’s not very instructional design related, but I think it’s actually spot on. See in that first quote how, when folks build these agents, they have to be vetted by cyber, legal, risk, and so on? Why isn’t L&D on that list? Why shouldn’t we build the guardrails and vetting processes that ensure agent interactions map to the training/performance outcomes we want to support, while still letting agents be created all over the org? Now, I posed something like this earlier and Clark Quinn rightly challenged me on it. He might challenge me again, but I think this is where our middle ground might lie: what if we teach the org to create agents to automate tasks, including agents that help people learn, and part of that coaching is raising the bar across the org on what we know best - how people learn? Some might see that as engineering ourselves out of a job, but I think it’s the opposite. If L&D can make itself indispensable as instructors/coaches/guides helping the org not only learn but learn how to learn, that’s a much more productive path than trying to stay ahead of AI in producing compliance content.