EVERYTHING you need to know about AI, Innovation, and Learning & Development
This is the ULTIMATE cheat sheet...for the moment
You ready?
As of today (probably like Wednesday, the 13th of December, 2023), the single most important thing you need to remember about AI, innovation, and L&D is... that no one knows what will happen next. How can that be, with so many studies and “experts” and ultimate, one-stop, all-you-need cheat sheets out there on LinkedIn??
While it is true that “AI” (used here as shorthand for all the component parts: machine learning, neural networks, reinforcement learning, deep learning, natural language processing, etc.) has been in development for decades, ChatGPT came out barely a year ago (Nov 30, 2022). This past year hasn’t been devoid of new developments in this area either. I mean, you have to talk about the OpenAI drama that roiled the space, not because I care what caused it but because it demonstrated the utter fragility of the current ecosystem. Overnight, the biggest player in the space almost went away. That’s crazy. We also got Claude, Gemini, Meta AI, Bing+AI, Midjourney, Stable Diffusion, and a partridge in a pear tree. Also, while I know it’s absolutely critical, the environmental impact of these technologies will have to be the subject of a later note.
I am really reminded of Amara’s Law, namely that “we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Roll that around in your head for a minute. Now look at all the somewhat justifiable hype around AI right now. Now read Amara’s Law again. We are overestimating the effect of AI in the short run and underestimating the impact of AI in the long run. Whoa. Underestimating? You sure? But Mark, you say, we’re already seeing reports about productivity increases in orgs that are deploying AI. You bet! And those are productivity gains because, so far, we’re doing what we were already doing, just using AI to do it faster or more efficiently. Regarding innovation, we’re seeing a similar dynamic: using AI to do what we have been doing, like scenario creation, but doing it faster or more efficiently. I will say this, I LOVE custom GPTs like the Future Wheel. So cool. Don’t get me wrong, those gains are impressive and we should pursue them, but we need to be of at least two minds here, maybe more.
One focus should be on doing exactly what we’re seeing done - applying new tools to existing processes. We should definitely be using AI and AI-enabled tools to look for more efficient and effective ways to do what we do now. Even that, though, comes with 2nd and 3rd order effects. Just think about it - if we start getting faster and more efficient, what do we do with the time and money we save? We need to be gaming this out at the team, org, and enterprise levels. What I sincerely hope is that we actually use this moment to create roles and responsibilities for employees and teams that provide greater value to the customer, and don’t just use the moment as a chance to cut labor costs. I think that’s shortsighted. This means, though, that we need all the folks in the room - IT, Ops, Sales, HR, CS - and this kind of cross-functional collaboration is not something we’ve been tremendous at.
Another focus needs to be on 2nd or 3rd Horizon experimentation (I don’t like assigning year ranges to those horizons any longer - just think close, middle, far). Here is where it gets trickier. The experimentation needs to look not just further out in time but deeper into the foundational processes that we have in place. This will require a different level of discipline and involvement by leadership than before. The discipline will have to be in ensuring that we create meaningful criteria that, while not timeless (bridge too far), at least stand some distance outside the current technological turmoil, so that we can judge all the advances that have appeared or soon will by a meaningfully similar standard. So we need success criteria - we also need success thresholds. I think it’s important, especially when trying out potentially game-changing technology like AI, that we don’t assume binary criteria for success - it can’t be “does it work or does it not work?” We must first ask whether it works, and then ask whether it improves current conditions (margins, sales, learning outcomes) by a margin sufficient to argue for its use. The final piece of this focus needs to include thinking about 2nd and 3rd order effects. If you are deploying a new technology that could significantly rewrite how a number of jobs are performed, then of course you’ll want SMEs from that job family in the room. Let’s also be sure to include IT - there will be implications there. Can our deployed tech handle this? Can we run it on our laptops? What about bandwidth? Firewalls? Do we need to re-look at the refresh rate on any of our deployed enterprise tech? We definitely need to include HR. I mean, if you’re talking about the future of work and changing people’s jobs, are you seriously not going to include the people who will have to manage that future? How will people be rated and assessed? How, and by whom, will the job descriptions for these new roles be written? How can we use this new tech to change our recruiting, hiring, and onboarding processes?
Now, I’m not suggesting that people aren’t already working on all these parts. I know they are. What I’m advocating for is that we coordinate these efforts to a degree we really haven’t had to yet. Why is that? Well, imagine two lines that are very close to each other beginning to diverge, but very slowly, and you have a number of chances along the way to bring those lines back together. That’s the past - we had changes that could impact one area of the business, and the rest of the org would have time to catch up or figure out their parts before the divergence became too great. That’s not where we are now. Now, with things moving so quickly, you can blink and the lines will be far apart, and it will be a Herculean effort to re-align them. So let’s avoid all those messy lines and just coordinate now.
The 3rd focus area should be on watching business developments in the space. I mentioned the kerfuffle with OpenAI and the fragility it demonstrated. Below the surface of that, though, are the pricing and cost structure changes that aren’t headlining on CNBC. One of the reasons I started this note off the way I did was as a warning to the CIOs and CXOs who will be sitting in a room with salespeople: I hope you will bring a HUGE grain of salt with you to that meeting. Most of those companies don’t know what the backend compute will cost 6 months from now, much less 12 months from now. The EU just passed regulations on AI, and I promise you that in the two days since, no one has pushed the impact of those regulations down to the sales force in the field, so we might not even know the right questions to ask.
Look, I don’t want this to seem all doom and gloomy. This is an exciting time. One of the most exciting times relative to the potential for positive change that I can remember. So let’s be excited. Let’s go exploring. Let’s experiment and try stuff out and think big and around corners. Let’s also make sure we have the right teams together to go exploring (there’s a reason parties in D&D aren’t all one class). Happy hunting!