Signals and Field Notes #3
Learning and Innovation Observed
Another note about how and why I do this: There are a number of REALLY smart people who have a laser focus on #learninganddevelopment and the world is better for their focus. If I’m being mean to myself, I’d say my focus is scattered. If I’m in a better mood towards myself, I’d say I have a deep appreciation for context. Here’s a little story to illustrate.
One of the biggest field archaeology jobs I was ever on was working on a Phase I archaeological survey at Fort Benning in Columbus, GA. It’s summer. In Georgia. Like 100 degrees F with 100% humidity. You’re digging your square-meter “shovel test units” (aka holes) through mostly red clay. You’re super psyched when you find a shard or a fragment the size of a fingernail! Now, we’re digging on Lawson Air Field and a dump truck stops next to the fence line and the driver motions us over and asks if we’re archaeologists…we say yes and he hops back in and speeds off (in relative dump truck terms). He comes back and shows us this box with some of the most amazing artifacts I’ve ever seen outside of a museum and he asks us “what can you tell me about these?” See, all he does is watch as the dirt pours out the back of his truck and when he sees something interesting, he picks it up. That’s cool - he’s not digging them out from anywhere, he’s not stealing them from anywhere, I kind of think he’s helping save them - otherwise they’d just be in this landfill. But we can’t tell him much about the artifacts - because they were out of their context. We can’t see how deep they were, what they were adjacent to, if they were near water or on an elevation. I feel the same way here.
When I write about or point to something like any of the stories here, it’s because I’m looking to build context around the two main focus points I care about - learning and innovation. Those things take place within rich contexts and I have this notion that if we ignore those clues, signals maybe, that are coming from that context, we have a much shallower and faded story to tell. So I’m thankful for those with that laser focus. I think maybe with that focus and any context I can supply, together that makes for a better story.
Project Ava is an AI gaming coach that also runs your day: This post is a good example of a kind of meta Signal (no, not that META). What I mean is that I don’t really think this particular product/service will be dominant or might even survive. It is an interesting signal though, in that I can see how some of these capabilities could survive and be impactful in the future. A little breakdown…
> “First unveiled at CES 2026, Razer is previewing Project Ava, a 5.5-inch 3D hologram desk companion that’s built to sit beside your keyboard and stay involved.” > > Interesting to think about having a hologram on your desk…makes me start thinking about how much control I’ll have over the appearance…what if the hologram could look like your mom, dad, sibling, maybe even a pet?
> “It’s designed for Windows, with a direct USB-C connection that supports “PC Vision Mode” so it can analyze what’s on your screen with minimal latency.” > > Read that last phrase again…this is watching, in real-time, what you’re doing or what’s happening on your screen. Um, performance support anyone? Learning in the flow of work? Making external training systems that require me to leave my work and go to another site, absolutely obsolete?
> “Ava’s boldest hook is live coaching during gameplay. The demo framing centers on tactical callouts, like when to call for air support, where to aim, and when to break line of sight and change firing style for mid-range fights.” > > Have you ever played a first-person shooter? Do you have an idea of how fast things are moving? This system is supposed to be able to react fast enough to give you useful information not just in the moment but in a way that lets you make different choices. Do you think this means that a system like this would be able to react fast enough in your enterprise situations?
So there’s a little insight into how I process these items. I try to unpack a lot of the implications that are included in these items because these weak signals can become very strong very fast.
An A.I. Start-Up Says It Wants to Empower Workers, Not Replace Them: > “The new company, Humans&, has embraced the notion that A.I. should empower people rather than replace them. The founders said their goal was to build software that facilitated collaboration between people — like an A.I. version of an instant messaging app — while also helping with internet searches and other tasks that suit machines.” > > This is an interesting signal not so much for the technology, but because it makes me wonder if the backlash against the potential job loss will push AI product development in different ways.
RIP to All the Tech We’ve Lost in 2025: I’m not particularly interested in the specific tech we lost but in why we “lost” it. And lost is so passive - maybe a better way to phrase it would be - “here’s the tech that didn’t make it last year and here are the reasons why.” That’s what post-mortems should be looking for - the why, not the what.
Maslow’s hierarchy of AI fluency training: I’m a fan of frameworks - especially when we’re attempting to build some shared understanding of a new thing or dynamic. The only problem I have with frameworks is when people accept them as settled science and do things like build businesses around them. Myers-Briggs leaps to mind. It’s pseudoscience. It’s not only based on spurious foundations, but it’s got a huge business built around it that is a huge blocker to any critique of the model. L&D has a tremendous challenge in this area. Donald Clark really did a service to the L&D field by reviewing, I think, around 280 learning theorists and posting critiques and a great analysis of the work behind the theories. All that being said, I like this framework - this part in particular:
”Organizations succeeding with AI transformation share common infrastructure:
Cohort-based learning for peer accountability and shared discovery
Workflow integration that brings training into daily work contexts
Role-specific pathways rather than generic content
Safe experimentation environments (AI sandboxes)
Progress tracking that measures fluency, not just completion”
Fluency, I think, is key. If we think about the fluency effort in terms of learning a language, that puts a different spin on how we regard the level of effort needed. It’s not just learning a skill; you have to learn a whole new vocabulary about those skills. Just don’t build a whole business around it or act like it will never change.
France to ditch US platforms Microsoft Teams, Zoom for ‘sovereign platform’ amid security concerns: Seems like a key signal - sovereign models for AI and sovereign software > > “The aim is to end the use of non-European solutions and guarantee the security and confidentiality of public electronic communications by relying on a powerful and sovereign tool,” said David Amiel, minister for the civil service and state reform.”
Forget tutorials. AI professor mode is already built-in: Here’s an ugly truth for #LearningAndDevelopment: AI doesn’t have to be that good to beat us. The worse news? It’s getting better - fast. Now that low bar is not the result of L&D folks not knowing what they’re doing or not trying to do the best for their customers - I believe that the vast majority of L&D folks are out there doing their flat-out best - the problem is multi-layered. Let’s look at AI - it’s been out, in popular form, for about three years now. I’m going to wager that undergrad and grad school curricula for L&D don’t exactly change that quickly. That’s not to say that there aren’t individual programs and professionals out there being adaptive and flexible; there are. The vast majority, though, are still teaching the same principles and methodologies that have been taught for decades. Then there are the corporate, non-L&D folks. They’ve been trained too. They’ve been trained that L&D activities are a waste of time. I promise you, no senior leader is touching the LMS for anything but compliance training. So we’ve got a group who have been trained to produce a thing, we’ve got another group trained to expect a certain thing, and then we have individual learners who are out there doing and using whatever they can to get their job done.
Into this mix walks AI: a system that can draw on huge bodies of knowledge, process, and expectations, and which can, really quickly, produce something that looks close to what it has been taking humans a lot longer to make.
What’s the good news? The good news is that L&D has it in its power to redefine itself both to its customers and to senior leadership. The rough part? The redefined role won’t look a lot like the roles that are out there now. So we all have to work together - professors and depts of instructional design have to let go of some beloved ideas and theories and understand the world has shifted and people need to learn different ways forward. Corporate L&D needs to make a consistent effort to redefine how senior leadership sees L&D and what it delivers. AI can do the job we’re doing now, but it can’t do the job that we can invent.
Everyone Really Needs to Pump the Brakes on That Viral Moltbot AI Agent: So maybe just wait a week > > “Tech investor Rahul Sood pointed out on X that for Moltbot to work, it needs significant access to your machine: full shell access, the ability to read and write files across your system, access to your connected apps, including email, calendar, messaging apps, and web browser. “‘Actually doing things’ means ‘can execute arbitrary commands on your computer,’” he warned.” > > This, coupled with the new Gemini in Chrome release from Google this week, and adding in Claude in Chrome, means that autonomy in the browser is a new attack surface. I’m waiting to see how the legal side gets mapped out, as well as what happens when an autonomous browser goes haywire. There is a ton of potential productivity to be explored here, but I’d say that caution is also advised. > > See also some AI in the Browser backlash - I used one simple script to remove AI from popular browsers (including Chrome and Firefox).
AI Is Changing How We Learn at Work: So I kinda love this article because it points out a real way we’re already failing in terms of AI at work. Since I started this newsletter, I’ve been talking about not confusing our value with our activity. I think we’re running down that road now with AI. AI is letting us do so many things faster, and we assumed, or at least made the ask, that we’d spend that saved time on higher-quality, higher-value activities. As this article points out, though, I don’t think we are. I think we’re filling that saved time with more of the same. Instead of one doc, we can now crank out three or four in the same time, but we’re not spending time deeply reading them.
The article also points out where we (Learning and Development, anyway) do a bad job, and it’s with the most important parts. The article points out that we’re accelerating learning but not necessarily development. I don’t know that I’ve seen a lot of coursework focused on experiential learning, yet leaders report that it was “experiential learning that shaped their expertise, resilience, judgment and their identity.”
I think L&D has been so concerned with, or focused on, teaching the how, the process (understandable given our post-WWII roots), that we haven’t cultivated the parts of the ecosystem that can really develop the skills that separate us from AI (e.g. curiosity, empathy, identity, judgment).
So maybe that’s the real challenge - changing the ecosystem to recognize that we can learn the how faster than ever before, and that maybe we need to use that reclaimed time to learn those skills that AI can’t replicate.
Dictionary of the Oldest Written Language–It Took 90 Years to Complete, and It’s Now Free Online: Please tell me there are some other folks out there who have read Snow Crash and aren’t sure this is a good idea. :-)
Motto By Langston Hughes
I play it cool
And dig all jive
That’s the reason
I stay alive.
My motto,
As I live and learn,
is:
Dig And Be Dug
In Return.