This sentence is a prime example of what I mean when I say that L&D could be taking the lead in creating content that will be needed across the org, but especially by senior leadership. Just read this > > “The last member of the Tülu 3 family demonstrates that our recipe, which includes Reinforcement Learning from Verifiable Rewards (RLVR), scales to 405B – with performance on par with GPT-4o, and surpassing prior open-weight post-trained models of the same size including Llama 3.1,” Ai2 said on X. > > Now tell me how many CIOs, CFOs, or CEOs are able to make heads or tails of that. How do you evaluate a vendor who sits down and says that? The use of AI in the enterprise requires a greater diffusion of technical knowledge than almost any other tech I've seen. All of it is changing so quickly that if you don't understand the basic, foundational parameters, you're going to get lost out there. From the article: US-based Ai2 releases new AI model, claims it beats DeepSeek. See also: Ai2 says its new AI model beats one of DeepSeek’s best. See also: How DeepSeek ripped up the AI playbook—and why everyone’s going to follow its lead: This is another good read that goes into how DeepSeek was able to do what it did. See also: 1,156 Questions Censored by DeepSeek. Oh, and file this under “irony is dead” > > OpenAI sees the downside of ‘fair use’ now that DeepSeek may have used OpenAI’s data.
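Since the whole point is that leaders need the foundational vocabulary, here is the gist of “verifiable rewards”: the training signal comes from a check anyone can re-run mechanically (does the answer match? do the tests pass?) rather than from a human’s subjective rating of the output. A minimal, hypothetical sketch in Python; the function name and toy example are mine for illustration, not Ai2’s actual recipe.

```python
# Hypothetical sketch of a "verifiable reward": the score comes from a
# mechanical, auditable check, not from a human judging output quality.

def verifiable_reward(model_answer: str, expected: str) -> float:
    """Return 1.0 if the model's final answer matches the known-correct
    answer exactly, else 0.0 -- a reward anyone can re-run and verify."""
    return 1.0 if model_answer.strip() == expected.strip() else 0.0

# Example: a math problem with a checkable answer.
reward = verifiable_reward(model_answer="108", expected="108")
print(reward)  # 1.0 -- the model gets credit only when the check passes
```

That is the “verifiable” part; the reinforcement learning part is just that the model is nudged toward outputs that earn the 1.0.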
‘Human Authored’ Book Certifications Are a Thing Now Thanks to AI: Books won’t be the last medium to get this treatment. Next up (probably): music and images. > > “Authors will be able to create a “Human Authored” certification on the Authors Guild website. The certification will be available through a public database, allowing readers to easily verify whether a book was written by a human or AI. The program is limited just to books written by Authors Guild members at the moment, but there are plans to expand it to include non-members and books with multiple authors.”
Turbocharging Organizational Learning With GenAI: I really like this article. It jumps past Theory X/Y and even Z to Theory A. It’s got case studies and looks at a new management model, including what they call “Citizen programmers” - which is a nice turn of phrase for what I’ve mentioned as internal creator economies. Anyway, lots to digest here. > > “In this article, we’ll argue that leaders need to embrace generative AI as a new organizational capability, and not just because it automates a variety of tasks economically. Combined with traditional AI, generative AI expands the scope of potential improvement in many processes and decisions and the ease with which this new knowledge can be applied.”
We’ve lost our respect for complexity: This. So much this. As an anthropologist and historian, this resonates in my very bones. > > “My trust in people rises immensely these days when they have the ability to sincerely say “I don’t know”. Being all-knowing shouldn’t be the coolest thing to be since, given that it’s impossible, anyone who comes off that way is, in a sense, lying. Respect for complexity? Now that is cool.”
Product Market Fit Collapse: Why Your Company Could Be Next: Can’t imagine how this article or theory could have any relevance to the Learning and Development community. > > “Historically, product market fit was once viewed as a milestone to achieve that evolved gradually over time. We're now witnessing something unprecedented: established products with seemingly strong product market fit experiencing sudden collapses in their core growth models. From Chegg's 87.5% valuation drop to Stack Overflow's traffic decline, these disruptions are what I call ‘Product Market Fit Collapse.’”
LinkedIn’s video push appears to be working in 2025: I just remember a lot of job seekers saying “It’s OK not to have any kind of social graph that might, say, point me to people I have dozens of mutual connections with but am not connected to, or a machine-readable standard for resumes so people don’t have to re-enter their info all the time” - but I’m glad we now have huge gobs of video of people “sharing their insights.”
Chatbot Software Begins to Face Fundamental Limitations: This is a great read. It’s technical without verging into too thick a jungle of jargon. > > “Einstein’s riddle requires composing a larger solution from solutions to subproblems, which researchers call a compositional task. Dziri’s team showed that LLMs that have only been trained to predict the next word in a sequence — which is most of them — are fundamentally limited in their ability to solve compositional reasoning tasks. Other researchers have shown that transformers, the neural network architecture used by most LLMs, have hard mathematical bounds when it comes to solving such problems.”
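To make “compositional task” concrete: no single clue in an Einstein-style riddle answers the question on its own; you only get the answer by combining partial deductions. A toy, brute-force sketch in Python (a made-up three-house puzzle, not the actual riddle) shows what that composition looks like.

```python
# Toy three-house puzzle in the spirit of Einstein's riddle (invented for
# illustration). "Who owns the cat?" can't be read off any single clue --
# the answer only emerges from combining them, which is what makes the
# task compositional.
from itertools import permutations

people = ["Ana", "Ben", "Cara"]
pets = ["fish", "dog", "cat"]

for owners in permutations(people):        # owners[i] lives in house i
    for animals in permutations(pets):     # animals[i] lives in house i
        if owners[0] != "Ana":             # clue 1: Ana lives in the first house
            continue
        if animals[1] != "dog":            # clue 2: the dog lives in the middle house
            continue
        # clue 3: Ben lives immediately to the right of the fish owner
        if owners.index("Ben") != animals.index("fish") + 1:
            continue
        print(dict(zip(owners, animals)))  # {'Ana': 'fish', 'Ben': 'dog', 'Cara': 'cat'}
```

The brute-force search happily composes the clues; the research quoted above is about why next-word-prediction models struggle to do that same composition reliably.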
“Just give me the f***ing links!”—Cursing disables Google’s AI overviews: I simply can’t love this enough > > “If you search Google for a way to turn off the company's AI-powered search results, you may well get an AI Overview telling you that AI Overviews can't be directly disabled in Google Search. But if you instead ask Google how to turn off "fucking Google AI results," you'll get a standard set of useful web suggestions without any AI Overview at the top.”
‘Dean of American Historians’: Ken Burns on William E. Leuchtenburg: “In his view, Mr. Leuchtenburg was “one of the great historians, if not the dean of American historians in the United States, for his work on the presidency.” For more than 40 years, Mr. Leuchtenburg was a close adviser and friend to Mr. Burns, appearing in three of his documentaries — “Prohibition” (2011), “The Roosevelts: An Intimate History” (2014) and “Benjamin Franklin” (2022) — and consulting on many more.”
OpenAI launches ChatGPT for government agencies: Look, I’ve been a federal employee. I really hope this goes great because there are tons of people in the government who go to work every day just trying to do right by us. I also know, though, that the government isn’t a business.
George R.R. Martin has co-authored a physics paper: While ASOIAF fans wither, George helps with “a peer-reviewed physics paper just published in the American Journal of Physics that he co-authored. The paper derives a formula to describe the dynamics of a fictional virus that is the centerpiece of the Wild Cards series of books, a shared universe edited by Martin and Melinda M. Snodgrass, with some 44 authors contributing.”
Gaming Is Embracing Generative AI Whether Devs Like It Or Not, GDC Survey Says: “In the survey of over 3,000 developers and other games industry workers, 52 percent said they work for companies that use generative AI and 36 percent say they personally use it. There are stark differences in how much different departments use the technology. Workers in business and finance were most likely to use generative AI (51 percent), compared to 39 percent in community, marketing, and PR.”
Bookshop.org launches new e-book platform that exclusively supports local bookstores: “The online retailer launched in 2020 with the explicit goal of connecting readers to their local bookstores. The addition of Bookshop's e-book platform this week builds on this mission and promises that 100% of the profits from e-book sales will go back to the indie sellers.”
Are Biological Systems More Intelligent Than Artificial Intelligence?: This looks like super interesting reading > > “Are biological self-organising systems more `intelligent' than artificial intelligence? If so, why? We frame intelligence as adaptability, and explore this question using a mathematical formalism of causal learning. We compare systems by how they delegate control, illustrating how this applies with examples of computational, biological, human organisational and economic systems.” This bit speaks to span of control and how we think about agency within our orgs > > “Artificial intelligence rests on a static, human-engineered `stack'. It only adapts at high levels of abstraction. To put it provocatively, a static computational stack is like an inflexible bureaucracy. Biology is more `intelligent' because it delegates adaptation down the stack. We call this multilayer-causal-learning.”
How an Indie Studio Got 400-Plus Games Into a $10 Bundle to Help LA Fire Victims: Well played > > “The California Fire Relief Bundle is the work of indie studio Necrosoft Games and a collection of volunteers the company’s director, Brandon Sheffield, organized to compile the bundle. From January 12 through 19, they collected 422 games—including popular titles like Tunic, Octodad: Dadliest Catch, and Hoa—on independent game platform Itch.io. From those, Sheffield says, the collective aims to create the California Fire Relief Bundle, which it’ll sell for about $10 a pop, a good price for hundreds of titles. Proceeds from the bundle, which Sheffield aims to launch “ASAP,” will go to relief efforts aimed at helping Los Angeles-area residents get back on their feet financially.”
Mastodon’s CEO and creator is handing control to a new nonprofit organization: Maybe the real innovation will end up looking like the early days of the Web.
Awesome Games Done Quick raises another $2.5 million for a cancer nonprofit: More games doing good.
She Published a Blockbuster Book. Was It a Blessing or a Curse?: A review of Nnedi Okorafor’s new novel, “Death of the Author” - which you should definitely read. You know just go ahead and read everything she’s written. Brilliant.
Microsoft unveils new generative AI products for health systems: A trillion-dollar asset class with horrible customer perceptions. Perfect market.
Here I Go, Here I Go, Here I Go Again: 5 Great Time Loop Novels: A fav genre > > “Time loop stories involve a period of time, whether it’s minutes, days, or years, that people are forced to relive. It’s a great premise for a movie (see also: the amazing Palm Springs) and for a book. If you had to do things over (and over), what would you change? And what constitutes getting it right?”