Editor’s Note: I’m going to stop here and send this one out. I have a lot more but there are some big reads in here and I don’t want them to get lost in the shuffle. Enjoy.
Razer launches new Wyvrn game dev platform with automated AI bug tester: That’s the headline, but it’s not the part of the story that grabbed my attention. This bit is > > “The other big component of Razer’s dabbling in AI is its upcoming and very uniquely named AI Gamer Copilot. It’s an AI-based voice assistant that’s designed to watch you play and give you live, on-the-fly tips for tactics in competitive multiplayer games like MOBAs or strategies on how to take down difficult enemies in single-player games…” > > This is another - let’s call it a lateral signal - from the gaming sector. Do you really think that if this AI can watch you play a fast-moving game and provide you with real-time assists, it would have trouble doing the same for you at work? Latency will be a huge issue, but also games have fixed rules and one authoritative source; the trick for enterprises will be who gets to train the AI. I usually argue that the reality of a company culture is made up of the daily lived experience of the people working there. That’s why the “successory” posters are so hated - they usually don’t reflect the reality of the work. This AI, unless it’s trained and updated by the people doing the actual work, could be that same dynamic but at scale. This can’t be a way to enforce the ‘one way to do things’ on everybody - well, it shouldn’t be, anyway. It should be open to innovation and discovery, and I can’t wait to see what happens with it.
Prezent raises $20M to build AI for slide decks: Weirdly, given how much I think the introduction of AI into more and more workflows is inevitable, this one gives me pause. Maybe it’s because I’ve been in roles in which “getting the deck right” has been key. There’s a ton of editorial work that needs to be done - work that forces you to better understand the problem and your solution. And I empathize with anyone who has had issues with public speaking, but here I do wonder what critical-thinking capacity will be lost. I also wonder if this puts an even finer point on whether we’re actually building internal comms that are worth the effort. > > “Prezent, a startup empowering customers to build slide decks using generative AI, has raised $20 million as it further develops and refines its AI models for different use cases and expands into new markets. Los Altos-based Prezent, which has a subsidiary in Bengaluru, was founded in 2021 by Rajat Mishra, who previously worked at companies including Cisco and McKinsey. Mishra says he struggled to overcome stuttering and speech impediments at a young age, which drove his interest in communication tech.”
Claude AI catches up with ChatGPT by offering a new search tool: Upstream signal for enterprise SaaS user expectations. > > “The AI assistant now features a new web search tool that allows users to access current events and information to enhance their results. The new search feature provides direct citations, allowing you to verify sources easily. Furthermore, Claude organizes and presents relevant sources in a conversational format, making the results easier to digest.” > > If your enterprise KM/L&D/Performance Support systems don’t look and work like this, you’ll have some expectations to manage.
Microsoft is exploring a way to credit contributors to AI training data: If MSFT is successful here and can pierce that curtain around training data sets, you have to wonder what impact that will have on the legal landscape around AI and copyright. > > “Microsoft is launching a research project to estimate the influence of specific training examples on the text, images, and other types of media that generative AI models create. That’s per a job listing dating back to December that was recently recirculated on LinkedIn. … the project will attempt to demonstrate that models can be trained in such a way that the impact of particular data — e.g. photos and books — on their outputs can be ‘efficiently and usefully estimated.’”
New Study from MIT and OpenAI: Early methods for studying affective use and emotional well-being in ChatGPT: An OpenAI and MIT Media Lab Research collaboration > > “Our findings show that both model and user behaviors can influence social and emotional outcomes. Effects of AI vary based on how people choose to use the model and their personal circumstances. This research provides a starting point for further studies that can increase transparency, and encourage responsible usage and development of AI platforms across the industry.”
The Collaborative Edge in the Age of AI: Optimizing Organization Design for Speed or Stability (by Michael Arena, Ph.D., Andrea Derler, Ph.D. & Emily Klein): This is a great one, and one that is really needed right now. The question of how orgs best structure themselves right now is so much more important than which AI model you select. Arena et al. put this on a continuum between speed and stability, with team size as the fulcrum. While you should read the whole piece, this part nails it for me > > “In navigating the complex landscape of AI adoption, organizations should look to their small, specialized teams as catalysts for innovation and leverage the operating power of larger teams as initiatives mature. However, as shown above, these teams mustn’t operate in isolation.” > > Now figure out how to organize for that.
The 100 Year EdTech Project’s 2025 Design Summit: This is so cool. HERE is the YouTube playlist from the design summit with titles like The Never Ending Classroom, Automate to Elevate, and The Knowledge Nexus (and more). “As education, technology and society continue to evolve, it’s imperative that we, education changemakers, come together to innovate so that students are well-prepared for the future.”
Yahoo sells TechCrunch to investment firm Regent: I hope this works out but I am anxious. TechCrunch has been a pillar for 20 years.
How AI capabilities enable business model innovation: Scaling AI through co-evolutionary processes and feedback loops: Love this, but > > “To scale these capabilities, firms need to innovate their business models by focusing on agile customer co-creation, data-driven delivery operations, and scalable ecosystem integration. We combine these insights into a co-evolutionary framework for scaling AI through business model innovation underscoring the mechanisms and feedback loops.” > > I wonder when we’ll really start to think of our own employees as part of the co-creation process.
Lawmakers are trying to repeal Section 230 again: This could be HUGE. “Sens. Lindsey Graham (R-SC) and Dick Durbin (D-IL), the top Democrat on the Judiciary Committee, are planning to reintroduce a bill to sunset Section 230 of the Communications Decency Act in two years. Repealing the bill, first reported by The Information, would remove protections that web services and users have enjoyed since the 1990s, which underpins much of the way the internet as we know it today works.” A comment from Mike Masnick of TechDirt: “Again, Meta can easily handle the resulting litigation costs to get cases dismissed. All this will actually do is destroy smaller sites and clear the field for Meta…” > > Forget AI, this could fundamentally reshape the Internet.
Trapping misbehaving bots in an AI Labyrinth: From the press release > > “Today, we’re excited to announce AI Labyrinth, a new mitigation approach that uses AI-generated content to slow down, confuse, and waste the resources of AI Crawlers and other bots that don’t respect “no crawl” directives. When you opt in, Cloudflare will automatically deploy an AI-generated set of linked pages when we detect inappropriate bot activity, without the need for customers to create any custom rules.” > > My first reaction is that GenAI brought this on itself. Second reaction is - wait, all these AI crawlers are going to be ingesting essentially nonsense, and they’ll be doing it at scale and speed, and that volume could really pop if enough orgs deploy this. What will that do to the frequency of hallucinations in models trained on this stuff? AI slop meets AI slop.
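Cloudflare hasn’t published implementation details beyond the press release, but the basic “maze” idea is easy to sketch. Here’s a toy illustration in Python - everything in it (the function names, the detection heuristic, using hash tokens as a stand-in for AI-generated filler) is my own hypothetical, not Cloudflare’s actual system. The point is just the shape of the trap: flag a non-compliant crawler, then serve it a page of links that lead only to more generated pages.

```python
import hashlib

def is_suspicious(user_agent: str, respects_robots: bool) -> bool:
    """Toy heuristic: flag a crawler that identifies as a bot but
    ignores "no crawl" directives. (Real bot detection is far richer.)"""
    return (not respects_robots) and "bot" in user_agent.lower()

def decoy_page(seed: str, n_links: int = 5) -> str:
    """Build a deterministic maze page: every link points to another
    decoy page, so a crawler that follows them never escapes."""
    links = []
    for i in range(n_links):
        # Hash token stands in for AI-generated filler content/URLs.
        token = hashlib.sha256(f"{seed}:{i}".encode()).hexdigest()[:12]
        links.append(f'<a href="/maze/{token}">Article {token}</a>')
    return "<html><body>" + "".join(links) + "</body></html>"
```

A real deployment would hang `decoy_page` off a catch-all `/maze/<token>` route and reuse each token as the seed for the next page, so the maze is infinite but cheap to serve - each page is regenerated on demand rather than stored.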