Weekly Link Roundup #35
Good plan for engagement, Mark: put out a newsletter on the eve of a U.S. presidential debate...
...and just because you were dying to know my thoughts on the whole “Founder Mode” kerfuffle…I wrote about it here.
Second Circuit Says Libraries Disincentivize Authors To Write Books By Lending Them For Free: One of the stupidest, most ignorant, most divorced-from-reality rulings I’ve ever seen. “What would you think if an author told you they would have written a book, but they wouldn’t bother because it would be available to be borrowed for free from a library? You’d probably think they were delusional. Yet that argument has now carried the day in putting a knife into the back of the extremely useful Open Library from the Internet Archive.” > > This ruling flies in the face not only of logic - we've had libraries for a long time and yet somehow we also have authors writing and publishing books - but also of the factors of fair use AND the original intent of the authors of the Constitution.
YouTube group The Try Guys has quickly found success in launching subscription model: I love this as a signal that there may be cracks in the advertising business model. And this is why: “Having a business that is reliant on ads is very unstable and very unpredictable,” Try Guys co-founder Zach Kornfeld told CNBC in an interview. “There’s just so much that’s out of your control, and we certainly experienced the worst of that. It’s tenuous at best. Corrosive and explosive at worst. And it also forces you creatively to constantly optimize for things that are not always in your audience’s best interest.”
Get ready for a tumultuous era of GPU cost volatility: I tend to go on about how the AI space is volatile at multiple levels, and cost is obviously one of them. Now, there are cost variables in every industry, but this says a lot about why it's key here: “Compute cost volatility is different because it will affect industries that have no experience with this type of cost management. Financial services and pharmaceutical companies, for example, don’t usually engage in energy or shipping trading, but they are among the companies that stand to benefit greatly from AI. They will need to learn fast.” > > Think about margin calls on stocks or other financial motions that could rapidly and dramatically shift the cost of a system. Not sexy, but your CFO will LOVE it: How much are your enterprise’s gen AI products costing you? DigitalEx launches new expense tracking solution.
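To make the cost-management point concrete, here's a minimal sketch of what per-call gen-AI spend tracking looks like when per-token prices move under you. Everything here is hypothetical: the model name, the prices, and the `CostTracker` class are illustrative placeholders, not any vendor's real rates or any product's API (certainly not DigitalEx's).

```python
from dataclasses import dataclass, field

@dataclass
class CostTracker:
    # Hypothetical price table: dollars per 1K tokens, keyed by
    # (model, "input" | "output"). Updated in place as rates move.
    prices: dict
    spend: float = 0.0
    calls: list = field(default_factory=list)

    def record(self, model, input_tokens, output_tokens):
        """Price one call at today's rates and add it to the running spend."""
        cost = (input_tokens / 1000) * self.prices[(model, "input")] \
             + (output_tokens / 1000) * self.prices[(model, "output")]
        self.spend += cost
        self.calls.append((model, cost))
        return cost

# Same workload, two price regimes: the second call costs more than the
# first purely because the (hypothetical) input rate doubled overnight.
tracker = CostTracker(prices={("example-model", "input"): 0.01,
                              ("example-model", "output"): 0.03})
first = tracker.record("example-model", 2000, 500)
tracker.prices[("example-model", "input")] = 0.02  # simulated volatility
second = tracker.record("example-model", 2000, 500)
```

The design point is the quote's point: when the price table is a moving input rather than a constant, budgeting becomes a hedging problem, which is exactly the discipline most enterprises haven't built yet.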
Tell Replit's AI Agent Your App Idea, and It'll Code and Deploy It for You: Coupling code creation with deployment is a strong signal. “Replit's AI agent takes this concept and applies it to the world of software development. It can reason through a task and create its own steps to complete it—such as writing code, setting up environments, and managing deployments.”
Say goodbye to static user interfaces: From a longtime user of enterprise software…YES PLEASE! > > “Generative interfaces are a new class of digital experiences where the layout, functionality, and content are dynamically generated in real time to meet individual user needs. Unlike traditional static interfaces, which are predesigned and rigid, generative interfaces use advanced AI to adapt and evolve based on user interactions and preferences.” The only (current) worry I’d have here is that if the UI continually updates to what I normally use, does it increase my efficiency at the cost of serendipity? This passage, “Generative interfaces flip the script on human-computer interaction. Until now, we’ve had to learn how to use our technology. Now, our tools are adapting to us, meeting our needs exactly when we need them,” reminds me of the misquoted and tough-to-attribute line that “we shape our tools and thereafter, our tools shape us.” What happens to that dynamic now?
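The "adapts based on user interactions" idea, stripped of the AI, is as old as frequency-sorted menus. A toy sketch of that degenerate case (all names here are made up for illustration; real generative interfaces regenerate layout and content, not just ordering) also makes the serendipity worry visible: items you never click sink to the bottom and effectively disappear.

```python
from collections import Counter

class AdaptiveMenu:
    """Toy adaptive UI: menu items reorder by how often you use them."""

    def __init__(self, items):
        self.items = list(items)
        self.usage = Counter()  # unclicked items default to a count of 0

    def use(self, item):
        self.usage[item] += 1

    def render(self):
        # Most-used items float to the top; ties keep the original order
        # because Python's sort is stable.
        return sorted(self.items, key=lambda it: -self.usage[it])

menu = AdaptiveMenu(["Reports", "Search", "Settings"])
menu.use("Settings")
menu.use("Settings")
menu.use("Search")
# "Reports" now sits last, and the more the loop runs, the less
# likely you are to ever rediscover it.
```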
Arcee AI unveils SuperNova: A customizable, instruction-adherent model for enterprises: Read the article for the technical advances, but I wanted to highlight this piece that I think is key: “The model gets deployed into your AWS VPC, but it also spins up a web server and a chat interface and a database to store your chat history. Everyone in your organization can interact with it.” It's key because it's another step toward push-button deployment of AI/LLMs in the enterprise with protections for key issues like data privacy. Just make sure you lock in those prices for at least a year.
Is Anthropic’s new ‘Workspaces’ feature the future of enterprise AI management?: Oh look - more push-button, easy deployment that lets you set “custom spend or rate limits, group API keys, track usage by project, and control access with user roles.” Let me say this again: you need a robust and varied team experimenting with AI, and part of that team needs to be business- and ops-oriented and looking at the exact kind of features that this release includes. And shocker, another ease-of-use enterprise deployment story: ServiceNow introduces a library of enterprise AI agents you can customize to fit your workflow. And an interview with an Anthropic founder: Anthropic’s Mike Krieger wants to build AI products that are worth the hype.
This water-cooled mini PC sports a Core i9-13900H and an RTX 4060: What a cutie!
This AI Model Can Make Creative Connections Puzzles (“Inspired by The New York Times’ hit, researchers put an LLM to the task”): Bad headline for an article that basically says that LLMs, great at predicting the next word in a sentence, require a great deal of thought and work to get them to generate not just accurate but actually interesting word puzzles.
US, EU, UK, and others sign legally enforceable AI treaty: Back to my previous statement on the volatility in the AI space…“The treaty, called the Framework Convention on Artificial Intelligence, lays out key principles AI systems must follow, such as protecting user data, respecting the law, and keeping practices transparent. Each country that signs the treaty must ‘adopt or maintain appropriate legislative, administrative or other measures’ that reflect the framework.” Might want to run those AI plans by your General Counsel.
Why A.I. Isn’t Going to Make Art: by Ted Chiang, author of Story of Your Life (which became the movie Arrival)
NVIDIA Researchers Introduce Order-Preserving Retrieval-Augmented Generation (OP-RAG) for Enhanced Long-Context Question Answering with Large Language Models (LLMs): The article describes a solution to this problem: “Retrieval-augmented generation (RAG), a technique that enhances the efficiency of large language models (LLMs) in handling extensive amounts of text, is critical in natural language processing, particularly in applications such as question-answering, where maintaining the context of information is crucial for generating accurate responses.” But I wanted to point it out for another reason: it's another pointer to how familiar teams AND execs need to be with multiple aspects of AI technology.
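For the curious, the core "order-preserving" idea is small enough to sketch: retrieve the most relevant chunks as usual, but assemble them in their original document order rather than relevance order before handing them to the model. This is a minimal toy illustration of that one idea, not the paper's method: a token-overlap count stands in for a real embedding-based retriever, and `op_rag_context` is a name I made up.

```python
def score(query, chunk):
    """Toy relevance score: number of lowercase tokens shared with the query."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def op_rag_context(query, chunks, top_k=3):
    """Pick the top_k most relevant chunks, then restore their original
    document order before building the context."""
    ranked = sorted(range(len(chunks)),
                    key=lambda i: score(query, chunks[i]), reverse=True)
    selected = sorted(ranked[:top_k])  # the order-preserving step
    return [chunks[i] for i in selected]

chunks = [
    "error rates in hardware",         # position 0
    "intro narrative",                 # position 1
    "quantum error correction codes",  # position 2
    "correction overhead",             # position 3
]
# Relevance ranking picks positions 2 and 0 (in that order), but the
# context is emitted in document order: position 0 first, then 2.
context = op_rag_context("quantum error correction", chunks, top_k=2)
```

The design intuition: a vanilla RAG pipeline would concatenate chunks in relevance order, scrambling the document's narrative; keeping positional order preserves the flow the LLM relies on for long-context answers.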