Weekly Link Roundup #16
Speed is the rate of change of an object's position. Velocity includes its speed as well as its direction of motion. The rate of change of the object's velocity gives the acceleration.
That set of definitions up there is key to understanding how fast things will move because of AI. The first story here gets it.
Something Like Fire - Will the AI revolution warm us or burn us?: Hmmm, comparing AI to fire? Of course I like that :-) This is a good article, but I really like this framing - it puts some perspective on what I mean when I say that better tools help build better tools faster - “Exponential growth is radically counterintuitive. How much money do you think you’ll have after 30 years if you put $1 into an account that doubles in value every year? The answer: $1 billion. Thirty years after that, you’ll need exponents to render how much money you have. Think of exponential growth as a mathematical singularity, a value that approaches, but never quite reaches, infinity. In the function y=1/x, for example, as the value of x gets closer and closer to zero, the value of y explodes. You can plot y on a graph and watch it begin as a slowly rising horizontal line that accelerates upward before becoming a nearly vertical wall. Our information technology is currently advancing at a double-exponential rate; but even if it were doing so only at a single-exponential rate, 30 years from now we will have the equivalent of a billion years of progress based on our current rate of speed, unless, for the first time ever, the growth curve finally—and dramatically—slows.”
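The doubling arithmetic in that quote is easy to check for yourself. A minimal sketch, using nothing but the article's own $1-doubling-every-year example:

```python
# The article's example: $1 doubling every year for 30 years.
balance = 1
for year in range(30):
    balance *= 2

print(f"${balance:,}")  # 2**30 dollars = $1,073,741,824, just over $1 billion
```

Thirty doublings really does turn a dollar into a billion - the counterintuitive part is how flat the first twenty years look before the curve goes vertical.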
Silicon Valley is pricing academics out of AI research: “Fei-Fei Li, the “godmother of artificial intelligence,” delivered an urgent plea to President Biden in the glittering ballroom of San Francisco’s Fairmont Hotel last June. The Stanford professor asked Biden to fund a national warehouse of computing power and data sets — part of a “moonshot investment” allowing the country’s top AI researchers to keep up with tech giants. Li is at the forefront of a growing chorus of academics, policymakers and former employees who argue the sky-high cost of working with AI models is boxing researchers out of the field, compromising independent study of the burgeoning technology.” > > This is going to be a growing problem for everyone - the general public as we lose the ability to develop consumer-first AI products, the business community as innovation gets slowed by giants acquiring not just compute power but the human brains behind AI.
How to combine Claude 3 and ChatGPT for amazing results: If by now you aren’t convinced of the transformative power of AI, then the ability to rapidly and easily combine these models probably won’t motivate you either, but trust me - this is a wow and another signal that leadership (at multiple levels) needs to increase its overall technical expertise in order to understand the strengths and weaknesses of the various models and which problems could best be solved by a standalone LLM or a combination.
LLMs exhibit significant Western cultural bias, study finds: Are we shocked? (no, no we’re not). See the prior point about how technical knowledge can give you or your leadership the ability to ask questions like: what data set is your LLM trained on? What guardrails do you have in place to push back on or identify cultural bias in your model? AI can bring us the ability to do things at a scale and a speed that we haven’t had access to before - that also means it can eliminate the time we’ve used in the past to ameliorate cultural differences.
Eight Friends Built a Secret Apartment in a Mall and Hid There Undetected for Years; A New SXSW Documentary Explains How and Why: Love this story - and this quote from Harry Harrison’s The Stainless Steel Rat - “We must be as stealthy as rats in the wainscoting of their society. It was easier in the old days, of course, and society had more rats when the rules were looser, just as old wooden buildings have more rats than concrete buildings. But there are rats in the building now as well. Now that society is all ferrocrete and stainless steel there are fewer gaps in the joints. It takes a very smart rat indeed to find these openings. Only a stainless steel rat can be at home in this environment...”
Multiverse raises $27M for quantum software targeting LLM leviathans: No, not THAT multiverse. Seriously though, when I say things like the biz models underlying AI companies are shifting as fast as the tech, this is what I mean. This company’s products, if they pan out, are a technical leap, but they’re also a potential leap in terms of cost. “The company claims that its services, accessed by clients via APIs, can compress LLMs “with quantum-inspired tensor networks” by more than 80% with the software, while still producing accurate results. If true, that could have large implications for how companies buy and use processors, addressing one of the big bottlenecks in the industry to date.”
AI Prompt Engineering Is Dead. Long live AI prompt engineering: A couple of things leapt out at me from this article. First this - “Battle says that optimizing the prompts algorithmically fundamentally makes sense given what language models really are—models. “A lot of people anthropomorphize these things because they ‘speak English.’ No, they don’t,” Battle says. “It doesn’t speak English. It does a lot of math.” > > Can’t amen that enough. These models are mathematical amalgamations of a lot of data - just because they spit results out in human language doesn’t mean that’s what they think in. Then there is this point about not using trial and error to find the right prompt but “just develop a scoring metric so that the system itself can tell whether one prompt is better than another, and then just let the model optimize itself.” > > This always makes me think about who watches the watchmen? Who will evaluate the scoring metric, and apart from knowing the result is right, how will you know if it’s the optimal right answer?
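That "scoring metric" idea fits in a few lines of code. A hypothetical sketch - the metric and the candidate prompts below are stand-ins I made up, not anything from the article or a real library:

```python
# Sketch of metric-driven prompt selection. In practice score_prompt would
# run the prompt against a model and grade the outputs; here it is a toy
# stand-in that simply rewards shorter prompts.
def score_prompt(prompt: str) -> float:
    return 1.0 / (1 + len(prompt.split()))

candidates = [
    "Summarize the following text.",
    "Summarize the following text in exactly three bullet points.",
    "Please provide a summary.",
]

# The "system itself" picks the winner - no human trial and error.
best = max(candidates, key=score_prompt)
```

Which brings the watchmen question into focus: swap in a bad `score_prompt` and the loop will happily optimize toward the wrong target.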
Meta is building a giant AI model to power its ‘entire video ecosystem,’ exec says: This makes me think about Amazon’s habit of building something it needs to run its own business (did someone say AWS?) and then turning around and offering that capability to the wider world. Think about an AI-empowered video recommendation engine built to handle video assets at the scale of Meta. I think about this on both the internal and external sides of the house.
Researchers tested leading AI models for copyright infringement using popular books, and GPT-4 performed worst: So now the CEO, the CIO, the CISO, and General Counsel need to be involved in your AI decisions. Did I mention the CFO? There are general copyright issues of course - what happens if a case against the AI model you’re using succeeds and your company gets named as a defendant profiting from a model trained on infringing data? The other signal here, for me, is that when you write your AI service contract, it needs to specify which of your data is protected, proprietary, copyrighted and/or trademarked.
Salesforce aims to blaze new generative AI trail for developers with Einstein 1 Studio: This is getting crazy fast. Here ya go: “Copilot Builder allows users to create custom AI actions to accomplish specific business tasks, while Prompt Builder enables the creation and activation of custom prompts in the flow of work. Model Builder provides developers with the flexibility to build or import various AI models to meet specific needs. “We want developers to have the tools to build with AI, and to be ready for this AI-first future with no code, low code or pro code,” Alice Steinglass, EVP and GM of the Salesforce platform said during a TrailblazerDX briefing with press.” > > Yes, there should be a push to identify the required skills to use these tools and plans for upskilling the current workforce. This skilling effort should probably be an enterprise-wide campaign - set the proficiency bar and then commit the resources and headcount required to get there, but please keep thinking two steps down the road about what happens when these tools get to the point where they are automating so many of our current tasks. How will we use the time/labor that gets returned to the org to drive higher value for our customers? The winners here will be the ones who engage in that strategic thinking and don’t default to short-term thinking along the lines of laying people off.
Accenture CEO Julie Sweet shares why her firm is acquiring Udacity to launch an AI-powered training platform: Julie gets it. This has made the rounds with a lot of smart people commenting on it, but it just makes sense on an operational front. What would make this even more impactful is if Accenture led the way in rethinking how to reflect the value of highly trained people on a spreadsheet beyond showing them as costs.
The AI wars heat up with Claude 3, claimed to have “near-human” abilities: This article is a good reminder of a couple of things - the first being that while we're all amazed at what AI can do, enterprises need to be just as interested in what these models cost. The second being the need to have a high enough level of technical knowledge about how AI works to know how they're priced. How many tokens will you need to be able to use/access? Third, you really need to have spent some serious time considering where deploying AI would have the greatest impact in your org and where the positive delta is greater than not just the initial and ongoing cost but the exit and migration costs when/if you move to another provider. I know, it's not cool and shiny, but these are the considerations that will keep you out of the ditches on either side of this road.
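Token-based pricing makes that cost question very concrete. A back-of-the-envelope sketch - the per-token rates and usage numbers below are placeholder assumptions for illustration, not any vendor's actual pricing:

```python
# Hypothetical per-1,000-token rates (assumptions, not real vendor prices).
PRICE_PER_1K_INPUT = 0.01   # dollars per 1,000 input tokens
PRICE_PER_1K_OUTPUT = 0.03  # dollars per 1,000 output tokens

def monthly_cost(requests_per_day: int, input_tokens: int, output_tokens: int) -> float:
    """Estimate a 30-day bill from per-request token counts."""
    per_request = (
        input_tokens / 1000 * PRICE_PER_1K_INPUT
        + output_tokens / 1000 * PRICE_PER_1K_OUTPUT
    )
    return requests_per_day * per_request * 30

# Example: 10,000 requests/day, 1,500 input + 500 output tokens each.
cost = monthly_cost(10_000, 1_500, 500)  # roughly $9,000/month at these assumed rates
```

Run the same arithmetic with a rival provider's rates and your own traffic numbers, and the "positive delta" comparison stops being hand-waving.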
The Getty Makes Nearly 88,000 Art Images Free to Use However You Like: No deep insights here, just a great resource.