Last week we had a great call with the Institute for Innovation in Large Organizations (ILO), and we got to hear Gianni Giacomelli talk about some of the latest developments the Collective Intelligence group at MIT is working on, like the Supermind Ideator. The talk was great and, as usual, the audience was very involved. As someone who has managed large-scale innovation programs, the piece of Gianni's talk that really struck me was when we got around to talking about creating a skeptical AI - engineering snarkiness.
First, a little story. When I was at Amazon, we kicked around starting a document library or Hall of Fame. That's a different-scale project at Amazon, since everything we do is in a written document. No PowerPoint. We wrote one-pagers, or PRFAQs, or Six Pagers, but always a doc. That means, or should mean, that for every program, product, or service launched at Amazon (certainly for the big stuff), there was/is a document. We were thinking that the superficial level of this would be to find and host all these docs in one location. Just the sheer history of it appealed to me - think about reading the doc that preceded launching the Kindle. Or the one for the purchase of Whole Foods. Then we realized there would be a danger there. If all we hosted were the winning docs, then we might start a rut - people would flow to the approach that they perceived as winning, and we'd decrease the possibility of non-linear approaches or really innovative ideas.
Then we thought - it might help if you could see a doc AND the history of changes from the first draft to the final version - see where changes were made, maybe understand why - but still the rut possibility loomed (and technically, it would've been a pain). Then we had this orthogonal thought: what if we only hosted the bad ones, or the ones that didn't make it? Teach people the ways NOT to present and leave the possibility space open for finding how to make something work. Problems there too… you don't want to embarrass people, and even if you took the names off, you could figure out when, what team, etc. People might also not have the whole context for why something didn't go forward and draw bad conclusions. So that idea passed… but that ILO call the other day and Gianni's talk made me think of another approach.
OK - one more story (it's short and it ties in, I promise). When I was at Booz Allen, and we were, say, writing an RFP response, you had to get it past a series of what we called "murder boards." Not the one where you try to figure out the killer, but the one where your RFP response had to face serious questions from financial, personnel, and technological angles. In some ways, it was like a version of the "doc reads" that we did at Amazon, just with a more violent name. The dynamic is the same - your idea will have to face scrutiny before moving on. Now the tie-in.
Anybody who has even dabbled in innovation will tell you that ideas are cheap and execution is hard. Part of that difficulty - part of the expense of execution, if you will - is the assessment/culling/winnowing part. From an 18-wheeler of ideas, you need to get down to a pickup-truck load, and then probably keep going till you get an idea or two that can ride in the front seat. What if, though, we took all the reasons ideas get knocked about in a doc read and all the reasons why RFPs get "murdered," and we built those into an AI model? What if we permissioned that model to be unfiltered and engage in some 'radical candor'?
Objections to ideas can be anonymized to the point that you couldn't link them to a specific project or program. Even successful ideas or programs will have encountered objections along their path to headcount and funding, so it will be a rich field. Then we can feed the top end of the idea funnel through the Discriminator and have it point out the objections that are most likely to be raised, based on the corpus of ones we've fed it. We can even make sure it gets better over time: track the objections it predicted we'd face, and when we do take ideas in front of reads and boards, feed the actual objections back into the loop. We'd be able to compress the time between first draft and first round of feedback (probably the most obvious objections) to almost nothing. There's a huge ROI right there. We can program the model to have different personas to represent what objections look like when they come from Legal or Finance or Ops, etc.
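The loop described above - a corpus of anonymized objections tagged by persona, a predictor that surfaces the likely ones for a new pitch, and a feedback step that folds the actual objections back in - can be sketched in miniature. This is a toy, keyword-overlap stand-in for a real model; the class name, the scoring, and the example objections are all hypothetical, not anyone's actual system.

```python
class ObjectionDiscriminator:
    """Hypothetical sketch: predict likely objections to an idea pitch
    by matching it against a corpus of anonymized past objections."""

    def __init__(self):
        # Corpus of (persona, objection_text) pairs, e.g. ("Finance", "...").
        self.corpus = []

    def add_objection(self, persona, objection):
        """The feedback loop: feed an anonymized objection (predicted or
        actually raised at a doc read / murder board) into the corpus."""
        self.corpus.append((persona, objection.lower()))

    def predict(self, pitch, top_n=3):
        """Return the past objections whose wording overlaps most with the
        pitch, paired with the persona (Legal, Finance, Ops, ...) that
        raised them. A real model would use embeddings, not word overlap."""
        pitch_words = set(pitch.lower().split())
        scored = []
        for persona, objection in self.corpus:
            overlap = len(pitch_words & set(objection.split()))
            if overlap:
                scored.append((overlap, persona, objection))
        scored.sort(reverse=True)  # most-overlapping objections first
        return [(persona, objection) for _, persona, objection in scored[:top_n]]


# Usage: seed the corpus, then run a new pitch through it.
disc = ObjectionDiscriminator()
disc.add_objection("Finance", "headcount cost exceeds projected revenue")
disc.add_objection("Legal", "data retention conflicts with privacy rules")
print(disc.predict("a service that adds headcount and cost"))
```

The point of the sketch is the shape, not the matching: the persona tag is what lets you later dress the same prediction up as Miranda Priestly for Legal or Deadpool for Ops, and `add_objection` is where the post-review reality check keeps the model honest.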
I really like the idea of setting up the negative boundaries and people feeling like they have the freedom to find new and innovative ways through those objections. Doesn’t feel like a rut to me. So who wants to model the Miranda Priestly or Wednesday Addams or Deadpool AI model to help sharpen your idea pitch?
Sounds a bit like a Devil's advocate type of model.