Building adaptive teams for the future of AI
How might we create and structure teams given the ridiculous pace of AI?
Last night we hosted a dinner for CTOs and CPOs on the topic of AI-related rituals. After demoing what Coda launched and sharing rituals we’ve seen influenced by generative AI, the group talked about hot topics in the space, like foundation models, customer scenarios, developer tooling, data privacy, pricing, and regulation.
I found that the most interesting conversations were about how folks are setting up their teams given rapid changes in AI. This period in tech is like no other I’ve seen. Every presenter last night talked about how their slides would be out of date in a few weeks. Many product teams I know are prototyping, building, and re-imagining their own product and rituals in a world where generative AI is improving at a ridiculous pace.
So the question is — how might we build adaptive teams for such a fast-paced future?
I don’t have all the answers, but here are a few things that come to mind:
Let the makers make. One thing I’m hearing from some of the teams that have adapted best is that they are encouraging their teams to test, tinker, and try all the new tools. I think this is true not just for product, design, and engineering teams. Given the potential to reshape many rituals, I think it’s pretty critical that sales, marketing, GTM, legal, etc. are all trying generative AI tools in their area. As with all tinkering and testing, it often leads to better taste and better-informed opinions.
Create learning rituals. One ritual we started early this year was an AI learning group. It’s an informal drop-in meeting where someone brings a paper, video, tool, or some other artifact related to AI and we discuss it and try to learn from it. The informal nature means there is little prep required. It also feels fun and communal since everyone is on the learning journey together. I’ve heard of other teams creating similar groups, and I like how it keeps us proactive in our approach to continuously learning. And of course, hackathons are a great way to involve broader groups in fast-paced AI explorations.
Create fault-tolerant teams. I recently listened to Gustav (CPTO at Spotify) reference Chris Dixon’s idea of fault-tolerant interfaces on Lenny’s podcast. The idea is that when designing ML-driven interfaces like recommendations or search, you should expect and design for imperfect predictions. For example, Midjourney generates four images at once, knowing it’s highly unlikely a single image will be 100% perfect. In other words, the user has an escape hatch if the prediction isn’t great. I think the same idea can be applied to teams. The first team structure you create may not be the right one as generative AI continues to advance at such speed. So it’s helpful to set the expectation within your broader organization that team sizes, structures, etc. will likely change. Said differently, it’s hard to predict such a rapidly changing future, and we should prepare for the fact that our approach to it will need to tolerate imperfect org or team choices (likely more so than in other, better-known problem areas).
What approaches are you taking to build adaptive teams given the AI future ahead?
I’d love to hear in the comments.
I think it's worth maintaining a sort of knowledge map as a team to keep a holistic view on which things are going to be most relevant and impactful to your business/product. Learning rituals are a good first step, but when the change is as rapid and impactful as generative AI, I think it's worth considering more formal ownership over specific areas.
For example: within a design team, you might have someone who is very interested in text-based interactions and another person who is more interested in AI within the UI or integrated with the tools that exist.
Love this! Additionally, I’d add the willingness to experiment with ideas that might not necessarily be within scope, or in other words, to draw parallels from outside the domain. These kinds of tangential observations and patterns help form a better picture of AI's implications. E.g., playing around with image prompts helps in envisioning better textual ideas and brings attention to detail; inspiration can come from all around.