If you were to poll the computing industry today for “most hyped technology of our times,” I posit that artificial intelligence would easily top the list.
And with good reason—the last decade of progress in AI has been exciting for sure. But the impact of that innovation follows the William Gibson principle: “The future is already here, it’s just not evenly distributed.”
What’s particularly funny about AI is that people expect its successes to be evenly distributed. If Tesla can autopilot your car and Google Photos can match your elderly parents’ faces to their baby photos, why can’t your company increase revenue and decrease cost via AI? Heck, AI can’t even figure out how to load your pile of spreadsheets into a data warehouse!
So, what’s causing the disconnect between AI innovation and impact? The issue is twofold. First, not all computing challenges are the same. While some exciting topics like computer vision have made enormous leaps in recent years, most of the classically painful business data processing problems are still well beyond the capabilities of today’s state-of-the-art AI. Second, the engineering tools and practices for successful AI and machine learning are still in their infancy.
Today’s Big Tech shops are largely solving their data and AI problems by hiring armies of expert software engineers to “hand-stitch” together data pipelines with bits of AI. This is exacerbated by the disparate state of open-source tooling. Unless your company can recruit lots of Silicon Valley-quality software developers, you’re out of luck. To democratize the progress in AI, we need to do a couple key things:
- Focus on Human-AI Interfaces: We need to admit that in many settings, AI can’t go the full distance. Instead, we need innovators to focus on AI as an augmentation of human work, not a replacement.
- Bring people together across skill sets: We need to understand that technology democratization needs to bring together groups with differing skills. The next generation of AI tools needs to allow all the key constituencies to do their work as they see fit, while sharing each other’s challenges and progress.
That’s why going forward, I see three key trends that will play an important role in democratizing AI:
- Data engineering: I predict that developer-centric interfaces like SQL and Python will become increasingly interoperable with low-code tools. As that software matures, cloud-hosted services will make the new technology easy to adopt.
- AI engineering: I predict that MLOps will enter a Cambrian explosion phase in 2022. We already see it in the startup market, where companies are jostling to solve narrow pieces of the overall AI engineering pipeline. Some of those startups will find high-value leverage points in these pipelines and gain traction quickly; others will fade away.
- Low code and no code: I predict the next generation of low-code and no-code apps will be able to function like “automatic programmer assistants” that use generative AI and program synthesis. Non-coders will be able to generate the moral equivalent of custom software without needing to know how (or if) they’re doing it.
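The SQL-and-Python interoperability in the first trend is already easy to see at small scale with nothing but the standard library: the same in-memory dataset is loaded from Python and then queried through SQL. This is a minimal sketch, and the table and column names (`sales`, `region`, `revenue`) are illustrative assumptions, not from any real product:

```python
import sqlite3

# Create an in-memory SQLite database from Python.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 120.0), ("west", 95.5), ("east", 80.0)],
)

# Query the same data through a declarative SQL interface:
# total revenue per region, alphabetically by region.
rows = conn.execute(
    "SELECT region, SUM(revenue) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('east', 200.0), ('west', 95.5)]
```

A low-code tool sitting on top of such a pipeline could expose the same aggregation as a drag-and-drop step while developers keep the SQL and Python escape hatches.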
The next year promises to be a very confusing time for AI, especially in fields like MLOps where the stack hasn’t begun to shake out. Be sure to keep an eye on human-AI interfaces that facilitate augmented intelligence using low-code and no-code tools. While tech news stories about AI accomplishments will continue to tantalize you with possibilities, understand that the practical uses of AI in business will remain rare.