Orchestrate all the Things - 2021 technology trend review, part 2: AI, Knowledge Graphs, and the COVID-19 effect
Episode Date: January 26, 2021. AI chips, MLOps, and ethics. Knowledge, and Graphs. COVID-19 as a mixed bag for technological progress and adoption. Article published on ZDNet ...
Transcript
Welcome to the Orchestrate All the Things podcast.
I'm George Anadiotis and we'll be connecting the dots together.
Today's episode is an AI-generated narrative of my latest article on ZDNet.
I hope you will enjoy the podcast.
If you like my work, you can follow Linked Data Orchestration on Twitter, LinkedIn, and Facebook.
2021 Technology Trend Review, Part 2, AI, Knowledge Graphs,
and the COVID-19 Effect. Last year, we identified blockchain, cloud, open source,
artificial intelligence, and knowledge graphs as the five key technological drivers for the 2020s.
Although we did not anticipate the kind of year that 2020 would turn out to be,
it looks like our predictions may not have been entirely off track. Let's pick up from where we left off, retracing developments in key
technologies for the 2020s, artificial intelligence and knowledge graphs, plus an honorable mention to
COVID-19-related technological developments. AI chips, MLOps, and ethics. In our opener for the
2020s, we laid the groundwork to evaluate the array
of technologies under the umbrella term artificial intelligence. Now we'll use it to refer to some
key developments in this area, starting with hardware. The key thing to keep in mind here
is that the proliferation of machine learning workloads has boosted the use of GPUs,
previously used mostly for gaming, while also giving birth to a whole new range of
manufacturers. NVIDIA, which has come to dominate the AI chip market, had a very productive year.
First, it unveiled its new Ampere architecture in May; NVIDIA claims this brought an improvement
of about 20 times compared to Volta, its previous architecture. Then, in September, NVIDIA announced the acquisition of ARM, another chip company.
As we noted then, NVIDIA's acquisition of ARM strengthens its ecosystem and brings economies
of scale to the cloud and expansion to the edge. As others noted, however, the acquisition may face
regulatory scrutiny. The AI chip area deserves more analysis, on which we'll embark soon.
However, some honorable mentions are due: to Graphcore, for having raised more capital and seen its chips deployed in the cloud and on-premise; to Cerebras, for having unveiled its second-generation
wafer-scale AI chip; and to Blaize, for having released new hardware and software products.
The software side of things was equally eventful, if not more.
As noted in the State of AI report for 2020, MLOps was a major theme. MLOps, short for machine learning operations,
is the equivalent of DevOps for ML models, taking them from development to production,
and managing their lifecycle in terms of improvements, fixes, redeployments, and so on.
Some of the more popular and fastest-growing GitHub projects in 2020 are related to MLOps.
Streamlit, which helps deploy applications based on machine learning models, and Dask,
which boosts Python's performance and is operationalized by Saturn Cloud, are just two of many examples.
Explainable AI, the ability to shed light on decisions made by ML models, may not be equally operationalized but is also gaining traction.
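To make "shedding light on model decisions" concrete, here is a minimal sketch of one widely used, model-agnostic explainability technique: permutation importance, which shuffles each feature in turn and measures how much the trained model's score drops. The dataset and model below are illustrative choices, not something from the article.

```python
# A minimal sketch of explainable AI via permutation importance:
# score each feature by how much shuffling it degrades accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature column in turn and measure the drop in score
# on held-out data; a large drop means the model relies on that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0)

# Rank features from most to least influential.
ranked = sorted(
    zip(X.columns, result.importances_mean),
    key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Techniques like this only approximate what a model is doing, which is part of why explainable AI remains an active research area rather than a solved operational problem.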
Another key theme was the use of machine learning in biology and healthcare.
AlphaFold, DeepMind's system that succeeded in solving one of the most difficult computing challenges in the world, predicting how protein molecules will fold, is a prime example.
More examples of AI having an impact in biology and healthcare are either here already or on the
way. But what we think should top the list is not a technical achievement. It is what's come to be
known as AI ethics, i.e. the side effects of using AI. In a highly debated development, Google
recently "resignated" Timnit Gebru, a widely respected leader in AI
ethics research and former co-lead of Google's Ethical AI team. Gebru was essentially forced out
for uncovering uncomfortable truths. In addition to bias and discrimination, which Gebru posits is
not just a side effect of datasets mirroring bias in the real world, there is another aspect of what
her work shows that deserves highlighting.
The dire environmental consequences that the focus on ever-bigger and more resource-hungry AI models has. DeepMind's dismissal of the issue in favor of AGI speaks volumes about the
industry's priorities. AI, knowledge, and graphs. We did say bigger and more resource-hungry AI
models, and that description fits perfectly another of 2020's
defining moments for AI: language models. Besides costing millions to train, these models have
another issue: they don't know what they are talking about, which becomes clear under scrutiny.
But if this is the state-of-the-art AI, is there a way to improve upon it? Opinions vary.
Yoshua Bengio, Yann LeCun, and Geoffrey Hinton
are considered the forefathers of deep learning. Some people subscribe to Hinton's view that
eventually all issues will be solved and deep learning will be able to do everything. Others,
like Gary Marcus, believe that AI, as currently conflated with deep learning,
will never amount to much more than sophisticated pattern recognition. Marcus, who has been consistent in his critique of deep learning
and language models based on it, is perhaps the most prominent among the ranks of scientists
and practitioners who challenge today's conventional wisdom on AI.