🏥 Healthcare and life science
Genomic medicine, which uses a patient’s genetic information to inform diagnosis and treatment, is an area that’s ripe for AI applications. As a primer on this topic, a report
from the PHG Foundation in Cambridge is worth a read. It evaluates why AI is relevant to genomic medicine, which applications are out in the wild today, and policy considerations for further adoption. Part of the reason we don’t see more AI in genomic medicine is limited access to quality datasets (an issue that Pearse Keane
discussed at our RAAIS conference last week), imbalanced datasets that result in algorithmic bias, and immature technology infrastructure to support the adoption of predictive systems in healthcare.
One solution to these problems is federated learning (FL), whereby an ML model is sent to train where a dataset lives (rather than the other way around, as in traditional ML). Many FL initiatives (academic, open source and in startups) focus on healthcare use cases. In this perspective paper
, the authors present an “overview of current and next-generation methods for federated, secure and privacy-preserving artificial intelligence with a focus on medical imaging applications, alongside potential attack vectors and future prospects in medical imaging and beyond.” The driving motivation is the bounty of opportunities for ML in healthcare that are rate-limited by the difficulty of safely accessing data without breaching patients’ privacy.
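To make the “model travels to the data” idea concrete, here is a minimal sketch of federated averaging (FedAvg) on a toy 1-D linear model. The client datasets, learning rate and round count are all hypothetical illustrations, and real systems layer secure aggregation and differential privacy on top:

```python
# Minimal sketch of federated averaging (FedAvg): the model travels to
# each client's private data, trains locally, and only the updated
# weights return to the server, which averages them.
# The toy model (y = w * x), client datasets, learning rate and round
# count below are hypothetical illustrations.

def local_update(w, data, lr=0.05, epochs=5):
    """One client's local training on its private (x, y) pairs."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # gradient of squared error
            w -= lr * grad
    return w

def federated_round(w_global, clients):
    """Server sends the global weight out, then averages the returns."""
    local_weights = [local_update(w_global, data) for data in clients]
    return sum(local_weights) / len(local_weights)

# Two hospitals whose data never leaves their sites; both follow y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(10):
    w = federated_round(w, clients)
# w converges towards the true slope of 2.0
```

In practice the averaging is weighted by client dataset size, and the weight updates themselves can leak information about the data, which is why the paper also covers attack vectors and mitigations.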
Another area of therapeutics that’s heating up is protein engineering. LabGenius (an Air Street Capital portfolio company) shared
their 8-step approach to AI-first protein engineering applied to therapeutic discovery and development. This is a great primer if you’re interested in the space.
🌎 The (geo)politics of AI
As a field that’s strongly rooted in academia, the AI industry needs a steady flow of post-graduate talent. A study
of 175 randomly selected NeurIPS 2019 papers found that of the 128 authors who held undergraduate degrees from China (30% of the sample set), over half went on to earn graduate degrees in the US and currently work in the US. Although this trend has been apparent on the ground for a while, it is the US’s ability to remain a talent magnet that is now being questioned more than ever. Cast against the government’s trade sanctions, company-non-grata lists, H-1B visa freezes and overall policy hostility towards China, things aren’t looking good. For the post-graduation work share to really shift from the US to elsewhere (e.g. Europe), however, contending nations need to embrace immigration, open up talent visas, have companies and universities pay competitive salaries, grow government expenditure on R&D, and the like. Perhaps fortunately for the US, this wish list doesn’t appear to be materialising. I also suspect that US companies will be more willing (given COVID work from home) to establish foreign subsidiaries to keep hold of talent that is forced to leave the US.
On the topic of talent and training, Eric Schmidt’s foundation Schmidt Futures confirmed a donation
to the University of Cambridge to establish the Accelerate Program for Scientific Discovery, an initiative that provides machine learning training to PhD students in the sciences.
🚗 Autonomous everything
Self-driving car companies are hitting the streets again after their COVID-induced hiatus that started in mid-March. Minivans from Waymo
and cars from Lyft
are out testing in California. Meanwhile, Aurora confirmed
that it has been working on applying its self-driving technology to trucks. Co-founder Sterling Anderson affirmed that the safest application of AVs is indeed trucks.
The US’s National Highway Traffic Safety Administration unveiled
AV TEST, a new initiative to provide “an online, public-facing platform for sharing automated driving system on-road testing activities”. The initiative spans
eight US states and nine self-driving companies including Waymo, Uber, Cruise and Nuro.
Meanwhile, large investments continue to accrue to the largest players. Argo closed a $2.6B deal
with Volkswagen, which sees VW’s Munich-based Autonomous Intelligent Driving unit folded into Argo. Ford and VW will now share the costs of developing Argo.
Amazon signed an agreement
to acquire Zoox for a reported $1-1.2B, just a pinch more than the amount of venture capital the company had raised. Of note, Amazon will let Zoox run as a standalone company on its mission to reinvent the autonomous car. It is expected that Zoox vehicles could plug into Amazon’s logistics network to further compress delivery times and let the company own more of its value chain.
From ground AVs to the air, Airbus demonstrated
the world’s first fully automatic, vision-based taxi, takeoff and landing of an A350 aircraft.
💪 The giants
Following the release of their 175 billion parameter GPT-3 language model, OpenAI announced
they have developed an API for third-party developers to access new models developed by the company. The (private beta) API runs models with weights from the GPT-3 family and offers a general-purpose text-in, text-out interface. You can program the API by giving it a few examples of the task you’d like it to solve. OpenAI gives examples of chat, semantic search, table completion in Excel, translation, and text generation. The company received mixed reviews on this release. Some saw it as an unwelcome commercial move given OpenAI’s original non-profit mission and its strong reluctance to release model weights when the original paper was published, for fear of misuse. Others, however, welcomed the release. I think it’s a positive development because having a production software service at the end of a multi-year R&D project ensures that the work is actually useful in the real world. The API provides useful abstractions to widen the relevance of GPT R&D, and the limited beta lets the company audit use cases ahead of approval.
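The “program it with a few examples” workflow amounts to few-shot prompt construction. A sketch of what that looks like, where `build_prompt` and `complete()` are hypothetical stand-ins (the latter for the private-beta API client), not OpenAI’s actual code:

```python
# Few-shot prompting: the "program" is just a task description plus
# worked examples packed into the text-in prompt. Names and example
# pairs here are illustrative, not from the API itself.

def build_prompt(task, examples, query):
    """Assemble a few-shot prompt from a task description and examples."""
    parts = [task]
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append(f"Input: {query}\nOutput:")  # the model completes this line
    return "\n\n".join(parts)

prompt = build_prompt(
    "Translate English to French.",
    [("cheese", "fromage"), ("good morning", "bonjour")],
    "thank you",
)
# response = complete(prompt)  # hypothetical stand-in for the API call
```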
Snap announced Lens Studio 3.0, which introduces SnapML
. This service lets third-party developers add their own ML models to the Lenses they create and publish to consumers in the Snapchat application. Aside from Niantic’s mobile games, Snap’s Lenses are among the few extremely popular AR features running on mobile devices. The company also released PlantSnap and Dog Scanner, which, as the names suggest, let users point their Snapchat camera at plants and dogs to recognise species and breeds, respectively. It’s hard to say whether this is a gimmick or whether user data suggests that consumers scan dogs and plants a lot. I’d be willing to bet that the former is true :-)
Facebook shared the results of their open Deepfake detection challenge
that drew over 2,000 participants and more than 35,000 model submissions on a dataset of videos produced by 3,500 paid actors. These real videos were altered using a variety of deepfake generation models and refinement techniques. The task for participants was to classify real vs. fake videos on a public dataset as well as on a black-box dataset they had not previously seen. Although the best model submission achieved 82.56% average precision on the public dataset, the same model dropped to 65.18% average precision on the black-box dataset. This outcome highlights how hard it is for models to generalise to hitherto unseen samples, a task of critical importance for robust, real-world ML systems. What’s also interesting is that all the winning submissions used pretrained EfficientNet networks fine-tuned on the deepfake dataset. EfficientNet
is a family of computer vision models generated using neural architecture search; they are both smaller (by parameter count) and faster (by inference speed) than the best existing convolutional neural networks at the time (2019).
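The challenge’s average precision metric rewards ranking the fakes highly. A from-scratch sketch of the metric (not the competition’s evaluation code; the labels and scores are toy values for illustration):

```python
def average_precision(labels, scores):
    """Precision averaged over the ranks at which true positives appear."""
    ranked = sorted(zip(scores, labels), key=lambda pair: pair[0], reverse=True)
    hits, ap = 0, 0.0
    for rank, (_, label) in enumerate(ranked, start=1):
        if label == 1:
            hits += 1
            ap += hits / rank  # precision at this recall step
    return ap / sum(labels)

# Toy example: two fakes (label 1) among four videos, ranked by score.
ap = average_precision([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
# ap == (1/1 + 2/3) / 2, i.e. about 0.833
```

A model that ranks every fake above every real video scores 1.0; random ordering scores roughly the fraction of fakes in the set, which is why the drop from 82.56% to 65.18% on unseen data is so telling.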
News emerged that Graphcore (an Air Street Capital portfolio company) has shipped
tens of thousands of its IPU processors to some 100 customers around the world.
Cerebras announced that it is building a supercomputer
with the Pittsburgh Supercomputer Center thanks to a $5M grant from the NSF. The supercomputer uses two of Cerebras’ CS-1 machines, which Andy Hock
described at RAAIS last week. The company is expected to share details about its second-generation system at the upcoming Hot Chips conference.
NVIDIA demonstrated
a 20x performance speedup for ETL operations on the TPCx-BB benchmark. Their approach compared a system using sixteen DGX A100 systems (128 GPUs) to a CPU-based system.
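The ETL work being timed consists of standard dataframe operations (filter, join, group-and-aggregate), which RAPIDS runs on GPUs through cuDF’s pandas-like API. A pure-Python sketch of the kind of group-and-sum step involved, with made-up rows for illustration:

```python
# Group-and-aggregate: bread-and-butter ETL of the kind such benchmarks
# time. In cuDF/pandas this is df.groupby("category")["revenue"].sum();
# the rows below are made up for illustration.
from collections import defaultdict

rows = [
    {"category": "books", "revenue": 12.0},
    {"category": "games", "revenue": 30.0},
    {"category": "books", "revenue": 8.0},
]

totals = defaultdict(float)
for row in rows:
    totals[row["category"]] += row["revenue"]
# totals == {"books": 20.0, "games": 30.0}
```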
Apple announced
that it is moving away from Intel-based processors to its own ARM-based designs for its future products, giving it even deeper full-stack control over its hardware. From a product perspective, Apple Silicon integrates a large number of features
that would otherwise not be included in a third-party CPU. These include high-efficiency audio processing, low-power video playback, power management, secure enclaves, a neural engine, cryptography acceleration, and more. This means that, for the first time, all Apple hardware products will be able to run similar software, which potentially allows the millions of iOS apps to run on the Mac (which currently has some 28,000 apps in its store).
Cambricon, a privately-held Chinese unicorn that designs AI-focused processors, is reportedly
seeking an IPO on China’s new Nasdaq-like exchange, the Star Market. Of note, the company sold technology to Huawei for inclusion in its first AI chip (Kirin)-powered smartphones. However, Huawei’s semiconductor design division, HiSilicon, has since doubled down on its own development, leaving a dent in Cambricon’s revenue.
Intel announced
its first AI-optimised FPGA, which incorporates a block tuned for tensor matrix multiplications. Intel claims that this device is up to 2.3x faster than NVIDIA’s V100 GPU for BERT batch processing.