🏥 Life (and) science
The public biotech industry, alongside its technology peers, has suffered a sizable erosion of enterprise value as part of the post-COVID stock market meltdown in 2022. Some
128 biotechs trade at valuations below their cash on hand, suggesting in part that investors have little confidence in the likelihood of success of their (pre)clinical assets. This might in fact be true. Consider that most public biotech companies of yesterday are single-asset companies predicated on years of academic science, meaning that they are generally created to exploit one drug (family) against a set of disease indications. Note too that the vast majority of academic science is
irreproducible. Thus, single-asset biotechs are vulnerable herbivores searching for safety in a wide-open savannah of hungry, risk-off lions.
By contrast, a new generation of AI-first biotechnology companies led by Exscientia and Recursion in public markets is flipping this single-asset biotech model on its head. These companies systematically apply software engineering, automation, and machine learning to problems along the drug discovery and development value chain. This approach means more reliable science and, crucially, a
“platform” for discovery that churns out a multitude of diverse drug assets that can be pointed at a range of disease indications. With biotechs’ value based on the likelihood of success of their drug programs, AI-first biotechnology companies should be both more resilient and more valuable in the long term thanks to their many shots on goal. Both
Recursion and
Exscientia published their Q1 2022 financial results, which include the dosing of multiple Phase 2 clinical trials and a
huge partnership with Roche/Genentech for Recursion, and a
large partnership with Sanofi for Exscientia based on technology it acquired from Allcyte. With more research groups and companies making use of emerging tools such as AlphaFold (some 400k researchers have
accessed the EMBL AlphaFold database, and some 100 papers mentioning AlphaFold are now published per month), we can expect even more shots on goal to be taken per biotech company.
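The "shots on goal" argument can be made concrete with a back-of-envelope calculation: with several independent drug programs, the probability that at least one succeeds rises quickly. A minimal sketch (the success probability and program counts are hypothetical, not real clinical success rates):

```python
# Back-of-envelope illustration of the "shots on goal" argument.
# Assume each drug program independently succeeds with probability p.
# All numbers here are hypothetical for illustration only.
def p_at_least_one_success(n_programs: int, p: float) -> float:
    """Probability that at least one of n independent programs succeeds."""
    return 1.0 - (1.0 - p) ** n_programs

single_asset = p_at_least_one_success(1, 0.1)   # one program: 0.10
platform = p_at_least_one_success(10, 0.1)      # ten programs: ~0.65
```

Even under identical per-program odds, a platform with many assets is far less likely to be wiped out by one failed trial, which is the resilience the paragraph above points at.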
Over in clinical medicine, the
first autonomous AI-first chest X-ray software was granted CE Class IIb certification in Europe. Its developer is the Lithuanian startup Oxipit. The product processes chest X-rays, flags normal scans, and sends abnormal ones for human review. The result should be a significant reduction in the primary care screening burden and, next up, a “spell check” for human-reviewed cases.
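As a rough illustration of this kind of autonomous triage workflow (not Oxipit's actual implementation; the threshold, names, and model score are hypothetical):

```python
# Illustrative sketch only: how an autonomous triage step might route
# chest X-ray studies. The abnormality score would come from a trained
# model; the threshold and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class Study:
    study_id: str
    abnormality_score: float  # model output in [0, 1]

def triage(studies, autonomous_threshold=0.05):
    """Auto-report scans the model is confident are normal;
    route everything else to a radiologist."""
    auto_reported, human_review = [], []
    for s in studies:
        if s.abnormality_score < autonomous_threshold:
            auto_reported.append(s.study_id)   # reported as normal autonomously
        else:
            human_review.append(s.study_id)    # flagged for radiologist review
    return auto_reported, human_review

auto, review = triage([Study("a", 0.01), Study("b", 0.40), Study("c", 0.02)])
```

The key design choice is that the software only acts autonomously on the easy negative cases, which is what makes the regulatory bar of a Class IIb certification reachable while still removing a large share of the screening workload.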
In cardiology, the Mayo Clinic
demonstrated how an AI system running on an Apple Watch ECG could detect a weak heart pump. It’s impressive that a wrist-worn consumer device could be used instead of today’s echocardiogram, CT scan or MRI, which are all expensive and heavy-duty. What’s more, Johns Hopkins
developed a computer vision system to predict cardiac arrests up to 10 years ahead of time using scar tissue distribution from contrast-enhanced cardiac images.
🌎 The (geo)politics of AI
Clearview AI came under fire in
Italy, the
UK, and the
US in the last two months because it had scraped billions of public photos without the consent, or knowledge, of the individuals involved. Italy fined the company €20M for breaches of EU law, including the GDPR. The UK followed with a fine of over £7.5M for similar violations. In the US, Clearview was forced to settle a 2020 lawsuit accusing the company of violating BIPA, an Illinois privacy law that
had previously bitten Meta. As a result of this settlement, Clearview is required to stop selling its database to most US companies. A few days ago, the company
announced that it was expanding the sale of its facial recognition software beyond police, to US companies, through a new
“consent-based” product.
It’s not clear yet whether Russia or Ukraine has
used AI-enabled weapons in Ukraine. But it’s interesting to note that AI-enabled tools peripheral to the military have proved useful to Ukraine in wartime: US-based startup Primer
helped Ukrainian forces process unencrypted Russian radio transmissions, and Clearview AI
allowed them to identify Russian soldiers using facial recognition. During large-scale war efforts, factories – and indeed the entire industrial sector – used to be repurposed to serve the military. Compared with the large capital expenditures involved back then, repurposing software today is almost frictionless. Seemingly innocuous software could prove decisive in future wars. We hope we’ll never know for sure.
As the world woke up to the importance of the semiconductor industry, TSMC’s home country, Taiwan,
made economic espionage a crime punishable by up to 12 years in prison. A very interesting thread
here on the timeline leading to this decision.
Spinning out of university is a favored route for AI founders in the UK: spinouts represent 4.3% of all AI companies, while only 0.03% of all UK companies are spinouts. A UK government report on AI commercialisation
sheds light on the importance of improving the country’s spinout policy in order to fully realize its universities’ potential. We published a
thread with highlights of this report, and
another one on recently published Beauhurst data on UK spinouts.
🍪 Hardware
Faced with increasing competition from specialized AI chip manufacturers, incumbents Intel and NVIDIA are quickly stepping up their AI game. Intel
announced a new AI chip, Habana Gaudi 2, built by Habana Labs, a company acquired in 2019. Intel
says it is twice as performant as NVIDIA’s ubiquitous A100s. In the meantime, NVIDIA
began selling the H100 “Hopper”, an AI processor that will compete with Google’s TPU v4 on training large-scale, memory-hungry machine learning models. Meanwhile, word on the street is that Google’s TPU effort has essentially been disbanded and is run by a skeleton team…
Arm’s $40B acquisition by NVIDIA was
abandoned in February, as we predicted in the
State of AI Report in 2020. Amid a catastrophic year for SoftBank’s Vision Fund, Arm
is set for an IPO (somehow).
🏭 Big tech
In the past few months, big tech companies have shown willingness to invest financially and scientifically in the fight against climate change. Meta and Alphabet have
joined a $1B fund launched by Stripe called Frontier. Meta also
said it was using an AI-designed concrete in its data centers that emits 40% less carbon than other concrete mixtures. Some have suggested that the baseline they compared against is weak, but this is another useful illustration of the use of AI in materials discovery, a flourishing application field that we’re excited about. Meta has been very active in this space, for example through the
Open Catalyst Project, which aims to discover new catalysts to help large scale production of green hydrogen.
But what about the climate impact of the models which are trained for these discoveries? Google
identified four practices to reduce the carbon footprint of ML models: using efficient model architectures such as sparse models, using hardware optimized for ML, using cloud computing rather than on-premises infrastructure, and training models in “green” datacenters. Google says these practices allow it to reduce energy consumption by 100x and emissions by 1,000x.
Meta AI became the first large AI lab to (almost) fully
open-source its large language model (LLM), called OPT-175B. Both the pretrained model (all 175B parameters) and the source code used for training will be made available upon request. Notably, the model was trained on publicly available datasets. Another similar initiative is currently underway as part of the
BigScience project, which aims to train an LLM and make it available to the ML community. The project is led by Hugging Face among others but is organized as an open workshop. It’s so open that you can even follow the model’s training logs
here. The next step: multimodal models. As a setup to achieve this, LAION, a non-profit organization advocating for open research in AI,
published LAION-5B, the largest publicly available image-text dataset.
The battle for the best image generation model is raging. OpenAI
released DALL-E 2, the second version of its previously successful GPT-3-based
DALL-E. But DALL-E is no longer the only game in town. Google
announced Imagen, its own model for generating photorealistic and artistic images. Beyond their differences, what is interesting – and new – about these models is that they are both based on diffusion models, a new(ish) class of models that has been the state of the art in generative modeling for the past two years or so. Check out a few samples from both models, and more,
here. Another notable text-to-image generation model is Tsinghua’s
CogView2, which supports both English and Chinese.
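Since diffusion models are the common ingredient behind DALL-E 2 and Imagen, here is a minimal sketch of their core idea, the forward noising process, on toy 1-D data (NumPy only; the noise schedule and shapes are illustrative, and real systems additionally learn a text-conditioned denoising network to run the process in reverse):

```python
# Toy sketch of the diffusion-model idea on 1-D data.
# The linear beta schedule below is a common illustrative choice,
# not the exact schedule used by DALL-E 2 or Imagen.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # per-step noise amounts
alphas_bar = np.cumprod(1.0 - betas)    # cumulative signal retained by step t

def q_sample(x0, t, rng):
    """Forward process: produce a noised version of clean data x0 at step t."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = np.ones(4)                   # pretend "image": 4 pixels of value 1
x_early = q_sample(x0, 10, rng)   # barely noised, still close to x0
x_late = q_sample(x0, T - 1, rng) # almost pure Gaussian noise
# A trained model learns to predict the added noise eps, which lets it
# invert this process step by step, turning noise into data.
```

The appeal of this formulation is that training reduces to a simple denoising objective at random steps t, which has proved more stable than adversarial training for large-scale image generation.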
All but one author from Google’s celebrated
Attention is All You Need paper – which introduced the Transformer architecture – have
left the company. The latest to leave, research scientists Ashish Vaswani and Niki Parmar, did so to
launch Adept, a company that promises to automate the way individuals and businesses interact with computers, alongside David Luan from OpenAI.
Google was criticized again for the firing of a researcher, an NYT article reported. The article implied that the researcher was fired because he had criticized the
work of other Google and Stanford researchers. While this might at first be reminiscent of the firing of
Timnit Gebru and
Margaret Mitchell, this time it was different. Several academics
agreed that the researcher
“had waged a years-long campaign to harass & undermine [the] work” of the first authors of the work in question. Dr. Gebru herself
expressed her discontent over the parallel drawn between her and the fired researcher in the NYT article.