
🐣 Your guide to AI: March 2021

Nathan Benaich, Air Street Capital
Hi all!
Welcome to Your guide to AI. Here you’ll find an analytical (and opinionated) narrative covering key developments in AI over March 2021. 
If you’d like to chat about something you’re working on, share some news before it happens, or have feedback on the issue, just hit reply! 
If you enjoyed the read, I’d appreciate you hitting forward to a couple of friends 🙏

🆕 AI in Industry
🏥 Life (and) science
In his popular column for Science Translational Medicine, Derek Lowe rightly pressed AI developers to “attack the right problems” if they want to have an impact on drug discovery. In short, this means pointing AI to the later stages of drug discovery. Why? This is where potential improvements have the most impact based on the cost of capital, expected return from a new drug, patent lifetimes, etc. However, most computational drug discovery companies pitch “No more stumbling around screening piles of molecules! No more tedious property optimization!”, Lowe writes, but “the real problem is having drug candidates fail in the clinic. All that other stuff is a roundoff error compared to the clinical failure rate.” He wraps by concluding, “what everyone wants are AI systems, computational techniques, and models that will reduce all that finger-crossing and tachycardia, but that’s, unfortunately, some ways off.” For the record, I’m with you, Derek. 
I also think we have a compelling answer to talk about. I’m hugely excited to publicly discuss our investment in Vienna-based Allcyte, a “startup that has found a way to uncover bespoke cancer therapies”, as per Jeremy Kahn’s story in Fortune. Today, drugs are moved into clinical trials because they exhibit favorable activity in preclinical systems such as animal models and cell lines. However, humans are not mice, and cell lines don’t recapitulate the complexity of human cancer and its (immune) microenvironment. This is where Allcyte steps in. The company uses high-content microscopy and computer vision to interpret the activity of cancer drugs directly in viable, primary cancer tissue from the patient at the single-cell level. In doing so, Allcyte moves beyond genomic testing of cancer patients to identify drugs that could work - an approach under which 90% of lung cancer patients do not respond to the suggested therapies - and into functional precision medicine at the cellular level. This isn’t just research. At the American Society of Hematology’s annual conference in December 2020, Allcyte showed results from a clinical study of oncologists who used its technology on 143 late-stage blood cancer patients who had exhausted all known treatment options. Allcyte’s system could find a drug amongst 136 candidates that led to longer patient survival in 55% of the 56 cases in which patients were treatable. This is a major win for pharma and patients. Indeed, Allcyte’s technology is already in use with 10 major pharma companies, helping them build confidence in advancing drugs into clinical trials only after validating that the drug works in what is a “mini clinical study” on primary patient tissues. 
The Broad Institute, a powerhouse of computational biology, received a $150M gift from Eric and Wendy Schmidt to significantly expand its work in AI-first biology. We can expect this to drive even more R&D, open-sourcing, and company creation. Recall that Recursion, which filed to go public on the NASDAQ this month, made use of the Broad’s CellProfiler at the very beginning of its life to showcase how cellular phenotypes can be efficiently interpreted with computer vision. 
Clue joined Natural Cycles as one of the two FDA-cleared digital solutions for birth control. These products predict the chances of pregnancy using either the start date of the user’s period (Clue) or daily temperature measurements (Natural Cycles). As the user inputs more cycle data, the product can narrow the risk window down to two weeks or less. 
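For intuition, here’s a minimal sketch of the calendar-based idea: estimate the next fertile window from past cycle start dates, widening it with observed cycle variability. This is purely illustrative - it is neither company’s actual algorithm, and the 14-day and 5-day figures are textbook rules of thumb, not product parameters.

```python
from datetime import date, timedelta
from statistics import mean, stdev

def fertile_window(cycle_starts):
    """Estimate the next fertile window from past cycle start dates.

    Illustrative calendar method only: ovulation is assumed ~14 days
    before the next predicted period, and the window is padded by the
    observed cycle-length variability. Not a cleared medical device.
    """
    lengths = [(b - a).days for a, b in zip(cycle_starts, cycle_starts[1:])]
    avg, spread = mean(lengths), stdev(lengths)
    next_start = cycle_starts[-1] + timedelta(days=round(avg))
    ovulation = next_start - timedelta(days=14)
    # Sperm survive ~5 days; widen the window when cycles are irregular.
    pad = timedelta(days=round(spread))
    start = ovulation - timedelta(days=5) - pad
    end = ovulation + timedelta(days=1) + pad
    return start, end

# Three past cycle start dates, roughly 28-29 days apart.
starts = [date(2021, 1, 4), date(2021, 2, 1), date(2021, 3, 2)]
window = fertile_window(starts)
print(window)
```

The key product insight is visible even in this toy: more logged cycles means a tighter estimate of `spread`, which is what shrinks the risk window over time.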
🌎 The (geo)politics of AI
Last month’s newsletter looked at the UK’s AI Roadmap and evidence for Chinese funding of defense-related AI research at publicly-funded UK institutions. Over in the US, the National Security Commission on Artificial Intelligence - a bipartisan commission led by former Google CEO Eric Schmidt and former US Deputy Secretary of Defense Robert Work - released its final report on how the US can “win the AI era”. It presents a strategy for the US to defend against AI threats, employ AI for national security in a responsible manner, and “win the broader technology competition for the sake of our prosperity, security, and welfare.” The authors are clear about what is at stake: 
“For the first time since World War II, America’s technological predominance—the backbone of its economic and military power—is under threat. China possesses the might, talent, and ambition to surpass the United States as the world’s leader in AI in the next decade if current trends do not change. Simultaneously, AI is deepening the threat posed by cyber-attacks and disinformation campaigns that Russia, China, and others are using to infiltrate our society, steal our data, and interfere in our democracy. The limited uses of AI-enabled attacks to date represent the tip of the iceberg. Meanwhile, global crises exemplified by the COVID-19 pandemic and climate change highlight the need to expand our conception of national security and find innovative AI-enabled solutions.”
Their recommendation is to drive for widespread integration of AI in the workforce (civilian and government) and the military by 2025. This includes a US Digital Service Academy to train talent for government and a push for Congress to pass a National Defense Education Act II to invest in fellowships across the higher education stack. 
On the funding front, the government should double non-defense funding for AI R&D to $32B per year by 2026. It should triple the number of National AI Research Institutes, develop a National AI Research Infrastructure for cloud computing and training data, and reform IP policies to favor patenting of AI in the US. 
The US is also called on to shore up resources for designing and fabricating microelectronics after almost entirely outsourcing these capabilities to Asia in the last decades. It must also modernize export controls and screen foreign investment to protect US enterprises. The report puts it clearly: “the U.S. supply chain for advanced chips is at risk without concerted government action. Rebuilding domestic chip manufacturing will be expensive, but the time to act is now.”
On the defense front, the Pentagon must drive organizational reforms, develop new warfighting concepts and weapons, and augment and focus the DoD’s AI R&D portfolio. Relatedly, President Biden signed an executive order mandating a 100-day review of supply chains for semiconductors, large-capacity batteries for EVs, pharmaceuticals, and rare-earth metals. The outcome of this review could add more urgency to infrastructure spending for supply chain onshoring. All the while, the US is treading a careful path of overhauling its military-industrial complex while hopefully avoiding the more expansive military-civil fusion that we see in China. 
Over in the UK, Boris Johnson has said that the £16.5B defense budget boost announced last year would be used to fund a new military research center for AI. The government will invest in AI technologies, unmanned aircraft, directed energy weapons, and other battlefield use cases. Various British and international startups are involved in this program, such as Improbable Defence, Adarga, and Rebellion Defence.
Sam Altman wrote a long piece entitled Moore’s Law for Everything in which he paints a future for the American economy once we reach AGI. He argues that this revolution will create phenomenal wealth because the price of most (if not all) labor will fall to zero. This wealth, however, needs to be distributed widely to enable more people to participate and to collectively raise the standard of living. To do so, he makes the case for an American Equity Fund, which I think is a good one. The idea is that because machines will do the labor, costs will fall precipitously. So rather than taxing labor, we should be taxing capital. For everyone to participate in the accrual of value by companies that produce learning machines, the American Equity Fund would tax companies above a certain valuation in the form of shares. All adult citizens would receive an annual distribution from the American Equity Fund and would be free to spend it however they wish. I think we could further incentivize long-term ownership by implementing a capital gains rate that decays with the holding period. I also think that a national equity fund like this one could help counter the sentiment that big tech is bad and takes advantage of consumers. If consumers earned equity (not discount vouchers!) in the companies they transact with regularly, I think we’d have much more alignment. 
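The decaying capital gains idea can be made concrete with a toy calculation. Every rate and threshold below is hypothetical - chosen only to show the mechanics, not taken from Altman’s essay:

```python
def capital_gains_rate(years_held, base_rate=0.30, floor=0.05, decay=0.05):
    """Toy schedule: the rate falls by `decay` per year held, down to a floor,
    rewarding long-term ownership over quick flips."""
    return max(floor, base_rate - decay * years_held)

def annual_distribution(total_fund_value, num_adults, payout_rate=0.02):
    """Every adult citizen receives an equal slice of a fixed payout rate
    on the equity fund's total value."""
    return total_fund_value * payout_rate / num_adults

# A holder who keeps shares for 3 years pays 30% - 3*5% = 15% on gains.
rate = capital_gains_rate(3)
# A $5T fund paying out 2% a year across 250M adults: $400 per person.
payout = annual_distribution(5e12, 250e6)
print(rate, payout)
```

The point of the decay schedule is that the marginal incentive to hold is largest in the early years, which is exactly when speculative selling pressure is highest.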
🍪 Hardware
Apple made a big move by announcing a €1B investment into the Munich area to develop a European Silicon Design Center. It will encompass a 30,000 square meter facility for 1,500 engineers to focus on power management design, application processors, and wireless technologies including 5G. The statement deepens Apple’s commitment to Germany where in 2015 it opened its Bavarian Design Center for 350 engineers. This facility created custom silicon to deliver improved performance and power efficiency for several Apple products including the iPhone, iPad, and M1-based Mac. By 2019, Apple opened more silicon engineering sites in Germany and a new radio technology site in Austria. 
Meanwhile, Taiwan is suffering the worst drought in 56 years, forcing the restricted use of water. In addition to the ecological challenges this poses, Taiwan’s semiconductor industry will also take a hit because it relies on massive amounts of water. TSMC needs 156,000 tons of water per day, which is ⅓ of all water used in Taiwan’s key science parks. The company does reuse almost 90% of its water. 
Taiwan has also accused Bitmain, one of China’s biggest crypto computing infrastructure companies, of illegally running two research centers in Taiwan and poaching employees from TSMC. When registering in Taiwan, these Chinese companies did not disclose their activities in chip design or research. 
Intel’s CEO made a splash by stating the company would double down on domestic semiconductor manufacturing by investing $20B to build two new chip fabs in Arizona. “Intel is back. The old Intel is now the new Intel”, he said. This temporarily shot TSMC’s share price down by 10%. But money is one thing - TSMC remains the dominant player technology-wise: the company captures almost 90% of the $21B in 2020 revenue for the most advanced 10nm-5nm processes. Indeed, recall that Apple ditched Intel chips in favor of its own line of Arm-based processors. 
Graphcore pushed major updates to its Poplar software stack, which enables the scale-out capabilities of the IPU compute system. 
Facebook Reality Labs (and the CTRL-Labs team) demonstrated wristbands that use electromyography to translate neural signals from muscles into actions while also providing haptic feedback. These wristbands will be compatible with VR experiences too. 
In China, there may be a hint of popular pushback against facial recognition. A state-controlled broadcaster, China Central Television (CCTV, no pun intended) ran a 10-minute investigative segment where undercover reporters talked to surveillance camera makers who showed how cameras could “recognize and document a person’s age, ethnicity, and even emotional state. The cameras also successfully identified return customers, allowing sellers to call up purchase histories in real-time.” A draft law that could pass in 2021 would let the government regulate how facial recognition is used in the commercial domain. In particular, it suggests that personal identification should only operate in “public venues” for the purposes of public security. 
🏭 Big tech
In Japan, Honda launched its first Level-3 ADAS vehicle, which is able to pass slow-moving vehicles without driver intervention. Similarly, Volvo’s 550-person self-driving unit Zenseact and LiDAR-maker Luminar announced a highway autopilot system that will be offered to third parties and on specific Volvo cars. Luminar is also providing its sensor to SAIC Motor, one of China’s largest automakers, and will open a Shanghai office.  
In the US, several states have passed legislation allowing delivery robots such as Starship’s to operate on sidewalks. These include Pennsylvania, Virginia, Idaho, Florida, and Wisconsin. 
OpenAI shared that 9 months after launching their API, more than 300 applications are using GPT-3. The API service generates 4.5 billion words per day on average. 
Amazon began using facial recognition and biometric tracking via Netradyne cameras for its 75,000 delivery drivers in the United States. This is sparking controversy because drivers have no choice but to opt in if they wish to keep their jobs.
🔬 Research
Here’s a selection of impactful work that caught my eye, grouped into categories:
Scaling local self-attention for parameter efficient visual backbones, Google Research. Self-attention has expanded from NLP to computer vision because it offers parameter-independent scaling, in contrast to the parameter-dependent scaling of the convolutions that dominate computer vision today. This paper pushes the envelope of self-attention by proposing two extensions (blocked local attention and attention downsampling) that together improve the speed, memory usage, and accuracy of the resulting models. A new self-attention model family, HaloNets, outperforms much larger models in a transfer learning setting on ImageNet-21k and has better inference performance than very strong models such as BiT and ViT. Watch this space!
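The “parameter-independent scaling” point can be seen with a back-of-the-envelope count (a simplification that ignores biases, relative position embeddings, and multi-head details): a convolution’s weight count grows with its kernel area, while self-attention’s Q/K/V projections stay the same size no matter how large the attended neighborhood is.

```python
def conv_params(channels, kernel_size):
    """Channel-preserving 2D convolution: weights scale with kernel area."""
    return channels * channels * kernel_size * kernel_size

def attention_params(channels):
    """Q, K and V projection matrices: independent of the attention window."""
    return 3 * channels * channels

c = 64
print(conv_params(c, 3))    # 36864 weights for a 3x3 kernel
print(conv_params(c, 9))    # 331776 - a 9x growth for a 9x9 kernel
print(attention_params(c))  # 12288 - unchanged for any window size
```

Growing a convolution’s receptive field costs parameters quadratically in kernel size; growing an attention window costs compute and memory but no extra weights, which is what makes the speed/memory extensions in this paper the binding constraint to attack.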
Multimodal neurons in artificial neural networks, OpenAI. Earlier this year, OpenAI published CLIP, a neural network that learns visual concepts from natural language supervision. In this work, they use interpretability methods to show that CLIP contains multimodal neurons: neurons that activate in response to a concept (e.g. Spider-Man) whether it’s represented as an image, an image of text, or a sketch. They suggest that CLIP’s multimodal neurons are similar to the famous Halle Berry neuron discovered 15 years ago in the human brain. To go deeper into CLIP, here’s a post that describes how it works.
SEER: The start of a more powerful, flexible, and accessible era for computer vision, Facebook AI. This paper describes a new billion-parameter self-supervised computer vision model. It pre-trains on a billion random, unlabelled and uncurated images from Instagram and is able to reach 84.2% top-1 accuracy on ImageNet or 77.9% top-1 accuracy when using 10% of the ImageNet dataset. To train this model, the authors made use of SwAV - a self-supervised learning approach that uses online clustering to rapidly group images with similar visual concepts and leverages their similarities. They open source a PyTorch-based library for self-supervised training called VISSL.
Haplotype-aware variant calling enables high accuracy in nanopore long-reads using deep neural networks, UC Santa Cruz Genomics. In genome sequencing, it’s not only a game of cost per genome but also of read length. Genomes are assembled computationally by layering DNA fragments of hundreds of base pairs on top of one another using their overlapping segments. Newer technologies like Oxford Nanopore offer long reads spanning hundreds of thousands of base pairs. The challenge has been accuracy and throughput. This paper presents a method for interpreting changes in base pairs from long sequence reads (called variant calling) using recurrent neural networks. 
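This is not the paper’s haplotype-aware RNN, but a minimal sketch of what “variant calling” means: given the reads piled up over one reference position, call a variant when enough reads disagree with the reference base. The thresholds below are arbitrary illustrations.

```python
from collections import Counter

def call_variant(ref_base, pileup, min_fraction=0.3, min_depth=5):
    """Naive pileup caller: report the most common non-reference base
    if it is supported by enough of the reads covering this position."""
    if len(pileup) < min_depth:
        return None  # not enough coverage to make a confident call
    counts = Counter(pileup)
    alt, n = max(((base, c) for base, c in counts.items() if base != ref_base),
                 key=lambda bc: bc[1], default=(None, 0))
    return alt if n / len(pileup) >= min_fraction else None

# Eight reads cover a position whose reference base is 'A'.
print(call_variant("A", list("AAGGGGAA")))  # 'G': supported by 4 of 8 reads
print(call_variant("A", list("AAAAAAGA")))  # None: 'G' is below threshold
```

The hard part that deep learning addresses is that nanopore long reads have high per-base error rates, so naive majority voting like this produces false positives; the paper’s neural network learns which disagreements look like sequencing error versus a true variant.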
M6: A Chinese multimodal pre-trainer, Alibaba and Tsinghua. This paper introduces two main contributions: the first is the largest multimodal pre-training dataset, the M6-corpus, in Chinese and the second is a cross-modal pre-training method called M6, which they scale to 100B parameters. This model is based on the transformer and is pre-trained with multiple tasks including visual Q&A, poem generation, and text-to-image generation. The dataset was built using web crawling and machine learning-based QA steps to filter for quality and spam. 
Quality at a Glance: an audit of web-crawled multilingual datasets, lots of authors. Speaking of large datasets for pre-training, this paper manually audits the quality of 205 language-specific corpora released with five major public datasets. It also audits the correctness of language codes in a sixth dataset. The purpose is to determine whether datasets that are used for pre-training are of good quality and whether they actually contain content in the languages they’re meant to. The authors find problems, in particular with low-resource languages, where corpora are completely erroneous or contain a significant amount of sentences that are not of acceptable quality. Moreover, 40% of the corpora include nonstandard or ambiguous language codes or are just mislabeled. 
Learning from videos to understand the world, Facebook AI. A key strand of AI research is how to endow systems with a common-sense understanding of the world. One promising avenue is to learn common sense from video. In previous editions of the newsletter, we’ve discussed Twenty Billion Neurons and their crowd-acting approach to video understanding. This paper introduces the Learning from Videos project at Facebook, which is designed to build AI that automatically learns multilingual audio, textual and visual representations from publicly available videos on Facebook. They describe how the company took 6 months to implement a self-supervised framework for video understanding within Instagram Reels’ recommendation system. Here, a model that learns both audio and visual encoders of video content is used to automatically learn “themes” from popular videos based on their use of related audio tracks. 
Causal analysis of agent behavior for AI safety, DeepMind. This paper introduces a methodology for investigating and uncovering the causal mechanisms that underlie a black-box learning agent’s behavior. It focuses on the question of why and makes use of a simulator to produce test environments, allowing the agent’s reactions to various inputs and interventions to be recorded. The authors then introduce a causal reasoning engine in which a researcher formally expresses assumptions by encoding them as causal probabilistic models and evaluates their hypotheses. It seems that the quality of this causal analysis relies on the quality of the researcher’s priors. 
Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans, Cambridge University. During the pandemic, lots of papers sprung up to suggest that computer vision could provide accurate detection and prognostication of COVID-19 from radiographs and chest CT scans. From a pool of 2,212 studies, the authors screened 62 studies that passed basic quality assurance. They found that none of the models are of potential clinical utility due to methodological flaws and/or underlying biases. This highlights the importance of both peer review and rigorous pre-clinical and clinical testing of AI systems, as discussed in prior editions of this newsletter. 
Emerging behavior of our driving intelligence with end-to-end deep learning, Wayve. The company provides video evidence to show how its vehicles manage unprotected right turns in city traffic. Their system interprets the world using camera-first computer vision and learns to drive using imitation learning and reinforcement learning, both in simulation and on public roads.
📑 Resources
The 2021 AI Index Report is out. You can find lots of great charts that track trends in research, talent, and state-of-the-art benchmarks, amongst others. I contributed to Section 2.7 Healthcare and Biology on page 76, in which Philippe Schwaller and I showed the progress of machine learning on molecular synthesis. 
The UK’s Turing Institute published a policy briefing on the gender job gap in AI. The work finds that women in AI are not only vastly outnumbered by men (ref: Margaret Mitchell’s AI has a “sea of dudes problem”), but also have higher job turnover and attrition rates than men. And yet women have higher formal educational levels than men across all industries. The briefing includes recommendations such as setting reporting standards regarding gender and other workforce characteristics, and the widespread implementation of gender-inclusive labor market policies.
In the market for an AI-based medical imaging solution in radiology? Consider the new ECLAIR guidelines.
Facebook AI published a deep dive on self-supervised learning, which they believe to be a major accelerant of AI breakthroughs. 
Waymo expanded their Open Dataset to include behavior prediction and motion forecasting training data. The next challenges focus on motion prediction, interaction prediction, and real-time 2D and 3D detection. This follows Lyft’s Motion Prediction Dataset from 2020. 
A group of researchers open-sourced SpeechBrain, a PyTorch-based toolkit for speech recognition, speaker recognition, spoken language understanding, and more. 
Bessemer published a great Roadmap focused on unlocking machine learning for drug discovery.
💰 Funding highlight reel
Allcyte, an AI-first functional precision medicine company, raised a $6M Seed round from Air Street Capital, 42CAP, PUSH Ventures, Amino Collective, and angel investors.
Insitro, an AI-first drug development company founded by Daphne Koller, raised a $400M Series C led by Canada Pension Plan Investment Board. The company has a collaboration with Gilead in NASH (a liver disease) and another with Bristol Myers Squibb in ALS (a neurological condition). 
Valo Health, a drug development company using machine learning approaches from target discovery through to clinical validation, raised a $110M Series B led by Koch Disruptive Technologies. The company was founded within Flagship Pioneering and has raised $450M to date. 
Hugging Face, the open-source NLP company, raised a $40M Series B led by Addition. Its popular Transformers library now has 42k stars and 10k forks on GitHub. The company has also launched a data and model hub for contributors’ use, in addition to a paid inference service over API. Rumor has it that Hugging Face has already received M&A interest as the hottest private company in NLP today. 
Dataminr, a real-time AI platform for detecting high-impact events and emerging risks in publicly available data, raised a $475M growth round.
Feedzai, a fraud detection company, raised a $200M round led by KKR, valuing the company at over $1B. Feedzai counts Citigroup (an investor), Fiserv, and SoFi as customers.
Viz.ai, the AI-first medical imaging company that won Medicare and Medicaid reimbursement last year for its stroke application, raised a $71M Series C led by Scale Venture Partners and Insight Partners. The company says it will expand into other areas of acute care including cardiology, pulmonary, and trauma.
Momenta, a Chinese self-driving company, raised a $500M Series C from SAIC Motor, Toyota, and Bosch, alongside Temasek. The company sells semi-automated driving software to carmakers and buys data from various providers to keep its fleet small. 
DataGrail, a data privacy company that offers data mining and discovery, data subject request fulfillment, and consent management, raised a $30M Series B led by Felicis Ventures. It counts Snyk, Databricks, Overstock, and Crunchbase as customers. 
Datagen, a synthetic data company for computer vision, raised $18.5M from TLV Partners and Viola Ventures. 
Oishii, a vertically farmed strawberry company, raised a $50M Series A after launching their box of 8 strawberries for $50 onto shelves in New York City. 
Groq, an AI chip company led by co-authors of the Google TPU, is in talks to raise $300M led by Tiger Global. 
Health Gorilla, the API to patient healthcare data, raised a $15M Series B from IA Capital Group and Nationwide. The product uses AI to create a unified patient index that matches clinical documents from various EMR and lab systems. 
1910 Genetics, a drug design company working with small molecules and proteins, raised a $26M Series A led by M12 and Playground Global. The company uses virtual screening and generative design, in addition to in-house testing. 
Refraction AI, a robot delivery company founded by professors at the University of Michigan, raised a $4.2M Seed led by Pillar VC. 
Hi Marley, a text-message-based insurance claim processing company, raised a $25M Series B led by Emergence Capital. The business has 40 customers live in production including MetLife and American Family.
Incode, an identity verification company using facial recognition, raised a $25M Series A led by DN Capital and 3L Capital. 
Camunda, an open-source process automation software company from Berlin, raised an €82M Series B led by Insight Partners. The company has 4,000 customers including Goldman Sachs, Lufthansa, and Orange. Similar to UiPath, Camunda started as a business process management consulting/outsourcing firm back in 2008. It released an open-source project in 2013 and grew from there. 
Flex Logix, a reconfigurable AI accelerator semiconductor designer, raised a $55M Series D led by Mithril Capital. The company focuses on edge applications using low-precision arithmetic and in-memory computing. 
VoxSmart, an enterprise communications surveillance company, raised a $25M round led by Toscafund. 
UiPath, the Romanian-born leader in enterprise robotic process automation, has filed to go public at a valuation of $35B. In its fiscal year ending January 2021, the business generated $607M in revenue, growing 81% year-on-year. Just 5 years ago, its revenue was $1M. The company also boasts best-in-class net dollar retention of 145% with an average revenue per customer of approximately $72k. This is a major win for enterprise automation software and has drawn the world’s attention to the fact that large companies can be born in Europe. 
Talend, a NASDAQ-listed French data integration and data integrity company, announced its take-private offer from PE firm Thoma Bravo for $2.4B. The business went public in 2016 at 10 years of age when it had 1,300 customers. On the first day of trading, its valuation closed at $770M. That year, Talend generated $76M in revenue of which $63M was on subscription. Today, Talend generates $287M in revenue from over 6,000 customers and trades at 7x EV/2020 revenue. 
Voyage, the self-driving company focused on private neighborhoods (e.g. for senior citizens), was acquired by Cruise for an undisclosed sum. The company brings three products with it: Commander (its AV system), Shield (a collision mitigation system), and Telessist (a remote assistance solution), along with 60 FTEs. 
Recursion, the AI-first drug discovery and development company, filed to list on the NASDAQ. Its S-1 states the company has 4 clinical-stage programs, 37 internally developed programs, and a huge amount of software to construct and search through vast quantities of imaging data that describes how drugs alter cellular phenotypes. 
Zymergen, the biomanufacturing company that creates breakthrough products, filed to list on the NASDAQ. Its S-1 describes a number of upcoming products with applications in electronics (e.g. Hyaline and other optical films), consumer care (e.g. naturally-derived insect repellent and UV protection), and agriculture (e.g. nitrogen fixation and a herbicide). 
p.s. Here’s an easter egg.
Signing off, 
Nathan Benaich, 11 April 2021
Air Street Capital is a venture capital firm investing in AI-first technology and life science companies. We’re an experienced team of investors and founders based in Europe and the US with a shared passion for working with entrepreneurs from the very beginning of their company-building journey.