
🧬 Your guide to AI: November 2020

Nathan Benaich, Air Street Capital
Dear readers,
Welcome to the November 2020 issue of my newsletter, Your guide to AI. Here you’ll find an analytical narrative covering key developments in AI tech, geopolitics, health/bio, startups, research, and blogs.
If you enjoyed the read, I’d appreciate you hitting forward to a couple of friends.
I’d love to hear about what you’re working on and/or your feedback on the issue, just hit reply! 
Wishing you the best for the holidays,
Nathan

🆕 Technology news, trends and opinions
🏥 Life (and) science
These past weeks have been a positive whirlwind for the biotechnology industry and global public health. First, we had the rapid approvals of COVID-19 vaccines, both using mRNA technology (Moderna, BioNTech) and more traditional methods (Oxford). While this news wasn’t (from what I can tell) driven by AI-based designs or workflows, another news item was: Baricitinib. This drug, which was postulated in February this year by London-based Benevolent.ai as a potential treatment for patients suffering from COVID-19, was granted emergency authorization by the FDA. Patients can now receive Baricitinib in combination with Remdesivir, because patients who received this dual treatment suffered a 35% lower mortality rate than those taking Remdesivir alone. Moreover, pre-clinical experiments conducted at the University of Washington showed that computationally designed mini proteins, which mimic how an antibody would bind the SARS-CoV-2 spike protein, can prevent its interaction with the ACE2 receptor. This early work shows a) that understanding 3D protein structure is key to elucidating function, and b) that computationally-driven search and optimization of protein structure can rapidly generate useful drug candidates. 
This naturally leads to another major headline: AlphaFold version 2. Two years after DeepMind demonstrated impressive performance on predicting the folded structure of previously unseen proteins from their amino acid sequence, the company has done it again, this time eclipsing its competition. This task is measured by the Critical Assessment of protein Structure Prediction (CASP), which compares structure predictions from competition participants with ground truth structures generated through existing methods such as X-ray crystallography and/or NMR. The paper describing AlphaFold 2 is yet to be released, but we know that the system was a full “rebuild”. Unlike its predecessor, this newer generation relies on an attention-based neural network, trained end-to-end, that interprets the structure of a folded protein represented as a spatial graph (residues are the nodes and edges connect the residues). The system was trained on public data consisting of 170k protein structures as well as large databases of protein sequences with unknown structures. Some called this work a “gigantic leap” and claimed that “protein folding is solved”, while others tempered this claim. The truth is somewhere in between: in the same way that deep learning crushing all other methods on ImageNet does not mean that computer vision is solved, AlphaFold 2 crushing all other entrants in CASP to max out the competition score does not mean that protein folding is solved. Either way, this news will undoubtedly draw even more worthy attention to the applications of AI to problems in biology and health, accelerating many more impactful developments and new companies in the years to come. Exciting!
Switching focus to AI for clinical workflows, Vienna-based Allcyte released results from the first-ever prospective interventional study showing that functional precision medicine can deliver clinical patient benefit in the form of longer progression-free survival and overall response rate. The study, presented at the American Society of Hematology conference, involved 56 leukemia patients whose treatment was guided by computer vision-based microscopy analysis of how their cancer and non-cancer biopsied cells reacted to a range of 136 small molecule drugs in vitro. This is particularly exciting because precision medicine that relies on genomics alone (i.e. sequencing cancer DNA to find driver mutations and prescribing drugs that combat that mutation) is too reductionist and often doesn’t work. Functional tests that ask questions of living cancer cells are far more expressive because they more accurately mirror cancer biology.
The (geo)politics of AI
A major thread that we pulled out in the State of AI Report 2020 was the rise of military AI, which is driven by rapid technology transfer of AI research into military contexts. As a dual-use/general-purpose technology, the translation of AI is inevitable, but how do practitioners feel about it? Georgetown’s CSET conducted a survey of AI professionals in the US to sample their views of the Department of Defense (DoD) and DoD-funded AI projects. They found that 39% are neutral, 38% positive, and 24% negative about working on DoD-funded AI projects. While most highlighted “access to unique resources” and “interesting problems” as the top benefits of working on DoD-funded AI projects, 60% said that they do not want to do harm. So how do we reconcile the two stances? Curious to hear your opinions. 
On a related topic, the scientific journal Nature surveyed 480 researchers who work in facial recognition, computer vision, and AI to understand their stance on ethical issues related to this application area. Of note, 71% of respondents agreed that facial recognition research on vulnerable populations (e.g. refugees or minority groups) could be ethically questionable even if scientists gained informed consent. Those who disagreed tried to distinguish between a) condemning and restricting unethical applications of facial recognition and b) restricting research. This highlights the tensions between research and the real world: can the translation between the two be controlled and, if so, how? Ultimately, it looks like researchers will need to consider the potential downstream applications of their research, whether they’re involved in them or not. Relatedly, the Partnership on AI launched a project called the AI Incident Database, which is meant to document the failures of AI systems. 
The UK government joins Germany, Japan, and the US in tightening its controls over potentially hostile foreign takeovers with a new National Security and Investment Bill. Under this bill, the government will take “a targeted, proportionate approach to ensure it can scrutinize, impose conditions on or, as a last resort, block a deal in any sector where there is an unacceptable risk to national security.” This of course includes AI. While this regime applies to investors/buyers from any country, the Bill says that “this will mean that no deal which could threaten the safety of the British people goes unchecked, and will ensure vulnerable businesses are not successfully targeted by potential investors seeking to cause them harm.” While I agree that these two situational criteria do require government inspection, what I personally find puzzling is that neither the DeepMind/Google nor the ARM/SoftBank acquisitions “threatened the safety of the British people”, nor did they target a “vulnerable business” by seeking to “cause harm”. So without further clarification, I’m not sure I see how this Bill actually solves the problem at hand: domestic winners being offered huge sums to sell out and the UK Government doing nothing about it. 
Autonomous everything
Waymo published two papers on its safety performance results and methodologies. The former presents more than 6.1M miles of automated driving in Phoenix, Arizona, and reports every collision and minor contact experienced during operations. There were 47 contact events for 2019 and the first three quarters of 2020. The piece notes that “nearly all events involved one or more road rule violations or other errors by a human driver or road user”. 
Oxford, UK is playing host to the UK’s first trials of Level 4 autonomous vehicles. The six cars are run by the city’s home-grown AV startup, Oxbotica, as part of a government-backed research project called “Project Endeavour”. The trials are due to last for a year and involve driving from the Oxford Parkway rail station into Oxford’s main train station. If you haven’t been to Oxford, this route involves quite a few roundabouts, narrow streets, and a TON of bicycles ridden by people of all ages and experience levels… 
Meanwhile, Uber continues to shed (presumably) non-core assets, this time focusing on their ATG self-driving unit. Over 1.5 years ago, the company spun the unit out as it was preparing to go public, raising $1B from Toyota, DENSO, and SoftBank’s Vision Fund. Now, Uber is in talks to sell the division to private, Amazon-funded rival Aurora, which lost its major auto partner VW to Argo AI a year or so ago. The move does make sense for Uber, which, like other loss-making public companies, cannot sustain hefty annual investments to the tune of hundreds of millions of dollars forever. 
Following its $900M acquisition of Moovit, a mobility-as-a-service startup, Intel’s Mobileye announced a partnership with soon-to-be-SPAC’d LiDAR company Luminar. The stated goal is to build a “full end-to-end ride-hailing experience”. How many providers do we need? Why will Intel succeed where others struggle?
Hardware
Prior editions of the newsletter have included discussions of China’s central planning approach to shoring up capabilities in semiconductor design and fabrication to reduce its dependency on foreign suppliers. For example, Huawei (which has no prior fabrication experience) is reportedly setting up a dedicated chip plant in Shanghai following tightened US export controls earlier this year, which have left the company without chips to put inside their smartphones. More broadly, China saw 4,600 newly-registered domestic chip-related companies in Q2 2020 alone, up 207%. In fact, three new Chinese startups set up in the last year were founded by or have hired executives and engineers from US-based Synopsys and Cadence Design Systems, two of the largest electronic design automation toolmakers. 
But much less has been said about the dark side of this race. Jeff Ding shared a fascinating translation of an investigative report on defunct state-funded semiconductor projects in five provinces. The report, which was produced by Outlook Magazine (a state-backed publication for the Communist Party) finds that new semiconductor companies “took advantage of local governments who lack industry knowledge, and basically get governments to give them free land, factories, and massive subsidies.” Billions of dollars worth of state investment were targeted at projects in second-, third- and even fourth-tier Chinese cities that do not have the talent or resources to sustain such projects. 
Back in the US, the big news these past weeks has been Apple’s announcement of their M1 silicon and their clear strategy of vertical integration. Amongst Amazon, Facebook, Google, and Microsoft, this move now leaves Microsoft as the only player without a clear in-house silicon initiative. Notably, Microsoft was an early partner and big investor in Graphcore, which is arguably at the head of the startup race for native AI semiconductors. Apple’s M1 makes use of new unified memory architecture, which means that the CPU, GPU, neural processor and image signal processor all share one pool of high bandwidth, low latency memory. The company’s new Macbook Air, Macbook Pro, and Mac mini essentially share the same M1 chip. 
TSMC, the world-leading semiconductor fabrication company, is also eating up more of its supply chain. The company previously left chip packaging services to a range of specialized suppliers but is now developing its own 3D stacking technology at a chip packaging plant in Taiwan with mass production planned for 2022. This new technology enables TSMC to stack and link different kinds of chips into one package, leading to more energy-efficient and powerful compute output in a smaller chipset. And with this, TSMC extends its already large technology lead even further. 
Enterprise software and big tech
After a couple of quarters flashing an increasingly obvious “Add Google Meet video conferencing” on every new Google calendar entry while Zoom’s share price soared, Google is now ramping up its efforts to compete. Being video and voice, conferencing is a natural battleground for AI-driven feature fighting. Google is using MediaPipe, its open-source framework for cross-platform, customizable ML solutions on live and streamed media. You’re now able to blur or replace your background. What’s next?
Documents! Last month’s newsletter highlighted DocuSign’s move into helping users understand the contents of the documents they’re signing. Now, Google has introduced its “Document AI platform”, which it calls a unified console for document processing. This area of enterprise AI has so far been the turf of startups (e.g. HyperScience) that help users transform documents into structured data automatically using computer vision, whether for data entry, process automation, or as input for RPA. But no more! 
Three of the hottest topics in the MLOps space, which concerns lifecycle management of ML systems in production, are 1) feature stores, 2) data catalogs, and 3) data quality monitoring. These products were birthed by large technology companies, who were the first to really see the challenges of maintaining healthy ML systems and teams at a large scale. To start, feature stores typically offer a central place to store the signals that are computed from raw data for the purposes of an ML model. Their goal is to improve team collaboration through the reuse and iterative improvement of features, as well as monitoring their contribution to ML model success/failure in production. More on feature stores here! Next, data catalogs are similar in the sense that they also offer a centralized destination to discover all datasets that are generated and consumed within a company, including associated metadata like who owns what and which upstream or downstream services touch a particular dataset. More on data catalogs here! Third, data quality monitoring systems help data producers and consumers understand whether there are issues/anomalies with the data. These can be nulls, schema changes, timeliness issues, or distribution changes. All of this has an impact on the performance of ML models in production (garbage in, garbage out). Airbnb shared a two-part series on how they built internal data quality systems.
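To make the third category concrete, here is a toy sketch of the kind of checks a data quality monitor runs on each batch of data. All names and thresholds are illustrative, not any particular vendor’s API:

```python
def check_batch(rows, expected_columns, null_threshold=0.05):
    """Flag two common data quality issues: schema drift and excess nulls.

    rows: list of dicts (one per record); expected_columns: set of column names.
    Returns a list of human-readable issue strings (empty if the batch is healthy).
    """
    issues = []
    # Schema check: every expected column should be present in each record.
    for row in rows:
        missing = expected_columns - row.keys()
        if missing:
            issues.append(f"schema: missing columns {sorted(missing)}")
            break
    # Null-rate check: flag columns whose null rate exceeds the threshold.
    for col in sorted(expected_columns):
        values = [row.get(col) for row in rows]
        null_rate = sum(v is None for v in values) / len(values)
        if null_rate > null_threshold:
            issues.append(f"nulls: {col} is {null_rate:.0%} null")
    return issues

rows = [{"user_id": 1, "amount": 9.99}, {"user_id": 2, "amount": None}]
print(check_batch(rows, {"user_id", "amount"}))  # → ['nulls: amount is 50% null']
```

Production systems add timeliness and distribution-shift checks on top of this, but the shape is the same: assertions over each batch, surfaced as alerts to both data producers and consumers.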
🔬Research & Development
Last month I highlighted how Papers With Code is now integrated into arXiv, allowing users to quickly find the code repositories behind the papers that they read (if they’re made available by the authors!). MIT Tech Review explored this topic of code openness (or lack thereof) in a new piece that I contributed to, called AI is wrestling with a replication crisis. Coupled with the issue that unequal distribution of compute resources in academia furthers inequality in the types of research that can be done by specific actors, we have a serious problem on our hands. Now, here’s a selection of impactful work that caught my eye, grouped in categories:
Towards situated visual AI via end-to-end learning on video clips, TwentyBN. This work describes the release of 20bn-realtimenet, an open-source inference engine for neural network architectures that take an RGB video stream as input and output a stream of labels in real-time that describes what’s going on. In particular, the labels cover a huge range of day-to-day human actions, hand gestures, fitness exercises, and more. Systems like this endow visual common sense to camera-powered devices.
PennSyn2Real: Training Object Recognition Models without Human Labeling, UPenn. This paper releases a photo-realistic synthetic dataset of 100,000 4K images of more than 20 types of micro aerial vehicles (drones). The authors make use of the dataset to train computer vision models for object recognition tasks. Compared to a Yolov3 model trained on ImageNet, the Yolov3 trained on synthetic data performs significantly better in low-data environments (e.g. zero-shot (0.6 mAP vs. 0.05 mAP) to 5-shot learning (0.8 mAP vs. 0.3 mAP)). 
Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth, Google. It is well known that larger models (by depth and width) often claim better performance than their smaller peers. But what is happening to the representations that these models learn as depth and width change? This paper demonstrates that a characteristic block structure appears in the representations of large models. Meanwhile, the representations outside of the block structure are often similar across architectures. 
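For readers curious how two networks’ internal representations are compared in practice, this line of work from Google uses centered kernel alignment (CKA). A minimal NumPy sketch of linear CKA, treating each layer’s activations as an (examples × neurons) matrix, might look like this:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA similarity between two activation matrices.

    X, Y: arrays of shape (n_examples, n_neurons); neuron counts may differ.
    Returns a scalar in [0, 1]; 1 means identical representations up to rotation/scale.
    """
    # Center each neuron's activations across examples.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Compare the similarity structure induced by each representation.
    numerator = np.linalg.norm(X.T @ Y, "fro") ** 2
    denominator = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return numerator / denominator

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 64))
print(round(linear_cka(A, A), 3))  # a representation is perfectly similar to itself → 1.0
print(linear_cka(A, rng.normal(size=(100, 32))))  # unrelated random reps score low
```

The “block structure” finding falls out of computing this score for every pair of layers in a model and plotting the resulting similarity matrix.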
Artificial intelligence in COVID-19 drug repurposing, Cleveland Clinic, Cornell, Mila, Tel Aviv University. This review paper sets out how to make effective and measured use of AI for accelerating drug repurposing or repositioning. 
Towards ML Engineering: A brief history of TensorFlow Extended (TFX), Google. This article shares lessons learned at Alphabet from the implementation of Sibyl and TensorFlow Extended, two successive end-to-end ML platforms. There’s a lot of interesting historical insight here, but of particular note is their view of where things are going. This includes a drive towards interoperability and standards, increasing automation, and improved tooling such as automated mitigation of discovered issues in pipeline runs. 
Challenges in deploying machine learning: a survey of case studies, Cambridge. This survey reviews published reports of deploying machine learning solutions in a variety of use cases, industries, and applications and extracts practical considerations corresponding to stages of the machine learning deployment workflow. 
Differentially private learning needs better features (or much more data), Stanford. Privacy-preserving ML techniques have become a hot topic because they invert the traditional paradigm of centralizing data to where the model lives: developers instead send models to where the data lives, or run computations on data that they cannot directly see. This theoretically means that ML developers can open access to datasets that would otherwise not see the light of day. Interestingly, however, this work shows that differentially private end-to-end deep learning with moderate privacy budgets is outperformed by linear models trained on handcrafted features. To reach parity or outperform handcrafted features, private deep learning systems need significantly more private data or access to features learned on public data from a similar domain.
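To make the privacy/accuracy trade-off concrete: the standard recipe for differentially private deep learning (DP-SGD) clips each example’s gradient and adds Gaussian noise to the average before the update step. A minimal NumPy sketch of that privatization step, with illustrative parameter values (this is the mechanism only, not an accounting of the privacy budget):

```python
import numpy as np

def dp_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Privatize one batch of gradients, DP-SGD style (sketch).

    Clipping bounds each example's influence (sensitivity); the Gaussian noise
    scale is proportional to noise_multiplier * clip_norm / batch_size.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / norm))  # scale down, never up
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=avg.shape)
    return avg + noise

grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
print(dp_gradient(grads))  # clipped average plus calibrated noise
```

The paper’s point is that, at moderate noise levels, this noisy signal is too degraded to train deep features from scratch, whereas a linear model on good handcrafted features needs far fewer noisy updates to do well.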
đź“‘ Resources
đź“ś Blogs and reports
In the State of AI Report 2020, we drew attention to the importance of international students for sustaining US academia. Matthias Niessner in Munich shared data showing that 30% of Masters and Ph.D. students in computer science programs run in Germany are from foreign countries. 
Safely rolling out ML models to production: best CI/CD practices for the painless deployment of ML models and versioning. 
Google digitized the world’s largest collection of human voices, which is maintained by StoryCorps, a national non-profit organization. 
Salesforce Research describes their generic ML training and data science platform for Salesforce Einstein. 
Videos, talks
Now we can have more startup demo video jingles created automatically thanks to URL2Video.
I took part in a meetup discussion on opportunities in MLOps. You can watch it here.  
🧰 Open source tooling
Cambridge-based Secondmind open-sourced their first toolbox called Trieste. It is a Python library for Bayesian optimization that’s built on TensorFlow and can be extended with custom algorithms and models. It comes with out-of-the-box support for GPflow, another TensorFlow-based library by Secondmind for Gaussian process models. 
JAX is a relatively new open-source library for neural networks that is growing in popularity for several reasons, in particular because it looks like a NumPy wrapper and so has a wide audience of potential fans. DeepMind released Jax_verify, a library containing JAX implementations of many widely-used neural network verification techniques.
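To give a flavor of why that NumPy-like interface wins converts: jax.numpy mirrors the NumPy API, and jax.grad transforms an ordinary Python function into a function that returns its gradient. A minimal example:

```python
import jax
import jax.numpy as jnp

def loss(w):
    # Same syntax you would write with plain NumPy.
    return jnp.sum(w ** 2)

grad_fn = jax.grad(loss)  # grad_fn(w) returns d(loss)/dw

print(grad_fn(jnp.array([1.0, 2.0, 3.0])))  # gradient of sum(w^2) is 2w → [2. 4. 6.]
```

The same function can then be just-in-time compiled (jax.jit) or vectorized over a batch (jax.vmap) without rewriting it, which is much of the library’s appeal.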
Microsoft released FastFormers, a library to achieve highly efficient inference of Transformer models for natural language understanding.  
Google released a language interpretability tool, which is an interactive exploration and analysis tool for NLP models. Deep learning models are notoriously difficult to debug and the tooling to help us is rather immature, so new initiatives like this one are welcome.
đź’°Venture capital financings and exits
Here’s a financing round highlight reel, it’s been hugely busy! 
Nuro, the full-stack autonomous delivery road vehicle startup, raised $500M in a Series C round led by T. Rowe Price, Fidelity, and Baillie Gifford (several of whom are big investors in Tesla). While a large sum in itself, this Series C is nearly half the size of the company’s $940M Series B in 2019, which was led by the SoftBank Vision Fund. Nuro is approved for testing on the road in California and is focused on scaling its consumer delivery service in Houston.  
Pony.ai, a Chinese/US-based self-driving car company, raised a $267M round at a valuation over $5.3B. In a rather peculiar move, the round was led by the Ontario Teachers’ Pension Plan Board’s Teachers’ Innovation Platform. 
Luminar, the makers of a next-generation LiDAR for autonomous vehicles, raised an undisclosed investment from Daimler that adds to its $170M SPAC that is expected to close this year. Of note, Daimler took a majority stake in AV trucking company Torc Robotics, which had been working with Luminar for two years. The companies plan to integrate the products into a Daimler AV truck. This initiative will be Daimler’s second iron in the fire for AV trucks, the first being its partnership with Waymo. 
SentinelOne, an Israeli/US cybersecurity company focused on endpoint security, raised a $267M round on a $3.1B valuation from Tiger Global, Sequoia, Insight, and others. This follows a $200M round that was raised in February this year. The company claims to have 3,500 customers but does not offer revenue figures. 
Gretel, a US-based data anonymization startup focused on developers with an open-source strategy, raised a $12M Series A led by Greylock. The tool helps developers generate “fake data” to help them in developing new software or for passing sensitive data between teams. 
K Health, a New York-based data-driven digital primary care service, raised a $42M Series D led by Valor Equity Partners. The company offers free medical intelligence to millions of consumers through a chatbot service and alongside affordable telemedicine. K Health recently announced a collaboration with Mayo Clinic.
DataRobot, an enterprise AutoML company, raised $270M at a $2.7B valuation in a round led by Altimeter Capital, which tends to invest in public companies and pre-IPO private companies. The company is said to be “well over” the landmark $100M in annual recurring revenue and has played host to over 2 billion ML models being built on the platform.  
Forter, an Israeli/US fraud detection software company, raised a $125M round at a post-money valuation of $1.3B. The round was led by Bessemer Venture Partners and Felix Capital. 
Canvas, a US company using robots to install drywall, emerged from stealth with $17M in funding from Obvious, Innovation Endeavors, and others. 
Zilliz, a Chinese open-source ML company focused on processing unstructured data, raised a $43M Series B led by Hillhouse Capital. Commentators draw attention to the fact that open-source software is not a popular or common investment strategy in China. 
Abacus.ai, an SF-based developer of large-scale customizable deep learning systems, raised a $22M Series B led by Coatue. 
M&A:
Kindred Systems, an SF-based pick-and-place robotic software company, and Haddington Dynamic, a Las Vegas-based designer of robotic arms, were both acquired by London-based full-stack online grocery business Ocado for $262M and $25M, respectively. Ocado was founded in 2000 and quickly partnered with Waitrose, a popular UK-based grocer, to fulfill online orders and deliveries to UK consumers. Ocado went public in 2010 on the LSE and has grown 10x since then. Over the years, Ocado has vertically integrated to sell their own products, build and manage their own warehouses and logistics supply chain, and, importantly, build out automated robotics infrastructure. The business has several partnerships in the US, notably with Kroger, where Ocado builds and runs automated fulfillment warehouses on behalf of their partner. 
Spacemaker, a Norwegian software company that developed AI-driven software for urban development, was acquired by Autodesk for a reported $240M. The company’s software allows architects, urban designers, and real estate developers to “generate, optimize, and iterate on” designs on the basis of explicit criteria and on-the-ground realities such as terrain, lighting, traffic, zoning, etc. Spacemaker had most recently raised a $25M round from Northzone and Atomico. Autodesk will keep Spacemaker as an autonomous unit. 
Cnvrg.io, an Israeli startup offering an end-to-end machine learning platform for building and deploying AI models, was acquired by Intel a few weeks after their acquisition of SigOpt was announced. The business had raised $8M.  
Voca.ai, an Israeli startup that builds AI-based voice assistants for customer support, was acquired by Snap for a reported $70M. The company had raised $6M and employs 40 people, all of whom are joining Snap. 
—
Signing off, 
Nathan Benaich, 6 November 2020
Air Street Capital is a venture capital firm investing in AI-first technology and life science companies. We’re an experienced team of investors and founders based in Europe and the US with a shared passion for working with entrepreneurs from the very beginning of their company building journey.

Monthly analysis of AI technology, geopolitics, research, and startups.
