🚗 Autonomous everything
Aurora is the latest autonomous vehicle startup unicorn after its $530M Series B financing. What’s interesting is that Amazon corporate has invested in the round, potentially as a means to seal a partnership to procure an autonomous technology stack for its logistics operations. Aurora also released a report on its safety protocols, standards and development practices.
Instead of skipping over Level 3, Ford has adjusted its strategy to take the stepping-stone approach.
Amazon announced field trials of its ground delivery robot, Scout. Astute observers will notice a strikingly similar design to Starship Technologies’ delivery robot. In fact, Amazon acquired
a startup called Dispatch.ai in 2017, itself inspired by and designed to replicate Starship, which had launched in Europe a few years prior. Amazon is also beginning to roll out automated packing robots
that operate on factory conveyor belts. These are four or five times faster than a human packer, processing up to 700 orders an hour. Amazon also launched SageMaker Neo
, which helps developers train a model once and then output optimised code for various edge hardware substrates.
Uber and Lyft are now public companies,
much to the delight of existing investors who (especially for Uber) made extraordinary returns on early investments. Before going public, news leaked that Uber was spending $20m a month
to sustain its self-driving organisation. After talks of Uber’s self-driving unit raising $1B from SoftBank and Toyota, the deal
did end up closing pre-IPO. This new self-driving company, sitting within the Uber group, has its own board of directors with representation from Uber and investors and is valued at $7.25B.
Cruise raised another $1.15B of capital from T. Rowe Price Associates and existing investors SoftBank Vision Fund, Honda and GM. It’s quite clear now that having a shot at building self-driving services requires several billion dollars as table stakes.
Tesla ran a live broadcast centred around its self-driving technology capabilities. The event was a bold move (kudos to them, honestly) during which Elon and team leads presented their approach to self-driving and fielded questions from public investment analysts. The most interesting part to me was how Tesla uses its fleet infrastructure to a) encounter odd out-of-sample events, b) call on the wider fleet to report back with similar footage, and c) use this footage to update its Autopilot systems. The company didn’t showcase any simulation work to round out these edge cases, however. Tesla put a lot of emphasis on its in-house designed silicon for self-driving, which removes its dependency on the prior NVIDIA system (response from them here
). This shows two things: a) full stack companies don’t like third-party dependencies on potentially competitive providers and b) custom silicon is really a thing in the age of ML. A few weeks later, Tesla reported a third fatal crash
that occurred only 10 seconds after the driver engaged the Autopilot system.
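For intuition, here’s a toy Python sketch of that three-step fleet feedback loop. Everything in it — the fleet data, the scene labels, the trigger logic — is invented for illustration and bears no relation to Tesla’s actual systems:

```python
import random

# Toy sketch of a fleet "data engine" loop: 1) a car flags an out-of-sample
# event, 2) the wider fleet is queried for similar footage, 3) the matching
# clips are folded into the training set for the next model update.

random.seed(0)

# Simulated fleet: each car holds a few clips tagged with a scene label.
FLEET = [
    {"car": i, "clips": random.choices(["highway", "rain", "debris_on_road"], k=5)}
    for i in range(100)
]

def flag_out_of_sample(clip_label, known_labels):
    """Step 1: a single car flags a clip its model has rarely seen."""
    return clip_label not in known_labels

def query_fleet(label):
    """Step 2: ask every car in the fleet for clips matching the rare label."""
    return [
        (car["car"], clip) for car in FLEET for clip in car["clips"] if clip == label
    ]

def update_training_set(training_set, matches):
    """Step 3: fold the returned footage into the training corpus."""
    training_set.extend(matches)
    return training_set

known = {"highway", "rain"}
training_set = []
if flag_out_of_sample("debris_on_road", known):
    matches = query_fleet("debris_on_road")
    training_set = update_training_set(training_set, matches)

print(f"collected {len(training_set)} new clips for retraining")
```

The point of the pattern is that a single rare observation becomes a fleet-wide query, so the training set grows fastest exactly where the model is weakest.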
and the Canadian government are throwing their million-dollar hats into the self-driving software ring with a $350M budget
to catalyse the development of technology built in Canada.
introduced a new
in-house built LiDAR with 95-degree vertical field of view and up to a 360-degree horizontal field of view.
💪 The giants
Microsoft made a raft of AI-related announcements at its Build conference in Seattle. I attended the event and sat down with David Carmona, GM for Cloud and Enterprise AI, and Lance Olson, Partner Director of Program Management for the Azure AI Platform. Through these conversations, I learned two main points about Microsoft’s strategy. Firstly, Microsoft is really doubling down on enabling the business user to make use of ML features in their workflows. Here, data scientists (and Azure ML itself) are creating a growing number of ML models that are published through Office 365 and Power BI products so that the huge base of business users can make use of them in these tools. For example, think about using a fraud detection model directly in Excel. Azure’s ML focus is also on enabling business users to build their own models for their data without relying too much on engineering resources. Secondly, Microsoft’s cloud ML value proposition is built around enabling users to create, train and containerise models so they can export and run them in whatever environment and infrastructure they choose. This means no vendor lock-in with Microsoft (whereas AWS and GCP don’t let you export their services). This makes management teams more comfortable with data privacy, ownership and security. The company also announced several features that fit nicely into the robotic process automation field, such as Form Recogniser
, which is an unsupervised-learning-based data extraction API that needs to see only four examples of a form to work.
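To make the few-shot idea concrete, here’s a hypothetical sketch of learning a form’s layout from a handful of examples. The layout-voting approach and the invoice data are my own invention for illustration — not how Microsoft’s API works under the hood:

```python
from collections import Counter

# Four example forms that share an identical layout (invented data).
EXAMPLES = [
    ["Invoice #1001", "Date: 2019-05-01", "Total: $120.00"],
    ["Invoice #1002", "Date: 2019-05-03", "Total: $86.50"],
    ["Invoice #1003", "Date: 2019-05-07", "Total: $240.00"],
    ["Invoice #1004", "Date: 2019-05-09", "Total: $15.25"],
]

def learn_layout(examples, fields):
    """For each field label, vote on the line index where it appears."""
    layout = {}
    for field in fields:
        votes = Counter(
            i for form in examples for i, line in enumerate(form)
            if line.startswith(field)
        )
        layout[field] = votes.most_common(1)[0][0]
    return layout

def extract(form, layout):
    """Read each learned line and split off the value after the label."""
    return {
        field: form[idx].split(":", 1)[1].strip()
        for field, idx in layout.items()
    }

layout = learn_layout(EXAMPLES, ["Date", "Total"])
new_form = ["Invoice #1005", "Date: 2019-05-12", "Total: $99.99"]
print(extract(new_form, layout))  # → {'Date': '2019-05-12', 'Total': '$99.99'}
```

Even this crude version shows why so few examples can suffice: once the layout is fixed, each extra example only confirms where each field lives.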
Cloudera and Hortonworks sealed their merger
in a bid to consolidate their offering for enterprise-grade AI readiness. Using their tools, one can capture, store, manage and analyze data, as well as train and serve machine learning models.
Google shared loads of announcements
at I/O a few days ago. ML has made it into battery management on the Pixel 3 smartphone; it is being used to pinpoint the precise location of a Google Maps user via the camera (not dissimilar to startups such as Scape or Blue Vision Labs, now Lyft Level 5); the new Google Assistant will ship on-device so it can run inference offline; and a new ML Kit enables on-device translation between 59 languages as well as on-device object detection and tracking. Google has also implemented an entirely neural on-device speech recogniser
as input for Gboard. As a company built around monetising predictions, Google is staying true to its goal of injecting ML into as many of its current and future products as possible.
Separately, Google announced
a new AI Ethics Board, which included professors, scientists, and a former US deputy secretary of state. Very shortly thereafter, the Board was dissolved because of concerns over Board members’ political views and the extent to which they would actually be able to scrutinise Google’s work. This is the latest development in a series of ethics oversight challenges at Google, DeepMind and OpenAI.
OpenAI has switched its corporate structure away from being a non-profit to a new “capped-profit” company
such that it can raise billions of dollars to invent general intelligence. Here, OpenAI pitches investors a return capped at 100x their investment if the organisation succeeds at this goal.
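The cap itself is simple arithmetic. Here’s a quick sketch with illustrative numbers (the $10M investment and outcome values are made up, not OpenAI’s actual terms):

```python
# A "capped-profit" return: distributions to an investor stop once they
# have received a fixed multiple (here 100x) of the original investment.

def capped_return(investment, gross_payout, cap_multiple=100):
    """Return the investor's payout, capped at cap_multiple * investment."""
    return min(gross_payout, cap_multiple * investment)

# A $10M stake in a wildly successful outcome worth $5B gross:
print(capped_return(10e6, 5e9))    # capped at $1B (100 x $10M)
# A more modest $500M outcome pays out in full:
print(capped_return(10e6, 500e6))  # $500M, under the cap
```

Anything above the cap flows back to the non-profit side of the structure, which is the mechanism that is meant to keep the mission primary.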
Facebook open sourced
a general-purpose platform for managing, deploying and automating AI experiments, as well as a tool to run Bayesian hyperparameter optimisation. More on their F8 announcements here
and an NYT piece on the company’s significant efforts to clamp down on bad actors here
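For readers new to the idea, here’s a toy sketch of the loop that Bayesian hyperparameter optimisation tools automate: fit a cheap surrogate to past (hyperparameter, score) pairs, then evaluate the point where predicted score plus uncertainty is highest. The nearest-neighbour surrogate and the objective below are stand-ins I’ve invented for illustration — real tools use Gaussian-process machinery:

```python
import math

def objective(lr):
    """Pretend validation accuracy as a function of learning rate."""
    return math.exp(-((math.log10(lr) + 2) ** 2))  # peaks at lr = 1e-2

def acquisition(x, observed, kappa=0.5):
    """Nearest-neighbour prediction plus a distance-based exploration bonus."""
    nearest_x, nearest_y = min(
        observed, key=lambda o: abs(math.log10(o[0]) - math.log10(x))
    )
    distance = abs(math.log10(nearest_x) - math.log10(x))
    return nearest_y + kappa * distance

candidates = [10 ** (-5 + 0.1 * i) for i in range(41)]  # grid over 1e-5 .. 1e-1
observed = [(1e-5, objective(1e-5)), (1e-1, objective(1e-1))]  # two seed points

for _ in range(15):
    # Evaluate the candidate with the best predicted-score-plus-uncertainty.
    x = max(candidates, key=lambda c: acquisition(c, observed))
    observed.append((x, objective(x)))

best_lr, best_score = max(observed, key=lambda o: o[1])
print(f"best lr ≈ {best_lr:.0e}, score {best_score:.3f}")
```

The exploration bonus is what distinguishes this from greedy search: early iterations probe the gaps between observations, and once a promising region is found the loop clusters its remaining budget there.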
Here are some new details about Facebook’s ML hardware infrastructure. The company has made no secret that it is designing custom chips for inference workloads.
Boston Dynamics showcased videos of its robots being repurposed for warehouse pick-and-place problems here
, in addition to the product of its latest acquisition, Kinema Systems.
Google has rebooted its work on robotics via a group called Robotics at Google. A profile piece ran in the NYT
on this topic.
On the topic of robots, data compiled by the Robotics Industry Association shows that US factories received 35,880 robots last year, 7% more than in 2017. The largest number went to the automotive components sector.
Here’s a new interview
with Nigel Toon of Graphcore
, which dives into a few of the technical specs and strategies for the company.
🏥 Healthcare and life science
A wave of pharma and biotech companies have signed co-development agreements with AI-based drug discovery startups:
Atomwise signed an agreement with a large contract research organisation, Charles River Laboratories, to support its hit discovery and hit-to-lead development process. If successful, Atomwise could net $2.4B in royalties over the next few years. Big bucks!
with Celgene, a pharma company focused on cancer and inflammatory disease. This deal offers an initial $25M upfront payment and the promise of milestone and royalty payments upon success.
signed a two-year partnership
with Tillotts Pharma to identify and develop new drug candidates for the treatment of inflammatory bowel diseases (IBD), such as Crohn’s disease.
insitro entered into a three-year collaboration
with Gilead to create disease models for nonalcoholic steatohepatitis (NASH), a chronic form of liver disease that has limited treatment options and can result in cancer. insitro will receive an upfront payment of $15M, with additional near-term payments of up to $35M based on operational milestones. insitro will also be eligible to receive up to $200M for the achievement of preclinical, development, regulatory and commercial milestones for each of the five potential Gilead targets.
Schrodinger, a market-leading molecular simulation company founded in the 1990s, has raised $110M from Bill Gates, D.E. Shaw and GV to vertically integrate
its drug development efforts. In the past, two cancer drugs have been discovered by its customers using Schrodinger software. Both have gone on to win FDA approval. The new financing sees Schrodinger leverage its software to run its own development programs.
Meanwhile, IBM is pulling its Watson product for drug discovery, citing sluggish revenue growth.
🇨🇳 AI in China
Here is an overview
of AI semiconductor work in both small and large Chinese companies.
Here’s a look at the extent to which Baidu, Alibaba, Tencent and Huawei are entrenched as leaders of China’s AI landscape across infrastructure, technology tools and applications.
An analysis of 2 million publications up to 2018, run by the Allen Institute for AI in Seattle, showed that while China has already “surpassed the US in number of published AI papers, the country’s AI researchers are poised to be in the top 50% of most cited papers this year and in the top 10 per cent next year”. The results show that the US share of citations in the top 10% of AI papers has declined gradually from 47% in 1982 to 29% last year. China, on the other hand, has risen to over 26% of citations in 2018.
Geopolitical tensions between the US and China don’t seem to be abating. This piece
outlines how intertwined Chinese capital is in the US tech and venture market. What’s more, President Trump has blacklisted Huawei
and 70 of its affiliates because they are deemed to be threats to national security. This means US companies are blocked from using or purchasing Huawei equipment. As a result, suppliers to Huawei are scrambling to understand the repercussions for their own businesses.
AI around the 🌍
Finland was the first European country to put a national AI strategy in place (back in October 2017). What began as a free-access university course is now being scaled nationally
to 55,000 people in partnership with government and companies. For example, technology companies Elisa and Nokia said they would train their entire workforces to be literate in AI. Economy Minister Mika Lintilä pledged that Finland will become a world leader in practical applications of AI.
🔮 Where AI is heading next
AutoML refers to the overall problem of automating otherwise manual steps in ML architecture design, modelling and parameter training. For Google, the emphasis is on searching model architecture space to test and converge on new structures that improve model performance and other variables such as latency. For Microsoft, the emphasis is on model selection depending on the task and input data. For others, AutoML is about hyperparameter tuning. A recent post by Waymo described how their team used Google’s AutoML to automatically design CNN architectures
using pre-trained cells for the task of semantic segmentation on LiDAR data (i.e. a transfer learning-based method). Compared to manually-designed networks, the AutoML outputs showed significantly lower latency with similar quality, or even higher quality with similar latency. Next, they designed a proxy segmentation task that could be rapidly computed and applied either random search or reinforcement learning to conduct end-to-end search while optimising for network quality and latency. Using the proxy task meant exploring over 10,000 candidate architectures over two weeks on a TPU cluster. The graph below shows thousands of individual resulting architectures from random search (green dot/line) compared to the prior transfer learning architecture (red dot), random search on a refined set of architectures (yellow dot/line) and reinforcement learning-based search (blue dot/line). What’s interesting is that the architectures discovered using RL-based search exhibited 20–30% lower latency at the same quality as those developed manually. They also yielded models with an 8–10% lower error rate at the same latency as the previous architectures. Separately, neural architecture search was recently used to improve
the hand-designed Transformer architecture.
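To give a feel for the search loop, here’s a toy sketch of random architecture search against a cheap proxy, keeping the Pareto-optimal set on quality and latency. The architecture encoding and both scoring functions are invented stand-ins, not Waymo’s actual search space or metrics:

```python
import random

random.seed(42)

def sample_architecture():
    """A candidate is just (depth, width, kernel size) in this toy space."""
    return {
        "depth": random.randint(4, 24),
        "width": random.choice([32, 64, 128, 256]),
        "kernel": random.choice([3, 5, 7]),
    }

def proxy_quality(arch):
    """Pretend segmentation quality: deeper/wider helps, saturating."""
    capacity = arch["depth"] * arch["width"]
    return capacity / (capacity + 2000)

def proxy_latency(arch):
    """Pretend latency in ms: grows with depth, width and kernel size."""
    return 0.05 * arch["depth"] * arch["kernel"] + 0.01 * arch["width"]

def dominates(b, a):
    """b dominates a if it is at least as good on both axes, better on one."""
    qb, qa = proxy_quality(b), proxy_quality(a)
    lb, la = proxy_latency(b), proxy_latency(a)
    return qb >= qa and lb <= la and (qb > qa or lb < la)

def pareto_front(candidates):
    """Keep candidates no other candidate strictly dominates."""
    return [a for a in candidates if not any(dominates(b, a) for b in candidates)]

candidates = [sample_architecture() for _ in range(500)]
front = pareto_front(candidates)
print(f"{len(front)} Pareto-optimal architectures out of {len(candidates)}")
```

Random search over a cheap proxy is the baseline (the green line in Waymo’s graph); RL-based search replaces the `sample_architecture` step with a learned policy that proposes better candidates over time.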