🚗 Autonomous everything
Long Beach Container Terminal
is using $1.5B to build a fully-automated system
that is due for completion in 2021. It will handle twice the freight volume it handled before and cut pollution, helping container terminals reach zero emissions by 2030. The results are already showing: in April, the average time it took trucks to complete their assignments at that dock was 34 minutes, compared with 105 minutes at Pier 400, according to data from the Harbor Trucking Association.
PROWLER.io released a whitepaper
that shows how probabilistic modeling can be used to solve the supply chain problem posed by the logistics of pooled pallets. Billions of pooled pallets pass between service centers, manufacturers, distributors, and retailers in a constant loop, enabling a circular, sustainable economy. Using probabilistic modeling on sparse data, pallet pooling businesses can make forecasts that result in fewer failed collections, better stock count predictions, and lower idle volumes.
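PROWLER.io's actual models aren't detailed here, but the flavor of probabilistic forecasting on sparse counts can be sketched with a textbook Gamma-Poisson model (the prior and all numbers below are illustrative assumptions, not theirs):

```python
# Toy illustration (not PROWLER.io's method): a conjugate Gamma-Poisson
# model for forecasting weekly pallet collections at a service centre
# from sparse historical counts. The posterior mean shrinks noisy,
# sparse observations toward the prior instead of overfitting them.

def posterior_rate(counts, prior_shape=2.0, prior_rate=1.0):
    """Posterior mean collection rate under a Gamma(shape, rate) prior
    with Poisson-distributed weekly counts (conjugate update)."""
    shape = prior_shape + sum(counts)
    rate = prior_rate + len(counts)
    return shape / rate

# With no data, the forecast is just the prior mean (2.0 here);
# as sparse observations arrive, it moves toward their empirical mean.
print(posterior_rate([]))            # prints 2.0
print(posterior_rate([3, 5, 4, 4]))  # prints 3.6, pulled toward the mean of 4
```

The same shrinkage idea is what lets a pooling business produce usable forecasts for sites with only a handful of observations.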
Aurora partnered with a Detroit-based car manufacturing business to help it mass-produce autonomous vehicles, and made its first publicly-disclosed acquisition in the form of Blackmore, a US-based maker of a frequency-modulated continuous-wave LiDAR system. These sensors can simultaneously measure both the distance and velocity of surrounding objects. This move validates the view that self-driving companies benefit from having a full-stack system that tightly integrates hardware and software.
Aurora describes the issues with integrating third-party software or game-engine-based simulators for self-driving. As a result, they built
an internal system specifically for the validation of self-driving car software. “It uses the same programming framework as the Aurora Driver software stack and the same libraries. In fact, because it is purpose-built to work with our system, the offline executor allows us to have deterministic offline testing for any module, for any team, across the entire organization. That’s a huge benefit. Otherwise, integrating game engines to work with self-driving software can sometimes feel like putting a square peg into a round hole.”
More on simulation here
released a Simulation product
to help developers test, train and validate their AI projects.
For a bearish update on the future of self-driving (and Tesla
in particular), check out this blog
Argo and a second major player (the latter's release doubling as a NeurIPS 2019 dataset) both released public HD maps datasets and competitions to accelerate R&D for perception. Furthermore, Argo raised
$1B from Volkswagen at a $7B valuation, absorbed VW’s 200-strong Autonomous Intelligent Driving team (said to be valued at $1.6B), and established
the Argo AI Center for AV Research at CMU with a $15M grant. Lyft signed
Waymo onto their marketplace fleet.
Baidu announced that its Apollo consortium software is capable of Level 4 autonomy. Toyota signs
is said to expand its self-driving delivery service into Sweden
is pushing into Texas
as it begins to operate as an autonomous freight carrier (with a safety driver behind the wheel). This is a mere 16 months after the business was born. One of their competitors, Starsky Robotics
, collaborated with Loadsmart
to automatically dispatch an autonomous truck
to haul freight; successfully pricing, tendering, booking, then picking up and delivering the shipment without any human interaction at all.
follows in Uber’s footsteps by spinning out
its autonomous driving unit as an independent company. The company is rolling out
its on-demand self-driving consumer proposition as a pilot in Shanghai, followed by Shenzhen and Beijing next year.
Optimus Ride launched
several self-driving shuttles in New York’s Navy Yard.
is said to be expanding its self-driving panda bus service
in China and Thailand, where it has a £470 million agreement to supply self-driving buses.
George from Comma.ai
wrote a two-part blog post arguing for his company’s approach to developing self-driving systems “for the masses”. He describes the value of crowdsourcing data, shipping assisted-driving products that retrofit popular car makes and models, and having a business model from the get-go. Worth a read!
The National Transportation Safety Board released hundreds of pages
of reporting on Uber’s
Arizona death last year. Most notably, the report finds that “The [self-driving] system design did not include consideration for jaywalking pedestrians.”
Vtrus describe how a self-supervised system with a model-based control algorithm enables autonomous flight through large indoor spaces.
💪 The giants
Facebook published new research on robotic learning with a focus on curiosity-driven reinforcement learning. Here, a robot not only has to solve a task, such as walking around a course, but is also incentivized to attempt diverse strategies to achieve its goal. This often helps the robot learn faster. The company also issued a request for proposals to the academic AI community to “help the academic community to address problems in the area of safer online conversations.” In particular, Facebook is offering to fund projects that create datasets or evaluation platforms that can accelerate research in hate speech and misinformation detection.
makes progress on the GLUE benchmark.
Amazon commits to hiring 2,000 employees in the UK to accelerate its technology development. This includes 170 employees in Amazon’s Cambridge, London, and Edinburgh-based development centers. The lead of Amazon’s Berlin ML development center, Ralph Herbrich, is now
SVP Data Science and ML at Zalando in Berlin. Focal Systems, an SF-based computer vision startup, ran an analysis of the unit economics of Amazon Go stores, showing that the system relies on vast amounts of costly hardware and computational power, and faces DevOps and other operational challenges.
Forbes ran a feature story
about enterprise RPA juggernaut, UiPath, chronicling the 10-year overnight success of the business and its founder.
Booking.com share 6 lessons learned from deploying 150 customer-facing ML models. For a deep dive into the findings, check out Adrian Colyer’s piece
. Briefly, the findings are: 1) Projects introducing machine-learned models deliver strong business value; 2) Model performance is not the same as business performance; 3) Be clear about the problem you’re trying to solve; 4) Prediction serving latency matters; 5) Get early feedback on model quality; 6) Test the business impact of your models using randomised controlled trials (follows from #2).
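To make finding #6 concrete, here is a minimal sketch (not Booking.com's code; the trial numbers are hypothetical) of judging a model's business impact with a randomised controlled trial, using a two-proportion z-test on conversion rates:

```python
# Toy RCT evaluation: compare conversion between a control group and a
# treatment group exposed to the model's output. A two-proportion
# z-test tells us whether the observed lift is statistically significant.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference in conversion rates between arms."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical trial: 10,000 users per arm, 4.0% vs 4.6% conversion.
z = two_proportion_z(400, 10_000, 460, 10_000)
print(round(z, 2))  # prints 2.09 -> significant at the 5% level (|z| > 1.96)
```

This is exactly why finding #2 matters: an offline metric can improve while a trial like this shows no business lift at all.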
For even more on the topic of ML in production, specifically around automating the end-to-end lifecycle of ML applications, head over to this blog post
. It makes the case that developing, deploying, and continuously improving ML models is more complex than doing so for traditional software, such as a web service or a mobile application. Neil Lawrence also touches on ML continuous monitoring and deployment
in his recent Data Science Africa talk.
Google announced they had integrated Transformer-based models into their flagship search experience to better understand user queries. This is quite a big deal because it shows how new R&D on neural architectures within Google Brain can, through many open-source iterations, make its way into a huge-scale production system to drive business impact. The company is also offering enterprise-level support
for TensorFlow, consistent with its push upmarket. It’s also worth noting the announcement of quantum supremacy
by Google. The company’s “quantum machine successfully performed a test computation in just 200 seconds that would have taken the best-known algorithms in the most powerful supercomputers thousands of years to accomplish.”
OpenAI released a blog post announcing that it had developed a robotic hand system that can solve a Rubik’s cube, using a computer vision system to instruct finger movements. This ignited lots of debate
in the community because many thought the claims were overstated
. Separately, the company completed its staged release of GPT-2, its transformer-based language model. This document
describes the rationale and results of their release strategy, as well as a discussion on publishing norms in AI. OpenAI also updated its AI and compute
analysis, which now dates back to 1959. It is interesting to note that, “starting from the perceptron in 1959, [there is a] ~2-year doubling time for the compute used in these historical results—with a 3.4-month doubling time starting in ~2012.”
This provides evidence (though of course correlation is not causation) for how compute availability accelerates innovation in AI.
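As a sanity check on the scale of those quoted rates, the arithmetic is simple exponential doubling (illustrative code, not OpenAI's):

```python
# Total compute growth implied by a fixed doubling time, in months.
def growth_factor(months, doubling_time_months):
    return 2 ** (months / doubling_time_months)

# The pre-2012 regime (~2-year doubling) compounds to ~32x per decade...
print(round(growth_factor(120, 24)))  # prints 32
# ...while the post-2012 regime (~3.4-month doubling) exceeds
# 100,000x in under five years.
print(round(growth_factor(57, 3.4)))
```

The gap between those two numbers is why the 2012 inflection point dominates any chart of training compute over time.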
Stripe’s CEO took to Twitter to share that they have built “an ML engine to automatically optimize the bitfields of card network requests. (Stripe-wide N is now large enough to yield statistical significance even in unusual cases.) It will soon have generated an incremental $1 billion of revenue for stripe businesses.” This shows how companies that command a large user/customer base and have their infrastructure in order can rapidly deploy ML use cases that drive business value. Stripe would be the software-native persona example in my post here.
An additive manufacturing startup in Boston released a software tool called Blacksmith
, which is able to automatically adjust the programming of manufacturing machines to ensure every part is produced as designed. The software analyzes a design, compares it to the scanned part, and automatically adapts the end-to-end process to produce perfectly in-spec parts.
Graphcore released exciting new benchmark results of its IPU processor on a variety of training and inference tasks.
is making its way closer and closer to production. Google has a slew of papers on automatically developing efficient neural networks to run at the edge using AutoML. Now, they have released a family of neural networks that are optimized
for EdgeTPUs (the EfficientNet-EdgeTPU).
Cerebras came out of stealth and announced
a wafer-scale deep learning chip. The company overcame a number of engineering challenges
to create such a chip, including design, manufacturing, power, cooling, communication, and coordination.
Alibaba entered the AI inference chip market with new cloud-based hardware called Hanguang 800. This is the company’s first semiconductor product ever. The chip can run inference on almost 80k images per second using a ResNet-50 model, 15x more than the NVIDIA T4.
scientists have successfully tested new neuroprosthetic technology
that combines robotic control with users’ voluntary control, opening avenues in the new interdisciplinary field of shared control for neuroprosthetic technologies.
Robots making printed circuit boards: a glimpse
into Tempo Automation
🏥 Healthcare and life science
Recursion, the AI-first drug discovery platform, created a Kaggle competition
using images from several batches of human cells subjected to the same experimental conditions. The goal was to explore ML methods of separating biological and technical factors in biological data. In a win for open science, the best team in the competition was not Recursion’s own data scientists. Instead, the private leaderboard competition winner
was a Master’s student from Poland in their early 20s, with 1.5 years of work experience at NVIDIA.
The company’s CEO, Chris Gibson, wrote a great piece
on how tech and life science investors view AI-first biology companies and why the intersection of the two really matters. He makes an important point: By tech-enabled, I don’t just mean a company using a bit of technology at each stage, as is commonplace today, but the truly integrated development of a soup-to-nuts process consisting of multiple integrated technological solutions substantially improving the cost, time, or quality required for the discovery of new drugs.
If you’re interested in learning more about Recursion, head over to this piece.
The UK government announced
an investment of £250M into AI in the National Health Service (NHS). This sees the establishment of the NHS AI Lab
. Its goals are still taking shape, but recommendations include accelerating the safe testing, development, and adoption of AI technologies, and training the workforce of the future.
Moorfields in London, a leading eye hospital, showed how doctors could use Google’s AutoML tools to rapidly train powerful classifiers on OCT images without knowing how to code.
As industries increase their adoption of software, expenses that were previously CapEx become OpEx as a result of abstractions. Biology is entering this transition now with the growth of “cloud labs”. A piece in Nature
describes how robotics + automation + software enables the outsourcing of experiments to drive reproducibility and cost-effectiveness.
Here’s an interesting thread
on the challenges faced by AI-first radiology startups. It points to the difficulty of collecting quality training data that allows trained models to work across different sites, and to startups failing to solve for and integrate into key radiology workflows. More on the key challenges of delivering clinical impact with AI systems in this paper
from Chris Kelly and the team at DeepMind Health.
🇨🇳 AI in China
Analyzing last year’s NeurIPS publication data
shows that Chinese-born researchers conduct a relatively small portion of the most elite AI research (~9%), but a substantial portion (~25%) of upper-tier AI research. Indeed, a majority of these researchers earned their graduate degrees in the US and continued to work there.
Chinese facial recognition companies continue to attract criticism from the West. Several reports have come out tracing US university institutions acting as LPs in China-based VC firms that have invested in Megvii and SenseTime.
Nonetheless, the Beijing Academy of AI, an organization backed by the government, released Beijing (ethical) AI Principles
. These include developing AI in a way that respects “human privacy, dignity, freedom, autonomy, and rights.”
After being put onto the US blacklist, Huawei goes after Verizon Communications over its use of 230 Huawei patents, demanding licensing fees worth up to $1B.
Meanwhile, the US government is pushing Taiwan
to restrict its biggest chipmaker, TSMC, from producing semiconductors for Huawei, the Chinese telecoms group, and to institute stricter controls on technology exports to China. This move is part of the government’s efforts to ban the sale of Huawei products in the US.
Buying a new phone in China will require
a facial recognition test. This is a step beyond the current practice of asking for ID documents to purchase a Chinese SIM card.
Global financing trends for AI-first startups are shifting away from America. Around 60% of all financing went to companies based outside the US, with China driving growth.
Check out the Qianzhan Chanye Report
on the current status and future development trends of China’s AI Industry, translated by Jeff Ding.
🌍 The politics of AI
U.S. Senator Martin Heinrich, the co-founder of the Senate Artificial Intelligence Caucus, filed the Artificial Intelligence Initiative Act
(AI-IA) in late June 2019 as an amendment to the fiscal year 2020 National Defense Authorization Act. The amendment sets out a coordinated national strategy for developing AI. It provides for $2.2B of federal investment over five years to “build an AI-ready workforce, accelerating the responsible delivery of AI applications from government agencies, academia, and the private sector.”
Following the US’s lead, Japan’s government announced that high-tech industries will be added to a list of businesses for which foreign ownership of Japanese firms is restricted as of 1 August 2019. The ownership threshold is 10%. The Japanese say they want both to prevent the leakage of technology, as a matter of national security, and to prevent damage to their defense output.
The UK’s Information Commissioner’s Office and the Turing Institute published an interim report on Project ExplAIn
. The objective of this report is to produce “useful guidance that assists organizations with meeting the expectations of individuals when delivering explanations of AI decisions about them.”
In particular, this work seeks to help organizations comply with these legal requirements, especially against the backdrop of GDPR rules on solely automated decision-making. The report uses data captured from small panels of jurors representing the general public and industry. Unsurprisingly, the strongest message from the juries was that context matters: in particular, the urgency of the decision, the impact of the decision, whether deciding factors can be changed, and how much bias there may be.
AI in defense is a thorny subject. Some governments are actively investing in this area and procuring resources from technology companies, while others refuse. In the US, Anduril
Industries is an AI-first defense technology company that serves the government sector. The company has forged partnerships with the US government as well as with the UK Royal Marines Commando force to “get battle-winning technology straight into the hands of our warfighters.”
In a bold market statement
, Anduril raised a Series B at a $1B valuation.
Dutch universities are expanding their investments into AI. TU Eindhoven received €100M to set up the Eindhoven AI Systems Institute, which will attract 50 new full professors to add to the 100 scientists already working in the field.
NYT profiles the compensation rates
of human data labelers: one India-based company, iMerit, pays between $150 and $200 a month, while it pulls in between $800 and $1,000 of revenue.
A group from the University of Rochester studied the effects
of the “AI professor brain drain” into the industry between 2004 and 2018. They found causal evidence that “AI faculty departures from universities reduced the creation of startups by students who then graduated from those universities” by 13% on average. Furthermore, the study cites that: “Google, Amazon, and Microsoft poached the most AI professors from North American universities. From 2004 to 2018, Google and its subsidiary, DeepMind, together hired 23 tenure-track and tenured AI professors from North American universities. Amazon and Microsoft respectively hired 17 and 13 AI professors. Apart from technology firms, we also see that large firms from the finance industry poach AI professors, such as Morgan Stanley, American Express, and JP Morgan. It is worth noting that these publicly traded firms hired about 45% of 221 professors who accepted an industry job in our sample.”
More on this in the NYT
🔮 Where AI is heading next
Consumers worldwide were caught up in at least two high-profile cases of selfie apps recording, storing, and potentially monetizing user data via third parties. It’s well known that today’s terms and conditions are far too difficult and dense for consumption by the untrained eye. In one case, an app called Ever that is similar to Flickr was reported to have licensed its facial recognition software to third parties, while that software was trained on data captured by the company’s free-to-use consumer service. In another instance, FaceApp usage jumped through the roof as users generated selfies of their elderly-looking selves, only to then become concerned
that the parent company is based in Russia.
Meta-learning for AGI:
Jeff Clune of Uber AI Labs pens a review
on the idea that successfully producing general AI might require us to develop an AI-generating algorithm (AI-GA). In contrast to manual AI development, which involves inventing all components of intelligence and assembling them together to create AGI, Jeff proposes the idea of learning as much as possible in an automated fashion, using an AI-GA that bootstraps itself up from scratch to produce general AI. One part of this challenge is generating effective learning environments.
Separately, Francois Chollet of Google published a meaty piece on The Measure of Intelligence
. Here, he presents a new formal definition of intelligence based on Algorithmic Information Theory
, “describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience, as critical pieces to be accounted for in characterizing intelligent systems.”
He then proposes a new benchmark, the Abstraction and Reasoning Corpus (ARC), which uses explicit priors designed to be as close as possible to innate human priors. The benchmark is available here.
In a popular NeurIPS 2019 workshop, researchers came together to discuss how ML can help tackle the climate crisis. A hefty position paper, Tackling climate change with ML
, was published. It described how ML can help reduce greenhouse gas emissions across problem areas ranging from smart grids to disaster management. Problems and potential ML approaches are presented by different authors and classified as high-risk, high-leverage, or long-term. This is important and timely work. Adding to this paper, the Allen Institute/CMU/WashU published a paper called Green AI, which motivates the need for a “climate impact” evaluation criterion for research alongside accuracy and related measures. They propose “reporting the financial cost or ‘price tag’ of developing, training, and running models to provide baselines for the investigation of increasingly efficient methods.”
Why per-seat pricing needs to die in the Age of AI, a post by Emergence Capital. Unlike traditional SaaS products, which have static feature sets that do not improve unless their developers update them, AI-first software features improve with usage. As such, charging per seat inherently disincentivizes usage and leads to cannibalization of the vendor’s potential. This piece argues that AI-first SaaS should instead design pricing models that maximize product usage and, therefore, product value.
McKinsey Global Institute set out to model the impact of AI on the world economy. Their conclusions are that a) “AI has the potential to deliver additional global economic activity of around $13 trillion by 2030, or about 16 percent higher cumulative GDP compared with today”, and b) “leading AI countries could capture an additional 20 to 25 percent in net economic benefits, compared with today, while developing countries might capture only about 5 to 15 percent.” What’s more, companies that adopt AI become clear front-runners in the longer term.