🌍 AI and the nation state
- The French government announced a bullish plan to offer €1.5bn of public support for AI by 2022. The goal is to stem its brain drain and catch up to the US and China. It builds on President Macron’s vision to make France a “startup nation” and is underpinned by Cédric Villani’s special report, For a Meaningful Artificial Intelligence. Villani supports opening up government data collectives, new ethical frameworks, more public-private partnerships, higher academic salaries, and recruiting 400 AI experts to France over the next two years. Read more here (FYI, the report is 154 pages long).
- DeepMind has opened an office in Paris (announcement), to be led by Rémi Munos. Google has also launched a new AI research team at its Paris office, and is sponsoring academic positions at Polytechnique, projects at INRIA, and training programs for PhDs and postdocs.
🇬🇧 The UK:
- The House of Lords Select Committee on AI released their own report, AI in the UK: ready, willing and able? (FYI, the report is 183 pages long). The government pledged to invest £300m more in AI research (on top of £300m pledged to technology last year). This includes funding to train 8,000 educators, fund 1,000 AI PhDs by 2025, and run a Turing Fellowship programme to attract talent to the country. This still feels insufficient: the country needs to capitalise on the opportunity in AI, especially as it prepares to leave the EU.
- The UK, along with 24 EU member states, is also a signatory to a high-level cooperation agreement on AI. The how, who and when aren’t yet fleshed out.
- Of course, the big critique of these European AI plans is that they must address the pay scale delta between engineers and researchers in Europe vs. the US, both in industry and academia.
🇺🇸 The US:
- The Trump administration has ditched the AI plan Obama published on his way out of office. It has no intention of devising its own plan. Instead, it believes there’s no need for an AI moonshot and that minimizing government intervention is the right approach. False. At the very least, the US government should be massively supporting research and commercial development of AI, given its mighty talent pool and leading technology companies. Yes, it already offers significant grants to universities. However, the talent market is global, and while the US is winning today, it must compete like any other nation to sustain its advantage. It must also plan ahead for the inevitable changes to the nature of work, offer regulatory guidance (autonomous cars are an example), and educate its citizens about what the future holds.
🇨🇳 China:
- As we know, the country continues to blaze its AI path forward to world domination by 2030. The government is fast-tracking Baidu’s self-driving technology through the Xiongan New Area, a new ‘smart city’. The company’s Apollo program has been signed up to power the new city’s vehicle infrastructure, including passenger vehicle fleets, street cleaners and public buses.
- Alibaba is powering ML-based services on the public transport system (customer speech and face recognition while they purchase a ticket). The 5th largest cloud computing provider in the world, Alibaba forked out $2.6bn on R&D last year and will triple this budget with its three-year, $15bn commitment to emerging technology research through its DAMO Academy.
🔮 Where AI is heading next
On the current “AI revolution”: In a lovely piece, Prof. Michael Jordan of Berkeley explores many of the central tenets driving the excitement around AI today. He makes the case for a new engineering discipline, and defines the differences between human-imitative AI (i.e. general intelligence, to some degree), intelligent infrastructure (a connected fabric of computation, data and physical entities) and intelligence augmentation (computation and data used to create services that improve human performance at tasks). This excerpt summarises his points beautifully:
“The current focus on doing AI research via the gathering of data, the deployment of “deep learning” infrastructure, and the demonstration of systems that mimic certain narrowly-defined human skills — with little in the way of emerging explanatory principles — tends to deflect attention from major open problems in classical AI. These problems include the need to bring meaning and reasoning into systems that perform natural language processing, the need to infer and represent causality, the need to develop computationally-tractable representations of uncertainty and the need to develop systems that formulate and pursue long-term goals. These are classical goals in human-imitative AI, but in the current hubbub over the “AI revolution,” it is easy to forget that they are not yet solved.”
On paradigm shifts in technology and the bridges in between: With hindsight, the evolution of technology is marked by paradigm shifts. The PC, the feature phone, the smartphone, the Web, etc. However, these demarcations are tricky to discern while we’re in between cycles. These cycles can also be macro and micro; that is to say, there are several cycles within the smartphone era itself. When looking at the sensor suite available to engineers of autonomous vehicles, it’s clear that LiDAR, radar, and vision are the best sensing modalities we have. However, it’s less clear when and if the puck will slide in a new direction that more faithfully approximates human perception. In this context, Ben Evans asks whether LiDAR is the best we’ve got or whether it’s actually a ‘bridge’ to a more powerful alternative.
On bias and fairness in automated decision systems: This question remains top of mind for businesses and governments that seek to leverage AI. There are two main sources of bias: a) statistical bias, i.e. the training datasets do not represent the statistical properties of the real world, and b) imprints of bias, i.e. the training datasets encapsulate the biases of their creators. Present solutions focus on removing specific labels that unfairly identify groups susceptible to bias (e.g. last name, ethnicity). However, this is often insufficient because other features may enable specific groups to be pulled out of the data (e.g. postcodes). We need new methods to systematically identify bias in automated decision-making systems. DeepMind have set up a research group on the topic.
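To make the postcode point concrete, here is a minimal Python sketch (with entirely made-up toy data) showing why dropping a sensitive label achieves little when another feature acts as a proxy for it:

```python
# Sketch (hypothetical data): removing a sensitive column does not remove
# bias when another feature acts as a proxy for it.
from collections import Counter

# Toy records: postcode "A" is overwhelmingly group 1, "B" is group 0.
rows = [
    {"postcode": "A", "group": 1}, {"postcode": "A", "group": 1},
    {"postcode": "A", "group": 1}, {"postcode": "A", "group": 0},
    {"postcode": "B", "group": 0}, {"postcode": "B", "group": 0},
    {"postcode": "B", "group": 0}, {"postcode": "B", "group": 1},
]

def proxy_strength(rows, feature, sensitive):
    """How often the majority class of `feature` predicts `sensitive`."""
    correct = 0
    for value in {r[feature] for r in rows}:
        subset = [r[sensitive] for r in rows if r[feature] == value]
        correct += Counter(subset).most_common(1)[0][1]
    return correct / len(rows)

# Even with `group` removed from the model's inputs, postcode alone
# recovers it 75% of the time here, so the bias persists.
print(proxy_strength(rows, "postcode", "group"))  # 0.75
```

Auditing for such proxies, rather than just deleting sensitive columns, is the kind of systematic method the research groups above are after.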
On reproducibility in AI: We’ve covered this topic in prior editions of the newsletter, but there’s still no widely adopted answer. Of course, this is true not only in AI, but in most quantitative disciplines that deal with sequential optimisation of methods and digital systems. Keeping track of one’s thought process during development, along with the experimental dependencies, setup, conditions and results at all times, is tricky without purpose-built software in place. Indeed, as this piece points out, “there’s no equivalent to source control or even agreed best-practices about how to archive a training process so that it can be successfully re-run in the future.” Solutions are available internally within large technology companies, and some startups, including Stockholm-based Peltarion, are working on providing tools for everyone else.
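As a rough illustration of what such tooling needs to capture (a hypothetical sketch, not any real tool’s API), archiving a run means recording at least the config, random seed and environment so that a rerun is comparable:

```python
# Minimal sketch of archiving a training run so it can be re-run later:
# capture the config, random seed and environment alongside the result.
# (All names here are illustrative, not a real experiment tracker's API.)
import hashlib, json, platform, random

def archive_run(config, train_fn):
    random.seed(config["seed"])  # fix the seed before any randomness
    record = {
        "config": config,
        "python": platform.python_version(),
        "result": train_fn(config),
    }
    # A content hash of config + environment + result makes reruns comparable.
    record["run_id"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()[:12]
    return record

# A stand-in "training" step: any rerun with the same seed reproduces it.
run = archive_run({"seed": 42, "lr": 0.01}, lambda c: random.random())
rerun = archive_run({"seed": 42, "lr": 0.01}, lambda c: random.random())
print(run["run_id"] == rerun["run_id"])  # True: same seed, same archived run
```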
On data monopolies and the future of the Internet: Many have opined on blockchain-based approaches to liberating personal data from the clutches of monopolistic incumbents. At a high level, machine learning truly shines when data is highly granular, task-specific and centralised by the owner/operator of the model. Blockchain-based networks, on the other hand, shine when there’s token-incentivised collaboration between highly distributed actors, none of whom has a data advantage over the others. Is there an overlapping ground where these two poles meet? This piece by Fred Ehrsam (with contributions from OpenMined and Numerai) is a fascinating window into how decentralised machine learning marketplaces could dismantle the data monopolies of the current tech giants. It involves training metamodels using secure computation methods. Such a metamodel would request decentralised, staked data and model contributions from a community that is repaid as a function of the marginal prediction improvements they deliver against objective evaluation metrics.
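A toy sketch of the repayment rule described above, using a simple leave-one-out marginal improvement rather than any real marketplace protocol (all names, scores and the reward pool are invented):

```python
# Hedged sketch: each data contributor is repaid in proportion to the
# marginal improvement their contribution makes to the metamodel's
# evaluation metric (leave-one-out, not an actual marketplace protocol).

def marginal_payouts(score_fn, contributors, reward_pool):
    """Split reward_pool by each contributor's leave-one-out marginal score."""
    full = score_fn(set(contributors))
    margins = {
        c: max(0.0, full - score_fn(set(contributors) - {c}))
        for c in contributors
    }
    total = sum(margins.values()) or 1.0
    return {c: reward_pool * m / total for c, m in margins.items()}

# Toy metric: contributor "a" is essential, "b" helps a little, "c" adds nothing.
scores = {
    frozenset("abc"): 0.90, frozenset("bc"): 0.50,
    frozenset("ac"): 0.85, frozenset("ab"): 0.90,
}
payouts = marginal_payouts(lambda s: scores[frozenset(s)], "abc", 100.0)
print(round(payouts["a"]), round(payouts["b"]), round(payouts["c"]))  # 89 11 0
```

Real proposals use secure computation so that contributions can be scored without being revealed, but the incentive logic is of this shape.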
On robotic automation: We hear lots about how robots are proliferating all around us. A Swedish economist, however, disagrees on the impact they have on the economy: in the US, labor productivity in manufacturing has not risen in accordance with the increase in robot shipments. On a related note, the lighthouse solution to technological unemployment, namely universal basic income, experienced a setback in Finland, where the public trial was cut short.
🚗 Department of Driverless Cars
An autonomous Volvo XC90 operated by Uber in Arizona struck and tragically killed a pedestrian who was visible to the car’s front-facing camera as she crossed outside of a dedicated zone at night. At the inception of its Arizona program, Uber had two people per vehicle: one in the driver’s seat who would intervene in case the car misbehaved, and another in the passenger seat to oversee the vehicle’s perception systems. Prior to the accident, Uber had reduced its teams down to one employee in the driver’s seat only. Waymo, on the other hand, still use two operators per vehicle. Post-crash analysis showed that neither the autonomous system nor the driver hit the brakes before the car struck the pedestrian; Uber claims that neither agent had detected her (more analysis here). As a result, other companies, including NVIDIA and Toyota, have suspended their public testing programs in the US. Uber has also let go of all four co-founders.
In fact, Mobileye CTO Amnon Shashua wrote a post on the Intel blog to show how their ADAS technology, run on the police video footage, could detect the pedestrian crossing the road with one second to spare. He calls for substantive conversations about safety for autonomous vehicles.
In the US, there are 770 accidents every 1 billion miles of driving, according to NVIDIA. A fleet of 20 test cars can only cover 1 million miles a year. For this reason, amongst several others, businesses developing AV systems are investing in simulation environments. At their annual GTC in the US, NVIDIA presented their photorealistic simulation environment with the self-driving perception and planning stack running autonomously (see video).
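The arithmetic behind that argument is stark, using only the figures quoted above:

```python
# Back-of-envelope on why simulation is needed: at 770 accidents per
# billion miles, a 20-car fleet driving 1 million miles a year would need
# centuries of real-world driving to observe a useful number of rare events.
accidents_per_mile = 770 / 1_000_000_000
fleet_miles_per_year = 1_000_000  # 20 test cars, per the figure above

expected_per_year = accidents_per_mile * fleet_miles_per_year
print(expected_per_year)  # ~0.77 expected accident-grade events per year

# Years of fleet driving needed just to witness ~100 such events:
print(round(100 / expected_per_year))  # 130
```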
At the NYC auto show, Waymo and Jaguar Land Rover inked a deal under which the automaker will supply up to 20,000 of its new electric vehicles to Waymo for conversion into autonomous vehicles. This deal could be worth up to £1.3bn and follows similar tie-ups such as Lyft-Ford, Lyft-Magna, and Uber-Volvo. Waymo are also close to signing a partnership with Honda to create a new autonomous vehicle from scratch.
Tesla has lost its Autopilot chief and suffered its own fatal accident with a Model X that had Autopilot engaged while driving on Highway 101. The driver’s hands were not detected on the steering wheel for the 6 seconds before the crash. Along with the Uber accident, many are calling for the implementation of greater regulation of public testing. Indeed, I think there should be an independent body charged with assessing the performance and safety of the perception, control, and planning systems of self-driving car operators. This piece explores scenarios for how the industry will play out.
Voyage have released their open autonomous safety manual (à la API documentation) covering scenario testing, functional safety, autonomy assessment and a testing toolkit. Cool initiative to drive protocol consensus in this space.
Apple filed for a patent at the nexus of virtual reality and autonomous vehicles. They lay out potential configurations of vehicles in which VR is the primary mode of entertainment and the means through which to perceive the outside world. You could basically be hang gliding or surfing down the 101 instead of falling asleep in traffic :-)
BMW have finally launched a purpose-built center for AV R&D outside Munich! Just in time to try and compete on talent with others who have already set up shop, including Lyft ;-)
Alibaba is running public road tests with its self-driving development cars and is hunting for 50 people for its team (of unknown current size). No blessing from the government yet (unlike its rival Baidu).
💪 The giants
There’s been a reorganisation at the helm of Google AI. John Giannandrea (JG), SVP Engineering, who previously oversaw both search and AI product development, left the company and joined Apple as their Head of AI. Jeff Dean, who has played a pivotal role at Google from the early days and is widely considered a legend, is now in charge of the company’s AI work. This is clearly a sign of how serious the company is about disseminating AI throughout its products. Indeed, their 2017 annual letter to shareholders lists many real use cases where ML powers Google products and features.
Microsoft underwent its own reorganisation to focus on AI. The company is now split into two divisions: Experiences + Devices, and Cloud + AI Platforms. More details here.
Google also came under scrutiny from its own AI developers following word that the company would engage with the US Department of Defense. With Project Maven, the idea is to use Google’s cloud computer vision capabilities “to increase the ability of weapon systems to detect objects”. While the Google Cloud organisation appeared to be in favour of the contract, AI engineers have been signing a petition for the company to halt the project, threatening to resign otherwise. Good response.
Scarily, a similar situation has played out in Korea. In February, a leading defense business (Hanwha Systems) and academic institution (KAIST) launched a joint research center at KAIST to co-develop AI applied to military weapons. It was said to include AI-based missile systems, unmanned submarines and armed quadcopters.
Following on this theme of ethics, automation and privacy, Bloomberg ran a piece on Palantir and their practices of data aggregation in the enterprise, as well as on the battlefield.
On a more bullish note, Jeff Bezos wrote to his shareholders to say that “tens of thousands of customers are also using a broad range of Amazon Web Services machine learning services, with active users increasing more than 250 percent in the last year, spurred by the broad adoption of Amazon SageMaker.” Impressive launch! Alexa too is getting an upgrade to her brain, including a newfound memory unit!
Goldman Sachs published a report where they sized the market opportunity for hardware in the age of AI. The 30,000 ft view suggests that the overall AI hardware TAM has potential to grow from $12bn in 2017 to $35bn/$100bn+ by 2020/2025. This is driven by a) the need for significantly more compute and memory for training networks with enormous numbers of parameters, b) AI-related workloads accounting for a greater share of all datacenter workloads and c) the higher bill of materials for the hardware itself.
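The quoted TAM path implies aggressive, but not outlandish, growth rates:

```python
# Implied compound annual growth rates for the Goldman Sachs figures:
# $12bn in 2017, to $35bn by 2020, and $100bn+ by 2025.
cagr_to_2020 = (35 / 12) ** (1 / 3) - 1   # 2017 -> 2020, 3 years
cagr_to_2025 = (100 / 12) ** (1 / 8) - 1  # 2017 -> 2025, 8 years

print(round(cagr_to_2020, 2))  # 0.43, i.e. ~43% a year to 2020
print(round(cagr_to_2025, 2))  # 0.3, i.e. ~30% a year to 2025
```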
NVIDIA announced their DGX-2 box, which claims 10x the compute of the DGX-1 because it includes 16 GPUs (2x more than the DGX-1), 32GB of (faster) memory per GPU, and a faster GPU interconnect. They’ve created a network fabric with 5x the bandwidth of top PCIe switches on the market to connect all 16 GPUs together, vs. a point-to-point connection. What’s more, it now takes 18 minutes to train AlexNet on a DGX-2 vs. 6 days on two GTX 580 chips (state of the art in 2012). A 500x speedup in 5 years!
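The quoted speedup checks out from the training times given:

```python
# Sanity-checking the claim: 6 days on two 2012-era GPUs vs. 18 minutes
# on a DGX-2.
minutes_2012 = 6 * 24 * 60  # 6 days expressed in minutes
minutes_dgx2 = 18

speedup = minutes_2012 / minutes_dgx2
print(speedup)  # 480.0, i.e. roughly the quoted ~500x in five years
```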
Tesla have shipped their Autopilot 2.5 hardware (teardown here), which includes a secondary GPU.
Facebook has set up a semiconductor group to develop its own system-on-chip/ASIC, firmware and driver development team, most likely due to the cost and performance advantages derived from creating a custom chip fit for Facebook’s workloads.
Microsoft have said they’re entering the AI hardware space, although details are scarce. This is concomitant with a push for Cortana not to be left totally by the wayside.
(as it’s now called) published benchmarks on the computational time and cost required to train networks on ImageNet using different frameworks, architectures and hardware. They show that training an ImageNet classifier to 93% top-5 accuracy using Google Cloud TPUs + AmoebaNet (an architecture learned via evolutionary search) takes less than 7.5 hours and costs just shy of $50.
Spotify is working on an in-car device for streaming its content, which will include a slew of audio commands. Interestingly, Uber also seems to be making a push towards voice by setting up a conversational AI group.
Robots have been shown to automatically plan and control the assembly of an IKEA chair! However, the robot still needs to be given the step-by-step instructions from those pesky plans.
We’re also set to see more innovation in the materials science industry, where it’s recently been shown that machine learning predictions can help discover novel materials.
Babylon, the UK-based mobile telemedicine startup, will deploy its services via WeChat thanks to a deal signed with Tencent. One billion users will be able to message their medical symptoms to Babylon’s app and receive healthcare advice in return. This deal follows a similar arrangement with the Saudi Arabian ministry of health.
The FDA issued an authorisation for IDx-DR, a deep learning-based system for diagnosing eye disease in diabetic adults. The screening involves standard retinal imaging, takes less than a minute, and can be performed without a clinician’s interpretation of the images or results. Interestingly, approval required a clinical trial that gathered screening recommendations and corresponding retinal images from over 800 diabetic patients at ten different primary care sites. The study “indicated that IDx-DR correctly identified the presence of more than mild diabetic retinopathy 87.4% of the time and correctly identified patients with less than mild diabetic retinopathy 89.5% of the time.”
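Those two figures are, respectively, the system’s sensitivity and specificity. What they mean for a patient receiving a positive screen depends on disease prevalence; here is a worked example with an assumed (illustrative, not from the trial) prevalence:

```python
# Worked example: turning the trial's sensitivity and specificity into a
# positive predictive value. The prevalence below is an assumption for
# illustration, not a figure from the IDx-DR study.
sensitivity = 0.874  # correctly flags disease when present
specificity = 0.895  # correctly clears patients without it
prevalence = 0.30    # assumed share of screened diabetics with the disease

true_pos = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)
ppv = true_pos / (true_pos + false_pos)
print(round(ppv, 3))  # 0.781: ~78% of positive screens are true positives
```

At lower prevalence the same accuracy figures yield a much lower PPV, which is why screening tools are evaluated in the population they will actually serve.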