Daily Archives: July 22, 2021

News: Filing: Instant grocery startup Gopuff to raise $750M at a $13.5B valuation; sources: it’s actually $1B on a $15B valuation

Gopuff, the “instant” grocery delivery startup that has been on an acquisition and expansion tear in the last several months to scale its business, is also racing to raise money to fuel those efforts. Documents uncovered by Prime Unicorn Index and shared with TechCrunch show that the startup has filed papers in Delaware to raise up to $750 million in a Series H round of funding — at a valuation of $13.5 billion if all shares are issued. While the company is not commenting on the filing, a well-placed source tells us that it’s actually closing as a $1 billion raise at a $15 billion valuation.

As with all Delaware filings, this one only tells part of the story, so the company might ultimately raise more or less before the round closes. (And in this case it looks like “more.”)

For some funding context, it was only in March that Gopuff raised $1.15 billion at an $8.9 billion valuation. And that round came just months after a $380 million round (at a $3.8 billion valuation). With Gopuff’s instant grocery model comes instant funding, it seems: together the three rounds would total around $2.5 billion in funding in the space of 10 months. (Investors in the company’s previous rounds have included Accel, D1 Capital Partners, Fidelity Management and Research Company, Baillie Gifford, Eldridge, Reinvent Capital, Luxor Capital, and SoftBank.)

Much like the investment race in the transportation-on-demand market, a large part of the fundraising in instant grocery seems to be aimed at scaling as fast as possible to build technological, operational and customer moats.

So for Gopuff, some of the money it’s raised so far has been used to expand organically. That is, it’s investing to acquire new customers and build out its infrastructure — riders, “dark” stores stocked with its products, and most recently “Gopuff kitchens” — within the 650+ cities in the U.S. where it already operates its $1.95 flat fee “in minutes” delivery service. It will likely be doing so at a particularly fast pace, considering that others like DoorDash are also moving in to compete in earnest around the same model for quick deliveries of a limited selection of food and drinks, home essentials, and over-the-counter medication.

But alongside that, some of the cash it is amassing is also being used for acquisitions. So far, these have been limited to the U.S. and aimed at expanding Gopuff’s breadth in that market. It bought alcohol retailer BevMo for $350 million in November 2020, and in June of this year Gopuff acquired logistics tech company rideOS for $115 million.

The next stage of that acquisition process looks like it may be focused on snapping up similar companies in key markets where Gopuff wants to be in the future, particularly internationally, as it works toward a reported ambition of reaching $1 billion in revenues this year (3x last year’s numbers).

In June, there were rumors that Gopuff had approached Flink, an instant grocery player in Germany. While that has not gone anywhere (yet?), well-placed sources have told us — and, it seems, others — that Gopuff is also casting its international eye on England, engaging in discussions to acquire two different instant delivery companies based out of London: first Fancy back in February, and more recently, Dija.

Gopuff also declined to comment on Dija, but we have multiple, well-placed sources telling us it’s in the works.

London is a hugely competitive market for instant grocery delivery at the moment — not least because it is dense and often hard to get around, has demonstrated a strong consumer appetite for on-demand delivery services, and has a population of younger people with enough disposable income to pay a little more for convenience.

That speaks of opportunity, but possibly too many hopefuls as well. In addition to Dija and Fancy, we have Turkey’s Getir, backed by Sequoia and a number of others and currently on an ambitious international roll; Gorillas (like Flink, from Berlin); Zapp; and Weezy — all offering “instant” grocery delivery. And these are just the standalone, newer startups. Still to come: established restaurant delivery players like Deliveroo that might also throw their hats into the ring.

Perhaps unsurprisingly, given that field, we’ve heard that Dija has been struggling to raise more money, and that led to the startup looking for buyers as an alternative.

That is a trend that’s playing out elsewhere too: In Spain Getir earlier this month acquired Blok, another new instant player that was struggling to get investors on board. We confirmed with well-placed sources that Dija had also talked with Getir in this context (that didn’t go anywhere) before Gopuff entered the picture. There will likely be more of these.

“It’s going to be a bloodbath,” is how one big investor recently described the instant grocery market to me.

Given that online grocery remains a relatively minor part of the market — even with the pandemic and its habit-changing impact on e-commerce, it’s still under 10% of sales, even in the most adoption-friendly cities — there is still a lot to play for in “instant” groceries. But if this latest round shows us anything, it’s that the most promising of these delivery companies will continue raising a lot more money to position themselves as consolidators within it.

Additional reporting: Natasha Lomas

News: UK’s Mindtech raises $3.25M from In-Q-Tel, among others, to train CCTV cameras on synthetic humans

Imagine a world where no one’s privacy is breached, no faces are scanned into a gargantuan database, and no privacy laws are broken. This is a world that is fast approaching. Could companies simply dump the need for real-world CCTV footage, and switch to synthetic humans, acting out potential scenarios a million times over? That’s the tantalizing prospect of a new UK startup that has attracted funding from an influential set of investors.

UK-based Mindtech Global has developed what it describes as an end-to-end synthetic data creation platform. In plain English, its system can imagine visual scenarios such as someone’s behavior inside a store, or crossing the street. This data is then used to train AI-based computer vision systems for customers such as big retailers and warehouse operators, and in sectors like healthcare, transportation and robotics. It literally trains a ‘synthetic’ CCTV camera inside a synthetic world.

It’s now closed a $3.25 million early-stage funding round led by UK regional backer NPIF – Mercia Equity Finance, with participation from Deeptech Labs and In-Q-Tel.

That last investor is significant. In-Q-Tel invests in startups that support US intelligence capabilities and is based in Arlington, Virginia…

Mindtech’s Chameleon platform is designed to help computers understand and predict human interactions. As we all know, current approaches to training AI vision systems require companies to source data such as CCTV footage. The process is fraught with privacy issues, costly, and time-consuming. Mindtech says Chameleon solves that problem, as its customers quickly “build unlimited scenes and scenarios using photo-realistic smart 3D models”.
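To make that concrete, here is a minimal, hypothetical sketch of the general workflow such a platform supports: render labeled synthetic frames, then fine-tune an off-the-shelf vision model on them. This is not Mindtech’s actual API (Chameleon’s interface isn’t public); the folder layout, class names and PyTorch/torchvision choices below are illustrative assumptions only.

```python
# Hypothetical sketch: fine-tuning a stock vision model on synthetic,
# auto-labeled frames exported from a scene-generation tool.
# The directory layout and class names are assumptions for illustration.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Synthetic renders exported as ordinary image files, one folder per label,
# e.g. synthetic_frames/person_falling/, synthetic_frames/normal_activity/
train_set = datasets.ImageFolder("synthetic_frames/", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)                    # generic backbone
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The point is simply that once a scenario generator emits ordinary labeled images, any standard computer-vision training pipeline can consume them without ever touching real CCTV footage.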

An added bonus is that these synthetic humans can be used to train AI vision systems to weed out human failings around diversity and bias.

Mindtech CEO Steve Harris

Steve Harris, CEO, Mindtech said: “Machine learning teams can spend up to 80% of their time sourcing, cleaning, and organizing training data. Our Chameleon platform solves the AI training challenge, freeing the industry to focus on higher-value tasks like AI network innovation. This round will enable us to accelerate our growth, enabling a new generation of AI solutions that better understand the way humans interact with each other and the world around them.”

So what can you do with it? Consider the following: A kid slips from its parent’s hand at the mall. The synthetic CCTV running inside Mindtech’s scenario is trained thousands of times over how to spot it in real time and alert staff. Another: a delivery robot meets kids playing in a street and works out how to avoid them. Finally: a passenger on the platform is behaving erratically too close to the rails — the CCTV is trained to automatically spot them and send help.

Nat Puffer, Managing Director (London), In-Q-Tel commented: “Mindtech impressed us with the maturity of their Chameleon platform and their commercial traction with global customers. We’re excited by the many applications this platform has across diverse markets and its ability to remove a significant roadblock in the development of smarter, more intuitive AI systems.”

Miles Kirby, CEO, Deeptech Labs said: “As a catalyst for deeptech success, our investment, and accelerator program supports ambitious teams with novel solutions and the appetite to build world-changing companies. Mindtech’s highly-experienced team are on a mission to disrupt the way AI systems are trained, and we’re delighted to support their journey.”

There is of course potential for darker applications, such as spotting petty theft inside supermarkets, or perhaps ‘optimising’ hard-pressed warehouse workers in some dystopian fashion. However, in theory, Mindtech’s customers can use this platform to rid themselves of the biases of middle managers, and better serve customers.

News: Sendlane raises $20M to convert shoppers into loyal customers

The co-founders set out to build an email marketing automation platform for customers that wanted to do more than email campaigns and newsletters.

Sendlane, a San Diego-based multichannel marketing automation platform, announced Thursday it raised $20 million in Series A funding.

Five Elms Capital and others invested in the round to give Sendlane total funding of $23 million since the company was founded in 2018.

Though the company officially started three years ago, co-founder and CEO Jimmy Kim told TechCrunch he began working on the idea back in 2013 with two other co-founders.

They were all email marketers in different lines of business, but had some common ground in that they were all using email tools they didn’t like. The ones they did like came with too big of a price tag for a small business, Kim said. They set out to build their own email marketing automation platform for customers that wanted to do more than email campaigns and newsletters.

When two other companies Kim was involved in exited in 2017, he decided to put both feet into Sendlane to build it into a system that maximized revenue based on insights and integrations.

In late 2018, the company attracted seed funding from Zing Capital and decided in 2019 to pivot into e-commerce. “Based on our personal backgrounds and looking at the customers we worked with, we realized that is what we did best,” Kim said.

Today, more than 1,700 e-commerce companies use Sendlane’s platform to convert more than 100 points of their customers’ data — abandoned carts, which products sell the best and which marketing channel is working — into engaging communications aimed at driving customer loyalty. The company said it can increase revenue for customers between 20% and 40% on average.

The company itself is growing 100% year over year and seeing over $7 million in annual recurring revenue. It has 54 employees right now, and Kim expects to be at around 90 by the end of the year and 150 by the end of 2022. Sendlane currently has more than 20 open roles, he said.

That current and potential growth was a driver for Kim to go after the Series A funding. He said Sendlane became profitable last year, which is why it has not raised a lot of money so far. However, as the rapid adoption of e-commerce continues, Kim wants to be ready for the next wave of competition coming in, which he expects in the next year.

He considers companies like ActiveCampaign and Klaviyo to be in line with Sendlane, but says his company’s differentiator is customer service, boasting short wait times and chats that answer questions in less than 15 seconds.

He is also ready to go after the next vision, which is to unify data and insights to create meaningful interactions between customers and retailers.

“We want to start carving out a new space,” Kim added. “We have a ton of new products coming out in the next 12 to 18 months and want to be the single source for customer journey data insights that provides flexibility for your business to grow.”

Two upcoming tools include Audiences, which will unify customer data and provide insights, and an SMS product that enables two-way communication and campaign-level sending.

 

News: US Secretary of Transportation Pete Buttigieg is coming to Disrupt

The myriad emerging and longer-term transportation technologies promise to change how people and packages move about the world or within their own neighborhoods. They also present myriad regulatory and policy hurdles that lawmakers, advocates and even investors and industry executives are attempting to navigate.

At the center — at least in the United States — sits Secretary of Transportation Pete Buttigieg. The former small-town Indiana mayor turned presidential candidate and now cabinet member under the Biden administration oversees public transport, highway safety and nascent technologies like autonomous vehicles. The Harvard graduate, Rhodes Scholar at Oxford University and former U.S. Navy officer is in a position to bring complexity or clarity to the future of transportation.

At Disrupt 2021, Secretary Buttigieg will join us for a fireside chat where we’ll dig into some of the thorniest questions around transportation and how to ensure that moving from Point A to Point B is a universal right, not a privilege. We’ll ask Buttigieg about micromobility and public transit, President Biden’s push for the federal government to use electric vehicles, autonomous vehicle guidance and new regulatory requirements around reporting vehicle crashes when an advanced driver assistance or automated driving system is engaged — a move that could spur a new wave of startups and benefit some in-car technologies.

The upshot: If it involves technology that moves people and packages, we aim to talk about it.

Secretary Buttigieg is just one of the many high-profile speakers who will be on our Disrupt Stage and the Extra Crunch Stage. During the three-day event, writer, director, actor and Houseplant co-founder Seth Rogen will be joined by Houseplant Chief Commercial Officer Haneen Davies and co-founder and CEO Michael Mohr to talk about the business of weed; Duolingo CEO and co-founder Luis von Ahn will discuss gamifying education and prepping for a public offering; and Coinbase CEO Brian Armstrong will dig into the volatile world of cryptocurrency and his company’s massive direct listing earlier this year.

Other speakers include Twitter CISO Rinki Sethi, Calendly founder and CEO Tope Awotona, Mirror co-founder and CEO Brynn Putnam, Evil Geniuses CEO Nicole LaPointe Jameson and Andreessen Horowitz General Partner Katie Haun.

Disrupt 2021 wouldn’t be complete without Startup Battlefield, the competition that launched some of the world’s biggest tech companies, including Cloudflare and Dropbox. Join Secretary Buttigieg and over 10,000 of the startup world’s most influential people at Disrupt 2021 online this September 21-23. Get your pass to attend now for under $99 for a limited time!

News: Canada’s startup market booms alongside hot global VC investment

Canada’s startup industry seems to be benefiting from both domestic and international trends, a wide genre focus and more than one hub. Let’s talk aboot it.

Continuing our global look into the torrid pace of venture capital investment in the second quarter, today we turn to Canada. While many markets have posted impressive results, like the United States setting the pace for new all-time records in dollars invested into startups, Canada’s numbers stand out.

The country, now famous in the startup world for giving birth to Shopify, has already crushed prior yearly records for venture investment thus far in 2021. Indeed, CB Insights data indicates that Canadian startups this year have already raised more than double their 2020 totals.

The same data set indicates that Canada’s venture capital results now rival those of the entire Latin American region, with exits and mega-deals coming in roughly on par in the second quarter, and a similar number of total venture capital rounds in the period.

That caught our attention.


The Exchange reached out to a number of venture capitalists to expand our perspective on the Canadian market beyond the data points. Matt Cohen, a Toronto-based investor at Ripple Ventures, told The Exchange that “Canada is in a venture explosion” today, leading to results that are “unprecedented” for the country.

Taking the data and investor notes in aggregate, Canada’s startup industry seems to be benefiting from both domestic and international trends, a wide genre focus and more than one hub. Let’s talk aboot it.

A venture capital blowout

In the first half of 2021, Canadian startups raised $6.3 billion across 414 deals, per CB Insights data. Both numbers compare favorably to Canada’s 2020 results, when 617 deals led to $2.9 billion in total capital raised by Canadian startups. Canada has already bested its previous record in venture dollars invested ($4.3 billion, 2019), and is on pace to beat its all-time deal count as well (720, 2018).

Amazingly, the second quarter’s results on their own are even more extreme than the H1 2021 totals might have led you to expect. Observe the following chart from the same data set:

Image Credits: CB Insights

Canadian startups just had their single best quarter ever in both deal volume and dollar volume terms. Furthermore, the country boosted capital raised by nearly 10x from its local minimum in Q4 2020.

Notably, no Canadian startup deal in the quarter was worth more than $500 million; indeed, Trulioo’s $394 million Series D was the largest. From there the list includes $300 million for ApplyBoard’s Series D and Vena’s $242 million Series C. We read that list of results as indicative of an investing landscape in Canada that is not dominated by a handful of companies raising billion-dollar rounds. That’s good news, mind you: The data implies that the Canadian startup market is not being bolstered by one or two standout companies, but rather performing well more generally.

News: Leaving the cube

I did a bit of a double-take on this one: $100 million is a big number at any point, but two-and-a-half months after a $56 million round is pretty wild. At the very least, we know Path Robotics is ready to put its money where its mouth is — and that Tiger Global likes what it sees in the welding robotics firm.

The “pre-emptive” Series C brings its total funding to $171 million and leapfrogs it toward the top of the most well-funded construction robotics companies. There’s a lot of room here, of course. The global construction market is in the tens of trillions of dollars, annually. And one of the beauties of the industry is precisely how many flanks there are to attack it from.

Image Credits: Path Robotics

Path’s particular funding…well, path, points to ambitions beyond welding. But that’s a good place to start, with a massive labor shortage of around 400,000 jobs in the U.S. alone by 2024. Tiger Global partner Griffin Schroeder pulls the curtain back a touch, stating:

Path’s innovative approach to computer vision and proprietary AI software allows robots to sense, understand and adapt to the challenges of each unique welding project. We believe this breakthrough technology can be adopted for many other applications and products beyond just welding, to serve their customers holistically.

I do think there’s a risk of taking on too much too fast for a startup — even one as well-funded as Path.

Image Credits: ADUSA Distribution

Verve Motion’s funding round just barely missed the cutoff for roundup inclusion last week. It’s tough when your lead-in is a $100 million round, but $15 million’s certainly nothing to scoff at. A spinout of some of the really interesting work being done at Conor Walsh’s lab at Harvard’s Wyss Institute and the John A. Paulson School of Engineering and Applied Sciences, Verve Motion is one of a number of startups in the exoskeleton/exosuit category.

There are two largely distinct audiences for this tech: people with mobility issues and the blue-collar labor force. For now, at least, Verve is targeting the latter, with its soft exosuits designed to help reduce workplace injuries from activities like repetitive lifting. Honestly, it fits the dull, dirty, dangerous paradigm pretty well.

Less fun news out of OpenAI, which quietly disbanded its robotics team. The move actually came last October, but VentureBeat reported on it last week. The team was probably best known for its Rubik’s Cube-solving robotic hand — a fascinating project, but apparently a bit of a dead end. Quoting a spokesperson:

After advancing the state of the art in reinforcement learning through our Rubik’s Cube project and other initiatives, last October we decided not to pursue further robotics research and instead refocus the team on other projects. Because of the rapid progress in AI and its capabilities, we’ve found that other approaches, such as reinforcement learning with human feedback, lead to faster progress in our reinforcement learning research.

And in the department of horribly butchering a funny thing Mark Twain once said, reports of Pepper’s death are…if not exaggerated, then at least disputed by the source. What remains clear is that the robotic face of SoftBank wasn’t doing what the firm had hoped, and at the very least, it has decided to go back to the drawing board.

In addition to continuing refurbished sales of the signage-holding humanoid bot, SoftBank Robotics CMO Kazutaka Hasumi told Reuters, “We will still be selling Pepper in five years.” It’s hard to know what to make of that. As far as these things go, Pepper wasn’t a particularly useful robot, in spite of having a solid pedigree owing to SoftBank’s acquisition of French firm Aldebaran.

At the very least, the company is mulling over some kind of redesign. That alone seems unlikely to move the needle much.

News: Mercedes-Benz to build eight battery factories in push to become electric-only automaker by 2030

Mercedes-Benz laid out Thursday a 40 billion-euro ($47B) plan to become an electric-only automaker by the end of the decade, a target that will push the company to become more vertically integrated, train its workforce and secure the batteries needed to power its products.

Mercedes has given itself some wiggle room in that ambitious goal, noting that it will be “ready to go all electric by the end of the decade, where markets allow.” This could mean that some combustion-engine Mercedes models, which are already equipped with 48-volt mild hybrid systems, will be produced and sold beyond the decade.

“We have a very clear plan to rapidly scale BEVs; we want to get on this journey to a BEV (battery electric) only world,” Daimler AG and Mercedes-Benz AG CEO Ola Källenius said during a media call after the announcement. “We want to be the people that make that happen, not just let it happen for us and go with the flow, but really taking initiative. And we believe also that the luxury segment, where we belong, has all the hallmarks of being a leading segment for this transition.”

The company has already taken some action, announcing Thursday that it has acquired UK-based electric motor company YASA and has determined it will need battery capacity of more than 200 gigawatt hours. To meet those needs, Mercedes plans to set up eight battery factories with partners to produce cells.

The new plants, one of which will be located in the United States, are on top of the company’s already planned network of nine factories dedicated to building battery systems. The company said it will team up with new European partners to develop and efficiently produce future cells and modules. That “European partners” designation is strategic and one that Mercedes says will ensure the region “remains at the heart of the auto industry.”

Mercedes said it has partnered with Sila Nano, the Silicon Valley battery materials startup that raised $590 million earlier this year, to help it improve its next generation of batteries. Specifically, Sila Nano is helping Mercedes increase energy density by using a silicon-carbon composite in the anode, which should boost range and allow for shorter charging times.

Mercedes is also looking into solid-state battery technology and said it is in talks with partners to develop batteries with even higher energy density and safety.

The plan unveiled Thursday piggybacks on previous goals to build and sell more EVs. Back in 2017, Mercedes said it would electrify — which means gas-hybrid, plug-in hybrid or battery electric — its entire lineup by 2022. The German automaker said Thursday that by next year it will offer battery-electric vehicles in every segment that it serves.

Its EV-only plan will accelerate from there. By 2025, the company said, its three newly launched vehicle architectures will be electric-only. The company now expects all-electric vehicles and hybrids to make up 50% of its sales, up from its previous guidance of 25%. Customers will also be able to choose an all-electric alternative for every model the company makes.

Källenius said the company’s goal marks a “profound reallocation of capital.” He stressed that the company’s profitability targets would be safeguarded and met despite this hefty investment and shift away from the internal combustion engine.

To meet this target, Mercedes is launching three electric-only architectures, which will form the basis of all of its new vehicles. Its so-called MB.EA platform will be used for its medium to large passenger cars, while AMG.EA will underpin its performance Mercedes-AMG cars and VAN.EA will be a dedicated architecture for electric passenger minivans and light commercial vehicles. The company has already announced that its “electric first” compact car architecture, known as MMA, will launch in vehicles by 2024.

News: DeepMind puts the entire human proteome online, as folded by AlphaFold

DeepMind and several research partners have released a database containing the 3D structures of nearly every protein in the human body, as computationally determined by the breakthrough protein folding system demonstrated last year, AlphaFold. The freely available database represents an enormous advance and convenience for scientists across hundreds of disciplines and domains, and may very well form the foundation of a new phase in biology and medicine.

The AlphaFold Protein Structure Database is a collaboration between DeepMind, the European Bioinformatics Institute and others, and consists of hundreds of thousands of protein sequences with their structures predicted by AlphaFold — and the plan is to add millions more to create a “protein almanac of the world.”

“We believe that this work represents the most significant contribution AI has made to advancing the state of scientific knowledge to date, and is a great example of the kind of benefits AI can bring to society,” said DeepMind founder and CEO Demis Hassabis.

From genome to proteome

If you’re not familiar with proteomics in general — and it’s quite natural if that’s the case — the best way to think about this is perhaps in terms of another major effort: that of sequencing the human genome. As you may recall from the late ’90s and early ’00s, this was a huge endeavor undertaken by a large group of scientists and organizations across the globe and over many years. The genome, finished at last, has been instrumental to the diagnosis and understanding of countless conditions, and in the development of drugs and treatments for them.

It was, however, just the beginning of the work in that field — like finishing all the edge pieces of a giant puzzle. And one of the next big projects everyone turned their eyes toward in those years was understanding the human proteome — which is to say all the proteins used by the human body and encoded into the genome.

The problem with the proteome is that it’s much, much more complex. Proteins, like DNA, are sequences of known molecules; in DNA these are the handful of familiar bases (adenine, guanine, etc.), but in proteins they are the 20 amino acids (each of which is coded by multiple bases in genes). This in itself creates a great deal more complexity, but it’s only the start. The sequences aren’t simply “code” but actually twist and fold into tiny molecular origami machines that accomplish all kinds of tasks within our body. It’s like going from binary code to a complex language that manifests objects in the real world.

Practically speaking this means that the proteome is made up of not just 20,000 sequences of hundreds of acids each, but that each one of those sequences has a physical structure and function. And one of the hardest parts of understanding them is figuring out what shape is made from a given sequence. This is generally done experimentally using something like x-ray crystallography, a long, complex process that may take months or longer to figure out a single protein — if you happen to have the best labs and techniques at your disposal. The structure can also be predicted computationally, though the process has never been good enough to actually rely on — until AlphaFold came along.

Taking a discipline by surprise

Without going into the whole history of computational proteomics (as much as I’d like to), we essentially went from distributed brute-force tactics 15 years ago — remember Folding@home? — to more honed processes in the last decade. Then AI-based approaches came on the scene, making a splash in 2019 when DeepMind’s AlphaFold leapfrogged every other system in the world — then made another jump in 2020, achieving accuracy levels high enough and reliable enough that it prompted some experts to declare the problem of turning an arbitrary sequence into a 3D structure solved.

I’m only compressing this long history into one paragraph because it was extensively covered at the time, but it’s hard to overstate how sudden and complete this advance was. This was a problem that stumped the best minds in the world for decades, and it went from “we maybe have an approach that kind of works, but extremely slowly and at great cost” to “accurate, reliable, and can be done with off the shelf computers” in the space of a year.

Examples of protein structures predicted by AlphaFold

Image Credits: DeepMind

The specifics of DeepMind’s advances and how it achieved them I will leave to specialists in the fields of computational biology and proteomics, who will no doubt be picking apart and iterating on this work over the coming months and years. It’s the practical results that concern us today, as the company employed its time since the publication of AlphaFold 2 (the version shown in 2020) not just tweaking the model, but running it… on every single protein sequence they could get their hands on.

The result is that 98.5% of the human proteome is now “folded,” as they say, meaning there is a predicted structure that the AI model is confident enough (and importantly, we are confident enough in its confidence) represents the real thing. Oh, and they also folded the proteome for 20 other organisms, like yeast and E. coli, amounting to about 350,000 protein structures total. It’s by far — by orders of magnitude — the largest and best collection of this absolutely crucial information.

All that will be made available as a freely browsable database that any researcher can simply plug a sequence or protein name into and immediately be provided the 3D structure. The details of the process and database can be found in a paper published today in the journal Nature.

“The database as you’ll see it tomorrow, it’s a search bar, it’s almost like Google search for protein structures,” said Hassabis in an interview with TechCrunch. “You can view it in the 3D visualizer, zoom around it, interrogate the genetic sequence… and the nice thing about doing it with EMBL-EBI is it’s linked to all their other databases. So you can immediately go and see related genes, related genes in other organisms, other proteins that have related functions, and so on.”
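For researchers who would rather script it than use the search bar, here is a minimal sketch of pulling a single predicted structure file. The URL pattern below (a UniProt accession plus “-F1-model_v1”) reflects how the database exposed files around launch and should be treated as an assumption; the accession P69905 (human hemoglobin subunit alpha) is simply an example.

```python
# Minimal sketch: downloading one AlphaFold-predicted structure from the
# public AlphaFold Protein Structure Database. The file-naming scheme is
# an assumption based on the launch-era URLs; check alphafold.ebi.ac.uk
# for the current scheme before relying on it.
import requests

uniprot_accession = "P69905"  # human hemoglobin subunit alpha, example only
url = f"https://alphafold.ebi.ac.uk/files/AF-{uniprot_accession}-F1-model_v1.pdb"

response = requests.get(url, timeout=30)
response.raise_for_status()  # fail loudly if the accession or URL is wrong

with open(f"AF-{uniprot_accession}.pdb", "wb") as handle:
    handle.write(response.content)

print(f"Saved predicted structure for {uniprot_accession} "
      f"({len(response.content)} bytes)")
```

The resulting PDB file can then be opened in any standard molecular viewer alongside the database’s own 3D visualizer.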

“As a scientist myself, who works on an almost unfathomable protein,” said EMBL-EBI’s Edith Heard (she didn’t specify which protein), “it’s really exciting to know that you can find out what the business end of a protein is now, in such a short time — it would have taken years. So being able to access the structure and say ‘aha, this is the business end,’ you can then focus on trying to work out what that business end does. And I think this is accelerating science by steps of years, a bit like being able to sequence genomes did decades ago.”

So new is the very idea of being able to do this that Hassabis said he fully expects the entire field to change — and change the database along with it.

“Structural biologists are not yet used to the idea that they can just look up anything in a matter of seconds, rather than take years to experimentally determine these things,” he said. “And I think that should lead to whole new types of approaches to questions that can be asked and experiments that can be done. Once we start getting wind of that, we may start building other tools that cater to this sort of serendipity: What if I want to look at 10,000 proteins related in a particular way? There isn’t really a normal way of doing that, because that isn’t really a normal question anyone would ask currently. So I imagine we’ll have to start producing new tools, and there’ll be demand for that once we start seeing how people interact with this.”

That includes derivative and incrementally improved versions of the software itself, which has been released in open source along with a great deal of development history. Already we have seen an independently developed system, RoseTTAFold, from researchers at the University of Washington’s Baker Lab, which extrapolated from AlphaFold’s performance last year to create something similar yet more efficient — though DeepMind seems to have taken the lead again with its latest version. But the point was made that the secret sauce is out there for all to use.

Practical magic

Although the prospect of structural bioinformaticians attaining their fondest dreams is heartwarming, it is important to note that there are in fact immediate and real benefits to the work DeepMind and EMBL-EBI have done. It is perhaps easiest to see in their partnership with the Drugs for Neglected Diseases Institute.

The DNDI focuses, as you might guess, on diseases that are rare enough that they don’t warrant the kind of attention and investment from major pharmaceutical companies and medical research outfits that would potentially result in discovering a treatment.

“This is a very practical problem in clinical genetics, where you have a suspected series of mutations, of changes in an affected child, and you want to try and work out which one is likely to be the reason why our child has got a particular genetic disease. And having widespread structural information, I am almost certain will improve the way we can do that,” said EMBL-EBI’s Ewan Birney in a press call ahead of the release.

Ordinarily, examining the proteins suspected of being at the root of a given problem would be expensive and time-consuming, and for diseases that affect relatively few people, money and time are in short supply when they can be applied to more common problems like cancers or dementia-related diseases. But by being able to simply call up the structures of 10 healthy proteins and 10 mutated versions of the same, researchers may find insights in seconds that might otherwise have taken years of painstaking experimental work. (The drug discovery and testing process still takes years, but maybe now it can start tomorrow for Chagas disease instead of in 2025.)

Illustration of RNA polymerase II (a protein) in action in yeast. Image Credits: Getty Images / JUAN GAERTNER/SCIENCE PHOTO LIBRARY

Lest you think too much is resting on a computer’s prediction of experimentally unverified results, in another, totally different case, some of the painstaking work had already been done. John McGeehan of the University of Portsmouth, with whom DeepMind partnered for another potential use case, explained how this affected his team’s work on plastic decomposition.

“When we first sent our seven sequences to the DeepMind team, for two of those we already had experimental structures. So we were able to test those when they came back, and it was one of those moments, to be honest, when the hairs stood up on the back of my neck,” said McGeehan. “Because the structures that they produced were identical to our crystal structures. In fact, they contained even more information than the crystal structures were able to provide in certain cases. We were able to use that information directly to develop faster enzymes for breaking down plastics. And those experiments are already underway, immediately. So the acceleration to our project here is, I would say, multiple years.”

The plan is to, over the next year or two, make predictions for every single known and sequenced protein — somewhere in the neighborhood of a hundred million. And for the most part (the few structures not susceptible to this approach seem to make themselves known quickly) biologists should be able to have great confidence in the results.

Inspecting molecular structure in 3D has been possible for decades, but finding that structure in the first place is difficult. Image Credits: DeepMind

The process AlphaFold uses to predict structures is, in some cases, better than experimental options. And although there is an amount of uncertainty in how any AI model achieves its results, Hassabis was clear that this is not just a black box.

“For this particular case, I think explainability was not just a nice-to-have, which often is the case in machine learning, but it was a must-have, given the seriousness of what we wanted it to be used for,” he said. “So I think we’ve done the most we’ve ever done on a particular system to make the case with explainability. So there’s both explainability on a granular level on the algorithm, and then explainability in terms of the outputs, as well the predictions and the structures, and how much you should or shouldn’t trust them, and which of the regions are the reliable areas of prediction.”

Nevertheless, his description of the system as “miraculous” attracted my special sense for potential headline words. Hassabis said that there’s nothing miraculous about the process itself, but rather that he’s a bit amazed that all their work has produced something so powerful.

“This was by far the hardest project we’ve ever done,” he said. “And, you know, even when we know every detail of how the code works, and the system works, and we can see all the outputs, it’s still just a bit miraculous when you see what it’s doing… that it’s taking this 1D amino acid chain and creating these beautiful 3D structures, a lot of them aesthetically incredibly beautiful, as well as scientifically and functionally valuable. So it was more a statement of a sort of wonder.”

Fold after fold

The impact of AlphaFold and the proteome database won’t be felt for some time at large, but it will almost certainly — as early partners have testified — lead to some serious short-term and long-term breakthroughs. But that doesn’t mean that the mystery of the proteome is solved completely. Not by a long shot.

As noted above, the complexity of the genome is nothing compared to that of the proteome at a fundamental level, but even with this major advance we have only scratched the surface of the latter. AlphaFold solves a very specific, though very important problem: given a sequence of amino acids, predict the 3D shape that sequence takes in reality. But proteins don’t exist in a vacuum; they’re part of a complex, dynamic system in which they are changing their conformation, being broken up and reformed, responding to conditions, the presence of elements or other proteins, and indeed then reshaping themselves around those.

In fact a great deal of the human proteins for which AlphaFold gave only a middling level of confidence to its predictions may be fundamentally “disordered” proteins that are too variable to pin down the way a more static one can be (in which case the prediction would be validated as a highly accurate predictor for that type of protein). So the team has its work cut out for it.

“It’s time to start looking at new problems,” said Hassabis. “Of course, there are many, many new challenges. But the ones you mentioned, protein interaction, protein complexes, ligand binding, we’re working actually on all these things, and we have early, early stage projects on all those topics. But I do think it’s worth taking, you know, a moment to just talk about delivering this big step… it’s something that the computational biology community’s been working on for 20, 30 years, and I do think we have now broken the back of that problem.”

News: Silicon Valley comms expert Caryn Marooney shares how to nail the narrative

“You may think that the message is boring because you’ve said it once. But repetition never spoils the prayer.”

Caryn Marooney, Silicon Valley communications professional turned venture capitalist, spoke extensively on storytelling at TechCrunch Early Stage: Marketing and Fundraising. During her talk, she broke down messaging into four critical parts.

Marooney knows what she’s talking about: Throughout her time in Silicon Valley, she helped companies like Salesforce, Amazon, Facebook and more launch products and maintain messaging. In 2019, she left Facebook, where she was VP of technology communication, and joined Coatue Management as a general partner.

The presentation is summarized below and lightly edited for readability. Marooney breaks down her method into the acronym of RIBS: Relevance, Inevitability, Believability and keeping it Simple. A video of her presentation is also embedded below and contains 20 minutes of Q&A where she answers audience questions and covers a lot of ground.

Marooney has written extensively on this subject for TechCrunch, including this article, where she describes her RIBS method in detail. Last month, she expanded on this topic with her go-to-market strategy around building a hamburger.

‘The gift of editing is critical. Do not just write all your ideas and get very excited about what you think and ship it.’

Relevant

Why should anyone care? Does anyone care? That’s the point Marooney is making here. The message must be relevant to the audience before anything else.

The very first thing is why anyone should care. And it’s important to remember that as a startup, you’re in a situation where nobody knows you. And nobody thinks, “Oh, I should really care about this.” So you need to be very specific about who your audiences are and why they should care and why it matters to them. Early on, too, relevance is usually to a very small audience, and you earn the right every day to expand that audience.

So, for example, when I was first working with Salesforce, it was a very narrow set of salespeople at small- and medium-sized businesses. There was always the sense that it was going to be a cloud provider for companies of every size, but you have to start somewhere. And when you’re starting somewhere, you can paint the bigger picture, but you have to be specific about the benefits to your smaller audience. (Timestamp: 1:48)

Inevitable

In addition to talking about Tesla, Marooney uses the counter-example of the Segway, which shows that a great idea alone is not enough. Even though Segways were introduced as a world-changing mode of transportation, in 2021 they are mainly used by mall cops and tourists.

News: OnePlus announces $150 Pro noise cancelling earbuds

With a couple of generations of wireless earbuds under its belt, OnePlus finally has the AirPods Pro — and the rest of the premium market — in its sights. As part of an event today that also includes the launch of its budget Nord 2, the company officially announced the OnePlus Buds Pro.

The top-line feature here is adaptive noise canceling, which uses a trio of on-board mics to filter out up to 40 dB of ambient sound. The company says the tech compares favorably to more standard active noise canceling, which offers a set level of filtering. The buds are powered by a pair of 11mm dynamic drivers, with support for Dolby Atmos.

All told, battery life is up to 10 hours on the buds (sans noise cancelling) and 38 hours with the case (ditto). The case will charge wirelessly on third-party Qi pads, or the system can get 10 hours of life from 10 minutes plugged in.

Image Credits: OnePlus

At $150, they’re $100 cheaper than the AirPods Pro, though the number everyone is looking at here is almost certainly the $99 price tag Nothing announced for its upcoming Ear (1) buds. OnePlus did manage, however, to beat its co-founder’s new company to the punch by a full week.

Coincidence? You be the judge. (Honestly, it’s probably a coincidence, given that the Ear (1) were delayed, but I digress.)

The price tag puts them more directly in line with the recently announced Beats Studio Buds, along with Google’s Pixel Buds and Samsung’s Galaxy Buds Pro — that is to say, somewhere in the middle of the pack. Design-wise, they appear most similar to the AirPods Pro, albeit with a metallic stem popping out from the bottom of the black or white buds.

I was fairly underwhelmed by the company’s first fully wireless set, the OnePlus Buds. To the company’s credit, they were extremely aggressively priced, at $70, in keeping with the release of the original Nord handset. The OnePlus Buds Pro will arrive in the U.S. and Canada on September 1.

 
