Monthly Archives: December 2020

News: AWS expands startup assistance program

Last year, AWS launched the APN Global Startup Program, which is sort of AWS’s answer to an incubator for mid-to-late stage startups deeply involved with AWS technology. This year, the company wants to expand that offering, and today it announced some updates to the program at the Partner keynote at AWS re:Invent.

While startups technically have to pay a $2,500 fee if they are accepted to the program, AWS typically refunds that fee, says Doug Yeum, head of the Global Partner Organization at AWS — and they get a lot of benefits for being part of the program.

“While the APN has a $2,500 annual program fee, startups that are accepted into the invite-only APN Global Startup Program get that fee back, as well as free access to substantial additional resources both in terms of funding as well as exclusive program partner managers and co-sell specialists resources,” Yeum told TechCrunch.

And those benefits are pretty substantial including access to a new “white glove program” that lets them work with a program manager with direct knowledge of AWS and who has experience working with startups. In addition, participants get access to an ISV program to work more directly with these vendors to increase sales and access to data exchange services to move third party data into the AWS cloud.

What’s more, they can apply to the new AI/ML Acceleration program. As AWS describes it, “This includes up to $5,000 AWS credits to fund experiments on AWS services, enabling startups to explore AWS AI/ML tools that offer the best fit for them at low risk.”

Finally, they get partially free access to the AWS Marketplace, offsetting the normal marketplace listing fees for the first five offerings. Some participants will also get access to AWS sales to help use the power of the large company to drive a startup’s sales.

While you can apply to the program, the company also recruits individual startups that catch its attention. “We also proactively invite mid-to-late stage startups built on AWS that, based on market signals, are showing traction and offer interesting use cases for our mutual enterprise customers,” Yeum explained.

Among the companies currently involved in the program are HashiCorp, Logz.io and Snapdocs. Interested startups can apply on the APN Global Startup website.

News: Find out how startups like Skyroot and Bluefield are building new industries at TC Sessions: Space 2020

At our fast-approaching first TC Sessions: Space event, which is happening December 16-17, we’re going to be highlighting some of the most exciting startups and founders tackling big problems with innovative and groundbreaking solutions.

Some of those companies are focused on building tomorrow’s spacecraft, and others are working on in-space technologies that could become the next big anchor upon which countless other businesses are built.

Two of the companies joining us at TC Sessions: Space are Skyroot and Bluefield. Skyroot is India’s first private space launch startup, founded in 2018 with the goal of developing a low-cost and reliable launch vehicle to help democratize access to space.

Founder, CEO and CTO Pawan Kumar Chandana will join us to talk about building his new business, his prior experience developing rockets for the Indian Space Research Organization (ISRO) and how Skyroot’s Vikram-series launch vehicles plan to achieve the company’s ambitious goals.

Bluefield Technologies is focused on an entirely different, but potentially just as impactful, opportunity: observation, monitoring and analysis of methane emissions data on Earth. Its satellite-based methane observation technology offers a new high bar of precision and detail.

Bluefield founder and CEO Yotam Ariel will join us to talk about what becomes possible across a range of industries once you offer them the ability to track up to 90% of the Earth’s methane emissions with pinpoint accuracy, at costs up to 100 percent cheaper than existing solutions, as often as daily.

We’ll have conversations with Chandana, Ariel and others as part of our ‘Founders in Focus’ series, just one small part of the all-star agenda at TC Sessions: Space. Tickets are still available at the Late Registration price with discounts for students, government/military employees and groups, so grab yours below to attend this fully virtual event.

News: VCs who want better outcomes should use data to reduce founder team risk

An objective, data-backed process helps us make better investment decisions, avoid costly mistakes and discover opportunities we might have otherwise overlooked.

Janneke Niessen
Contributor

Janneke Niessen is a partner at CapitalT, a serial entrepreneur and a tech diversity advocate.

VCs expect the companies they invest in to use data to improve their decision-making. So why aren’t they doing that when evaluating startup teams?

Sure, venture capital is a people business, and the power of gut feeling is real. But using an objective, data-backed process to evaluate teams — the same way we do when evaluating financial KPIs, product, timing and market opportunities — will help us make better investment decisions, avoid costly mistakes and discover opportunities we might have otherwise overlooked.

An objective assessment process will also help investors break free from patterns and back someone other than a white male for a change. Is looking at how we have always done things the best way to build for the future?

Sixty percent of startups fail because of problems with the team. Instinct matters, but a team is too big a risk to leave to intuition. I will use myself as an example. I have founded two companies. I know what it takes to build a company and to achieve a successful exit. I like to think I can sense when someone has that special something and when a team has chemistry. But I am human. I am limited by bias and thought patterns; data is not.

You can (and should) take a scientific approach to evaluating a startup team. A “strong” team isn’t a vague concept — extensive research confirms what it takes to execute a vision. Despite what people expect, soft skills can be measured. VCVolt, a computerized selection model developed by Eva de Mol, Ph.D., my partner at CapitalT, analyzes the performance of companies and founding teams.

We use it to inform every investment decision we make and to demystify a common hurdle to entrepreneurial success. (The technology also evaluates the company, market opportunity, timing and other factors, but since most investors aren’t taking a structured, data-backed approach to analyzing teams, let’s focus on that.)

VCVolt allows us to reduce team risk early on in the selection and due diligence process, thereby reducing confirmation bias and fail rates, discovering more winning teams and driving higher returns.

I will keep this story brief for privacy reasons, but you will get the point. While testing the model, we advised another VC firm not to move forward with an investment based on the model’s findings. The firm moved forward anyway because they were in love with the deal, and everything the model predicted transpired. It was a big loss for the investors, and a reminder that hunch and gut feeling can be wrong — or at least blind you to some serious risk factors.

The platform uses a validated model that is based on more than five years of scientific research, data from more than 1,000 companies and input from world-class experts and scientists. Its predictive validity is noted in top-tier scientific journals and other publications, including Harvard Business Review. By asking the right questions — science-based questions validated by more than 80,000 datapoints — the platform analyzes the likelihood that a team will succeed. It considers:

News: MLCommons debuts with public 86,000-hour speech dataset for AI researchers

If you want to make a machine learning system, you need data for it, but that data isn’t always easy to come by. MLCommons aims to unite disparate companies and organizations in the creation of large public databases for AI training, so that researchers around the world can work together at higher levels, and in doing so advance the nascent field as a whole. Its first effort, the People’s Speech dataset, is many times the size of others like it, and aims to be more diverse as well.

MLCommons is a new non-profit related to MLPerf, which has collected input from dozens of companies and academic institutions to create industry-standard benchmarks for machine learning performance. The endeavor has met with success, but in the process the team encountered a paucity of open datasets that everyone could use.

If you want to do an apples-to-apples comparison of a Google model to an Amazon model, or for that matter a UC Berkeley model, they really all ought to be using the same testing data. With computer vision one of the most widespread datasets is ImageNet, which is used and cited by all the most influential papers and experts. But there’s no such dataset for, say, speech to text accuracy.

“Benchmarks get people talking about progress in a sensible, measurable way. And it turns out that if the goal is to move the industry forward, we need datasets we can use — but lots of them are difficult to use for licensing reasons, or aren’t state of the art,” said MLCommons co-founder and executive director David Kanter.

Certainly the big companies have enormous voice datasets of their own, but they’re proprietary and perhaps legally restricted from being used by others. And there are public datasets, but with only a few thousand hours their utility is limited — to be competitive today one needs much more than that.

“Building large datasets is great because we can create benchmarks, but it also moves the needle forward for everyone. We can’t rival what’s available internally but we can go a long way towards bridging that gap,” Kanter said. MLCommons is the organization they formed to create and wrangle the required data and connections.

The People’s Speech Dataset was assembled from a variety of sources, with about 65,000 of its hours coming from audiobooks in English, with the text aligned with the audio. Then there are 15,000 hours or so sourced from around the web, with different acoustics, speakers, and styles of speech (for example conversational instead of narrative). 1,500 hours of English audio were sourced from Wikipedia, and then 5,000 hours of synthetic speech of text generated by GPT-2 were mixed in (“A little bit of the snake eating its own tail,” joked Kanter). In total, 59 languages are represented in some way, though as you can tell it is mostly English.
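The composition described above can be tallied in a quick sketch. The category names below are illustrative, not MLCommons' own schema; note the listed sources sum to roughly the quoted 86,000 hours.

```python
# Hypothetical tally of the People's Speech composition as reported;
# category names are illustrative, not MLCommons' actual schema.
SOURCES_HOURS = {
    "english_audiobooks": 65_000,  # audio aligned with book text
    "web_audio": 15_000,           # varied acoustics, speakers, styles
    "wikipedia_english": 1_500,    # English audio sourced from Wikipedia
    "synthetic_gpt2": 5_000,       # synthesized speech of GPT-2 text
}

def total_hours(sources: dict) -> int:
    """Sum per-source hours to approximate the dataset size."""
    return sum(sources.values())

print(total_hours(SOURCES_HOURS))  # 86500, in line with the quoted ~86,000 hours
```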

Although diversity is the goal — you can’t build a virtual assistant in Portuguese from English data — it’s also important to establish a baseline for what’s needed for present purposes. Is 10,000 hours sufficient to build a decent speech-to-text model? Or does having 20,000 available make development that much easier, faster, or effective? What if you want to be excellent at American English but also decent with Indian and English accents? How much of those do you need?

The general consensus with datasets is simply “the larger the better,” and the likes of Google and Apple are working with far more than a few thousand hours. Thus the 86,000 hours in this first iteration of the dataset. And it is definitely the first of many, with later versions due to branch out into more languages and accents.

“Once we verify we can deliver value, we’ll just release and be honest about the state it’s in,” explained Peter Mattson, another co-founder of MLCommons and currently head of Google’s Machine Learning Metrics Group. “We also need to learn how to quantify the idea of diversity. The industry wants this; we need more dataset construction expertise — there’s tremendous ROI for everybody in supporting such an organization.”

The organization is also hoping to spur sharing and innovation in the field with MLCube, a new standard for passing models back and forth that takes some of the guesswork and labor out of that process. Although machine learning is one of the tech sector’s most active areas of research and development, taking your AI model and giving it to someone else to test, run, or modify isn’t as simple as it ought to be.

Their idea with MLCube is a wrapper for models that describes and standardizes a few things, like dependencies, input and output format, hosting and so on. AI may be fundamentally complex, but it and the tools to create and test it are still in their infancy.
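To make the idea concrete, here is a hypothetical sketch of the kind of manifest such a wrapper standardizes. The field names and structure are assumptions for illustration only, not the actual MLCube specification.

```python
from dataclasses import dataclass, field

# Hypothetical manifest for an MLCube-style model wrapper; the fields
# here are illustrative assumptions, not the real MLCube schema.
@dataclass
class ModelManifest:
    name: str
    runtime_image: str           # dependencies pinned in a container image
    input_format: str            # e.g. "wav, 16 kHz mono"
    output_format: str           # e.g. "JSON transcript"
    entrypoints: dict = field(default_factory=dict)  # task name -> command

manifest = ModelManifest(
    name="example-asr",
    runtime_image="example/asr:latest",
    input_format="wav, 16 kHz mono",
    output_format="JSON transcript",
    entrypoints={"infer": "python run.py --task infer"},
)
print(manifest.name)
```

The point of standardizing this metadata is that a recipient can run or test the model without reverse-engineering its environment and I/O conventions.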

The dataset should be available now, or soon, from MLCommons’ website, under the CC-BY license, allowing for commercial use; a few reference models trained on the set will also be released.

News: Space startup Aevum debuts world’s first fully autonomous orbital rocket launching drone

Launching things to space doesn’t have to mean firing a large rocket vertically using massive amounts of rocket-fuel powered thrust – startup Aevum breaks the mould in multiple ways, with an innovative launch vehicle design that combines uncrewed aircraft with horizontal take-off and landing capabilities, with a secondary stage that deploys at high altitude and can take small payloads the rest of the way to space.

Aevum’s model actually isn’t breaking much new ground in terms of its foundational technology, according to founder and CEO Jay Skylus, who I spoke to prior to today’s official unveiling of the startup’s Ravn X launch vehicle. Skylus, who previously worked for a range of space industry household names and startups including NASA, Boeing, Moon Express and Firefly, told me that the startup has focused primarily on making the most of existing available technologies to create a mostly reusable, fully automated small payload orbital delivery system.

To his point, Ravn X doesn’t look too dissimilar from existing jet aircraft, and bears obvious resemblance to the Predator line of UAVs already in use for terrestrial uncrewed flight. The vehicle is 80 feet long, and has a 60-foot wingspan, with a total max weight of 55,000 lbs including payload. 70% of the system is fully reusable today, and Skylus says that the goal is to iterate on that to the point where 95% of the launch system will be reusable in the relatively near future.

Image Credits: Aevum

Ravn X’s delivery system is designed for rapid-response delivery, and is able to get small satellites to orbit in as little as 180 minutes – with the capability of being ready to fly and deliver again fairly shortly after that. It uses traditional jet fuel, the same kind used on commercial airliners, and it can take off and land in “virtually any weather,” according to Skylus. It also takes off and lands on any 1-mile stretch of traditional aircraft runway, meaning it can theoretically use just about any active airport in the world as a launch and landing site.

One of the key defining differences of Aevum relative to other space launch startups is that what it’s presenting isn’t theoretical, or in development – the Ravn X already has paying customers, including over $1 billion in U.S. government contracts. Its first mission is with the U.S. Space Force, the ASLON-45 small satellite launch mission (set for late 2021), and it also has a contract for 20 missions spanning nine years with the U.S. Air Force Space and Missile Systems Center. Deliveries of Aevum’s production launch vehicles to its customers have already begun, in fact, Skylus says.

The U.S. Department of Defense has been actively pursuing space launch options that provide it with responsive, short-turnaround launch capabilities for quite some time now. That’s the same goal pursued by companies like Astra, which was originally looking to win the DARPA challenge for such systems (since expired) with its small Rocket launcher. Aevum’s system has the added advantage of being essentially fully compatible with existing airfield infrastructure – and also of not requiring that human pilots be involved or at risk at all, as they are with the superficially similar launch model espoused by Virgin Orbit.

Aevum isn’t just providing the Ravn X launcher, either; its goal is to handle end-to-end logistics for launch services, including payload transportation and integration, which are parts of the process that Skylus says are often overlooked or underserved by existing launch providers, and that many companies creating payloads also don’t realize are costly, complicated and time-consuming parts of actually delivering a working small satellite to orbit. The startup also isn’t “re-inventing the wheel” when it comes to its integration services – Skylus says they’re working with a range of existing partners who all already have proven experience doing this work, but who haven’t previously had the motivation or the need to provide these kinds of services to the customers that Skylus sees coming online, both in the public and private sector.

The need isn’t for another SpaceX, Skylus says; rather, thanks to SpaceX, there’s a wealth of aerospace companies who previously worked almost exclusively with large government contracts and the one or two massive legacy rocket companies to put missions together. They’re now open to working with the greatly expanded market for orbital payloads, including small satellites that aim to provide cost-effective solutions in communications, environmental monitoring, shipping and defense.

Aevum’s solution definitely sounds like it addresses a clear and present need, in a way that offers benefits in terms of risk profile, reusability, cost and flexibility. The company’s first active missions will obviously be watched closely, by potential customers and competitors alike.

News: Android’s winter update adds new features to Gboard, Maps, Books, Nearby Share and more

Google announced this morning that Android phones will receive an update this winter bringing some half-dozen new features to devices, including improvements to apps like Gboard, Google Play Books, Voice Access, Google Maps, Android Auto, and Nearby Share. The release is the latest in a series of update bundles that now allow Android devices to receive new features outside of the usual annual update cycle.

The bundles may not deliver Android’s latest flagship features, but they offer steady improvements on a more frequent basis.

One of the more fun bits in the winter update will include a change to “Emoji Kitchen,” the feature in the Gboard keyboard app that lets users combine their favorite emoji to create new ones that can be shared as customized stickers. To date, users have remixed emoji over 3 billion times since the feature launched earlier this year, Google says. Now, the option is being expanded. Instead of offering hundreds of design combinations, it will offer over 14,000. You’ll also be able to tap two emoji to see suggested combinations or double tap on one emoji to see other suggestions.

Image Credits: Google

This updated feature had been live in the Gboard beta app, but will now roll out to Android 6.0 and above devices in the weeks ahead.

Another update will expand audiobook availability on Google Play Books. Now, Google will auto-generate narrations for books that don’t offer an audio version. The company says it worked with publishers in the U.S. and U.K. to add these auto-narrated books to Google Play Books. The feature is in beta but will roll out to all publishers in early 2021.

An accessibility feature that lets people use and navigate their phone with voice commands, Voice Access, will also be improved. The feature will soon leverage machine learning to understand interface labels on devices. This will allow users to refer to things like the “back” and “more” buttons, and many others by name when they are speaking.

The new version of Voice Access, now in beta, will be available to all devices worldwide running Android 6.0 or higher.

An update for Google Maps will add a new feature to one of people’s most-used apps.

In a new (perhaps Waze-inspired) “Go Tab,” users will be able to more quickly navigate to frequently visited places — like a school or grocery store, for example — with a tap. The app will show directions, live traffic trends, disruptions on the route and an accurate ETA, without users having to type in the actual address. Favorite places — or in the case of public transit users, specific routes — can be pinned in the Go Tab for easy access. Transit users will be able to see things like accurate departure and arrival times, alerts from the local transit agency, and an up-to-date ETA.

Image Credits: Google

One potentially helpful use case for this new feature would be to pin both a transit route and driving route to the same destination, then compare their respective ETAs to pick the faster option.

This feature is coming to both Google Maps on Android as well as iOS in the weeks ahead.

Android Auto will expand to more countries over the next few months. Google initially said it would reach 36 countries, but then updated the announcement language as the timing of the rollout was pushed back. The company now isn’t saying how many countries will gain access in the months to follow or which ones, so you’ll need to stay tuned for news on that front.

Image Credits: Google

The final change is to Nearby Share, the proximity-based sharing feature that lets users share things like links, files, photos and more even when they don’t have a cellular or Wi-Fi connection available. The feature, which is largely designed with emerging markets in mind, will now allow users to share apps from Google Play with people around them, too.

To do so, you’ll access a new “Share Apps” menu in “Manage Apps & Games” in the Google Play app. This feature will roll out in the weeks ahead.

Some of these features will begin rolling out today, so you may receive them earlier than a timeframe of several “weeks,” but the progress of each update will vary.

News: iPhones can now automatically recognize and label buttons and UI features for blind users

Apple has always gone out of its way to build features for users with disabilities, and VoiceOver on iOS is an invaluable tool for anyone with a vision impairment — assuming every element of the interface has been manually labeled. But the company just unveiled a brand new feature that uses machine learning to identify and label every button, slider, and tab automatically.

Screen Recognition, available now in iOS 14, is a computer vision system that has been trained on thousands of images of apps in use, learning what a button looks like, what icons mean, and so on. Such systems are very flexible — depending on the data you give them, they can become expert at spotting cats, facial expressions, or as in this case the different parts of a user interface.

The result is that in any app now, users can invoke the feature and a fraction of a second later every item on screen will be labeled. And by “every,” they mean every — after all, screen readers need to be aware of everything that a sighted user would see and be able to interact with, from images (which iOS has been able to create one-sentence summaries of for some time) to common icons (home, back) and context-specific ones like “…” menus that appear just about everywhere.

The idea is not to make manual labeling obsolete — developers know best how to label their own apps, but updates, changing standards, and challenging situations (in-game interfaces, for instance) can lead to things not being as accessible as they could be.

I chatted with Chris Fleizach from Apple’s iOS accessibility engineering team, and Jeff Bigham from the AI/ML accessibility team, about the origin of this extremely helpful new feature. (It’s described in a paper due to be presented next year.)

“We looked for areas where we can make inroads on accessibility, like image descriptions,” said Fleizach. “In iOS 13 we labeled icons automatically – Screen Recognition takes it another step forward. We can look at the pixels on screen and identify the hierarchy of objects you can interact with, and all of this happens on device within tenths of a second.”

The idea is not a new one, exactly; Bigham mentioned a screen reader, Outspoken, which years ago attempted to use pixel-level data to identify UI elements. But while that system needed precise matches, the fuzzy logic of machine learning systems and the speed of iPhones’ built-in AI accelerators means that Screen Recognition is much more flexible and powerful.

It wouldn’t have been possible just a couple years ago — the state of machine learning and the lack of a dedicated unit for executing it meant that something like this would have been extremely taxing on the system, taking much longer and probably draining the battery all the while.

But once this kind of system seemed possible, the team got to work prototyping it with the help of their dedicated accessibility staff and testing community.

“VoiceOver has been the standard bearer for vision accessibility for so long. If you look at the steps in development for Screen Recognition, it was grounded in collaboration across teams — Accessibility throughout, our partners in data collection and annotation, AI/ML, and, of course, design. We did this to make sure that our machine learning development continued to push toward an excellent user experience,” said Bigham.

It was done by taking thousands of screenshots of popular apps and games, then manually labeling them as one of several standard UI elements. This labeled data was fed to the machine learning system, which soon became proficient at picking out those same elements on its own.
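The supervised approach described above can be illustrated with a toy stand-in: labeled examples are used to fit a model that then predicts the UI element class for new inputs. This sketch uses a nearest-centroid classifier on made-up 2-D features; Apple's actual on-device model is of course far more sophisticated, and the labels and data here are purely hypothetical.

```python
from collections import defaultdict

# Toy nearest-centroid classifier standing in for the real UI-element
# model: compute a mean feature vector per label, then classify a new
# point by its closest centroid.
def train(samples):
    """samples: list of ((x, y) feature pair, label) tuples."""
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for (x, y), label in samples:
        sums[label][0] += x
        sums[label][1] += y
        counts[label] += 1
    return {lab: (sums[lab][0] / counts[lab], sums[lab][1] / counts[lab])
            for lab in counts}

def predict(centroids, point):
    """Return the label whose centroid is nearest to `point`."""
    return min(centroids, key=lambda lab: (centroids[lab][0] - point[0]) ** 2
                                          + (centroids[lab][1] - point[1]) ** 2)

# Hypothetical labeled screenshots reduced to 2-D features.
data = [((0.1, 0.1), "button"), ((0.2, 0.1), "button"),
        ((0.9, 0.8), "slider"), ((0.8, 0.9), "slider")]
centroids = train(data)
print(predict(centroids, (0.15, 0.1)))  # "button"
```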

It’s not as simple as it sounds — as humans, we’ve gotten quite good at understanding the intention of a particular graphic or bit of text, and so often we can navigate even abstract or creatively designed interfaces. It’s not nearly as clear to a machine learning model, and the team had to work with it to create a complex set of rules and hierarchies that ensure the resulting screen reader interpretation makes sense.

The new capability should help make millions of apps more accessible, or just accessible at all, to users with vision impairments. You can turn it on by going to Accessibility settings, then VoiceOver, then VoiceOver Recognition, where you can turn on and off image, screen, and text recognition.

It would not be trivial to bring Screen Recognition over to other platforms, like the Mac, so don’t get your hopes up for that just yet. But the principle is sound, though the model itself is not generalizable to desktop apps, which are very different from mobile ones. Perhaps others will take on that task; the prospect of AI-driven accessibility features is only just beginning to be realized.

News: Microsoft launches Azure Purview, its new data governance service

As businesses gather, store and analyze an ever-increasing amount of data, tools for helping them discover, catalog, track and manage how that data is shared are also becoming increasingly important. With Azure Purview, Microsoft is launching a new data governance service into public preview today that brings together all of these capabilities in a new data catalog with discovery and data governance features.

As Rohan Kumar, Microsoft’s corporate VP for Azure Data, told me, this has become a major pain point for enterprises. While they may be very excited about getting started with data-heavy technologies like predictive analytics, those companies’ data- and privacy-focused executives are very concerned with making sure, for example, that the way the data is used is compliant and that the company has received the right permissions to use its customers’ data.

In addition, companies also want to make sure that they can trust their data and know who has access to it and who made changes to it.

“[Purview] is a unified data governance platform which automates the discovery of data, cataloging of data, mapping of data, lineage tracking — with the intention of giving our customers a very good understanding of the breadth of the data estate that exists to begin with, and also to ensure that all these regulations that are there for compliance, like GDPR, CCPA, etc, are managed across an entire data estate in ways which enable you to make sure that they don’t violate any regulation,” Kumar explained.

At the core of Purview is its catalog that can pull in data from the usual suspects like Azure’s various data and storage services but also third-party data stores including Amazon’s S3 storage service and on-premises SQL Server. Over time, the company will add support for more data sources.

Kumar described this process as a ‘multi-semester investment,’ so the capabilities the company is rolling out today are only a small part of what’s on the overall roadmap already. With this first release today, the focus is on mapping a company’s data estate.

Image Credits: Microsoft

“Next [on the roadmap] is more of the governance policies,” Kumar said. “Imagine if you want to set things like ‘if there’s any PII data across any of my data stores, only this group of users has access to it.’ Today, setting up something like that is extremely complex and most likely you’ll get it wrong. That’ll be as simple as setting a policy inside of Purview.”
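The kind of policy Kumar describes can be sketched as a simple rule check: data carrying a restricted tag is only accessible to groups the policy clears. This is purely illustrative of the idea, not Azure Purview's actual policy model or API.

```python
# Illustrative sketch of a tag-based access policy like the PII example
# above; this is not Azure Purview's real API, just the shape of the idea.
def is_access_allowed(user_groups, data_tags, policy):
    """Deny access if the data carries a restricted tag that none of the
    user's groups are cleared for; allow otherwise."""
    for tag, allowed_groups in policy.items():
        if tag in data_tags and not (set(user_groups) & set(allowed_groups)):
            return False
    return True

# "If there's any PII data, only this group of users has access to it."
policy = {"PII": {"privacy-team"}}

print(is_access_allowed({"analysts"}, {"PII"}, policy))      # False
print(is_access_allowed({"privacy-team"}, {"PII"}, policy))  # True
```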

In addition to launching Purview, the Azure team also today launched Azure Synapse, Microsoft’s next-generation data warehousing and analytics service, into general availability. The idea behind Synapse is to give enterprises — and their engineers and data scientists — a single platform that brings together data integration, warehousing and big data analytics.

“With Synapse, we have this one product that gives a completely no-code experience for data engineers, as an example, to build out these [data] pipelines and collaborate very seamlessly with the data scientists who are building out machine learning models, or the business analysts who build out reports for things like Power BI,” Kumar said.

Among Microsoft’s marquee customers for the service, which Kumar described as one of the fastest-growing Azure services right now, are FedEx, Walgreens, Myntra and P&G.

“The insights we gain from continuous analysis help us optimize our network,” said Sriram Krishnasamy, senior vice president, strategic programs at FedEx Services. “So as FedEx moves critical high value shipments across the globe, we can often predict whether that delivery will be disrupted by weather or traffic and remediate that disruption by routing the delivery from another location.”

Image Credits: Microsoft

News: As Metromile looks to go public, insurtech funding is on the rise

Earlier this week, TechCrunch covered the latest venture round for AgentSync, a startup that helps insurance agents comply with rules and regulations. But while the product area might not keep you up tonight, the company’s growth has been incredibly impressive, scaling its annual recurring revenue (ARR) 10x in the last year and 4x since the start of the pandemic.

Little surprise, then, that the company’s latest venture deal was raised just months after its last; investors wanted to get more money into AgentSync rapidly, boosting a larger venture-wide wager on insurtech startups more broadly that we’ve seen throughout 2020.


The Exchange explores startups, markets and money. Read it every morning on Extra Crunch, or get The Exchange newsletter every Saturday.


But private investors aren’t the only ones getting in on the action. Public investors welcomed the Lemonade IPO earlier this year, giving the rental insurance unicorn a strong debut. Root also went public after a strong pricing run, but has since lost around half of its value from its recent highs.

But with one success and one struggle for the sector on the scoreboard this year, Metromile is also looking to get in on the action. And, per a TechCrunch data analysis this morning and some external data work on the insurtech venture capital market, it appears that private insurtech investment is matching the attention that public investors are giving the sector.

This morning let’s do a quick exploration of the Metromile deal and take a look at the insurtech venture capital market to better understand how much capital is going into the next generation of companies that will want to replicate the public exits of our three insurtech pioneers.

Finally, we’ll link public results and recent private deal activity to see if both sides of the market are currently aligned.

Metromile

Let’s start with Metromile’s debut. It’s going public via a SPAC, namely INSU Acquisition Corp. II. Here’s the equivalent of an S-1 from both parties, going over the economics of the blank-check company and Metromile itself.

On the economics front for the insurtech startup, we have to start with some extra work. For nearly every 2020 IPO, we’ve spent lots of time examining how quickly the company in question is growing. We’re not doing that today because Metromile is not growing in GAAP terms, and we need to understand why that’s the case.

In simple terms, a change to Metromile’s reinsurance setup last May led to the company ceding “a larger percentage of [its] premium than in prior periods,” which resulted “in a significant decrease in our revenues as reported under GAAP,” the company said.

Ceded premiums don’t count as revenue. Lemonade, in its recent earnings results, explained the concept well from the perspective of its own, related change to its business:

While our July 1, 2020 reinsurance contracts deliver a significant improvement in the fundamentals of our business, they also result in a significant change in GAAP revenue, as GAAP excludes all ceded premiums (and proportional reinsurance is fundamentally about ceding premium). This led to a spike in GAAP gross margin and a dip in GAAP revenue on July 1 – even though no corresponding change in the scope or profitability of our business took place at midnight on June 30.

So Lemonade has shaken up its business, cutting its revenues and tidying its economics. The impact has been sharp, with the company’s GAAP revenues falling from $17.8 million in the year-ago quarter, to $10.5 million in Q3 2020.

Root has undertaken similar steps. Starting July 1, it has “transfer[red] 70% of our premiums and related losses to reinsurers, while also gaining a 25% commission on written premium to offset some of our up-front and ongoing costs.” The result has been falling GAAP revenue and improving economics once again.
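To make the mechanics concrete, here is a minimal sketch with hypothetical figures. The 70% cession and 25% ceding commission mirror the Root terms quoted above; the premium amount is invented, and applying the commission to the ceded portion is an assumption, since commission conventions vary by contract:

```python
# Hedged sketch: how proportional reinsurance shrinks GAAP revenue.
# Ceded premium is excluded from GAAP revenue; only the retained share counts.

def cede_premium(written_premium, ceded_share, commission_rate):
    """Return (retained_premium, ceding_commission) after ceding a share
    of written premium to reinsurers."""
    ceded = written_premium * ceded_share
    retained = written_premium - ceded       # this is what GAAP revenue reflects
    commission = ceded * commission_rate     # offsets up-front and ongoing costs
    return retained, commission

# Hypothetical quarter: $20M written premium, Root-style 70% / 25% terms.
retained, commission = cede_premium(20_000_000, 0.70, 0.25)
print(retained)    # $6M recognized, versus $20M written
print(commission)  # $3.5M ceding commission back from reinsurers
```

This is why revenue “falls” without the underlying business shrinking: the same $20M of written premium is still there, but only the retained 30% shows up in the GAAP top line.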

All neo-insurance companies that have provided financial results while going public have changed their reinsurance approach, making their results look a bit wonky in the short term, leaving investors to decipher what they are really worth.

News: Sight Tech Global day 2 is live! Hear from Apple, Waymo, Microsoft, Sara Hendren and Haben Girma

Day 2 for the virtual event Sight Tech Global is streaming on TechCrunch from 8 a.m. to 12:30 p.m. PST. The event looks at how AI-based technologies are rapidly changing the field of accessibility, especially for blind people and those with low vision. Today’s programming includes top accessibility product and technology leaders from Apple, Waymo, Microsoft and Google, plus sessions featuring disability rights lawyer Haben Girma and author and designer Sara Hendren. Check out the event’s full agenda.

The Sight Tech Global project aims to showcase the remarkable community of technologists working on accessibility-related products and platforms. It is a project of the nonprofit Vista Center for the Blind and Visually Impaired, which is based in Silicon Valley.

This year’s event sponsors include: Waymo, Verizon Media, TechCrunch, Ford, Vispero, Salesforce, Mojo Vision, iSenpai, Facebook, Ability Central, Google, Microsoft, Wells Fargo, Amazon, Eyedaptic, Verizon 5G, Humanware, APH, and accessiBe. Our production partners: Cohere Studio (design), Sunol Media Group (video production), Fable (accessibility crowd testing), Clarity Media (speaker prep), Be My Eyes (customer service), and 3Play and Vitac (captioning).
