Monthly Archives: April 2021

News: Atomico’s talent partners share 6 tips for early-stage people ops success

There are still a number of surefire measures any early-stage company can and should put in place to achieve “people ops” success as they begin scaling.

In the earliest stages of building a startup, it can be hard to justify focusing on anything other than creating a great product or service and meeting the needs of customers or users. However, there are still a number of surefire measures that any early-stage company can and should put in place to achieve “people ops” success as they begin scaling, according to venture capital firm Atomico‘s talent partners, Caro Chayot and Dan Hynes.

You need to recruit for what you need, but you also need to think about what is coming down the line.

As members of the VC’s operational support team, both work closely with companies in the Atomico portfolio to “find, develop and retain” the best employees in their respective fields, at various stages of the business. They’re operators at heart, and they bring a wealth of experience from time spent prior to entering VC.

Before joining Atomico, Chayot led the EMEA HR team at Twitter, where she helped scale the business from two to six markets and grew the team from 80 based in London to 500 across the region. Prior to that, she worked at Google in people ops for nine years.

Hynes was responsible for talent and staffing at well-known technology companies including Google, Cisco and Skype. At Google, he grew the EMEA team from 60 based in London to 8,500 across Europe by 2010, and at Skype, he led a talent team that scaled from 600 to 2,300 in three years.

Caro Chayot’s top 3 tips

1. Think about your long-term org design (18 months down the line) and hire back from there

When most founders think about hiring, they think about what they need now and the gaps that exist in their team at that moment. Dan and I help founders see things a little differently. You need to recruit for what you need, but you also need to think about what is coming down the line. What will your company look like in a year or 18 months? Functions and team sizes will depend on the sector — whether you are building a marketplace, a SaaS business or a consumer company. Founders also need to think about how the employees they hire now can develop over the next 18 months. If you hire people who are at the top of their game now, they won’t be able to grow into the employees you need in the future.

2. Spend time defining what your culture is. Use that for hiring and everything else people-related

If org design is the “what,” then culture is the “how.” It’s about laying down values and principles. It may sound fluffy, but capturing what it means to work at your company is key to hiring and retaining the best talent. You can use clearly articulated values at every stage of talent-building to shape your employer brand. What do you want potential employees to feel when they see your website? What do you want to look for in the interview process to make sure you are hiring people who are additive to the culture? How do you develop people and compensate them? These are all expressions of culture.

News: Docugami’s new model for understanding documents cuts its teeth on NASA archives

You hear so much about data these days that you might forget that a huge amount of the world runs on documents: a veritable menagerie of heterogeneous files and formats holding enormous value yet incompatible with the new era of clean, structured databases. Docugami plans to change that with a system that intuitively understands any set of documents and intelligently indexes their contents — and NASA is already on board.

If Docugami’s product works as planned, anyone will be able to take piles of documents accumulated over the years and near-instantly convert them to the kind of data that’s actually useful to people.

Because it turns out that running just about any business ends up producing a ton of documents. Contracts and briefs in legal work, leases and agreements in real estate, proposals and releases in marketing, medical charts, etc, etc. Not to mention the various formats: Word docs, PDFs, scans of paper printouts of PDFs exported from Word docs, and so on.

Over the last decade there’s been an effort to corral this problem, but movement has largely been on the organizational side: put all your documents in one place, share and edit them collaboratively. Understanding the document itself has pretty much been left to the people who handle them, and for good reason — understanding documents is hard!

Think of a rental contract. We humans understand when the renter is named as Jill Jackson, that later on, “the renter” also refers to that person. Furthermore, in any of a hundred other contracts, we understand that the renters in those documents are the same type of person or concept in the context of the document, but not the same actual person. These are surprisingly difficult concepts for machine learning and natural language understanding systems to grasp and apply. Yet if they could be mastered, an enormous amount of useful information could be extracted from the millions of documents squirreled away around the world.

What’s up, .docx?

Docugami founder Jean Paoli says they’ve cracked the problem wide open, and while it’s a major claim, he’s one of few people who could credibly make it. Paoli was a major figure at Microsoft for decades, and among other things helped create the XML format — you know all those files that end in x, like .docx and .xlsx? Paoli is at least partly to thank for them.

“Data and documents aren’t the same thing,” he told me. “There’s a thing you understand, called documents, and there’s something that computers understand, called data. Why are they not the same thing? So my first job [at Microsoft] was to create a format that can represent documents as data. I created XML with friends in the industry, and Bill accepted it.” (Yes, that Bill.)

The formats became ubiquitous, yet 20 years later the same problem persists, having grown in scale with the digitization of industry after industry. But for Paoli the solution is the same. At the core of XML was the idea that a document should be structured almost like a webpage: boxes within boxes, each clearly defined by metadata — a hierarchical model more easily understood by computers.
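Paoli's "boxes within boxes" idea is easiest to see in miniature. Here's a toy lease marked up hierarchically; the element names are invented for illustration and are not Docugami's actual schema:

```python
import xml.etree.ElementTree as ET

# A hypothetical lease as nested, named boxes: each piece of data
# lives inside a clearly labeled parent, so a machine can navigate it.
lease_xml = """
<lease>
  <parties>
    <renter>Jill Jackson</renter>
    <landlord>Acme Properties</landlord>
  </parties>
  <terms>
    <rent currency="USD">1500</rent>
    <duration unit="months">12</duration>
  </terms>
</lease>
"""

root = ET.fromstring(lease_xml)
renter = root.findtext("parties/renter")   # navigate by path, not by search
rent = int(root.findtext("terms/rent"))    # typed data, in context
```

Once every document in a set shares this kind of structure, "who is the renter?" becomes a path lookup rather than a text search.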

Illustration showing a document corresponding to pieces of another document.

Image Credits: Docugami

“A few years ago I drank the AI kool-aid, got the idea to transform documents into data. I needed an algorithm that navigates the hierarchical model, and they told me that the algorithm you want does not exist,” he explained. “The XML model, where every piece is inside another, and each has a different name to represent the data it contains — that has not been married to the AI model we have today. That’s just a fact. I hoped the AI people would go and jump on it, but it didn’t happen.” (“I was busy doing something else,” he added, to excuse himself.)

The lack of compatibility with this new model of computing shouldn’t come as a surprise — every emerging technology carries with it certain assumptions and limitations, and AI has focused on a few other, equally crucial areas like speech understanding and computer vision. The approach taken there doesn’t match the needs of systematically understanding a document.

“Many people think that documents are like cats. You train the AI to look for their eyes, for their tails… documents are not like cats,” he said.

It sounds obvious, but it’s a real limitation: advanced AI methods like segmentation, scene understanding, multimodal context, and such are all a sort of hyper-advanced cat detection that has moved beyond cats to detect dogs, car types, facial expressions, locations, etc. Documents are too different from one another, or in other ways too similar, for these approaches to do much more than roughly categorize them.

And as for language understanding, it’s good in some ways but not in the ways Paoli needed. “They’re working sort of at the English language level,” he said. “They look at the text but they disconnect it from the document where they found it. I love NLP people, half my team is NLP people — but NLP people don’t think about business processes. You need to mix them with XML people, people who understand computer vision, then you start looking at the document at a different level.”

Docugami in action

Illustration showing a person interacting with a digital document.

Image Credits: Docugami

Paoli’s goal couldn’t be reached by adapting existing tools (beyond mature primitives like optical character recognition), so he assembled his own private AI lab, where a multi-disciplinary team has been tinkering away for about two years.

“We did core science, self-funded, in stealth mode, and we sent a bunch of patents to the patent office,” he said. “Then we went to see the VCs, and Signalfire basically volunteered to lead the seed round at $10 million.”

Coverage of the round didn’t really get into the actual experience of using Docugami, but Paoli walked me through the platform with some live documents. I wasn’t given access myself and the company wouldn’t provide screenshots or video, saying it is still working on the integrations and UI, so you’ll have to use your imagination… but if you picture pretty much any enterprise SaaS service, you’re 90 percent of the way there.

As the user, you upload any number of documents to Docugami, from a couple dozen to hundreds or thousands. These enter a machine understanding workflow that parses the documents, whether they’re scanned PDFs, Word files, or something else, into an XML-esque hierarchical organization unique to the contents.

“Say you’ve got 500 documents, we try to categorize it in document sets, these 30 look the same, those 20 look the same, those 5 together. We group them with a mix of hints coming from how the document looked, what it’s talking about, what we think people are using it for, etc,” said Paoli. Other services might be able to tell the difference between a lease and an NDA, but documents are too diverse to slot into pre-trained ideas of categories and expect it to work out. Every set of documents is potentially unique, and so Docugami trains itself anew every time, even for a set of one. “Once we group them, we understand the overall structure and hierarchy of that particular set of documents, because that’s how documents become useful: together.”
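Docugami hasn't published how its grouping works, but the general idea of clustering documents by surface similarity can be sketched in a few lines. This toy version uses Jaccard overlap of word sets as a stand-in for the company's richer mix of layout and content hints:

```python
def tokenize(text):
    return set(text.lower().split())

def jaccard(a, b):
    # overlap of two token sets, 0.0 (disjoint) to 1.0 (identical)
    return len(a & b) / len(a | b) if a | b else 0.0

def group_documents(docs, threshold=0.5):
    """Greedily assign each document to the first group whose
    representative it resembles, else start a new group."""
    groups = []  # each entry: (representative token set, [doc indices])
    for i, doc in enumerate(docs):
        toks = tokenize(doc)
        for rep, members in groups:
            if jaccard(toks, rep) >= threshold:
                members.append(i)
                break
        else:
            groups.append((toks, [i]))
    return [members for _, members in groups]

docs = [
    "lease agreement between landlord and tenant for rent",
    "lease agreement between landlord and tenant for monthly rent",
    "mutual non-disclosure agreement between two parties",
]
print(group_documents(docs))  # the two leases cluster; the NDA stands alone
```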

Illustration showing a document being turned into a report and a spreadsheet.

Image Credits: Docugami

That doesn’t just mean it picks up on header text and creates an index, or lets you search for words. The data that is in the document, for example who is paying whom, how much and when, and under what conditions, all that becomes structured and editable within the context of similar documents. (It asks for a little input to double check what it has deduced.)

It can be a little hard to picture, but now just imagine that you want to put together a report on your company’s active loans. All you need to do is highlight the information that’s important to you in an example document — literally, you just click “Jane Roe” and “$20,000” and “5 years” anywhere they occur — and then select the other documents you want to pull corresponding information from. A few seconds later you have an ordered spreadsheet with names, amounts, dates, anything you wanted out of that set of documents.
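The company hasn't detailed how that highlight-and-extract step works under the hood, but one naive way to generalize from a single annotated example is to anchor on the literal text surrounding each highlighted value. A toy sketch (the documents and field names here are invented):

```python
import re

# Hypothetical annotated example: the user highlighted these three values.
example = "Borrower: Jane Roe\nAmount: $20,000\nTerm: 5 years"
fields = {"borrower": "Jane Roe", "amount": "$20,000", "term": "5 years"}

def patterns_from_example(example, fields):
    """Build a regex per field from the text preceding each highlight."""
    pats = {}
    for name, value in fields.items():
        start = example.index(value)
        # the text on the same line before the value becomes the anchor
        prefix = example[:start].rsplit("\n", 1)[-1]
        pats[name] = re.compile(re.escape(prefix) + r"(.+)")
    return pats

def extract(doc, pats):
    return {name: p.search(doc).group(1).strip() for name, p in pats.items()}

docs = [
    "Borrower: John Doe\nAmount: $7,500\nTerm: 3 years",
    "Borrower: Ana Li\nAmount: $12,000\nTerm: 10 years",
]
pats = patterns_from_example(example, fields)
rows = [extract(d, pats) for d in docs]  # one spreadsheet row per document
```

A real system has to cope with reworded labels and shifting layouts, which is where the trained hierarchical model earns its keep; this sketch only shows the shape of the workflow.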

All this data is meant to be portable too, of course — there are integrations planned with various other common pipes and services in business, allowing for automatic reports, alerts if certain conditions are reached, automated creation of templates and standard documents (no more keeping an old one around with underscores where the principals go).

Remember, this is all half an hour after you uploaded them in the first place, no labeling or pre-processing or cleaning required. And the AI isn’t working from some preconceived notion or format of what a lease document looks like. It’s learned all it needs to know from the actual docs you uploaded — how they’re structured, where things like names and dates figure relative to one another, and so on. And it works across verticals, with an interface anyone can figure out in a few minutes. Whether you’re in healthcare data entry or construction contract management, the tool should make sense.

The web interface where you ingest and create new documents is one of the main tools, while the other lives inside Word. There Docugami acts as a sort of assistant that’s fully aware of every other document of whatever type you’re in, so you can create new ones, fill in standard information, comply with regulations, and so on.

Okay, so processing legal documents isn’t exactly the most exciting application of machine learning in the world. But I wouldn’t be writing this (at all, let alone at this length) if I didn’t think this was a big deal. This sort of deep understanding of document types can be found here and there among established industries with standard document types (such as police or medical reports), but have fun waiting until someone trains a bespoke model for your kayak rental service. But small businesses have just as much value locked up in documents as large enterprises — and they can’t afford to hire a team of data scientists. And even the big organizations can’t do it all manually.

NASA’s treasure trove

Image Credits: NASA

The problem is extremely difficult, yet to humans seems almost trivial. You or I could glance through 20 similar documents and compile a list of names and amounts easily, perhaps even in less time than it takes for Docugami to crawl them and train itself.

But AI, after all, is meant to imitate and exceed human capacity, and it’s one thing for an account manager to do monthly reports on 20 contracts — quite another to do a daily report on a thousand. Yet Docugami accomplishes both equally easily — which is why it fits both the enterprise market, where scaling this kind of operation is crucial, and NASA, which is buried under a backlog of documentation from which it hopes to glean clean data and insights.

If there’s one thing NASA’s got a lot of, it’s documents. Its reasonably well maintained archives go back to its founding, and many important ones are available by various means — I’ve spent many a pleasant hour perusing its cache of historical documents.

But NASA isn’t looking for new insights into Apollo 11. Through its many past and present programs, solicitations, grant programs, budgets, and of course engineering projects, it generates a huge amount of documents — being, after all, very much a part of the federal bureaucracy. And as with any large organization with its paperwork spread over decades, NASA’s document stash represents untapped potential.

Expert opinions, research precursors, engineering solutions, and a dozen more categories of important information are sitting in files searchable perhaps by basic word matching but otherwise unstructured. Wouldn’t it be nice for someone at JPL to get it in their head to look at the evolution of nozzle design, and within a few minutes have a complete and current list of documents on that topic, organized by type, date, author, and status? What about the patent advisor who needs to provide a NIAC grant recipient information on prior art — shouldn’t they be able to pull those old patents and applications up with more specificity than a simple keyword search allows?

The NASA SBIR grant, awarded last summer, isn’t for any specific work, like collecting all the documents of such and such a type from Johnson Space Center or something. It’s an exploratory or investigative agreement, as many of these grants are, and Docugami is working with NASA scientists on the best ways to apply the technology to their archives. (One of the best applications may be to the SBIR and other small business funding programs themselves.)

Another SBIR grant with the NSF differs in that, while at NASA the team is looking into better organizing tons of disparate types of documents with some overlapping information, at NSF they’re aiming to better identify “small data.” “We are looking at the tiny things, the tiny details,” said Paoli. “For instance, if you have a name, is it the lender or the borrower? The doctor or the patient name? When you read a patient record, penicillin is mentioned, is it prescribed or prohibited? If there’s a section called allergies and another called prescriptions, we can make that connection.”

“Maybe it’s because I’m French”

When I pointed out the rather small budgets involved with SBIR grants and how his company couldn’t possibly survive on these, he laughed.

“Oh, we’re not running on grants! This isn’t our business. For me, this is a way to work with scientists, with the best labs in the world,” he said, while noting many more grant projects were in the offing. “Science for me is a fuel. The business model is very simple – a service that you subscribe to, like Docusign or Dropbox.”

The company is only just now beginning its real business operations, having made a few connections with integration partners and testers. But over the next year it will expand its private beta and eventually open it up — though there’s no timeline on that just yet.

“We’re very young. A year ago we were like five, six people, now we went and got this $10M seed round and boom,” said Paoli. But he’s certain that this business will not just be lucrative but will represent an important change in how companies work.

“People love documents. Maybe it’s because I’m French,” he said, “but I think text and books and writing are critical — that’s just how humans work. We really think people can help machines think better, and machines can help people think better.”

News: How to choose and deploy industry-specific AI models

Industry-specific AI models are only going to boom in popularity over the next few years and businesses from across sectors will realize their power in delivering accurate and powerful insights.

DJ Das
Contributor

DJ Das is the founder and CEO of ThirdEye Data, a company that transforms enterprises with AI applications. A serial and parallel entrepreneur, DJ is also an angel investor in various data-centric startups in Silicon Valley.

As artificial intelligence becomes more advanced, previously cutting-edge — but generic — AI models are becoming commonplace, such as Google Cloud’s Vision AI or Amazon Rekognition.

While effective in some use cases, these solutions do not suit industry-specific needs right out of the box. Organizations that seek the most accurate results from their AI projects will simply have to turn to industry-specific models.

Any team looking to expand its AI capabilities should first apply its data and use cases to a generic model and assess the results.

There are a few ways that companies can generate industry-specific results. One would be to adopt a hybrid approach — taking an open-source generic AI model and training it further to align with the business’ specific needs. Companies could also look to third-party vendors, such as IBM or C3, and access a complete solution right off the shelf. Or — if they really needed to — data science teams could build their own models in-house, from scratch.

Let’s dive into each of these approaches and how businesses can decide which one works for their distinct circumstances.

Generic models alone often don’t cut it

Generic AI models like Vision AI or Rekognition and open-source ones from TensorFlow or Scikit-learn often fail to produce sufficient results when it comes to niche use cases in industries like finance or the energy sector. Many businesses have unique needs, and models that don’t have the contextual data of a certain industry will not be able to provide relevant results.

Building on top of open-source models

At ThirdEye Data, we recently worked with a utility company to tag and detect defects in electric poles by using AI to analyze thousands of images. We started off using the Google Vision API and found that it was unable to produce our desired results — the precision and recall values of the AI models were completely unusable. The models were unable to read the characters within the tags on the electric poles 90% of the time because they didn’t recognize the nonstandard font and varying background colors used in the tags.

So, we took base computer vision models from TensorFlow and optimized them to the utility company’s precise needs. After two months of developing AI models to detect and decipher tags on the electric poles, and another two months of training these models, they are displaying accuracy levels of over 90%. These will continue to improve over time with retraining iterations.

Any team looking to expand its AI capabilities should first apply its data and use cases to a generic model and assess the results. Open-source algorithms that companies can start off with can be found on AI and ML frameworks like TensorFlow, Scikit-learn or Microsoft Cognitive Toolkit. At ThirdEye Data, we used convolutional neural network (CNN) algorithms on TensorFlow.

Then, if the results are insufficient, the team can extend the algorithm by training it further on their own industry-specific data.
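Assessing a generic model's results, as suggested above, usually comes down to the precision and recall figures the pole-tag project relied on. A minimal sketch of that assessment step (the tag values here are made up):

```python
def precision_recall(predictions, ground_truth):
    """Compare predicted tags against ground truth.

    Both arguments are sets of (image_id, tag) pairs.
    """
    tp = len(predictions & ground_truth)   # correctly read tags
    fp = len(predictions - ground_truth)   # spurious reads
    fn = len(ground_truth - predictions)   # missed tags
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical pole-tag reads: one misread (B01 vs B07), one pole missed.
truth = {(1, "A42"), (2, "B07"), (3, "C99"), (4, "D13")}
preds = {(1, "A42"), (2, "B01"), (3, "C99")}
p, r = precision_recall(preds, truth)
```

If both numbers land far below what the use case demands, as they did for the generic Vision API above, that's the signal to start fine-tuning on industry-specific data.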

News: GV partner Terri Burns is joining us to judge the Startup Battlefield

One of the best parts of TechCrunch Disrupt is the Startup Battlefield competition, and one of the most important pieces of the Startup Battlefield is our lineup of expert judges — they’re the ones the founders are trying to impress. Once the demos and presentations are done, the judges need to think quickly and ask probing questions about each startup. And then, of course, they choose the winner who gets to take home $100k and the Disrupt Cup.

This year, at our second virtual Startup Battlefield, GV partner Terri Burns will be joining us as one of our judges. Burns joined the firm (formerly known as Google Ventures) in 2017 as a principal, then was promoted to partner last year — making her the first Black female partner at GV, and its youngest partner as well.

Burns previously worked as a developer evangelist and front-end engineer at Venmo and an associate product manager at Twitter. At GV, she’s invested in high school social app HAGS and social audio app Locker Room.

During an interview about her role last fall, Burns told us she’s interested in backing Gen Z founders, and she pointed to HAGS as a good example of a product that was “built by and for Gen Z.”

“That generation is coming to an age where they are building and they are creating and they are at the forefront of the cultural landscape,” she said. “So to find founders and builders and engineers, and designers who are part of that generation and building for their own demographic, I think it’s just a new wave of entrepreneurship and builders that are coming into technology and in Silicon Valley.”

Disrupt 2021 runs September 21-23 and will be 100% virtual this year. Get your pass to attend with the rest of the TechCrunch community for less than $100 if you secure your seat before next month. Applications to compete in the Startup Battlefield are also open now until May 13.

News: UiPath’s first IPO pricing could be a warning to late-stage investors

A few months back, robotic process automation (RPA) unicorn UiPath raised a huge $750 million round at a valuation of around $35 billion. The capital came ahead of the company’s expected IPO, so its then-new valuation helped provide a measuring stick for where its eventual flotation could price.

UiPath then filed to go public. But today the company’s first IPO price range was released, failing to value the company where its final private backers expected it to.

In an S-1/A filing, UiPath disclosed that it expects its IPO to price between $43 and $50 per share. Using a simple share count of 516,545,035, the company would be worth $22.2 billion to $25.8 billion at the lower and upper extremes of its expected price interval. Neither of those numbers is close to what it was worth, in theory, just a few months ago.
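Those implied valuations follow directly from the share count and price interval in the filing; a quick sanity check:

```python
# Figures from UiPath's S-1/A filing, as cited above.
shares = 516_545_035            # simple share count
low_px, high_px = 43, 50        # expected IPO price range, USD per share

low_val = shares * low_px / 1e9   # valuation in billions at the low end
high_val = shares * high_px / 1e9 # valuation in billions at the high end
print(f"${low_val:.1f}B to ${high_val:.1f}B")
```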

According to IPO watching group Renaissance Capital, UiPath is worth up to $26.0 billion on a fully diluted basis. That’s not much more than its simple valuation.

For UiPath, its initial IPO price interval is a disappointment, though the company could see an upward revision in its valuation before it does sell shares and begin to trade. But more to the point, the company’s private-market valuation bump followed by a quick public-market correction stands out as a counter-example to something that we’ve seen so frequently in recent months.

Is UiPath’s first IPO price interval another indicator that the IPO market is cooling?

Remember Roblox?

If you think back to the end of 2020, Roblox decided to cancel its IPO and pursue a direct listing instead. Why? Because a few companies like Airbnb had gone public at what appeared to be strong valuation marks only to see their values rocket once they began to trade. So, Roblox decided to raise a huge amount of private capital, and then direct list.

News: Biden’s cybersecurity dream team takes shape

President Biden has named two former National Security Agency veterans to senior government cybersecurity positions, including the first national cyber director.

The appointments, announced Monday, land after the discovery of two cyberattacks linked to foreign governments earlier this year — the Russian espionage campaign that planted backdoors in U.S. technology giant SolarWinds’ technology to hack into at least nine federal agencies, and the mass exploitation of Microsoft Exchange servers linked to hackers backed by China.

Jen Easterly, a former NSA official under the Obama administration who helped to launch U.S. Cyber Command, has been nominated as the new head of CISA, the cybersecurity advisory unit housed under Homeland Security. CISA has been without a head for six months, since then-President Trump fired former director Chris Krebs, whom Trump had appointed to lead the agency in 2018, for disputing Trump’s false claims of election hacking.

Biden has also named former NSA deputy director John “Chris” Inglis as national cyber director, a new position created by Congress late last year to be housed in the White House, charged with overseeing the defense and cybersecurity budgets of civilian agencies.

Inglis is expected to work closely with Anne Neuberger, who in January was appointed as the deputy national security adviser for cyber on the National Security Council. Neuberger, a former NSA executive and its first director of cybersecurity, was tasked with leading the government’s response to the SolarWinds attack and Exchange hacks.

Biden has also nominated Rob Silvers, a former Obama-era assistant secretary for cybersecurity policy, to serve as undersecretary for strategy, policy, and plans at Homeland Security. Silvers was recently floated for the top job at CISA.

Both Easterly and Silvers’ positions are subject to Senate confirmation. The appointments were first reported by The Washington Post.

Former CISA director Krebs praised the appointments as “brilliant picks.” Dmitri Alperovitch, a former CrowdStrike executive and chair of Silverado Policy Accelerator, called the appointments the “cyber equivalent of the dream team.” In a tweet, Alperovitch said: “The administration could not have picked three more capable and experienced people to run cyber operations, policy and strategy alongside Anne Neuberger.”

Neuberger’s former role at the NSA has been filled by Rob Joyce, a former White House cybersecurity czar, who returned from a stint at the U.S. Embassy in London earlier this year to serve as the NSA’s new cybersecurity director.

Last week, the White House asked Congress for $110 million in new funding for next year to help Homeland Security improve its defenses and hire more cybersecurity talent. CISA hemorrhaged senior staff last year after several executives were fired by the Trump administration or left for the private sector.

News: Twitter to set up its first African presence in Ghana

Twitter CEO Jack Dorsey, via a tweet today, announced that the company would be setting up a presence in Africa. “Twitter is now present on the continent. Thank you, Ghana and Nana Akufo-Addo,” he said.

🇬🇭 Twitter is now present on the continent.

Thank you Ghana and @NAkufoAddo. #TwitterGhana https://t.co/tt7KR3kvDg

— jack (@jack) April 12, 2021

In a statement attached to the tweet, Twitter says it is now actively building a team in Ghana “to be more immersed in the rich and vibrant communities that drive the conversations taking place every day across the continent.”

Twitter listed openings for several roles, from product and engineering to design, marketing and communications. These roles will be filled remotely for now, with Twitter planning to establish an office in the country later.

Ghanaian President Nana Akufo-Addo, enthused about the news, said: “The choice of Ghana as HQ for Twitter’s Africa operations is excellent news. Government and Ghanaians welcome very much this announcement and the confidence reposed in our country.”

He also revealed that he held a virtual meeting with Dorsey on April 7, where the two parties appear to have finalized the deal.

“As I indicated to Jack in our virtual meeting on 7th April 2021, this is the start of a beautiful partnership between Twitter and Ghana, which is critical for the development of Ghana’s hugely important tech sector. These are exciting times to be in and to do business in Ghana,” he added.

According to Twitter, the decision to kick off its African expansion with Ghana stems from the country’s support for internet openness and its role as host of the African Continental Free Trade Area (AfCFTA) secretariat.

“As a champion for democracy, Ghana is a supporter of free speech, online freedom, and the Open Internet, of which Twitter is also an advocate. Furthermore, Ghana’s recent appointment to host The Secretariat of the African Continental Free Trade Area aligns with our overarching goal to establish a presence in the region that will support our efforts to improve and tailor our service across Africa,” the statement read.

News: Cybersecurity training startup Hack The Box raises $10.6M Series A led by Paladin Capital

Cybersecurity training startup Hack The Box, which emerged originally from Greece, has raised a Series A investment round of $10.6 million, led by Paladin Capital Group and joined by Osage University Partners, Brighteye Ventures, and existing investors Marathon Venture Capital. It will use the funding to expand. Most recently it launched Hack The Box Academy.

Started in 2017, Hack The Box specializes in using “ethical hacking” to teach cybersecurity skills. Users are given challenges to “attack” vulnerable virtual labs in a simulated, gamified test environment. This approach has garnered over 500,000 platform members, from beginners to experts, and brought in around 800 organizations (including governments, Fortune 500 companies and academic institutions) looking to improve their cyber-adversarial knowledge.

Haris Pylarinos, Hack The Box Co-Founder and CEO said: “Everything we do is geared around creating a safer Internet by empowering corporate teams and individuals to create unbreakable systems.”

Gibb Witham, Senior Vice President, Paladin Capital Group commented: “We’re excited to be backing Hack The Box at this inflection point in their growth as organizations recognize the increasing importance of an adversarial security practice to combat constantly evolving cyber attacks.”

Hack The Box competes with Offensive Security, Immersive Labs, INE, and eLearnSecurity (acquired by INE).

Hack The Box uses a SaaS business model. In the B2C market, it offers monthly and annual subscriptions with unrestricted access to the training content; in the B2B market, it sells bi-annual and annual licenses that give access to dedicated adversarial training environments with value-added admin capabilities.

News: When wildfires rage close, Perimeter wants to tell you where to go

Out the window, a fire is raging — and it’s moving ever closer. Confusion. Fear. A run for the car. Roads open and then suddenly closed by authorities. Traffic jams. A fire break that stalls the flames and then suddenly the flames jump, changing direction. Everyone has a plan for what to do — a plan that gets ripped up the second someone leaves their home to evacuate.

In the heat of the moment, everyone needs to know exactly what to do and where to go. Unfortunately, that information is rarely available in the format they need.

Bailey Farren’s family has lived through this four times in California, north of San Francisco. Wildfires are more common than ever as climate change dries out the landscape, yet evacuations remain pandemonium. While a student at Berkeley, she began investigating what was happening, and why her family repeatedly lacked the information it needed to get out safely and swiftly. “I thought that first responders had everything they need,” she said.

They don’t. Firefighters on the front lines often lack the technology needed to relay accurate information to operations centers, which can then guide citizens on how to evacuate. Under pressure to keep citizens up to date, most authorities rely on simple text messages telling everyone in, say, an entire county to evacuate, with scarcely any more detail.

The 2018 Camp Fire, the worst fire in California’s history, pushed her to go beyond interviewing public safety officers and build a solution. She graduated in spring 2019 and founded Perimeter that same year with fellow Berkeley grad Noah Wu.

Perimeter is an emergency response platform designed to “bridge the gap between agencies and citizens” in Farren’s words by offering better two-way communication centered on geospatial data.

The company announced today that it has raised a $1 million pre-seed round led by Parade Ventures with Dustin Dolginow, social-impact organization One World, and Alchemist Accelerator participating. Alchemist was the first money into the startup.

Using Perimeter, citizens can upload geospatially tagged information, such as a new fire outbreak or a fallen tree blocking a road. “Sometimes citizens have the most accurate and real-time information, before first responders show up — we want citizens to share that with … government officials,” Farren said. Those reports are not immediately disseminated to the public, though; first responders vet them first, ensuring that citizens always plan their actions around accurate information. “We do not want it to be a social-media platform,” she explained.

In the other direction, operations centers can use Perimeter to send citizens accurate and detailed evacuation maps with routes on where to go. Unlike with just a text message, Perimeter will send both the message and a URL, which can then display maps and real-time information on how a disaster is progressing.

Right now, the platform is distributed as a web app, so that citizens don’t need to have it pre-installed when a disaster strikes. Farren noted that the company is working on native apps as well, particularly for first responders who need robust offline capabilities due to intermittent cell signals that are typical in disaster zones.

Farren and her team have interviewed emergency management agencies extensively, and she says that her first customer is Palo Alto’s Office of Emergency Services. Over the past two fire seasons, “we had an R&D focus in that we were building hand-in-hand with agencies … and we took two fire seasons to beta test our technology,” she said.

The company has four full-time employees working remotely, but all based in California.

News: Intel’s Mobileye teams with Udelv to launch 35,000 driverless delivery vehicles by 2028

Intel subsidiary Mobileye is ratcheting up its autonomous vehicle ambitions and getting into delivery.

The company said Monday it struck a deal with Udelv to supply its self-driving system to thousands of purpose-built autonomous delivery vehicles. The companies said they plan to put more than 35,000 autonomous vehicles dubbed Transporters on city streets by 2028. Commercial operations are slated to begin in 2023.

Donlen, a U.S. commercial fleet leasing and management company, has made the first pre-order for 1,000 of these Udelv Transporters.

The announcement is notable for both companies. Udelv, which initially launched as an autonomous vehicle delivery startup, has opted to adopt Mobileye’s self-driving system and focus on “creating the hardware and software that allows for autonomous deliveries,” its CEO Daniel Laury said in an emailed statement to TechCrunch.

“This is a hardcore engineering problem to solve when one understands the multiplicity of goods to deliver, the variety of ways to do it, and some other intricately complex issues linked to the automation of last and middle mile deliveries,” Laury said. “By partnering with Mobileye, Udelv can focus 100% of its resources and efforts to perfecting the business application while Mobileye provides the tool to scale fast. It is a win-win situation.”

For Mobileye, it marks yet another expansion for a company that got its start as a developer of camera-based sensors, which are now used by most automakers to support advanced driver assistance systems. Today, more than 54 million vehicles have Mobileye technology.

“This is a great combination of the two partners together and we expect some great scale,” Jack Weast, a senior principal engineer at Intel and the Vice President of Automated Vehicle Standards at Mobileye, said in a recent interview. “And this does kind of mark, officially, the first proof point of Mobileye’s technology getting into goods delivery in addition to all the other spaces that we’ve already announced.”

The company, which was acquired by Intel for $15.3 billion in 2017, has widened its scope in recent years, moving beyond its advanced driver assistance technology and toward the development of a self-driving vehicle system. More than two years ago, Mobileye announced plans to launch a kit that includes visual perception, sensor fusion, its REM mapping system and software algorithms. And in 2018, the company made an unlikely turn and announced plans to become a robotaxi operator, not just a supplier. Mobileye also plans to deploy autonomous shuttles with Transdev ATS and Lohr Group, beginning in Europe, and to begin operating an autonomous ride-hailing service in Israel in early 2022.

This latest deal shows Mobileye’s ambition to see its self-driving systems used in other applications beyond robotaxis.

The self-driving system, now branded as Mobileye Drive, is made up of system-on-chip-based compute; redundant sensing subsystems based on camera, radar and lidar technology; its REM mapping system; and a rules-based Responsibility-Sensitive Safety (RSS) driving policy. Mobileye’s REM mapping system essentially crowdsources data by tapping into more than 1 million vehicles equipped with its tech to build high-definition maps that can support both ADAS and autonomous driving systems.

Udelv will work with Mobileye to integrate the self-driving technology with its own delivery management system. Mobileye will also provide over-the-air software support throughout the lifetime of the vehicles.

These purpose-built vehicles won’t have the typical mechanical features one might find in a human-driven truck or delivery van. They will be designed for so-called Level 4 self-driving, an SAE designation meaning the vehicle can handle all operations without a human under certain conditions. They will also come with four-way steering, LED screens to greet the people picking up deliveries and special compartments for goods.

There will be a teleoperations system that will allow for the maneuvering of the vehicles in parking lots, loading zones, apartment complexes and private roads, according to Udelv.
