Monthly Archives: December 2020

News: AWS launches SageMaker Data Wrangler, a new data preparation service for machine learning

AWS launched a new service today, Amazon SageMaker Data Wrangler, that makes it easier for data scientists to prepare their data for machine learning training. In addition, the company is also launching SageMaker Feature Store, available in the SageMaker Studio, a new service that makes it easier to name, organize, find and share machine learning features.

AWS is also launching SageMaker Pipelines, a new service that’s integrated with the rest of the platform and that provides a CI/CD service for machine learning to create and automate workflows, as well as an audit trail for model components like training data and configurations.

As AWS CEO Andy Jassy pointed out in his keynote at the company’s re:Invent conference, data preparation remains a major challenge in the machine learning space. Users have to write their queries and the code to get the data from their data stores first, then write the queries to transform that code and combine features as necessary. All of that is work that doesn’t actually focus on building the models but on the infrastructure of building models.

Data Wrangler comes with over 300 pre-configured data transformations built in, which help users convert column types or impute missing data with mean or median values. There are also some built-in visualization tools to help identify potential errors, as well as tools for finding inconsistencies in the data and diagnosing them before the models are deployed.
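As background on what a transformation like that does: a mean or median fill replaces missing values with the average (or middle) of the observed ones. A minimal plain-Python sketch of the technique (an illustration only, not Data Wrangler’s implementation):

```python
from statistics import mean, median

def impute(values, strategy="mean"):
    """Replace None entries with the mean or median of the observed values."""
    observed = [v for v in values if v is not None]
    fill = mean(observed) if strategy == "mean" else median(observed)
    return [fill if v is None else v for v in values]

ages = [34, None, 29, 41, None, 35]
print(impute(ages, "mean"))    # gaps filled with 34.75
print(impute(ages, "median"))  # gaps filled with 34.5
```

At DataFrame scale this is what tools like scikit-learn’s SimpleImputer or a pandas fillna with a column mean do, with Data Wrangler packaging the step as a point-and-click transformation.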

All of these workflows can then be saved in a notebook or as a script so that teams can replicate them — and used in SageMaker Pipelines to automate the rest of the workflow, too.

It’s worth noting that there are quite a few startups that are working on the same problem. Wrangling machine learning data, after all, is one of the most common problems in the space. For the most part, though, most companies still build their own tools and as usual, that makes this area ripe for a managed service.

News: WaveOne aims to make video AI-native and turn streaming upside down

Video has worked the same way for a long, long time. And because of its unique qualities, video has been largely immune to the machine learning explosion upending industry after industry. WaveOne hopes to change that by taking the decades-old paradigm of video codecs and making them AI-powered — while somehow avoiding the pitfalls that would-be codec revolutionizers and “AI-powered” startups often fall into.

The startup has until recently limited itself to showing its results in papers and presentations, but with a recently raised $6.5M seed round, they are ready to move towards testing and deploying their actual product. It’s no niche: video compression may seem a bit in the weeds to some, but there’s no doubt it’s become one of the most important processes of the modern internet.

Here’s how it’s worked pretty much since the old days when digital video first became possible. Developers create a standard algorithm for compressing and decompressing video, a codec, which can easily be distributed and run on common computing platforms. This is stuff like MPEG-2, H.264, and that sort of thing. The hard work of compressing a video can be done by content providers and servers, while the comparatively lighter work of decompressing is done on the end user’s machines.

This approach is quite effective, and improvements to codecs (which allow more efficient compression) have led to the possibility of sites like YouTube. If videos were 10 times bigger, YouTube would never have been able to launch when it did. The other major change was beginning to rely on hardware acceleration of said codecs — your computer or GPU might have an actual chip in it with the codec baked in, ready to perform decompression tasks with far greater speed than an ordinary general-purpose CPU in a phone. Just one problem: when you get a new codec, you need new hardware.

But consider this: many new phones ship with a chip designed for running machine learning models, which like codecs can be accelerated, but unlike them the hardware is not bespoke for the model. So why aren’t we using this ML-optimized chip for video? Well, that’s exactly what WaveOne intends to do.

I should say that I initially spoke with WaveOne’s cofounders, CEO Lubomir Bourdev and CTO Oren Rippel, from a position of significant skepticism despite their impressive backgrounds. We’ve seen codec companies come and go, but the tech industry has coalesced around a handful of formats and standards that are revised in a painfully slow fashion. H.265, for instance, was introduced in 2013, but years afterwards its predecessor, H.264, was only beginning to achieve ubiquity. It’s more like the 3G, 4G, 5G system than version 7, version 7.1, etc. So smaller options, even superior ones that are free and open source, tend to get ground beneath the wheels of the industry-spanning standards.

This track record for codecs, plus the fact that startups like to describe practically everything as “AI-powered,” had me expecting something at best misguided, at worst scammy. But I was more than pleasantly surprised: In fact, WaveOne is the kind of thing that seems obvious in retrospect and appears to have a first-mover advantage.

The first thing Rippel and Bourdev made clear was that AI actually has a role to play here. While codecs like H.265 aren’t dumb — they’re very advanced in many ways — they aren’t exactly smart, either. They can tell where to put more bits into encoding color or detail in a general sense, but they can’t, for instance, tell where there’s a face in the shot that should be getting extra love, or a sign or trees that can be done in a special way to save time.

But face and scene detection are practically solved problems in computer vision. Why shouldn’t a video codec understand that there is a face, then dedicate a proportionate amount of resources to it? It’s a perfectly good question. The answer is that the codecs aren’t flexible enough. They don’t take that kind of input. Maybe they will in H.266, whenever that comes out, and a couple years later it’ll be supported on high-end devices.

So how would you do it now? Well, by writing a video compression and decompression algorithm that runs on AI accelerators many phones and computers have or will have very soon, and integrating scene and object detection in it from the get-go. Like Krisp.ai understanding what a voice is and isolating it without hyper-complex spectrum analysis, AI can make determinations like that with visual data incredibly fast and pass that on to the actual video compression part.

Image Credits: WaveOne

Variable and intelligent allocation of data means the compression process can be very efficient without sacrificing image quality. WaveOne claims to reduce the size of files by as much as half, with better gains in more complex scenes. When you’re serving videos hundreds of millions of times (or to a million people at once), even fractions of a percent add up, let alone gains of this size. Bandwidth doesn’t cost as much as it used to, but it still isn’t free.

Understanding the image (or being told) also lets the codec see what kind of content it is; a video call should prioritize faces if possible, of course, but a game streamer may want to prioritize small details, while animation requires yet another approach to minimize artifacts in its large single-color regions. This can all be done on the fly with an AI-powered compression scheme.
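A toy sketch of what such content-aware allocation could look like (an illustration of the concept, not WaveOne’s actual algorithm): split a frame’s bit budget across regions in proportion to region size weighted by a saliency score, so a small face can outweigh a large patch of sky.

```python
def allocate_bits(total_bits, region_sizes, saliency):
    """Split a frame's bit budget across regions, weighting each
    region's pixel count by a saliency score (faces high, flat sky low)."""
    weights = [score * size for score, size in zip(saliency, region_sizes)]
    total = sum(weights)
    return [round(total_bits * w / total) for w in weights]

# Three hypothetical regions: a face, textured background, flat sky
# (sizes in pixels; saliency scores are made up for illustration).
sizes = [10_000, 60_000, 30_000]
scores = [8.0, 2.0, 0.5]
print(allocate_bits(1_000_000, sizes, scores))  # the small face gets an outsized share
```

A real encoder would derive the saliency map from a detection model per frame and feed the weights into its rate-control loop; the proportional split above is just the simplest possible stand-in for that step.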

There are implications beyond consumer tech as well: A self-driving car, sending video between components or to a central server, could save time and improve video quality by focusing on what the autonomous system designates important — vehicles, pedestrians, animals — and not wasting time and bits on a featureless sky, trees in the distance, and so on.

Content-aware encoding and decoding is probably the most versatile and easy to grasp advantage WaveOne claims to offer, but Bourdev also noted that the method is much more resistant to disruption from bandwidth issues. It’s one of the other failings of traditional video codecs that missing a few bits can throw off the whole operation — that’s why you get frozen frames and glitches. But ML-based decoding can easily make a “best guess” based on whatever bits it has, so when your bandwidth is suddenly restricted you don’t freeze, just get a bit less detailed for the duration.

Example of different codecs compressing the same frame.

These benefits sound great, but as before the question is not “can we improve on the status quo?” (obviously we can) but “can we scale those improvements?”

“The road is littered with failed attempts to create cool new codecs,” admitted Bourdev. “Part of the reason for that is hardware acceleration; even if you came up with the best codec in the world, good luck if you don’t have a hardware accelerator that runs it. You don’t just need better algorithms, you need to be able to run them in a scalable way across a large variety of devices, on the edge and in the cloud.”

That’s why the special AI cores on the latest generation of devices are so important. This is hardware acceleration that can be adapted in milliseconds to a new purpose. And WaveOne happens to have been working for years on video-focused machine learning that will run on those cores, doing the work that H.26X accelerators have been doing for years, but faster and with far more flexibility.

Of course, there’s still the question of “standards.” Is it very likely that anyone is going to sign on to a single company’s proprietary video compression methods? Well, someone’s got to do it! After all, standards don’t come etched on stone tablets. And as Bourdev and Rippel explained, they actually are using standards — just not the way we’ve come to think of them.

Before, a “standard” in video meant adhering to a rigidly defined software method so that your app or device could work with standards-compatible video efficiently and correctly. But that’s not the only kind of standard. Instead of being a soup-to-nuts method, WaveOne is an implementation that adheres to standards on the ML and deployment side.

They’re building the platform to be compatible with all the major ML development and distribution frameworks like TensorFlow, ONNX, Apple’s CoreML, and others. Meanwhile the models actually developed for encoding and decoding video will run just like any other accelerated software on edge or cloud devices: deploy them on AWS or Azure, run them locally with ARM or Intel compute modules, and so on.

It feels like WaveOne may be onto something that ticks all the boxes of a major B2B play: it invisibly improves things for customers, runs on existing or upcoming hardware without modification, saves costs immediately (potentially, anyhow) and can be invested in to add value.

Perhaps that’s why they managed to attract such a large seed round: $6.5 million, led by Khosla Ventures, with $1M each from Vela Partners and Incubate Fund, plus $650K from Omega Venture Partners and $350K from Blue Ivy.

Right now WaveOne is sort of in a pre-alpha stage, having demonstrated the technology satisfactorily but not built a full-scale product. The seed round, Rippel said, was to de-risk the technology, and while there’s still lots of R&D yet to be done, they’ve proven that the core offering works — building the infrastructure and API layers comes next and amounts to a totally different phase for the company. Even so, he said, they hope to get testing done and line up a few customers before they raise more money.

The future of the video industry may not look a lot like the last couple decades, and that could be a very good thing. No doubt we’ll be hearing more from WaveOne as it migrates from lab to product.

News: AWS announces high resource Lambda functions, container image support & millisecond billing

AWS announced some big updates to its Lambda serverless function service today. For starters, it will now be able to deliver functions with up to 10 GB of memory and 6 vCPUs (virtual CPUs). This will allow developers building more compute-intensive functions to get the resources they need.

“Starting today, you can allocate up to 10 GB of memory to a Lambda function. This is more than a 3x increase compared to previous limits. Lambda allocates CPU and other resources linearly in proportion to the amount of memory configured. That means you can now have access to up to 6 vCPUs in each execution environment,” the company wrote in a blog post announcing the new capabilities.
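The linear scaling in that quote can be back-of-the-enveloped. AWS’s Lambda documentation has cited roughly one vCPU per 1,769 MB of configured memory; the helper below treats that as an approximation, not an official formula:

```python
def approx_vcpus(memory_mb, mb_per_vcpu=1769):
    """Estimate a Lambda function's vCPU share from its memory setting,
    assuming the roughly linear "1 vCPU per 1,769 MB" figure."""
    return memory_mb / mb_per_vcpu

# From the 128 MB minimum up to the new 10 GB ceiling.
for mem in (128, 1769, 10240):
    print(f"{mem:>6} MB -> ~{approx_vcpus(mem):.2f} vCPU")
```

At the new 10,240 MB ceiling the estimate lands near the 6-vCPU figure AWS quotes; the exact mapping at the boundaries is AWS’s implementation detail.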

Serverless computing doesn’t mean there are no servers. It means that developers no longer have to worry about the compute, storage and memory requirements because the cloud provider — in this case, AWS — takes care of it for them, freeing them up to just code the application instead of deploying resources.

Today’s announcement, combined with support for the AVX2 instruction set, means that developers can use this approach with more sophisticated technologies like machine learning, gaming and even high-performance computing.

One of the beauties of this approach is that in theory you can save money because you aren’t paying for resources you aren’t using. You are only paying each time the application requires a set of resources and no more. To make this an even bigger advantage, the company also announced a new pricing approach in a blog post: “Starting today, we are rounding up duration to the nearest millisecond with no minimum execution time.”
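What millisecond rounding is worth depends on how short your invocations are. A sketch comparing the old 100 ms rounding with 1 ms rounding for a short call (the per-GB-second price is illustrative, not a quote of current pricing):

```python
import math

def billed_ms(duration_ms, granularity_ms):
    """Round a run time up to the billing granularity."""
    return math.ceil(duration_ms / granularity_ms) * granularity_ms

def cost(duration_ms, memory_gb, price_per_gb_second):
    return (duration_ms / 1000) * memory_gb * price_per_gb_second

PRICE = 0.0000166667  # illustrative $/GB-second
old = cost(billed_ms(42, 100), 1, PRICE)  # old scheme: 42 ms billed as 100 ms
new = cost(billed_ms(42, 1), 1, PRICE)    # new scheme: billed as 42 ms
print(f"savings on a 42 ms call: {1 - new / old:.0%}")  # 58% cheaper
```

The shorter and more frequent the invocations, the bigger the win; a function that runs for whole seconds barely notices the change.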

Finally the company also announced container image support for Lambda functions. “To help you with that, you can now package and deploy Lambda functions as container images of up to 10 GB in size. In this way, you can also easily build and deploy larger workloads that rely on sizable dependencies, such as machine learning or data intensive workloads,” the company wrote in a blog post announcing the new capability.

All of these announcements in combination mean that you can now use Lambda functions for more intensive operations than you could previously, and the new billing approach should lower your overall spending as you make that transition to the new capabilities.

News: AWS launches Glue Elastic Views to make it easier to move data from one purpose-built data store to another

AWS has launched a new tool to let developers move data from one store to another called Glue Elastic Views.

At the AWS re:Invent keynote, CEO Andy Jassy announced Glue Elastic Views, a service that lets programmers move data across multiple data stores more seamlessly.

The new service can take data from disparate silos and bring it together. The AWS ETL service allows programmers to write a little bit of SQL code to create a materialized view that can move data from one source data store to another.

For instance, Jassy said, a programmer can move data from DynamoDB to Elasticsearch by setting up a materialized view to copy that data, all while managing dependencies. That means if data changes in the source data store, it will automatically be updated in the other data stores where the data has been relocated, Jassy said.

“When you have the ability to move data… and move that data easily from data store to data store… that’s incredibly powerful,” said Jassy.

News: AWS goes after Microsoft’s SQL Server with Babelfish for Aurora PostgreSQL

AWS today announced a new database product that is clearly meant to go after Microsoft’s SQL Server and make it easier — and cheaper — for SQL Server users to migrate to the AWS cloud. The new service is Babelfish for Aurora PostgreSQL. The tagline AWS CEO Andy Jassy used for this service in his re:Invent keynote today is probably telling: “Stop paying for SQL Server licenses you don’t need.” And to show how serious it is about this, the company is even open-sourcing the tool.

What Babelfish does is provide a translation layer for SQL Server’s proprietary SQL dialect (T-SQL) and communications protocol so that businesses can switch to AWS’ Aurora relational database at will (though they’ll still have to migrate their existing data). It provides translations for the dialect as well as SQL commands, cursors, catalog views, data types, triggers, stored procedures and functions.
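To give a flavor of the dialect gap such a layer has to bridge, here is a toy regex-based translator for a few well-known T-SQL-to-PostgreSQL equivalences. This is emphatically not how Babelfish works (Babelfish sits at the parser and wire-protocol level); it only illustrates the kind of differences involved:

```python
import re

# A few well-known T-SQL constructs and their PostgreSQL equivalents.
RULES = [
    (r"\bGETDATE\(\)", "now()"),                                  # current timestamp
    (r"\bISNULL\(", "COALESCE("),                                 # null-default function
    (r"\bSELECT\s+TOP\s+(\d+)\s+(.*)$", r"SELECT \2 LIMIT \1"),   # row limiting
]

def tsql_to_postgres(statement):
    for pattern, replacement in RULES:
        statement = re.sub(pattern, replacement, statement, flags=re.IGNORECASE)
    return statement

print(tsql_to_postgres("SELECT TOP 5 name, GETDATE() FROM users"))
# -> SELECT name, now() FROM users LIMIT 5
```

Regexes fall over the moment queries get nested or quoted, which is exactly why a production tool has to translate at the parser and protocol level rather than the string level.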

The promise here is that companies won’t have to replace their database drivers or rewrite and verify their database requests to make this transition.

“We believe Babelfish stands out because it’s not another migration service, as useful as those can be. Babelfish enables PostgreSQL to understand database requests—both the command and the protocol—from applications written for Microsoft SQL Server without changing libraries, database schema, or SQL statements,” AWS’s Matt Asay writes in today’s announcement. “This means much faster ‘migrations’ with minimal developer effort. It’s also centered on ‘correctness,’ meaning applications designed to use SQL Server functionality will behave the same on PostgreSQL as they would on SQL Server.”

PostgreSQL, AWS rightly points out, is one of the most popular open-source databases in the market today. A lot of companies want to migrate their relational databases to it — or at least use it in conjunction with their existing databases. This new service is going to make that significantly easier.

The open-source Babelfish project will launch in 2021 and will be available on GitHub under the Apache 2.0 license.

“It’s still true that the overwhelming majority of relational databases are on-premise,” AWS CEO Andy Jassy said. “Customers are fed up with and sick of incumbents.” As is tradition at re:Invent, Jassy also got a few swipes at Oracle into his keynote, but the real target of the products the company is launching in the database area today is clearly Microsoft.

News: Floww raises $6.7M for its data-driven marketplace matching founders with investors, based on merit

Floww – a data-driven marketplace designed to allow founders to pitch investors, with the whole investment relationship managed online – says it has raised $6.7M / £5M to date in Seed funding from angels and family offices. Investors include Ramon Mendes De Leon, Duncan Simpson Craib, Angus Davidson, Stephane Delacote and Pip Baker (Google’s Head of Fintech UK) and multiple Family Offices. The cash will be used to build out the platform, which is designed to give startups access to more than 500 VCs, accelerators and angel networks.

The team consists of Martijn De Wever, founder and CEO of London-based VC Force Over Mass; Lee Fasciani, cofounder of Territory Projects (the firm behind film graphics and design for Guardians of the Galaxy and Blade Runner 2049); and CTO Alex Pilsworth, a veteran of various fintech startups.

Having made over 160 investments himself, De Wever says he recognized the need for a platform connecting investors and startups based on merit, clean data, and transparency, rather than a system built on “warm introductions” which can have inherent cultural and even racial biases.

Floww’s idea is that it showcases startups based on merit only, allowing founders to raise capital by providing investors with data and transparency. Startups are given a suite of tools and materials to get started, from cap table templates to ‘How To’ guides. Founders can then ‘drag and drop’ their investor documents in any format. Floww’s team of accountants then cross-checks the data for errors and processes key performance metrics. A startup’s digital profile includes dynamic charts and tables, allowing prospective investors to see the company’s business potential.

Floww charges a monthly fee to VCs, accelerators, family offices and PE firms. Startups have free access to the platform, with a premium tier for contacting and sending their deal to multiple VCs.

Floww’s pitch is that VCs can, in turn, manage deal-sourcing, CRM, as well as reporting to their investors and LPs. Quite a claim, given that most VCs to date handle this kind of thing in-house. However, Floww claims to have processed 3,000 startups and says it is rolling out to over 500 VCs.

In a statement, De Wever said: “In an age of virtual meetings and connections, the need for coffee meetings on Sand Hill Road or Mayfair is gone. What we need now are global connections, allowing VCs to engage in merit-based investing using data and metrics.” He says the era of the Coronavirus pandemic means many deals will have to be sourced remotely now, so “the time for a platform like this is now.”

AngelList is perhaps its closest competitor from the startup perspective. And the VC application incorporates the kind of functionality seen in Affinity, Airtable, eFront and DocSend. But AngelList doesn’t provide data or metrics.

News: AWS brings ECS, EKS services to the data center, open sources EKS

Today at AWS re:Invent, Andy Jassy talked a lot about how companies are making a big push to the cloud, but today’s container-focused announcements gave a big nod to the data center as the company announced ECS Anywhere and EKS Anywhere, both designed to let you run these services on-premises as well as in the cloud.

These two services, ECS for generalized container orchestration and EKS for orchestration focused on Kubernetes, will let customers use these popular AWS services on premises. Jassy said that some customers still want the same tools they use in the cloud on-prem, and this is designed to give those to them.

Speaking of ECS, he said: “I still have a lot of my containers that I need to run on premises as I’m making this transition to the cloud, and [these] people really want it to have the same management and deployment mechanisms that they have in AWS also on premises and customers have asked us to work on this. And so I’m excited to announce two new things to you. The first is the launch, or the announcement of Amazon ECS Anywhere, which lets you run ECS in your own data center,” he told the re:Invent audience.

Image Credits: AWS

He said it gives you the same AWS APIs and cluster configuration management pieces. This will work the same for EKS, allowing a single management methodology regardless of where you are using the service.

While it was at it, the company also announced it was open sourcing EKS, its managed Kubernetes service. The idea behind these moves is to give customers as much flexibility as possible, recognizing what Microsoft, IBM and Google have been saying: that we live in a multi-cloud and hybrid world, and people aren’t moving everything to the cloud right away.

In fact, in his opening Jassy stated that right now in 2020, just 4% of worldwide IT spend is on the cloud. That means there’s money to be made selling services on premises, and that’s what these services will do.

News: Find out how we’re working toward living and working in space at TC Sessions: Space 2020

The idea of people going to live and work in space, outside of the extremely unique case of the International Space Station, has long been the strict domain of science fiction. That’s changing fast, however, with public space agencies, private companies and the scientific community all looking at ways of making it safe for people to live and work in space for longer periods – and broadening accessibility of space to people who don’t necessarily have the training and discipline of dedicated astronauts.

At TC Sessions: Space on December 16 & 17, we’ll be talking to some of the people who want to make living and working in space a reality, and who are paving the way for the future of both commercial and scientific human space activity. Those efforts range from designing the systems people will need for staying safe and comfortable on long spaceflights, to ideating and developing the technologies needed for long-term stays on the surface of worlds that are far less hospitable to life than Earth, like the Moon and Mars.

We’re thrilled to have Janet Kavandi from Sierra Nevada Corporation, Melodie Yashar from SEArch+, Nujoud Merancy from NASA and Axiom’s Amir Blachman joining us at TC Sessions: Space on December 16 & 17 to chat about the future of human space exploration and commercial activity.

Janet Kavandi is Executive Vice President of Space Systems at the Sierra Nevada Corporation. She was selected as a NASA astronaut in 1994 as a member of the fifteenth class of U.S. astronauts. She completed three space flights in which she supported space station payload integration, capsule communications and robotics. She went on to serve as director of flight crew operations at NASA’s Johnson Space Center and then as director of NASA’s Glenn Research Center, where she directed cutting-edge research on aerospace and aeronautical propulsion, power and communication technologies. She retired from NASA in 2019 after 25 years of service.

Melodie Yashar is a design architect, technologist, and researcher. She is co-founder of Space Exploration Architecture (SEArch+), a group developing human-supporting concepts for space exploration. SEArch+ won top prize in both of NASA’s design solicitations for a Mars habitat within the 3D-Printed Habitat Challenge. The success of the team’s work in NASA’s Centennial Challenge led to consultancy roles and collaborations with UTAS/Collins Aerospace, NASA Langley, ICON, NASA Marshall, and others.

Nujoud Merancy is a systems engineer with an extensive background in human spaceflight and spacecraft at NASA Johnson Space Center. She is currently the Chief of the Exploration Mission Planning Office, responsible for the team of engineers and analysts designing, developing, and integrating NASA’s human spaceflight portfolio beyond low Earth orbit. These missions include planning for the Orion Multi-Purpose Crew Vehicle, Space Launch System, Exploration Ground Systems, Gateway, and Human Landing System.

Amir Blachman is Chief Business Officer at Axiom, a pioneering company in the realm of commercializing space and building the first generation of private commercial space stations. He spent most of his career investing in and leading early stage companies. Before joining Axiom as the company’s first employee, he managed a syndicate of 120 space investors in 11 countries. Through this syndicate, he funded lunar landers, communication networks, Earth imaging satellites, antennae and exploration technologies.

To hear from these experts, you’ll need to pick up your ticket to TC Sessions: Space. Tickets include video on demand for all sessions, so you won’t have to miss a minute of expert insight, tips and trend spotting from the top founders, investors, technologists, government officials and military minds across the public, private and defense sectors. There are even discounts for groups, students and military/government officials.

You’ll find panel discussions, interviews, fireside chats and interactive Q&As on a range of topics: mineral exploration, global mapping of the Earth from space, deep tech software, defense capabilities, 3D-printed rockets and the future of agriculture and food technology. Don’t miss the breakout sessions dedicated to accessing grant money. Explore the event agenda now and get a jump on organizing your schedule.

News: Softbank, Volvo back Flock Freight with $113.5M to help shippers share the load

Every day, thousands of trucks carry freight along U.S. highways, propelling the economy forward as consumer goods, electronics, cars and agriculture make their way to distribution centers, stores and eventually households. It’s inside these trucks — many of which sit half empty — where Flock Freight, a five-year-old startup out of San Diego, believes it can transform the industry.

Now, it has the funds to try and do it.

Flock Freight said Tuesday it has raised $113.5 million in a Series C round led by SoftBank Vision Fund 2. Existing investors SignalFire, GLP Capital Partners and Google Ventures also participated in the round, in addition to a new minority investment by strategic partner Volvo Group Venture Capital. Ervin Tu, managing partner at SoftBank Investment Advisers, will join Flock Freight’s board. The company, which has raised $184 million to date, has a post-funding valuation of $500 million, according to a source familiar with the deal who confirmed an earlier report by Bloomberg.

A slew of startups have popped up in the past several years all aiming to use technology to transform trucking — the backbone of the U.S. economy that moves more than 70% of all U.S. freight — into a more efficient machine. Most have focused on building digital freight networks that connect truckers with shippers.

Flock Freight has focused instead on the shipments themselves. The company created a software platform that helps pool shipments into a single shared truckload to make carrying freight more efficient. Flock Freight says its software avoids the traditional hub-and-spoke system, which is dominated by trucks with less than a full load, known in the industry as LTL. Flock Freight says that by pooling shipments that are going the same direction onto one truck, freight-related carbon emissions can be reduced by 40%.
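Flock Freight hasn’t published how its pooling works, but the core idea can be sketched as a greedy packing of same-lane shipments into trucks up to a linear-feet capacity (real pooling also has to weigh routes, pickup windows and pricing; the numbers below are made up):

```python
def pool_shipments(shipments, truck_capacity_ft=44):
    """Greedy sketch of shared-truckload pooling: shipments already
    grouped by lane (same direction) are packed largest-first into
    trucks up to a linear-feet capacity."""
    trucks, current, used = [], [], 0
    for feet in sorted(shipments, reverse=True):
        if current and used + feet > truck_capacity_ft:
            trucks.append(current)  # close out the full truck
            current, used = [], 0
        current.append(feet)
        used += feet
    if current:
        trucks.append(current)
    return trucks

# Five LTL shipments (linear feet) heading down the same lane.
print(pool_shipments([20, 12, 30, 8, 16]))  # -> [[30], [20, 16], [12, 8]]
```

This is a simple next-fit-decreasing heuristic; a production system would treat pooling as a richer bin-packing/routing optimization, but the sketch shows why half-empty LTL trucks leave so much capacity on the table.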

The funds will be used to hire more employees; the company has 129 to date.

“Unlike the digital freight-matching category that uses technology to simply improve efficiency as workflow automation, Flock Freight uses technology to power a new shipping mode (shared truckload) that makes freight transportation more efficient. The impact of Flock Freight’s algorithms is that shippers no longer need to adhere to LTL constraints for freight that measures up to 44 linear feet; instead, they can classify it as ‘shared truckload,’” Oren Zaslansky, founder and CEO of Flock Freight said in a statement. “Shippers can use Flock Freight’s efficient shared truckload solution to accommodate high demand and increased urgency.”

Their pitch has been compelling enough to attract a diverse mix of venture firms and corporate investors such as Volvo and Softbank.

“Flock Freight is improving supply chain efficiency for hundreds of thousands of shippers. Our investment is intended to accelerate the company’s ability to scale its business and capture a greater share of the market,” said Tu, Managing Partner at SoftBank Investment Advisers.

News: AWS launches Trainium, its new custom ML training chip

At its annual re:Invent developer conference, AWS today announced the launch of AWS Trainium, the company’s next-gen custom chip dedicated to training machine learning models. The company promises that it can offer higher performance than any of its competitors in the cloud, with support for TensorFlow, PyTorch and MXNet.

It will be available as EC2 instances and inside Amazon SageMaker, the company’s machine learning platform.

New instances based on these custom chips will launch next year.

The main arguments for these custom chips are speed and cost. AWS promises 30% higher throughput and 45% lower cost-per-inference compared to the standard AWS GPU instances.

In addition, AWS is also partnering with Intel to launch Habana Gaudi-based EC2 instances for machine learning training as well. Coming next year, these instances promise to offer up to 40% better price/performance compared to the current set of GPU-based EC2 instances for machine learning. These chips will support TensorFlow and PyTorch.

These new chips will make their debut in the AWS cloud in the first half of 2021.

Both of these new offerings complement AWS Inferentia, which the company launched at last year’s re:Invent. Inferentia, also a custom chip, is the inferencing counterpart to these training chips.

Trainium, it’s worth noting, will use the same SDK as Inferentia.

“While Inferentia addressed the cost of inference, which constitutes up to 90% of ML infrastructure costs, many development teams are also limited by fixed ML training budgets,” the AWS team writes. “This puts a cap on the scope and frequency of training needed to improve their models and applications. AWS Trainium addresses this challenge by providing the highest performance and lowest cost for ML training in the cloud. With both Trainium and Inferentia, customers will have an end-to-end flow of ML compute from scaling training workloads to deploying accelerated inference.”
