
News: Apple HomePod update brings Intercom and other new features

Apple HomePod owners, starting today, will be able to use the newly announced “Intercom” feature to send messages between their HomePod smart speakers. The feature, which arrives via a software update, brings this and several other new features to Apple’s smart speakers, including those introduced at Apple’s event last week where the company debuted its $99 HomePod mini.

Of these, Intercom is the most notable update, as it helps the HomePod catch up to rival smart speakers, like those from Amazon and Google, which have offered similar broadcast messaging systems for years.

But in Apple’s case, Intercom doesn’t just send a user’s voice message — like “dinner’s ready!” or “time to go!” — across the family’s HomePod speakers. It’s also meant to work across Apple’s device ecosystem, by adding support for iPhone, iPad, Apple Watch, and even AirPods and CarPlay.

This could be a competitive advantage for HomePod, particularly because Amazon — which leads the U.S. market with its affordable Echo devices — no longer has its own smartphone business.

However, Apple says Intercom’s expanded support for other devices isn’t being rolled out today. Instead, it will arrive through further software updates later this year.

To use Intercom, HomePod owners with multiple devices can say things like:

“Hey Siri, Intercom, Has anyone seen my glasses?”

“Hey Siri, tell everyone, Dinner is ready.”

“Hey Siri, Intercom to the kitchen, Has the game started?”

And to reply, users can say something like “Hey Siri, reply, Yes.”

In addition to the new support for Intercom, the software update also introduces deeper integration with Apple Maps and iPhone, the ability to set and stop timers and alarms from any HomePod, the ability to continue listening to a podcast with multiuser support, and more.

The deeper integration means HomePod owners can now ask Siri for information about traffic conditions, as well as nearby restaurants and businesses. A Siri suggestion will then automatically appear in Maps on your iPhone so the route is available as soon as you get in the car.

HomePod owners can also now ask Siri to search the web, which then sends results to the iPhone.

Two other new features will arrive later this year, including the ability to connect one HomePod (or more) to Apple TV 4K for stereo, 5.1 and 7.1 surround, and Dolby Atmos for movies, TV, games and more.

The other upcoming feature, called Personal Update, will soon let you ask Siri “what’s my update” or “play my update,” to get all the info you need to start your day, including news, weather, calendar events, and any reminders.

News: Now may be the best time to become a full-stack developer

You may not have full knowledge, skills and understanding of everything immediately but you’ll position yourself at the forefront of the world’s software development needs right off the bat.

Sergio Granada
Contributor

Sergio Granada is the CTO of Talos Digital, a global team of professional software developers that partners with agencies and businesses across multiple industries, providing software development and consulting services for their tech needs.

In the world of software development, one term you’re sure to hear a lot of is full-stack development. Job recruiters are constantly posting open positions for full-stack developers and the industry is abuzz with this in-demand title.

But what does full-stack actually mean?

Simply put, it’s development on both the client side (front end) and the server side (back end) of software. Full-stack developers are jacks of all trades, as they work with the design aspect of software the client interacts with as well as the coding and structuring of the server end.

In a time when technological requirements are rapidly evolving and companies may not be able to afford a full team of developers, software developers that know both the front end and back end are essential.

In response to the coronavirus pandemic, the ability to do full-stack development can make engineers extremely marketable as companies across all industries migrate their businesses to a virtual world. Those who can quickly develop and deliver software projects thanks to full-stack methods have the best shot to be at the top of a company’s or client’s wish list.

Becoming a full-stack developer

So how can you become a full-stack engineer and what are the expectations? In most working environments, you won’t be expected to have absolute expertise on every single platform or language. However, it will be presumed that you know enough to understand and can solve problems on both ends of software development.

Most commonly, full-stack developers are familiar with HTML, CSS, JavaScript, and back-end languages like Ruby, PHP, or Python. This matches up with the expectations of new hires as well, as you’ll notice a lot of openings for full-stack developer jobs require specialization in more than one back-end language.

Full-stack is becoming the default way to develop, so much so that some in the software engineering community debate whether the term is redundant. As the lines between the front end and back end blur with evolving tech, developers are increasingly expected to work on all aspects of the software. However, developers will likely have one specialty where they excel while being good in other areas and a novice at some things, and that’s OK.

Getting into full-stack though means you should concentrate on finding your niche within the particular front-end and back-end programs you want to work with. One practical and common approach is to learn JavaScript since it covers both front and back end capabilities. You’ll also want to get comfortable with databases, version control, and security. In addition, it’s smart to prioritize design since you’ll be working on the client-facing side of things.
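To make the "JavaScript covers both ends" point concrete, here is a minimal sketch (with hypothetical names) of one classic full-stack JavaScript benefit: a validation rule written once and shared between the browser (instant form feedback) and the Node.js server (the authoritative check), so the two ends can never drift apart.

```javascript
// validation.js — a hypothetical shared module, not from any real project.

function isValidUsername(name) {
  // 3-16 characters: letters, digits, and underscores only.
  return typeof name === "string" && /^[A-Za-z0-9_]{3,16}$/.test(name);
}

// In the browser, the same function drives inline form feedback:
//   input.addEventListener("input", () => {
//     warning.hidden = isValidUsername(input.value);
//   });
//
// On the server (e.g. an Express handler), it guards the API:
//   app.post("/signup", (req, res) => {
//     if (!isValidUsername(req.body.username)) return res.status(400).end();
//     // ...create the account...
//   });

// Export for Node.js; in a browser <script> the function is simply in scope.
if (typeof module !== "undefined") {
  module.exports = { isValidUsername };
}
```

The design choice this illustrates is exactly what makes full-stack JavaScript attractive: one source of truth for business rules instead of parallel implementations in two languages.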

Since full-stack developers can communicate with each side of a development team, they’re invaluable to saving time and avoiding confusion on a project.

One common argument against full-stack is that, in theory, developers who can do everything may not do any one thing at an expert level. But there’s no hard-and-fast rule saying you can’t be a master at back-end coding and also learn front-end techniques, or vice versa.

Choosing between full-stack and DevOps

One holdup you may have before diving into full-stack is that you’re also mulling over the option of becoming a DevOps engineer. There are certainly similarities between the two professions, including good salaries and the ultimate goal of producing software as quickly as possible without errors. As with full-stack developers, DevOps engineers are also becoming more in demand because of the flexibility they offer a company.

News: Cloud Foundry coalesces around Kubernetes

In a normal year, the Cloud Foundry project would be hosting its annual European Summit in Dublin this week. But this is 2020, so it’s a virtual event. This year, however, has been a bit of a transformative year for the open-source Platform-as-a-Service project — in more ways than one. With Cloud Foundry executive director Abby Kearns leaving earlier this year, the organization’s former CTO Chip Childers stepped into the role. Maybe just as importantly, though, the project’s move to Kubernetes as its container orchestration tool of choice — and a renewed focus on the Cloud Foundry developer experience — is now starting to bear fruit.

“In April, I took over the job. I said: ‘Listen, our community has a new North Star. It’s to go take the Cloud Foundry developer experience and get that thing re-platformed onto Kubernetes. No more delay, no more diversity of thought here. It’s time to make the move,’” Childers said (with a chuckle). “And here we are. It’s October, we have our ecosystem aligned, we have major project releases that are fulfilling that vision. And we’ve got a community that’s very energized around continuing the work of progressing this integration with a bunch of cloud-native projects.”

Developers who use Cloud Foundry, Childers argued, love it, but the project now has an opportunity to show a wider range of potential users that it can offer a smoother developer experience on top of virtually any Kubernetes cluster.

One of the projects working to make this happen, and which hit its 1.0 release today, is cf-for-k8s. Traditionally, getting up and running with Cloud Foundry was a heavy lift, and something that most companies left to third-party vendors to handle. This new project, which launched in April, allows developers to spin up a relatively lightweight Cloud Foundry distribution on top of a Kubernetes cluster, using projects like Istio and Fluentd in addition to Kubernetes, and to do so within minutes.

“It comes along with the whole process of reimagining our architecture to pull in other projects a lot more aggressively and allows us to get to feature parity [with the classic VM-focused Cloud Foundry experience] using a lot more complementary open-source projects,” Childers said about the larger role of this project in the overall ecosystem. “That lets our community focus less on building the underlying plumbing and [spend] more time thinking about how to speed up innovation and the developer experience.”

This wouldn’t be open source if there wasn’t another project that does something quite similar — at least at first glance. That’s KubeCF, which hit its 2.5 launch today. This is an open-source distribution of the Cloud Foundry Application Runtime that, as Childers explained, is meant for production use and was originally meant to provide existing users a bridge onto the Kubernetes bandwagon. Over time, these two projects will likely merge. “Everyone’s collaborating on what this shared vision looks like. They’re just two different distributions that handle the different use cases today,” Childers explained.

After six months in his new position, Childers noted that he’s seeing a lot of energy in the community right now. The job is hard, he said, when there’s unhealthy disagreement, but right now, what he’s seeing is “a beautiful harmony of agreement.”

News: Synthetaic raises $3.5M to train AI with synthetic data

Synthetaic is a startup working to create data — specifically images — that can be used to train artificial intelligence.

Founder and CEO Corey Jaskolski’s past experience includes work with both National Geographic (where he was recently named Explorer of the Year) and a 3D media startup. In fact, he told me that his time with National Geographic made him aware of the need for more data sets in conservation.

Sound like an odd match? Well, Jaskolski said that he was working on a project that could automatically identify poachers and endangered animals from camera footage, and one of the major obstacles was the fact that there simply aren’t enough existing images of either poachers (who don’t generally appreciate being photographed) or certain endangered animals in the wild to train AI to detect them.

He added that other companies are trying to create synthetic AI training data through 3D worldbuilding (in other words, “building a replica of the world that you want to have an AI learn in”), but in many cases, this approach is prohibitively expensive.

In contrast, the Synthetaic (pronounced “synthetic”) approach combines the work of 3D artists and modelers with technology based on generative adversarial networks, making it far more affordable and scalable, according to Jaskolski.

Synthetaic elephants

Image Credits: Synthetaic

To illustrate the “interplay” between the two halves of Synthetaic’s model, he returned to the example of identifying poachers: the startup’s 3D team could create photorealistic models of an AK-47 (and other weapons), then use adversarial networks to generate hundreds of thousands of images or more showing that model against different backgrounds.

The startup also validates its results after an AI has been trained on Synthetaic’s synthesized images, by testing that AI on real data.

For Synthetaic’s initial projects, Jaskolski said he wanted to partner with organizations doing work that makes the world a better place, including Save the Elephants (which is using the technology to track animal populations) and the University of Michigan (which is developing an AI that can identify different types of brain tumors).

Jaskolski added that Synthetaic customers don’t need any AI expertise of their own, because the company provides an “end-to-end” solution.

The startup announced today that it has raised $3.5 million in seed funding led by Lupa Systems, with participation from Betaworks Ventures and TitletownTech (a partnership between Microsoft and the Green Bay Packers). The startup, which has now raised a total of $4.5 million, is also part of Lupa and Betaworks’ Betalab program of startups doing work that could help “fix the internet.”

News: Snap shares explode after blowing past earnings expectations

Snap shares were up nearly 20% in after-hours trading after the company showcased a massive earnings beat, besting analyst expectations on both revenue and earnings per share for Q3. The company was already hovering above an all-time high, with Tuesday’s beat poised to send the share price from just above $28 to just short of $34 per share.

The company posted a per-share profit of $0.01, besting expectations of a $0.04 loss, but the real headline was that it delivered $679 million in reported revenue, smashing past Wall Street expectations that pinned its performance for the quarter around $555 million.

The revenue numbers represented 52% year-over-year growth, a huge comeback for a company that has faced some difficult quarters since its public debut.

User growth was up 4% to 249 million daily active users from the 238 million they reported at the end of last quarter, marking an 18% year-over-year increase. The company still posted a net loss of $200 million, but that’s a 12% improvement from last year’s numbers.

News: Here’s why Netflix shares are off after reporting earnings

Shares of consumer video service Netflix are down sharply after the bell today, following the company’s Q3 earnings report.

Why is Netflix suddenly worth about 5% less than before? A mixed earnings report, a disappointing new paying customer number, and slightly slack guidance appear to be the answer.

The numbers

Heading into the third quarter, Netflix told investors that they should expect it to generate revenues of $6.33 billion, operating income of $1.25 billion, and net income of around $954 million, worth about $2.09 in earnings per share.

Today, Netflix reported $6.44 billion in revenue, operating income of $1.32 billion, along with $1.74 in per-share profit off of net income of $790 million.
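The per-share figure follows directly from net income divided by the diluted share count. Netflix reports the exact count; this back-of-the-envelope sketch approximates it from the company's own guidance, which is why the result lands near, not exactly on, the reported $1.74:

```javascript
// Rough EPS check from the figures in the report (all USD).
// The diluted share count here is an approximation implied by
// Netflix's own guidance, not the reported number.

const guidedNetIncome = 954e6; // Q3 guidance
const guidedEps = 2.09;        // Q3 guidance, per share

// Implied diluted share count: net income / EPS (~456 million shares)
const impliedShares = guidedNetIncome / guidedEps;

const actualNetIncome = 790e6;
const actualEps = actualNetIncome / impliedShares; // ~1.73

console.log(`implied shares: ${(impliedShares / 1e6).toFixed(0)}M`);
console.log(`approx EPS: ${actualEps.toFixed(2)}`);
```

The small gap between the ~$1.73 computed here and the reported $1.74 comes down to the exact diluted share count used in the filing.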

Netflix bested its revenue goals, but fell short on profitability.

The company also managed to best analyst revenue expectations of $6.38 billion, while missing out on analyst per-share profit expectations of $2.13.

Adding to the pain, Netflix also missed expectations on new customer adds. In its Q2 earnings, Netflix said that it “forecast[ed] 2.5m paid net adds for Q3’20 vs. 6.8m in the prior year quarter,” because its “strong first half performance likely pulled forward some demand from the second half of the year.”

Today Netflix reported just 2.2 million customer adds, missing its own targets and sharply missing analyst expectations of around 3.3 million for the period (some analyst counts had an even higher guess).

Looking ahead, Netflix says that in Q4 it expects revenues of $6.57 billion, operating income of $885 million, $615 million in net income, earnings per share of $1.35, and 6.0 million new paid customers in the period. The street had been looking for $6.58 billion in top line, and just $0.94 in per-share profit, so it’s hard to parse which part of the forecast is driving more investor sentiment.

Regardless, today’s earnings report will not move Netflix’s share price too far from its recent, all-time highs. The company may take a ding from its profit miss, but nothing material.

News: Netflix user growth slows as production ramps up again

After the COVID-19 pandemic drove impressive subscriber growth earlier this year, Netflix’s numbers have come back down to Earth.

The streaming service added 15.77 million net new subscribers in the first quarter of the year, followed by 10.09 million in Q2. It only projected 2.5 million for Q3.

Today’s earnings report shows the company falling short of that already-underwhelming goal, with only 2.2 million net additions, bringing its total subscriber base to 195 million. And it’s forecasting 6.0 million net additions in Q4, compared to 8.8 million in the same period last year.

“As we have highlighted in our recent investor letters, we believe our record first half paid net additions would result in slower growth in the back half of this year,” the company said in its letter to shareholders. “If we achieve our forecast, it will put us at a record 34m paid net adds for 2020, well above our prior annual high of 28.6m in 2018.”

The company also said that “retention remains healthy and engagement per member household was up solidly year over year.”

While the pandemic may have accelerated Netflix’s user growth, it also halted film production for safety reasons. That’s meant a slowing release schedule — though the delay is less noticeable for Netflix, since it had so many shows and movies in the pipeline.

With production resuming, the company said it’s actually completed principal photography on more than 50 productions since mid-March, with plans to do the same for 150 additional productions by the end of the year.

The fourth season of “Stranger Things,” the second season of “The Witcher” and the action film “Red Notice” (starring Dwayne Johnson, Gal Gadot and Ryan Reynolds) have all resumed production as well.

The announcement includes viewership numbers for a handful of shows and movies released in the last quarter: 43 million subscribers chose to watch the new season of “The Umbrella Academy,” 48 million chose to watch “Ratched,” 38 million chose to watch “The Social Dilemma” and 78 million chose to watch the Charlize Theron action movie “The Old Guard.” (Reminder: Netflix’s “chose to watch” metric refers to the number of subscribers who watched at least two minutes of a program.)

News: Adobe brings its misinformation-fighting content attribution tool to the Photoshop beta

Adobe’s work on a chain of custody that could link online images back to their origins is inching closer to becoming a reality. The prototype, part of the Content Authenticity Initiative (CAI), will soon appear in the beta of Photoshop, Adobe’s ubiquitous image editing software.

Adobe says the preview of the new tool will be available to users in the beta release of Photoshop and Behance over the next few weeks. The company calls the CAI implementation “an early version” of the open standard that it will continue to hone.

The project has a few different applications. It aims to provide a more robust means of keeping creators’ names attached to the content they create. But the most compelling use case for CAI would see the tool become a “tamper-proof” industry standard aimed at images used to spread misinformation.

Adobe describes the project’s mission as an effort to “increase trust and transparency online with an industry-wide attribution framework that empowers creatives and consumers alike.” The result is a technical solution that could (eventually) limit the spread of deepfakes and other kinds of misleading online content.

“… Eventually you might imagine a social feed or a news site that would allow you to filter out things that are likely to be inauthentic,” Adobe’s director of CAI, Andy Parsons, said earlier this year. “But the CAI steers well clear of making judgment calls — we’re just about providing that layer of transparency and verifiable data.”

The idea sounds like a spin on EXIF data, the embedded opt-in metadata that attaches information like lens type and location to an image. But Adobe says the new attribution standard will be less “brittle” and much more difficult to manipulate. The end result would have more in common with digital fingerprinting systems like the ones that identify child exploitation online than it would with EXIF.
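To see why a pixel-derived fingerprint is sturdier than metadata, here is a toy "average hash" sketch. This is only an illustration of the general idea behind perceptual fingerprinting, not Adobe's CAI standard or any real system like PhotoDNA: unlike EXIF, which sits alongside the pixels and is trivially stripped, this fingerprint is computed from the pixels themselves, so it survives metadata removal and small edits.

```javascript
// Toy average-hash fingerprint over a 2D grid of grayscale values (0-255).
// Each bit is 1 where a pixel is brighter than the image's mean, else 0.
function averageHash(pixels) {
  const flat = pixels.flat();
  const mean = flat.reduce((a, b) => a + b, 0) / flat.length;
  return flat.map((p) => (p > mean ? "1" : "0")).join("");
}

// Hamming distance: how many bits differ. A small distance suggests
// the two images are (near-)duplicates despite surface edits.
function hamming(a, b) {
  let d = 0;
  for (let i = 0; i < a.length; i++) if (a[i] !== b[i]) d++;
  return d;
}

const image = [[10, 200], [220, 30]];
const brightened = [[15, 205], [225, 35]]; // a light edit: same structure

const h1 = averageHash(image);
const h2 = averageHash(brightened);
console.log(hamming(h1, h2)); // 0: the brightening didn't change the fingerprint
```

Stripping EXIF from either image would not change these hashes at all, which is the property the article's comparison is getting at.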

“We believe attribution will create a virtuous cycle,” Allen said. “The more creators distribute content with proper attribution, the more consumers will expect and use that information to make judgement calls, thus minimizing the influence of bad actors and deceptive content.”

News: How to ‘watch’ NASA’s OSIRIS-REx snatch a sample from near-Earth asteroid Bennu

NASA’s OSIRIS-REx probe is about to touch down on an asteroid for a smash-and-grab mission, and you can follow its progress live — kind of. The craft is scheduled to perform its collection operation this afternoon, and we’ll know within minutes if all went according to plan.

OSIRIS-REx, which stands for Origins, Spectral Interpretation, Resource Identification, and Security-Regolith Explorer, was launched in September of 2016 and, since arriving at its destination, the asteroid Bennu, has performed a delicate dance with it, entering an orbit so close it set records.

Today is the culmination of the team’s efforts, the actual “touch and go” or TAG maneuver that will see the probe briefly land on the asteroid’s surface and suck up some of its precious space dust. Just a few seconds later, once sampling is confirmed, the craft will jet upward again to escape Bennu and begin its journey home.

Image Credits: NASA

While there won’t be live HD video of the whole attempt, NASA will be providing both a live animation of the process, informed by OSIRIS-REx’s telemetry, and presumably any good images that are captured as it descends.

We know for certain this is both possible and very cool because Japan’s Hayabusa-2 asteroid mission did something very similar last year, but with the added complexity (and coolness) of firing a projectile into the surface to stir things up and get a more diverse sample.

NASA’s coverage starts at 2 p.m. PDT, and the touchdown event is planned for an hour or so later, at 3:12 p.m. You can watch the whole thing take place in simulation at this Twitch feed, which will be updated live, but NASA TV will also have live coverage and commentary on its YouTube channel. Images may come back from the descent and collection, but they’ll be delayed (it’s hard sending lots of data over a million-mile gap), so if you want the latest, listen closely to the NASA feeds.

News: Equity Shot: The DoJ, Google, and what the suit could mean for startups

Hello and welcome back to Equity, TechCrunch’s venture capital-focused podcast where we unpack the numbers behind the headlines.

It’s a big day in tech because the US Federal Government is going after Google on anti-competitive grounds. Sure, the timing appears crassly political and the case is not picking up huge plaudits thus far for its air-tightness, but that doesn’t mean we can ignore it.

So Danny and I got on the horn to chat it up for about 10 minutes to fill you in. For reference, you can read the full filing here, in case you want to get your nails in. It’s not a complicated read. Get in there.

As a pair we dug into what stood out from the suit, what we think about the historical context, and also noodled at the end about what the whole situation could mean for startups; it’s not all good news, but adding lots of competitive space to the market would be a net good for upstart tech companies in the long run.

And consumers. Competition is good.

You can read TechCrunch’s early coverage of the suit here, and our look at the market’s reaction here. Let’s go!

Equity drops every Monday at 7:00 a.m. PT and Thursday afternoon as fast as we can get it out, so subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts.
