Atlanta United to play Minnesota United in U.S. Open Cup final

By News

Atlanta United to play Minnesota United in U.S. Open Cup final originally published on Dirty South Soccer – All Posts

[Photo: Minnesota United FC at Atlanta United FC. Brett Davis-USA TODAY Sports]
The cup final matchup is set

After Atlanta United dispatched Orlando City 2-0 in the first U.S. Open Cup semi-final, played Tuesday, Minnesota United punched its ticket the next night with a 2-1 win over the Portland Timbers.

Last month, Atlanta drew first priority to host the final if they were to get there, and indeed Mercedes-Benz Stadium will host its second cup final in as many years—albeit different cups. The U.S. Open Cup final will take place on August 27 at 8 p.m.

It’s an interesting matchup in that not only is it a rematch of a game earlier this season that Atlanta won 3-0, but it pits the two “Uniteds” that joined the league together against one another. Having experienced two very different paths to get to this point, the two clubs arrive on similar upward trajectories. While Atlanta United was busy winning MLS Cup and nearly claiming a double with a Supporters’ Shield title, Minnesota were stuck in first gear—enduring a coaching change and personnel revamp. But the pivot worked, and now Minnesota are on the rise, currently placed second in the conference standings—just like Atlanta—and playing some fun, direct attacking soccer.

The Rundown: Creative agencies face a perfect storm

The Rundown: Creative agencies face a perfect storm originally published on Digiday

The shutdown of Barton F. Graf is yet another sign of the times for creative agencies. The shop, which was founded just under a decade ago, has been one of the darlings of the creative agency world. It will close its doors later this year. Founder Gerry Graf, speaking to Ad Age, said the closure was due to “a perfect storm.”

But that storm shouldn’t come as a surprise; it’s been brewing for a while. Barton F. Graf, like other agencies, was hit hard by the ongoing movement from clients towards more project-based work. That’s a tough pivot for agencies like this one, which was built on models that prized agency of record work. 

And agency models, despite lots of loud and public calls for “innovation,” haven’t really evolved much at all. Agencies are typically based on an FTE model, where they’re paid according to the level and number of employees needed to service a client’s business, with (hopefully) a margin added on top.
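As a rough illustration of how an FTE-based fee works, the agency's price is essentially headcount times cost, plus a target margin. The staffing plan, rates, and margin below are invented for the example:

```python
# Toy sketch of an FTE-based agency fee: labor cost of the account team,
# plus a target margin. All figures here are hypothetical.

def fte_fee(staff_plan, margin=0.15):
    """staff_plan: list of (annual_cost_per_person, headcount) tuples."""
    labor_cost = sum(cost * count for cost, count in staff_plan)
    return labor_cost * (1 + margin)

# A hypothetical team: 2 creative directors, 4 designers, 3 account managers
plan = [(250_000, 2), (120_000, 4), (110_000, 3)]
print(fte_fee(plan))  # roughly 1,506,500
```

The squeeze described above is simply the case where the fee a client will actually pay falls below `labor_cost`, at which point the margin term goes negative.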

A labor-based model doesn’t really work in a market like this one. Clients want to pay less money overall, marketing is still considered a cost-center, and they’re likelier to only want to pay for things they can’t do themselves. That means, in many cases, the money agencies are charging won’t cover their costs, let alone garner a profit.

How much of this is the agencies’ own fault? Some say it’s the mindset that they’d always get paid for their services that is coming back to bite them. At least a few agency CEOs and former execs have told me that agencies for too long charged clients too much for work they could have done with less. That means clients, sick and tired of that kind of financial arrangement (and under pressure from their bosses), are understandably looking to save money.

Some agencies have attempted to evolve. Some of them tried to make products themselves, leading to an explosion in “agency IP” projects a few years ago, with little in the way of tangible results. Some are offering to co-create and co-invest, thus having more of a hand in making and growing the client’s products and businesses. Many will survive, especially newer ones accustomed to working in this new way. But for pure creative agencies who are unwilling or unable to make significant and drastic shifts in their models, it seems like the end is nigh. 

David Droga, who sold his agency to Accenture Interactive just a few months ago, says agencies should ensure they’re selling more than just a “big idea”. “It’s one thing for us to want great and grand ambitious creative thinking that positions a brand. But our fees are one chunk of that. It’s not all. There are things that a consumer experiences about the brand that don’t touch a creative agency,” he told Digiday previously. “Blue-chip brands give AORs fees of $10 million or $15 million. But the people who are controlling the customer experience, they’re getting paid an ongoing fee of $100 million a year. I don’t need that number, but what I want is to be that important and that influential. I want CMOs to love us and CEOs to love us as well.” — Shareen Pathak

The subscriber is always right? 
There’s no shortage of data encouraging news publishers to lean into consumer revenue. But there’s also ample proof that relying on readers for direct revenue means managing a completely different kind of relationship than many publishers are used to.

This week has already offered examples of both. 

On Wednesday, The Guardian announced it had broken even for the first time in years, thanks largely to growth in its membership ranks and donations. The British news publisher said it now has over 655,000 paying supporters across print and digital around the world, who get perks such as free access to Guardian events, depending on the tier of membership. An additional 300,000 people gave to The Guardian via one-off donations.

But it wasn’t all good news for reader revenue this week. On Monday, the hashtag #CancelNYT began trending on Twitter after a handful of users took issue with the way the newspaper framed a speech on gun violence given by President Donald Trump. Within hours, a mixture of Times subscribers and gleeful conservatives were tweeting that they were tired of the publisher’s handling of the president’s racist rhetoric and provocations. Some framed the headline as the straw that broke the camel’s back, placed atop everything from the paper’s employment of conservative columnist Bret Stephens to a decision to accept advertising that attacked Congresswoman Rashida Tlaib.

Grandstanding on Twitter is easy, and canceling a newspaper subscription can be notoriously hard, but it appears that some people did follow through. The paper admitted to the Columbia Journalism Review that it had experienced a “higher number of cancellations than is typical” following the incident.  

The Times is held to an unusually high standard because it is regarded by many as a standard-bearer for American journalism. But the dust-up confirms that news publishers, particularly those focused on growing subscriptions, have to think intently about the expectations of their subscribers, and how that relationship can be managed. 

Newspapers have decided to embrace the idea that they are bulwarks of democracy and community vitality. That’s an admirable responsibility, but people expect different things from their idols.  — Max Willens

How Facebook is attempting to target ads without personal data

How Facebook is attempting to target ads without personal data originally published on AdAge

Facebook is telling advertisers that it has a new way to identify their ideal consumer and target ads without relying on personal traits that led to abuses in the past.

The social network says it can build accurate profiles on consumers without relying on their age, gender, ZIP and other sensitive characteristics, as it has devised an alternate route to ad targeting based on people’s online behavior, not personal attributes. Facebook developed the new targeting tool, which it calls Special Ad Audience, in the wake of a civil rights settlement over concerns that marketers could use its ad platform to discriminate against certain groups of people.

In March, Facebook began rolling out a series of updates to its ads platform to prevent abuses like excluding minorities from seeing ads about housing opportunities. The changes prohibit targeting people based on categories like race, gender, age, family status and even household income.

Because the targeting tool is so new, its effectiveness is still unknown. Advertisers are concerned that restrictions on their use of data will limit the success of campaigns. The rules mostly apply to housing, employment and credit ads, but they affect any advertiser that mentions financing offers, such as automakers.

Car advertisers provide an interesting case study for Facebook’s new policies in that auto marketers can’t even link to deals on their websites without adhering to the new targeting restrictions. “All deal-based ads are getting sucked into this, including our advertisements for cars, which to be honest with you is strange,” says one ad agency executive who handles the account of a major car company. 

Just putting a suggested retail price in a car ad means the targeting restrictions apply. The advertiser has to either change the message or deliver the ad to a general audience. Without highly specific targeting, marketers fear, ad campaigns could become less successful at driving sales.

Life after Cambridge Analytica
A Facebook spokeswoman says that Special Ad Audience replaces so-called “lookalike” audiences. Lookalikes are users that share traits gleaned from customer profiles provided by advertisers. For instance, an automaker could share customer emails with Facebook that provide details about the brand’s most loyal customers, then the social network could serve ads to similar people. But not anymore.
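As a rough illustration, the lookalike mechanism amounts to ranking a user base by how much each user's traits overlap with a seed audience. This toy sketch uses invented data and a deliberately crude overlap score; Facebook's actual models are far more sophisticated:

```python
# Toy "lookalike" expansion: rank users by trait overlap with a seed
# audience supplied by an advertiser. Data and scoring are invented.

def lookalikes(seed_traits, user_base, k=2):
    def score(user):
        # crude similarity: number of shared traits
        return len(user["traits"] & seed_traits)
    ranked = sorted(user_base, key=score, reverse=True)
    return [u["id"] for u in ranked[:k] if score(u) > 0]

seed = {"owns_car", "reads_auto_news", "age_25_34"}
users = [
    {"id": "u1", "traits": {"owns_car", "reads_auto_news"}},
    {"id": "u2", "traits": {"likes_cooking"}},
    {"id": "u3", "traits": {"owns_car", "age_25_34", "reads_auto_news"}},
]
print(lookalikes(seed, users))  # ['u3', 'u1']
```

The policy change described in the article amounts to removing sensitive attributes (age, gender, ZIP and the like) from the trait sets such a model is allowed to compare, leaving only behavioral signals.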

In March, Facebook announced a settlement over lawsuits with civil rights groups, including the American Civil Liberties Union, after it was revealed that marketers could target ads in ways that discriminated against certain groups. It is against federal law to discriminate when advertising housing, employment and credit opportunities.

Facebook’s civil rights changes aren’t the only new developments throwing off advertisers, either. In 2018, Facebook removed third-party data providers from direct integration into its platform. Companies including Oracle Data Cloud, Acxiom and Epsilon had been offering advertisers direct access to hundreds of hyper-specific audiences. It was a data solution that gave brands instant audiences broken down by income, employment, interests, family status and other categories.

Now, advertisers have to go directly to the third-party companies and bring the audiences into the social network for targeting themselves.

Brad O’Brien, VP of social and content marketing at marketing firm 3Q Digital, says that the job of a Facebook ad buyer is changing. There is less need for expertise in slicing niche audiences for super-customized ad campaigns; instead, the social network is encouraging marketers to cast a wide net.

Facebook’s algorithm is doing more of the work, anyway, O’Brien says, as more of the planning process and the placement of ads are handled by automation.  

“The days of doing specific targeting on Facebook are over,” O’Brien says.

Data privacy-first advertising is here: Here are the winners and losers

Data privacy-first advertising is here: Here are the winners and losers originally published on Digiday

It’s a new dawn in digital advertising.

The drive for data privacy-first strategies has become more apparent, spurred by anti-tracking moves made by browsers as well as tighter data protection laws. The knock-on effect is that commercially available data will become less abundant in the years to come. While a considerable part of programmatic ad buying and selling remains reliant on third-party cookies, that’s set to change.

Like with any significant change, there are always winners and losers. Here’s a look at some of them.

Winners:

Contextual targeting
Contextual targeting has gotten a lot sexier again. Many ad executives still refer to Oracle’s $325 million acquisition of contextual ad tech company Grapeshot last spring as a solid indicator that this form of advertising was the future. After all, there weren’t many ad tech businesses selling for such desirable prices last year. In the initial wake of the General Data Protection Regulation’s arrival last May, publishers had enjoyed a bit of a windfall in contextual-targeting buys, as media agencies preferred to err on the side of caution and spend more on targeting methods that weren’t reliant on users’ personal data. But that caution was relatively short-lived for most. Fast-forward to today, and the search for sophisticated contextual-targeting options that can mirror the effectiveness of audience targeting at scale has become an arms race. Publishers, media agencies and ad tech vendors will all want a slice of the pie.
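At its simplest, contextual targeting matches ads to the content of the page rather than to a profile of the user, so no personal data is involved. A minimal keyword-matching sketch, with categories and keyword lists invented for illustration (production systems use far richer semantic models):

```python
# Toy contextual matcher: score ad categories against a page's text by
# keyword overlap. Categories and keywords are hypothetical examples.

AD_CATEGORIES = {
    "automotive": {"car", "engine", "suv", "dealership"},
    "travel": {"flight", "hotel", "beach", "itinerary"},
}

def match_category(page_text):
    words = set(page_text.lower().split())
    scores = {cat: len(words & kws) for cat, kws in AD_CATEGORIES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(match_category("Review: the new SUV has a quiet engine"))  # automotive
```

The "arms race" is in closing the gap between this kind of blunt matching and the precision advertisers are used to from audience targeting.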

Authenticated-consent ad buys
Inventory that has bona fide user consent attached will be gold dust. If a publisher has managed to obtain that consent without the need for third-party cookies, all the better. Some ad tech executive sources have predicted there will come a time when some ad tech vendors, to avoid flirting with GDPR fines, will begin to fence off the ability to buy and sell inventory that has no user-consent signal. In time, media agencies may be willing to pay a higher CPM for inventory they know has consent attached.
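Fencing off unconsented inventory would, in effect, mean filtering bid opportunities on a consent signal before they are ever traded. A hypothetical sketch; the field names below are invented, whereas real systems carry consent in the IAB's standardized TCF consent string:

```python
# Toy filter: pass through only bid requests carrying an affirmative
# user-consent signal. Field names are hypothetical, not a real
# OpenRTB/TCF schema.

def consented_only(bid_requests):
    return [r for r in bid_requests if r.get("user_consent") is True]

requests = [
    {"id": "a1", "user_consent": True},
    {"id": "a2", "user_consent": False},
    {"id": "a3"},  # no signal at all, so treated as no consent
]
print([r["id"] for r in consented_only(requests)])  # ['a1']
```

Note the default: a request with no signal is dropped, which is the cautious posture the sources quoted above expect vendors to adopt.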

Scaled log-in strategies
The third-party cookie is becoming a term synonymous with yesterday’s business model. The future is in first-party cookie-based models and other forms of measuring and tracking identity in a way that complies with data privacy law. Subscription publishers are already well into their stride with this approach, having established free registration as a way to warm up future potential subscribers. Meanwhile, publishers in Germany have formed log-in alliances with other businesses, including domestic rivals as well as marketers.

Consumers
Just how much consumers care about their data privacy is a question that gets debated a lot. There is a strong contingent of privacy activists who care a great deal and have voiced their concerns to GDPR regulators over the misuse of their data within advertising. As for the mass public, media industry opinion is divided over just how much they care, and how much more they might care if they truly understood how their data is used in the buying and trading of ads on the open exchange. Regardless, all this effort is meant to give consumers more choice and control over how their data is used, whether or not they decide to act on it. And there is growing evidence that they do care: high-profile scandals like Cambridge Analytica raise awareness, as does mainstream press coverage of large fines against well-known brands like British Airways for leaking data and, therefore, violating GDPR.

Losers:

Real-time bidding
It may just be a mechanism to facilitate a form of targeting, but real-time bidding is losing its grip as the most effective method for delivering acute audience targeting. That’s largely because it’s under scrutiny by U.K. data protection regulator The Information Commissioner’s Office as a method of ad delivery that misuses personal data and, therefore, flouts GDPR. Privacy activists have likewise made RTB and its incompatibility with GDPR the crux of their argument when lodging complaints against ad tech businesses with regulators. Businesses cannot claim the legitimate interest GDPR clause for using personal data within RTB either, which puts the method under increased pressure.

Third-party cookie addicts
Thanks to Apple’s no-nonsense approach to third-party cookie tracking on its Safari browser, programmatic monetization on that browser is a dead dodo. The same goes for the Firefox and Brave browsers, which have similar policies. Google has given users the choice of switching off third-party cookie tracking, but there are all sorts of reasons why a consumer may never get round to that, particularly if it means they can’t use Google products in the same way. Time will tell whether it has any kind of impact. Regardless, that browser-led trend, combined with the challenges around using personal data for targeting under GDPR and other data protection laws, is squeezing the amount of commercially available data for advertisers. Publishers are already dropping third-party cookie-based data management platforms like flies, in favor of ones that are first-party data-centric. Forms of targeting that aren’t reliant on personal user data and third-party cookies will win out. Any business that sits back and doesn’t evolve and provide alternatives to the third-party cookie will struggle.

Ann Curry Will ‘Harness the Power’ of Live TV For the First Time Since Leaving the Today Show

Ann Curry Will ‘Harness the Power’ of Live TV For the First Time Since Leaving the Today Show originally published on AdFreak

It’s been seven years since Ann Curry last regularly appeared on live television, when she exited as co-anchor of the Today show, where she had worked since 1997. Now Curry is returning to live TV with Chasing the Cure, a 90-minute series debuting tonight on TNT and TBS that will use a combination of crowdsourcing and top doctors to help people suffering from various undiagnosed and misdiagnosed medical mysteries.

Curry—who anchors and executive produces the multiplatform series—spoke with Adweek about returning to the “tremendous power” of live TV, why she couldn’t say no to Chasing the Cure and her thoughts about getting left out of Today’s recent 25th anniversary celebration video.

This interview has been condensed and edited.

Adweek: What was the draw for you of becoming involved with this?
Curry: Throughout my career, I have always tried to give voice to the voiceless. This project lets people who are rarely heard be listened to and have their cases considered. This is what I’ve always tried to do. When I heard the idea, my first response was, “How do we make sure that a project like this doesn’t exploit people? How do we make sure that we do this correctly, and is there a way to ethically do this?” And my second response was, “If we could do it ethically, if we could do it right, how could I say no?” Because it’s an opportunity to help people who are at their wit’s end, people who are undiagnosed—and there are millions who are misdiagnosed every year. How could I say no? So that’s why I said yes.

“Throughout my career, I have always tried to give voice to the voiceless.”

Ann Curry

When your involvement was first announced, the project was called M.D. Live …
Which was never going to happen!

… how did the scope of the show change since then?
A lot. It’s been like a snowball rolling downhill. It’s changed because it went from a concept [to reality]: What if you took what every one of us is seeing on social media—these stories about people who are undiagnosed or can’t pay for their care—and you amplified those with live television, which has such power? In this time when television is struggling for relevancy, live television has tremendous power, because there’s something authentic, real and connected about it. What if you amplified this trend that’s happening out there and you took those stories and told them well, in almost a documentary way? But then you also brought those patients on live, and you gave them access to doctors any one of us would be lucky to see?

Because these patients are long-suffering, we made it our mission to find doctors who were not easy to find. Doctors to whom we would say, “We want you to think about being on television, because these patients need you.” And the reason why the majority of these doctors said yes—and some of them are big-deal doctors—is because they recognize that while the opportunity for getting amazing medical care exists in this country, the truth is that for most of us, the system prevents us from getting it. We’re siloed from the specialists and doctors we need by where we live, by who the specialists are in our town, and by our medical insurance.

And so what we’re doing with this project is punching a hole in the silo and saying, look, what will happen if you give people access directly, break down the walls and connect them, in addition to anyone out there who may know something? There are more than 300 million people in this country, and there’s a chance that somebody else has a similar symptom, or somebody else is a medical professional who has seen a patient like this. And our website is going to be up so that anybody in the world can see these cases if they register. So there’s a hope that wasn’t there before.

A Framework for Moderation

A Framework for Moderation originally published on Stratechery

On Sunday night, when Cloudflare CEO Matthew Prince announced in a blog post that the company was terminating service for 8chan, the response was nearly universal: Finally.

It was hard to disagree: it was on 8chan — which was created after complaints that the extremely lightly moderated anonymous forum 4chan was too heavy-handed — that a suspected terrorist gunman posted a rant explaining his actions before killing 20 people in El Paso. This was the third such incident this year: the terrorist gunmen in Christchurch, New Zealand and Poway, California did the same; 8chan celebrated all of them.

To state the obvious, it is hard to think of a more reprehensible community than 8chan. And, as many were quick to point out, it was hardly the sort of site that Cloudflare wanted to be associated with as they prepared for a reported IPO. Which again raises the question: what took Cloudflare so long?

Moderation Questions

The question of when and why to moderate or ban has been an increasingly frequent one for tech companies, although the circumstances and content to be banned have often varied greatly. Some examples from the last several years:

  • Cloudflare dropping support for 8chan
  • Facebook banning Alex Jones
  • The U.S. Congress creating an exception to Section 230 of the Communications Decency Act for the stated purpose of targeting sex trafficking
  • The Trump administration removing ISPs from Title II classification
  • The European Union ruling that the “Right to be Forgotten” applied to Google

These may seem unrelated, but in fact all are questions about what should (or should not) be moderated, who should (or should not) moderate, when should (or should not) they moderate, where should (or should not) they moderate, and why? At the same time, each of these examples is clearly different, and those differences can help build a framework for companies to make decisions when similar questions arise in the future — including Cloudflare.

Content and Section 230

The first and most obvious question when it comes to content is whether or not it is legal. If it is illegal, the content should be removed.

And indeed it is: service providers remove illegal content as soon as they are made aware of it.

Note, though, that service providers are generally not required to actively search for illegal content, which gets into Section 230 of the Communications Decency Act, a law that is continuously misunderstood and/or misrepresented.

To understand Section 230 you need to go back to 1991 and the court case Cubby v. CompuServe. CompuServe hosted a number of forums; a member of one of those forums made allegedly defamatory remarks about a company named Cubby, Inc. Cubby sued CompuServe for defamation, but a federal court judge ruled that CompuServe was a mere “distributor” of the content, not its publisher. The judge noted:

The requirement that a distributor must have knowledge of the contents of a publication before liability can be imposed for distributing that publication is deeply rooted in the First Amendment…CompuServe has no more editorial control over such a publication than does a public library, book store, or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so.

Four years later, though, Stratton Oakmont, a securities investment banking firm, sued Prodigy for libel, in a case that seemed remarkably similar to Cubby v. CompuServe; this time, though, Prodigy lost. From the opinion:

The key distinction between CompuServe and Prodigy is two fold. First, Prodigy held itself out to the public and its members as controlling the content of its computer bulletin boards. Second, Prodigy implemented this control through its automatic software screening program, and the Guidelines which Board Leaders are required to enforce. By actively utilizing technology and manpower to delete notes from its computer bulletin boards on the basis of offensiveness and “bad taste”, for example, Prodigy is clearly making decisions as to content, and such decisions constitute editorial control…Based on the foregoing, this Court is compelled to conclude that for the purposes of Plaintiffs’ claims in this action, Prodigy is a publisher rather than a distributor.

In other words, the act of moderating any of the user-generated content on its forums made Prodigy liable for all of the user-generated content on its forums — in this case to the tune of $200 million. This left services that hosted user-generated content with only one option: zero moderation. That was the only way to be classified as a distributor with the associated shield from liability, and not as a publisher.

The point of Section 230, then, was to make moderation legally viable; this came via the “Good Samaritan” provision. From the statute:

(c) Protection for “Good Samaritan” blocking and screening of offensive material

(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

In short, Section 230 doesn’t shield platforms from the responsibility to moderate; it in fact makes moderation possible in the first place. Nor does Section 230 require neutrality: the entire reason it exists was because true neutrality — that is, zero moderation beyond what is illegal — was undesirable to Congress.

Keep in mind that Congress is extremely limited in what it can make illegal because of the First Amendment. Indeed, the vast majority of the Communications Decency Act was ruled unconstitutional a year after it was passed in a unanimous Supreme Court decision. This is how we have arrived at the uneasy space that Cloudflare and others occupy: it is the will of the democratically elected Congress that companies moderate content above-and-beyond what is illegal, but Congress can not tell them exactly what content should be moderated.

The one tool that Congress does have is changing Section 230; for example, 2018’s SESTA/FOSTA act made platforms liable for any activity related to sex trafficking. In response, platforms removed all content remotely connected to sex work of any kind — Cloudflare, for example, dropped support for the Switter social media network for sex workers — in a way that likely caused more harm than good. This is the problem with using liability to police content: it is always in the interest of service providers to censor too much, because the downside of censoring too little is massive.

The Stack

If the question of what content should be moderated or banned is one left to the service providers themselves, it is worth considering exactly what service providers we are talking about.

At the top of the stack are the service providers that people publish to directly; this includes Facebook, YouTube, Reddit, 8chan and other social networks. These platforms have absolute discretion in their moderation policies, and rightly so. First, because of Section 230, they can moderate anything they want. Second, none of these platforms have a monopoly on online expression; someone who is banned from Facebook can publish on Twitter, or set up their own website. Third, these platforms, particularly those with algorithmic timelines or recommendation engines, have an obligation to moderate more aggressively because they are not simply distributors but also amplifiers.

Internet service providers (ISPs), on the other hand, have very different obligations. While ISPs are no longer covered under Title II of the Communications Act, which barred them from discriminating among data on the basis of content, it is the expectation of consumers and generally the policy of ISPs not to block any data because of its content (although ISPs have agreed to block child pornography websites in the past).

It makes sense to think about these positions of the stack very differently: the top of the stack is about broadcasting — reaching as many people as possible — and while you may have the right to say anything you want, there is no right to be heard. Internet service providers, though, are about access — having the opportunity to speak or hear in the first place. In other words, the further down the stack, the more legality should be the sole criteria for moderation; the further up, the more discretion and even responsibility there should be for content:

[Chart: The position in the stack matters for moderation]

Note the implications for Facebook and YouTube in particular: their moderation decisions should not be viewed in the context of free speech, but rather as discretionary decisions made by managers seeking to attract the broadest customer base; the appropriate regulatory response, if one is appropriate, should be to push for more competition so that those dissatisfied with Facebook or Google’s moderation policies can go elsewhere.

Cloudflare’s Decision

What made Cloudflare’s decision more challenging was threefold.

First, while Cloudflare is not an ISP, they are much more akin to infrastructure than they are to user-facing platforms. In the case of 8chan, Cloudflare provided a service that shielded the site from Distributed Denial-of-Service (DDoS) attacks; without a service like Cloudflare, 8chan would almost assuredly be taken offline by Internet vigilantes using botnets to launch such an attack. In other words, the question wasn’t whether or not 8chan was going to be promoted or have easy access to large social networks, but whether it would even exist at all.

To be perfectly clear, I would prefer that 8chan did not exist. At the same time, many of those arguing that 8chan should be erased from the Internet were insisting not too long ago that the U.S. needed to apply Title II regulation (i.e. net neutrality) to infrastructure companies to ensure they were not discriminating based on content. While Title II would not have applied to Cloudflare, it is worth keeping in mind that at some point or another nearly everyone reading this article has expressed concern about infrastructure companies making content decisions.

And rightly so! The difference between an infrastructure company and a customer-facing platform like Facebook is that the former is not accountable to end users in any way. Cloudflare CEO Matthew Prince made this point in an interview with Stratechery:

We get labeled as being free speech absolutists, but I think that has absolutely nothing to do with this case. There is a different area of the law that matters: in the U.S. it is the idea of due process, the Aristotelian idea is that of the rule of law. Those principles are set down in order to give governments legitimacy: transparency, consistency, accountability…if you go to Germany and say “The First Amendment” everyone rolls their eyes, but if you talk about the rule of law, everyone agrees with you…

It felt like people were acknowledging that the deeper you were in the stack the more problematic it was [to take down content], because you couldn’t be transparent, because you couldn’t be judged as to whether you’re consistent or not, because you weren’t fundamentally accountable. It became really difficult to make that determination.

Moreover, Cloudflare is an essential piece of the Facebook and YouTube competitive set: one cannot argue that people dissatisfied with Facebook or YouTube’s moderation can simply go elsewhere if “elsewhere” lacks the scale to functionally exist.

Second, the nature of the medium means that all Internet companies have to be concerned about the precedent their actions in one country will set in different countries with different laws. One country’s terrorist is another country’s freedom fighter; a third country’s government acting according to the will of the people is a fourth’s tyrannically oppressing the minority. In this case, to drop support for 8chan — a site that was legal — is to admit that the delivery of Cloudflare’s services is up for negotiation.

Third, it is likely that at some point 8chan will come back, thanks to the help of a less scrupulous service, just as the Daily Stormer did when Cloudflare kicked them off two years ago. What, ultimately, is the point? In fact, might there be harm, since tracking these sites may end up being more difficult the further underground they go?

This third point is a valid concern, but one I, after long deliberation, ultimately reject. First, convenience matters. The truly committed may find 8chan when and if it pops up again, but there is real value in requiring that level of commitment in the first place, given said commitment is likely nurtured on 8chan itself. Second, I ultimately reject the idea that publishing on the Internet is a fundamental right. Stand on the street corner all you like; at least your terrible ideas will be limited by the physical world. The Internet, though, with its inherent ability to broadcast and congregate globally, is a fundamentally more dangerous medium. Third, that medium is by and large facilitated by third parties who have rights of their own. Running a website on a cloud service provider means piggy-backing off of your ISP, backbone providers, server providers, etc., and, if you are controversial, services like Cloudflare to protect you. It is magnanimous in a way for Cloudflare to commit to serving everyone, but at the end of the day Cloudflare does have a choice.

To that end I find Cloudflare’s rationale for acting compelling. Prince told me:

If this were a normal circumstance we would say “Yes, it’s really horrendous content, but we’re not in a position to decide what content is bad or not.” But in this case, we saw repeated consistent harm where you had three mass shootings that were directly inspired by and gave credit to this platform. You saw the platform not act on any of that and in fact promote it internally. So then what is the obligation that we have? While we think it’s really important that we are not the ones being the arbiter of what is good or bad, if at the end of the day content platforms aren’t taking any responsibility, or in some cases actively thwarting it, and we see that there is real harm that those platforms are doing, then maybe that is the time that we cut people off.

User-facing platforms are the ones that should make these calls, not infrastructure providers. But if they won’t, someone needs to. So Cloudflare did.

Defining Gray

I promised, with this title, a framework for moderation, and frankly, I under-delivered. What everyone wants is a clear line about what should or should not be moderated, who should or should not be banned. The truth, though, is that those bright lines do not exist, particularly in the United States.

What is possible, though, is to define the boundaries of the gray areas. In the case of user-facing platforms, their discretion is vast, and their responsibility — covering not simply moderation but also promotion — is significantly greater. A heavier hand is justified, as is external pressure on decision-makers; the most important regulatory response is to ensure there is competition.

Infrastructure companies, meanwhile, should primarily default to legality, but also, as Cloudflare did, recognize that they are the backstop to user-facing platforms that refuse to do their job.

Governments, meanwhile, beyond encouraging competition, should avoid using liability as a lever, and instead stick to clearly defining what is legal and what isn’t. I think it is legitimate for Germany, for example, to ban pro-Nazi websites, or the European Union to enforce the “Right to be Forgotten” within E.U. borders; like most Americans, I lean towards more free speech, not less, but governments, particularly democratically elected ones, get to make the laws.

What is much more problematic are initiatives like the European Copyright Directive, which makes platforms liable for copyright infringement. This inevitably leads to massive overreach and clumsy filtering, and favors large platforms that can pay for both filters and lawyers over smaller ones that cannot.

None of this is easy. I am firmly in the camp that argues that the Internet is something fundamentally different than what came before, making analog examples less relevant than they seem. The risks and opportunities of the Internet are both different and greater than anything we have experienced previously, and perhaps the biggest mistake we can make is being too sure about what is the right thing to do. Gray is uncomfortable, but it may be the best place to be.

  1. For the rest of this section I am re-using text I wrote in this 2018 Daily Update; I am not putting the re-used text in blockquotes as I normally would for the sake of readability

Another major radio station conglomerate thinks podcasts are the future

By News

Another major radio station conglomerate thinks podcasts are the future originally published on The Verge


Photo by Amelia Holowaty Krales / The Verge
Entercom, one of the biggest US radio corporations, thinks podcasts are essential to the future of audio. The company announced today that it’s acquired two big names in podcasting: Pineapple Street Media, a content network, and Cadence13, an ad distribution platform and production company. The Wall Street Journal reports that the Pineapple deal is worth $18 million, and the Cadence13 deal cost Entercom nearly $50 million.

Under the acquisition agreement, Pineapple Street will change its name to Pineapple Street Studios, a division of Entercom’s Radio.com (a website and app), and will focus on creating shows and working with partners like Netflix and HBO. Cadence13 will continue to operate as is and work with its clients, like Crooked Media and Malcolm Gladwell’s Pushkin Industries. Entercom says it’s now considering experimenting with exclusive or windowed content that’ll first premiere on Radio.com but later become available more widely, a trend we’ve already seen play out across the industry with the launch of podcast startups like Luminary as well as Spotify’s focus on exclusive shows. In Spotify’s case, at least, it has millions of users already accessing the app. It’s unclear how popular Radio.com is and whether anyone is loyal enough to podcasts that they’ll come to the website or download the app to listen.

Entercom owns more than 235 radio stations across the US that reach 170 million listeners each month. This deal clearly sets it up to compete with iHeartMedia, one of the biggest names in radio. Last year, iHeart acquired Stuff Media, the podcast network behind HowStuffWorks, for $55 million. Since then, iHeart has launched multiple podcasts, including the Ron Burgundy Podcast with Will Ferrell. The company leverages its 858 radio stations to expand its podcasts’ reach. The team played the Ron Burgundy Podcast across its radio stations, for example, bridging the connection between the conventional idea of a podcast, which typically lives as an RSS feed, and traditional terrestrial radio. The message is clear: the future of audio involves podcasts, whatever that word eventually comes to mean.

Skype, Slack, other Electron-based apps can be easily backdoored

By News

Skype, Slack, other Electron-based apps can be easily backdoored originally published on Ars Technica

No need to knock, Electron left the code unlocked.

Getty Images

LAS VEGAS—The Electron development platform is a key part of many applications, thanks to its cross-platform capabilities. Based on JavaScript and Node.js, Electron has been used to create client applications for Internet communications tools (including Skype, WhatsApp, and Slack) and even Microsoft’s Visual Studio Code development tool. But Electron can also pose a significant security risk because of how easily Electron-based applications can be modified without triggering warnings.

At the BSides LV security conference on Tuesday, Pavel Tsakalidis demonstrated a tool he created called BEEMKA, a Python-based tool that allows someone to unpack Electron ASAR archive files and inject new code into Electron’s JavaScript libraries and built-in Chrome browser extensions. The vulnerability is not part of the applications themselves but of the underlying Electron framework—and that vulnerability allows malicious activities to be hidden within processes that appear to be benign. Tsakalidis said that he had contacted Electron about the vulnerability but that he had gotten no response—and the vulnerability remains.

While making these changes requires administrator access on Linux and macOS, it requires only local access on Windows. Those modifications can create new event-based “features” that can access the file system, activate a webcam, and exfiltrate information from systems using the functionality of trusted applications—including user credentials and sensitive data. In his demonstration, Tsakalidis showed a backdoored version of Microsoft Visual Studio Code that sent the contents of every opened code tab to a remote website.

A demonstration of a BEEMKA-backdoored version of the BitWarden application.

It’s not a bug, it’s a feature

The problem lies in the fact that Electron ASAR files themselves are not encrypted or signed, allowing them to be modified without changing the signature of the affected applications. A request from developers to be able to encrypt ASAR files was closed by the Electron team without action.
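The core of the problem can be sketched in a few lines of Python. This is a hypothetical simulation, not the ASAR format or any real signing scheme: it models an integrity check that hashes only the signed executable, which is exactly why a payload appended to an unsigned resource archive goes unnoticed.

```python
import hashlib
import os
import tempfile

def file_sha256(path):
    """Hash a file the way a naive code-signing check might."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Stand-in "installed app": a signed binary plus an unsigned resource archive.
appdir = tempfile.mkdtemp()
binary = os.path.join(appdir, "app.exe")
archive = os.path.join(appdir, "app.asar")
with open(binary, "wb") as f:
    f.write(b"MZ fake executable bytes")
with open(archive, "wb") as f:
    f.write(b'{"main":"index.js"} index.js contents')

# The "signature" covers only the executable, not the resource archive.
signature = file_sha256(binary)
resource_hash_before = file_sha256(archive)

# An attacker with local access appends a payload to the archive.
with open(archive, "ab") as f:
    f.write(b';require("child_process").exec("payload")')

# The resource changed, but the signature check still passes untouched.
assert file_sha256(archive) != resource_hash_before
assert file_sha256(binary) == signature
```

A check that also covered the resource archive would catch the tampering immediately — which is precisely the encryption/signing request the Electron team declined.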

Code inserted into the ASAR can run either within the application’s context or within the context of the Electron framework itself. Application code is “plain old JavaScript,” Tsakalidis explained, capable of calling Electron’s operating-system-specific modules—including microphone and camera controls, as well as operating system interfaces. Code injected into Electron’s internal Chrome extensions can allow attackers to bypass certificate checks, so that, while code may still force communications over HTTPS, an attacker can use a self-signed certificate on a remote system for exfiltration. And Web communications can be altered or completely blocked—including applications’ update features, which would prevent the automatic installation of new versions that would displace the backdoored application.

Tsakalidis said that in order to make modifications to Electron apps, local access is needed, so remote attacks to modify Electron apps aren’t (currently) a threat. But attackers could backdoor applications and then redistribute them, and the modified applications would be unlikely to trigger warnings—since their digital signature is not modified.

13-Year-Old Encryption Bugs Still Haunt Apps and IoT

By News

13-Year-Old Encryption Bugs Still Haunt Apps and IoT originally published on Wired

Hackers try to find novel ways to circumvent or undermine data encryption schemes all the time. But at the Black Hat security conference in Las Vegas on Wednesday, Purdue University researcher Sze Yiu Chau has a warning for the security community about a different threat to encryption: vulnerabilities that were discovered more than a decade ago still very much persist today.

The issues relate to RSA, the ubiquitous encryption algorithm and cryptosystem that helps protect everything from web browsers and VPNs to email and messaging applications. The problem isn’t in the RSA specification itself, but in how some companies implement it.

Chau’s research focuses on flaws in how RSA cryptography can be set up to handle signature validation—checks to ensure that a “signed” chunk of encrypted data actually came from the claimed sender, and that the signature hasn’t been tampered with or manipulated along the way. Without strong signature validation, a third party could manipulate data, or send fake data that appears to come from a trusted source. Prolific Swiss cryptographer Daniel Bleichenbacher, who currently works at Google, first demonstrated these RSA signature validation weaknesses at the CRYPTO cryptography conference in 2006.

“It’s surprising to see this old problem haunt us in different libraries, different settings,” says Purdue’s Chau. “After 13 years people still don’t know that we have to avoid these problems—they are still persistent. So that’s why I wanted to present at Black Hat. Awareness is an important factor and we need to learn from each other’s mistakes.”

Since Bleichenbacher’s presentation, researchers have found RSA signature validation issues in major code bases, like the secure communication library OpenSSL in 2007 and Mozilla’s Firefox in 2014.

The RSA signature verification flaws don’t represent a flaw in the algorithm itself. They arise instead from insecure implementations that are too permissive about the signature characteristics they will accept, or allow opportunities to circumvent validity checks. This creates an opening to sneak forged signatures and associated malarkey past RSA’s checks. But regardless of where the vulnerabilities get introduced, they can have real-world consequences.
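To make the permissiveness concrete, here is a self-contained Python sketch of the classic low-exponent (e = 3) variant Bleichenbacher described in 2006 — not code from any of the surveyed implementations. The modulus below is an arbitrary stand-in, not a real key: a verifier that checks the PKCS#1 v1.5 padding prefix and DigestInfo but ignores trailing bytes can be fooled by a cube-root forgery computed without any private key.

```python
import hashlib

# 19-byte ASN.1 DigestInfo prefix for SHA-256 (PKCS#1 v1.5, RFC 8017)
SHA256_PREFIX = bytes.fromhex("3031300d060960864801650304020105000420")

MOD_BYTES = 256  # 2048-bit modulus
E = 3
# Stand-in public modulus: any odd 2048-bit value works for this demo,
# because the forged signature cubes to a value smaller than the modulus.
N = (1 << 2048) - 159

def lax_verify(sig, message):
    """Buggy verifier: checks padding prefix and DigestInfo but
    ignores trailing bytes -- the Bleichenbacher '06 flaw."""
    m = pow(sig, E, N).to_bytes(MOD_BYTES, "big")
    if m[0:2] != b"\x00\x01":
        return False
    i = 2
    while i < len(m) and m[i] == 0xFF:
        i += 1
    if i == 2 or i >= len(m) or m[i] != 0x00:
        return False
    digest_info = SHA256_PREFIX + hashlib.sha256(message).digest()
    return m[i + 1 : i + 1 + len(digest_info)] == digest_info
    # BUG: never checks that digest_info extends to the end of m

def strict_verify(sig, message):
    """Correct check: DigestInfo must occupy exactly the rest of the block."""
    m = pow(sig, E, N).to_bytes(MOD_BYTES, "big")
    digest_info = SHA256_PREFIX + hashlib.sha256(message).digest()
    padding = b"\xff" * (MOD_BYTES - 3 - len(digest_info))
    return m == b"\x00\x01" + padding + b"\x00" + digest_info

def icbrt(x):
    """Integer cube root, rounded down (Newton's method)."""
    r = 1 << ((x.bit_length() + 2) // 3)
    while True:
        nr = (2 * r + x // (r * r)) // 3
        if nr >= r:
            return r
        r = nr

def forge(message):
    """Forge a signature accepted by lax_verify -- no private key needed."""
    digest_info = SHA256_PREFIX + hashlib.sha256(message).digest()
    block = b"\x00\x01" + b"\xff" * 8 + b"\x00" + digest_info
    # Valid-looking prefix at the top; zeros below leave room for "garbage."
    target = int.from_bytes(block + b"\x00" * (MOD_BYTES - len(block)), "big")
    s = icbrt(target)
    if s ** 3 < target:
        s += 1  # rounding up only perturbs the low, ignored bits
    return s

sig = forge(b"transfer $1,000,000")
assert lax_verify(sig, b"transfer $1,000,000")       # forgery accepted
assert not strict_verify(sig, b"transfer $1,000,000")  # correct check rejects it
```

The fix is exactly what the strict verifier does: require the recovered block to match the expected encoding byte-for-byte, leaving no room for attacker-controlled trailing data.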

“They don’t necessarily understand how the crypto works underneath.”

Sze Yiu Chau, Purdue University

In just a brief survey, Chau found six RSA implementations with the signature verification flaws. Two of them, in the open source VPN infrastructure tools Openswan and strongSwan, could have been exploited to bypass authentication requirements for VPNs—potentially exposing data that a user expects to be shielded. And since Openswan and strongSwan are both publicly available tools that can be used by anyone, the flaws may have been perpetuated across a number of VPNs and other secure connection tools. Chau says that both Openswan and strongSwan were responsive about the issues and quickly fixed them in August and September 2018.

The signature verification issues can show up in other common and foundational web security protocol implementations, too, like the secure network protocol SSH, and the data security extensions for the internet’s phone book lookup protocol, known as DNSSEC.

Not all open source tools and code libraries that contain these weak implementations are responsive about issuing fixes, though. And many developers without a specific background in cryptography will incorporate pre-fab components into their projects without knowing to check for cryptographic implementation issues. Chau says that this is of particular concern in apps or small gadgets that are often rushed to market, like internet of things devices.

“There are developers in the IoT community using these products. For example, we found the issues in two open source TLS web encryption libraries,” Chau says, referring to the Transport Layer Security protocol that encrypts data to and from a website. “We don’t know what commercial products use them, but the numbers show that they have 20 or 30 downloads each week. For developers, particularly application developers, they just want to make things work. They don’t necessarily understand how the crypto works underneath.”

By continuing to find variants of these vulnerabilities and talk about them, Chau hopes developers can come close to stamping them out permanently. But a larger takeaway, he says, is thinking about how encryption standards and documentation are written to make it less likely that people can interpret them in ways that are ultimately insecure. Given that it’s been 13 years already for these RSA signature verification issues, it may be time for a more fundamental shift.
