Thursday, August 15, 2019

Today, in worshiping the algorithm

Vice:
TikTok Users Are Inventing Wild Theories to Explain Its Mysterious Algorithm

...

Probably half of the videos I see on TikTok include one of the following hashtags: #fyp, #foryou, or #foryoupage.

The hashtaggers’ theory is that if they use these tags in their captions, their posts are more likely to surface on more people’s For You pages. The For You page is TikTok’s recommendation feed, which is personalized to each user based on how that user interacts with videos on TikTok, according to the company.

There’s no proof that using these hashtags does anything, but the theory is self-reinforcing: because so many people use the tags, tagged videos inevitably show up on For You pages, which makes it look like the hashtags are what put them there.

...

Other users have theorized that TikTok has no intelligent algorithm and just surfaces content randomly to each user. One user says that even when her videos have a good “interaction per view” ratio—or, the ratio of likes, comments, and shares to total views—they still top out at very few views.
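The metric she’s describing is straightforward to compute. A minimal sketch, with hypothetical field names (TikTok doesn’t publish these numbers through any public API, so the inputs here are my own illustration):

```python
# Illustrative only: the "interaction per view" ratio described above.
# The field names are hypothetical, not from any TikTok API.
def interaction_per_view(likes: int, comments: int, shares: int, views: int) -> float:
    """Ratio of total interactions (likes + comments + shares) to views."""
    if views == 0:
        return 0.0
    return (likes + comments + shares) / views

# A video with 40 likes, 10 comments, and 5 shares across 200 views:
# (40 + 10 + 5) / 200 = 0.275, i.e. roughly 1 in 4 viewers interacted.
print(interaction_per_view(likes=40, comments=10, shares=5, views=200))  # 0.275
```

Her complaint, in these terms: a high ratio on an early batch of views doesn’t seem to earn the video a larger audience, which is what you’d expect if the recommendation system actually rewarded engagement.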


...

“If you scroll past these little TikToks with like five likes, they’re probably going to die soon,” she says, with ironic melodrama. “So in conclusion, it is your civic duty as a member of the TikTok community to like and comment on this video, because if you don’t do it, no one else will.”



NYT:
On Wednesday evening, a phalanx of Amazon employees known as “FC ambassadors” began tweeting again about how great it is to work at Amazon.

...

The accounts have provoked suspicion. In January, it appeared that the accounts had changed hands; one that had belonged to a “Leo” had changed its display name and handle to Ciera. A “Rick” had become a “James,” and a “Michelle” had transformed into a “Sarah.” (Critics of the accounts occasionally call them the “Borg,” a reference to an alien race in Star Trek that operates as a collective hive mind.)

Tweets from the ambassador accounts suggest that workers shift in and out of their social media roles. In May, for instance, an account that now uses the handle @AmazonFCBrianDJ tweeted a picture of a smiling man holding an Amazon package and announced that, after four months of tweeting, it would be his last day as an ambassador. About a week later, the account posted a picture of a different man who introduced himself as Brian D.J., an outbound picker at a fulfillment center in Jacksonville. The next month, an account using the name Mary Kate announced that she was returning to her role as a “picker and learning ambassador on the weekdays and modern dancer on the weekends.”


Splinter:
The future of the “gig economy” is on the table in California, where the legislature is considering a bill that would make it much harder to categorize workers as independent contractors, rather than actual employees. Needless to say, companies are going all out to oppose it. Postmates even got 1,600 of its own workers to speak out against it.

...

I find it interesting that so many gig workers—people who work for Postmates, which is not the world’s easiest or most lucrative job—would enthusiastically volunteer to lend their names to a public request to keep themselves classified as second-class workers, ensuring they will never get things like a union or real company health insurance. Odd passion to have! I asked Postmates how they found all these names. A spokesperson responded:

...



LAT:
Purdue Pharma sought to divert online readers from critical L.A. Times series on opioid crisis, records show

...

Internal documents from 2016 show company officials discussed diverting online traffic away from a series of stories published by the Los Angeles Times that detailed the company’s marketing of OxyContin and its links to the deadly opioid crisis.

“If Purdue doesn’t fill this vacuum, someone else will — and it won’t be Purdue’s narrative,” a member of the company’s digital support team wrote in a memo to Purdue officials, laying out a strategy to drive traffic to a friendly website, PurduePharmaFacts.com.


Recode:
A new study shows that leading AI models are 1.5 times more likely to flag tweets written by African Americans as “offensive” than tweets written by others.

...

This is in large part because what is considered offensive depends on social context. Terms that are slurs in some settings — like the “n-word” or “queer” — may not be in others. But algorithms — and the content moderators who label the training data that teaches these algorithms how to do their job — don’t usually know the context of the comments they’re reviewing.
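To make that failure mode concrete, here’s a deliberately naive sketch: a word-list filter that flags any comment containing a listed term. Real moderation models are statistical classifiers, not word lists, and the term list and function here are my own illustration, not any production system — but the toy shows the blind spot the excerpt describes:

```python
# Toy context-blind "offensiveness" filter. Real moderation models are
# statistical classifiers, not word lists, but if their human-labeled
# training data ignores who is speaking and in what setting, they
# inherit the same failure: in-group uses score the same as attacks.
FLAGGED_TERMS = {"queer"}  # stand-in list, for illustration only

def flag_offensive(text: str) -> bool:
    """Flag text if any listed term appears, regardless of context."""
    words = {word.strip(".,!?").lower() for word in text.split()}
    return not FLAGGED_TERMS.isdisjoint(words)

# Self-identification is flagged exactly like hostile use, because the
# filter sees only the word, never the speaker or the setting:
print(flag_offensive("Proud to be queer."))     # True
print(flag_offensive("Lovely weather today."))  # False
```

That’s the mechanism the excerpt points at: if annotators mark in-group uses as offensive because they can’t see the context, a model trained on those labels faithfully learns the mislabeling.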