Social media, society and technology 2021-November 2022

Written by // November 13, 2022 // Justice & Law, Media, Science & Technology // 1 Comment

13 November
The tech CEO spending millions to stop Elon Musk
Dan O’Dowd says Tesla’s ‘Full Self Driving’ software shouldn’t be on the road. He’ll keep running over test dummies until someone listens.
O’Dowd has run nationwide TV ads with the videos and even launched an unsuccessful campaign for the U.S. Senate as part of his one-man crusade to challenge what he sees as the cavalier development of dangerous technology. For O’Dowd and other skeptics, the program is a deadly experiment foisted on an unsuspecting public — a view underscored by a recently filed class-action lawsuit and a reported Department of Justice investigation into the tech.
Despite O’Dowd’s high-profile campaign, and the concern from some regulators and politicians, Tesla is charging ahead with what it claims is world-changing technology.

9 November
Meta laying off 11,000 as tech industry slashes jobs
Facebook’s parent company is facing severe threats to its business model, including competition for users and advertising dollars from TikTok
Zuckerberg said the company would refocus on such priorities as its advertising business and elevating content from viral creators over friends and family, a strategy that has made the short-form video app TikTok so popular.
How Twitter’s contentious new fact-checking project really works
In private chats, a cadre of volunteers is shaping a new approach to misinformation — one Elon Musk has endorsed as he scales back on professional debunkings
Birdwatch takes a Wikipedia-like approach to fact-checking that relies on a consensus of highly engaged amateurs rather than professional journalists or content moderators to decide which tweets deserve debunkings or could benefit from added context. Its supporters tout that bottom-up approach as an answer to the rising mistrust that has greeted fact-checking efforts by mainstream news outlets and social media sites.
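The consensus approach described above can be illustrated with a toy "bridging" rule: a note is surfaced only when raters who usually disagree with each other both find it helpful. (The real Birdwatch/Community Notes scoring is far more sophisticated, using matrix factorization over rating histories; the two-group labels and thresholds below are illustrative assumptions, not the actual algorithm.)

```python
def note_is_shown(ratings, min_per_group=2):
    """Toy bridging rule for surfacing a fact-check note.

    ratings: list of (rater_group, helpful) pairs, with group in {"A", "B"}
    representing raters who have historically disagreed with each other.
    The note is shown only if BOTH groups supply enough helpful ratings.
    """
    helpful_by_group = {"A": 0, "B": 0}
    for group, helpful in ratings:
        if helpful:
            helpful_by_group[group] += 1
    # Require helpful ratings from both sides of the divide, not just one.
    return all(n >= min_per_group for n in helpful_by_group.values())


# Helpful only to one "side": not shown.
print(note_is_shown([("A", True), ("A", True), ("B", False)]))  # False
# Helpful across the divide: shown.
print(note_is_shown([("A", True), ("A", True), ("B", True), ("B", True)]))  # True
```

The design intuition is that cross-group agreement is harder to game than raw vote counts, which is why the bottom-up approach can survive partisan brigading.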

31 October
Is Elon Musk going to kill Twitter?
By Max Fawcett
It’s abundantly clear, given both his statements about being a “free speech absolutist” and his own interventions on Twitter, that Musk isn’t going to crack down on the spread of misinformation or conspiracy theories. That those are poisoning the public square he now owns and contributed directly to the violence he says he abhors, whether it’s the January 6th coup attempt or any number of lesser incidents, doesn’t seem to register with him.
(National Observer| Opinion) … It should also attract the attention of government officials in countries where Twitter operates — including Canada. It’s time — long past time, actually — for them to regulate these social media platforms more aggressively and stop them from serving as amplifiers of misinformation, fear, and loathing. The spread of conspiracy theories around COVID-19 on these platforms, and the damage that’s done to public health and safety, should be all the evidence these governments need to act.
But if they need more, apparently Musk is determined to give it to them.

Heather Cox Richardson October 28, 2022
At about 2:30 am, police in San Francisco responding to a call discovered that an assailant had broken into the San Francisco home of House speaker Nancy Pelosi (D-CA) and attacked her husband, 82-year-old Paul Pelosi, with a hammer. … Those who knew the alleged attacker, 42-year-old David DePape, say his behavior has been concerning. … Matthew Gertz of Media Matters reviewed DePape’s blog and found it “a standard case of right-wing online radicalization. QAnon, Great Reset, Pizzagate, Gamergate are all there, along with M[en’s] R[ights] A[ctivist]/misogyny, hatred of Blacks/Jews/trans people/’groomers,’ and anti-vax conspiracy theories.” … Right-wing media channels immediately spun the home invasion and attack into Republican talking points, saying that “crime hits everybody” and that “this can happen anywhere, crime is random and that’s why it’s such a significant part of this election story.” Some tried to pin the attack on President Joe Biden, blaming him for not healing the country’s divisions. …
Late yesterday, Twitter’s board completed the $44 billion sale of the company to billionaire entrepreneur Elon Musk. Musk has promised to be an advocate for free speech and to reopen the platform to those previously banned for spreading racist content or disinformation—including former president Trump—but his actual purchase of the site might complicate that position.
In the technology magazine The Verge, editor Nilay Patel wrote that the problems with Twitter “are not engineering problems. They are political problems.” The site itself is valuable only because of its users, he points out, and trying to regulate how people behave is “historically a miserable experience.”
Patel notes that to attract advertising revenue, Musk will have to protect advertisers’ brands, which means banning “racism, sexism, transphobia, and all kinds of other speech that is totally legal in the United States but reveals people to be total a**holes.” And that content moderation, of course, will infuriate the right-wing cheerleaders who “are going to viciously turn on you, just like they turn on every other social network that realizes the same essential truth.” And that’s even before Twitter has to take on the speech laws of other countries.
… Today, racist and antisemitic content rose sharply as users appeared to be testing the limits of the platform under Musk. The Network Contagion Research Institute, which studies disinformation on social media, noted that posters on the anonymous website 4chan have been encouraging users to spread racist and derogatory slurs on Twitter.

27 October
Elon Musk reportedly fires top Twitter executives as he takes over company
The $44bn deal will give the world’s richest man control of an influential social media platform with more than 230m users
(The Guardian) After months of legal back-and-forth, Elon Musk has reportedly completed his $44bn takeover of Twitter, taking control of the company and firing several of the company’s top executives including CEO Parag Agrawal.
Shortly after taking the helm of Twitter, Musk reportedly ousted several senior figures, including chief executive Agrawal, Ned Segal, the chief financial officer, and Vijaya Gadde, the head of legal policy, trust and safety.

25 October
This small seaside community could be home to Canada’s first spaceport. But not everyone is on board
Canso, N.S., project was greenlit this summer, but vocal opposition, hefty price tag could stand in the way

14 October
Elon Musk is under federal investigations, Twitter says in court filing
(Reuters) – Elon Musk is being investigated by federal authorities over his conduct in his $44 billion takeover deal for Twitter Inc (TWTR.N), the social media company said in a court filing released on Thursday.
While the filing said he was under investigation, it did not specify the focus of the probes or which federal authorities were conducting them.
Musk says SpaceX cannot fund Ukraine’s vital Starlink internet indefinitely
(Reuters) – Elon Musk said on Friday his rocket company SpaceX cannot indefinitely fund its Starlink internet service in Ukraine, which has helped the country’s civilians and military stay online during the war with Russia.
Musk’s comment on Twitter came after a media report that SpaceX had asked the Pentagon to pay for the donations of Starlink. The billionaire has been in online fights with Ukrainian officials over a peace plan he put forward which Ukraine says is too generous to Russia.
Musk activated Starlink, SpaceX’s satellite broadband service, in Ukraine in late February after internet services were disrupted by Russia’s invasion. SpaceX has since supplied thousands of terminals.

4 October
Elon Musk vs. Twitter: all the news about one of the biggest, messiest tech deals ever
(The Verge) On Thursday, April 14th, Elon Musk announced an offer to buy Twitter for $54.20 a share. On April 25th, Twitter accepted the deal. By July 8th, Musk wanted out. Then, Twitter sued Musk. For a while, it appeared we were headed for Chancery Court in Delaware for a five-day trial in October that would determine who owns Twitter. Then, at the last moment, it appeared Musk might just buy Twitter and put an end to all this.

24 August
‘Pre-bunking’ shows promise in fight against misinformation
(AP) New findings from university researchers and Google reveal that one of the most promising responses to misinformation may also be one of the simplest.
In a paper published Wednesday in the journal Science Advances, the researchers detail how short online videos that teach basic critical thinking skills can make people better able to resist misinformation. …
Subjects who viewed the videos were found to be significantly better at distinguishing false claims from accurate information when tested by the researchers. The same positive results occurred when the experiment was replicated on YouTube, where nearly 1 million people viewed the videos.

15 August 2022
Lawyers could have electronic chips implanted in their BRAINS to enable them to scan through documents in a fraction of the time, report suggests
Brain implants could reduce the number of lawyers required to work on a case
Clients could pay for their services by unit of attention rather than by hour
A report claims neurotechnology in society could pose new ethical issues
Lawyers may have to consider that their defendants’ chips could be hacked
Electronic brain implants could allow lawyers to quickly scan years of background material and cut costs in the future, a new report claims.
The report from The Law Society sets out the way the profession could change for employees and clients as a result of advances in neurotechnology.

12 August
Disinformation is a high-stakes game threatening freedom
In the second of a series of interviews with the Queen Elizabeth II Academy Faculty, Jessica Cecil examines solutions to disinformation eroding trust in democratic leadership.
(Chatham House) Quite simply, if citizens are making decisions based on disinformation – that is false information deliberately spread to mislead them – there can be harmful real-world consequences. Democracy can be undermined, and disinformation can cost lives. Across the world, people have been making decisions about their health based on false information – decisions on whether to have a vaccine, decisions to seek out fake cures. We do not yet know how many hundreds of thousands of deaths have been caused in this pandemic because of disinformation.
Hate speech and false claims against the Rohingya people in Myanmar were spread on Facebook in 2017 and were the backdrop to the communal violence in which thousands of Rohingya people were killed. And we have an increasing issue around climate change disinformation. Put simply, it is impossible to seek agreement in societies in which there are diverging views if you cannot even agree on the facts.

6 August
Alex Jones’ $49.3M verdict and the future of misinformation
(AP) Alex Jones is facing a hefty price tag for his lies about the Sandy Hook Elementary School massacre — $49.3 million in damages, and counting, for claiming the nation’s deadliest school shooting was a hoax — a punishing salvo in a fledgling war on harmful misinformation.
But what does this week’s verdict, the first of three Sandy Hook-related cases against Jones to be decided, mean for the larger misinformation ecosystem, a social media-fueled world of election denial, COVID-19 skepticism and other dubious claims that the Infowars conspiracy theorist helped build?
U.S. courts have long held that defamatory statements — falsehoods damaging the reputation of a person or a business — aren’t protected as free speech, but lies about other subjects, like science, history or the government, are. For example, saying COVID-19 isn’t real is not defamatory, but spreading lies about a doctor treating coronavirus patients is.
That distinction is why Jones, who attacked the parents of Sandy Hook victims and claimed the 2012 shooting was staged with actors to increase gun control, is being forced to pay up while Holocaust deniers, flat-earthers and vaccine skeptics are free to post their theories without much fear of a multimillion-dollar court judgment.

13 August
These Canadian startups are taking quantum computing mainstream
While quantum computers are still in their nascent stage, experts already point to them as having the potential to solve complex problems like climate change and cybersecurity. The technology is beginning to creep into business plans too, with Goldman Sachs using quantum computers to improve calculations in options financing and Volkswagen looking to use them to optimize its manufacturing.
For years, quantum computers have mostly been the focus of academics and government. Now, experts say we could be near a turning point where the technology is closer to commercialization. … The concepts used in quantum computing may seem mind-bending, and they’re very powerful. Ordinary computers encode information using zeros and ones, called binary digits — or bits for short. Quantum computers instead use quantum bits — or qubits — which, through superposition, can represent zero, one, or a blend of both at once. This allows them to perform calculations much faster on certain problems that are more complex.
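The idea of a qubit in superposition can be sketched in a few lines of ordinary code: a qubit is described by two amplitudes, and measuring it yields 0 or 1 with probabilities given by the squared amplitudes. This is a minimal classical simulation for intuition only, not how a real quantum computer operates.

```python
import math
import random

def make_qubit(theta):
    """Qubit in the superposition cos(theta)|0> + sin(theta)|1>.

    The two amplitudes always satisfy alpha^2 + beta^2 = 1.
    """
    return (math.cos(theta), math.sin(theta))

def measure(qubit, rng=random.random):
    """Collapse the qubit: 0 with probability alpha^2, else 1 (Born rule)."""
    alpha, _beta = qubit
    return 0 if rng() < alpha ** 2 else 1

# An equal superposition (theta = pi/4) measures 0 about half the time.
q = make_qubit(math.pi / 4)
samples = [measure(q) for _ in range(10_000)]
print(sum(samples) / len(samples))  # close to 0.5
```

The key point the article gestures at is that until measurement, the qubit carries both amplitudes at once, which is what quantum algorithms exploit to explore many possibilities in parallel.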

9 August
How will brain-monitoring technology influence the practice of law?

2 August
Robert Reich: How to stop rightwing media lies?
Sue the bastards
Defamation law may turn out to be America’s most important weapon against rightwing media lies.
On Friday, Infowars star Alex Jones’ parent media company, Free Speech Systems, filed for bankruptcy in the midst of a defamation damages trial underway in Austin, Texas.
To win a defamation lawsuit, a plaintiff must show four things: the defendant made a false statement purporting to be fact; the statement was published or communicated; the defendant failed to exercise reasonable care or, worse, knew the statement was incorrect and hurtful but made it anyway; and the plaintiff suffered harm as a result.
Defamation litigation is slow and expensive and, like all litigation, it enriches lawyers. It can also be abused. … But at a time when social media can’t be trusted to police itself against weaponized lies, and when much of the public doesn’t trust government to regulate social media, defamation lawsuits may be the best we can hope for.

12 July
Rogers outage prompts Industry Minister to demand major telecoms co-operate on network reliability
Industry Minister François-Philippe Champagne is directing Canada’s telecoms to enter into a formal agreement aimed at enhancing network reliability after a widespread outage shut down Rogers Communications Inc.’s wireless and internet services across the country on Friday.
“Certainly the [telecoms] were willing to offer their assistance over the weekend to Rogers, but we want to have a much more formal process in place to make sure that whatever the nature of a possible future failure would be, that we would be better prepared,” he said.
“Let’s be clear – this was a failure by Rogers in their system,” he added, noting that he told the company’s CEO, Tony Staffieri, the outage was unacceptable.
Rogers faces anger, questions after hours-long outage
Canadian industry minister to meet with telecom leaders after hours-long Rogers service disruption affected millions

19 June
Peter Thiel helped build big tech. Now he wants to tear it all down.
Inside the billionaire investor’s journey from Facebook board member to an architect of the new American right
(WaPo) New reporting shows Thiel has set his sights on transforming American culture — and funding its culture wars — through what his associates refer to as “anti-woke” business ventures, including a right-wing film festival, a gay dating app for conservatives founded by a former Trump administration ally, and Strive Asset Management, a firm that will “pressure CEOs to steer clear of environmental, social and political causes,” such as oil companies “committing to reduce production to meet environmental goals,” said Vivek Ramaswamy, the firm’s co-founder.

23 May
Zuckerberg sued by DC attorney general over Cambridge Analytica data scandal
Karl Racine accuses Facebook co-founder of direct knowledge of policies that allowed firm to gather data of millions of Americans
Washington DC’s attorney general has sued Mark Zuckerberg, seeking to hold the Facebook co-founder personally responsible for his alleged role in allowing the political consultancy Cambridge Analytica to harvest the personal data of millions of Americans during the 2016 election cycle.
Heather Cox Richardson May 23, 2022
The filing recounts the story, which was important to the 2016 election. In November 2013, researcher Aleksandr Kogan designed an app on the Facebook platform that identified itself as a personality test. To use it, a consumer had to give permission for the app to collect some personal data: name, gender, birthdate, likes, and friends list. What they did not know, though, was that the app also accessed the data of those folks on the friends list.
About 290,000 users installed the app, but the app collected the data of about 87 million users, more than 70 million of whom were in the U.S. More than 340,000 were in Washington, D.C.
In 2014, Kogan sold the data the app had collected for about $800,000 to the political consulting firm Cambridge Analytica, which used the information to target ads to users to promote Republican candidates in the 2014 midterm elections. By December 22, 2015, Facebook knew that Kogan had sold the data; selling data violated its terms of service. It got rid of the app but simply requested that Kogan and Cambridge Analytica delete the information. Instead, Cambridge Analytica used it during the 2016 election, targeting political ads to help first Texas senator Ted Cruz, and then Trump.
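The filing's figures show how much leverage the friends-list permission gave the app. A quick back-of-envelope check (using only the 290,000 and 87 million numbers cited above) makes the amplification concrete:

```python
# Rough arithmetic on the filing's figures: each install of the
# "personality test" app exposed not just the installer's own data
# but that of everyone on their friends list.
installers = 290_000
affected_users = 87_000_000

reach_per_install = affected_users / installers
print(round(reach_per_install))  # 300: each install exposed roughly 300 profiles
```

In other words, fewer than 0.4% of the eventually affected users ever consented to anything; the other 99.6% were swept in through their friends' choices.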

25-28 April
Europe Is Making Social Media Better Without Curtailing Free Speech. The U.S. Should, Too
By Frances Haugen, former Facebook product manager who focused on combating misinformation and espionage.
(NYT) Elon Musk’s deal to take Twitter private, which has spurred questions about power, censorship and safety for the future of the platform, happened just days after the European Union reached a landmark agreement to make social media less toxic for users. The new E.U. standards, and the ethic of transparency on which they are based, will for the first time pull back the curtain on the algorithms that choose what we see and when we see it in our feeds.
In Europe’s case, the dryly named Digital Services Act is the most significant piece of social media legislation in history. It goes to the heart of what I’ve tried to do as a whistle-blower who worked inside Facebook: make social media far better without impinging on free speech.
Inside Twitter, Fears Musk Will Return Platform to Its Early Troubles
Content moderators warn that Elon Musk doesn’t appear to understand the issues that he and the company will face if he drops its guardrails around speech.
John Cassidy: How Congress Can Prevent Elon Musk from Turning Twitter Back Into an Unfettered Disinformation Machine
(New Yorker) Over the weekend, a story came out of Brussels that many may have missed. The twenty-seven member states of the European Union reached an agreement on a new law requiring big online platforms, including social-media companies, to police hate speech and disinformation more effectively. Under the E.U.’s Digital Services Act, European governments now have the power to ask Web platforms like Twitter, Facebook, and YouTube to remove any content that promotes terrorism, hate speech, child sexual abuse, or commercial scams. The platforms will also be obliged to prevent the “manipulation of services having an impact on democratic processes and public security.”
Four ways Elon Musk might change Twitter
(BBC) Twitter is usually awash with topics for discussion, but over the past couple of days one has stood out on the platform above others – what does the future hold for Twitter itself?
1. Loosen content rules – Mr Musk has previously described himself as a “free speech absolutist”, but his exact view of the concept is unclear.
2. No more adverts? – Although the entrepreneur might have cited concerns about advertisers’ influence over Twitter policy, the platform currently relies on adverts for about 90% of its income.
3. Making sure users are real – Mr Musk has spoken of “defeating the spam bots”, one pledge that is likely to be extremely popular with Twitter users.
4. Editing tweets after posting – Before his bid for Twitter, Mr Musk asked his followers if they wanted an edit button in a Twitter poll.
Five reasons you should care about Elon Musk buying Twitter
What’s he going to do about Trump?
(The Economist Highlights newsletter) Elon Musk struck a deal to buy Twitter, capping three weeks of drama during which he had revealed he had amassed a 9.2% stake and rejected a seat on the board and Twitter had tried to block a sale. In the end its big investors forced Twitter to the table when Mr Musk revealed a financing package for his offer. At around $44bn it will be one of the largest-ever leveraged buy-outs. The Twitterati went into meltdown at Mr Musk’s pledge to nurture free speech on the platform. But he seemed to have the support of Jack Dorsey, Twitter’s co-founder, who tweeted: “Taking it back from Wall Street is the correct first step.”
See Mitch Joel Comment

25 April
Will EU’s new law clean up online hate speech and disinformation?
(CS Monitor) The European Union reached an agreement on the Digital Services Act – legislation dedicated to policing hate speech, disinformation, and other harmful content online. The law’s backers say it will make big tech firms more accountable for content created by users.
… The law will also force tech companies to make it easier for users to flag problems, ban online ads aimed at kids, and empower regulators to punish noncompliance with billions in fines.

7 April
Kenya’s already fragile elections now face a dangerous new enemy: big tech platforms
Odanga Madung
Media complacency has allowed for a thriving disinformation industry that threatens Kenya’s democratic discourse
(The Guardian) Kenya’s recent history features hotly contested, sometimes violent elections in which candidates and their allies have used tribal politics to turn people against one another. Yet as this election approaches, one of the biggest dangers comes much further from home: US and Chinese tech platforms.
Essential discussions about the election are unfolding on platforms such as Twitter, Facebook, and TikTok. It’s on these platforms that crucial civic information – but also disinformation and hate speech – will be amplified. Meanwhile, Kenya receives just a fraction of the resources – if that – that platforms give to address similar issues in western elections. They have acquired a massive civic responsibility in our countries – one that they are having trouble accepting.

6 April
How the Russia-Ukraine conflict has put cryptocurrencies in the spotlight
(The Conversation) Our work examining the digital transformation of the accounting profession has led us to delve into the world of cryptocurrency to explore how it operates and how it is regulated. As the armed conflict between Ukraine and Russia rages on, countries’ interest in regulating cryptocurrency has never been so urgent.
The conflict between Ukraine and Russia is not just a war of bombs and bullets. It is also a digital war of which cryptocurrency is just one of many components. (see Ukraine’s Digital Ministry Is a Formidable War Machine)
Ukraine’s Ministry of Digital Transformation is getting lots of press for the ingenious way it is supporting the country’s resistance to the Russian invasion: a sophisticated use of social media to promote Ukrainian interests around the world, and hackathons where hackers are rewarded with US$100,000 for successfully attacking Russian systems.

SpaceX, USAID deliver 5,000 satellite internet terminals to Ukraine

17-18 March
Ukraine is a turning point for the Kremlin’s internet tactics
Russia is losing the information wars, leading Putin to rethink how the country disseminates agitprop.
(Fast Company) In the last decade, Russia has built up an effective strategy for controlling information about the Kremlin within the country’s own borders and for sowing dissent in democracies around the world. As Vladimir Putin’s devastation of Ukraine wears on, Russia’s credibility has been severely diminished, forcing the country to rethink how it disseminates propaganda—a shift that could make Russian disinformation harder to combat.
Russia has developed a multi-layered approach to interfere in global affairs in order to advance its own agenda. … While this disinformation apparatus has worked to influence foreign affairs, Russia is proving less effective at controlling the public narrative about itself, and Russians are increasingly dubious of the official story line that Russia is trying to “denazify” Ukraine.
… As Russia tries to control the narrative inside its own country and in the greater global landscape, experts say the country may return to Cold War tactics, when the Kremlin used figures and organizations that were sympathetic to its cause to distribute its messaging. “Western audiences will be less trusting of Russian sources directly,” says Emerson Brooking, a resident fellow at Digital Forensic Research Lab of the Atlantic Council, a think tank based in Washington D.C. “I think it will force more focus on clandestine manipulation and on the use of sock puppets and on networks that don’t attribute themselves as Russian propagandists.”
Could understanding the evolution of cyberattacks better prepare us for the future of warfare?
(CBC Radio Spark) Russia and Ukraine have been engaged in cyberwarfare for several years now, and the consequences have been much bigger than simple internet outages. Hospitals and airports have been paralyzed, large shipping companies have seen their operations shut down, and even pharmaceutical companies like Merck in the U.S. have been dramatically affected, said Andy Greenberg, a senior writer at Wired and the author of the book Sandworm: A New Era of Cyberwar and the Hunt for the Kremlin’s Most Dangerous Hackers.
Many of the attacks have come from a Russian group called Sandworm, which works for Russia’s military intelligence agency. Its malware has shut down Ukraine’s postal service, banks, hospitals, airports and even the fare payment system in Kyiv’s metro.
And it’s spreading, he said. “Before Ukraine was on everyone’s mind, we in the West treated Ukraine as this faraway place, kind of within Russia’s sphere of influence. Russia had been digitally and physically abusing Ukraine for decades, in some ways for centuries. And we kind of ignored that in the West and allowed Russia to cross all these red lines, thinking that we would not be affected,” Greenberg said.
“But on the internet, we’re connected in ways that are not intuitive, and it turns out we’re on the border of our adversary the same way that Ukraine was, we were connected to Ukraine, and to its enemies, in ways that we didn’t fully understand. And I think we suffered the consequences of that misunderstanding.”
Ukraine’s Digital Ministry Is a Formidable War Machine
A government department run by savvy tech “freaks” has become a surprise defense against Russia.
(Wired) The projects the ministry came up with have made it a linchpin of Ukraine’s fight against Russia—and of the country’s broad support among world leaders and tech CEOs. Within three days of the first missiles falling on Kyiv, Fedorov and his staff launched a public campaign to pressure US tech giants to cut off Russia, began accepting cryptocurrency donations to support Ukraine’s military, secured access to Elon Musk’s Starlink satellite internet service, and began recruiting a volunteer “IT Army” to hack Russian targets. More recent projects include a chatbot for citizens to submit images or videos of Russian troop movements. “We have restructured the Ministry of Digital Transformation into a clear military organization,” says Anton Melnyk, an adviser to the department.

24 February
Social media platforms on the defensive as Russian-based disinformation about Ukraine spreads
Kremlin-backed falsehoods are spreading across the world’s largest tech platforms and putting the companies’ content policies to the test.
(Politico) The world’s biggest social media companies are scrambling to combat a global barrage of Kremlin-backed falsehoods and digital tricks around the invasion of Ukraine — putting the tech giants back in the political crosshairs over the spread of online disinformation.
Russia-backed media reports falsely claiming that the Ukrainian government is conducting genocide of civilians ran unchecked and unchallenged on Twitter and on Facebook. Videos from the Russian government — including speeches from Vladimir Putin — on YouTube received dollars from Western advertisers. Unverified TikTok videos purporting to show real-time battles were in fact historical footage, including doctored conflict-zone images and sounds.
Social media companies are already under pressure from politicians in both the U.S. and Europe who argue that falsehoods ranging from Covid treatments to voting fraud — and misinformation, more generally — provide justification to curtail the industry’s liability protections, break the large tech companies up or otherwise rein them in by demanding more transparency about their operations.

3 February
Mark Zuckerberg’s Disaster Is Taking Silicon Valley With It
By Kevin T. Dugan
(New York) With a single earnings report and a disastrous conference call, Mark Zuckerberg wiped out $240 billion in value from his company. Meta’s was the largest one-day loss by a U.S. company ever, and the ripple effects were closer to tsunamis throughout Silicon Valley. The list of tech losers reeling from the Meta Platforms (formerly Facebook) reckoning is long and full of familiar names: Spotify was 16 percent lower; Twitter was down about 6 percent; and even companies that were relatively safe, such as Apple and Microsoft, saw hundreds of billions of dollars erased from their market value. Every percentage point here is a huge sum of money gone, at least for shareholders. Why did this happen? Who is responsible? Has the bell tolled for Big Tech?
… Many of Facebook’s problems are of Zuckerberg’s own making. … But there are other, structural reasons for Meta’s rout, and the weight of those changes has suddenly registered with the rest of the world.

5 January
Sean Silcoff interview on CBC: BlackBerry end of an era
(CBC) It’s the end of an era for BlackBerry holdouts. We dig into the rise and fall of the iconic smartphone as the devices stop working.
Goodbye, BlackBerry. You were the coolest toy on Parliament Hill
Susan Delacourt
Many people may have forgotten how life-changing the BlackBerry was, especially for those of us who were early adopters back at the beginning of this 21st century. The idea of staying in email contact while on the move — far more discreet and less disruptive than mobile phones — was revolutionary, especially for the political class on Parliament Hill.
BlackBerry’s classic smartphone stops working today
The company cuts off support for the once-beloved cellphone and status symbol, a casualty of the rise of the touch screen

2021

News Use Across Social Media Platforms in 2020
Facebook stands out as a regular source of news for about a third of Americans
(Pew Research Center) As social media companies struggle to deal with misleading information on their platforms about the election, the COVID-19 pandemic and more, a large portion of Americans continue to rely on these sites for news. About half of U.S. adults (53%) say they get news from social media “often” or “sometimes,” and this use is spread out across a number of different sites, according to a Pew Research Center survey conducted Aug. 31-Sept. 7, 2020.
There are in some cases drastic demographic differences between the people who turn to each social media site for news. For example, White adults make up a majority of the regular news users of Facebook and Reddit but fewer than half of those who turn to Instagram for news. Both Black and Hispanic adults make up about a quarter of Instagram’s regular news users (22% and 27%, respectively). People who regularly get news on Facebook are more likely to be women than men (63% vs. 35%), while two-thirds of Reddit’s regular news users are men.
The majority of regular news users of many sites – YouTube, Twitter, Instagram, Reddit and LinkedIn – are Democrats or lean Democratic. (12 January 2021)

Leaders in Paris call for protecting children online
Internet giants, including social media apps Instagram, Facebook, Twitter and Snapchat, joined several world leaders to issue a global call to better protect children online at a Paris summit on Thursday.
About 30 heads of state and government and U.S. Vice-President Kamala Harris were participating in the Paris Peace Forum that opened Thursday. The summit, organized both in person and online, brings together world leaders, CEOs, NGOs and others to discuss global issues such as climate, the COVID-19 pandemic and digital transition.
The call, initiated by France and the U.N. child protection agency UNICEF, acknowledges that “in the digital environment, children can come across harmful and violent content and manipulation of information. Just like adults, children have rights to privacy, which should be respected.”
Macron, Harris, EU Commission President Ursula von der Leyen and Canadian Prime Minister Justin Trudeau also participated in another roundtable on regulating the digital domain, along with Microsoft president Brad Smith. Harris announced that the U.S. is joining the Paris Call launched in 2018 to improve security and better regulate cyberspace.

28 October
What is the metaverse and how will it work?
By MATT O’BRIEN and KELVIN CHAN
(AP) Facebook CEO Mark Zuckerberg’s Thursday announcement that he’s changing his company’s name to Meta Platforms Inc., or Meta for short, might be the biggest thing to happen to the metaverse since science fiction writer Neal Stephenson coined the term for his 1992 novel “Snow Crash.”
But Zuckerberg and his team are hardly the only tech visionaries with ideas on how the metaverse, which will employ a mix of virtual reality and other technologies, should take shape. And some who’ve been thinking about it for a while have concerns about a new world tied to a social media giant that could get access to even more personal data and is accused of failing to stop the proliferation of dangerous misinformation and other online harms that exacerbate real-world problems.

26 October
Facebook Faces a Public Relations Crisis. What About a Legal One?
One of the most pressing questions is whether the Securities and Exchange Commission will significantly add to the company’s woes.
(NYT) In recent weeks, Facebook’s stock has fallen roughly 5 percent, shaving billions off its market value. Lawmakers have introduced bills that could weaken the company’s legal protections. Shareholders filed a resolution to dilute the power of its chief executive, Mark Zuckerberg.
All of that has been in response to the thousands of pages of internal research and testimony provided by Frances Haugen, a former Facebook product manager. She has said the documents show that the company chooses profits before the safety of users. Many of the documents, called the Facebook Papers, were shared with a consortium of news organizations that included The New York Times.
Gary Gensler, who took over the S.E.C. in April, has said the agency needs to step up enforcement when companies don’t adequately disclose information that could influence investors. In his first months in office, the agency appears to be broadening its scope to encompass how corporate decisions have broader social, environmental and labor impacts — the kinds of decisions that are a priority for some investors. It recently opened an investigation into claims that Activision Blizzard, the gaming company, failed to disclose sexual harassment accusations to investors.

5 October
Facebook whistleblower Frances Haugen tells lawmakers that meaningful reform is necessary ‘for our common good’
(WaPo) Her Senate committee testimony — based on her experience working for the company’s civic integrity division and thousands of documents she took with her before leaving in May — sought to highlight what she called a structure of incentivization, created by Facebook’s leadership and implemented throughout the company. By directing resources away from important safety programs and encouraging platform tweaks to fuel growth, these performance metrics dictated operations, Haugen said, a design that encouraged political divisions, mental health harms and even violence.
Analysis: Will this spell Facebook’s demise? Don’t count on it.
In the pantheon of Facebook scandals, the whistleblower affair shares space with the Cambridge Analytica privacy scandal, Russian election interference and the platform’s role in facilitating the Myanmar genocide. Through each of those, the company has emerged with its reputation battered but its business intact. Facebook reported record revenue last quarter of nearly $30 billion, and it boasts a staggering 3.5 billion users across its platforms, which include WhatsApp and Instagram.
The Most Important Answer From the Facebook Whistleblower
(Slate) “The dangers of engagement-based ranking are that Facebook knows that content that elicits an extreme reaction from you is more likely to get a click, a comment, a reshare.”
According to Haugen, the research indicates that content that elicits an extreme, often angry reaction from users is more likely to get clicks, and Facebook’s algorithms promote clicky content. This feeds into a cycle in which producers of such content are incentivized to put out ever more divisive posts in order to get that engagement and thus rank higher on news feeds. According to one report Haugen leaked, even an algorithm change in 2018 that the company claimed would promote more “friends and family” content actually exacerbated this dynamic.
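The dynamic Haugen describes can be illustrated with a toy sketch. This is not Facebook's actual algorithm; the scoring function, weights, and post data below are all invented for illustration. The point is only that any ranking that rewards raw engagement will tend to surface the post that provokes the strongest reactions, regardless of its quality:

```python
# Toy illustration of engagement-based ranking (hypothetical scoring,
# not Facebook's real formula): posts that provoke comments, reshares,
# and angry reactions outrank calmer posts.

def engagement_score(post):
    # Weights are invented for illustration; reshares weigh heavily
    # because they push content to new audiences.
    return (post["clicks"]
            + post["comments"] * 5
            + post["reshares"] * 10
            + post["angry_reactions"] * 5)

posts = [
    {"id": "measured-analysis", "clicks": 120, "comments": 4,
     "reshares": 2, "angry_reactions": 1},
    {"id": "outrage-bait", "clicks": 90, "comments": 40,
     "reshares": 30, "angry_reactions": 55},
]

ranked = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in ranked])  # the divisive post ranks first
```

A producer who wants to rank higher is thus pushed toward ever more provocative posts, which is the feedback loop Haugen's leaked research describes.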

4 October
Gone in Minutes, Out for Hours: Outage Shakes Facebook
When apps used by billions of people worldwide blinked out, lives were disrupted, businesses were cut off from customers — and some Facebook employees were locked out of their offices.
The outage lasted over five hours, before some apps slowly flickered back to life, though the company cautioned the services would take time to stabilize.
Facebook’s apps — which include Facebook, Instagram, WhatsApp, Messenger and Oculus — began displaying error messages around 11:40 a.m. Eastern time, users reported. Within minutes, Facebook had disappeared from the internet.

11 August
Facebook shuts accounts in anti-vaccine influencer campaign
Russia-based marketing firm sought to pay social media influencers to smear Covid vaccines
(The Guardian) Facebook has removed hundreds of accounts linked to a mysterious advertising agency operating from Russia that sought to pay social media influencers to smear Covid-19 vaccines made by Pfizer and AstraZeneca.
A network of 65 Facebook accounts and 243 Instagram accounts was traced back to Fazze, an advertising and marketing firm working on behalf of an unknown client.
The network used fake accounts to spread misleading claims that disparaged the safety of the Pfizer and AstraZeneca vaccines. One claimed AstraZeneca’s shot would turn a person into a chimpanzee. The accounts targeted audiences in India, Latin America and, to a lesser extent, the US, using several social media platforms including Facebook and Instagram.

30 July
NB: I tried to share this story on Facebook and was reprimanded:
“Your post goes against our Community Standards on dangerous individuals and organizations”
Matt Taibbi: Meet the Censored: Hitler
Can history itself violate community standards?
Since the beginning of the “content moderation” movement, a major problem has become apparent. Human beings simply create too much content on Twitter, Facebook, YouTube, and Instagram for other human beings to review. Machines have proven able to identify clearly inappropriate content like child pornography (though even there the algorithms occasionally stumbled, as in the case of Facebook’s removal of the famous “Napalm Girl” photo).
But asking computer programs to sort out the subtleties of different types of speech — differences between commentary and advocacy, criticism and incitement, reporting and participation — has proven a disaster. A theme running through nearly all of the “Meet the Censored” articles is this problem of algorithmic censorship systematically throwing out babies with bathwater.
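Why automated moderation "throws out babies with bathwater" can be shown with a deliberately simplified sketch. The phrase list and sample posts below are hypothetical, and real systems use machine-learned classifiers rather than keyword matching, but the failure mode is the same: the filter sees the flagged phrase in both advocacy and reporting, and cannot tell them apart:

```python
# Toy keyword filter (hypothetical) showing why naive automated
# moderation cannot distinguish reporting from advocacy: both posts
# contain the flagged phrase, so both get removed.

BLOCKED_PHRASES = ["storm the capitol"]

def should_remove(text):
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

advocacy = "Everyone should storm the Capitol tomorrow!"
reporting = "Historians analyze why the mob chose to storm the Capitol."

print(should_remove(advocacy))   # True: incitement, correctly flagged
print(should_remove(reporting))  # True: journalism, a false positive
```

Distinguishing the two posts requires exactly the contextual judgment (who is speaking, and to what end) that the article argues current automated systems lack.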

Foreign powers amplified QAnon content to sow discord that led to Jan. 6 Capitol riots, extremism expert says
Mia Bloom, co-author of “Pastels and Pedophiles: Inside the Mind of QAnon,” speaks with The World‘s host Marco Werman about the rise of QAnon, a US-based, conspiracy-fueled movement with international reach.
(PRI) MB: QAnon is one of these baseless conspiracy theories that started from the underbelly of the internet, and the basic premise of QAnon is rehashed and recycled old anti-Semitic tropes, conspiracy theories about the Catholic Church, and that the world is controlled by this global cabal of mostly Democrats, but also Hollywood elites that are trafficking in children. They are raping the children, and then they are drinking their blood. And for the longest time, it was a fringe movement. And then all of a sudden, in March 2020, we saw a 600% increase in the number of people joining these message boards, Facebook groups, Twitter. And so, there was a massive uptick. So now, instead of it being a fringe movement, what we have is as many as 30 million Americans believe that there is a blood-drinking cabal running things.
SP: QAnon is also of interest to us because it appears to have some international connections. The Soufan Group, which looks at extremism and global security, they put out a report that pointed to Russia and China as having weaponized QAnon, that the two governments use social media to sow discord among Americans. Did foreign powers have responsibility in creating QAnon or allowing it to grow?
MB … We know that Russian accounts, the internet research agency that was so involved in the 2016 election, they amplified that content. And now, Russia and China also have QAnon problems. And so, it’s almost one of these ironies that while Russia tried to amplify it, now they themselves have to deal with it.
…there are QAnon followers in 85 different countries. And what’s unique about QAnon is its ability to adapt to a new environment and take on a lot of local flavor. There is QAnon in China, there’s QAnon in Russia. When they get to a foreign area, they connect with local groups. So, for example, in France, they are allied with the Yellow Jackets movement. In the United Kingdom, they’re connected to Brexit. We even have QAnon trying to make in-roads into Israel. And there’s a number of Hebrew-language QAnon channels, which is very ironic and interesting because QAnon is really anti-Semitic.

28 July
Facebook’s ‘Disinformation Dozen’ Are Flourishing Across Social Media
(Newsweek) The dozen were named by the Center for Countering Digital Hate (CCDH) as those intentionally peddling the most viral false information about vaccines and COVID-19 online, in a study released in March 2021 and later cited by the Joe Biden administration.
The CCDH urged social media companies to shut down accounts linked to the 12 but, despite several being removed, a majority are still active.
The Big Three
Joseph Mercola has by far the largest reach.
The osteopathic physician and alternative medicine advocate has more than 1.7 million followers on a verified Facebook account, with a further 1 million on a Spanish-language account. His Twitter account has about 296,000 followers and an Instagram account, which is also verified, has about 330,000.
Ty and Charlene Bollinger, controversial alternative medicine activists, were also named among the 12. … However, they do not appear to have posted about vaccines and COVID since their original Instagram ban, instead focusing on cancer and alternative medicine.

15 July
Facebook, Twitter and other social media companies need to be treated like Big Tobacco
The surgeon general’s new advisory shows their product is in need of serious consumer protection regulations.
By Joan Donovan, research director of Harvard Kennedy School’s Shorenstein Center, and Jennifer Nilsen, research fellow at the Shorenstein Center
Thursday marks a turning point in internet history. For the first time, the U.S. surgeon general has declared the barrage of misinformation spreading on social media a public health hazard. In an advisory, Surgeon General Dr. Vivek Murthy calls on technology companies to “take responsibility for addressing the harms” their social media products impose on consumers by prioritizing the early detection of misinformation, providing researchers with meaningful access to data, and protecting public health professionals from harassment.

5 May
Oversight Board to Facebook: We’re Not Going to Do Your Dirty Work
The decision on Trump is the clearest indication yet that the board does not want to be Facebook’s flunky.
(Wired) On January 21, Facebook asked its Oversight Board to review its decision to indefinitely ban Donald Trump, and guide it on whether it should allow the former president to post again. You could see it as the ultimate buck-passing. For three years, Facebook has been setting up an elaborate structure for a supposedly independent body to review its content decisions. And now that the 20-member board has just begun to hear cases, Facebook has handed it perhaps the company’s most controversial decision ever. Would Donald Trump return to social media, attacking those who displeased him and insisting that he actually won the election? Facebook CEO Mark Zuckerberg told his shiny new board to make the call.
But the board did not play. While affirming that Facebook was correct to suspend the Trump account for its riot-coddling posts on January 6, today it called out the company for inventing a penalty that wasn’t part of its policies—an “indefinite” suspension. The board told Facebook to take six months and get its own rules straight, and then make the Trump restoration decision itself.

22 April
Online-ad firm Outbrain confidentially files for IPO following rival Taboola’s SPAC deal
Outbrain provides links to sponsored content that Web sites display via so-called “native advertising,” where the material looks like another news article or similar item.
These often appear in “chumboxes,” those sets of links that appear at the bottom of articles and other Web pages under headings like “Recommended For You.”
“This Is Going to Be a Global Moment”: All Eyes Are on Facebook as It Weighs Whether to Ban Donald Trump for Life
Everyone from Angela Merkel to Bernie Sanders has weighed in on what the tech giant ought to do. And whatever his oversight board decides, Mark Zuckerberg could find himself in a bind.
(Vanity Fair) What started as a 24-hour block on Trump’s account on January 6 became an indefinite suspension. Mark Zuckerberg justified this by stating that the risks of Trump continuing to use the platform were “simply too great” given the January 6 riot. Twitter was not so indecisive—its Trump ban appears to be sticking—while YouTube sided more closely with Facebook, with its CEO, Susan Wojcicki, stating that it may lift its suspension “when we determine that the risk of violence has decreased.”
Remember the chumbox providers? This is how they look now
You know that chumbox of weird garbage that appears at the bottom of most news sites, including this one? You know the one! It’s labeled “Promoted stories” or “Around the web.” It’s got headlines like: “1 Weird Trick to Lose Weight,” “You Won’t Believe What [STAR NAME HERE] Looks Like Today!,” and “Throw this vegetable out!” There are two major players in the field — Taboola and Outbrain — and the Justice Department has approved their merger.

Why are they called chumboxes? Well, chum is fishbait — you throw decomposing fish guts, blood, and bones into the water to lure other fish. A chumbox is like this but for humans online. These chumboxes exist because they’re more lucrative than other kinds of advertising: you add them to your site — that’s free! — and then make money off the unwary souls who want to know “Is CBD good for my pet?” The money is, evidently, good: besides Vox Media, Bloomberg, Business Insider, The Washington Post, CNN, and more feature these boxes at the bottoms of their stories. (July 2020)

31 March
A Dozen Misguided Influencers Spread Most of the Anti-Vaccination Content on Social Media
The Disinformation Dozen generates two-thirds of anti-vaccination content on Facebook and Twitter
(McGill Office for Science & Society) The Center for Countering Digital Hate (CCDH) has recently released a report entitled The Disinformation Dozen, and its main take-home message is that two-thirds of anti-vaccine content shared or posted on Facebook and Twitter between February 1 and March 16, 2021, can be attributed to just twelve individuals. Twelve. Let that sink in.
The modern anti-vaccination movement is led by a relatively small number of devoted and typically well-financed influencers who have accumulated a mighty following on social media platforms, where fear spreads more easily than facts and nuance. So who exactly is the Disinformation Dozen?

22 March
Reset: Reclaiming the Internet for Civil Society
We need to reclaim our lives from our phones and ‘reset,’ says CBC Massey lecturer Ron Deibert
(Massey Lectures 2020 Part 1) ‘Look at that device in your hand,’ says Ron Deibert in the first instalment of his 2020 CBC Massey Lectures. ‘You sleep with it, eat with it … depend on it.’ The renowned tech expert exposes deep systemic problems in our communication ecosystem and shares what we need to do about it.
“Information and communications technologies are, in theory, supposed to help us reason more effectively, facilitate productive dialogue and share ideas for a better future,” says renowned technology and security expert Ron Deibert. “They’re not supposed to contribute to our collective demise.” (originally aired on November 9, 2020)

20 February
Apps Recreate the Soundtrack of Pre-Pandemic Life
(Bloomberg City Lab) Ice cubes clink. A blender whirs. The hum of gossip carries. People shout to be heard over the din.
Can you hear it? Do you miss it? You’re not alone. There’s a whole genre of auditory environments like this that have all but disappeared over the past year: other people making little noises around you. In bars, coffee shops, and even open offices. Ears yearning, people in lonely apartments all over the world have tuned into new sites that turn that low background hum of life-in-public into a soundtrack.
The internet has long churned out “coffee shop” playlists, which channel the lo-fi instrumentals or soft folk you might hear at a Starbucks. These new mixes go further to include sounds you may not have appreciated but were always there, curated not via algorithm but by internet Foley artists. There’s Spotify’s “The Sound of Colleagues,” where remote workers can crank the volume to return to the dulcet, focusing tones of “printer,” “coffee machine,” and “keyboards.” Kids Creative Agency is behind I Miss The Office, where telephones ring and coworkers sneeze and “mhm.”

17 February
Facebook restricts the sharing of news in Australia as Google says it will pay some publishers.
(NYT) Facebook said on Wednesday that it would restrict people and publishers from sharing links to news articles in Australia, in response to a proposed law in the country that requires tech companies to pay publishers for linking to articles across their platforms.
The decision came hours after Google announced it had reached an agreement to pay Rupert Murdoch’s News Corp to publish its news content in a three-year global deal, part of a string of deals it had struck with media companies in recent days to ensure that news would remain on its services.

6 February
Lawsuits Take the Lead in Fight Against Disinformation
Defamation cases have made waves across an uneasy right-wing media landscape, from Fox to Newsmax.
Lou Dobbs, whose show on Fox Business was canceled on Friday, was one of several Fox anchors named in a defamation suit filed by the election technology company Smartmatic.

(NYT) In just a few weeks, lawsuits and legal threats from a pair of obscure election technology companies have achieved what years of advertising boycotts, public pressure campaigns and liberal outrage could not: curbing the flow of misinformation in right-wing media.
Dominion Voting Systems, another company that Mr. Trump has accused of rigging votes, filed defamation suits last month against two of the former president’s lawyers, Rudolph W. Giuliani and Sidney Powell, on similar grounds. Both firms have signaled that more lawsuits may be imminent.

25 January
A double-edged sword
How social media went from toppling dictators to platforming hate.
(Open Canada) Ever since the Arab Spring revealed the fragility of certain Middle Eastern dictatorships and highlighted how quickly online discontent can transform into national resistance, authoritarian regimes have used social media to help predict dissent and gauge public sentiment. Governments can now actively monitor protest plans, identify key figures and persecute people who support popular protests (as is currently the case in Belarus). Social media platforms also provide governments with new methods of communicating with their population, which they can use to counter dissenting opinions or to spread propaganda and disinformation that creates confusion and muddies the waters of legitimate news sources.
Countries that have effectively used social media to monitor and control public opinion include China, Russia and Saudi Arabia. China encourages limited expression online in order to better understand weaknesses within its own government. This gives the Chinese government a better understanding of the dynamics of public discontent, while also allowing it to present the façade of benevolence and democratic oversight. Saudi Arabia passed counterterrorism legislation in 2014 that criminalized defamation of the state — a purposely vague cybercrime law that arbitrarily limits free speech and allows the government to arrest online bloggers and activists with little explanation. Saudi Arabia, along with regional neighbours like the United Arab Emirates, also uses automated bots and pro-government social media influencers to promote state propaganda and to drown out dissenting voices. Bahrain, an island neighbour of Saudi Arabia, has arrested several prominent opposition figures who criticized the Bahraini government online.

12 – 15 January
White supremacist terrorism: Key trends to watch in 2021
(Brookings) …the movement as a whole is heavily dependent on social media. Part of this is a generational shift, as youth around the world embrace Facebook, YouTube, Instagram, and other media. But social media is also cheap and easily accessible, making it ideal for propaganda and networking. This technological shift, however, has made the movement more diffuse, weakening what little hierarchies existed while connecting previously isolated individuals. Fortunately, social media and financial services companies are more willing to deplatform white supremacists, but many experts contend more could be done.

The Guardian view of Trump’s populism: weaponised and silenced by social media
(Editorial) Donald Trump’s incitement of a mob attack on the US Capitol was a watershed moment for free speech and the internet. Bans against both the US president and his prominent supporters have spread across social media as well as email and e-commerce services. Parler, a social network popular with neo-Nazis, was ditched from mobile phone app stores and then forced offline entirely. These events suggest that the most momentous year of modern democracy was not 1989 – when the Berlin wall fell – but 1991, when web servers first became publicly available.
There are two related issues at stake here: the chilling power afforded to huge US corporations to limit free speech; and the vast sums they make from algorithmically privileging and amplifying deliberate disinformation. The doctrines, regulations and laws that govern the web were constructed to foster growth in an immature sector. But the industry has grown into a monster – one which threatens democracy by commercialising the swift spread of controversy and lies for political advantage.

The Importance, and Incoherence, of Twitter’s Trump Ban
By Andrew Marantz
(The New Yorker) “I doubt I would be here if it weren’t for social media, to be honest with you,” Donald Trump said in 2017. He may have been wrong; after all, he uttered those words on Fox Business, a TV network that will surely continue to have him on as a guest long after he leaves the White House, and even if he loses every one of his social-media accounts. Perhaps Trump could have become President without social media. There were plenty of other factors militating in his favor—a racist backlash to the first Black president, the abandonment of the working class by both parties, and on and on. Still: Trump wanted to be President in 1988, and in 2000, and he couldn’t get close. In 2012, just as social media was starting to eclipse traditional media, Trump was a big enough factor in the Republican race that Mitt Romney went to the Trump Hotel in Las Vegas to publicly accept his endorsement. Only in 2016, when the ascent of social media was all but complete, did Trump’s dream become a reality. Maybe this was just a coincidence. There is, tragically, no way to run the experiment in reverse.

Trump’s Been Unplugged. Now What?
The platforms have acted, raising hard questions about technology and democracy.
(The New Yorker) … The President’s tweeting was “highly likely to encourage and inspire people to replicate the criminal acts at the U.S. Capitol,” the company stated, in a blog post. It noted that plans for additional violence—including a “proposed secondary attack” on the Capitol and various state capitols—were already in circulation on the platform.
… Although Twitter has been an undeniable force throughout the Trump Presidency—a vehicle for policy announcements, personal fury, targeted harassment, and clumsy winks to an eager base—most Americans don’t use it. According to Pew Research, only around twenty per cent of American adults have accounts, and just ten per cent of Twitter users are responsible for eighty per cent of its content.
By Saturday, most major tech companies had announced some form of action in regard to Trump. The President’s accounts were suspended on the streaming platform Twitch, and on Snapchat, a photo-sharing app. Shopify, an e-commerce platform, terminated two online stores selling Trump merchandise, citing the President’s endorsement of last Wednesday’s violence as a violation of its terms of service. PayPal shut down an account that was fund-raising for participants of the Capitol riot. Google and Apple removed Parler, a Twitter alternative used by many right-wing extremists, from their respective app stores, making new sign-ups nearly impossible. Then Amazon Web Services—a cloud-infrastructure system that provides essential scaffolding for companies and organizations such as Netflix, Slack, NASA, and the C.I.A.—suspended Parler’s account, rendering the service inoperable.

In the United States, online speech is governed by Section 230 of the Communications Decency Act, a piece of legislation passed in 1996 that grants Internet companies immunity from liability for user-generated content. Most public argument about moderation elides the fact that Section 230 was intended to encourage tech companies to cull and restrict content.

Social media companies need better emergency protocols
Daniel L. Byman and Aditi Joshi
How can and should social media companies treat politicians and governments fomenting hate online?
(Brookings) Online vitriol, especially in the hands of widely-followed, influential, and well-resourced politicians and governments, can have serious — and even deadly — consequences. On January 6, 2021, President Trump tweeted false claims of election fraud and seemingly justified the use of violence as his supporters stormed the U.S. Capitol. Although an in-person speech appeared to most directly trigger the violence, Trump’s social media presence played a large role in the mob’s actions. For weeks after losing the 2020 election, President Trump tweeted false claims of election fraud and encouraged supporters to descend on Washington, D.C. on January 6, refuse to “take it anymore,” and “be strong.” On the day of the assault, a tweet that Vice President Mike Pence “didn’t have the courage to do what should have been done” was followed by messages from Trump’s supporters on the social networking platform Gab calling for those in the Capitol to find the vice president, as well as in-person chanting of “Where is Pence?” Leading up to and during the outbreak of violence, various social media platforms helped the mob assemble at the right place and time, coordinate their actions, and receive directions from the president and one another.
Although states’ exploitation of communications technology is not new, social media provides new dangers and risks. Given platforms’ reach, states can have a huge impact on their populations if they dominate the narrative on popular platforms like Facebook and Twitter. Additionally, social media facilitates “echo chambers,” where feeds are personalized based on user data and users’ pre-existing views are reinforced (possibly to the point of inciting action) rather than challenged. Lastly, most social media platforms have no gatekeepers and lack the editorial role of newspapers or television broadcasts, though they do usually have minimum community standards.
Although Facebook and other companies have devoted significant resources to the problem of bad content, technical tools and available human moderators often fall short of solving the problem. Humans are necessary to train and refine technological tools, handle appeals, and treat nuanced content requiring social, cultural, and political context to be understood.
… over-restriction can have equally devastating consequences. Repressive regimes often shut down the internet in the name of security while using the silence to harm dissenters or minority communities. Furthermore, limiting any content, especially government content, may be at odds with U.S.-based technology companies’ supposed principles. Many companies claim to be committed to free speech for all their users and do not see themselves as arbiters of appropriate or inappropriate content. Making these judgments places social media companies in a role they neither should be nor want to be in. Yet, with the power these platforms wield, social media companies must find ways to prepare for this role and prevent escalation of tensions in a crisis.

One Comment on "Social media, society and technology 2021-November 2022"

  1. Diana Thebaud Nicholson April 28, 2022 at 4:33 pm ·

    Mitch Joel on Facebook:
    “Unlike many of the other pundits who are blabbing away, I’m not sure I fully understand any of this. I’m not sure why, considering how busy Elon is with Tesla, SpaceX, Neuralink, OpenAI and The Boring Company, that he even has time or energy for fixing what’s broken with Twitter and social media. If this is just about another billionaire trying to control a media property, fine (we’ve seen that story before… and so it goes.. history keeps repeating itself). That could be it. Still, I have no idea. I don’t think the Elon money or a changing of the guards will do that much to get more people to use Twitter more frequently (or encourage bigger media companies to spend more). But… I could be wrong… I wish I could be more insightful (because, that’s what companies pay me for), but I feel like the general media (and that includes me) are really missing something.
    That “something” is the actual play that’s happening here… and there’s nothing wrong with saying “I don’t know.””
