January 12th, 2021

Trump Dumped by Social Media – No Problem

Picture via CNET

It took a number of years, but Twitter finally dumped Donald Trump from its platform. So too for Facebook. Too much hate. Too much violence. The insurrection at the Capitol was the final straw.

There have been many commentators saying that this is a problem. In my opinion, dumping Trump and anyone else who spews hatred and foments violence is most assuredly not a problem.

First, we will dispense with the First Amendment argument. There is none. These are private businesses and the First Amendment restricts what the government can do. The principles involved for me dumping a comment or commenter are no different than Big Tech. Big Tech and Small Tech have the same fundamental issue, albeit at different scale.

If you spew hate, or spam, or simply write crap I don’t like, then poof, you’re gone. My blog, my rules. If you don’t like it don’t come here. Same with Big Tech.

This deplatforming of a President, however, struck a nerve with folks, for no reason other than that he is the President, at least for a few more days.

Over at Bloomberg, Joe Nocera claims that this is a problem, in that a few people in charge of Big Tech have too much power:

Do you really want Jack Dorsey, Mark Zuckerberg, Tim Cook and Sundar Pichai deciding which speech is acceptable and which is not on their platforms — platforms that are now indistinguishable from the public space? In addition to the problem of having so much power concentrated in so few hands, they are simply not very good at it. Their rules are vague, change constantly and are ignored often if the user is prominent enough.

He comes around to a solution — destroying Section 230 protections:

I have come around to an idea that the right has been clamoring for — and which Trump tried unsuccessfully to get Congress to approve just weeks ago. Eliminate Section 230 of the Communications Decency Act of 1996. That is the provision that shields social media companies from legal liability for the content they publish — or, for that matter, block.

He then admits that this would merely result in problematic posters such as Trump being neutered anyway. After all, without 230 protections, the platform could conceivably be liable for the misconduct of posters. Nocera just thinks the neutering is a good thing:

In fact, once the social media companies have to assume legal liability — not just for libel, but for inciting violence and so on — they will quickly change their algorithms to block anything remotely problematic. People would still be able to discuss politics, but they wouldn’t be able to hurl anti-Semitic slurs. Presidents and other officials could announce policies, but they wouldn’t be able to spin wild conspiracies.

This is a terrible idea for reasons that I discussed last month — Section 230 is the lifeblood of interactive communications. Without 230, you would never read a negative review of any restaurant, hotel or widget. Negative reviews would be met with threats of litigation and the platform is not in a position to determine the truth/falsity of the review. And with politics, there are a bazillion shades of gray all wrapped up in contextual statements.

Twitter and Facebook are hardly the only platforms Trump has to speak on. He has the presidential podium, after all, and can speak freely from it.

And even when he is gone from office — and he will be gone — Trump could call up any journalists in the world and have conversations with them. Who would say no? Whether you love him or despise him you’d certainly like to get answers to questions. Recorded, of course.

And those remarks would get rebroadcast by others. On Twitter. On Facebook. And in a million newspapers, magazines, news shows, blogs, bulletin boards, etc. And it would happen almost instantaneously.

The only difference is that Twitter/Facebook would not be primary sources, but secondary.

Prof. Eugene Volokh raises concerns in a New York Times op-ed. He writes that while there are plenty of places to speak, Twitter and Facebook are matchless:

[T]here are hundreds of newspapers throughout the nation and several major TV networks. Facebook and Twitter have no major rivals in their media niches. The public relies on them as matchless mechanisms for unfiltered communication, including politicians’ communications with their constituents.

But this likewise misses that social media is, in very large part, about rebroadcasting the thoughts and opinions of others. If Trump (or any other dumped commenter) says anything worth repeating, it will be repeated on those platforms. By someone. Whether the ideas are rebroadcast widely would be determined merely by how interesting others find them. The same as me. And you.

A final thought: No one claims it is easy to moderate these platforms, or any forum with a lot of discussion.

It’s hard to do and virtually impossible to come up with any kind of objective criteria. The words themselves often obscure the context, as we will see in the upcoming impeachment debate over Trump directing people to march on the Capitol.

Want to know why it’s hard? Consider this easy example. In one context, Trump says “March on the Capitol!” to an angry group of armed insurrectionists. In another, Mahatma Gandhi says “March to the salt flats to make salt.” One is an implicit call for violence, implicit because Trump has a long history of advocating violence. The other comes from someone with a long history of advocating peace.

Context matters. And it defies artificial intelligence decisions that merely look at the words. Let Big Tech (and Small Tech) do as they please with respect to dumping/keeping posters. Keep government out of it.

(Full disclosure: I own stock in Twitter, having bought it after Trump was sworn in, figuring that four years of free advertising couldn’t hurt.)

 

August 7th, 2013

Can New Protective Order Law Be Used for Facebook Demands?

The New York Law Journal has a short article today on an expansion of New York law regarding protective orders from over-reaching discovery (CPLR 3103(a)). Governor Cuomo signed it yesterday.

While it has long been the law that any person from whom discovery is sought may object to a discovery demand, the new amendment now includes objections regarding others who may merely be mentioned in the discovery being sought.

This can, as I’ll explain in a moment, be used to protect against many aspects of Facebook, social media and email demands.

The rationale for the law, however, didn’t have anything to do with Facebook. This is the simple (and quite logical) reasoning from the memo accompanying the bill:

Not addressed [in the current law] is a person about whom records are being subpoenaed from either a party or another nonparty. By way of example, if an accountant is subpoenaed to produce the records of clients who are not parties to the litigation, it is unclear under the present statute whether the non-party clients would have standing to object to the production of their records.

This is easy to understand if an accountant’s records are sought. Just because there may be a lawsuit regarding one aspect of your accountant’s practice, having nothing to do with you, does that mean that your private records should be disclosable? Shouldn’t you at least have standing to object?

The law was proposed by Chief Administrative Judge A. Gail Prudenti and her Advisory Committee on Civil Practice to fill a procedural gap.

But what if Facebook records are sought? These requests are getting more common as the months go by, and I’ve collected a few New York decisions on the matter.

The scenario in which it would come up is easy to foresee: Joe busts his arm in a car collision (not an accident). He writes about it on Facebook. His friends, who have their privacy settings maxed out, respond. Perhaps one of them jokes in a comment or private message, “You been drinking again?”

Are the comments and messages of the friends discoverable? The question here, of course, is not whether those comments may be admissible at trial, but merely whether they are discoverable. Can the defense lawyers go on a fishing expedition through the comments, messages and lives of the friends? These friends clearly have an expectation of privacy, as Facebook has explicitly told them so.

It seems to me that this new law can, will, and should, be used to combat over-reaching Facebook demands. Expect to see decisions on this in a year or two.

 

April 19th, 2013

What Does A Smile Mean? (Updated x2)

Jeff Bauman in the hospital after the Boston Marathon bombing

Jeff Bauman is in the picture to the right. He is in the news right now because he had the great misfortune of being near one of the Boston Marathon bombs.

In the picture Bauman is smiling and giving a thumbs-up. He is also missing both of his legs. Actor Bradley Cooper is to the left, and New England Patriots wide receiver Julian Edelman (who tweeted the picture) is to the right.

As soon as he woke up in the hospital, he asked for pen and paper to write that he saw the bomber and then went on to help the FBI.

I bring this smile photo up today because, over the years, I’ve covered several rulings by courts that deal with defense attorneys asking to fish through the Facebook and other social media sites of plaintiffs. They ask to fish because the plaintiff is smiling in a photo and claim that the smile is inconsistent with suffering.

Here are two examples: In Davids v. Novartis, drug-maker Novartis went fishing on the basis of a smile in a photograph, and Magistrate Judge William D. Wall slapped it down, writing that “one picture of Plaintiff smiling does not contradict her claim of suffering, nor is it sufficient evidence to warrant a further search into Plaintiff’s account.”

By contrast, a Suffolk County judge permitted access to Facebook based on the same theory, writing in Romano v. Steelcase:

In this regard, it appears that plaintiff’s public profile page on Facebook shows her smiling happily in a photograph outside the confines of her home despite her claim that she has sustained permanent injuries and is largely confined to her house and bed. (See also, in contrast, Eric Goldman’s commentary on the Romano photo.)

Perhaps future courts will take note of this picture of Bauman, with his smile and thumbs-up, and recognize that a smile in a snapshot does not magically mean everything is well.

As Bauman makes abundantly clear in this picture, people can smile for a multitude of reasons. It may be because they are happy to be alive. Or because someone said something humorous, even at a funeral. Or simply because of instinct when someone lifts a camera and hollers, “Say cheese.”

Judges and practitioners, please take note.

Heather Abbott, of Newport, R.I., is wheeled past members of the media and into a news conference at Brigham and Women’s Hospital in Boston, Thursday, April 25, 2013. Abbott underwent a below-the-knee amputation during surgery on her left leg following injuries she sustained at the Boston Marathon bombings on April 15. (AP Photo/Steven Senne)

Updated (4/26/13): Another smile, this time from bombing victim Heather Abbott. One week after the bombing, she had her leg amputated. Prior attempts to surgically repair the leg had failed.

Three days after the amputation she appeared at a press conference. And smiled. You can see her expression here.

A smile may mean many things.

Updated June 24, 2013: People Magazine ran a cover photo in its June 11, 2013 edition — three amputees, three brave smiles. If a defendant tries to claim that a smile in a photograph means the person isn’t injured, just show them this cover.

 

January 31st, 2013

Another Facebook Fishing Expedition Gets Slapped Down

The Facebook decisions seem to be coming fast and furious now.

Today, the Appellate Division (First Department) shot down yet another attempt by a defendant to go fishing through the plaintiff’s personal life. The defendant’s argument that Facebook activities “may reveal daily activities that contradict or conflict with” the plaintiff’s claims wasn’t enough. No way, said the appellate court, not good enough.

“Mere possession and utilization of a Facebook account is an insufficient basis to compel plaintiff to provide access to the account or to have the court conduct an in camera inspection of the account’s usage.”

“To warrant discovery, defendants must establish a factual predicate for their request by identifying relevant information in plaintiff’s Facebook account — that is, information that ‘contradicts or conflicts with plaintiff’s alleged restrictions, disabilities, and losses, and other claims.’”

So sayeth the court in Tapp v. New York State Urban Dev. Corp.

The other Facebook decisions and discussion on my site are at this link.


January 25th, 2013

NY Judge: Facebook Discovery Reviews May Open Flood Gates

This Facebook discovery decision came down January 11th. It is one that I’ve expected for a long time.

The backdrop: In the last few years there has been a plethora of demands by defense lawyers in personal injury cases for Facebook (and other social media) information. It often comes in the form of a demand for the plaintiff’s log-in information, so that they can go snooping around looking for something damaging.

The first decision of any note came about due to a woman smiling in a photo on Facebook. The photo was public. If the woman is smiling, argued the defendants, maybe she isn’t in as much pain as she claims? (Romano v. Steelcase, 2010) And so it began.

Commercial litigators have dealt with e-discovery for years, sifting through documents that might number in the millions as emails and document drafts are sorted through with sophisticated software. Out-of-work lawyers get hired for peanuts to sit in dreary dungeons going through them.

But such discovery is mostly unknown to the personal injury bar. The exploding use of social media, and the creation of spectacular quantities of data, is now changing that.

This data explosion and the desire of defendants to access it has ramifications for the courts. Who is to say what should be disclosed or not? Well, the court is to say. And in order to say, the court must review. Therein lies the problem.

In Staten Island, Justice Joseph Maltese wrestled with that issue two weeks ago at the trial level in Fawcett v. Altieri. Fawcett’s action alleges assault and battery by Altieri and injury to Fawcett’s eye.

Defendants moved for social media data and the plaintiffs cross-moved for a protective order. The defendants demanded:

authorizations to permit the defendants to obtain full access to and copies of Plaintiff’s current and historical records and/or information and photographs on Plaintiff’s social media website pages, including but not limited to Facebook, MySpace, Friendster, Flickr, and any other social media websites.

In the face of discovery demands, courts have to deal with what is “material and necessary.” The court noted the wide array of things that social media is used for:

The court takes judicial notice that subscribers to these sites share their political views, their vacation pictures, and various other thoughts and concerns that subscribers deem fit to broadcast to those viewing on the internet. Whether these broadcasts take the form of “tweets,” or postings to a user’s “wall,” the intent of the users is to disseminate this information.

This wide array of data is important because, if some material is to be disclosed, someone impartial will have to sift through it. The fact that privacy settings may be cranked up high is unimportant. An old-fashioned handwritten diary may be private, but it also may be discoverable in certain circumstances.

And so defendants must show, in order to gain access to private information, a “factual predicate” for doing so, which is another way of saying that a party, to gain access, “must show with some credible facts that the adversary subscriber has posted information or photographs that are relevant to the facts of the case at hand.” In this case, Justice Maltese noted that depositions hadn’t even been held yet, and no actual predicate had been shown.

The judicial burden is extraordinary. The judge noted that “asking courts to review hundreds of transmissions ‘in camera’ should not be the all purpose solution to protect the rights of litigants. Courts do not have the time or resources to be the researchers for advocates seeking some tidbit of information that may be relevant in a tort claim.”

This is exactly the point I made back in October 2011 after a lower court told the plaintiff to disclose everything, and the appellate division reversed and threw it back to the trial court to do a “more specific identification of plaintiff’s Facebook information that is relevant, in that it contradicts or conflicts with plaintiff’s alleged restrictions, disabilities, and losses, and other claims.” (Patterson v. Turner)

I noted then that, if lower courts were forced to actually make such determinations, they would be swamped by requests. They would have to set the bar of discovery high, just to survive the paper onslaught. I wrote that:

What does this mean for the lower courts? That if they see fit to grant a request for Facebook or similar records, the judge will be forced to do in camera reviews of potentially voluminous records comprising all manner of notes that might come from Facebook, My Space, private blogs, Twitter, emails, texts and other places. The digital age has spawned an extraordinary boatload of information that courts will have to sift through when demands are made by overeager lawyers hoping to stumble upon some smoking gun.

Justice Maltese has concluded, as had I, that someone has to go through all that crap. OK, he doesn’t say it exactly that way, but he comes damn close:

As a matter of judicial policy, such a fishing expedition is not a sufficient basis to open the flood gates of meandering thoughts or silly postings to be used to impeach a party in a simple assault or negligence action without any good cause to believe that any incriminating statement was ever made and publicized in the social media. These are not matters of national security or part of a criminal investigation. This is a civil tort matter of a minor assault that should have a good faith basis other than supposition, hope or speculation that some comment was made that may be relevant to the case at hand.

This point can’t be made strongly enough: Anyone opposing a discovery order for social media records had damn well better point out to the court that this is not a one-time deal. When the camel’s nose gets under the tent, the rest of the camel will surely follow.