Twitter and the social media platform Discord have various policies that might have prompted them to remove the leaked Pentagon documents that Biden administration officials say revealed key information about U.S. intelligence gathering operations.
But gray areas in those rules and uneven enforcement of them make it unclear how, or even if, executives at those companies would decide to remove them.
As of Saturday, Twitter continued to host tweets with the Pentagon’s documents, some of which had been up since at least Wednesday. There is no indication that Elon Musk, who bought Twitter nearly six months ago, will take any action against the tweets with the classified documents.
Two days earlier, Mr. Musk seemed to respond sarcastically to a tweet about the leaked material. “Yeah, you can totally delete things from the Internet — that works perfectly and doesn’t draw attention to whatever you were trying to hide at all,” he wrote.
On Discord, a messaging platform popular with video game players, the Pentagon documents may have been circulating as early as March. Since Discord chat groups — known as servers — are not directly managed by the company like a Facebook or Twitter feed, the distribution of the Pentagon documents would have been difficult to spot.
Mr. Musk did not respond to a request for comment on Saturday, and Discord declined to comment. It is not known if the companies, which are both based in the United States, have been asked to remove the Pentagon material.
In the past, Twitter may have removed the material under rules that prohibit the publication and distribution of hacked materials, two former executives told The New York Times. Under this policy, Twitter would remove tweets with “real or synthesized hacked materials” or place warning labels on the material. Some of the Pentagon material circulating on social media may have been manipulated.
But there were caveats to Twitter’s rules, as they were described in a policy document, which was last updated in October 2020. The rules allowed for exceptions for material that forms the basis for reporting by news agencies. And debates inside social media companies about what to allow online have often been similar to discussions that traditional media have about whether leaked or hacked material is of enough public interest to justify publishing.
It was not clear on Saturday whether the Pentagon material was hacked or intentionally leaked — the images circulating appeared to be photographs of documents. The documents could fall into a gray area that, at least in the past, would have led to discussion among compliance officers inside the company about whether they qualified for a takedown.
Twitter used its hacked material policy to block the circulation of an article in October 2020 from the New York Post that said the F.B.I. had seized a computer that purportedly belonged to Joseph R. Biden Jr.’s son Hunter Biden. Twitter’s leaders, including then-chief executive Jack Dorsey, later called the decision a mistake.
The former executives, who spoke to The Times on the condition of anonymity for fear of retribution from Mr. Musk, said that Twitter often received reports of potential violations of its policies from U.S. government organizations.
But since acquiring the company in October, Mr. Musk has shrunk the groups responsible for moderation, and more than 75 percent of Twitter’s 7,500 employees have been fired or left. Ella Irwin, Twitter’s head of trust and safety, did not immediately respond to requests for comment.
Twitter has removed or prevented the circulation of content at the behest of governments like India and on Mr. Musk’s whims.
This past week, Twitter also began limiting the circulation of, and engagement with, links to Substack, the newsletter platform, after the start-up unveiled a Twitter-like service. On Friday, many Substack writers found that tweets containing links to their Substack pages could not be liked or retweeted.
Discord surged in popularity during the pandemic, moving beyond its video game roots. By late 2021, the platform had more than 150 million active users each month.
Discord provides so-called servers that are essentially chat rooms, where people can discuss their hobbies, message one another or join audio calls. Some servers are public and contain thousands of people, while others — like servers made just for a group of friends — are private.
This arrangement has enabled Discord to thrive, but has also led to problems with harmful content. Ensuring that Discord users follow the platform’s policies and refrain from posting inappropriate or questionable material has largely been left up to the individuals who create the servers, some of whom deputize members of the server communities to help enforce rules.
The private nature of some of these groups means they can easily escape detection or moderation.
In 2017, white nationalists organized the “Unite the Right” rally in Charlottesville, Va., on far-right Discord servers. Company executives were aware the white nationalists were using the platform but did not remove them until after the rally.
Discord said it had since beefed up its content moderation team, and the company’s chief executive, Jason Citron, said in a 2021 interview that 15 percent of his employees worked on trust and safety teams.
Still, the company did not discover Discord messages in a private server posted by the shooter who killed 10 people at a grocery store in Buffalo last spring. In the messages, the shooter posted racist remarks and appeared to detail how he planned to carry out the attack. After the shooting, Discord said it was investigating the postings and working with law enforcement agencies.
In its most recent transparency report, covering the last three months of 2022, Discord said it had disabled more than 150,000 accounts for policy violations that ranged from “harassment and bullying” to “exploitative and unsolicited content.” The number of accounts it had disabled was a 17 percent decrease from the three months before that, the company said.