
Bluesky addresses trust and safety concerns around abuse, spam, and more


Social networking startup Bluesky, which is building a decentralized alternative to X (formerly Twitter), offered an update on Wednesday about how it’s approaching various trust and safety concerns on its platform. The company is in various stages of developing and piloting a range of initiatives focused on dealing with bad actors, harassment, spam, fake accounts, video safety, and more.

To address malicious users or those who harass others, Bluesky says it’s developing new tooling that will be able to detect when multiple new accounts are spun up and managed by the same person. This could help to cut down on harassment, where a bad actor creates several different personas to target their victims.
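Bluesky hasn’t published how this detection works, but a common shape for such tooling is to cluster new signups on shared signals and flag clusters that grow quickly. The TypeScript sketch below illustrates that idea; the signal key, window, and threshold are all assumptions for illustration, not Bluesky’s actual logic.

```typescript
// Hypothetical sketch: group recent signups by a shared signal (e.g. a
// hashed device fingerprint or signup IP) and flag clusters large enough
// to suggest one operator behind them. Field names are invented.
interface SignupEvent {
  did: string;       // the account's decentralized identifier
  signalKey: string; // assumed: hash of device fingerprint + IP
  createdAt: number; // epoch milliseconds
}

const WINDOW_MS = 24 * 60 * 60 * 1000; // assumed 24-hour window
const CLUSTER_THRESHOLD = 3;           // assumed: 3+ accounts looks coordinated

function findSuspiciousClusters(events: SignupEvent[]): Map<string, string[]> {
  const clusters = new Map<string, string[]>();
  const now = Date.now();
  for (const e of events) {
    if (now - e.createdAt > WINDOW_MS) continue; // only recent signups
    const dids = clusters.get(e.signalKey) ?? [];
    dids.push(e.did);
    clusters.set(e.signalKey, dids);
  }
  // Keep only clusters big enough to look coordinated.
  for (const [key, dids] of clusters) {
    if (dids.length < CLUSTER_THRESHOLD) clusters.delete(key);
  }
  return clusters;
}
```

Clusters flagged this way would presumably feed human review rather than automatic suspension, since shared signals also occur innocently (offices, universities, carrier NAT).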

Another new experiment will help detect “rude” replies and surface them to server moderators. Similar to Mastodon, Bluesky will support a network where self-hosters and other developers can run their own servers that connect with Bluesky’s server and others on the network. This federation capability is still in early access. Further down the road, server moderators will be able to decide how they want to take action on those who post rude replies, while Bluesky itself will eventually reduce these replies’ visibility in its app. Repeated “rude” labels on content will also lead to account-level labels and suspensions, the company says.
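The post doesn’t spell out the escalation rules, but the described flow, where repeated “rude” labels on content roll up into account-level consequences, reduces to a simple counter with thresholds. In the sketch below, the cutoffs and action names are invented for illustration.

```typescript
// Hypothetical escalation ladder: content-level "rude" labels accumulate
// into account-level consequences. Both thresholds are assumptions.
type AccountAction = "none" | "account-label" | "suspension";

function escalate(rudeLabelCount: number): AccountAction {
  if (rudeLabelCount >= 10) return "suspension";   // assumed cutoff
  if (rudeLabelCount >= 3) return "account-label"; // assumed cutoff
  return "none";
}

// Each new "rude" label on a reply re-evaluates the posting account.
function onRudeLabel(counts: Map<string, number>, did: string): AccountAction {
  const next = (counts.get(did) ?? 0) + 1;
  counts.set(did, next);
  return escalate(next);
}
```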

To cut down on the use of lists to harass others, Bluesky will remove individual users from a list if they block the list’s creator. Similar functionality was also recently rolled out to Starter Packs, which are a type of sharable list that can help new users find people to follow on the platform (check out the TechCrunch Starter Pack).
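In AT Protocol terms, blocks and list memberships are both records, so the behavior above amounts to a consistency rule between the two: blocking a list’s creator removes you from that creator’s lists. A simplified sketch of that rule, with record shapes pared down for illustration and not taken from Bluesky’s implementation:

```typescript
// Simplified rule: when a user blocks a list's creator, drop that user
// from every list the creator owns. Record shapes are illustrative.
interface ListItem { listUri: string; subjectDid: string }
interface List { uri: string; creatorDid: string; items: ListItem[] }

function applyBlock(lists: List[], blockerDid: string, blockedDid: string): List[] {
  return lists.map((list) =>
    list.creatorDid === blockedDid
      ? { ...list, items: list.items.filter((i) => i.subjectDid !== blockerDid) }
      : list
  );
}
```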

Bluesky will also scan for lists with abusive names or descriptions, to cut down on people harassing others by adding them to a public list with a toxic or abusive title. Lists that violate Bluesky’s Community Guidelines will be hidden in the app until the owner makes changes to comply with Bluesky’s rules. Users who continue to create abusive lists will face further action, though the company didn’t offer details, adding that lists are still an area of active discussion and development.
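Bluesky didn’t say how the scanning works. One plausible shape is a text check over list names and descriptions that hides non-compliant lists until they are edited; the keyword matcher below is a deliberately naive stand-in for whatever classifier the company actually runs.

```typescript
// Naive stand-in for an abuse check over list metadata. A production
// system would use a trained classifier, not a keyword list.
interface ListMeta { uri: string; name: string; description: string; hidden: boolean }

const BLOCKED_TERMS = ["placeholder-slur"]; // placeholder data only

function reviewList(list: ListMeta): ListMeta {
  const text = `${list.name} ${list.description}`.toLowerCase();
  const violates = BLOCKED_TERMS.some((t) => text.includes(t));
  // Hidden until the owner edits the list back into compliance.
  return { ...list, hidden: violates };
}
```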

In the months ahead, Bluesky will also shift to handling moderation reports through its app using notifications, instead of relying on email reports.

To fight spam and other fake accounts, Bluesky is launching a pilot that will attempt to automatically detect when an account is fake, scamming, or spamming users. Paired with moderation, the goal is to be able to take action on accounts within “seconds of receiving a report,” the company said.
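Automated detection paired with moderation usually means a triage step: clear-cut cases are actioned immediately, ambiguous ones go to a human queue. A hedged sketch of that shape, with a placeholder scoring function standing in for whatever model the pilot uses:

```typescript
// Hypothetical triage: a report triggers an automated score; obvious
// spam is actioned within seconds, ambiguous cases go to human review.
interface Report { subjectDid: string; reasonType: string }

// Placeholder scorer. A real one would weigh account age, posting
// rate, link patterns, and similar signals.
function spamScore(did: string): number {
  return Math.random();
}

function triage(report: Report): "auto-takedown" | "human-review" {
  return spamScore(report.subjectDid) > 0.95 ? "auto-takedown" : "human-review";
}
```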

One of the more interesting developments involves how Bluesky will comply with local laws while still allowing for free speech. It will use geography-specific labels that allow it to hide a piece of content from users in a particular area in order to comply with those laws.

“This allows Bluesky’s moderation service to maintain flexibility in creating a space for free expression, while also ensuring legal compliance so that Bluesky may continue to operate as a service in those geographies,” the company shared in a blog post. “This feature will be introduced on a country-by-country basis, and we will aim to inform users about the source of legal requests whenever legally possible.”
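Mechanically, this fits the AT Protocol’s labeling model, where labels attach to content and clients decide what to do with them. If a label carried a geographic scope, the visibility check could be as simple as intersecting that scope with the viewer’s region, as in the sketch below; the GeoLabel shape is invented for illustration and is not a published part of the protocol.

```typescript
// Hypothetical geo-scoped label: hide content only for viewers in the
// regions the label names. Field names are invented.
interface GeoLabel { contentUri: string; value: string; regions: string[] } // ISO country codes

function isHiddenForViewer(labels: GeoLabel[], contentUri: string, viewerRegion: string): boolean {
  return labels.some(
    (l) => l.contentUri === contentUri && l.regions.includes(viewerRegion)
  );
}

// Example: a label scoped to "DE" hides the post for German viewers only.
const labels: GeoLabel[] = [
  { contentUri: "at://example/post/1", value: "geo-restricted", regions: ["DE"] },
];
console.log(isHiddenForViewer(labels, "at://example/post/1", "DE")); // true
console.log(isHiddenForViewer(labels, "at://example/post/1", "US")); // false
```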

To address potential trust and safety issues with video, which was recently added to the platform, the team is adding features like the ability to turn off autoplay for videos, making sure videos are labeled, and ensuring that they can be reported. The team is still evaluating what else may need to be added, and will prioritize based on user feedback.

When it comes to abuse, the company says that its overall framework is “asking how often something happens vs how harmful it is.” The company focuses on addressing high-harm and high-frequency issues while also “tracking edge cases that could result in serious harm to a few users.” The latter, though only affecting a small number of people, causes enough “continual harm” that Bluesky will take action to prevent the abuse, it claims.
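Read as a prioritization rule, “how often something happens vs how harmful it is” is a two-axis triage with a carve-out for severe low-frequency edge cases. The sketch below makes that explicit; the 0–10 scales and cutoffs are made up for illustration.

```typescript
// Two-axis triage on invented 0-10 scales: prioritize high-harm,
// high-frequency issues, but never drop severe low-frequency ones.
interface Issue { name: string; frequency: number; harm: number }

function priority(issue: Issue): "urgent" | "tracked" | "backlog" {
  if (issue.harm >= 8) return "urgent"; // severe edge cases always get action
  if (issue.harm * issue.frequency >= 30) return "urgent";
  if (issue.harm * issue.frequency >= 10) return "tracked";
  return "backlog";
}
```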

User concerns can be raised via reports, emails, and mentions to the @safety.bsky.app account.


