-
FiatLux posted an update 5 years, 4 months ago
Another “Pre-Crime” AI System Claims It Can Predict Who Will Share Disinformation Before It’s Published
“Results from the study found that the Twitter users who shared stories from unreliable sources are more likely to tweet about either politics or religion and use impolite language. They often posted tweets with words such as ‘liberal’, ‘government’, ‘media’…. Analysing the behaviour of users sharing content from unreliable news sources can help social media platforms to prevent the spread of fake news at the user level.”
So discussing politics and religion will be verboten on social media? Oh, what would we do without Big Tech, that all-wise arbiter of what’s a reliable source and what’s not?!
-
To avoid becoming a false-positive AI target, one must adopt a few ‘normie’ interests and avoid trigger words. I follow and comment on sports and sweet aphorisms, which I genuinely like but which appear safely mundane. This way, edgy comments are diluted for AI purposes. Have a great day!
-
“The study found that Twitter users who shared stories from reliable news sources often tweeted about their personal life, such as their emotions and interactions with friends. This group of users often posted tweets with words such as ‘mood’, ‘wanna’, ‘gonna’, ‘I’ll’, ‘excited’, and ‘birthday’.” I think I’m gonna be in a good mood tomorrow–I’ll be excited about my friend’s birthday! 🙂
-
I also follow a broad spectrum of feeds to avoid niche characterization. This means seeing opposing views, which forces me to examine the reasoning and tempers the echo-chamber effect. It is interesting how people on YouTube avoid trigger words and use substitute words instead.
-
The Giza Forum (Legacy)
Closed Archive of The Old Forum