Spirit AI wants to be your Ally on the fly, a player-centric bot for online gaming abuse

Online harassment or abuse in gaming involves some of the most toxic slurs ever slung in cyberspace. For more sensitive gamers, simply saying, “Don’t take it personally,” doesn’t cut it.

No race, creed, ethnicity, gender, or religious affiliation is exempt from verbal or textual abuse online. Although many gamers see this as just virtual trash-talking and part of the territory, a poll from IGN shows that one in three gamers is actually turned off by online abuse.

Read More: Google’s new Perspective API can help you not sound like a jerk while commenting

“While the use of boastful or insulting speech to intimidate or humiliate can have value as a psychological strategy, when the remarks attack someone for their gender, perceived sexual orientation, or race, many would agree that a line has been crossed,” wrote Kaitlyn Williams, who received a Stanford University Boothe Prize Honorable Mention for her essay When Gaming Goes Bad: An Exploration of Videogame Harassment Towards Female Gamers.

In order to evaluate whether a line has been crossed in online gaming interactions, the team at Spirit AI developed a bot called Ally that makes “in-game player communities, chatrooms and online social platforms safer and more inclusive.”

Using the power of machine learning and predictive analytics, Ally takes a player-centric approach to abuse, asking players whether they are comfortable with their in-game interactions and learning which situations, language, and individuals fall within their “safe zones” or comfort levels.
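As a rough illustration of what a player-centric check might look like, the sketch below compares a generic toxicity score against each player’s own comfort threshold and personal block lists, so the same message can be flagged for one recipient and ignored by another. The names, fields, and thresholds are hypothetical and are not Spirit AI’s actual API.

```python
from dataclasses import dataclass, field


@dataclass
class PlayerProfile:
    """Hypothetical per-player comfort settings (not Spirit AI's real schema)."""
    player_id: str
    comfort_threshold: float = 0.7            # lower = more sensitive
    blocked_terms: set = field(default_factory=set)
    muted_players: set = field(default_factory=set)


def should_flag(message: str, sender_id: str, toxicity_score: float,
                profile: PlayerProfile) -> bool:
    """Decide whether a message crosses this particular player's line."""
    text = message.lower()
    if sender_id in profile.muted_players:
        return True   # the player has already opted out of hearing from this sender
    if any(term in text for term in profile.blocked_terms):
        return True   # language this specific player has flagged as off-limits
    # Otherwise fall back to a generic toxicity score vs. the player's own threshold.
    return toxicity_score >= profile.comfort_threshold
```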

What’s cool about the player-centric approach is that Spirit AI’s Ally comes in the form of a virtual character that mimics the game’s style, so it doesn’t seem out of place.

Ally checks in on a potential abuse victim and asks whether another user has been offensive, and the software has a customizable interface that allows players to input what is offensive to them.
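A minimal sketch of that check-in loop, assuming a per-player preference record along the lines of the profile above: when the player says an exchange was not OK, the sender and the specific language are remembered; when they say it was fine, their threshold relaxes slightly. Again, the function and field names are illustrative guesses, not Spirit AI’s interface.

```python
def handle_checkin_response(prefs: dict, sender_id: str,
                            offending_terms: list, was_ok: bool) -> dict:
    """Update a player's preference record after an Ally check-in (illustrative only)."""
    if was_ok:
        # The player was fine with the exchange: relax their threshold slightly.
        prefs["comfort_threshold"] = min(1.0, prefs.get("comfort_threshold", 0.7) + 0.05)
        return prefs
    # The player was not OK: remember the sender and the specific language used.
    prefs.setdefault("muted_players", set()).add(sender_id)
    prefs.setdefault("blocked_terms", set()).update(t.lower() for t in offending_terms)
    return prefs
```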

On the surface, this may seem tedious, or even overkill in the name of political correctness.

However, the Spirit Triage Manager decides how the system responds to each abusive scenario as it is detected. Its context-aware reporting system can then create a case file for further analysis by the community team, whether a player proactively reports an instance of abuse or responds to an Ally enquiry.
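A hedged sketch of what such a triage step might look like: a severity score routes the incident to one of a few response tiers, and the “case file” is just a structured record, including surrounding chat for context, handed to the human community team. None of the names below come from Spirit AI’s product.

```python
from datetime import datetime, timezone


def triage(message: str, sender_id: str, recipient_id: str,
           severity: float, surrounding_chat: list) -> dict:
    """Route a detected incident and build a case file for the community team."""
    if severity >= 0.9:
        action = "escalate_to_moderator"      # clear-cut abuse goes straight to a human
    elif severity >= 0.6:
        action = "prompt_ally_checkin"        # borderline: let Ally ask the player first
    else:
        action = "log_only"
    return {
        "created_at": datetime.now(timezone.utc).isoformat(),
        "sender": sender_id,
        "recipient": recipient_id,
        "message": message,
        "surrounding_chat": surrounding_chat,  # context-aware: keep the exchange, not one line
        "severity": severity,
        "action": action,
    }
```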

It’s good to know that there is at least some human element working behind the scenes.

Additionally, as players gain more experience in a game or chat room, or build their own community with whom they feel relaxed, their response to problematic language may change. They’re free to tell Ally whenever their preferences change – or even how they’re feeling on any given day.

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
