"Bad behavior" bots can make Facebook a safer platform

The Web-Enabled Simulation (WES) project uses a new method for building large-scale, highly realistic simulations of complex social networks, with direct applications in protection, security and privacy, as Mark Harman explained in a meeting with a small group of journalists. It is still a research project created within FAIR (Facebook Artificial Intelligence Research), but the UCL professor and Facebook researcher is confident that WES can be useful in several areas, making Facebook more secure but also extending its use to fields such as the development of game agents.

For now, the simulation is being run on the Facebook platform itself, using machine learning to train bots that simulate the behavior of real people. The researchers have created groups of "innocent bots" alongside "bad bots" that exploit flaws in the system to behave inappropriately. The bots are trained to interact with one another, within communities of intelligent agents, using the same infrastructure as users but without ever interacting with real people. They can send messages, make posts, send friend requests, run searches and even try to buy products that are banned on Facebook, such as drugs and weapons.
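
Facebook has not published the internals of these bots, but the setup described, communities of agents choosing from a fixed repertoire of actions with a few of them deliberately crossing the line, can be pictured with a minimal sketch like the one below. All names (SimulatedPlatform, BadBot), the action list and the banned-item set are assumptions for illustration, not Facebook's actual code or API.

```python
import random

# A hypothetical sketch of a WES-style multi-agent simulation.

BANNED_ITEMS = {"drugs", "weapons"}

class SimulatedPlatform:
    """Stands in for the shared infrastructure; no real users are involved."""
    def __init__(self):
        self.log = []  # every bot action, kept for later analysis

    def record(self, bot, action, detail=""):
        flagged = action == "buy" and detail in BANNED_ITEMS
        self.log.append((bot.name, action, detail, flagged))

class Bot:
    """An 'innocent' agent that only performs ordinary actions."""
    ACTIONS = ["message", "post", "friend_request", "search"]

    def __init__(self, name):
        self.name = name

    def act(self, platform):
        platform.record(self, random.choice(self.ACTIONS))

class BadBot(Bot):
    """Occasionally tries to buy banned goods to probe enforcement."""
    def act(self, platform):
        if random.random() < 0.3:
            platform.record(self, "buy", random.choice(sorted(BANNED_ITEMS)))
        else:
            super().act(platform)

if __name__ == "__main__":
    platform = SimulatedPlatform()
    community = [Bot(f"innocent_{i}") for i in range(8)] + [BadBot(f"bad_{i}") for i in range(2)]
    for _ in range(50):  # simulation steps
        for bot in community:
            bot.act(platform)
    flagged = [entry for entry in platform.log if entry[3]]
    print(f"{len(flagged)} banned-purchase attempts out of {len(platform.log)} actions")
```

The key property the sketch preserves is isolation: every action goes through the simulated platform's log rather than touching real accounts, which is what lets the bad bots misbehave safely.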

We are still in a research environment, but the goal is for WES to help identify and resolve bugs on the platform, improve services and detect potential integrity problems before they affect the real people using it, Mark Harman explains.

For now this "army" of bots trained to imitate behaviors that researchers know can happen on the platform, but the idea evolves and the researcher says that "In theory, and in practice, bots can do things that we have never seen before. one of the goals because we really want to face harmful behaviors, and anticipate these movements instead of continuing behind what happens ", he said.

Preventing negative behavior is one of Facebook's ongoing concerns, and the company has invested in several areas to moderate comments and posts and to block publications that violate its rules on violence and pornography, among others. It has not always succeeded: there is no shortage of examples of rule violations, large and small, that have generated criticism and problems, exposing the social network to sanctions from the authorities.

The researcher argues that in this type of testing, the software's ability to adapt and modify its behavior makes it ideal for anticipating possible changes. "We are working on prevention," he explains, adding that the simulation includes good bots and bad bots that attack other bots, mirroring what happens on the real Facebook platform, where some users behave badly.
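
As a hypothetical illustration of that adaptation, the sketch below pairs an attacker bot with a defender that updates its message filter as attacks are reported; the scam phrases, the report probability and both classes are assumptions, not anything Facebook has described.

```python
import random

# Invented illustration of an adaptive good bot facing a bad bot.

SCAM_PHRASES = ["free prize", "click here", "wire money"]

class DefenderBot:
    """A 'good bot' that adapts by learning phrases from reported scams."""
    def __init__(self):
        self.blocked_phrases = set()

    def inspect(self, message):
        return any(p in message for p in self.blocked_phrases)

    def learn_from_report(self, message):
        for phrase in SCAM_PHRASES:
            if phrase in message:
                self.blocked_phrases.add(phrase)

class AttackerBot:
    """A 'bad bot' that attacks others with variously worded scams."""
    def craft(self):
        return f"hello friend, {random.choice(SCAM_PHRASES)} now"

defender, attacker = DefenderBot(), AttackerBot()
blocked = 0
for _ in range(100):
    msg = attacker.craft()
    if defender.inspect(msg):
        blocked += 1
    elif random.random() < 0.5:  # some attacks get reported by victims
        defender.learn_from_report(msg)
print(f"blocked {blocked} of 100 attacks as the defender adapted")
```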

Without wanting to detail what kinds of behavior are studied and identified, Mark Harman notes that these simulations involve several scientific challenges, and that WES combines technologies ranging from search-based software engineering and machine learning to programming languages, multi-agent systems, game AI and AI-assisted gameplay. The fact that the simulation runs on the real Facebook platform and its data is fascinating, he says, highlighting the power of applying the simulation at Big Data scale and the enormous statistical power it brings to testing behaviors.
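
That remark about statistical power can be pictured with a toy Monte Carlo experiment: because simulated trials are cheap, the uncertainty around an estimated catch rate shrinks quickly as the trial count grows. The 0.8 detection probability below is an arbitrary stand-in, not a real measurement.

```python
import random

def run_trial(detect_prob=0.8):
    """One simulated attack; the 0.8 catch probability is an arbitrary
    stand-in for an actual defense, used only for illustration."""
    return random.random() < detect_prob

# Simulated trials are cheap, so sample sizes that would take years of
# real incidents can be generated in seconds, tightening the estimate.
for n in (100, 10_000, 1_000_000):
    hits = sum(run_trial() for _ in range(n))
    rate = hits / n
    margin = 1.96 * (rate * (1 - rate) / n) ** 0.5  # 95% interval
    print(f"n={n:>9,}: estimated catch rate {rate:.4f} +/- {margin:.4f}")
```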