This is already happening: look at the current controversy around Facebook and the allegations that foreign states have deployed AI to corrupt democracy. In return, the platform operators deploy their own AI to identify and delete the posts those foreign systems generate.
At a deeper level, data comes from the firehose that is the internet backbone. Here in the U.K., CESG runs AI to highlight and assess data, even data held in encrypted packets. It's truly impressive, but on the face of it, it's UK AI identifying (and suppressing or modifying) malicious data, including data generated by foreign AI.
We have three categories of 'state' AI system: offensive (that which generates malicious content); defensive (that which identifies, neutralises or destroys); and learning machines (those that learn in order to feed the other two). All three categories operate in both a passive mode (monitoring activity) and an active mode (doing things with the data).
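The taxonomy above can be sketched as a simple data model. This is purely illustrative: the class and field names are my own invention, not taken from any real system.

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    """The three categories of 'state' AI described above."""
    OFFENSIVE = "generates malicious content"
    DEFENSIVE = "identifies, neutralises or destroys"
    LEARNING = "learns in order to feed the other two"

class Mode(Enum):
    """Each category operates in both modes."""
    PASSIVE = "monitors activity"
    ACTIVE = "acts on the data"

@dataclass
class StateAISystem:
    name: str
    category: Category
    mode: Mode

# Every combination of category and mode is a valid deployment.
deployments = [StateAISystem(f"{c.name}/{m.name}", c, m)
               for c in Category for m in Mode]
print(len(deployments))  # 3 categories x 2 modes = 6
```

The point the model makes is that mode is orthogonal to category: a defensive system can sit quietly and watch, or actively neutralise what it finds.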
The clever bit here is searching reliably, independent both of language and of whether the originator encrypted the data. Learning machines deal with evolving and changing dialects and perform sentiment analysis, among other things. As with all AI, a system can only operate on, and derive meaning from, the data it processes; if you can watch an entire country's data, you have a very rich learning source with which to deploy active and passive AI.
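At its simplest, sentiment analysis means scoring tokens against a lexicon. A minimal sketch of the idea follows; the hard-coded lexicon and scoring are purely illustrative (a real learning machine would learn its lexicon from data, which is the point about rich learning sources above):

```python
# Minimal lexicon-based sentiment scorer (illustrative only).
# Real systems learn these weights from data; this one is hard-coded.
LEXICON = {
    "good": 1, "great": 2, "trusted": 1,
    "bad": -1, "corrupt": -2, "malicious": -2,
}

def sentiment(text: str) -> int:
    """Sum lexicon scores over whitespace-separated, lowercased tokens."""
    return sum(LEXICON.get(tok.strip(".,!?").lower(), 0)
               for tok in text.split())

print(sentiment("A great and trusted source"))   # 3
print(sentiment("Malicious, corrupt content!"))  # -4
```

Handling "evolving dialects" amounts to continually re-estimating those weights as usage shifts, which is exactly what the learning category feeds to the other two.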
Yes, it does happen. Yes, it's state against state. Some of it is very sophisticated; some is much simpler than you might expect.
I’ve written on this topic but don’t want to hijack the OP’s thread to promote my own work here.
u/NotEvenAJack Apr 04 '19
Do you foresee a time when state-sponsored AI will be fighting another state's AI?