CITS 3001 - 2022
Project: A game on operations in the information environment
Due Date: 13th October
Implementation: Java or Python
Game Scenario
There are four teams involved: Red, Blue, Green and Grey.
The scenario has been deliberately designed to represent the uneven playing field of the
contested environment between the various teams. The scenario highlights the
vulnerabilities of blue team in the contested information environment. The concept of blue
and red teams is prevalent in cybersecurity related serious games or wargames. If you wish
to get some background knowledge about the functioning of teams, you can read this article:
https://csrc.nist.gov/glossary/term/red_team_blue_team_approach. However, this game is
not related to cyber security, rather we are modelling the information environment in a
country.
The Red and Blue teams are the major geopolitical players in this fictitious country.
The Red team is seeking geopolitical influence over the Blue team, and is particularly
interested in influence over the Green population and its government. Blue is seeking to
resist the Red team's growing influence in the country and to promote democratic
government in the Green country.
A key challenge faced by the Blue team, which will become apparent in the exercise, is that
their democratic values are leveraged against them. They are vulnerable to some forms of
manipulation, yet their rules of engagement do not allow them to respond in equal measure:
there are key limitations on the ways in which they can respond and engage in this unique
battlespace. The Blue team is bound by legal and ethical constraints such as a free media,
freedom of expression, and freedom of speech.
The Green team lacks a diverse media sector; its population is confused and subscribes to a
wide range of foreign news broadcasting agencies. The Green population suffers from poor
internet literacy, which can be modelled via a Pareto distribution. The government lacks the
resources to launch a decisive response to foreign influence operations, and lacks the
capability to discover, track and disrupt foreign influence activity. This paragraph is
background information: the description of the agents provided to you does not require
implementing it. However, you may make use of this information if you wish.
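As a sketch of the Pareto-distributed literacy model mentioned above (the shape parameter and the rescaling into (0, 1] are assumptions, not prescribed values):

```python
import random

def sample_internet_literacy(n_green, shape=2.0, seed=None):
    """Sample an internet-literacy score in (0, 1] for each green node.

    random.Random.paretovariate draws values in [1, inf); inverting
    them puts most of the population at low literacy with a small,
    highly literate minority. The shape parameter and the rescaling
    are assumptions.
    """
    rng = random.Random(seed)
    return [1.0 / rng.paretovariate(shape) for _ in range(n_green)]

literacy = sample_internet_literacy(100, seed=42)
```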
The Red team, an authoritarian state actor, has a range of instruments, tactics and
techniques in its arsenal for running influence operations. Its government can block
websites and social media platforms and censor news coverage to its domestic population,
whilst maintaining the capability to run sophisticated foreign influence operations through
social media.
The Grey team consists of foreign actors whose loyalties are not known.
Election day is approaching and the Red team wants to keep people from voting.
Population Model:
An underlying network model defines the probability of nodes interacting with each other.
The majority of the nodes, over 90%, belong to the Green team and depict the population
of the country. A small percentage will be grey, and there will be one red and one blue
agent. At the beginning, grey nodes are not part of the network.
Let G(n, p) be the graph depicting the green network, where n is the number of green
nodes and p is the probability of an edge between any pair of nodes (the Erdős–Rényi
random-graph model).
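Under this definition, the green network can be generated as an Erdős–Rényi graph. A minimal sketch using plain dictionaries so no graph library is required; the initial opinion split and uniform uncertainties are assumptions:

```python
import random

def make_green_network(n=100, p=0.05, pct_want_to_vote=0.6, seed=0):
    """Green network as an Erdos-Renyi graph G(n, p), stored as an
    adjacency dict plus per-node opinion and uncertainty dicts. The
    initial opinion split and uniform uncertainties are assumptions."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:          # each edge appears with probability p
                adj[i].add(j)
                adj[j].add(i)
    opinion = {i: 1 if rng.random() < pct_want_to_vote else 0
               for i in range(n)}
    uncertainty = {i: rng.uniform(-1.0, 1.0) for i in range(n)}
    return adj, opinion, uncertainty

adj, opinion, uncertainty = make_green_network()
```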
Each agent has an opinion (x) and an uncertainty (u) associated with it. In every simulation
round, nodes will interact with each other and affect each other's opinions. The more
uncertain an agent is, the more likely its opinion is to change.
The uncertainty scale is [-1, 1]; however, if that scale does not make for an easy mental
model, you can either flip the values or use a (0, 1] scale in your implementation. Please
state your assumptions in the report.
For instance, suppose two green nodes i and k have the following opinions and uncertainties:
xi = 1 and ui = 0.2 (meaning i wants to vote)
xk = 0 and uk = -0.2 (meaning k does not want to vote)
Since ui > uk, i is the more uncertain node, so after the interaction:
xi = 0 and ui = ? (i's opinion has changed to not vote, but you need to
think of a clever way to assign the new uncertainty value here)
xk = 0 and uk = -0.2 (nothing changes there)
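A minimal interaction function for the example above might look like this; the averaging rule for the new uncertainty is one assumed answer to the "clever way" question, not the required one:

```python
def interact(xi, ui, xk, uk):
    """One pairwise interaction between green nodes i and k.

    The more uncertain node adopts the other's opinion; its new
    uncertainty is pulled halfway toward the influencing node's.
    This averaging rule is an assumption, not a prescribed formula.
    """
    if ui > uk:               # i is more uncertain, so i is influenced
        xi, ui = xk, (ui + uk) / 2.0
    elif uk > ui:             # k is more uncertain, so k is influenced
        xk, uk = xi, (ui + uk) / 2.0
    return xi, ui, xk, uk

# The worked example from the text: i (x=1, u=0.2) meets k (x=0, u=-0.2)
xi, ui, xk, uk = interact(1, 0.2, 0, -0.2)
```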
The probability of interaction is not uniform across all nodes. Some nodes (for instance,
those in a household) may have a higher probability of interacting. You should start with a
simple graph; you can still finish the project with good marks, provided all the other
features are working. If you implement more complicated graphs, such as the one described
above, you can score higher marks. However, do not get stuck on this part. You can even use
the graph provided to you under the project link on LMS.
How teams are going to take turns:
Teams are going to take turns one by one.
1. Red Team: You need to create a function where the red team (only 1 agent) is able to
interact with all members of the green team. The agent affects the opinions and
uncertainties of the green team during the interaction. The catch is that you need to
select from 5 levels of potent messaging (after class discussions we decided
that it does not have to be 5 discrete levels; if you like, you can model this as a
real number).
The potency of the message can also be treated as uncertainty/certainty. I leave it
up to the students how they would like to relate potency and uncertainty.
If the red team decides to disseminate a potent message, then during the interaction round
the uncertainty variable of the red team will assume a high value. A highly potent
message may result in losing followers, i.e., compared to the last round, fewer green
team members will be able to interact with the red team agent. However, a potent
message may decrease the uncertainty of opinion among people who are already under
the influence of the red team (meaning they are sceptical about casting a vote). You
need to come up with intelligent equations so that the red team improves the certainty of
opinion in green agents, but at the same time does not lose too many green agents.
Think of it as a media channel trying to sell its narrative to people: if it makes big
claims or lies too much, it might lose some neutral followers whom it could otherwise
have indoctrinated over time.
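The trade-off described above (higher potency sheds followers but hardens the opinions of those who stay) can be sketched as follows; the drop probability, the 0.5 coefficient and the data layout are all illustrative assumptions, not prescribed values:

```python
import random

def red_turn(potency, followers, opinion, uncertainty, rng=None):
    """One red-team turn over the red agent's current green followers.

    potency is in (0, 1]. Each follower drops out with probability
    0.3 * potency; followers who stay and already oppose voting have
    their uncertainty pushed toward -1 (more certain "do not vote").
    All coefficients here are illustrative assumptions.
    """
    rng = rng or random.Random(0)
    kept = set()
    for node in followers:
        if rng.random() < 0.3 * potency:    # potent messages shed followers
            continue
        kept.add(node)
        if opinion[node] == 0:              # already sceptical of voting
            uncertainty[node] = max(-1.0, uncertainty[node] - 0.5 * potency)
    return kept

followers = [0, 1, 2]
opinion = {0: 0, 1: 0, 2: 1}
uncertainty = {0: 0.0, 1: -0.9, 2: 0.5}
kept = red_turn(1.0, followers, opinion, uncertainty, rng=random.Random(1))
```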
2. Blue Team: Similarly, the blue team can push a counter-narrative and interact with green
team members. However, if they invest too much by interacting with high certainty,
they lose their “energy level”. If they expend all their energy, the game ends. You
need to model this in a way that keeps the game going while the blue team is
changing the opinions of the green team members. The blue team also has the option to let
a grey agent into the green network. That agent can be thought of as a lifeline, whereby
the blue team gets another chance to interact without losing “energy”. However, the
grey agent may be a spy from the red team, in which case there will be a round of
an inorganic misinformation campaign. In simple words, a grey spy can push a potent
message without making the red team lose followers.
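One possible shape for the blue turn, with the energy cost model and the spy probability as assumed parameters:

```python
import random

def blue_turn(use_grey, certainty, energy, grey_spy_prob=0.5, rng=None):
    """One blue-team turn.

    If use_grey is True, blue spends no energy (the lifeline) but the
    grey agent may turn out to be a red spy with probability
    grey_spy_prob. Otherwise blue spends energy in proportion to the
    certainty (potency) of its message. The cost model and the spy
    probability are assumptions. Returns (remaining energy, actor),
    where actor is "blue", "grey-ally" or "grey-spy".
    """
    rng = rng or random.Random(0)
    if use_grey:
        actor = "grey-spy" if rng.random() < grey_spy_prob else "grey-ally"
        return energy, actor                 # lifeline: no energy cost
    cost = 10.0 * certainty                  # assumed cost model
    return max(0.0, energy - cost), "blue"

energy, actor = blue_turn(False, 0.5, 100.0)  # a normal blue turn
```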
What students have to implement at a minimum:
1. Implementing a green network where the nodes can interact
2. Implement one red, one blue and some grey agents
3. Every team (red, blue, green) takes a turn, one after another. You can start from any
team.
4. Every agent has an opinion and uncertainty
5. An interaction function that determines the change in opinion and uncertainty
6. An implementation that caters for the effect on the number of followers after the red
agent interacts with the green agents
7. An implementation that caters for the effect on the lifeline when the blue agent
interacts with the green agents
8. An implementation where the blue agent can choose between inviting a grey agent
or taking a normal turn
9. Implementation of the grey agent: 1) if grey is a spy, it can act like a red agent
without the red team losing followers; 2) if it is an ally of blue, blue can take its
turn without losing a lifeline.
10. Human vs. computer play is possible, i.e., red/blue agent can be a human.
11. Some logical modelling of the agents that students can describe in the report
12. Possibility to pass parameters at the beginning of the game, e.g., percentage of grey
nodes, starting uncertainties, percentage of nodes that want to vote, and others
described earlier.
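Requirement 12 can be met with a simple command-line interface; the parameter names and default values below are illustrative assumptions, not prescribed by the spec:

```python
import argparse

def parse_game_args(argv=None):
    """Read game parameters from the command line. The parameter names
    and default values here are illustrative assumptions."""
    p = argparse.ArgumentParser(description="Information-environment game")
    p.add_argument("--n-green", type=int, default=100,
                   help="number of green nodes")
    p.add_argument("--grey-percent", type=float, default=5.0,
                   help="percentage of grey nodes")
    p.add_argument("--edge-prob", type=float, default=0.05,
                   help="probability of an edge in the green network")
    p.add_argument("--vote-percent", type=float, default=60.0,
                   help="percentage of green nodes initially wanting to vote")
    p.add_argument("--start-uncertainty", type=float, default=0.5,
                   help="initial uncertainty magnitude")
    return p.parse_args(argv)

args = parse_game_args(["--n-green", "200", "--grey-percent", "10"])
```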
Students can be creative in their implementation regarding various aspects, e.g., but not
limited to:
1. Green network creation/update
2. Weights on the links in the green network
3. Selecting an uncertainty (or potency) of red/blue nodes
4. Updating uncertainty of the affected node, after an interaction
5. Visualisation
6. Implementation language either python or java
7. Object Oriented or functional
8. Modelling of the agents, i.e., how to make them intelligent
Q. Is this a scenario with full knowledge or hidden knowledge? How much information do the
red and blue agents know about the green agents?
Answer: The red and blue agents know the opinions of the green agents (i.e., the total
number of agents who want to/do not want to vote) but not their uncertainties. From that
perspective, knowledge about the system is incomplete.
Q. Does the red/blue agent know exactly what opinions and uncertainties
the green agents have? And the connections?
Answer: No, the red and blue agents do not know that.
Q. Does the red agent know just the opinions of green agents?
Answer: Yes.
Q. Similarly, what do grey agents know?
Answer: Grey agents are also aware of the opinions of the green agents i.e. the total number
of agents who want to/do not want to vote.
Regarding the probabilities of connections between green nodes in the graph:
Q. Does this only affect the initial generation of the graph, and then the graph remains
static after that?
Answer: In the base case, the graph will remain static once it is generated. However, if a
group's AI design requires changing connections in the graph once the game has started,
they can do so.
Interaction in the green network: Interaction is only allowed between nodes that are
connected. However, if your AI technology requires putting weights on the connections or
associating probabilities of interaction with them, you can do that.
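If you do associate probabilities with connections, one way to sample the interacting pairs each round is sketched below; the per-edge weight dictionary and the 0.5 fallback are assumptions:

```python
import random

def sample_interactions(adj, weight, rng=None):
    """Decide, for each edge, whether its two endpoints interact this
    round. weight maps an edge (i, j) with i < j to an interaction
    probability (e.g. higher for household links); edges without an
    entry fall back to 0.5. The weighting scheme is an assumption.
    """
    rng = rng or random.Random(0)
    pairs = []
    for i, neighbours in adj.items():
        for j in neighbours:
            if i < j and rng.random() < weight.get((i, j), 0.5):
                pairs.append((i, j))
    return pairs

adj = {0: {1}, 1: {0, 2}, 2: {1}}      # tiny example network
weight = {(0, 1): 1.0, (1, 2): 0.0}    # (0,1) always interacts, (1,2) never
pairs = sample_interactions(adj, weight)
```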