
DTS311TC FINAL YEAR PROJECT

The application of a trust model based on confidence and reputation to NPC trust assessment and decision- making in market simulation games

Proposal Report

In Partial Fulfillment of the Requirements for the Degree of Bachelor of Data Science and Big Data Technology

Abstract

Trust is crucial in multi-agent systems and is a key factor affecting decision-making. In market trading scenarios in particular, agents must assess each other's reliability before conducting transactions. This project aims to develop a framework with dynamic trust calculation at its core and to apply it to the trust assessment of NPCs in a market simulation game. A dynamically adjusted trust value calculation method is constructed by combining confidence (based on direct interaction data) and reputation (based on feedback shared between NPCs). Compared with traditional static models, the dynamic trust mechanism in this study can respond to changes in player behavior in real time, thereby simulating complex interaction scenarios more realistically. The focus of this project is the generation of trust values, the update algorithm, and the optimization of its performance; the specific decision-making behavior of the NPC is a secondary implementation task. Experimental verification is used to demonstrate the applicability and performance of the model in a dynamic game environment.


1  Introduction

1.1  Introduction and Background

Trust computing, as an important research direction in multi-agent systems, provides theoretical support for modeling complex interactive behaviors, and trust plays an important role in the effective interaction of multi-agent systems. However, most existing systems rely on static or rule-based decision models, which cannot adapt to complex real-time player behaviors, resulting in an interaction experience that lacks realism. In recent years, models capable of dynamically computing trust have shown strong potential for application in multi-agent systems [1]; these models can adapt to different interaction environments.

In this project, a dynamic trust calculation model is applied to a market simulation game, enabling NPCs to dynamically assess the player's trustworthiness based on the player's behavioral data and environmental context, thereby providing more effective support for the trading process. This model enables the NPCs in the game to continuously adjust their decisions during interactions, improving the player experience.

1.2  Scope and Objectives

1.2.1      Scope

- Develop a computational trust model for a market-oriented simulation game.

- Design a lightweight dynamic trust value calculation framework.

- Focus the main research on trust value calculation, and implement NPC decision logic as a secondary task.

1.2.2      Objectives

- Design a confidence and reputation calculation algorithm based on fuzzy logic and reputation aggregation.

- Develop a dynamic trust value update mechanism that responds to player behavior.

- Verify the accuracy and applicability of the model in a simulated environment.

2  Literature Review

2.1     Related work

What is a trust model? The method used to specify, evaluate, and set up trust relationships amongst entities for the purpose of calculating trust is referred to as the trust model. Trust modeling is the technical approach used to represent trust for the purpose of digital processing [2].

Marsh (1994) was one of the first scholars to formalize trust in computing systems. In his approach, he integrated various aspects of trust from disciplines such as economics, psychology, philosophy, and sociology. Since then, many trust models have been constructed for various computing paradigms such as ubiquitous computing, P2P networks, and multi-agent systems [2][3].

2.2   Trust models in multi-agent systems

Trust computing plays an important role in multi-agent systems, providing core theoretical support for the modeling of complex interactions and intelligent decision-making [4].

The earliest trust computing models were usually static, for example assessing trust values through fixed rules or a single calculation based on past data. Dynamic trust models overcome the shortcomings of static models through real-time updating mechanisms [5].

In order to cope with the needs of multi-agent interactions in complex environments, multi-dimensional trust assessment models have gradually emerged [6]. For example, in addition to the traditional confidence and reputation dimensions, researchers have also introduced factors such as social relationships and risk assessment. These models improve the accuracy and applicability of trust values by weighting and fusing the evaluation results of multiple dimensions [7].

Interest in trust models has not decreased; the number of models present in the literature continues to grow [8]. Because these models can be applied to specific scenarios to support the development of projects, research on trust models remains directly relevant to this project.

2.3  Trust model selection

Many models for calculating trust have been developed. Marsh (1994) first proposed a formal calculation framework for trust, which laid the theoretical foundation for quantifying trust values. He treats trust as a value between -1 and 1, and his calculation method considers the risk of the interaction and the competence of the interaction partner [9]. However, these concepts are not given any precise grounding, and past experience and reputation values are not considered.

Another class of models uses reputation to stand in for trust, collecting assessments of an agent's ability from the social network in which the agent is located. The main value of this approach is that reputation symbolizes trust, but the assessment is too simple [10]. There are also probabilistic methods for building the model, which take past experience and reputation into account but do not significantly help in understanding the agent's decision-making [11].

We have chosen a trust model based on confidence and reputation. This model was the first to combine confidence and reputation for trust modeling, providing a context-aware trust calculation method, and it supplies a specific algorithmic framework that supports dynamic weight adjustment [12]. It is therefore well suited to market simulation games.

2.4   Trust computing in games

The application of trust computing models in games is mainly reflected in two aspects: improving the intelligent behavior of NPCs and enhancing the interactive experience of players, especially in real-time interactions in dynamic environments and in complex decision-making scenarios. Market simulation games, as a typical open interactive scenario, provide broad scope for applying the trust computing model.

The evaluation of player trust in games is also very important. Applying the trust computing model in games helps achieve fairness in online games and reduces the spread of untrustworthy information among players [13].

3  Project Plan

3.1  Proposed Solution / Methodology

3.1.1      Data preparation

1. Direct Interaction Data (Confidence):

- Get the player's actual performance on a specific issue from the historical case base CB.

- Key information for each interaction includes:

- Issue assignments O = {x_1 = v_1, x_2 = v_2, …}

- Execution results O′ = {x_1 = v_1′, x_2 = v_2′, …}

- Timestamp t

- Use a utility function U_x(v) to evaluate the utility of each issue value.

2. Indirect Interaction Data (Reputation):

Reputation information collected from the social network of agents.
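To make the data layout above concrete, the following Java sketch defines minimal types for a case-base entry (contract O, execution result O′, timestamp t) and for a reputation report collected from the social network. All class and field names are illustrative assumptions; the proposal does not fix a concrete schema.

import java.util.Map;

/** Minimal data types for the trust model's inputs (names are illustrative, not taken from the proposal). */
public final class TrustData {

    /** One entry of the historical case base CB: a contract O, its execution O′, and a timestamp t. */
    public record Interaction(Map<String, Double> contractValues,   // O  = {x_1 = v_1, ...}
                              Map<String, Double> executedValues,   // O′ = {x_1 = v_1′, ...}
                              long timestamp) {}

    /** A reputation report about one issue, gathered from the agent's social network. */
    public record ReputationReport(String reporterId,
                                   String issue,
                                   String label,       // fuzzy label, e.g. "Poor", "Average", "Good"
                                   double strength) {} // degree to which the reporter asserts the label

    /** Example utility function U_x(v) for a single issue; here a simple linear utility clipped to [0, 1]. */
    public static double utility(double v) {
        return Math.max(0.0, Math.min(1.0, v));
    }

    private TrustData() {}
}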

3.1.2       Confidence Calculation

Confidence measures the reliability of the target agent based on direct interactions. The process includes:

(1) Obtain the distribution of utility changes from historical data

Extract the distribution of utility changes ΔU_x = U_x(v′) − U_x(v) for an issue x from the interaction records, where v is the value agreed in the contract and v′ is the actual implementation result.

(2) Estimate Confidence Interval

Determine the confidence interval [v−, v+] for the agent's possible performance on issue x based on historical data.

(3) Fuzzify Assessment

Map the confidence interval to linguistic labels L = {Poor, Average, Good} and assign confidence levels C(x, L) to each label.
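Steps (1)-(3) can be sketched as follows: compute the observed utility changes, take the empirical interval [v−, v+] of delivered values, and spread confidence levels C(x, L) over the labels {Poor, Average, Good}. The triangular membership functions and the averaging rule below are assumptions made for illustration; the proposal does not specify them.

import java.util.EnumMap;
import java.util.List;
import java.util.Map;

/** Sketch of steps (1)-(3) of the confidence calculation; membership functions are illustrative assumptions. */
public final class ConfidenceFuzzifier {

    public enum Label { POOR, AVERAGE, GOOD }

    /** Step (1): utility change ΔU_x = U_x(v′) − U_x(v) for one recorded interaction. */
    public static double utilityChange(double contractedValue, double executedValue) {
        return utility(executedValue) - utility(contractedValue);
    }

    /** Step (2): empirical interval [v−, v+] of the values the agent actually delivered for issue x. */
    public static double[] confidenceInterval(List<Double> executedValues) {
        double lo = executedValues.stream().mapToDouble(Double::doubleValue).min().orElse(0.0);
        double hi = executedValues.stream().mapToDouble(Double::doubleValue).max().orElse(0.0);
        return new double[] { lo, hi };
    }

    /** Step (3): confidence levels C(x, L) obtained by averaging assumed memberships over the ΔU_x samples. */
    public static Map<Label, Double> fuzzify(List<Double> utilityChanges) {
        Map<Label, Double> levels = new EnumMap<>(Label.class);
        for (Label l : Label.values()) {
            double sum = 0.0;
            for (double d : utilityChanges) sum += membership(l, d);
            levels.put(l, utilityChanges.isEmpty() ? 0.0 : sum / utilityChanges.size());
        }
        return levels;
    }

    /** Assumed triangular memberships over a utility change in [-1, 1]; not taken from the proposal. */
    static double membership(Label l, double d) {
        return switch (l) {
            case POOR    -> clamp(-d);                 // a large utility loss maps to "Poor"
            case GOOD    -> clamp(d);                  // a utility gain maps to "Good"
            case AVERAGE -> clamp(1.0 - Math.abs(d));  // a change near zero maps to "Average"
        };
    }

    /** Same simple linear utility on [0, 1] as in the data-preparation sketch (illustrative). */
    static double utility(double v) { return Math.max(0.0, Math.min(1.0, v)); }

    private static double clamp(double x) { return Math.max(0.0, Math.min(1.0, x)); }

    private ConfidenceFuzzifier() {}
}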

(4) Compute Expected Value Range

Using confidence levels, calculate the expected value range for issue x:

 

where μ_L is the membership function for label L.

(5) Calculate Maximum Utility Loss

Within the expected value range, calculate the maximum utility loss:


 

(6) Derive Confidence Trust Value

Based on the maximum utility loss, compute the trust value for issue x based on confidence:
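The closed forms for steps (4)-(6) appear as images in the original document and are not reproduced here. The sketch below therefore assumes one plausible reading: the maximum utility loss is the worst-case drop within the expected value range, and the confidence trust value is one minus the normalized maximum loss. Both formulas are illustrative assumptions, not the model's definitive equations.

/** Sketch of steps (4)-(6): maximum utility loss and confidence trust value (formulas assumed, see lead-in). */
public final class ConfidenceTrust {

    /**
     * Step (5): maximum utility loss within the expected value range [lo, hi] of issue x,
     * relative to the utility of the contracted value. Utilities are assumed to lie in [0, 1].
     */
    public static double maxUtilityLoss(double contractedValue, double lo, double hi) {
        double contractedUtility = utility(contractedValue);
        double worstUtility = Math.min(utility(lo), utility(hi));
        return Math.max(0.0, contractedUtility - worstUtility);
    }

    /** Step (6): assumed mapping "trust = 1 − normalized maximum loss". */
    public static double trustFromLoss(double maxLoss) {
        return 1.0 - Math.max(0.0, Math.min(1.0, maxLoss));
    }

    /** Same simple linear utility on [0, 1] as in the earlier sketches (illustrative). */
    static double utility(double v) { return Math.max(0.0, Math.min(1.0, v)); }

    private ConfidenceTrust() {}
}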

 

3.1.3      Reputation calculation

The reputation value measures indirect information collected from other agents and is calculated as follows:

(1) Obtain the reputation value distribution Rep(x, L) of the target agent on issue x from the social network, where L is the fuzzy set label.

(2) Calculate expected value range

Similar to the confidence case, calculate the expected value range for issue x based on the reputation value distribution.

 

(3) Calculate maximum utility loss

Calculate maximum utility loss based on reputation value:

(4) Calculate reputation trust value



Calculate the reputation trust value of the target agent on issue x based on the maximum utility loss.
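Reputation follows the same pipeline, except that the label distribution Rep(x, L) is aggregated from peer reports rather than from the case base CB. A minimal sketch, assuming reports are simply averaged per label and that the loss-to-trust mapping mirrors the confidence case:

import java.util.EnumMap;
import java.util.List;
import java.util.Map;

/** Sketch of section 3.1.3: aggregate peer reports into Rep(x, L) and reuse the loss-based trust mapping. */
public final class ReputationTrust {

    public enum Label { POOR, AVERAGE, GOOD }

    /** A peer's report: which label it asserts for issue x and with what strength in [0, 1] (illustrative type). */
    public record Report(Label label, double strength) {}

    /** Average the reports per label to obtain the reputation distribution Rep(x, L) (assumed aggregation rule). */
    public static Map<Label, Double> aggregate(List<Report> reports) {
        Map<Label, Double> sums = new EnumMap<>(Label.class);
        Map<Label, Integer> counts = new EnumMap<>(Label.class);
        for (Report r : reports) {
            sums.merge(r.label(), r.strength(), Double::sum);
            counts.merge(r.label(), 1, Integer::sum);
        }
        Map<Label, Double> rep = new EnumMap<>(Label.class);
        for (Label l : Label.values()) {
            rep.put(l, counts.containsKey(l) ? sums.get(l) / counts.get(l) : 0.0);
        }
        return rep;
    }

    /** Assumed mapping from maximum utility loss to the reputation trust value, mirroring the confidence case. */
    public static double trustFromLoss(double maxLoss) {
        return 1.0 - Math.max(0.0, Math.min(1.0, maxLoss));
    }

    private ReputationTrust() {}
}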

3.1.4         Combining confidence and reputation

In practical scenarios, confidence and reputation are often used in combination.  The combination process is as follows:

(1) Determine the weight

|CB|: the number of interactions in the history.

θ_min: the confidence threshold, i.e. the minimum number of interactions at which the confidence value completely dominates trust.

(2) Calculate the comprehensive expected value range

Combine confidence level and reputation to calculate the comprehensive expected value range for issue x.

 

Further information on the expected value range:

 

(3) Calculate the overall trust value



Calculate the maximum utility loss based on the overall expected value range:

 

The final overall trust score is obtained:
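The combination formulas referred to above appear as images in the source and are not reproduced here. A minimal sketch, assuming the weight w = min(|CB| / θ_min, 1) and a linear blend T = w·T_confidence + (1 − w)·T_reputation, which matches the stated intuition that confidence dominates once enough direct interactions have accumulated; the actual formulas in [12] may differ.

/** Sketch of section 3.1.4: blend confidence and reputation trust with an interaction-count weight (assumed form). */
public final class CombinedTrust {

    /** Assumed weight w = min(|CB| / θ_min, 1): confidence fully dominates once |CB| >= θ_min. */
    public static double confidenceWeight(int interactionCount, int thetaMin) {
        if (thetaMin <= 0) return 1.0;
        return Math.min(1.0, (double) interactionCount / thetaMin);
    }

    /** Assumed linear blend of the two trust values for issue x. */
    public static double combine(double confidenceTrust, double reputationTrust, double w) {
        return w * confidenceTrust + (1.0 - w) * reputationTrust;
    }

    // Example: with |CB| = 3 and θ_min = 10, w = 0.3, so reputation still carries 70% of the weight.
    public static void main(String[] args) {
        double w = confidenceWeight(3, 10);
        System.out.println("T = " + combine(0.8, 0.5, w)); // prints a value of approximately 0.59
    }

    private CombinedTrust() {}
}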

3.1.5        Construction of the trust model

(1) Confidence is the only source of trust (Trust = Confidence)

In this case, only direct interactions are considered to be a valid source of information for measuring the performance of another agent. The first contract will be full of uncertainty, and this definition of trust will only work  effectively when there have been  enough interactions.

Then the trust value for issue x is defined as:

(2) Reputation as the only source of trust (Trust = Reputation)

When the number of interactions is small, confidence cannot provide sufficient information, and reputation information may be more useful. This is a common situation.

The trust value of issue x is defined as:



(3) Combining confidence and reputation (Trust = Confidence and Reputation)

In most cases, it is more reasonable to combine confidence and reputation. The logic is that as interactions between agents increase, NPCs become more and more dependent on their own confidence measurement rather than on the reputation information provided by others (because direct interactions are usually more accurate than indirect information).

Finally, the trust value of issue x is defined as:

Our definition of trust (especially the last approach) views trust as a dynamic and rational concept.
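Read as a selection rule over the available evidence, the three cases above can be sketched as follows, assuming the NPC falls back to reputation when it has no direct history, relies on confidence alone once θ_min interactions have been observed, and blends the two otherwise (the thresholds and the blend are assumptions):

/** Sketch of section 3.1.5: choose the trust source from the available evidence (selection rule is assumed). */
public final class TrustSourceSelector {

    public static double trust(int interactionCount, int thetaMin,
                               double confidenceTrust, double reputationTrust) {
        if (interactionCount == 0) {
            return reputationTrust;                      // (2) Trust = Reputation: no direct history yet
        }
        if (interactionCount >= thetaMin) {
            return confidenceTrust;                      // (1) Trust = Confidence: enough direct interactions
        }
        double w = Math.min(1.0, (double) interactionCount / thetaMin);
        return w * confidenceTrust + (1.0 - w) * reputationTrust; // (3) combined case
    }

    private TrustSourceSelector() {}
}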

3.1.6       Decision-making framework

The comprehensive trust value T(β, X) is compared with a preset threshold.

If T(β, X) is greater than the preset threshold, the NPC accepts the player's transaction request.

If T(β, X) is less than the preset threshold, the NPC rejects the transaction.
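The decision rule itself reduces to a threshold test. A minimal sketch; the threshold value and the handling of the boundary case T(β, X) equal to the threshold are assumptions, since the proposal does not fix them:

/** Sketch of section 3.1.6: accept the player's request iff the overall trust T(β, X) clears a preset threshold. */
public final class TradeDecision {

    public static boolean accept(double overallTrust, double threshold) {
        return overallTrust > threshold; // ties are rejected here; the proposal does not specify the boundary case
    }

    public static void main(String[] args) {
        double threshold = 0.6; // illustrative value, not taken from the proposal
        System.out.println(accept(0.72, threshold)); // true  -> NPC accepts the transaction
        System.out.println(accept(0.41, threshold)); // false -> NPC rejects the transaction
    }

    private TradeDecision() {}
}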

3.2  Experimental Design

Flowchart


Figure 1 Decision-making process flow chart


3.2.1     Testbed architecture

A test platform is designed to simulate the interaction between NPCs and players and to test the NPC decision-making mechanism based on the trust model (confidence, reputation, and comprehensive trustworthiness), ensuring that the decisions are reasonable and dynamically adaptive.

Figure 2 Testbed architecture (1)

Figure 3 Testbed architecture (2)


The testbed architecture has four components: the simulation engine, the database, the user interface, and the agent framework.

The simulation engine is responsible for starting the game, controlling the simulation environment by adjusting parameters, and managing processes such as player requests, NPC trust calculations, and decision execution.

The database stores environment and agent data. This testbed also provides the ability to record other data types in the database, as well as data replay and analysis tools.

The user interface provides real-time visualization of NPC-player interactions, trust value changes, and decision results. Figure 4 shows a game monitoring interface.

The agent skeleton is designed to allow the incorporation of custom internal trust representations and trust revision algorithms. The Java classes that define the agent skeleton implement all the interfaces necessary for agent-agent interaction (via the simulation engine). The agent skeleton also handles coordination tasks with the simulation engine, such as opinion formation and evaluation calculations. In the future, agent skeletons could be developed in other programming languages to give agent designers more flexibility.
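To illustrate what the agent skeleton's Java interfaces might look like, the sketch below lists plausible hooks for request handling, opinion formation, trust revision, and monitoring. Every name and signature here is hypothetical; the proposal only states that such interfaces exist.

/** Hypothetical shape of the agent skeleton's interfaces; names and signatures are illustrative only. */
public interface AgentSkeleton {

    /** Called by the simulation engine when a player (or another agent) sends a request. */
    void onRequest(String requesterId, String issue, double proposedValue);

    /** Coordination hook: form an opinion (reputation report) about another agent for the engine to distribute. */
    double formOpinion(String targetAgentId, String issue);

    /** Coordination hook: update the internal trust representation after an interaction has been executed. */
    void onInteractionResult(String partnerId, String issue, double contractedValue, double executedValue);

    /** Expose the current trust value so the engine and the monitoring UI can log and visualize it. */
    double currentTrust(String partnerId, String issue);
}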


Figure 4 Game monitoring interface.

3.2.2       Overview of the process

1. The player sends a request to the NPC (e.g., a quest or a trade). The player's behavior can be designed to be honest, dishonest, or mixed.

2. The simulation engine assigns the request, and the NPC calculates the player's overall trustworthiness.


3. The NPC makes a decision to accept or reject based on the trustworthiness and the set threshold.

4. The player performs the task, and the simulation engine records the task result and updates the trust value.

5. The data storage module records the interaction data and provides analysis and playback functions (one round of this loop is sketched below).
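The five steps above make up one round of the simulation loop. A compact sketch of such a round, with all types, values, and the logging policy chosen purely for illustration:

import java.util.ArrayList;
import java.util.List;

/** Sketch of one simulation round following steps 1-5 of section 3.2.2 (all types and policies are illustrative). */
public final class SimulationRound {

    record Request(String playerId, String issue, double proposedValue) {}
    record LogEntry(String playerId, double trust, boolean accepted, double executedValue) {}

    private final List<LogEntry> log = new ArrayList<>(); // step 5: recorded for later analysis and replay

    /** Steps 2-5 for a single request; the overall trust value is assumed to have been computed already. */
    public void run(Request req, double overallTrust, double threshold, double executedValue) {
        boolean accepted = overallTrust > threshold;           // step 3: threshold decision
        double result = accepted ? executedValue : Double.NaN; // step 4: the task is executed only if accepted
        log.add(new LogEntry(req.playerId(), overallTrust, accepted, result));
    }

    public List<LogEntry> log() { return log; }

    public static void main(String[] args) {
        SimulationRound round = new SimulationRound();
        // step 1: the player sends a request; step 2: the NPC's overall trust (here 0.72) has been computed
        round.run(new Request("player-1", "delivery_time", 0.9), 0.72, 0.6, 0.85);
        System.out.println(round.log());
    }
}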

3.2.3       Test indicators

1. Trend of trust changes: whether the dynamic changes in confidence, reputation, and overall trust under different player behavior patterns are as expected.

2. Decision accuracy: consistency between the NPC's acceptance/rejection decisions and the player's actual behavior (a sketch of this metric follows the list).

3. Model adaptability: whether the model can dynamically adjust trust values and decisions based on player behavior patterns.
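Indicator 2 (decision accuracy) can be computed directly from the recorded interactions. A minimal sketch, assuming each logged decision is paired with a ground-truth flag indicating whether the player actually behaved honestly in that interaction:

import java.util.List;

/** Sketch of indicator 2: fraction of NPC decisions consistent with the player's actual behavior (assumed definition). */
public final class DecisionAccuracy {

    /** One evaluated decision: did the NPC accept, and was the player honest in that interaction? */
    public record Outcome(boolean accepted, boolean playerWasHonest) {}

    public static double accuracy(List<Outcome> outcomes) {
        if (outcomes.isEmpty()) return 0.0;
        long consistent = outcomes.stream()
                .filter(o -> o.accepted() == o.playerWasHonest()) // accept honest players, reject dishonest ones
                .count();
        return (double) consistent / outcomes.size();
    }

    private DecisionAccuracy() {}
}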

3.3  Expected Results

3.3.1      Expected results

NPC decision-making is expected to be highly consistent with player behavior patterns:

1. Honest players: overall trustworthiness gradually increases and the NPC acceptance rate rises.

2. Dishonest players: overall trustworthiness gradually decreases and the NPC rejection rate rises.

3. Mixed players: overall trustworthiness and decision-making show dynamic fluctuations.

3.3.2       Data visualization

Plot the trust value change curve and decision-making distribution map to demonstrate the dynamic adjustment capability of the model.
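On the Java side this only requires exporting the per-round trust values; the curves and distribution maps can then be plotted with any external tool. A minimal sketch assuming a plain CSV export (the file format is an implementation choice, not specified in the proposal):

import java.io.IOException;
import java.io.PrintWriter;
import java.util.List;

/** Sketch: export the per-round trust values to CSV so the trust curve can be plotted externally (assumed format). */
public final class TrustCurveExporter {

    public static void export(String path, List<Double> trustPerRound) throws IOException {
        try (PrintWriter out = new PrintWriter(path)) {
            out.println("round,trust");
            for (int i = 0; i < trustPerRound.size(); i++) {
                out.println(i + "," + trustPerRound.get(i));
            }
        }
    }

    private TrustCurveExporter() {}
}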

3.4  Progress Analysis and Gantt Chart

In this final year project (FYP), the literature research on the application of multi-agent trust models to simulated market games will be completed in October 2024, and the project proposal will be submitted in November. From November to December, relevant knowledge and skills will be acquired, relevant data will be collected, and the trust model will be established and analyzed. From December 2024 to April 2025, the experiments will be completed, the results will be evaluated and compared, and a first draft of the paper will be written in the process. Finally, the paper will be revised and the defense will be completed.

Figure 5 Gantt chart


4  Conclusion

This project  focuses  on  the  dynamic  trust  calculation  of NPCs  in  market  simulation games. By combining confidence and reputation, an efficient trust assessment framework is proposed. At the same time, its applicability and performance in dynamic scenarios are verified through experiments, providing a reference for further research in the field of game AI.

However, this study is limited to NPC accept-or-reject decisions. The trust model described in this paper could also guide NPCs to make more complex decisions, such as modifying the content of a transaction; given the opportunity, this can be studied further in the future.


