- Open Access
A trust-based framework for vehicular travel with non-binary reports and its validation via an extensive simulation testbed
© Cohen et al.; licensee Springer. 2014
- Received: 24 October 2013
- Accepted: 28 August 2014
- Published: 23 October 2014
In this paper, we offer an algorithm for intelligent decision making about travel path planning in mobile vehicular ad-hoc networks (VANETs), for scenarios where agents representing vehicles exchange reports about traffic. One challenge that arises is how best to model the trustworthiness of those traffic reports. To this end, we outline an algorithm for effectively soliciting, receiving and analyzing the trustworthiness of these reports, to drive a vehicle’s decision about the path to follow. Distinct from earlier work, we clarify the need for specifying the conditions under which reports are exchanged and for processing non-binary reports, culminating in a proposed algorithm to achieve that processing, as part of the trust modeling and path planning. To validate our approach we then offer a detailed evaluation framework that achieves large-scale simulation of traffic, travel and reporting of information, confirming the value of our proposed approach by comparing the average speed of vehicles that follow our algorithm with that of vehicles that do not. This experimental framework is promoted as a significant contribution towards the goal of evaluating trust algorithms for intelligent decision making in traffic scenarios.
- Multi-faceted trust modeling
- Multiagent systems
- Vehicle routing
- Traffic control
In this paper, we present a method for exchanging reports between agents in multiagent systems that allows the trustworthiness of peers providing non-binary information to be modeled, as part of an agent’s decision making process. We are motivated by the problem of enabling agents to make travel decisions based on traffic reports received from peers, in a setting of mobile vehicular ad-hoc networks (VANETs). In this environment, maintaining a multi-faceted trust model is of value, and our proposal for supporting non-binary reports ultimately integrates each facet of this trust model, in order for an agent to determine which travel path to follow. For example, a non-binary report could indicate a traffic congestion figure, rather than a binary response to a question such as “Is the traffic heavy?”. Our starting point is a model that includes a calculation of the consensus opinion about roads from the majority of agents, but that assumes only binary reports. From here, we sketch algorithms that clarify in greater detail how to support effective communication between the agents in the environment and how this would then dictate the travel decision making of an agent who is receiving traffic reports from peers.
In order to demonstrate the effectiveness of our framework, we introduce a detailed testbed that simulates vehicles traveling in an environment, making path planning decisions based on non-binary traffic reports from peers whose trustworthiness has been modeled. We offer an extensive set of simulations that serve to validate our approach, comparing the average path time taken by our vehicles against a best-case scenario with perfect knowledge and against models that integrate less detailed trust modeling.
The dual contributions are: i) an effective decision making process for intelligent agents in VANET environments where trust is modeled and non-binary reports are exchanged; and ii) an extensive testbed of use for measuring the value of different trust modeling algorithms, in travel environments where agents exchange reports. We clarify the importance of these contributions in comparison with related work in the field.
In this section, we outline our original framework for modeling trust in VANET environments (-). We consider the driver of each vehicle in our VANET environment to be represented by an agent. In order for each vehicle on the road to make effective traffic decisions, information is sought from other vehicles (about the traffic congestion on a particular road). As a result, for each driver an intelligent agent constructs and maintains a model of each of the other vehicles. Travel decisions are then made based on a multi-faceted model of agent trustworthiness. This is necessary because, when asked, each agent may report inaccurate traffic congestion in an effort to deflect other vehicles from certain roads. In particular, we propose a core processing algorithm to be used by each agent that seeks advice (about travel paths, based on traffic) from other vehicles in the environment, as summarized below.
In order to cope with possible data sparsity, various facets (highlighted in this section in bold) of each agent are taken into consideration when reasoning about travel, including the agent’s role, location and inherent trustworthiness (determined on the basis of past experiences with this particular agent - i.e. whether past advice has proven to be trustworthy). Each of these facets of the agent is stored within the trust model.
We first acknowledge that certain vehicles in the environment may play a particular role and, on this basis, merit greater estimates of trustworthiness. For example, there may be vehicles representing the police and other traffic authorities (authority) or ones representing radio stations dedicated to determining accurate traffic reports by maintaining vehicles in the vicinity of the central routes (expert). Or there may be a collection of agents representing a “commuter pool”, routinely traveling the same route, sharing advice (seniority).
Experience-based trustworthiness is represented and maintained following the model of , where T_A(B) ∈ (−1, 1) represents A’s trust in B (with −1 for total distrust and 1 for total trust), which is incremented by 0 < α < 1 using Equation (1) if B’s advice is found to be reliable (a positive experience), or decremented by −1 < β < 0 using Equation (2) if unreliable (a negative experience), with |β| > α to reflect that trust is harder to build up but easier to tear down. Distinct from the original model of , the values of α and β can be set to be event-specific. For example, when asking about a major accident, these values may be set high, to reflect considerable disappointment with inaccurate advice. We also incorporate a requirement for agents to reveal whether the traffic information they are providing has been directly observed or only indirectly inferred from other reports that agent has received. The critical distinction between direct and indirect reporting then influences the values set for α and β, introducing greater penalties for disappointment with direct advice. In  we discuss at greater length the incentives to honesty that are introduced within this framework; for brevity, we omit that discussion in this paper.
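The experience-based update just described can be sketched in code. Since the exact forms of Equations (1) and (2) are not reproduced in this section, the particular increment toward +1 and decrement toward −1 below, along with the parameter values, are illustrative assumptions:

```python
def update_trust(t, positive, alpha=0.1, beta=-0.2):
    """Update experience-based trust t in (-1, 1) after one interaction.

    alpha in (0, 1) rewards reliable advice (in the spirit of Equation (1));
    beta in (-1, 0) penalizes unreliable advice (Equation (2)), with
    |beta| > alpha so trust is harder to build up than to tear down.
    Both the update forms and the default values are illustrative.
    """
    if positive:
        t = t + alpha * (1 - t)   # move toward +1, reliable advice
    else:
        t = t + beta * (1 + t)    # move toward -1, unreliable advice
    return max(-1.0, min(1.0, t))
```

Event-specific α and β (e.g. larger magnitudes when asking about a major accident, or when direct advice disappoints) can simply be passed in per update.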
A central calculation influencing the travel decision of each agent is the determination of majority consensus amongst the agents providing advice about a particular road. The agent maintains, as part of her model of other agents, a list of agents to ask for advice. This list is ordered from higher roles to lower roles, with each group G_i of agents of similar roles being ordered from higher experience-based trust ratings to lower ratings. The agent sets a value n and asks the first n agents from her ordered list the question (thus using priority-based trust), receives their responses (reports), and then performs the majority-based trust measurement. Suppose that q of these n agents declare that their reports are from direct experience/observation. The requesting agent determines whether there are sufficient direct witnesses such that she can make a decision based solely on their reports.
If q ≥ N_min, there are sufficient direct witnesses and the requesting agent considers only their q reports: if a majority consensus on a response can be reached, up to some tolerance set by the requester (e.g. the agent may want at most 30% of the responders to disagree), then that response is taken as the advice and followed. If q < N_min, then there are insufficient direct witnesses; the agent will consider reports from both direct and indirect witnesses, assigning different weight factors to them, computing and following the majority opinion. (Once the actual road conditions are verified, the requesting agent adjusts the experience-based trust ratings of the reporting agents: it penalizes (rewards) more those agents who reported incorrect (correct) information in the direct experience case than those agents with incorrect (correct) information in the indirect experience case.) If a majority consensus cannot be reached, then instead the agent relies on role-based trust and experience-based trust (e.g., taking the advice from the agent with the highest role and highest experience trust value). Note that in order to eventually admit new agents into consideration, the agent will also ask a certain number of agents beyond the n-th one in the list. The responses here will not be considered for the decision, but will be verified to update experience-based trust ratings; some of these agents may make it into the top n agents in this way.
The computation of majority consensus adheres to the following set of formulae. Suppose agent A receives a set of m reports from a set of n other agents regarding an event. Agent A will weigh more heavily the reports sent by agents who have higher-level roles and larger experience-based trust values. When performing the majority-based process, we also take into account the closeness of the reporting agent's location to the reported event, and the closeness between the time when the event took place and the time the report is received. We define C_t (time closeness), C_l (location closeness), T_e (experience-based trust) and T_r (role-based trust). Note that all these parameters belong to the interval (0,1), except that T_e needs to be scaled to fit within this interval via (T_e + 1)/2.
W(B_i) is a weight factor, set to 1 if the agent B_i who sent report R_j is a direct witness, and set to a value in (0,1) if B_i is an indirect witness.
where ε ∈ (0,1) is set by agent A to represent the maximum error rate that A can accept. A majority consensus can be reached if the percentage of the effect of one opinion over the total effect of all possible opinions is above the threshold set by agent A.
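As a rough illustration of the consensus computation described above, the following sketch combines the per-report factors into an effect and checks the requester's tolerance ε; the multiplicative combination and all names are assumptions for illustration, not the paper's exact Formula (3):

```python
def report_effect(T_e, T_r, C_t, C_l, W):
    """Effect contributed by a single report (one term of the aggregation).

    T_e is experience-based trust in (-1, 1), scaled here into (0, 1);
    T_r, C_t, C_l and W are already in (0, 1). The multiplicative
    combination is an illustrative assumption.
    """
    return ((T_e + 1) / 2) * T_r * C_t * C_l * W

def majority_consensus(effects_by_opinion, epsilon=0.3):
    """Return the consensus opinion if its share of the total effect is at
    least 1 - epsilon (the requester's tolerance), else None."""
    total = sum(effects_by_opinion.values())
    opinion, best = max(effects_by_opinion.items(), key=lambda kv: kv[1])
    if total > 0 and best / total >= 1 - epsilon:
        return opinion
    return None
```

For instance, with effects {heavy: 0.8, light: 0.1} and ε = 0.3, the "heavy" opinion carries about 89% of the total effect and a consensus is reached; a 50/50 split yields no consensus.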
The trust modeling framework described so far clarifies the algorithms that lead to the calculation of the trustworthiness value which would then be stored in each agent model. Trip planning decisions of a vehicle would then be made in light of these particular agent models. One element that requires further clarification is detailed agent communication protocols to exchange reports. This is elaborated in the section that follows.
The framework in  (see also ,) is designed with a pull-based communication protocol, where agents send requests to other agents for information. In addition to this classic pull-oriented design, we introduce a push-based protocol for broadcasting information. These protocols dictate when communication is initiated and to whom. Either or both of the two protocols can be used for communicating information between agents. Algorithm 2 describes the push- and pull-based protocol and how a priority road information request is sent by agents. This is part of our proposal for specifying when trust modeling should be integrated into the decision making process of these agents.
We note that this algorithm serves to provide important detail and clarification to advance the earlier proposal of . In that work, the messaging proposed was vague. It was suggested that the message content (congestion information about a road) would be a “yes” or “no” response to a question “Is this road congested?” and that this response would be pulled to the requesting agent. When the pulls would occur was left vague as “in need of advice”. As such, which roads were being investigated was also left unspecified. The concept of a priority road, introduced below, facilitates messaging and serves to provide the clearer specification of communication. Roads are placed into priority for an agent if there is a gap of information about congestion; subsequent to receiving a report about a priority road, that road’s status may be altered to cause it to be removed from the priority list (if sufficient information on that road has accumulated). How agents choose to designate a road as priority can be left as an implementation detail. In the simulations used to validate our model, if road information was empty or was sufficiently old, that road would be added to the priority list.
The pull protocol allows agents (requester) to request information from other agents (requestee). The trustworthiness of the information from the requestee agent is modeled and used to determine what path to follow based on the report produced. On the other hand, the push protocol allows agents to send information to other agents, even if it were not requested. The trustworthiness of the sender agent is still modeled; this may then be employed during decision making about travel paths. Both of these protocols are set to occur according to a certain communication frequency; this is the tactic employed during our simulation of traffic which serves to provide the validation of our proposed framework (see Section “Simulation results”). Setting the communication to happen fairly frequently allows agents to inquire about any roads for which they lack sufficient guidance and keeps the information flowing between agents, from the push broadcasting.
Three types of messages are supported within our protocol. The three messages are a transmission of an agent’s location and congestion (Location and Congestion Push), a request for congestion information about a specific road (Priority Road Information Pull Request), and a response for congestion information about a specific road (Priority Road Information Pull Response).
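The three message types could be represented, for illustration, as simple containers; all field names here are hypothetical, not taken from the protocol specification:

```python
from dataclasses import dataclass

@dataclass
class LocationCongestionPush:
    """Periodic broadcast of a vehicle's location and observed congestion."""
    sender_id: str
    road_id: str
    congestion: float   # numeric congestion figure (non-binary report)
    timestamp: float
    direct: bool        # directly observed vs. relayed

@dataclass
class PriorityRoadPullRequest:
    """Request for congestion information about a road on the
    requester's priority list (stale or missing information)."""
    requester_id: str
    road_id: str

@dataclass
class PriorityRoadPullResponse:
    """Response carrying a congestion report for the requested road."""
    responder_id: str
    road_id: str
    congestion: float
    timestamp: float
    direct: bool        # False when relaying a third agent's report
    confidence: float   # responder's own confidence in the report
```

A requester would send a `PriorityRoadPullRequest` for each priority road, while every agent periodically emits `LocationCongestionPush` messages at the configured communication frequency.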
We begin with a clarification of how our messaging framework would support trust modeling in the context of Boolean traffic reports. Algorithm 1 theoretically sends requests only to agents in a prioritized list, when advice is needed. Our proposed update to this algorithm, shown in Algorithm 3, would have each agent’s knowledge base continuously updated with periodic messages, from the pull protocol, the push protocol, or both. When advice is needed, the most relevant and trustworthy reports are chosen and used.
The work by Minhas et al. mentioned in Section “Background: multi-faceted trust model” presented a Multi-faceted Trust Management Framework that was described as operational for Boolean values of congestion (Heavy (True), Light (False)). In order to calculate a majority opinion, reports which featured the same Boolean value of congestion were aggregated together. The percentage of reports with same congestion value would be compared against a threshold to determine whether the advice would be followed. The trust modeling itself respects the formulae outlined in Section “Background: multi-faceted trust model”. The use of a new advice gathering protocol (as per Algorithm 2) would not intrinsically alter the majority opinion calculation; it simply clarifies how traffic reports are retrieved. Note that calling Check Priority Road(Current Road) within this algorithm has the eventual effect of coping with stale or missing information on roads that are critical to current path planning.
In this section we clarify how our framework could support the use of numeric traffic reports, leading to a “confidence metric” used for trust modeling, in contrast to the Boolean evaluation of traffic in Section “Background: multi-faceted trust model”. Our new proposed confidence metric and use of numeric congestion and trust values serve to allow a more accurate description of traffic and agent information.
The original theory in Section “Background: multi-faceted trust model” assumed that congestion would be communicated as a simple true (Heavy) or false (Light), stating either that the road was congested or not. However, direct application may result in an unfair and biased calculation of the majority opinion, because determining whether a road is congested is a subjective judgment that is prone to inaccuracies. Moreover, representing congestion as a Boolean severely limits the system’s ability to compare roads, evaluate agents, and make the best decisions. Our proposed model alleviates this problem by representing congestion as a number, which brings a more suitable level of accuracy to the system.
Formula (3) shows the calculation of the aggregated effect of a majority opinion. The new way of representing congestion as a numeric value requires a careful recasting of Formula (3), which aggregates the effect of all agents that sent the same report (i.e. cong = true). This simple aggregation of similar reports is impossible with the new congestion representation because there are no longer only two types of reports (cong = true or cong = false). In the new framework, each report must be evaluated for addition into the majority opinion system. This is done by giving the report a confidence and then evaluating it for inclusion into the majority opinion (similar to the aggregated effect calculation).
The following sections will detail how the factors of experience and role based trust, time and location closeness, and whether the advice is direct or indirect are incorporated into our proposed confidence metric and utilized in calculating a majority opinion.
Confidence functions as a metric similar to trust, and is calculated by combining many different report and agent factors, which were introduced in Formula (3) and will be described in detail later in this section. These factors include experience and role based trust, time and location closeness, and whether the advice is direct or indirect.
Our proposed equation for calculating confidence must effectively replace Formula (3), while representing a trust-like metric. Modifications to confidence should then be reflected in a manner similar to how trust is increased and decreased in Equations (1) and (2). α and β function in these equations as a standard for increasing and decreasing trust, respectively. For our proposed confidence calculation, it did not make sense to increase or decrease the value by a fixed amount for each influencing factor (role, time closeness, etc.); the increase or decrease should reflect the significance of the factor. As a result, our proposed confidence metric replaces Formula (3) with Equation (6), where Equations (1) and (2) are used as the basis for calculating the confidence of report R_j, through a modified summation of a geometric series.
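A minimal sketch of such a factor-weighted confidence update follows, assuming a simple linear nudge per factor; the paper's actual Equation (6) uses a modified geometric-series summation, which is not reproduced here, so this is a stand-in for the idea rather than the exact formula:

```python
def adjust_confidence(c, factor, weight):
    """Nudge confidence c in (0, 1) according to one influencing factor.

    factor in (0, 1): values above 0.5 raise confidence, values below
    lower it; weight scales how significant this factor is. The linear
    form here is an illustrative assumption.
    """
    if factor >= 0.5:
        return c + weight * (factor - 0.5) * 2 * (1 - c)  # move toward 1
    return c - weight * (0.5 - factor) * 2 * c            # move toward 0

def report_confidence(factors, weights, c0=0.5):
    """Fold all factors (experience- and role-based trust, time and
    location closeness, direct/indirect) into one confidence value,
    starting from a neutral prior."""
    c = c0
    for f, w in zip(factors, weights):
        c = adjust_confidence(c, f, w)
    return c
```

Note how, as with α and β in Equations (1) and (2), a strong factor moves confidence proportionally to the remaining headroom, so repeated strong evidence approaches but never exceeds 1.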
Majority based trust is incorporated into our framework as a core algorithm for determining the trustworthiness of an agent, to then dictate whether to believe the congestion value reported about a road, which influences path planning. Section “Background: multi-faceted trust model” describes majority based trust as a consensus, with a value which has been agreed upon by many agents. For our proposed non-Boolean extension to trust modeling, majority based trust is described as an opinion, where a similar value has been agreed upon by many agents. The rationale for the change from a Boolean based congestion value to a numerical congestion value was described in the beginning of Section “Our proposed numeric trust modeling”.
The advice is used by choosing and prioritizing information from various reports and calculating a majority opinion, which is followed if its confidence is above a threshold, similar to the threshold of Equation 4. The primary advice presented in Section “Background: multi-faceted trust model” would be road congestion reports, which would be used to help an agent decide what roads to take and which to avoid by considering all the facets of the multidimensional trust model. This continues to hold in our framework. In our calculation, if the confidence is below a threshold, then the advice is used from the report with the highest confidence.
The majority opinion is calculated using Algorithm 4. All relevant advice reports referencing a location are retrieved and prioritized into a list of size n. The majority opinion is then calculated, stored, and reported back to the agent. If a report contains information that is suspicious with respect to other reports that have been observed, such as an extremely high congestion report, the sender is reported as a suspicious agent. Labeling agents as suspicious is helpful in order to remove them from consideration, regardless of their current trustworthiness value. The framework will then process the suspicious agent, profiling it and updating its trust value in the knowledge base.
Algorithm 4 is a modified algorithm from Algorithm 1, which shows the calculation of a majority opinion in the framework. The algorithm uses suspicious agent detection in helping to avoid the inclusion of congestion advice which is outside a standard deviation from the current majority congestion. The majority opinion is used if there are at least n agents to use advice from and the majority confidence is above the majority threshold.
Suspicion detection is important to include in order to help avoid congestion advice that greatly deviates from the current majority. The majority opinion is thus formed only from advice with similar congestion reports, rather than being conceived as simply the average congestion reported by the n most trusted agents.
If an agent is deemed suspicious, then it is reported and its advice is not used in the majority opinion calculation. The reverse is also possible, however: an agent’s advice may have higher confidence than the majority while its congestion value greatly deviates from the majority's. In that case the majority confidence is decreased proportionally and the agent’s advice is potentially used as the report with highest confidence.
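The majority-opinion computation with suspicious-agent detection (the role Algorithm 4 plays above) might be sketched as follows; the one-standard-deviation outlier rule comes from the description above, while the parameter names and the confidence-weighted average are illustrative assumptions:

```python
import statistics

def majority_opinion(reports, n_min=3, majority_threshold=0.6):
    """Compute a majority congestion opinion from (congestion, confidence)
    pairs, excluding suspicious outliers.

    A report is flagged suspicious if its congestion lies more than one
    standard deviation from the mean of the reported values. Returns
    (majority_congestion_or_None, suspicious_reports).
    """
    if len(reports) < n_min:
        return None, []
    values = [cong for cong, _ in reports]
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    kept, suspicious = [], []
    for cong, conf in reports:
        if sd > 0 and abs(cong - mean) > sd:
            suspicious.append((cong, conf))   # sender reported as suspicious
        else:
            kept.append((cong, conf))
    if len(kept) < n_min:
        return None, suspicious
    total_conf = sum(conf for _, conf in kept)
    if total_conf / len(kept) < majority_threshold:
        return None, suspicious               # majority confidence too low
    majority = sum(cong * conf for cong, conf in kept) / total_conf
    return majority, suspicious
```

An extreme congestion figure (e.g. 5.0 among reports near 0.5) is set aside as suspicious rather than dragging the majority, matching the deviation-based filtering described above.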
Experience-based trust is the most basic type of trust and is applied to every agent in our framework. As detailed in Section “Background: multi-faceted trust model”, it is trust resulting from direct experiences with the individual agent. It is updated when the model encounters information that can be used to judge the agent: for example, detecting suspicious information being reported by the agent, encountering definitive information that can be compared against information the agent previously reported, or processing the opinion of a more trusted agent about the agent in question. Since experience-based trust is the most basic type of trust, it forms the basis of the confidence calculation.
This facet of trust management is very simple but powerful. Section “Simulation results” demonstrates this through basic simulations which only use experience and majority based trust.
Experience based trust is a powerful tool for profiling agents; however, it is often challenged in scenarios with data sparsity. Data sparsity is an absence of agents with which the resident agent has had previous experience. This is often the case in the real world where it is rare to encounter a car which you have previously profiled.
Role-based trust helps alleviate the issue of data sparsity by assigning roles to agents in our framework. As detailed in Section “Background: multi-faceted trust model”, predefined roles (e.g. police patrols, traffic reporters or taxi drivers) are assigned to all agents in the system. Different roles may be associated with different levels of trust. The model uses four different types of roles, motivated by the classification of Minhas et al.: Ordinary, Seniority (e.g. commuter pool), Expert (e.g. news station car), Authority (e.g. police).
It can often be the case that an agent receives a great many reports about a road, some more accurate than others. A combination of time and location closeness is used in confidence calculations to determine how accurate reports are. Time closeness is a measure of how old the report is with respect to when the advice is needed. Location closeness is a measure of how far the agent providing the report is from the road in question.
Time and location closeness helps alleviate the issue of old and inaccurate reports by assigning these metrics to traffic report propositions and using them in confidence calculations in our framework. As detailed in Section “Background: multi-faceted trust model”, metrics of time and location closeness are used in calculating a majority consensus. Our proposed model similarly uses these metrics in calculating a majority opinion, through modifying the confidence of propositions by a magnitude inversely proportional to these metrics.
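For illustration, time and location closeness could be computed with simple decay functions; the exponential and reciprocal forms, the half-life and the distance scale are all assumptions rather than the paper's definitions:

```python
import math

def time_closeness(report_age, half_life=60.0):
    """C_t in (0, 1]: decays exponentially with report age (seconds).
    A report as old as the half-life contributes half the confidence
    of a fresh one. Both the form and half-life are illustrative."""
    return math.exp(-math.log(2) * report_age / half_life)

def location_closeness(distance, scale=1000.0):
    """C_l in (0, 1]: decays with the reporter's distance (metres)
    from the road in question; the scale is an assumed parameter."""
    return 1.0 / (1.0 + distance / scale)
```

Both values fall in (0, 1], as required of the closeness parameters in the majority consensus computation, with fresher and nearer reports weighted more heavily.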
The framework of this paper also incorporates the distinction of direct and indirect reports. Direct reports are reports which have been directly observed and reported by an agent. Indirect reports are direct reports of a third agent which are stored in the knowledge base of the agent the resident agent is communicating with.
For example, when one agent (Ar) communicates with another agent (A2) through a pull request concerning a priority road (R1), A2’s highest-confidence traffic report concerning R1 may have been reported by another agent (A3) and not A2. A2 would send Ar the report and indicate that it is an indirect report (A2 did not create the report), which would include A2’s confidence in the report. A2 calculates the confidence using the report’s experience- and role-based trust, and time closeness.
The inclusion of indirect reports, as opposed to only allowing direct reports, is important because it greatly increases the response rate of a pull request concerning a priority road. Indirect reports, however, may be less accurate than direct reports. This is taken into consideration through the use of the corresponding agent’s confidence in the report (A2’s confidence in the report) and by modifying the confidence value of a report by a predetermined factor.
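A minimal sketch of this indirect-report discount follows, with an assumed multiplicative combination and a hypothetical discount factor:

```python
def indirect_confidence(relayed_confidence, trust_in_relayer,
                        indirect_discount=0.8):
    """Confidence assigned by the receiver to an indirect report.

    Combines the relaying agent's stated confidence in the report with
    the receiver's own confidence in that agent, then applies a fixed
    discount because indirect reports may be less accurate. All names,
    the multiplicative combination and the 0.8 factor are illustrative
    assumptions, not values from the paper.
    """
    return relayed_confidence * trust_in_relayer * indirect_discount
```

For example, a report A2 relays with confidence 0.9, received by an agent whose confidence in A2 is 0.9, ends up with confidence 0.9 × 0.9 × 0.8 ≈ 0.65, below what the same report would carry as a direct observation.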
Confidence calculation examples
This subsection presents two examples which describe how the confidence metric for a report is calculated according to the multidimensional trust factors of experience and role based trust, location and time closeness, and whether the report is indirect or not. The following examples will show iterative modifications to the confidence value of a report according to the various factors.
The following calculation demonstrates how the confidence value for the report was calculated. Note that all the parameter values used in these examples are the ones used in our implementation.
Travel decisions when using numeric trust modeling
Algorithm 4 clarifies whether an agent will choose to take a certain road or not based on consensus about the congestion on the road. If the agent wants to reason about which road to choose (from a set of possible roads), it can run Algorithm 4 for each road. This algorithm is of use in scenarios such as the simulations we present in the following section, where a path planning algorithm is considering specific roads in order to propose the one that is best for the agent’s decision making. This algorithm continues to clarify our proposal for integrating trust modeling into agent decision making, in these travel environments.
This section describes the simulation tests performed to compare and contrast the effectiveness of our model’s implementation against a system that does not use traffic information in routing and a best case scenario. Included in the comparisons displayed in our graphs are less comprehensive trust modeling options. (For example, our proposal with only experience-based and majority-based trust modeling is one comparator; another is an algorithm that takes all reports at face value and does not incorporate trust modeling at all).
We have designed an extensive simulation testbed that can be used to validate our model by modeling traffic flow within an environment, tracking the path times of cars to determine the effectiveness of travel decisions. When vehicles make path planning decisions based on reports from other agents, if the accompanying trust modeling has been effective, the vehicles’ completion of travel paths should be timely. The implementation makes use of the following third-party software: JiST/SWANS, vans, DUCKS, and Protege. JiST stands for Java in Simulation Time; it is a high-performance discrete event simulation engine that runs over a standard Java Virtual Machine (JVM). SWANS stands for Scalable Wireless Ad hoc Network Simulator; it is built on top of the JiST platform and serves as a host of network simulation tools. Vans is a project comprising the geographic routing and the integrated Street Random Waypoint model (STRAW). STRAW utilizes an A* search algorithm to calculate the shortest path to a destination. It also allows real-world traffic to be simulated by using real maps with vehicular nodes (briefly illustrated in Appendix D, Pictorial depiction of grid-like maps in simulations). DUCKS is a simulation execution framework, which allows a Simulation Parameters file to be provided to define the simulation. Protege is a free, open source ontology editor and knowledge base framework. Note that the simulation constructed here, while inspired by that employed for the original model of , goes far beyond it, enabling a rich modeling of traffic scenarios with effective measurement of successful travel.
The simulation was set to poll cars every 6–15 seconds; with 100 cars in total, experience with every other car would be gained quickly. In order to simulate environments with low experience-based trust, we introduce a variable called sparsity. For example, 80% sparsity resembles having a lack of previous experience with 80% of the agents. In the simulation, this variable effectively ignores updates of trust values, thus hindering experience-based trust.
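The sparsity variable could be emulated, for example, by fixing a fraction of agents whose trust updates are simply ignored; the function names, seeding, and per-agent selection here are illustrative, not the testbed's actual implementation:

```python
import random

def make_sparsity_filter(agent_ids, sparsity=0.8, seed=42):
    """Mark a fixed fraction of agents as 'unknown': trust updates for
    them are ignored, emulating a lack of previous experience with
    roughly `sparsity` of the agent population."""
    rng = random.Random(seed)
    return {a for a in agent_ids if rng.random() < sparsity}

def update_if_known(trust_table, ignored, agent_id, delta):
    """Apply a trust update only for agents not blocked by sparsity."""
    if agent_id not in ignored:
        trust_table[agent_id] = trust_table.get(agent_id, 0.0) + delta
    return trust_table
```

With 80% sparsity, about four out of five agents never accumulate experience-based trust, forcing the model to lean on its other facets (role, time and location closeness).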
These graphs chart the performance of simulations that either use trust modeling (i.e. profiling: P, Hon #) or do not (no P, Hon #). Agent honesty (Hon #) represents the percentage of honest agents in the simulation (i.e. 0.5 is 50% honesty). Role-based trust (Role #) represents the percentage of agents in the simulation that have been assigned a role (i.e. 0.2 means 20% of agents are assigned a role). Sparsity (Spars #) represents the percentage sparsity in the simulation (i.e. 0.8 means 80% sparsity). Dishonest lie percentage (Lie #) represents the percentage of the time that a dishonest agent will lie (i.e. 0.8 means dishonest agents lie 80% of the time; set at 100% if unspecified).
In Appendix B (Simulation curves and parameters) we display the various parameters set for the experiments and how the values were chosen (while the path planning for the simulation is displayed in Appendix C, Pathing). Our first set of experiments incorporated experience-based trust and majority-based trust alone. These were the central elements of the original model of -. We call this type of simulation Basic. Simulations with all the other additional components added are referred to as Full. The other trust modeling components individually indicated are time closeness (Time), location closeness (Loc), and indirect advice (Indir). (Full) indicates when all multidimensional trust components are being used. The VANET trust modeling results are also compared against two additional simulations: the first is a worst-case scenario where traffic is ignored (no traffic), and the other is a best-case omnipresent version (omni) which simulates the ability for any car to look up the exact congestion of any road at any time. All simulation test results are averaged over 5 runs.
The final set of graphs show the robustness of our simulation framework through experiments that modify simulation-specific variables, such as the number of agents and messaging frequency.
In conclusion, we offer an approach for supporting reasoning about agent trust with advice from peers, whose trustworthiness is then also modeled, when non-binary reports are provided, and we have shown the merit of our framework in the context of the VANET application (resulting in effective travel decisions due to the modeling of trustworthiness). As such, we offer a method that supports the exchange of more detailed trustworthiness information, leading to more precise and valuable calculations. We have outlined our method for integrating various reports from peers in full detail. We have also clarified in depth how communication between peers would take place, through a combination of push and pull protocols, in order to assure effective exchange of real-time information and to extend the original model of Minhas et al., which left underspecified the exchange of information between agents for effective travel decision making. Our overall solution integrates a number of novel modeling elements (priority roads, suspicious reports) which support the final algorithm that is presented. The detailed simulation framework allows for the adjustment of a wide variety of parameters, implemented to draw out the benefit of the full combination of our methods for trust modeling that supports the exchange of traffic information for effective transportation decisions. Included here is a method for simulating a dearth of experience for experience-based trust (our sparsity parameter), which can be varied in the experiments, and a variable to model the extent to which agents in the environment have specific roles which may increase their trustworthiness (the role parameter). In all, with our testbed we offer an avenue for measuring the relative benefit of different trust modeling options. Parts of this research were presented at the TRUM workshop at UMAP 2012.
There are a number of avenues for future work. The obvious first direction is to explore a variety of other application domains where agents may need to rely on non-binary reports from peers. It would be interesting, for instance, to examine the possible value of push- and pull-based communication in environments such as peer recommender systems or electronic marketplaces, where rating scales mirror the kind of non-binary reports we have been discussing. Another avenue for future work would be to enhance our current solution for our chosen application of traffic reports and transportation. In earlier work, we discussed the need to distinguish second-hand reports from first-hand reports, applying penalties for incorrect reports declared to be first-hand knowledge. Integrating more sophisticated methods for reasoning about the trustworthiness of reports based on whether they were in fact second hand may be of value. In addition, it is quite apparent that the collective travel decision making of the entire set of vehicles on the road is an important consideration. Each agent may be advised to make its final travel decisions by reasoning about the actions likely to be taken by other agents once they have received (perhaps similar) reports. This is another topic that we are currently exploring within our research.
The work of Bazzan et al. may shed some light on how to achieve this particular goal. A form of multiagent reinforcement learning may be effective in coordinating the activities of the collective of cars on the road. Related work also emphasizes the value of machine learning for vehicle coordination, again suggesting this as the most promising first step for our future efforts on this topic. Regardless, the issue of system-wide coordination has been argued to be of significant importance for any intelligent approach to managing traffic. As such, this is certainly a valuable topic for future exploration.
As a final avenue for future work, it would be useful to continue to assess the value and contribution of our simulation testbed. A useful starting point would be to explore how to employ the existing testbed for other trust models that have been developed, in order to demonstrate its robustness. One class of trust models that would be appropriate to examine are those based on Dirichlet distributions, designed to cope with multi-valued information. Extending one of these kinds of models for agent decision making and then demonstrating its value with the testbed that we have developed would be an interesting future project. In addition, a recently published paper provides an excellent survey of agent-based technology for traffic and transportation; comparing our simulation testbed, and what it offers to designers, against frameworks being explored by other authors to address other vehicular challenges would be another very informative path for future research.
As a final comment, we clarify that this research was designed with a real-world implementation in mind as the ultimate application. Reflecting on what might actually be deployed in the future, we feel that an implementation as a phone GPS add-on could be possible. Implementing the framework in this manner would allow for easy integration into a city’s driving population. The Android operating system and platform is a viable candidate for implementation due to its use of Java as a primary language and its capability to grant applications access to a wide range of phone systems (such as the GPS); Android phones also allow multi-threading. The phones could communicate with each other through minimal Internet access. Once we migrate to the use of GPS, reports would be exchanged mechanically, moving us into territory where deliberate misinformation by drivers is less of an issue. In any case, we acknowledge that there may certainly be new avenues in the future to enable vehicles to make travel decisions based on coordinated communication with other vehicles on the road.
a For now, we are assuming that reports are coming in from vehicles on the road rather than other disassociated entities. As clarified in Section “Agent communication protocols to exchange reports”, we distinguish those vehicles reporting first hand observation from those that are passing on information acquired indirectly.
b For the remainder of the paper, we use the term agent to refer to the intelligent entity that is directing the actions of its vehicle. The word user refers to the driver who will ultimately be deciding where to direct the vehicle.
c This integrates task-based trust. For instance, an agent may set n to be fairly small, say n≤10, if she needs to make a quick driving decision, or set a larger n if she has time to process responses.
d For example, setting W(B i )=1/2 for the case of direct witnesses indicates that the requesting agent values direct evidence two times more than indirect evidence.
e Note that a reported congestion value of 23, for instance, would ideally represent the actual number of cars on the road; in our simulation, for example, the actual number of cars is known and can be reported by truthful vehicles. Agents that are not truthful will provide inaccurate values in their reports. It may also be reasonable for cars to report their speed and for this to be a reflection of the road’s congestion.
f A geometric series is necessary because the calculations are capturing atomic increases in trust values but we are reasoning about non-Boolean factors that are therefore not atomic. See Appendix A (Confidence geometric series) for a fuller depiction of the geometric series in question.
g The order of application used throughout our experiments is the one we follow in this section of the paper.
h Note that we use the absolute value of G as the exponent in order to ensure that the number of times is a positive number.
i This is consistent with the placement of these factors in the denominator of Equation 3.
j This required scaling was not considered in sufficient detail in the model of Minhas et al. and Equation 3.
k The trust model described in this paper can be incorporated with a penalty mechanism such as the one presented in  to more severely reduce the trust value of an agent who is not a direct witness but claims to be one, resulting in that agent not being responded to or helped by other agents in the system.
l Location closeness is not incorporated because it is dependent on the agent who is using the report.
m However, we use InPenal=-2 in the example here instead for a more effective illustration.
n Protege is used due to our knowledge-based representation for storing trust and traffic information; the details of this part of our solution have been omitted in this paper.
o Note that packet delivery success for the messaging is 100%. We did not simulate packet failure since this would be too similar to just reducing the volume/frequency of messages.
p With no profiling, no trust modeling is done and all reports received are simply assumed to be entirely trustworthy.
q Routing without traffic just uses a shortest path calculation.
r The worst case (i.e. No Traffic) is not present so that a finer granularity of the presented simulations can be shown.
s Messages are sent according to intervals to avoid all agents sending messages at the same time.
t This more gradual decrease is likely due in part to the pull protocol requesting information on roads with more immediate priority and use, generating information on roads that will be used in decision making.
u A more complete discussion of trust management for VANETs can be found in the recent survey paper.
This appendix seeks to further clarify and detail the geometric series equation and design rationale for calculating confidence in Section ‘Confidence calculation’ and to provide examples.
The following will describe why a geometric series was necessary.
A report’s confidence is initially set to the experience-based trust of the agent that provided the report. If Equations 12 and 13 were used to atomically increase a report’s confidence according to various attributes (Time, Loc, Indirect, etc.), then their influence on confidence would be disproportionate to their value and importance. A simple solution to this issue would be to weight or multiply α and β according to the attribute (Time, Loc, Indirect, etc.). However, this can result in the confidence value rising above 100% or falling below 0%, and solving this by simply placing a bound on the confidence value (so that the maximum is 100% and the minimum is 0%) would not be faithful to the founding research.
Equations 12 and 13 implicitly bound T A (B), and have an effect of decreasing the magnitude by which trust is increased or decreased as the trust value becomes greater or smaller, respectively. Equation 11 is intended to reflect the culmination of several increases or decreases, according to 12 and 13. If you were to graph the trust value over all atomic iterations, the graph would form a Sigmoid function (“S” curve).
Defining our confidence calculation using Equation 11 (the simplification of Equation 16) allows us to utilize Equations 12 and 13 (their sigmoid nature and implicit bounding), to use decimal values for G(n) (providing a granularity that atomic changes do not allow), and to represent the calculation in a simple format.
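To make the bounded, sigmoid-like behaviour and its closed form concrete, the following sketch assumes increments of the canonical form T ← T + α(1 − T); the paper's actual Equations 12 and 13 may differ in detail, but they share this bounded shape. Applying that increment n times collapses, via a geometric series, to 1 − (1 − T)(1 − α)^n, where n may be fractional.

```java
/** Sketch of the closed-form ("geometric series") view of repeated trust
 *  updates. Assumption: increments have the form T <- T + alpha*(1 - T),
 *  which is implicitly bounded above by 1 and flattens as T grows. */
public class GeometricConfidence {
    /** Apply the increment n times, one atomic step at a time. */
    public static double iterateUp(double t, double alpha, int n) {
        for (int i = 0; i < n; i++) {
            t = t + alpha * (1 - t);
        }
        return t;
    }

    /** Closed form of the same n increments: 1 - (1 - t)(1 - alpha)^n.
     *  Because n appears only as an exponent, it may be fractional,
     *  giving the granularity that atomic (integer) steps cannot. */
    public static double closedUp(double t, double alpha, double n) {
        return 1 - (1 - t) * Math.pow(1 - alpha, n);
    }
}
```

The closed form agrees with the atomic iteration for integer n, never exceeds 1 for any n, and accepts non-integer n, which is exactly the combination of properties the confidence calculation needs.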
The following example demonstrates the modification of confidence according to the time difference attribute.
Simulation types

| Simulation type | Scenario | Description |
|---|---|---|
| No Traffic | Worst case scenario | Simulation without our framework or any incorporation of traffic data. |
| Omni | Best case scenario | Simulation without our framework, but incorporates traffic data by querying the road through the JiST/SWANS simulator. |
| Basic | | Simulation with just Majority and Experience. |
| Full | Full utilization scenario | Simulation with all multidimensional trust components. |
| Full/Basic + (Parameter(s)) | Special case scenario | Full or Basic simulation with a modification on one or more parameters. |

Simulation framework variables

| Variable | Description | Notation (example) |
|---|---|---|
| Agent honesty | Percent of honest agents. | Hon # (0.5 is 50% honesty) |
| Number of agents | Number of agents and cars simulated in the tests. | Agent # (100 is 100 agents) |
| Message interval | Interval between congestion request messages sent by the agents. | MsgI #-# (6–15 is 6–15 second message intervals) |
| Profiling | Use of profiling. | True (Basic, Full); No P indicates no use of profiling (False) |
| Role-based trust | Use of role-based trust. | Role # (0.2 is 20% of agents are given a role above Ordinary) |
| Time closeness | Use of the time closeness factor. | Time |
| Location closeness | Use of the location closeness factor. | Loc |
| Indirect advice | Use of indirect messages. | Indir |
| Sparsity | Percent of agent trust updates ignored to simulate data sparsity. | Spars # (0.6 means 60% of trust updates are ignored) |
| Dishonest lie percent | Percent of the time a dishonest agent lies. | Lie # (0.8 is 80% of the time dishonest agents lie) |

Simulation algorithm variables

| Variable | Description | Notation (example) |
|---|---|---|
| Majority opinion size | Number of agents used in a majority opinion. | MajN # (10 is 10 agents used) |
| Honest trust increase α | Standard increment to an agent’s trust resulting from an honesty evaluation, with a maximum value of 1.0. | α # (0.1 is 10% trust increase) |
| Dishonest trust decrease β | Standard decrement to an agent’s trust resulting from an honesty evaluation, with a minimum value of 0.0. | β # (0.2 is 20% trust decrease) |
| Advice trust threshold | Threshold above which an agent’s trust value must lie for that agent to be considered for advice. | AThresh # (0.41 is 41% trust threshold) |
| Majority confidence threshold | Threshold which the majority opinion must exceed in order to be considered. | MThresh # (0.51 is 51% majority threshold) |
| Role factor | Standard factor for increasing confidence depending on agent role. | |
| Time closeness factor | Standard comparison factor for time closeness. | |
| Location closeness factor | Standard comparison factor for location closeness. | |
| Indirect advice factor | Standard factor for modifying confidence if the advice is indirect. | |
| Congestion weight (CongWeight) | Standard factor for weighting the congestion value when calculating a road’s A* cost. | |
Agents within the JiST/SWANS simulation software utilize an A* search algorithm that determines the most effective path for a car to take to its destination.
The A* search algorithm is the driving force behind when an agent is in need of advice. The algorithm is called either when a new destination is set for an agent, so that the agent can determine how to most effectively reach it, or when an agent’s path is reassessed during its journey, so that the algorithm can incorporate more recently received traffic information.
- It is provided with the agent’s current location and destination.
- It incrementally assesses potential roads, from the current location to the destination, according to a cost.
- A potential road’s cost is calculated as its length plus its congestion (retrieving the congestion triggers the agent being in need of advice).
- It returns a list of roads forming the least-cost path to the destination (which theoretically takes the shortest amount of time, according to current traffic information).
The algorithm attributes a cost to every road segment. JiST/SWANS initially calculated this cost simply as the length of the road segment; in our implementation, the cost is the length of the road segment plus its congestion, where RoadCong (the congestion of the road) is multiplied by a simulation-specific weight CongWeight. The retrieval of a road’s congestion signifies that an agent is in need of advice from Algorithm 4.
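The modified edge cost can be sketched in one line; the method and parameter names below are illustrative only, not taken from the simulator’s API.

```java
/** Sketch of the modified A* edge cost described above: the original
 *  JiST/SWANS cost (road length) plus a congestion term scaled by the
 *  simulation-specific weight CongWeight. */
public class RoadCost {
    /** cost = length + roadCong * congWeight */
    public static double cost(double length, double roadCong, double congWeight) {
        return length + roadCong * congWeight;
    }
}
```

With CongWeight set to 0, this degenerates to the original shortest-path cost (the No Traffic scenario); larger weights make the planner increasingly congestion-averse.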
To facilitate efficient use of congestion information, and to increase the speed of the A* search algorithm, the implementation post-processes traffic information to form majority opinions, so that the information can be immediately retrieved during algorithm execution. This means that majority opinions are recalculated every time new information is received, and are stored in a local hash table for constant-time (O(1)) retrieval by the A* algorithm.
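The caching scheme can be sketched as follows. This is a simplified illustration: the class and method names are our own, and the median congestion below is merely a stand-in aggregate, since the paper’s actual majority opinion is computed over trusted agents’ reports as described in the main text.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Sketch of the post-processing step: whenever new reports arrive, the
 *  opinion per road is recomputed once and stored in a hash table, so
 *  the A* search can read it back in constant time. */
public class MajorityCache {
    private final Map<String, Double> opinionByRoad = new HashMap<>();

    /** Recompute the stored opinion for a road from the latest reports
     *  (here the median congestion value, as a stand-in aggregate). */
    public void refresh(String roadId, List<Double> reportedCongestion) {
        List<Double> sorted = new ArrayList<>(reportedCongestion);
        Collections.sort(sorted);
        opinionByRoad.put(roadId, sorted.get(sorted.size() / 2));
    }

    /** Constant-time (O(1)) lookup used during A* execution. */
    public double congestion(String roadId) {
        return opinionByRoad.getOrDefault(roadId, 0.0);
    }
}
```

The design choice here mirrors the text: aggregation cost is paid once per incoming batch of reports, rather than on every edge expansion inside the search.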
Thanks to Graham Pinhey for his assistance with this paper. Financial support was received from NSERC (the Natural Sciences and Engineering Research Council of Canada).
Parts of this research were presented at the TRUM workshop at UMAP 2012. The work is also partially supported by the project of Dr. Jie Zhang funded by the MOE AcRF Tier 1.
- Minhas UF, Zhang J, Tran T, Cohen R (2010) Promoting effective exchanges between vehicular agents in traffic through transportation-oriented trust modeling. In: Proceedings of the international joint conference on Autonomous Agents and Multiagent Systems (AAMAS) workshop on Agents in Traffic and Transportation (ATT), 77–86. ACM.
- Minhas UF, Zhang J, Tran TT, Cohen R (2010) Intelligent agents in mobile vehicular ad-hoc networks: leveraging trust modeling based on direct experience with incentives for honesty. In: Proceedings of the IEEE/WIC/ACM international conference on Intelligent Agent Technology (IAT), 243–247.
- Minhas UF, Zhang J, Tran TT, Cohen R (2011) A multifaceted approach to modeling agent trust for effective communication in the application of mobile ad hoc vehicular networks. IEEE Trans Syst Man Cybern C Appl Rev 41(3):407–420. doi:10.1109/TSMCC.2010.2084571
- Tran T, Cohen R (2003) Modelling reputation in agent-based marketplaces to improve the performance of buying agents. In: Proceedings of the ninth international conference on User Modelling (UM), 273–282. Springer.
- Finnson J (2012) Modeling trust in multiagent mobile vehicular ad-hoc networks through enhanced knowledge exchange for effective travel decision making. Master’s thesis, School of Computer Science, University of Waterloo, Waterloo, Canada.
- Zhang J, Cohen R (2008) Evaluating the trustworthiness of advice about seller agents in e-marketplaces: a personalized approach. Electron Commerce Res Appl 7(3):330–340. doi:10.1016/j.elerap.2008.03.001
- Whitby A, Jøsang A, Indulska J (2004) Filtering out unfair ratings in Bayesian reputation systems. In: Proceedings of the workshop on Trust in Agent Societies, at AAMAS 2004, New York.
- Yu B, Singh MP (2003) Detecting deception in reputation management. In: Proceedings of the second international joint conference on Autonomous Agents and Multiagent Systems (AAMAS ’03), 73–80. ACM, New York. doi:10.1145/860575.860588
- Yolum P, Singh MP (2005) Engineering self-organizing referral networks for trustworthy service selection. IEEE Trans Syst Man Cybern Syst Hum 35(3):396–407. doi:10.1109/TSMCA.2005.846401
- Burnett C, Norman T, Sycara K (2011) Sources of stereotypical trust in multi-agent systems. In: Proceedings of the 14th international workshop on Trust in Agent Societies, 25.
- Wang Y, Vassileva J (2003) Bayesian network-based trust model. In: Proceedings of the IEEE/WIC international conference on Web Intelligence (WI), 372–378.
- Regan K, Poupart P, Cohen R (2006) Bayesian reputation modeling in e-marketplaces sensitive to subjectivity, deception and change. In: AAAI, 1206–1212.
- Fung CJ, Zhang J, Aib I, Boutaba R (2011) Dirichlet-based trust management for effective collaborative intrusion detection networks. IEEE Trans Netw Serv Manag 8(2):79–91. doi:10.1109/TNSM.2011.050311.100028
- Gerlach M (2007) Trust for vehicular applications. In: Proceedings of the international symposium on autonomous decentralized systems, 295–304. IEEE.
- Raya M, Papadimitratos P, Gligor VD, Hubaux J-P (2008) On data-centric trust establishment in ephemeral ad hoc networks. In: Proceedings of the 27th annual IEEE international conference on computer communications (IEEE INFOCOM), 1238–1246.
- Golle P, Greene D, Staddon J (2004) Detecting and correcting malicious data in VANETs. In: Proceedings of the 1st ACM international workshop on vehicular ad hoc networks, 29–37. ACM.
- Dotzer F, Fischer L, Magiera P (2005) VARS: a vehicle ad-hoc network reputation system. In: Proceedings of the IEEE international symposium on a World of Wireless, Mobile and Multimedia Networks, 453–456.
- Patwardhan A, Joshi A, Finin T, Yesha Y (2006) A data intensive reputation management scheme for vehicular ad hoc networks. In: Proceedings of the third annual international conference on Mobile and Ubiquitous Systems: Networking & Services, 1–8. IEEE.
- Desjardins C, Laumônier J, Chaib-draa B (2009) Learning agents for collaborative driving. In: Bazzan A, Klügl F (eds) Multiagent systems for traffic and transportation engineering, 240–260. IGI Global, Hershey. doi:10.4018/978-1-60566-226-8.ch011
- Bazzan AL (2007) Traffic as a complex system: four challenges for computer science and engineering. In: Proceedings of the XXXIV SEMISH, 2128–2142.
- Taillandier P (2014) Traffic simulation with the GAMA platform. In: Klügl F, Bazzan A, Ossowski S, Chaib-draa B (eds) Proceedings of the international conference on Autonomous Agents and Multiagent Systems (AAMAS) sixth workshop on Agents in Traffic and Transportation (ATT), 77–86.
- Huynh N, Cao VL, Wickramasuriya R, Berryman M, Perez P, Barthelemy J (2014) An agent based model for the simulation of road traffic and transport demand in a Sydney metropolitan area. In: Klügl F, Bazzan A, Ossowski S, Chaib-draa B (eds) Proceedings of the international conference on Autonomous Agents and Multiagent Systems (AAMAS) sixth workshop on Agents in Traffic and Transportation (ATT).
- Chou C-M, Lan K-C (2009) On the effects of detailed mobility models in vehicular network simulations. In: Proceedings of ACM MobiCom.
- Piorkowski M, Raya M, Lugo AL, Papadimitratos P, Grossglauser M, Hubaux J-P (2008) TraNS: realistic joint traffic and network simulator for VANETs. ACM SIGMOBILE Mob Comput Commun Rev 12(1):31–33. doi:10.1145/1374512.1374522
- Vidhya S, Mugunthan SR (2014) Trust modeling scheme using cluster aggregation of messages for vehicular ad hoc networks. IOSR J Comput Eng 16(2):16–21.
- Shaihk R, Alzahrani A (2013) Intrusion-aware trust model for vehicular ad hoc networks. Security and Communication Networks. doi:10.1002/sec.862
- Marmol FG, Perez GM (2012) TRIP, a trust and reputation infrastructure-based proposal for vehicular ad hoc networks. J Netw Comput Appl 35(3):934–941. doi:10.1016/j.jnca.2011.03.028
- Chen J, Boreli R, Sivaraman V (2010) TARo: trusted anonymous routing for MANETs. In: Proceedings of the IEEE/IFIP 8th international conference on Embedded and Ubiquitous Computing, 756–762.
- Finnson J, Cohen R, Zhang J, Tran T, Minhas UF (2012) Reasoning about user trustworthiness with non-binary advice from peers. In: Proceedings of the UMAP workshop on Trust, Reputation and User Modeling (TRUM), 12.
- Bazzan ALC, de Oliveira D, da Silva BC (2010) Learning in groups of traffic signals. Eng Appl Artif Intell 23(4):560–568. doi:10.1016/j.engappai.2009.11.009
- Bazzan A, Klügl F (2013) A review on agent-based technology for traffic and transportation. Knowl Eng Rev 1–29. doi:10.1017/S0269888913000118
- Zhang J (2011) A survey on trust management for VANETs. In: Proceedings of the 25th international conference on Advanced Information Networking and Applications (AINA), 105–112. IEEE.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.