 Research
 Open Access
Efficient private multiparty computations of trust in the presence of curious and malicious users
Journal of Trust Management volume 1, Article number: 8 (2014)
Abstract
Schemes for multiparty trust computation are presented. The schemes do not make use of a Trusted Authority; instead, the trust computation is carried out in a completely distributed manner, where each user calculates its trust value privately and independently. Given a community C and its members (users) U_{1},…,U_{ n }, we present computationally secure schemes for trust computation. The first scheme, the Accumulated Protocol AP, computes the average trust attributed to a specific user, U_{ t }, following a trust evaluation request initiated by a user U_{ n }. The exact trust values of each queried user are not disclosed to U_{ n }. The next scheme, the Weighted Accumulated Protocol WAP, generates the average weighted trust in a specific user U_{ t }, taking into consideration the unrevealed trust that U_{ n } has in each user participating in the trust evaluation process. The Public Key Encryption Based Protocol PKEBP outputs a set of the exact trust values given by the users without linking a user to the trust value this user contributed. The obtained vector of trust values assists in removing outliers: given the set of trust values, the outliers that provide extremely low or high trust values can be removed from the trust evaluation process. We extend our schemes to the case when the initiator, U_{ n }, can be compromised by the adversary, and we introduce the Multiple Private Keys and the Weighted protocols (MPKP and MPWP) for computing average unweighted and weighted trust, respectively. Moreover, the Commutative Encryption Based Protocol (CEBP) extends the PKEBP to this case. The computation of all our algorithms requires the transmission of O(n) (possibly large) messages.
Our contribution
The purpose of this paper is to introduce new schemes for decentralized reputation systems. These schemes do not make use of a Trusted Authority to compute the trust in a particular user that is attributed by a community of users. Our objective is to compute trust while preserving user privacy.
We present new efficient schemes for calculating the trust in a specific user by a group of community members upon the request of an initiator. The trust computation is performed in a completely distributed manner, where each user calculates its trust value privately. User privacy is preserved in a computationally secure manner. The notions of privacy and privately computed trust are defined in the following sense: given the output average trust in a certain user, it is computationally infeasible to reveal the exact trust values in this user given by community users. We assume a community of users C={U_{1}, U_{2},…,U_{ n }}. Let U_{ n } be the initiator. The goal of U_{ n } is to get an assessment of the trust in a certain user, U_{ t }, by a group consisting of the users U_{1}, U_{2},…,U_{n−1} from C. The AP calculates the average trust (or the sum of trust levels) in the user U_{ t } (Section ‘Accumulated protocol AP’). The AP protocol is based on a computationally secure homomorphic cryptosystem, e.g., the Paillier cryptosystem [1], which provides a homomorphic encryption of the secret trust levels T_{1},…,T_{n−1} calculated by each user U_{1}, U_{2},…,U_{n−1} from C. The AP satisfies the features of an Additive Reputation System [2] and does not take into consideration U_{ n }'s subjective trust values in the queried users U_{1}, U_{2},…,U_{n−1}. A decentralized reputation system is defined as additive/non-additive [2] if feedback collection, combination, and propagation are implemented in a decentralized way, and if the combination of feedbacks provided by agents is calculated in an additive/non-additive manner, respectively. The WAP carries out a non-additive trust computation (Section ‘Weighted accumulated protocol WAP’). It outputs the weighted average trust, which is based on the trust of the initiator U_{ n } in each C member participating in the feedback. The WAP is an enhanced version of the AP protocol.
The AP and WAP protocols cope with a curious adversary and are restricted to the case of an uncompromised initiator, U_{ n }. The MPKP and MPWP protocols, introduced in Section ‘Multiple Private Keys Protocol MPKP’, use additional communication to relax the condition that the initiator U_{ n } is uncompromised and provide average unweighted and weighted privately computed trust, respectively.
Compared with the recent results in [2] and [3], our schemes have several advantages.
The Private Trust scheme is resistant against both curious and semi-malicious users
The AP and WAP protocols preserve user privacy in a computationally secure manner. Our protocols cope with any number of curious but honest adversarial users. Moreover, the PKEBP (Section ‘Protocols for removal of outliers’) is resistant against semi-malicious users that return false trust values. The PKEBP supports the removal of outliers. The general case, when the initiator, U_{ n }, can be compromised by the adversary, is addressed by the MPKP, MPWP and CEBP protocols (Sections ‘Protocols for removal of outliers’ and ‘Multiple Private Keys Protocol MPKP’). Unlike our model, [2] suggests protocols that are resistant only against curious agents who try to collude in order to reveal private trust information. Moreover, the reputation computation in some of the algorithms in [3] contains a random parameter that reveals information about the reputation range of the queried users.
Low communicational overhead
The proposed schemes require sending only O(n) messages, while the protocols of [2] and [3] require O(n^{3}) messages.
No limitations on the number of curious users
The computational security of the proposed schemes does not depend on the number of curious users in the community; privacy is preserved regardless of the size of the coalition of curious users. Note that, in the model presented in [2], the number of curious users must be no greater than half of the community users.
Background and literature review
The use of homomorphic cryptosystems in general Multiparty Computation (MPC) models is presented in [4]. In [4] it is demonstrated that, given keys for any sufficiently efficient homomorphic cryptosystem, general MPC protocols for n players can be devised such that they are secure against an active adversary who corrupts any minority group of the players. The problem stated and solved in [4] is as follows: given the encryptions of two numbers, say a and b (where each player knows only its input), securely compute an encryption of c=ab. The correctness of the result is verified. The total number of bits sent is O(nkC), where k is a security parameter and C is the size of a Boolean circuit computing the function that should be securely evaluated. An earlier scheme proposed in [5] with the same complexity was only secure against passive adversaries. Earlier protocols had complexity that was at least quadratic in n. Threshold homomorphic encryption is used to achieve the linear communication complexity in [4]. The schemes proposed in [4, 6], and [7] are based on public key infrastructure and use Zero Knowledge Proofs (ZKP) as building blocks. Compared to [4, 6], and [7], our schemes privately compute the average unweighted (additive) and weighted (non-additive) characteristics without using relatively hard-to-implement techniques such as ZKP.
Independently (though slightly later than [8, 9]), linear-communication MPC was presented in [10]. A perfectly secure MPC protocol with linear communication complexity was proposed in [11]. Our model presented herein (with a semi-honest but curious adversary) copes with at most $\frac{n}{2}-1$ compromised users that supply arbitrary trust values, while [11] copes (in an information-theoretic manner) with up to $\frac{n}{3}$ compromised users (even totally malicious users) with a similar communication overhead, O(n·l+n^{3}). Here, l denotes the total message length.
Following [8, 9], privacy preserving protocols were investigated in [12] and [13]. Protocols for efficient multiparty sum computation (in the semi-honest adversarial model) are proposed in [12] and [13]. The derived protocols are augmented (by applying Zero Knowledge Proofs of plaintext equality and set membership) to handle a malicious adversary. The simulation results demonstrate the efficiency of the designed methods. Compared with our results, the most powerful and efficient StR protocol of [12] and [13] is based on a completely connected network topology where each network user is directly connected to all other users. In addition, the schemes of [12] and [13] can be applied in Additive Reputation Systems, while our schemes are also designed for Non-Additive Reputation Systems.
Homomorphic ElGamal encryption is used in [6] as part of a scheme for multiparty private web search with untrusted partners (users). The scheme is based on multiparty computation that protects the privacy of the users with regard to the web search engine and any number of dishonest internal users. The number of sent messages is linear in the number of users (each of the n users sends 4n−4 messages). In order to obtain a secure permutation (of N elements), switches of the Optimized Arbitrary Size Benes network (OAS-Benes) are distributed among a group of n users, and the honest users control at least a large fraction S(N) of the switches of the OAS-Benes. The proposed MPC protocol is based on homomorphic threshold n-out-of-n ElGamal encryption. Nevertheless, unlike our model, this MPC protocol is based on the computationally expensive honest-verifier ZKP protocol and the Benes permutation network.
An efficient scheme for secure two-party computation in “asymmetric settings”, in which one of the devices (smart card, mobile device, etc.) is strictly computationally weaker than the other, is introduced in [7]. The workload of one of the parties is minimized in the presented scheme. The proposed protocol has one-round complexity (i.e., a single message is sent in each direction, assuming a trusted setup). The protocol performs only two-party secure computations, while the number of participants is not bounded in our schemes. Moreover, computationally expensive Non-Interactive Zero-Knowledge Proof techniques and “extractable hash functions” are used in the scheme of [7].
A number of systematic approaches and corresponding architectures for creating reliable trust and reputation systems have been proposed recently in [14–18]. The main scope of these papers is the definition of a variety of settings for decentralized trust and reputation systems. A probabilistic approach for constructing computational models of trust and reputation is presented in [17], where trust and reputation are studied in the scope of various social and scientific disciplines.
The computation models for the reputation systems of [16] support user anonymity by generating a pseudonym for any user, thereby concealing user identity. In contrast to [16], the main challenge of our approach is to preserve user anonymity during the trust computation process itself.
One of the common problems stated and discussed in [18] is that most existing reputation systems lack the ability to differentiate dishonest from honest feedback and, therefore, are vulnerable to malicious collusions of users (peers in P2P systems) who provide dishonest feedback. The dishonest feedback is effectively filtered out in [18] by introducing the factor of feedback similarity between a user (peer) in the collusive group and a user (peer) outside the group. We propose a different approach for the removal of dishonest users (outliers) by estimating the range of the correct trust values [19].
Two other works that are related to our scheme appear in [2] and [3]. In [2] several privacy and anonymity preserving protocols are suggested for an Additive Reputation System.
The authors state that supporting perfect privacy in decentralized reputation systems is impossible. Nevertheless, they present alternative probabilistic schemes for preserving privacy. A probabilistic “witness selection” method is proposed in [2] in order to reduce the risk of selecting dishonest witnesses. Two schemes are proposed. The first scheme is very efficient in terms of communication overhead, but it is vulnerable to collusion of even two witnesses. The second scheme is more resistant to curious users, but is still vulnerable to collusion. It is based on a secret splitting scheme and provides a secure protocol based on the verifiable secret sharing scheme [20] derived from Shamir’s secret sharing scheme [21]. The number of dishonest users is heavily restricted: it must be no more than $\frac{n}{2}$, where n is the number of contributing users. The communication overhead of this scheme is rather high, requiring O(n^{3}) messages.
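As context for the secret-splitting approach of [2], the following is a minimal sketch of Shamir's t-out-of-n secret sharing [21]; the field modulus and parameters here are illustrative assumptions, not values taken from [2] or [20].

```python
import random

P = 2087                      # a small prime field modulus (assumption)

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the field Z_P."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(1234, t=3, n=5)
assert reconstruct(shares[:3]) == 1234   # any 3 shares recover the secret
assert reconstruct(shares[1:4]) == 1234
```

Fewer than t shares reveal nothing about the secret, which is what lets [2] distribute trust feedback among witnesses; the O(n³) message cost quoted above comes from the verifiable variant, not from this basic sketch.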
An enhanced model for reputation computation that extends the results of [2] is introduced in [3]. The main enhancement of [2] is that a non additive (weighted) trust and reputation can be computed privately. Three algorithms for computing non additive reputation are proposed in [3]. The algorithms have various degrees of privacy and different levels of protection against adversarial users. These schemes are computationally secure regardless of the number of dishonest users.
The paper [22] (published later than [8, 9]) proposes the distributed Malicious-k-shares protocol, which extends the results of [2] and [3] in the sense that a high majority of users (agents) can find k, k≪n, sufficiently trustworthy agents in a set of n−1 feedback-providing users. This protocol is based on homomorphic encryption and Non-Interactive Zero-Knowledge proofs. The Malicious-k-shares protocol is applicable in Additive Reputation Systems only, while our schemes also privately compute the weighted trust. The technique used in [22] for removal of outliers is based on Non-Interactive Zero-Knowledge Proofs of set-membership and plaintext equality, where the proof ensures that a certain share lies in the correct interval. The proposed protocol requires the exchange of O(n+log N) messages (where n and N are the number of users in the protocol and environment, respectively), while we use a more computationally effective technique for removal of outliers, exchanging only O(n) messages.
We propose new efficient trust computation schemes that can replace any of the above schemes. Our schemes enable the initiator to compute unweighted (additive) and weighted (non additive) trust with low communication complexity of O(n) (large) messages.
Table 1 summarizes the approaches proposed in this paper, computations that they perform, resistance to the different types of attacks and the crypto building blocks that are used.
This paper extends the schemes of [9] by introducing the MPKP and MPWP protocols that compute average unweighted and weighted trust in the general case, even when the initiator U_{ n } can be compromised by the adversary. The proofs of correctness of the proposed protocols extend the presentations of [9] and [8].
Paper organization
The formal system description appears in Section ‘Research design and methodology’. The computationally resistant (against a curious but honest adversary) private trust protocol, AP, is introduced in Section ‘Results and discussion’ (Subsection “Accumulated protocol AP”). The enhanced version of AP, WAP, is presented in Section ‘Results and discussion’ (Subsection “Weighted accumulated protocol WAP”). The PKEBP and CEBP protocols (resistant against semi-malicious users) and the scheme for removing outliers are presented in Section ‘Results and discussion’ (Subsection “Protocols for removal of outliers”). The generalized MPKP protocol and the weighted MPWP protocol are introduced in Section ‘Results and discussion’ (Subsection “Multiple Private Keys Protocol MPKP”). Conclusions appear in Section ‘Conclusions’.
Research design and methodology
The purpose of this paper is to generate new schemes for private trust computation within a community. The contribution of our work is as follows: (a) The trust computation is performed in a completely distributed manner without involving a Trusted Authority. (b) The trust in a particular user within the community is computed privately. The privacy of trust values held by the community users is preserved subject to standard cryptographic assumptions, when the adversary is computationally bounded. (c) The proposed protocols are resistant to a curious but honest poly-bounded k-listening adversary, Ad [23]. Such an adversary Ad may trace all the network links in the system and may compromise up to k users, k<n. We require that an adversary Ad compromising an intermediate node can only learn the node’s trust values, and an adversary Ad compromising the initiator U_{ n } can learn only the output of the protocol, namely the average trust. We distinguish between two categories of adversaries: honest but curious adversaries, and semi-malicious adversaries [2]. An honest but curious k-listening adversary follows the protocol by providing correct input. Nevertheless, it might try to learn trust values in different ways, including collusion with at most k compromised users. While an honest but curious adversary does not try to modify the correct output of the protocol, a semi-malicious adversary may provide dishonest input in order to bias the average trust value.
Let C={U_{1},…,U_{ n }} be a community of users such that each pair of users is connected via an authenticated channel. Assume that the purpose of a user U_{ n } from C is to get the unweighted ${T}_{t}^{\mathit{\text{avr}}}$ or weighted ${\mathit{\text{wT}}}_{t}^{\mathit{\text{avr}}}$ average trust in a specific user, U_{ t }, evaluated by the community of users. Denote by T^{i}, i=1..n, the trust of user U_{ i } in U_{ t }, and by ${T}_{t}^{\mathit{\text{avr}}}=\frac{\sum _{i=1}^{n}{T}^{i}}{n}$ and ${\mathit{\text{wT}}}_{t}^{\mathit{\text{avr}}}=\frac{1}{10}\sum _{i=1}^{n}{w}_{i}{T}^{i}$ the unweighted and weighted average trust in U_{ t }, respectively. Here w_{ i }∈{1,2,…,10} is the subjective trust of the initiator U_{ n } in U_{ i }, expressed as an integer in order to facilitate our secure computation; in the subsequent work we always assume that w_{ i } is an integer in this range. Denote by M_{ t } the message sent by U_{ n } to the first member of the community C.
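The two averages defined above can be illustrated numerically; the trust values and weights below are hypothetical, and we average over the contributing users.

```python
# Hypothetical trust values T^i of users U_1..U_4 in U_t,
# and the initiator's subjective weights w_i in {1,...,10}.
T = [6, 8, 4, 7]
w = [10, 5, 2, 8]

T_avr = sum(T) / len(T)                              # unweighted average trust
wT_avr = sum(wi * Ti for wi, Ti in zip(w, T)) / 10   # weighted trust, scaled by 1/10

assert T_avr == 6.25
assert wT_avr == (60 + 40 + 8 + 56) / 10             # 16.4
```

The 1/10 factor simply rescales the integer weights back to the unit interval, so the weighted trust stays comparable to the raw trust values.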
Our definitions of computational indistinguishability, simulation and private computation follow the definitions of [24]. Informally speaking, two probability ensembles are computationally indistinguishable if no polynomial time, probabilistic algorithm can decide with nonnegligible probability if a given input is drawn from the first or the second ensemble. A distributed protocol computes a function f privately if an adversary cannot obtain any information on the input and output of other parties, beyond what is implicit in the adversary’s own input and output. The way to prove that a protocol is private is to show that there exists a polynomial time, probabilistic simulator that receives as input the same input and output as an adversary and generates a string that is computationally indistinguishable from the whole view of the adversary, including every message that the adversary received in the protocol. Intuitively, the existence of a simulator implies that the adversary learns nothing from the execution of the protocol except its input and output.
Methods
The main tool we use in our schemes is public-key homomorphic encryption. In such an encryption scheme there is a modulus, M, and an efficiently computable function ϕ that maps a pair of encrypted values (E_{ K }(x),E_{ K }(y)), where 0≤x,y<M, to a single encrypted element ϕ(E_{ K }(x),E_{ K }(y))=E_{ K }(x+y mod M). In many homomorphic encryption systems the function ϕ is multiplication modulo some integer N. Given a natural number, c, and an encryption, E_{ K }(x), it is possible to compute E_{ K }(c·x mod M) without knowing the private key. Set β=E_{ K }(0), an encryption of the additive identity (if ϕ is modular multiplication, the ciphertext 1 serves as such an encryption), and let the binary representation of c be c=c_{ k }c_{k−1}…c_{0}. Go over the bits c_{ k },…,c_{0} in descending order. If c_{ j }=0, set β=ϕ(β,β), and if c_{ j }=1, set β=ϕ(ϕ(β,β),E_{ K }(x)). If ϕ is modular multiplication, this algorithm is identical to standard square-and-multiply modular exponentiation.
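The double-and-add loop above can be sketched as follows. To keep the sketch self-contained, the code substitutes a toy deterministic "encryption" E(x)=g^x mod p with assumed parameters (not a secure cryptosystem); it exists only to exercise the loop structure, with ϕ as modular multiplication.

```python
p, g = 101, 2                 # assumed toy parameters; g has order 100 mod 101
M = 100                       # plaintext modulus M = order of g

def E(x):                     # toy stand-in for homomorphic encryption
    return pow(g, x, p)

def phi(a, b):                # phi maps (E(x), E(y)) to E(x + y mod M)
    return a * b % p

def scalar_mult(cipher_x, c):
    """Compute E(c*x mod M) from E(x) alone, without any private key."""
    beta = E(0)               # start from an encryption of the additive identity
    for bit in bin(c)[2:]:    # bits of c in descending order
        beta = phi(beta, beta)            # doubling step: plaintext x2
        if bit == '1':
            beta = phi(beta, cipher_x)    # add-E(x) step
    return beta

assert scalar_mult(E(7), 13) == E(7 * 13 % M)   # E(91)
```

With ϕ as modular multiplication the loop is exactly square-and-multiply, which is why the procedure costs only O(log c) applications of ϕ.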
There are quite a few examples of homomorphic encryption schemes known in the cryptographic literature, including [1, 25–28]. There are also systems that allow both addition and multiplication of two encrypted plaintexts, e.g., [29] where only a single multiplication is possible for a pair of ciphertexts, and [30]. All of these examples of homomorphic cryptosystems are currently assumed to be semantically secure [26].
Results and discussion
Accumulated protocol AP
The AP protocol may be based on any homomorphic encryption scheme such that the modulus N satisfies $N>\sum _{i=1}^{n}{T}_{i}$. We illustrate the protocol by using the semantically secure Paillier cryptosystem [1]. This cryptosystem possesses a homomorphic property and is based on the Decisional Composite Residuosity assumption. Let p and q be large prime numbers, and N=pq. Let g be some element of ${Z}_{{N}^{2}}^{\ast}$. Note that the base, g, should be chosen properly by checking whether gcd(L(g^{λ} mod N^{2}),N)=1, where λ=lcm(p−1,q−1), and the L function is defined as $L\left(u\right)=\frac{u-1}{N}$. The public key is the (N,g) pair, while the (p,q) pair is the secret private key. The ciphertext, c, for the plaintext message m<N is generated by the sender as c=g^{m}r^{N} mod N^{2}, where r<N is a randomly chosen number. The decryption is performed as $m=\frac{L({c}^{\lambda}\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}{N}^{2})}{L({g}^{\lambda}\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}{N}^{2})}\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}N$ at the destination. Our schemes are based on the homomorphic property of the Paillier cryptosystem: the product of two encryptions of plaintexts m_{1} and m_{2} decrypts to the sum m_{1}+m_{2} mod N of the plaintexts. Thus, E(m_{1})·E(m_{2})≡E(m_{1}+m_{2} mod N) mod N^{2} and $E{\left({m}_{1}\right)}^{{m}_{2}}\equiv E({m}_{1}\xb7{m}_{2}\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}N)\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}{N}^{2}$. The AP protocol is described in Algorithm 1.
Assume that the initiator, U_{ n }, has generated a pair of public and private keys as described above and has shared its public key with each community user. Then, U_{ n } initializes to 1 the single-entry trust message M_{ t } and sends it to the first user, U_{1} (lines 1–3). Upon receiving the message M_{ t }, each node U_{ i } encrypts its trust in U_{ t } as $E\left({T}_{i}\right)={g}^{{T}_{i}}{r}_{i}^{N}\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}{N}^{2}$. Here, T_{ i } is the secret trust level of U_{ i } in U_{ t }, and r_{ i } is a randomly generated number. The output of U_{ i } is accumulated in the accumulator variable A by multiplying its current value by the new encrypted trust E(T_{ i }), i.e., A=A·E(T_{ i }). Then U_{ i } sends the updated M_{ t } message to the next user, U_{i+1}. This procedure is repeated until all trust values are accumulated in A (lines 4–9). The final M_{ t } message received by the initiator U_{ n } is ${M}_{t}=A=\prod _{i=1}^{n-1}E\left({T}_{i}\right)\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}{N}^{2}$. As a result, U_{ n } decrypts the value accumulated in the M_{ t } message as the sum of trusts ${S}_{t}=D\left({M}_{t}\right)=\sum _{i=1}^{n-1}{T}_{i}$. Thus, the average trust is ${T}_{t}^{\mathit{\text{avr}}}=\frac{{S}_{t}}{n-1}$ (Algorithm 1, lines 10–12). Proposition 1 proves that AP is a computationally private protocol for computing the trust of a community in U_{ t }.
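A minimal end-to-end sketch of the AP accumulation round follows, using a toy Paillier instance with tiny assumed primes and g=N+1 (real deployments use primes of at least 1024 bits); the trust values are hypothetical.

```python
import math
import random

# Toy Paillier instance (illustration only).
p, q = 17, 19
N, N2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)          # lambda = lcm(p-1, q-1)
g = N + 1                             # a standard valid choice of base
mu = pow(lam, -1, N)                  # since L(g^lam mod N^2) = lam for g = N+1

def E(m):
    r = random.randrange(2, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(2, N)
    return pow(g, m, N2) * pow(r, N, N2) % N2   # c = g^m * r^N mod N^2

def D(c):
    return (pow(c, lam, N2) - 1) // N * mu % N  # L(c^lam mod N^2) * mu mod N

# Hypothetical private trust values T_i of the n-1 queried users in U_t.
trust = [3, 7, 5, 9]

# U_n initializes the accumulator to 1; each U_i multiplies in E(T_i).
A = 1
for T_i in trust:
    A = A * E(T_i) % N2               # the accumulation step of Algorithm 1

S_t = D(A)                            # U_n decrypts the sum of trusts
assert S_t == sum(trust)              # 24; U_n never sees any single T_i
avg = S_t / len(trust)                # average trust in U_t
```

Note how U_n only ever decrypts the final accumulator, obtaining the sum (and hence the average) without learning any individual contribution.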
Proposition 1
Assume that an honest but curious adversary corrupts at most k users out of a community of n users, k<n. Then, AP privately computes T^{avr}, the average trust in user U_{ t }.
Proof
In order to prove the proposition, we have to prove that for every adversary there exists a simulator that given only the adversary’s input and output, generates a string that is computationally indistinguishable from the adversary’s view in AP. Let $I=\{{U}_{{i}_{1}},{U}_{{i}_{2}},\dots ,{U}_{{i}_{k}}\}$ denote the set of users that the adversary controls. Let ${\mathit{\text{view}}}_{I}^{\mathit{\text{AP}}}({X}_{I},{1}^{n})$ denote the combined view of all users in I. ${\mathit{\text{view}}}_{I}^{\mathit{\text{AP}}}$ includes the input, ${X}_{I}=\{{T}_{{i}_{1}},\dots ,{T}_{{i}_{k}}\}$, of all users in I, and a sequence of messages $E\left(\sum _{j=1}^{{i}_{1}}{T}_{j}\right),\dots ,E\left(\sum _{j=1}^{{i}_{k}}{T}_{j}\right)$ received by users in I. A simulator cannot generate the exact sequence $E\left(\sum _{j=1}^{{i}_{1}}{T}_{j}\right),\dots ,E\left(\sum _{j=1}^{{i}_{k}}{T}_{j}\right)$, since it does not have the input of uncorrupted users. Instead, the simulator chooses a random value α_{ j } for any user U_{ j }∉I, from the distribution of trust values, D. The simulator denotes ${\alpha}_{{i}_{1}}={T}_{{i}_{1}},\dots ,{\alpha}_{{i}_{k}}={T}_{{i}_{k}}$ and computes E(α_{ j }) for j=1,…,n−1. The simulator now computes: $\prod _{j=1}^{{i}_{1}}E\left({\alpha}_{j}\right)\equiv E\left(\sum _{j=1}^{{i}_{1}}{\alpha}_{j}\right)\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}{N}^{2},\dots ,\prod _{j=1}^{{i}_{k}}E\left({\alpha}_{j}\right)\equiv E\left(\sum _{j=1}^{{i}_{k}}{\alpha}_{j}\right)\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}{N}^{2}$. Hence, a simulator replaces $E\left(\sum _{j=1}^{{i}_{k}}{T}_{j}\right)$ by $E\left(\sum _{j=1}^{{i}_{k}}{\alpha}_{j}\right)$.
Assume, in contradiction, that there exists an algorithm DIS that distinguishes between the encryptions of partial sums $E\left(\sum _{j=1}^{{i}_{1}}{T}_{j}\right),\dots ,E\left(\sum _{j=1}^{{i}_{k}}{T}_{j}\right)$ of the correct trust values and the values $E\left(\sum _{j=1}^{{i}_{1}}{\alpha}_{j}\right),\dots ,E\left(\sum _{j=1}^{{i}_{k}}{\alpha}_{j}\right)$ randomly produced by a simulator. We construct an algorithm, B, that distinguishes between the two sequences E(T_{1}),…,E(T_{n−1}) and E(α_{1}),…,E(α_{n−1}), contradicting the semantic security property of E. The input to algorithm B is a sequence of values E(x_{1}),…,E(x_{n−1}), and it attempts to determine whether the values x_{1},…,x_{n−1} are equal to the values T_{1},…,T_{n−1} that the users provide, or form a sequence of random values chosen from the distribution D. The algorithm B computes for every ℓ=1,…,k the product $\prod _{j=1}^{{i}_{\ell}}E\left({x}_{j}\right)\equiv E\left(\sum _{j=1}^{{i}_{\ell}}{x}_{j}\right)\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}{N}^{2}$ and provides the encryptions of partial sums $E\left(\sum _{j=1}^{{i}_{1}}{x}_{j}\right),\dots ,E\left(\sum _{j=1}^{{i}_{k}}{x}_{j}\right)$ as input to DIS. B returns as output the same output as DIS. Since the input of DIS is $E\left(\sum _{j=1}^{{i}_{1}}{T}_{j}\right),\dots ,E\left(\sum _{j=1}^{{i}_{k}}{T}_{j}\right)$ if and only if the input of B is E(T_{1}),…,E(T_{n−1}), we find that B distinguishes between its two possible input distributions with the same probability that DIS distinguishes between its input distributions. □
AP uses O(n) messages each of length O(n).
Weighted accumulated protocol WAP
The Weighted Accumulated Protocol WAP extends the AP protocol by generating the weighted average trust in a specific user, U_{ t }, by the users in the community. The WAP protocol is based on an anonymous communication protocol proposed in [31] and on a homomorphic cryptosystem, e.g., the Paillier cryptosystem [1]. It is described in Algorithm 2.
The initiator, U_{ n }, generates n−1 weights w_{1},…,w_{n−1}. Each w_{ i } value reflects U_{ n }'s subjective trust level in user U_{ i }. U_{ n } initializes the accumulator variable, A, to 1, encrypts each w_{ i } value by means of, e.g., the Paillier cryptosystem [1] as $E\left({w}_{i}\right)={g}^{{w}_{i}}{h}^{{r}_{n,i}}\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}{N}^{2}$, composes a Trust Vector TV=[E(w_{1}),…,E(w_{n−1})] and sends the message M_{ t }=(TV,A) to U_{1}. Here, as in the AP case, p and q are large prime numbers, N=pq, and g and h are properly chosen parameters of the Paillier cryptosystem; r_{n,i} is a random exponent of h chosen by U_{ n } for each U_{ i } from C. Note that the AP protocol is a special case of the WAP protocol in which all weights w_{ i } are equal to 1.
As in the AP case, the M_{ t } message is passed among the community users in the prescribed order. Each user U_{ i } encrypts its weighted trust in U_{ t } as $E\left({T}_{i}\right)=E{\left({w}_{i}\right)}^{{T}_{i}}E\left(\overline{0}\right)$ and accumulates it in the accumulator variable A (lines 6–10). Note that multiplying by the random encryption of zero, $E\left(\overline{0}\right)$, ensures semantic security of the WAP protocol, since the user's output cannot be distinguished from a simulated random string. As a result, the initiator, U_{ n }, receives the M_{ t } message and decrypts the value accumulated in A as the weighted sum of trusts ${S}_{t}=D\left(A\right)=\sum _{i=1}^{n-1}{w}_{i}{T}_{i}$. Therefore, the weighted average trust is equal to ${\mathit{\text{wT}}}_{t}^{\mathit{\text{avr}}}=\frac{1}{10}\sum _{i=1}^{n-1}{w}_{i}{T}_{i}$. Proposition 2 proves that WAP computes the weighted average trust ${\mathit{\text{wT}}}_{t}^{\mathit{\text{avr}}}$ in the user U_{ t } in a computationally secure, privacy-preserving manner.
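The WAP accumulation step can be sketched under the same toy-Paillier assumptions (tiny primes, g=N+1; the weights and trust values are hypothetical): each user raises the encrypted weight to its trust value and re-randomizes with a fresh encryption of zero.

```python
import math
import random

# Toy Paillier instance (illustration only).
p, q = 17, 19
N, N2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = N + 1
mu = pow(lam, -1, N)

def E(m):
    r = random.randrange(2, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(2, N)
    return pow(g, m, N2) * pow(r, N, N2) % N2

def D(c):
    return (pow(c, lam, N2) - 1) // N * mu % N

weights = [2, 10, 6]          # U_n's subjective trust w_i in each U_i
trust = [4, 8, 5]             # each U_i's private trust T_i in U_t

# U_n sends TV = [E(w_1), ..., E(w_{n-1})]; each U_i raises its own
# entry to T_i and re-randomizes with a fresh encryption of zero.
TV = [E(w) for w in weights]
A = 1
for Ew, T_i in zip(TV, trust):
    A = A * pow(Ew, T_i, N2) * E(0) % N2   # E(w_i)^{T_i} * E(0-bar)

S_t = D(A)                                 # weighted sum of trust values
assert S_t == sum(w * t for w, t in zip(weights, trust))   # 8 + 80 + 30 = 118
```

The multiplication by E(0) changes nothing in the plaintext but makes each forwarded ciphertext look freshly random, which is exactly the re-randomization argument used in the privacy proof.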
Proposition 2
Assume that an honest but curious adversary corrupts at most k users out of a community of n users, k<n. Then, WAP privately computes w T^{avr}, the average weighted trust in user U_{ t }.
Proof
The proof is similar to the proof of Proposition 1. The view of the adversary includes the input of the compromised users ${T}_{{i}_{1}},\dots ,{T}_{{i}_{k}}$, the trust vector TV, and the accumulated variable, A. Each compromised user ${U}_{{i}_{j}}$ from I receives $\mathit{\text{TV}}=[E({w}_{{i}_{j}}),E({w}_{{i}_{j}+1}),\dots ,E({w}_{n-1})]$ and $A=\prod _{i=1}^{{i}_{j}}E{\left({w}_{i}\right)}^{{T}_{i}}E\left(\overline{0}\right)$.
A simulator for the adversary simulates ${\mathit{\text{view}}}_{I}^{\mathit{\text{WAP}}}$ as follows. The simulator input ${T}_{{i}_{1}},\dots ,{T}_{{i}_{k}}$ is the same as the input of the compromised users. A simulator chooses at random v_{1},…,v_{ n } according to a distribution, W, of weights, and $\stackrel{~}{{T}_{1}},\dots ,\stackrel{~}{{T}_{n}}$ according to a distribution, D, of trust values. Here $\stackrel{~}{{T}_{{i}_{1}}}={T}_{{i}_{1}},\dots ,\stackrel{~}{{T}_{{i}_{k}}}={T}_{{i}_{k}}$. Due to the semantic security of the homomorphic cryptosystem, the encrypted random values E(v_{1}),…,E(v_{ n }) are indistinguishable from the encrypted correct weights $E\left({w}_{{i}_{1}}\right),\dots ,E\left({w}_{{i}_{n}}\right)$.
The randomization of any U_{ i }−t h user output is performed by multiplying its secret ${w}_{i}^{{T}_{i}}$ by the random encryption of a zero string $E\left(\overline{0}\right)$. Given E(w), the two values E(w)^{T} and E(u), where u is chosen at random from the distribution of wT, can be distinguished since T is chosen from a small domain of trust values. Given E(w), the values $E{\left(w\right)}^{T}E\left(\overline{0}\right)$ are distributed identically to an encryption E(w)^{T}=E(w T mod N). Based on the semantic security of the homomorphic cryptosystem, E(u) and E(w T) cannot be distinguished even given E(w). □
WAP uses O(n) messages each of length O(n).
Protocols for removal of outliers
The protocols for outlier removal are introduced in this section. The Public Key Encryption Based Protocol PKEBP produces a vector of the exact trust values. As a result, the initiator, U_n, can evaluate the correct trust range by removing the outliers that provide extremely high or low trust feedback. PKEBP preserves user privacy in the case where the adversary cannot corrupt the initiator and several users at the same time.
The generalized Commutative Encryption Based Protocol (C E B P) relaxes this limitation and privately computes the exact trust values contributed by each community user, even in the case when an adversary can corrupt the initiator and several users at the same time.
Public Key Encryption Based Protocol PKEBP
Denote the encryption algorithm used in this scheme by E and the decryption algorithm by D. U_n generates a pair (k,s) of public/private keys. U_n publishes its public encryption key k, while the private decryption key s is kept secret.
The Public Key Encryption Based Protocol PKEBP is performed in two rounds (Algorithm 3, Figure 1). At the initialization stage U_n initializes the (n−1)-entry vector TV[1..n−1] and sends it to the community of users in the prescribed order in the message M_t=(TV[1..n−1]) (Algorithm 3, lines 1–2 and Figure 1, Round 1).
In the first round, on reception of M_t each user U_i encrypts its trust T_i under k, stores the result E(T_i) in the entry TV[i], and sends the updated message M_t to the next user (Algorithm 3, lines 3–7).
The second round of the PKEBP protocol is performed when the updated TV[1..n−1] vector returns from U_{n−1} to the user U_1 (see Algorithm 3, lines 8–11 and Figure 1, Round 2). Note that the TV vector does not visit the initiator U_n after execution of the first round. During the second round each user U_i swaps its i-th entry with a randomly chosen j-th entry. After that, the newly updated vector message M_t is sent to the next user U_{i+1} (Algorithm 3, lines 8–11).
The result of round 1 is a sequence of encrypted elements (E(T_1),…,E(T_{n−1})), while the result of round 2 is a sequence TV[1..n−1] = (E(T_1^*),…,E(T_{n−1}^*)). The multiset T_1,…,T_{n−1} is identical to the multiset T_1^*,…,T_{n−1}^*: the sequence T_1,…,T_{n−1} is permuted to T_1^*,…,T_{n−1}^* by a permutation π, which is computed in a distributed manner by all community members (Algorithm 3, line 10). Thus, by applying the decryption procedure, all encrypted trust values T_1,…,T_{n−1} are revealed (Algorithm 3, lines 12–13). Moreover, the random permutation π performed in the second round preserves the unlinkability of user identities.
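The effect of the two rounds can be sketched with the encryption abstracted away (a simplified sketch: integers stand in for the ciphertexts E(T_i), whereas the real protocol swaps ciphertexts, never plaintexts). The per-user swaps of round 2 compose into the distributed permutation π:

```python
import random

# Round 1 collects the (encrypted) trust values; in round 2 each user
# swaps its own entry with a uniformly chosen entry. The swaps compose
# into a distributed permutation pi of the vector.
trusts = [4, 9, 2, 7, 5]            # T_1..T_{n-1}
TV = list(trusts)                   # after round 1: [E(T_1)..E(T_{n-1})]
for i in range(len(TV)):            # round 2: user U_{i+1} swaps entry i
    j = random.randrange(len(TV))
    TV[i], TV[j] = TV[j], TV[i]
# The initiator decrypts a permuted vector: the multiset is preserved,
# but the position of any individual contribution is randomized.
assert sorted(TV) == sorted(trusts)
```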
Proposition 3 proves the privacy of the PKEBP protocol.
Proposition 3
PKEBP performs computationally secure computation of exact private trust values assuming that an adversary cannot corrupt the initiator and several users at the same time.
Proof sketch
Case 1: U_n ∉ I. We argue that PKEBP is private by showing that an adversary that controls a set of compromised users does not learn any information on the trust values of other users. We achieve this by showing a simulator that, given the input of the compromised users, can simulate the messages that these users receive as part of the protocol. Therefore, protocol messages give the users in I no information on the users outside of I. Assume that the set I of compromised users includes k members, I={U_{i_1},…,U_{i_k}}, while the uncompromised users are U_{i_{k+1}},…,U_{i_n}. The view of the users in I includes the input of the compromised users T_{i_1},…,T_{i_k} and the trust vectors TV. Each compromised user U_{i_j} from I receives the TV vector with partially permuted entries.
A simulator for the adversary simulates this view as follows. The simulator's input is the same as the input of the compromised users: it contains their trust values T_{i_1},…,T_{i_k} and the set of their permuted indexes i_{j_1},…,i_{j_k}. The simulator sets α_ℓ = T_ℓ for every compromised user U_ℓ ∈ I, chooses a random value α_ℓ from the distribution D of trust values for every user U_ℓ ∉ I, and computes the encryptions E(α_ℓ). Due to the semantic security of the homomorphic cryptosystem [24, 32], the adversary cannot distinguish between the encryptions of the correct trust values of the uncompromised users and the encryptions E(α_ℓ) of the simulated random values.
Case 2: I = {U_n}. In this case, the view of U_n consists of the TV vector with randomly permuted entries. TV includes the sequence of the randomly permuted exact trust values, decrypted by the secret key s. We prove the privacy of PKEBP by showing a simulator that, given the PKEBP output sequence T_{i_1},…,T_{i_{n−1}} of the exact trust values, can simulate the TV vector as U_n receives it as part of the protocol. A simulator for the compromised U_n proceeds as follows. The simulator's input is the multiset T_1,…,T_{n−1} of the exact trust values, decrypted by U_n's private key s. The simulator chooses a random permutation and permutes the received values. Due to the random permutation π performed by each community user, the adversary cannot distinguish between the simulated sequence T_{j_1},…,T_{j_{n−1}} and the correct output of the PKEBP.
As a result, given a multiset of the exact trust values, U_{ n } cannot link these values to the users that contributed them. □
PKEBP uses O(n) messages each of length O(n).
Generating the average trust level in the presence of semi-malicious users is based on the algorithm suggested in [19]. Define by U the multiset of non-corrupted users that provide correct feedback, and by V the multiset of all users participating in the trust computation process. According to [19], the following requirement must be satisfied in our model: |V−U| ≤ J and |V| ≥ 2J for a certain value J. Then the range of the correct trust values, range(U), contains the subset reduce^J(V) of V. Here reduce^J(V) is obtained from the multiset V of all (correct and extremely low/high) trust values by deleting the J smallest and the J largest values.
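The reduce^J operation itself is a simple trimming step; a minimal sketch:

```python
def reduce_j(values, J):
    """reduce^J(V): delete the J smallest and J largest trust values.
    Requires |V| >= 2J, matching the requirement of [19]."""
    assert len(values) >= 2 * J
    s = sorted(values)
    return s[J:len(s) - J]

# A cluster of honest values surrounded by extreme feedback from outliers.
feedback = [1, 9, 5, 5, 4, 6, 5, 0]
assert reduce_j(feedback, 2) == [4, 5, 5, 5]
```

With at most J corrupted contributions, every surviving value lies within the range of the honest values.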
If an adversary can corrupt the initiator and several users at the same time, a different protocol is required. The generalized Commutative Encryption Based Protocol CEBP is presented in the next subsection.
Commutative Encryption Based Protocol CEBP
The CEBP we propose uses commutative encryption as a building block. An encryption scheme is commutative if a ciphertext that is encrypted by several keys can be decrypted regardless of the order of the decryption keys. Formally, denote the encryption algorithm by E and the decryption algorithm by D. The encryption scheme is commutative if for every plaintext message m and every two keys k_1, k_2, if c = E_{k_1}(E_{k_2}(m)) then m = D_{k_1}(D_{k_2}(c)) (note that for any encryption scheme m = D_{k_2}(D_{k_1}(c))). One possible candidate for a commutative encryption scheme is the Pohlig-Hellman scheme [33].
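The commutativity of Pohlig-Hellman follows from the commutativity of exponentiation: encryption under two keys yields m^{a_1 a_2} mod p, and either decryption order multiplies the exponent back to 1 mod p−1. A small sketch (toy prime, for illustration only):

```python
import random
from math import gcd

# Pohlig-Hellman over Z_p^*: the key is a pair (a, b) with
# a*b = 1 (mod p-1); encryption is c = m^a mod p and decryption
# is m = c^b mod p.
p = 2027                              # prime; p - 1 = 2 * 1013

def keygen():
    while True:
        a = random.randrange(2, p - 1)
        if gcd(a, p - 1) == 1:        # a must be invertible mod p-1
            return a, pow(a, -1, p - 1)

def enc(m, a):
    return pow(m, a, p)

def dec(c, b):
    return pow(c, b, p)

a1, b1 = keygen()
a2, b2 = keygen()
c = enc(enc(42, a2), a1)              # encrypt under k_2, then k_1
# Commutativity: the decryption order does not matter.
assert dec(dec(c, b1), b2) == 42
assert dec(dec(c, b2), b1) == 42
```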
The basic idea of CEBP is for each user to encrypt all the trust values and then decrypt and permute them at the same time, so that an adversary cannot associate decrypted trust values with the users that published their encryptions. The CEBP protocol is executed in three rounds (Algorithm 4). Each round passes sequentially from the initiator U_n through the community users to U_{n−1}.
The first round begins with the initiator, U_{ n } choosing and publishing a public key. Every other user selects a symmetric key for a commutative encryption scheme. All the users encrypt their trust values both with their keys and with the public key of U_{ n }. Encryption with the initiator’s public key prevents an adversary that does not control the initiator, U_{ n }, from obtaining the multiset of trust values. After the first round, for every i=1,…,n−1, the ith entry in the trust vector, TV, includes the trust value of U_{ i } encrypted by both the public key of U_{ n } and the symmetric key of U_{ i }.
In the second round each user encrypts all the entries of TV in such a way that at the end of the second round the i-th entry is the trust value of U_i encrypted by the keys of U_1,U_2,…,U_n. Finally, in the third round, for every i=1,…,n−1, U_i decrypts every entry using its own symmetric key and randomly permutes the entries of TV. At the end of round 3 the trust vector contains all the trust values, encrypted by the public key of U_n and permuted. By decrypting all the entries in TV, U_n obtains the vector of all trust values.
We use ElGamal encryption [34] as the initiator's public key scheme. The symmetric scheme for users U_1,…,U_{n−1} is Pohlig-Hellman. Both the Pohlig-Hellman and the ElGamal schemes are implemented over the same group, which is defined as follows. Let p be a large prime such that p−1 has a large prime factor q. Let g ∈ ℤ_p^* be an element of order q in ℤ_p^*. In the Pohlig-Hellman scheme, the key is a pair a,b ∈ ℤ_{p−1}^* such that ab ≡ 1 mod (p−1). A plaintext m ∈ ℤ_p is encrypted by c ≡ m^a mod p and a ciphertext is decrypted by m ≡ c^b mod p. In the ElGamal scheme, the private key is a ∈ {0,…,q−1}, the public key is g^a mod p, and a plaintext m ∈ ℤ_p is encrypted by the pair (g^b mod p, g^{ab}·m mod p) for a randomly chosen b. We refer to the two parts of an ElGamal encryption as its two components.
By using the Pohlig-Hellman and ElGamal encryption schemes over the same group, we ensure that the security of CEBP can be reduced to the hardness of the Decisional Diffie-Hellman (DDH) problem [35]. The DDH problem is to distinguish between the two ensembles (g^x mod p, g^y mod p, g^z mod p) and (g^x mod p, g^y mod p, g^{xy} mod p). The DDH hardness assumption is that no probabilistic polynomial-time algorithm can distinguish between these two probability ensembles with non-negligible advantage.
The details of the protocol follow.
The initiator begins round 1 (lines 1–9) by choosing parameters for ElGamal encryption and distributing its public key g^{k_n} mod p. Every other user U_i (i=1,…,n−1) chooses four random and independent pairs of Pohlig-Hellman keys (a_i^1, b_i^1), (a_i^2, b_i^2), (α_i^1, β_i^1), (α_i^2, β_i^2). U_i uses the ElGamal public key to encrypt its trust value T_i. The result is (g^{k_i} mod p, T_i g^{k_i k_n} mod p), where U_i chooses k_i randomly in the range 0,…,q−1. U_i proceeds to encrypt the ElGamal encryption of T_i with its Pohlig-Hellman keys; each of the two components of the ElGamal encryption is encrypted by one distinct Pohlig-Hellman key. The result is (g^{k_i a_i^1} mod p, (T_i g^{k_i k_n})^{a_i^2} mod p).
U_i completes the round by publishing this value in TV[i]. We think of TV[i] as having two components, TV[i,1] and TV[i,2]: U_i stores g^{k_i a_i^1} mod p in TV[i,1] and (T_i g^{k_i k_n})^{a_i^2} mod p in TV[i,2].
In round 2, every user U_i, i=1,…,n−1 makes sure that every entry of TV is encrypted with all four of its Pohlig-Hellman encryption keys (where two of the keys are used to encrypt the left component and two are used to encrypt the right component). Thus, U_i encrypts TV[i] with α_i^1 and α_i^2, and encrypts TV[j] for any j≠i with a_i^1, a_i^2, α_i^1 and α_i^2. After the second round the entry TV[i] holds the value (g^{k_i ∏_{j=1}^{n−1} a_j^1 α_j^1} mod p, (T_i g^{k_i k_n})^{∏_{j=1}^{n−1} a_j^2 α_j^2} mod p).
In round 3, the users both decrypt and permute all the values. Each user decrypts all values using both its pairs of PohligHellman keys (lines 20–27) and then randomly permutes the resulting vector of values. Due to the commutative property of the scheme, the initiator, U_{ n }, holds all the trust values at the end of round 3. However, the random permutation each user applies to the encrypted values in round 3 ensures that even if only a pair of users is not compromised, the decrypted trust values are randomly permuted in relation to their associated users.
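The three rounds can be sketched end to end. The toy run below (tiny parameters, three contributing users, for illustration only) encodes each trust value as g^{T_i} so that plaintexts lie in the subgroup generated by g, as Proposition 4 requires, and recovers them at the end by brute-force discrete log over the small trust domain. As a simplification it collapses rounds 1 and 2 into a single combined exponent per user per component, which is the net effect of the paper's key schedule, since every entry ends up exponentiated by all of a user's keys:

```python
import random

# p = 2q + 1 is a safe prime; g = 4 generates the order-q subgroup.
p, q, g = 467, 233, 4
TRUST = [3, 7, 5]                    # secret trust values of U_1..U_3

def ph_key():
    a = random.randrange(2, q)       # q is prime, so a is invertible mod q
    return a, pow(a, -1, q)

# Round 1: the initiator U_n publishes an ElGamal public key and each
# U_i ElGamal-encrypts g^{T_i}, keeping the plaintext in the subgroup.
k_n = random.randrange(1, q)
pk = pow(g, k_n, p)
TV = []
for T in TRUST:
    k_i = random.randrange(1, q)
    TV.append((pow(g, k_i, p), pow(g, T, p) * pow(pk, k_i, p) % p))

# Rounds 1-2 (collapsed): each user applies one combined Pohlig-Hellman
# exponent per component to every entry of TV.
users = [(ph_key(), ph_key()) for _ in TRUST]
for (a1, _), (a2, _) in users:
    TV = [(pow(c1, a1, p), pow(c2, a2, p)) for c1, c2 in TV]

# Round 3: each user strips its own keys and shuffles the vector.
for (_, b1), (_, b2) in users:
    TV = [(pow(c1, b1, p), pow(c2, b2, p)) for c1, c2 in TV]
    random.shuffle(TV)

# U_n removes the ElGamal layer; trust values are recovered by a
# brute-force discrete log over the small trust domain.
recovered = set()
for c1, c2 in TV:
    m = c2 * pow(pow(c1, k_n, p), -1, p) % p
    recovered.add(next(t for t in range(10) if pow(g, t, p) == m))
assert recovered == set(TRUST)
```

The initiator obtains the full multiset of trust values, while the shuffles in round 3 destroy the link between a value and its contributor.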
Proposition 4
Assume that the DDH problem is hard and that an honest but curious adversary corrupts at most k users out of a community of n users, k≤n. If the trust values of all the users are in the subgroup generated by g, then CEBP privately computes the set of all trust values of the community users.
Proof sketch
If the adversary controls at least n−1 users, including the initiator, then the protocol is trivially private, since the output reveals the exact trust values of every user, and thus any protocol does not add information. If the adversary does not control the initiator then the protocol is private because all trust values are encrypted by the initiator’s public key throughout the protocol. Since the ElGamal encryption scheme is semantically secure, given the hardness of DDH problem, it is easy to argue privacy.
Therefore, the most interesting case is when k≤n−2 and the adversary controls the initiator. To prove privacy we define a simulator that is given the adversary’s input and output (which includes the set of trust values) and simulates the adversary’s view of protocol messages.
Each message in our protocol consists of the trust vector TV. Each entry in this vector is a pair of elements in ℤ_p^*. Thus, the whole view of the adversary can be written as e_1,…,e_m, where e_i ∈ ℤ_p^* for every i=1,…,m. The value of m is at most O(n^2) because the number of elements in TV is 2(n−1) and the adversary receives a message containing TV at most n−2 times in each of the three rounds.
Note that each element e_{ i } is obtained by raising g to a power η_{ i } that depends on the input and random coin tosses of each participant. The simulator generates a simulated view f_{1},…,f_{ m } as follows. If η_{ i } is determined by the input and coin tosses of the adversary, then the simulator who has access to this input and coin tosses sets f_{ i }=e_{ i }. However, if η_{ i } is generated at least partially by an uncorrupted node then the simulator independently chooses a random element ζ_{ i }∈{0,…,q−1} and sets ${f}_{i}={g}^{{\zeta}_{i}}$.
To prove that the simulator's view is computationally indistinguishable from the real-world view, we construct a series of hybrid ensembles H_0,…,H_m, such that H_0 is the real-world view e_1,…,e_m and for every i=1,…,m we define H_i ≜ f_1,…,f_i,e_{i+1},…,e_m. Thus, H_m is the view of the simulator.
We can show that for every i, if H_{ i } can be computationally distinguished from H_{i+1} then the DDH assumption is false. Since we assume that DDH is a hard problem we have that H_{ i } and H_{i+1} are computationally indistinguishable and since m is of polynomial size in n, we have that H_{0} is indistinguishable from H_{ m }, completing the proof. □
The protocol requires O(n) messages, each of length O(n) and the computation complexity for each participant in the scheme is O(n).
Multiple Private Keys Protocol MPKP
The AP and WAP protocols introduced in the previous sections carry out private trust computation under the assumption that the initiator U_n is not compromised and does not share its private key with other users. In the rest of this work we assume that any community user, including U_n, may be compromised by a poly-bounded k-listening curious adversary.
The generalized Multiple Private Keys Protocol MPKP copes with this problem and outputs the average trust. The idea of the MPKP protocol is as follows. During the initialization stage the user U_n initializes all entries of the trust vector TV and the accumulated vector AV to 1, sets the accumulated variable A to 1, and sends the message M_t=(TV,AV,A) to the first community user U_1, as in the previous protocols. During the first round of the MPKP execution each user U_i randomly fragments its secret trust T_i into a sum of n−1 shares, encrypts the corresponding share by the public key of each user U_j, j=1,…,n−1, and accumulates its encrypted shares in the accumulated vector AV (multiplying each of them into the corresponding entry). After execution of the first round, the updated AV vector does not return to the initiator U_n. Instead, the AV vector visits each community user: each U_i opens the i-th entry (encrypted by U_i's public key), revealing a sum of decrypted shares, encrypts this sum by the public key of the initiator U_n, accumulates it in the accumulated variable A, and deletes the i-th entry of the AV vector.
A detailed description of the MPKP protocol follows. Assume that each community user U_i, i=1,…,n−1 generates a personal pair (P_i^+, P_i^-) of private and public keys. Denote by E_i and D_i the encryption and decryption algorithms of U_i. The private key P_i^+ is kept secret, while the public key P_i^- is shared with all other users U_1,…,U_{i−1},U_{i+1},…,U_n. As in the previous schemes, the cryptosystem must be homomorphic. An additional requirement is that the homomorphism modulus m must be identical for all users. One possibility is to use the Benaloh cryptosystem [28, 36], for which many different key pairs are possible for every homomorphism modulus. The system works as follows. Select two large primes p,q such that, with N ≜ pq, we have m | p−1, gcd(m,(p−1)/m)=1 and gcd(m,q−1)=1, which implies that m is odd. The density of such primes along appropriate arithmetic sequences is large enough to ensure efficient generation of multiple pairs p,q (see [36] for details). Select y ∈ ℤ_N^* such that y^{φ(N)/m} ≢ 1 mod N. The public key is (N,y), and encryption of M ∈ ℤ_m is performed by choosing a random u ∈ ℤ_N^* and sending y^M u^m mod N. In order to decrypt, the holder of the secret key computes at a preprocessing stage T_M ≜ y^{Mφ(N)/m} mod N for every M ∈ ℤ_m. It should be noted that m is small enough that m exponentiations can be performed. Decryption of z is performed by computing z^{φ(N)/m} mod N and finding the unique T_M to which it is equal.
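A toy instantiation of the Benaloh scheme with tiny hypothetical parameters (m=5, p=11, q=3, which satisfy the divisibility and gcd constraints above) illustrates encryption, table-based decryption, and the additive homomorphism mod m:

```python
import random
from math import gcd

# Toy Benaloh instance (tiny hypothetical parameters, insecure,
# for illustration only).
m = 5                       # shared homomorphism modulus, odd
p, q = 11, 3                # m | p-1, gcd(m, (p-1)/m) = gcd(m, q-1) = 1
N = p * q                   # 33
phi = (p - 1) * (q - 1)     # 20
y = 2                       # y^(phi/m) = 16 != 1 (mod N)

def enc(M):
    u = random.randrange(2, N)
    while gcd(u, N) != 1:               # u must be a unit mod N
        u = random.randrange(2, N)
    return pow(y, M, N) * pow(u, m, N) % N

# Preprocessing: the table T_M = y^(M*phi/m) mod N for M = 0..m-1.
T = {pow(y, M * phi // m, N): M for M in range(m)}

def dec(z):
    return T[pow(z, phi // m, N)]       # u^phi vanishes, leaving T_M

# The scheme is additively homomorphic modulo m.
c = enc(2) * enc(4) % N
assert dec(c) == (2 + 4) % m
```

Decryption works because z^{φ/m} = y^{Mφ/m}·u^{φ} ≡ T_M mod N, and the m table values are distinct.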
The MPKP is performed in two rounds (Algorithm 5). The initialization procedure is shown in lines 1–4. The first round is the accumulation round, where all users share their secret trust values T_i with the other users. Upon reception of a message M_t, each user U_i proceeds as follows: (a) U_i chooses r_1^i,…,r_{n−1}^i uniformly at random such that T_i = Σ_{j=1}^{n−1} r_j^i; (b) U_i encrypts each r_j^i, j=1,…,n−1 by the public key P_j^- of the user U_j and multiplies it into the current value stored in the j-th entry of AV. As a result, the output AV vector contains the accumulated product ∏_{k=1}^{n−1} E_j(r_j^k) in each j-th entry (lines 5–12).
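Step (a), the additive fragmentation, can be sketched as follows (the modulus m = 1000 is an arbitrary stand-in for the shared homomorphism modulus):

```python
import random

# MPKP step (a): U_i fragments its secret trust T_i into n-1 random
# shares that sum to T_i modulo the homomorphism modulus.
m = 1000

def fragment(T, parts):
    shares = [random.randrange(m) for _ in range(parts - 1)]
    shares.append((T - sum(shares)) % m)       # last share fixes the sum
    return shares

shares = fragment(7, 4)                        # n - 1 = 4 shares
assert sum(shares) % m == 7
# Any proper subset of the shares is uniformly distributed, so no
# single user U_j learns anything about T_i from its share alone.
```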
In the second round, on reception of the message M_t, each user U_i decrypts the corresponding i-th entry with its private key P_i^+, computes the sum Σ_{j=1}^{n−1} r_i^j, encrypts it by U_n's public key P_n^- as E_n(Σ_{j=1}^{n−1} r_i^j), accumulates this value in the accumulated variable A, deletes the i-th entry, and sends the updated vector to the next user U_{i+1}. Note that the partial sum Σ_{j=1}^{n−1} r_i^j that U_i decrypts reveals no information about the correct trust values. As a result of the second round the initiator U_n receives A = ∏_{i=1}^{n−1} E_n(Σ_{j=1}^{n−1} r_i^j) (lines 13–19). U_n decrypts A and computes the sum of trust values as S_t = Σ_{i=1}^{n−1} Σ_{j=1}^{n−1} r_i^j. The average trust T^{avr} is equal to S_t/n (lines 20–22). Proposition 5 states the privacy of the MPKP protocol. The communication complexity of the MPKP protocol is O(n) messages, each of length O(n).
Proposition 5
MPKP performs a computationally secure computation over the exact private trust values in the Additive Reputation System. No restriction is imposed on the initiator U_n.
The last protocol we introduce is the MPWP, which computes the weighted average trust wT_t^{avr}. The idea of the MPWP is as follows. During the initialization stage the user U_n generates a vector TV such that each i-th entry contains the weight w_i of U_i encrypted by U_n's public key. U_n sends TV and an (n−1)×(n−1) matrix SM, with all entries initialized to 1, to the first community user U_1, as in the previous protocols. During the first round of the MPWP execution each U_i raises its encrypted weight to the power of its secret trust, obtaining E_n(w_i)^{T_i}, multiplies it by an encryption of a randomly chosen number (bias) z_i, and accumulates the product in the accumulated entry (by multiplying the entry by the obtained result). In addition, U_i fragments its bias z_i into n−1 shares, encrypts each j-th share by the public key of U_j, and inserts it in the j-th location of the i-th matrix row. At the end of the first round U_n decrypts the total biased weighted trust. The total random bias is removed during the second round of the MPWP execution, when each U_j decrypts the entries of the j-th matrix column, encrypts the sum of these values by the public key of the initiator, accumulates it in an accumulation variable A, and deletes the j-th column.
The details follow. The initiator U_n starts the first round by generating the encrypted (n−1)-entry trust vector TV = [E_n(w_1),…,E_n(w_{n−1})]. Note that each weight w_i is encrypted by U_n's public key P_n^-. In addition, U_n initializes each entry of the (n−1)×(n−1) matrix of shares SM to 1. The message M_t^w sent by U_n to the community users is M_t^w = (TV,SM). Upon reception of the TV vector each user U_i proceeds as follows: (a) U_i computes E_n(w_i)^{T_i}·E_n(z_i), where z_i is a number randomly generated by U_i that provides the secret bias. (b) U_i accumulates its encrypted weighted trust in the accumulated variable A by setting A = A·E_n(w_i)^{T_i}·E_n(z_i). After that, the i-th entry of TV is deleted. (c) U_i shares z_i in the i-th row of the shares matrix SM as SM[i][ ] = [E_1(z_i^1),…,E_{n−1}(z_i^{n−1})]. At the end of the first round U_n receives the accumulated biased product BT = ∏_{i=1}^{n−1} E_n(w_i)^{T_i} E_n(z_i), encrypted by its public key, and the updated shares matrix SM with SM[i][j] = E_j(z_i^j). The decryption procedure applied to BT outputs D(BT) = Σ_{i=1}^{n−1} w_i T_i + Σ_{i=1}^{n−1} z_i.
A second round is performed in order to subtract the random bias Σ_{i=1}^{n−1} z_i from the biased weighted trust. The second round of the MPWP is identical to the corresponding round of the MPKP. Upon reception of the SM matrix each user U_i decrypts the corresponding i-th column E_i(z_1^i), E_i(z_2^i),…,E_i(z_{n−1}^i), whose entries were encrypted by the community users with U_i's public key P_i^-. Each U_i, i=1,…,n−1 computes the sum of the partial shares PSS_i = Σ_{j=1}^{n−1} z_j^i, encrypts it by U_n's public key P_n^-, and accumulates it in the accumulated variable A. After that, the i-th column SM[ ][i] is deleted. As a result of the second round, the initiator U_n receives the accumulated variable A = ∏_{i=1}^{n−1} E_n(PSS_i). The accumulated bias is decrypted as D(A) = Σ_{i=1}^{n−1} Σ_{j=1}^{n−1} z_j^i.
Finally, the weighted average trust wT^{avr} is obtained from the difference D(BT)−D(A). The private trust computation carried out by the MPKP and the MPWP protocols is computationally secure for the following reasons:
(a) Each community user U_i fragments its trust T_i randomly into n−1 shares (Algorithm 5, lines 6–8).

(b) Each share r_i^j, encrypted by U_i with the public key P_j^- of U_j, shared with each user U_j, j=1,…,n−1, and accumulated in the TV vector, reveals no information about the exact value T_i to U_j (lines 9–14).

(c) The decryption performed by each U_i, i=1,…,n−1 with its private key P_i^+ in the second round outputs the sum of the partial shares of all community users, D_i(TV[i]) = Σ_{j=1}^{n−1} r_j^i. The value Σ_{j=1}^{n−1} r_j^i reveals no information about the secret trust values T_1,…,T_{i−1},T_{i+1},…,T_{n−1}.

(d) The encryption E_n(Σ_{j=1}^{n−1} r_j^i) of the partial shares sum, performed by each U_i with the initiator's public key P_n^- and accumulated in A, can be decrypted by U_n only.

(e) Assume a coalition U_{j_i},…,U_{j_{i+k−1}} of at most k<n curious adversarial users, possibly including the initiator U_n. Then the exact trust values revealed by the coalition are the coalition members' trust values only. The privacy of the uncorrupted users is preserved by the homomorphic encryption scheme, which generates for each user its own secret private key, and by the random fragmentation of the secret trust.
In MPWP, O(n) messages of length O(n^{2}) are sent.
Conclusions
We derived a number of schemes for the private computation of the trust attributed to a given user by a community of users. Trust computation is performed in a fully distributed manner without involving a Trusted Authority. The proposed AP and WAP protocols are computationally secure under the assumption of an uncompromised initiator U_n. The AP and WAP protocols compute the average unweighted and weighted trust, respectively. The generalized MPKP and MPWP protocols relax the assumption that U_n is uncompromised. They carry out the private unweighted and weighted trust computation, respectively, without limitations imposed on U_n. The proposed protocols send O(n) (possibly large) messages.
The PKEBP and CEBP for the removal of outliers are presented as well. The protocols introduced and analyzed in this paper may be efficiently applied in a fully distributed environment without any trusted authority. Compared with other models, our schemes privately compute trust values with a low communication overhead of O(n) (large) messages in the simplified ring network topology. The schemes may also be applied to complete-topology systems, where all network users are connected by direct links. The schemes are attractive in the case when sending a linear number (O(n)) of large messages is better than sending a substantially larger number (O(n^3)) of possibly smaller messages. Moreover, outlier removal (performed by the CEBP protocol) may be efficiently carried out by computationally restricted users when there are no resources for generating computationally expensive Interactive and Non-Interactive Zero-Knowledge Proofs. The schemes proposed in this paper are not restricted to trust computation. They may be extended to other models that privately compute sensitive information with only O(n) messages.
In the case where the trust is represented by several values rather than a single value, one can apply our techniques to each such value independently.
Acknowledgments
Supported by the Deutsche Telekom Laboratories at Ben-Gurion University of the Negev, Israel, the Rita Altura Trust Chair in Computer Sciences, the ICT Programme of the European Union under contract number FP7-215270 (FRONTS), the Lynne and William Frankel Center for Computer Sciences, and the internal research program of the Sami Shamoon College of Engineering. This paper is a full version of two extended abstracts, each describing a different part of the results [8, 9].
References
 1.
Paillier P: Public-key cryptosystems based on composite degree residuosity classes. Advances in Cryptology, EUROCRYPT '99. Springer Berlin Heidelberg, pp 223–238; 1999.
 2.
Pavlov E, Rosenschein JS, Topol Z: Supporting privacy in decentralized additive reputation systems. Trust Management, Springer Berlin Heidelberg, pp 108–119; 2004.
 3.
Gudes E, Gal-Oz N, Grubshtein A: Methods for computing trust and reputation while preserving privacy. In Proceedings of the 23rd Annual IFIP WG 11.3 Working Conference on Data and Applications Security. Springer Berlin Heidelberg, pp 291–298; 2009.
 4.
Cramer R, Damgard IB, Buus Nielsen J: Multiparty computation from threshold homomorphic encryption. In EUROCRYPT ‘01: proceedings of the international conference on the theory and application of cryptographic techniques. Springer, Berlin Heidelberg; 2001:280–299.
 5.
Franklin M, Haber S: Joint encryption and message-efficient secure computation. J Cryptology 1996, 9(4):217–232. 10.1007/s001459900013
 6.
Romero-Tris C, Castella-Roca J, Viejo A: Multi-party private web search with untrusted partners. In: Rajarajan M, et al. (eds) Security and Privacy in Communication Networks, Proceedings of SecureComm 2011. Springer Berlin Heidelberg, pp 261–280; 2012.
 7.
Damgard I, Faust S, Hazay C: Secure two-party computation with low communication. In TCC 2012, LNCS 7194. Edited by: Cramer R; 2012:54–74.
 8.
Dolev S, Gilboa N, Kopeetsky M: Computing trust privately in the presence of curious and malicious users. In Proceedings of the international symposium on stochastic models in reliability engineering, life sciences and operations management. Beer-Sheva, Israel: Sami Shamoon College of Engineering; 2010.
 9.
Dolev S, Gilboa N, Kopeetsky M: Computing multiparty trust privately in O(n) time units sending one (possibly large) message at a time. In Proceedings of 25th Symposium On Applied Computing (SAC 2010). Sierre, Switzerland; 2010:1460–1465.
 10.
Asharov G, Jain A, Tromer E, Vaikuntanathan V, Wichs D: Multiparty computation with low communication, computation and interaction via threshold FHE. Proceedings of EUROCRYPT 2012; 2012:483–501.
 11.
Beerliova-Trubiniova Z, Hirt M: Perfectly-secure MPC with linear communication complexity. TCC 2008, LNCS 4948; 2008:213–230.
 12.
Dimitriou T, Michalas A: Multiparty trust computation in decentralized environment. In Proceedings of the 5th IFIP International Conference on New Technologies, Mobility and Security (NTMS 2012). Istanbul, Turkey; 2012.
 13.
Dimitriou T, Michalas A: Multiparty trust computation in decentralized environments in the presence of malicious adversaries. Ad Hoc Netw J, Elsevier; 2013. http://dx.doi.org/10.1016/j.adhoc.2013.04.013
 14.
Bachrach Y, Parnes A, Procaccia AD, Rosenschein JS: Gossip-based aggregation of trust in decentralized reputation systems. Autonomous Agents Multi-Agent Syst 2009, 19(2):153–172. 10.1007/s10458-008-9073-6
 15.
Huynh TD, Jennings NR, Shadbolt NR: An integrated trust and reputation model for open multi-agent systems. In Proceedings of the 16th European Conference on Artificial Intelligence. Spain, pp 18–22; 2004.
 16.
Kinateder M, Rothermel K: Architecture and algorithms for a distributed reputation system. Trust Management. Springer Berlin Heidelberg; 2003.
 17.
Mui L, Mohtashemi M, Halberstadt A: A computational model of trust and reputation. Proceedings of the 35th Annual Hawaii International Conference on System Sciences (HICSS), IEEE, pp 1435–1439; 2002.
 18.
Xiong L, Liu L: PeerTrust: supporting reputation-based trust for peer-to-peer electronic communities. IEEE Trans Knowl Data Eng 2004, 16(7):843–857. 10.1109/TKDE.2004.1318566
 19.
Dolev D, Lynch NA, Pinter SS, Stark EW, Weihl WE: Reaching approximate agreement in the presence of faults. J ACM 1986, 33(3):499–516. 10.1145/5925.5931
 20.
Pedersen TP: Non-interactive and information-theoretic secure verifiable secret sharing. Advances in Cryptology, CRYPTO '91. Springer Berlin Heidelberg, pp 129–140; 1991.
 21.
Shamir A: How to share a secret. Commun ACM 1979, 22(11):612–613.
 22.
Hasan O, Brunie L, Bertino E, Shang N: A decentralized privacy preserving reputation protocol for the malicious adversarial model. IEEE Trans Inf Forensics Secur 2013, 8(6):1–14.
 23.
Dolev S, Ostrovsky R: Xor-trees for efficient anonymous multicast and reception. ACM Trans Inf Syst Secur 2000, 3(2):63–84. 10.1145/354876.354877
 24.
Goldreich O: Foundations of cryptography: volume 1, basic tools. New York: Cambridge University Press; 2000.
 25.
Naccache D, Stern J: A new public key cryptosystem based on higher residues. Proceedings of the 5th ACM Conference on Computer and Communications Security, pp 59–66; 1998.
 26.
Goldwasser S, Micali S: Probabilistic encryption. J Comput Syst Sci 1984, 28(2):270–299.
 27.
Okamoto T, Uchiyama S: A new public-key cryptosystem as secure as factoring. EUROCRYPT 1998; 1998:308–318.
 28.
Benaloh J: Dense probabilistic encryption. Proceedings of the Workshop on Selected Areas of Cryptography. Kingston, pp 120–128; 1994.
 29.
Boneh D, Goh EJ, Nissim K: Evaluating 2-DNF formulas on ciphertexts. Theory of Cryptography (TCC), Springer Berlin Heidelberg, pp 325–341; 2005.
 30.
Gentry C: Fully homomorphic encryption using ideal lattices. In Proceedings of the forty-first annual ACM Symposium on Theory of Computing (STOC). New York, NY, USA: ACM; 2009:169–178.
 31.
Beimel A, Dolev S: Buses for anonymous message delivery. J Cryptology 2003, 16(1):25–39. 10.1007/s00145-002-0128-6
 32.
Goldreich O: Foundations of cryptography: volume 2, basic applications. New York: Cambridge University Press; 2004.
 33.
Pohlig SC, Hellman ME: An improved algorithm for computing logarithms in GF(p) and its cryptographic significance. IEEE Trans Inf Theory 1978, 24(1):106–110.
 34.
El Gamal T: A public key cryptosystem and a signature scheme based on discrete logarithms. In Proceedings of CRYPTO '84 on Advances in Cryptology. New York, NY, USA: Springer-Verlag New York, Inc; 1985:10–18.
 35.
Boneh D: The decision Diffie-Hellman problem. Proceedings of Algorithmic Number Theory, Third International Symposium (ANTS-III); 1998:48–63.
 36.
Benaloh J: Verifiable secret-ballot elections. Ph.D. thesis, Yale University; 1987.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 Generic License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Dolev, S., Gilboa, N. & Kopeetsky, M. Efficient private multiparty computations of trust in the presence of curious and malicious users. J Trust Manag 1, 8 (2014). https://doi.org/10.1186/2196-064X-1-8
Keywords
 Private trust computations
 Multiparty computations
 Anonymity