# Efficient private multi-party computations of trust in the presence of curious and malicious users

- Shlomi Dolev^{1}
- Niv Gilboa^{2}
- Marina Kopeetsky^{3}

**1**:8

**DOI: **10.1186/2196-064X-1-8

© Dolev et al.; licensee Springer. 2014

**Received: **27 May 2013

**Accepted: **22 January 2014

**Published: **9 June 2014

## Abstract

Schemes for multi-party trust computation are presented. The schemes do not make use of a Trusted Authority and operate in a completely distributed manner, where each user calculates its trust value privately and independently. Given a community *C* and its members (users) *U*_{1},…,*U*_{n}, we present computationally secure schemes for trust computation. The first scheme, the Accumulated Protocol (*AP*), computes the average trust attributed to a specific user *U*_{t} following a trust evaluation request initiated by a user *U*_{n}. The exact trust values of each queried user are not disclosed to *U*_{n}. The next scheme, the Weighted Accumulated Protocol (*WAP*), generates the average weighted trust in a specific user *U*_{t}, taking into consideration the unrevealed trust that *U*_{n} has in each user participating in the trust evaluation process. The Public Key Encryption Based Protocol (*PKEBP*) outputs a set of the exact trust values given by the users, without linking the user that contributed a specific trust value to the trust this user contributed. The obtained vector of trust values assists in removing outliers: given the set of trust values, the outliers that provide extremely low or high trust values can be removed from the trust evaluation process. We extend our schemes to the case when the initiator *U*_{n} can be compromised by the adversary, and we introduce the Multiple Private Keys and the Multiple Private Keys Weighted protocols (*MPKP* and *MPWP*) for computing average unweighted and weighted trust, respectively. Moreover, the Commutative Encryption Based Protocol (*CEBP*) extends the *PKEBP* to this case. Each of our algorithms requires the transmission of *O*(*n*) (possibly large) messages.

### Keywords

Private trust computations; Multi-party computations; Anonymity

## Our contribution

The purpose of this paper is to introduce new schemes for decentralized reputation systems. These schemes do not make use of a Trusted Authority to compute the trust in a particular user that is attributed by a community of users. Our objective is to compute trust while preserving user privacy.

We present new efficient schemes for calculating the trust in a specific user by a group of community members upon the request of an initiator. The trust computation is carried out in a completely distributed manner, where each user calculates its trust value privately. User privacy is preserved in a computationally secure manner. The notions of privacy and privately computed trust are understood in the sense that, given the output average trust in a certain user, it is computationally infeasible to reveal the exact trust values in this user given by the community users. We assume a community of users *C*={*U*_{1}, *U*_{2},…,*U*_{n}}. Let *U*_{n} be an initiator. The goal of *U*_{n} is to get an assessment of the trust in a certain user *U*_{t} by a group consisting of the users *U*_{1}, *U*_{2},…,*U*_{n−1} from *C*. The *AP* calculates the average trust (or the sum of trust levels) in the user *U*_{t} (Section ‘Accumulated protocol AP’). The *AP* protocol is based on a computationally secure homomorphic cryptosystem, e.g., the Paillier cryptosystem [1], which provides a homomorphic encryption of the secret trust levels *T*_{1},…,*T*_{n−1} calculated by the users *U*_{1}, *U*_{2},…,*U*_{n−1} from *C*. The *AP* satisfies the features of an Additive Reputation System [2] and does not take into consideration ${U}_{n}^{\prime}s$ subjective trust values in the queried users *U*_{1}, *U*_{2},…,*U*_{n−1}. A decentralized reputation system is defined as additive/non-additive [2] if feedback collection, combination, and propagation are implemented in a decentralized way, and if the combination of feedback provided by agents is calculated in an additive/non-additive manner, respectively. The *WAP* carries out a non-additive trust computation (Section ‘Weighted accumulated protocol WAP’). It outputs the weighted average trust, which is based on the trust of the initiator *U*_{n} in each *C* member participating in the feedback. The *WAP* is an enhanced version of the *AP* protocol. The *AP* and *WAP* protocols cope with a curious adversary and are restricted to the case of an uncompromised initiator *U*_{n}. The *MPKP* and *MPWP* protocols, introduced in Section ‘Multiple Private Keys Protocol *MPKP*’, use additional communication to relax the condition that the initiator *U*_{n} is uncompromised, and provide average unweighted and weighted privately computed trust, respectively.

Compared with the recent results in [2] and [3], our schemes have several advantages.

### The Private Trust scheme is resistant against both curious and semi-malicious users

The *AP* and *WAP* protocols preserve user privacy in a computationally secure manner. Our protocols cope with any number of curious but honest adversarial users. Moreover, the *PKEBP* (Section ‘Protocols for removal of outliers’) is resistant against semi-malicious users that return false trust values. The *PKEBP* supports the removal of outliers. The general case, when the initiator *U*_{n} can be compromised by the adversary, is addressed by the *MPKP*, *MPWP* and *CEBP* protocols (Sections ‘Protocols for removal of outliers’ and ‘Multiple Private Keys Protocol *MPKP*’). Unlike our model, [2] suggests protocols that are resistant only against curious agents who try to collude in order to reveal private trust information. Moreover, the reputation computation in some of the algorithms in [3] contains a random parameter that reveals information about the reputation range of the queried users.

### Low communication overhead

The proposed schemes require the transmission of only *O*(*n*) messages, while the protocols of [2] and [3] require *O*(*n*^{3}) messages.

### No limitations on the number of curious users

The computational security of the proposed schemes does not depend on the number of curious users in the community. Moreover, privacy is preserved regardless of the size of the coalition of curious users. Note that, in contrast, the number of curious users must be no greater than half of the community users in the model presented in [2].

## Background and literature review

The use of homomorphic cryptosystems in general Multiparty Computation (MPC) models is presented in [4]. In [4] it is demonstrated that, given keys for any sufficiently efficient homomorphic cryptosystem, general MPC protocols for *n* players can be devised such that they are secure against an active adversary who corrupts any minority group of the players. The problem stated and solved in [4] is as follows: given the encryptions of two numbers, say *a* and *b* (where each player knows only its input), compute securely an encryption of *c*=*a*·*b*. The correctness of the result is verified. The total number of bits sent is *O*(*n* *k* *C*), where *k* is a security parameter and *C* is the size of a Boolean circuit computing the function that should be securely evaluated. An earlier scheme proposed in [5] with the same complexity was only secure for passive adversaries. Earlier protocols had complexity that was at least quadratic in *n*. Threshold homomorphic encryption is used to achieve the linear communication complexity in [4]. The schemes proposed in [4, 6], and [7] are based on public key infrastructure and use Zero Knowledge Proofs (ZKP) as building blocks. When compared to [4, 6], and [7], our schemes privately compute the average unweighted (additive) and weighted (non-additive) characteristics, respectively, without using relatively hard-to-implement techniques such as ZKP.

Independently (though slightly later than [8, 9]), linear-communication MPC was presented in [10]. A perfectly secure MPC protocol with linear communication complexity was proposed in [11]. Our model presented herein (with a semi-honest but curious adversary) copes with at most $\frac{n}{2}-1$ compromised users that supply arbitrary trust values, while [11] copes (in an information-theoretic manner) with up to $\frac{n}{3}$ compromised users (even totally malicious users) with a similar communication overhead, *O*(*n*^{3}·*l* *n*^{3}). Here, *l* denotes the total message length.

Following [8, 9], privacy preserving protocols were investigated in [12] and [13]. Protocols for efficient multi-party sum computation (in the semi-honest adversarial model) are proposed in [12] and [13]. The derived protocols are augmented (by applying Zero Knowledge Proofs of plaintext equality and set membership) to handle a malicious adversary. The simulation results demonstrate the efficiency of the designed methods. Compared with our results, the most powerful and efficient *StR* protocol of [12] and [13] is based on a completely connected network topology where each network user is directly connected to all other users. In addition, the schemes of [12] and [13] can be applied in Additive Reputation Systems, while our schemes are designed also for Non-Additive Reputation Systems.

Homomorphic ElGamal encryption is used in [6] as part of a scheme for multi-party private web search with untrusted partners (users). The scheme is based on multi-party computation that protects the privacy of the users with regard to the web search engine and any number of dishonest internal users. The number of sent messages is linear in the number of users (each of the *n* users sends 4*n*−4 messages). In order to obtain a secure permutation (of *N* elements), switches of the Optimized Arbitrary Size Benes network (OAS-Benes) are distributed among a group of *n* users, and the honest users control at least a large fraction *S*(*N*) of the switches of the OAS-Benes. The proposed MPC protocol is based on homomorphic threshold *n*-out-of-*n* ElGamal encryption. Nevertheless, unlike our model, the MPC protocol is based on the computationally expensive honest-verifier ZKP protocol and the Benes permutation network.

An efficient scheme for secure two-party computation in “asymmetric settings”, in which one of the devices (smart card, mobile device, etc.) is strictly computationally weaker than the other, is introduced in [7]. The presented scheme minimizes the workload of the weaker party. The proposed protocol has one-round complexity (i.e., a single message is sent in each direction, assuming trusted setup). The protocol performs only two-party secure computations, while the number of participants is not bounded in our schemes. Moreover, computationally expensive Non-Interactive Zero-Knowledge Proof techniques and “extractable hash functions” are used in the scheme of [7].

A number of systematic approaches and corresponding architectures for creating reliable trust and reputation systems have been recently proposed in [14–18]. The main scope of these papers is the definition of a variety of settings for decentralized trust and reputation systems. A probabilistic approach for constructing computational models of trust and reputation is presented in [17], where trust and reputation are studied in the scope of various social and scientific disciplines.

The computation models for the reputation systems of [16] support user anonymity by generating a pseudonym for each user, thereby concealing user identity. In contrast to [16], the main challenge of our approach is to preserve user anonymity in the trust computation process itself.

One of the common problems stated and discussed in [18] is that most existing reputation systems lack the ability to differentiate dishonest from honest feedback and, therefore, are vulnerable to malicious collusions of users (peers in P2P systems) who provide dishonest feedback. The dishonest feedback is effectively filtered out in [18] by introducing a factor of feedback similarity between a user (peer) in the collusive group and a user (peer) outside the group. We propose a different approach for the removal of dishonest users (outliers) by estimating the range of the correct trust values [19].

Two other works that are related to our scheme appear in [2] and [3]. In [2] several privacy and anonymity preserving protocols are suggested for an Additive Reputation System.

The authors state that supporting perfect privacy in decentralized reputation systems is impossible. Nevertheless, they present alternative probabilistic schemes for preserving privacy. A probabilistic “witness selection” method is proposed in [2] in order to reduce the risk of selecting dishonest witnesses. Two schemes are proposed. The first scheme is very efficient in terms of communication overhead, but it is vulnerable to a collusion of even two witnesses. The second scheme is more resistant to curious users, but is still vulnerable to collusion. It is based on a secret-splitting scheme and provides a secure protocol based on the verifiable secret sharing scheme [20] derived from Shamir’s secret sharing scheme [21]. The number of dishonest users is heavily restricted and must be no more than $\frac{n}{2}$, where *n* is the number of contributing users. The communication overhead of this scheme is rather high, requiring *O*(*n*^{3}) messages.
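
As background for the secret-splitting approach mentioned above, the following is a minimal sketch of *k*-out-of-*n* Shamir secret sharing over a prime field; the prime and function names are illustrative, and no verifiability layer (as in [20]) is included:

```python
import random

PRIME = 2_147_483_647  # Mersenne prime 2^31 - 1; the field must exceed the secret

def share(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):  # evaluate the degree-(k-1) polynomial at x, Horner's rule
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

With *k*=3 and *n*=5, any three of the five shares recover the secret, while two shares reveal nothing about it in the information-theoretic sense.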

An enhanced model for reputation computation that extends the results of [2] is introduced in [3]. The main enhancement over [2] is that non-additive (weighted) trust and reputation can be computed privately. Three algorithms for computing non-additive reputation are proposed in [3]. The algorithms have various degrees of privacy and different levels of protection against adversarial users. These schemes are computationally secure regardless of the number of dishonest users.

The paper [22] (published later than [8, 9]) proposes the distributed *Malicious-k-shares* protocol, which extends the results of [2] and [3] in the sense that a high majority of users (agents) can find *k*, *k*<<*n*, sufficiently trustworthy agents in a set of *n*−1 feedback-providing users. This protocol is based on homomorphic encryption and Non-Interactive Zero-Knowledge proofs. The *Malicious-k-shares* protocol is applicable in Additive Reputation Systems only, while our schemes also privately compute the weighted trust. The technique used in [22] for the removal of outliers is based on Non-Interactive Zero-Knowledge proofs of set membership and plaintext equality, where the proof ensures that a certain share lies in the correct interval. The proposed protocol requires the exchange of *O*(*n*+ log *N*) messages (where *n* and *N* are the number of users in the protocol and in the environment, respectively), while we use more computationally efficient techniques for the removal of outliers, exchanging only *O*(*n*) messages.

We propose new efficient trust computation schemes that can replace any of the above schemes. Our schemes enable the initiator to compute unweighted (additive) and weighted (non additive) trust with low communication complexity of *O*(*n*) (large) messages.

**Summary of the Presented Approaches**

| Protocol | Computation | Adversarial model | Crypto building blocks |
|---|---|---|---|
| *AP* | Average trust | Honest but curious | Homomorphic (Paillier) |
| *WAP* | Weighted average trust | Honest but curious | Homomorphic (Paillier) |
| *PKEBP* | Vector of exact trust; removal of outliers | Semi-malicious (restricted) | Any public key encryption |
| *CEBP* | Vector of exact trust; removal of outliers | Semi-malicious (non-restricted) | Commutative (Pohlig-Hellman), ElGamal |

This paper extends the schemes of [9] by introducing the *MPKP* and *MPWP* protocols that compute average unweighted and weighted trust in the general case, even when the initiator *U*_{n} can be compromised by the adversary. The proofs of correctness of the proposed protocols extend the presentations of [9] and [8].

### Paper organization

The formal system description appears in Section ‘Research design and methodology’. The private trust protocol *AP*, which is computationally resistant against a curious but honest adversary, is introduced in Section ‘Results and discussion’ (Subsection ‘Accumulated protocol AP’). The enhanced version of *AP*, the *WAP*, is presented in Section ‘Results and discussion’ (Subsection ‘Weighted accumulated protocol WAP’). The *PKEBP* and *CEBP*, which are resistant against semi-malicious users, and the scheme for removing outliers are presented in Section ‘Results and discussion’ (Subsection ‘Protocols for removal of outliers’). The generalized *MPKP* protocol and the weighted *MPWP* protocol are introduced in Section ‘Results and discussion’ (Subsection ‘Multiple Private Keys Protocol *MPKP*’). Conclusions appear in Section ‘Conclusions’.

## Research design and methodology

The purpose of this paper is to generate new schemes for private trust computation within a community. The contribution of our work is as follows: (a) The trust computation is performed in a completely distributed manner, without involving a Trusted Authority. (b) The trust in a particular user within the community is computed privately. The privacy of the trust values held by the community users is preserved subject to standard cryptographic assumptions, when the adversary is computationally bounded. (c) The proposed protocols are resistant to a curious but honest poly-bounded *k*-listening adversary *Ad* [23]. Such an adversary may trace all the network links in the system and may compromise up to *k* users, *k*<*n*. We require that an adversary compromising an intermediate node can learn only that node’s trust values, and that an adversary compromising the initiator *U*_{n} can learn only the output of the protocol, namely the average trust. We distinguish between two categories of adversaries: honest but curious adversaries, and semi-malicious adversaries [2]. An honest but curious *k*-listening adversary follows the protocol by providing correct input. Nevertheless, it might try to learn trust values in different ways, including collusion with at most *k* compromised users. While an honest but curious adversary does not try to modify the correct output of the protocol, a semi-malicious adversary may provide dishonest input in order to bias the average trust value.

Let *C*={*U*_{1},…,*U*_{n}} be a community of users such that each pair of users is connected via an authenticated channel. Assume that the purpose of a user *U*_{n} from *C* is to get the unweighted ${T}_{t}^{\mathit{\text{avr}}}$ or weighted ${\mathit{\text{wT}}}_{t}^{\mathit{\text{avr}}}$ average trust in a specific user *U*_{t}, evaluated by the community of users. Denote by *T*_{i}, *i*=1,…,*n*−1, the trust of user *U*_{i} in *U*_{t}, and by ${T}_{t}^{\mathit{\text{avr}}}=\frac{\sum _{i=1}^{n-1}{T}_{i}}{n-1}$ and ${\mathit{\text{wT}}}_{t}^{\mathit{\text{avr}}}=\frac{1}{10}\sum _{i=1}^{n-1}{w}_{i}{T}_{i}$ the unweighted and weighted average trust in *U*_{t}, respectively. Here *w*_{i}∈{1,2,…,10} is the subjective trust of the initiator *U*_{n} in *U*_{i}, in the form of an integer that facilitates our secure computation. In the subsequent work we always assume that *w*_{i} is an integer in this range. Denote by *M*_{t} the message sent by *U*_{n} to the first member of the community *C*.

Our definitions of computational indistinguishability, simulation and private computation follow the definitions of [24]. Informally speaking, two probability ensembles are *computationally indistinguishable* if no polynomial time, probabilistic algorithm can decide with non-negligible advantage whether a given input is drawn from the first or the second ensemble. A distributed protocol computes a function *f* *privately* if an adversary cannot obtain any information on the input and output of other parties, beyond what is implicit in the adversary’s own input and output. The way to prove that a protocol is private is to show that there exists a polynomial time, probabilistic *simulator* that receives as input the same input and output as an adversary and generates a string that is computationally indistinguishable from the whole view of the adversary, including every message that the adversary received in the protocol. Intuitively, the existence of a simulator implies that the adversary learns nothing from the execution of the protocol except its input and output.

## Methods

The main tool we use in our schemes is public-key, homomorphic encryption. In such an encryption scheme there is a modulus *M* and an efficiently computable function *ϕ* that maps a pair of encrypted values (*E*_{K}(*x*),*E*_{K}(*y*)), where 0≤*x*,*y*<*M*, to a single encrypted element *ϕ*(*E*_{K}(*x*),*E*_{K}(*y*))=*E*_{K}(*x*+*y* mod *M*). In many homomorphic encryption systems the function *ϕ* is multiplication modulo some integer *N*. Given a natural number *c* and an encryption *E*_{K}(*x*), it is possible to compute *E*_{K}(*c*·*x* mod *M*) without knowing the private key. Set *β*=*E*_{K}(0) and let the binary representation of *c* be *c*=*c*_{k}*c*_{k−1}…*c*_{0}. Go over the bits *c*_{k},…,*c*_{0} in descending order. If *c*_{j}=0, set *β*=*ϕ*(*β*,*β*), and if *c*_{j}=1, set *β*=*ϕ*(*ϕ*(*β*,*β*),*E*_{K}(*x*)). If *ϕ* is modular multiplication, this algorithm is identical to standard modular exponentiation.

There are quite a few examples of homomorphic encryption schemes known in the cryptographic literature, including [1, 25–28]. There are also systems that allow both addition and multiplication of two encrypted plaintexts, e.g., [29] where only a single multiplication is possible for a pair of ciphertexts, and [30]. All of these examples of homomorphic cryptosystems are currently assumed to be semantically secure [26].

## Results and discussion

### Accumulated protocol AP

The *AP* protocol may be based on any homomorphic encryption scheme whose modulus *N* satisfies $N>\sum _{i=1}^{n-1}{T}_{i}$. We illustrate the protocol by using the semantically secure Paillier cryptosystem [1]. This cryptosystem possesses a homomorphic property and is based on the Decisional Composite Residuosity assumption. Let *p* and *q* be large prime numbers, and *N*=*p* *q*. Let *g* be some element of ${Z}_{{N}^{2}}^{\ast}$. Note that the base *g* should be chosen properly, by checking whether gcd(*L*(*g*^{λ} mod *N*^{2}),*N*)=1, where *λ*= lcm(*p*−1,*q*−1) and the *L* function is defined as $L\left(u\right)=\frac{u-1}{N}$. The public key is the pair (*N*,*g*), while the pair (*p*,*q*) is the secret private key. The ciphertext *c* for a plaintext message *m*<*N* is generated by the sender as *c*=*g*^{m}*r*^{N} mod *N*^{2}, where *r*<*N* is a randomly chosen number. The decryption is performed as $m=\frac{L({c}^{\lambda}\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}{N}^{2})}{L({g}^{\lambda}\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}{N}^{2})}\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}N$ at the destination. Our schemes are based on the homomorphic property of the Paillier cryptosystem. Namely, the product of the encryptions of two plaintexts *m*_{1} and *m*_{2} decrypts to the sum *m*_{1}+*m*_{2} mod *N* of the plaintexts. Thus, *E*(*m*_{1})·*E*(*m*_{2})≡*E*(*m*_{1}+*m*_{2} mod *N*) mod *N*^{2} and $E{\left({m}_{1}\right)}^{{m}_{2}}\equiv E({m}_{1}\xb7{m}_{2}\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}N)\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}{N}^{2}$. The *AP* protocol is described in Algorithm 1.

Assume that the initiator *U*_{n} has generated a pair of public and private keys as described above, and has shared its public key with each community user. Then, *U*_{n} initializes the single-entry trust message *M*_{t} to 1 and sends it to the first user *U*_{1} (lines 1–3). Upon receiving the message *M*_{t}, each node *U*_{i} encrypts its trust in *U*_{t} as $E\left({T}_{i}\right)={g}^{{T}_{i}}{r}_{i}^{N}\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}\phantom{\rule{1em}{0ex}}{N}^{2}$. Here, *T*_{i} is the secret trust level of *U*_{i} in *U*_{t} and *r*_{i} is a randomly generated number. The output of *U*_{i} is accumulated in the variable *A* by multiplying its current value by the new encrypted trust *E*(*T*_{i}), i.e., *A*=*A*·*E*(*T*_{i}). Then *U*_{i} sends the updated *M*_{t} message to the next user, *U*_{i+1}. This procedure is repeated until all trust values are accumulated in *A* (lines 4–9). The final *M*_{t} message received by the initiator *U*_{n} is ${M}_{t}=A=\prod _{i=1}^{n-1}E\left({T}_{i}\right)\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}{N}^{2}$. As a result, *U*_{n} decrypts the value accumulated in *M*_{t} as the sum of trust values ${S}_{t}=D\left({M}_{t}\right)=\sum _{i=1}^{n-1}{T}_{i}$. Thus, the average trust is ${T}_{t}^{\mathit{\text{avr}}}=\frac{{S}_{t}}{n-1}$ (Algorithm 1, lines 10–12). Proposition 1 proves that *AP* is a computationally private protocol to compute the trust of a community in *U*_{t}.

#### **Proposition 1**

Assume that an honest but curious adversary corrupts at most *k* users out of a community of *n* users, *k*<*n*. Then, *AP* privately computes ${T}_{t}^{\mathit{\text{avr}}}$, the average trust in user *U*_{t}.

#### *Proof*

In order to prove the proposition, we have to show that for every adversary there exists a simulator that, given only the adversary’s input and output, generates a string that is computationally indistinguishable from the adversary’s view in *AP*. Let $I=\{{U}_{{i}_{1}},{U}_{{i}_{2}},\dots ,{U}_{{i}_{k}}\}$ denote the set of users that the adversary controls. Let ${\mathit{\text{view}}}_{I}^{\mathit{\text{AP}}}({X}_{I},{1}^{n})$ denote the combined view of all users in *I*. ${\mathit{\text{view}}}_{I}^{\mathit{\text{AP}}}$ includes the input ${X}_{I}=\{{T}_{{i}_{1}},\dots ,{T}_{{i}_{k}}\}$ of all users in *I*, and the sequence of messages $E\left(\sum _{j=1}^{{i}_{1}}{T}_{j}\right),\dots ,E\left(\sum _{j=1}^{{i}_{k}}{T}_{j}\right)$ received by users in *I*. A simulator cannot generate this exact sequence, since it does not have the input of the uncorrupted users. Instead, the simulator chooses a random value *α*_{j} for every user *U*_{j}∉*I* from the distribution of trust values, *D*. The simulator sets ${\alpha}_{{i}_{1}}={T}_{{i}_{1}},\dots ,{\alpha}_{{i}_{k}}={T}_{{i}_{k}}$ and computes *E*(*α*_{j}) for *j*=1,…,*n*−1. The simulator now computes: $\prod _{j=1}^{{i}_{1}}E\left({\alpha}_{j}\right)\equiv E\left(\sum _{j=1}^{{i}_{1}}{\alpha}_{j}\right)\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}{N}^{2},\dots ,\prod _{j=1}^{{i}_{k}}E\left({\alpha}_{j}\right)\equiv E\left(\sum _{j=1}^{{i}_{k}}{\alpha}_{j}\right)\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}{N}^{2}$. Hence, the simulator replaces each $E\left(\sum _{j=1}^{{i}_{\ell}}{T}_{j}\right)$ by $E\left(\sum _{j=1}^{{i}_{\ell}}{\alpha}_{j}\right)$.

Assume, by way of contradiction, that there exists a polynomial-time algorithm *DIS* that distinguishes between the encryptions of partial sums $E\left(\sum _{j=1}^{{i}_{1}}{T}_{j}\right),\dots ,E\left(\sum _{j=1}^{{i}_{k}}{T}_{j}\right)$ of the correct trust values and the values $E\left(\sum _{j=1}^{{i}_{1}}{\alpha}_{j}\right),\dots ,E\left(\sum _{j=1}^{{i}_{k}}{\alpha}_{j}\right)$ produced by the simulator. We construct an algorithm *B* that distinguishes between the two sequences *E*(*T*_{1}),…,*E*(*T*_{n−1}) and *E*(*α*_{1}),…,*E*(*α*_{n−1}), contradicting the semantic security of *E*. The input to algorithm *B* is a sequence of values *E*(*x*_{1}),…,*E*(*x*_{n−1}), and *B* attempts to determine whether the values *x*_{1},…,*x*_{n−1} are equal to the values *T*_{1},…,*T*_{n−1} that the users provide, or are a sequence of random values chosen from the distribution *D*. The algorithm *B* computes, for every *ℓ*=1,…,*k*, the partial product $\prod _{j=1}^{{i}_{\ell}}E\left({x}_{j}\right)$ and provides the encryptions of partial sums $E\left(\sum _{j=1}^{{i}_{1}}{x}_{j}\right),\dots ,E\left(\sum _{j=1}^{{i}_{k}}{x}_{j}\right)$ as input to *DIS*. *B* returns as output the same output as *DIS*. Since the input of *DIS* is $E\left(\sum _{j=1}^{{i}_{1}}{T}_{j}\right),\dots ,E\left(\sum _{j=1}^{{i}_{k}}{T}_{j}\right)$ if and only if the input of *B* is *E*(*T*_{1}),…,*E*(*T*_{n−1}), we find that *B* distinguishes between its two possible input distributions with the same probability that *DIS* distinguishes between its input distributions. □

*AP* uses *O*(*n*) messages each of length *O*(*n*).

### Weighted accumulated protocol WAP

The *WAP* protocol, in addition to the *AP* protocol, generates the weighted average trust in a specific user *U*_{t} by the users in the community. The *WAP* protocol is based on an anonymous communication protocol proposed in [31] and on a homomorphic cryptosystem, e.g., the Paillier cryptosystem [1]. It is described in Algorithm 2.

The initiator *U*_{n} generates *n*−1 weights *w*_{1},…,*w*_{n−1}. Each *w*_{i} value reflects the subjective trust level of *U*_{n} in user *U*_{i}. *U*_{n} initializes the accumulated variable *A* to 1, encrypts each *w*_{i} value by means of, e.g., the Paillier cryptosystem [1] as $E\left({w}_{i}\right)={g}^{{w}_{i}}{h}^{{r}_{n,i}}\left(\mathit{\text{mod}}\phantom{\rule{1em}{0ex}}{N}^{2}\right)$, composes a Trust Vector *T* *V*=[*E*(*w*_{1}),…,*E*(*w*_{n−1})] and sends the message *M*_{t}=(*T* *V*,*A*) to *U*_{1}. Here, as in the *AP* case, *p* and *q* are large prime numbers that compose the Paillier cryptosystem, *N*=*p* *q*, and *g* and *h* are properly chosen parameters of the Paillier cryptosystem. *r*_{n,i} is a random degree of *h* chosen by *U*_{n} for each *U*_{i} from *C*. Note that the *AP* protocol is a special case of the *WAP* protocol in which all weights *w*_{i} are equal to 1.

As in the *AP* case, the *M*_{t} message is received by the community users in the prescribed order. Each user *U*_{i} encrypts its weighted trust in *U*_{t} as $E\left({T}_{i}\right)=E{\left({w}_{i}\right)}^{{T}_{i}}E\left(\overline{0}\right)$ and accumulates it in the accumulated variable *A* (lines 6–10). Note that multiplying by a random encryption of zero, $E\left(\overline{0}\right)$, ensures the semantic security of the *WAP* protocol, since the user’s output cannot be distinguished from a simulated random string. As a result, the initiator *U*_{n} receives the *M*_{t} message and decrypts the value accumulated in *A* as the weighted sum of trust values ${S}_{t}=D\left(A\right)=\sum _{i=1}^{n-1}{w}_{i}{T}_{i}$. Therefore, the weighted average trust is equal to ${\mathit{\text{wT}}}_{t}^{\mathit{\text{avr}}}=\frac{1}{10}\sum _{i=1}^{n-1}{w}_{i}{T}_{i}$. Proposition 2 proves that *WAP* computes the weighted average trust ${\mathit{\text{wT}}}_{t}^{\mathit{\text{avr}}}$ in the user *U*_{t} in a computationally private manner.

#### **Proposition 2**

Assume that an honest but curious adversary corrupts at most *k* users out of a community of *n* users, *k*<*n*. Then, *WAP* privately computes ${\mathit{\text{wT}}}_{t}^{\mathit{\text{avr}}}$, the average weighted trust in user *U*_{t}.

#### *Proof*

The proof is similar to the proof of Proposition 1. The view of the adversary includes the input ${T}_{{i}_{1}},\dots ,{T}_{{i}_{k}}$ of the compromised users, the trust vector *TV*, and the accumulated variable *A*. Each compromised user ${U}_{{i}_{j}}$ from *I* receives $\mathit{\text{TV}}=\left[E({w}_{{i}_{j}}),E({w}_{{i}_{j}+1}),\dots ,E({w}_{n-1})\right]$ and $A=\prod _{i=1}^{{i}_{j}}E{\left({w}_{i}\right)}^{{T}_{i}}E\left(\overline{0}\right)$.

A simulator for the adversary simulates ${\mathit{\text{view}}}_{I}^{\mathit{\text{WAP}}}$ as follows. The simulator input ${T}_{{i}_{1}},\dots ,{T}_{{i}_{k}}$ is the same as the input of the compromised users. The simulator chooses at random *v*_{1},…,*v*_{n} according to a distribution, *W*, of weights, and $\stackrel{~}{{T}_{1}},\dots ,\stackrel{~}{{T}_{n}}$ according to a distribution, *D*, of trust values. Here $\stackrel{~}{{T}_{{i}_{1}}}={T}_{{i}_{1}},\dots ,\stackrel{~}{{T}_{{i}_{k}}}={T}_{{i}_{k}}$. Due to the semantic security of the homomorphic cryptosystem, the encrypted random values *E*(*v*_{1}),…,*E*(*v*_{n}) are indistinguishable from the encrypted correct weights $E\left({w}_{{i}_{1}}\right),\dots ,E\left({w}_{{i}_{n}}\right)$.

The randomization of the output of any user *U*_{i} is performed by multiplying its secret contribution $E{\left({w}_{i}\right)}^{{T}_{i}}$ by a random encryption of a zero string $E\left(\overline{0}\right)$. Without this randomization, given *E*(*w*), the two values *E*(*w*)^{T} and *E*(*u*), where *u* is chosen at random from the distribution of *wT*, could be distinguished, since *T* is chosen from a small domain of trust values. With it, given *E*(*w*), the value $E{\left(w\right)}^{T}E\left(\overline{0}\right)$ is distributed identically to a fresh encryption *E*(*wT* mod *N*). Based on the semantic security of the homomorphic cryptosystem, *E*(*u*) and *E*(*wT*) cannot be distinguished even given *E*(*w*). □

*WAP* uses *O*(*n*) messages each of length *O*(*n*).

### Protocols for removal of outliers

The protocols for outlier removal are introduced in this section. The Public Key Encryption Based Protocol *PKEBP* produces a vector of the exact trust values. As a result, the initiator, *U*_{n}, can evaluate the correct trust range by removing the outliers that provide extremely high or low trust feedback. *PKEBP* preserves user privacy in the case where the adversary cannot corrupt the initiator and several users at the same time.

The generalized Commutative Encryption Based Protocol (*CEBP*) relaxes this limitation and privately computes the exact trust values contributed by each community user, even in the case when an adversary can corrupt the initiator and several users at the same time.

#### Public Key Encryption Based Protocol PKEBP

Denote the encryption algorithm used in this scheme by *E* and the decryption algorithm by *D*. *U*_{n} generates a pair (*k*,*s*) of public-private keys. Then *U*_{n} publishes its public encryption key *k*, while the private decryption key *s* is kept secret.

*PKEBP* is performed in two rounds (Algorithm 3, Figure 1). At the initialization stage *U*_{n} initializes the (*n*−1)-entry vector *TV*[ 1.. *n*−1] and sends it to the community of users in the prescribed order in the *M*_{t}=(*TV*[ 1.. *n*−1]) message (Algorithm 3, lines 1–2 and Figure 1, Round 1).

In the first round, on reception of *M*_{t}, each user, *U*_{i}, encrypts its trust, *T*_{i}, under *k* in the corresponding entry *TV*[ *i*] as *E*(*T*_{i}), and sends the updated message *M*_{t} to the next user (Algorithm 3, lines 3–7).

The second round of the *PKEBP* protocol is performed when the updated *TV*[ 1.. *n*−1] vector returns from *U*_{n−1} to the user *U*_{1} (see Algorithm 3, lines 8–11 and Figure 1, Round 2). Note that the *TV* vector does not visit the initiator *U*_{n} after execution of the first round. During the second round, each user, *U*_{i}, swaps its *i*-th entry with a randomly chosen *j*-th entry. After that, the newly updated *M*_{t} vector-message is sent to the next user *U*_{i+1} (Algorithm 3, lines 8–11).

The result of round 1 is a sequence of encrypted elements (*E*(*T*_{1}),…,*E*(*T*_{n−1})) while the result of round 2 is a sequence $\mathit{\text{TV}}[\phantom{\rule{0.3em}{0ex}}1\phantom{\rule{1em}{0ex}}\mathrm{..}\phantom{\rule{1em}{0ex}}n-1]=\left(E\right({T}_{1}^{\ast}),\dots ,E({T}_{n-1}^{\ast}\left)\right)$. The multi-set *T*_{1}, …,*T*_{n−1} is identical to the multi-set ${T}_{1}^{\ast},\phantom{\rule{1em}{0ex}}\dots ,{T}_{n-1}^{\ast}$. The sequence *T*_{1},…,*T*_{n−1} is permuted to ${T}_{1}^{\ast},\dots ,{T}_{n-1}^{\ast}$ by a permutation *π*, which is computed in a distributed manner by all community members (Algorithm 3, line 10). Thus, by applying the decryption procedure, all encrypted trust values *T*_{1},…,*T*_{n−1} are revealed (Algorithm 3, lines 12–13). Moreover, the random permutation *π* performed at the second round preserves the unlinkability of user identities.
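The distributed shuffle of round 2 can be modeled abstractly, with strings standing in for the ciphertexts *E*(*T*_{i}); each user's swap is one step of the jointly computed permutation *π* (a minimal sketch of the permutation bookkeeping only, not of the encryption):

```python
import random

# Round-2 style distributed shuffle: each user U_{i+1}, in turn, swaps its
# own entry with a randomly chosen partner entry of the trust vector.
def distributed_shuffle(tv, rng=random):
    tv = list(tv)
    n = len(tv)
    for i in range(n):
        j = rng.randrange(n)       # randomly chosen partner entry
        tv[i], tv[j] = tv[j], tv[i]
    return tv

values = ["E(T1)", "E(T2)", "E(T3)", "E(T4)"]
shuffled = distributed_shuffle(values)
assert sorted(shuffled) == sorted(values)   # same multi-set, order hidden
```

The multi-set of entries is preserved while no single user knows the composed permutation, which is the unlinkability property the protocol relies on.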

Proposition 3 proves the privacy of the *PKEBP* protocol.

##### **Proposition 3**

*PKEBP* performs computationally secure computation of exact private trust values assuming that an adversary cannot corrupt the initiator and several users at the same time.

##### *Proof sketch*

**Case 1:** *U*_{n}∉*I*. We argue that *PKEBP* is private by showing that an adversary that controls a set of compromised users does not learn any information on the trust values of other users. We achieve this by exhibiting a *simulator* that, given the input of the compromised users, can simulate the messages that these users receive as part of the protocol. Therefore, protocol messages do not give users in *I* any information on users outside of *I*. Assume that the set *I* of compromised users includes *k* members, $I=\left\{{U}_{{i}_{1}},\dots ,{U}_{{i}_{k}}\right\}$, while the uncompromised users are ${U}_{{i}_{k+1}},\dots ,{U}_{{i}_{n}}$. The view of users in *I* includes the input of the compromised users ${T}_{{i}_{1}},\dots ,{T}_{{i}_{k}}$ and the trust vector *TV*. Each compromised user ${U}_{{i}_{j}}$ from *I* receives the *TV* vector with partially permuted entries.

A simulator for the adversary simulates this view as follows. The simulator input is the same as the input of the compromised users: it contains the trust values ${T}_{{i}_{1}},\dots ,{T}_{{i}_{k}}$ and the set of their permuted indexes ${i}_{{j}_{1}},\dots ,{i}_{{j}_{k}}$. The simulator sets *α*_{ℓ}=*T*_{ℓ} for every compromised user and chooses a random value *α*_{ℓ} from the distribution *D* of trust values for any user *U*_{ℓ}∉*I*; it then computes *E*(*α*_{ℓ}) for *ℓ*=*k*+1,…,*n*. Due to the semantic security of the homomorphic cryptosystem [24, 32], the adversary cannot distinguish between the encryption of the correct trust values and the encryption of the simulated random values, *E*(*α*_{ℓ}), of the uncompromised users, chosen from the distribution, *D*, of trust values.

**Case 2:** {*U*_{n}}=*I*. In this case, the view of *U*_{n} consists of the *TV* vector with randomly permuted entries. *TV* includes the sequence of the randomly permuted exact trust values, decrypted by the secret key, *s*. We prove the privacy of *PKEBP* by showing a simulator that, given a *PKEBP* output sequence ${T}_{{i}_{1}},\dots ,{T}_{{i}_{n-1}}$ of the exact trust values, can simulate the *TV* as *U*_{n} receives it as a part of the protocol. A simulator for the compromised *U*_{n} simulates this view as follows. The simulator input is the multi-set *T*_{1},…,*T*_{n−1} of the exact trust values that have been decrypted by the private key, *s*, of *U*_{n}. The simulator chooses a random permutation and permutes the received values. Due to the random permutation, *π*, performed by each community user, *U*_{n} cannot distinguish between the simulated sequence ${T}_{{j}_{1}},\dots ,{T}_{{j}_{n-1}}$ and the correct output of the *PKEBP*.

As a result, given a multi-set of the exact trust values, *U*_{n} cannot link these values to the users that contributed them. □

*PKEBP* uses *O*(*n*) messages each of length *O*(*n*).

Generating the average trust level in the presence of semi-malicious users is based on the algorithm suggested in [19]. Let us denote by *U* the multi-set of non-corrupted users which provide correct feedback, and by *V* the multi-set of all users participating in the trust computation process. According to [19], the following requirement must be satisfied in our model: |*V*−*U*|≤*J* and |*V*|≥2*J* for a certain value *J*. Then the range of the correct trust values, *range*(*U*), contains the subset *reduce*^{J}(*V*) of *V*. Here *reduce*^{J}(*V*) is obtained from the multi-set *V* of all (correct and extremely low/high) trust values by deleting the *J* smallest and the *J* largest values.
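The *reduce*^{J} operation is a simple trim of the sorted multi-set. A sketch follows; the |*V*|≥2*J*+1 guard is our assumption, added only to keep the result non-empty:

```python
def reduce_J(values, J):
    """Drop the J smallest and the J largest trust values (outlier removal)."""
    if len(values) < 2 * J + 1:
        raise ValueError("need |V| >= 2J + 1 for a non-empty result")
    s = sorted(values)
    return s[J:len(s) - J]

# Two extreme-low and two extreme-high values are stripped.
V = [1, 9, 5, 5, 6, 4, 10]
print(reduce_J(V, 2))   # -> [5, 5, 6]
```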

If an adversary can corrupt the initiator and several users at the same time, a different protocol is required. The generalized Commutative Encryption Based Protocol *CEBP* is presented in the next subsection.

#### Commutative Encryption Based Protocol CEBP

The *CEBP* we propose uses *commutative encryption* as a building block. An encryption scheme is commutative if a ciphertext that is encrypted under several keys can be decrypted regardless of the order of the decryption keys. Formally, denote the encryption algorithm by *E* and the decryption algorithm by *D*. The encryption scheme is commutative if for every plaintext message *m* and every two keys *k*_{1},*k*_{2}, if $c={E}_{{k}_{1}}\left({E}_{{k}_{2}}\right(m\left)\right)$ then $m={D}_{{k}_{1}}\left({D}_{{k}_{2}}\right(c\left)\right)$ (note that for any encryption scheme $m={D}_{{k}_{2}}\left({D}_{{k}_{1}}\right(c\left)\right)$). One possible candidate for a commutative encryption scheme is the Pohlig-Hellman scheme [33].
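The commutativity property can be checked on a toy Pohlig-Hellman instance. The prime *p*=10007 and the exponents 7 and 11 are illustration values; the only requirement is that each exponent be invertible mod *p*−1:

```python
# Toy Pohlig-Hellman over Z_p^*: E_a(m) = m^a mod p, D_b(c) = c^b mod p,
# with a*b = 1 (mod p-1). Decryption works in either key order.
p = 10007                         # small prime, illustration only

def keypair(a):
    return a, pow(a, -1, p - 1)   # (encryption exponent, decryption exponent)

a1, b1 = keypair(7)
a2, b2 = keypair(11)
m = 4242
c = pow(pow(m, a2, p), a1, p)           # c = E_{k1}(E_{k2}(m))
assert pow(pow(c, b1, p), b2, p) == m   # strip k1 first, then k2
assert pow(pow(c, b2, p), b1, p) == m   # strip k2 first, then k1
```

Because encryption and decryption are just exponentiations in a commutative group, key layers can be peeled off in any order, which is exactly what *CEBP*'s third round exploits.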

The idea of the *CEBP* is for each user to encrypt all the trust values and then to decrypt and permute them at the same time, so that an adversary cannot associate decrypted trust values with the users that published their encryption. The *CEBP* protocol is executed in three rounds (Algorithm 4). Each round passes sequentially from the initiator *U*_{n} to the last user *U*_{n−1}.

The first round begins with the initiator, *U*_{n}, choosing and publishing a public key. Every other user selects a symmetric key for a commutative encryption scheme. All the users encrypt their trust values both with their keys and with the public key of *U*_{n}. Encryption with the initiator's public key prevents an adversary that does not control the initiator, *U*_{n}, from obtaining the multi-set of trust values. After the first round, for every *i*=1,…,*n*−1, the *i*-th entry in the trust vector, *TV*, includes the trust value of *U*_{i} encrypted by both the public key of *U*_{n} and the symmetric key of *U*_{i}.

In the second round each user encrypts all entries of *TV* in such a way that at the end of the second round the *i*-th entry is the trust value of *U*_{i} encrypted by the keys of *U*_{1},*U*_{2},…,*U*_{n}. Finally, in the third round, for every *i*=1,…,*n*−1, *U*_{i} decrypts every entry using its own symmetric key and randomly permutes the entries of *TV*. At the end of round 3 the trust vector contains all the trust values, encrypted by the public key of *U*_{n} and permuted. By decrypting all the entries in *TV*, *U*_{n} obtains the vector of all trust values.

We use El-Gamal encryption [34] as the initiator's public key scheme. The symmetric scheme for users *U*_{1},…,*U*_{n−1} is Pohlig-Hellman. Both the Pohlig-Hellman and the El-Gamal schemes are implemented over the same group, which is defined as follows. Let *p* be a large prime, such that *p*−1 has a large prime factor, *q*. Let $g\in {\mathbb{Z}}_{p}^{\ast}$ be an element of order *q* in ${\mathbb{Z}}_{p}^{\ast}$. In the Pohlig-Hellman scheme, the key is a pair $a,b\in {\mathbb{Z}}_{p-1}^{\ast}$ such that *ab*≡1 mod (*p*−1). A plaintext $m\in {\mathbb{Z}}_{p}$ is encrypted by *c*≡*m*^{a} mod *p* and a ciphertext is decrypted by *m*≡*c*^{b} mod *p*. In the El-Gamal scheme, the private key is *a*∈{0,…,*q*−1}, the public key is *g*^{a} mod *p*, and a plaintext $m\in {\mathbb{Z}}_{p}$ is encrypted by the pair (*g*^{b} mod *p*, *g*^{ab}·*m* mod *p*) for a randomly chosen *b*. We refer to the two parts of an El-Gamal encryption as two *components*.

By using the Pohlig-Hellman and El-Gamal encryption schemes over the same group we ensure that the security of *CEBP* can be reduced to the hardness of the Decisional Diffie-Hellman (DDH) problem [35]. The DDH problem is to distinguish between the two ensembles (*g*^{x} mod *p*, *g*^{y} mod *p*, *g*^{z} mod *p*) and (*g*^{x} mod *p*, *g*^{y} mod *p*, *g*^{xy} mod *p*). The hardness assumption of DDH is that no probabilistic, polynomial time algorithm can distinguish between these two probability ensembles with non-negligible advantage.

The details of the protocol follow. *U*_{i} (*i*=1,…,*n*−1) chooses four random and independent pairs of Pohlig-Hellman keys $({a}_{i}^{1},{b}_{i}^{1})$, $({a}_{i}^{2},{b}_{i}^{2})$, $({\alpha}_{i}^{1},{\beta}_{i}^{1})$, $({\alpha}_{i}^{2},{\beta}_{i}^{2})$. *U*_{i} uses the El-Gamal public key to encrypt its trust value, *T*_{i}. The result is $\left({g}^{{k}_{i}}\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}p,\phantom{\rule{1em}{0ex}}{T}_{i}{g}^{{k}_{i}{k}_{n}}\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}p\right)$, where *U*_{i} chooses *k*_{i} randomly in the range 0,…,*q*−1. *U*_{i} proceeds to encrypt the El-Gamal encryption of *T*_{i} with its Pohlig-Hellman keys; each of the two components of the El-Gamal encryption is encrypted by one distinct Pohlig-Hellman key, yielding $\left({g}^{{k}_{i}{a}_{i}^{1}}\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}p,\phantom{\rule{1em}{0ex}}{\left({T}_{i}{g}^{{k}_{i}{k}_{n}}\right)}^{{a}_{i}^{2}}\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}p\right)$. *U*_{i} completes the round by publishing this value in *TV*[ *i*]. We think of *TV*[ *i*] as having two components, *TV*[ *i*,1] and *TV*[ *i*,2]: *U*_{i} stores ${g}^{{k}_{i}{a}_{i}^{1}}\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}p$ in *TV*[ *i*,1] and ${\left({T}_{i}{g}^{{k}_{i}{k}_{n}}\right)}^{{a}_{i}^{2}}\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}p$ in *TV*[ *i*,2].

In the second round, each user *U*_{i}, *i*=1,…,*n*−1, makes sure that every entry in *TV*[ ] is encrypted with all four of its Pohlig-Hellman encryption keys (where two of the keys are used to encrypt the left component and two are used to encrypt the right component). Thus, *U*_{i} encrypts *TV*[ *i*] with ${\alpha}_{i}^{1}$ and ${\alpha}_{i}^{2}$, and encrypts *TV*[ *j*] for any *j*≠*i* with ${a}_{i}^{1},{a}_{i}^{2}$, ${\alpha}_{i}^{1}$ and ${\alpha}_{i}^{2}$. After the second round the entry *TV*[ *i*] holds the value $\left({g}^{{k}_{i}\prod _{j=1}^{n-1}{a}_{j}^{1}{\alpha}_{j}^{1}}\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}p,\phantom{\rule{1em}{0ex}}{\left({T}_{i}{g}^{{k}_{i}{k}_{n}}\right)}^{\prod _{j=1}^{n-1}{a}_{j}^{2}{\alpha}_{j}^{2}}\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}p\right)$.

In round 3, the users both decrypt and permute all the values. Each user decrypts all values using all four of its Pohlig-Hellman keys (lines 20–27) and then randomly permutes the resulting vector of values. Due to the commutative property of the scheme, the initiator, *U*_{n}, holds all the trust values at the end of round 3. However, the random permutation each user applies to the encrypted values in round 3 ensures that even if only a pair of users is not compromised, the decrypted trust values are randomly permuted in relation to their associated users.
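The three rounds can be exercised end to end on toy parameters. Here *p*=227, *q*=113, *g*=4 and the encoding *T*_{i}=*g*^{t_i} are our illustration assumptions (they satisfy *p*=2*q*+1 and ord(*g*)=*q*); a real run needs cryptographic sizes and Algorithm 4's message flow:

```python
import random

# Toy run of the three CEBP rounds. Trust values live in the subgroup <g>.
p, q, g = 227, 113, 4
n = 4                                    # U_1..U_3 contribute, U_4 initiates

def ph_key():
    # Pohlig-Hellman exponent invertible mod p-1, with its inverse.
    while True:
        a = random.randrange(3, p - 1)
        try:
            return a, pow(a, -1, p - 1)
        except ValueError:
            continue

k_n = random.randrange(1, q)             # initiator's El-Gamal private key
h = pow(g, k_n, p)                       # published public key g^{k_n}

trusts = [pow(g, t, p) for t in (3, 7, 9)]
keys1 = [ph_key() for _ in range(n - 1)]   # (a_i^1, b_i^1) for component 1
keys2 = [ph_key() for _ in range(n - 1)]   # (a_i^2, b_i^2) for component 2
keys3 = [ph_key() for _ in range(n - 1)]   # (alpha_i^1, beta_i^1)
keys4 = [ph_key() for _ in range(n - 1)]   # (alpha_i^2, beta_i^2)

# Round 1: El-Gamal encrypt T_i, then Pohlig-Hellman encrypt each component.
TV = []
for i, T in enumerate(trusts):
    k = random.randrange(1, q)
    c1, c2 = pow(g, k, p), T * pow(h, k, p) % p
    TV.append([pow(c1, keys1[i][0], p), pow(c2, keys2[i][0], p)])

# Round 2: U_i applies its alpha-keys to its own entry and all four keys
# to every other entry, so each entry ends up under everyone's keys.
for i in range(n - 1):
    for j in range(n - 1):
        e1 = keys3[i][0] if j == i else keys1[i][0] * keys3[i][0]
        e2 = keys4[i][0] if j == i else keys2[i][0] * keys4[i][0]
        TV[j] = [pow(TV[j][0], e1, p), pow(TV[j][1], e2, p)]

# Round 3: each user strips all of its keys and permutes the vector.
for i in range(n - 1):
    TV = [[pow(e[0], keys1[i][1] * keys3[i][1], p),
           pow(e[1], keys2[i][1] * keys4[i][1], p)] for e in TV]
    random.shuffle(TV)

# The initiator removes the remaining El-Gamal layer: m = c2 * c1^{-k_n}.
out = [c2 * pow(c1, q - k_n, p) % p for c1, c2 in TV]
assert sorted(out) == sorted(trusts)     # same multi-set, unlinkable order
```

The final assertion checks exactly the protocol's guarantee: the initiator recovers the multi-set of trust values, but their order has been randomized by every user.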

##### **Proposition 4**

Assume that the DDH problem is hard and that an honest but curious adversary corrupts at most *k* users out of a community of *n* users, *k*≤*n*. If the trust values of all the users are in the sub-group generated by *g*, then *CEBP* privately computes the set of all trust values of community users.

##### *Proof sketch*

If the adversary controls at least *n*−1 users, including the initiator, then the protocol is trivially private, since the output reveals the exact trust values of every user, and thus the protocol messages add no information. If the adversary does not control the initiator, then the protocol is private because all trust values are encrypted by the initiator's public key throughout the protocol. Since the El-Gamal encryption scheme is semantically secure given the hardness of the DDH problem, privacy follows directly.

Therefore, the most interesting case is when *k*≤*n*−2 and the adversary controls the initiator. To prove privacy we define a simulator that is given the adversary’s input and output (which includes the set of trust values) and simulates the adversary’s view of protocol messages.

Each message in our protocol consists of the trust vector *TV*. Each entry in this vector is a pair of elements in ${\mathbb{Z}}_{p}^{\ast}$. Thus, the whole view of the adversary can be written as *e*_{1},…,*e*_{m}, where ${e}_{i}\in {\mathbb{Z}}_{p}^{\ast}$ for every *i*=1,…,*m*. The value of *m* is at most *O*(*n*^{2}) because the number of elements in *TV* is 2(*n*−1) and the adversary receives a message with *TV* in it at most *n*−2 times in each of the three rounds.

Note that each element *e*_{i} is obtained by raising *g* to a power *η*_{i} that depends on the input and random coin tosses of each participant. The simulator generates a simulated view *f*_{1},…,*f*_{m} as follows. If *η*_{i} is determined by the input and coin tosses of the adversary, then the simulator, which has access to this input and these coin tosses, sets *f*_{i}=*e*_{i}. However, if *η*_{i} is generated at least partially by an uncorrupted node, then the simulator independently chooses a random element *ζ*_{i}∈{0,…,*q*−1} and sets ${f}_{i}={g}^{{\zeta}_{i}}$.

To prove that the simulator’s view is computationally indistinguishable from the real-world view, we construct a series of hybrid ensembles *H*_{0},…,*H*_{m}, such that *H*_{0} is the real-world view *e*_{1},…,*e*_{m}, and for every *i*=1,…,*m* we define ${H}_{i}\triangleq {f}_{1},\dots ,{f}_{i},{e}_{i+1},\dots ,{e}_{m}$. Essentially, *H*_{m} is the view of the simulator.

We can show that for every *i*, if *H*_{i} can be computationally distinguished from *H*_{i+1}, then the DDH assumption is false. Since we assume that DDH is a hard problem, *H*_{i} and *H*_{i+1} are computationally indistinguishable, and since *m* is of polynomial size in *n*, *H*_{0} is indistinguishable from *H*_{m}, completing the proof. □

The protocol requires *O*(*n*) messages, each of length *O*(*n*) and the computation complexity for each participant in the scheme is *O*(*n*).

### Multiple Private Keys Protocol *MPKP*

The *AP* and *WAP* protocols introduced in the previous sections carry out private trust computation assuming that the initiator *U*_{n} is not compromised and does not share its private key with other users. In the rest of this work we assume that any community user, including *U*_{n}, may be compromised by a poly-bounded *k*-listening curious adversary.

The generalized Multiple Private Keys Protocol *MPKP* copes with this problem and outputs the average trust. The idea of the *MPKP* protocol is as follows. During the initialization stage the user *U*_{n} initializes all entries of the trust vector, *TV*, and of the accumulated vector, *AV*, to 1, sets the accumulated variable *A* to 1, and sends the *M*_{t}=(*TV*,*AV*,*A*) message to the first community user *U*_{1}, as in the previous protocols. During the first round of the *MPKP* protocol execution each user, *U*_{i}, randomly fragments its secret trust, *T*_{i}, into a sum of *n*−1 shares, encrypts the corresponding share by the public key of each user *U*_{j}, *j*=1.. *n*−1, and accumulates its encrypted shares (multiplying each of them with the corresponding entries) in the accumulated vector, *AV*. After execution of the first round, the updated *AV* vector does not return to the initiator *U*_{n}. The *AV* vector visits each community user, while each *U*_{i} opens the *i*-th entry (that is encrypted by the public key of *U*_{i}), revealing a sum of decrypted shares, encrypts this sum by the public key of the initiator *U*_{n}, accumulates this sum in the accumulated variable, *A*, and deletes the *i*-th entry of the *AV* vector.
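The random fragmentation step can be sketched directly; the modulus *m* stands for the shared homomorphism modulus (a sketch of the arithmetic only, not of Algorithm 5's encrypted message flow):

```python
import random

# Split a trust value T into `shares` random addends mod m. Any subset of
# fewer than all the shares is uniformly random and reveals nothing about T.
def fragment(T, shares, m):
    r = [random.randrange(m) for _ in range(shares - 1)]
    r.append((T - sum(r)) % m)       # last share fixes the sum to T mod m
    return r

m, T = 97, 5
parts = fragment(T, 6, m)
assert len(parts) == 6 and sum(parts) % m == T
```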

A detailed description of the *MPKP* protocol follows. Assume that each community user, *U*_{i}, *i*=1.. *n*−1, generates its personal pair $({P}_{i}^{+},{P}_{i}^{-})$ of private and public keys. Denote by *E*_{i} and *D*_{i} the encryption and decryption algorithms produced by *U*_{i}. The private key, ${P}_{i}^{+}$, is kept secret, while the public key, ${P}_{i}^{-}$, is shared with all other users *U*_{1},…,*U*_{i−1}, *U*_{i+1},…,*U*_{n}. As in the previous schemes, the cryptosystem must be homomorphic. An additional requirement is that the homomorphism modulus, *m*, must be identical for all users. One possibility is to use the Benaloh cryptosystem [28, 36], for which many different key pairs are possible for every homomorphism modulus. The system works as follows. Select two large primes, *p*,*q*, such that: $N\triangleq pq$, *m*|*p*−1, $gcd(m,(p-1)/m)=1$ and $gcd(m,q-1)=1$, which implies that *m* is odd. The density of such primes along appropriate arithmetic sequences is large enough to ensure efficient generation of multiple *p*,*q* (see [36] for details). Select $y\in {\mathbb{Z}}_{N}^{\ast}$ such that *y*^{ϕ(N)/m}≢1 mod *N*. The public key is (*N*,*y*), and encryption of $M\in {\mathbb{Z}}_{m}$ is performed by choosing a random $u\in {\mathbb{Z}}_{N}^{\ast}$ and sending *y*^{M}*u*^{m} mod *N*. In order to decrypt, the holder of the secret key computes at a preprocessing stage ${T}_{M}\triangleq {y}^{M\varphi \left(N\right)/m}\phantom{\rule{0.2em}{0ex}}mod\phantom{\rule{0.2em}{0ex}}N$ for every $M\in {\mathbb{Z}}_{m}$. It should be noted that *m* is small enough such that *m* exponentiations can be performed. Decryption of *z* is performed by computing *z*^{ϕ(N)/m} mod *N* and finding the unique *T*_{M} to which it is equal.
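A toy Benaloh instance satisfying these constraints follows; *m*=5, *p*=11, *q*=7, *y*=2 are illustration values chosen so that *m*|*p*−1, gcd(*m*,(*p*−1)/*m*)=1 and gcd(*m*,*q*−1)=1 hold:

```python
import random
from math import gcd

m_mod, p, q, y = 5, 11, 7, 2
N, phi = p * q, (p - 1) * (q - 1)
assert pow(y, phi // m_mod, N) != 1          # y^{phi(N)/m} must not be 1

def enc(M):
    u = random.randrange(1, N)
    while gcd(u, N) != 1:
        u = random.randrange(1, N)
    return pow(y, M, N) * pow(u, m_mod, N) % N   # y^M * u^m mod N

# Preprocessing: T_M = y^{M*phi(N)/m} mod N for every M in Z_m.
table = {pow(y, M * (phi // m_mod), N): M for M in range(m_mod)}

def dec(c):
    return table[pow(c, phi // m_mod, N)]

# Additive homomorphism: E(a)*E(b) decrypts to a+b mod m.
assert dec(enc(2) * enc(4) % N) == (2 + 4) % m_mod
```

The additive homomorphism checked in the last line is exactly the property *MPKP* needs to accumulate encrypted shares by multiplication.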

*MPKP* is performed in two rounds (Algorithm 5). The initialization procedure is shown in lines 1–4. The first round is the accumulation round, where all users share their secret trust values *T*_{i} with the other users. Upon reception of a message, *M*_{t}, each user, *U*_{i}, proceeds as follows: (a) *U*_{i} chooses ${r}_{1}^{i},\dots ,{r}_{n-1}^{i}$ uniformly at random such that ${T}_{i}=\sum _{j=1}^{n-1}{r}_{j}^{i}$; (b) *U*_{i} encrypts each ${r}_{j}^{i},\phantom{\rule{1em}{0ex}}j=1\phantom{\rule{1em}{0ex}}\mathrm{..}\phantom{\rule{1em}{0ex}}n-1$, by the public key ${P}_{j}^{-}$ of the user *U*_{j} and multiplies it by the current value stored in the *j*-th entry of *AV*. As a result, the output *AV* vector contains the accumulated product $\prod _{k=1}^{n-1}{E}_{j}\left({r}_{j}^{k}\right)$ in each *j*-th entry (lines 5–12).

In the second round, on reception of the message *M*_{t}, each user, *U*_{i}, decrypts the corresponding *i*-th entry with its private key, ${P}_{i}^{+}$, computes the sum $\sum _{j=1}^{n-1}{r}_{i}^{j}$, encrypts it by the public key ${P}_{n}^{-}$ of *U*_{n} as ${E}_{n}\left(\sum _{j=1}^{n-1}{r}_{i}^{j}\right)$, accumulates this sum in the accumulated variable, *A*, deletes the *i*-th entry, and sends the updated vector to the next user *U*_{i+1}. Note that the partial sum $\sum _{j=1}^{n-1}{r}_{i}^{j}$ that *U*_{i} decrypts reveals no information about the correct trust values. As a result of the second round the initiator *U*_{n} receives $A=\prod _{i=1}^{n-1}{E}_{n}\left(\sum _{j=1}^{n-1}{r}_{i}^{j}\right)$ (lines 13–19). *U*_{n} decrypts *A* and computes the sum of trusts as ${S}_{t}=\sum _{i=1}^{n-1}\sum _{j=1}^{n-1}{r}_{i}^{j}$. Thus, the average trust *T*^{avr} is equal to $\frac{{S}_{t}}{n}$ (lines 20–22). Proposition 5 states the privacy of the *MPKP* protocol. The communication complexity of the *MPKP* protocol is *O*(*n*) messages, each of length *O*(*n*).

#### **Proposition 5**

*MPKP* performs computationally secure computation of the average trust in the Additive Reputation System. No restriction is imposed on the initiator *U*_{n}.

The last introduced protocol is the *MPWP* for the computation of the weighted average trust ${\mathit{\text{wT}}}_{t}^{\mathit{\text{avr}}}$. The idea of the *MPWP* is as follows. During the initialization stage the user *U*_{n} generates a vector, *TV*, such that each *i*-th entry contains the weight *w*_{i} of *U*_{i} encrypted by the public key of *U*_{n}. *U*_{n} sends *TV* and an (*n*−1)×(*n*−1) matrix, *SM*, with all entries initialized to 1, to the first community user, *U*_{1}, as in the previous protocols. During the first round of the *MPWP* execution each *U*_{i} computes its encrypted weight raised to the power of its secret trust, ${E}_{n}{\left({w}_{i}\right)}^{{T}_{i}}$, multiplies it by an encryption of a randomly chosen number (bias) *z*_{i}, and accumulates the product in the accumulated entry (by multiplying the entry by the obtained result). In addition, *U*_{i} fragments its bias, *z*_{i}, into *n*−1 shares, encrypts each *j*-th share by the public key of *U*_{j}, and inserts it in the *j*-th location of the *i*-th matrix row. At the end of the first round *U*_{n} decrypts the total biased weighted trust. The total random bias is removed during the second round of the *MPWP* execution, when each *U*_{j} decrypts the entries of the *j*-th matrix column, encrypts the sum of these values by the public key of the initiator, accumulates it in an accumulation variable, *A*, and deletes the *j*-th column.

The details follow. The initiator, *U*_{n}, starts the first round by generating the encryption of the (*n*−1)-entry trust vector, *TV*= [ *E*_{n}(*w*_{1}).. *E*_{n}(*w*_{n−1})]. Note that each weight *w*_{i} is encrypted by the public key of *U*_{n}, ${P}_{n}^{-}$. In addition, *U*_{n} initializes to 1 each entry of the (*n*−1)×(*n*−1) matrix of shares *SM*. The message ${M}_{t}^{w}$ sent by *U*_{n} to the community users is *M*=(*TV*,*SM*). Upon reception of the *TV* vector each user *U*_{i} proceeds as follows: (a) *U*_{i} computes ${E}_{n}{\left({w}_{i}\right)}^{{T}_{i}}\cdot {E}_{n}\left({z}_{i}\right)$. Here *z*_{i} is a number randomly generated by *U*_{i} that provides the secret bias. (b) *U*_{i} accumulates its encrypted weighted trust in the accumulated variable *A* by setting $A=A\cdot {E}_{n}{\left({w}_{i}\right)}^{{T}_{i}}\cdot {E}_{n}\left({z}_{i}\right)$. After that, the *i*-th entry of *TV* is deleted. (c) *U*_{i} shares *z*_{i} in the *i*-th row of the *SM* shares matrix as $\mathit{\text{SM}}\left[\phantom{\rule{0.3em}{0ex}}i\right]\left[\phantom{\rule{0.3em}{0ex}}\right]=\phantom{\rule{1em}{0ex}}\left[\phantom{\rule{0.3em}{0ex}}{E}_{1}\right({z}_{i}^{1}\left)\phantom{\rule{1em}{0ex}}\mathrm{..}\phantom{\rule{1em}{0ex}}{E}_{n-1}\right({z}_{i}^{n-1}\left)\right]$. At the end of the first round *U*_{n} receives the accumulated entry, equal to the biased product $\mathit{\text{BT}}=\prod _{i=1}^{n-1}{E}_{n}{\left({w}_{i}\right)}^{{T}_{i}}{E}_{n}\left({z}_{i}\right)$, encrypted by its public key, and the updated shares matrix *SM*, with $\mathit{\text{SM}}\left[\phantom{\rule{0.3em}{0ex}}i\right]\left[\phantom{\rule{0.3em}{0ex}}j\right]={E}_{j}\left({z}_{i}^{j}\right)$. The decryption procedure applied to *BT* outputs the biased sum $D\left(\mathit{\text{BT}}\right)=\sum _{i=1}^{n-1}{w}_{i}{T}_{i}+\sum _{i=1}^{n-1}{z}_{i}$. A second round is performed in order to subtract the random bias $\sum _{i=1}^{n-1}{z}_{i}$ and obtain the correct weighted average trust *wT*^{avr}. The second round of the *MPWP* is identical to the corresponding round of the *MPKP*. Upon reception of the *SM* matrix each user, *U*_{i}, decrypts the corresponding *i*-th column ${E}_{i}\left({z}_{1}^{i}\right),{E}_{i}\left({z}_{2}^{i}\right),\dots ,{E}_{i}\left({z}_{n-1}^{i}\right)$, whose entries were encrypted by the community users with the public key of *U*_{i}, ${P}_{i}^{-}$. Each *U*_{i}, *i*=1.. *n*−1, computes the sum of the partial shares ${\mathit{\text{PSS}}}_{i}=\sum _{j=1}^{n-1}{z}_{j}^{i}$, encrypts it by the public key of *U*_{n}, ${P}_{n}^{-}$, and accumulates it in the accumulated variable *A*. After that, the *i*-th column *SM*[ ][ *i*] is deleted. As a result of the second round, the initiator, *U*_{n}, receives the accumulated variable $A=\prod _{i=1}^{n-1}{E}_{n}\left({\mathit{\text{PSS}}}_{i}\right)$, which is decrypted as $D\left(A\right)=\sum _{i=1}^{n-1}\sum _{j=1}^{n-1}{z}_{j}^{i}$, the total bias.
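The bias-and-unbias bookkeeping of the two rounds can be checked with a plain-arithmetic skeleton; encryption is omitted, and the modulus and values are illustration assumptions:

```python
import random

# Skeleton of MPWP's arithmetic: each user adds a random bias z_i to its
# weighted trust; the biases are secret-shared in the SM matrix and the
# total bias is reconstructed and removed in the second round.
m = 1009
weights = [2, 3, 5]
trusts = [4, 1, 2]
n_users = len(weights)

biased = 0
SM = []                                  # rows of bias shares
for w, T in zip(weights, trusts):
    z = random.randrange(m)
    biased = (biased + w * T + z) % m    # round 1: accumulate w*T + z
    row = [random.randrange(m) for _ in range(n_users - 1)]
    row.append((z - sum(row)) % m)       # shares of z summing to z mod m
    SM.append(row)

# Round 2: summing every share reconstructs the total bias.
total_bias = sum(sum(row) for row in SM) % m
unbiased = (biased - total_bias) % m
assert unbiased == sum(w * T for w, T in zip(weights, trusts))
```

Subtracting the reconstructed bias leaves exactly the weighted sum, mirroring the subtraction the initiator performs after decrypting *BT* and *A*.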

The weighted average trust is then obtained by subtracting the decrypted total bias from the decrypted biased sum: ${\mathit{\text{wT}}}^{\mathit{\text{avr}}}=\left(D\left(\mathit{\text{BT}}\right)-D\left(A\right)\right)/n$. The private trust computation carried out by the *MPKP* and the *MPWP* protocols is preserved in a computationally secure manner due to the following reasons:

- (a) Each community user, *U*_{i}, fragments its trust, *T*_{i}, randomly into *n*−1 shares (Algorithm 5, lines 6–8).
- (b) Each ${r}_{i}^{j}$, encrypted by *U*_{i} with the public key ${P}_{j}^{-}$ of *U*_{j}, shared with each user *U*_{j}, *j*=1,…,*n*−1, and accumulated in the *TV* vector, reveals no information about the exact value of *T*_{i} to *U*_{j} (lines 9–14).
- (c) The decryption performed by each *U*_{i}, *i*=1,…,*n*−1, with its private key, ${P}_{i}^{+}$, in the second round outputs the sum of the partial shares, ${D}_{i}\left(\mathit{\text{TV}}\right[\phantom{\rule{0.3em}{0ex}}i\left]\phantom{\rule{0.3em}{0ex}}\right)=\sum _{j=1}^{n-1}{r}_{j}^{i}$, of all community users. The value $\sum _{j=1}^{n-1}{r}_{j}^{i}$ reveals no information about the secret trust values *T*_{1},…,*T*_{i−1},*T*_{i+1},…,*T*_{n−1}.
- (d) The encryption ${E}_{n}\left(\sum _{j=1}^{n-1}{r}_{j}^{i}\right)$ of the partial shares sum, performed by each *U*_{i} with the public key ${P}_{n}^{-}$ of the initiator *U*_{n} and accumulated in *A*, can be decrypted by *U*_{n} only.
- (e) Assume a coalition ${U}_{{j}_{i}},\dots ,{U}_{{j}_{i+k-1}}$ of at most *k*<*n* curious adversarial users, possibly including the initiator *U*_{n}. Then the only exact trust values revealed to the coalition are those of the coalition members. The privacy of the uncorrupted users is preserved by the homomorphic encryption scheme, which generates a secret private key for each user, and by the random fragmentation of the secret trust.

In *MPWP*, *O*(*n*) messages of length *O*(*n*^{2}) are sent.

## Conclusions

We derived a number of schemes for the private computation of trust in a given user by a community of users. Trust computation is performed in a fully distributed manner without involving a Trusted Authority. The proposed *AP* and *WAP* protocols are computationally secure under the assumption of an uncompromised initiator, *U*_{n}. The *AP* and *WAP* protocols compute the average unweighted and weighted trust, respectively. The generalized *MPKP* and *MPWP* protocols relax the assumption that *U*_{n} is non-compromised. They carry out the private unweighted and weighted trust computation, respectively, without limitations imposed on *U*_{n}. The proposed protocols send *O*(*n*) (large) messages.

The *PKEBP* and *CEBP* for the removal of outliers are presented as well. The protocols introduced and analyzed in this paper may be efficiently applied in a fully distributed environment without any trusted authority. Compared with other models, our schemes privately compute trust values with a low communication overhead of *O*(*n*) (large) messages in the simplified ring network topology. The schemes may be applied to complete-topology systems in which all network users are connected by direct links. The schemes may be attractive in the case when sending a linear number (*O*(*n*)) of large messages is better than sending a substantially larger number (*O*(*n*^{3})) of possibly smaller messages. Moreover, the outlier removal (performed by the *CEBP* protocol) may be efficiently carried out by computationally restricted users when there are no resources for generating computationally expensive Interactive and Non-Interactive Zero-Knowledge Proofs. The schemes proposed in this paper are not restricted to trust computation. They may be extended to other models that privately compute sensitive information with only *O*(*n*) messages.

In a case where trust is represented by several values rather than a single value, one can apply our techniques to each such value independently.

## Acknowledgments

Supported by Deutsche Telekom Laboratories at Ben-Gurion University of the Negev, Israel, Rita Altura Trust Chair in Computer Sciences, ICT Programme of the European Union under contract number FP7-215270 (FRONTS), Lynne and William Frankel Center for Computer Sciences, and the internal research program of the Sami Shamoon College of Engineering. The paper is a full version of two extended abstracts each describing a different part of the results [8, 9].

## References

1. Paillier P: Public-key cryptosystems based on composite degree residuosity classes. In *Advances in Cryptology, EUROCRYPT '99*. Springer, Berlin Heidelberg; 1999:223–238.
2. Pavlov E, Rosenschein JS, Topol Z: Supporting privacy in decentralized additive reputation systems. In *Trust Management*. Springer, Berlin Heidelberg; 2004:108–119.
3. Gudes E, Gal-Oz N, Grubshtein A: Methods for computing trust and reputation while preserving privacy. In *Proceedings of the 23rd Annual IFIP WG 11.3 Working Conference on Data and Applications Security*. Springer, Berlin Heidelberg; 2009:291–298.
4. Cramer R, Damgard IB, Buus Nielsen J: Multiparty computation from threshold homomorphic encryption. In *EUROCRYPT '01: Proceedings of the International Conference on the Theory and Application of Cryptographic Techniques*. Springer, Berlin Heidelberg; 2001:280–299.
5. Franklin M, Haber S: Joint encryption and message-efficient secure computation. *J Cryptology* 1996, 9(4):217–232.
6. Romero-Tris C, Castella-Roca J, Viejo A: Multi-party private web search with untrusted partners. In *Security and Privacy in Communication Networks, Proceedings of SecureComm 2011*. Springer, Berlin Heidelberg; 2012:261–280.
7. Damgard I, Faust S, Hazay C: Secure two-party computation with low communication. In *TCC 2012, LNCS 7194*. Edited by Cramer R; 2012:54–74.
8. Dolev S, Gilboa N, Kopeetsky M: Computing trust privately in the presence of curious and malicious users. In *Proceedings of the International Symposium on Stochastic Models in Reliability Engineering, Life Sciences and Operations Management*. Beer-Sheva, Israel: Sami Shamoon College of Engineering; 2010.
9. Dolev S, Gilboa N, Kopeetsky M: Computing multi-party trust privately in *O(n)* time units sending one (possibly large) message at a time. In *Proceedings of the 25th Symposium on Applied Computing (SAC 2010)*. Sierre, Switzerland; 2010:1460–1465.
10. Asharov G, Jain A, Tromer E, Vaikuntanathan V, Wichs D: Multiparty computation with low communication, computation and interaction via threshold FHE. In *Proceedings of EUROCRYPT 2012*; 2012:483–501.
11. Beerliova-Trubiniova Z, Hirt M: Perfectly-secure MPC with linear communication complexity. In *TCC 2008, LNCS 4948*; 2008:213–230.
12. Dimitriou T, Michalas A: Multi-party trust computation in decentralized environments. In *Proceedings of the 5th IFIP International Conference on New Technologies, Mobility and Security (NTMS 2012)*. Istanbul, Turkey; 2012.
13. Dimitriou T, Michalas A: Multi-party trust computation in decentralized environments in the presence of malicious adversaries. *Ad Hoc Netw J*, Elsevier; 2013. http://dx.doi.org/10.1016/j.adhoc.2013.04.013
14. Bachrach Y, Parnes A, Procaccia AD, Rosenschein JS: Gossip-based aggregation of trust in decentralized reputation systems. *Autonomous Agents Multi-Agent Syst* 2009, 19(2):153–172.
15. Huynh TD, Jennings NR, Shadbolt NR: An integrated trust and reputation model for open multi-agent systems. In *Proceedings of the 16th European Conference on Artificial Intelligence*. Spain; 2004:18–22.
16. Kinateder M, Rothermel K: Architecture and algorithms for a distributed reputation system. In *Trust Management*. Springer, Berlin Heidelberg; 2003.
17. Mui L, Mohtashemi M, Halberstadt A: A computational model of trust and reputation. In *Proceedings of the 35th Annual Hawaii International Conference on System Sciences (HICSS)*. IEEE; 2002:1435–1439.
18. Xiong L, Liu L: PeerTrust: supporting reputation-based trust for peer-to-peer electronic communities. *IEEE Trans Knowl Data Eng* 2004, 16(7):843–857.
19. Dolev D, Lynch NA, Pinter SS, Stark EW, Weihl WE: Reaching approximate agreement in the presence of faults. *J ACM* 1986, 33(3):499–516.
20. Pedersen TP: Non-interactive and information theoretic secure verifiable secret sharing. In *Advances in Cryptology, CRYPTO '91*. Springer, Berlin Heidelberg; 1991:129–140.
21. Shamir A: How to share a secret. *Commun ACM* 1979, 22(11):612–613.
22. Hasan O, Brunie L, Bertino E, Shang N: A decentralized privacy preserving reputation protocol for the malicious adversarial model. *Inf Forensics Secur J* 2013, 8(6):1–14.
23. Dolev S, Ostrovsky R: Xor-trees for efficient anonymous multicast and reception. *ACM Trans Inf Syst Secur* 2000, 3(2):63–84.
24. Goldreich O: *Foundations of Cryptography: Volume 1, Basic Tools*. New York: Cambridge University Press; 2000.
25. Naccache D, Stern J: A new public key cryptosystem based on higher residues. In *Proceedings of the 5th ACM Conference on Computer and Communications Security*; 1998:59–66.
26. Goldwasser S, Micali S: Probabilistic encryption. *J Comput Syst Sci* 1984, 28(2):270–299.
27. Okamoto T, Uchiyama S: A new public-key cryptosystem as secure as factoring. In *EUROCRYPT 1998*; 1998:308–318.
28. Benaloh J: Dense probabilistic encryption. In *Proceedings of the Workshop on Selected Areas of Cryptography*. Kingston; 1994:120–128.
29. Boneh D, Goh E-J, Nissim K: Evaluating 2-DNF formulas on ciphertexts. In *Theory of Cryptography (TCC)*. Springer, Berlin Heidelberg; 2005:325–341.
30. Gentry C: Fully homomorphic encryption using ideal lattices. In *Proceedings of the 41st Annual ACM Symposium on Theory of Computing (STOC)*. New York, NY, USA: ACM; 2009:169–178.
31. Beimel A, Dolev S: Buses for anonymous message delivery. *J Cryptology* 2003, 16(1):25–39.
32. Goldreich O: *Foundations of Cryptography: Volume 2, Basic Applications*. New York: Cambridge University Press; 2004.
33. Pohlig SC, Hellman ME: An improved algorithm for computing logarithms over GF(p) and its cryptographic significance. *IEEE Trans Inf Theory* 1978, 24(1):106–110.
34. El Gamal T: A public key cryptosystem and a signature scheme based on discrete logarithms. In *Proceedings of CRYPTO 84 on Advances in Cryptology*. New York, NY, USA: Springer-Verlag; 1985:10–18.
35. Boneh D: The decision Diffie-Hellman problem. In *Proceedings of Algorithmic Number Theory, Third International Symposium (ANTS-III)*; 1998:48–63.
36. Benaloh J: Verifiable secret-ballot elections. Ph.D. thesis, Yale University; 1987.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.