Smart contracts are a relatively new technological solution to the problem of trusting other parties in transactions. The idea is to encode a contract as a computer program that runs on a cryptocurrency platform. Instead of trusting the parties involved, or even a third party, the smart contract relies only on the platform functioning correctly. Smart contracts are limited, however, to the sorts of unambiguous inputs and outputs that a computer program can reliably work with, such as balances of various crypto tokens. Advocates expect innovations to expand the scope of the inputs and outputs they can interact with (smart locks, smart cars, etc.). Smart contracts are most notably associated with the Ethereum platform.
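To make the idea concrete, here is a minimal sketch in Python (standing in for a real contract language such as Solidity) of what a smart contract boils down to: a deterministic program that the platform executes over token balances. The ToyPlatform class, the pay_on_condition function, and the account names are hypothetical illustrations for this essay, not any platform’s actual API.

    class ToyPlatform:
        """Stand-in for the cryptocurrency platform: it holds token
        balances and executes contract code exactly as written."""
        def __init__(self, balances):
            self.balances = dict(balances)

        def transfer(self, sender, recipient, amount):
            if self.balances.get(sender, 0) < amount:
                raise ValueError("insufficient funds")
            self.balances[sender] -= amount
            self.balances[recipient] = self.balances.get(recipient, 0) + amount

    def pay_on_condition(platform, buyer, seller, amount, condition_met):
        # The contract's inputs must be unambiguous to the platform: a token
        # balance or an on-chain flag qualifies; "the goods were satisfactory"
        # does not.
        if condition_met:
            platform.transfer(buyer, seller, amount)

    platform = ToyPlatform({"alice": 50, "bob": 0})
    pay_on_condition(platform, "alice", "bob", 50, condition_met=True)
    assert platform.balances == {"alice": 0, "bob": 50}

Note that nothing in the program asks for anyone’s judgment: the transfer happens if and only if the condition holds, which is why the parties need to trust only the platform.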
More traditional methods for dealing with the issue of trust include escrow (a trusted third party who holds and releases funds based on conditions specified in the agreement) and legally enforceable contracts. These methods still require trust in human beings, but only in a relative few administrators who are trusted by all parties to the transaction. The disadvantages are that these third parties charge higher fees (compared to smart contracts) for their trustworthiness, and that they may not turn out to be so trustworthy after all.
All of these solutions require at least one party to put themselves in a sort of compromised position in order to be trusted by the other party or parties. In the case of escrow, for instance, one party must place some of their money where they can no longer access it; only at this point does the other party feel secure enough to send their goods or services. In the case of a legally binding contract, one or more parties choose to put themselves in a position where they will be punished if they do not fulfill their end. It is the inability to renege, or the consequence for reneging, that makes the other party comfortable with proceeding.
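This self-binding structure can itself be sketched in code. Below is a toy model of escrow (a plain Python dict stands in for the parties’ balances; all names are hypothetical), and the crucial design choice is what is absent: once the buyer deposits, there is deliberately no withdraw function, so the depositor cannot renege.

    balances = {"buyer": 100, "seller": 0, "escrow": 0}

    def deposit(amount):
        # The buyer voluntarily moves funds out of their own control.
        assert balances["buyer"] >= amount
        balances["buyer"] -= amount
        balances["escrow"] += amount

    def release(delivered):
        # The escrow agent (or a smart contract) pays the seller only when
        # the agreed condition is met; the buyer has no say at this point.
        if delivered:
            balances["seller"] += balances["escrow"]
            balances["escrow"] = 0

    deposit(100)   # the buyer is now locked in; the seller can safely ship
    release(True)  # condition met: the funds go to the seller
    assert balances == {"buyer": 0, "seller": 100, "escrow": 0}

It is precisely the missing withdraw path, the buyer’s inability to take the money back, that makes the arrangement trustworthy from the seller’s side.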
The interesting thing to note here is that there is power in the ability to put oneself in a more vulnerable position, where one cannot renege without consequence.
Personal Trust at Smaller Scales
On smaller scales, we trust people directly because we have personal relationships with them. If you think about it, the same vulnerability mechanism is at work. By making a personal promise, we make ourselves emotionally vulnerable, inasmuch as we would feel guilt for breaking that promise. This puts us in an advantageous position, because people trust (correctly or not) that we would feel guilty for screwing them over. It’s the Original Smart Contract. Without the human capacity for guilt, we could not earn the trust of other people.
There is also the factor of reputation within a group, which appears to be another basis for trust. I would argue, however, that reputation piggybacks on the same emotional vulnerability described above. What exactly does one have a reputation for? Through a series of good business transactions, what is somebody actually demonstrating about themselves other than that they are a moral person? People with no sense of morality could get by in the business world, still largely to everybody’s benefit, by maintaining the false reputation of being vulnerable to feelings of guilt. But if such a person’s true nature were discovered, they would lose a great deal of trust. In a world where nobody has a sense of guilt and everybody knows it, the illusion holding up the system of reputations would collapse completely.
An Ad Hoc Hypothesis
I know how easy it is to come up with plausible-sounding yet incorrect scientific hypotheses. (There’s a whole festival dedicated to doing this on purpose. I’ve been.) This is particularly true when it comes to evolutionary psychology.
With that disclaimer, and as a matter of pure speculation, I wonder whether this emotional vulnerability is an evolved feature for this very reason: we need it in order to be trustworthy. Granted, it would be evolutionarily advantageous within a society to merely fake a sense of morality, and indeed such people do exist. But too many untrustworthy people would make the society as a whole less fit than other societies.
According to one view, direct relationships were the basis of trust long ago, when people lived in small tribes. This is perhaps anthropologically accurate, or perhaps merely allegorical. At any rate, devices such as escrow, contracts, and even markets themselves are necessary to scale trust beyond the circle of people we can relate to personally. We accept some alienation for efficiency. Smart contracts could be the next step on that scale. This is fine, but let us remember the trade-off we are making: we are accepting a more alienating, trustless system in exchange for efficiency at scale.
What of those who do not heed the warning, and would prefer a techno-utopia of smart contracts where we “no longer need trust”? Here, again, I can only speculate. What if, as per my ad hoc hypothesis, our sense of morality arose from the evolutionary advantage of being trustworthy? In what direction, then, would we evolve, biologically or culturally, if we no longer needed to be trustworthy and morality no longer conferred an evolutionary advantage?