I wrote my first production Solidity contract in early 2022. It was a straightforward ERC-20 token for a project that needed a custom token with a fixed supply and a basic vesting schedule. Nothing exotic. The kind of contract that experienced Solidity developers would consider routine.
It took me three weeks. The Solidity itself took about two days. The other nineteen days were spent on everything around it.
The contract itself
The token logic was standard. Fixed supply minted to the deployer, a vesting contract that held allocations and released them on a schedule, and standard ERC-20 transfer functions. I used OpenZeppelin's ERC-20 base because writing your own is an unnecessary risk.
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

import "@openzeppelin/contracts/token/ERC20/ERC20.sol";
import "@openzeppelin/contracts/access/Ownable.sol";

contract ProjectToken is ERC20, Ownable {
    uint256 public constant MAX_SUPPLY = 100_000_000 * 10**18;

    constructor() ERC20("ProjectToken", "PTK") {
        _mint(msg.sender, MAX_SUPPLY);
    }
}
```
This part was fine. The problems started with gas.
Gas estimation
Gas costs in Solidity aren't intuitive if you come from backend development. Every storage write costs gas. Every computation costs gas. But the costs aren't proportional in the way you expect.
Writing to a storage slot for the first time (setting it from zero to a nonzero value) costs 20,000 gas. Rewriting a slot that already holds a nonzero value costs roughly 5,000, less once the slot has been touched in the same transaction. Reading from storage costs 2,100 gas for the first access in a transaction and 100 gas for each subsequent access. These numbers matter because they determine whether your contract is economically viable.
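The cold/warm asymmetry suggests a standard optimization: read a storage value into a local variable once and work with the copy, paying the expensive first access a single time. A minimal sketch (the contract and its array are illustrative, not from the project):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

contract GasExample {
    uint256[] public amounts;

    // Naive: reads amounts.length from storage on every iteration.
    function sumNaive() external view returns (uint256 total) {
        for (uint256 i = 0; i < amounts.length; i++) {
            total += amounts[i];
        }
    }

    // Cheaper: cache the length on the stack, so the storage read
    // happens once instead of once per iteration.
    function sumCached() external view returns (uint256 total) {
        uint256 len = amounts.length;
        for (uint256 i = 0; i < len; i++) {
            total += amounts[i];
        }
    }
}
```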
The vesting contract had a function that released tokens to a beneficiary. My first version looped through all vesting schedules for that beneficiary and released everything that was due. On a local blockchain, this worked fine. On testnet with real gas costs, releasing 10 vesting schedules in one transaction cost $40 at the gas prices at the time.
The fix was to allow releasing one schedule at a time, letting the caller choose which one. This reduced the gas cost by an order of magnitude for the common case.
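A sketch of what "one schedule at a time" can look like. This is not the original contract; the struct, mapping, and function names are hypothetical, but the shape is the point: gas stays flat regardless of how many schedules a beneficiary has, because the caller picks a single index.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

import "@openzeppelin/contracts/token/ERC20/IERC20.sol";

// Illustrative sketch: releasing a single schedule chosen by index
// instead of looping over all of them.
contract VestingSketch {
    struct VestingSchedule {
        uint256 amount;
        uint256 releaseTime;
        bool released;
    }

    IERC20 public immutable token;
    mapping(address => VestingSchedule[]) public schedules;

    constructor(IERC20 _token) {
        token = _token;
    }

    function release(uint256 index) external {
        VestingSchedule storage s = schedules[msg.sender][index];
        require(!s.released, "Already released");
        require(block.timestamp >= s.releaseTime, "Not yet due");
        s.released = true; // effect before interaction
        require(token.transfer(msg.sender, s.amount), "Transfer failed");
    }
}
```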
Hardhat and local development
Hardhat was the development framework. The setup is straightforward and the documentation is good.
```typescript
// hardhat.config.ts
import { HardhatUserConfig } from "hardhat/config";
import "@nomicfoundation/hardhat-toolbox";

const config: HardhatUserConfig = {
  solidity: "0.8.17",
  networks: {
    hardhat: {
      chainId: 1337,
    },
    goerli: {
      url: process.env.GOERLI_RPC_URL || "",
      accounts: process.env.PRIVATE_KEY ? [process.env.PRIVATE_KEY] : [],
    },
  },
};

export default config;
```
The local Hardhat network is an in-memory blockchain that mines a block for every transaction instantly. This is convenient but misleading, because it means your tests never encounter the timing issues that exist on a real network.
Forking mainnet locally is useful for testing against real state:
```
npx hardhat node --fork https://mainnet.infura.io/v3/YOUR_KEY
```
This gives you a local copy of the Ethereum mainnet state at the current block. You can interact with real deployed contracts without spending real gas.
Reentrancy
The classic smart contract vulnerability. It happens when your contract calls an external contract, and that external contract calls back into yours before the first call has finished.
The sequence:

1. A user calls withdraw() on your contract.
2. Your contract checks the user's balance: they have 10 ETH.
3. Your contract sends 10 ETH to the user's address.
4. The user's address is actually a contract, and its receive() function calls withdraw() again.
5. Your contract checks the balance again. It hasn't been updated yet, because step 3 hasn't completed.
6. Your contract sends another 10 ETH.
7. This repeats until the contract is drained.
The fix is the checks-effects-interactions pattern. Update state before making external calls:
```solidity
function withdraw() external {
    uint256 amount = balances[msg.sender];
    require(amount > 0, "No balance");
    balances[msg.sender] = 0; // Effect before interaction
    (bool success, ) = msg.sender.call{value: amount}("");
    require(success, "Transfer failed");
}
```
I knew about this in theory. I still wrote a version of the vesting release function that made the transfer before updating the released amount. The test suite caught it because I wrote a test specifically for reentrancy, but if I hadn't, it would have gone to testnet with that bug.
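A reentrancy test needs a malicious counterparty: a contract whose receive() calls back into the function under test. A minimal attacker sketch, assuming a hypothetical IVault interface for the contract being tested:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Hypothetical interface for the contract under test.
interface IVault {
    function deposit() external payable;
    function withdraw() external;
}

// Test helper: re-enters withdraw() from receive(). Against a
// vulnerable contract it keeps draining; against one that follows
// checks-effects-interactions, the re-entrant call reverts or
// pays nothing because the balance was already zeroed.
contract ReentrancyAttacker {
    IVault public immutable target;

    constructor(IVault _target) {
        target = _target;
    }

    function attack() external payable {
        target.deposit{value: msg.value}();
        target.withdraw();
    }

    receive() external payable {
        if (address(target).balance > 0) {
            target.withdraw();
        }
    }
}
```

In a test, you deploy this against the contract, call attack() with some ETH, and assert that the contract's balance accounting still adds up afterward.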
Integer overflow before Solidity 0.8
Solidity 0.8 added built-in overflow checking. Before that, you needed SafeMath. If you used an older compiler version and forgot SafeMath, arithmetic could wrap around silently. uint256(1) - uint256(2) would give you a very large number instead of reverting.
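On 0.8+ you can reproduce the old wrapping behavior deliberately with an unchecked block, which is a convenient way to see exactly what SafeMath was guarding against:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

contract OverflowDemo {
    // Reverts with an arithmetic panic on Solidity 0.8+.
    function checkedSub() external pure returns (uint256) {
        return uint256(1) - uint256(2);
    }

    // Wraps around, as all arithmetic did before 0.8:
    // the result is 2**256 - 1.
    function uncheckedSub() external pure returns (uint256) {
        unchecked {
            return uint256(1) - uint256(2);
        }
    }
}
```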
Since I was using 0.8, this wasn't my problem. But I was reading older contract code as reference, and some of it used SafeMath. Understanding why it existed helped me understand what the compiler was now doing automatically.
tx.origin vs msg.sender
msg.sender is the address that directly called your contract. tx.origin is the address that initiated the entire transaction chain. If a user calls Contract A, which calls Contract B, then inside Contract B, msg.sender is Contract A and tx.origin is the user.
Using tx.origin for authorization is a vulnerability. An attacker can create a contract that tricks a user into calling it, and that contract then calls your contract. Your contract sees tx.origin as the user and grants access.
Always use msg.sender for authorization.
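The difference is easiest to see side by side. A sketch with a hypothetical owner field, not taken from any particular contract:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

contract AuthExample {
    address public owner = msg.sender;

    // Vulnerable: if the owner is tricked into calling a malicious
    // contract, that contract can call this function and tx.origin
    // will still be the owner's address.
    modifier badAuth() {
        require(tx.origin == owner, "Not owner");
        _;
    }

    // Safe: only a direct call from the owner's own address passes.
    modifier goodAuth() {
        require(msg.sender == owner, "Not owner");
        _;
    }

    function withdrawAll() external goodAuth {
        payable(owner).transfer(address(this).balance);
    }
}
```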
Deploying to testnet
Local tests passed. I deployed to Goerli (the testnet available at the time). The deployment script:
```typescript
import { ethers } from "hardhat";

async function main() {
  const Token = await ethers.getContractFactory("ProjectToken");
  const token = await Token.deploy();
  await token.deployed();
  console.log("Token deployed to:", token.address);
}

main().catch((error) => {
  console.error(error);
  process.exitCode = 1;
});
```
The deployment succeeded, but the vesting contract behaved differently than in tests. The issue was block timestamps. On Hardhat, you can advance time with evm_increaseTime. On testnet, time advances with real blocks. My vesting schedule used block timestamps for release calculations, and the precision assumptions I made locally didn't hold on a real network where block times vary.
The fix was to add a tolerance window to the release time checks rather than requiring exact matches.
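One way to read that fix as code, heavily hedged: the names and the tolerance value here are hypothetical, and the original contract's fields may differ. The idea is to replace a check that assumes a precise timestamp with one that accepts anything inside a window.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Hypothetical sketch of the tolerance-window idea.
contract ToleranceSketch {
    uint256 public constant RELEASE_TOLERANCE = 15 minutes;

    // Before: an exact-match check such as
    //   require(block.timestamp == releaseTime)
    // almost never passes on a real network, where block timestamps
    // arrive at irregular intervals set by block producers.
    // After: a schedule counts as due once the current time is within
    // the tolerance window of the scheduled release.
    function isReleasable(uint256 releaseTime) public view returns (bool) {
        return block.timestamp + RELEASE_TOLERANCE >= releaseTime;
    }
}
```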
What the experience taught me
Smart contract development is backend development where bugs are permanent and public. You can't push a hotfix. You can't roll back a migration. Once a contract is deployed, its code is immutable.
This changes how you think about testing. In web development, I write tests to prevent regressions. In Solidity, I write tests to prove the contract can't be exploited. The difference is the adversarial framing. You aren't testing happy paths. You're testing what happens when someone actively tries to break your contract.
The tooling is good now. Hardhat, Foundry, OpenZeppelin's libraries. The Solidity language itself is reasonable. The hard part is the mental model shift: from building features to building guarantees.