If I take the simplest example of running a small standard on-demand Linux instance continuously for a year (8760 hours) in the US, and ignore data transfer, storage and leap years, the prices work out as:
- On Demand Instance: 8760 * 0.10 = $876
- Reserved Instance: $325 + (8760 * 0.03) = $587.80

The break-even point, where the Reserved Instance starts to work out cheaper, comes from solving:

- x * 0.10 = 325 + (x * 0.03)
- x = 4643 hours (rounded up)
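The same arithmetic as a quick sketch in Python, using the illustrative rates above rather than current AWS pricing:

```python
import math

# Illustrative rates from the example above, not current AWS pricing.
ON_DEMAND_RATE = 0.10      # $ per hour
RESERVED_RATE = 0.03       # $ per hour
RESERVED_UPFRONT = 325.00  # one-off reservation fee in $
HOURS_PER_YEAR = 8760

on_demand_cost = HOURS_PER_YEAR * ON_DEMAND_RATE                    # $876.00
reserved_cost = RESERVED_UPFRONT + HOURS_PER_YEAR * RESERVED_RATE   # $587.80

# Break-even: solve x * 0.10 = 325 + (x * 0.03) for x.
break_even_hours = math.ceil(RESERVED_UPFRONT / (ON_DEMAND_RATE - RESERVED_RATE))  # 4643

print(f"On Demand for a year: ${on_demand_cost:.2f}")
print(f"Reserved for a year:  ${reserved_cost:.2f}")
print(f"Break-even after {break_even_hours} hours of usage")
```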
Making a real-world cost-effectiveness calculation for pay-by-the-hour computing resources requires a comprehensive understanding of your physical hardware costs (electricity, rack space, IT staffing/hardware support, equipment and replacement costs, etc.), your average and peak data storage and usage requirements factoring in growth, and the implementation cost of any new software solution. The calculation is further complicated when data service providers start offering license fees by the hour instead of fixed one-time or annual fees: another calculation, around the number of usage hours required to break even, becomes necessary.
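As a sketch of that second break-even, assuming a purely hypothetical $1,200 fixed annual license against a $0.50/hour metered license (neither figure comes from any real vendor):

```python
import math

def license_break_even_hours(annual_fee, hourly_fee):
    """Usage hours per year above which a fixed annual license beats hourly licensing."""
    return math.ceil(annual_fee / hourly_fee)

# Hypothetical figures only: $1,200/year fixed vs $0.50/hour metered.
print(license_break_even_hours(1200, 0.50))  # 2400 hours; below that, pay by the hour
```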
Also to be factored in is whether security, performance, availability and reliability concerns outweigh the many benefits of the cloud, including rapid scalability and a SaaS-friendly model. Not everyone is convinced that commercial cloud offerings currently meet all the requirements (Cloud computing not fully enterprise-ready, IT execs say), though enough people are convinced to make for some nice-looking graphs, and AWS customer numbers are now approaching half a million.