700M records in 5 months works out to 140M per month, or roughly 8.4B after 5 years (assuming the inflow does not grow). Welcome to the world of big data. It is exciting here, and we welcome new residents every day :)
I will describe three steps you can take. The first two are stopgaps - at some point you will have too much data and will have to move on anyway. But each step costs more work and/or more money than the last, so it makes sense to take them in order.
Step 1: Better hardware - scaling up
Faster drives, RAID, and more RAM will take you part of the way. Scaling up, as this is called, eventually breaks down, but if your data grows linearly rather than exponentially, it will keep you going for a while.
You can also use SQL Server replication to maintain a copy of your database on another server. Replication works by reading the transaction log and shipping the changes to your replica. You can then run the scripts that build summary tables (daily, monthly, annual) on the secondary server, so they do not kill the performance of your primary. A sketch of such a rollup job follows.
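To make the summary-table idea concrete, here is a minimal Python sketch of a nightly rollup job run against the replica. Everything specific in it is an assumption for illustration: the server and database names, the events table with its (event_time, event_type) columns, the daily_event_summary table, and the choice of pyodbc as the client library.

```python
# Nightly rollup against the replica - a sketch, not a hardened job.
# Table and server names are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};"
    "SERVER=replica-server;DATABASE=analytics;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Aggregate yesterday's raw rows into a daily summary table, so
# reporting queries never have to scan the full event log.
cur.execute("""
    INSERT INTO daily_event_summary (event_date, event_type, event_count)
    SELECT CAST(event_time AS DATE), event_type, COUNT(*)
    FROM events
    WHERE event_time >= CAST(DATEADD(day, -1, GETDATE()) AS DATE)
      AND event_time <  CAST(GETDATE() AS DATE)
    GROUP BY CAST(event_time AS DATE), event_type
""")
conn.commit()
conn.close()
```

Schedule something like this once a day (SQL Agent, cron, whatever you already use), and your dashboards read a few thousand summary rows instead of hundreds of millions of raw ones.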
Step 2: OLAP
Since you already have SSIS at your disposal, start looking into multidimensional data; SSAS is the SQL Server component for building cubes. With a good design, OLAP cubes will take you a long way. They may even be enough to handle billions of records, and you can stay there for several years (I have done this, and it held up for about two years).
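If the cube idea is new to you, here is a toy Python sketch of the core trick a cube performs: pre-computing measure totals for every combination of dimension values, so that queries become lookups instead of scans. A real SSAS cube does vastly more (storage modes, hierarchies, MDX), and the dimensions and data here are invented.

```python
# Toy "cube": pre-aggregate a measure at every dimension granularity.
from itertools import combinations
from collections import defaultdict

records = [
    {"day": "2011-01-01", "region": "EU", "product": "A", "sales": 10},
    {"day": "2011-01-01", "region": "US", "product": "A", "sales": 7},
    {"day": "2011-01-02", "region": "EU", "product": "B", "sales": 3},
]
dimensions = ("day", "region", "product")

cube = defaultdict(int)
for rec in records:
    # Aggregate at every level, from the grand total (empty key)
    # down to the full (day, region, product) detail.
    for r in range(len(dimensions) + 1):
        for dims in combinations(dimensions, r):
            key = tuple((d, rec[d]) for d in dims)
            cube[key] += rec["sales"]

print(cube[()])                                         # grand total: 20
print(cube[(("region", "EU"),)])                        # EU total: 13
print(cube[(("day", "2011-01-01"), ("region", "US"))])  # 7
```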
Step 3: Scaling out
Handle more data by distributing it across multiple machines and processing it in parallel. Done right, this scales almost linearly: when you have more data, you add more machines, and processing times stay constant.
If you have $$$ to spend, look at Vertica or Greenplum (there may be other options; these are the ones I am familiar with).
If you prefer open source / build-your-own, use Hadoop: log event data to files, process it with MapReduce, and store the results in HBase or Hypertable. There are many possible configurations and solutions - the whole area is still in its infancy.
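To give a taste of the MapReduce route, here is a minimal Hadoop Streaming job in Python that counts events per day. The input format (tab-separated "timestamp, event_type" lines) and the HDFS paths are assumptions; a production job would also handle bad records and add a combiner.

```python
#!/usr/bin/env python
# mapper.py - Hadoop Streaming mapper.
# Reads "timestamp<TAB>event_type" lines, emits "date<TAB>1".
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if len(fields) < 2:
        continue  # skip malformed lines
    timestamp = fields[0]
    date = timestamp[:10]  # e.g. "2011-05-01" from an ISO timestamp
    print("%s\t1" % date)
```

```python
#!/usr/bin/env python
# reducer.py - Hadoop Streaming reducer.
# Input arrives sorted by key, so we sum until the date changes.
import sys

current_date, count = None, 0
for line in sys.stdin:
    date, value = line.rstrip("\n").split("\t")
    if date != current_date:
        if current_date is not None:
            print("%s\t%d" % (current_date, count))
        current_date, count = date, 0
    count += int(value)
if current_date is not None:
    print("%s\t%d" % (current_date, count))
```

You would run these with the Hadoop Streaming jar, roughly: hadoop jar hadoop-streaming.jar -input /events -output /daily_counts -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py (the jar's location varies by Hadoop version). The point is that the same two small scripts keep working whether the input is one file or a thousand machines' worth of them.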