Try the following:
select min(t.dateColumn) date1,
       max(t.dateColumn) date2,
       count(*)
from (
    select t.*,
           sum(val) over (order by t.dateColumn) grp
    from (
        select t.*,
               case when datediff(ms,
                                  lag(t.dateColumn, 1, t.dateColumn) over (order by t.dateColumn),
                                  t.dateColumn) > 60000
                    then 1
                    else 0
               end val
        from your_table t
    ) t
) t
group by grp;
It produces one row per batch, with the batch's first date (date1), last date (date2), and row count.

The query uses the analytic function lag() to flag the start of a new batch whenever the current row's dateColumn differs from the previous row's by more than a minute, then applies the analytic sum() over that flag to assign a running group number to each batch, and finally groups by that number to compute the desired aggregates.
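As a minimal, self-contained sketch of how the val and grp columns behave (the @t table variable and its three sample rows are hypothetical, not from the original question):

DECLARE @t TABLE (dateColumn DATETIME);

-- two rows 30 seconds apart, then a gap of more than a minute
INSERT INTO @t (dateColumn) VALUES
    ('2017-01-01 10:00:00'),
    ('2017-01-01 10:00:30'),
    ('2017-01-01 10:05:00');

select min(t.dateColumn) date1, max(t.dateColumn) date2, count(*) cnt
from (
    select t.*,
           sum(val) over (order by t.dateColumn) grp
    from (
        select t.*,
               case when datediff(ms,
                                  lag(t.dateColumn, 1, t.dateColumn) over (order by t.dateColumn),
                                  t.dateColumn) > 60000
                    then 1 else 0 end val
        from @t t
    ) t
) t
group by grp;

Here val is 1 only on the 10:05:00 row, so the running sum puts the first two rows in grp = 0 and the last row in grp = 1, giving one result row per batch.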
There may be some misclassification between groups due to rounding problems with DATETIME. From MSDN:
datetime values are rounded to increments of .000, .003, or .007 seconds.
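A quick way to observe this rounding (an illustrative check, not part of the original answer):

-- .001 is not a representable DATETIME increment and rounds down to .000;
-- .999 rounds up to the next whole second
SELECT CAST('2017-01-01 00:00:00.001' AS DATETIME) AS ms001,
       CAST('2017-01-01 00:00:00.999' AS DATETIME) AS ms999;

If the column can be stored as DATETIME2 instead, its higher precision avoids this rounding.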

Here is the same query rewritten using CTEs:
WITH cte1(DateColumn, ValueColumn) AS (
    -- Insert your query that returns a datetime column and any other column
    SELECT SomeDate, SomeValue
    FROM SomeTable
    WHERE SomeColumn IS NOT NULL
), cte2 AS (
    -- This query adds a column called "val" that contains
    -- 1 when current row date - previous row date > 1 minute
    -- 0 otherwise
    SELECT cte1.*,
           CASE WHEN DATEDIFF(MS,
                              LAG(DateColumn, 1, DateColumn) OVER (ORDER BY DateColumn),
                              DateColumn) > 60000
                THEN 1 ELSE 0 END AS val
    FROM cte1
), cte3 AS (
    -- This query adds a column called "grp" that numbers
    -- the groups using running sum over the "val" column
    SELECT cte2.*,
           SUM(val) OVER (ORDER BY DateColumn) AS grp
    FROM cte2
)
SELECT MIN(DateColumn) Date1,
       MAX(DateColumn) Date2,
       COUNT(ValueColumn) [Count]
FROM cte3
GROUP BY grp