Using date and time to determine the most frequent event over a period of time

I have a DataFrame with a number of crime statistics, including the date and time of each crime as well as its category.

                 Dates                Category
0       5/13/2015 8:55           VEHICLE THEFT   
1       5/13/2015 8:41          OTHER OFFENSES   
2       5/13/2015 8:36          OTHER OFFENSES   
3       5/13/2015 8:30            NON-CRIMINAL   
4       5/13/2015 8:17          OTHER OFFENSES   
5       5/13/2015 8:16          OTHER OFFENSES   
6       5/13/2015 8:10           LARCENY/THEFT   
7       5/13/2015 8:00                BURGLARY   
8       5/13/2015 8:00          MISSING PERSON   
9       5/13/2015 8:00          OTHER OFFENSES   
10      5/13/2015 8:00                 ASSAULT 
---

This is a massive data set with over 400,000 rows.

I need to write a function that lets me enter any given time interval (using a from and a to) and then determines which category of crime occurred with the greatest frequency. So, for the example above, it would simply print "OTHER OFFENSES". This is what I have so far, and it does not work:

import pandas as pd
import csv
import datetime
timeData = open("timeData.csv")
df = pd.read_csv('timeData.csv')

from datetime import timedelta, date
df['Dates'] = pd.to_datetime(df['Dates']) #this converts the values in the Dates column to datetime format

def daterange(start_date, end_date):
    for n in range(int ((end_date - start_date).days)):
        yield start_date + timedelta(n)

start_date = date(2015, 5, 1)
end_date = date(2015, 6, 2)
for daterange(start_date, end_date):
    df['Category'].value_counts() 
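
As written, the for statement has no loop variable, so it is a syntax error, and value_counts() is computed over the whole Category column (and its result discarded) rather than over the rows in each date range. A minimal sketch of what the loop was presumably meant to do, assuming the Dates/Category columns above:

for day in daterange(start_date, end_date):
    day_rows = df[df['Dates'].dt.date == day]        # rows that fall on this calendar day
    print(day, day_rows['Category'].value_counts())  # per-day category counts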

How can I get this to work?

---

Assuming your columns are named Date and Crime:

# make pd.Series with date as the index and crimes as the values
d1 = df.set_index('Date').Crime.sort_index()

# function that uses date slicing and value_counts
def most_freq(start, end):
    return d1[start:end].value_counts().index[0]

# demo
most_freq('2015-05', '2015-05')

'OTHER OFFENSES'
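
For this slicing to work, the Date column has to hold real datetimes so that d1 has a sorted DatetimeIndex; value_counts().idxmax() is an equivalent way to pull out the top label. A minimal self-contained sketch, assuming the same Date/Crime column names:

import pandas as pd

def most_freq_between(df, start, end):
    s = (df.assign(Date=pd.to_datetime(df['Date']))  # make sure the dates are real datetimes
           .set_index('Date')['Crime']
           .sort_index())                            # sorted DatetimeIndex allows partial-string slicing
    return s[start:end].value_counts().idxmax()      # most frequent crime in the window

# most_freq_between(df, '2015-05', '2015-05')  # -> 'OTHER OFFENSES'
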
---

Alternatively, you can group by day with pd.Grouper and count the crimes in each category (the parsed DF is shown in In [201] below):

In [204]: df.groupby([pd.Grouper(key='date', freq='D')])['crime'].value_counts()
Out[204]:
date        crime
2015-05-13  OTHER OFFENSES    5
            ASSAULT           1
            BURGLARY          1
            LARCENY/THEFT     1
            MISSING PERSON    1
            NON-CRIMINAL      1
            VEHICLE THEFT     1
Name: crime, dtype: int64
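
If the goal is the single most frequent crime per day, one way to build on those counts (a sketch, assuming the same date/crime column names) is to unstack them and take the column-wise idxmax:

counts = df.groupby(pd.Grouper(key='date', freq='D'))['crime'].value_counts()
counts.unstack().idxmax(axis=1)   # most frequent crime for each day, e.g. OTHER OFFENSES for 2015-05-13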

In [201]: df
Out[201]:
                  date           crime
0  2015-05-13 08:55:00   VEHICLE THEFT
1  2015-05-13 08:41:00  OTHER OFFENSES
2  2015-05-13 08:36:00  OTHER OFFENSES
3  2015-05-13 08:30:00    NON-CRIMINAL
4  2015-05-13 08:17:00  OTHER OFFENSES
5  2015-05-13 08:16:00  OTHER OFFENSES
6  2015-05-13 08:10:00   LARCENY/THEFT
7  2015-05-13 08:00:00        BURGLARY
8  2015-05-13 08:00:00  MISSING PERSON
9  2015-05-13 08:00:00  OTHER OFFENSES
10 2015-05-13 08:00:00         ASSAULT

In [202]: df.groupby([pd.Grouper(key='date', freq='D'), 'crime']).size()
Out[202]:
date        crime
2015-05-13  ASSAULT           1
            BURGLARY          1
            LARCENY/THEFT     1
            MISSING PERSON    1
            NON-CRIMINAL      1
            OTHER OFFENSES    5
            VEHICLE THEFT     1
dtype: int64
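
To answer the original from/to question with this DF, a minimal sketch (assuming the date column is already parsed as datetime, as above) is to mask the requested window and take the most common value:

def most_frequent_crime(df, start, end):
    mask = (df['date'] >= start) & (df['date'] <= end)    # keep rows inside the [start, end] window
    return df.loc[mask, 'crime'].value_counts().idxmax()  # most frequent category in that window

most_frequent_crime(df, '2015-05-01', '2015-06-02')       # -> 'OTHER OFFENSES'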

Source: https://habr.com/ru/post/1667464/

