Make grep stop after the first non-matching line

I am trying to use grep to go through some logs and select only the most recent entries. The logs hold many years of heavy traffic, so it's silly to do

 tac error.log | grep 2012
 tac error.log | grep "Jan.2012"

and so on,

and wait 10 minutes while it churns through several million lines that I already know will not match. I know the -m option exists to stop after the first match, but I don't know of a way to make it stop at the first non-matching line. I could do something like grep -B MAX_INT -m 1 2011, but that is hardly an optimal solution.
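
For concreteness, that workaround might look something like the following. This is only a sketch: it assumes GNU grep and GNU head, 999999999 merely stands in for MAX_INT, and the final head -n -1 drops the 2011 line that triggered the stop.

 # print everything newer than the first 2011 entry (in reverse order), then stop
 tac error.log | grep -B 999999999 -m 1 2011 | head -n -1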

Can grep handle this, or would awk make more sense?

+4
4 answers

How about using awk as follows:

 tac error.log | awk '{if(/2012/)print;else exit}' 

This should exit as soon as a line is found that does not match 2012.
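
The same logic can also be written in awk's pattern-action form, if that reads more naturally; this is just a sketch with identical behaviour to the one-liner above:

 # print matching lines; exit on the first line without 2012
 tac error.log | awk '/2012/ { print; next } { exit }'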

+4

Here is a solution in Python:

 # foo.py
 import sys, re

 for line in sys.stdin:
     if re.match(r'2012', line):  # line starts with 2012: keep it
         print line,              # trailing comma: the line already ends in a newline
         continue
     break                        # first non-matching line: stop reading

 you@host> tac foo.txt | python foo.py

+2

I do not think grep supports this.

But here is my "why do we have awk again" answer:

 tail -n `tac biglogfile | grep -vnm1 2012 | sed 's/:.*//' | xargs expr -1 +` biglogfile 

Note that this will not be exact if the log is being written to while the command runs.
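
A commented breakdown of that pipeline, for anyone reading it cold (the same command as above, just annotated):

 # tac biglogfile             - read the log starting from the last line
 # grep -vnm1 2012            - find the first line NOT containing 2012 (-v),
 #                              prefix it with its line number (-n), stop there (-m1)
 # sed 's/:.*//'              - keep only that line number
 # xargs expr -1 +            - subtract one, giving the count of trailing 2012 lines
 # tail -n <count> biglogfile - print exactly those lines, in original order
 tail -n `tac biglogfile | grep -vnm1 2012 | sed 's/:.*//' | xargs expr -1 +` biglogfile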

+1

The sed one-liners page to the rescue:

 # print section of file between two regular expressions (inclusive)
 sed -n '/Iowa/,/Montana/p'    # case sensitive

In other words, you should be able to do the following:

 sed -n '/Jan 01 2012/,/Feb 01 2012/p' error.log | grep whatevs 
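
One caveat: the range form still reads the rest of the file after the closing pattern, which matters on a multi-million-line log. A sketch that quits as soon as the closing pattern is seen (assuming GNU sed, and that both dates actually occur in the log):

 # print the January 2012 section, then stop reading the file entirely
 sed -n '/Jan 01 2012/,/Feb 01 2012/{p; /Feb 01 2012/q}' error.log | grep whatevs
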
+1

Source: https://habr.com/ru/post/1392026/

