awk 'FNR==NR{a[$1]++;next}(a[$1] > 1)' ./infile ./infile
Yes, you give awk the same file as input twice. Since you do not know in advance whether the current record is unique, you build an array keyed on $1 during the first pass, and then on the second pass you print only those records whose $1 was seen more than once.
I'm sure there are ways to do this in a single pass through the file, but I doubt they would be as clean.
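For completeness, here is one way a single pass could look. This is a sketch of my own, not part of the answer above: it buffers every record and its key in memory, then prints the duplicated records in the END block, trading memory for the second read. The sample data is the same as in the proof of concept below.

```shell
# Single-pass sketch (assumed alternative, not the answer's method):
# buffer each record and its key, then print the records whose key
# was seen more than once. Holds the whole file in memory.
printf '1 abcd\n1 efgh\n2 ijkl\n3 mnop\n4 qrst\n4 uvwx\n' |
awk '{count[$1]++; line[NR]=$0; key[NR]=$1}
     END{for (i = 1; i <= NR; i++) if (count[key[i]] > 1) print line[i]}'
```

Note that this preserves the original record order, just like the two-pass version.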
Explanation
- FNR==NR : true only while awk reads the first file. FNR is the record number within the current file, while NR is the total record number across all inputs; the two are equal only during the first pass.
- a[$1]++ : build an associative array a, keyed on the first field ( $1 ), whose value is incremented each time that key is seen.
- next : skip the rest of the script and start on the next record; this ensures the last pattern is never evaluated during the first pass.
- (a[$1] > 1) : evaluated only on the second pass through ./infile; it prints only those records whose first field ( $1 ) was seen more than once. This is essentially shorthand for if(a[$1] > 1){print $0}
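Written out longhand, with the bare pattern expanded into an explicit if/print action, the one-liner looks like this (a sketch; the sample data matches the proof of concept below):

```shell
# Create the sample input used in the proof of concept.
printf '1 abcd\n1 efgh\n2 ijkl\n3 mnop\n4 qrst\n4 uvwx\n' > ./infile

# Longhand equivalent of the one-liner: the bare pattern (a[$1] > 1)
# becomes an explicit if statement in an action block.
awk 'FNR==NR {a[$1]++; next}
     {if (a[$1] > 1) print $0}' ./infile ./infile
```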
Proof of concept
$ cat ./infile
1 abcd
1 efgh
2 ijkl
3 mnop
4 qrst
4 uvwx
$ awk 'FNR==NR{a[$1]++;next}(a[$1] > 1)' ./infile ./infile
1 abcd
1 efgh
4 qrst
4 uvwx