The difference between a machine instruction and a micro-operation

What is the difference between a machine instruction and a micro-operation? I found the following definition here:

A small basic instruction used sequentially to create a high-level machine instruction

Here is what I found on Wikipedia:

In computer CPUs, micro-operations (also known as micro-ops or μops) are detailed low-level instructions used in some designs to implement complex machine instructions (sometimes termed macro-instructions in this context).

Do I understand correctly that a micro-op is a processor instruction that executes in a single clock cycle, something like ADD, SUB, MUL, ST, LD? Am I missing something?

Any help is appreciated.

1 answer

A while back, the prevailing wisdom was that RISC is better than CISC: if you want a very fast processor, you want all of your instructions to be very simple. That lets each instruction complete in a very short time, which in turn allows a higher clock frequency. Andrew Tanenbaum even predicted that "in 5 years no one will run x86." That was in the 90s.

So what happened? Isn't x86 (and therefore AMD64, also known as x86_64) the best-known CISC instruction set around? Well, never underestimate the ingenuity of the Intel (and AMD) engineers. They understood that if they wanted a faster processor (this was back in the single-core days, when clock frequencies were climbing rapidly), they could not execute their complex instructions in a single clock cycle. The solution was "micro-operations."

Each micro-operation executes in a single cycle and looks a lot like a RISC instruction. When the processor encounters a complex CISC instruction, its decoder breaks it down into several micro-operations, each of which takes one cycle. That way they could keep their old, clunky instruction set architecture (for backward compatibility) and still reach very high clock speeds.
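
To make the decode step concrete, here is a minimal toy sketch in C. Everything in it (the "add a register to a memory location" macro instruction, the three-micro-op split, all the names) is an illustrative assumption, not how any real Intel or AMD decoder works: the decoder splits the macro instruction into load, add, and store micro-ops, and the execute loop retires exactly one micro-op per simulated clock cycle.

    /*
     * Toy sketch only: a hypothetical CISC-style instruction
     * "ADD [addr], reg" is decoded into three RISC-like micro-ops,
     * each retired in one simulated clock cycle.
     */
    #include <stdio.h>
    #include <stdint.h>

    typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;

    typedef struct { uop_kind kind; int addr; int reg; } uop;

    static uint32_t memory[16];  /* toy memory                        */
    static uint32_t regs[4];     /* toy register file                 */
    static uint32_t tmp;         /* internal latch the micro-ops share */

    /* Decode the macro instruction "ADD [addr], reg" into micro-ops. */
    static int decode_add_mem_reg(int addr, int reg, uop *out)
    {
        out[0] = (uop){ UOP_LOAD,  addr, reg };  /* tmp <- mem[addr]   */
        out[1] = (uop){ UOP_ADD,   addr, reg };  /* tmp <- tmp + reg   */
        out[2] = (uop){ UOP_STORE, addr, reg };  /* mem[addr] <- tmp   */
        return 3;                                /* number of micro-ops */
    }

    /* Execute exactly one micro-op per simulated "clock cycle". */
    static void execute(const uop *uops, int n)
    {
        for (int cycle = 0; cycle < n; cycle++) {
            const uop *u = &uops[cycle];
            switch (u->kind) {
            case UOP_LOAD:  tmp = memory[u->addr];     break;
            case UOP_ADD:   tmp = tmp + regs[u->reg];  break;
            case UOP_STORE: memory[u->addr] = tmp;     break;
            }
            printf("cycle %d: micro-op %d retired\n", cycle, (int)u->kind);
        }
    }

    int main(void)
    {
        memory[3] = 40;
        regs[1]   = 2;

        uop uops[3];
        int n = decode_add_mem_reg(3, 1, uops);  /* one macro instruction */
        execute(uops, n);                        /* three cycles          */

        printf("mem[3] = %u\n", (unsigned)memory[3]);  /* prints 42 */
        return 0;
    }

On a real out-of-order core the micro-ops would also be renamed, scheduled, and possibly executed in parallel; the one-per-cycle loop here is only meant to illustrate the decoding idea described above.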


Source: https://habr.com/ru/post/1233288/

