Gibbon1's answer is correct, but I think sample code is useful for this kind of question.
```c
#include <stdio.h>

int main(void) {
    union {
        unsigned int x;
        struct {
            unsigned int a : 1;
            unsigned int b : 10;
            unsigned int c : 20;
            unsigned int d : 1;
        } bits;
    } u;

    u.x = 0x00000000; u.bits.a = 1;
    printf("After changing a: 0x%08x\n", u.x);
    u.x = 0x00000000; u.bits.b = 1;
    printf("After changing b: 0x%08x\n", u.x);
    u.x = 0x00000000; u.bits.c = 1;
    printf("After changing c: 0x%08x\n", u.x);
    u.x = 0x00000000; u.bits.d = 1;
    printf("After changing d: 0x%08x\n", u.x);
    return 0;
}
```
On a little-endian x86-64 machine using MinGW GCC, the output is:
```
After changing a: 0x00000001
After changing b: 0x00000002
After changing c: 0x00000800
After changing d: 0x80000000
```
Since this is a union, the unsigned int (x) and the bitfield struct (a/b/c/d) occupy the same storage. The order in which the compiler allocates the bitfields determines whether u.bits.a lands in the least significant bit of x or in the most significant bit. Typically, on a little-endian machine:
```c
u.bits.a == (u.x & 0x00000001)
u.bits.b == (u.x & 0x000007fe) >> 1
u.bits.c == (u.x & 0x7ffff800) >> 11
u.bits.d == (u.x & 0x80000000) >> 31
```
and on a big-endian machine:
```c
u.bits.a == (u.x & 0x80000000) >> 31
u.bits.b == (u.x & 0x7fe00000) >> 21
u.bits.c == (u.x & 0x001ffffe) >> 1
u.bits.d == (u.x & 0x00000001)
```
As for the standard: the C language does not require any particular endianness, and the allocation order of bitfields within a unit is implementation-defined. Big-endian and little-endian machines are each free to lay out data in the order most natural for their addressing scheme.