typedef struct {
    bool a : 1;
    bool b : 1;
    bool c : 1;
    bool d : 1;
    bool e : 1;
    bool f : 1;
    bool g : 1;
    bool h : 1;
} __attribute__((__packed__)) not_if_you_have_enough_booleans_t;
You beat me to it!
Or just std::bitset<8> for C++.
Bit fields are neat though; they can store weird stuff like a 3-bit integer packed next to booleans.
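For reference, a minimal std::bitset sketch of the same eight flags (the flag positions here are arbitrary):

#include <bitset>
#include <cstdio>

int main(void) {
    std::bitset<8> flags;   // eight flags packed into bits
    flags.set(0);           // flag 0 -> true
    flags[3] = true;        // flag 3 -> true, via a proxy reference
    flags.flip(3);          // flag 3 -> false again
    printf("%zu set, flag 0 is %d\n", flags.count(), (int)flags.test(0));
}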
This was gonna be my response to OP so I'll offer an alternative approach instead:
typedef enum flags_e : unsigned char {
    F_1 = (1 << 0),
    F_2 = (1 << 1),
    F_3 = (1 << 2),
    F_4 = (1 << 3),
    F_5 = (1 << 4),
    F_6 = (1 << 5),
    F_7 = (1 << 6),
    F_8 = (1 << 7),
} Flags;

int main(void) {
    Flags f = F_1 | F_3 | F_5;
    if (f & F_1 && f & F_3) {
        // do F_1 and F_3 stuff
    }
}
Depending on the language
And compiler. And hardware architecture. And optimization flags.
As usual, it's some developer that knows little enough to think the walls they see around enclose the entire world.
Fucking lol at the downvoters haha that second sentence must have rubbed them the wrong way for being too accurate.
I set all 8 bits to 1 because I want it to be really true.
01111111 = true
11111111 = negative true = false
What if it's an unsigned boolean?
Could also store our bools as floats.
00111111100000000000000000000000 is true and 10111111100000000000000000000000 is negative true.
Has the fun twist that true & false is true and true | false is false.
I was programming in assembly for ARM (some Cortex chip) and, I kid you not, the C program we were integrating with required 255 for true; with just 1 it read it as false.
Then you need to ask yourself: performance or memory efficiency? Is it worth the extra cycles and instructions to put 8 bools in one byte and mask out the relevant one with a bitwise AND?
Sounds like a compiler problem to me. :p
A lot of times using less memory is actually better for performance because the main bottleneck is memory bandwidth or latency.
Back in the day when it mattered, we did it like
#define BV00 (1 << 0)
#define BV01 (1 << 1)
#define BV02 (1 << 2)
#define BV03 (1 << 3)
...etc
#define IS_SET(flag, bit) ((flag) & (bit))
#define SET_BIT(var, bit) ((var) |= (bit))
#define REMOVE_BIT(var, bit) ((var) &= ~(bit))
#define TOGGLE_BIT(var, bit) ((var) ^= (bit))
....then...
#define MY_FIRST_BOOLEAN BV00
SET_BIT(myFlags, MY_FIRST_BOOLEAN)
The 8-bit Intel 8051 family provides a dedicated bit-addressable memory space (addresses 20h-2Fh in internal RAM), giving 128 directly addressable bits. Used them for years. I'd imagine many microcontrollers have bit-width variables.
bit myFlag = 0;
Or even return from a function:
bit isValidInput(unsigned char input) {
    // Returns true (1) if input is valid, false (0) otherwise
    return (input >= '0' && input <= '9');
}
Nothing like that in ARM. Even microcontrollers have enough RAM that nobody cares, I guess.
We could go the other way as well: TI's C2000 microcontroller architecture has no way to access a single byte, let alone a bit. A Boolean is stored in 16-bits on that one.
In the industrial automation world and most of the IT industry, data is aligned to the nearest word. Depending on architecture, that's usually either 16, 32, or 64 bits. And that's the space a single Boolean takes.
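A quick illustration of that cost; the exact padding is implementation-defined, but on typical 32/64-bit targets a lone bool next to an int eats a full word:

#include <cstdio>

struct Status {
    bool ready;   // 1 byte of payload...
    int  count;   // ...plus 3 padding bytes so this stays 4-byte aligned
};

int main(void) {
    printf("%zu\n", sizeof(Status));  // typically 8
}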
That's why I primarily use booleans in return parameters; beyond that I'll try to use bitfields. My game engine's tilemap format uses a 32-bit struct, with 16 bits selecting the tile, 12 bits selecting the palette, and 4 bits used for various bitflags (horizontal and vertical mirroring, X-Y axis invert, and a priority bit).
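That layout might look something like this (field names are my guesses, not the actual engine's; the static_assert holds on the common ABIs):

#include <cstdint>

struct TileEntry {
    uint32_t tile      : 16;  // which tile to draw
    uint32_t palette   : 12;  // palette selection
    uint32_t h_mirror  : 1;   // horizontal mirroring
    uint32_t v_mirror  : 1;   // vertical mirroring
    uint32_t xy_invert : 1;   // X-Y axis invert
    uint32_t priority  : 1;   // priority bit
};

static_assert(sizeof(TileEntry) == 4, "packs into one 32-bit word");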
Bit fields are a necessity in low level networking too.
They're incredibly useful, I wish more people made use of them.
I remember I interned at a startup programming microcontrollers once and created a few bitfields to deal with something. Then the lead engineer went ahead and changed them to masked ints. Because. The most aggravating thing is that int size isn't consistent across platforms, so if they were ever to change platforms to a different word length, they'd be fucked, as their code was full of platform-specific shenanigans like that.
/rant
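For anyone following along, the usual fix is fixed-width types from <cstdint>, which keep the masks the same size regardless of the platform's int (the mask names here are made up):

#include <cinttypes>
#include <cstdint>
#include <cstdio>

// Plain int is only guaranteed to be at least 16 bits, so a mask like
// (1 << 20) is undefined behavior on a 16-bit-int microcontroller.
// Pinning the width avoids that:
static const uint32_t MOTOR_ON    = UINT32_C(1) << 0;
static const uint32_t FAULT_LATCH = UINT32_C(1) << 20;  // safe everywhere

int main(void) {
    uint32_t status = MOTOR_ON | FAULT_LATCH;
    printf("0x%08" PRIX32 "\n", status);  // 0x00100001
}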
Weird how I usually learn more from the humor communities than the serious ones... 😎
It's far more often stored in a word, so 32 or 64 bits depending on the target architecture. At least in most languages.
If wasting a byte or seven matters to you, then you need to be working in a lower-level language.
It's 7 bits....
Pay attention. 🤪
Joke’s on you, I always use 64 bit wide unsigned integers to store a 1 and compare to check for value.
I have a solution with bit fields. Now your eight bools fit in 1 byte:
struct Flags {
    bool flag0 : 1;
    bool flag1 : 1;
    bool flag2 : 1;
    bool flag3 : 1;
    bool flag4 : 1;
    bool flag5 : 1;
    bool flag6 : 1;
    bool flag7 : 1;
};
Or for example:
struct Flags {
    bool flag0 : 1;
    bool flag1 : 1;
    int x_cord : 3;
    int y_cord : 3;
};
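Worth noting: those 3-bit int fields hold -4..3 when signed (plain-int bitfield signedness is implementation-defined in C, signed in C++), and declaring them as int usually bumps the struct up to int alignment. A quick check:

#include <cstdio>

struct Flags {
    bool flag0 : 1;
    bool flag1 : 1;
    int x_cord : 3;   // signed 3-bit field: -4..3 on typical compilers
    int y_cord : 3;
};

int main(void) {
    Flags f{};
    f.x_cord = 3;     // largest value that fits in a signed 3-bit field
    f.flag1 = true;
    // All four fields share the low bits of one int-aligned unit;
    // sizeof is commonly 4 here (it would be 1 with unsigned char fields).
    printf("sizeof(Flags) = %zu, x_cord = %d\n", sizeof(Flags), f.x_cord);
}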
Just like electronic components: they sell gates by the chip, with multiple gates per chip, because it's cheaper.
Are you telling me that no compiler optimizes this? Why?
It would be slower to read the value if you had to also do bitwise operations to get the value.
But you can also define your own bitfield types to store booleans packed together if you really need to. I would much rather that than have the compiler do it automatically for me.
Well there are containers that store booleans in single bits (e.g. std::vector<bool> - which was famously a big mistake).
But in the general case you don't want that because it would be slower.
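The "big mistake" part, sketched: std::vector<bool> packs its elements into bits, so operator[] returns a proxy object rather than a real bool reference, which breaks generic code:

#include <cstdio>
#include <vector>

int main(void) {
    std::vector<bool> v(8, false);  // eight bools packed into bits
    v[2] = true;                    // assignment goes through a proxy
    // bool& r = v[2];              // won't compile: no addressable bool exists
    auto p = v[2];                  // p is std::vector<bool>::reference, not bool
    printf("%d, size %zu\n", (int)v[2], v.size());  // 1, size 8
}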
This reminds me that I actually once made a class to store bools packed in a uint8 array to save bytes.
I'd forgotten that. I think I have to update the list of the top 10 dumbest things I ever did.
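Probably something along these lines (a minimal sketch; the original class is long gone, so all names are invented):

#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Stores one bool per bit inside a uint8_t array.
class PackedBools {
public:
    explicit PackedBools(std::size_t n) : bits_((n + 7) / 8, 0) {}

    void set(std::size_t i, bool v) {
        if (v) bits_[i / 8] |= static_cast<uint8_t>(1u << (i % 8));
        else   bits_[i / 8] &= static_cast<uint8_t>(~(1u << (i % 8)));
    }

    bool get(std::size_t i) const {
        return (bits_[i / 8] >> (i % 8)) & 1u;
    }

private:
    std::vector<uint8_t> bits_;
};

int main(void) {
    PackedBools b(16);   // 16 bools in 2 bytes
    b.set(9, true);
    printf("%d %d\n", (int)b.get(9), (int)b.get(8));  // 1 0
}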