With the pervasiveness of 64-bit architectures, it’s more important than ever that your Linux software be 64-bit ready. Learn how to avoid portability pitfalls when making declarations and assignments, bit shifting, typing, formatting strings, and more.
One day you'll have to "port" (or rewrite) your apps for a 128-bit CPU, then for a 256-bit platform, and so on and on.
Why not just start specifying some int128 or long256 already?
Um, #include <stdint.h>, use (unsigned) (long) (long) int. You don't have to roll your own int8, int16, int32, int64, intOMG; they're typedef'd for you.
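For what it's worth, a minimal sketch of what the C99 header actually gives you (the names below are the standard ones; the exact-width types are technically optional, the least/fast ones are not):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t       a = 2000000000; /* exactly 32 bits, where the platform provides it */
        uint64_t      b = 1;          /* exactly 64 bits */
        int_least16_t c = 20000;      /* at least 16 bits; always available */
        int_fast32_t  d = 42;         /* the "fastest" type with at least 32 bits */

        printf("a:%zu b:%zu c:%zu d:%zu bytes\n",
               sizeof a, sizeof b, sizeof c, sizeof d);
        return 0;
    }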
Because more bits make code bigger and can even make it slower. The reason we now have 64-bit architectures is that 32-bit machines can only address 4 GB of memory.
The biggest problem is assumptions, and the article makes that pretty clear:
1.) People assume sizeof(int) == sizeof(void *). That's true on x86; it isn't true on x86-64. (There's a quick check after this list.)
2.) People assume a certain endianness. This is a nasty assumption.
3.) People assume that because it just worked, the code works. That’s not necessarily true at all…
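The first two assumptions are easy to check for yourself. A minimal sketch, assuming nothing beyond a C99 compiler, that prints the relevant sizes and probes the byte order at run time:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Assumption 1: on 32-bit x86 (ILP32), int and void * are both 4 bytes;
           on x86-64 (LP64) the pointer grows to 8 bytes while int stays at 4. */
        printf("sizeof(int)    = %zu\n", sizeof(int));
        printf("sizeof(long)   = %zu\n", sizeof(long));
        printf("sizeof(void *) = %zu\n", sizeof(void *));

        /* Assumption 2: endianness - inspect the lowest-addressed byte of a known value. */
        uint32_t probe = 0x01020304;
        unsigned char first = *(unsigned char *)&probe;
        printf("%s-endian\n", first == 0x04 ? "little" : "big");
        return 0;
    }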
I don't foresee a move to 128-bit processors anytime soon… there's not a lot of reason. Special routines already exist for working with larger numbers: even a 32-bit machine can handle numbers that wouldn't fit in a 128-bit register, at the cost of a constant factor per operation (and yes, a 128-bit machine would pay a constant factor too, just a smaller one).
And 64-bit machines allow for a lot of memory: 2^32 times as much as 4 GB. That's 4 billion times 4 billion, give or take. Or 16 quintillion, I think?
Sure, people will have that much RAM eventually, but will we even still use silicon transistors then?
But maybe I'm being shortsighted. Maybe there will be another reason to move off 64-bit to something higher.
I think the title is somewhat misleading. As long as you have the source, you're in principle 64-bit ready. Isn't a 32-bit app by definition able to run on x86_64?
On my Ubuntu box I just enter "apt-get build-dep kradio" to pull in all the development packages needed to build a 64-bit *.deb package from the "32-bit" source code. Then a "fakeroot apt-get -b source kradio" is enough to build a fresh 64-bit kradio package.
Isn't a 32-bit app by definition able to run on x86_64?
Able to run, yes. Run correctly? That depends. If the programmer made certain assumptions about how large certain data types are, those assumptions may not hold on a 64-bit system.
Honestly, something with a good clean code base shouldn’t have any trouble. Programs written in higher level languages shouldn’t either, but there are quite a few C/C++ programs out there that rely on some ugly code which might do the wrong thing on 64-bit systems.
There are more 64-bit architectures than x86_64, and those may not support backwards compatibility.
There’s more than x86 out there, remember.
Yip.
You shouldn’t use stdint.h, because it isn’t portable. Use inttypes.h instead.
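For what it's worth, <inttypes.h> includes <stdint.h> and adds the printf/scanf format macros, which is the part that matters for the string-formatting pitfall the article covers. A small sketch:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int64_t  big  = INT64_C(1) << 38;  /* INT64_C() gives the literal the right width */
        uint64_t huge = UINT64_MAX;

        /* Hard-coding "%ld" or "%lld" ties you to one data model; the PRI macros
           expand to whatever conversion specifier the current platform needs. */
        printf("big  = %" PRId64 "\n", big);
        printf("huge = %" PRIu64 "\n", huge);
        return 0;
    }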
.. it is not needed?
*Snaps*
Why the hell are there so many people bashing C and C++ around here?!
Can't these <noun here> just keep coding in their <insert high-level language here> and leave C/C++ programmers and apprentices alone?
Why must we specify a size for an int? Can't we just treat an integer as an integer, regardless of size? This is the problem you get when using low-level SYSTEM languages for applications, no?
Unfortunately, your code can only perform well when you understand how it executes.
Any impedance mismatch between the platform and your code/language is punished by performance degradation.
This does not imply you have to use low-level system languages.
See OCaml as an example of a higher-level (than C), efficient language.
Often, absolute best performance is not the overriding goal. Correctness and ease of development are at least as important.
Just fall back to C for the parts of the system that need low-level system access.
You're supposed to. Ever written 20,000 lines of C before? I bet you'd mess it up occasionally too.
The underlying size of a type is the job of the runtime; use a HIGHER LEVEL language for HIGHER LEVEL applications. Are you writing a compiler? An OS? NO! It's a bad choice of tools that makes you have these migration problems later on.
Ironically, the day all applications use high-level languages, you will probably be the first to start complaining about the massive memory usage and inefficiency of all of the programs, and you'll be arguing that people should have been using slightly lower-level languages.
I think every language has its place, but the solution is not to abandon lower-level languages except where it's impossible to use higher-level ones. Higher-level languages are great in some places, such as plugins for programs, since those are not likely to be packaged and prepared for every distribution and architecture. They're also great where rapid development and deployment is a top priority. But for long-term solutions that aren't rapidly changing, lower-level languages are preferable due to the lower performance footprint, IMO.
There is also a stigma around mixing languages in a solution. A lot of people think, "Oh, I have to code this in C, so it ALL must be in C!" We develop blindly and by fear: look at any project coming up to RTM, and we daren't change things because we do not know what the result will be. The development-by-FEAR anti-pattern.
The problem with your HIGHER LEVEL language idea is that it will perform like crap.
Higher-level languages like Perl or Java do not use variable-sized integers by default (although it's an option), because that adds a huge amount of complexity to every addition.
You can go ahead and do what you want but no one will run your program because it’ll be so slow.
Not true. As a previous poster said, you're not obliged to develop _all_ of your app in a higher-level language; you can mix solutions and have performance _AND_ productivity.
Python and Ruby are damn slow for CPU-bound tasks. That doesn't mean your app will be slow at such operations, but it does mean you have to profile and rewrite the bottlenecks in C or C++.
I’m sorry I wasn’t clear. I meant that forcing a high level language to use variable-sized integers for all operations will perform like crap.
Well, with applications that require a lot of user intervention, speed is not an issue. Computer performance doubles so quickly; the shelf life of PCs in the store is two months. Get a faster machine if you really want that killer app. If my machine runs slow, I get a better one. It's the same old excuse: MUST CODE C, it will be uber fast, but what are you really saving? Not a lot for user-based apps.
Computer performance doubles so quickly; the shelf life of PCs in the store is two months.
Maybe, but you can still get by with a computer you bought 6 years ago. Unless, of course, you’re playing games … or using a lot of Java apps
Right tools for the right job. Imagine coding a website in C, or an OS in Java… *shudders*
> Right tools for the right job. Imagine
> coding a website in C, or an OS in Java… *shudders*
http://www.jnode.org
Just because you can't imagine something doesn't mean it's impossible or even a bad idea.
I'm not sure it's a good idea to use Moore's Law to excuse heavier applications. It tends to get offset by Parkinson's Law and Gates's Law. I, for one, find that a shame.
Gates’s Law –
“The speed of software halves every 18 months.”
Parkinson’s Law –
“Data expands to fill the space available for storage”
Well, the main reason is that Apple only really seems to support Objective-C; for anything else it's down to the shell and hacks.
Just because we DESCRIBE our application at a higher level and without regard to the sizes of types doesn't mean it has to be INEFFICIENT. A smart compiler and runtime will be able to manage these efficiently. We need smarter tools. We have been trapped at this level of development for decades.
The real problem is with reuse.
We need to parse an XML configuration file, so we reuse an XML parser with support for namespaces, validation, and all.
You design an OS for a mobile phone around Linux by adding layers instead of removing unused parts.
Sure, it saves up development time and cost.
The result, however, is bloat and slower code.
This calls for frameworks where you pick and choose the parts you want: frameworks that are flexible because you can compile them with only the needed functionality, or leave out the stuff you don't need at the link phase.
You have the perfect system when there is nothing to remove, not when there is nothing more to add.
You know, this might have been an interesting article (as others on IBM DeveloperWorks have been) if only it were readable. As it is, I found line lengths exceeding 160 characters; human-factors research (which they have at IBM!) indicates that readability is best at line lengths of 55-65 characters, and in any case should not exceed 75 characters. Readability degrades rapidly at longer line lengths.
As far as tolerating 160+ character lines: Life is too short!
“You know, this might have been an interesting article (as others on IBM DeveloperWorks have been) if only it were readable. As it is, I found line lengths exceeding 160 characters; human-factors research (which they have at IBM!) indicates that readability is best at line lengths of 55-65 characters, and in any case should not exceed 75 characters. Readability degrades rapidly at longer line lengths.”
“As far as tolerating 160+ character lines: Life is too short!”
I'm sure you did the right thing… scrolled to the bottom of the article to the "Rate this page" section, and submitted this to the folks at IBM. You know, the people who can actually do something about it. I'm sure you're not just complaining.
"I'm sure you did the right thing… scrolled to the bottom of the article to the "Rate this page" section, and submitted this to the folks at IBM…"
Yes. And not for the first time, either.
–cjcoats
This article is aimed at programmers, and it's obvious a lot of the people responding aren't. They're not talking about recompiling something in your stupid Linux distro.
C and C++ are high-level languages; high-level compiled languages, but high-level languages nonetheless. Java is a high-level language as well, but it's usually run in a VM and is therefore useless for many things.
The nice thing about C and C++, for all their faults, is that you can accomplish pretty much anything with them. And if you can't, then look, there's assembly; and if you still can't, then look, there's some ants. They're easily portable languages that, if written efficiently, aren't much worse off than assembly.
The lesson here is: don't assume that a type without a fixed bit width is going to have a fixed size. Define your types, just like many embedded engineers do for every variable (see the sketch below). It doesn't add much work to programming and makes code a hell of a lot clearer and more precise. And don't respond to articles that you quite obviously can't wrap your head around.
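One way to make those definitions enforceable rather than merely conventional is a compile-time size check. A sketch, assuming a C11 compiler for _Static_assert (older code gets the same effect with the negative-array-size trick); the u8/u32/s64 names are just illustrative:

    #include <stdint.h>

    /* Embedded-style aliases: every variable gets an explicit width... */
    typedef uint8_t  u8;
    typedef uint32_t u32;
    typedef int64_t  s64;

    /* ...and the size assumptions are checked at compile time instead of being assumed. */
    _Static_assert(sizeof(u32) == 4, "u32 must be 4 bytes");
    _Static_assert(sizeof(s64) == 8, "s64 must be 8 bytes");
    _Static_assert(sizeof(void *) >= sizeof(u32), "pointers narrower than 32 bits not supported");

    int main(void) { return 0; }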
This is quite a good article discussing some very good programming practices. Mod me down if you must, but realize you're an ignorant p***k if you do, and that this post is at least partially directed at you.
> C and C++ are high-level languages; high-level compiled languages, but
> high-level languages nonetheless.
Compared with other languages available today, C is pretty low-level. I would call it a good, somewhat platform-abstracting assembler. That’s exactly why people use it for optimization.
> Java is a high-level language as well, but it's usually run in a VM and is
> therefore useless for many things.
You say “usually” but treat it as “always”. The JVM is a specification for the interface between the compiler front-end and the back-end, which can be either a compiler or an interpreter. Writing a back-end that compiles class files to native code is compiler-writing 101.
> They're easily portable languages that, if written
> efficiently, aren't much worse off than assembly.
Right, C is so portable that you see many C programs cluttered with #define's to deal with the peculiarities of each implementation. You should stop for a moment and think about why this isn't the case with Java.
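For the record, the clutter being described looks something like this (an illustrative sketch, not lifted from any real project); it's exactly the kind of thing <stdint.h> was standardized to replace:

    /* Pre-C99 portability boilerplate: pick a 64-bit unsigned type by hand. */
    #if defined(_MSC_VER)
        typedef unsigned __int64 u64;      /* old Microsoft compilers had no long long */
    #elif defined(__GNUC__)
        typedef unsigned long long u64;
    #else
    #   error "No 64-bit integer type known for this compiler"
    #endif

    int main(void) { u64 x = 0; return (int)x; }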
http://en.wikipedia.org/wiki/High_level_language#Relative_meaning
I wouldn't consider C or C++ to be high-level. I'd call them systems languages.
You can accomplish the same set of tasks with any Turing-complete language; that's the point. It's just that some are _MUCH_ better suited to some tasks than others.
But you’re definitely right about the article. It is good, and it is intended for programmers.
Programmers should choose the max int size supported by the CPU for most things. For data in structures and arrays, they should choose a smaller size.
> Programmers should choose the max int size supported by the CPU for most things.
No, programmers should always choose the smallest possible int size for the task at hand. For example, to store a number between 0 and 20000, use a 16-bit int (fewer than 15 bits cannot store all possible values; a 15-bit type isn't something platforms offer; 16 bits are OK). Exception: some CPUs are faster computing at their native size than at smaller sizes; in that case, use the native size.
If the range of possible values is unknown at design time – tough luck, use a variable-sized integer. Too many programmers use a fixed-size integer that is "big enough" because they are too lazy or too uninformed, and because they were not the ones who blew up the Ariane 5 with exactly such a mistake.
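In C99 terms, the smallest-size advice maps onto the least/fast families in <stdint.h>. A quick sketch of the 0-to-20000 example (variable names are mine, purely for illustration):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* 0..20000 needs 15 bits, so ask for "at least 16". */
        uint_least16_t stored  = 20000; /* smallest type with >= 16 bits: good for arrays and structs */
        uint_fast16_t  scratch = 20000; /* may be wider if the CPU is faster at its native size */

        printf("least16: %zu bytes, fast16: %zu bytes\n", sizeof stored, sizeof scratch);
        return 0;
    }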
I foresee a CPU which has a 64-bit stack and a RISC instruction set which handles ints and floats like the FPU does. Its native size will be 64-bit. I don't see sizes larger than 64-bit emerging; 64-bit is the end of the line. I've written an operating system and compiler with this in mind: LoseThos (http://www.justrighteous.org). Picking-and-choosing sizes just leads to errors. Obviously, when many instances of a variable are present (arrays), you pick the appropriate size.
> Picking-and-choosing sizes just leads to errors.
I don't mind having the compiler choose an appropriate size – in fact, this is very nice. However, just picking the biggest size available on a machine for all tasks blinds one to the fact that "biggest" is sometimes not big enough – i.e. it leads people to carelessly choose the native size when a variable-sized integer that could grow above 1000 bits would be appropriate. It's also space-consuming (memory) and time-consuming (memory transfers) to use the native size where a smaller size is sufficient, but as you said, this is only important when many instances are present.
I thought the article was useful because of two things I learned about 64-bit programming while reading:
1) There's more than one 64-bit data model in use. I actually had the narrow-minded view that LP64 was the only model.
2) Integer literals are 32-bit by default and must be suffixed with L to make them 64-bit. The bit-shifting example was an eye-opener: 1<<38 is invalid and needs to be 1L<<38.
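That second point is worth a sketch, because it bites even when the destination is 64 bits wide: the literal 1 is an int, so the shift happens at 32 bits before any widening. (This assumes an LP64 platform such as 64-bit Linux, where long is 64 bits; on Windows' LLP64 model you'd want 1LL or INT64_C(1) instead.)

    #include <stdio.h>

    int main(void)
    {
        /* long bad = 1 << 38;    undefined: 1 is a 32-bit int, so the shift overflows
                                  before the result is ever widened to long */
        long good = 1L << 38;  /* the L suffix makes the literal a long first */

        printf("%ld\n", good); /* prints 274877906944 on LP64 */
        return 0;
    }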
Luckily, I haven’t done any programming for 64-bit archs yet.
C++ is so great, that's why Apple chose Objective-C, which inherits from Smalltalk, as opposed to C++, which takes a lot from Simula and is a bloated piece of shat.
You talk about bloated apps, but what about the actual language, C++? It's major bloat! If you want to achieve anything decent you have to play with hacks and know the fine details of those hacks. I have seen too many bugs that can be put down to misuse of the language. Simple: people who use those toys either abuse them or just don't know how to play with them, so we take them away and go higher level. The time of hacking C++ is over unless there are very, very specific reasons for going to that level.