“Since I left my job at Amazon I have spent a lot of time reading great source code. Having exhausted the insanely good id Software pool, the next thing to read was one of the greatest games of all time: Duke Nukem 3D and the engine powering it, named ‘Build’. It turned out to be a difficult experience: the engine delivered great value and ranked high in terms of speed, stability and memory consumption, but my enthusiasm met source code that is controversial in terms of organization, best practices and comments/documentation. This reading session taught me a lot about code legacy and what helps software live long.” Hail to the king, baby.
“Note: If a file/variable name features a number, it is probably not a good name!”
Says who? Variables have used numbers for subscripts for as long as I can remember. Coordinates are aptly named x1,y1,z1, x2,y2,z2 … Given their frequency in a graphics codebase I think it’s far better than the alternatives I can think of.
Anyway, articles like this always pique my interest. I would have liked more technical depth, but it’s still neat to read about it.
If anyone were to ever review my code from 15 years ago, I’m sure that I’d get a lot of flak. Anyone heard of Advo Sched? No…oh well.
I have a love/hate relationship with articles like these.
On the one hand it’s great to promote readable code, but I think people sometimes get too hung up on what they deem perfect syntax and miss the point of those clever optimisations.
It’s a bit like grammar Nazi’s who love to correct peoples comments but rarely read deep enough into those posts digest the content being explained.
So I guess what I’m trying to say is that it’s good to be reminded about preferred programming practices, just as long as we don’t forget the other criteria (usability, performance (if applicable), deadlines).
After a few sessions of off-shore code review, one really stops caring about code quality as long as what was requested works.
Really? Well, I guess I’ve never read outsourced code that did what was required…
When companies I’ve worked for have outsourced work, we just expect to spend a certain amount of time cleaning it up or rewriting sections.
Moreover, code that works by black magic or absolute blind luck is no use to anyone. Especially when the quality is so poor, the original offshore developers are incapable of maintaining it and adding new features without creating giant bugs.
Oh yeah, I’ve seen that before. The code filled with hacks to just meet 80% of the requirements. Send it back to get the other 20% and it comes back missing some of the original requirements it did meet in the first iteration, so it now meets 60%.
Oh, if only I were allowed to say what I know….
You are right; one of my specialities in consulting is cleaning up off-shore code. Yes, there are such types of projects.
But there is only so much you can do when you have teams of > 30 developers producing new code every day.
So we only focus on the critical components and test the remaining ones as a black box.
Welcome to the Fortune 500 consulting world…
moondevil,
“After a few sessions of off-shore code review, one really stops caring about code quality as long as what was requested works.”
With all due respect, -10
I’ve dealt with too much bad code to write it off as ok as long as it works…
Obligatory link to the obfuscated c contests:
http://ioccc.org/years.html
Incidentally, I wish these contests would focus more on algorithmic obfuscation rather than source code obfuscation. I am not impressed with code that’s unreadable because of shortened variable names and whitespace elimination. The best obfuscated code is code which is still incomprehensible even though it uses proper whitespace conventions and comments.
In my defence, there is not much you can do about it when you have an offshore team of 60–100 developers producing new code every day while you have 5 guys doing development on the customer site.
There is no way to keep up with it.
IOCCC is in fact one of those contests. You won’t win anything in IOCCC if your code is obvious after running it through indent.
BTW, I think my favourite from this contest is:
http://www.ioccc.org/years.html#1998_banks
christian,
I took a look, I wanted to see if I could recognize the pittsburgh file. It segfaults for me every time and I’m not about to debug it. But I searched for and found this screenshot:
http://blog.aerojockey.com/post/iocccsim
I can definitely recognize the point!
I can segfault it if I feed all the .sc files in, as I understand it has a fixed size internal array for the vertices. Using just horizon.sc and pittsburgh.sc, it works fine with Ubuntu 12.04 (32-bit mode). Perhaps there is some less than 64-bit clean code?
Well said but you have to remember that your first point about readable code has a direct relationship with usability and performance, which you mention at the end. I’m not a programmer but as a scientist I use other people’s code for my work and the readability of their code has a great effect on the usability and performance of my work.
This might apply more often to science than game development but I still think it’s a relevant point.
Similarly, a well structured post with good grammar and punctuation makes it much more coherent and thus easier to convey your thoughts.
You’re using performance and usability in a different context to what I meant (I was talking in terms of the end result from a compiled binary). However you do raise a valid point I hadn’t considered.
Since you brought it up, it should be “Nazis” and “people’s” and you missed a “to” in “those posts digest”.
To me the measure of whether code is “good” or “bad” is really VERY simple: Does it do the job as intended? Does it waste resources or use them well? Is it stable?
And I would have to say that no matter how “pretty” the code is, Ken Silverman’s Build engine delivered. It not only powered Duke but also 2 of my fav games of all time: Redneck Rampage (who couldn’t love a game where you shoot a titty gun and fling dynamite while drinking beer, eating moonpies and listening to Mojo Nixon?) and BLOOD, which riffs on all the cheesy horror tropes of the 80s (you even start in the Phantasm funeral home), while giving both of them insanely huge levels with tons of secrets.
I’m sure everybody has seen that diagram showing the difference between today’s shooters and the games back then: how back then it was huge expanses filled with secrets, while today you are led by the nose from one set piece to the next in a straight line? Well, anybody who likes the former should BUY NOW: Redneck, Duke 3D and Blood can be had from GOG quite cheaply, and just for joining you can get something like 6 games for free as a thank you for trying them out, including Warsow and Ultima 4.
These games are still a blast; I still fire up all 3 regularly, and since it’s GOG and DOSBox you can run them on Linux and Mac and anyplace else DOSBox will run, so go and get ’em and have a ball!
I think the reasons for that are varied… it might also be technical, the old-style engines necessitating more “compacted” (more ~square) maps. Or it reflects what players mostly want: just to kill monsters and bad guys, plus more varied scenery (instead of going back and forth on one map all the time – scenery that more quickly gets boring, in a way).
But I remember reviews from 2 decades ago – mostly filled with walkthroughs, and… lists of secrets (even worse with adventure game walkthroughs). That wasn’t the best way to game.
Overall, it’s what… we wanted back then: general public acceptance of our games, for them to have mainstream appeal. And/or you’re getting old – old times are always better.
Warsow was always a free game BTW.
When looking at the code of running software, it is generally “ugly”; but unlike the beautiful code, it has shown that it works and scales.
We always forget the fact that we are reviewing code that has been modified several times by several people in order to add new functionality, fix some issues, work in other scenarios or improve the performance.
“Grammar Nazis”! (no apostrophe needed here)
Regards,
Grammar Nazi
(you didn’t see this coming, did you?)
*twitch*. You did that on purpose, didn’t you?
Eh,I’ve always preferred using arrays rather than numerically named vars. It usually is easier that way.
Bill Shooter of Bul,
“Eh,I’ve always preferred using arrays rather than numerically named vars. It usually is easier that way.”
I don’t prefer arrays, but I guess I can see why you would.
My concern would be that in many languages the arrays will have an additional performance cost though, requiring more indirection and more calls to alloc/free, and maybe more bounds checking as well. Even with C, the array might conceivably force the compiler to use a specific memory structure when the variable solution might have allowed more register optimizations. It’s very hard to predict a compiler’s optimization behavior these days though.
Edit: How would you formulate a draw line function using arrays?
drawline(single x1, single y1, single x2, single y2);
drawline(single x[2], single y[2]);
// Above is too weird for me, structures would be a bit better.
drawline(coord p1, coord p2)
drawline(coord p[2]);
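For what it’s worth, here is a minimal C sketch of the struct-based variant (purely illustrative: the coord type, its field names and the printf body are my own assumptions, nothing from a real engine):

#include <stdio.h>

/* hypothetical coordinate type, just for illustration */
typedef struct { float x, y; } coord;

/* passing small structs by value usually compiles down to plain register/stack
   moves, so the cost versus separate x1,y1,x2,y2 parameters is often negligible */
static void drawline(coord p1, coord p2) {
    printf("line from (%.0f,%.0f) to (%.0f,%.0f)\n", p1.x, p1.y, p2.x, p2.y);
}

int main(void) {
    coord a = {0, 0}, b = {319, 199};
    drawline(a, b);
    return 0;
}

Whether the compiler keeps those fields in registers is of course its call, not ours.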
Yeah, I’d use structs too. High performance graphics code isn’t my specialty, so maybe in some scenarios it’s a bad idea, but I still have a hard time believing that the performance hit outweighs the utility of an organized data structure of some sort. I mean, if it’s really performance critical, use assembly.
These days compilers are so good at optimising that you’re more likely to write better performing code in C than assembly.
Laurence,
“These days compilers are so good at optimising that you’re more likely to write better performing code in C than assembly.”
There are still times when I find GCC didn’t optimize something as well as it could have, especially when the C code doesn’t translate directly to the desired architecture opcode (many bit manipulation, overflow flags, and multi-word-size instructions are completely absent in C). The roundabout algorithm in C will sometimes result in a roundabout assembly optimization.
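To give one hedged example of the kind of mismatch I mean (a rotate-left written in portable C; the function is mine, and whether the compiler collapses it into a single rotate instruction depends on the compiler and target):

#include <stdint.h>

/* portable rotate-left idiom; many compilers recognise it and emit one ROL,
   but C itself has no rotate operator, so there is no way to ask for one directly */
static inline uint32_t rotl32(uint32_t v, unsigned n) {
    n &= 31;                                 /* keep both shift counts well-defined */
    return (v << n) | (v >> ((32 - n) & 31));
}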
That said, most of us have thrown in the towel because the business case for hand optimization skills is virtually non-existent. Most businesses are willing to let grossly inefficient code slide by if they can trade off developer costs with better hardware.
Every time this topic comes up someone points out the importance of optimizing the algorithm before resorting to assembly; let me preempt this by saying “I agree”.
The processors have also become too complex:
– out of order execution
– parallel execution units
– branch prediction
– multiple cache levels
– opcode rewriting
– SIMD
– NUMA
– (put your favourite feature here)
You need to be superhuman to really optimize for a given processor given all the variables, and when you manage to do it, it is only for a specific model.
Only in the embedded space is it still an advantage to code directly in assembly.
Good point. It’s a lot more difficult than in the days of the 486.
It sure is complex, but humans can still have the upper hand in some of the cases you’ve mentioned. Compilers are often given more credit than they actually deserve. We treat them as though they’re godlike, but in reality they can be pretty dumb sometimes.
I think compilers will have to gain more artificial intelligence before they can persistently match & beat the best hand optimizations. I believe this will happen, but it just hasn’t happened yet.
Edit: By my criteria, this will have happened once a compiler can consistently generate code listings which no human is able to optimize any further for the given target.
As I said, it all depends on how complex the processor you are trying to target is.
I doubt anyone can write better Assembly for the Itanium than what the compiler generates, given the processor’s design decisions.
moondevil,
“As I said, it all depends on how complex the processor you are trying to target is.”
I hear that, but most compilers don’t attempt to optimize for all the things you’re talking about, and they still need your help with setting the right flags and runtime profiling when they do. Some of them are still doing a bad job; for example, we tested GCC’s SIMD vectorization last year and it did not do a great job of it. It would be neat to try that again and see if it’s improved. But all too often people simply assume that compiler output is optimal without even doing any benchmarks. While this might be just fine for their purposes, it’s most definitely wrong to make assertions about it, since they’re not the people who know much about optimizing in the first place.
At times I’m able to beat GCC’s output, I’d rate my skill as above average but you’d be right if you said that it’s usually too much work for too little gain. It’s not usually worthwhile especially when non-portable code is the result.
Actually when I think about optimizing compilers, I am thinking about Portland Group, Intel and Codeplay compilers.
How is your experience with them?
I don’t do any low level programming myself since around 2000, so in a way I am also curious how the reality is.
moondevil,
“Actually when I think about optimizing compilers, I am thinking about Portland Group, Intel and Codeplay compilers.”
Intel’s is the only one I’ve used, but that might have been five years ago. The other two don’t offer versions for the public. Codeplay’s website seems to indicate they specialize in game acceleration on CUDA and PS3.
“I don’t do any low level programming myself since around 2000, so in a way I am also curious how the reality is.”
I wish I could say I did it professionally, but I don’t. I just do it for my own pet projects. Not long ago I managed to beat the GMP arbitrary size arithmetic library on divisions for lengths of around 4 words. I was a bit surprised since my algorithm is straightforward (though it did require inline assembly optimization). I want to confirm that my results are valid in the general case on multiple machines. Also I need to confirm that GMP was in fact using its fastest algorithm; GMP does automatically choose between many of them, so maybe the selector was poorly tuned.
It would be very interesting to see whether the assembly optimizations could be eliminated under ICC, maybe I’ll give it a shot. This is all on the back burner now since I only get paid to do web stuff.
The last time I did real number crunching was at CERN, but everything was C++ and Fortran in my group.
Assembly was only used at driver level for the custom network protocols.
Nowadays I am only paid for JVM/.NET languages.
This reminds me – 80+% of drivers think they are above average, too ;P
sometimes the compiler “cheats” – like when Intel compiler turned off optimisations on non-Intel CPUs (a bunch of links: http://www.hydrogenaudio.org/forums/index.php?showtopic=77523 …seems Intel is at it again with AVX)
zima,
“This reminds me – 80+% of drivers think they are above average, too ;P”
Haha. Wikipedia has an article on the phenomenon:
http://en.wikipedia.org/wiki/Illusory_superiority
I found this interesting:
“One of the main effects of illusory superiority in IQ is the Downing effect. This describes the tendency of people with a below average IQ to overestimate their IQ, and of people with an above average IQ to underestimate their IQ”
Here’s someone else’s take:
http://agoraphilia.blogspot.com/2007/01/more-than-50-percent-can-be…
“sometimes the compiler ‘cheats’ – like when Intel compiler turned off optimisations on non-Intel CPUs..”
I remember them being sued over it; it’s definitely a very sneaky thing for Intel to do.
Something analogous to the Downing effect, but for drivers, can be especially ~scary – it would suggest that the few % who say they are below average …are actually more likely to be good drivers.
Alas, AMD dropped its litigation against Intel in exchange for $1+ billion, with Intel not admitting any wrongdoing. Seems they’ll largely get away with it.
It’s like: hey, we (the processor manufacturer) have inserted a little data flow engine in your processor; this is sadly only for values in registers – memory stuff is still mostly in order. And the programmer says okay, that’s nice, I guess I don’t have to work as hard to make instructions start in parallel and can instead optimize data flows.
See above.
Okay, care to explain how this could make any difference when coding? This is a mechanism that applies predictions from dynamic patterns when executing code, not something that has to be coded. Current x86 processors don’t even support hints.
Unless you code Fortran I don’t think you’ll ever see your compiler optimize towards this. But yes, multi-dimensional cache blocking/tiling is a pain in the ass, and making it dynamically adapt to the platform cache hierarchy almost requires runtime code generation.
Which your standard compiler won’t do.
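To make it concrete, here is a rough sketch of what cache blocking/tiling looks like when done by hand (the matrix size and BLOCK value are arbitrary, not tuned to any real cache hierarchy):

#include <stddef.h>

#define N 1024
#define BLOCK 64   /* arbitrary tile size; real code would tune this per cache level */

/* transpose dst = src^T one BLOCK x BLOCK tile at a time, so that both arrays
   stay cache-resident while a tile is being processed */
void transpose_blocked(const double src[N][N], double dst[N][N]) {
    for (size_t ii = 0; ii < N; ii += BLOCK)
        for (size_t jj = 0; jj < N; jj += BLOCK)
            for (size_t i = ii; i < ii + BLOCK; i++)
                for (size_t j = jj; j < jj + BLOCK; j++)
                    dst[j][i] = src[i][j];
}

Picking the right BLOCK per platform at runtime is the part that, as said, would need runtime code generation or manual dispatch.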
Don’t know what you mean, perhaps fusing? Not a problem even for the beginner.
Macro op fusing -> CMP/TEST+Bcc is treated as one instruction.
Micro op fusing -> Intel finally stopped splitting stuff that never needed to be split. This mostly affects the instruction scheduler, as load+execute instructions don’t take two scheduler slots.
MOV elimination -> Some MOV instructions are executed in the renaming stage instead of requiring execution resources.
Can be a problem if one wants to get near-optimal execution on several generations of processors. However, thinking the compiler will make it easier is in most cases wrong. Yes, compilers can generate several versions of code and dynamically select which code paths should be used, but really critical code often requires changes in data structures to fit the underlying hardware, which compilers will not do.
Doing the same in assembly language isn’t a problem.
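For reference, GCC does offer function multi-versioning these days; a hedged sketch (the target_clones attribute is GCC-specific and the function is just an illustration, not from any real project):

/* GCC emits one clone per listed target plus a default, and an ifunc resolver
   picks among them at load time based on the CPU it finds itself on */
__attribute__((target_clones("avx2", "default")))
void scale(float *v, float s, int n) {
    for (int i = 0; i < n; i++)
        v[i] *= s;
}

But that still won’t rearrange your data structures for you.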
So you think your compiler will make the code NUMA aware?
This is something affected by algorithm choices.
No, it simply requires knowledge of software-hardware interaction. Assembly language programmers are also better when optimizing e.g. C code, as they know that under the abstraction it’s still a von Neumann machine.
Most embedded code is C so I guess you are wrong here too.
There’s also dependency analysis. Your expressions shouldn’t be too intertwined.
Flatness. Don’t nest too deep. In C++, I’ve seen a simple loop iterating over an std::vector compiled to really fine SSE code.
This goes hand in hand with above and with speculative execution. Generally, your code should be as predictable as possible, i.e. the sooner the branch-condition is known, the better.
There are also the defaults when the CPU discovers a branch for the first time: if the CPU by default assumes “branch taken”, you should structure your code so that it is less expensive to enter the body of an if statement.
Then, especially in tight loops and branches within them, the branch-conditions shouldn’t vary for each iteration, because the CPU has a branch target buffer.
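A hedged sketch of how that kind of advice can be spelled out in C (__builtin_expect is a GCC/Clang extension; whether it actually helps depends entirely on the CPU and the data):

/* hint that the error path is rare, so the compiler lays out the common case
   as the straight-line fall-through path (GCC/Clang extension) */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

int sum_positive(const int *v, int n) {
    int s = 0;
    for (int i = 0; i < n; i++) {
        if (unlikely(v[i] < 0))   /* assumed-rare case kept out of the hot path */
            continue;
        s += v[i];
    }
    return s;
}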
Enter the world of cache oblivious algorithms.
It is not that simple actually. I’ve seen assembly level programmers who really know what they do, and they do it well; however, that usually was at the micro-optimization level, which doesn’t scale to long-lived and/or big software.
I see more potential at the algorithmic level: e.g. the already-mentioned cache-oblivious algorithms, knowing the right algorithms and your domain, and even being able to decide at runtime which algorithm to use.
All in all I am not sure if you are evangelizing Assembly, C or compilers in your post.
phreck,
“It is not that simple actually. I’ve seen assembly level programmers who really know what they do, and they do it well; however, that usually was at the micro-optimization level, which doesn’t scale to long-lived and/or big software.”
This is it in a nutshell, large projects cannot be written in assembly any more. But it can still be used effectively to tune pieces of a larger project.
“All in all I am not sure if you are evangelizing Assembly, C or compilers in your post.”
You weren’t addressing me, but I strongly prefer C. The only time I ever consider assembly is with code that I’m already benchmarking and have identified performance deficiencies. But I try to get the most out of the C version before resorting to assembly code.
Yeah, it’s difficult to talk about this in the abstract. If I were in charge of a project, I’d want to see a real performance test between code written with proper data structures and one without, to see how much of a speed-up really exists and how critical that speed-up really is to the project. Because often developers write code with prejudices from past experience that may not be applicable to the current task.
But even today there is a wide range of output from various compilers for different programs. Some do a better job than others for different tasks. If it was really critical (I agree there aren’t many of these situations today), I’d look at the output from different compilers and choose the best output for the task.
Arguing over coding styles is many times just as stupid as arguing over which OS is best. Whether “you” like it or not, what matters most is (as previously stated) whether the program does what it’s meant to do and is stable.
If nobody needs to read the code, who cares how readable it is. If nobody needs to add things to it, who cares how readable it is. If nobody needs to maintain it, who cares how readable it is.
In all honesty, a lot of the people who whine about coding style have terrible coding style of their own. Not to mention they often don’t have the design/vision to really optimize anything.
Presumably it’s going to be read by at least one person.
No, because programming is engineering and there are right ways of doing things and wrong ways of doing things.
Yes, the person writing it. I don’t see a big risk there that he has no clue how to read his own work.
I see this argument all the time… One group (which I lean towards) says if it’s stable and works then it’s stable and works, and you cannot argue with that. The other group says no, it’s wrong, because they disagree with how something was done.
You can’t get around the fact that if the software works it has been engineered correctly because it’s doing what it was designed to do. One of the worst things people do is try to fix things that aren’t broken. Unless there’s an actual need to rewrite something, it’s a complete waste to do so.
Uh no. That’s like saying the Trabant was a good car because it worked. Working is not a measurement of good engineering, for numerous reasons. For one, badly written code is hard to verify by either manual or automatic means, so it’s hard to know if it actually works and under what circumstances it will work or fail. Secondly, what works right now might not work tomorrow or next week. We don’t live in a static world, and it’s a lot easier to adapt well-written and structured code to changing requirements.
In my experience, most people who argue that the end, “working” programs, justifies the means are those who write terrible and fragile code.
Sure but it’s a completely different thing and we’re not talking about rewriting code. We’re talking about doing it right in the first place.
You’re assuming something other than using the software is required to know if it works. You’re assuming the software working is conditional. You’re playing “what if” by suggesting it may not work tomorrow. In _some cases_ you have a valid point. However, in other cases it simply doesn’t apply or is untrue.
Also, your idea of what is “well-written and structured” can be and is very different from what other people deem as the same. One huge thing you seem to be missing is that personal opinion plays a large role in making judgments about someone’s code. I know several great coders who are often times in disagreement about code-related issues. To put it simply, coding is not black & white the way you think it is. It’s not correct or incorrect. It’s not right or wrong. There is a lot of grey area and more than one way to skin a cat.
In many cases that may be true. In many cases it is not.
Again, what’s “right” is often times a matter of opinion therefore there is no right or wrong — only working and non-working.
Making assumptions about things that don’t apply (when they don’t apply) is pointless. Unnecessarily over-complicating things is pointless. “Fixing” something that isn’t broken is pointless. Deeming software to be “wrong” because it doesn’t follow your own coding philosophies is pointless.
Earlier you pointed that we don’t live in a static world. In addition to that, coding `rightness` and `wrongness` isn’t always static either.
ilovebeer,
“You can’t get around the fact that if the software works it has been engineered correctly because it’s doing what it was designed to do.”
It works…until it doesn’t.
As long as it’s limited to personal code, and the original developer doesn’t care about how it’s engineered, then maybe it doesn’t matter to him. But if he’s working in a corporate setting, some other poor soul will inherit that code, and he’ll be none too pleased to learn that the earlier developer completely disregarded code quality in his work.
“Also, your idea of what is ‘well-written and structured’ can be and is very different from what other people deem as the same. One huge thing you seem to be missing is that personal opinion plays a large role in making judgments about someone’s code. I know several great coders who are often times in disagreement about code-related issues.”
Of course there are different opinions, especially with details that come down to different tastes. But ask them if they think the engineering doesn’t matter at all as long as the end product works. The good software engineers I know are proud of good engineering.
Or, …. it works fine without a single problem until the software is no longer needed.
Or, … regardless of the personal or corporate environment, somebody else assumes responsibility for maintaining it and has no problem understanding or adapting to the previous coder’s “quality” and coding style.
Or, … the code, regardless of environment, is easily maintained because its duties are simple, and how “beautiful” or “crap” it is in someone’s opinion is irrelevant.
Or, … the code doesn’t even need to be maintained.
I never said engineering doesn’t matter at all. I said if the software works as intended and is stable, it has been engineered correctly thus implying the engineering does in fact matter.
Since you guys seem to completely disagree with me, what is the exact deciding factor as to whether or not code is “good”? What is the exact deciding factor as to whether or not the engineering is “good”?
I am saying code quality is not black & white and you agree that the judgment of code is often a matter of opinion, yet you insist code is either good or bad. You can’t have it both ways, either there is grey or it is black & white only.
What “you” deem as crap code may be perfectly stable and function without any issue for as long as needed. That begs the question: if that is true, how can the code actually be crap? Such cases are just as much in the realm of possibility as anything anyone in disagreement with me has suggested.
Being handed someone else’s code to work with can be a nightmare, or it can be a piece of cake or a non-issue. Neither the former nor the latter is more true than the other.
I will say it again, … Judgments about the quality of someone’s code must be made on a case-by-case basis, and you have to consider more than just your own personal opinion and preference. Remember, I said if it works, it works and there’s no getting around it. I did not say that if it works but has problems, it works.
ilovebeer,
“I am saying code quality is not black & white and you agree that the judgment of code is often a matter of opinion, yet you insist code is either good or bad. You can’t have it both ways, either there is grey or it is black & white only.”
Who said it’s black and white? It’s full of greys. You should visualize it as a multidimensional venn diagram that often contains many good solutions (as well as some that are on the fence or just plain bad). There’s always a certain degree of subjectivity, but you are trying to extend this subjectivity well beyond what is reasonable in order to rationalize bad code.
“I never said engineering doesn’t matter at all. I said if the software works as intended and is stable, it has been engineered correctly thus implying the engineering does in fact matter.”
“…Remember, I said if it works, it works and there’s no getting around it. I did not say that if it works but has problems, it works.”
It’s a fallacy to treat all code as being identical in quality even though it works. Here’s a trivial example using a multiplication function, but imagine that it’s thousands of lines of complex production code instead.
int A(int a, int b) {
return a*b;
}
int B(int a, int b) {
int r=0;
for(;a>0;a--) r+=b;
for(;a<0;a++) r-=b;
return r;
}
int C(int a, int b) {
if (a==0) return 0;
if (a<0) return -C(-a,b);
int* m=malloc(a*sizeof(int));
for(int i=0; i<a; i++) m[i] = b;
for(int i=0; i<a-1; i++) m[i+1] += m[i];
return m[a-1];
}
etc…
Hopefully this convinces you that just because something happens to work, it doesn’t make it good code. Common attributes of bad code are lack of clarity, unnecessary complexity, insane difficulty to maintain, latent bugs which might not show up for years, etc. When good developers are faced with bad code, we’re often tempted to rewrite the whole damn thing, but the nature of our jobs sometimes means we have to ‘fix’ the bad code instead. This will technically work, but it still remains bad code. Fixing the memory leak in function C obviously won’t make it good code.
No, … I’m saying the judgment that code is “bad” is often merely subjective. I’m not defending code that is broken, buggy, etc. For some reason that seems to have gone over some heads here.
lack of clarity: Disagree because it’s too subjective. While code might be confusing to “you”, it may be perfectly clear to another more able or experienced coder.
unnecessary complexity: Agreed.
insane difficulty to maintain: Generally agree. However, maintenance difficulty can be overly influenced by a person’s ability or lack thereof. If someone has trouble maintaining code it could be that they just aren’t a very capable coder, and not because the code itself is terrible. Remember, just because a person gets paid to code doesn’t mean they’re good at it.
bugs: Agreed.
When the original developer leaves, you have a maintenance nightmare.
Welcome to my world.
Hi ilovebeer.
I agree with your first point that arguing against some coding styles sometimes will not make any sense. But coding style and the choice of language sometimes matters even if you think at the time of your code writing that nobody will going to peruse your code. I think it is important because we need to maintain the code, you have to go to your code again and again to debug/modify something and it helps when you put things
in order. It is better to have our own coding conventions as our guide.
There doesn’t exist a situation where code doesn’t need maintenance. Assuming so is a stunning display of a lack of forethought.
I’ve been in many a situation where I’ve had to maintain code where the original writer was no longer a part of the company.
It is also important to make a distinction between people who want thoughtful and concise design vs people who complain about the spacing of tabs and the line the curly brace goes on.
The latter group are just a bunch of annoying pedants, but the former is an important end to strive towards.
Of course there is. Just like there are situations where code needs little maintenance, and where code needs considerable maintenance. Assuming all code needs maintenance is the truly stunning display of a poor assumption. I guess there are people who haven’t heard of backwards compatibility, not breaking user space as a rule of thumb, ABIs, etc. Or that the code is simple enough that no changes have been needed in ages (if ever).
Yeah, some of the worst, creakiest, ugliest legacy code I’ve had the unfortunate job of maintaining is code written by people who think exactly like you.
Code should adhere to good design, period. This isn’t about legacy code, or compatibility. The code would be written in the present, and as such, good design could be encouraged from the start.
The goal is to reduce substantially the amount of said ugly code, all while increasing the reliability and maintainability of the code base as a whole.
As a company, we often don’t control how old code was designed, but we do control new code.
Some of the worst, creakiest, ugliest legacy code you’ve had to maintain was written by people who think like me… So, people who think there’s a lot of grey area when it comes to what’s good and what’s bad code, and that often a code’s “goodness” or “badness” is simply subjective opinion. I hope you don’t consider yourself a good coder if you disagree with what I just said.
“Good design” is too much a matter of opinion. Maybe you should choose different wording to better express your viewpoint. For example, providing specifics such as Alfman has.
You disliking how something was done doesn’t automatically make it unreliable or hard to maintain. Unreliability is the result of bad programming. Code being difficult to maintain can be the result of a bad programmer doing the job, where the code itself isn’t the actual problem but rather your lack of ability is.
ilovebeer,
“Code being difficult to maintain can be the result of a bad programmer doing the job, where the code itself isn’t the actual problem but rather your lack of ability is.”
You are throwing around these underhanded insults implying that we’re bad coders if a piece of code gives us trouble, but the truth is that even good coders have trouble with bad code. What’s the deal with wanting to defend bad code?
I’ve done no such thing. If you’ve taken offense then it’s due to your own lack of confidence in your ability because I at _no point_ called into question your ability, or any other specific individual here. That is why I quote “you”. That means the comment is not directed towards any single individual, but rather anyone who the comment applies to. No need to get upset Alfman, unless the shoe fits. But then that wouldn’t be my problem.
And again, I’m not defending “bad” code. I’m stating the fact that “bad” code is often merely subjective. And the fact that sometimes a person’s difficulty in maintaining code is the result of the maintainer’s inability or inexperience as a coder in general or in a specific area, and not the actual code itself. No sane person with half a clue would disagree with either of those statements.
ilovebeer,
“I’ve done no such thing. If you’ve taken offense then it’s due to your own lack of confidence in your ability because I at _no point_ called into question your ability, or any other specific individual here. That is why I quote ‘you’. That means the comment is not directed towards any single individual, but rather anyone who the comment applies to. No need to get upset Alfman, unless the shoe fits. But then that wouldn’t be my problem.”
You can quit this stupid double speak now.
“And the fact that sometimes a person’s difficulty in maintaining code is the result of the maintainer’s inability or inexperience as a coder in general or in a specific area, and not the actual code itself. No sane person with half a clue would disagree with either of those statements.”
*Sometimes* that is true, but your initial claim that code and designs can’t be bad as long as they work was completely naive. There’s nothing more to say here since this is obviously true to everyone else.
Umm… What are you rambling about? If telling you directly I wasn’t calling you out specifically doesn’t soothe your butthurt then I guess your confidence really is shot and the shoe fits. Unfortunately lashing out won’t solve either of those problems.
That’s nice. Only one problem… I never said designs can’t be bad as long as they work. I said if they work as intended and are stable then they were engineered correctly.
I don’t know why you try to be deceiving but whatever the reason, it’s pretty dumb. If it’s a matter of not paying attention to what’s actually being said, maybe you should start….
ilovebeer,
“Umm… What are you rambling about? If telling you directly I wasn’t calling you out specifically doesn’t soothe your butthurt then I guess your confidence really is shot and the shoe fits. Unfortunately lashing out won’t solve either of those problems.”
You’re still doing it; it’s totally immature, but then you already know that and it was your intention all along. Grow up. I try to take the discussions seriously and here you are continuing to fling insults without cause. I’ve been more patient than you deserve; it’s just very disappointing to have this caliber of discussion on OSNews. Obviously you’re just going to reply with more unprovoked insults; it’s totally lame.
If you took discussions seriously then you would make more of an effort to pay attention to detail and what is actually being said. You wouldn’t intentionally misquote or misrepresent what has been said. Being deceitful is not how to have a good debate. You claim I’m naive, then immediately agree with me. You put on a display after a general comment hits too close to home for you (I guess), and after being told it wasn’t directed at you specifically, you put on an even bigger display. You’re guilty of all this and more, but pretend you haven’t provoked anything.
Your deceitfulness and erratic behavior is what’s disappointing because you don’t seem to be an idiot. Then again it gets you more attention than you would get otherwise, which may have been your issue all along — a need for attention. Go ahead making false claims and lashing out. Sadly it’s what I’ve come to expect from you.
ilovebeer,
I haven’t lashed out or deceived, unless you consider my pointing out your insults to be lashing out.
It almost seems like the logic you’ve been using would be used by an offshore development company against a disappointed client: “What do you mean our code is crap? How can it be crap if it does what it is supposed to? Hmm? Maybe ‘you’ are too inexperienced to read it…if the shoe fits…”
So Alfman calls you on your painfully-transparent attempts at underhanded insults, and what’s your response? Deny it… and then immediately fall back into the same weaselly, passive-aggression that he called you on in the first place (all in the same paragraph, no less). If nothing else, I do have to admire that level of shameless audacity.
Apparently you’ve never worked with polygons or 3d math — array indexes are slow, so if you have four 2d coordinates representing a square, what would make more sense than x1, x2, x3, x4 and y1, y2, y3 and y4 for its corners… it’s not like top/left/right/bottom have any meaning when you’re constantly changing the orientation. Particularly true when unrolling for performance, since a hardcoded offset from DS:BP is going to be faster than using dynamic indexes on something like an array. (See why pointered lists can often outperform arrays on complex data.)
The code section used in the article that the author has some sort of noodle-doodle problem with seems completely normal and rational to me given it’s implementing fixed point arithmetic with shifts… Just what in blue blazes would you do instead there? Lemme guess, make it run three times slower by moving the like bits of code into functions?
No offense, but that article to me reeks of someone looking at code they’re too stupid to understand.
Of course, the fact that there is NO meaningful breakdown of the actual code or anything more than flowchart-theory asshattery makes this little more than a fluff piece — it sure as shine-ola is NOT a “code review”!
— edit — my bad, there’s more than one page of this — MEIN GOTT, that website’s navigation is horrible.
Guidelines are loosely applicable, so while it’s great you can find a corner case, it doesn’t really mean much.
No guideline applies 100% in every situation, there are always exceptions. The programmer needs to be aware of the constraints he operates in and adjust accordingly.
Similarly, in resource-strapped environments like mobile, you sometimes sacrifice good design to achieve acceptable performance. It’s not nice, and pie-in-the-sky academics probably scoff at it, but it’s reality.