I like this one: “By definition, a program is an entity that is run by the computer. It talks directly to the CPU and the OS. Code that does not talk directly to the CPU and the OS, but is instead run by some other program that does talk directly to the CPU and the OS, is not a program; it’s a script.” Here are the other eleven.
So if you run any OS in a VM (not paravirtualized), does it suddenly become a script?
Lots of these opinions are “controversial” because they are a load of hogwash.
I don’t think there was any effort to actually filter out rubbish answers.
I think you could say that it’s only programming if you don’t use a compiler (or even an assembler), and be equally (in)valid.
Hogwash and/or gibberish. For example:
What does that even mean?
So I assume this guy programs by waving a magnetised needle over the platter. Hey, wouldn’t want to be using somebody else’s work now!
It means the guy who said that can’t code to save his life.
Notice he was also the guy who came up with “If you have to use the scroll bar to see all of your class, your class is too big.” I hope I never have to work with that guy’s code.
UPDATE: Oh dear god… that guy used to be on Microsoft’s C# team.
I can agree with that, if you are using a 30″ monitor and a smallish font.
TBH I think it is a trite way of saying
“Make sure you aren’t putting any unnecessary shit in there.”
I see a lot of massive classes where some things could have been broken down better via inheritance.
I would hope that’s not controversial.
Ah but don’t you know? OOP is evil incarnate.
I always love sweeping statements when it comes to programming languages (except for PHP that does deserve a lot of hate IMHO).
x86 assembler is scripting too.
Because it’s not native at all: all x86 machine code is translated internally to microcode, and that microcode is the native language of today’s CPUs.
The x86 instruction set is just a compatibility layer, so everything we write is a script.
There are significant downsides to treating the hardware instruction set as a stable programming interface. Logic processors have been stuck on the Von Neumann architecture since the dawn of time because we use static optimizing compilers to target a particular hardware instruction set, and in the case of x86, it’s not even a particularly convenient instruction set to implement in hardware.
Contrast this with the evolution of graphics processors, which support high-level programming interfaces via runtime interpreters plugged into the kernel device driver system which target unstable hardware architectures.
Over time, the hardware interface moved up the stack as certain operations, such as vertex shading, were factored out of the drivers and implemented in hardware. The programming model didn’t change much, but the hardware changed radically.
Modern graphics processors understand compound data types like pixels, vertices, polygons, textures, and frames. Modern processors are remarkably good at integer arithmetic and boolean logic, but they don’t understand the generalized sorted mapping type which dominates the architecture of most software systems.
Everything from C and UNIX filesystems to Ruby and Cassandra is based on sorted maps. At some point in the development or runtime process, each reference is translated into a key on a sorted map, which is then translated into an address in a flat linear array.
If not for statically compiled native machine code, the hardware would have evolved closer to the structured programming models that have prevailed since ALGOL, and modern processors would implement an instruction set closer to Lua than to x86.
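As a toy illustration of that sorted-map claim (a minimal C++ sketch; the names are mine): a high-level reference resolves to a key in a sorted map, which in turn resolves to an index into the flat linear array that the hardware actually understands.

    #include <cstddef>
    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    int main() {
        // The flat linear array the hardware actually understands.
        std::vector<int> memory = {10, 20, 30};

        // The sorted map layered on top: name -> index into that array.
        std::map<std::string, std::size_t> symbols = {
            {"x", 0}, {"y", 1}, {"z", 2}};

        // A high-level reference resolves key -> index -> value.
        std::printf("y = %d\n", memory[symbols["y"]]);
    }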
Well, he didn’t say that the CPU or the OS can’t be virtual or guest, respectively, but only that the program should be *directly* talking to the CPU and the OS. By such a definition, a program whose instructions are *directly* talking to a virtual CPU and a guest OS is still a program, not a script…
Anyway, less controversial than pseudo-elitism IMHO.
Also, is any program using an I/O library (i.e. not talking to the OS directly) a script?
“4. Copy/Pasting is not an antipattern, in fact it helps with not making more bugs”
I think what he/she means is that side effects are bad.
I actually kind of agree with their point. I just think they made their point terribly.
It does make a lot of sense to reuse existing code, not just because it’s quicker than writing entire methods from scratch, but because (hopefully) you’ve already tested that code as well.
Obviously there needs to be a balance though, e.g. if you’re having to kludge code together just to make it possible to paste source from another project, then you really should have written that code afresh instead of cutting corners.
But on the whole, programming is as much about reusing existing tools to solve new problems as it is about writing new ones.
hahahaha one of my old college lecturers actually used to teach this.
Wow, second item and the bullshit-o-meter is already peaking. This is just stupid elitism.
Yeah, because making things artificially difficult is awesome. What language has “Repeat” anyway? Does he mean “for”?
I’m not a fan of .NET, nor do I program in it, but this is just more elitist nonsense.
And the stupidity just goes on and on. We don’t have to code the same way we did in the ’80s anymore. Not every language is C. It’s called progress.
More elitist nonsense.
Yes, readable code is a very bad idea. Or not. Brackets are apparently awesome though.
Because everyone’s screen is the same size. So if I have a wide monitor standing on the side it’s perfectly ok to have a class that’s really long?
This whole thing is just some C programmer(s) that got butthurt.
Yes, we are living in the future, so stop using exceptions and use maybes/tuples/multiple return values. Or really anything that doesn’t turn your execution order into spaghetti.
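For instance, here’s a minimal C++17 sketch (names are mine) of the return-value style: failure is an ordinary value, and the one exception involved is confined to a single boundary instead of rippling up the call stack.

    #include <cstdio>
    #include <optional>
    #include <stdexcept>
    #include <string>

    // Failure is an empty optional, not an exception callers can forget.
    std::optional<int> parse_int(const std::string& s) {
        try {
            return std::stoi(s);      // stoi does throw...
        } catch (const std::exception&) {
            return std::nullopt;      // ...but it's confined to this boundary.
        }
    }

    int main() {
        if (auto n = parse_int("42")) {
            std::printf("got %d\n", *n);
        } else {
            std::printf("not a number\n");
        }
    }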
Also, the OOP opinion was actually very reasonable. The author was lamenting the simple fact that OO is too vague. It’s hard to have rational, scientific discussions about something when the proponents of it can’t even agree on what they’re talking about.
Good thing that is impossible to do with return values, eh?
Seriously, what does it matter? As long as the OO model in the language you use makes sense and you know when to use and when not to.
None of the things I mentioned do it by default (you have to make the decision to have spaghetti code.) try/catch/finally does it by default. That’s the whole reason you people use it.
The definition of OO matters when we want to have serious discussions about it. It’s sort of hard to judge the merits of something that lacks a solid definition.
“you people”?
Try/catch does not automatically create spaghetti code any more than return values and message structures.
Most try/catch blocks (in Java especially) are for exceptions that people don’t normally care enough about anyway, so they catch them where needed and ignore them.
So in terms of ignoring errors, they’re not different. I’d prefer exceptions though if I do want to handle errors.
Using try/catch for normal control flow is the worst idea ever.
As someone new to coding, I’d like an explanation or a link to one; this sounds interesting.
About what? Error handling? OO being a poorly defined concept that no two people agree on? (well, that’s an exaggeration, I’m sure you can find two people who agree; the point is simply that there’s a lot of dispute even between proponents.)
Pseudo-elitism.
Any interpreted functional language like LISP or Haskell is way harder and more sophisticated than most native procedural languages like C or even ASM.
What language has “Repeat” anyway?
Pascal.
Oh, and in C that would be the do..while loop.
Oh right, I had forgotten about that. It’s been a long time since I used Pascal.
I certainly wouldn’t say that is a better language construct than “while” though.
Nah, that has the evil word “while” in it. We can’t have that.
And it’s not the same: do..while (repeat) will execute the loop body at least once, whereas the while loop may not get executed at all. Obviously the guy making such a statement is clueless.
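A quick sketch of the difference, in C/C++ terms:

    #include <cstdio>

    int main() {
        int n = 0;

        // do..while (Pascal's repeat..until, with the condition inverted):
        // the body always runs at least once, even though (n > 0) is false.
        do {
            std::printf("do..while body ran\n");
        } while (n > 0);

        // while: the condition is tested first, so this body never runs.
        while (n > 0) {
            std::printf("while body ran\n");
        }
    }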
Yeah, the guy is obviously clueless or trolling to be controversial.
From stackoverflow:
That’s a common mistake? If that’s the kind of people he works with, I’m afraid of what kind of software they would create.
To be honest, I thought that was the whole point of the Stack Overflow thread and the reason for the Bill The Lizard follow up too.
Nobody said those opinions had to be valid; only that they had to be controversial.
Well, you can be controversial and make a good point (like 2 of those suggestions) or you can be completely in the wrong and controversial for the sake of being controversial.
That doesn’t make any sense either. If opinions don’t need to make sense as long as they’re controversial, then how about these?
1. Everyone should always write code using two toes of their choice but only from their left foot.
2. No function should ever be more than 82 lines of code, except on a Tuesday when they should be no more than 81 and 83 lines long.
3. Every single programmer should learn COBOL. No exceptions.
4. Always name your variables as anagrams of the word “hexaphene”
5. Never compile your code after dark.
6. The letter ‘y’ should be banned from use in Java.
To be fair, you might have a valid point with #5. 😉
As a C++ apologist, I strongly disagree. I didn’t start out on C/C++ and I sure as hell bet that all C/C++ proponents started out on simpler languages, even if they refuse to remember or acknowledge the first time they dabbled.
I started out on BASIC, Q and Casio, and through them learned how to structure complex programs even when the language doesn’t seemingly allow it. Then I was forced to learn Pascal during high school because the teacher didn’t know C. Of course, I also learnt a bit of JavaScript along the way, since making dynamic webpages was the thing to do.
By this idiot’s logic, people should start with assembler. Assemblers for modern architectures have a lot of structured programming extensions too. But pedagogy is not the same as bragging rights. The real issue with pedagogy in any language, programming or human, is how to express ideas succinctly and clearly.
I admit that it was learning Java at university (course standard, if not a requirement) and then Python (self-taught) that made me understand many important programming and design principles. But those principles were what drew me BACK to C++. Once you read the stuff by Stroustrup and Sutter, they open your eyes to what C++’s proper usage is and how to choose which features not to use for which problems.
I think all the people who hate C++ are simply reacting out of fear. Early in their experience, they came across a problem they couldn’t fix right away. Rather than facing their fears, they just reacted and cowered for safety behind garbage collection and inheritance and factories up the yin-yang.
In my experience, neither Java nor C# solves any problems with C++. They just have better IDEs and graphical debuggers, but that’s about it.
I’m a C proponent who started in C++…
Personally, my (biggest) issue with C++ is that it’s simply too large a language. If you get 10 people who say they know C++, chances are good that the only overlap is going to be the subset of C++ that is C.
C++ also reinvents a bunch of things from C (such as I/O) and it’s debatable whether it actually improved them.
tl;dr. C++’s flexibility is both its strength and its weakness. It allows you to do all sorts of crazy things, but at the end of the day they’re still crazy.
Yes, but the people who are really proficient with C++ will use the STL, which has no overlap with C, because C simply does not have those facilities.
The biggest issue with C++ is people who come to it from the C perspective and don’t use the language features that promote safety. The easiest thing, first of all, is to use vectors and strings rather than arrays. The next easiest is to use RAII rather than dynamic allocation.
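A minimal sketch of that style (the input.txt filename is just a placeholder; the point is that no free, delete, or close call appears anywhere):

    #include <fstream>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        // RAII: the file is closed when `in` goes out of scope,
        // on every exit path, with no explicit cleanup code.
        std::ifstream in("input.txt");

        // vector and string manage their own memory: no malloc/free,
        // no manual length bookkeeping, no leak on an early return.
        std::vector<std::string> lines;
        for (std::string line; std::getline(in, line);) {
            lines.push_back(line);
        }
        std::cout << lines.size() << " lines read\n";
    }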
I wouldn’t call it reinvention. They just OO’d some of the tedious bits.
I haven’t had to do a crazy thing in C++ at all. Have a look at Qt. Aside from the MOC (which translates to standard C++ anyway), there is nothing crazy with Qt.
Except the STL is almost entirely duplicate functionality. list and map are the only new containers. Everything else is either a duplicate or a minor change away from being a duplicate.
Vectors don’t really make life easier. And RAII is only useful if you’re doing OO.
The C I/O functions work at least as well as C++’s, even on objects.
Honestly, I think Qt is crazy, but most GUI programming is.
Set, queue, deque, unordered (hash), heap. If by “minor change” you mean lack of manual memory management, then yeah. But I would call that an improvement.
Bunch of standard algorithms. Smart pointers. Concurrency.
Let’s not forget that C++ allows things like Boost to happen.
So you like having to explicitly allocate and deallocate memory? Even with C99 variable-length arrays, you can’t easily push, pop, insert, or delete items, or resize the array, without the possibility of introducing bugs. Then you also have to manually keep track of the length of the array in case you don’t use the whole thing.
And?
Then in no way can you say anything about C++. C++ is designed for complicated things, like GUI programming for example.
All those containers are duplicates of other containers in the STL. Also, Boost is a nightmare, not a benefit.
If you can’t manage an array, then you have no right to be talking about “complicated things.” Arrays are not difficult to work with. Push, pop, insert, and delete are all straightforward functions (that C should have defined, but I digress). Keeping track of the length is also not difficult (and can be managed by the push/pop/insert/delete functions).
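Something like this is presumably what’s meant (a sketch; the helpers are mine, not any standard library, and a real version would check realloc’s result and guard pop against an empty array):

    #include <cstdio>
    #include <cstdlib>

    // A length-tracked array with the push/pop helpers the poster
    // wishes C's stdlib had defined.
    struct IntArray {
        int*        data;
        std::size_t len;
        std::size_t cap;
    };

    void push(IntArray* a, int value) {
        if (a->len == a->cap) {
            a->cap  = a->cap ? a->cap * 2 : 8;
            a->data = (int*)std::realloc(a->data, a->cap * sizeof(int));
        }
        a->data[a->len++] = value;
    }

    int pop(IntArray* a) {
        return a->data[--a->len];   // caller must ensure len > 0
    }

    int main() {
        IntArray a = {nullptr, 0, 0};
        push(&a, 1);
        push(&a, 2);
        std::printf("%d\n", pop(&a));   // prints 2
        std::free(a.data);
    }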
GUI programming is not crazy because it’s complicated; it simply isn’t complicated. It’s tedious. But more to the point, the current style of GUI programming is crazy because it ignores good coding conventions (such as reduction of coupling). This is practically a requirement of most GUI frameworks, because they insist on ignoring the fact that they’re dealing with a reactive system. Instead of making themselves reactive, they use the event model (which only poorly emulates the desired properties, because it requires explicit handling of events).
Only because it’s obvious you have no idea how to code C++ properly.
And yet the number one security defect continues to be caused by buffer overruns and other low level array management. It’s not about difficulty, it’s about SAFETY.
If you’re more concerned about bragging rights than about doing the Right Thing, you have no right to be programming for a job at all. Why the hell should people waste time writing the same array-management stuff over and over again, introducing bugs (even the best of us still do), when we already have a library that does it?
Sets, queues, and stacks are all lists. They have slight restrictions, but that’s the very definition of “minimal change.” Boost is quite commonly regarded as a major example of terrible software (by the non-C++ using community.)
It is about difficulty, because I’ve already enumerated how to use arrays safely. Pointing out that lots of developers have bad habits is simply a red herring. I’ve already mentioned that I wish array manipulation had been included in the stdlib, but you can find or make a library for it as well. The only reason buffer overflows are still so common is that many developers have NIH syndrome (or are simply ignorant of what’s available to them, a problem that C++ doesn’t solve).
Because the C standard lists sets and queues and stacks as part of the libraries…
Sets are lists? Really? Lists have the concept of unique membership? Everything is either a chunk of memory or linked chunks of memory if you go down low enough. Not only do you seem not to understand C++, you obviously never understood the data structures unit of your programming course.
Well, no shit. Therefore it makes your statement useless. Boost is terrible for web programmers. Boost is terrible for Python programmers. Boost is terrible for limited embedded systems.
And yet, pointing out the bad habits of C++ developers is not a red herring when criticising C++…
I’m not even talking about bad habits. I’m talking about SECURITY and SAFETY. Everyone makes mistakes. It’s not about bad habits.
You better not be responsible for any high security systems. You do not have the right attitude for them.
Do you not read? I quite clearly stated that a set is a minor modification to a list. Since you’re too dense to understand that, I’ll spell out the change:
Take your list’s add method and wrap it in the following if:
if(!list.contains(item))
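Fleshed out into a compilable C++ sketch (the type and method names are mine):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // A "set" as a minor modification to a list: same storage underneath,
    // the add path just refuses duplicates.
    template <typename T>
    class ListSet {
        std::vector<T> items;
    public:
        bool contains(const T& item) const {
            return std::find(items.begin(), items.end(), item) != items.end();
        }
        void add(const T& item) {
            if (!contains(item))        // the one-line wrapper from above
                items.push_back(item);
        }
        std::size_t size() const { return items.size(); }
    };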
I haven’t been pointing out bad habits of C++ developers. I’ve been talking about how C++ is too wide a target for team building, how C++ reinvented I/O tools, and how the STL is mostly duplicate work. Nowhere have I been talking about C++ developers.
See, but “everyone makes mistakes” applies just as much to C++. Maybe vectors stop you from causing buffer overflows, but there are still plenty of logic errors left over. And considering vectors are harder to use (with C++ iterators clouding things up even more), the chance of logic errors increases. (I’m sure that if I brought up the fact that there are plenty of buffer overflows in C++ programs, you’d just deflect it as them not using the correct container, which is untestable either way.)
If you’re really serious about security and aren’t just putting on a security circus, you’d realize that any security-critical software must be triple-checked (at least) by other developers. That is plenty of opportunity to check that the original developer was using correct practices (as I’ve already enumerated), and plenty of time to check for logic errors. Just having a language provide you with “safe” containers does not make their usage safe. If you believe it does, then I’m afraid it’s you who has the wrong attitude.
Let’s agree on one thing: proficient and intelligent people will use the best available tools and exploit their strengths. This is language-independent.
A proficient and intelligent person will take a turd and make diamonds out of the carbon in it.
The problem with C++ is that it’s so wide that finding a group of people who think the same way about C++ is virtually impossible.
He said “allows”; it doesn’t mean that everyone is doing crazy things. Though C++ developers do tend to veer off into crazy stuff quite often.
I fail to see why this is a problem. The multi-style nature of C++ is better than trying to cram every problem into an OO or functional or message passing style.
Conversely, I think you highlighted the best thing about C++ and is certainly one of the design goals Stroustrup had in mind. It allows the programmers to choose the style that fits a problem best.
I would hope that a person using C++ to write a webserver would have a completely different idea of how to use it to a person who uses C++ to write a game, or a realtime controller, or a stock exchange, or a highly parallel physics simulator, or a compiler.
A better solution is to group styles into different languages.
That way you have clear definitions of what features to expect. If you have two devs who say “I know X”, it means you can be reasonably sure that they can check each other’s work.
Additionally, say we have a section of code that should be pure (functional.) We can do that in C++, but there’s no guarantee that someone won’t come by and make it impure. If we instead require a pure language, then this issue never comes up. The restrictions of each language give you guarantees about the characteristics of the code, making it easier to reason about.
If you have every feature available all the time, then the code quickly and easily becomes more complicated than it has any right to be. You have to manually restrain yourself with C++; you simply don’t have to do that with simpler languages.
Now, I’ll admit that cross-talk between languages is currently kind of a pain. Most languages really only interface well with C… and even then not necessarily well. So maybe it makes sense to use C++, for now, but that doesn’t mean that C++ isn’t an issue.
I think I already expressed this from a perspective of a person that has to hire developers to develop and maintain a codebase.
When you are a lone star programmer – C++ is great.
When you have to work with a large group of people – then it becomes a problem.
And no, there are not two ways of doing things in C++, there are thousands. In short – too many to be good for assembling and maintaining a team.
And that’s the fault of the language, is it?
As another person pointed out, there is a large overlap between C++ and C. So in practice, most people actually do have similar ideas about C++.
You’re confusing design with code. Programmers will always have different ideas about what DESIGN to use, whatever language it’s implemented in.
Compared to languages like Python and Lisp and Java, C++ is no worse off in the different language facilities that people think of using.
A problem doesn’t mean fault. (See hardware drivers for FOSS OS’es)
Also, you’re being defensive.
Can’t see how I can avoid being defensive when I’m writing in defense of C++…
I just can’t believe the fuzzy thinking that goes into tech criticism. Problems with programmers are confused with the problems of the language. Personal preferences are confused with problems of the language. What you call defensive, I call seeking to get people to understand what they’re saying rather than just “X is bad because of completely unrelated reasons”.
* Unlike most other geeks, I don’t have a problem with fuzzy thinking in general. I appreciate it for the creativity it can produce, but not for deductive reasoning.
Lack of rigorous definition is not a problem with OO. It is the fact that the real world does not play nice with neat concepts and principles. The real world does what it wants and you have to adapt.
OO is just one of many ways to adapt. Design patterns are another. OO and design patterns are almost orthogonal to each other, and are orthogonal to other concerns like performance or parallelism.
The design patterns a lot of Java frameworks go for are not there to get around the limitations of Java’s OO implementation; they exist because a few zealots spread the meme that you needed a design pattern for every little thing. Java just happened to be co-opted for it because, in a way, it was designed with design patterns in mind.
There’s an oft-ignored fact that design patterns are really just missing features. Each and every GoF design pattern is just a language feature. For example, the Command pattern is completely pointless if you have first-class functions. Composite and Decorator are just recursive unions (of differing multiplicity). Adapter, Bridge, and Facade are just use cases for composition.
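To make the Command example concrete (a C++ sketch of my own, not from the GoF book): with first-class functions, a “command” is just a callable value, and the invoker is a plain loop.

    #include <cstdio>
    #include <functional>
    #include <vector>

    int main() {
        // Without first-class functions, each of these would be a Command
        // class with a virtual execute() method. With them, a command is
        // just a callable value.
        std::vector<std::function<void()>> commands;
        commands.push_back([] { std::printf("save\n"); });
        commands.push_back([] { std::printf("quit\n"); });

        for (const auto& cmd : commands) cmd();   // the "invoker"
    }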
I can do this all day, I did a semester project on cross-language implementation comparisons using design patterns as a goal.
I should probably mention that all languages have “design patterns”; it’s completely insane to include every single possible feature in a language (such that you’d have a feature for every macro-structure you can conceive of).
I remember being utterly confused after reading books on OOP back in ’90s. Authors often bundled other concepts with OOP, making it almost impossible to figure out what all the fuss is about.
In particular, OOP has nothing to do with:
– types or access modifiers – that’s encapsulation,
– inheritance/mixins – that’s code reuse,
– any particular coding style – that’s plumbing,
– patterns – that’s cooking recipes.
They are all useful techniques; it’s just that they have nothing to do with OOP.
Object oriented programming is really about having a bunch of objects (separable entities “living their own lives”) communicating with each other. It’s really as simple as that, yet very few OO programs or frameworks fit this definition.
For example, in another post I mentioned Kay’s “setters are evil” rule. Why? For the same reason any mutation is evil. Calling a setter takes control out of the target object and places it in the calling one. The target object is then no longer a separate entity; it becomes just a dumb storage of fields that others may fiddle with: a data structure. This puts the calling object in the business of coordinating changes and ensuring the consistency of the target object’s state, which requires a lot more knowledge than just the API.
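A sketch of the difference (my example, not Kay’s): the invariant lives inside the object instead of being re-implemented by every caller.

    #include <stdexcept>

    class Account {
        long balance_ = 0;
    public:
        // "Tell, don't ask": the object stays in charge of its own
        // consistency rules instead of exposing a raw setter.
        void deposit(long amount)  { balance_ += amount; }
        void withdraw(long amount) {
            if (amount > balance_)
                throw std::runtime_error("insufficient funds");
            balance_ -= amount;
        }
        long balance() const { return balance_; }
    };

    // With a setBalance() setter, the invariant check above would have
    // to be re-implemented by every caller:
    //   account.setBalance(account.getBalance() - amount);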
One of the worst confusions is the idea that UML and OO are interchangeable. It makes it way too easy to over-design a solution to any problem.
The only UML diagram I think anyone uses regularly is the UML class diagram. That is where the confusion lies.
I haven’t really used UML except for class diagrams since university.
We were told not to use flow diagrams at university, but they are pretty good for non-developers to see a high-level flow of logic, and developers can just follow it through in a debugger if need be (our codebase is very bad).
Strangely, I think UML class diagrams are perhaps the most useless UML diagrams. They’re a pain to make, class design changes all the time, and thus they’re a pain to keep updated.
Regarding flow diagrams, I find UML sequence diagrams to be a better replacement.
For me, the trick with UML and any diagramming language is not to go into any detail. Anything more than a brief outline is even more ridiculous to maintain and keep accurate than code itself.
The IDE can be configured to provide a set.
I don’t use them for large projects with several thousand files; I use them as documentation for a particular component or set of components.
They don’t necessarily have to be totally up to date. As long as I can see how the classes fit together, it’s better than “Find All References” in VS and trying to fit it all together in my head.
For non-technical people, such as project managers or higher-ups, a flow diagram is quite easy to understand.
I mostly work with .NET and Front End Web Tech. I find UML mostly redundant for what I do unless it is specifying either class diagrams or part of the ORM for the Data Access Layer.
Not particularly controversial but not widely adopted either:
Harold Abelson
Alan Kay
Any sweeping rules should at most be treated as guidelines. What ultimately matters is the result, not the process.
My suggestion: post it to /. and go get a big bowl of popcorn
I agree that exceptions are often ignored when they should be handled with an intelligible error message.
But I disagree that using error structures is the solution. First, it is not reasonable to rely on tooling to verify that all error values are tested. Second, it is just awkward to return an error structure from every function.
At least the stack trace printed by a logged exception can be used by developers to locate the issue. Not great, but much better than continuing with erroneous data (as happens with an ignored error value).
I don’t think singletons should be used. If you think you need one, use a function which returns a unique object; at least then you have the option of creating more of these objects if needed, whereas with a real singleton that is not possible.
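Something like this C++ sketch (the names are mine) captures that suggestion: callers who want the shared instance can ask for it, but nothing stops a test (or a second document, or a second tenant) from making its own.

    #include <memory>

    struct Config { /* settings would live here */ };

    // Not a singleton: the shared instance is available on request,
    // but the type itself doesn't forbid additional instances.
    std::shared_ptr<Config> sharedConfig() {
        static auto instance = std::make_shared<Config>();
        return instance;
    }

    std::unique_ptr<Config> makeConfig() {   // extra instances on demand
        return std::make_unique<Config>();
    }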