Take No Prisoners

12 Bonehead Misconceptions of Computer Science Professors

The poster child for what’s wrong with postsecondary education is the computer science program. Despite the enormous need for competent programmers, database administrators, systems administrators, IT specialists and a host of other technical professionals, computer science programs seem to explicitly ignore the professional skills Western society increasingly lacks, and proceed with materials and teaching styles that are outdated, ineffective, useless and just plain wrong. This is due to the absurd misconceptions held by computer science faculty members across many universities.

I have personally met computer science professors who believe each of the following things. I make no claims as to how widespread these beliefs are; you can judge that for yourself.

1. Java is a good first teaching language

I don’t know how many computer science programs start teaching programming using Java, but there are more than a few, and that’s too many. When you’re going over variables, loops and conditionals, the object-oriented overhead of a language like Java is unnecessary and confusing. Inquisitive students can’t just memorize things (e.g., public static void main(String[] args)) without demanding to know what it means and why it’s there.
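For the uninitiated, here is roughly what a student’s very first Java program looks like. Every token outside the println line is unexplained ceremony on day one:

    // A typical first Java program. Everything except the println call is
    // boilerplate the student is told to type now and understand later.
    public class Hello {
        public static void main(String[] args) {
            System.out.println("Hello, world!"); // the only line the student meant to write
        }
    }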

2. Machine language is “basic”

Comp Sci people seem to be terribly confused about what ‘basic’ means. When one learns to drive a car, tasks like starting the car, making a right turn, a left turn and parking are basic. Building a parallel gas-electric hybrid engine is not basic. Driving a car is more basic than building one because the latter requires significantly more expert knowledge than the former. In the same way, using a simple scripting language requires less depth of understanding than writing in machine language; therefore, computer science education should start with higher-level languages and proceed to lower-level ones, not vice versa.
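To make the contrast concrete, here is a contrived Java sketch (my own illustration, nothing from any course) of the same task at two levels of abstraction. Machine language sits further down still:

    import java.util.Arrays;

    // Contrived illustration: the same sum computed at two abstraction levels.
    public class AbstractionLevels {
        public static void main(String[] args) {
            int[] data = {3, 1, 4, 1, 5};

            // Higher level: one declarative call, no indices or accumulators to manage.
            int sum1 = Arrays.stream(data).sum();

            // Lower level: explicit state the programmer must track by hand.
            int sum2 = 0;
            for (int i = 0; i < data.length; i++) {
                sum2 += data[i];
            }

            // Machine language is lower still: registers, memory addresses and jump
            // instructions replace even this loop syntax.
            System.out.println(sum1 + " " + sum2);
        }
    }

The more machinery a level exposes, the more a student must understand before anything runs, which is exactly why education should start high and work down.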

3. You should write code on paper before you write it on a computer

Writing code by hand is stupid. It is entirely inconsistent with the interactive and iterative design process that comes naturally to hackers and painters alike. Professional software developers make extensive use of API documentation, reference guides, forum discussions, etc. to troubleshoot problems and make their code more efficient and effective. Writing code by hand tests your ability to write trivially simple software without making errors. Real programmers must be capable of building complex software and detecting its errors with a variety of automated tools. Teaching or testing coding using pencil and paper is inconsistent with both the natural mode of human action and the practical realities of software development.

4. Lectures are an effective method of teaching programming

Programming is like algebra. You can’t learn how to write code by watching someone write code on a blackboard or listening to elaborate explanations from professors. You can’t learn math from watching someone do math. You learn to do things by doing them.

5. Algorithm design is learned by reading existing algorithms

Designing algorithms is about finding innovative solutions to difficult problems. Algorithm design courses are about studying existing solutions to rather simple problems. Learning how a particular problem can be solved provides approximately zero insight into how to solve problems you’ve never encountered before.

6. You can just ‘pick up’ Prolog in a week for a course

There’s this crazy belief among Comp Sci. faculty that all languages are basically the same, so after learning the principles behind languages you can use whatever. This is bullshit. This is like claiming that since someone studied Spanish grammar in grade school, they can speak Spanish fluently, in any of Spanish, Mexican or Colombian accents. The leap between structured and object-oriented programming is huge, and it pales in comparison to the leap between object-oriented languages and declarative languages.

7. Exams measure understanding of programming

Teams of professional programmers spend months and years building intricate software systems in response to poorly-understood, ill-defined and changing problems. To accomplish this, they employ API documentation, online tutorials and forum discussions, team problem-solving sessions, reference books and an infinite number of phone-a-friend lifelines. Exams test your ability to write simple code to solve trivial, well-defined, static problems, without consulting any references. One is about resourcefulness, the other about memory. Exams test the wrong thing.

8. GUIs are not an important aspect of learning to code

At the university where I did my undergrad, it was easy to finish a B.Sc. in computer science without ever building a graphical interface. While I agree that many software projects do not have graphical components (e.g., developer APIs), to marginalize GUIs as some kind of specialty endeavor is short-bus crazy!

9. Programming Requires Calculus

I have been told that development involving sophisticated work with graphics and animation involves calculus. Outside of this particular subfield, however, I haven’t seen much calculus in software development. Certainly I’ve seen a lot more GUI development than graphics.

10. Linux will rapidly overtake Windows among consumers

Comp. Sci. profs have been saying this for years. Hasn’t happened. And it’s not going to happen until Ubuntu and company take the dicking around out of computing the way Apple has.

11. LaTeX will overtake WYSIWYG text editors because LaTeX gives you more control

Yes, believe it or not, a computer science prof said this during one of my classes in undergrad. It goes directly to a deeper misunderstanding among Comp. Sci. academics that power and control are the primary factors driving adoption. They’re not. Simplicity and ease of use are far more important.

12. You can buy gates at RadioShack

The same idiot who thought LaTeX was the future also told his class to go buy gates (the things transistors are made of) at RadioShack and play with them to see how they work. Again, this evidences how completely out of touch some of these people are. Gates are microscopic. You can’t go buy them at an electronics store.

Update (25MAR2011): As so many helpful readers have pointed out, 1) gates are made of transistors, not the other way around, and 2) you can now buy gates at Radio Shack online. However, the prof in question told me to go buy gates at a physical Radio Shack store in 2001, and they had no such thing. I don’t know what I was thinking when I wrote “the things transistors are made of.”


I have long argued that society needs a professional certification for software developers and that universities need undergraduate programs dedicated to training people for these certifications. It’s worked for accounting, engineering and medicine. There’s no reason it can’t work for software development. One of the primary barriers to this sort of progress is the raging incompetence of academics in computer science, computer engineering, management information systems and related disciplines.

Have one or a few to add? Comment away.

Related Posts
Why on Earth do Business Schools Teach Microsoft Access?
Abolish Universities?
Nine Reasons why Bad Grades Don’t Mean Squat


  • Brian S.
    Posted October 22, 2009 at 12:52 am | Permalink

    I haven’t read this blog in a while but just remembered it. Anyways, I am going to have to agree with almost every point you have made here, but mainly 1, 3, and 7. I am a comp sci major currently and am also a freshman. I started programming in high school with Java and am now continuing to use it in the first course. It’s frustrating to not know how things work, such as the main method and exceptions, which we’re supposed to cover later. Also, I can’t write code on paper for shit, whether for assignments or on exams. I can do the very long homework assignments with ease, but like you said, testing on paper is a bunch of BS. Good post.

    • I K
      Posted February 7, 2010 at 8:21 am | Permalink

      This blog sounds like the typical comp-sci geek rant going through college/university. While I agree with most things that you stated, I think there are a few things that you’re missing. The most important thing is: education gives you a foundation for coding, but experience teaches you the rest. I learnt to program in many languages on my own between the ages of 14 and 19, and if I must say, I am good at it. When I took some computer science courses in college, I realized that there were a few BASIC things that I missed that would make me a more “elite” coder. We had a lot of nerds in my courses, and I was one of them in disguise. They thought they knew it all because of their “personal/1337” experience. Ask them to code simple algorithms for simple problems and you will see the foundation they lack. They’ll write the most horrendous, poorly structured, and non-functional code you’ve ever seen.

  • Kavan Wolfe
    Posted October 22, 2009 at 10:16 am | Permalink

    @Brian, I’ve got bad news for you: it only gets worse.

  • Zander
    Posted October 22, 2009 at 5:07 pm | Permalink

    I disagree with the example you give for number 4. Mathematics requires lectures to give you all the information about the techniques available to solve problems. However, there are books which can do the same. Ultimately it is the interpretation and explanation of the material which helps students the most. In short, I watch the instructor do math, take notes about the method and try on my own. Hence we learn from failure; at least I do. I would expect the same would work in computer science.

  • Kavan Wolfe
    Posted October 26, 2009 at 12:26 am | Permalink

    @Zander, I agree that mathematics involves a certain conceptual toolbox. I disagree, however, that lecturing is the most efficient method of adding to a student’s toolbox. I also disagree that interpretation and explanation are key. In addition, many students don’t get much out of elaborate explanations of tacit knowledge. Programming, like math, is internalized through repetition.

  • jason
    Posted November 18, 2009 at 8:48 am | Permalink

    is anyone going to write another article for this website? taking the month off is bullshit

  • Rajishimo
    Posted November 26, 2009 at 2:20 am | Permalink

    Sooo… based on your analysis, I really think that I might just skip college altogether. I’ve already aced AP Computer Science AB. And calculus, physics, etc.
    I’ve taught myself Java, C, C++, C#, Python, Front End programming/Web Design, back end programming/Server setup and maintenance, and a little debugging and assembly.

    [not-important] One of my friends is a GOD in memory analysis, debugging, low level/FPU assembly, reverse-engineering, and network systems hacking. Not that script kiddy shit, real stuff: packet interception, code injection, bot clients, etc. While his business practices are questionable (hacking WoW accounts, selling gold, gear, etc), I do commend him for doing everything on his own, and making enough to BUY a car if he really wanted. He’s a bit of an elitist asshole, though; he comments on how everyone is beneath him and how school is a waste of time (truth). He doesn’t study for anything, yet he somehow aces every test he has come up against. [/not-important]

    [rant] Anyways, thanks to the fact that most HR departments for most businesses have the intelligence of a centipede and think that you’re only hireable if you have a fucking degree in something, I might have to get that stupid piece of paper anyway.
    Try to get an interview, or bid for a contract, when you’re just a senior in high school and people just laugh, saying “ha, ha, you’re just a dumb kid, you can’t do anything.” Then they show their incompetence while trying to figure out why their computer isn’t turned on when it’s clearly THE FUCKING MONITOR THAT’S JUST OFF. (true story) [/rant]

    Ahem, I do sincerely apologize for that outburst, good sir. It’s just that I would rather not be discriminated against because of my age.
    Especially when it’s clear that I can program better than some of the half-wits who graduate with a BS in CS and couldn’t program their way out of a paper bag. Now, I realize not all graduates are like that, but it’s a bit disconcerting when a company would rather keep those employees than let me at least intern or hire me for minimum wage. (also a true story)
    [PS] As you can clearly see, proper formatting is a must for me. [/PS]

    • R.Pinity
      Posted January 24, 2010 at 7:00 am | Permalink

      I’m willing to bet that your attitude probably has something to do with them not hiring you, too. Yes, it really does often seem silly that a lot of companies hiring techs/programmers don’t have a more rigorous hiring process involving a CTO or somebody else with a technical background, but that’s how it is. There are, however, a number of certifications in various fields which can be obtained at a cost significantly lower than the cost of college or other post-secondary education. If what you say is true, then passing such exams should be a breeze. Combine that with some volunteer experience (there are so many non-profit organizations that need tech volunteers, it’s absurd), a little attitude adjustment, and a bit of perseverance, and you’d probably find yourself in a job that pays well, has decent people, and involves doing something you love.

      As for the article: having worked at a place that sells computers and software at academic rates for university affiliates, I can pretty well verify that a large portion of the faculty at the local university, including the head of the IT department, know a lot less about computers, software and networking than their positions imply. What kind of sadistic bastard (with a large annual budget) buys Macs for the IT department?

      It has consistently baffled me that universities (American ones in particular) have this bad habit of teaching aspiring creatives and hackers that there’s only one, bass-ackwards, often proprietary way to do or learn things “because it’s the industry standard” or “because it’s the way it’s supposed to be taught,” without considering the impact this has in creating a linear, often crippling view of a given industry. I can’t help but wonder where the internet would be these days if thousands of undergrads weren’t told that in order to do anything web-oriented they’d have to purchase copies of the programs in the Adobe Creative Suite line of products.

  • The Prof
    Posted December 10, 2009 at 5:35 pm | Permalink
    • Nathan
      Posted January 18, 2010 at 2:59 pm | Permalink

      Thank you for beating me to posting this..

    • lee
      Posted January 25, 2010 at 2:35 pm | Permalink

      Gates are made of transistors, not transistors from gates. A logic gate consists of a transistor in a particular orientation to other transistors in order to embody some electronic logic (i.e. TTL), along with some other transistors to speed up the electronic drain, and even more for buffering outputs. You can buy them at RadioShack as stated above, but I wouldn’t recommend it, because RS thinks 1000% markup is acceptable for carrying such an obscure part.

    • Aeiluindae
      Posted February 20, 2010 at 7:18 am | Permalink

      Odds are, they are not available at most Radio Shack stores anymore. Disappointing, because Radio Shack stores are easier to find than dedicated electronic component retailers.

  • Jason
    Posted December 10, 2009 at 5:43 pm | Permalink

    For the most part, this is a good list (coming from the mouth of a current CS major). I think that part of the problem is that the worlds of academic and real-world computer science are very different domains that happen to use some of the same tools.

    I feel obliged, however, to point out one error here. Transistors are not made out of gates; gates are built from transistors. Actually, any construct that can produce the proper boolean result from a given set of inputs is a logic gate. Heck, if you were so inclined, you could probably construct gates (and in an extreme case, an entire computer) from toys: http://www.youtube.com/watch?v=H-53TVR9EOw . You can, in fact, buy DIP-packaged logic gates at RadioShack, in both the TTL and CMOS varieties. They generally come with 4 to 6 gates per chip.

    As I said at the beginning, I wholeheartedly agree with most of this article. I just thought you and your readers might appreciate some clarification on one point.

  • Required
    Posted December 10, 2009 at 6:15 pm | Permalink

    This is all true, except for the last part. You CAN buy gates from RadioShack: there’s a little thingie called an Integrated Circuit, or IC. And no, that’s not the thing transistors are made of. Gates are actually made by combining transistors. Plus, messing around with logic gates WILL help your understanding of logic on many levels. Of course, for that to work right, you’d need at least a good book on the subject of digital electronics. The author was doing so well; too bad he decided to rant about things he doesn’t know crap about.

    • Steve
      Posted January 18, 2010 at 8:57 pm | Permalink

      The OP had a definition confusion. The professor’s “gate” is a device that implements one of the primitive Boolean functions. Those come in integrated circuits like the famous 7400 series. “Gate” can also refer to the control electrode of a field-effect transistor, which is the definition the OP fixated on.

    • jkuehn
      Posted February 23, 2010 at 1:57 am | Permalink

      You can still purchase a FET at RadioShack and make a logic gate. The OP didn’t know what he was talking about. Yes, there are cheaper places. No, the prof’s advice didn’t make a lot of sense, because configuring a logic gate at the transistor level relies on electronics experience that most CS guys don’t have.

  • Matt
    Posted December 10, 2009 at 6:24 pm | Permalink

    I must be one of the lucky ones, but none of my professors have made claims 6, 10 or 11.

    6. I’ve had to quickly learn new languages that employ completely different paradigms (e.g., Prolog, ML) for courses, but none of my teachers claimed it would be easy. Instead they told us, “prepare to tear your hair out because this stuff is hard to wrap your mind around… but we still expect you to learn it quickly because we have a lot of material to cover”.

    10. I’m pretty sure all of my teachers are sane enough to realize that Windows/Mac operating systems aren’t going away any time soon.
    Some of my professors have made a weaker claim though: “The masses will always continue to use whatever is already installed on their box. Those of us whose job involves installing/developing/configuring will adopt *nix”.

    11. None of my professors are expecting TeX to become mainstream, but they want us to learn it because it’s a useful tool if you ever plan to publish research papers.

  • bob
    Posted December 10, 2009 at 6:53 pm | Permalink

    I agree with the majority of your points, but if you want to be taken seriously, or even quoted, check your spelling and grammar there champ.

    Also Apple did not take “the dicking around out of computing”. They basically limited the way an end user can fuck it up.

  • bjones
    Posted December 10, 2009 at 8:57 pm | Permalink

    BJTs are made from doped semiconductor junctions. MOSFETs are made from the same stuff but include metal-oxide layers (and a different geometry) that help create channels for electron/hole flow by biasing the gate (not a logic gate, by the way).

  • bigT
    Posted December 10, 2009 at 9:00 pm | Permalink

    You actually can purchase logic gates; they usually come with several gates on one chip.


  • John Gates
    Posted December 12, 2009 at 4:00 am | Permalink

    That’s why I dropped out of CS and switched to Electrical Engineering :)

    PS: I found writing code by hand particularly stupid.

  • George
    Posted December 14, 2009 at 10:51 pm | Permalink

    I might have to disagree with you on 5.

    Studying algorithms, how they are designed, and how the problem is solved (or shown unsolvable) teaches you the right tools and the proper mentality for figuring out a new and complex problem. Algorithms class emphasizes efficiency. It’s not just about solving the most complex computational problems of today; analysis of algorithms is about how to solve the most complex computational problems faster.

    Also, exams might miss the point of improving your hacking skills, but they are a good tool used by universities to weed out talented but hopelessly lazy and frivolous hacker-wannabes. Just take the damn exam, ace it, and move on.

  • Posted December 24, 2009 at 9:07 pm | Permalink

    1. I started college in a computer science major with only HTML under my belt, and we started with C, then C++, then Java. Since then the policy has changed to start with Java. One of my professors would say that she was really teaching design principles, but had to teach us a lot of C in the process. Progressing from C to Java made sense while I was doing it. C seemed pretty basic for learning how to play with code blocks; C++ was similar to C but had new and interesting problems and tools; Java was similar to C++ but had new and interesting problems and tools. By now I’m used to Java, but I don’t think it would have made as much sense to me as a freshman, though I can’t say for sure. Things that are easy to do in Java are frustratingly complicated in C, and vice versa.

    2. I learned assembly language after Java, and it killed me. Gave me a new appreciation for engineers I guess, but mostly frustrated the hell out of me since it took me twelve lines of code to do something I could do with only one in C. I would much rather have started with assembly and worked my way out. However, I may be in the minority on that one. It just seems a more logical progression to the way my brain works.

  • Tim
    Posted January 10, 2010 at 12:37 pm | Permalink

    I mostly agree with this list, with a couple caveats.

    As mentioned before, the statement about gates is wrong.

    As for (3), you must have had a bad professor. Having a large class take programming tests on computers usually isn’t viable, and paper is a PERFECTLY fine metric for measuring programming ability. You mention that it’s about writing error-free code, which is wrong. My classes that have done that, for instance, only assign a couple points of the entire problem to syntax. Most (95% or more) of the points are based on whether the idea and execution of the solution are correct.

    And for (1), I don’t see Java being a particularly bad starting language. I think it’s easier to gloss over a few inconsistencies than to jump into something like C, where people will get lost in the details instead of trying to solve problems. Computer science, in my opinion, isn’t as much about being able to delve around in pointers, but rather about learning how to use your language to solve problems. Doing this in Java or another high-level language is much easier, and then you can go back and explain the fuzzy details.

  • Anonymous
    Posted January 10, 2010 at 12:53 pm | Permalink

    I agree that Java’s a lousy platform to start a CompSci syllabus on. In fact, Java, like Visual Basic and C#, is an all-around really bad language. C and C++ aren’t that hard to get started on. My high school’s CompSci course leapt right into C++, for example.

    Completely disagree that Linux won’t overtake Windows, since usage statistics are strongly indicating it’s starting to happen, albeit at a slower pace. Windows usage is less than 89% and dropping rapidly. Linux adoption on the desktop? Increasing very rapidly.

    I also disagree that the ability to dick around with the operating system stops it from being successful. In fact, I think there are two reasons why Macs never really sold nearly as well as PCs: 1. Their hardware is inflexible and overpriced, 2. The operating system is far too limited. Now Windows is continuing the feature-strip cycle it began with Vista and continued with 7. KDE on Linux is heading the opposite direction, lush with usability features that put Windows and OS X to shame (If you don’t believe me, see the KDE 4.4 previews. Come February of 2010, Linux will officially exceed OS X and Windows in usability.)

  • Enigma
    Posted January 10, 2010 at 1:37 pm | Permalink

    Haha! I remember taking an Intro to Computer Science course freshman year in high school (a course everyone in the school has to take), and they did everything you say is wrong. We learned Java, spent a long time on making GUIs, and they made us write out code by hand first. We also had long lectures and were tested frequently. The upside is I became a life-long (so far at least) game developer and I am currently in our high school’s computer science research lab (all seniors are required to work on a research project).

  • Casey
    Posted January 10, 2010 at 3:00 pm | Permalink

    Of course it is better to start with a high level language before learning low level ones. I don’t know why people start teaching with Java… If I were to teach an intro programming course I would probably start with something like Python. (It does force you to indent properly after all… lots of beginner programmers have trouble with this). Probably go into C/C++ after that.

    Might as well use books instead of lectures if you can’t ask questions. But if you can ask questions, then lectures are much, much better for learning than books. Maybe not for you, but remember that different people learn differently. I like having someone explain something to me and then being able to ask for clarification if there is something I don’t quite understand. I can’t tell you how many times I’ve read a book about how to do something and come away confused, because the author wasn’t there to clarify a point. And OF COURSE you need to learn by doing. That is what assignments and exercises are for.

  • Casey
    Posted January 10, 2010 at 3:06 pm | Permalink

    And also, you can buy gates at radioshack. How the FUCK do you not know this?

  • fail
    Posted January 10, 2010 at 6:01 pm | Permalink

    Gates are made from transistors, not the other way around.

  • CE student
    Posted January 10, 2010 at 8:14 pm | Permalink

    Being a Computer Engineering major I have to repeat that #12 is completely wrong. Transistors are used to make gates, not the other way around. Gates can be purchased from Radio Shack in an IC.

    I’m not sure if I agree with #1 or not and have struggled with the Java debate. I think starting off with C would be better, but I don’t feel Java is a “bad” language to start with. Java does have a lot of overhead, but you don’t need to use object-oriented techniques when using Java. Most students have no problem memorizing public static void main(String[] args) and System.out.println without going into details. Or they have no problem understanding it if they insist on knowing (it’s not that complicated…).

    I really don’t think exams are a bad way to test knowledge, but it is tricky to write a good CS-related exam, which is probably more of the problem here. As long as the prof is willing to accept pseudocode-ish answers.

    The only other thing I disagree with (other than #12) is #4. I feel lectures are extremely helpful in understanding coding. Your problem is that you had bad professors. Luckily I am at a small private college which seems to lack bad professors :) I know for sure I would have spent a lot more time teaching myself C/C++ pointers if my prof had not lectured on it. Or the differences between passing objects by value, pointer, or reference.

    Lectures are a starting point so you know what to do on your own when you tackle a problem by yourself. They won’t make you an expert but they give you something to work off of.

  • PohTayToez
    Posted January 10, 2010 at 8:24 pm | Permalink

    In response to #2, I feel that it might be you that is confused as to what basic means. It seems you have it confused with “simple”, when basic means fundamental, or “serving as the basis of”. Other higher level languages build upon machine language, so I do not see how calling machine language basic would be incorrect.

    • RyanM
      Posted January 18, 2010 at 12:07 pm | Permalink

      Oh, sorry, PohTayToez already said pretty much the same thing as me about #2. That’ll teach me not to read.

  • RyanM
    Posted January 18, 2010 at 12:03 pm | Permalink

    I have some quibbles.

    2. I think it’s you who’s confused about the word “basic”. Basic does not mean easy. It means that it’s fundamental. Fundamental to how higher level programs are compiled. I highly doubt that any competent CS professor would suggest machine code as an effective way to program. I was certainly told by my CS professors that machine code is “painfully idiotic” for humans to program in, and that assembly is only slightly better. We learned it so that we could understand compilers, and how the CPU executes programs, but we didn’t actually do much programming in them.

    3. This does strike at the heart of a serious problem, but I don’t think I’ve ever met a CS professor who would say that writing code down is “better” than doing it on the computer. It’s just a necessity because CS exams can’t usually be done on a computer. Sometimes they recommended that we handwrite assignments as practice for exams, but it was never something we had to do.

    6. I’ve never tried prolog, but in my experience it usually doesn’t take me more than a week to learn a new language enough to do CS assignments in it, unless the language is radically different from others that I’ve used. What I don’t know I can figure out as I go along. However, I’ve never had a CS professor who actually required me to learn some specific language that I’d never used before in a week (or two weeks), on my own. Maybe it’s just me though.

    10. I’ve never heard a single CS professor say this. In fact, most of my professors (although they use Linux as well) are Mac or Windows users.

    11. Never heard a CS prof say this either. Sounds to me like that experience is specific to you.

    12. As has been stated many times, yes you can.

    • Izkata
      Posted January 30, 2010 at 2:02 am | Permalink

      2. “Basic” can be read either way. I agree with the OP on this one.

      6. Prolog *is* radically different from any other language I’ve ever seen before. Coming from a C/C++ background, it can take days to weeks to wrap your head around how a Prolog program will execute.

  • Brendan
    Posted January 18, 2010 at 2:36 pm | Permalink

    I’m going to have to be the first one to disagree with your first point. Java certainly has some problems, but as a first language it has one very VERY nice feature: a complete, well documented standard library. If you want to learn C or C++, you need to start pulling in other libraries if you want to do something like multithreading, any kind of GUI, or networking. In Java, these are all part of the standard library and so they’re easy to learn. It’s probably a good idea to move away from Java while at school, but it’s not a bad place to start.

    • codemenkey
      Posted January 26, 2010 at 6:02 am | Permalink

      who cares that it has everything but the kitchen sink? java is a bloated piece of crap, and perhaps the most convoluted and insane language i’ve ever had the misfortune of working with. i would never teach a beginner java, not because a beginner “wouldn’t get it,” but because the language itself brings with it its own set of problems that a more sane language wouldn’t present.

      my own school just recently switched to java. i’m pissed. ;)

    • Izkata
      Posted January 30, 2010 at 2:04 am | Permalink

      No, Java teaches how to ride on someone else’s work. Virtually anything you want to do, you can import a library and call some functions, if you can find it.

      C/C++ on the other hand, you generally have to think about what you want to do, and how to make the language do it.

    • Posted February 19, 2010 at 12:42 am | Permalink

      I tend to agree with Brendan. Java’s not a bad first language (although Python would be better, IMHO). It teaches you, at a reasonably high level, about object-oriented programming, without as many of the opportunities to shoot yourself in the foot as C++ provides.

      @Izkata: Why on earth would you want to waste time re-writing code which other people have already written for you? Do you implement string handling routines each time you write any software, or do you use the STL? If CS degrees teach you one thing, it’s that it’s okay to make use of other people’s hard work to help you solve really interesting problems. So why waste time on the little things?

  • Jon
    Posted January 18, 2010 at 7:08 pm | Permalink

    I’d agree with all apart from #8

    As far as I’m concerned, a GUI is completely irrelevant to learning to code. As long as I understand the concepts of human-computer interaction, I should be able to make a useful GUI. I find that in any given project, more time is spent fiddling with the UI than creating the actual functionality. In the real world, the GUI is hugely important… when learning, it’s just time-consuming. Lots of tinkering, but very little actual code being touched.

  • abdullah
    Posted January 20, 2010 at 1:19 am | Permalink

    i disagree with 3 – writing code by hand. writing code on a piece of paper forces you to think about the logic of what you are doing. it ensures you have a basic understanding of variables and loops. this may not seem like much to an experienced coder, but beginners really have trouble with basic concepts like what it means to declare a variable and assign it a value.

    i am not saying entire systems need to be hand-written, but hand-writing short snippets ensures the student doesn’t copy/paste pieces from different places, and also prevents the student taking the ‘shot-gun’ approach, where different things are tried until a combination comes up that works.

  • Alex
    Posted January 20, 2010 at 12:00 pm | Permalink

    i shall correct you on the last point:
    a) gates can be bought at RadioShack (to be precise, chips that contain gates; google the 74HC series)
    b) transistors aren’t made of gates; gates are made of transistors

  • Xheese
    Posted January 22, 2010 at 5:21 pm | Permalink

    Of course, anyone with half a brain will realize computer science is not about programming. Programming is simply a tool. Much like being an astronomer is not about tweaking a telescope; it’s about what you can do with it.

  • Adam
    Posted January 23, 2010 at 9:06 am | Permalink

    You’re pretty well writing from the perspective that computer science = programming, which is sort of like equating physics and engineering. One is about the theoretical foundations, and the other is about applications. Granted, programming is a popular career choice for people with computer science degrees, and the distinction is widely disregarded, but the distinction actually exists. If you want to study software engineering, study software engineering, rather than criticizing computer science for not being software engineering and criticizing computer science professors for not having the attitudes of software engineers.

  • BetterProgrammerThanYou
    Posted January 26, 2010 at 9:13 am | Permalink

    This so-called “article” is nothing but a load of crap wrapped up in a bitter shell.

    Honestly, one quick browse through your list can tell me some very important things:
    1) You are, most likely, a poor programmer.
    2) You are, most certainly, a poor student.
    3) You will be, in all likelihood, a poor teacher.


    The purpose of writing code by hand is not to “[test] your ability to write trivially simple software without making errors,” it’s about separating the idea of designing solutions from the language/IDE/tool-specific world of programming. Hand-written code, usually pseudo-code, is used to prototype ideas and identify needs and problems before any real development cost has been incurred.

    What, you’ve never seen developers write code on a whiteboard? You’ve never defined the rudiments and flow of a piece of software on a legal pad, comfortable on a couch whilst watching your favorite sporting event?


    “You can’t learn how to write code by watching someone write code on a blackboard or listening to elaborate explanations from professors.” Well, duh. You don’t go to school for four years to earn a degree in Computer SCIENCE to learn how to PROGRAM. Go buy yourself a copy of “Sam’s Teach Yourself Programming in 24 Hours” if that’s what you’re after.

    The professors are there to teach concepts at the beginning, and later they will bring common problems, models and solutions to the table.
    -“Merge sorts are faster than bubble sorts. Let me show you how.”
    -“In this situation you might want to use the Factory Model. Here’s why …”

    • Izkata
      Posted January 30, 2010 at 2:08 am | Permalink

      The only time code should ever be handwritten is to teach a basic concept in an introductory course, and by the professor. There is no argument there.

      Developers don’t write code on whiteboards, and neither should students handwrite code anywhere. They symbolically write the algorithm – that’s almost like the difference between night and day.

      • elhombre
        Posted February 28, 2010 at 11:11 pm | Permalink

        Skip anything you like son, the real world will soon kick that 13 year old emo-fag attitude out of you. As smart as you think you are, it counts for nothing. The world is full of failed geniuses. I strongly suspect that companies turn you down because you’re an insufferable, immature little drama queen. Don’t feel bad, though, you will mature with time.

  • Posted February 1, 2010 at 3:28 pm | Permalink

    I’d like to add one more.
    Thinking that blind and excessive use of UML leads to better software!
    I wish they knew that UML isn’t the “silver bullet”; ultimately it’s the code that matters!
    I’m taking a “Software Engineering” class.
    We are told to use a “GRAMMATICAL PARSER” to determine the use cases in UML. (What moron would use a “grammatical parser” instead of his head?)
    Our professor thinks that coding is a monotonous task and can be replaced by code generators!
    And it is mandatory for us to use Java. (He won’t even listen to alternatives (like Python) which I feel more productive using.)

  • Lucky the cat
    Posted February 2, 2010 at 1:13 am | Permalink

    Apple didn’t take the “dicking about” out of computing. Essentially they don’t make all-round machines; they limit the hardware used, limit the whole experience and charge an excessive amount for the “privilege”. Limiting choices and banning people from contributing might seem like a way of making things easier, but I’m sure the same was said about communism next to democracy.

  • Johnny
    Posted February 2, 2010 at 10:52 am | Permalink

    You CAN buy gates at RadioShack

    The same idiot who thought LaTeX was the future also told his class to go buy gates (the things transistors are made of) at RadioShack and play with them to see how they work. Again, this evidences how completely out of touch some of these people are. Gates are microscopic. You can’t go buy them at an electronics store.

    Really? Transistors are made up of gates? ’Cause I thought gates, logic gates, that is, were made of transistors. I should try to make a transistor out of an XOR. I hope I can figure that one out. Of course, we shouldn’t talk about “logic” with the author of this blog post.

  • Hamur
    Posted February 3, 2010 at 9:24 pm | Permalink

    1. Java’s a terrible first language. But by the time you get to college, if you want to be a programmer you should have already played around with it enough to do fine with Java.

    This brings me to my rant.
    x13. College students must suffer through at least 3 years of remedial “Mouse Clicking 101”, “What Is a Variable 201”, and “If Statements 301” before they are allowed to begin to learn anything useful. We don’t expect English majors to take courses on the alphabet and how to read and write, so why require Comp Sci majors to take the equivalent?

    2. Depends on how your mind works. I’m glad I started low-level and worked my way up. Yes assembler’s more complex, but knowing what’s underneath everything explains a lot of the weirdnesses of computers and gives a solid foundation for everything else, making it much easier to understand and debug.

    3. Agreed. I wrote my first programs on a typewriter, but there’s just no need for that anymore. Diagrams, yes; pseudocode, yes; those are design-time concepts, but not actual code graded on syntax.

    4. As a pro, you’ll go to presentations and conventions that are very similar to class lectures. It’s not as good as brainstorming with a team or being mentored by a more experienced programmer, but it’s part of the job, and good for big topics or big changes. Maybe if it were presented as such, with most of the classes being group brainstorming and/or mentoring while only a relative few were presentations to cover major points, it would work better.

    5. Total nonsense. Algorithm design usually involves the synthesis of existing algorithms (or pieces thereof) and/or adapting or improving algorithms that are somehow similar. Furthermore, it teaches you to think about how and why new algorithms need to be developed – the types of deficiencies that otherwise good algorithms have, so that you can think about and recognize possible deficiencies in your own, although they may address very different problems.

    6. Disagree strongly. Out in the real world, you need to be able to pick up new languages, libraries, protocols, and techniques very quickly. Otherwise, you’ll find yourself amongst the dinosaurs entering COBOL instructions on punchcards. Two weeks is not at all unreasonable for a new language. API libraries take longer. Becoming really proficient, of course, takes a lot longer, but for basic competence, a week or two for a language’s syntax and basic concepts is not at all unreasonable. Of course, if you’re going to a whole new paradigm (functional or logical from procedural), that’s very difficult. But that’s why programmers get paid well. If you don’t enjoy that or can’t do it, you may be in the wrong career.

    7. More importantly, exams test the ability to answer questions instead of the ability to design, develop, and maintain programs. Good if your career goal is to become a Jeopardy contestant, bad if it’s to be a programmer.

    8. Good call. It’s amazing how hard it is to find a decent class that covers Windows API programming, given the market share of the OS. Never mind the reams of materials in the industry about how programmers can’t design programs with good usability. This is why. College teaches us not to. Piping everything to and from STDIN and STDOUT, and using obscure hexadecimal error codes should be good enough for anyone, right?

    9. You’ll use algebra all the time, although you won’t even notice it. You may use geometry quite often. You won’t use calculus often, but it’ll help because of the different ways of viewing things that it gives you, and it’ll really come in handy when you do need it. Likewise for statistics. Programming isn’t math, but math is a way of thinking that really, really goes well with programming.

    10. Hah. At least your professors weren’t VAX fans.

    11. ASCII will overtake LaTeX because…well, that really needs no explanation.

  • fingerprint211b
    Posted February 4, 2010 at 2:34 pm | Permalink

    1. I don’t agree, really. Java is a very easy and beginner-friendly language, and if you’re going to learn programming, you should learn it the right way from the start.
    2. I never heard of anyone “writing in machine language” since punch cards. If you mean assembly languages, then I disagree.
    3. I agree.
    4. That’s right, but you won’t know what to do unless someone shows you.
    5. Usually, an example problem is an EXAMPLE; it’s used to show how an algorithm works and how to use it. I’m not sure what your problem is.
    6. Languages that have similar syntax and belong to the same programming paradigm are generally easy to switch between. For example, going from C# to Java is a piece of cake.
    9. You probably can get by without math if you’re an IT engineer, but not if you’re a CS.
    12. Please don’t talk about things you don’t have a clue about. Your professor didn’t mean gate as in part of a FET; he meant a logic gate. Both transistors and gates come in different sizes, not just as parts of integrated circuits, so you CAN buy them at a RadioShack.

  • dash
    Posted February 6, 2010 at 9:23 am | Permalink

    I have to take issue with #5. The only way to learn how to attack a problem that remains unsolved is to see how other people attacked their problems. You’re not going to learn the solution to EVERYTHING by studying previous examples, obviously, but you will learn how to problem-solve, which is a pretty important skill when it comes to designing algorithms.

    Most everything else I agree with. I hated learning Java for the exact reason you said: “What the hell does public static void main(String args[]) mean?”, to which my prof replied, “Don’t worry about that just yet”. Wouldn’t it be easier to learn how to do for loops and arrays in something like Python or PHP?

  • Justin
    Posted February 7, 2010 at 3:11 pm | Permalink

    I disagree with many of your points; the reasons have mostly been stated by others. This comes from my experience as I “did” my undergrad in CS with a software engineering specialization, as well as from teaching programming at the high school level. Java is a fine language to start students with. If you are intelligent and hard-working enough to attend a decent school, you should be able to be taught what “static”, “void” and “main” mean pretty quickly. My 15-year-old students don’t take all that long. I agree that lectures are not an effective means of teaching the process of programming, but that is only true if they are the only means of learning. I had an abundance of assignments to complete where I could apply what I picked up through lectures. I do agree that a GUI is secondary. At the university level the focus is on higher-level concepts such as algorithm design, time and space complexities, etc. These are concerns that apply to every programming language. Learning to create a GUI is in some respects a language-tethered skill. It is an application of skills probably better suited to learning in a college program. Human-computer interaction (HCI) is a different matter. It involves understanding human psychology and the way in which we use a computer; knowledge very useful for GUI design. This is something suited to being taught in a classroom, while learning to create a JFrame can be done later on your own time, or on the job.

  • Allen
    Posted February 24, 2010 at 2:29 pm | Permalink

    Mostly a reasonable position.
    I take exception to 8, 9 and 12, though…

    8) GUIs not important to learning to code. They are not important, just like parsing is not important to learning to code, and algorithms are not important to coding. Knowing how to properly write a novel is not important to learning how to write. Algorithms are important tools. GUIs are important tools. Neither of them is coding. Coding and design are separate but inter-related. I can design algorithms without knowing anything about coding. Coding is the process of converting the algorithms into executable form.

    9) Programming needs calculus. Animations/graphics are not the only fields that need calculus. Advanced financial analysis, data mining, engineering, physics, advanced chemistry and protein folding all require calculus at some level, some more than others.

    12) Gates at Radio Shack: Yes, you can buy gates; those are the 7400-series devices that have 4 NAND gates (or other things like DFFs) in a 14-pin package, though they are getting harder to find in many of the Radio Shacks as they cater less to the hobbyist. You can even buy experimenter boards and some simple “toy” learning kits. You cannot actually see the gates, but you can play with them and learn about them.

  • Bugong
    Posted March 7, 2010 at 10:19 pm | Permalink

    While I do agree with most of what you wrote, I have certain quibbles with some of it.

    1. I agree, Java is difficult to learn at first, but its biggest advantage compared to many other languages is that it has a very, very well documented API. If you need anything, you can just look it up there. I cannot say the same for C. Also, most of the things you need are already provided, like parsing libraries or linked list libraries, which aren’t provided with C.

    3. While writing on paper may seem stupid, it does help to organize your thoughts. You don’t have to write perfectly correct code. It helps you see the logic flow. I write pseudocode on paper so I can see the flow. Then when I’ve got it down, I’ll write the proper code. This is also useful when you’re in a team. If other people see the way your code works via the pseudocode, it will be easy for them to read your code and improve on it. That’s why there are whiteboards everywhere in software engineering companies. =D It reduces cost, because when you’ve sorted the kinks out on paper first, you don’t have to redo everything if something happens. =D

    4. Lectures are needed. They teach you the programming concepts that you need to know. If you need actual programming lessons, internet tutorials are better. But again, they won’t teach you new algorithms.

    5. There aren’t really new algorithms. “New” algorithms are actually mashups of different existing algorithms. Unfortunately, most of them involve problems you’ve never seen before, so it is unavoidable that you will have to learn an algorithm which solves a problem you think is stupid.

    6. You don’t need to pick it up immediately. Short “Prolog” courses are there to get you started. Also, if two different programming languages follow more-or-less the same paradigm, it’s easy to switch between them, like C and Java.

    9. Amazingly enough, I’ve seen a lot of math in programming. It’s just not as obvious as you’d think.

    All in all, I think you’re mixing up computer science and programming. Big difference. Programming’s the tool. Computer science is the knowledge needed to use it.

  • Posted May 27, 2010 at 1:24 am | Permalink

    You sir, are a god amongst insects. Not only did you pinpoint the bullshit in any given CS degree, but you just inflicted mass-butthurt on every “CS Graduate” that can’t come to terms that their education was a giant vat of ass butter.

    Protip: Your field isn’t a gift from the heavens. It’s not unique. Hell, it’s a bastardized combination of many different fields, dependent on what your specialty is. You make big boxes full of parts that hold electrical charges shift currents and voltages around. Occasionally you make the big shiny box that people sit in front of flash different colors. Wooooo. 90% of the critics that posted probably can’t wrap their heads around a game loop.

    And I swear to god, the next person that says that MIPS is a viable road map for learning assembly is going to get shanked.

  • Chris
    Posted October 18, 2010 at 2:47 pm | Permalink

    I would remove #12 if I were you. First, gates do not make up transistors. It’s the other way around! Second, logic gates are indeed available at some Radio Shacks. You probably should do some research next time before accusing others of being idiots on a subject you don’t understand very well yourself. Out of touch indeed.

  • Jack White
    Posted March 24, 2011 at 6:56 pm | Permalink

    Good post! Point by point:

    1. I totally agree. While I think there has to be a certain amount of jumping in at the deep end when you start programming, Java makes things difficult because of the sheer amount of structural definition you have to do to get something basic to work. In order to really understand your first programme, you have to understand the basics of object orientation. The need for OO might seem reasonable to a more seasoned programmer, but your first steps should be understanding very basic things like the idea of an instruction.

    That needs to be coupled with sending people off in a direction that will lead them to their end goal. So for engineering students, C is good, because there is little structural definition, but it’s easy to expand on the basics to include low-level hardware work (pointers, bit fields and working with memory). For a CS student, LISP is better, because it encourages the programmer to think algorithmically and in terms of data representation. For others, something less powerful but simpler, like Pascal or maybe Python.

    Nobody should touch the likes of Java or C++ as their first language – they really are bastard love-children of a variety of languages, and their impurities and idiosyncrasies make them conceptually hellish.

    2. I don’t agree so much with this. Learning about assembly and how that relates to machine code were the first steps I took in the world of programming. Although I still think there are better starting points, any programmer will benefit immensely from the ability to see how their high-level code is an abstraction and that many high-level ideas are grounded in how data and code are represented at a low level.

    3. It’s a fact that many programmes evolve rather than are planned. But it’s equally true that there is a multitude of big-name software that suffers immensely from this approach. The trouble with teaching programming by paper design is that the kind of projects undergrads are typically asked to produce could be hacked together by any half-competent coder in a matter of hours. In these cases, it’s usually easier to do the hacking. But if you try to build a cast-iron email client, or (God forbid) a content management system in this way, you are so far up shit creek that you can see it welling up from underground.

    4. Yeah, programming lectures are a heap of shit. Ha ha. Heap.

    5. Algorithm design is not taught by reading algorithms, but it can be taught by *studying* algorithms and trying to understand the thought process behind the design of someone else’s algorithm.

    6. I don’t get this attitude either. I went to Glasgow University to do CS 10 years ago and any number of people said languages were essentially all the same. What a load of bollocks! I think this comes as a result of the two most popular working languages being C and C++, and then adding a load of C-like languages into the mix. There are massive benefits in using different languages in different circumstances, and frankly, if you use them the way they are meant to be used, then even the difference between C and C++ can be a real headfuck. If you start involving yourself with Haskell, for example, then you actually need to unlearn a lot of what you picked up while learning C.

    7. I agree.

    8. Ehhhhh… I don’t really agree. Development of GUIs is still a tiny, tiny part of the world’s software development – it’s just that they’re always on display. Glasgow did quite a good first year course on visual design. I just think though that GUI design is for GUI designers and coding is for coders. The way you programme a GUI once it has been designed (by a designer) is unlikely to be terribly irksome. While they do need to be got right, and they’re important because they’re at the human/computer interface, they totally *are* a small speciality.

    9. Programming doesn’t necessarily require calculus, but a lot of it does. You should just regard algorithms for implementing calculus the same way you would any other algorithm – something you should know about for knowledge’s sake and that can be brought out of the toolbox if need be.

    10. I used to be totally into Linux. But since those days I’ve used a PC for more and more – newspaper design in particular, general admin work and typing up reports for an insurance broker. With those applications, you really, really appreciate what the likes of Microsoft and Apple have done for the microcomputer. Ease of use and good presentation rule the roost for most users – not power or customisability. While they have a way to go before getting it right, I believe that both Canonical (who coordinate Ubuntu) and Google have cottoned on to this. Canonical in particular now sees Apple as leading the way in user interface design and is making a concerted effort to advance Linux as a consumer operating system. I think the real power of Linux is that its (many, many) developers have the ability to place this layer of extreme usability *on top* of their geeky engineering OS, whereas Microsoft and Apple are institutionally unable to go in the reverse direction. If Ubuntu *does* become as usable as Windows, then it will start to go places, particularly in SOHO and SMEs.

    11. I sometimes use LaTeX. It’s good in a way, but rubbish in so many others. It is so much easier to accomplish layout as good as LaTeX *can* produce with something like InDesign or QuarkXPress.

    12. Like everyone above said, yes, gates are small, but there’s nothing to stop someone just putting two of them on a consumer chip and flogging that to people like us. I used some in my first year of my degree.

  • Ira
    Posted March 24, 2011 at 9:43 pm | Permalink

    1) Java is not a good language to learn to code on; however, it’s a great language to learn OOD on. Much better than C++. But learn to program in BASIC first. Why? Because it’s really simple. Then learn structured programming in Pascal. Then learn object-oriented programming in Java. Then learn C++, because then people won’t think you’re an idiot.

    2) If you’re a CSE or want to write your own OS or compiler, you should learn assembly. Otherwise, don’t waste your time.

    3) Don’t write code on paper. But do learn to write pseudocode, and learn flowcharts, and most importantly, LEARN HOW TO WRITE A DESIGN DOCUMENT. Far too few programmers know how to write decent documentation, and it’s what separates projects that fail from projects that don’t. For that matter, learn how to read requirements, too.

    4) Lectures are effective at teaching anything unless you can’t learn through them, and then they are useless no matter what subject they are in. However, a good computer course should be 50/50 lectures and computer work. Believe it or not, Computer Science is a science. If nothing else, learning logic helps.

    5) Who designs their own algorithms? OTOH, it’s nice to be able to discuss the merits of different sorting algorithms with a new hire without getting that blank look.

    6) Well, some people can. But seriously, some curricula need to realize that the number of programming languages you learn isn’t nearly as important as which languages they are. Ruby on Rails will get you hired. SNOBOL will get you laughed at.

    7) I always loved exams, but they don’t measure anything except that you can think under pressure. Hmmm…. not a bad life skill. Programming is usually not a high pressure job. Usually.

    8) GUIs can be important or not, depending on the job. However, a new Comp Sci grad should know everything about GUIs, so they can be as clueless about them as they are about everything else.

    9) Everything requires calculus. Everyone should know calculus, because, well… look, you are a science major, and every other science major learns it, even the biology people who become veterinarians. If I had my druthers, everyone who earns a Bachelor’s degree would have taken Calc 1. Good thing I’m not a dean.

    10) Don’t look now: Ubuntu is gathering some momentum. Still not going to happen, because Microsoft wields too much economic might for it to go anywhere.

    11) Yeah, that’s just funny. I wrote my thesis in LaTeX and hated every minute of it. There’s a reason it’s not used outside of textbooks. But the formula editor in Word sucks beans.

    12) I buy gates at Home Depot. Hang them myself. Oh, you mean the things on computer chips. Um. No opinion.

    Conclusion: Anyone who thinks you go to college to learn how to program needs to actually realize what going to college really teaches you. It teaches you how to think, how to study, how to take notes, how to do all that crap that high school didn’t teach you. You learn how to program by sitting at your computer at home writing code, and by sitting in an office writing code, and by sitting on a beach writing code, and by sitting in a board room being told how crappy your code is by someone who’s been writing code for 20 years, and by standing at the water cooler with the same guy, and by reading his code, and by reading the code of the guy who got fired for downloading too much porn on the work machine, and by debugging code from a vendor who didn’t know what they were doing because they all got Bachelors degrees in Computer Science.

  • Matt
    Posted March 24, 2011 at 10:30 pm | Permalink

    Misconceptions by this author:

    Software development is the only thing you can do with a computer science degree.

    This article should be titled “12 Reasons Why Computer Science Professors Don’t Prepare Students for Software Development”.

    You are just complaining because computer science professors didn’t teach you how to write code. Computer science is way broader than just writing programs. Just because your professors didn’t prepare you for software development where you work doesn’t mean that they aren’t good, or aren’t doing the right thing. Most of what you said might be true for your job where you work, but it in no way describes computer science in general. The major is computer science, not software engineering.

  • TP
    Posted March 24, 2011 at 10:40 pm | Permalink

    Wow, you stepped on a lot of toes! Nice job! I’ve seen people learn to code so many different ways it’s not funny. Teachers are always comfortable with a book and an approved method; it freaks them out to learn some kid hacker can out-code them. But coding is so easily learned by way of the internet that any interested, halfway intelligent person can learn it.

  • Peter Lind
    Posted March 25, 2011 at 3:07 am | Permalink

    First off, I am a computer science instructor, so I might be more than a little biased.

    You touch on some interesting points, but are not quite right in all of them. Let me take your “misconceptions” one at a time.

    1. Java – you are absolutely right, Java is a complicated first language. Python or Ruby would be better for learning about variables and program flow. However, Java is much better than C or C++, and it has an immediate “real-world” feeling, since the students can actually see it being used and requested by the industry.

    2. Machine language – when we say that it is “basic”, or low-level, we mean from the computer’s point of view, not the student’s. Machine language is the basic “workhorse” in the computer, whereas high-level languages have to be compiled or interpreted. High-level languages are, however, easier for humans to learn…

    3. Write code on paper – I’ve seen other instructors make this mistake too. The idea is that you write your program, not your code, on paper before sitting in front of the machine. Working with paper allows you to design your program, to think about the algorithm without all the nitty-gritty of the language you use. Experienced programmers can do it in their heads, or in comments, but students need to use paper. Unfortunately, some instructors seem to think that you need to write compilable code on paper, and that is indeed a bonehead misconception.
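
    (A toy Python sketch of that design-first idea: the plan is written as comments before any syntax exists, then each step is filled in:)

        # Plan, worked out on paper before touching syntax:
        #   1. take the list of scores
        #   2. drop the lowest one
        #   3. average the rest
        def adjusted_average(scores):
            trimmed = sorted(scores)[1:]          # step 2: drop the lowest
            return sum(trimmed) / len(trimmed)    # step 3: average the rest

        print(adjusted_average([72, 88, 95, 60]))  # (72 + 88 + 95) / 3 = 85.0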

    4. Lectures – you are absolutely right, you can only learn programming by programming.

    5. Algorithm design – hmm, well, it IS a good idea to learn to read existing code, and to understand how it solves its problem. But it certainly isn’t for beginners.

    6. Picking up other languages – I kind of agree. While it is informative to show students that they can learn a new language using the same principles they have already learned once, it always ends in confusion. In the long term you will pick this up by yourself.

    7. Exams – yeah, they are problematic, but we need some way to measure your individual understanding of the area. I would love to see some changes there, but I haven’t the faintest idea as to how it could be done better.

    8. GUIs are tools of the API – like working with filesystems, XML, databases, encryption, sound, threads, networking, etc. It is not part of learning to program, and it is more about learning a specific API than learning basic theory, and I guess that is why it is often left to specialty courses. I would like to spend more time on GUI programming, but it requires that you first understand basic programming, algorithm design, program flow, multithreading and event-driven programming.

    9. Calculus – you are right, it doesn’t. Programming doesn’t require any specific math skills; it only requires that you are logical and systematic – traits often seen in higher mathematics, but the connection between high-school math and programming is a nuisance that I would like to get rid of.

    10 and 11 sound like personal opinions, like whether Mac or Windows is best, or whether everyone should use OpenOffice or some other system. It is an endless debate, and no one is correct.

    12. You are mistaken – or your professor has taught you poorly. Gates are built from transistors, and you can buy integrated circuits with four NAND, XOR, AND or OR gates in them and play around with them at home. Boolean logic and how it is implemented in electronic circuits is a very important aspect of computer science, but maybe not so much of programming.
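
    (A tiny Python illustration of why those chips are worth playing with: every other gate can be composed from NAND alone, which is exactly what a quad-NAND IC hands you:)

        def nand(a, b):
            # The only primitive: a quad-NAND IC like the 7400 gives you four.
            return 0 if (a and b) else 1

        def not_(a):    return nand(a, a)
        def and_(a, b): return not_(nand(a, b))
        def or_(a, b):  return nand(not_(a), not_(b))

        # Truth-table check: OR really does emerge from three NANDs.
        for a in (0, 1):
            for b in (0, 1):
                assert or_(a, b) == (a | b)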

  • Kavan Wolfe
    Posted March 25, 2011 at 6:15 am | Permalink

    Thanks to everyone who left a comment. To fully understand the perspective of this article, one needs awareness of the paradox at the heart of computer science. Each year, western society requires hundreds of thousands of new software developers, and only a few thousand new computer scientists. Unfortunately, there are very few software engineering programs, so most students who want to become developers take computer science. However, the people who control the computer science programs are largely computer scientists, not software developers. This results in a fundamental mismatch between what society needs and what these programs deliver.

  • Posted March 25, 2011 at 7:16 am | Permalink

    Java isn’t really a good first programming language to teach; I mean, why do we have to put that public static void main in there anyway?
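
    (For contrast, a complete beginner’s first program in Python needs none of that ceremony:)

        # A complete first program: variables, a conditional and a loop,
        # with no class, no static and no void.
        for n in range(1, 6):
            parity = "even" if n % 2 == 0 else "odd"
            print(n, "is", parity)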

  • Posted March 25, 2011 at 4:21 pm | Permalink

    Programmer here. As in a passionate, obsessed, want-to-make-new-software-do-amazing-stuff programmer, not the lately seen stereotype of “oh, I heard it’s well paid, I want to be a programmer”. In any case, there is nothing I would love more than to see some proper IT teachers in universities, especially in the programming courses.

    Just as a small insider story: in my university (which shall remain unnamed), our VB.NET programming teacher is such a bloody genius that he wrote Dim Integer As Integer in a nice big font on the projector and couldn’t figure out what the problem was for about half an hour.

    But when I wrote a string to 7-bit octet text conversion function (the one used to convert a string to SMS PDU format), I got a 60 on my report because, and I quote, “you did not use arrays in your code”. Then what the bloody hell do you consider a string when I refer to it with brackets (or parentheses in the case of VB)…? IDIOT!
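
    (For the curious, a minimal Python sketch of the packing such a conversion performs; a simplification of my own that assumes plain ASCII rather than the full GSM 03.38 alphabet:)

        def pack_7bit(text):
            # Pack 7-bit characters into octets, LSB first, the way
            # GSM SMS PDUs do.
            bits, nbits = 0, 0          # bit accumulator and its fill level
            out = bytearray()
            for ch in text:
                bits |= (ord(ch) & 0x7F) << nbits
                nbits += 7
                while nbits >= 8:       # a full octet is ready
                    out.append(bits & 0xFF)
                    bits >>= 8
                    nbits -= 8
            if nbits:                   # flush the leftover bits
                out.append(bits & 0xFF)
            return bytes(out)

        print(pack_7bit("hello").hex())  # e8329bfd06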

  • Ash
    Posted March 28, 2011 at 6:31 am | Permalink

    I agree with everything except point 6; I found that after I learned C++ I could pick up nearly any language quite easily.

    I know a kid studying CS now, and he is having issues with binary. I told him that he doesn’t really need to know it: just get past it and forget it; it is redundant to modern programmers.

  • Kavan Wolfe
    Posted March 30, 2011 at 10:04 am | Permalink


    C++ to Java and other object-oriented or scripting languages isn’t so bad, but try Prolog and LISP.

  • Aphotic
    Posted April 8, 2011 at 10:54 pm | Permalink

    I bought some AND, OR and NOT gate chips at RadioShack last semester.

  • Posted May 12, 2011 at 3:22 pm | Permalink

    Python should be the first language taught; it’s very easy to learn.

    As much as I try to get rid of Windows, I always go back. I love Linux, but I don’t know if it will ever be good for the regular user.

  • Jake
    Posted August 4, 2011 at 10:50 am | Permalink

    “The same idiot who thought LaTeX was the future also told his class to go buy gates (the things transistors are made of) at RadioShack and play with them to see how they work. Again, this evidences how completely out of touch some of these people are. Gates are microscopic. You can’t go buy them at an electronics store.”

    What!? I’m sorry, transistors are made of gates? lol. This guy obviously never took a real hardware course in his life. If you can’t buy them at the store, then what in the hell is the TI 7400 series!?

    He makes a couple of decent points, but they’re shrouded in bad grammar and personal bias. This guy sounds like some kind of code monkey who would be better suited for a technical school than a real university.
