The “lack of innovation” comment was interesting. Open source being innovative would kill it, from the perspective of getting people to actually use the software.
The continued success of what is popular today, and the monumental failure of those wonderful technologies that were completely ahead of their time, have proven to me that people do not want innovation. They do not want to do their work better and faster than they thought possible. Rather, they want to be gently led down a well-traveled path by hype-mongers and marketing departments, never taking more than baby steps at a time, for fear of falling and scraping their shins.
“Programming must change — but how? At a reunion of coding pioneers, answers abound.”
I predict this article will soon have a lot of posts.
Maybe programming can learn something from engineering about accountability? A building is built and there’s an independent agency to review it. A plane crashes and there’s a complete review of how and why, and then edicts are handed down to all the makers. And hopefully the next generation of planes is better.
Also I think that our software is brittle because our hardware is brittle.
Today’s programming indeed seems to be very brittle, as no vendor is ready to guarantee a bug-free program! You pay them and still don’t get a warranty that you won’t be attacked or that your system won’t do malicious things, which is a pity considering the money those software companies make.
“Gates went on to say that young programmers don’t need computer science degrees: ‘The best way to prepare is to write programs, and to study great programs that other people have written.’”
Stupid, stupid, stupid. That’s a sure-fire way to make bad programmers. I’d make a snarky Windows joke, but we all know Bill hasn’t coded in decades…
Accountability is certainly good for increasing the reliability of large software projects, but not for small-to-medium projects or shrink-wrap software (e.g. Microsoft’s products). It will also dramatically increase costs.
Which part? Not getting a CS degree, or studying other people’s code? Or both?
The “not getting a CS degree” part. There is a whole lot more to being a good programmer than coding.
Check that. Not necessarily the “not getting a CS degree” part, but the overall show of disrespect for the academic side of programming. Such an attitude leads to a constant reinventing of the wheel, which is not a good thing.
The Computer Science degree is the wrong one to get if you want to be a career programmer. CS degrees are for computer scientists, who don’t have to worry about writing secure or user-friendly software (unless that is their specialty), or about delivering software within time and budget constraints.
No, the right degree for most programmers is a Bachelor’s or Master’s in Software Engineering (BSSE/MSSE). Read the article Software Engineering, Not Computer Science by Steve McConnell (http://www.stevemcconnell.com/SeIsNotCs.pdf) or the article it is based on, Software Engineering Programs Are Not Computer Science Programs by David L. Parnas (http://portal.acm.org/citation.cfm?id=314615&dl=ACM&coll=portal) for more arguments against CS degrees.
While it’s true that you can be a good programmer without either degree, both will make you a better programmer than you would be without one.
OSS can and does solve many of those problems. IMHO the problem is that every company basically reinvents the wheel many times. There is also the idea of ‘many eyes’. It is common in OSS to see bug reports in Bugzilla from users who ran the program through Valgrind or a different set of error-finding tools.
The problem is that OSS is still very immature. Currently, OSS projects have to ‘copy’ interfaces or behavior just to get used. It is very hard to innovate when most of the world hasn’t even moved over to OSS yet. True, there are small-time innovations (such as the amazing O(1) scheduler in Linux 2.6, or some of the good UI that goes into GNOME and KDE), but they aren’t revolutionary.
The fact is that people want things that ‘work’, so unless your innovation makes people work better, people won’t use it.
Subscribe or watch an ad – don’t you just love bookmarks? Next!
Peter Norvig (Director of Search Quality at Google) has a nice article about programming: http://www.norvig.com/21-days.html
An excerpt:
“If you want, put in four years at a college (or more at a graduate school). This will give you access to some jobs that require credentials, and it will give you a deeper understanding of the field, but if you don’t enjoy school, you can (with some dedication) get similar experience on the job. In any case, book learning alone won’t be enough. ‘Computer science education cannot make anybody an expert programmer any more than studying brushes and pigment can make somebody an expert painter,’ says Eric Raymond, author of The New Hacker’s Dictionary. One of the best programmers I ever hired had only a high school degree; he’s produced a lot of great software, has his own newsgroup, and through stock options is no doubt much richer than I’ll ever be.”
True, there are small-time innovations (such as the amazing O(1) scheduler in Linux 2.6
That wasn’t even remotely innovative; such technology was present in SunOS long ago. KDE and GNOME likewise have yet to do anything that hasn’t been seen before elsewhere.
Then what *is* revolutionary? Everything in computer land is just a (small) evolution of what others have already done before.
Innovation is overrated. Nobody truly innovates in computer land. Everything is built upon old stuff.
Innovation is overrated. Nobody truly innovates in computer land. Everything is built upon old stuff.
Yep, yep. Innovation is just a marketing term these days.
> Such an attitude [disrespect for the academic side of programming] leads to a constant reinventing of the wheel, which is not a good thing.
Note that in computing science, much academic input was so disconnected from ‘real world needs’ that it failed.
So I’d say that academic disrespect for real-world computer programmers induces a disrespect for the academic side of programming.
Innovation is overrated.
I don’t think innovation is overrated, but the term is ridiculously overused (especially by Mac fans) to the point that the word has become debased and quite meaningless (the same goes for the word “intuitive”). By the way, just because something isn’t “innovative” doesn’t mean it can’t be well-designed, easy to use, pleasant in appearance or performance, etc. 🙂
As someone who has a computer science (w/ business studies) degree, I vaguely remember doing Java and C++ (I can’t code to save my life). But what I do remember from software engineering courses is that one of the problems with programming was actually the rate of advance in the industry (and not the reverse!).
Basically, new languages and technologies mean that no language or technology is ever exploited to its full potential, as they are always superseded by newer things which effectively start from zero. I.e., the “innovation” in programming necessitates a constant re-invention of the wheel.
All opinions are mine unless otherwise stated. Terms and conditions apply. Offer subject to status and limited availability.
Tuz
Because software is such a broad field, covering all others in some way, it’s probably best to keep that in mind.
For medical and control software that could actually kill or injure a person, the software makers must be accountable. But productivity software and games? I don’t think so. Operating systems are a gray area; it depends on what situation you use them in.
I personally don’t care much about innovation in the current software scene. What is more useful to all consumers is more reliable and easy-to-use (intuitive) software, to paraphrase AMH. Innovation will truly come from using AI more often in software, as well as from newer technologies that will make the current desktop paradigm seem old-fashioned.
I also agree with lots of the other posts before me that many of these words are overused, probably to create more hype around products. My guess is that these kinds of terms are easier to sell than detailed technical merits/features that span a few (hundred) pages and that few people understand. I would also like to add that I actually haven’t read the article, because I wasn’t linked to it even after selecting that *stupid* one-day pass nonsense.
I don’t think innovation is overrated, but the term is ridiculously overused (especially by Mac fans)
Hm. Most of the time I hear the word innovative it’s from Gates or one of his lawyer minions explaining why they should be allowed to crush the competition.
In some ways an engineering education may be more useful if it were structured like an MBA. Give the programmer two years of CS education – learning languages, data structures, etc. Then let them work for a couple of years. After that, they could go back to school for a year or two to study how exactly projects should be run. I would’ve been better equipped to understand what was being taught (and probably would have remembered more) in my software engineering classes if I had had some work experience under my belt.
The problems with programming will never disappear. Nor will the problem of stupid bosses. The only things that will remain are the will and knowledge of individuals to do and produce, and those will always feel at home, and content, with OSS. It is hard to beat, and so what. It is disruptive, and so what.
I’ve seen this right throughout the computer world (this is a view from an insider): the inability of those in IT to step out of the IT-focused role and look at the situation from the end user’s perspective; the inability to understand that the end user doesn’t *care* about the details. The software/hardware combination is like a hammer.
The employee/end user uses the software and hardware to accomplish a task; they are not interested in the technology aspects. If they were, you would be out of a job, and they would simply purchase a barebones piece of software and customise it from the ground up into something unique. They pay a premium for the convenience you bring by developing software to meet their needs.
The most common complaint I hear from customers is not the occasional crash. Most people are able to shrug that off and say, “oh well, Murphy’s Law”. What does get people peeved is when they follow the instructions in a manual or a help file and something goes wrong, or worse still, when the language in the help file doesn’t explain the problem and solution in a clear and precise manner.
An example of this: I was writing a macro for Excel and needed help with a function; the information in the help file was completely bloody useless. Here is another example. I was teaching myself C# to see what it was like. I wanted to open a new window requesting input from the user. Using VB, the function (IIRC) is:
Dim strInputMe As String
strInputMe = InputBox("Please enter some foo for bah please!")
In the VB world, that would simply grab the input and throw it into a variable for me to merrily use somewhere else. Now, this is where the fun starts. I looked through the MSDN library. One would assume that if they were trying to migrate VB’ers to C#, it would be great to have a reference saying, “InputBox is a VB function call; HOWEVER, click here for the steps required to accomplish the equivalent result in C#” – but there wasn’t one. This was then made worse when I questioned the so-called “Microsoft C# guru” at the Canberra HQ, who was clueless about the InputBox() function in VB, let alone about simulating the same sort of result with a C# call.
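For what it’s worth, the VB runtime’s InputBox is callable from C# as well – a detail that reference never surfaced. A minimal sketch, assuming a .NET Framework project with a reference added to the Microsoft.VisualBasic assembly (Interaction.InputBox is the real API; the prompt and title strings here are just placeholders):

// Requires a project reference to Microsoft.VisualBasic.dll,
// which ships with the .NET Framework.
using Microsoft.VisualBasic;

class Program
{
    static void Main()
    {
        // Shows the same modal prompt dialog VB programmers know.
        // Returns the empty string if the user cancels.
        string strInputMe = Interaction.InputBox(
            "Please enter some foo for bah please!", // prompt
            "Input required",                        // title
            "",                                      // default response
            -1, -1);                                 // default position
        System.Console.WriteLine("You typed: " + strInputMe);
    }
}

Whether calling into the VB runtime from C# counts as the “proper” C# way is debatable – the idiomatic alternative is a small custom Form – but it is exactly the cross-reference the help file could have offered.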
Basically, new languages and technologies mean that no language or technology is ever exploited to its full potential, as they are always superseded by newer things which effectively start from zero. I.e., the “innovation” in programming necessitates a constant re-invention of the wheel.
Are you saying that C or any assembler languages haven’t been exploited to anywhere near their full potential?
I think your statement is a sweeping generalisation that is not applicable to plenty of languages.
Couldn’t agree more. C should be taught on the street corner where it belongs, like sex. I agreed with Stallman and Gates on something, and now I have to go change. I really agree that doing small devices is a lot like the good old days.
UNIX is the ultimate programmer’s creative medium – why would we build something else? It’s instant gratification.
No offense to the panel, but it’s a lot easier to innovate when the world is a clean slate. Nowadays it’s tough to find a clean spot of wall that hasn’t been done several times over.
The continued success of what is popular today, and the monumental failure of those wonderful technologies that were completely ahead of their time, have proven to me that people do not want innovation. They do not want to do their work better and faster than they thought possible. Rather, they want to be gently led down a well-traveled path by hype-mongers and marketing departments, never taking more than baby steps at a time, for fear of falling and scraping their shins.
Very cynical; a trait I generally share.
But in reality, most people would be happy to work better and faster, and will gladly adopt obviously superior technologies. We use Word Processing software rather than typewriters, Page Layout software rather than hot-metal typesetting machines, digital camcorders with editing software rather than cutting film, etc.
You may have some other examples in mind, but perhaps the failed innovations were not markedly better than what we already had, and came at greater cost.
One problem with software innovation is that it is so easily copied.
I can at least imagine the originators of a technology going unrewarded, because the public may assume that if a new technology in Product X is worthwhile, it will be added to Product Y at some future date. And so the public waits, and ultimately Microsoft is rewarded.
Many computer science programs across the country are really software engineering programs (undergraduate, not graduate). This is particularly evident where the computer science department is in an engineering school rather than mixed in with the math department. I think these programs should change the degree names, but at least some schools are getting the message that software engineering matters a lot more to most graduates than old-school computer science.
The unfortunate misconception out there is that large-scale programming isn’t different from small-scale programming. This is pure bunk. “The hard part about programming is not making a mess of it” (or something like that). This little snippet of wisdom is both often quoted and often ignored. It all comes down to discipline, which is what a good “software engineering” program teaches. Good software discipline leads to all sorts of necessary, but often tedious, things like detailed requirements and designs, design reviews, code inspections, and process review.
BTW, I don’t agree with all of this about holding software makers accountable. The simple truth is that software will be as bug-free as the customers require. We would all like to sue Microsoft for every time Windows or Word has frozen up on us and eaten a couple of hours of work, BUT we keep buying their products (intentionally ignoring the monopoly aspect here). Making software more reliable costs money, just like adding features. Until people make purchasing decisions based on reliability, we won’t get reliable software.
Industries where reliability really matters (medical, aerospace) get reliable software because of the dire consequences of bugs (lawsuits). However, customers do pay a LARGE premium for that reliability.
The business world is not necessarily interested in the most robust bug-free apps. The shocker for me just out of college (many moons ago!) was the “just get it done now” mentality. The trick for me was to find a balance between the “right way” of doing something and the ASAP business mentality. There are trade-offs in everything.
As has been mentioned before, a common misconception about software development is that it is just like a physical engineering project (building a skyscraper, etc.). In my experience software projects are a lot more squishy & dynamic. User requirements/needs can change dramatically during the course of development (as more things are “learned” and new information becomes available). Severe problems can arise when this type of dynamism is not taken into account.
Another big issue is the disconnect between programmers and their target users. The tendency at any dev firm is to hide the programmer in the background, allowing only “Account Managers” or “Account Liaison Officers” to handle communication and decision making. Unfortunately this results in an increase in miscommunications and misunderstandings. An idea here might be to make the programmer more visible, accountable AND responsible to their clients.
E.
But in reality, most people would be happy to work better and faster, and will gladly adopt obviously superior technologies.
Only if they are a small, well-defined, easily understood improvement over existing technologies.
We use Word Processing software rather than typewriters
But they’ll still stick with QWERTY keyboard layouts. They’ll still waste time manually laying out and formatting pages, when they’d be immensely more productive using a proper typesetting language like LaTeX (or a visual equivalent thereof).
Page Layout software rather than hot-metal typesetting machines, digital camcorders with editing software rather than cutting film, etc.
Yet, they’ll still use UIs encumbered by “real world” paradigms, instead of realizing that an efficient UI for a computer is far removed from the real world.
You may have some other examples in mind, but perhaps the failed innovations were not markedly better than what we already had, and came at greater cost.
Aside from networking, I’d say that there are very few things done in this last decade that weren’t done already, and better, in the previous decade.
I can at least imagine the originators of a technology going unrewarded, because the public may assume that if a new technology in Product X is worthwhile, it will be added to Product Y at some future date. And so the public waits, and ultimately Microsoft is rewarded.
That is true. There is a strong tendency to wait for a market behemoth like Microsoft to “legitimize” a new technology before it becomes widespread. This is inherently harmful, given that Microsoft tends to be rather conservative in introducing new technologies.
Gosling recently went back to Java at Sun after working at Simonyi’s company. If that’s because Simonyi’s project is going nowhere, then too bad.
Merging the design and implementation process – isn’t that what CASE was about? Automatic code generation from the design? I think Rational Rose (now IBM’s) allows this, but only to a point, after which hand-editing is still required.
It’s like programming is in need of a quantum leap. Like what the mouse was for the UI. After 40-50 years, it’s now at OOP. But it has all been refinements, nothing fundamentally new. And programming is still a task of a lot of typing. “if” and “else” must be the most prevalent words. And they take 6 key presses every time. It’s like programming needs to get out of the straitjacket of using the human language.
But if the day comes that the designer can program by manipulating some boxes on the screen – like a car designer models a car on the screen – or by stepping through a wizard or whatever, then a lot of programmers will be doing design or looking for something else to do.
Gates is not right about programmers not needing a degree. I think it confirms that he is not a scientist or engineer but a businessman. He is not taking the discipline seriously.
Gates is exactly right about not needing a computer science degree. The most important requirements for a programmer are good problem-solving skills and the ability to switch from a detailed view to a high-level view of some code at a moment’s notice. And for those too young to remember, or who never read Programmers at Work: Gates was a very good assembly language programmer at the time and was very involved in most of the compiler projects for years.
Note that in computing science, much academic input was so disconnected from ‘real world needs’ that it failed.
Examples? Because I’ve seen lots of examples of the opposite. Consider some of the current big fields of study:
– Formally provable statically typed languages (Haskell, Clean)
– Concurrent calculi for highly concurrent languages
The first one is tremendously useful in a networked world. For example, in the wireless world, you’re often subject to FCC regulations that say something along the lines of “thou shalt not transmit at more than X dBm or thou shalt be beheaded.” Make a “trusted kernel” written in Haskell that has full control over the transmit unit, then use formal analysis tools on the code to provide evidence to the FCC that your design is sound. The better your evidence is, the less painful the FCC-approval process is, and the faster you get to market.
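The Haskell-and-proofs part won’t fit in a comment box, but the shape of the idea – one small trusted module that owns the radio and enforces the limit at a single point, so reviewers only need to audit that module – can be sketched in any language. A toy illustration in C# (the class, method, and cap value are all invented for the example, and this shows only the encapsulation, not the formal verification):

// Toy sketch of a "trusted kernel" for transmit power: every write to the
// radio must go through this class, which enforces the regulatory cap.
public sealed class TransmitKernel
{
    private const double MaxPowerDbm = 20.0; // example cap, not a real FCC figure
    private double currentPowerDbm;

    // The only way to change transmit power; the invariant lives here.
    public void SetPower(double requestedDbm)
    {
        if (requestedDbm > MaxPowerDbm)
            throw new System.ArgumentOutOfRangeException(
                nameof(requestedDbm), "Request exceeds the regulatory limit.");
        currentPowerDbm = requestedDbm;
        // ...hand the validated value to the actual hardware here...
    }

    public double CurrentPowerDbm => currentPowerDbm;
}

What the formal-methods version buys on top of this is that the property “no caller can ever exceed the cap” becomes machine-checkable evidence rather than a code-review promise.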
The second one is also tremendously useful. Processors won’t keep scaling to infinitely high clock speeds. They are already 3GHz monsters sucking down 100W of power. At some point, it becomes much more affordable to take several processors and put them in parallel. Or take dozens of machines and put them in parallel on a network. The new Cell processor design from IBM works along those lines. For these machines, multithreading will just not do. It’s too error-prone and fragile. However, concurrent languages could be immensely useful in this domain.
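The concurrent-calculi work is still research, but its mainstream cousin already shows why hand-rolled threads-and-locks don’t scale as a programming model. A hedged C# sketch using the real System.Threading.Tasks.Parallel API (array sizes and values are arbitrary): the loop states what may run in parallel and keeps mutable state worker-private, instead of scattering locks through the code.

using System;
using System.Threading.Tasks;

class DotProduct
{
    static void Main()
    {
        double[] a = new double[1000000];
        double[] b = new double[1000000];
        var rng = new Random(42);
        for (int i = 0; i < a.Length; i++) { a[i] = rng.NextDouble(); b[i] = rng.NextDouble(); }

        double total = 0.0;
        object gate = new object();
        // Each worker accumulates a private partial sum; the lock is taken
        // only once per worker, when the partial results are merged.
        Parallel.For(0, a.Length,
            () => 0.0,                                    // per-worker initial state
            (i, loop, partial) => partial + a[i] * b[i],  // body: no shared writes
            partial => { lock (gate) { total += partial; } });

        Console.WriteLine("dot product = " + total);
    }
}

This is still far short of what a language built on a concurrent calculus promises (races ruled out by construction), but it illustrates the direction: structure the concurrency rather than hand-manage it.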
I agree with the others here who say that ignoring the academic side of things is the wrong way to go. Perhaps programmers don’t need CS degrees, but they certainly should have software engineering degrees.
Getting by without either is being content with limiting oneself to simple scripting-style tasks. Without understanding the basics of programming concepts such as recursion, testing, design, and in some cases complexity, you cannot produce good code. Of course there are some exceptions to this rule, but gifted individuals who become good without degrees basically end up learning the same stuff anyway.
Getting by without either is being content with limiting oneself to simple scripting-style tasks
Better tell that to John Carmack and the numerous other great programmers who don’t have degrees.
but gifted individuals who become good without degrees basically end up learning the same stuff anyway
But usually at a faster pace than in an academic setting.
erktrek said “The business world is not necessarily interested in the most robust bug-free apps. The shocker for me just out of college (many moons ago!) was the “just get it done now” mentality. The trick for me was to find a balance between the “right way” of doing something and the ASAP business mentality. There are trade-offs in everything.”
That’s for sure! I ran into that many times. The boss comes in and asks when it will be ready. I might say something like 8 months to testing if things go well. Then he says “Well, we just put out ads saying it’ll be shipping in 3 months. Get busy!”
They didn’t care that rushing a product that much means you can kiss any quality good-bye. That could be dealt with in an update at some future date… unless we were busy rushing the next piece of junk out the door because the boss had made unrealistic ship dates on that too. Bosses must be programmers too, and must be involved in any project to the point where they know the state of the projects under development. Unfortunately, most bosses need help turning on their computer.
Regarding accountability: while I would certainly love to see some form of it in software, it is almost impossible for an application writer to guarantee anything; almost everything they do eventually depends on the underlying operating system. It’s hard to make any kind of guarantee when you don’t control the foundation being built upon.
Stop undervaluing and belittling the usefulness of a good higher education and schooling. It’s simply just plain stupid to do that, and programmers who do are sawing off the branch they’re sitting on.
Programming and computer work are no different from other fields. Do you think we would have good medical specialists, teachers, policemen, soldiers, bankers, etc. without them having had a good education in their particular field? When you’re sick, just go to anyone who says that he/she is a real expert in medicine and who doesn’t ask too much money for his/her expertise, right…??
Sure, some bright individuals are able to learn some things by themselves, like a few programming languages. But quite often they may lack knowledge of many essential things that a good education could give them. Self-taught experts may not have the time, and often not even the interest, to learn anything but what is interesting to them personally and what is immediately needed in their current work.
So many of these so-called self-taught “experts” may end up programming, for example, shitty user interfaces, because they don’t understand and don’t care to learn stuff like usability design properly. We have already seen that oh-so-many times. Or they don’t care to document their work enough. Or they don’t follow the standards enough, etc. In their self-taught expertise they just don’t care to pay attention to anything but what they themselves see as worth their attention.
Higher education is, and should be, able to give a wide enough and true expertise in some particular field. Of course, that doesn’t mean that degrees are everything, or that self-learning and talent are not valuable too.
Also, valuing professional education highly usually means that the professional field itself is highly valued too.
If more and more people think that it doesn’t really matter too much what education programmers have, employers will start to think that way more too. Think a bit about what that could mean for salary levels, programming work requirements, etc.
Do you really want programming to become a wild work field where education and degrees mean nothing and employers just choose their programmers wherever they get them cheapest? What could that mean for the quality of programming in the long run?
Also, as computing becomes more and more complicated (at least in many fields of programming), higher education in programming becomes more and more important too.
Like I said above, you don’t get good medical doctors by undervaluing their professional medical education. Would you want, for example, to trust your life to some badly programmed medical surgical computer program?
I don’t know what Bill Gates may have meant when he was belittling the meaning of degrees. Maybe he just wants to hire programmers as good and as cheap as possible? But frankly, I haven’t heard too many other smart statements from that man in a long time either.
IMHO the folks who make the best programmers (you can substitute chemists, salesmen, electricians, engineers) are those who have a passion for the field. They enjoy it so much that they will learn regardless of educational background. They will spend many extra hours developing their skills and knowledge of the field.
A formal education is a good way to jump-start the entire learning process. A person is introduced to a wide-scope view of the field that is hard to get through your own efforts (not impossible), as well as to depth of exposure in a good program. This still does not preclude one from doing it without the formal training. However, they face a steeper and longer path to their goal.
I have known, and taught, many people to “program” but only a very few showed the necessary passion and creativity to be good at the craft. (I have also taught chemistry students with the same results). The rest could generate lots of code to get the job done but it was like listening to a monotone rendition of a rhapsody written by someone else.
Let’s face it: good programming style requires more than hard work coding. Some of the better accounting packages have been written by accountants who picked up programming on the side. They used their knowledge of accounting where a programmer, without much domain knowledge, would not understand the depth and breadth of the problem.
This implies that good programmers are also those who are most immersed in the area of the application for which they are developing and have good programming skills. Anyone can produce code, lots of code, that ends up being worthless to the end user.
The biggest mistake that companies make is to separate the programmer from the problem to be solved. IT departments often over regulate the process and then assign programmers to bits of code. Maybe they should take note of the car manufacturers that decided to use teams to build the entire automobile. The job was no longer mundane and the team started innovating ways to improve the process.
How much is that question related to problems that have nothing to do with programming itself? How much could it be related to things like programmers only trying to fulfill demands of others and not having much power to govern their own work?
Maybe these things also have quite a lot to do with the popularity of the free software and open source movements? In the free software world, programmers often can and do govern what they do, maybe even 100%. In the commercial, closed-source world, on the other hand, programmers often just code pieces of work that they otherwise have no real control over.
Maybe programmers should unite more, join forces, form unions, etc., so that not only employers like corporate bosses write the rules that govern what programming is about, but programmers themselves can also take part in writing the rules that govern their own daily work.
What is it with people thinking that higher education will give them good job opportunities by default? Total nonsense. In fact, there are already too many individuals with degrees. Period.
Take me for example. I have a B.Sc., an M.Sc., and soon a Ph.D. in Systems and Control Theory. Cool? Well, you are in for a bummer.
I have been applying for jobs in industry and none of my “thinking skills” are really required. There is not one employer who is looking for proofs whether my vector fields are in involution, or whether I could apply operators on Hilbert spaces to solve their problems…
I’ll need to take a job as a taxi driver for some time, just to finish my dissertation, and I will go back to academia as soon as possible. Industry is way too practical and does not need esoteric concepts and paradigms to get its problems solved.
At the end of the day, I just should have listened to my dad. He learned some COBOL programming while he was in the army. He has no formal education but has numerous programming languages under his belt by just working with them. There is too much work for people like him.
Sweet dreams, but nothing beats learning by doing. It took me three degrees to realise this.
I have been applying for jobs in industry and none of my “thinking skills” are really required. There is not one employer who is looking for proofs whether my vector fields are in involution
Don’t you think that there might be something wrong with the software industry and its working arrangements too? If they don’t value good education, and if, for example, they just want to hire people for as cheap as possible?
Do you think the situation is the same in other, more established and older fields? No. Do you think that, for example, hospitals hire people with little education as long as they know a few special skills and are a cheap enough workforce? No, and it shouldn’t be so in the computer industry either.
Think about this.
What do you programmers want to be: highly educated, highly paid, highly appreciated professionals who have a say in what the work conditions, work requirements and programming work itself are like? Or just corporate “slaves” (though maybe even for a reasonable salary) who only do what they are told, try to keep up with ridiculous schedules and requirements, even if it means bad, buggy software, and without much power to govern what their work is like?
Time for some bigger changes in the software industry?
Sorry for too many comments above and also for all too many typos in my previous postings…
Anyway, if programming stinks, it may well be that it is mostly up to programmers themselves (and their will) to change the things that stink. First it is necessary to think about what the problems really are (probably usually not so much about which programming languages are used, etc.), and then, by joining forces, try to make the necessary changes in the industry.
Programming is certainly one of the most critical work fields today. Programmers could make a big change not only in their own work conditions but also in the whole industry if they really want and decide to.
“Programming is certainly one of the most critical work fields today. Programmers could make a big change not only in their own work conditions but also in the whole industry if they really want and decide to.”
Q: “Take me for example. I have a B.Sc. an M.Sc. and soon a Ph.D. in Systems and Control Theory. Cool? Well, are you in for a bummer. … not one employer who is looking for proofs…”
Sounds familiar. This seems to affect a lot of PhD students.
I only have a BSc in Computer Science / Software Engineering, but it has served me well in getting two jobs now. All of the firms I applied to wanted a Bachelor’s-level degree in something, but it didn’t necessarily have to be computing. However, computing training to BSc or MSc level was favoured.
I also know several people who have training to PhD level, in roughly three categories:
Group 1: They get their PhD, get a job, but don’t use any of the content from their PhD (and frequently their MSc).
Group 2: Carry on into research forever, where the PhD is useful.
Group 3: Struggle to find work due to being overqualified (bizarrely), or fail to find a job that will let them use anything related to their studies.
This seems to be the way of things.
Strangely, despite the presence of Group 3, there are many companies (such as the one I’m at currently) that will take people from non-computing degrees and train them in programming, provided they already have a minimal skill set. For instance, we’ve taken Maths and Physics PhD students with minimal Visual Basic skills and turned them into VC++ / Win32 API / COM programmers.
This does also have a happy flip side for those of us with computing training if, for instance, your previous job dealt mostly with proprietary technologies and has left 18 months of dead ground on your CV. In a sense, you can cross-train back into your own profession.
Minor “typo”. I realise Win32 is proprietary, it just happens to be proprietary in a very widespread way, instead of proprietary in a “only one company uses it” kind of way. Which was an improvement for me.
“It’s like programming is in need of a quantum leap. Like what the mouse was for the UI. After 40-50 years, it’s now at OOP. But it has all been refinements, nothing fundamentally new. And programming is still a task of a lot of typing. “if” and “else” must be the most prevalent words. And they take 6 key presses every time. It’s like programming needs to get out of the straitjacket of using the human language.”
Maybe the problem with programming is too much complexity.
Strangely, despite the presence of Group 3, there are many companies (such as the one I’m at currently) that will take people from non-computing degrees and train them in programming, provided they already have a minimal skill set. For instance, we’ve taken Maths and Physics PhD students with minimal Visual Basic skills and turned them into VC++ / Win32 API / COM programmers.
If I have to choose a different profession, I would certainly choose computing. The problem, though, is that I’m slightly older than the BSc and MSc crowd due to my three years of additional PhD research. This may put off some of those employers from taking me on for such training programs.
Now, as a PhD student, I have minimal coding skills. For example, for Computational Fluid Dynamics (CFD) I needed to code in FORTRAN 77 to solve certain boundary-layer convection problems (finite element/difference). This does not require heavy coding skills, though, because one merely uses simple coding constructs; no esoteric design patterns are necessary to solve the numerical problems we are dealing with. That is to say, OO is overkill and a procedural language does the job: FORTRAN is pretty straightforward and easy to code in, but C++ is a completely different animal and too convoluted for dealing with numerical mathematics. Perhaps I should give C++ a try, but the learning curve is so steep and I won’t get any actual research done…
At any rate, one could say that having a degree with minimal coding skills may show an aptitude for computing. But this does not put us in the same position as the fully qualified programmer. It seems that if I convert to a coding-related profession, I will never reach the same quality as the formally trained CS coder. It is like learning to play the piano as an adult: you will never reach the level of those who started early.
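To make the “simple constructs suffice” point concrete, here is a minimal sketch of the kind of loop such numerical codes are built from: an explicit finite-difference time step for 1-D heat conduction (a stand-in for the boundary-layer solvers mentioned above; the grid size, coefficient, and step count are illustrative). It’s written in C# to match the other examples in this thread, but the FORTRAN 77 version is essentially the same pair of loops:

using System;

class HeatExplicit
{
    static void Main()
    {
        // 1-D heat equation u_t = alpha * u_xx on [0,1], explicit Euler in time.
        int n = 101;                        // grid points (illustrative)
        double dx = 1.0 / (n - 1);
        double alpha = 1.0;
        double dt = 0.4 * dx * dx / alpha;  // stable: dt <= 0.5 * dx^2 / alpha

        double[] u = new double[n];
        double[] uNew = new double[n];
        for (int i = 0; i < n; i++)
            u[i] = Math.Sin(Math.PI * i * dx);   // initial temperature profile

        for (int step = 0; step < 1000; step++)
        {
            for (int i = 1; i < n - 1; i++)      // update interior points
                uNew[i] = u[i] + alpha * dt / (dx * dx)
                                       * (u[i + 1] - 2.0 * u[i] + u[i - 1]);
            uNew[0] = 0.0;                       // fixed boundary values
            uNew[n - 1] = 0.0;
            var tmp = u; u = uNew; uNew = tmp;   // swap time levels
        }

        Console.WriteLine("u at midpoint after 1000 steps: " + u[n / 2]);
    }
}

Arrays, two nested loops, and an update formula – no objects in sight, which is exactly the point about procedural languages being a good fit for this kind of work.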
Q, perhaps you are approaching your job interviews in the wrong way. If a company is looking for a “skilled coder who can start tomorrow” and you come along and say “Hi, I’ve done some stuff in FORTRAN77”, you probably won’t get the job. You probably don’t really want that kind of job, looking at your resume.
There are plenty of companies simply looking for outstanding smart people. Since you apparently are one of those, I wouldn’t start worrying as of yet.
Q, perhaps you are approaching your job interviews in the wrong way. If a company is looking for a “skilled coder who can start tomorrow” and you come along and say “Hi, I’ve done some stuff in FORTRAN77”, you probably won’t get the job. You probably don’t really want that kind of job, looking at your resume.
There are plenty of companies simply looking for outstanding smart people. Since you apparently are one of those, I wouldn’t start worrying as of yet.
You have a point. FORTRAN 77 is not that marketable compared to the heavyweight C/C++, Java, Python and Perl coders out there. Is FORTRAN being used in industry at all? Or are we talking about the nostalgic sentiments of a stuck-in-the-sixties professor forcing his students to relive the past?
I’m curious what the future will bring and where my skills can be put to use outside academia. Since I cannot market my work experience, which I don’t have, I need to market myself as someone “smart” who is willing to take on any challenge. Happy bidding, folks…
It’s taken centuries of religious, philosophical, mathematical and political questioning, and of course of material trial and error, before today’s world was possible.
Programming is probably evolving as fast as we can cope, both in terms of doing it and in terms of learning to use the software.
It took a lot of adjusting just to get used to having phones everywhere after the device was invented. And phones are really simple.
The “lack of innovation” comment was interesting. Open source being innovative would kill it, from the perspective of getting people to actually use the software.
The continued success of what is popular today, and the monumental failure of those wonderful technologies that were completely ahead of their time, have proven to me that people do not want innovation. They do not want to do their work better and faster than the thought possible. Rather, they want to be gently lead down a well-traveled path by hype-mongerers and marketing departments, never taking more than baby-steps at a time, for fear of falling and scraping their shins.
“Programming must change — but how? At a reunion of coding pioneers, answers abound.”
I predict this article will soon have a lot of posts.
Maybe programming can learn something from engineering about accountability? A building is built and there’s an independent agency to review it. A plane crashes and there’s a complete review of how, and why then edicts are declared to all the makers. And hopefull the next generation of planes are better.
Also I think that our software is brittle because our hardware is brittle.
today’s programming indeed seems to be very brittle as no vendor is ready to gurantee a bug free program!. You pay them and still don’t get warranty that you wont be attacked or your system won’t do malacious things which is a pity considering the money those software companies make
“Gates went on to say that young programmers don’t need computer science degrees: “The best way to prepare is to write programs, and to study great programs that other people have written.”
Stupid, stupid, stupid. That’s a sure-fire way to make bad programmers. I’d make a snarky Windows joke, but we all know Bill hasn’t coded in decades…
Accountability are certainly good to increase the reliability of large software projects, but not small-medium projects nor shrink-wrap software (e.g. Microsoft’s products). It will also dramatically increase cost.
Which part? Not getting a CS degree, or studying other people’s codes? Or both?
The not getting a CS degree. There is a whole lot more to being a good programmer than coding.
Check that. Not necessarily the “not getting a CS degree” part, but the overall show of disrespect for the academic side of programming. Such an attitude leads to a constant reinventing of the wheel, which is not a good thing.
The Computer Science degree is the wrong one to get if you want to be a career programmer. CS degrees are for computer scientists, who don’t have to worry about writing secure or user-friendly software (unless that is their specialty), or about delivering software within time and budget constraints.
No, the right degree for most programmers is a Bachelor’s or Master’s in Software Engineering (BSSE/MSSE). Read the article Software Engineering, Not Computer Science by Steve McConnell (http://www.stevemcconnell.com/SeIsNotCs.pdf) or the article it is based on, Software Engineering Programms are not Computer Science Programms by David L. Parnas (http://portal.acm.org/citation.cfm?id=314615&dl=ACM&coll=portal) for more arguments against CS degrees.
While it’s true that you can be a good programmer without either degree, both will make you a better programmer than you would be without one.
OSS can and does solve many of those problems. IMHO the problem is that every company basicly reinvents the wheel many times. Also there is the ida of ‘many eyes’. It is common in oss to see bug reports in bugzilla of a user that ran the program through vaingrad or a different set of error finding tools.
The problem is that OSS is still very immature. Currently they have to ‘copy’ interfaces or behavior just to get used. It is very hard to innovate when most of the world hasn’t ‘innovated’ yet even over to OSS. True there are small time innovations (such as the amazing O(1) scheduler in linux 2.6, or some of the good UI that goes into gnome and kde), but they arn’t revolutionary.
The fact is that people want things that ‘work’, so unless you innovation makes people work better, people wont use it.
Subscribe or watch an ad – Don’t you just love bookmarks? Next!
Peter Norvig, (Director of search quality at google) has a nice article over programming:
http://www.norvig.com/21-days.html
except:
”
If you want, put in four years at a college (or more at a graduate school). This will give you access to some jobs that require credentials, and it will give you a deeper understanding of the field, but if you don’t enjoy school, you can (with some dedication) get similar experience on the job. In any case, book learning alone won’t be enough. “Computer science education cannot make anybody an expert programmer any more than studying brushes and pigment can make somebody an expert painter” says Eric Raymond, author of The New Hacker’s Dictionary. One of the best programmers I ever hired had only a High School degree; he’s produced a lot of great software, has his own news group, and through stock options is no doubt much richer than I’ll ever be.
“
True there are small time innovations (such as the amazing O(1) scheduler in linux 2.6
That wasn’t even remotely innovative, such technology was present in SunOS long ago. KDE and Gnome likewise have yet to do anything that hasn’t been seen before elsewhere.
Then what *is* revolutionary? Everything in computer land is just a (small) evolution of what others have already done before.
Innovation is overrated. Nobody truly innovates in computer land. Everything is built upon old stuff.
Innovation is overrated. Nobody truly innovates in computer land. Everything is built upon old stuff.
yep yep. innavation is just a marketing term these days.
> Such an attitude [disrespect for the academic side of programming] leads to a constant reinventing of the wheel, which is not a good thing.
Note that in computing science, many academic input were so disconnected from ‘real world needs’ that they failed..
So I’d say that academic disrespect for real world computer programmer induce a disrespect for the academic side of programming..
Innovation is overrated.
I don’t think innovation is overrated, but the term is ridiculously overused (especially by Mac fans) to the point that the word has become debased and quite meaningless (the same goes for the word “intuitive”). By the way, just because something isn’t “innovative” doesn’t mean it can’t be well-designed, easy-to-use, pleasant in apperance or performance etc 🙂
as someone who has a computer science (w/ business studies) degree. I vaguely remember doing java and C++ ( I can’t code to save my life). But what I do remember from Software engineering courses is that one of the problems with programming was actually the rate of advance in the industry (and not the reverse!).
Basiclly, new languages and technologies means that no language or technology is ever exploited to its full potential as they are always superceded by newer things which effectively start from zero. I.e the “innovation” in programming necessetates a constant re-invention of the wheel.
All opinions are mine unless otherwise stated. Terms and conditions apply. Offer subject to status and limitted availibilty.
Tuz
Because software is such a broad field, convering all others in some way, it’s probably best to keep that in mind.
For medical and control software that could actually kill or injure a person, the software makers must be accountable. But productivity software and games? I don’t think so. Operating systems is a gray area, it depends on what situation you use them in.
I personally don’t care much about innovation in the current software scene. What is more useful to all consumners is more reliable and easy-to-use (intuitive), to paraphrase AMH, software. Innovation will truly come from using AI more often in software, as well as newer techologies that will make the current desktop paradigm seem old-fashioned.
I also agree with lots of the other posts before me that many of these words are overused, probably to create more hype around their products. My guess is that these kinds of terms are easier to sell than detailed technical merits/features that spans a few (hundred) pages and few people understand. I would also like to add I actually haven’t read the article because I wasn’t linked to it even after selecting that *stupid* one-day pass nonsense.
I don’t think innovation is overrated, but the term is ridiculously overused (especially by Mac fans)
Hm. Most of the time I hear the word innovative it’s from Gates or one of his lawyer minions explaining why they should be allowed to crush the competition.
In some ways an engineering education may be more useful if it was structured like an MBA. Give the programmer 2 years of CS education – learning languages, data structures, etc… Then let them work for a couple of years. After which, they could go back to school for a year or two to study how exactly projects should be run. I would’ve been better equipped to understand what was being taught (and probably remembered more) in my Software Engineering classes if I had some work experience under my belt.
The problems with programmin will never disappear.
Nor will the problem with stupid bosses disappear.
The only thing that will remain is the will and knowledge by
individuals to do and produce, and that will always feel at home, and content, with OSS
It is hard to beat, and so what.
It is disruptive, and so what.
I’ve seen this right throughout the computer world (this is a view from an insider). The inability by those in IT to step out of the IT focused roll and look at the situation from the end users perspective. The ability to understand that the end user doesn’t *care* about the details. The software/hardware combination is like a hammer.
The employee/end user use the software and hardware to accomplish a task, the technology aspects they are not interested in, if they were, you would be out of a job and they would simply purchase a barebones piece of software and customise it from the ground up into something unique. They pay a premium for the convience you bring by developing software to meet their needs.
The most common thing I hear from customers is not the occasional crash. Most people are able to shrug it off and say, “oh well, Murpheys Law”. That does get people peeved is when they follow the instructions in a manual or a help file and something goes wrong, or worse still, the language in the help file doesn’t explain the problem and solution in a clear and precise manor.
Example of this; I was writing a Macro for Excel, I needed help with a function, the information in the help file was completely bloody useless. Here is another example. I was teaching myself C# to see what it was like. I wanted to open a new window requesting the user information. Using VB the function (IIRC) is:
dim strInputMe as string
stInputMe = InputBox(“Please enter some foo for bah please!”)
In the VB world, that would simply grab the input, throw it into a variable for me to meerily use some where else. Now, this is where the fun starts. I looked through the MSDN library. On would assume that if they were trying to migrate VB’ers to C#, wouldn’t be great to have a reference saying, “InputBox is a VB function call, HOWEVER, click here for the steps required to accomplish the equivilant result in C#”, but there wasn’t. This was then made worse whe I questioned the so-called “Microsoft C# guru” at the Canberra HQ who was clueless to the InputBox() function in VB, let alone simulating the same sort of results using a C# call.
Basically, new languages and technologies means that no language or technology is ever exploited to its full potential as they are always superceded by newer things which effectively start from zero. I.e the “innovation” in programming necessetates a constant re-invention of the wheel.
Are you saying that C or any assembler languages haven’t been exploited to anywhere near their full potential?
I think your statement is a sweeping generalisation that is not applicable to plenty of languages.
Couldn’t agree more. C should be taught on the street corner where it belongs, like sex. I agreed with Stallman and Gates on something and now I have to go change. Really agree about doing small devices is a lot like the good old days.
UNIX is the ultimate programmers creative medium, why would we build something else? It’s instant gratification.
No offense to the panel but it’s a lot easier to innovate when the world is a clean slate. Now a days it’s a tough time to find a clean spot of wall that hasn’t been done several times over.
The continued success of what is popular today, and the monumental failure of those wonderful technologies that were completely ahead of their time, have proven to me that people do not want innovation. They do not want to do their work better and faster than the[y] thought possible. Rather, they want to be gently lead down a well-traveled path by hype-mongerers and marketing departments, never taking more than baby-steps at a time, for fear of falling and scraping their shins.
Very cynical; a trait I generally share.
But in reality; most people would be happy to work better and faster, and will gladly adopt obviously superior technologies. We use Word Processing software rather than typewriters, Page Layout software rather than hot-metal typesetting machines, digital camcorders with editing software rather than cutting film, etc.
You may have some other examples in mind, but perhaps the failed innovators were not markedly better than what we already have, and at greater cost.
One problem with software innovation, is that it is so easily copied
I can at least imagine the originators of a technology going unrewarded, because the public may assume that if a new technology in Product X is worthwhile, it will be added to Product Y also at some future date. And so the public waits, and ultimately Microsoft is rewarded
Many computer science programs across the country are really software engineering programs (undergraduate, not graduate). This is particularly evident where the computer science department is in an engineering school rather than mixed in with the math department. I think these programs should change the degree names, but at least some schools are getting the message that software engineering matter a lot more to most graduates that old-school computer science.
The unforutunate conception out there is that large scale programming isn’t different from small scale programming. This is pure bunk. “The hard part about programming is not making a mess of it” (or something like that) This little snippet of wisdom is both often quoted and often ignored. It all comes down to discipline, which is what a good “software engineering” program teaches. Good software discipline leads to all sorts of necessary, but often tedious, things like detailed requirements and designs, design reviews, code inspections, and process review.
BTW, I don’t agree with all of this about holding software makers accountable. The simple truth is that software will be as bug-free as the customers require. We would all like to sue Microsoft for every time Windows or Word have frozen up on us and eaten a couple hours of work, BUT we keep buying their products (intentionally ignoring the monopoly aspect here). Making software more reliable costs money, just like adding features. Until people make purchasing decisions based on reliability, we won’t get reliable software.
Industries where reliability really matters (medical, aerospace) get reliable sofware because of the dore consequences of bugs (lawsuits). However, customers do pay a LARGE premium for that reliability.
The business world is not necessarily interested in the most robust bug-free apps. The shocker for me just out of college (many moons ago!) was the “just get it done now” mentality. The trick for me was to find a balance between the “right way” of doing something and the ASAP business mentality. There are trade-offs in everything.
As been mentioned before a common misconception about software development is that it is just like a physical engineering project (building a skyscraper etc). In my experience software projects are alot more squishy & dynamic. User Requirements/Needs can change dramatically during the course of development (as more things are “learned” and new information becomes available). Severe problems can arise when this type of dynamism is not taken into account.
Another big issue is the disconnect between programmers and their target users. The tendancy for any dev firm is to hide the programmer in the background allowing only “Account Managers” or “Account Liason Officers” to handle communication and decision making. Unfortunately this results in an increase in miscommunications and misunderstandings. An idea here might be to make a programmer more visible, accountable AND responsible to their clients.
E.
But in reality; most people would be happy to work better and faster, and will gladly adopt obviously superior technologies.
Only if they are a small, well-defined, easily understood improvement over existing technologies.
We use Word Processing software rather than typewriters
But they’ll still stick with QWERTY keyboard layouts. They’ll still waste time manually laying-out and formatting pages, when they’d be immensely more productive using a proper typesetting language like LaTeX (or a visual equivilent thereof).
Page Layout software rather than hot-metal typesetting machines, digital camcorders with editing software rather than cutting film, etc.
Yet, they’ll still use UIs encumbered by “real world” paradigms, instead of realizing that an efficient UI for a computer is far removed from the real world.
You may have some other examples in mind, but perhaps the failed innovators were not markedly better than what we already have, and at greater cost.
Aside from in networking, I’d say that there are very few things that have been done in this last decade that weren’t done already, and better in the previous decade.
I can at least imagine the originators of a technology going unrewarded, because the public may assume that if a new technology in Product X is worthwhile, it will be added to Product Y also at some future date. And so the public waits, and ultimately Microsoft is rewarded
That is true. There is a strong tendency to wait for a market behemoth like Microsoft to “legitimize” a new technology before it becomes widespread. This is inherently harmful, given that Microsoft tends to be rather conservative in introducing new technologies.
Gosling recently went back to Java at SUN after working at Simonyi’s company. If because Simonyi’s project is going nowhere then too bad.
Merging the design and implementation process – isn’t that what CASE was about? Automatic code generation from the design? I think Rational Rose (now IBM) allows this but only to a point after which hand editing is still required.
It’s like programming is in need of a quantum leap. Like what the mouse was for the UI. After 40-50 years, it’s now at OOP. But it has all been refinement’s, nothing fundamentally new. And programming is still a task of a lot of typing. “if” and “else” must be the most prevalent words. And they take 6 key presses every time. It’s like programming needs to get out of the straight jacket of using the human language.
But if the day comes that the designer can program by manipulating some boxes on the screen – like a car designer models a car on the screen – or by stepping through a wizard or whatever, then a lot of programmers will be doing design or looking for something else to do.
Gates is not right about programmers not needing a degree. I think it confirms that he is not a scientist or engineer but a businessman. He is not taking the discipline seriously.
Gates is exactly right about not needing a computer science degree. The most important requirements for a programmer are good problem-solving skills and the ability to switch from a detailed view to a high-level view of some code at a moment’s notice. And for those who are too young to remember or never read Programmers at Work, Gates was a very good assembly language programmer at the time and was closely involved in most of the compiler projects for years.
Note that in computing science, much academic input was so disconnected from ‘real world needs’ that it failed.
Examples? Because I’ve seen lots of examples of the opposite. Consider some of the current big fields of study:
– Formally provable statically typed languages (Haskell, Clean)
– Concurrent calculi for highly concurrent languages
The first one is tremendously useful in a networked world. For example, in the wireless world, you’re often subject to FCC regulations that say something along the lines of “thou shalt not transmit at more than X dBm or thou shalt be beheaded.” Make a “trusted kernel” written in Haskell that has full control over the transmit unit, then use formal analysis tools on the code to provide evidence to the FCC that your design is sound. The better your evidence is, the less painful the FCC approval process is, and the faster you get to market.
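To make the idea concrete, here’s a minimal sketch in Haskell of what such a trusted kernel could look like. Everything in it (the module name, the fccLimit value, the transmit stub) is hypothetical; a real system would wrap the actual radio driver and layer formal analysis on top. The point is just that even the type system alone can guarantee that code outside this module cannot construct an over-limit power value:

    -- TxKernel.hs: hypothetical "trusted kernel" for the transmit unit.
    -- The TxPower data constructor is deliberately not exported, so
    -- mkTxPower is the only way for outside code to obtain a TxPower,
    -- and mkTxPower enforces the regulatory cap.
    module TxKernel (TxPower, mkTxPower, transmit) where

    fccLimit :: Double          -- regulatory cap in dBm (made-up value)
    fccLimit = 30.0

    newtype TxPower = TxPower Double   -- constructor NOT exported

    mkTxPower :: Double -> Maybe TxPower
    mkTxPower dbm
      | dbm <= fccLimit = Just (TxPower dbm)
      | otherwise       = Nothing      -- over the limit: refuse to build one

    transmit :: TxPower -> IO ()
    transmit (TxPower dbm) =
      putStrLn ("TX at " ++ show dbm ++ " dBm")  -- stand-in for the radio driver

That single checked entry point is the thing you would show the regulator: everything else in the program provably goes through it.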
The second one is also tremendously useful. Processors won’t keep scaling to infinitely high clock speeds; they are already 3GHz monsters sucking down 100W of power. At some point it becomes much more affordable to take several processors and put them in parallel, or to take dozens of machines and put them in parallel on a network. The new Cell processor design from IBM works along those lines. For these machines, multithreading just will not do: it’s too error-prone and fragile. Concurrent languages, however, could be immensely useful in this domain.
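As a taste of what a concurrent language buys you over raw threads and locks, here’s a small sketch using Haskell’s software transactional memory (the stm library). The bank-transfer example is the standard textbook illustration, not anything from the article; the accounts and amounts are made up:

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
    import Control.Concurrent.STM
    import Control.Monad (forM_, replicateM)

    -- Move 'amount' between two shared accounts atomically.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      balance <- readTVar from
      check (balance >= amount)          -- block (retry) until funds exist
      writeTVar from (balance - amount)
      modifyTVar' to (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      dones <- replicateM 10 newEmptyMVar
      forM_ dones $ \done -> forkIO $ do  -- ten concurrent transfers
        atomically (transfer a b 10)
        putMVar done ()
      mapM_ takeMVar dones                -- wait for every worker
      final <- atomically ((,) <$> readTVar a <*> readTVar b)
      print final                         -- always (0,100): no lost updates

There are no locks to acquire in the right order and none to forget: each composed transaction either commits atomically or retries, so the balances come out right no matter how the ten threads interleave.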
I agree with the others here who say that ignoring the academic side of things is the wrong way to go. Perhaps programmers don’t need CS degrees, but they certainly should have software engineering degrees.
Getting by without either is being content with limiting oneself to simple scripting style tasks. Without understanding the basics of programming concepts such as recursion, testing, design, and in some cases complexity, you cannot produce good code. Of course there are some exceptions to this rule, but gifted individuals who become good without degrees basically end up learning the same stuff anyways.
Getting by without either is being content with limiting oneself to simple scripting style tasks
Better tell that to John Carmack and the numerous other great programmers who don’t have degrees.
but gifted individuals who become good without degrees basically end up learning the same stuff anyways
But usually at a faster pace than in an academic setting.
erktrek said “The business world is not necessarily interested in the most robust bug-free apps. The shocker for me just out of college (many moons ago!) was the “just get it done now” mentality. The trick for me was to find a balance between the “right way” of doing something and the ASAP business mentality. There are trade-offs in everything.”
That’s for sure! I ran into that many times. The boss comes in and asks when it will be ready. I might say something like 8 months to testing if things go well. Then he says “Well, we just put out ads saying it’ll be shipping in 3 months. Get busy!”
They didn’t care that rushing a product that much means you can kiss any quality good-bye. That could be dealt with in an update at some future date… unless we were busy rushing the next piece of junk out the door because the boss had set unrealistic ship dates on that too. Bosses must be programmers too, and must be involved in any project to the point where they know the state of everything under development. Unfortunately, most bosses need help turning on their computer.
Regarding accountability: while I would certainly love to see some form of it in software, it is almost impossible for an application writer to guarantee anything, since almost everything they do ultimately depends on the underlying operating system. It’s hard to make any kind of guarantee when you don’t control the foundation being built upon.
Stop undervaluing and belittling the usefulness of a good higher education and schooling. It’s simply plain stupid to do that, and programmers who do it are sawing off the branch they’re sitting on.
Programming and computer work are no different from other fields. Do you think we would have good medical specialists, teachers, policemen, soldiers, bankers, etc. without them having had a good education in their particular fields? When you’re sick, you just go to anyone who claims to be a real expert on medicine and doesn’t ask too much money for the expertise, right…??
Sure, some bright individuals are able to learn some things by themselves, like a few programming languages. But quite often they lack the knowledge of many essential things that a good education could give them. Self-taught experts may not have the time, and often not even the interest, to learn anything but what interests them personally and what is immediately needed in their current work.
So many of these so-called self-taught “experts” end up programming, for example, shitty user interfaces, because they don’t understand, and don’t care to learn, things like usability design properly. We have already seen that oh-so many times. Or they don’t care to document their work enough. Or they don’t follow the standards enough, etc. In their self-taught expertise they just don’t pay attention to anything but what they themselves deem worth their attention.
Higher education is, and should be, able to give a sufficiently broad and genuine expertise in a particular field. Of course, that doesn’t mean degrees are everything, or that self-learning and talent are not valuable too.
Also, when a professional education is highly valued, that professional field is usually highly valued as well.
If more and more people think it doesn’t really matter what education programmers have, employers will start to think that way too. Think a bit about what that could mean for salary levels, work requirements, etc.
Do you really want programming to become a wild field where degrees mean nothing and employers just pick their programmers wherever they can get them cheapest? What would that mean for the quality of programming in the long run?
Also, as computing becomes more and more complicated (at least in many fields of programming), higher education in programming becomes more and more important too.
Like I said above, you don’t get good medical doctors by undervaluing their professional medical education. Would you want, for example, to trust your life to some badly programmed medical surgical computer program?
I don’t know what Bill Gates may have meant when he was belittling the meaning of degrees. Maybe he just wants to hire programmers as good and as cheap as possible? But frankly, I haven’t heard too many other smart statements from that man for a long time either.
IMHO the folks who make the best programmers (you can substitute chemists, salesmen, electricians, engineers) are those who have a passion for the field. They enjoy it so much that they will learn regardless of educational background. They will spend many extra hours developing their skills and knowledge of the field.
A formal education is a good way to jump-start the entire learning process. A person is introduced to a wide-scope view of the field that is hard (though not impossible) to get through your own efforts, as well as to depth of exposure in a good program. This still does not preclude one from doing it without the formal training. However, they face a steeper and longer path to their goal.
I have known, and taught, many people to “program” but only a very few showed the necessary passion and creativity to be good at the craft. (I have also taught chemistry students with the same results). The rest could generate lots of code to get the job done but it was like listening to a monotone rendition of a rhapsody written by someone else.
Let’s face it: good programming style requires more than hard work at coding. Some of the better accounting packages have been written by accountants who picked up programming on the side. They used their knowledge of accounting where a programmer without that domain knowledge would not understand the depth and breadth of the problem.
This implies that the best programmers are those who are immersed in the application area they are developing for and also have good programming skills. Anyone can produce code, lots of code, that ends up being worthless to the end user.
The biggest mistake that companies make is to separate the programmer from the problem to be solved. IT departments often over regulate the process and then assign programmers to bits of code. Maybe they should take note of the car manufacturers that decided to use teams to build the entire automobile. The job was no longer mundane and the team started innovating ways to improve the process.
Thanks for letting me sound off.
“Why Writing Software Still Stinks?”
How much is that question related to problems that have nothing to do with programming itself? How much could it be related to things like programmers only trying to fulfill demands of others and not having much power to govern their own work?
Maybe these things also have quite a lot to do with the popularity of the free software and open source movements? In the free software world programmers often can and do govern what they do, maybe even 100%. In the commercial, closed-source world, on the other hand, programmers often just code pieces of work that they otherwise have no real control over.
Maybe programmers should unite more, join forces, form unions, etc., so that not only employers like corporate bosses write the rules that govern what programming is about, but programmers themselves can also take part in writing the rules that govern their own daily work.
What is it with people thinking that higher education will give them good job opportunities by default? Total nonsense. In fact, there are already too many individuals with degrees. Period.
Take me for example. I have a B.Sc., an M.Sc., and soon a Ph.D. in Systems and Control Theory. Cool? Well, are you in for a bummer.
I have been applying for jobs in industry and none of my “thinking skills” are really required. There is not one employer looking for proofs of whether my vector fields are in involution, or whether I can apply operators on Hilbert spaces to solve their problems…
I’ll need to take a job as a taxi driver for a while, just to finish my dissertation, and I will go back to academia as soon as possible. Industry is way too practical and does not need esoteric concepts and paradigms to get its problems solved.
At the end of the day, I just should have listened to my dad. He learned some COBOL programming while he was in the army. He has no formal education but has numerous programming languages under his belt just from working with them. There is plenty of work for people like him.
Sweet dreams, but nothing beats learning by doing. It took me three degrees to realise this.
Industry is way too practical and does not need esoteric concepts and paradigms to get its problems solved.
Rather, industry is unpractical: it cannot see why these concepts and paradigms would help solve its problems.
/frustrated with software industry
I have been applying for jobs in industry and none of my “thinking skills” are really required. There is not one employer looking for proofs of whether my vector fields are in involution
Don’t you think that there might be something wrong with the software industry and its working arrangements too? If they don’t value good education, and if they, for example, just want to hire people as cheaply as possible?
Do you think the situation is the same in other, more established and older fields? No. Do you think that, for example, hospitals hire people with little education as long as they know a few special skills and are a cheap enough workforce? No, and it shouldn’t be that way in the computer industry either.
Think about this.
What do you programmers want to be: highly educated, highly paid, highly appreciated professionals who have a say in what the work conditions, the work requirements, and the programming work itself are like? Or just corporate “slaves” (though maybe for a reasonable salary) who only do what they are told, try to keep up with ridiculous schedules and requirements even if it means bad, buggy software, and have little power to govern what their work is like?
Time for some bigger changes in the software industry?
Sorry for the many comments above, and also for the all-too-many typos in my previous postings…
Anyway, if programming stinks, it may well be mostly up to programmers themselves (and their will) to change the things that stink. First it is necessary to think about what the problems really are (probably not so much the programming languages used, etc.), and then, by joining forces, to try to make the necessary changes in the industry.
Programming is certainly one of the most critical work fields today. Programmers could make a big change not only in their own work conditions but also in the whole industry if they really want and decide to.
“Programming is certainly one of the most critical work fields today. Programmers could make a big change not only in their own work conditions but also in the whole industry if they really want and decide to.”
Pre- or post-outsourcing?
Q: “Take me for example. I have a B.Sc. an M.Sc. and soon a Ph.D. in Systems and Control Theory. Cool? Well, are you in for a bummer. … not one employer who is looking for proofs…”
Sounds familiar. This seems to affect a lot of PhD students.
I only have a BSc in Computer Science / Software Engineering, but it has served me well in getting two jobs now. All of the firms I applied to wanted a Bachelor’s-level degree in something, though it didn’t necessarily have to be computing. However, computing training to BSc or MSc level was favoured.
I also know several people who have training to PhD level, in roughly three categories:
Group 1: They get their PhD, get a job, but don’t use any of the content from their PhD (and frequently their MSc).
Group 2: Carry on into research forever, where the PhD is useful.
Group 3: Struggle to find work due to being overqualified (bizarrely) or failing to find a job that will let them use anything related to their studies.
This seems to be the way of things.
Strangely, despite the presence of Group 3, there are many companies (such as the one I’m at currently) that will take people from non-computing degrees and train them in programming, provided they already have a minimal skill set. For instance, we’ve taken Maths and Physics PhD students with minimal Visual Basic skills and turned them into VC++ / Win32 API / COM programmers.
This also has a happy flip side for those of us with computing training if, for instance, your previous job dealt mostly with proprietary technologies and has left 18 months of dead ground on your CV. In a sense, you can cross-train back into your own profession.
Minor “typo”. I realise Win32 is proprietary, it just happens to be proprietary in a very widespread way, instead of proprietary in a “only one company uses it” kind of way. Which was an improvement for me.
“It’s like programming is in need of a quantum leap, like what the mouse was for the UI. After 40-50 years, it’s now at OOP, but it has all been refinements, nothing fundamentally new. And programming is still a task of a lot of typing: “if” and “else” must be the most prevalent words, and they take 6 key presses every time. It’s like programming needs to get out of the straitjacket of human language.”
Maybe the problem with programming is too much complexity.
http://gagne.homedns.org/~tgagne/
Or rather unnecessary complexity.
Strangely, despite the presence of Group 3, there are many companies (such as the one I’m at currently) that will take people from non-computing degrees and train them in programming, provided they already have a minimal skill set. For instance, we’ve taken Maths and Physics PhD students with minimal Visual Basic skills and turned them into VC++ / Win32 API / COM programmers.
If I had to choose a different profession, I would certainly choose computing. The problem, though, is that I’m slightly older than the BSc and MSc crowd due to my three years of additional PhD research. This may put some of those employers off taking me on for such training programs.
Now, as a PhD student, I have minimal coding skills. For example, for Computational Fluid Dynamics (CFD) I needed to code in FORTRAN 77 to solve certain boundary-layer convection problems (finite element/difference methods). However, this does not require heavy coding skills, because one merely uses simple coding constructs; no esoteric design patterns are necessary to solve the numerical problems we are dealing with. That is to say, OO is overkill and a procedural language does the job: FORTRAN is pretty straightforward and easy to code in, while C++ is a completely different animal and too convoluted for numerical mathematics. Perhaps I should give C++ a try, but the learning curve is so steep that I wouldn’t get any actual research done…
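For readers who haven’t seen this kind of numerical code: the heart of such a solver really is just a stencil update applied in a loop. Here is a made-up illustration, one explicit finite-difference step for the one-dimensional heat equation, sketched in Haskell only to keep this thread’s examples in one language (the FORTRAN 77 version would be an array and a DO loop); r = alpha*dt/dx^2 and the initial data are chosen arbitrarily:

    -- One explicit finite-difference step for u_t = alpha * u_xx on a
    -- uniform grid with fixed (Dirichlet) boundary values.
    -- Stability requires r = alpha*dt/dx^2 <= 0.5.
    step :: Double -> [Double] -> [Double]
    step r u = head u : zipWith3 upd u (tail u) (drop 2 u) ++ [last u]
      where upd left c right = c + r * (left - 2*c + right)

    main :: IO ()
    main = mapM_ print (take 5 (iterate (step 0.25) u0))
      where u0 = [0, 0, 0, 1, 0, 0, 0]   -- a single heat spike in the middle

The constructs really are that simple: read a neighbour, read a neighbour, update, repeat.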
At any rate, one could say that having a degree plus minimal coding skills may show an aptitude for computing. But this does not put us in the same position as fully qualified programmers. It seems that if I convert to a coding-related profession, I will never reach the same quality as a formally trained CS coder. It is like learning to play the piano as an adult: you will never reach the level of those who started early.
Q, perhaps you are approaching your job interviews in the wrong way. If a company is looking for a “skilled coder who can start tomorrow” and you come along and say “Hi, I’ve done some stuff in FORTRAN77”, you probably won’t get the job. You probably don’t really want that kind of job, looking at your resume.
There are plenty of companies simply looking for outstanding smart people. Since you apparently are one of those, I wouldn’t start worrying as of yet.
Q, perhaps you are approaching your job interviews in the wrong way. If a company is looking for a “skilled coder who can start tomorrow” and you come along and say “Hi, I’ve done some stuff in FORTRAN77”, you probably won’t get the job. You probably don’t really want that kind of job, looking at your resume.
There are plenty of companies simply looking for outstanding smart people. Since you apparently are one of those, I wouldn’t start worrying as of yet.
You have a point. FORTRAN 77 is not that marketable compared to the heavyweight C/C++, Java, Python and Perl coders out there. Is FORTRAN used in industry at all? Or are we talking about the nostalgic sentiments of a stuck-in-the-sixties professor forcing his students to relive the past?
I’m curious what the future will bring and where my skills can be put to use outside academia. Since I cannot market my work experience, which I don’t have, I need to market myself as someone “smart” who is willing to take on any challenge. Happy bidding, folks…
It’s taken centuries of religious, philosophical, mathematical and political questioning, and of course of material trial and error, before today’s world was possible.
Programming is probably evolving as fast as we can cope, both in terms of doing it and in terms of learning to use the software.
It took a lot of adjusting just to get used to having phones everywhere after the device was invented. And phones are really simple.