Tuesday, 27 May 2008
Having been one of those who grew up with the first batch of home computers in the 1980s, and therefore learnt to program in BASIC on an 8-bit home computer, I feel ideally qualified to add my tuppence to the discussion.
I think BASIC was a crucial part of my early interactions with
computers. When you turned the computer on, it sat there expectantly,
with a prompt that said
Ready, and a blinking cursor
inviting you to type something. The possibilities were endless. Not
only that, but you could often view the source code of games, as many
of them were written in BASIC. This would allow you to learn from
others, and crucially hammered home the idea that you could do this
too: they were using BASIC just like you. This is a long way from the
experience of today's first-time computer users: the computer starts
up, and does all kinds of fancy things from the get-go. You don't type
in BASIC commands to make it do things, you click the mouse. Modern
computers don't even come with a programming language: you have to
install a compiler or interpreter first. I am concerned that the next
generation of programmers will be missing out because of this.
BASIC is not enough
However, BASIC is not enough. BASIC teaches you about the general
ideas of programming: variables, statements, expressions, etc., but
BASIC interpreters rarely featured much in the way of structured
programming techniques. Typically, all variables were global, and
there was often no such thing as a procedure or function
call: just about everything was done with
GOTO or maybe
GOSUB. BASIC learnt in isolation by a lone hobbyist
programmer, by cribbing bits from manuals, magazines, and other
people's source code, would not engender much in the way of good
programming habits. Though it did serve to separate
the programming sheep from the non-programming goats, I can see
why Dijkstra was so scathing about it. To be a good programmer, BASIC is not enough.
To learn good programming habits and really understand about the machine requires more than BASIC. For many, C is the path to such enlightenment: it provides functions and local variables, so you can learn about structured programming, and it's "close to the machine", so you have to deal with pointers and memory allocation. If you can truly grok programming in C, then it will improve your programming, whatever language you use.
I took another path. Not one that I would necessarily recommend to
others, but it certainly worked for me. You see, a home computer came
with not just one language but two: BASIC and machine
code. As time wore on, the BASIC listing of source code for games
would increasingly be a long list of
DATA statements with
seemingly random sequences of the digits 0-9 and the letters A-F,
along with a few lines of BASIC, at least one of which would feature
a POKE command. This is where I learnt about
machine code and assembly language: those DATA
statements contain the hexadecimal representation of the raw
instructions that the computer executes.
Real Programmers do it in hex
Tantalized, I acquired a book on Z80 assembly language, and I was hooked. I would spend hours writing out programs on pieces of paper and converting them into hex codes by looking up the mnemonics in the reference manual. I would calculate jump offsets by counting bytes. Over time I learnt the opcodes for most of the Z80 instruction set. Real Programmers don't need an assembler and certainly not a compiler; Real Programmers can do it all by hand!
These days, I use a compiler and assembler like everyone else, but my point still stands, and it is this: by learning assembly language, I had to confront the raw machine at its most basic level. Binary and hexadecimal arithmetic, pointers, subroutines, stacks and registers. Good programming techniques follow naturally: if your loop is too long, the relative jump instruction at the end won't reach, as conditional relative jumps are limited to a signed single-byte offset. Duplicate code is not just a maintenance problem: you have to hand-convert it twice, and it consumes twice as much of your precious address space, so subroutines become an important basic technique. By the time I learnt C, I had already learnt many of the lessons about pointers and memory allocation that you can only get from a low-level language.
It's all in the details
BASIC was an important rite of passage for many of today's programmers: those who learnt programming on their home computers in the 1980s. But it is not enough. High-level programming languages such as C# or Java are a vast improvement on BASIC, but they don't provide programmers with the low-level knowledge that can be gained by really learning C or assembler.
It's the low-level details that are important here. You don't have to program in C day to day, or even learn C per se, but you do need something equivalently low-level. If the idea of writing a whole program in assembler and machine code appeals to you, go with that: I thoroughly enjoyed it, but it might not be your cup of tea.
C is not enough either
This actually ties in with the whole "learn a new programming language every year" idea: different programming languages bring different ideas and concepts to the mix. I have learnt a lot from looking at how programs are written in Haskell and Lisp, even though I never use them in my work, and I learnt much from Java and C# that I didn't learn from C and assembler. The same applies here: a low-level programming language such as C provides a unique perspective that higher-level languages don't. Viewing things from this perspective can improve your code whatever language you write in. If you're striving to write elegant software, viewing it from multiple perspectives can only help.