[There's] a lot of interesting stuff here, both from the point of view of the content but also the historical context behind, y'know, "When were `for' loops invented?" Well, that's what Algol called them, but prior to that FORTRAN called them DO loops. And prior to that they existed in assembler. So, first of all, what's the history, and what does it get you when you can do loops? But when do you run out of steam, even with loops, and have to use this – shock! horror! – pure mathematicians' thing that computer scientists have to learn about: recursion?! It was a real culture shock, it really was, in roughly the 1940s and 1950s, to suddenly find out that what the theoreticians had been drivelling on about for years – recursive
functions in mathematics – actually was of massive, massive importance for computer science. Back in the '40s and early '50s it was all assembler – or a slightly dressed-up thing called a macro assembler, where you can have little routines full of, y'know, packaged assembler instructions which could be called up as and when needed. So that sort of served people for quite some time. But probably one of the first high-level languages to introduce loops was good old FORTRAN [shows textbook]. Even though that was published in '65, FORTRAN itself goes back, I think, almost ten years before that. It was invented by John Backus and a large team of people at IBM in the 1950s. Many of you will know it. It's an
excellent language for engineering and scientific calculations. It is low-level. I mean, when you look at the nature of a FORTRAN loop, it's almost like doing it in assembler – but not quite. They didn't call them for loops – they called them DO loops. What I'm saying here is, you package all this up, where you're saying: repeat the following sequence of instructions – which I've done with my wavy lines here – and keep doing them until you hit the statement with the numeric label 180 on it. Then loop back from the statement labelled 180, back up to here, to increment the loop counter, which you're all familiar with in languages like C. It wasn't done, as it would be in C, by saying: "Here's my block of
stuff to be repeated; it's inside these curly braces". Here you can see it's a lot more like assembler, a lot more low-level. I mean, there's nothing magic about "180"; it could be "72"; it depended on your labelling system. Implicitly, in a simple thing like this, you'd start off [with the counter] at 1, and every time control returned back here it would be set to 2, 3, 4 and so on, up to and including 10. It was comforting, for those who were coming from assembler into a higher-level language, to see something that was only slightly higher in sophistication than assembler. How did loops become more "powerful", if you like? Well, again, even in assembler, and even in FORTRAN, there's no reason why you couldn't have a loop within a loop.
So I might have, outside of all this code, yet another layer of DO. What shall we say: "DO 200 J = 1, 20". So there might be some more statements between 180 and 200, who knows, but again, you see, a numeric label. And you can see what's happening: for every setting of J, which will start at 1 and go up to 20 – for every single one of those J settings – the inner loop will run through the complete spectrum of settings of I, going from 1 to 10. So you will have 200 locations [that] are being affected here. Basically you're going through the rows and columns of a matrix. All sorts of calculations in physics, chemistry and particularly engineering just rely on two-dimensional arrays full of numbers – either integers or scientific numbers with a decimal point, and so on.
Even hard-core assembler programmers had to admit that if you were doing heavy scientific programming it was nice to be a little bit more abstract and to have this sort of facility available to you. Now you might say: "Well, what came along to spoil the party, then?" or "How did people realize that this was wonderful but not quite enough?" The compiler, of course, has got to be tolerant and has got to be capable of compiling nested DO loops correctly – but how deep would it let you nest them? Well, I'm guessing – I would suspect – that the early FORTRAN compilers probably
wouldn't allow you to go more than about 10 deep, maximum. And I think you and I, Sean, have just been looking up what the current limits in C are. I seem to remember the earliest gcc allowed something like 32. But I think we looked this up … some C++ compilers nowadays allow you to nest loops 256 deep! And, of course, there are multi-dimensional problems that might actually need that, because it doesn't take much knowledge of higher maths to realize that if you've got a loop within a loop, and the outer loop goes round n times while the inner loop goes round n times, you are coping with an n-squared problem. If you put a third loop inside the other two, you're coping with a cubic, three-dimensional, problem. So what we're
saying is: all these multi-dimensional, polynomial-going-on-exponential problems that come up quite naturally – you can cope with them in nested for loops, so long as they don't need to be more than power-32, or power-256, or whatever it is. And you think: well, that should be enough for anybody! There are these multi-dimensional problems; you can just do them by nesting for loops, and surely [a depth of] 256 is enough for anybody? What kind of problem wouldn't it be enough for? Well, a lot of theoretical computer scientists of my acquaintance amused me greatly when – those of them that will own up to this – back in the '60s, people started going to lectures from mathematicians, theoreticians, people concerned
with "Gödel computability" and so on. And of course those sorts of people were very familiar indeed, at a mathematical level, with Ackermann's function. Now, as you know – you and I – we've done that one:
>> Sean: Was that "The most difficult … ?"
>> DFB: "The most difficult number to compute, question mark"
That recursion thing. But a lot of theoreticians admitted to me that they saw Ackermann's function and said: "I could try that out in FORTRAN!" Now, what they perhaps didn't realize – but it became famous by 1960 – is: FORTRAN is wonderful, but original FORTRAN did not do user-level recursion. You could write a thing called ACK. You could actually get it to call itself in FORTRAN. But you might have been expecting that every time it called itself it would lay out a data area for each recursive call – they're called "stack frames"; we know that now. You get lots of stack frames, one on top of another, and as you come back through the recursion they're deleted and thrown away, and you
climb back into your main program. FORTRAN doesn't do that. It sets aside one stack frame. If you keep calling yourself recursively, it just tramples in its muddy gumboots over all your data area, and you end up with total garbage. It no more gives you values of the Ackermann function than fly to the moon! And people said: "I then realized the importance of having user-level recursion, in programming languages, to cope with those really hard problems that fell outside nested for loops." Algol was famous in that its routines could call themselves recursively and could get the right answer, and, for limited, low-order values of Ackermann's function – very slowly, very slowly indeed – it would come out with the right answer.
>> Sean: Is there any need to think of an example of a problem, or program? Because Ackermann feels to me like it's the test-bed. You know, when you're testing out a motor-car you might take it on the track and see how fast it can go, but in day-to-day life that car might only get half that speed. What's the real-world kind of equivalent? Is there such a thing?
>> DFB: Real-world equivalent?
>> Sean: … of something that might need to use recursion … ?
>> DFB: … of that complexity? Not many things, is the answer to that. I mean, yes, it's true that Ackermann, as you know, was David Hilbert's research student. And the challenge was on to find something that
was so innately recursive that – remember, it was "generally recursive", they called it – as opposed to "primitive recursive". And simple things like factorial, and indeed Fibonacci, are primitive recursive. So I think you're right that you really are just making the point that eventually there are things that will kill you. I think the question in the middle is: "Is there something out there – pieces of program you need to write – where non-trivial recursion, in a sense, is needed, but not quite to the horrendous degree that Ackermann needs it?" And the answer is: "Yes – compilers is where it hit people". Because although early FORTRAN did not provide user-level recursion for you and me, nevertheless John Backus and
his team implemented it, in the middle 1950s I think, at IBM. And Backus wrote articles afterwards basically saying: "We didn't know enough about recursion, and even though we didn't provide it for the users of our language, boy, did we need it in the compiler! And we ended up inventing it in all but name." The syntactic structures of what is legal in a language, even just at the level of arithmetic statements, can be quite recursive. Because you end up with brackets within brackets within brackets, all with a multiplier outside. And which order do you do the brackets in? And, you know, how many levels of bracket nesting can you have? And if you don't
get things sorted out correctly then you'll get the wrong answer. But once again the problem could be that your users would come up and present you with a problem just designed to test out your compiler – whether it was robust enough to cope with a high degree of nesting, even just in arithmetic statements. So by 1960, in Algol, there were enough users, at the user level, who could see that a modicum of recursion – perhaps more complicated than factorial, but not quite up to full Ackermann capabilities – would be very nice indeed to have within your language.
said to me: "OK, you said that this is an innately recursive problem and it just had to have general recursion capabilities? Well, I …"