This episode is supported by Hover. Hi, I'm Carrie Anne, and welcome to CrashCourse Computer Science! Computers in the 1940s and early 50s ran one program at a time. A programmer would write one at their desk, for example, on punch cards. Then, they’d carry it to a room containing a room-sized computer, and hand it to a dedicated computer operator. That person would then feed the program into the computer when it was next available. The computer would run it, spit out some output, and halt. This very manual process worked OK back when computers were slow, and running a program often took hours, days or even weeks. But, as we discussed last episode, computers became faster… and faster… and faster – exponentially so! Pretty soon, having humans run around and inserting programs into readers was taking longer than running the actual programs themselves. We needed a way for computers to operate themselves, and so, operating systems were born. INTRO Operating systems, or OSes for short, are just programs. But, special privileges on the hardware let them run and manage other programs.
They’re typically the first one to start when a computer is turned on, and all subsequent programs are launched by the OS. They got their start in the 1950s, as computers became more widespread and more powerful. The very first OSes augmented the mundane, manual task of loading programs by hand. Instead of being given one program at a time, computers could be given batches. When the computer was done with one, it would automatically and near-instantly start the next. There was no downtime while someone scurried around an office to find the next program to run. This was called batch processing.
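The batch idea is simple enough to sketch in a few lines of code. The job names and the run() stand-in below are invented for illustration:

```python
from collections import deque

def run(job):
    # Stand-in for actually executing a program on the machine.
    return f"{job} finished"

# The whole batch is queued up front.
batch = deque(["payroll", "census_tally", "trajectory_calc"])

results = []
while batch:
    job = batch.popleft()      # the next job starts the moment the last one ends;
    results.append(run(job))   # no operator scurrying around in between

print(results)
```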
While computers got faster, they also got cheaper. So, they were popping up all over the world, especially in universities and government offices. Soon, people started sharing software. But there was a problem… In the era of one-off computers, like the Harvard Mark 1 or ENIAC, programmers only had to write code for that one single machine. The processor, punch card readers, and printers were known and unchanging. But as computers became more widespread, their configurations were not always identical. For example, computers might have the same CPU, but not the same printer. This was a huge pain for programmers. Not only did they have to worry about writing their program, but also how to interface with each and every model of printer, and all the other devices connected to a computer, what are called peripherals. Interfacing with early peripherals was very low level, requiring programmers to know intimate hardware details about each device. On top of that, programmers rarely had access to every model of a peripheral to test their code on. So, they had to write code as best they could, often just by reading manuals, and hope it worked when shared.
Things weren’t exactly plug-and-play back then… more plug-and-pray. This was clearly terrible, so to make it easier for programmers, operating systems stepped in as intermediaries between software programs and hardware peripherals. More specifically, they provided a software abstraction, through APIs, called device drivers. These allow programmers to talk to common input and output hardware, or I/O for short, using standardized mechanisms. For example, programmers could call a function like “print highscore”, and the OS would do the heavy lifting to get it onto paper. By the end of the 1950s, computers had gotten so fast, they were often idle waiting for slow mechanical things, like printers and punch card readers. While programs were blocked on I/O, the expensive processor was just chillin’… not like a villain… you know, just relaxing. In the late 50’s, the University of Manchester, in the UK, started work on a supercomputer called Atlas, one of the first in the world. They knew it was going to be wicked fast, so they needed a way to make maximal use of the expensive machine. Their solution was a program called the Atlas Supervisor, finished in 1962. This operating system not only loaded programs automatically, like earlier batch systems, but could also run several at the same time on its single CPU. It did this through clever scheduling. Let’s say we have a game program running on Atlas, and we call the function “print highscore”, which instructs Atlas to print the value of a variable named “highscore” onto paper to show our friends that we’re the ultimate champion of virtual tiddlywinks. That function call is going to take a while, the equivalent of thousands of clock cycles, because mechanical printers are slow in comparison to electronic CPUs. So instead of waiting for the I/O to finish, Atlas puts our program to sleep, then selects and runs another program that’s waiting and ready to run.
Eventually, the printer will report back to Atlas that it finished printing the value of “highscore”. Atlas then marks our program as ready to go, and at some point, it will be scheduled to run again on the CPU, and continue on to the next line of code following the print statement. In this way, Atlas could have one program running calculations on the CPU, while another was printing out data, and yet another reading in data from a punch tape. Atlas’ engineers doubled down on this idea, and outfitted their computer with 4 paper tape readers, 4 paper tape punches, and up to 8 magnetic tape drives. This allowed many programs to be in progress all at once, sharing time on a single CPU.
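This kind of scheduling can be modeled as two collections: programs ready to run, and programs asleep waiting on I/O. The toy sketch below (the program names and helper functions are our invention, not Atlas’ actual design) shows the CPU hopping to another program the moment one blocks:

```python
from collections import deque

ready = deque(["game", "stats", "tape_reader"])   # programs ready to run
sleeping = set()                                   # programs blocked on I/O

def block_on_io(program):
    # The program started a slow operation (e.g. printing) and goes to sleep.
    sleeping.add(program)

def io_complete(program):
    # The device reports back; the program becomes runnable again.
    sleeping.discard(program)
    ready.append(program)

running = ready.popleft()   # "game" calls print highscore...
block_on_io(running)        # ...so it is put to sleep
running = ready.popleft()   # the CPU immediately picks up "stats" instead
io_complete("game")         # later, the printer finishes; "game" is ready again

print(running, list(ready))
```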
This ability, enabled by the Operating System, is called multitasking. There’s one big catch to having many programs running simultaneously on a single computer, though. Each one is going to need some memory, and we can’t lose that program’s data when we switch to another program. The solution is to allocate each program its own block of memory. So, for example, let’s say a computer has 10,000 memory locations in total. Program A might get allocated memory addresses 0 through 999, and Program B might get 1000 through 1999, and so on. If a program asks for more memory, the operating system decides if it can grant that request, and if so, what memory block to allocate next. This flexibility is great, but introduces a quirk. It means that Program A could end up being allocated non-sequential blocks of memory, in say addresses 0 through 999, and 2000 through 2999. And this is just a simple example – a real program might be allocated dozens of blocks scattered all over memory. As you might imagine, this would get really confusing for programmers to keep track of. Maybe there’s a long list of sales data in memory that a program has to total up at the end of the day, but this list is stored across a bunch of different blocks of memory.
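A toy allocator for the 10,000-address example makes the quirk concrete: hand out fixed 1000-address blocks in order, and Program A’s second block lands nowhere near its first. (The block size and bookkeeping here are simplifications, not any real OS’s scheme.)

```python
BLOCK = 1000
TOTAL = 10_000

free_blocks = list(range(TOTAL // BLOCK))   # block numbers 0 through 9
allocations = {}                             # program -> list of (start, end)

def allocate(program):
    if not free_blocks:
        return None                          # the OS denies the request
    block = free_blocks.pop(0)               # grant the next free block
    start = block * BLOCK
    allocations.setdefault(program, []).append((start, start + BLOCK - 1))
    return allocations[program][-1]

allocate("A")   # A gets addresses 0 through 999
allocate("B")   # B gets 1000 through 1999
allocate("A")   # A asks again and gets 2000 through 2999: non-sequential!

print(allocations["A"])
```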
To hide this complexity, Operating Systems virtualize memory locations. With Virtual Memory, programs can assume their memory always starts at address 0, keeping things simple and consistent. However, the actual, physical location in computer memory is hidden and abstracted by the operating system. Just a new level of abstraction. Let’s take our example Program B, which has been allocated a block of memory from address 1000 to 1999. As far as Program B can tell, this appears to be a block from 0 to 999.
The OS and CPU handle the virtual-to-physical memory remapping automatically. So, if Program B requests memory location 42, it really ends up reading address 1042. This virtualization of memory addresses is even more useful for Program A, which in our example, has been allocated two blocks of memory that are separated from one another. This too is invisible to Program A. As far as it can tell, it’s been allocated a continuous block of 2000 addresses. When Program A reads memory address 999, that does coincidentally map to physical memory address 999. But if Program A reads the very next value in memory, at address 1000, that gets mapped behind the scenes to physical memory address 2000. This mechanism allows programs to have flexible memory sizes, called dynamic memory allocation, that appear to be continuous to them. It simplifies everything and offers tremendous flexibility to the Operating System in running multiple programs simultaneously. Another upside of allocating each program its own memory is that they’re better isolated from one another. So, if a buggy program goes awry, and starts writing gobbledygook, it can only trash its own memory, not that of other programs.
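For Program A in the example, the remapping amounts to a small lookup table from virtual block number to physical base address. This sketch is a simplification of what real paging hardware does, but the arithmetic matches the example, and refusing out-of-range accesses is memory protection in miniature:

```python
BLOCK = 1000
# Program A's table: virtual block number -> physical base address.
# The two physical bases match A's scattered blocks from the example.
block_table = {0: 0, 1: 2000}

def translate(virtual_addr):
    block, offset = divmod(virtual_addr, BLOCK)
    if block not in block_table:
        # Touching memory outside the allocation is refused.
        raise MemoryError("access outside allocated memory")
    return block_table[block] + offset

print(translate(999))    # physical 999: coincidentally the same address
print(translate(1000))   # the very next virtual address maps to physical 2000
```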
This feature is called Memory Protection. This is also really useful in protecting against malicious software, like viruses. For example, we generally don’t want other programs to have the ability to read or modify the memory of, let’s say, our email program. With that kind of access, malware could send emails on your behalf and maybe steal personal information. Not good! Atlas had both virtual and protected memory. It was the first computer and OS to support these features! By the 1970s, computers were sufficiently fast and cheap.
Institutions like a university could buy a computer and let students use it. It was not only fast enough to run several programs at once, but also give several users simultaneous, interactive access. This was done through a terminal, which is a keyboard and screen that connects to a big computer, but doesn’t contain any processing power itself. A refrigerator-sized computer might have 50 terminals connected to it, allowing up to 50 users. Now operating systems had to handle not just multiple programs, but also multiple users. So that no one person could gobble up all of a computer’s resources, operating systems were developed that offered time-sharing. With time-sharing, each individual user was only allowed to utilize a small fraction of the computer’s processor, memory, and so on. Because computers are so fast, even getting just 1/50th of its resources was enough for individuals to complete many tasks. The most influential of early time-sharing Operating Systems was Multics, or Multiplexed Information and Computing Service, released in 1969. Multics was the first major operating system designed to be secure from the outset. Developers didn’t want mischievous users accessing data they shouldn’t, like students attempting to access the final exam on their professor’s account. Features like this meant Multics was really complicated for its time, using around 1 Megabit of memory, which was a lot back then! That might be half of a computer’s memory, just to run the OS! Dennis Ritchie, one of the researchers working on Multics, once said: “One of the obvious things that went wrong with Multics as a commercial success was just that it was sort of over-engineered in a sense. There was just too much in it.” This led Dennis, and another Multics researcher, Ken Thompson, to strike out on their own and build a new, lean operating system… called Unix. They wanted to separate the OS into two parts: First was the core functionality of the OS, things like memory management, multitasking, and dealing with I/O, which is called the kernel. The second part was a wide array of useful tools that came bundled with, but were not part of, the kernel, things like programs and libraries. Building a compact, lean kernel meant intentionally leaving some functionality out. Tom Van Vleck, another Multics developer, recalled: “I remarked to Dennis that easily half the code I was writing in Multics was error recovery code.” He said, “We left all that stuff out of Unix. If there’s an error, we have this routine called panic, and when it is called, the machine crashes, and you holler down the hall, ‘Hey, reboot it.’” You might have heard of kernel panics. This is where the term came from. It’s literally when the kernel crashes, has no recourse to recover, and so calls a function called “panic”. Originally, all it did was print the word “panic” and then enter an infinite loop.
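That original routine was barely more than its name. Here is a sketch in the spirit of that description; the spin flag is our addition so the example can actually return, whereas the real thing looped until someone rebooted the machine:

```python
def panic(message="panic", spin=False):
    # Print the word "panic", then halt: in early Unix this loop had no
    # exit, and recovery meant hollering down the hall for a reboot.
    print(message)
    while spin:
        pass
    return message

panic()
```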
This simplicity meant that Unix could be run on cheaper and more diverse hardware, making it popular inside Bell Labs, where Dennis and Ken worked. As more developers started using Unix to build and run their own programs, the number of contributed tools grew. Soon after its release in 1971, it gained compilers for different programming languages and even a word processor, quickly making it one of the most popular OSes of the 1970s and 80s. At the same time, by the early 1980s, the cost of a basic computer had fallen to the point where individual people could afford one, called a personal or home computer.
These were much simpler than the big mainframes found at universities, corporations, and governments. So, their operating systems had to be equally simple. For example, Microsoft’s Disk Operating System, or MS-DOS, was just 160 kilobytes, allowing it to fit, as the name suggests, onto a single disk. First released in 1981, it became the most popular OS for early home computers, even though it lacked multitasking and protected memory. This meant that programs could, and would, regularly crash the system. While annoying, it was an acceptable tradeoff, as users could just turn their own computers off and on again!
Even early versions of Windows, first released by Microsoft in 1985 and which dominated the OS scene throughout the 1990s, lacked strong memory protection. When programs misbehaved, you could get the blue screen of death, a sign that a program had crashed so badly that it took down the whole operating system. Luckily, newer versions of Windows have better protections and usually don’t crash that often. Today, computers run modern operating systems, like Mac OS X, Windows 10, Linux, iOS and Android. Even though the computers we own are most often used by just a single person (you!), their OSes all have multitasking and virtual and protected memory.
So, they can run many programs at once: you can watch YouTube in your web browser, edit a photo in Photoshop, play music in Spotify and sync Dropbox all at the same time. This wouldn’t be possible without those decades of research and development on Operating Systems, and of course the proper memory to store those programs. Which we’ll get to next week. I’d like to thank Hover for sponsoring this episode. Hover is a service that helps you buy and manage domain names. Hover has over 400 domain extensions to end your domain with, including .com and .net. You can also get unique domains that are more professional than a generic address.
Here at Crash Course, we’d get the domain name “mongols.fans”, but I think you know that already. Once you have your domain, you can set up your custom email to forward to your existing email address, including Outlook or Gmail or whatever you already use. With Hover, you can get a custom domain and email address for 10% off. Go to Hover.com/crashcourse today to create your custom domain and help support our show!