Technological Change (in my life)

edit: 2020-11-20 (this is a stream-of-consciousness thing)

Epiphany 0: I slid backwards for a short while

Ontario's ST&T program for secondary schools (1967-1970) provided me with a strong working knowledge of vacuum-tube-based electronics (tubes are known as valves in the British Commonwealth). Conestoga College continued with a strong working knowledge of discrete semiconductor-based electronics (diodes, transistors, thyristors, chips, etc.). I began working for Bell Canada in 1973, where I was surprised to discover Bell's switching equipment was still based upon an electromechanical technology known as step-by-step (SxS). Yikes!

Fork 1 (employment puts food on the table)

To be fair, our switching center (a.k.a. "central office") in Kitchener, Ontario hosted 70,000 lines, of which only 40,000 were based upon SxS. The other 30,000 were based upon a newer electromechanical technology known as number five crossbar (5XB). Crossbar employed one or more electromechanical computers, called markers, to set up a path through the central office by operating various horizontal (select) and vertical (hold) magnets on a crossbar switch (connected to other crossbar switches in a matrix) or, ultimately, on the called customer's line.

blast from the past: I met Gus Lorimer (a descendant of one of The Lorimer Brothers) while working in the Preston C.O. in 1976. I reiterated my usual complaint about how Bell Canada would be better off ditching relays for transistors. He glared at me, then said "Anything you can do with transistors, I can do with relays" :-)
comment: You should have seen the Lorimer call-through test set, which was built into a cherry-wood cabinet (it looked a lot like a vacuum-tube table-top radio). An electromechanical work of art that was never as good as the all-digital test set which eventually replaced it.

It is not like the phone company didn't know about semiconductor technology (Bell Labs invented the first working point-contact transistor in 1947 and the bipolar junction transistor in 1948); they simply intended to squeeze every invested dollar out of the current technology until they were forced to move to the next generation. So while big central offices in Canada were employing electromechanical technology, PBXs (private branch exchanges) in large corporations were introduced to SL-1 (stored logic one) technology where computers were employed as markers to set up a call through a minibar (miniaturized crossbar) switch. Why minibars? In those days, unhardened semiconductor circuits were easily destroyed by "natural lightning" or "accidental contact with commercial power". Hardening the semiconductor circuits was too expensive, so mechanical switching of the analog signal was deemed the only way to go. Within a year of the introduction of SL-1, Northern Electric released SP-1 (stored program one) for use in central office switching centers. SP-1 employed minibars as well.

Okay so in my Kitchener-Waterloo location we finally got an all-digital toll-switch (DMS-200) in 1978 and an all-digital local-switch (DMS-100) in 1981.

Fork 2 (continuing education)

Within a few months of starting at Bell Canada in 1973, I realized my freshly minted electronics skills were going to decline, so I began searching for an antidote. I landed a part-time job as a repair technician at a local musical instrument retail store by the name of Mother's Music (owned by David Boehm). There was lots of work including:

  • repair of microphones and guitar pickups
  • repairing solid state amplifiers (mostly silicon transistors although I do recall one based upon germanium transistors)
  • repair of vacuum tube amplifiers (Orange amps, with their oversized resistors and doughnut-shaped power and output transformers, were most memorable)
  • modifying vacuum tube amplifiers for more power
    • commercial tubes were replaced with industrial/military tubes (see the appendix of the RCA Receiving Tube Manual)
    • bakelite tube sockets were replaced with ceramic sockets
    • audio output and power transformers were upgraded
  • repairing mellotrons (I rebuilt one owned by the Ian Thomas band; apparently this one fell out of the back of a van)
  • repairing synthesizers manufactured by Moog and ARP
  • repairing Electronic Organs manufactured by companies like Farfisa and Solina
  • repairing Solina String Ensembles (unlike the duophonic ARP Odyssey, these were fully polyphonic)

The handwriting was on the wall (so to speak) for analog technology, so I returned to Conestoga College (evening classes) in 1976 to catch up on the continuing semiconductor revolution, which included TTL and CMOS chip technologies covered in their two Digital Electronics courses. I also learned to program in BASIC and COBOL on their HP-3000 minicomputer. In this field you should resolve to continue your education for as long as you work, or longer.

Forks Merge

Bell Canada became more digital in 1978 when they replaced traditional paper-tape long-distance billing recorders with computers ("Interdata Model 70") and industrial controllers ("TeleSciences SRS-1200 Data Recorder"). Both technologies stored digital data on an HP-7970E 9-track tape system, and Bell was looking for someone who wanted to maintain this stuff "full time". Since I was attending night courses at Conestoga College, I was offered first crack and the rest, as they say, is history.

Also in 1978, Bell Canada introduced a minicomputer system named TELCON (teletype concentrator) which replaced a room full of thirty-two ASR-35 teletype machines (connected to remote SL-1 and SP-1 switches) with a single PDP-11/04. This led to other projects based upon the PDP-11/23 (ACD for SL-1), PDP-11/44 (BSIMS and CALRS) as well as the PDP-11/73 and PDP-11/84. Then later (over the decades) to projects based upon the VAX-11/730, VAX-11/750, VAX-8550 (MFAS), uVAX-3500, uVAX-4300, VAX-6430, AlphaServer-2100, AlphaServer-4100, AlphaServer-DS20e and Itanium rx2800-i2.

 link: dips-n-certs (involved a huge amount of corporate training provided by: DEC, HP, etc.)

Personal Computers

Around 1978 I was contacted by Bits-n-Bytes (a retail computer store in Waterloo, Ontario) to repair a HeathKit-H8 computer along with a HeathKit-H9 terminal with a horrible key-bounce problem. Fred Hoffman (the owner) asked for a quote. I offered to fix it for free provided I could keep both units for a month. That's when I learned Benton Harbor BASIC and Benton Harbor DOS. Thanks, Fred (especially for letting me keep it for 6 weeks).

Later that same year, I purchased a 48k Apple][ (Apple2) with a 16k Language Card. Since the Apple][ had INTEGER BASIC (written by Steve Wozniak) in ROM, it would load APPLESOFT BASIC (written by Microsoft) into the language card. Apple][+ machines had APPLESOFT BASIC in ROM, so they loaded INTEGER BASIC into the language card instead. I also used this machine to learn:

  • 6502 Assembly Programming
  • UCSD Pascal Programming
  • Fortran 77 Programming

Epiphany 1: Digital Entertainment Evolves (Part 1/2)

In 2008 I purchased a PS3 game console bundled with a game titled Grand Theft Auto IV which was a bit of a shock. Why? You can drive around Liberty City for hours in various cars, listening to 18 radio stations (playing music from ELO, Genesis, Heart, Bob Marley, Queen, Joe Walsh, ZZ-Top, etc.), all the while experiencing different weather conditions including rain, shine, day, night, lightning, thunder, and fog. Your vehicle will actually handle differently depending upon road conditions, weather, weight, technology (front wheel drive vs. rear wheel drive), etc. At certain points you can exit your vehicle to run, walk, buy a hotdog, or board a number of subways. A self-contained digital world on a Blu-ray costing $59. To see what this world looks like, just search for GTA4 at www.youtube.com

comment: PS3 architecture consisted of one 3.2 GHz Cell Broadband Engine with one PPE and six SPEs. This gaming console was so powerful that it could also be used to aid in scientific research (see: folding-at-home). Some universities would buy 25 to 50 of them, then connect them together to produce a PS3-based supercomputer. Yikes! Not many people saw this coming.

Jump ahead to 2012 when I was playing Batman: Arkham City borrowed from my nephew. This is also a self-contained digital world built to support a game which is more like an interactive comic book where you play the role of either Batman or Catwoman. Most games are only good for 6-10 hours but Batman: Arkham City and Grand Theft Auto IV can require more than 100 hours if you do all the side missions. Okay, so at the original price of $59 this entertainment will cost you 59 cents per hour. Contrast this to a movie you buy for $25 then watch once for 90 minutes. Now I understand why Warner Bros. published this title.

This got me thinking (the epiphany) about my earlier years...

Flashback to 1970

I was in Sam the Record Man when I first heard, and purchased, Switched-On Bach by W. Carlos. At the time, I was misled into believing this was music produced by a programmed computer ("computerized" was the popular phrase). It turned out that this wonderful album from 1968 was painstakingly assembled, note-by-note, on the Moog synthesizer employing a keyboard and sequencers driving voltage-controlled oscillators and voltage-controlled filters, plus multi-track tape recorders. Electronic: "YES" but computerized: "NO". On top of that, remember that this was an analog recording of a machine with an analog audio output. Nevertheless, none of the instruments heard were real, yet the associated harmonics could (when cranked up) blow the output transistors of most solid state audio amplifiers which had been designed for natural instruments.

A similar classical music experience occurred a few years later when I heard, and purchased, the 1974 album Snowflakes Are Dancing by Isao Tomita. If you liked Debussy then this album was a must-have.

Summary

In 40 years (1970-2010) humanity went from really good fake musical instruments, which only a few hundred people could play (or afford to play), to today's (2012) video gaming industry, which employs tens of thousands and rakes in $25 billion ($25,000,000,000.00) annually. The total number of XBOX-360 and PS3 machines sold by 2012 exceeds 145 million, and this number doesn't include other gaming consoles or the number of people playing 3d games on high-end PCs.

Comments:
  • In 2012, video game development employed ~18,000 Canadians and added ~$2 billion to Canada's economy. Impressive, since Canada ranks third in game development behind Japan and the USA. Also remember that video game technology spills over into special effects for the movie industry.
  • In 2012, Call of Duty: Black Ops II was released and raked in sales of $500 million (yep, half a billion) in the first 24 hours. The publisher raked in a second half billion in the next 17 days. If these numbers ever occurred due to movie box-office sales you would read about it in every newspaper and hear about it in every newscast. Most parents hate video games because they see their kids wasting a lot of time playing them (from a parent perspective, today's "video game console" is synonymous with yesterday's BOOB TUBE), but everyone must admit that video game development employs a lot of programmers and sells a lot of computer hardware. Maybe it is better to think of video games as interactive movies.

Now I could have also mentioned lots of other technological changes including:

  • How we got from "mainframe computers" to "minicomputers" to "microcomputers" which then spawned personal computers, tablet computers, book readers, smart phones, MP3 players, satellite radio, etc.
  • How we got from "chess programs on early Apple and Radio Shack computers" to "IBM's Big Blue beating Gary Kasparov" to "IBM's Jeopardy-playing Watson"
  • How developments in analog music technology (vinyl records, to open reel tape, to 8-track tape, to cassette tape) facilitated the move to digital with the invention of the compact disk by Philips and Sony.
  • How we got from analog television based upon vacuum tubes (starting with "black & white video with monophonic sound" through to "color with stereo sound") on a 4:3 ratio screen to high-resolution digital television on a 16:9 screen with 7.1 channel audio.
  • How Apple Inc. was able to boot-strap Siri (the computer system you talk to) with the help of technology they purchased from Nuance Communications (a company with roots going back to Xerox and Kurzweil).

...but I think you've already got the idea.

Addendum1: the PS3 and XBOX-360 are seventh-generation gaming consoles. In 2013 we will see the release of the PS4 (PlayStation 4) and the XBOX-One (a.k.a. XBOX-720) which are eighth-generation consoles promising greater realism. I wonder how many years will pass before humanity is playing on a holodeck. It might be sooner than we think.
 
Addendum2: abridged paragraph from page 41 of the book "Thank You for Being Late" (2017) by Thomas Friedman
In response to the 1992 Russo-American moratorium on nuclear testing, the US government started ASCI (Accelerated Strategic Computing Initiative) in 1996 to provide a method for developing nuclear weapons in simulation. ASCI Red was delivered in 1997 and could execute 1.8 TeraFLOPS. It cost $55 million, was a little smaller than a tennis court, and required the equivalent power of 800 homes. In 2006 Sony released the PS3, which could also execute 1.8 TeraFLOPS but cost only £200.00 and required only the equivalent power of three 120W incandescent light bulbs.

Epiphany 2: The Dominance of C/C++ (Part 1/3)

No serious telephone company employee should ever admit this, but I did not learn "C" until the summer of 1988. On top of that, I didn't see any value in it at the time, but that would change.

Some of what follows comes from internet FAQs like this one: http://www.faqs.org/faqs/unix-faq/faq/part6/
Title: A very brief look at Unix history Author: Pierre (P.) Lewis <lew@bnr.ca>
comment: the author was an employee of Bell-Northern Research which was 70% owned by Nortel Networks and 30% by Bell Canada. BNR was the Canadian version of Bell Labs

All "C" programming language stories start with the creation of UNIX by employees at Bell Labs in 1969. The original UNIX offering was written using a Macro Assembler and was buggy. To make matters worse, Bell Labs was already working with multiple computers (starting with an 18-bit PDP-7 but intending to soon move to a 16-bit PDP-11 (or DECsystem-20) which was a completely different processor architecture from Digital Equipment Corporation). They solved the first problem (buggy) by creating the "B" language (which was based upon BCPL) then the "C" language which would allow UNIX to be rewritten using a higher level (than macro assembler) language. They solved the second problem (different CPU architectures) by using a CPU-specific code generator in the backend. Now both "C" and UNIX were portable.
comment: processor architectures get their label from the application programmer's view of the general purpose registers. The PDP-11 employed eight 16-bit general purpose registers but could address larger amounts of memory depending upon the mapping technology employed by the implementation-specific bus (Unibus mapping hardware employed: 18-bit, 22-bit and 24-bit modes; the CPU memory management hardware allowed the 16-bit CPU to address up to 24 bits of memory; hardware I/O registers were always mapped to the top of the address space)

comments: perhaps Bell Labs only intended to work with different PDP computers from DEC, but here "portable" means the code can be moved relatively easily to any computer built by any vendor. And this got me thinking (the epiphany) that no computer manufacturer would have ever produced a software tool like this, since it provides customers with an exit strategy (the customer can now move their business software to any computer platform from any manufacturer; this must be why Ken Olsen and his company, DEC, disliked UNIX and C). While many companies produced ANSI-standard languages, these offerings also included vendor-specific extensions which made moving to another platform difficult, if not impossible. On a related note, third-party software vendors working in "C" can more easily move their work to other machines, thus supporting a larger base of customers. This is also true for the UNIX operating system, which is usually the first OS to be ported to new computer architectures and processors. I am now convinced that portable languages and operating systems facilitated the explosive development in computer hardware seen since 1970.

Boot-Up of a Portable Software Paradigm

		+-----------------------+
Phase 1 +------>| PDP Assembler -> UNIX +-------------->+
		+-----------------------+		|
							|
	+<----------------------------------------------+
	|
	|	+----------------------------------+
Phase 2 +------>+ UNIX and PDP Assembler -> B -> C +--->+
		+----------------------------------+	|
							|
	+<----------------------------------------------+
	|
	|	+---------------------------+
Phase 3 +------>| UNIX and C -> better UNIX +---------->+
	|	+---------------------------+		|
	|						|
	|	+---------------------------+		|	
Phase 4 +<------| UNIX and C -> better C    +<----------+
		+---------------------------+

Notes:  1) Assembler will always exist but is only used in the code generator (not shown)
        2) Platform-specific information (i/o addresses etc.) is found in platform header files (see the sketch below)
        3) Phases 3 + 4 are totally portable if pure "C" (no embedded assembler instructions)
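To make note 2 concrete, here is a minimal sketch (my own illustration, not Bell Labs code) of how platform-specific details can be quarantined in one header while the program logic stays portable. The file names, macro names, and device addresses are all hypothetical.

// platform.h (hypothetical) -- the only file that changes from machine to machine
#if defined(TARGET_PDP11)
  const unsigned CONSOLE_PORT = 0177564;    // hypothetical device address (octal)
#elif defined(TARGET_VAX)
  const unsigned CONSOLE_PORT = 0x20000000; // hypothetical device address
#else
  const unsigned CONSOLE_PORT = 0;          // generic/hosted build
#endif

// portable.cpp -- this logic never changes; only platform.h does
#include <cstdio>
#include "platform.h"

int main() {
    std::printf("console device lives at %#x\n", CONSOLE_PORT);
    return 0;
}

Rebuild with -DTARGET_PDP11 or -DTARGET_VAX (hypothetical flags) and the same source serves both machines; this is the sense in which phases 3 and 4 above are "totally portable".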

According to The Linux Programming Interface by Michael Kerrisk, the first UNIX ports beyond the PDP-11 happened in 1977 (Dennis Ritchie and Steve Johnson working on the Interdata 8/32; Richard Miller on the Interdata 7/32 at the University of Wollongong) and 1978 (John Reiser and Tom London on the Digital VAX at the University of California at Berkeley).

Most people already know the story of how ARPA (now DARPA) funded the development of a self-healing digital communications network meant to survive a destructive war. Experiments started in the 1960s (ARPANET starts in 1969) but activity sped up in the early 1970s when Bell Labs began licensing UNIX to educational institutions for an incredibly low price, just enough to recover their costs (under monopoly rules, Bell Labs was "not allowed to be a software vendor" -or- "make any money vending software"), which meant that the majority of ARPA-funded work slowly moved to C on UNIX. This also meant that anyone working on the ARPA project in C on UNIX could easily share their work with peers at other universities. By about 1980 it appeared that all these universities (too many cooks syndrome?) were producing something which would soon break. Another reason for incompatibilities was the fact that TCP was implemented before the creation of the OSI Model, and DARPA wanted...

  • to merge these different networks into one common network which would eventually be called the internet (short for inter-network). The internet protocol was developed to facilitate this objective.
  • to implement two formally standardized layers, TCP (1974) and UDP (1980) over IP. They reasoned it might be better if one university did the work so there wouldn't be a bunch of competing implementations to support.

At this point the story shifts to a gifted programmer at Berkeley by the name of Bill Joy who was working with "C" on another DEC platform known as the VAX-11. By 1982, Joy had developed all the software necessary to implement what we now refer to as TCP/IP running on IPv4.

By the early 1980s, many universities had modified UNIX sufficiently that they were able to rebrand/republish/relicense it to others. The University of California at Berkeley was one such group, offering BSD UNIX to other universities for free or to corporations for $1000.00 (IIRC), which was incredibly low compared to commercial products. BSD also introduced Berkeley Sockets, which allow a programmer to read/write an internet connection in the same way programmers read/write files. Bell Labs copied this idea then produced the STREAMS libraries (IIRC) for the AT&T flavor of UNIX.
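To illustrate that "read/write a connection like a file" idea, here is a minimal client sketch using the classic Berkeley calls (error handling trimmed; the address 192.0.2.1 and port 80 are placeholders, and the code assumes a POSIX system):

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // a socket descriptor behaves much like a file descriptor
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in server{};
    server.sin_family = AF_INET;
    server.sin_port   = htons(80);                       // placeholder port
    inet_pton(AF_INET, "192.0.2.1", &server.sin_addr);   // placeholder address

    if (connect(fd, reinterpret_cast<sockaddr*>(&server), sizeof(server)) == 0) {
        const char msg[] = "hello\r\n";
        write(fd, msg, sizeof(msg) - 1);            // the same write() used for files
        char buf[512];
        ssize_t n = read(fd, buf, sizeof(buf));     // the same read() used for files
        if (n > 0) std::fwrite(buf, 1, n, stdout);
    }
    close(fd);
    return 0;
}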

Some people in the IT industry today are still very critical of "C" (or UNIX/Linux) while they simultaneously promote their favorite language (or OS), but they are fighting a losing battle. Why? When universities became financially squeezed in the early 1970s, many were forced to be more frugal and only consider inexpensive or free alternatives. So most universities went with C and UNIX. Now students had access to all the source code (which meant they could improve the product), which had the unintended consequence of creating a "critical mass" of human talent. In this environment it didn't matter which language was better because a choice had already been made; then students entering the workplace stuck with the software skills they already had. Companies selling software to universities probably saw some short-term profits but never had a chance of succeeding in the long run. One day soon even COBOL will fall. FORTRAN is still around but nowhere near as popular as it once was, although its libraries for doing math in the complex plane remain rather unique.
Question: Is "C" a high level language or a low level language?
Answer: Both. It is a low level language (think portable assembler) which becomes a high level language as soon as your program references external libraries via the #include directive. 
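A tiny sketch of that dual personality: the first half pokes at raw bytes through a pointer (the "portable assembler" side) while the second half leans on the standard library pulled in via #include (the "high level" side).

#include <cstdio>     // high level: formatted I/O from the standard library

int main() {
    unsigned int word = 0x12345678;

    // low level: inspect the individual bytes through a pointer,
    // which also reveals the machine's byte order
    const unsigned char* p = reinterpret_cast<const unsigned char*>(&word);
    std::printf("first byte in memory: 0x%02X (%s-endian)\n",
                p[0], (p[0] == 0x78) ? "little" : "big");

    // high level: let the library do the heavy lifting
    std::printf("the whole word: 0x%08X\n", word);
    return 0;
}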

1988

Example 1:

I knew of many Nortel projects (CALRS was one) where conversion to "C" and UNIX increased stability while reducing licensing costs. Nortel's flagship product at the time was DMS which was written in a Pascal variant called Protel (Procedure Oriented Type Enforcing Language). In those days, Nortel was spending a lot of money each year training freshly minted university grads how to program in Protel. Since many of these people already knew how to program in "C", Nortel embarked upon an internal project to convert DMS from Protel to C. They did a fairly good job of this except that the changeover was done "flash style" rather than "gradually". Bugs and delays meant that Nortel's cash cow delivered virtually no revenue for 18 months. Oops!

Example 2:

We now know that 1988 was the year Dave Cutler left Digital Equipment Corporation for Microsoft. Cutler was responsible for the development of Windows-NT (New Technology) which was meant to (and did for a time) run on multiple CPUs including: IA-32, MIPS, and Alpha, with plans for PowerPC, Itanium, AMD64 and ARM. Because this new OS was meant to run on multiple CPUs and computer platforms, "C" was chosen as the language-du-jour because it was portable. Bill Gates (no idiot) was not convinced that 32-bit Windows-NT would be successful -or- would replace 16-bit Windows anytime soon. So unlike the Nortel FUBAR described, Gates funded both projects then ran them in parallel (DECies vs. Microsofties). Perhaps this is where having a technologist at the helm of a company is better than employing a financial person.

comment: click this 2013 link http://www.stroustrup.com/applications.html then hit "control f" to perform a page search for the phrase "Microsoft".
 
Quote: Literally everything at Microsoft is built using recent flavors of Visual C++ including major products like:
  • Windows XP, Vista, Windows 7
  • Windows NT (NT4 and 2000)
  • Windows 9x (95, 98, Me)
  • Microsoft Office (Word, Excel, Access, PowerPoint, Outlook)
  • Internet Explorer (including Outlook Express)
  • Visual Studio (Visual C++, Visual Basic, Visual FoxPro). Some parts of Visual Studio, like the Base Class Libraries that ship with the .NET Framework, were written using C# but the C# compiler itself is written in C++.
  • Exchange
  • SQL
Note: The quoted paragraph above mentions C++ which many people consider a better way to use "C". If you don't use streams or objects in your C++ programs then your code will look almost like "C" and will very likely compile with any modern "C" compiler.

Example 3:

I didn't learn "C" programming until the summer of 1988 during a labor stoppage. I was using Lightspeed C on a Macintosh and I remember thinking "this has got to be someone's idea of job protection". But it always produced really small binaries so I thought it might have some advantages. Also, the concept of reusing your own code, or code written by others (free or purchased), via the #include mechanism seemed an obvious advantage. In subsequent years I noticed professional programmers getting really impressive results using "C" on 80486 and 80586 CPUs so I attended up the following evening classes at Conestoga College:

  • Programming with C (I)
  • Programming with C (II)
  • Object Oriented Programming with C++

1993-1996 (my personal changeover from Assembler to "C")

6811 assembler

In 1993, I did some contract work for a local (Kitchener/Waterloo) company described here. I designed the control board which employed a MC68HC11F1 from Motorola. All the software was written using a plain-text editor in 6811 Macro Assembler Notation on an Apple Macintosh. The binaries were generated using the uAsm 6811 cross-assembler from Micro Dialects. This approach worked well until the code exceeded 8K in size which introduced other problems (you always wanted to use branches for improved size and speed but often needed to switch to jumps when the target location was too distant).

Whitesmiths C

In 1995, I rewrote the whole thing for the Whitesmiths 68HC11 C Compiler/Assembler on an IBM-PC. Implementing startup and interrupt vectors was child's play with this package. Although I loved programming in 6811 Macro, Whitesmiths "C" was a much more productive tool. Descendants of this compiler are still available from COSMIC Software ( http://www.cosmic-software.com ) but you can find cross-compilers and cross-assemblers available for every CPU chip still in production. Here are two of many:

Desktops get TCP/IP

Anyone who remembers working on desktop platforms (PCs as well as Macs) in the early 1990s also remembers not getting TCP/IP stacks from Microsoft or Apple. For example, Windows 95 was released in August 1995 without a TCP/IP stack so if you wanted one (because you wanted to TELNET, FTP, or use the Netscape Navigator) then you needed to get a copy of Winsock from a third party. Windows-95a and Windows-NT4 were both released in 1996 with TCP/IP stacks so 1995-1996 might be considered a major inflection point in the history of computer technology.

But I need to point out that many of these companies were able to directly port the TCP/IP stacks from university sources because the government-funded research was not allowed to be patented or copyrighted -AND- because the software was written in C.

C++


C++ introduces object oriented concepts to C which can only result in greater productivity with fewer bugs. For example:

  • properly written "object constructors" ensure new variables and structures are properly initialized.
  • properly written "object destructors" ensure variables and structures are erased then released from memory without producing leaks.
  • "data encapsulation" ensures that no other software can directly access the variable-in-question but must use a programmer-supplied "method". All of a sudden it is not possible for a newbie to write "February 29" (to the variable in question) when the year is not a leap year. Remember that a part of the Y2K problem was based upon the question "is 2000 a leap year?" (yes)

Modern client software, like browsers (especially tabbed-browsers) from all vendors would be impossible without C++. In fact, I suspect the whole client-server paradigm has been taken further with C++ than was possible with C or any other language. Be sure to think about object-oriented technology whenever you see something (JPEG, GIF, Java Plugin, WAV player) sitting in the middle of your webpage.

Staying with Microsoft for a moment, most technology from them is object oriented in order to support COM (Component Object Model) which is the basis for other Microsoft technologies and frameworks, including: OLE, OLE Automation, ActiveX, COM+, DCOM, the Windows shell, DirectX, and Windows Runtime.

2010-2013 (my partial changeover to C/C++)

Up until 2010, I was able to do all my application programming using HP-BASIC-1.7 Alpha for OpenVMS. But in 2010, I ran into a couple of situations where I had to directly interface with open-source software written in "C". One application involved interfacing an HP-BASIC application to OpenSSL. The second involved interfacing an HP-BASIC application to gSOAP. With most so-called "DEC languages", a developer can supply the compiler with command-line switches to control how variable names are written to the symbol table which is used during linking. The appropriate "case control" switch doesn't exist with HP-BASIC-1.7, which means all symbols are up-cased. This means that a programmer needs to write a wrapper in order to facilitate linking. While this is possible, it might be more trouble than it is worth. Add to this the fact that HP-BASIC doesn't have all the data types available to C/C++ (for example, there are no unsigned variables in HP-BASIC).
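As an illustration only (the routine names below are hypothetical, not the actual OpenSSL or gSOAP entry points), the wrapper idea boils down to exporting an all-upper-case alias that HP-BASIC's up-cased symbol can link against, while the real mixed-case C routine stays untouched:

// wrapper.cxx -- compiled with the C/C++ compiler, linked into the BASIC image
extern "C" {
    // the real library routine, with its original mixed-case name (hypothetical)
    int ssl_handshake_example(void* connection);

    // HP-BASIC up-cases every external reference, so export an all-caps alias
    int SSL_HANDSHAKE_EXAMPLE(void* connection) {
        return ssl_handshake_example(connection);
    }
}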

For me, it was easier to write the apps in "C" (HP C V7.3-009 on OpenVMS Alpha V8.4) then call the open source software directly.

Now the two C programs I wrote (one client, one server) are fairly ugly because I used pointers to reference the XML structure buried within the SOAP packet. I found a few spare hours in 2013 to go back to gSOAP in order to play with suggestions for a table-walker, which can only be done well in C++ (well, you can always do pointer-to-pointer work in C but it looks really ugly; I have also seen table-walkers in C# and Java but those languages are out of scope on this project). Anyway, this time I used HP C++ V7.3-009 for OpenVMS Alpha V8.4 and discovered the resulting source code was smaller and much cleaner. Not sure if I will ever be granted time to rewrite the "working" C-based gSOAP apps into C++.

This thought continues below: Epiphany-5

Epiphany 3: The Dominance of Linux

Not much to say here except this: wherever you find C/C++ you will also find UNIX® (the trademarked name), Unix (the name of this technology), and Linux.

UNIX/Unix

As mentioned above, Bell Labs created the "C" programming language with the intent of squeezing the bugs out of Unix. In case you haven't been paying attention, Unix is now only written in "C", which may leave you with a chicken-or-egg nightmare if you happen to think about this stuff before falling asleep :-)

After the US government finished (1983-1984) the breakup of the Bell System, AT&T inherited Bell Labs then attempted to turn UNIX into a marketable product.

comments:
  1. after 1956, Bell Labs had been forbidden from working outside the telephone industry
  2. lawmakers in 1956 had no idea computers would be found in every industry, including telephony (see Nortel above)

MIT lifer Richard Stallman tried to get around the commercialization of Unix by creating the GNU Project (GNU's Not Unix), which was a total Unix rewrite. Since writing OS applications is a whole lot easier than writing a kernel, it shouldn't be a surprise to anyone that GNU wasn't entirely free of Unix until 1992.

Engineering Students

Engineering students, specializing in both hardware and software, had studied the Bell Labs UNIX kernel source code for years and were now worrying about the legality of this practice. Many universities began to look for alternatives, and I remember the MINIX kernel (from the "Free University" in Amsterdam, Netherlands) being a popular contender. I might even have a hardcover manual stashed away someplace in my home office.

I sometimes wonder what is in the Scandinavian water supply because

  1. the next big thing in kernels was Linux, which was written by Linus Torvalds at the University of Helsinki, Finland. It was the Linux kernel which was used to get the GNU Project entirely free of UNIX.
  2. The C language was morphed into C++ by Bjarne Stroustrup, who hails from Denmark

Linux

Today, the merger of the Linux kernel with GNU is simply referred to as Linux although some prefer the alternative GNU/Linux (see: GNU/Linux naming controversy)

There are already huge volumes of web information available about Linus Torvalds so let me include one quote from his bio found here:

In 2003, Torvalds left Transmeta to focus exclusively on the Linux kernel, backed by the Open Source Development Labs (OSDL), a consortium formed by high-tech companies, which included IBM, Hewlett-Packard (HP), Intel, AMD, RedHat, Novell and many others. The purpose of the consortium was to promote Linux development. OSDL merged with The Free Standards Group in January 2007 to become The Linux Foundation. Torvalds remains the ultimate authority on what new code is incorporated into the standard Linux kernel.
Wow, that is a lot of corporate support (critical mass?).
According to www.archive.org the site www.osdl.org in 2003 mentions these partners (alphabetical order): Alcatel, Cisco, Computer Associates, Dell, Ericsson, Force Computers, Fujitsu, HP, Hitachi, IBM, Intel, Linuxcare, Miracle Linux Corporation, Mitsubishi Electric, MontaVista Software, NEC Corporation, Nokia, Red Hat, SuSE, TimeSys, Toshiba, Transmeta Corporation and VA Software.
OSDL has since shut down and everything now redirects here: http://www.linuxfoundation.org/ but their corporate member list is still impressive.

Smart Phones

Back in 2005, Google wanted to put their Google Talk app on Apple's iPhone but Steve Jobs refused because the app would allow people to make free long distance calls (Jobs was certain this app would cause problems with one of the iPhone's main financial backers, "Cingular Wireless", which was a division of AT&T).

Comments:
  1. Even though Apple customers paid big bucks for the iPhone, Apple always controlled what apps the customer put on their own phone. They did this by locking the phones so that app installs could only be done through Apple's iTunes store
  2. This is an example of Karmic Irony because Steve Jobs started off selling Blue Boxes designed by Steve Wozniak and built by both of them to allow people to make free long distance calls over the telephone network.

In 2006, Google delivered an ultimatum to Apple: either allow Google Talk to be placed on the iPhone or we (Google) will produce a competing product called the gPhone.

Apple refused, which caused Google to purchase California Linux vendor Android Inc. Google then created the Open Handset Alliance where member companies would be given the Android OS software for free provided the manufacturer preset the customer-modifiable preferences to do searching at Google (which is where Google makes most of their money).

Tablets and Notebooks

There isn't much difference between gPhones and tablets (other than the screen size) so it should be no surprise that most tablet manufacturers would power their devices with Android (er, Linux). Other emerging operating systems, like Chrome OS (which is currently only found in Google's Chromebooks) and Firefox OS, are also just different Linux variants, so you can see that Linux is everywhere.

Epiphany 4: Vector Processing

Back in the late 1980s, I found myself in DEC's Field Service Lab (Training Center) at 12 Crosby Drive, Bedford, Massachusetts. We had lectures in the morning and lab assignments in the afternoon. I was assigned system W4 (Aisle W, Bay 4) which happened to be a VAX-8550. While I was working on this system I noticed visitors occasionally walking through a curtained-off area in Aisle X. During our coffee break I mentioned this to my instructor who told us that the system hiding behind the curtain was a VAX-6000 which featured a new optional circuit board capable of vector processing. He further explained that vector processors were all the rage in various kinds of scientific computing like "computing particle trajectories" or "climate circulation models" because they could perform a single instruction (e.g. multiply or multiply-and-accumulate) on multiple data points. Those data points can represent anything you wish including a location in three dimensional (or higher) space. In those days, "vector processing" was available as an expensive option ($$$) but today it is built into all modern CPUs, although most people are not aware of it.

comments: we were in the underground field lab reserved for DEC employees because a recent rain storm had flooded the lab reserved for customer use. This place was so large that it was difficult to see the far walls. When I mentioned this observation at coffee break the next day, one American DEC Field Engineer said this place is nothing compared to the NSA, which hosts computer systems by the acre (that's 0.405 hectares for non-Americans).

This side of Y2K, modern "graphics cards" employ 1000-3000 streaming processors so that numerous vector/tensor operations may be executing in parallel. On top of that, if you also remember that graphics cards typically have between 1 and 4 GB of private memory, then you come to the realization that graphics cards actually provide a private protected computing environment within your computer platform. Originally, cheap graphics cards only supported single precision floats while many today now also support double precision floats. In fact, some computer engineers look upon graphics cards as an array of several thousand floating point co-processors (think: several thousand 80387 co-processor chips).

Going even further, specialty companies now produce motherboards which can simultaneously host four, or more, graphics cards. Meanwhile, companies like Nvidia also manufacture graphics cards which do not have any monitor connectors because they are only used for number crunching.

Here's a brief snapshot of vector processing development:

Traditionally, processor technology was defined like this:
  • Scalar (one data stream per instruction;  e.g. CISC CPU)
  • Superscalar (1-6 non-blocking scalar instructions simultaneously; e.g. RISC CPU)
  • See: Flynn's Taxonomy for definitions like SISD and SIMD but remember that Data represents "Data stream"
    Caveat: this list purposely omits things like SMP (symmetric multiprocessing) and VAX Clusters
Then CISC and RISC vendors began to add vector processing instructions to their processor chips which blurred everything:
  • Vector (multiple data streams per instruction; true parallel processing on a single computing platform)
    • vector processing (also known as matrix processing) usually involves only two data points
    • anything higher than two data points is usually referred to as tensor processing
    • while it is possible to do floating point math on integer only hardware, floating point hardware can speed up floating point math by an order of magnitude or more. Likewise, you do not need special hardware to compute vectors or tensors but certain applications (climate models, artificial intelligence, etc.) demand it.
  1. Minicomputer / Workstation
    1. 1989: DEC adds vector processing capabilities to their Rigel microprocessor
    2. 1989: DEC adds optional vector processing to VAX-6000 model 400 (called VAXvector)
    3. 1994: VIS 1 (Visual Instruction Set) was introduced into UltraSPARC processors by SUN
    4. 1996: MDMX (MIPS Digital Media eXtension) is released by MIPS
    5. 1997: MVI (Motion Video Extension) was implemented on Alpha 21164PC from DEC/Compaq. MVI appears again in Alpha 21264 and Alpha 21364.
  2. Microcomputer / Desktop
    1. 1997: MMX was implemented on P55C (a.k.a. Pentium 1) from Intel
      • the first Intel offering involved 57 MMX instructions
    2. 1998: 3DNow! was implemented on the AMD K6-2
    3. 1999: AltiVec (also called "VMX" by IBM and "Velocity Engine" by Apple) was implemented on the PowerPC G4 from Motorola
    4. 1999: SSE (Streaming SIMD Extensions) was implemented on Pentium 3 "Katmai" from Intel.
      1. this technology employs 128-bit instructions
      2. SSE was Intel's reply to AMD's 3DNow!
      3. SSE replaces MMX (both are SIMD but SSE uses its own floating point registers)
    5. 2001: SSE2 was implemented on Pentium 4 from Intel
    6. 2004: SSE3 was implemented on Pentium 4 Prescott on from Intel
    7. 2006: SSE4 was implemented on Intel Core and AMD K10
    8. 2008: AVX (Advanced Vector Extensions) proposed by Intel + AMD but not seen until 2011
      1. many components extended to 256-bits
    9. 2012: AVX2 (more components extended to 256-bits)
    10. 2015: AVX-512 (512-bit extensions)
Putting hyper-threading aside for a moment, we first see true SMP on the desktop in 2005 with Intel's dual-core Pentium D. Since then, the number of cores from all vendors has only gone up.
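To show what those SIMD extensions look like from C/C++, here is a small sketch using Intel's SSE intrinsics (requires an SSE-capable x86 CPU; most compilers enable SSE by default on x86-64): a single _mm_mul_ps instruction performs four single-precision multiplications at once.

#include <xmmintrin.h>   // SSE intrinsics: 128-bit registers holding 4 x float
#include <cstdio>

int main() {
    alignas(16) float a[4] = { 1.0f,  2.0f,  3.0f,  4.0f};
    alignas(16) float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    alignas(16) float c[4];

    __m128 va = _mm_load_ps(a);       // load four floats into one 128-bit register
    __m128 vb = _mm_load_ps(b);
    __m128 vc = _mm_mul_ps(va, vb);   // ONE instruction, FOUR multiplications
    _mm_store_ps(c, vc);

    std::printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);   // 10 40 90 160
    return 0;
}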
 
But GPUs (graphics processing units) take vector processing to a whole new level. Why? A $200.00 graphics card now equips your system with 1500-2000 streaming processors and 2-4 GB of additional high speed memory. In the 2013 book "CUDA Programming", the author provides evidence that any modern high-powered PC equipped with one, or more (if your motherboard supports it), graphics cards can outperform any supercomputer listed 12 years ago on www.top500.org
I've been in the computer hardware-software business for a long while now but can confirm that computers have only started to get really interesting again this side of 2007 with the releases of CUDA, OpenCL, etc.
One final point. In late 2013 Sony released the PlayStation 4 which is based upon an APU from AMD. What's an APU you might ask? It is an Accelerated Processing Unit which consists of a CPU integrated with a GPU (something Intel had already been doing for a number of years without the fancy acronym). The PS4 is built around an APU consisting of two 4-core Jaguar x86-64 CPU modules coupled to the equivalent of an HD 7850 graphics chip. Because the PS4 is built around GDDR5 memory, which is otherwise only found in graphics cards, it appears Sony built a graphics system with a built-in CPU rather than the traditional processor system with a built-in graphics card.

Epiphany 5: The Dominance of C/C++ (Part 2/3)

DirectX

In the early 1990s, Microsoft was smaller so was "looking for problems to solve" and "markets to expand into". Since many people were attempting to develop computer games, Microsoft informally aligned itself with SIGGRAPH to help produce tools. Next, they offered to do a free port of the game Doom (which only ran on DOS) to Doom95 (to run on Windows95) only for the technical experience. Their first Graphics API (application programming interface) was named DirectX and appeared in 1995 for Windows-95 and 1996 for Windows-NT4.

DirectX is neat because it defines a number of hardware devices in software (including a reference graphics card) then replaces those software devices with hardware when compliant hardware is present. This means that game programmers do not need to worry which CPU, or GPU (if any), is present. Just send your commands to DirectX and it will carry out your wishes.
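A minimal sketch of that idea using the Direct3D 11 entry point (Windows only; needs the Windows SDK and a link against d3d11.lib): ask for real hardware first, then fall back to the software reference device that DirectX defines.

#include <d3d11.h>

int main() {
    ID3D11Device*        device  = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL    level;

    // try the real GPU first...
    HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                   nullptr, 0, D3D11_SDK_VERSION,
                                   &device, &level, &context);

    // ...and fall back to the software "reference" device if no suitable GPU exists
    if (FAILED(hr))
        hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_REFERENCE, nullptr, 0,
                               nullptr, 0, D3D11_SDK_VERSION,
                               &device, &level, &context);

    if (SUCCEEDED(hr)) {
        // from here on, draw calls are sent to "Direct3D" -- hardware or not
        context->Release();
        device->Release();
    }
    return 0;
}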

While recently poking around a game programmer site, I noticed this caveat:

Microsoft recommends you call DirectX directly from Visual-C/C++ or indirectly from a .NET wrapper. Doing direct calls will result in the fastest code possible.

While I have used Microsoft Visual Studio for a few corporate projects, I am no expert. I was always under the impression that you could set the build-options of all Visual Studio languages to produce either "x86-binary for Windows" or "MSIL for the .NET framework". I still believe this. So is it possible that DirectX expects to be called from C/C++ for some reason? I am not certain, but I do know that COM (Component Object Model) is the basis for other Microsoft technologies and frameworks, including: OLE, OLE Automation, ActiveX, COM+, DCOM, the Windows shell, DirectX, and Windows Runtime. And COM is written in C++.

Here is the opening paragraph of the Introduction from the book "Introduction to 3D Game Programming with DirectX 11" (which I highly recommend to programmers):

quote: Direct3D 11 is a rendering library for writing high performance 3D graphics applications using modern graphics hardware on the Windows platform. (A modified version of DirectX 9 is used on the XBOX 360.) Direct3D is a low-level library in the sense that its application programming interface (API) closely models the underlying graphics hardware it controls. The predominant consumer of Direct3D is the games industry, where higher level rendering engines are built on top of Direct3D. However, other industries need high performance interactive 3D graphics as well, such as medical and scientific visualization and architectural walkthrough. In addition, with every new PC being equipped with a modern graphics card, non-3D applications are beginning to take advantage of the GPU (graphics processing unit) to offload work to the graphics card for intensive calculations; this is known as general purpose GPU computing, and Direct3D 11 provides the compute shader API for writing general purpose GPU programs. Although Direct3D is usually programmed from native C++, stable .NET wrappers exist for Direct3D so that you can access this powerful 3D graphics API from managed applications.

Vector Math (again)

DirectX is a collection of other modules. Direct3D and D3DX (Direct3D Extension) are two of many. D3DX is a math library capable of doing math in three (or more) dimensions to support 3d video games but some programmers used D3DX to do scientific work. This led Microsoft to develop XNA (unofficially: DirectX-Nextgen-Architecture) which is a better vector math library.

game vs. non-game

Early in 2013, Microsoft announced that DirectX and XNA will both be folded into Windows-8 and will only be available as a Windows Kernel Service. Oops! Scientific application developers have been told to move to DirectCompute (but many will move to OpenCL or CUDA).

XBOX-360

Most people do not know that the first "X" in XBOX represents DirectX. Yep, the XBOX-360 runs a modified version of DirectX-9 (despite what you have read on the web, nothing higher).

Many people do not know that the XBOX-360 is powered by a tri-core PowerPC chip from IBM rather than an x86 chip from either Intel or AMD.

Now I guess it is no surprise that DirectX is written in C/C++ and is just compiled differently to generate code for different target processors (game console or Windows PC). Doing this in a non-portable language -or- macro assembler would be too labor intensive as well as bug prone.

Parallel Programming, CUDA, etc.

A few months back (May of 2013) I was trying to learn more about parallel programming so was reading a book titled "CUDA Programming: A Developer's Guide to Parallel Computing with GPUs" where the author gives evidence that any high-end desktop today (2013) with multiple graphics cards (if your motherboard supports them) can out-FLOP anything found at the top of www.top500.org twelve years ago in 2001. Wow! Who knew? One restriction here is that the CUDA technology is only available in C/C++ as a bunch of included libraries. Sure, I was aware of vector instructions in VAX and Alpha CPUs, but these appeared to only provide pseudo-parallel programming capabilities. But graphics cards from NVidia and AMD/ATI often provide several thousand streaming processors which are available for whatever you wish; this is true parallel programming on the desktop. You needed CUDA to talk to these cards when you wanted to do math, but I later discovered that lots of people were using the huge number of vector math libraries created for DirectX/Direct3D as well as OpenGL. Apparently all the modern games would not be possible without these libraries. Talking about DirectX/Direct3D for a moment, I've visited a few of the game programmer sites where most people say "Microsoft allows direct communications with the DirectX/Direct3D APIs from Visual-C++ but for all other languages you need to go through a .NET wrapper (which reduces performance)". Oops! Another plug for C++.

Clustering and Parallel technology

I can only recall three interconnecting technologies that made a large contribution to the computing industry

  • VAXcluster (from Digital Equipment Corporation) allowed multiple VAX computers running the VMS operating system to be interconnected then operated as one loosely-coupled common platform
    • This technology was migrated to OpenVMS running on Alpha and Itanium so was renamed VMS-Clustering but you probably never heard that phrase because clustering was never as popular as SMP (symmetric multi processing)
    • Transaction-oriented check-pointed operations were guaranteed to fail over without loss. The most memorable real-world example happened on 2001-09-11 (a.k.a. 9/11) where nodes of a VMScluster were located at the World Trade Center in New York and in New Jersey. The British company running this network from London reported that not a single transaction was dropped despite the fact that they watched the destruction of some of the nodes on live television.
  • Oracle RAC (real applications clusters) allows multiple computers to run Oracle RDBMS software simultaneously while accessing a single database, thus providing clustering.
  • Beowulf Cluster is a computer cluster of what are normally identical, commodity-grade computers networked into a small local area network with libraries and programs installed which allow processing to be shared among them. The result is a high-performance parallel computing cluster from inexpensive personal computer hardware.

I hadn't given much thought to clustering or parallel software on microcomputers until I received this recent (2013) advert from Intel for two products:

These products were designed to plug into the Microsoft Visual Studio IDE (Integrated Development Environment) targeted at Windows or Linux. However, after visiting the Intel site on 2013-07-20 it appears that these Windows-based tools now only generate code for Linux targets. I'm not sure if a Windows flavor is around the corner or not.

This thought continues below: Epiphany-19

Epiphany 6: Digital Entertainment Evolves (Part 2/2)

Two PS3 games were released in 2013 which were head and shoulders above all others.

The Last of Us - Is a movie-quality experience about future life after a biological holocaust. In part of the game, YOU play the role of Joel, who is travelling across a post-apocalyptic United States in 2033 in order to escort a young girl, Ellie, to a research facility where it is believed that Ellie may be the key to developing a vaccine. When Joel and Ellie become separated, YOU play the role of Ellie for a time.

Grand Theft Auto V - is played from a third-person perspective in an open world environment, allowing the player to interact with the game world at their leisure. The game is set within the fictional state of San Andreas (based on Southern California) and affords the player the ability to freely roam the world's countryside and the fictional city of Los Santos (based on Los Angeles). The single-player story is told through three player-controlled protagonists whom the player switches between, and it follows their efforts to plan and execute six large heists to accrue wealth for themselves.

Note: in this game "open world environment" translates into approximately 49 square miles (127 square km).
One can only wonder what these games will look like when next-gen consoles (PS4 and XBOX-One) appear later this year.

Epiphany 7: The cloud has always been there, sort of

Whenever I attended "communication technology" lectures over the past decades, the instructor almost always started by drawing a picture of a cloud (usually muttering "you don't need to know what goes on up here"). It didn't matter if the topic involved PSTN, X.25 or the Internet, a picture of a cloud was nearby. So today's use of the word CLOUD is actually marketecture for multiple marketing connotations:
  1. Computers in the internet implementing web 2.0
  2. Software as a service
  3. Cloud computing

Early Cloud: Networked Computers before the internet

Everyone reading this will have their own examples. My first memories involve VAXclusters, which consisted of multiple VAX computers running the VMS operating system. They could be tightly coupled through a common memory interface, or medium coupled through network communications. Applications were programmed in such a way that the loss of one of the computers did not cause the loss of any storage or transactional data. In fact, the recommended way of performing an OS upgrade was to roll one computer out of the cluster, do the upgrade, roll the computer back into the cluster, then repeat the operation on the next VAX.

32-bit VAX evolved into 64-bit Alpha, which meant that this technology was referred to by the lesser known name VMS Cluster. Improvements allowed the distance between clustered processors to increase, and such a cluster could be seen in operation during the 9/11 attacks on New York when one VMS Cluster processor was destroyed along with one of the twin towers while its partner in New Jersey continued transactional processing without dropping a single transaction. (not something any company with a conscience would want to advertise)

Early Cloud: Computers in the internet (web 1.0)

brief historical overview
  • The IP network unofficially started in 1969 with the development of the first packet switcher by BBN for ARPAnet
  • TCP was added to IP in 1982 (coded by Bill Joy) which allowed computers to do more than just send messages
  • The Internet resulted from the interconnection of various networks including: ARPAnet, NSFnet, MILnet and others.
  • Web browsers and servers (based upon HTTP v0.9) were invented in 1989 by scientists working at CERN in Geneva so they could freely share published documents without needing usernames and passwords to access multiple computers. Remember that the ability to send MIME attachments by email didn't happen until 1992.
  • Standardized HTTP 1.0 first appears in 1996 and many consider this the official start of web 1.0 (although many of us had been using FTP and browsers for a couple of years by that time; Hookup Communications was my Waterloo Ontario provider in 1994; we needed to acquire third-party TCP/IP stacks for DOS, Windows-3, and MacOS since no native products existed)

 specialized computers

  • Back in the 1970s and early 1980s, packet routing was done by full-blown computers; this is still possible
  • Cisco Systems main claim-to-fame was shrinking this router-on-a-computer functionality into a standalone computer appliance known as a Router
  • Networks were based upon signaling technologies like: Ethernet, Token Ring, FDDI, X.25, ATM, and PPP over PSTN (telephone dialup).
  • Since then, router and bridging functionalities have morphed into an appliance known as a Switch but this fact is not germane to this discussion.
  • Routers and Switches aside, the internet of the 1990s consisted of many more specialized computers which were unseen. Consider this incomplete list:
  protocol   function                          description                               how high in the cloud?
  dns/bind   domain name service /             translates names into IP addresses        high
             Berkeley Internet Name Domain
  smtp       simple mail transfer protocol     email OUTBOX                              medium
  pop3       post office protocol              email INBOX                               medium
  http       web server                        transfers (usually html) formatted data   low

Web 1.0 vs. Web 2.0 (more marketecture because no official definition exists)

Most technical people know that protocols like telnet and ftp are connection oriented, and that connection stays up until it is terminated by the client (via user command) or server (timeout). Most people do not know that http (the protocol to support www and/or web) is connectionless. Yep, you read that correctly: before Y2K, a browser opened a connection to a web server, retrieved a page of data, then closed the connection. If there were multiple pictures on the page, a separate open-close transaction was necessary for each one.

caveat: what I am describing here is HTTP/1.0 which still works that way. A second newer protocol called HTTP/1.1 was added in 1999. The keep-alive feature of this protocol keeps the TCP/IP connection open for a short time (programmable by the server) while the client makes multiple requests of the server.
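The difference is visible in the request text itself. The sketch below just prints the two request forms (www.example.com is a placeholder host); either string could be handed to the write() call from the socket example earlier.

#include <cstdio>

int main() {
    // HTTP/1.0 -- the server closes the TCP connection after this one response
    const char* http10 =
        "GET /index.html HTTP/1.0\r\n"
        "Host: www.example.com\r\n"
        "\r\n";

    // HTTP/1.1 -- keep-alive is the default, so the same connection
    // can be reused to fetch the next picture on the page
    const char* http11 =
        "GET /index.html HTTP/1.1\r\n"
        "Host: www.example.com\r\n"
        "Connection: keep-alive\r\n"
        "\r\n";

    std::printf("%s\n%s", http10, http11);
    return 0;
}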
SSL (Secure Sockets Layer)
Many people consider Web 2.0 the ability to do business over the web which would be impossible without the secure transmission of information.

Definitions:
  http:  non-secure web transactions usually occur on port 80
  https: secure web transactions usually occur on port 443 (https = http with security)
Experiments in Secure Network Programming beginning in 1993 resulted in Netscape developing SSLv1 in 1994, which was never released. SSLv2 was released in 1995 with a number of security flaws, which resulted in the release of SSLv3 in 1996. And now the story takes a strange twist.

The Browser Wars

Microsoft was a little late in recognizing the importance of the internet so went to war with its perceived rival, Netscape. 
  • remember that the first release of Windows-95 (released in August of 1995) did not contain a TCP/IP stack. It needed to be acquired from a third-party vendor, and many people chose a third-party Winsock implementation
  • Internet Explorer v1.0 was available on the Windows Plus! add-on package which was purchased separately
Netscape was not able to defend themselves against the attack from Microsoft (the US Department of Justice filed an antitrust lawsuit against Microsoft in 1998 citing monopolistic behavior). Netscape released a lot of their software as open source in February of 1998. Netscape was acquired by AOL (America Online) in November of 1998. Even though the software was open source, AOL employees tinkered with it as if it was their own product but kept releasing their changes as open source. And here the story gets stranger.

One programmer, or perhaps it was a team, wanted to triple encode session keys in SSLv3 and so wrote "C" routines to do so. But they made a mistake in the declaration of one variable, making it a "long" rather than a "long long". This had the effect of reducing the resultant key space to a much smaller (and now crackable) size. No one knows how much of this open source code made it into other products.
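As a toy illustration of how one declaration can shrink a key space (my own example, not the actual Netscape/AOL code), assume a platform where long is 32 bits and long long is 64 bits:

#include <cstdio>

int main() {
    // intended: a 64-bit quantity feeding the session key
    long long wide = 0x0123456789ABCDEFLL;

    // the mistake: declaring the receiving variable as plain "long"
    // (32 bits on the platforms in question) silently drops the top half
    long narrow = static_cast<long>(wide);

    std::printf("64-bit value     : %llx\n", wide);
    std::printf("after truncation : %lx\n", static_cast<unsigned long>(narrow));
    // 2^64 possible values has just become 2^32 -- a brute-forceable space
    return 0;
}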

Two Security Forks
  1. Most programmers aware of the debacle fixed the problem then moved on
  2. People at the IETF worried that any secure system communicating with an unfixed SSL3 system would be hackable so they introduced a fourth protocol as well as a new API to call it.

Enter TLS1

The IETF improved upon SSLv3.0 and might have called their new protocol SSLv3.1 or SSLv4.0 but, since they did not want people to continue to use the old libraries or even accidentally link against them, they named their new protocol TLSv1.0 (Transport Layer Security). They also modified the calling structure to prevent accidental linking. Security improvements continued with TLSv1.1, which morphed into TLSv1.2.
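For the curious, here is roughly what asking for the newer protocol family looks like with the OpenSSL 1.1+ API (a sketch only; real code needs certificate verification, error handling, and a link against libssl/libcrypto):

#include <openssl/ssl.h>
#include <openssl/err.h>
#include <cstdio>

int main() {
    // TLS_client_method() negotiates the highest mutually supported TLS version;
    // the old SSLv3-specific entry points are deliberately separate calls
    SSL_CTX* ctx = SSL_CTX_new(TLS_client_method());
    if (ctx == nullptr) {
        ERR_print_errors_fp(stderr);
        return 1;
    }

    // refuse to fall back to the broken legacy protocols
    SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION);

    // ... create a socket, then SSL_new(ctx), SSL_set_fd(), SSL_connect() ...

    SSL_CTX_free(ctx);
    return 0;
}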

Conclusions

Not only has security allowed consumers to securely purchase goods from sites like Amazon and eBay using credit cards and PayPal, it has allowed many companies to put their corporate records into computers located in the cloud. One neat feature of cloud computers is their ability to automatically back up data to other cloud computers located around the world. Companies would only do this if it was secure.

Oracle Corporation

Almost anyone alive today will recognize that Oracle has been very successful in getting their flagship database product connected to communications networks, including the internet. The following timeline may shed some light on Oracle's view of these phrases.

  Product      Year introduced   Comments
  Oracle 8     1997
  Oracle 8i    1999              i = internet (implemented by incorporating a built-in JAVA VM)
  Oracle 9i    2001
  Oracle 10g   2003              g = grid computing
  Oracle 11g   2007
  Oracle 12c   2013              c = cloud computing

Epiphany 8: A huge amount of technological change is due to video gaming

this Epiphany is under construction

video cards

solid matter displays

3d display technology

cloud computing

AAA Games

Okay so as near as I can tell, the phrase "triple-A game" is a marketing term created by the video game industry to distinguish "big budget projects" from the indie community. Anyway, here are what some industry-watchers think are the unofficial criteria for a triple-A label in 2015:

That last item should raise a few eyebrows for several reasons:

Epiphany 9: Game consoles lead, PCs follow
(or is it the other way around? perhaps it is a neck-to-neck horse race)

Busses

Originally, video cards were just another peripheral device sitting out on the relatively slow ISA bus (Industry Standard Architecture bus), but in order to produce higher quality "generated" video the CPU needed a faster path to the video card. The ISA bus led to the EISA bus (Extended ISA bus), which led to the PCI bus (Peripheral Component Interconnect bus), but this was still too slow for generated video so Intel created the AGP (Accelerated Graphics Port), which allowed the video card to sit on a high-speed bus directly connected to the CPU. Industry did not take kindly to this proprietary approach and so countered by creating the PCIe (PCI Express) bus, with Intel being one of the partners.

{ in this discussion I have purposely ignored technology that didn't take root like PCI-X etc. }

Memory

Over the years we've seen both CPUs and RAM memory get faster. The fastest memory has always been SRAM (Static RAM) but it was too expensive so vendors relied upon slower DRAM (Dynamic RAM) which led to DDR then DDR2 then DDR3.

With video cards now being electrically closer to the CPU, it was time for video card vendors to raise the ante with their own improvements. Because video memory is accessed differently than main system memory, video card manufacturers optimized DRAM designs then prefixed their acronyms with a "G" for graphic.

Architecture

Okay, so up until 2013, video cards were something you added to a computer system. In November of 2013, Sony released their PS4 (PlayStation 4). All of main memory is composed of GDDR5 memory and the graphics (ATI) portion of the 8-core Jaguar APU has full direct access to that memory (as does the CPU). It is almost as if Sony put a CPU inside a video card rather than a video card inside a CPU-based computer system.

Cores

Back in 2005, Microsoft released the XBOX-360 sporting a tri-core PowerPC CPU built by IBM. Around the same time, Sony released the PlayStation 3 which employed an 8-core Cell Processor (7 x SPE; 1 x PPE) built by IBM. This doesn't sound like a big deal until you realize that multi-core CPU systems were only just appearing in the retail marketplace. Yep, the fastest retail hardware was only available as a gaming console.

In 2013, Microsoft released the XBOX-One and Sony released the PlayStation 4, and both platforms are based upon an 8-core Jaguar APU from AMD. What's an APU? It is a CPU and video card controller integrated into one chip (or chip carrier). I guess I do not need to point out that 8-core chips are not yet available in the retail market. Video consoles still lead the way.

In 2016, Intel released a new Core i7 desktop processor featuring 10 cores (this extreme edition was aimed at the gaming community).