Friday, February 20, 2009

Computers, AI, and Fortran

Now here is a rather interesting exchange of views from WUWT on Feb 19 and 20 of 2009, on computers and what they can and cannot do.  My added comments are [in brackets.]

[The basic post is about Global Climate Models, aka General Circulation Models: how poorly they are written and validated, and how badly their resulting predictions compare to reality.]

David Holliday wrote:

“Computers can’t do anything humans can’t do. Computers can’t think. Computers can’t create. What computers can do is some of what humans can do only faster.”

[My response:] In my experience, computers can do many things humans cannot do. As just one example, when I studied artificial intelligence theory, algorithms, and systems, it was eye-opening to discover that a properly programmed computer can do “things” that humans just cannot do. There appears to be a limit to the amount of information a human (even a great one) can assimilate, process, and keep account of. Computers can do this far better. There are also documented examples of neural network algorithms that *learn* from mistakes and from partial successes, and deduce rules or answers that have eluded even the most experienced and smartest humans.

There are also relationship-discovery algorithms, aka data mining, that explore vast reams of data and reveal insights that humans have never before discovered.
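[For the curious, here is a minimal sketch of the simplest form of relationship discovery: scan a table of data and flag pairs of variables that are strongly correlated. The three variables and five data points below are entirely made up for illustration, and real data-mining tools go far beyond a simple correlation scan, but the idea of letting the machine hunt through the data for relationships is the same. I use Fortran here since that language comes up again below.

  program find_links
    implicit none
    integer, parameter :: n = 5, m = 3          ! five records, three variables (made-up data)
    real :: x(n,m), r
    integer :: i, j
    ! hypothetical plant data: feed rate, temperature, yield
    x(:,1) = [10., 12., 14., 16., 18.]
    x(:,2) = [200., 205., 195., 210., 215.]
    x(:,3) = [80., 83., 82., 88., 90.]
    ! brute-force scan of every pair of variables for a strong correlation
    do i = 1, m - 1
      do j = i + 1, m
        r = correl(x(:,i), x(:,j))
        if (abs(r) > 0.8) print '(a,i2,a,i2,a,f6.2)', &
            ' variables', i, ' and', j, ' look related, r =', r
      end do
    end do
  contains
    real function correl(a, b)                  ! Pearson correlation coefficient
      real, intent(in) :: a(:), b(:)
      real :: ma, mb
      ma = sum(a) / size(a)
      mb = sum(b) / size(b)
      correl = sum((a-ma)*(b-mb)) / sqrt(sum((a-ma)**2) * sum((b-mb)**2))
    end function correl
  end program find_links

Scale that pair-wise scan up to thousands of variables and millions of records, and you begin to see why a human could not do the same by hand.]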

In the field of computerized advanced process control, well, let’s just say that many of us are very glad humans are not at the controls, but instead let the computers do the work. Fly-by-wire is just one example, wherein advanced aircraft fly in or near the unstable regime, a regime where human reflexes and anticipation simply cannot keep up.

John Galt – re: is Fortran still in use? [This is in response to John Galt, a programmer with some experience, who evidently doubts Fortran is still in use, given its creaky age and shortcomings.  True, it is not well suited to writing internet applications.  But for engineering applications, it has few if any peers.  IMHO.]

Absolutely. Operating companies have millions of lines of code written in Fortran that works, and works quite well, every day. No one in the private sector has the time or budget to rewrite perfectly good code just to bring it up to some newly-written standard. Those new standards change every few years, and rewriting would be a complete waste of effort. There may be some limited instances where this is done, but it must have a justifiable positive influence on the financial bottom line.

[This is based on my experience and knowledge of oil and chemical companies, and the process models and process control software that we wrote in the '60s, '70s, and '80s.  It was FORTRAN IV and FORTRAN 77 for most of my time writing this code.  Some of it was later used as black-box subroutines called from programs written in other languages, such as the GUIs (graphical user interfaces) for simulators and training software.]
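[To give a flavor of what those long-lived routines look like, here is a tiny, hypothetical example in fixed-form Fortran 77 style: a vapor-pressure calculation from the Antoine equation. It is not taken from any real company's code base, and the argument names are invented, but a routine of this sort can sit unchanged for decades and be called as a black box from whatever language the front end happens to be written in.

C     Hypothetical Antoine vapor-pressure routine, Fortran 77 style.
C     log10(P) = A - B/(C + T), with T in deg C and P in mm Hg.
      SUBROUTINE ANTOIN(A, B, C, TEMPC, PMMHG)
      REAL A, B, C, TEMPC, PMMHG
      PMMHG = 10.0**(A - B/(C + TEMPC))
      RETURN
      END

A newer GUI or simulator simply passes in the three Antoine coefficients and a temperature and reads back the pressure; the Fortran inside never needs to change.]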

---------------------------

[David Holliday responds:]

David Holliday 

[Quoting my earlier comment] “In my experience, computers can do many things humans cannot do. As just one example, when I studied artificial intelligence theory, algorithms, and systems, it was eye-opening to discover that a properly programmed computer can do “things” that humans just cannot do.”

My original statement is correct. There is nothing a computer can do that a human can’t do. The computer can just do it faster.

Computers are machines. Programs are instructions to the machine to do things. Humans design the programs. Humans write the programs. Humans test the programs. And humans run the programs. Therefore, humans can do the same thing the programs do but just slower.

Computers aren’t creative. They have no independence of thought. They don’t think at all. They have no independence of action. They have no cognitive understanding. They simply execute the programs. One of the biggest misnomers in Computer Science is Artificial Intelligence. There is no intelligence in a computer. And we’ve never been able to put it in there.

I first studied Artificial Intelligence in the early ’80s. Neural nets, which are often purported to be advanced, self-learning computers, are fundamentally self-weighting algorithms that can vary their behaviour based on feedback mechanisms. Expert systems are simply rule-based approaches to decision systems. Humans build the neural nets and humans write the rules. There is nothing about how these programs work that we don’t understand. The HAL 9000 of 2001: A Space Odyssey doesn’t exist today and may never exist.
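[Holliday's description of neural nets as self-weighting algorithms driven by feedback is fair as far as it goes. Here is roughly what that looks like in code: a minimal sketch of a single neuron trained by the delta rule on a made-up two-input example (the logical AND function). The training data, learning rate, and epoch count are arbitrary, chosen only to show the weight-adjustment loop; real networks stack thousands of such units.

  program delta_rule
    implicit none
    real :: w(2), b, eta, y, err
    real :: x(2,4), t(4)                  ! four training cases, two inputs each
    integer :: epoch, k
    ! made-up training data: inputs and target outputs for logical AND
    x = reshape([0.,0., 0.,1., 1.,0., 1.,1.], [2,4])
    t = [0., 0., 0., 1.]
    w = 0.0
    b = 0.0
    eta = 0.1                             ! learning rate
    do epoch = 1, 1000
      do k = 1, 4
        y = w(1)*x(1,k) + w(2)*x(2,k) + b ! neuron output
        err = t(k) - y                    ! feedback: how far off was it?
        w = w + eta*err*x(:,k)            ! adjust the weights
        b = b + eta*err
      end do
    end do
    print '(a,3f8.3)', ' learned weights and bias: ', w, b
  end program delta_rule

Nothing mysterious, just arithmetic repeated many times; but repeat it over thousands of weights and enough data, and the net finds combinations no person would ever work out by hand, as in the plant example below.]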

[Now, David Holliday may not realize just what I referred to earlier about neural networks solving problems that the best and brightest human minds had spent countless hours on with no results.  A complex chemical plant just could not make product that met specifications.  The stuff it did make had to be sold at a deep discount as poor quality.  The plant had several processing steps (not uncommon) and many independent variables in each step.  The chemists and engineers did all they could from theory and experience, but to no avail.  The neural net guys were brought in, built their NN, and fed data from each failed trial into it.  They fed in all the variables and settings for each failed test, along with the resulting yield and quality of product.  Then they ran it in Predict mode, with the goal being the highest yield of on-spec product.  The NN churned away and produced a new combination of independent variables for each of the several process steps.  After the chemists and engineers reviewed these settings for safety and reasonableness (e.g., can this pump generate that much flow?  Can that heater produce a stream at that temperature?), they agreed to give it a go.  And it worked.  The question is, could humans have eventually found that precise combination of variables?  Maybe.  But then, they had been trying for a long time to do just that, with no success.]
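[For the record, the “Predict mode” step in that story is conceptually simple once the net has been trained: the computer evaluates the fitted model over a grid of candidate settings and keeps the combination with the best predicted result. The sketch below stands in a made-up quadratic for the trained net, and the two variables, their names, and their ranges are purely illustrative; I obviously cannot reproduce the plant's actual model or data here.

  program predict_mode
    implicit none
    integer :: i, j
    real :: temp, feed, y, best_y, best_temp, best_feed
    best_y = -huge(1.0)
    do i = 0, 20                          ! candidate temperatures (hypothetical range)
      do j = 0, 20                        ! candidate feed rates (hypothetical range)
        temp = 150.0 + 5.0*real(i)
        feed = 10.0 + 1.0*real(j)
        y = yield_model(temp, feed)       ! predicted yield from the fitted model
        if (y > best_y) then
          best_y    = y
          best_temp = temp
          best_feed = feed
        end if
      end do
    end do
    print '(a,2f8.1,f8.3)', ' best settings and predicted yield: ', &
        best_temp, best_feed, best_y
  contains
    real function yield_model(t, f)       ! stand-in for the trained neural net
      real, intent(in) :: t, f
      yield_model = 1.0 - ((t - 212.0)/100.0)**2 - ((f - 23.0)/20.0)**2
    end function yield_model
  end program predict_mode

The safety and reasonableness review by the chemists and engineers is, of course, the one step that cannot be delegated to the machine.]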

As someone who has worked in computers for over 26 years, from programmer to Chief Technology Officer, I can tell you with a high degree of confidence that I understand how computers work and what they can do. They don’t do anything we don’t tell them to do. And since everything they do is something we tell them to do, we can do it too.

Don’t confuse the fact that computers can do things much faster than humans with their doing something humans cannot do. The point is they are just doing what we program them to do. Of course they do it orders and orders of magnitude faster than we can. Hence, the old joke, “To err is human, but to really [mess] up takes a computer.”  [This is a family-friendly blog...editorial license is hereby granted to me by me to clean up offending language -- Roger; our version of that last line was "to really foul things up requires a computer."]

---------------------------------------------------------------

[Next, Richard M chimes in with:]

Richard M 

I agree with almost everything David Holliday (20:02:43) said. Computers do not have any intelligence, and a superfast human could do everything a computer could do. However, there are no superfast humans, so in reality computers can do many things we poor, slow humans could never do or would never even attempt.

---------------------------------------------------------------

[As I am taking shots from all points of the compass here, I finally had some slack time to respond thusly:]

Roger Sowell 

David Holliday [wrote]

“One of the biggest misnomers in Computer Science is Artificial Intelligence. There is no intelligence in a computer. And we’ve never been able to put it in there.”

[My response is:] We must have taken different classes in AI, then. Mine was from UCLA, where the instructor wrote the AI for NASA’s Mars rovers. AI definitely exists, and I stand completely by my earlier assertions.

But I will not get further into a Did so! Did Not! contest, as it is fruitless and a waste of Anthony’s and moderators’ time.

------------------------------------------------------------------

[Next, Squidly jumps in with a response to my longer earlier comment:]

I would agree that there are “some” things that computers can do that humans cannot. Computational speed is perhaps one, but that is just about where it ends. I have studied AI for quite some time and it was my primary collegiate focus, and I too play with neural networks from time to time just for fun. But the human brain, by contrast, can perform many things that computers presently cannot do, and some things that they may never do. One very humanly simple thing that computers are extremely poor at is pattern recognition. Humans process patterns with astounding accuracy and at an astounding rate.

As a very simple example of this, I was recently sent an email from a colleague; the special thing about it was that the letters were all jumbled up. All words were written with the proper beginning and ending letters, had the proper number of characters and the correct characters as a whole, but all inner letters were out of order. The interesting part of this is that you can read it almost as easily as you read anything else. As long as the words contain the correct letters, length, and beginning and ending letters, it doesn’t matter. Your brain automatically compensates on the fly through pattern recognition. It’s an interesting experiment and one that you can easily try for yourself. Now, one would say, “so, a computer can do that”; yes, but through iteration and rearranging, not through first-take pattern recognition, and certainly not with the efficiency of the human brain.

And as for “fly-by-wire,” your brain handles more fly-by-wire than our entire fleet of Stealth bombers combined, every moment of your life: monitoring thoughts, temperature, body functions, heartbeats, internal clock, circulation systems, neural systems, and on and on, all in real time. That’s pretty tough to beat. We may get close someday in the future, but for now, not even close.
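[Squidly's scrambled-letters experiment is easy to reproduce. Here is a minimal sketch that shuffles only the interior letters of a single word, keeping the first and last letters in place; the word chosen is arbitrary, and a full sentence scrambler would simply repeat this word by word.

  program jumble
    implicit none
    character(len=*), parameter :: word = 'recognition'   ! arbitrary example word
    character(len=len(word)) :: w
    character :: tmp
    integer :: i, j, n
    real :: r
    w = word
    n = len_trim(w)
    ! shuffle the interior letters only; the first and last stay put
    do i = 2, n - 1
      call random_number(r)
      j = 2 + int(r*real(n - 2))          ! random position in 2 .. n-1
      tmp = w(i:i)
      w(i:i) = w(j:j)
      w(j:j) = tmp
    end do
    print *, word, ' -> ', w
  end program jumble

The program scrambles mechanically; the reader un-scrambles without even noticing, which is Squidly's point. -- Roger]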

As a tiny example that goes to this and the model topic: have you ever seen and heard even a short film that was completely computer generated and that you could not discern from reality? And that is the simple stuff.

Unfortunately, we still have actors (politicians being interchangeable)…

---------------------------------------------------------------

[And again, we hear from Squidly:]

Squidly 

David Holliday (20:02:43) :

My original statement is correct. There is nothing a computer can do that a human can’t do. The computer can just do it faster.

By and large I agree with you. There seems to be a popular misconception that I think has been largely fueled by Hollywood. Computers cannot do the things you see on the big screen. Unfortunately, even my father suffers from this misconception, and he is a retired engineer from MIT! And the worst part is that he is eating up AGW like there is no tomorrow. I fight with him on the AGW subject daily.

BTW, to all, yes, AGW is most certainly a religion. I have seen this transformation in my father, and it is rather disturbing. I would never have guessed that I would be seeing this behavior from my father, but he’s clearly had too much kool-aid. I’ve always viewed him as perhaps the most rational and objective person I have known, but wow, not when it comes to AGW. It is some scary stuff!

-------------------------------------------------------------------

[Next, we hear from Alan the Brit, also agreeing with Halliday:]

Alan the Brit 

David Holliday:-)

That is just about bang on. I am so delighted to see so many mature heads making the point: GIGO (garbage in, garbage out). There will always be a human being at the end of it somewhere. Assume & presume nothing, ever!

As an engineer, & I know I have said this before, computers are little more than powerful calculators & number crunchers; sure, they can churn out the numerical answers by the nanosecond where we mere fleshy lumps take minutes to do the same thing. However, the wee, wee, wee, wee, tiny flaw in the whole thing is that if you get the design philosophy wrong, no amount of number crunching will lead you to the right solution, but only to many ways in which you prove you got it wrong! This point I would like to direct to Roger Sowell: yes, computers are wonderful things, but they are after all just a tool to do a job;-) I spend many hours recommending to graduate engineers that they sit down with a pad & a pencil & sketch things out by hand before they ever get near a computer programme. As a 51-year-old Luddite I mistrust computers, & with the current political administration losing personal data left, right, & centre I feel vindicated.

--------------------------------------------------------------------

[And again, this time from John Galt:]

John Galt 

John Galt – re: is Fortran still in use?

[Quoting me from above] Absolutely. Operating companies have millions of lines of code written in Fortran that works, and works quite well, every day. No one in the private sector has the time or budget to rewrite perfectly good code just to bring it up to some newly-written standard. Those new standards change every few years, and rewriting would be a complete waste of effort. There may be some limited instances where this is done, but it must have a justifiable positive influence on the financial bottom line.

[Now John Galt's reply] I work as a software engineer/consultant and I’m well aware of the problems of maintaining and updating code.

Remember the Y2K crisis? That came about because old code was never updated. Nobody knew if programs that had been in operation for decades would work, and in many cases nobody could dig through the layers of patches, band-aids, paperclips, and hacks to decipher the internals of the programs, either.

I should not be surprised by the reported size of the Fortran code base, but I am. This language isn’t part of the Computer Science curriculum at any university in this part of the USA. Is it still taught in the engineering schools?

-------------------------------------------------------------------

[And now me, with a reply to John Galt re Fortran in engineering schools: ]

Roger Sowell 

John Galt: re Fortran still taught in engineering schools?

Yup. The University of Texas at Austin, for one. UT is a decent institute of higher education (and my undergrad alma mater; not to be confused with the University of Tennessee, another UT).

see http://www.utexas.edu

Click here for the Fortran class.

Also, the other UT (Tennessee) teaches Fortran. From this site: “For example, we’ve made changes in the NE [nuclear engineering] Fundamentals course in response to alumni feedback, bringing the Fortran computer language back to the curriculum in order to prepare graduates for the field.”

The Y2K Fortran bugs were not that hard to fix. Refineries, chemical plants, power plants, etc. running Fortran made it through midnight 12/31/1999 into 2000 just fine.

Just another scare-mongering non-event.

-------------------------------------------------------------------

[And John Galt, a gracious fellow, responds:]

Thanks for the update regarding Fortran in engineering schools.

Pre-Y2K was a great time to be in software consulting. The business never made so much money. If you had asked me about the seriousness of the threat, I would have repeated the industry line about it being the gravest danger you could imagine.

--------------------------------------------------------------------

[And finally I respond to Alan the Brit, but by now I am uncomfortable really getting into this on Anthony's blog, as it uses up his space and occupies his time to moderate (or his other moderators'); so I offer to take this over here.  But, probably none of the other participants know about this blog.  Anthony has requested on an earlier thread that we stay on topic. We shall see if anyone finds this.]

Roger Sowell 

Alan the Brit,

“This point I would like to direct to Roger Sowell: yes, computers are wonderful things, but they are after all just a tool to do a job;-) I spend many hours recommending to graduate engineers that they sit down with a pad & a pencil & sketch things out by hand before they ever get near a computer programme. As a 51-year-old Luddite I mistrust computers, & with the current political administration losing personal data left, right, & centre I feel vindicated.”

I also am/was an engineer, dating from the slide rule days. I completely agree that it is usually best to think it through first with a pad and pencil, perhaps even research a bit to see what others have published. There are, no doubt, many thousands of good software routines in regular use that are just doing what humans can do, only faster and error-free. I have written and implemented my share of those.

I think this all comes down to semantics: just what is “artificial intelligence”? To me, if a human cannot do it (whatever “it” is), but the computer can, that is a form of AI. The examples I gave earlier are on point.

We as humans give a label to people with great memories, or with abilities to solve problems that no one else can. That label is usually “intelligent.” There are even standardized tests (albeit controversial) that purport to give a score that measures IQ. As an attorney, I had to take quite a few rather difficult tests to prove a certain level of ability before I was awarded my license to practice law. [Note, none of those tests involved IQ, at least not directly.]  Other professions require such tests too, and I have no intention of placing attorneys alone in the spotlight. Professional engineers, PhDs, MDs, CPAs, CFAs; the list is long. I have a lot of respect for others without fancy degrees, too, especially my auto mechanic. Even he uses a computerized diagnostic tester from time to time; I think it has a rules-based expert system in it.

Hence, when a computer can solve a problem no human could or ever will, is it also “intelligent”?

Anthony, if this is too far off-topic, I can take this over to my energyguy’s musings blog so as not to waste your time. — Roger

-----------------------------------------------------

Roger E. Sowell, Esq.   legal website is here.

aka energyguy on townhall.com

1 comment:

Anonymous said...

I would just add to this that while it may become a semantic argument about what's a computer and what's a robot / sensor:

Computers drive equipment that has senses we simply do not have. They hold rockets stable with gyros and have acceleration sensors sensitive to minuscule forces. They sense temperatures to fractions of a degree, etc. Then they respond to these senses. We simply cannot do that.

We can't see in infrared, and seek based on it (we need to add a machine, often with a computer in it). We cannot "see" in radar nor "hear" in sonar (and computers can easily "see" with synthetic aperture radar). We can't smell the same things their chemical sensors can smell, and we can't feel temperatures in the 1000 K range without burning out, nor can we touch 49 K without freezing, and react appropriately to fractional differences. We can't hear radio waves, nor can we talk with each other over them at gigabaud rates with exact precision (unless we have machines, usually computer driven, as our intermediaries...). And, of course, computers can remember petabytes of information with 100% accuracy for decades. People can't. At its most basic, computers can draw electronic pictures in glowing phosphor on a vacuum tube display. We can't. (Though we can make machines that do - but that's the whole point...)

So I would say yes, computers can do many things we cannot do, often thanks to mechanical sensors we do not have, or to machines we can build but cannot ourselves operate adequately.