Earlier, I wrote about a plan to divert a portion of the Missouri River and pump it uphill, some 800 miles, into tributaries of the Colorado River to supply water and power to California.
My proposed National Excess Water Transport Aqueduct Project (NEWTAP) would go a long way toward solving several problems. First, and most obviously, the chronic water shortage in California and other western states, along with flooding along the Missouri. Second, the question of what to do with wind power generated in the Plains when the demand is in the big cities (the lack-of-transmission-lines problem).
As I wrote earlier: “One possibility on the national level is a water transfer system from the Missouri River at Kansas City that runs approximately 800 miles southwest to the continental divide in New Mexico, just south of Interstate 40. From there the water would flow into tributaries of the Colorado River. The hydroelectric plants are already in place at Hoover Dam and Glen Canyon Dam; therefore, some of the power required to pump the water uphill and 800 miles would be recovered. The elevation change is on the order of 6,000 feet. The water route will be through the U.S.’ great wind corridor, so it is conceivable to use windmills to provide energy to the pumps.”
A further improvement on this plan is to also divert a portion of the upper Mississippi River west into NEWTAP. One possibility is a 150-mile canal due west along U.S. Route 36 from Hannibal to St. Joseph. This would allow a water flow of approximately 2,000 cubic feet per second, or more.
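To get a feel for the pumping power involved, here is a back-of-the-envelope sketch in Python. The 2,000 cubic feet per second and the 6,000-foot lift come from the figures above; the 75 percent pump efficiency is my round-number assumption, not a design value.

    # Back-of-the-envelope pumping power for the proposed aqueduct.
    # Flow and lift are from the text above; the pump efficiency is an
    # assumed round number, not an engineering figure.

    RHO = 1000.0        # density of water, kg/m^3
    G = 9.81            # gravitational acceleration, m/s^2

    flow_cfs = 2000.0   # proposed flow, cubic feet per second
    lift_ft = 6000.0    # elevation change, feet
    pump_eff = 0.75     # assumed overall pump/motor efficiency

    flow_m3s = flow_cfs * 0.0283168   # cubic feet -> cubic meters
    lift_m = lift_ft * 0.3048         # feet -> meters

    # Hydraulic power = rho * g * Q * h; divide by efficiency for input power.
    power_w = RHO * G * flow_m3s * lift_m / pump_eff
    print(f"Pumping power: {power_w / 1e9:.2f} GW")   # about 1.35 GW

That figure ignores friction losses over 800 miles of canal, so the true number would be higher still, but it shows the scale of wind capacity the corridor would have to feed to the pumps.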
The water transfer to the Colorado River would eliminate the need for new transmission lines from the wind farms: power would be generated at Glen Canyon Dam and Hoover Dam, then sent to Southern California or elsewhere through existing lines. Thus, there would be some savings from not having to build transmission lines to connect the wind generators to cities.
A useful means of storing excess wind-generated power is to pump water uphill for later use in hydroelectric plants when the power is needed. This transcontinental, uphill waterway would do exactly that, storing the water in Lake Powell and Lake Mead.
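As a rough sketch of how much of that pumping energy might come back at the dams, assume, purely for illustration, a generating head of about 500 feet (an assumed figure, not the actual head at either dam) and plausible machine efficiencies:

    # Rough fraction of the pumping energy recoverable downstream.
    # The generating head and both efficiencies are illustrative assumptions.

    lift_ft = 6000.0      # pumping lift, from the text above
    gen_head_ft = 500.0   # assumed generating head at the dams
    pump_eff = 0.75       # assumed pumping efficiency
    turbine_eff = 0.90    # assumed turbine/generator efficiency

    # Energy recovered per unit of water, relative to energy spent lifting it.
    recovered = (gen_head_ft * turbine_eff) / (lift_ft / pump_eff)
    print(f"Recovered fraction: {recovered:.1%}")   # about 5.6%

So “some of the power would be recovered” is fair, but only a modest fraction; the main products of the scheme are the water itself and the storage capability.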
I see no technical reason why this would not work. Crossing existing creeks, rivers, highways, railroads, and hills can all be done. On the legal and environmental side, however, there are more difficulties. There is a water-rights issue in transferring water from one basin into another: this plan would move water from the Missouri basin, across a couple of other basins, and into the Colorado basin. Then there are the eminent domain issues in acquiring the right-of-way. That is not a problem if the governments decree the project is in the public interest, though in practice such decrees at times generate public hostility. Finally, the environmental issues are rather large. One can envision the EIR (Environmental Impact Report) for an 800-mile canal crossing several states!
Still, such a project would do great good. The money spent would provide employment for thousands, and for many years. The energy generated by the windmills would be recovered, at least in part, which is in line with the “Generate Green” movement, and far better than building a few nuclear power plants. And the water would go to good use, irrigating farms to feed the U.S. and the world.
Roger E. Sowell, Esq. Legal website is here.
---------------------------------------------------------------
[Quoting my earlier comment:] “In my experience, computers can do many things humans cannot do. As just one example, when I studied artificial intelligence theory, algorithms, and systems, it was eye-opening to discover that a properly programmed computer can do “things” that humans just cannot do.”
[David Holliday (20:02:43) responded:]
My original statement is correct. There is nothing a computer can do that a human can’t do; the computer can just do it faster.
Computers are machines. Programs are instructions to the machine to do things. Humans design the programs. Humans write the programs. Humans test the programs. And humans run the programs. Therefore, humans can do the same things the programs do, just slower.
Computers aren’t creative. They have no independence of thought. They don’t think at all. They have no independence of action. They have no cognitive understanding. They simply execute the programs. One of the biggest misnomers in Computer Science is Artificial Intelligence. There is no intelligence in a computer. And we’ve never been able to put it in there.
I first studied Artificial Intelligence in the early ’80s. Neural nets, which are often purported to be advanced, self-learning computers, are fundamentally self-weighting algorithms that can vary their behaviour based on feedback mechanisms. Expert systems are simply rule-based approaches to decision systems. Humans build the neural nets and humans write the rules. There is nothing about how these programs work that we don’t understand. The HAL 9000 of 2001: A Space Odyssey doesn’t exist today, and maybe never will.
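[To make “self-weighting algorithms that vary their behaviour based on feedback” concrete, here is a minimal Python sketch of a single neuron learning the logical AND function by the classic perceptron rule. This is a toy illustration of the idea, not anyone’s production system:

    import random

    # A single neuron learning logical AND: the "feedback mechanism" is
    # simply the error between the target and the output, fed back into
    # the weights.

    random.seed(0)
    weights = [random.uniform(-1, 1) for _ in range(2)]
    bias = 0.0
    rate = 0.1   # learning rate

    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    def output(x):
        s = sum(w * xi for w, xi in zip(weights, x)) + bias
        return 1 if s > 0 else 0

    for _ in range(50):                    # training passes
        for x, target in data:
            error = target - output(x)     # the feedback signal
            for i in range(2):             # re-weight based on feedback
                weights[i] += rate * error * x[i]
            bias += rate * error

    print([output(x) for x, _ in data])    # [0, 0, 0, 1] once trained

Humans wrote every line of that, as Mr. Holliday says; the interesting question is what happens when the weights encode a solution no human wrote down. -- Roger]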
[Now, David Holliday may not realize just what I referred to earlier about neural networks solving problems that the best and brightest human minds had spent countless hours on with no results. A complex chemical plant just could not make product that met specifications; the stuff it did make had to be sold at a deep discount as bad quality. The plant had several processing steps (not uncommon) and many independent variables in each step. The chemists and engineers did all they could from theory and experience, but to no avail. The neural net guys were trotted in, built their NN, and fed in the data from each failed trial: all the variables and settings for each test, with the resulting yield and quality of product. Then they ran it in Predict mode, with the goal being the highest yield of on-spec product. The NN churned away and produced a new combination of independent variables for each of the several process steps. After the chemists and engineers reviewed these settings for safety and reasonableness (e.g., can this pump generate that much flow? Can that heater produce a stream at that temperature?), they agreed to give it a go. And it worked. The question is, could humans have eventually found that precise combination of variables? Maybe. But then, they had been trying for a long time to do just that, with no success.]
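[For readers curious what “Predict mode” looks like in practice, here is a toy sketch of the idea: fit a surrogate model to past (settings, yield) records, then search the settings space for the combination the model scores highest. Every name and number below is invented, since the real plant data are not public, and a simple nearest-neighbor model stands in for the neural net:

    import random

    # Toy version of the plant story: fit a crude surrogate model to past
    # (settings -> yield) records, then search for settings the model
    # predicts will maximize yield. All data here is invented.

    random.seed(1)

    def true_yield(temp, flow):
        # Hidden "plant" used only to fabricate the historical records.
        return 100 - (temp - 410) ** 2 / 50 - (flow - 75) ** 2 / 20

    # Historical failed trials: random settings and the yields they gave.
    records = []
    for _ in range(200):
        t, f = random.uniform(300, 500), random.uniform(50, 100)
        records.append(((t, f), true_yield(t, f)))

    def predict(temp, flow, k=5):
        # k-nearest-neighbor surrogate standing in for the neural net.
        near = sorted(records,
                      key=lambda r: (r[0][0] - temp) ** 2 + (r[0][1] - flow) ** 2)
        return sum(y for _, y in near[:k]) / k

    # "Predict mode": random search over the settings space.
    best = max(((random.uniform(300, 500), random.uniform(50, 100))
                for _ in range(2000)),
               key=lambda s: predict(*s))
    print(f"Recommended settings: temp={best[0]:.0f}, flow={best[1]:.0f}")

The real exercise then requires exactly the safety and reasonableness review described above, because the model only interpolates among the trials it has seen. -- Roger]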
As someone who has worked in computers for over 26 years, from programmer to Chief Technology Officer, I can tell you with a high degree of confidence that I understand how computers work and what they can do. They don’t do anything we don’t tell them to do. And since everything they do is something we tell them to do, we can do it too.
Don’t confuse the fact that computers do things much faster than humans with computers being able to do things humans cannot. The point is they are just doing what we program them to do. Of course they do it orders and orders of magnitude faster than we can. Hence, the old joke, “To err is human, but to really [mess] up takes a computer.” [This is a family-friendly blog... editorial license is hereby granted to me by me to clean up offending language -- Roger; our version of that last line was “to really foul things up requires a computer.”]
---------------------------------------------------------------
[Next, Richard M chimes in with:]
Richard M (21:22:24) :
I agree with almost everything David Holliday (20:02:43) said. Computers do not have any intelligence, and a superfast human could do everything a computer could do. However, there are no superfast humans, so in reality computers can do many things we poor slow humans could never do or would even attempt to do.
---------------------------------------------------------------
[As I am taking shots from all points of the compass here, I finally had some slack time to respond thusly:]
Roger Sowell (21:51:48) :
David Holliday [wrote]:
“One of the biggest misnomers in Computer Science is Artificial Intelligence. There is no intelligence in a computer. And we’ve never been able to put it in there.”
[My response is:] We must have taken different classes in AI, then. Mine was from UCLA, where the instructor wrote the AI for NASA’s Mars rovers. AI definitely exists, and I stand completely by my earlier assertions.
But I will not get further into a Did so! Did Not! contest, as it is fruitless and a waste of Anthony’s and moderators’ time.
------------------------------------------------------------------
[Next, Squidly jumps in with a response to my longer earlier comment:]
I would agree that there are “some” things that computers can do that humans cannot. Computational speed is perhaps one, but that is just about where it ends. I have studied AI for quite some time, it was my primary collegiate focus, and I too play with neural networks from time to time just for fun. But the human brain, by contrast, can perform many things that computers presently cannot do and some things that they may never do. One very humanly simple thing that computers are extremely poor at is pattern recognition. Humans process patterns with astounding accuracy and at an astounding rate.

As a very simple example of this, I was recently sent an email from a colleague; the special thing about it was that the letters were all jumbled up. All words were written with the proper beginning and ending letters, had the proper number of characters and the correct characters as a whole, but all inner letters were out of order. The interesting part is that you can read it almost as easily as you read anything else. As long as the words contain the correct letters, length, and beginning and ending letters, it doesn’t matter: your brain automatically compensates on the fly through pattern recognition. It’s an interesting experiment, and one that you can easily try for yourself. Now, one would say, “so, a computer can do that.” Yes, but through iteration and rearranging, not through first-take pattern recognition, and certainly not with the efficiency of the human brain.

And as for “fly-by-wire,” your brain handles more fly-by-wire than our entire fleet of Stealth bombers combined, every moment of your life, monitoring thoughts, temperature, body functions, heartbeats, internal clock, circulation systems, neural systems, and on and on, all in real time. That’s pretty tough to beat. We may get close someday in the future, but for now, not even close.
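[Anyone can reproduce the jumbled-letter effect Squidly describes. A minimal Python sketch that scrambles the inner letters of each word while keeping the first and last letters in place:

    import random

    # Shuffle the inner letters of each word, keeping the first and last
    # letters fixed, to reproduce the "readable jumble" effect.
    # (Punctuation handling is omitted for simplicity.)

    def scramble_word(word):
        if len(word) <= 3:
            return word
        inner = list(word[1:-1])
        random.shuffle(inner)
        return word[0] + "".join(inner) + word[-1]

    def scramble(text):
        return " ".join(scramble_word(w) for w in text.split())

    print(scramble("Your brain automatically compensates on the fly "
                   "through pattern recognition"))

Run it on any sentence and see how readable the output remains. -- Roger]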
As a tiny example that goes to this and the model topic: have you ever seen and heard even a short film that was completely computer generated that you could not discern from reality? And that is the simple stuff.
Unfortunately, we still have actors (politicians interchangeable)…
---------------------------------------------------------------
[And again, we hear from Squidly:]
Squidly (22:07:38) :
By and large I agree with you. There seems to be a popular misconception that I think has been largely fueled by Hollywood. Computers cannot do the things you see on the big screen. Unfortunately, even my father suffers from this misconception, and he is a retired engineer from MIT! And the worst part is that he is eating up AGW like there is no tomorrow. I fight with him on the AGW subject daily.
BTW, to all: yes, AGW is most certainly a religion. I have seen this transformation in my father, and it is rather disturbing. I would never have guessed that I would see this behavior from my father, but he’s clearly had too much Kool-Aid. I’ve always viewed him as perhaps the most rational and objective person I have known, but wow, not when it comes to AGW. It is some scary stuff!
-------------------------------------------------------------------
[Next, we hear from Alan the Brit, also agreeing with Halliday:]
Alan the Brit (01:36:49) :
David Holliday:-)
That is just about bang on. I am so delighted to see so many mature heads making the point: GIGO, garbage in, garbage out. There will always be a human being at the end of it somewhere. Assume & presume nothing, ever!
As an engineer, & I know I have said this before, computers are little more than powerful calculators & number crunchers, sure they can churn out the numerical answers by the nanosecond where we mere fleshy lumps take minutes to do the same thing. However, the wee, wee, wee, wee, tiny flaw in the whole thing, is that if you get the design philosophy wrong, no amount of number crunching will lead you to the right solution, but only to many ways in which you prove you got it wrong! This point I would like to direct to Roger Sowell, yes computers are wonderful things, but they are after all just a tool to do a job;-) I spend many hours recommending to graduate engineers they sit down with a pad & a pencil & sketch things out by hand before they ever get near a computer programme. As a 51 yo luddite I mistrust computers, & with the current political administration losing personal data left, right, & centre I feel vindicated.
--------------------------------------------------------------------
[And again, this time from John Galt:]
John Galt (07:51:40) :
I work as a software engineer/consultant, and I’m well aware of the problems of maintaining and updating code.
Remember the Y2K crisis? That came about because old code was never updated. Nobody knew if programs that had been in operation for decades would work, and in many cases, nobody could dig through the layers of patches, band-aids, paperclips, and hacks to decipher the internals of the programs, either.
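[For younger readers: the core Y2K defect was simply two-digit year arithmetic. A toy Python sketch of the failure, and of the common “windowing” patch used to fix much of it:

    # The classic Y2K bug in miniature: records stored only two-digit years.

    def age_naive(birth_yy, current_yy):
        # Pre-Y2K logic: silently assumes both years are in the 1900s.
        return current_yy - birth_yy

    def expand_windowed(yy, pivot=50):
        # The common "windowing" fix: two-digit years below the pivot are
        # taken as 20xx, the rest as 19xx. The pivot value is site-specific.
        return 2000 + yy if yy < pivot else 1900 + yy

    print(age_naive(60, 0))                          # -60: the bug at 1/1/2000
    print(expand_windowed(0) - expand_windowed(60))  # 40: the correct age

-- Roger]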
I should not be surprised by the reported size of the Fortran code base, but I am. The language isn’t part of the Computer Science curriculum at any university in this part of the USA. Is it still taught in the engineering schools?
-------------------------------------------------------------------
[And now me, with a reply to John Galt re Fortran in engineering schools: ]
Roger Sowell (10:08:27) :
John Galt: re Fortran still taught in engineering schools?
Yup. The University of Texas at Austin, for one. UT is a decent institute of higher education (and my undergrad alma mater), not to be confused with the University of Tennessee, another UT.
see http://www.utexas.edu
Click here for the Fortran class.
Also, the other UT (Tennessee) teaches Fortran. From this site: “For example, we’ve made changes in the NE [nuclear engineering] Fundamentals course in response to alumni feedback, bringing the Fortran computer language back to the curriculum in order to prepare graduates for the field.”
The Y2K Fortran bugs were not that hard to fix. Refineries, chemical plants, power plants, and the like that ran Fortran made it through midnight 12/31/1999 into 2000 just fine.
Just another scare-mongering non-event.
-------------------------------------------------------------------
[And John Galt, a gracious fellow, responds:]
Thanks for the update regarding Fortran in engineering schools.
The pre-Y2K years were a great time to be in software consulting. The business never made so much money. If you had asked me then about the seriousness of the threat, I would have repeated the industry line about it being the gravest danger you could imagine.
--------------------------------------------------------------------
[And finally I respond to Alan the Brit, but by now I am uncomfortable really getting into this on Anthony's blog, as it uses up his space and occupies his time to moderate (or his other moderators'); so I offer to take this over here. But, probably none of the other participants know about this blog. Anthony has requested on an earlier thread that we stay on topic. We shall see if anyone finds this.]
Roger Sowell (16:46:09) :
Alan the Brit,
“This point I would like to direct to Roger Sowell, yes computers are wonderful things, but they are after all just a tool to do a job;-) I spend many hours recommending to graduate engineers they sit down with a pad & a pencil & sketch things out by hand before they ever get near a computer programme. As a 51 yo luddite I mistrust computers, & with the current political administration losing personal data left, right, & centre I feel vindicated.”
I also am/was an engineer, dating from the slide rule days. I completely agree that it is usually best to think it through first with a pad and pencil, perhaps even research a bit to see what others have published. There are, no doubt, many thousands of good software routines in regular use that are just doing what humans can do, only faster and error-free. I have written and implemented my share of those.
I think this all comes down to semantics: just what is “artificial intelligence”? To me, if a human cannot do it (whatever “it” is), but the computer can, that is a form of AI. The examples I gave earlier are on point.
We as humans give a label to people with great memories, or abilities to solve problems that no one else can. That label is usually “intelligent.” There are even standardized tests (albeit controversial) that purport to give a score measuring IQ. As an attorney, I had to take quite a few rather difficult tests to prove a certain level of ability before I was awarded my license to practice law. [Note: none of those tests involved IQ, at least not directly.] Other professions do too, and I have no intention of placing attorneys alone in the spotlight: professional engineers, PhDs, MDs, CPAs, CFAs, the list is long. I have a lot of respect for others without fancy degrees, too, especially my auto mechanic. Even he uses a computerized diagnostic tester from time to time; I think it has a rules-based expert system in it.
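[A rules-based expert system of the sort that might sit inside such a diagnostic tester can be sketched in a few lines of Python; the facts and rules below are invented for illustration, not taken from any real product:

    # A tiny forward-chaining rule engine, in the spirit of the expert
    # systems mentioned above. Facts and rules are invented examples.

    rules = [
        ({"engine_cranks", "no_spark"}, "check_ignition_coil"),
        ({"engine_cranks", "no_fuel_pressure"}, "check_fuel_pump"),
        ({"check_ignition_coil", "coil_ok"}, "check_crank_sensor"),
    ]

    def diagnose(facts):
        facts = set(facts)
        changed = True
        while changed:            # forward-chain until no rule fires
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(diagnose({"engine_cranks", "no_spark", "coil_ok"}))

Humans wrote those rules, as David Holliday says; the machine just chains through them faster than we would.]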
Hence, when a computer can solve a problem no human could or ever will, is it also “intelligent”?
Anthony, if this is too far off-topic, I can take this over to my energyguy’s musings blog so as not to waste your time. — Roger
-----------------------------------------------------
Roger E. Sowell, Esq. legal website is here.
aka energyguy on townhall.com