* Note that throughout this post I use critical language that may appear
slightly offensive; please take it as on-topic for this newsgroup and not
personally.

Marvin Minsky wrote:

> Unfortunately, in my view, the rest of the artificial intelligence
> community tried, instead, to make their computers do this by
> themselves—by trying to build what I call ‘baby machines', which were
> supposed to learn from experience. These all failed to make much
> progress because (in my view) they started out with inadequate schemes
> for learning new things. You cannot teach algebra to a cat; human
> infants are already equipped with architectural features to equip them
> to think about the causes of their successes and failures and then to
> make appropriate changes.

One important detail is that humans too come with a great deal of
knowledge hardwired into them (a three-dimensional representation of the
physical world, for example), so that when you build a seriously flexible
system that can do all this on its own (and, if I understand you
correctly, transfer its learning ability between different topics), you
will get something considerably better than any human around.

BTW, the program isn't at all that hard to make; the whole of the problem
is in the principle. This will not be a program whose quality is measured
in lines of code.

> Many other researchers went in the direction of trying to build
> evolution-based systems. These were to begin with very simple
> structures and then (by using some scheme for mutation and then
> selection) evolve more architecture. This includes what are called
> "neural networks" and "genetic" programs—which have often solved
> interesting problems, but have never reached high intellectual levels.
> In my view, this was because they were not designed to have the
> ability to analyze and reflect on what they had done—and then make
> appropriate changes; they were not equipped to improve or learn new
> ways to represent knowledge or make plans to solve new kinds of
> problems.

Philosophically put, very well said; practically put, of little value,
mind you. "The ability to analyze and reflect" is something very specific
in coding terms, and whether you like it or not, it will never get you any
"high intellectual levels", or any "intellectual levels" at all. As for
the second part, particularly the bit about "make plans to solve new kinds
of problems": do you realise how much work that involves? Can't you just
as well say "we first need to find out the purpose of life, then we'll
continue making AI"? Or how else do you suppose the AI will invent what to
do next when its current task list is finished?

I mean, look: first you prove you fully realise the system needs to be
flexible enough to be independent of itself, then you continue with
philosophical claims whose only practical basis lies in methods that are
very far from flexible. I take it this must be professional deformation?

> Yet other researchers built systems that were based on logic—hoping
> that through being precise and unambiguous, these would be very
> dependable. However, in my view, the very precision of those systems
> prevented them from being able to reason by analogy—which, in my view,
> is at the heart of how people think. (And the logical systems in
> current use make it virtually impossible to support the kinds of
> self-reflective processes that they would need to improve their own
> operations.)

Oh really?! All news to me.

Well, if you limit your view of "all of computer science" to one specific
programming language, preferably Java, perhaps so. Programming languages
from C++ down to assembly make it perfectly possible for a program to
process its own code as data in its free time, finding ways to optimize or
translate itself for use on other processors, possibly ones designed by
itself. Alas, all this highly AI-ish knowledge was forgotten back when
floppy-disk DOS's days were numbered and programs no longer needed to be
nice and compact.
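To make that concrete, here is a minimal sketch of my own (not anything
Minsky describes) of a program reading its own machine code as plain data.
It assumes Linux, where /proc/self/exe always names the running binary,
and it only tallies its own bytes; an actual self-optimizer would go on to
analyze and rewrite them, but the point is that nothing in C stops it:

/* self_read.c -- a program that inspects its own executable image.
 * Build: cc -o self_read self_read.c   (Linux only, relies on /proc) */
#include <stdio.h>

int main(void)
{
    /* On Linux, /proc/self/exe always refers to the running binary. */
    FILE *self = fopen("/proc/self/exe", "rb");
    if (!self) {
        perror("fopen");
        return 1;
    }

    unsigned char buf[4096];
    long histogram[256] = {0};
    size_t n, total = 0;

    /* Treat our own machine code as ordinary input data. */
    while ((n = fread(buf, 1, sizeof buf, self)) > 0) {
        for (size_t i = 0; i < n; i++)
            histogram[buf[i]]++;
        total += n;
    }
    fclose(self);

    /* Stand-in for real analysis: report the most frequent byte value. */
    int best = 0;
    for (int b = 1; b < 256; b++)
        if (histogram[b] > histogram[best])
            best = b;

    printf("Read %zu bytes of our own executable.\n", total);
    printf("Most frequent byte: 0x%02x (%ld occurrences)\n", best,
           histogram[best]);
    return 0;
}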
> Many other researchers designed robots to do various kinds of
> specialize tasks. We see this as an epidemic that has infected almost
> every university. Those researchers hoped that by starting with
> simple jobs they would learn enough that, eventually, they would
> become able to design robots that could progressively solve
> increasingly hard and more important problems.

I don't understand their logic; don't you think that if a robot could do
every specific specialized task, it would in fact be AI? I mean, look: the
future of AI depends on good planning, on a sophisticated _coding_
principle that, unlike mathematics, is polydimensional, with none of its
dimensions (aspects) fixed. AI needs to evolve within a flexible system
that will allow it to progress, and I can see you understand that. What I
don't understand is why you keep dismissing every attempt to start
building one. Do you realise AI will not be built in one step? Every AI
project needs practical (commercial) backing to survive; the AI must do
something useful. Do you realise philosophical word tricks do not work on
real code? Every AI project needs to start somewhere quite un-AI-like.
Don't you realise there are no shortcuts to building AI? It will take
approximately 500 to 1000 man-years from NOW to build, and we have to
_start_ NOW if we want to start ticking off those years.

Observer aka DustWolf aka CyberLegend aka Jure Sah

C'ya!

--
Cellphone: +38640809676 (SMS enabled)
Don't feel bad about asking/telling me anything, I will always gladly
reply.

"Keeping an open mind is not about disregarding new definitions to
things."

The future of AI is in technology integration; we have prepared everything
for you: http://www.aimetasearch.com/ici/index.htm

MesonAI -- If nobody else wants to do it, why shouldn't we?(TM)