OpenCyc is the open source version of the Cyc technology, the world’s largest and most complete general knowledge base and commonsense reasoning engine. OpenCyc can be used as the basis for a wide variety of intelligent applications. This is release 1.0 of OpenCyc, featuring the complete Cyc ontology of over 260,000 terms and over 1.8 million definitional assertions about them. OpenCyc requires about 500 MB of disk space and performs best with more than 512 MB of RAM. One GB of RAM is recommended when Cyc is accessed by Java applications.
I do not want to sound like an ungrateful twit, but OpenCyc isn’t quite “open”, is it? After all, the inference engine, which is the core of the application, is clearly kept tightly closed and out of reach of anyone.
Yes, a version of that inference engine, offering only very basic functionality, is shipped with the open source project, but that isn’t quite the same as being “open”, is it?
Well, I guess they want to be able to pay their bills too, so they had to keep some part of the deal proprietary. They have spent many years researching Cyc.
Yes, but that still means it’s not really “open.” At least, it doesn’t fit the definition that most readers of this site would associate with “open” and software.
Not that there’s anything wrong with that! I’m a commercial software developer myself. I just think the other poster is right, and the name is a bit of a misnomer.
Well, it depends how you see it. If you see Cyc as a two-part project, then it’s half open. But these two parts can be used individually, and a new open source application can be created that takes full advantage of the knowledge base. If OpenCyc was only ever supposed to be the “data” part, then it is indeed open.
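To make that concrete: since the knowledge base itself is in the open part, you can load its assertions into your own code and ignore the bundled engine entirely. Below is a minimal Python sketch of the idea; the file format and the simplified (predicate arg1 arg2) line syntax are my own illustrative assumptions, not OpenCyc’s actual export format.

```python
# Illustrative only: load simplified CycL-style assertions such as
# "(isa Dog CanineAnimal)" and index them for trivial lookups, with
# no proprietary inference engine involved at all.
from collections import defaultdict

def load_assertions(path):
    """Parse lines like '(isa Dog CanineAnimal)' into (pred, args) tuples."""
    assertions = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not (line.startswith("(") and line.endswith(")")):
                continue  # skip blanks and anything we don't understand
            parts = line[1:-1].split()
            if parts:
                assertions.append((parts[0], tuple(parts[1:])))
    return assertions

def index_by_predicate(assertions):
    """Group assertions so a query like 'all isa facts' is one lookup."""
    index = defaultdict(list)
    for pred, args in assertions:
        index[pred].append(args)
    return index

if __name__ == "__main__":
    facts = [("isa", ("Dog", "CanineAnimal")),
             ("genls", ("CanineAnimal", "Mammal"))]
    idx = index_by_predicate(facts)
    print(idx["isa"])  # [('Dog', 'CanineAnimal')]
```

Nothing clever is happening there, which is sort of the point: the data part stands on its own.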
Anyways, this project is a monster (in a good way).
Anyways, this project is a monster (in a good way).
That much I can definitely agree with.
http://opencyc.cvs.sourceforge.net/opencyc/
I recommend that anyone interested in Cyc have a look at ConceptNet too. http://www.conceptnet.org/
Cyc’s approach somehow reminds me of hand-writing assembly code instead of using compilers. In the early days people didn’t trust compilers to generate good code, but we know better now.
And here I thought everyone had given up on this dead end after Terry Winograd’s rants in the 90s.
It’s funny how the AI community cycles over the same old ground every couple of decades.
And here I thought everyone had given up on this dead end
Actually, everyone has given up. Otherwise, Cyc would be making billions by now, and I would be chatting with a room-sized computer named HAL, with colorful lights blinking and magnetic tapes spinning 😉
Cyc is the last dinosaur from an older era of AI research. That makes it interesting: you can learn from the errors of a previous approach, for example, and pick up ideas that were already tried.
Actually, Cyc reminds me of a game I tried to write when I was 10 years old learning BASIC (notice the caps). I didn’t know about loops yet, so I was drawing a grid, line by line. I didn’t know about exponential complexity either, so the entire program logic consisted of deep, nested IF-THEN-ELSEs, describing every possible path. Somehow, I’m not sure why, this approach failed 😉
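In case anyone wants the difference spelled out, here is a toy Python sketch of the same idea (the grid drawing is made up for illustration): one branch written out by hand per case, versus a loop that covers them all.

```python
# The "ten-year-old's" approach: one branch per grid size, written
# out by hand. Every new case means more code, and once the branches
# start combining, the program explodes.
def draw_grid_by_hand(rows):
    if rows == 1:
        print("+--+--+")
    elif rows == 2:
        print("+--+--+")
        print("+--+--+")
    elif rows == 3:
        print("+--+--+")
        print("+--+--+")
        print("+--+--+")
    # ...and so on, one branch per possible grid size

# With a loop: the same output for any number of rows, in two lines.
def draw_grid(rows):
    for _ in range(rows):
        print("+--+--+")
```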
Ha, sounds like my experience with BASIC. I knew about loops and goto, but had no clue what that darned ‘gosub’ did. So before I’d ‘goto’ a subroutine, I’d set a flag indicating where I was calling from, and then for my ‘return’ I’d test the flag to figure out where to ‘goto’ back to.
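For the curious, that flag trick is basically a hand-rolled return address. A rough Python sketch of the same pattern (Python has no goto, so a dispatch loop stands in for it here; all the state names are invented for illustration):

```python
# Simulating GOSUB/RETURN with a flag, as described above: before
# "jumping" to the subroutine, record where we came from; the
# "return" tests the flag to decide where to jump back to.
def program():
    came_from = None
    state = "main1"
    while state != "done":
        if state == "main1":
            came_from = "main1"   # remember the call site
            state = "subroutine"  # the hand-rolled GOSUB
        elif state == "main2":
            came_from = "main2"
            state = "subroutine"
        elif state == "subroutine":
            print("doing shared work")
            # the hand-rolled RETURN: test the flag to pick the target
            state = "main2" if came_from == "main1" else "done"

program()
```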
Back on topic, I thought Cyc’s makers had a number of commercial customers for their product. Granted it’s not full AI, but the ability to make automated inferences from large sets of data might be attractive to certain companies.
And here I thought everyone had given up on this dead end after Terry Winograd’s rants in the 90s.
I can’t find a link regarding this… what was the gist of it?
dimosd summed it up pretty well in replying to me: http://osnews.com/permalink.php?news_id=15194&comment_id=143183
I now see that while I was searching someone posted that.
Can’t believe someone found it necessary to vote me down however. It’s not like it was off-topic. 😐
I never can figure out why some posts get modded down. I sometimes think someone just randomly votes to use up their votes.
Terry Winograd was an early researcher in AI language understanding. He’s most famous for SHRDLU, and ‘blocks world’, which was a simple system. I took his course on languages just before he had his big epiphany on what Searle calls the Chinese Room problem. If you’re curious about this stuff, http://www.iep.utm.edu/c/chineser.htm is a good start.
Despite the supposed rebuttals, I still think that Searle is right, and “knowledge-based” systems are no more than pattern-matching engines.
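For what it’s worth, that claim is easy to illustrate: a bare-bones forward chainer really is just symbol matching over stored assertions. A toy Python sketch (the facts and the two rules are invented for illustration and have nothing to do with Cyc’s actual machinery):

```python
# A toy forward chainer: repeatedly match two hard-coded rules against
# stored facts and add whatever follows, until nothing new appears.
# There is no "understanding" here, only symbol matching.
facts = {("isa", "Fido", "Dog"),
         ("genls", "Dog", "Mammal"),
         ("genls", "Mammal", "Animal")}

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for (p1, a, b) in list(facts):
            for (p2, c, d) in list(facts):
                # Rule 1: genls (generalization) is transitive.
                if p1 == p2 == "genls" and b == c:
                    new = ("genls", a, d)
                # Rule 2: isa propagates up the genls hierarchy.
                elif p1 == "isa" and p2 == "genls" and b == c:
                    new = ("isa", a, d)
                else:
                    continue
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

for fact in sorted(forward_chain(facts)):
    print(fact)  # derives e.g. ('isa', 'Fido', 'Animal')
```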
What a wonderful name for a project. Especially for the Poles.
“Cyc” in informal Polish: “soft fleshy milk-secreting glandular organ on the chest of a woman”.
I’d say openCyc is the sign of another cultural revolution taking place, the next obvious step after the Open Source Movement. Make your Cyc open today!
A project can only be open or not open. No one can claim that a project is open and then, as a side note, state that its fundamental part is actually very closed and untouchable. Why try to make fools of their prospective clients?
If their intention was to keep the Cyc engine closed and offer an open front end, they should just say so. Instead, acting as if their prospective clients are too stupid to realize that the core of the program isn’t open or even free (until it’s too late, maybe?) is just dishonest.
I remember reading about this in Discover or Scientific American in the late 1980s. The idea that a machine could simulate awareness and inferential logic was amazing to me. Some people here have pooh-poohed the technology. Is that because its intended role has been supplanted by better technology, as with assembly language versus compiled languages, or because it never reached critical mass commercially, the way OPENSTEP didn’t until it became the foundation of OS X?
This approach, classic/symbolic AI, made overenthusiastic claims early on, which later proved naive and unrealistic. It consumed large amounts of money before it collapsed, causing the “AI winter” (late 1980s to early 1990s). It basically gave the whole AI field a bad name.
For an example, read http://en.wikipedia.org/wiki/Fifth_generation_computer_systems_proj… about Japan’s experience.
That’s why it is still sometimes ridiculed today… but you can’t learn without making mistakes! Besides, what didn’t work in a previous technological context may work in the future (with reasonable improvements).
I think the most important lesson learned is: keep your expectations low (realistic), and your goals practical.