By Christopher Naslund
Computers only know what we program into them. Correction: they used to know only what we programmed into them. Carnegie Mellon University created a program that continuously searches the web for patterns in text across millions of websites and compiles what it learns into its database.
The program, known as the Never-Ending Language Learning system (NELL), closely mimics the way humans learn. It groups text patterns into different semantic categories (cities, sports teams, actors, companies, universities, plants, etc.) and then tweets what it learns, for example: I think "justice studies" is a #sociopolitical, and: I think "manilacomiccon" is a #convention.
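The idea of grouping text patterns into semantic categories can be sketched in a few lines of code. The snippet below is a toy illustration only, assuming simple hand-written templates; the patterns, category names, and example sentence are all invented for demonstration and bear no resemblance to NELL's actual, far more sophisticated system.

```python
import re

# Each template maps a textual pattern to a semantic category.
# These patterns are hypothetical examples, not NELL's real rules.
PATTERNS = {
    "city": re.compile(r"cities such as ([A-Z][a-z]+)"),
    "team": re.compile(r"teams like the ([A-Z][a-z]+)"),
}

def extract_facts(text):
    """Scan text and return (entity, category) pairs matched by the templates."""
    facts = []
    for category, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            facts.append((match.group(1), category))
    return facts

sample = ("Travelers love cities such as Paris, "
          "and fans follow teams like the Steelers.")
print(extract_facts(sample))  # [('Paris', 'city'), ('Steelers', 'team')]
```

A real never-ending learner would bootstrap from such seed patterns, promote high-confidence matches into its knowledge base, and use them to discover new patterns in turn.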
NELL never stops learning: it has been searching the web 24 hours a day, seven days a week for the past 10 months for information it can group together semantically. So far NELL has logged about 390,000 facts, with around 87 percent accuracy.
Carnegie Mellon created NELL because computers that understand language promise a big payoff: they could one day help search engines refine their results more accurately. Instead of just providing links related to the questions asked, a search engine could provide natural-language answers.
The program has its flaws, just like any new computer program. After reading the sentence "I deleted my Internet cookies" and then another sentence, "I deleted my Internet files," NELL concluded that Internet files were also a baked good. Dr. Mitchell had to restart NELL's bakery education section.
This is a very cool technology, and I am excited for the day when search engines actually give me answers instead of links, or can give step-by-step answers to whatever I ask.