Open Access
Exploring the use of natural language systems for fact identification: Towards the automatic construction of healthcare portals
Author(s) - Peck Frederick A., Bhavnani Suresh K., Blackmon Marilyn H., Radev Dragomir R.
Publication year - 2004
Publication title - Proceedings of the American Society for Information Science and Technology
Language(s) - English
Resource type - Journals
eISSN - 1550-8390
pISSN - 0044-7870
DOI - 10.1002/meet.1450410139
Subject(s) - computer science , information retrieval , similarity (geometry) , identification (biology) , strengths and weaknesses , domain (mathematical analysis) , natural language , process (computing) , web page , reliability (semiconductor) , data science , artificial intelligence , natural language processing , data mining , world wide web , psychology , social psychology , mathematical analysis , power (physics) , botany , physics , mathematics , quantum mechanics , image (mathematics) , biology , operating system
In prior work we observed that expert searchers follow well-defined search procedures in order to obtain comprehensive information on the Web. Motivated by that observation, we developed a prototype domain portal called the Strategy Hub that provides expert search procedures to benefit novice searchers. The search procedures in the prototype were entirely handcrafted by search experts, making further expansion of the Strategy Hub cost-prohibitive. However, a recent study on the distribution of healthcare information on the Web suggested that search procedures can be automatically generated from pages that have been rated based on the extent to which they cover facts relevant to a topic. This paper presents the results of experiments designed to automate the process of rating the extent to which a page covers relevant facts. To automatically generate these ratings, we used two natural language systems, Latent Semantic Analysis and MEAD, to compute the similarity between sentences on the page and each fact. We then used an algorithm to convert these similarity scores to a single rating that represents the extent to which the page covered each fact. We then compared these automatic ratings with manual ratings using inter-rater reliability statistics. Analysis of these statistics reveals the strengths and weaknesses of each tool and suggests avenues for improvement.
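
The abstract does not spell out the rating algorithm, so the following is only a rough, hypothetical sketch of the kind of pipeline it describes: an LSA-style latent space (approximated here with scikit-learn's TF-IDF plus truncated SVD rather than the LSA and MEAD systems the authors used), the best-matching page sentence scored against each fact, a similarity threshold converted into a coverage rating, and Cohen's kappa as one possible inter-rater reliability statistic. The function names, the 0.5 threshold, and the binary rating scale are all illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch, not the authors' pipeline: approximate LSA with TF-IDF +
# truncated SVD, rate each fact by its best-matching sentence on the page, and
# compare automatic ratings with manual ratings via Cohen's kappa.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.metrics import cohen_kappa_score

def rate_fact_coverage(page_sentences, facts, n_components=100, threshold=0.5):
    """Rate each fact 1 (covered by the page) or 0 (not covered).
    The binary scale and the 0.5 threshold are illustrative assumptions."""
    corpus = page_sentences + facts
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    # Project into a small latent space, in the spirit of LSA.
    k = max(1, min(n_components, tfidf.shape[1] - 1, len(corpus) - 1))
    reduced = TruncatedSVD(n_components=k).fit_transform(tfidf)
    sent_vecs = reduced[: len(page_sentences)]
    fact_vecs = reduced[len(page_sentences):]
    sims = cosine_similarity(fact_vecs, sent_vecs)      # one row per fact
    return (sims.max(axis=1) >= threshold).astype(int)  # best-matching sentence decides

# Hypothetical usage: compare automatic ratings with an expert's manual ratings.
page = ["Aspirin can reduce the risk of heart attack.",
        "Talk to your doctor before starting a daily regimen."]
facts = ["Aspirin lowers heart attack risk.",
         "Aspirin interacts with blood-thinning medication."]
automatic = rate_fact_coverage(page, facts)
manual = np.array([1, 0])  # made-up expert ratings for illustration
print("automatic ratings:", automatic)
print("Cohen's kappa vs. manual:", cohen_kappa_score(manual, automatic))
```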
