Monday, April 21, 2014
New ranking methods
Research papers on new search ranking methods and concepts

"In this paper, we propose a new method to discover collection-adapted ranking functions based on Genetic Programming (GP). Our Combined Component Approach (CCA) is based on the combination of several term-weighting components (i.e., term frequency, collection frequency, normalization) extracted from well-known ranking functions."
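The components the abstract names can be sketched as separate functions that a GP search would combine algebraically. The particular formulas and combination below are illustrative assumptions, not the CCA functions discovered in the paper:

```python
import math

def tf_component(tf):
    # Term-frequency component: dampened raw count.
    return 1.0 + math.log(tf) if tf > 0 else 0.0

def idf_component(df, n_docs):
    # Collection-frequency component (inverse document frequency).
    return math.log((n_docs + 1) / (df + 1))

def norm_component(doc_len, avg_len):
    # Document-length normalization component.
    return 1.0 / (0.5 + 0.5 * doc_len / avg_len)

def combined_score(tf, df, n_docs, doc_len, avg_len):
    # One candidate combination of the three components; a GP search
    # explores many such algebraic combinations and keeps the ones
    # that rank a training collection best.
    return (tf_component(tf)
            * idf_component(df, n_docs)
            * norm_component(doc_len, avg_len))
```

A GP individual is then just an expression tree over these components, scored by its retrieval effectiveness on the target collection.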
"The paper is concerned with applying learning to rank to document retrieval. Ranking SVM is a typical method of learning to rank. We point out that there are two factors one must consider when applying Ranking SVM, in general a “learning to rank” method, to document retrieval. First, correctly ranking documents on the top of the result list is crucial for an Information Retrieval system. One must conduct training in a way that such ranked results are accurate. Second, the number of relevant documents can vary from query to query."
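The pairwise reduction underlying Ranking SVM can be illustrated with a toy trainer. Ranking SVM itself solves a max-margin quadratic program; the perceptron-style update below is a simplified stand-in for that optimizer, and the feature vectors are invented:

```python
def train_pairwise_ranker(pairs, n_features, epochs=20, lr=0.1):
    """Learn weights w so that w . x_better > w . x_worse for each pair.

    `pairs` is a list of (better_doc, worse_doc) feature-vector tuples
    derived from relevance judgments.  Ranking SVM trains on the same
    difference vectors with a max-margin objective; here a plain
    perceptron update illustrates the pairwise reduction.
    """
    w = [0.0] * n_features
    for _ in range(epochs):
        for better, worse in pairs:
            diff = [b - c for b, c in zip(better, worse)]
            score = sum(wi * di for wi, di in zip(w, diff))
            if score <= 0:  # pair misordered: nudge w toward the difference
                w = [wi + lr * di for wi, di in zip(w, diff)]
    return w

# Toy data: the first feature correlates with relevance.
pairs = [([3.0, 1.0], [1.0, 1.0]), ([2.0, 0.5], [0.5, 0.5])]
w = train_pairwise_ranker(pairs, 2)
```

The two concerns the abstract raises map directly onto this reduction: weighting pairs involving top positions more heavily, and normalizing per query so that queries with many relevant documents do not dominate the pair set.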
"This tutorial provides an overview on recent advances made in ranking and selection (R&S) for selecting the best simulated system and discusses challenges that still exist in the field. We focus on indifference-zone R&S procedures that provide a guaranteed probability of correct selection when the best system is at least a user-specified amount better than the other systems."
"We address the task of learning rankings of documents from search engine logs of user behavior. Previous work on this problem has relied on passively collected clickthrough data. In contrast, we show that an active exploration strategy can provide data that leads to much faster learning."
"In view of the recent progress in the field of internet search engines, there is a growing need for mechanisms to evaluate the performance of these useful and popular tools. So far, the vast majority of researchers have relied on the information-retrieval metrics of “precision” and “recall” that quantify the occurrence of “hits” and “misses” in the returned list of documents. What they fail to do is to measure the quality of the ranking that the search engine has provided. This paper wants to rectify the situation. We discuss the issue in some detail, and then propose a new mechanism that we believe is better suited for our needs."
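The abstract's complaint can be made concrete: set-based precision and recall at a cutoff ignore where in the list the hits occur, while a rank-sensitive metric such as average precision (shown here as a familiar example, not the mechanism this paper proposes) distinguishes the two orderings:

```python
def precision_recall(ranked, relevant, k):
    """Set-based metrics at cutoff k: order within the top k is ignored."""
    hits = sum(1 for d in ranked[:k] if d in relevant)
    return hits / k, hits / len(relevant)

def average_precision(ranked, relevant):
    """Rank-sensitive: rewards placing relevant documents early."""
    hits, total = 0, 0.0
    for i, d in enumerate(ranked, 1):
        if d in relevant:
            hits += 1
            total += hits / i
    return total / len(relevant) if relevant else 0.0
```

Two rankings that put the same two relevant documents anywhere in the top three get identical precision@3, but very different average precision.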
"The explosive growth and the widespread accessibility of the Web has led to a surge of research activity in the area of information retrieval on the World Wide Web. The seminal papers of Kleinberg [31], and Brin and Page [9] introduced Link Analysis Ranking, where hyperlink structures are used to determine the relative authority of a Web page, and produce improved algorithms for the ranking of Web search results. In this paper we work within the hubs and authorities framework defined by Kleinberg [31] and we propose new families of algorithms. Two of the algorithms we propose use a Bayesian approach, as opposed to the usual algebraic and graph theoretic approaches. We also introduce a theoretical framework for the study of Link Analysis Ranking algorithms. The framework allows for the definition of specific properties of Link Analysis Ranking algorithms, as well as for comparing different algorithms. We study the properties of the algorithms that we define, and we provide an axiomatic characterization of the INDEGREE heuristic, where each node is ranked according to the number of incoming links. We conclude the paper with an extensive experimental evaluation. We study the quality of the algorithms, and we examine how the existence of different structures in the graphs affects their performance."
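Two of the baselines named in the abstract are simple enough to sketch: the INDEGREE heuristic (rank by incoming-link count) and Kleinberg's hubs-and-authorities iteration. This is standard HITS, not the paper's new Bayesian algorithms:

```python
def indegree_rank(links):
    """INDEGREE heuristic: score each page by its number of incoming links.
    `links` maps each page to the list of pages it points to."""
    score = {p: 0 for p in links}
    for src, targets in links.items():
        for t in targets:
            score[t] = score.get(t, 0) + 1
    return score

def hits(links, iterations=50):
    """Kleinberg's HITS: mutually reinforcing hub and authority scores."""
    pages = set(links) | {t for ts in links.values() for t in ts}
    auth = {p: 1.0 for p in pages}
    hub = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # A good authority is pointed to by good hubs.
        auth = {p: sum(hub[s] for s, ts in links.items() if p in ts)
                for p in pages}
        # A good hub points to good authorities.
        hub = {p: sum(auth[t] for t in links.get(p, [])) for p in pages}
        # Normalize so the scores stay bounded.
        na = sum(auth.values()) or 1.0
        nh = sum(hub.values()) or 1.0
        auth = {p: v / na for p, v in auth.items()}
        hub = {p: v / nh for p, v in hub.items()}
    return auth, hub
```

On a graph where two pages both link to a third, INDEGREE and HITS agree that the third page is the authority; the paper's axiomatic framework is about characterizing exactly when such heuristics behave alike.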
"Maximizing only the relevance between queries and documents will not satisfy users if they want the top search results to present a wide coverage of topics by a few representative documents. In this paper, we propose two new metrics to evaluate the performance of information retrieval: diversity, which measures the topic coverage of a group of documents, and information richness, which measures the amount of information contained in a document. Then we present a novel ranking scheme, Affinity Rank, which utilizes these two metrics to improve search results. We demonstrate how Affinity Rank works on a toy data set, and verify our method by experiments on real-world data sets."
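The trade-off the abstract describes can be illustrated with a generic greedy re-ranker that balances relevance against topic coverage. This is a common diversity heuristic, not Affinity Rank itself, whose link-based formulation the abstract does not spell out; the document representation (explicit topic labels) is an assumption:

```python
def greedy_diverse_rerank(docs, k):
    """Greedily pick k documents, trading relevance for topic coverage.

    Each doc is (doc_id, relevance, topics).  A document covering
    topics not yet seen gets a novelty bonus, so a slightly less
    relevant document on a fresh topic can beat a redundant one.
    """
    selected, covered = [], set()
    remaining = list(docs)
    for _ in range(min(k, len(remaining))):
        def gain(doc):
            _, rel, topics = doc
            novelty = len(set(topics) - covered)
            return rel + novelty  # relevance plus coverage bonus
        best = max(remaining, key=gain)
        remaining.remove(best)
        selected.append(best[0])
        covered |= set(best[2])
    return selected

# d2 is more relevant than d3 but duplicates d1's topic.
docs = [('d1', 2.0, {'t1'}), ('d2', 1.9, {'t1'}), ('d3', 1.0, {'t2'})]
top2 = greedy_diverse_rerank(docs, 2)
```

A pure relevance ranking would return d1 and d2; the diversity bonus swaps in d3, widening the topic coverage of the top results.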
"The main goal of this paper is to customize the Web for a specific feature and/or community graph, determining the confidence of each page in the graph in question from past experience, and calculating the page rank of the pages in the graph from the confidence obtained and the link structure. We view the Web in the Universe from the user's query point of view and customize accordingly."
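One plausible way to combine a per-page confidence score with link structure, as the abstract describes, is to bias the teleportation vector of a standard PageRank power iteration. The abstract does not give the paper's actual formula, so the confidence weighting below is a hypothetical sketch:

```python
def weighted_pagerank(links, confidence, damping=0.85, iterations=50):
    """Power-iteration PageRank with teleportation biased by a per-page
    `confidence` score (a stand-in for the past-experience confidence
    described in the abstract).  `links` maps page -> outgoing links."""
    pages = list(links)
    total_conf = sum(confidence[p] for p in pages)
    base = {p: confidence[p] / total_conf for p in pages}
    rank = dict(base)
    for _ in range(iterations):
        new = {p: (1 - damping) * base[p] for p in pages}
        for src, targets in links.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
            else:  # dangling page: redistribute its mass by confidence
                for p in pages:
                    new[p] += damping * rank[src] * base[p]
        rank = new
    return rank
```

With uniform confidence this reduces to ordinary PageRank; raising one page's confidence pulls both its own rank and that of the pages it links to upward.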