121 results for Cuckoo search
Abstract:
Consider a person searching electronic health records: a search for the term ‘cracked skull’ should return documents that contain the term ‘cranium fracture’. An information retrieval system is required that matches concepts, not just keywords. Furthermore, determining the relevance of a query to a document requires inference; it is not simply a matter of matching concepts. For example, a document containing ‘dialysis machine’ should align with a query for ‘kidney disease’. Collectively, we describe this problem as the ‘semantic gap’: the difference between raw medical data and the way a human interprets it. This paper presents an approach to semantic search of health records by combining two previous approaches: an ontological approach using the SNOMED CT medical ontology, and a distributional approach using semantic vector space models. Our approach will be applied to a specific problem in health informatics: the matching of electronic patient records to clinical trials.
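As an illustration only, the sketch below shows one way concept-level matching could work: surface terms are mapped to concept identifiers through a small hypothetical lookup table (a stand-in for a SNOMED CT lookup), and a query and document are compared by cosine similarity of their concept vectors. The table, identifiers, and scoring are assumptions, not the authors' actual system.

```python
# Minimal sketch of concept-level matching, assuming a hypothetical
# term-to-concept mapping; not the paper's actual system.
import math
from collections import Counter

TERM_TO_CONCEPT = {          # hypothetical mapping table
    "cracked skull": "C_FRACTURE_OF_SKULL",
    "cranium fracture": "C_FRACTURE_OF_SKULL",
    "kidney disease": "C_RENAL_DISORDER",
    "dialysis machine": "C_DIALYSIS_EQUIPMENT",
}

def to_concepts(text: str) -> Counter:
    """Replace known surface terms with concept identifiers."""
    concepts = Counter()
    for term, concept in TERM_TO_CONCEPT.items():
        if term in text.lower():
            concepts[concept] += 1
    return concepts

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-concepts vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

query = to_concepts("cracked skull")
doc = to_concepts("Patient presented with a cranium fracture.")
print(cosine(query, doc))  # 1.0: the two phrasings map to the same concept
```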
Abstract:
This study investigates the application of local search methods to the railway junction traffic conflict-resolution problem, with the objective of attaining a quick and reasonable solution. A local search procedure relies on finding a better solution than the current one by searching within its neighbourhood. The neighbourhood structure is therefore crucial to an efficient local search procedure. In this paper, the structure of the solution, the right-of-way sequence assignment, is first formulated. Two new neighbourhood definitions are then proposed, and the performance of the corresponding local search procedures is evaluated by simulation. The two definitions are shown to provide similar results, but each lends itself to different traffic conditions and system requirements.
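For illustration only, here is a generic local-search skeleton of the kind described: a current right-of-way sequence is repeatedly replaced by its best improving neighbour until no improvement exists. The two neighbourhood definitions (adjacent swap and reinsertion) and the toy delay cost are hypothetical stand-ins, not the paper's formulation.

```python
# Generic local-search sketch; cost model and neighbourhoods are illustrative.
def swap_neighbours(seq):
    """Neighbourhood 1: swap two adjacent trains in the sequence."""
    for i in range(len(seq) - 1):
        s = list(seq)
        s[i], s[i + 1] = s[i + 1], s[i]
        yield tuple(s)

def reinsert_neighbours(seq):
    """Neighbourhood 2: move one train to another position."""
    for i in range(len(seq)):
        for j in range(len(seq)):
            if i != j:
                s = list(seq)
                s.insert(j, s.pop(i))
                yield tuple(s)

def local_search(initial, cost, neighbours):
    """Move to the best improving neighbour until none exists."""
    current = initial
    while True:
        best = min(neighbours(current), key=cost, default=current)
        if cost(best) >= cost(current):
            return current
        current = best

# Toy cost: position-weighted delay if trains are served in this order.
delays = {"A": 3, "B": 1, "C": 5, "D": 2}
def cost(seq):
    return sum((pos + 1) * delays[t] for pos, t in enumerate(seq))

print(local_search(tuple("ABCD"), cost, swap_neighbours))
```

Swapping `swap_neighbours` for `reinsert_neighbours` changes only the neighbourhood passed in, which is the sense in which different definitions can be used under different traffic conditions.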
Abstract:
User-Web interactions have emerged as an important research area in the field of information science. In this study, we examine in detail the Web searching performed by general users. Our goal is to investigate the effects of users’ cognitive styles on their Web search behaviour in relation to two broad components: information searching and information processing approaches. We use questionnaires, a measure of cognitive style, Web session logs, and think-aloud protocols as the data collection instruments. Our findings show that wholistic Web users tend to adopt a top-down approach to Web searching, first searching for a generic topic and then reformulating their queries to search for specific information; they tend to prefer reading in order to process information. Analytic users tend to prefer a bottom-up approach to information searching and process information by scanning search result pages.
Abstract:
Many researchers have investigated and modelled aspects of Web searching. A number of studies have explored the relationships between individual differences and Web searching. However, few studies have explored the role of users’ cognitive styles in determining Web searching behaviour, and current models of Web searching give limited consideration to users’ cognitive styles. The impact of users’ cognitive styles on Web searching, and the relationships between them, are little understood or represented. Individuals differ in their information processing approaches and the way they represent information, and this affects their performance. To create better models of Web searching, we need to understand more about users’ cognitive styles, their Web search behaviour, and the relationship between the two. More rigorous research is needed, using more complex and meaningful measures of relevance, across a range of search task types and populations of Internet users. The project further explores the relationships between users’ cognitive styles and their Web searching, and will develop a model depicting these relationships. The related literature, aims and objectives, and research design are discussed.
Abstract:
Purpose: Businesses cannot rely on their customers to always do the right thing. To help researchers and service providers better understand the dark (and light) side of customer behavior, this study aims to aggregate and investigate perceptions of consumer ethics from young consumers on five continents. The study seeks to present a profile of consumer behavioral norms, how ethical inclinations have evolved over time, and country differences. ---------- Design/methodology/approach: Data were collected from ten countries across five continents between 1997 and 2007. A self-administered questionnaire containing 14 consumer scenarios asked respondents to rate the acceptability of questionable consumer actions. ---------- Findings: Overall, consumers found four of the 14 questionable consumer actions acceptable. Illegal activities were mostly viewed as unethical, while some legal actions that were against company policy were viewed less harshly. Differences across continents emerged, with Europeans being the least critical, while Asians and Africans were jointly the most critical of consumer actions. Over time, consumers have become less tolerant of questionable behaviors. ---------- Practical implications: Service providers should use the findings of this study to better understand the service customer. Knowing what customers in general believe is ethical or unethical can help service designers focus on the aspects of the technology or design most vulnerable to customer deviance. Multinationals already know they must adapt their business practices to the market in which they are operating, but they must also adapt their expectations as to the behavior of the corresponding consumer base. ---------- Originality/value: This investigation into consumer ethics helps businesses understand what their customer base believes is the right thing in their role as customer. This is a large-scale study of consumer ethics including 3,739 respondents on five continents, offering an evolving view of the ethical inclinations of young consumers.
Abstract:
The traditional searching method for model-order selection in linear regression is a nested, full-parameter-set search over the desired orders, which we call full-model order selection. A model-selection method, on the other hand, searches for the best sub-model within each order. In this paper, we propose using the model-selection searching method for model-order selection, which we call partial-model order selection. We show by simulations that the proposed searching method gives better accuracies than the traditional one, especially at low signal-to-noise ratios, over a wide range of model-order selection criteria (both information-theoretic and bootstrap-based). We also show that, for some models, the performance of the bootstrap-based criterion improves significantly when the proposed partial-model selection searching method is used.

Index Terms: model order estimation, model selection, information theoretic criteria, bootstrap

1. INTRODUCTION

Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information-theoretic procedures include Akaike’s information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], the Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model-order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high. The discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of a given technique is determined not only by the model structure (say, a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model. This makes comparison between model-order selection algorithms difficult, since within the same model at a given order one can find examples for which a method either performs favourably or fails [6, 8]. Our aim is to improve the performance of model-order selection criteria at low SNR by considering a model-selection searching procedure that takes into account not only the full-model order search but also a partial-model search within each given order. Understandably, the improvement in model-order estimation performance comes at the expense of additional computational complexity.
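A minimal sketch of the contrast, under assumptions of my own (a toy design matrix, Gaussian noise, and AIC as the criterion): the nested "full-model" search fits the first k regressors for each order k, whereas the "partial-model" search scores every size-k subset and keeps the best one before comparing orders. The data and scoring details are illustrative, not the paper's simulation setup.

```python
# Illustrative comparison of full-model vs partial-model order selection
# using AIC; the design matrix and noise level are made up.
import itertools
import numpy as np

def aic(y, yhat, k):
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

def fit(X, y, cols):
    beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    return X[:, cols] @ beta

def full_model_order(X, y):
    """Nested search: order k uses the first k regressors."""
    return min(range(1, X.shape[1] + 1),
               key=lambda k: aic(y, fit(X, y, list(range(k))), k))

def partial_model_order(X, y):
    """Search the best size-k subset within each order, then pick the order."""
    best = (np.inf, None)
    for k in range(1, X.shape[1] + 1):
        for cols in itertools.combinations(range(X.shape[1]), k):
            best = min(best, (aic(y, fit(X, y, list(cols)), k), k))
    return best[1]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = 2 * X[:, 0] + 1.5 * X[:, 3] + rng.normal(scale=2.0, size=200)  # low SNR
print(full_model_order(X, y), partial_model_order(X, y))
```

Because the true regressors here are columns 0 and 3, the nested search must reach order 4 before column 3 enters the model, while the subset search can find the two-parameter model directly; this is the kind of case in which partial-model search can help.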
Abstract:
Many traffic situations require drivers to cross or merge into a stream with higher priority. Gap acceptance theory enables us to model such processes to analyse traffic operation. This discussion demonstrates that a numerical search, fine-tuned by statistical analysis, can be used to determine the most likely critical gap for a sample of drivers, based on each driver's largest rejected gap and accepted gap. The method shares some common features with the Maximum Likelihood Estimation technique (Troutbeck 1992) but lends itself well to contemporary analysis tools such as spreadsheets and is analytically transparent. The method is considered not to bias the estimate of the critical gap as a result of very small or very large rejected gaps. However, it requires a sample large enough to give reasonable representation of largest rejected gap/accepted gap pairs within a fairly narrow highest-likelihood search band.
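For illustration only, the sketch below shows one way such a numerical search could be set up: a grid search over the mean and standard deviation of an assumed normal critical-gap distribution, maximising the likelihood that each driver's critical gap lies between their largest rejected gap and their accepted gap. The sample data, the normal assumption, and the grid are all assumptions, not the cited method's exact formulation.

```python
# Grid-search sketch of critical-gap estimation; data and distribution
# assumption are illustrative only.
import math

def norm_cdf(x, mu, sigma):
    """CDF of a normal distribution with mean mu and std sigma."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def log_likelihood(pairs, mu, sigma):
    """pairs: (largest rejected gap, accepted gap) in seconds per driver."""
    total = 0.0
    for rejected, accepted in pairs:
        p = norm_cdf(accepted, mu, sigma) - norm_cdf(rejected, mu, sigma)
        total += math.log(max(p, 1e-12))
    return total

pairs = [(3.1, 5.2), (2.4, 4.8), (4.0, 6.5), (3.6, 4.1), (2.9, 5.9)]  # made up

best = max(
    ((mu / 10, s / 10) for mu in range(20, 81) for s in range(5, 31)),
    key=lambda ms: log_likelihood(pairs, ms[0], ms[1]),
)
print("estimated critical gap (mean, std):", best)
```

The same search is straightforward to reproduce in a spreadsheet, which is the transparency point the abstract makes.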
Abstract:
In the present paper, we introduce BioPatML.NET, an application library for the Microsoft Windows .NET framework [2] that implements the BioPatML pattern definition language and sequence search engine. BioPatML.NET is integrated with the Microsoft Biology Foundation (MBF) application library [3], unifying the parsers and annotation services supported or emerging through MBF with the language, search framework and pattern repository of BioPatML. End users who wish to exploit the BioPatML.NET engine and repository without engaging the services of a programmer may do so via the freely accessible web-based BioPatML Editor, which we describe below.
Abstract:
Data processing for information extraction is of growing importance for Web databases. Due to the sheer size and volume of these databases, retrieving the information users need has become a cumbersome process. Information seekers face information overload: too many results are returned for their queries. Moreover, too few or no results are returned when a very specific query is issued. This paper proposes a ranking algorithm that gives higher preference to the user’s current search and also utilizes profile information in order to return the results most relevant to the user’s query.
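A minimal, hypothetical sketch of that idea: score each result mainly by term overlap with the current query, then add a smaller contribution from overlap with stored profile keywords. The weighting scheme and helper names are assumptions for illustration, not the paper's algorithm.

```python
# Hypothetical query-plus-profile ranking sketch; not the proposed algorithm.
def overlap(terms, text):
    """Fraction of the given terms that appear in the text."""
    words = set(text.lower().split())
    return sum(1 for t in terms if t in words) / max(len(terms), 1)

def rank(results, query, profile_keywords, profile_weight=0.3):
    q_terms = query.lower().split()
    def score(result):
        return ((1 - profile_weight) * overlap(q_terms, result)
                + profile_weight * overlap(profile_keywords, result))
    return sorted(results, key=score, reverse=True)

results = [
    "cheap laptop deals for students",
    "laptop repair services near you",
    "student discounts on gaming laptops",
]
print(rank(results, "student laptop", ["gaming", "discounts"]))
```

Here the profile keywords break the tie between results that match the query equally well, which is the sense in which the current search is preferred while profile information refines the ordering.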
Abstract:
This paper discusses human factors issues of low-cost railway level crossings in Australia. Several issues are discussed, including safety at passive railway level crossings, human factors considerations associated with the unavailability of a warning device, and a conceptual model of how safety could be compromised at level crossings following prolonged or frequent unavailability. The research plans to quantify safety risk to motorists at level crossings using a Human Reliability Assessment (HRA) method, supported by data collected using an advanced driving simulator. This method aims to identify human error within the tasks and task units identified as part of the task analysis process. It is anticipated that, by modelling driver behaviour, the study will be able to quantify meaningful task variability, including temporal parameters, both between and within participants. The performance of complex tasks such as driving through a level crossing is fundamentally context-bound. This study therefore also aims to quantify the performance-shaping factors that contribute to vehicle-train collisions by highlighting changes in task units and driver physiology. Finally, we will also consider a number of variables germane to ensuring the external validity of our results; without them, the probabilistic risk assessment could seriously underestimate risk.
Abstract:
The Web has become a worldwide repository of information which individuals, companies, and organizations utilize to solve or address various information problems. Many of these Web users utilize automated agents to gather this information for them. Some assume that this approach represents a more sophisticated method of searching. However, there is little research investigating how Web agents search for online information. In this research, we first provide a classification of information agents based on stages of information gathering, gathering approaches, and agent architecture. We then examine an implementation of one of the resulting classifications in detail, investigating how agents search for information on Web search engines, including the session, query, term, duration, and frequency of interactions. For this temporal study, we analyzed three data sets of queries and page views from agents interacting with the Excite and AltaVista search engines from 1997 to 2002, examining approximately 900,000 queries submitted by over 3,000 agents. Findings include: (1) agent sessions are extremely interactive, sometimes with hundreds of interactions per second; (2) agent queries are comparable to those of human searchers, with little use of query operators; (3) Web agents search for a relatively limited variety of information, with only 18% of the terms used being unique; and (4) the duration of agent-Web search engine interaction typically spans several hours. We discuss the implications for Web information agents and search engines.
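To make the kind of measurement concrete, here is a small sketch, under my own assumptions about the log layout (agent id, timestamp in seconds, query string), of how the share of unique terms and peak interactions per second might be computed from such query logs. The records are invented and the layout is not the study's actual data format.

```python
# Sketch of query-log statistics on a hypothetical (agent, timestamp, query) log.
from collections import Counter, defaultdict

log = [
    ("agent-1", 0.0, "weather sydney"),
    ("agent-1", 0.1, "weather brisbane"),
    ("agent-1", 0.2, "weather brisbane forecast"),
    ("agent-2", 5.0, "stock prices"),
]

# Share of terms that occur exactly once across all queries.
terms = Counter(t for _, _, q in log for t in q.split())
unique_share = sum(1 for c in terms.values() if c == 1) / len(terms)
print(f"unique terms: {unique_share:.0%}")

# Peak number of interactions within any one-second window, per agent.
per_second = defaultdict(Counter)
for agent, ts, _ in log:
    per_second[agent][int(ts)] += 1
for agent, buckets in per_second.items():
    print(agent, "peak interactions/second:", max(buckets.values()))
```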