984 results for Query processing
Abstract:
Niklas Luhmann's theory of social systems has been widely influential in the German-speaking countries in the past few decades. However, despite its significance, particularly for organization studies, it is only very recently that Luhmann's work has attracted attention on the international stage as well. This Special Issue is a response to that. In this introductory paper, we provide a systematic overview of Luhmann's theory. Reading his work as a theory about distinction generating and processing systems, we especially highlight the following aspects: (i) Organizations are processes that come into being by permanently constructing and reconstructing themselves by means of using distinctions, which mark what is part of their realm and what is not. (ii) Such an organizational process belongs to a social sphere sui generis possessing its own logic, which cannot be traced back to human actors or subjects. (iii) Organizations are a specific kind of social process characterized by a specific kind of distinction: decision, which makes up what is specifically organizational about organizations as social phenomena. We conclude by introducing the papers in this Special Issue. Copyright © 2006 SAGE.
Abstract:
Purpose – The work presented in this paper aims to provide an approach to classifying web logs by personal properties of users. Design/methodology/approach – The authors describe an iterative system that begins with a small set of manually labeled terms, which are used to label queries from the log. A set of background knowledge related to these labeled queries is acquired by combining web search results on these queries. This background set is used to obtain many terms that are related to the classification task. The system then ranks each of the related terms, choosing those that best fit the personal properties of the users. These terms are then used to begin the next iteration. Findings – The authors identify the difficulties of classifying web logs by approaching this problem from a machine learning perspective. By applying the approach developed, the authors are able to show that many queries in a large query log can be classified. Research limitations/implications – Evaluating results in this type of classification work is difficult, as the true personal properties of web users are unknown. Evaluating the classification results by comparing classified queries to well-known age-related sites is a direction currently being explored. Practical implications – This research is background work that can be incorporated into search engines or other web-based applications, to help marketing companies and advertisers. Originality/value – This research enhances the current state of knowledge in short-text classification and query log learning. Keywords: Classification schemes, Computer networks, Information retrieval, Man-machine systems, User interfaces
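As an illustration only (the paper's own implementation is not given here), the following Python sketch shows the kind of iterative bootstrapping loop the abstract describes: a small set of manually labeled seed terms labels queries, background text fetched for those queries yields related terms, and the top-ranked terms seed the next iteration. The function names, the fetch_background_text hook and the scoring by simple co-occurrence counts are assumptions made for the sketch.

    from collections import Counter

    def label_queries(queries, seed_terms):
        # A query inherits a label if it contains any currently labeled term.
        return [q for q in queries if any(t in q.lower().split() for t in seed_terms)]

    def rank_related_terms(background_docs, seed_terms, top_k=20):
        # Rank terms by how often they co-occur with seed terms in background text.
        counts = Counter()
        for doc in background_docs:
            tokens = set(doc.lower().split())
            if tokens & seed_terms:
                counts.update(tokens - seed_terms)
        return [term for term, _ in counts.most_common(top_k)]

    def bootstrap(queries, fetch_background_text, seed_terms, iterations=3):
        # Iteratively grow the labeled-term set from a small manual seed.
        terms = set(t.lower() for t in seed_terms)
        for _ in range(iterations):
            labeled = label_queries(queries, terms)
            background = [fetch_background_text(q) for q in labeled]
            terms |= set(rank_related_terms(background, terms))
        return terms

In use, fetch_background_text would be whatever routine retrieves web search snippets for a query; here it is simply a caller-supplied function.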
Abstract:
In Chapter 10, Adam and Dougherty describe the application of medical image processing to the assessment and treatment of spinal deformity, with a focus on the surgical treatment of idiopathic scoliosis. The natural history of spinal deformity and current approaches to surgical and non-surgical treatment are briefly described, followed by an overview of current clinically used imaging modalities. The key metrics currently used to assess the severity and progression of spinal deformities from medical images are presented, followed by a discussion of the errors and uncertainties involved in manual measurements. This provides the context for an analysis of automated and semi-automated image processing approaches to measure spinal curve shape and severity in two and three dimensions.
Abstract:
Purpose – To investigate and identify the patterns of interaction between searchers and search engines during web searching. Design/methodology/approach – The authors examined 2,465,145 interactions from 534,507 users of Dogpile.com submitted on May 6, 2005, and compared query reformulation patterns. They investigated the types of query modifications and query modification transitions within sessions. Findings – The paper identifies three strong query reformulation transition patterns: between specialization and generalization; between video and audio; and between content change and system assistance. In addition, the findings show that web and image content were the most popular media collections. Originality/value – This research sheds light on the more complex aspects of web searching involving query modifications.
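For intuition only, and not drawn from the study itself, a minimal sketch of how query reformulation transitions within a session might be classified from consecutive queries; the overlap-based rules below are an assumed simplification.

    from collections import Counter

    def classify_reformulation(prev_query, next_query):
        prev_terms = set(prev_query.lower().split())
        next_terms = set(next_query.lower().split())
        if next_terms > prev_terms:
            return "specialization"   # terms added to the previous query
        if next_terms < prev_terms:
            return "generalization"   # terms removed from the previous query
        if prev_terms & next_terms:
            return "reformulation"    # partial overlap: same need, new wording
        return "content change"       # no overlap: a new information need

    def transition_counts(session_queries):
        # Tally reformulation transitions across a session's query sequence.
        pairs = zip(session_queries, session_queries[1:])
        return Counter(classify_reformulation(a, b) for a, b in pairs)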
Abstract:
The present study used ERPs to compare processing of fear-relevant (FR) animals (snakes and spiders) and non-fear-relevant (NFR) animals similar in appearance (worms and beetles). EEG was recorded from 18 undergraduate participants (10 females) as they completed two animal-viewing tasks that required simple categorization decisions. Participants were divided on a post hoc basis into low snake/spider fear and high snake/spider fear groups. Overall, FR animals were rated higher on fear and elicited a larger LPC. However, individual differences qualified these effects. Participants in the low fear group showed clear differentiation between FR and NFR animals on subjective ratings of fear and LPC modulation. In contrast, participants in the high fear group did not show such differentiation between FR and NFR animals. These findings suggest that the salience of feared FR animals may generalize, at both a behavioural and an electro-cortical level, to other animals of similar appearance but of a non-harmful nature.
Abstract:
Contamination of packaged foods due to micro-organisms entering through air leaks can cause serious public health issues and cost companies large amounts of money through product recalls, compensation claims, consumer impact and subsequent loss of market share. The main source of contamination is leaks in packaging, which allow air, moisture and micro-organisms to enter the package. In the food processing and packaging industry worldwide, there is an increasing demand for cost-effective, state-of-the-art inspection technologies capable of reliably detecting leaky seals and delivering products at six-sigma quality. The cost of leaky packages to Australian food industries is estimated at close to AUD $35 million per year. This project develops non-destructive testing technology that combines digital imaging and sensing with a differential vacuum technique to assess the seal integrity of food packages on a high-speed production line.
Flexible plastic packages are widely used and are the least expensive means of retaining product quality. They can be used to seal, and therefore maximise, the shelf life of both dry and moist products. The seals of food packages need to be airtight so that the contents are not contaminated by micro-organisms entering through air leaks; airtight seals also extend the shelf life of packaged foods, and manufacturers attempt to prevent food products with leaky seals from being sold to consumers. Many current NDT (non-destructive testing) methods for testing the seals of flexible packages are best suited to random sampling and laboratory use. The three most commonly used methods are vacuum/pressure decay, the bubble test, and helium leak detection. Although these methods can detect very fine leaks, their long processing times make them unviable on a production line. Two non-destructive in-line packaging inspection machines are currently available and are discussed in the literature review.
The detailed design and development of the High-Speed Sensing and Detection System (HSDS) is the fundamental requirement of this project and of the future prototype and production unit. Successful laboratory testing was completed, and a methodical design procedure was needed to arrive at a successful concept. The mechanical tests confirmed the vacuum hypothesis and seal integrity with consistent results, and the electrical tests likewise provided solid results, allowing the project to move forward with confidence. The laboratory design testing allowed theoretical assumptions to be confirmed before moving into the detailed design phase. Alternative concepts in both the mechanical and electrical disciplines are discussed to support informed design decisions, and each major mechanical and electrical component is detailed through the research and design process. The design procedure works methodically through the major functions from both a mechanical and an electrical perspective, and canvasses alternative ideas for the major components which, although sometimes impractical in this application, show that the engineering and functionality options were exhausted. Further concepts were then designed and developed for the entire HSDS unit based on previous practice and theory. It is envisaged that both the prototype and production versions of the HSDS would use standard, locally manufactured and distributed, industry-available components. Future research and testing of the prototype unit could result in a successful trial unit being incorporated into a working food processing production environment. Recommendations and future work are discussed, along with options in other food processing and packaging disciplines and in non-food processing industries.
Abstract:
In information retrieval (IR) research, more and more focus has been placed on optimizing a query language model by detecting and estimating the dependencies between the query and the observed terms occurring in the selected relevance feedback documents. In this paper, we propose a novel Aspect Language Modeling framework featuring term association acquisition, document segmentation, query decomposition, and an Aspect Model (AM) for parameter optimization. Through the proposed framework, we advance the theory and practice of applying high-order and context-sensitive term relationships to IR. We first decompose a query into subsets of query terms. Then we segment the relevance feedback documents into chunks using multiple sliding windows. Finally we discover the higher-order term associations, that is, the terms in these chunks with a high degree of association to the subsets of the query. In this process, we adopt an approach combining the AM with Association Rule (AR) mining. In our approach, the AM not only considers the subsets of a query as “hidden” states and estimates their prior distributions, but also evaluates the dependencies between the subsets of a query and the observed terms extracted from the chunks of feedback documents. The AR mining provides a reasonable initial estimation of the high-order term associations by discovering association rules from the document chunks. Experimental results on various TREC collections verify the effectiveness of our approach, which significantly outperforms a baseline language model and two state-of-the-art query language models, namely the Relevance Model and the Information Flow model.
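The following is a minimal sketch, under assumed definitions rather than the authors' formulation, of two of the ingredients described above: segmenting feedback documents into chunks with sliding windows, and scoring term-to-query-subset associations by chunk-level support and confidence in the spirit of association rule mining. Window sizes, thresholds and function names are illustrative assumptions.

    from collections import Counter
    from itertools import combinations

    def sliding_chunks(tokens, window=30, step=15):
        # Overlapping windows over a token list.
        return [tokens[i:i + window]
                for i in range(0, max(1, len(tokens) - window + 1), step)]

    def query_subsets(query_terms, max_size=2):
        # All non-empty subsets of the query up to a small size.
        return [frozenset(c) for r in range(1, max_size + 1)
                for c in combinations(query_terms, r)]

    def associated_terms(docs, query_terms, min_conf=0.3):
        # Terms whose chunk-level confidence w.r.t. a query subset exceeds min_conf.
        subsets = query_subsets(query_terms)
        query_set = set(query_terms)
        subset_count, joint = Counter(), Counter()
        for doc in docs:
            for chunk in sliding_chunks(doc.lower().split()):
                chunk_set = set(chunk)
                for s in subsets:
                    if s <= chunk_set:
                        subset_count[s] += 1
                        for term in chunk_set - query_set:
                            joint[(s, term)] += 1
        return {(s, t): joint[(s, t)] / subset_count[s]
                for (s, t) in joint
                if joint[(s, t)] / subset_count[s] >= min_conf}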
Abstract:
Background: When observers are asked to identify two targets in rapid sequence, they often suffer profound performance deficits for the second target, even when the spatial location of the targets is known. This attentional blink (AB) is usually attributed to the time required to process a previous target, implying that a link should exist between individual differences in information processing speed and the AB. Methodology/Principal Findings: The present work investigated this question by examining the relationship between a rapid automatized naming task typically used to assess information-processing speed and the magnitude of the AB. The results indicated that faster processing actually resulted in a greater AB, but only when targets were presented amongst high similarity distractors. When target-distractor similarity was minimal, processing speed was unrelated to the AB. Conclusions/Significance: Our findings indicate that information-processing speed is unrelated to target processing efficiency per se, but rather to individual differences in observers' ability to suppress distractors. This is consistent with evidence that individuals who are able to avoid distraction are more efficient at deploying temporal attention, but argues against a direct link between general processing speed and efficient information selection.
Abstract:
Using Gray and McNaughton’s (2000) revised Reinforcement Sensitivity Theory (r-RST), we examined the influence of personality on processing of words presented in gain-framed and loss-framed anti-speeding messages and how the processing biases associated with personality influenced message acceptance. The r-RST predicts that the nervous system regulates personality and that behaviour is dependent upon the activation of the Behavioural Activation System (BAS), activated by reward cues, and the Fight-Flight-Freeze System (FFFS), activated by punishment cues. According to r-RST, individuals differ in the sensitivities of their BAS and FFFS (i.e., weak to strong), which in turn leads to stable patterns of behaviour in the presence of rewards and punishments, respectively. It was hypothesised that individual differences in personality (i.e., strength of the BAS and the FFFS) would influence the degree of both message processing (as measured by reaction time to previously viewed message words) and message acceptance (measured three ways by perceived message effectiveness, behavioural intentions, and attitudes). Specifically, it was anticipated that individuals with a stronger BAS would process the words presented in the gain-frame messages faster than those with a weaker BAS, and that individuals with a stronger FFFS would process the words presented in the loss-frame messages faster than those with a weaker FFFS. Further, it was expected that greater processing (faster reaction times) would be associated with greater acceptance for that message. Driver licence-holding students (N = 108) were recruited to view one of four anti-speeding messages (i.e., social gain-frame, social loss-frame, physical gain-frame, and physical loss-frame). A computerised lexical decision task assessed participants’ subsequent reaction times to message words, as an indicator of the extent of processing of the previously viewed message. Self-report measures assessed personality and the three message acceptance measures. As predicted, the degree of initial processing of the content of the social gain-framed message mediated the relationship between the reward sensitive trait and message effectiveness. Initial processing of the physical loss-framed message partially mediated the relationship between the punishment sensitive trait and both message effectiveness and behavioural intention ratings. These results show that reward sensitivity and punishment sensitivity traits influence cognitive processing of gain-framed and loss-framed message content, respectively, and subsequently, message effectiveness and behavioural intention ratings. Specifically, a range of road safety messages (i.e., gain-frame and loss-frame messages) could be designed which align with the processing biases associated with personality and which would target those individuals who are sensitive to rewards and those who are sensitive to punishments.
Abstract:
In the recent past, there have been social issues arising from the exposure of personal sensitive data held in medical databases. Such sensitive data should be protected, and access to it must be accounted for. Sensitive information can be protected by encrypting it; the challenge is then querying the encrypted information when making decisions, since querying encrypted data is in practice a tedious task. We therefore present a more effective method using bucket index and Bloom filter technology. We find that our proposed method has comparatively low memory usage and fast query performance. Simulation approaches using data encryption techniques to improve health care decision-making processes are presented in this paper as a case scenario.
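As a hedged illustration of the general idea (not the authors' system, and not a vetted security scheme), the sketch below tags each encrypted record with a coarse bucket id and a Bloom filter over hashed attribute values, so a query can pre-filter candidate records without decrypting them; a decrypt-and-verify step would follow. All names and parameters are assumptions.

    import hashlib

    def bucket_id(value, num_buckets=16):
        # Coarse, order-free bucket label stored alongside the ciphertext.
        return int(hashlib.sha256(str(value).encode()).hexdigest(), 16) % num_buckets

    class BloomFilter:
        def __init__(self, size=1024, hashes=3):
            self.size, self.hashes, self.bits = size, hashes, 0
        def _positions(self, item):
            return [int(hashlib.sha256(f"{i}:{item}".encode()).hexdigest(), 16) % self.size
                    for i in range(self.hashes)]
        def add(self, item):
            for p in self._positions(item):
                self.bits |= 1 << p
        def might_contain(self, item):
            # May return false positives, never false negatives.
            return all((self.bits >> p) & 1 for p in self._positions(item))

    def query_candidates(index, value):
        # index maps record id -> (bucket id, BloomFilter); returns ids that *might* match.
        b = bucket_id(value)
        return [rec_id for rec_id, (bucket, bloom) in index.items()
                if bucket == b and bloom.might_contain(value)]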
Abstract:
The quality of discovered features in relevance feedback (RF) is the key issue for effective search queries. Most existing feedback methods do not carefully address the issue of selecting features for noise reduction. As a result, extracted noisy features can easily degrade retrieval effectiveness. In this paper, we propose a novel feature extraction method for query formulation. This method first extracts term association patterns in RF as knowledge for feature extraction. Negative RF is then used to improve the quality of the discovered knowledge. A novel information filtering (IF) model is developed to evaluate the proposed method. The experimental results conducted on Reuters Corpus Volume 1 and TREC topics confirm that the proposed model achieved encouraging performance compared to state-of-the-art IF models.
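A minimal sketch of the general principle, not the authors' model: candidate feature terms are scored from positive relevance feedback and then penalised if they also occur prominently in negative feedback, so that noisier terms drop out of the query formulation. The document-frequency weighting and penalty factor are assumed simplifications.

    from collections import Counter

    def term_weights(docs):
        # Fraction of documents in which each term appears.
        counts = Counter(t for d in docs for t in set(d.lower().split()))
        total = max(1, len(docs))
        return {t: c / total for t, c in counts.items()}

    def select_features(pos_docs, neg_docs, penalty=1.0, top_k=30):
        # Score from positive feedback, subtract a penalty from negative feedback.
        pos, neg = term_weights(pos_docs), term_weights(neg_docs)
        scored = {t: w - penalty * neg.get(t, 0.0) for t, w in pos.items()}
        return sorted(scored, key=scored.get, reverse=True)[:top_k]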
Abstract:
This paper develops a framework for classifying term dependencies in query expansion with respect to the role terms play in structural linguistic associations. The framework is used to classify and compare the query expansion terms produced by the unigram and positional relevance models. As the unigram relevance model does not explicitly model term dependencies in its estimation process it is often thought to ignore dependencies that exist between words in natural language. The framework presented in this paper is underpinned by two types of linguistic association, namely syntagmatic and paradigmatic associations. It was found that syntagmatic associations were a more prevalent form of linguistic association used in query expansion. Paradoxically, it was the unigram model that exhibited this association more than the positional relevance model. This surprising finding has two potential implications for information retrieval models: (1) if linguistic associations underpin query expansion, then a probabilistic term dependence assumption based on position is inadequate for capturing them; (2) the unigram relevance model captures more term dependency information than its underlying theoretical model suggests, so its normative position as a baseline that ignores term dependencies should perhaps be reviewed.
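For intuition only, and not part of the paper's framework, the sketch below contrasts the two association types: a syntagmatic score from direct co-occurrence within a context window (terms that appear together), and a paradigmatic score from the similarity of two terms' context profiles (terms that substitute for one another). The window size and the cosine measure are illustrative assumptions.

    from collections import Counter, defaultdict
    import math

    def build_contexts(docs, window=2):
        # For each term, count the terms appearing within +/- window positions.
        contexts = defaultdict(Counter)
        for doc in docs:
            tokens = doc.lower().split()
            for i, t in enumerate(tokens):
                for n in tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]:
                    contexts[t][n] += 1
        return contexts

    def syntagmatic(contexts, a, b):
        # Direct co-occurrence count between a and b within the window.
        return contexts[a][b]

    def paradigmatic(contexts, a, b):
        # Cosine similarity of the context profiles of a and b.
        ca, cb = contexts[a], contexts[b]
        dot = sum(ca[t] * cb[t] for t in ca)
        norm = (math.sqrt(sum(v * v for v in ca.values()))
                * math.sqrt(sum(v * v for v in cb.values())))
        return dot / norm if norm else 0.0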
Abstract:
Studies of orthographic skills transfer between languages focus mostly on working memory (WM) ability in alphabetic first language (L1) speakers when learning another, often alphabetically congruent, language. We report two studies that, instead, explored the transferability of L1 orthographic processing skills in WM in logographic-L1 and alphabetic-L1 speakers. English-French bilingual and English monolingual (alphabetic-L1) speakers, and Chinese-English (logographic-L1) speakers, learned a set of artificial logographs and associated meanings (Study 1). The logographs were used in WM tasks with and without concurrent articulatory or visuo-spatial suppression. The logographic-L1 bilinguals were markedly less affected by articulatory suppression than alphabetic-L1 monolinguals (who did not differ from their bilingual peers). Bilinguals overall were less affected by spatial interference, reflecting superior phonological processing skills or, conceivably, greater executive control. A comparison of span sizes for meaningful and meaningless logographs (Study 2) replicated these findings. However, the logographic-L1 bilinguals’ spans in L1 were measurably greater than those of their alphabetic-L1 (bilingual and monolingual) peers; a finding unaccounted for by faster articulation rates or differences in general intelligence. The overall pattern of results suggests an advantage (possibly perceptual) for logographic-L1 speakers, over and above the bilingual advantage also seen elsewhere in third language (L3) acquisition.