924 results for EUREKA (Information retrieval system)
Abstract:
Failing injectors are one of the most common faults in diesel engines. These faults can have serious effects on diesel engine operation, such as engine misfire, knocking, insufficient power output, or even complete engine breakdown. It is thus essential to prevent such faults by monitoring the condition of the injectors. In this paper, the authors present the results of an experimental investigation into identifying the signal characteristics of a simulated incipient injector fault in a diesel engine using both in-cylinder pressure and acoustic emission (AE) techniques. A time-waveform, event-driven synchronous averaging technique was used to minimize or eliminate the effect of engine speed variation and amplitude fluctuation. It was found that AE is an effective method for detecting the simulated injector fault in both the time (crank angle) and frequency (order) domains. It was also shown that the time-domain in-cylinder pressure signal is a poor indicator for condition monitoring and diagnosis of the simulated injector fault, due to the small effect of the simulated fault on the engine combustion process. Nevertheless, good correlations between the simulated injector fault and the lower-order components of the enveloped in-cylinder pressure spectrum were found at various engine loading conditions.
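The event-driven synchronous averaging mentioned above can be illustrated with a minimal sketch: assuming each engine cycle is delimited by trigger indices (e.g. TDC pulses), every cycle is resampled onto a common crank-angle grid before averaging, so that speed variation and amplitude fluctuation largely cancel. The function name and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def synchronous_average(signal, cycle_starts, n_points=360):
    """Average a signal over engine cycles on a common crank-angle grid.

    signal       : 1-D array of samples (e.g. AE or in-cylinder pressure)
    cycle_starts : indices marking the start of each cycle (e.g. TDC triggers)
    n_points     : number of crank-angle bins per averaged cycle
    """
    cycles = []
    for s, e in zip(cycle_starts[:-1], cycle_starts[1:]):
        cycle = signal[s:e]
        # Resample onto a fixed crank-angle grid so cycles of slightly
        # different duration (speed variation) line up before averaging.
        grid = np.linspace(0, len(cycle) - 1, n_points)
        cycles.append(np.interp(grid, np.arange(len(cycle)), cycle))
    return np.mean(cycles, axis=0)
```

Averaging in the crank-angle (rather than time) domain is what makes the result independent of engine speed fluctuations between cycles.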
Abstract:
Modelling power system loads is a challenge, since the load level and composition vary with time. An accurate load model is important because there is a substantial component of load dynamics in the frequency range relevant to system stability. The composition of loads needs to be characterised because the time constants of composite loads affect the damping contributions of the loads to power system oscillations, and their effects vary with the time of day, depending on the mix of motor loads. This chapter has two main objectives: 1) describe small-signal load modelling using on-line measurements; and 2) present a new approach to developing models that reflect the load response to large disturbances. Small-signal load characterisation based on on-line measurements allows the composition of the load to be predicted with improved accuracy compared with post-mortem or classical load models. Rather than a generic dynamic model for small-signal modelling of the load, an explicit induction motor is used so that the performance for larger disturbances can be more reliably inferred. The relation between power and frequency/voltage can be explicitly formulated and the contribution of induction motors extracted. One of the main features of this work is that the induction motor component can be associated with nominal powers or equivalent motors.
Abstract:
Discovering proper search intents is a vital process in returning desired results, and it has been a consistently active research topic in information retrieval in recent years. Existing methods mainly rely on context-based mining, query expansion, and user profiling techniques, which still suffer from the ambiguity of search queries. In this paper, we introduce a novel ontology-based approach that uses a world knowledge base to construct personalized ontologies for identifying adequate concept levels that match user search intents. An iterative mining algorithm is designed to evaluate potential intents level by level until the best result is reached. The proposed approach is evaluated on the large-volume RCV1 data set, and experimental results indicate a distinct improvement in top precision compared with baseline models.
Abstract:
Two-party key exchange (2PKE) protocols have been rigorously analyzed under various models considering different adversarial actions. However, the analysis of group key exchange (GKE) protocols has not been as extensive as that of 2PKE protocols. Particularly, an important security attribute called key compromise impersonation (KCI) resilience has been completely ignored for the case of GKE protocols. Informally, a protocol is said to provide KCI resilience if the compromise of the long-term secret key of a protocol participant A does not allow the adversary to impersonate an honest participant B to A. In this paper, we argue that KCI resilience for GKE protocols is at least as important as it is for 2PKE protocols. Our first contribution is revised definitions of security for GKE protocols considering KCI attacks by both outsider and insider adversaries. We also give a new proof of security for an existing two-round GKE protocol under the revised security definitions, assuming random oracles. We then show how to achieve insider KCI resilience in a generic way using a known compiler in the literature. As one may expect, this additional security assurance comes at the cost of an extra round of communication. Finally, we show that a few existing protocols are not secure against outsider KCI attacks. The attacks on these protocols illustrate the necessity of considering KCI resilience for GKE protocols.
Abstract:
Rule extraction from neural networks has been investigated for two decades, and there have been significant applications. Despite this level of success, rule-extraction methods for neural networks are generally not part of data mining tools, and a significant commercial breakthrough may still be some time away. This paper briefly reviews the state of the art and points to some of the obstacles, namely a lack of evaluation techniques in experiments and of larger benchmark data sets. A significant new development is the view that rule extraction from neural networks is an interactive process which actively involves the user. This leads to the application of assessment and evaluation techniques from information retrieval, which may lead to a range of new methods.
Abstract:
While the phrase “six degrees of separation” is widely used to characterize a variety of human-derived networks, in this study we show that in a patent citation network, related patents are connected with an average distance of 6, whereas the average distance for a random pair of nodes in the graph is approximately 15. We use this information to improve the recall level in prior-art retrieval in the setting of blind relevance feedback, without any textual knowledge.
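The two distance figures quoted above can be computed with a plain breadth-first search over the citation graph, treated as undirected. This is a generic sketch, not the authors' code; the adjacency-list input format is an assumption.

```python
from collections import deque

def bfs_distances(adj, source):
    """Breadth-first search distances from source over an adjacency-list graph."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def average_distance(adj, pairs):
    """Mean shortest-path distance over the given node pairs
    (e.g. related-patent pairs vs. randomly sampled pairs)."""
    return sum(bfs_distances(adj, a)[b] for a, b in pairs) / len(pairs)
```

Comparing this average over related-patent pairs against random pairs is what quantifies the 6-vs-15 gap the abstract exploits for blind relevance feedback.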
Abstract:
Quantum theory has recently been employed to further advance the theory of information retrieval (IR). A challenging research topic is to investigate so-called quantum-like interference in users’ relevance judgement process, in which users judge the relevance degree of each document with respect to a given query. In this process, users’ relevance judgement for the current document is often interfered with by the judgements for previous documents, due to the interference on users’ cognitive status. Research from cognitive science has demonstrated some initial evidence of quantum-like cognitive interference in human decision making, which underpins the user’s relevance judgement process. This motivates us to model such cognitive interference in the relevance judgement process, which we believe will lead to better modeling and explanation of user behaviors in the relevance judgement process for IR, and eventually to more user-centric IR models. In this paper, we propose to use a probabilistic automaton (PA) and a quantum finite automaton (QFA), which are suitable for representing the transition of user judgement states, to dynamically model the cognitive interference that arises while the user is judging a list of documents.
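A probabilistic automaton of the kind proposed can be sketched as a start distribution over judgement states plus one transition matrix per emitted judgement; the probability of a judgement sequence is obtained by chaining the matrices, so each judgement's probability depends on the state reached after the previous one. The two-state model and its numbers below are purely illustrative, not estimated from user data.

```python
import numpy as np

# Toy PA over two judgement states: 0 = "not relevant", 1 = "relevant".
start = np.array([0.5, 0.5])  # initial cognitive-state distribution
T = {
    # T[j][i, k]: probability of emitting judgement j while moving
    # from state i to state k. Rows of T[0] + T[1] sum to 1.
    0: np.array([[0.7, 0.0], [0.4, 0.0]]),
    1: np.array([[0.0, 0.3], [0.0, 0.6]]),
}

def sequence_probability(judgements):
    """Probability that the automaton emits the given judgement sequence."""
    state = start
    for j in judgements:
        state = state @ T[j]  # condition on emitting j, advance the state
    return state.sum()
```

Because the state is carried across documents, the probability of judging document n "relevant" differs depending on earlier judgements, which is exactly the interference effect the PA is meant to capture.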
Abstract:
With the growing number of XML documents on the Web, it becomes essential to organise these XML documents effectively in order to retrieve useful information from them. A possible solution is to apply clustering to the XML documents to discover knowledge that promotes effective data management, information retrieval and query processing. However, many issues arise in discovering knowledge from these types of semi-structured documents due to their heterogeneity and structural irregularity. Most of the existing research on clustering techniques focuses on only one feature of the XML documents, this being either their structure or their content, due to scalability and complexity problems. The knowledge gained in the form of clusters based on the structure or the content alone is not suitable for real-life datasets. It therefore becomes essential to include both the structure and the content of XML documents in order to improve the accuracy and meaning of the clustering solution. However, the inclusion of both these kinds of information in the clustering process results in a huge overhead for the underlying clustering algorithm because of the high dimensionality of the data. The overall objective of this thesis is to address these issues by: (1) proposing methods that utilise frequent pattern mining techniques to reduce the dimensionality; (2) developing models to effectively combine the structure and content of XML documents; and (3) utilising the proposed models in clustering. This research first determines the structural similarity in the form of frequent subtrees and then uses these frequent subtrees to represent the constrained content of the XML documents in order to determine the content similarity. A clustering framework with two types of models, implicit and explicit, is developed. The implicit model uses a Vector Space Model (VSM) to combine the structure and the content information.
The explicit model uses a higher-order model, namely a 3rd-order Tensor Space Model (TSM), to explicitly combine the structure and the content information. This thesis also proposes a novel incremental technique to decompose large-sized tensor models and utilises the decomposed solution for clustering the XML documents. The proposed framework and its components were extensively evaluated on several real-life datasets exhibiting extreme characteristics, to understand the usefulness of the proposed framework in real-life situations. Additionally, this research evaluates the outcome of the clustering process on the collection selection problem in information retrieval, using the Wikipedia dataset. The experimental results demonstrate that the proposed frequent pattern mining and clustering methods outperform the related state-of-the-art approaches. In particular, the proposed framework of utilising frequent structures for constraining the content shows an improvement in accuracy over content-only and structure-only clustering results. The scalability evaluation experiments conducted on large-scale datasets clearly show the strengths of the proposed methods over state-of-the-art methods. In particular, this thesis contributes to effectively combining the structure and the content of XML documents for clustering, in order to improve the accuracy of the clustering solution. In addition, it addresses the research gaps in frequent pattern mining to generate efficient and concise frequent subtrees with various node relationships that can be used in clustering.
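The implicit (VSM) model described above, which combines structure and content in a single vector, can be sketched as a weighted concatenation of a content term-frequency vector and a frequent-subtree frequency vector. The per-view normalisation and the alpha weight are assumptions for illustration, not the thesis's exact formulation.

```python
import numpy as np

def combined_vsm(content_tf, subtree_tf, alpha=0.5):
    """Combine content and structure views of an XML document into one vector.

    content_tf : term frequencies of the (structure-constrained) content
    subtree_tf : occurrence frequencies of mined frequent subtrees
    alpha      : weight balancing the content view against the structure view
    """
    c = np.asarray(content_tf, dtype=float)
    s = np.asarray(subtree_tf, dtype=float)
    # L2-normalise each view so neither dominates through raw scale alone.
    if np.linalg.norm(c):
        c = c / np.linalg.norm(c)
    if np.linalg.norm(s):
        s = s / np.linalg.norm(s)
    return np.concatenate([alpha * c, (1 - alpha) * s])
```

Any standard clustering algorithm (k-means, hierarchical) can then run on these combined vectors; the explicit TSM alternative instead keeps the two views as separate tensor modes.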
Abstract:
Many user studies of Web information searching have found a significant effect of task type on search strategies. However, little attention has been paid to Web image searching strategies, especially query reformulation, even though this is a crucial part of Web image searching. In this study, we investigated the effects of topic domains and task types on users’ image searching behavior and query reformulation strategies. Some significant differences in task specificity and initial concepts were identified among the task domains. Task types were also found to influence participants’ result reviewing behavior and query reformulation strategies.
Abstract:
This chapter reviews the incidence of coverage of Papua New Guinea affairs in the Australian press and in Australian broadcast media. It presents the findings of a formal monitoring of selected newspaper coverage and news broadcasts of the leading Australian television and radio outlets. The study also includes news stories published on ABC Online. The findings for print media suggest that coverage of PNG is inadequate and may be contributing towards negative images of that country in Australia. The broadcast monitoring also found that, beyond the ABC's regular and balanced coverage, there was very little mention of PNG on Australian airwaves. The deployment of resources by the ABC was seen as a potential model for increased quantity and quality of coverage, with its maintenance of a correspondent and office in the country, and its use of reports from PNG across a wide range of programs. The investigation noted some early indications of a shift in media attention following the election of a new government in Australia in 2007, which gave some priority attention to PNG, including a visit by the then Prime Minister.
Abstract:
In information retrieval (IR) research, more and more focus has been placed on optimizing a query language model by detecting and estimating the dependencies between the query and the observed terms occurring in the selected relevance feedback documents. In this paper, we propose a novel Aspect Language Modeling framework featuring term association acquisition, document segmentation, query decomposition, and an Aspect Model (AM) for parameter optimization. Through the proposed framework, we advance the theory and practice of applying high-order and context-sensitive term relationships to IR. We first decompose a query into subsets of query terms. Then we segment the relevance feedback documents into chunks using multiple sliding windows. Finally, we discover the higher-order term associations, that is, the terms in these chunks with a high degree of association to the subsets of the query. In this process, we combine the AM with Association Rule (AR) mining. The AM not only considers the subsets of a query as “hidden” states and estimates their prior distributions, but also evaluates the dependencies between the subsets of a query and the observed terms extracted from the chunks of feedback documents. The AR mining provides a reasonable initial estimation of the high-order term associations by discovering association rules from the document chunks. Experimental results on various TREC collections verify the effectiveness of our approach, which significantly outperforms a baseline language model and two state-of-the-art query language models, namely the Relevance Model and the Information Flow model.
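The segmentation and association steps can be sketched as follows: feedback documents are cut into overlapping chunks with a sliding window, and the support of a query-term subset is the fraction of chunks containing all of its terms, in the association-rule sense. Window size, step, and this particular support measure are illustrative assumptions, not the paper's exact parameters.

```python
def sliding_chunks(terms, window=8, step=4):
    """Segment a tokenised document into overlapping sliding-window chunks."""
    return [terms[i:i + window]
            for i in range(0, max(len(terms) - window, 0) + 1, step)]

def support(chunks, term_set):
    """Association-rule support: fraction of chunks containing every term
    in term_set (e.g. a subset of the decomposed query)."""
    if not chunks:
        return 0.0
    hits = sum(1 for chunk in chunks if term_set <= set(chunk))
    return hits / len(chunks)
```

Supports computed this way would serve as the initial estimates that the Aspect Model then refines when evaluating dependencies between query subsets and chunk terms.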
Abstract:
‘Nobody knows anything’, said William Goldman of studio filmmaking. The rule is ever more apt as we survey the radical changes that digital distribution, along with the digitisation of production and exhibition, is wreaking on global film circulation. Digital Disruption: Cinema Moves On-line helps to make sense of what has happened in the short but turbulent history of on-line distribution. It provides a realistic assessment of the genuine and not-so-promising methods that have been tried to address the disruptions that moving from ‘analogue dollars’ to ‘digital cents’ has provoked in the film industry. Paying close attention to how the Majors have dealt with the challenges – often unsuccessfully – it focuses as much attention on innovations and practices outside the mainstream. Throughout, it is alive to, and showcases, important entrepreneurial innovations such as Mubi, Jaman, Withoutabox and IMDb. Written by leading academic commentators who have followed the fortunes of world cinema closely and passionately, as well as experienced hands close to the fluctuating fortunes of the industry, Digital Disruption: Cinema Moves On-line is an indispensable guide to great changes in film and its audiences.
Abstract:
Consider the concept combination ‘pet human’. In word association experiments, human subjects produce the associate ‘slave’ in relation to this combination. The striking aspect of this associate is that it is not produced as an associate of ‘pet’, or of ‘human’, in isolation. In other words, the associate ‘slave’ seems to be emergent. Such emergent associations sometimes have a creative character, and cognitive science is largely silent about how we produce them. Departing from a dimensional model of human conceptual space, this article explores concept combinations and argues that emergent associations are a result of abductive reasoning within conceptual space, that is, below the symbolic level of cognition. A tensor-based approach is used to model concept combinations, allowing such combinations to be formalized as interacting quantum systems. Free association norm data is used to motivate the underlying basis of the conceptual space. It is shown by analogy how some concept combinations may behave like quantum-entangled (non-separable) particles. Two methods of analysis are presented for empirically validating the presence of non-separable concept combinations in human cognition. One method is based on quantum theory, and the other on comparing a joint (true theoretic) probability distribution with another distribution based on a separability assumption, using a chi-square goodness-of-fit test. Although these methods were inconclusive in relation to an empirical study of bi-ambiguous concept combinations, avenues for further refinement of these methods are identified.
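The second validation method, comparing a joint distribution against a separability (independence) assumption with a chi-square goodness-of-fit test, can be sketched directly: under separability the expected counts are the product of the marginals, and the statistic sums the normalised squared deviations from them. The contingency-table layout below is an assumption for illustration, not the article's exact experimental design.

```python
import numpy as np

def separability_chi2(joint_counts):
    """Chi-square statistic comparing observed joint counts for two concept
    senses against the separable model (product of the marginals).
    A large value suggests the combination is not separable."""
    obs = np.asarray(joint_counts, dtype=float)
    n = obs.sum()
    row = obs.sum(axis=1, keepdims=True)   # marginal counts of concept A's senses
    col = obs.sum(axis=0, keepdims=True)   # marginal counts of concept B's senses
    expected = row * col / n               # counts expected under separability
    return ((obs - expected) ** 2 / expected).sum()
```

The statistic would then be compared against a chi-square critical value with (rows-1)(cols-1) degrees of freedom to decide whether separability can be rejected.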
Abstract:
The fashion ecosystem is at boiling point as consumers turn up the heat in all areas of the fashion value, trend and supply chain. While traditionally fashion has been a monologue from designer brand to consumer, new technology and the virtual world have given consumers a voice to engage brands in a conversation to express evolving needs, ideas and feedback. Product customisation is no longer innovative. Successful brands are including customers in the design process and holding conversations ‘with’ them to improve product, manufacturing, sales, distribution, marketing and sustainable business practices. Co-creation and crowd sourcing are integral to any successful business model, and designers and manufacturers are supplying the technology or tools for these creative, active, participatory ‘prosumers’. With this collaboration, however, there arises a worrying trend for fashion professionals. The ‘design it yourself’ ‘indiepreneur’, who with the combination of technology, the internet, excess manufacturing capacity, crowd funding and the idea of sharing the creative integrity of a product (‘copyleft’ not copyright), is challenging the notion that the fashion supply chain is complex. The passive ‘consumer’ no longer exists. Fashion designers now share the stage with ‘amateur’ creators who are disrupting every activity they touch, while being motivated by profit as well as a quest for originality and innovation. This paper examines the effects this ‘consumer’ engagement is having on traditional fashion models and the fashion supply chain. Crowd sourcing, crowd funding, co-creating, design it yourself, global sourcing, the virtual supply chain, social media, online shopping, group buying, consumer-to-consumer marketing and retail, and branding the ‘individual’ are indicative of the new consumer-driven fashion models. Consumers now drive the fashion industry, from setting trends through to creating, producing, selling and marketing product.
They can turn up the heat at any time, and at any point, in the fashion supply chain. They are raising the temperature at each and every stage of the chain, decreasing or eliminating the processes involved: decreasing the risk of fashion obsolescence, the quantities for manufacture, the complexity of distribution and the consumption of product; eliminating certain stages altogether; and limiting the brand as custodian of marketing. Some brands are discovering a new ‘enemy’ – the very people they are trying to sell to.
Keywords: fashion supply chain, virtual world, consumer, ‘prosumers’, co-creation, fashion designers