156 results for Machine Typed Document


Relevance:

20.00%

Abstract:

This paper analyses the pairwise distances of signatures produced by the TopSig retrieval model on two document collections. The distribution of the distances is compared to that of purely random signatures. This explains why TopSig is competitive with state-of-the-art retrieval models only at early precision: only the local neighbourhood of a signature is interpretable. We suggest this is a common property of vector space models.
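TopSig-style signatures are fixed-width binary vectors compared by Hamming distance. As a minimal sketch of the kind of analysis described, assuming purely random bit-vector signatures (the random baseline the paper compares against, not the TopSig indexing pipeline itself), the pairwise-distance distribution can be examined like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n_docs, n_bits = 200, 256

# Purely random binary signatures: the baseline distribution.
sigs = rng.integers(0, 2, size=(n_docs, n_bits), dtype=np.uint8)

# Pairwise Hamming distances via XOR; keep the upper triangle only.
dists = (sigs[:, None, :] ^ sigs[None, :, :]).sum(axis=-1)
pairwise = dists[np.triu_indices(n_docs, k=1)]

# Random signatures concentrate sharply around n_bits / 2, so only a
# signature's sparse local neighbourhood carries interpretable similarity.
print(pairwise.mean(), pairwise.std())
```

For random signatures the distances follow a Binomial(n_bits, 1/2) distribution, concentrating at n_bits/2 with standard deviation sqrt(n_bits)/2, which is the concentration effect at issue.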

Relevance:

20.00%

Abstract:

A simple and effective down-sampling algorithm, the Peak-Hold-Down-Sample (PHDS) algorithm, is developed in this paper to enable rapid and efficient data transfer in remote condition monitoring applications. The algorithm is particularly useful for high-frequency condition monitoring (CM) techniques and for low-speed machine applications, since the combination of a high sampling frequency and a low rotating speed generally leads to unwieldy data sizes. The effectiveness of the algorithm was evaluated and tested on four sets of data. One set was extracted from the condition monitoring signal of a practical industrial application. Another was acquired from a low-speed machine test rig in the laboratory. The remaining two sets were computer-simulated bearing defect signals containing either a single defect or multiple bearing defects. The results show that the PHDS algorithm can substantially reduce the size of the data while preserving the critical bearing defect information for all the data sets used in this work, even at a large down-sample ratio (e.g., 500 times). In contrast, a conventional down-sampling technique from signal processing eliminates useful and critical information such as bearing defect frequencies when the same down-sample ratio is employed; it also induces noise and artificial frequency components, which limits its usefulness for machine condition monitoring applications.
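The abstract names the algorithm but does not give its steps; a minimal sketch of the peak-hold idea the name suggests (keep each block's peak amplitude instead of every Nth sample, so short impulsive defect signatures survive the down-sampling) might look like:

```python
import numpy as np

def peak_hold_downsample(signal: np.ndarray, ratio: int) -> np.ndarray:
    """Keep the peak-magnitude sample of each block of `ratio` samples.

    Unlike plain decimation (signal[::ratio]), this preserves short
    impulsive events such as bearing defect impacts.
    """
    n = len(signal) - len(signal) % ratio      # trim to whole blocks
    blocks = signal[:n].reshape(-1, ratio)
    idx = np.abs(blocks).argmax(axis=1)        # position of each block's peak
    return blocks[np.arange(blocks.shape[0]), idx]

# Example: a 500x down-sample that still retains an isolated impulse.
x = np.random.randn(1_000_000) * 0.1
x[123_456] = 5.0                               # simulated defect impact
y = peak_hold_downsample(x, 500)
print(y.size, np.abs(y).max())                 # the 5.0 spike survives
```

Plain decimation of the same signal would keep the impulse only if it happened to land on a kept sample, which is why defect frequencies vanish at large ratios.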

Relevance:

20.00%

Abstract:

The ability to accurately predict the remaining useful life of machine components is critical for continuous machine operation, and can also improve productivity and enhance system safety. In condition-based maintenance (CBM), maintenance is performed based on information collected through condition monitoring and an assessment of the machine's health. Effective diagnostics and prognostics are important aspects of CBM, allowing maintenance engineers to schedule a repair and to acquire replacement components before the components actually fail. All machine components are subject to degradation processes in real environments, and they have certain failure characteristics which can be related to the operating conditions. This paper describes a technique for accurate assessment of the remnant life of machines based on health state probability estimation, drawing on historical knowledge embedded in closed-loop diagnostics and prognostics systems. The technique uses a Support Vector Machine (SVM) classifier to estimate the health state probability of machine degradation, which directly affects the accuracy of the prediction. To validate the feasibility of the proposed model, real-life historical data from bearings of High Pressure Liquefied Natural Gas (HP-LNG) pumps were analysed and used to obtain the optimal prediction of remaining useful life. The results were very encouraging and showed that the proposed prognostic system based on health state probability estimation has the potential to be used as an estimation tool for remnant life prediction in industrial machinery.
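The abstract does not give the estimator's formulation; a minimal sketch, assuming discretised health states, synthetic feature data and scikit-learn's probability-calibrated SVM (all assumptions, not the paper's HP-LNG setup):

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical training data: condition-monitoring feature vectors labelled
# with discretised health states (0 = healthy ... 3 = near failure).
rng = np.random.default_rng(1)
X_train = rng.normal(size=(400, 8))
y_train = rng.integers(0, 4, size=400)

# probability=True enables Platt-scaled class probabilities.
clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

# Health state probabilities for a new observation; a prognostic model can
# combine these with historical degradation knowledge, e.g. as a
# probability-weighted average of assumed per-state remaining lifetimes.
p = clf.predict_proba(rng.normal(size=(1, 8)))[0]
state_rul = np.array([1000.0, 600.0, 250.0, 50.0])  # assumed hours per state
print(p, float(p @ state_rul))
```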

Relevance:

20.00%

Abstract:

This chapter is a tutorial that teaches you how to design extended finite state machine (EFSM) test models for a system that you want to test. EFSM models are more powerful and expressive than simple finite state machine (FSM) models, and are one of the most commonly used styles of models for model-based testing, especially for embedded systems. There are many languages and notations in use for writing EFSM models, but in this tutorial we write our EFSM models in the familiar Java programming language. To generate tests from these EFSM models we use ModelJUnit, which is an open-source tool that supports several stochastic test generation algorithms, and we also show how to write your own model-based testing tool. We show how EFSM models can be used for unit testing and system testing of embedded systems, and for offline testing as well as online testing.
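A sketch of the core idea, written in Python for brevity rather than in Java/ModelJUnit as the tutorial does: an extended state variable, guarded action methods, and a stochastic random-walk test generator. This is an illustration of the modelling style only, not the ModelJUnit API.

```python
import random

class CoffeeMachineModel:
    """Toy EFSM test model: an extended state variable plus guarded actions."""
    def __init__(self):
        self.credit = 0                  # extended state variable

    def insert_coin(self):
        self.credit += 1

    def insert_coin_enabled(self):
        return self.credit < 2           # guard for insert_coin

    def vend(self):
        self.credit = 0

    def vend_enabled(self):
        return self.credit == 2          # guard for vend

def random_walk(model, steps, seed=0):
    """Stochastic test generation: repeatedly fire a random enabled action."""
    rng = random.Random(seed)
    actions = [("insert_coin", model.insert_coin, model.insert_coin_enabled),
               ("vend", model.vend, model.vend_enabled)]
    trace = []
    for _ in range(steps):
        enabled = [(name, act) for name, act, guard in actions if guard()]
        name, act = rng.choice(enabled)
        act()    # online testing would also drive the system under test here;
                 # offline testing would instead emit the trace as a script
        trace.append(name)
    return trace

print(random_walk(CoffeeMachineModel(), 8))
```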

Relevance:

20.00%

Abstract:

The R statistical environment and language has demonstrated particular strengths for interactive development of statistical algorithms, as well as for data modelling and visualisation. Its current implementation has an interpreter at its core, which may incur a performance penalty in comparison to directly executing user algorithms in the native machine code of the host CPU. In contrast, the C++ language has no built-in visualisation capabilities, linear algebra handling or even basic statistical algorithms; however, user programs are converted to high-performance machine code ahead of execution. A new method avoids possible speed penalties in R by using the Rcpp extension package in conjunction with the Armadillo C++ matrix library. In addition to the inherent performance advantages of compiled code, Armadillo provides an easy-to-use template-based meta-programming framework that allows several linear algebra operations to be automatically pooled into one, which in turn can lead to further speedups. With the aid of Rcpp and Armadillo, conversion of linear-algebra-centered algorithms from R to C++ becomes straightforward. The converted algorithms retain their overall structure and readability, while maintaining a bidirectional link with the host R environment. Empirical timing comparisons of R and C++ implementations of a Kalman filtering algorithm indicate a speedup of several orders of magnitude.
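The benchmark algorithm is a standard linear Kalman filter; as a structural sketch only (in Python/NumPy here, whereas the paper's implementations are in R and Rcpp/Armadillo C++), the per-observation predict/update loop looks like:

```python
import numpy as np

def kalman_filter(y, F, H, Q, R, x0, P0):
    """Linear Kalman filter: the predict/update loop of the benchmark."""
    x, P = x0, P0
    states = []
    for yt in y:
        # Predict step.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step.
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ (yt - H @ x)
        P = P - K @ H @ P
        states.append(x.copy())
    return np.array(states)

# 1-D constant-velocity tracking with assumed (illustrative) noise settings.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.5]])
obs = [np.array([t + np.random.randn() * 0.5]) for t in range(20)]
print(kalman_filter(obs, F, H, Q, R, np.zeros(2), np.eye(2))[-1])
```

The dense matrix products inside the loop are exactly the kind of expressions Armadillo's template meta-programming can pool into single operations.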

Relevance:

20.00%

Abstract:

The power system stabilizer (PSS) is one of the most important controllers in modern power systems for damping low-frequency oscillations. Many efforts have been dedicated to designing tuning methodologies and allocation techniques to obtain optimal damping behaviour of the system. Traditionally, PSSs are tuned mostly for local damping performance; however, in order to obtain globally optimal performance, the tuning needs to consider more variables. Furthermore, with the growth of system interconnection and the increase of system complexity, new tools are required for global tuning and coordination of PSSs to achieve a globally optimal solution. Differential evolution (DE) is recognized as a simple and powerful global optimization technique that can attain fast convergence as well as high computational efficiency. However, as in many other evolutionary algorithms (EAs), premature convergence of the population restricts the optimization capacity of DE. In this paper, a modified DE is proposed and applied to optimal PSS tuning of the 39-bus New England system. New operators are introduced to reduce the probability of premature convergence. To investigate the impact of system conditions on PSS tuning, multiple operating points are studied. Simulation results are compared with standard DE and particle swarm optimization (PSO).
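The abstract does not specify the new operators, so the sketch below shows only the baseline DE/rand/1/bin loop that they would modify; the two-parameter objective is a stand-in (a real PSS study would evaluate damping ratios of the system's dominant eigenvalues):

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Baseline DE/rand/1/bin; the paper's anti-premature-convergence
    operators are not given in the abstract and are not reproduced here."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             size=3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            cross = rng.random(dim) < CR                 # binomial crossover
            cross[rng.integers(dim)] = True              # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            tc = f(trial)
            if tc < cost[i]:                             # greedy selection
                pop[i], cost[i] = trial, tc
    return pop[cost.argmin()], cost.min()

# Example: tuning two hypothetical PSS gain parameters against a stand-in
# objective with a known optimum at (1.5, 1.5).
best, val = differential_evolution(lambda x: ((x - 1.5) ** 2).sum(),
                                   np.array([[0.0, 10.0], [0.0, 10.0]]))
print(best, val)
```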

Relevance:

20.00%

Abstract:

In this paper, we describe a machine-translated parallel English corpus for the NTCIR Chinese, Japanese and Korean (CJK) Wikipedia collections. This document collection is named the CJK2E Wikipedia XML corpus. The corpus could be used in many ways by the information retrieval research community and for knowledge sharing in Wikipedia; for example, it could support experiments in cross-lingual information retrieval, cross-lingual link discovery, or omni-lingual information retrieval research. Furthermore, the translated CJK articles could be used to further expand the current coverage of the English Wikipedia.

Relevance:

20.00%

Abstract:

Nowadays people rely heavily on the Internet for information and knowledge. Wikipedia is an online multilingual encyclopaedia that contains a very large number of detailed articles covering most written languages, and it is often considered a treasury of human knowledge. It includes extensive hypertext links between documents of the same language for easy navigation. However, pages in different languages are rarely cross-linked, except for direct equivalent pages on the same subject. This can pose serious difficulties for users seeking information or knowledge from different lingual sources, or where there is no equivalent page in one language or another. In this thesis, a new information retrieval task, cross-lingual link discovery (CLLD), is proposed to tackle the lack of cross-lingual anchored links in a knowledge base such as Wikipedia. In contrast to traditional information retrieval tasks, cross-lingual link discovery algorithms actively recommend a set of meaningful anchors in a source document and establish links to documents in an alternative language. In other words, cross-lingual link discovery is a way of automatically finding hypertext links between documents in different languages, which is particularly helpful for knowledge discovery across language domains. This study is specifically focused on Chinese / English link discovery (C/ELD), a special case of the cross-lingual link discovery task involving natural language processing (NLP), cross-lingual information retrieval (CLIR) and cross-lingual link discovery. To assess the effectiveness of CLLD, a standard evaluation framework is also proposed; it includes topics, document collections, a gold standard dataset, evaluation metrics, and toolkits for run pooling, link assessment and system evaluation. With this framework, the performance of CLLD approaches and systems can be quantified. This thesis contributes to research on natural language processing and cross-lingual information retrieval in CLLD as follows: 1) a new simple but effective Chinese segmentation method, n-gram mutual information, is presented for determining the boundaries of Chinese text; 2) a voting mechanism for named entity translation is demonstrated to achieve high-precision English / Chinese machine translation; 3) a link mining approach that mines the existing link structure for anchor probabilities achieves encouraging results in suggesting cross-lingual Chinese / English links in Wikipedia. This approach was examined in experiments on better, automatic generation of cross-lingual links carried out as part of the study. The overall major contribution of this thesis is the provision of a standard evaluation framework for cross-lingual link discovery research, which helps in benchmarking the performance of various CLLD systems and in identifying good CLLD realisation approaches. The evaluation methods and framework described in this thesis have been utilised to quantify system performance in the NTCIR-9 Crosslink task, the first information retrieval track of its kind.
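Of the three contributions, the link mining step is the most directly sketchable: existing links provide, for each anchor string, an estimate of how often it is linked when it occurs. A generic sketch of that anchor-probability statistic (illustrative only, not the thesis's exact implementation):

```python
from collections import Counter

def anchor_probabilities(documents, links):
    """Estimate P(anchor text is linked), the core link-mining statistic.

    `documents` is an iterable of raw texts; `links` is an iterable of
    (anchor_text, target) pairs harvested from existing cross-references.
    """
    link_count = Counter(anchor.lower() for anchor, _ in links)
    occur_count = Counter()
    for text in documents:
        low = text.lower()
        for anchor in link_count:
            occur_count[anchor] += low.count(anchor)
    return {a: link_count[a] / occur_count[a]
            for a in link_count if occur_count[a] > 0}

docs = ["Wikipedia is an online encyclopaedia.",
        "The encyclopaedia has links between articles."]
links = [("encyclopaedia", "Encyclopaedia")]
print(anchor_probabilities(docs, links))   # {'encyclopaedia': 0.5}
```

High-probability anchors become candidate links, with their targets mapped to equivalent pages in the other language.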

Relevance:

20.00%

Abstract:

Enterprise Systems (ES) can be understood as the de facto standard for holistic operational and managerial support within an organization. Most commonly, ES are offered as commercial off-the-shelf packages requiring customization in the user organization. This customization is a complex and resource-intensive task, which often prevents small and midsize enterprises (SMEs) from undertaking configuration projects. In the SME market especially, independent software vendors provide pre-configured ES for a small customer base. The problem of ES configuration is thereby shifted from the customer to the vendor, but remains critical. We argue that the as yet unexplored link between process configuration and business document configuration must be examined more closely, as the two types of configuration are closely tied to one another.

Relevance:

20.00%

Abstract:

Electricity cost has become a major expense in running data centers, and server consolidation using virtualization has become an important technology for improving their energy efficiency. In this research, a genetic algorithm and a simulated-annealing algorithm are proposed for the static virtual machine placement problem, which considers the energy consumption in both the servers and the communication network, and a trading algorithm is proposed for dynamic virtual machine placement. Experimental results show that the proposed methods are more energy efficient than existing solutions.
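The abstract gives no formulas, so the energy model below is purely illustrative: a placement's cost combines an idle-plus-load server term with a network term for traffic between VMs placed on different servers, which is the quantity a genetic or simulated-annealing search would minimise:

```python
def placement_energy(placement, vm_load, traffic,
                     idle_power=100.0, load_power=50.0, link_cost=0.8):
    """Illustrative energy model: server power plus network power.

    `placement[v]` is the server hosting VM v. All coefficients here are
    assumed values; the abstract states only that both server and network
    energy are modelled.
    """
    active = set(placement)                  # servers hosting at least one VM
    e_servers = idle_power * len(active) + load_power * sum(vm_load)
    e_network = sum(traffic[i][j] * link_cost
                    for i in range(len(placement))
                    for j in range(i + 1, len(placement))
                    if placement[i] != placement[j])
    return e_servers + e_network

# Two candidate placements of four VMs: consolidation saves idle power but
# may route more inter-VM traffic across the network.
vm_load = [0.2, 0.5, 0.3, 0.4]
traffic = [[0, 5, 0, 0], [5, 0, 2, 0], [0, 2, 0, 1], [0, 0, 1, 0]]
print(placement_energy([0, 0, 1, 1], vm_load, traffic))   # 271.6
print(placement_energy([0, 0, 0, 0], vm_load, traffic))   # 170.0
```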

Relevance:

20.00%

Abstract:

This paper makes a case for thinking about the primary school as a logic machine (apparatus) as a way of thinking about processes of in-school stratification. Firstly, we discuss related literature on in-school stratification in primary schools, particularly as it relates to literacy learning. Secondly, we explain how school reform can be thought about in terms of the idea of the machine or apparatus, in which case the processes of in-school stratification can be mapped as more than simply concerns about school organisation (such as student grouping), but as also involving a politics of truth, played out in each school, that constitutes school culture and what counts as 'good' pedagogy. Thirdly, the paper focuses specifically on research conducted in primary schools in the northern suburbs of Adelaide, one of the most educationally disadvantaged regions in Australia, as a case study of the relationship between in-school stratification and the reproduction of inequality. We draw on more than 20 years of ethnographic work in primary schools in the northern suburbs of Adelaide and provide a snapshot of a recent attempt to improve literacy achievement in a few northern-suburbs public primary schools (the SILA project). The SILA project, through diagnostic reviews, has provided a significant analysis of the challenges facing policy and practice in such challenging school contexts, one that also maps onto existing (inter)national research. These diagnostic reviews said 'hard things' that required attention by SILA schools, including:

· an over-reliance on whole-class, low-level, routine tasks, and hence a lack of challenge and rigour in the learning tasks offered to students;
· a focus on the 'code breaking' function of language at the expense of richer conceptualisations of literacy that might guide teachers' understanding of challenging pedagogies;
· the need for substantial shifts in the culture of schools, especially unsettling deficit views of students and their communities;
· a need to provide a more 'consistent' approach to teaching literacy across the school;
· a need to focus School Improvement Plans in order to implement a clear focus on literacy learning; and
· a need to sustain professional learning to produce new knowledge and practice.

The paper concludes with suggestions for further research and possible reform projects into the primary school as a logic machine.

Relevance:

20.00%

Abstract:

Cognitive impairment and physical disability are common in Parkinson's disease (PD); as a result, diet can be difficult to measure. This study aimed to evaluate the use of a photographic dietary record (PhDR) in people with PD. During a 12-week nutrition intervention study, 19 individuals with PD kept 3-day PhDRs on three occasions using point-and-shoot digital cameras. Details on food items present in the PhDRs, and on those not photographed, were collected retrospectively during an interview. Following the first use of the PhDR method, the photographer completed a questionnaire (n=18). In addition, the quality of the PhDRs was evaluated at each time point. The person with PD was the sole photographer in 56% of cases, with the remainder taken by the carer or by a combination of the person with PD and the carer. The camera was rated as easy to use by 89%, keeping a PhDR was considered acceptable by 94%, and none would rather have used a "pen and paper" method. Eighty-three percent felt confident to use the camera again to record intake. Of the photos captured (n=730), 89% were of adequate quality (items visible, in focus), while only 21% could be used alone (without interview information) to assess intake. Over the study, 22% of eating/drinking occasions were not photographed. PhDRs were considered an easy and acceptable method of measuring intake among individuals with PD and their carers. The majority of PhDRs were of adequate quality; however, in order to quantify intake, the interview was necessary to obtain sufficient detail and to capture missing items.

Relevance:

20.00%

Abstract:

Topic modelling, such as Latent Dirichlet Allocation (LDA), was proposed to generate statistical models that represent the multiple topics in a collection of documents, and it has been widely utilized in fields such as machine learning and information retrieval. However, its effectiveness in information filtering is largely unknown. Patterns are generally thought to be more representative than single terms for representing documents. In this paper, a novel information filtering model, the Pattern-based Topic Model (PBTM), is proposed to represent text documents not only by their topic distributions at a general level but also by semantic pattern representations at a detailed, specific level, both of which contribute to accurate document representation and document relevance ranking. Extensive experiments are conducted to evaluate the effectiveness of PBTM using the TREC data collection Reuters Corpus Volume 1. The results show that the proposed model achieves outstanding performance.
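PBTM builds on an LDA layer; as a baseline sketch of that layer only (the pattern representations are the paper's contribution and are not reproduced here), using gensim with a toy corpus:

```python
from gensim import corpora, models

# Toy tokenised corpus; a real evaluation would use a collection such as
# Reuters Corpus Volume 1, as in the paper.
texts = [["machine", "learning", "model", "training"],
         ["retrieval", "document", "ranking", "query"],
         ["machine", "retrieval", "document", "model"]]

dictionary = corpora.Dictionary(texts)
bow = [dictionary.doc2bow(t) for t in texts]

# LDA assigns each document a distribution over topics (the "general level");
# PBTM additionally attaches pattern (term-set) representations to each
# topic, the "detailed specific level" not shown in this baseline.
lda = models.LdaModel(bow, num_topics=2, id2word=dictionary,
                      random_state=0, passes=10)
for doc in bow:
    print(lda.get_document_topics(doc))
```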

Relevance:

20.00%

Abstract:

Textual document sets have become an important and rapidly growing information source on the web, and text classification is one of the crucial technologies for information organisation and management. It has become more and more important and has attracted wide attention from researchers across different research fields. In this paper, feature selection methods, implementation algorithms and applications of text classification are first introduced. However, the knowledge extracted by current data-mining techniques for text classification contains much noise, which leads to much uncertainty in the classification process, arising from both knowledge extraction and knowledge usage; more innovative techniques and methods are therefore needed to improve the performance of text classification. Further improving the process of knowledge extraction, and effectively utilising the extracted knowledge, remains a critical and challenging step. A Rough Set decision-making approach is proposed that uses Rough Set decision techniques to classify more precisely those textual documents which are difficult to separate with classic text classification methods. The purpose of this paper is to give an overview of existing text classification technologies; to demonstrate Rough Set concepts and the decision-making approach based on Rough Set theory for building a more reliable and effective text classification framework with higher precision; to set up an innovative evaluation metric named CEI, which is very effective for performance assessment in this kind of research; and to propose a promising research direction for addressing the challenging problems in text classification, text mining and related fields.
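Rough Set theory classifies via lower and upper approximations under an indiscernibility relation; documents in the boundary region between the two are precisely the hard-to-separate cases the paper targets. A generic sketch of those approximations (illustrative only, not the paper's CEI framework):

```python
def approximations(equiv_classes, target):
    """Lower/upper approximations of `target` under a document partition.

    `equiv_classes` groups documents that are indiscernible on the chosen
    features; `target` is the set of documents carrying a given label.
    """
    target = set(target)
    # Lower approximation: classes entirely inside the target (certain).
    lower = {x for c in equiv_classes if set(c) <= target for x in c}
    # Upper approximation: classes overlapping the target (possible).
    upper = {x for c in equiv_classes if set(c) & target for x in c}
    return lower, upper

# Three groups of indiscernible documents and one class label.
classes = [{1, 2}, {3, 4}, {5, 6}]
relevant = {1, 2, 3}
low, up = approximations(classes, relevant)
print("certainly relevant:", low)          # {1, 2}
print("possibly relevant:", up)            # {1, 2, 3, 4}
print("boundary (hard cases):", up - low)  # {3, 4}
```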

Relevance:

20.00%

Abstract:

Server consolidation using virtualization technology has become an important technology for improving the energy efficiency of data centers, and virtual machine placement is the key problem in server consolidation. In the past few years, many approaches to virtual machine placement have been proposed. However, existing approaches consider only the energy consumed by the physical machines in a data center and neglect the energy consumed by its communication network. That network energy consumption is not trivial, and should therefore also be considered in virtual machine placement. In our preliminary research, we proposed a genetic algorithm for a new virtual machine placement problem that considers the energy consumption in both the physical machines and the communication network of a data center. Aiming to improve the performance and efficiency of the genetic algorithm, this paper presents a hybrid genetic algorithm for the energy-efficient virtual machine placement problem. Experimental results show that the hybrid genetic algorithm significantly outperforms the original genetic algorithm, and that it is scalable.
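The abstract does not say how the genetic algorithm is hybridised; a common construction is a memetic step, where each offspring is refined by a greedy local search before re-entering the population. A minimal sketch of that idea (an assumption, not the authors' operator), with a stand-in cost function:

```python
import random

def local_search(placement, cost, n_servers, iters=50, seed=0):
    """Greedy improvement step that makes a GA 'hybrid' (memetic): try
    moving single VMs to other servers and keep any move that lowers cost.
    """
    rng = random.Random(seed)
    best = list(placement)
    best_cost = cost(best)
    for _ in range(iters):
        cand = list(best)
        vm = rng.randrange(len(cand))
        cand[vm] = rng.randrange(n_servers)   # move one VM
        c = cost(cand)
        if c < best_cost:                     # accept only improvements
            best, best_cost = cand, c
    return best, best_cost

# Stand-in cost: number of active servers (a crude proxy for server energy);
# a full model would also charge for cross-server traffic, as sketched above.
cost = lambda p: len(set(p))
print(local_search([0, 1, 2, 3], cost, n_servers=4))
```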