957 results for Machine Approach
Abstract:
The presented work proposes a new approach to anomaly detection, based on changes in a population of evolving agents under stress. If conditions are appropriate, changes in the population (modeled by bioindicators) are representative of alterations to the environment. This approach, based on an ecological view, functionally improves on traditional approaches to the detection of anomalies. To verify this assertion, experiments based on Network Intrusion Detection Systems are presented, and the results are compared with the behaviour of other bioinspired approaches and machine learning techniques.
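The abstract gives no implementation details, so the following is only a hypothetical sketch of the bioindicator idea: a population of simple detector agents adapts to normal traffic, and a sharp drop in the population's mean fitness under new traffic is read as an environmental anomaly. All names, the agent model, and the traffic data are invented for illustration.

```python
import random

def make_agent():
    # Each agent is a random threshold detector over one traffic feature.
    return {"feature": random.randrange(3), "threshold": random.uniform(0, 1)}

def fitness(agent, window):
    # Fraction of records the agent still classifies as normal.
    return sum(r[agent["feature"]] <= agent["threshold"] for r in window) / len(window)

def evolve(population, window, keep=0.5):
    # Selection under the current environment, refilled with random newcomers.
    ranked = sorted(population, key=lambda a: fitness(a, window), reverse=True)
    survivors = ranked[:int(len(ranked) * keep)]
    return survivors + [make_agent() for _ in range(len(population) - len(survivors))]

def population_health(population, window):
    # The "bioindicator": mean fitness of the whole population.
    return sum(fitness(a, window) for a in population) / len(population)

random.seed(0)
normal = [[random.uniform(0.0, 0.5) for _ in range(3)] for _ in range(100)]
attack = [[random.uniform(0.5, 1.0) for _ in range(3)] for _ in range(100)]

pop = [make_agent() for _ in range(50)]
for _ in range(20):                     # let the population adapt to normal traffic
    pop = evolve(pop, normal)

print(f"health on normal traffic: {population_health(pop, normal):.2f}")
print(f"health under attack:      {population_health(pop, attack):.2f}")
# A sharp drop in population health is read as an anomaly in the environment.
```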
Abstract:
The necessity and convenience of improving accuracy in determining flood frequency are widely accepted well beyond the community of hydrologists, and interest deepens as different thresholds are established in the respective management systems. Both the scientific and management communities fully accept the necessity of living with determined levels of flood risk. Most approaches to “Advancing Methods” concentrate on statistical improvements, even though Climate is in fact not a stationary process. The question is considered here in the light of the SMARTeST research and its final highlights, policy, and recommendations. The paper looks for a better agreement between Hydrology and the Climate as a whole, seen as the result of the Global Thermal Machine, and mainly takes a historical approach, trying to show the necessity of a wider collection and analysis of climate data for statistical approaches.
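As a point of reference for the "statistical ways" the abstract refers to, here is a minimal sketch of conventional stationary flood-frequency analysis: fitting a Gumbel (EV1) distribution to annual maxima and reading off return-period quantiles. The discharge series is synthetic, and this is the standard textbook method, not the paper's proposal.

```python
import numpy as np
from scipy import stats

# Hypothetical annual-maximum discharge series (m^3/s); a real study would
# use the long, quality-controlled historical record the paper argues for.
rng = np.random.default_rng(42)
annual_maxima = rng.gumbel(300.0, 80.0, size=60)

# Fit a Gumbel (EV1) distribution, a common choice in flood-frequency analysis.
loc, scale = stats.gumbel_r.fit(annual_maxima)

# The design flood for return period T is the quantile at probability 1 - 1/T.
for T in (10, 50, 100):
    q = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
    print(f"{T:>4d}-year flood: {q:7.1f} m^3/s")
# Note the built-in stationarity assumption, exactly what the paper questions.
```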
Abstract:
By analysing the dynamic principles of the human gait, an economic gait-control analysis is performed, and passive elements are included to increase the energy efficiency in the motion control of active orthoses. Traditional orthoses use position patterns from the clinical gait analyses (CGAs) of healthy people, which are then de-normalized and adjusted to each user. These orthoses maintain a very rigid gait, and their energy cost is very high, reducing the autonomy of the user. First, to take advantage of the inherent dynamics of the legs, a state machine pattern with different gains in each state is applied to reduce the actuator energy consumption. Next, different passive elements, such as springs and brakes in the joints, are analysed to further reduce energy consumption. After an off-line parameter optimization and a heuristic improvement with genetic algorithms, a reduction in energy consumption of 16.8% is obtained by applying a state machine control pattern, and a reduction of 18.9% is obtained by using passive elements. Finally, by combining both strategies, a more natural gait is obtained, and energy consumption is reduced by 24.6% compared with a pure CGA pattern.
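A hypothetical sketch of the state-machine idea: a gait controller that switches PD gains per phase, so the leg runs nearly passively in swing and stiffens in stance. The states, gains, and transition thresholds below are invented for illustration, not the paper's optimized values.

```python
# Gain-scheduled PD control per gait phase; all numbers are illustrative.
GAINS = {                      # (Kp, Kd) per gait phase
    "stance":    (80.0, 4.0),  # stiff support while the leg is loaded
    "pre_swing": (30.0, 2.0),  # soften to exploit the leg's own dynamics
    "swing":     (10.0, 1.0),  # nearly passive, low actuator energy cost
}

def next_state(state, heel_contact, hip_angle):
    # Event-driven transitions, e.g. from foot switches and joint encoders.
    if state == "stance" and not heel_contact:
        return "pre_swing"
    if state == "pre_swing" and hip_angle < -0.1:
        return "swing"
    if state == "swing" and heel_contact:
        return "stance"
    return state

def torque(state, angle_ref, angle, velocity):
    # PD law whose gains depend on the current gait phase.
    kp, kd = GAINS[state]
    return kp * (angle_ref - angle) - kd * velocity

# One control tick: in swing, the low gains yield a small corrective torque.
state = next_state("pre_swing", heel_contact=False, hip_angle=-0.2)
print(state, torque(state, angle_ref=0.3, angle=0.25, velocity=0.8))
```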
Abstract:
New forms of natural interaction between human operators and UAVs (Unmanned Aerial Vehicles) are demanded by the military industry to achieve a better balance between UAV control and the burden on the human operator. In this work, a human machine interface (HMI) based on a novel gesture recognition system using depth imagery is proposed for the control of UAVs. Hand gesture recognition based on depth imagery is a promising approach for HMIs because it is more intuitive, natural, and non-intrusive than other alternatives using complex controllers. The proposed system is based on a Support Vector Machine (SVM) classifier that uses spatio-temporal depth descriptors as input features. The designed descriptor is based on a variation of the Local Binary Pattern (LBP) technique to work efficiently with depth video sequences. Another major consideration is the special hand sign language used for UAV control. A tradeoff between the use of natural hand signs and the minimization of inter-sign interference has been established. Promising results have been achieved on a depth-based database of hand gestures especially developed for the validation of the proposed system.
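A rough sketch of the kind of pipeline described: an LBP-style histogram is computed per depth frame, pooled over the sequence into a spatio-temporal descriptor, and fed to an SVM. The descriptor below is plain 8-neighbour LBP on synthetic frames, not the paper's depth-adapted variant.

```python
import numpy as np
from sklearn.svm import SVC

def lbp_histogram(frame):
    # Basic 8-neighbour LBP: compare each interior pixel with its neighbours
    # and accumulate the resulting 8-bit codes into a 256-bin histogram.
    c = frame[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        nb = frame[1 + dy:frame.shape[0] - 1 + dy, 1 + dx:frame.shape[1] - 1 + dx]
        code += (nb >= c).astype(int) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()

def sequence_descriptor(frames):
    # Spatio-temporal pooling: average the per-frame LBP histograms.
    return np.mean([lbp_histogram(f) for f in frames], axis=0)

def make_sequence(rng, textured):
    # Synthetic depth frames: a smooth ramp, optionally with strong texture.
    base = np.tile(np.linspace(0, 1000, 32), (32, 1))
    scale = 400 if textured else 20
    return [base + rng.uniform(0, scale, (32, 32)) for _ in range(10)]

rng = np.random.default_rng(0)
X = np.array([sequence_descriptor(make_sequence(rng, t))
              for t in (False, True) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```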
Abstract:
Services in smart environments aim to increase the quality of people's lives. One of the most important issues when developing this kind of environment is testing and validating such services. These tasks usually imply high costs and annoying or unfeasible real-world testing. In such cases, artificial societies may be used to simulate the smart environment (i.e. physical environment, equipment, and humans). With this aim, the CHROMUBE methodology guides test engineers when modeling human beings. Such models reproduce behaviors that are highly similar to the real ones. Originally, these models are based on automata whose transitions are governed by random variables. The automaton's structure and the probability distribution functions of each random variable are determined by a manual trial-and-error process. This paper presents an alternative extension of this methodology that avoids that manual process. It is based on learning human behavior patterns automatically from sensor data using machine learning techniques. The presented approach has been tested on a real scenario, where this extension has yielded highly accurate human behavior models.
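A minimal sketch of the automatic alternative: estimating an automaton's transition probabilities directly from a sensor log instead of tuning them by hand. The activity labels and the log below are invented.

```python
from collections import Counter, defaultdict

# Hypothetical sensor log: sequence of detected activities for one inhabitant.
log = ["sleep", "sleep", "kitchen", "kitchen", "leave", "kitchen",
       "sleep", "kitchen", "leave", "leave", "sleep", "sleep", "kitchen"]

# Count observed transitions between consecutive activities.
counts = defaultdict(Counter)
for src, dst in zip(log, log[1:]):
    counts[src][dst] += 1

# Normalize the counts into the automaton's transition probabilities.
transitions = {src: {dst: n / sum(c.values()) for dst, n in c.items()}
               for src, c in counts.items()}

for src, dists in transitions.items():
    print(src, "->", dists)
```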
Abstract:
Reproducible research in scientific workflows is often addressed by tracking the provenance of the produced results. While this approach allows inspecting intermediate and final results, improves understanding, and permits replaying a workflow execution, it does not ensure that the computational environment is available for subsequent executions to reproduce the experiment. In this work, we propose describing the resources involved in the execution of an experiment using a set of semantic vocabularies, so as to conserve the computational environment. We define a process for documenting the workflow application, management system, and their dependencies based on 4 domain ontologies. We then conduct an experimental evaluation using a real workflow application on an academic and a public Cloud platform. Results show that our approach can reproduce an equivalent execution environment of a predefined virtual machine image on both computing platforms.
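A small illustration, assuming the rdflib library, of the general idea: describing one workflow step's software requirements as RDF triples that a deployment service could later use to recreate an equivalent environment. The vocabulary below is invented for the example, not the paper's four domain ontologies.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Illustrative namespace only; the paper defines its own domain ontologies.
ENV = Namespace("http://example.org/env#")

g = Graph()
g.bind("env", ENV)

# Describe the software requirements of one workflow step as RDF triples.
step = ENV["alignment_step"]
g.add((step, RDF.type, ENV.WorkflowStep))
g.add((step, ENV.requiresSoftware, ENV.blast))
g.add((ENV.blast, ENV.version, Literal("2.2.26")))
g.add((ENV.blast, ENV.dependsOn, ENV.libc))

# A deployment service could read this graph and recreate an equivalent
# environment on another cloud, rather than shipping a fixed VM image.
print(g.serialize(format="turtle"))
```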
Abstract:
We present a general approach to forming structure-activity relationships (SARs). This approach is based on representing chemical structure by atoms and their bond connectivities in combination with the inductive logic programming (ILP) algorithm PROGOL. Existing SAR methods describe chemical structure by using attributes which are general properties of an object. It is not possible to map chemical structure directly to attribute-based descriptions, as such descriptions have no internal organization. A more natural and general way to describe chemical structure is to use a relational description, where the internal construction of the description maps that of the object described. Our atom and bond connectivities representation is a relational description. ILP algorithms can form SARs with relational descriptions. We have tested the relational approach by investigating the SARs of 230 aromatic and heteroaromatic nitro compounds. These compounds had been split previously into two subsets, 188 compounds that were amenable to regression and 42 that were not. For the 188 compounds, a SAR was found that was as accurate as the best statistical or neural network-generated SARs. The PROGOL SAR has the advantages that it did not need the use of any indicator variables handcrafted by an expert, and the generated rules were easily comprehensible. For the 42 compounds, PROGOL formed a SAR that was significantly (P < 0.025) more accurate than linear regression, quadratic regression, and back-propagation. This SAR is based on an automatically generated structural alert for mutagenicity.
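A toy illustration of the relational point: the molecule is a set of atom and bond facts, and a rule quantifies over them, which a flat attribute vector cannot express. The encoding and the rule below are illustrative Python stand-ins, in the spirit of ILP background knowledge, not actual PROGOL output.

```python
# Hypothetical relational encoding of a fragment of a nitroaromatic compound.
atoms = {            # atom_id: (element, atom_type, partial_charge)
    1: ("c", "aromatic", -0.12),
    2: ("c", "aromatic", 0.02),
    3: ("n", "nitro", 0.81),
    4: ("o", "nitro", -0.39),
    5: ("o", "nitro", -0.39),
}
bonds = {(1, 2, "aromatic"), (2, 3, "single"), (3, 4, "double"), (3, 5, "single")}

def bonded(a, b):
    # Bonds are undirected: check both orientations.
    return any({a, b} == {x, y} for x, y, _ in bonds)

# A relational rule in the style of a structural alert for mutagenicity:
# "a nitro nitrogen bonded to an aromatic carbon".
def nitro_on_aromatic_ring():
    return any(atoms[n][0] == "n" and atoms[n][1] == "nitro"
               and atoms[c][1] == "aromatic" and bonded(n, c)
               for n in atoms for c in atoms)

print("structural alert fires:", nitro_on_aromatic_ring())
```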
Abstract:
Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent real-time speech recognition, and understanding of naturally spoken utterances with vocabularies of 1000 to 2000 words, and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology.
Abstract:
This paper describes a range of opportunities for military and government applications of human-machine communication by voice, based on visits and contacts with numerous user organizations in the United States. The applications include some that appear to be feasible by careful integration of current state-of-the-art technology and others that will require a varying mix of advances in speech technology and in integration of the technology into applications environments. Applications that are described include (1) speech recognition and synthesis for mobile command and control; (2) speech processing for a portable multifunction soldier's computer; (3) speech- and language-based technology for naval combat team tactical training; (4) speech technology for command and control on a carrier flight deck; (5) control of auxiliary systems, and alert and warning generation, in fighter aircraft and helicopters; and (6) voice check-in, report entry, and communication for law enforcement agents or special forces. A phased approach for transfer of the technology into applications is advocated, where integration of applications systems is pursued in parallel with advanced research to meet future needs.
Abstract:
Statistical machine translation (SMT) is an approach to Machine Translation (MT) that uses statistical models whose parameter estimation is based on the analysis of existing human translations (contained in bilingual corpora). From a translation student's standpoint, this dissertation aims to explain how a phrase-based SMT system works, to determine the role of the statistical models it uses in the translation process, and to assess the quality of the translations provided the system is trained with in-domain, good-quality corpora. To that end, a phrase-based SMT system based on Moses has been trained and subsequently used for the English to Spanish translation of two texts related in topic to the training data. Finally, the quality of the output texts produced by the system has been assessed through a quantitative evaluation carried out with three different automatic evaluation measures and a qualitative evaluation based on the Multidimensional Quality Metrics (MQM).
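A minimal sketch of the quantitative side of such an evaluation: scoring a system hypothesis against a human reference with sentence-level BLEU via NLTK. The sentences are invented, and BLEU is only one of the automatic measures such a study would use.

```python
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

# Invented example: human reference vs. SMT hypothesis (Spanish output).
reference = "el sistema se entrena con corpus bilingües de calidad".split()
hypothesis = "el sistema se entrena con corpus bilingües buenos".split()

# Smoothing avoids zero scores on short sentences with missing n-grams.
smooth = SmoothingFunction().method1
score = sentence_bleu([reference], hypothesis, smoothing_function=smooth)
print(f"BLEU: {score:.3f}")
```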
Abstract:
Objective: Inpatient length of stay (LOS) is an important measure of hospital activity, health care resource consumption, and patient acuity. This research work aims at developing an incremental expectation maximization (EM) based learning approach on a mixture of experts (ME) system for on-line prediction of LOS. The use of a batch-mode learning process in most existing artificial neural networks to predict LOS is unrealistic, as the data become available over time and their patterns change dynamically. In contrast, an on-line process is capable of providing an output whenever a new datum becomes available. This on-the-spot information is therefore more useful and practical for making decisions, especially when one deals with a tremendous amount of data. Methods and material: The proposed approach is illustrated using a real example of gastroenteritis LOS data. The data set was extracted from a retrospective cohort study on all infants born in 1995-1997 and their subsequent admissions for gastroenteritis. The total number of admissions in this data set was n = 692. Linked hospitalization records of the cohort were retrieved retrospectively to derive the outcome measure, patient demographics, and associated co-morbidities information. A comparative study of the incremental learning and the batch-mode learning algorithms is considered. The performances of the learning algorithms are compared based on the mean absolute difference (MAD) between the predictions and the actual LOS, and the proportion of predictions with MAD < 1 day (Prop(MAD < 1)). The significance of the comparison is assessed through a regression analysis. Results: The incremental learning algorithm provides better on-line prediction of LOS when the system has gained sufficient training from more examples (MAD = 1.77 days and Prop(MAD < 1) = 54.3%), compared to that using the batch-mode learning. The regression analysis indicates a significant decrease of MAD (p-value = 0.063) and a significant (p-value = 0.044) increase of Prop(MAD < 1).
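A hypothetical sketch of the on-line setting: a regressor is updated one admission at a time and scored prequentially (predict each new case, then train on it) with MAD and Prop(MAD < 1). The data are synthetic and the learner is an SGD regressor for simplicity, not the paper's EM-trained mixture of experts.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

# Synthetic stand-in for LOS data: 3 patient features -> length of stay (days).
rng = np.random.default_rng(1)
X = rng.normal(size=(692, 3))
y = 3.0 + X @ np.array([0.8, -0.5, 0.3]) + rng.normal(scale=1.0, size=692)

# On-line learning: predict each new admission, then update via partial_fit.
model = SGDRegressor(learning_rate="constant", eta0=0.01, random_state=0)
errors = []
for i in range(len(X)):
    if i > 10:                                  # after a short warm-up period
        errors.append(abs(model.predict(X[i:i + 1])[0] - y[i]))
    model.partial_fit(X[i:i + 1], y[i:i + 1])

mad = np.mean(errors)
prop = np.mean(np.array(errors) < 1.0)
print(f"MAD = {mad:.2f} days, Prop(MAD < 1) = {prop:.1%}")
```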
Abstract:
An emerging issue in the field of astronomy is the integration, management and utilization of databases from around the world to facilitate scientific discovery. In this paper, we investigate application of the machine learning techniques of support vector machines and neural networks to the problem of amalgamating catalogues of galaxies as objects from two disparate data sources: radio and optical. Formulating this as a classification problem presents several challenges, including dealing with a highly unbalanced data set. Unlike the conventional approach to the problem (which is based on a likelihood ratio), machine learning does not require density estimation and is shown here to provide a significant improvement in performance. We also report some experiments that explore the importance of the radio and optical data features for the matching problem.
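One common way to cope with such an unbalanced classification problem, sketched here with invented features: a class-weighted SVM that upweights errors on the rare "match" class. This is a generic illustration, not necessarily the authors' exact setup.

```python
import numpy as np
from sklearn.metrics import classification_report
from sklearn.svm import SVC

# Synthetic stand-in for the radio/optical matching problem: true matches
# are rare, so the data set is highly unbalanced (features are invented).
rng = np.random.default_rng(7)
n_neg, n_pos = 2000, 60
X = np.vstack([rng.normal(0.0, 1.0, (n_neg, 4)),    # non-matches
               rng.normal(1.5, 1.0, (n_pos, 4))])   # true matches
y = np.array([0] * n_neg + [1] * n_pos)

# class_weight="balanced" reweights errors so the rare class is not ignored;
# note that, unlike a likelihood-ratio test, no density estimation is needed.
clf = SVC(kernel="rbf", class_weight="balanced").fit(X, y)
print(classification_report(y, clf.predict(X), digits=2))
```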
Abstract:
We present an approach to parsing relative clauses in Arabic in the tradition of the Paninian Grammar Framework [2], which leads to deriving a common logical form for equivalent sentences. Particular attention is paid to the analysis of resumptive pronouns in the retrieval of syntactico-semantic relationships. The analysis arises from the development of a lexicalised dependency grammar for Arabic that has application for machine translation.