920 results for INTELLIGENCE SYSTEMS METHODOLOGY


Relevance: 30.00%

Abstract:

Using a chain of urns, we build a Bayesian nonparametric alarm system to predict catastrophic events, such as epidemics, blackouts, etc. Unlike other alarm systems in the literature, our model is constantly updated on the basis of the available information, according to the Bayesian paradigm. The paper contains both theoretical and empirical results. In particular, we test our alarm system on a well-known time series of sunspots.
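The abstract does not spell out the urn-chain construction; as a minimal sketch of the general flavour, the snippet below uses the simplest urn scheme of this kind, a Pólya (Beta-Bernoulli) urn whose posterior predictive probability of an extreme observation triggers the alarm. The prior pseudo-counts, the 90th-percentile definition of "extreme", and the alarm threshold are all illustrative assumptions, not the paper's model.

```python
import numpy as np

def urn_alarm(series, threshold=0.3, a=1.0, b=1.0):
    """Toy Bayesian alarm: a Polya-urn (Beta-Bernoulli) posterior on the
    probability of an extreme observation; the alarm fires when the
    posterior predictive probability exceeds `threshold`. The initial urn
    composition (a, b) and the percentile cutoff are assumptions."""
    extreme = series > np.quantile(series, 0.9)     # label extreme observations
    alarms = []
    for x in extreme:
        p_next = a / (a + b)                        # predictive draw from the urn
        alarms.append(p_next > threshold)
        a, b = (a + 1.0, b) if x else (a, b + 1.0)  # reinforce the drawn colour
    return np.array(alarms)

# synthetic heavy-tailed series standing in for e.g. sunspot counts
rng = np.random.default_rng(0)
series = rng.gamma(shape=2.0, scale=30.0, size=300)
print(f"{urn_alarm(series).sum()} alarms raised over {series.size} steps")
```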

Relevance: 30.00%

Abstract:

The Wetland and Wetland CH4 Intercomparison of Models Project (WETCHIMP) was created to evaluate our present ability to simulate large-scale wetland characteristics and corresponding methane (CH4) emissions. A multi-model comparison is essential to evaluate the key uncertainties in the mechanisms and parameters leading to methane emissions. Ten modelling groups joined WETCHIMP to run eight global and two regional models with a common experimental protocol using the same climate and atmospheric carbon dioxide (CO2) forcing datasets. We reported the main conclusions from the intercomparison effort in a companion paper (Melton et al., 2013). Here we provide technical details for the six experiments, which included an equilibrium, a transient, and an optimized run plus three sensitivity experiments (temperature, precipitation, and atmospheric CO2 concentration). The diversity of approaches used by the models is summarized through a series of conceptual figures, and is used to evaluate the wide range of wetland extent and CH4 fluxes predicted by the models in the equilibrium run. We discuss relationships among the various approaches and patterns of consistency in these model predictions. Within this group of models, there are three broad classes of methods used to estimate wetland extent: extent prescribed from wetland distribution maps; prognostic relationships between hydrological state and extent, based on satellite observations; and explicit hydrological mass balances. A larger variety of approaches was used to estimate the net CH4 fluxes from wetland systems. Even though modelling of wetland extent and CH4 emissions has progressed significantly over recent decades, large uncertainties still exist when estimating CH4 emissions: there is little consensus on model structure or complexity due to knowledge gaps, different aims of the models, and the range of temporal and spatial resolutions of the models.
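Purely as a loose illustration of the ingredients this model class combines (a diagnosed wetland extent driving a temperature-sensitive CH4 flux), the toy scheme below flags grid cells as inundated from water-table depth and scales a substrate-limited base rate with a Q10 temperature response. Every function name, parameter, and value here is an assumption for illustration; no WETCHIMP model's formulation is reproduced.

```python
import numpy as np

def ch4_flux(wtd, temp, carbon, r0=0.1, q10=3.0, tref=10.0, wtd_thresh=0.1):
    """Toy wetland CH4 scheme: wetland extent diagnosed from water-table
    depth (wtd, metres below surface) and a Q10 temperature response
    scaling a substrate-limited base rate. All parameters are
    illustrative assumptions."""
    wetland = wtd <= wtd_thresh                          # inundated cells
    rate = r0 * carbon * q10 ** ((temp - tref) / 10.0)   # per-cell flux
    return np.where(wetland, rate, 0.0)

# one 4x4 grid of cells with synthetic inputs
rng = np.random.default_rng(2)
flux = ch4_flux(rng.uniform(0.0, 0.5, (4, 4)),   # water-table depth (m)
                rng.uniform(0.0, 25.0, (4, 4)),  # soil temperature (degC)
                rng.uniform(1.0, 5.0, (4, 4)))   # substrate availability
print(f"grid-mean CH4 flux: {flux.mean():.3f} (arbitrary units)")
```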

Relevance: 30.00%

Abstract:

This paper reports the results of a research project comparing two virtual collaborative environments, one offering first-person visual immersion (first-perspective interaction) and one in which the user interacts through a sound-kinetic virtual representation of himself (an avatar), as stress-coping environments for real-life situations. Recent developments in coping research propose a shift from a trait-oriented approach to coping towards a more situation-specific treatment. We defined a real-life situation as a target-oriented situation that demands a complex coping skills inventory of high self-efficacy and internal or external "locus of control" strategies. The participants were 90 normal adults with healthy or impaired coping skills, 25-40 years of age, randomly assigned to three groups of equal size with gender balance within groups. All three groups went through two phases. In Phase I (Solo), each participant was assessed individually using a three-stage assessment inspired by the transactional stress theory of Lazarus and the stress inoculation theory of Meichenbaum: a coping skills measurement over the time course of various hypothetical stressful encounters, performed under two treatment conditions and one control condition. In Condition A, the participant was given a virtual stress assessment scenario from a first-person perspective (VRFP). In Condition B, the participant was given a virtual stress assessment scenario through a behaviorally realistic, motion-controlled avatar with sonic feedback (VRSA). In Condition C, the No Treatment Condition (NTC), the participant received just an interview. In Phase II, all three groups were mixed and performed the same tasks, but in pairs. The results showed that the VRSA group performed notably better in terms of cognitive appraisals, emotions and attributions than the other two groups in Phase I (VRSA, 92%; VRFP, 85%; NTC, 34%). In Phase II, the difference again favored the VRSA group over the other two. These results indicate that a virtual collaborative environment seems to be a consistent coping environment, tapping two classes of stress: (a) aversive or ambiguous situations, and (b) loss or failure situations, in relation to the stress inoculation theory. In terms of coping behaviors, a distinction is made between self-directed and environment-directed strategies. A great advantage of the virtual collaborative environment with the behaviorally enhanced sound-kinetic avatar is that it takes team coping intentions into account at different stages. Even if the aim is to tap transactional processes in real-life situations, it may be better to conduct research using a sound-kinetic-avatar-based collaborative environment than a virtual first-person-perspective scenario alone. The VE consisted of two dual-processor PC systems, a video splitter, a digital camera and two stereoscopic CRT displays. The system was programmed in C++ with the VRScape Immersive Cluster from VRCO; it created an artificial environment that encodes the user's motion from a video camera targeted at the user's face and from physiological sensors attached to the body.

Relevance: 30.00%

Abstract:

Ore-forming and geoenvironmental systems commonly involve coupled fluid flow and chemical reaction processes. Advanced numerical methods and computational modeling have become indispensable tools for simulating such processes in recent years, enabling many hitherto unsolvable geoscience problems to be addressed. For example, computational modeling has been successfully used to solve ore-forming and mine-site contamination/remediation problems in which fluid flow and geochemical processes play important roles in the controlling dynamic mechanisms. The main purpose of this paper is to present a generalized overview of: (1) the various classes and models associated with fluid flow/chemically reacting systems, in order to highlight possible opportunities and developments for the future; (2) some more general issues that need attention in the development of computational models and codes for simulating ore-forming and geoenvironmental systems; (3) the progress achieved in geochemical modeling over the past 50 years or so; (4) the general methodology for modeling ore-forming and geoenvironmental systems; and (5) future development directions associated with modeling of ore-forming and geoenvironmental systems.
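To make "coupled fluid flow and chemical reaction" concrete, here is a minimal 1-D advection-dispersion-reaction solver of the kind such codes generalize: explicit upwind finite differences for a single dissolved species with first-order decay. The scheme, grid, and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def advect_react(c0, v=1.0, D=0.01, k=0.5, dx=0.1, dt=0.01, steps=500):
    """Minimal 1-D advection-dispersion-reaction solver:
    dc/dt = -v dc/dx + D d2c/dx2 - k c, explicit upwind differencing.
    A toy instance of a coupled fluid-flow/chemical-reaction system;
    all parameters are illustrative."""
    c = c0.copy()
    for _ in range(steps):
        adv = -v * (c - np.roll(c, 1)) / dx                         # upwind advection
        dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2  # dispersion
        c = c + dt * (adv + dif - k * c)                            # first-order decay
        c[0] = 1.0                                                  # fixed inlet concentration
    return c

profile = advect_react(np.zeros(100))
print(f"concentration near the plume front: {profile[50]:.4f}")
```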

Relevance: 30.00%

Abstract:

This chapter presents fuzzy cognitive maps (FCM) as a vehicle for Web knowledge aggregation, representation, and reasoning. The corresponding Web KnowARR framework incorporates findings from fuzzy logic. The first emphasis is on the Web KnowARR framework itself; the second is a stakeholder management use case that illustrates the framework's usefulness. This form of management is meant to help projects gain acceptance and assertiveness by actively involving stakeholders' claims on company decisions in the management process. Stakeholder maps visually (re-)present these claims, drawing partly on non-public content and partly on content that is available to the public (mostly on the Web). The Semantic Web offers opportunities not only to present public content descriptively but also to show relationships. The proposed framework can serve as the basis for the public content of stakeholder maps.
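The abstract does not give the FCM update rule; the sketch below uses the standard formulation, in which concept activations are repeatedly combined through a signed causal-weight matrix and squashed by a sigmoid until they settle. The stakeholder concepts and weights are made up for illustration and are not part of Web KnowARR.

```python
import numpy as np

def fcm_reason(W, a0, steps=50, lam=1.0):
    """Minimal fuzzy cognitive map inference. W[i, j] in [-1, 1] is the
    causal weight of concept i on concept j; activations are iterated
    to a fixed point via a sigmoid squashing function."""
    a = a0.copy()
    for _ in range(steps):
        a_new = 1.0 / (1.0 + np.exp(-lam * (a + a.dot(W))))  # update and squash
        if np.allclose(a_new, a, atol=1e-6):                 # converged
            break
        a = a_new
    return a

concepts = ["media coverage", "public claims", "project acceptance"]
W = np.array([[0.0, 0.7,  0.0],   # media coverage amplifies public claims
              [0.0, 0.0, -0.6],   # strong claims lower acceptance
              [0.0, 0.0,  0.0]])
state = fcm_reason(W, np.array([0.8, 0.1, 0.5]))
print(dict(zip(concepts, state.round(2))))
```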

Relevance: 30.00%

Abstract:

Clock synchronization on the order of nanoseconds is one of the critical factors for time-based localization. Currently used time synchronization methods were developed for the more relaxed needs of network operation, so their usability for positioning should be carefully evaluated. In this paper, we are particularly interested in GPS-based time synchronization. To judge its usability for localization, we need a method that can evaluate the achieved time synchronization with nanosecond accuracy. Our method for evaluating synchronization accuracy is inspired by signal processing algorithms and relies on fine-grained time information. The method is able to calculate the clock offset and skew between devices with nanosecond accuracy in real time. It was implemented using software-defined radio technology. We demonstrate that GPS-based synchronization suffers from a residual clock offset in the range of a few hundred nanoseconds, but that the clock skew is negligible. Finally, we determine a corresponding lower bound on the expected positioning error.
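The paper's signal-processing estimator is not described in the abstract; as a generic sketch of how offset and skew can be separated from paired fine-grained timestamps, a least-squares line fit already does the job. The synthetic clock parameters below are assumptions for illustration.

```python
import numpy as np

def offset_and_skew(t_ref, t_dev):
    """Estimate clock offset and skew by a least-squares fit of device
    timestamps against reference timestamps:
    t_dev ~ offset + (1 + skew) * t_ref."""
    slope, intercept = np.polyfit(t_ref, t_dev, 1)
    return intercept, slope - 1.0  # offset (s), skew (dimensionless)

# synthetic clock pair: 250 ns offset, 2 ppb skew, 5 ns timestamp jitter
rng = np.random.default_rng(3)
t_ref = np.linspace(0.0, 10.0, 1000)
t_dev = 250e-9 + (1.0 + 2e-9) * t_ref + rng.normal(0.0, 5e-9, t_ref.size)
off, skew = offset_and_skew(t_ref, t_dev)
print(f"offset ~ {off * 1e9:.0f} ns, skew ~ {skew * 1e9:.2f} ppb")
```

Since radio signals travel roughly 0.3 m per nanosecond, a residual offset of a few hundred nanoseconds corresponds to ranging errors of tens of metres, which is what makes such a lower bound on the positioning error meaningful.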

Relevance: 30.00%

Abstract:

Quality data are not only relevant for successful data warehousing and business intelligence applications; they are also a precondition for efficient and effective use of Enterprise Resource Planning (ERP) systems. ERP professionals in all kinds of businesses are concerned with data quality issues, as a survey conducted by the Institute of Information Systems at the University of Bern has shown. Using results of this survey, this paper demonstrates why data quality problems can occur in modern ERP systems and suggests how ERP researchers and practitioners can handle data quality issues in an ERP software environment.

Relevance: 30.00%

Abstract:

Academic and industrial research in the late 90s brought about an exponential explosion of DNA sequence data. Automated expert systems are being created to help biologists extract patterns, trends and links from this ever-deepening ocean of information. Two such systems, aimed at retrieving and subsequently utilizing phylogenetically relevant information, were developed in this dissertation, whose major objective was to automate the often difficult and confusing phylogenetic reconstruction process.

Popular phylogenetic reconstruction methods, such as distance-based methods, attempt to find an optimal tree topology (one that reflects the relationships among related sequences and their evolutionary history) by searching through the topology space. Various compromises between fast (but incomplete) and exhaustive (but computationally prohibitive) search heuristics have been suggested. The second chapter of this dissertation describes an intelligent compromise algorithm that relies on a flexible "beam" search principle from the Artificial Intelligence domain and uses pre-computed local topology reliability information to adjust the beam search space continuously.

However, sometimes even a (virtually) complete distance-based method is inferior to the significantly more elaborate (and computationally expensive) maximum likelihood (ML) method. In fact, depending on the nature of the sequence data in question, either method might prove superior. Therefore, it is difficult (even for an expert) to tell a priori which phylogenetic reconstruction method (distance-based, ML, or maybe maximum parsimony, MP) should be chosen for any particular data set.

A number of factors, often hidden, influence the performance of a method. For example, it is generally understood that for a phylogenetically "difficult" data set, more sophisticated methods (e.g., ML) tend to be more effective and thus should be chosen. However, it is the interplay of many factors that one needs to consider in order to avoid choosing an inferior method (potentially a costly mistake, both in terms of computational expense and in terms of reconstruction accuracy).

Chapter III of this dissertation details a phylogenetic reconstruction expert system that automatically selects the proper method. It uses a classifier (a Decision Tree-inducing algorithm) to map a new data set to the proper phylogenetic reconstruction method.
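The dissertation's feature set is not given in the abstract; as a minimal sketch of the Chapter III idea, the following trains a decision tree on made-up data-set descriptors (taxon count, alignment length, mean pairwise distance) labelled with the best-performing method. All feature names, values, and labels are illustrative assumptions.

```python
from sklearn.tree import DecisionTreeClassifier

# Toy classifier mapping data-set features to the recommended method.
# features: [n_taxa, alignment_length, mean_pairwise_distance]
X = [[10,  500, 0.05], [40, 1200, 0.35], [25,  800, 0.20],
     [80, 2000, 0.50], [15,  600, 0.10], [60, 1500, 0.40]]
y = ["distance", "ML", "MP", "ML", "distance", "ML"]  # best method per set

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[30, 900, 0.30]]))  # recommend a method for a new data set
```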

Relevance: 30.00%

Abstract:

Purpose. To examine the association between living in proximity to Toxics Release Inventory (TRI) facilities and the incidence of childhood cancer in the State of Texas.

Design. This is a secondary data analysis utilizing the publicly available Toxics Release Inventory (TRI), maintained by the U.S. Environmental Protection Agency, which lists the facilities that release any of the 650 TRI chemicals. Total childhood cancer cases and childhood cancer rates (ages 0-14 years) by county for the years 1995-2003 were taken from the Texas Cancer Registry, available at the Texas Department of State Health Services website. Setting: This study was limited to the child population of the State of Texas.

Method. Analysis was done using Stata version 9 and SPSS version 15.0. SaTScan was used for geographical spatial clustering of childhood cancer cases based on county centroids, using the Poisson clustering algorithm, which adjusts for population density. Pictorial maps were created using MapInfo Professional version 8.0.

Results. One hundred and twenty-five counties had no TRI facilities in their region, while 129 counties had at least one TRI facility. An increasing trend for number of facilities and total disposal was observed, except for the highest category, based on cancer rate quartiles. A linear regression analysis using log transformation for number of facilities and total disposal in predicting cancer rates was computed; however, neither variable was found to be a significant predictor. Seven significant geographical spatial clusters of counties with high childhood cancer rates (p<0.05) were indicated. Binomial logistic regression, categorizing the cancer rate into two groups (<=150 and >150), indicated an odds ratio of 1.58 (CI 1.127, 2.222) for the natural log of the number of facilities.

Conclusion. We have used a unique methodology, combining GIS and spatial clustering techniques with existing statistical approaches, to examine the association between living in proximity to TRI facilities and the incidence of childhood cancer in the State of Texas. Although a concrete association was not indicated, further studies examining specific TRI chemicals are required. This information can enable researchers and the public to identify potential concerns, gain a better understanding of potential risks, and work with industry and government to reduce toxic chemical use, disposal or other releases and the risks associated with them. TRI data, in conjunction with other information, can be used as a starting point in evaluating exposures and risks.
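The shape of the abstract's final analysis (a binomial logistic regression of a dichotomized county cancer rate on the natural log of facility counts) can be sketched as below. The data are synthetic placeholders, not the Texas registry data, and the assumed coefficients are arbitrary.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in: 254 counties, each with >= 1 TRI facility so the
# log transform is defined; the "true" relationship is an assumption.
rng = np.random.default_rng(4)
facilities = rng.integers(1, 60, size=254)        # TRI facilities per county
log_fac = np.log(facilities)
p = 1.0 / (1.0 + np.exp(-(-1.0 + 0.45 * log_fac)))  # assumed true model
high_rate = rng.binomial(1, p)                    # cancer rate > 150? (0/1)

X = sm.add_constant(log_fac)
fit = sm.Logit(high_rate, X).fit(disp=False)
# odds ratio per unit increase in ln(number of facilities)
print(f"odds ratio: {np.exp(fit.params[1]):.2f}")
```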

Relevance: 30.00%

Abstract:

Mechanisms that allow pathogens to colonize the host are not the product of isolated genes, but instead emerge from the concerted operation of regulatory networks. Therefore, identifying the components and systemic behavior of networks is necessary for a better understanding of gene regulation and pathogenesis. To this end, I have developed systems biology approaches to study transcriptional and post-transcriptional gene regulation in bacteria, with an emphasis on the human pathogen Mycobacterium tuberculosis (Mtb). First, I developed a network response method to identify parts of the Mtb global transcriptional regulatory network utilized by the pathogen to counteract phagosomal stresses and survive within resting macrophages. The method unveiled transcriptional regulators and associated regulons utilized by Mtb to establish a successful infection of macrophages throughout the first 14 days of infection. Additionally, this network-based analysis identified the production of Fe-S proteins coupled to lipid metabolism through the alkane hydroxylase complex as a possible strategy employed by Mtb to survive in the host. Second, I developed a network inference method to infer the small non-coding RNA (sRNA) regulatory network in Mtb. The method identifies sRNA-mRNA interactions by integrating a priori knowledge of possible binding sites with structure-driven identification of binding sites. The reconstructed network was useful for predicting functional roles for the multitude of sRNAs recently discovered in the pathogen, several of which were postulated to be involved in virulence-related processes. Finally, I applied a combined experimental and computational approach to study post-transcriptional repression mediated by small non-coding RNAs in bacteria. Specifically, a probabilistic ranking methodology termed rank-conciliation was developed to infer sRNA-mRNA interactions based on multiple types of data. The method was shown to improve target prediction in Escherichia coli and is therefore useful for prioritizing candidate targets for experimental validation.
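The rank-conciliation method itself is probabilistic and not detailed in the abstract; as a loose illustration of the underlying idea (merging rankings of candidate sRNA-mRNA pairs produced by different evidence types), the sketch below uses a simple Borda-style average of ranks. The evidence sources and values are made up.

```python
import numpy as np

def conciliate_ranks(rank_lists):
    """Toy rank aggregation: candidate sRNA-mRNA pairs ranked
    independently by several evidence types are merged by average rank
    (0 = best). The real rank-conciliation method is probabilistic;
    this average is only an illustration."""
    ranks = np.array(rank_lists, dtype=float)
    return np.argsort(ranks.mean(axis=0))  # candidate indices, best first

# three hypothetical evidence sources ranking five candidate target mRNAs
seq_rank    = [0, 3, 1, 4, 2]   # sequence complementarity
struct_rank = [1, 2, 0, 4, 3]   # structural accessibility
expr_rank   = [0, 4, 2, 3, 1]   # expression correlation
print(conciliate_ranks([seq_rank, struct_rank, expr_rank]))
```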