918 results for Information Models
Abstract:
High-resolution digital elevation models (DEMs) of Santiaguito and Pacaya volcanoes, Guatemala, were used to estimate volume changes and eruption rates between 1954 and 2001. The DEMs were generated from contour maps and aerial photography and analyzed in ArcGIS 9.0®. Because both volcanoes grew substantially over the five-decade period, they provide a good data set for exploring effective methodology for estimating volume changes. The analysis shows that the Santiaguito dome complex grew by 0.78 ± 0.07 km³ (0.52 ± 0.05 m³ s⁻¹) over the 1954-2001 period, with nearly all the growth occurring on the El Brujo (1958-75) and Caliente (1971-2001) domes. Adding information from field data prior to 1954, the total volume extruded from Santiaguito since 1922 is estimated at 1.48 ± 0.19 km³. Santiaguito’s growth rate is lower than that of most other volcanic domes, but it has been sustained over a much longer period and has shifted toward more exogenous and progressively slower extrusion with time. At Santiaguito, some of the material added to the dome is subsequently transported downstream by block-and-ash flows, mudflows and floods, causing channel shifting and areas of aggradation and erosion. At Pacaya volcano a total volume of 0.21 ± 0.05 km³ was erupted between 1961 and 2001, for an average extrusion rate of 0.17 ± 0.04 m³ s⁻¹. Both the Santiaguito and Pacaya eruption rate estimates reported here are minima, because they do not include materials transported downslope after eruption or ashfall, which may spread significant volumes of material over broad areas. Regular analysis of high-resolution DEMs using the methods outlined here would help quantify the effects of fluvial changes on downstream populated areas, as well as assist in tracking hazards related to dome collapse and eruption.
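As a hedged illustration of the DEM-differencing step described above (not the authors' ArcGIS 9.0 workflow), the following minimal numpy sketch subtracts two co-registered DEM grids, converts the summed elevation change into a volume, and turns a multi-decade net volume into a mean extrusion rate; the function and parameter names are assumptions.

```python
import numpy as np

def volume_change(dem_old, dem_new, cell_size_m, vertical_error_m=0.0):
    """Net volume change (m^3) between two co-registered DEM grids.

    dem_old, dem_new : 2-D arrays of elevations (m) on the same grid
    cell_size_m      : ground size of one cell edge (m)
    vertical_error_m : per-cell vertical uncertainty for a crude error bound
    """
    diff = dem_new - dem_old                      # elevation change per cell (m)
    cell_area = cell_size_m ** 2                  # m^2 per cell
    net_volume = np.nansum(diff) * cell_area      # m^3 (gains minus losses)
    n_valid = np.count_nonzero(~np.isnan(diff))
    volume_error = vertical_error_m * cell_area * np.sqrt(n_valid)
    return net_volume, volume_error

# Converting a multi-decade net volume into a mean extrusion rate (m^3/s):
seconds_1954_2001 = 47 * 365.25 * 24 * 3600
# rate = net_volume / seconds_1954_2001
```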
Abstract:
OBJECTIVES: This paper is concerned with checking the goodness-of-fit of binary logistic regression models. For practitioners of data analysis, the broad classes of procedures for checking goodness-of-fit available in the literature are described. The challenges of model checking in the context of binary logistic regression are reviewed. As a viable solution, a simple graphical procedure for checking goodness-of-fit is proposed. METHODS: The proposed graphical procedure relies on pieces of information available from any logistic analysis; the focus is on combining and presenting these in an informative way. RESULTS: The information gained using this approach is illustrated with three examples. In the discussion, the proposed method is put into context and compared with other graphical procedures for checking the goodness-of-fit of binary logistic models available in the literature. CONCLUSION: A simple graphical method can significantly improve the understanding of any logistic regression analysis and help to prevent faulty conclusions.
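The specific graphical procedure proposed in the paper is not reproduced here; as a hedged illustration of the general class of graphical goodness-of-fit checks it belongs to, the sketch below fits a logistic model to synthetic data and draws a calibration plot comparing mean predicted probabilities with observed event rates by decile.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
p_true = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 1.2 * X[:, 1])))
y = rng.binomial(1, p_true)

model = LogisticRegression().fit(X, y)
p_hat = model.predict_proba(X)[:, 1]

# Group observations into deciles of predicted probability and compare the
# mean predicted probability with the observed event rate in each group.
edges = np.quantile(p_hat, np.linspace(0, 1, 11))
idx = np.clip(np.digitize(p_hat, edges[1:-1]), 0, 9)
pred_mean = [p_hat[idx == k].mean() for k in range(10)]
obs_rate = [y[idx == k].mean() for k in range(10)]

plt.plot([0, 1], [0, 1], "k--", label="perfect calibration")
plt.scatter(pred_mean, obs_rate)
plt.xlabel("mean predicted probability (decile)")
plt.ylabel("observed event rate")
plt.legend()
plt.show()
```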
Abstract:
Background: The literature on the applications of homeopathy for controlling plant diseases, in both plant pathological models and field trials, was first reviewed by Scofield in 1984. No other review of homeopathy in plant pathology has been published since, although much new research has subsequently been carried out using more advanced methods. Objectives: To conduct an up-to-date review of the existing literature on basic research in homeopathy using phytopathological models and experiments in the field. Methods: A literature search was carried out on publications from 1969 to 2009 for papers that reported experiments on homeopathy using phytopathological models (in vitro and in planta) and field trials. The selected papers were summarized and analysed on the basis of a Manuscript Information Score (MIS) to identify those that provided sufficient information for proper interpretation (MIS ≥ 5). These were then evaluated using a Study Methods Evaluation Procedure (SMEP). Results: A total of 44 publications on phytopathological models were identified: 19 papers included statistics, and 6 studies had MIS ≥ 5. Nine publications reported field trials, 6 of them with MIS ≥ 5. In general, significant and reproducible effects were found with decimal and centesimal potencies, including dilution levels beyond Avogadro's number. Conclusions: The prospects for homeopathic treatments in agriculture are promising, but much more experimentation is needed, especially at the field level, and on potentisation techniques, effective potency levels and conditions for reproducibility. Phytopathological models may also develop into useful tools to answer pharmaceutical questions.
Abstract:
The Plasma and Supra-Thermal Ion Composition (PLASTIC) instrument is one of four experiment packages on board the two identical STEREO spacecraft A and B, which were successfully launched from Cape Canaveral on 26 October 2006. During the two years of the nominal STEREO mission, PLASTIC is providing the plasma characteristics of protons, alpha particles, and heavy ions. PLASTIC also provides key diagnostic measurements in the form of the mass and charge-state composition of heavy ions. Three measurements (E/q, time of flight, and E_SSD) from the pulse-height raw data are used to characterize the solar wind ions from the solar wind sector, and part of the suprathermal particles from the wide-angle partition, with respect to mass, atomic number and charge state. In this paper, we present a new method for flight data analysis based on simulations of the PLASTIC response to solar wind ions. We present the response of the entrance system / energy analyzer in analytical form. Based on stopping-power theory, we use an analytical expression for the energy loss of the ions as they pass through a thin carbon foil, which allows us to model analytically the response of the time-of-flight mass spectrometer to solar wind ions. We also present a new version of the analytical response of the solid-state detectors to solar wind ions. Various important parameters needed for our models were derived from calibration data and from the first flight measurements obtained from STEREO-A. Using the information from each measured event registered in full resolution in the Pulse Height Analysis words, we derived a new algorithm for the analysis of both existing and future data sets of a similar nature, which was tested and works well. This algorithm allows us to obtain, for each measured event, the mass, atomic number and charge state in the correct physical units. Finally, an important criterion was developed for filtering our Fe raw flight data set from the pulse-height data without discriminating among charge states.
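As a hedged sketch of how E/q, time of flight and the residual SSD energy together determine mass, mass per charge and charge state, the following applies the standard time-of-flight mass-spectrometry relations; the flight path, post-acceleration voltage and the foil/SSD correction factors alpha and beta are placeholder assumptions, not the calibrated PLASTIC values derived in the paper.

```python
import numpy as np

KEV_TO_J = 1.602176634e-16     # keV -> joule
AMU_TO_KG = 1.66053906660e-27  # atomic mass unit -> kg

def tof_composition(e_per_q_kev, tof_ns, e_ssd_kev, flight_path_m=0.08,
                    u_acc_kv=20.0, alpha=0.9, beta=0.7):
    """Standard TOF mass-spectrometry relations (illustrative values only).

    e_per_q_kev   : energy per charge selected by the electrostatic analyzer (keV/e)
    tof_ns        : measured time of flight (ns)
    e_ssd_kev     : residual energy measured in the solid-state detector (keV)
    flight_path_m : length of the TOF section (assumed)
    u_acc_kv      : post-acceleration voltage (assumed)
    alpha         : fraction of energy retained after the carbon foil (assumed)
    beta          : SSD pulse-height-defect correction factor (assumed)
    """
    v = flight_path_m / (tof_ns * 1e-9)                                # ion speed (m/s)
    # mass per charge from the total energy per charge after the foil
    m_per_q = 2.0 * alpha * (e_per_q_kev + u_acc_kv) * KEV_TO_J / v**2  # kg per elementary charge
    # mass from the residual energy deposited in the SSD
    m = 2.0 * (e_ssd_kev / beta) * KEV_TO_J / v**2                      # kg
    q = m / m_per_q                                                     # charge state (e)
    return m / AMU_TO_KG, q

mass_amu, charge = tof_composition(e_per_q_kev=50.0, tof_ns=54.0, e_ssd_kev=440.0)
print(round(mass_amu), round(charge))   # roughly an iron-like ion (~55 amu, charge ~10) for these assumed inputs
```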
Abstract:
Systems must co-evolve with their context. Reverse engineering tools are a great help in this process of required adaptation. In order for these tools to be flexible, they work with models: abstract representations of the source code. The extraction of such information from source code can be done using a parser. However, building new parsers is fairly tedious, and this is made worse by the fact that it has to be done over and over again for every language we want to analyze. In this paper we propose a novel approach that minimizes the language-specific knowledge required to extract models from code in that language, by reflecting on the implementation of the preparsed ASTs provided by an IDE. In a second phase we use a technique referred to as Model Mapping by Example to map platform-dependent models onto domain-specific models.
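The approach itself reflects over the preparsed ASTs of an IDE; as a loosely analogous, hedged sketch, the snippet below uses Python's own ast module (an assumption, standing in for the IDE-provided AST) to extract a crude structural model of classes, methods and functions without writing a dedicated parser.

```python
import ast

def extract_model(source: str) -> dict:
    """Build a crude structural model (classes, methods, top-level functions)
    from an already-parsed AST instead of writing a dedicated parser."""
    tree = ast.parse(source)
    model = {"classes": {}, "functions": []}
    for node in tree.body:
        if isinstance(node, ast.ClassDef):
            model["classes"][node.name] = [
                n.name for n in node.body if isinstance(n, ast.FunctionDef)
            ]
        elif isinstance(node, ast.FunctionDef):
            model["functions"].append(node.name)
    return model

print(extract_model("class A:\n    def m(self): pass\n\ndef f(): pass\n"))
# {'classes': {'A': ['m']}, 'functions': ['f']}
```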
Abstract:
In this paper two models for the simulation of glucose-insulin metabolism of children with Type 1 diabetes are presented. The models are based on the combined use of Compartmental Models (CMs) and artificial Neural Networks (NNs). Data from children with Type 1 diabetes, stored in a database, have been used as input to the models. The data are taken from four children with Type 1 diabetes and contain information about glucose levels taken from a continuous glucose monitoring system, insulin intake and food intake, along with the corresponding times. The influence of the administered insulin on plasma insulin concentration, as well as the effect of food intake on glucose input into the blood from the gut, is estimated by the CMs. The outputs of the CMs, along with previous glucose measurements, are fed to a NN, which provides short-term prediction of glucose values. For comparison, two different NN architectures have been tested: a Feed-Forward NN (FFNN) trained with the back-propagation algorithm with adaptive learning rate and momentum, and a Recurrent NN (RNN) trained with the Real Time Recurrent Learning (RTRL) algorithm. The results indicate that the best prediction performance is achieved by the RNN.
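As a hedged, toy analogue of the prediction stage (a small feed-forward regressor on synthetic data, not the paper's FFNN/RNN or its RTRL training), the sketch below combines stand-ins for the compartmental-model outputs with previous glucose readings to predict a future glucose value; all variable names and numbers are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic stand-ins for the model inputs: previous glucose samples plus
# compartmental-model outputs (plasma insulin and gut glucose absorption).
n = 600
prev_glucose = rng.normal(120, 30, size=(n, 3))     # last three CGM readings (mg/dL)
plasma_insulin = rng.normal(15, 5, size=(n, 1))     # CM-estimated plasma insulin
gut_absorption = rng.normal(2, 1, size=(n, 1))      # CM-estimated glucose input from the gut
X = np.hstack([prev_glucose, plasma_insulin, gut_absorption])

# Toy target: 30-minute-ahead glucose as a noisy function of the inputs.
y = (prev_glucose[:, -1] - 3.0 * plasma_insulin[:, 0]
     + 10.0 * gut_absorption[:, 0] + rng.normal(0, 5, n))

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X[:500], y[:500])
print("held-out R^2:", round(model.score(X[500:], y[500:]), 3))
```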
Abstract:
In the laboratory of Dr. Dieter Jaeger at Emory University, we use computer simulations to study how the biophysical properties of neurons—including their three-dimensional structure, passive membrane resistance and capacitance, and active membrane conductances generated by ion channels—affect the way that the neurons transfer synaptic inputs into the action potential streams that represent their output. Because our ultimate goal is to understand how neurons process and relay information in a living animal, we try to make our computer simulations as realistic as possible. As such, the computer models reflect the detailed morphology and all of the ion channels known to exist in the particular neuron types being simulated, and the model neurons are tested with synaptic input patterns that are intended to approximate the inputs that real neurons receive in vivo. The purpose of this workshop tutorial was to explain what we mean by ‘in vivo-like’ synaptic input patterns, and how we introduce these input patterns into our computer simulations using the freely available GENESIS software package (http://www.genesis-sim.org/GENESIS). The presentation was divided into four sections: first, an explanation of what we are talking about when we refer to in vivo-like synaptic input patterns
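A hedged sketch of the simplest kind of randomized drive often used for such 'in vivo-like' synaptic input, a homogeneous Poisson spike train, is given below in Python; it is not the GENESIS scripting discussed in the tutorial, and the rates and counts are arbitrary assumptions.

```python
import numpy as np

def poisson_spike_train(rate_hz, duration_s, dt=1e-4, rng=None):
    """Generate spike times for a homogeneous Poisson process, a simple
    stand-in for an 'in vivo-like' background synaptic input train."""
    rng = rng or np.random.default_rng()
    t = np.arange(0.0, duration_s, dt)
    spikes = rng.random(t.size) < rate_hz * dt   # independent firing probability per time step
    return t[spikes]

# e.g. 100 excitatory inputs firing at 5 Hz over 2 s of simulated time
trains = [poisson_spike_train(5.0, 2.0) for _ in range(100)]
print(sum(len(tr) for tr in trains), "total input spikes")
```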
Abstract:
In January 2012, Poland witnessed massive protests, both in the streets and on the Internet, opposing ratification of the Anti-Counterfeiting Trade Agreement, which triggered a wave of strong anti-ACTA movements across Europe. In Poland, these protests had further far-reaching consequences, as they not only changed the government's initial position on the controversial treaty but also started a public debate on the role of copyright law in the information society. Moreover, as a result of these events, the Polish Ministry for Administration and Digitisation launched a round table, gathering various stakeholders to negotiate a potential compromise on copyright law that would satisfy the conflicting interests of various actors. This contribution will focus on a description of this massive resentment towards ACTA and a discussion of its potential reasons. Furthermore, the mechanisms that led to the extraordinary influence of the anti-ACTA movement on governmental decisions in Poland will be analysed through the application of models and theories stemming from the social sciences. The importance of procedural justice in the copyright legislation process, especially its influence on the image of copyright law and obedience to its norms, will also be emphasised.
Abstract:
Simulation techniques are almost indispensable in the analysis of complex systems. Material flow and related information flow processes in logistics often possess such complexity. Further problems arise as the processes change over time and also pose a Big Data problem. To cope with these issues, adaptive simulations are used more and more frequently. This paper presents a few relevant advanced simulation models and introduces a novel model structure which unifies the modelling of geometrical relations and time processes. In this way the process structure and its geometric relations can be handled in a well-understandable and transparent way. The capabilities and applicability of the model are also presented via a demonstration example.
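Since the abstract does not detail the model structure, the following is only a hedged, illustrative sketch of one way geometric relations and time processes could live in a single simulation entity, with a process duration derived directly from the layout geometry; every name and number is an assumption, not the paper's actual structure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Station:
    """A node carrying both geometric and temporal information:
    a position in the layout and a log of timed material movements."""
    name: str
    position: Tuple[float, float]                                   # geometric relation: x, y in the layout (m)
    events: List[Tuple[float, str]] = field(default_factory=list)   # time process: (time, event)

    def log(self, t: float, what: str) -> None:
        self.events.append((t, what))

def transport_time(a: Station, b: Station, speed_m_s: float) -> float:
    """Derive a process duration directly from the geometric relation."""
    dx = b.position[0] - a.position[0]
    dy = b.position[1] - a.position[1]
    return (dx**2 + dy**2) ** 0.5 / speed_m_s

src, dst = Station("milling", (0, 0)), Station("assembly", (30, 40))
t_arrive = transport_time(src, dst, speed_m_s=1.0)
dst.log(t_arrive, "pallet arrives")
print(round(t_arrive, 1), dst.events)   # 50.0 [(50.0, 'pallet arrives')]
```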
Abstract:
In studies related to the deep geological disposal of radioactive waste, it is current practice to transfer external information (e.g. from other sites, from underground rock laboratories or from natural analogues) to safety cases for specific projects. Transferable information most commonly includes parameters, investigation techniques, process understanding, conceptual models and high-level conclusions on system behaviour. Prior to transfer, the basis of transferability needs to be established. In argillaceous rocks, the most relevant common feature is the microstructure of the rocks, essentially determined by the properties of clay minerals. Examples from the Swiss and French programmes show how the transfer of information was handled and justified. These examples illustrate how transferability depends on the stage of development of a repository safety case and highlight the need for adequate system understanding at all sites involved to support the transfer.
Abstract:
The mid-Holocene (6 kyr BP; thousand years before present) is a key period for studying the consistency between model results and proxy-based reconstructions, as it corresponds to a standard test for models and a reasonable number of proxy-based records is available. Taking advantage of this relatively large amount of information, we have compared a compilation of 50 air and sea surface temperature reconstructions with the results of three simulations performed with general circulation models and one carried out with LOVECLIM, a model of intermediate complexity. The conclusions derived from this analysis confirm that models and data agree on the large-scale spatial pattern, but that the models underestimate the magnitude of some observed changes and that large discrepancies are observed at the local scale. To further investigate the origin of those inconsistencies, we have constrained LOVECLIM to follow the signal recorded by the proxies selected in the compilation, using a data-assimilation method based on a particle filter. In one simulation all 50 proxy-based records are used, while in the other two only the continental or only the oceanic proxy-based records constrain the model results. As expected, data assimilation improves the consistency between model results and the reconstructions. In particular, this is achieved in a robust way in all the experiments through a strengthening of the mid-latitude westerlies that warms northern Europe. Furthermore, the comparison of the LOVECLIM simulations with and without data assimilation has also objectively identified 16 proxy-based paleoclimate records whose reconstructed signal is incompatible either with the signal recorded by some other proxy-based records or with the model physics.
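As a hedged illustration of the particle-filter idea behind the data assimilation (not LOVECLIM's implementation), the sketch below weights an ensemble of model states by their Gaussian likelihood against proxy-based reconstructions and resamples the ensemble accordingly; the array shapes and error values are assumptions.

```python
import numpy as np

def particle_filter_step(simulated, observed, obs_error, rng=None):
    """One assimilation step of a basic particle filter: weight each ensemble
    member (particle) by its fit to the reconstructions, then resample
    particles in proportion to the weights.

    simulated : (n_particles, n_sites) model temperatures at the proxy sites
    observed  : (n_sites,) reconstructed temperatures
    obs_error : (n_sites,) reconstruction uncertainty (standard deviation)
    """
    rng = rng or np.random.default_rng()
    residual = (simulated - observed) / obs_error
    log_w = -0.5 * np.sum(residual**2, axis=1)        # Gaussian log-likelihood
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(len(w), size=len(w), p=w)        # resample
    return simulated[idx], w

# toy use: 32 particles, 50 proxy sites
rng = np.random.default_rng(0)
sim = rng.normal(0.0, 1.0, size=(32, 50))
obs = rng.normal(0.5, 1.0, size=50)
_, weights = particle_filter_step(sim, obs, obs_error=np.full(50, 1.0), rng=rng)
print("effective sample size:", round(1.0 / np.sum(weights**2), 1))
```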
Abstract:
Ontologies and Methods for Interoperability of Engineering Analysis Models (EAMs) in an e-Design Environment. September 2007. Neelima Kanuri, B.S., Birla Institute of Technology and Sciences, Pilani, India; M.S., University of Massachusetts Amherst. Directed by: Professor Ian Grosse. Interoperability is the ability of two or more systems to exchange and reuse information efficiently. This thesis presents new techniques for interoperating engineering tools using ontologies as the basis for representing, visualizing, reasoning about, and securely exchanging abstract engineering knowledge between software systems. The specific engineering domain that is the primary focus of this report is the modeling knowledge associated with the development of engineering analysis models (EAMs). This abstract modeling knowledge has been used to support the integration of analysis and optimization tools in iSIGHT-FD, a commercial engineering environment. ANSYS, a commercial FEA tool, has been wrapped as an analysis service available inside iSIGHT-FD. An engineering analysis modeling (EAM) ontology has been developed and instantiated to form a knowledge base for representing analysis modeling knowledge. The instances of the knowledge base are the analysis models of real-world applications. To illustrate how abstract modeling knowledge can be exploited for useful purposes, a cantilever I-beam design optimization problem has been used as a test-bed proof-of-concept application. Two distinct finite element models of the I-beam are available to analyze a given beam design: a beam-element finite element model with potentially lower accuracy but significantly reduced computational cost, and a high-fidelity, high-cost shell-element finite element model. The goal is to obtain an optimized I-beam design at minimum computational expense. An intelligent knowledge-base tool was developed and implemented in FiPER. This tool reasons about the modeling knowledge to intelligently switch between the beam and the shell element models during an optimization process, selecting the best analysis model for a given optimization design state. In addition to improved interoperability and design optimization, methods are developed and presented that demonstrate the ability to operate on ontological knowledge bases to perform important engineering tasks. One such method is an automatic technical report generation method, which converts the modeling knowledge associated with an analysis model into a flat technical report. The second is a secure knowledge-sharing method, which allocates permissions to portions of knowledge to control knowledge access and sharing. Acting together, both methods enable recipient-specific, fine-grained controlled knowledge viewing and sharing in an engineering workflow integration environment such as iSIGHT-FD. These methods play an efficient role in reducing the large-scale inefficiencies in current product design and development cycles caused by poor knowledge sharing and reuse between people and software engineering tools. This work is a significant advance in both the understanding and the application of knowledge integration in a distributed engineering design framework.
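The thesis implements the model-switching logic as an ontology-backed knowledge-base tool inside FiPER; as a hedged, greatly simplified analogue, the sketch below encodes the same idea as a toy decision rule that picks the cheap beam-element model during exploration and the costly shell-element model when accuracy matters. The threshold, flags and model names are assumptions, not the thesis's actual reasoning rules.

```python
def select_analysis_model(stress_margin, near_optimum, cheap_model_trust=0.8):
    """Toy decision rule for switching between a cheap beam-element model
    and an expensive shell-element model during design optimization.

    stress_margin     : fraction of the allowable stress used by the current design
    near_optimum      : True once the optimizer has nearly converged
    cheap_model_trust : stress-margin threshold below which the cheap model
                        is considered adequate (assumed value)
    """
    if near_optimum or stress_margin > cheap_model_trust:
        return "shell_element_fem"     # high fidelity, high cost
    return "beam_element_fem"          # lower accuracy, much cheaper

# early exploration far from the constraints -> cheap model
print(select_analysis_model(stress_margin=0.4, near_optimum=False))  # beam_element_fem
# final verification near the optimum -> high-fidelity model
print(select_analysis_model(stress_margin=0.75, near_optimum=True))  # shell_element_fem
```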
Abstract:
OBJECTIVE Crohn's disease is a chronic inflammatory process that has recently been associated with a higher risk of early implant failure. Herein we provide information on the impact of colitis on peri-implant bone formation using preclinical models of chemically induced colitis. METHODS Colitis was induced by intrarectal instillation of 2,4,6-trinitrobenzene sulfonic acid (TNBS). Colitis was also induced by feeding rats dextran sodium sulfate (DSS) in drinking water. One week after disease induction, titanium miniscrews were inserted into the tibia. Four weeks after implantation, peri-implant bone volume per tissue volume (BV/TV) and bone-to-implant contact (BIC) were determined by histomorphometric analysis. RESULTS Cortical histomorphometric parameters were similar in the control (n = 10), DSS (n = 10) and TNBS (n = 8) groups. Cortical BV/TV was 92.2 ± 3.7%, 92.0 ± 3.0% and 92.6 ± 2.7%, and cortical BIC was 81.3 ± 8.8%, 83.2 ± 8.4% and 84.0 ± 7.0%, respectively. No significant differences were observed when comparing medullary BV/TV (19.5 ± 6.4%, 16.2 ± 5.6% and 15.4 ± 9.0%) and medullary BIC (48.8 ± 12.9%, 49.2 ± 6.2% and 41.9 ± 11.7%), respectively. Successful induction of colitis was confirmed by loss of body weight and colon morphology. CONCLUSIONS The results suggest that bone regeneration around implants is not impaired in chemically induced colitis models. Considering that Crohn's disease can affect any part of the gastrointestinal tract, including the mouth, our model only partially reflects the clinical situation.
Abstract:
According to Bandura (1997), efficacy beliefs are a primary determinant of motivation. Still, very little is known about the processes through which people integrate situational factors to form efficacy beliefs (Myers & Feltz, 2007). The aim of this study was to gain insight into the cognitive construction of subjective group-efficacy beliefs. Only with a sound understanding of those processes is there a sufficient basis for deriving psychological interventions aimed at group-efficacy beliefs. According to cognitive theories (e.g., Miller, Galanter, & Pribram, 1973), individual group-efficacy beliefs can be seen as the result of a comparison between the demands of a group task and the resources of the performing group. At the center of this comparison are internally represented structures of the group task and plans to perform it. The empirical plausibility of this notion was tested using functional measurement theory (Anderson, 1981). Twenty-three students (M = 23.30 years; SD = 3.39; 35% female) of the University of Bern repeatedly judged the efficacy of groups in different group tasks. The groups consisted of the subject and one or two further fictive group members, the latter manipulated by their level (low, medium, high) of task-relevant abilities. Data obtained from multiple full factorial designs were structured with individuals as second-level units and analyzed using mixed linear models. The task-relevant abilities of group members, specified as fixed factors, all had highly significant effects on subjects' group-efficacy judgments. The effect sizes of the ability factors proved to depend on the respective ability's importance in a given task. In additive tasks (Steiner, 1972) group resources were integrated in a linear fashion, whereas significant interactions between factors were obtained in interdependent tasks. The results also showed that people take into account other group members' efficacy beliefs when forming their own group-efficacy beliefs. The results support the notion that personal group-efficacy beliefs are obtained by comparing the demands of a task with the performing group's resources. Psychological factors such as other team members' efficacy beliefs are thereby considered task-relevant resources and affect subjective group-efficacy beliefs. This latter finding underlines the adequacy of multidimensional measures. While the validity of collective efficacy measures is usually estimated by how well they predict performance, the results of this study allow for a more internal validity criterion. It is concluded that Information Integration Theory holds potential to further our understanding of people's cognitive functioning in sport-relevant situations.
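As a hedged sketch of the analysis strategy (full factorial ability manipulations analyzed with a mixed linear model, subjects as second-level units), the snippet below fits such a model to synthetic judgments with statsmodels; the column names, effect sizes and noise levels are assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Synthetic stand-in for the repeated-judgment data: each subject rates
# group efficacy for every combination of two fictive members' ability levels.
levels = [1, 2, 3]   # low, medium, high task-relevant ability
rows = []
for subject in range(23):
    subject_bias = rng.normal(0, 5)     # random intercept per subject
    for a1 in levels:
        for a2 in levels:
            rating = 30 + 8 * a1 + 6 * a2 + subject_bias + rng.normal(0, 4)
            rows.append({"subject": subject, "ability1": a1,
                         "ability2": a2, "rating": rating})
df = pd.DataFrame(rows)

# Mixed linear model: ability factors as fixed effects (with interaction),
# subjects as the second-level (random-intercept) units.
model = smf.mixedlm("rating ~ ability1 * ability2", df, groups=df["subject"])
print(model.fit().summary())
```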
Abstract:
In this article, we perform an extensive study of flavor observables in a two-Higgs-doublet model with generic Yukawa structure (of type III). This model is interesting not only because it is the decoupling limit of the minimal supersymmetric standard model but also because of its rich flavor phenomenology, which allows for sizable effects not only in flavor-changing neutral-current (FCNC) processes but also in tauonic B decays. We examine the possible effects in flavor physics and constrain the model both from tree-level processes and from loop observables. The free parameters of the model are the heavy Higgs mass, tanβ (the ratio of vacuum expectation values) and the “nonholomorphic” Yukawa couplings ϵ^f_ij (f = u, d, ℓ). In our analysis we constrain the elements ϵ^f_ij in various ways: In a first step we give order-of-magnitude constraints on ϵ^f_ij from ’t Hooft’s naturalness criterion, finding that all ϵ^f_ij must be rather small unless the third generation is involved. In a second step, we constrain the Yukawa structure of the type-III two-Higgs-doublet model from tree-level FCNC processes (B_s,d → μ+μ−, K_L → μ+μ−, D̄0 → μ+μ−, ΔF = 2 processes, τ− → μ−μ+μ−, τ− → e−μ+μ− and μ− → e−e+e−) and observe that all flavor off-diagonal elements of these couplings, except ϵ^u_32,31 and ϵ^u_23,13, must be very small in order to satisfy the current experimental bounds. In a third step, we consider Higgs-mediated loop contributions to FCNC processes (b → s(d)γ, B_s,d mixing, K–K̄ mixing and μ → eγ), finding that ϵ^u_13 and ϵ^u_23 must also be very small, while the bounds on ϵ^u_31 and ϵ^u_32 are especially weak. Furthermore, considering the constraints from electric dipole moments we obtain constraints on some parameters ϵ^{u,ℓ}_ij. Taking into account the constraints from FCNC processes, we study the size of possible effects in the tauonic B decays (B → τν, B → Dτν and B → D*τν) as well as in D(s) → τν, D(s) → μν, K(π) → eν, K(π) → μν and τ → K(π)ν, which are all sensitive to tree-level charged-Higgs exchange. Interestingly, the unconstrained ϵ^u_32,31 are just the elements which directly enter the branching ratios for B → τν, B → Dτν and B → D*τν. We show that they can explain the deviations from the SM predictions in these processes without fine-tuning; indeed, B → τν, B → Dτν and B → D*τν can even be explained simultaneously. Finally, we give upper limits on the branching ratios of the lepton-flavor-violating neutral B meson decays (B_s,d → μe, B_s,d → τe and B_s,d → τμ) and correlate the radiative lepton decays (τ → μγ, τ → eγ and μ → eγ) to the corresponding neutral-current lepton decays (τ− → μ−μ+μ−, τ− → e−μ+μ− and μ− → e−e+e−). A detailed appendix contains all relevant information for the considered processes for general scalar-fermion-fermion couplings.
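For orientation, a hedged worked example of how tree-level charged-Higgs exchange enters B → τν: in the familiar type-II limit (all ϵ couplings switched off) the correction to the branching ratio takes the standard form below; in the type-III model studied here, the ϵ^u_32,31 couplings modify this factor further, which is what permits the deviations discussed above to be accommodated.

```latex
% Type-II limit of the charged-Higgs correction to B -> tau nu (standard result);
% in the type-III model the couplings \epsilon^u_{31,32} modify this factor further.
\frac{\mathcal{B}(B\to\tau\nu)_{\mathrm{2HDM}}}
     {\mathcal{B}(B\to\tau\nu)_{\mathrm{SM}}}
  = \left(1-\frac{m_B^2}{m_{H^\pm}^2}\,\tan^2\beta\right)^{2},
\qquad
\mathcal{B}(B\to\tau\nu)_{\mathrm{SM}}
  = \frac{G_F^2\, m_B\, m_\tau^2}{8\pi}
    \left(1-\frac{m_\tau^2}{m_B^2}\right)^{2} f_B^2\, |V_{ub}|^2\, \tau_B .
```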