982 results for Prediction theory.
Abstract:
In this paper we investigate the quantum phase transition from a magnetic Bose glass to magnetic Bose-Einstein condensation induced by a magnetic field in NiCl2·4SC(NH2)2 (dichloro-tetrakis-thiourea-nickel, or DTN) doped with Br (Br-DTN) or site diluted. Quantum Monte Carlo simulations of the quantum phase transition of the model Hamiltonian for Br-DTN, as well as for site-diluted DTN, are consistent with conventional scaling at the quantum critical point and with a critical exponent z verifying the prediction z = d; moreover, the correlation-length exponent is found to be ν = 0.75(10) and the order-parameter exponent to be β = 0.95(10). We investigate the low-temperature thermodynamics at the quantum critical field of Br-DTN both numerically and experimentally, and extract the power-law behavior of the magnetization and of the specific heat. Our results for the exponents of these power laws, as well as previous results for the scaling of the critical temperature to magnetic ordering with the applied field, are incompatible with the conventional crossover-scaling Ansatz proposed by Fisher et al. [Phys. Rev. B 40, 546 (1989)]. However, they can all be reconciled within a phenomenological Ansatz in the presence of a dangerously irrelevant operator.
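As an illustration of the kind of power-law analysis mentioned above (not the authors' analysis code), here is a minimal sketch that fits an exponent to low-temperature data, assuming (T, observable) arrays are available; the synthetic data and the exponent value are invented.

```python
# Illustrative sketch: extract a power-law exponent from m(T) ~ T^a data.
import numpy as np
from scipy.optimize import curve_fit

def power_law(T, prefactor, exponent):
    return prefactor * T**exponent

def fit_exponent(T, y):
    # Fit y = prefactor * T^exponent; p0 is a rough initial guess.
    popt, pcov = curve_fit(power_law, T, y, p0=(1.0, 1.0))
    perr = np.sqrt(np.diag(pcov))
    return popt[1], perr[1]  # exponent and its standard error

# Hypothetical usage with synthetic data (exponent chosen arbitrarily):
T = np.linspace(0.1, 1.0, 20)
m = 0.5 * T**1.5 + 0.01 * np.random.randn(T.size)
alpha, alpha_err = fit_exponent(T, m)
print(f"fitted exponent: {alpha:.2f} +/- {alpha_err:.2f}")
```

The same fit would be applied to the specific-heat data to extract its exponent.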
Abstract:
This work used colloidal theory to describe the forces and interaction energies of colloidal complexes in water and of those formed during filtration runs in direct filtration. Interaction energy profiles between colloidal surfaces are presented for three geometries (sphere, plate, and cylinder) and four surface arrangements: two cylinders, two spheres, two plates, and a sphere and a plate. Two situations were analyzed, before and after electrostatic destabilization by aluminum sulfate (alum) used as coagulant, in water samples prepared with kaolin. Mathematical modeling was carried out with the extended DLVO theory (named after Derjaguin, Landau, Verwey, and Overbeek), or XDLVO, which includes the traditional treatment of the electric double layer (EDL), the attractive London-van der Waals (LvdW) surface forces, steric forces, and hydrophobic forces, and additionally considers other forces present in the colloidal system, such as molecular (Born) repulsion and Lewis acid-base (AB) forces.
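For orientation, below is a minimal sketch of only the classical DLVO part (EDL repulsion plus LvdW attraction) for two identical spheres, using the common Derjaguin-approximation expressions. The additional XDLVO terms described above (acid-base, Born, hydrophobic, steric) are not reproduced, and all parameter values are invented rather than taken from this work.

```python
# Illustrative DLVO-only sketch for two identical spheres (Derjaguin approximation).
import numpy as np

EPS0 = 8.854e-12          # vacuum permittivity, F/m
EPS_R = 78.5              # relative permittivity of water (approximate)

def v_vdw_sphere_sphere(h, radius, hamaker):
    """Van der Waals attraction between identical spheres: V = -A*R/(12*h)."""
    return -hamaker * radius / (12.0 * h)

def v_edl_sphere_sphere(h, radius, psi, kappa):
    """EDL repulsion (constant potential, weak overlap): 2*pi*eps*R*psi^2*ln(1+e^-kh)."""
    return 2.0 * np.pi * EPS0 * EPS_R * radius * psi**2 * np.log1p(np.exp(-kappa * h))

# Hypothetical kaolin-like parameters (illustrative only).
h = np.logspace(-10, -7, 200)        # separation distance, m
R = 0.5e-6                           # particle radius, m
A = 1.0e-20                          # Hamaker constant, J
psi = -30e-3                         # surface potential, V
kappa = 1.0e8                        # inverse Debye length, 1/m

v_total = v_vdw_sphere_sphere(h, R, A) + v_edl_sphere_sphere(h, R, psi, kappa)
print("height of the interaction energy barrier (J):", v_total.max())
```

The XDLVO terms would simply be added to v_total in the same way, each with its own distance dependence.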
Abstract:
This work evaluates the efficiency of economical levels of theory for the prediction of ³J(HH) spin-spin coupling constants, to be used when more robust electronic structure methods are prohibitive. To that purpose, DFT methods such as mPW1PW91, B3LYP, and PBEPBE were used to obtain coupling constants for a test set whose coupling constants are well known. Satisfactory results were obtained in most cases, with mPW1PW91/6-31G(d,p)//B3LYP/6-31G(d,p) leading the set. In a second step, B3LYP was replaced by the semiempirical methods PM6 and RM1 in the geometry optimizations. Coupling constants calculated with these latter structures were at least as good as the ones obtained by pure DFT methods. This is a promising result, because some of the main objectives of computational chemistry - low computational cost and time, allied to high performance and precision - were attained together. (C) 2012 Elsevier B.V. All rights reserved.
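As an aside, a minimal sketch of how agreement with a reference set is typically summarized in such benchmarks (the coupling values below are invented, not the paper's data):

```python
# Illustrative error statistics for computed vs. reference 3J(HH) couplings.
import numpy as np

reference = np.array([7.9, 2.1, 11.5, 6.8])   # invented reference values, Hz
computed  = np.array([8.3, 1.7, 12.2, 6.5])   # invented computed values, Hz

errors = computed - reference
mad = np.mean(np.abs(errors))            # mean absolute deviation
rmsd = np.sqrt(np.mean(errors**2))       # root-mean-square deviation

print(f"MAD  = {mad:.2f} Hz")
print(f"RMSD = {rmsd:.2f} Hz")
```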
Abstract:
In protein databases there is a substantial number of proteins that have been structurally determined but lack function annotation. Understanding the relationship between function and structure can be useful for predicting function on a large scale. We have analyzed the similarities in global physicochemical parameters for a set of enzymes classified according to the four Enzyme Commission (EC) hierarchical levels. Using relevance theory, we introduced a distance between proteins in the space of physicochemical characteristics; this was done by minimizing a cost function of the metric tensor built to reflect the EC classification system. Using an unsupervised clustering method on a set of 1025 enzymes, we obtained no relevant cluster formation compatible with the EC classification. The distributions of distances between enzymes from the same EC group and from different EC groups were compared by histograms. The same analysis was also performed using sequence alignment similarity as a distance. Our results suggest that global structure parameters are not sufficient to segregate enzymes according to the EC hierarchy. This indicates that the features essential for function are local rather than global. Consequently, methods for predicting function based on global attributes should not achieve high accuracy in predicting the main EC classes without relying on similarities between enzymes from the training and validation datasets. Furthermore, these results are consistent with a substantial number of studies suggesting that function evolves fundamentally by recruitment, i.e., the same protein motif or fold can be used to perform different enzymatic functions, and a few specific amino acids (AAs) are actually responsible for enzyme activity. These essential amino acids should belong to active sites, and an effective method for predicting function should be able to recognize them. (C) 2012 Elsevier Ltd. All rights reserved.
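As a rough illustration of the idea of a distance in the space of global physicochemical parameters, here is a minimal sketch with a hypothetical diagonal weighting; the paper instead learns the metric tensor by minimizing a cost function tied to the EC classification, which is not reproduced here.

```python
# Illustrative weighted distance between global physicochemical feature vectors.
import numpy as np

def weighted_distance(x, y, weights):
    """d(x, y) = sqrt(sum_i w_i * (x_i - y_i)^2) between feature vectors."""
    diff = np.asarray(x) - np.asarray(y)
    return float(np.sqrt(np.sum(weights * diff**2)))

# Hypothetical feature vectors, e.g. molecular weight (kDa), isoelectric point,
# fraction of hydrophobic residues, net charge (all values invented).
enzyme_a = np.array([45.2, 6.1, 0.38, -3.0])
enzyme_b = np.array([51.7, 7.4, 0.41, 1.0])
weights  = np.array([0.01, 1.0, 10.0, 0.5])   # illustrative relevance weights

print(weighted_distance(enzyme_a, enzyme_b, weights))
```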
Abstract:
Current scientific applications produce large amounts of data. The processing, handling, and analysis of such data require large-scale computing infrastructures such as clusters and grids. In this area, studies aim to improve the performance of data-intensive applications by optimizing data accesses. In order to achieve this goal, distributed storage systems have employed techniques of data replication, migration, distribution, and access parallelism. However, the main drawback of those studies is that they do not take application behavior into account when performing data access optimization. This limitation motivated this paper, which applies strategies to support the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information on past executions. To accomplish this goal, the approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties. Based on these properties, the approach selects modeling techniques to represent the series and perform predictions, which are later used to optimize data access operations. The approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that the new approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
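To make the general strategy concrete (this is not the paper's implementation), the sketch below treats a stream of access observations as a time series, maintains a simple online exponential-smoothing model, and uses the one-step-ahead forecast to trigger a hypothetical prefetch decision; the model choice and the threshold are assumptions, whereas the paper selects among several modeling techniques according to the series' properties and evaluates the result in OptorSim.

```python
# Illustrative online predictor of an application's data-access volume.
from collections import deque

class AccessPredictor:
    def __init__(self, alpha=0.5, history=32):
        self.alpha = alpha                 # smoothing factor
        self.level = None                  # current smoothed value
        self.recent = deque(maxlen=history)  # window kept for classifying the series

    def observe(self, value):
        """Feed one observation, e.g. bytes read by the application in the last interval."""
        self.recent.append(value)
        if self.level is None:
            self.level = float(value)
        else:
            self.level = self.alpha * value + (1 - self.alpha) * self.level

    def predict(self):
        """One-step-ahead forecast of the next access volume."""
        return self.level if self.level is not None else 0.0

# Hypothetical usage: trigger prefetching when predicted demand exceeds a threshold.
predictor = AccessPredictor()
for observed_bytes in [10e6, 12e6, 50e6, 48e6, 55e6]:
    predictor.observe(observed_bytes)
if predictor.predict() > 40e6:
    print("schedule prefetch / replication of the hot dataset")
```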
Abstract:
Quantum chemical calculations at the B3LYP/6-31G* level of theory were employed for the structure-activity relationship and prediction of the antioxidant activity of edaravone and structurally related derivatives, using the energy (E), ionization potential (IP), bond dissociation energy (BDE), and stabilization energies (ΔEiso). Spin density calculations were also performed for the proposed antioxidant activity mechanism. Electron abstraction is related to electron-donating groups (EDG) at position 3, decreasing the IP when compared to substitution at position 4. Hydrogen abstraction is related to electron-withdrawing groups (EWG) at position 4, decreasing the C-H BDE when compared to other substitutions and resulting in better antioxidant activity. The unpaired electron formed by hydrogen abstraction from the C-H group of the pyrazole ring is localized at the 2, 4, and 6 positions. The highest predicted scavenging activity is related to the lowest spin density contribution at the carbon atom. The likely mechanism is hydrogen transfer. It was found that the antioxidant activity depends on the presence of EDG at the C-2 and C-4 positions and that there is a correlation between IP and BDE. Our results identified three different classes of new derivatives more potent than edaravone.
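For reference, the descriptors mentioned above are usually defined as follows; this is the standard gas-phase convention and is stated here only as a reminder, since the paper's exact conventions (e.g. zero-point or thermal corrections) may differ.

```latex
% Standard definitions of the single-electron-transfer and hydrogen-atom-transfer
% descriptors (an assumption about conventions, not taken from the paper).
\mathrm{IP} = E(\mathrm{RH}^{\bullet +}) - E(\mathrm{RH}), \qquad
\mathrm{BDE}_{\mathrm{C-H}} = E(\mathrm{R}^{\bullet}) + E(\mathrm{H}^{\bullet}) - E(\mathrm{RH})
```

Here RH denotes the parent molecule and R• the carbon-centered radical formed on hydrogen abstraction.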
Abstract:
At Airbus GmbH (Hamburg) a new design of the Rear Pressure Bulkhead (RPB) for the A320 family has been developed. The new model is formed with vacuum forming technology, during which the wrinkling phenomenon occurs. This thesis describes an analytical model for the prediction of wrinkling based on the energy method of Timoshenko. Large-deflection theory has been used to analyze two case studies: a simply supported circular thin plate stamped by a spherical punch, and a simply supported circular thin plate formed with the vacuum forming technique. If the edges are free to displace radially, thin plates develop radial wrinkles near the edge at a central deflection of approximately four plate thicknesses (w0/h ≈ 4) when stamped by a spherical punch, and approximately three (w0/h ≈ 3) when formed with the vacuum forming technique. Initially there are four symmetrical wrinkles, but their number increases as the central deflection is increased. Using experimental results, the "snap-through" phenomenon is also described.
Abstract:
Suppose that we are interested in establishing simple, but reliable rules for predicting future t-year survivors via censored regression models. In this article, we present inference procedures for evaluating such binary classification rules based on various prediction precision measures quantified by the overall misclassification rate, sensitivity and specificity, and positive and negative predictive values. Specifically, under various working models we derive consistent estimators for the above measures via substitution and cross validation estimation procedures. Furthermore, we provide large sample approximations to the distributions of these nonsmooth estimators without assuming that the working model is correctly specified. Confidence intervals, for example, for the difference of the precision measures between two competing rules can then be constructed. All the proposals are illustrated with two real examples and their finite sample properties are evaluated via a simulation study.
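For concreteness, here is a minimal sketch of the precision measures themselves, computed from predicted and true t-year survival labels; the censoring-aware substitution and cross-validation estimators of the paper, and the large-sample inference, are not reproduced here, and the labels below are invented.

```python
# Illustrative computation of the prediction precision measures from binary labels.
import numpy as np

def precision_measures(y_true, y_pred):
    """y_true, y_pred: arrays of 0/1 labels (1 = t-year survivor)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_pred & y_true)
    tn = np.sum(~y_pred & ~y_true)
    fp = np.sum(y_pred & ~y_true)
    fn = np.sum(~y_pred & y_true)
    return {
        "misclassification_rate": (fp + fn) / y_true.size,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical usage with invented labels:
print(precision_measures([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0]))
```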
Abstract:
High-density oligonucleotide expression arrays are a widely used tool for the measurement of gene expression on a large scale. Affymetrix GeneChip arrays appear to dominate this market. These arrays use short oligonucleotides to probe for genes in an RNA sample. Due to optical noise, non-specific hybridization, probe-specific effects, and measurement error, ad hoc measures of expression that summarize probe intensities can lead to imprecise and inaccurate results. Various researchers have demonstrated that expression measures based on simple statistical models can provide great improvements over the ad hoc procedure offered by Affymetrix. Recently, physical models based on molecular hybridization theory have been proposed as useful tools for the prediction of, for example, non-specific hybridization. These physical models show great potential in terms of improving existing expression measures. In this paper we demonstrate that the system producing the measured intensities is too complex to be fully described with these relatively simple physical models, and we propose empirically motivated stochastic models that complement the above-mentioned molecular hybridization theory to provide a comprehensive description of the data. We discuss how the proposed model can be used to obtain improved measures of expression useful for data analysts.
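As a rough illustration of the kind of additive background-plus-signal description alluded to above, the sketch below simulates observed probe intensities as optical noise plus non-specific hybridization plus specific signal and shows how a naive background-subtracted measure behaves; the distributions and parameter values are invented and this is not the authors' model.

```python
# Illustrative additive model: observed = optical noise + non-specific + signal.
import numpy as np

rng = np.random.default_rng(0)
n_probes = 1000

optical     = rng.normal(loc=50.0, scale=5.0, size=n_probes)      # scanner offset
nonspecific = rng.lognormal(mean=4.0, sigma=0.8, size=n_probes)   # cross-hybridization
true_signal = rng.lognormal(mean=5.0, sigma=1.0, size=n_probes)   # target abundance

observed = optical + nonspecific + true_signal

# A naive background-subtracted expression measure is noisy (and must be clipped)
# for weakly expressed probes, which motivates model-based alternatives.
expected_background = optical.mean() + np.exp(4.0 + 0.8**2 / 2)
naive = np.log2(np.clip(observed - expected_background, 1.0, None))
print("correlation of naive measure with log2 true signal:",
      np.corrcoef(naive, np.log2(true_signal))[0, 1])
```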
Abstract:
In many complex and dynamic domains, the ability to generate and then select the appropriate course of action is based on the decision maker's "reading" of the situation; in other words, on their ability to assess the situation and predict how it will evolve over the next few seconds. Current theories regarding option generation during the situation assessment and response phases of decision making offer contrasting views on the cognitive mechanisms that support superior performance. The Recognition-Primed Decision-making model (RPD; Klein, 1989) and the Take-The-First heuristic (TTF; Johnson & Raab, 2003) suggest that superior decisions are made by generating few options and then selecting the first option as the final one. Long-Term Working Memory theory (LTWM; Ericsson & Kintsch, 1995), on the other hand, posits that skilled decision makers construct rich, detailed situation models and that, as a result, skilled performers should have the ability to generate more of the available task-relevant options. The main goal of this dissertation was to use these theories about option generation as a way to further the understanding of how police officers anticipate a perpetrator's actions, and make decisions about how to respond, during dynamic law enforcement situations. An additional goal was to gather information that can be used, in the future, to design training based on the anticipation skills, decision strategies, and processes of experienced officers. Two studies were conducted to achieve these goals. Study 1 identified video-based law enforcement scenarios that could be used to discriminate between experienced and less-experienced police officers in terms of their ability to anticipate the outcome. The discriminating scenarios were used as the stimuli in Study 2; 23 experienced and 26 less-experienced police officers observed temporally occluded versions of the scenarios and then completed assessment and response option-generation tasks. The results provided mixed support for these accounts of option generation. Consistent with RPD and TTF, participants typically selected the first-generated option as their final one, and did so during both the assessment and response phases of decision making. Consistent with LTWM theory, participants, regardless of experience level, generated more task-relevant assessment options than task-irrelevant options. However, an expected interaction between experience level and option relevance was not observed. Collectively, the two studies provide a deeper understanding of how police officers make decisions in dynamic situations. The methods developed and employed in the studies can be used to investigate anticipation and decision making in other critical domains (e.g., nursing, military). The results are discussed in relation to how they can inform future studies of option-generation performance and how they could be applied to develop training for law enforcement officers.
Abstract:
Anomie theorists have been reporting the suppression of shared welfare orientations by the overwhelming dominance of economic values within capitalist societies since before the onset of the neoliberalism debate. Obligations concerning the common welfare are more and more often subordinated to the overarching aim of realizing economic success goals. This should be especially valid for social life in contemporary market societies. This empirical investigation examines the extent to which market imperatives and the values of the societal community are anchored within the normative orientations of market actors. Special attention is paid to whether the shape of these normative orientations varies with the degree of market inclusion. Empirical analyses, based on data from a standardized written survey of the German working population carried out in 2002, show that different types of normative orientation can be distinguished among market actors. These types are quite similar to the well-known types of anomic adaptation developed by Robert K. Merton in “Social Structure and Anomie” and are externally valid with respect to the prediction of different forms of economic crime. Further analyses show that the type of normative orientation actors adopt in everyday life depends on the degree of market inclusion. Confirming anomie theory, it is shown that the individual willingness to subordinate matters of common welfare to the aim of economic success (radical market activism) gets stronger the more actors are included in the market sphere. Finally, the relevance of the reported findings for the explanation of violent behavior, especially with regard to varieties of corporate violence, is discussed.
Abstract:
This paper introduces an extended hierarchical task analysis (HTA) methodology devised to evaluate and compare user interfaces on volumetric infusion pumps. The pumps were studied along the dimensions of overall usability and propensity for generating human error. With HTA as our framework, we analyzed six pumps on a variety of common tasks using Norman’s Action theory. The introduced method of evaluation divides the problem space between the external world of the device interface and the user’s internal cognitive world, allowing for predictions of potential user errors at the human-device level. In this paper, one detailed analysis is provided as an example, comparing two different pumps on two separate tasks. The results demonstrate the inherent variation, often the cause of usage errors, found with infusion pumps being used in hospitals today. The reported methodology is a useful tool for evaluating human performance and predicting potential user errors with infusion pumps and other simple medical devices.