53 results for model confidence set


Relevance: 30.00%

Abstract:

This paper presents a practical algorithm for the simulation of interactive deformation in a 3D polygonal mesh model. The algorithm combines the conventional simulation of deformation using a spring-mass-damping model, solved by explicit numerical integration, with a set of heuristics to describe certain features of the transient behaviour, to increase the speed and stability of solution. In particular, this algorithm was designed to be used in the simulation of synthetic environments where it is necessary to model realistically, in real time, the effect on non-rigid surfaces of being touched, pushed, pulled or squashed. Such objects can be solid or hollow, and have plastic, elastic or fabric-like properties. The algorithm is presented in an integrated form including collision detection and adaptive refinement so that it may be used in a self-contained way as part of a simulation loop to include human interface devices that capture data and render a realistic stereoscopic image in real time. The algorithm is designed to be used with polygonal mesh models representing complex topology, such as the human anatomy in a virtual-surgery training simulator. The paper evaluates the model behaviour qualitatively and then concludes with some examples of the use of the algorithm.
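
To illustrate the kind of update described above, here is a minimal sketch of an explicit integration step for a spring-mass-damper system. It is not the paper's algorithm: the springs tie each vertex to its rest position as a stand-in for the mesh's edge springs, and all constants are invented placeholders.

```python
import numpy as np

def step(pos, vel, rest, k=50.0, c=0.5, mass=0.01, dt=1e-3):
    """One explicit (symplectic Euler) step for point masses pulled back to
    their rest positions by springs, with viscous damping.
    pos, vel, rest: (N, 3) arrays of position, velocity and rest position."""
    force = -k * (pos - rest) - c * vel   # spring restoring force + damping
    vel = vel + dt * force / mass         # integrate velocity
    pos = pos + dt * vel                  # integrate position with new velocity
    return pos, vel

# Toy usage: three vertices displaced from their rest positions relax back.
rest = np.zeros((3, 3))
pos = rest + np.array([[0.01, 0.0, 0.0], [0.0, 0.02, 0.0], [0.0, 0.0, 0.03]])
vel = np.zeros_like(pos)
for _ in range(1000):
    pos, vel = step(pos, vel, rest)
```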

Relevance: 30.00%

Abstract:

A new elastic–viscoplastic (EVP) soil model has been used to simulate the measured deformation response of a soft estuarine soil loaded by a stage-constructed embankment. The simulation incorporates prefabricated vertical drains installed in the foundation soils and reinforcement installed at the base of the embankment. The numerical simulations closely matched the temporal changes in surface settlement beneath the centerline and shoulder of the embankment. More importantly, the elastic–viscoplastic model simulated the pattern and magnitudes of the lateral deformations beneath the toe of the embankment — a notoriously difficult aspect of modelling the deformation response of soft soils. Simulation of the excess pore-water pressure proved more difficult because of the heterogeneous nature of the estuarine deposit. Excess pore-water pressures were, however, mapped reasonably well at three of the six monitoring locations. The simulations were achieved using a small set of material constants that can easily be obtained from standard laboratory tests. This study validates the use of the EVP model for problems involving soft soil deposits beneath loading from a geotechnical structure.

Relevance: 30.00%

Abstract:

Relevant mouse models of E2a-PBX1-induced pre-B cell leukemia are still elusive. We now report the generation of a pre-B leukemia model using E2a-PBX1 transgenic mice, which lack mature and precursor T-cells as a result of engineered loss of CD3epsilon expression (CD3epsilon(-/-)). Using insertional mutagenesis and inverse-PCR, we show that B-cell leukemia development in the E2a-PBX1 x CD3epsilon(-/-) compound transgenic animals is significantly accelerated when compared to control littermates, and document several known and novel integrations in these tumors. Of all common integration sites, a small region of 19 kb in the Hoxa gene locus, mostly between Hoxa6 and Hoxa10, represented 18% of all integrations in the E2a-PBX1 B-cell leukemia and was targeted in 86% of these leukemias compared to 17% in control tumors. Q-PCR assessment of expression levels for most Hoxa cluster genes in these tumors revealed an unprecedented impact of the proviral integrations on Hoxa gene expression, with tumors having one to seven different Hoxa genes overexpressed at levels up to 6600-fold above control values. Together our studies set the stage for modeling E2a-PBX1-induced B-cell leukemia and shed new light on the complexity pertaining to Hox gene regulation. In addition, our results show that the Hoxa gene cluster is preferentially targeted in E2a-PBX1-induced tumors, thus suggesting functional collaboration between these oncogenes in pre-B-cell tumors.

Relevance: 30.00%

Abstract:

OBJECTIVES: To test the effect of an adapted U.S. model of pharmaceutical care on prescribing of inappropriate psychoactive (anxiolytic, hypnotic, and antipsychotic) medications and falls in nursing homes for older people in Northern Ireland (NI).

DESIGN: Cluster randomized controlled trial.

SETTING: Nursing homes randomized to intervention (receipt of the adapted model of care; n=11) or control (usual care continued; n=11).

PARTICIPANTS: Residents aged 65 and older who provided informed consent (N=334; 173 intervention, 161 control).

INTERVENTION: Specially trained pharmacists visited intervention homes monthly for 12 months and reviewed residents' clinical and prescribing information, applied an algorithm that guided them in assessing the appropriateness of psychoactive medication, and worked with prescribers (general practitioners) to improve the prescribing of these drugs. The control homes received usual care.

MEASUREMENTS: The primary end point was the proportion of residents prescribed one or more inappropriate psychoactive medicines according to standardized protocols; falls were evaluated using routinely collected falls data mandated by the regulatory body for nursing homes in NI.

RESULTS: The proportion of residents taking inappropriate psychoactive medications at 12 months in the intervention homes (25/128, 19.5%) was much lower than in the control homes (62/124, 50.0%) (odds ratio=0.26, 95% confidence interval=0.14–0.49) after adjustment for clustering within homes. No differences were observed at 12 months in the falls rate between the intervention and control groups.
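
As a quick sanity check of these figures (ignoring the clustering adjustment reported above), the crude odds ratio implied by the raw counts can be recomputed directly:

```python
# Crude (unadjusted) odds ratio from the reported 12-month counts.
a, b = 25, 128 - 25   # intervention: inappropriate vs not
c, d = 62, 124 - 62   # control: inappropriate vs not
crude_or = (a / b) / (c / d)
print(round(crude_or, 2))  # ~0.24, consistent with the adjusted OR of 0.26
```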

CONCLUSION: Marked reductions in inappropriate psychoactive medication prescribing in residents resulted from pharmacist review of targeted medications, but there was no effect on falls.

Relevance: 30.00%

Abstract:

Background

G protein-coupled receptors (GPCRs) constitute one of the largest groupings of eukaryotic proteins, and represent a particularly lucrative set of pharmaceutical targets. They play an important role in eukaryotic signal transduction and physiology, mediating cellular responses to a diverse range of extracellular stimuli. The phylum Platyhelminthes is of considerable medical and biological importance, housing major pathogens as well as established model organisms. The recent availability of genomic data for the human blood fluke Schistosoma mansoni and the model planarian Schmidtea mediterranea paves the way for the first comprehensive effort to identify and analyze GPCRs in this important phylum.

Results

Application of a novel transmembrane-oriented approach to receptor mining led to the discovery of 117 S. mansoni GPCRs, representing all of the major families: 105 Rhodopsin, 2 Glutamate, 3 Adhesion, 2 Secretin and 5 Frizzled. Similarly, 418 Rhodopsin, 9 Glutamate, 21 Adhesion, 1 Secretin and 11 Frizzled S. mediterranea receptors were identified. Among these, we report the identification of novel receptor groupings, including a large and highly diverged Platyhelminth-specific Rhodopsin subfamily, a planarian-specific Adhesion-like family, and atypical Glutamate-like receptors. Phylogenetic analysis was carried out following extensive gene curation. Support vector machines (SVMs) were trained and used for ligand-based classification of full-length Rhodopsin GPCRs, complementing phylogenetic and homology-based classification.
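
For readers unfamiliar with the SVM step, a generic sketch of ligand-based classification from sequence features is shown below. The feature choice (amino-acid composition), the sequences and the class labels are hypothetical placeholders, not the features or data used in the paper.

```python
# Sketch: classify GPCR sequences by presumed ligand class with an SVM,
# using amino-acid composition as a simple stand-in feature vector.
from collections import Counter
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq):
    """Fraction of each of the 20 amino acids in a protein sequence."""
    counts = Counter(seq)
    return [counts.get(aa, 0) / len(seq) for aa in AMINO_ACIDS]

# Placeholder training data: receptor sequences with known ligand class.
train_seqs = ["MNGTEGPNFYVPFSNKTG", "MDVLSPGQGNNTTSPPAP", "MGAGVLVLGASEPGNLSS"]
train_labels = ["peptide", "amine", "amine"]

clf = SVC(kernel="rbf")
clf.fit([aa_composition(s) for s in train_seqs], train_labels)

# Predict the ligand class of an uncharacterised receptor sequence.
print(clf.predict([aa_composition("MESLFPAPFWEVLYGSHF")]))
```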

Conclusions

Genome-wide investigation of GPCRs in two platyhelminth genomes reveals an extensive and complex receptor signaling repertoire with many unique features. This work provides important sequence and functional leads for understanding basic flatworm receptor biology, and sheds light on a lucrative set of anthelmintic drug targets.

Relevance: 30.00%

Abstract:

The ammonia oxidation reaction on a supported polycrystalline platinum catalyst was investigated in an aluminum-based microreactor. An extensive set of reactions was included in the chemical reactor modeling to facilitate the construction of a kinetic model capable of satisfactory predictions for a wide range of conditions (NH3 partial pressure, 0.01-0.12 atm; O2 partial pressure, 0.10-0.88 atm; temperature, 523-673 K; contact time, 0.3-0.7 ms). The elementary surface reactions used in developing the mechanism were chosen based on the literature data concerning ammonia oxidation on a Pt catalyst. Parameter estimates for the kinetic model were obtained by multi-response least squares regression analysis under the isothermal plug-flow reactor approximation. To evaluate the model, the behavior of a microstructured reactor was simulated by means of a complete Navier-Stokes model accounting for the reactions on the catalyst surface and the effect of temperature on the physico-chemical properties of the reacting mixture. In this way, the effect of catalytic wall temperature non-uniformity and the effect of a boundary layer on the ammonia conversion and selectivity were examined. After further optimization of appropriate kinetic parameters, the calculated selectivities and product yields agree very well with the values actually measured in the microreactor. © 2002 Elsevier Science B.V. All rights reserved.
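
As a schematic of the fitting step described above, the sketch below estimates a single rate constant from conversion data under the isothermal plug-flow approximation. It is a toy single-reaction example with invented data, not the paper's multi-response surface mechanism.

```python
# Fit a rate constant k to conversion data under the isothermal plug-flow
# approximation dX/dtau = k * (1 - X), i.e. X(tau) = 1 - exp(-k * tau).
import numpy as np
from scipy.optimize import least_squares

tau = np.array([0.3, 0.4, 0.5, 0.6, 0.7]) * 1e-3    # contact time, s
x_meas = np.array([0.26, 0.33, 0.39, 0.45, 0.50])   # measured conversion (synthetic)

def residuals(k):
    return (1.0 - np.exp(-k[0] * tau)) - x_meas

fit = least_squares(residuals, x0=[500.0])
print(fit.x)   # estimated rate constant, 1/s (~1000 for this synthetic data)
```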

Relevance: 30.00%

Abstract:

Randomising set index functions can reduce the number of conflict misses in data caches by spreading the cache blocks uniformly over all sets. Typically, the randomisation functions compute the exclusive-OR of several address bits. Not all randomising set index functions perform equally well, which calls for the evaluation of many set index functions. This paper discusses and improves a technique that tackles this problem by predicting the miss rate incurred by a randomisation function, based on profiling information. A new way of looking at randomisation functions is used, namely the null space of the randomisation function. The members of the null space describe pairs of cache blocks that are mapped to the same set. This paper presents an analytical model of the error made by the technique and uses this model to propose several optimisations to the technique. The technique is then applied to generate a conflict-free randomisation function for the SPEC benchmarks. © 2003 Elsevier Science B.V. All rights reserved.
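
To make the construction concrete, here is a small sketch of an XOR-based set index function and a null-space membership check. The bit masks are arbitrary assumptions for illustration, not taken from the paper's evaluation.

```python
# Each set-index bit is the XOR (parity) of a chosen subset of block-address
# bits, i.e. index = M * addr over GF(2) for a binary matrix M.
ROWS = [0b000011, 0b001100, 0b110000]   # address bits feeding each index bit (assumed)

def set_index(block_addr):
    return sum(
        (bin(block_addr & mask).count("1") & 1) << i   # parity of the selected bits
        for i, mask in enumerate(ROWS)
    )

def in_null_space(delta):
    """For a linear (XOR) function, two blocks whose addresses differ by `delta`
    map to the same set exactly when the index of `delta` itself is zero."""
    return set_index(delta) == 0

print(set_index(0b101010), set_index(0b101010 ^ 0b010100))
print(in_null_space(0b010100))   # False: this difference causes no guaranteed conflict
```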

Relevance: 30.00%

Abstract:

Increasingly, infrastructure providers are supplying the cloud marketplace with storage and on-demand compute resources to host cloud applications. From an application user's point of view, it is desirable to identify the most appropriate set of available resources on which to execute an application. Resource choice can be complex and may involve comparing available hardware specifications, operating systems, value-added services, such as network configuration or data replication, and operating costs, such as hosting cost and data throughput. Providers' cost models often change and new commodity cost models, such as spot pricing, have been introduced to offer significant savings. In this paper, a software abstraction layer is used to discover infrastructure resources for a particular application, across multiple providers, by using a two-phase constraints-based approach. In the first phase, a set of possible infrastructure resources is identified for a given application. In the second phase, a heuristic is used to select the most appropriate resources from the initial set. For some applications a cost-based heuristic is most appropriate; for others a performance-based heuristic may be used. A financial services application and a high-performance computing application are used to illustrate the execution of the proposed resource discovery mechanism. The experimental results show that the proposed model can dynamically select an appropriate set of resources that match the application's requirements.
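
A schematic of the two-phase selection described above is sketched below. The resource records, field names and thresholds are hypothetical; this is not the paper's abstraction-layer API.

```python
# Phase 1: filter advertised resources against the application's hard constraints.
# Phase 2: rank the feasible set with a pluggable heuristic (cost or performance).
resources = [  # hypothetical offerings gathered across providers
    {"name": "prov-a.small", "cores": 4,  "mem_gb": 16,  "cost_per_hr": 0.20, "gflops": 80},
    {"name": "prov-a.spot",  "cores": 8,  "mem_gb": 32,  "cost_per_hr": 0.09, "gflops": 150},
    {"name": "prov-b.hpc",   "cores": 32, "mem_gb": 128, "cost_per_hr": 1.10, "gflops": 900},
]

def phase1(resources, min_cores, min_mem_gb):
    return [r for r in resources if r["cores"] >= min_cores and r["mem_gb"] >= min_mem_gb]

cost_heuristic = lambda r: r["cost_per_hr"]      # e.g. the financial-services application
performance_heuristic = lambda r: -r["gflops"]   # e.g. the HPC application

feasible = phase1(resources, min_cores=8, min_mem_gb=32)
best_for_cost = min(feasible, key=cost_heuristic)
best_for_perf = min(feasible, key=performance_heuristic)
```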

Relevance: 30.00%

Abstract:

This paper introduces the discrete choice modelling paradigm of Random Regret Minimization (RRM) to the field of environmental and resource economics. The RRM approach has recently been developed in the context of travel demand modelling and presents a tractable, regret-based alternative to the dominant choice-modelling paradigm based on Random Utility Maximization theory (RUM theory). We highlight how RRM-based models provide closed-form, logit-type formulations for choice probabilities that allow for capturing semi-compensatory behaviour and choice set composition effects while being equally parsimonious as their utilitarian counterparts. Using data from a Stated Choice experiment aimed at identifying valuations of characteristics of nature parks, we compare RRM-based models and RUM-based models in terms of parameter estimates, goodness of fit, elasticities and consequential policy implications.
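
For reference, the logit-type closed form mentioned above takes the following shape in the standard RRM specification from the travel-demand literature (notation assumed here, not quoted from the paper):

```latex
R_i = \sum_{j \neq i} \sum_{m} \ln\!\left(1 + \exp\!\big[\beta_m \,(x_{jm} - x_{im})\big]\right),
\qquad
P(i) = \frac{\exp(-R_i)}{\sum_{j} \exp(-R_j)},
```

where $x_{im}$ is the level of attribute $m$ for alternative $i$ and $\beta_m$ is the associated taste parameter, so the probability of choosing an alternative falls with its anticipated regret.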

Relevance: 30.00%

Abstract:

A benefit function transfer obtains estimates of willingness-to-pay (WTP) for the evaluation of a given policy at a site by combining existing information from different study sites. This has the advantage that more efficient estimates are obtained, but it relies on the assumption that the heterogeneity between sites is appropriately captured in the benefit transfer model. A more expensive alternative for estimating WTP is to analyze only data from the policy site in question while ignoring information from other sites. We make use of the fact that these two choices can be viewed as a model selection problem and extend the set of models to allow for the hypothesis that the benefit function is only applicable to a subset of sites. We show how Bayesian model averaging (BMA) techniques can be used to optimally combine information from all models. The Bayesian algorithm searches for the set of sites that can form the basis for estimating a benefit function and reveals whether such information can be transferred to new sites for which only a small data set is available. We illustrate the method with a sample of 42 forests from the U.K. and Ireland. We find that BMA benefit function transfer produces reliable estimates and can increase the information content of a small sample by about eight times when the forest is 'poolable'. © 2008 Elsevier Inc. All rights reserved.
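
The averaging step rests on the standard BMA identity (written here in generic notation, not the paper's): inference on the quantity of interest, here WTP, is a mixture over the candidate models weighted by their posterior probabilities,

```latex
p(\mathrm{WTP} \mid y) = \sum_{k=1}^{K} p(\mathrm{WTP} \mid M_k, y)\, p(M_k \mid y),
\qquad
p(M_k \mid y) = \frac{p(y \mid M_k)\, p(M_k)}{\sum_{l=1}^{K} p(y \mid M_l)\, p(M_l)},
```

where each model $M_k$ corresponds to a different set of sites pooled into the benefit function.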

Relevance: 30.00%

Abstract:

This paper builds on and extends previous research to contribute to the ongoing discussion on the use of resource and carbon accounting tools in regional policy making. The Northern Visions project has produced the first evidence-based footpath setting out the actions that need to be taken to achieve the step changes in the Ecological and Carbon Footprint of Northern Ireland. A range of policies and strategies was evaluated using the Resources and Energy Analysis Programme. The analysis provided the first regional evidence base indicating that current sustainable development policy commitments would not lead to the necessary reductions in either the Ecological Footprint or carbon dioxide emissions. Building on previous applications of Ecological Footprint analysis in regional policy making, the research has demonstrated that there is a valuable role for Ecological and Carbon Footprint Analysis in policy appraisal. The use of Ecological and Carbon Footprint Analysis in regional policy making has been evaluated and recommendations made on ongoing methodological development. The authors hope that the research can provide insights for the ongoing use of Ecological and Carbon Footprint Analysis in regional policy making and help set out the priorities for research to support this important policy area.

Relevance: 30.00%

Abstract:

We restate the notion of orthogonal calculus in terms of model categories. This provides a cleaner set of results and makes the role of O(n)-equivariance clearer. Thus we develop model structures for the category of n-polynomial and n-homogeneous functors, along with Quillen pairs relating them. We then classify n-homogeneous functors, via a zig-zag of Quillen equivalences, in terms of spectra with an O(n)-action. This improves upon the classification theorem of Weiss. As an application, we develop a variant of orthogonal calculus by replacing topological spaces with orthogonal spectra.

Relevance: 30.00%

Abstract:

The development of accurate structural/thermal numerical models of complex systems, such as aircraft fuselage barrels, is often limited and determined by the smallest scales that need to be modelled. Developing reduced-order models of the smallest scales and integrating them with higher-level models can minimise this bottleneck while still yielding efficient, robust and accurate numerical models. This paper demonstrates a methodology for developing compact thermal fluid models (CTFMs) for compartments where mixed convection regimes are present. Detailed numerical simulations (CFD) were developed for an aircraft crown compartment and validated against experimental data obtained from a 1:1 scale compartment rig. The crown compartment is defined as the confined area between the upper fuselage and the passenger cabin in a single-aisle commercial aircraft. CFD results were used to extract average quantities (temperature and heat fluxes) and characteristic parameters (heat transfer coefficients) to generate CTFMs. The CTFMs were then compared with the results obtained from the detailed models, showing average errors for temperature predictions lower than 5%. This error can be deemed acceptable when compared with the nominal experimental error associated with the thermocouple measurements.
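
The extraction of characteristic parameters from the CFD fields can be summarised by the usual film-coefficient definition (a generic relation with zone-averaged quantities, not necessarily the paper's exact formulation):

```latex
h = \frac{\bar{q}''_{w}}{\bar{T}_{w} - \bar{T}_{\mathrm{ref}}},
```

where $\bar{q}''_{w}$ and $\bar{T}_{w}$ are the area-averaged wall heat flux and wall temperature for a compartment surface, and $\bar{T}_{\mathrm{ref}}$ is the average air temperature of the associated zone.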

The CTFM methodology developed allows accurate reduced-order models to be generated, although their accuracy is restricted to the range of boundary conditions applied. This limitation arises from the sensitivity of the internal flow structures to the applied boundary condition set. CTFMs generated in this way can then be integrated into complex numerical models of whole fuselage sections.

A further step in the development of a comprehensive methodology would be the implementation of a logic rule-based approach to extract the number and positions of the CTFM nodes directly from the CFD simulations.

Relevance: 30.00%

Abstract:

Computational models of meaning trained on naturally occurring text successfully model human performance on tasks involving simple similarity measures, but they characterize meaning in terms of undifferentiated bags of words or topical dimensions. This has led some to question their psychological plausibility (Murphy, 2002; Schunn, 1999). We present here a fully automatic method for extracting a structured and comprehensive set of concept descriptions directly from an English part-of-speech-tagged corpus. Concepts are characterized by weighted properties, enriched with concept-property types that approximate classical relations such as hypernymy and function. Our model outperforms comparable algorithms in cognitive tasks pertaining not only to concept-internal structures (discovering properties of concepts, grouping properties by property type) but also to inter-concept relations (clustering into superordinates), suggesting the empirical validity of the property-based approach. Copyright © 2009 Cognitive Science Society, Inc. All rights reserved.

Relevance: 30.00%

Abstract:

Real-world graphs or networks tend to exhibit a well-known set of properties, such as heavy-tailed degree distributions, clustering and community formation. Much effort has been directed into creating realistic and tractable models for unlabelled graphs, which has yielded insights into graph structure and evolution. Recently, attention has moved to creating models for labelled graphs: many real-world graphs are labelled with both discrete and numeric attributes. In this paper, we present AGWAN (Attribute Graphs: Weighted and Numeric), a generative model for random graphs with discrete labels and weighted edges. The model is easily generalised to edges labelled with an arbitrary number of numeric attributes. We include algorithms for fitting the parameters of the AGWAN model to real-world graphs and for generating random graphs from the model. Using the Enron “who communicates with whom” social graph, we compare our approach to state-of-the-art random labelled graph generators and draw conclusions about the contribution of discrete vertex labels and edge weights to the structure of real-world graphs.
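
A minimal sketch in the spirit of a labelled, weighted random-graph generator is given below. The labels, probabilities and weight distributions are invented placeholders; AGWAN's actual fitting and sampling procedure is not reproduced here.

```python
# Generate a random graph whose vertices carry a discrete label and whose edges
# carry a numeric weight, with edge probability and weight distribution depending
# on the (unordered) label pair of the endpoints.
import random

LABELS = ["manager", "trader", "analyst"]
LABEL_PROBS = [0.2, 0.3, 0.5]
EDGE_PROB = {("manager", "trader"): 0.3, ("analyst", "trader"): 0.2}   # default 0.05
WEIGHT_MEAN = {("manager", "trader"): 20.0}                            # default 5.0

def generate(n_vertices, seed=0):
    rng = random.Random(seed)
    labels = rng.choices(LABELS, weights=LABEL_PROBS, k=n_vertices)
    edges = []
    for u in range(n_vertices):
        for v in range(u + 1, n_vertices):
            pair = tuple(sorted((labels[u], labels[v])))
            if rng.random() < EDGE_PROB.get(pair, 0.05):
                weight = rng.expovariate(1.0 / WEIGHT_MEAN.get(pair, 5.0))
                edges.append((u, v, weight))
    return labels, edges

labels, edges = generate(50)
```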