981 results for Simplified Models.
Abstract:
The Pulmonary Embolism Severity Index (PESI) is a validated clinical prognostic model for patients with pulmonary embolism (PE). Recently, a simplified version of the PESI was developed. We sought to compare the prognostic performance of the original and simplified PESI. Using data from 15,531 patients with PE, we compared the proportions of patients classified as low versus higher risk between the original and simplified PESI and estimated 30-day mortality within each risk group. To assess the models' accuracy in predicting mortality, we calculated sensitivity, specificity, predictive values, and likelihood ratios for low- versus higher-risk patients. We also compared the models' discriminative power by calculating the area under the receiver-operating characteristic (ROC) curve. The overall 30-day mortality was 9.3%. The original PESI classified a significantly greater proportion of patients as low risk than the simplified PESI (40.9% vs. 36.8%; p<0.001). Low-risk patients based on the original and simplified PESI had a mortality of 2.3% and 2.7%, respectively. The original and simplified PESI had similar sensitivities (90% vs. 89%), negative predictive values (98% vs. 97%), and negative likelihood ratios (0.23 vs. 0.28) for predicting mortality. The original PESI had a significantly greater discriminatory power than the simplified PESI (area under the ROC curve 0.78 [95% CI: 0.77-0.79] vs. 0.72 [95% CI: 0.71-0.74]; p<0.001). In conclusion, even though the simplified PESI accurately identified patients at low risk of adverse outcomes, the original PESI classified a higher proportion of patients as low risk and had a greater discriminatory power than the simplified PESI.
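As an illustration of the accuracy measures reported in this abstract, the sketch below computes sensitivity, specificity, predictive values, and likelihood ratios from a 2x2 classification-versus-outcome table. The counts used are hypothetical, for illustration only, not the study's data.

```python
def prognostic_metrics(tp, fp, fn, tn):
    """Accuracy measures for a binary risk classifier against mortality.
    tp: higher-risk & died, fp: higher-risk & survived,
    fn: low-risk & died,    tn: low-risk & survived."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
    return {"sens": sensitivity, "spec": specificity,
            "ppv": ppv, "npv": npv, "LR+": lr_pos, "LR-": lr_neg}

# Hypothetical counts, not taken from the study
m = prognostic_metrics(tp=1300, fp=7900, fn=140, tn=6200)
print({k: round(v, 2) for k, v in m.items()})
```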
Abstract:
This study aimed to assess the performance of two prognostic models, the European Society of Cardiology (ESC) model and the simplified Pulmonary Embolism Severity Index (sPESI), in predicting short-term mortality in patients with pulmonary embolism (PE).
Abstract:
Model-based calibration of steady-state engine operation is commonly performed with highly parameterized empirical models that are accurate but not very robust, particularly when predicting highly nonlinear responses such as diesel smoke emissions. To address this problem, and to boost the accuracy of more robust non-parametric methods to the same level, GT-Power was used to transform the empirical model input space into multiple input spaces that simplified the input-output relationship and improved the accuracy and robustness of smoke predictions made by three commonly used empirical modeling methods: Multivariate Regression, Neural Networks and the k-Nearest Neighbor method. The availability of multiple input spaces allowed the development of two committee techniques: a "Simple Committee" technique that used averaged predictions from a set of 10 pre-selected input spaces chosen by the training data and the "Minimum Variance Committee" technique where the input spaces for each prediction were chosen on the basis of disagreement between the three modeling methods. This latter technique equalized the performance of the three modeling methods. The successively increasing improvements resulting from the use of a single best transformed input space (Best Combination Technique), Simple Committee Technique and Minimum Variance Committee Technique were verified with hypothesis testing. The transformed input spaces were also shown to improve outlier detection and to improve k-Nearest Neighbor performance when predicting dynamic emissions with steady-state training data. An unexpected finding was that the benefits of input space transformation were unaffected by changes in the hardware or the calibration of the underlying GT-Power model.
Abstract:
Despite the strong increase in observational data on extrasolar planets, the processes that led to the formation of these planets are still not well understood. However, thanks to the high number of extrasolar planets that have been discovered, it is now possible to look at the planets as a population that puts statistical constraints on theoretical formation models. A method that uses these constraints is planetary population synthesis, where synthetic planetary populations are generated and compared to the actual population. The key element of the population synthesis method is a global model of planet formation and evolution. These models directly predict observable planetary properties based on properties of the natal protoplanetary disc, linking two important classes of astrophysical objects. To do so, global models build on the simplified results of many specialized models that address one specific physical mechanism. We thoroughly review the physics of the sub-models included in global formation models. The sub-models can be classified as models describing the protoplanetary disc (of gas and solids), those that describe one (proto)planet (its solid core, gaseous envelope and atmosphere), and finally those that describe the interactions (orbital migration and N-body interaction). We compare the approaches taken in different global models, discuss the links between specialized and global models, and identify physical processes that require improved descriptions in future work. We then briefly address important results of planetary population synthesis like the planetary mass function or the mass-radius relationship. With these statistical results, the global effects of physical mechanisms occurring during planet formation and evolution become apparent, and specialized models describing them can be put to the observational test.
Owing to their nature as meta models, global models depend on the results of specialized models, and therefore on the development of the field of planet formation theory as a whole. Because there are important uncertainties in this theory, it is likely that the global models will undergo significant modifications in the future. Despite these limitations, global models can already yield many testable predictions. With future global models addressing the geophysical characteristics of the synthetic planets, it should eventually become possible to make predictions about the habitability of planets based on their formation and evolution.
Abstract:
To model strength degradation due to low cycle fatigue, at least three different approaches can be considered. One possibility is based on the formulation of a new free energy function and damage energy release rate, as was proposed by Ju (1989). The second approach uses the notion of bounding surface introduced in cyclic plasticity by Dafalias and Popov (1975). From this concept, some models have been proposed to quantify damage in concrete or RC (Suaris et al. 1990). The model proposed by the author to include fatigue effects is based essentially on Marigo (1985) and can be included in this approach.
Abstract:
The vertical dynamic actions transmitted by railway vehicles to the ballasted track infrastructure are evaluated taking into account models with different degrees of detail. In particular, we have studied this matter from a two-dimensional (2D) finite element model to a fully coupled three-dimensional (3D) multi-body finite element model. The vehicle and track are coupled via a non-linear Hertz contact mechanism. The method of Lagrange multipliers is used for the contact constraint enforcement between wheel and rail. Distributed elevation irregularities are generated based on power spectral density (PSD) distributions which are taken into account for the interaction. The numerical simulations are performed in the time domain, using a direct integration method for solving the transient problem due to the contact nonlinearities. The results obtained include contact forces, forces transmitted to the infrastructure (sleeper) by railpads, and envelopes of relevant results for several track irregularities and speed ranges. The main contribution of this work is to identify and discuss coincidences and differences between discrete 2D models and continuum 3D models, as well as assessing the validity of evaluating the dynamic loading on the track with simplified 2D models.
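A minimal sketch of PSD-based irregularity generation as described above: a random elevation profile is synthesized as a sum of cosines with random phases, so that its variance matches the integral of the PSD. The PSD form (S ~ A/Omega^3) and roughness constant below are generic illustrative assumptions, not the spectra used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(0.0, 500.0, 5001)                     # track coordinate [m]
Omega = np.linspace(2*np.pi/100, 2*np.pi/0.5, 400)    # spatial frequency [rad/m]
dOmega = Omega[1] - Omega[0]

A = 1e-7                       # roughness constant (hypothetical value)
S = A / Omega**3               # one-sided PSD of elevation [m^2 / (rad/m)]

amps = np.sqrt(2.0 * S * dOmega)                      # harmonic amplitudes
phases = rng.uniform(0.0, 2*np.pi, Omega.size)        # random phases

# Elevation irregularity z(x) as a superposition of cosines
z = (amps[:, None] * np.cos(Omega[:, None] * x[None, :]
                            + phases[:, None])).sum(axis=0)
```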
Abstract:
Many studies have been developed to analyze structural seismic behavior through the damage index concept. The evaluation of this index has been employed to quantify the safety of new and existing structures and, also, to establish a framework for seismic retrofitting decision making of structures. Most proposed models are based on a post-earthquake evaluation, in such a way that they uncouple the structural response from the damage evaluation. In this paper, a generalization of the model by Flórez-López (1995) is proposed. The formulation employs irreversible thermodynamics and internal state variable theory applied to the study of beams and frames, and it allows an explicit coupling between the degradation and the structural mechanical behavior. A damage index is defined in order to model elastoplasticity coupled with damage and fatigue damage.
Abstract:
A simplified CFD wake model based on the actuator disk concept is used to simulate the wind turbine, represented by a disk upon which a distribution of forces, defined as axial momentum sources, is applied on the incoming non-uniform flow. The rotor is supposed to be uniformly loaded, with the exerted forces a function of the incident wind speed, the thrust coefficient and the rotor diameter. The model is tested under different parameterizations of turbulence models and validated through experimental measurements downwind of a wind turbine in terms of wind speed deficit and turbulence intensity.
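The uniformly loaded actuator-disk idea above can be sketched as follows: the total thrust follows from the thrust coefficient and incident wind speed, and is spread over the disk cells as an axial momentum sink. The density, diameter, thrust coefficient and cell thickness are illustrative assumptions, not values from the study.

```python
import math

rho = 1.225        # air density [kg/m^3]
D = 80.0           # rotor diameter [m] (hypothetical)
Ct = 0.75          # thrust coefficient (hypothetical)
U_inf = 8.0        # incident wind speed [m/s]
dx = 2.0           # streamwise thickness of the disk cells [m]

A = math.pi * (D / 2.0) ** 2           # rotor disk area [m^2]
T = 0.5 * rho * A * Ct * U_inf ** 2    # total axial thrust [N]
f_vol = -T / (A * dx)                  # momentum source per unit volume [N/m^3]
print(T, f_vol)
```

In a CFD setup, `f_vol` would be added as a (negative) source term to the axial momentum equation in every cell occupied by the disk.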
Abstract:
This paper presents a simplified finite element (FE) methodology for accurately solving beam models with (Timoshenko) and without (Bernoulli-Euler) shear deformation. Special emphasis is placed on showing how it is possible to obtain the exact solution at the nodes and good accuracy inside the element. The proposed simplifying concept, denominated the equivalent distributed load (EDL) of any order, is based on the use of Legendre orthogonal polynomials to approximate the original or acting load for computing the results between the nodes. The 1-span beam examples show that this is a promising procedure that allows either using one FE with an EDL of slightly higher order, or using a slightly larger number of FEs while leaving the EDL at the lowest possible order, assumed by definition to be equal to 4, independently of how irregularly the beam is loaded.
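A minimal sketch of the underlying idea: an irregular acting load is approximated by a low-order Legendre expansion over the element. The load function and element length are hypothetical, and NumPy's least-squares Legendre fit stands in for the authors' projection.

```python
import numpy as np

L = 2.0                                   # element length [m] (hypothetical)
x = np.linspace(0.0, L, 201)
# Hypothetical irregular acting load: linear ramp plus a sinusoidal part [kN/m]
q = 5.0 * x / L + 2.0 * np.sin(3.0 * np.pi * x / L)

# Order-4 Legendre approximation of the load on the element domain [0, L]
edl = np.polynomial.legendre.Legendre.fit(x, q, deg=4, domain=[0.0, L])
q_edl = edl(x)

max_err = np.max(np.abs(q - q_edl))
print(f"max pointwise error of the order-4 EDL: {max_err:.3f} kN/m")
```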
Abstract:
The optimization of power architectures is a complex problem due to the plethora of different ways to connect various system components. This issue has been addressed by developing a methodology to design and optimize power architectures in terms of the most fundamental system features: size, cost and efficiency. The process assumes various simplifications regarding the utilized DC/DC converter models in order to prevent the simulation time from becoming excessive and, therefore, stability is not considered. The objective of this paper is to present a simplified method to analyze the small-signal stability of a system in order to integrate it into the optimization methodology. A black-box modeling approach, applicable to commercial converters with unknown topology and components, is based on frequency response measurements, enabling the system small-signal stability assessment. The applicability of the passivity-based stability criterion is assessed. The stability margins are stated utilizing the concept of maximum peak criteria, derived from the behavior of the impedance-based sensitivity function, which provides a single number to state the robustness of the stability of a well-defined minor-loop gain.
Abstract:
Estimation of evolutionary distances has always been a major issue in the study of molecular evolution because evolutionary distances are required for estimating the rate of evolution in a gene, the divergence dates between genes or organisms, and the relationships among genes or organisms. Other closely related issues are the estimation of the pattern of nucleotide substitution, the estimation of the degree of rate variation among sites in a DNA sequence, and statistical testing of the molecular clock hypothesis. Mathematical treatments of these problems are considerably simplified by the assumption of a stationary process in which the nucleotide compositions of the sequences under study have remained approximately constant over time, and there now exist fairly extensive studies of stationary models of nucleotide substitution, although some problems remain to be solved. Nonstationary models are much more complex, but significant progress has been recently made by the development of the paralinear and LogDet distances. This paper reviews recent studies on the above issues and reports results on correcting the estimation bias of evolutionary distances, the estimation of the pattern of nucleotide substitution, and the estimation of rate variation among the sites in a sequence.
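As a concrete instance of the distance-correction problem discussed above, the sketch below implements the simplest stationary-model correction (Jukes-Cantor), which converts an observed proportion of differing sites into an evolutionary distance in expected substitutions per site. It is an illustration of the general idea, not the nonstationary (paralinear/LogDet) machinery the abstract reviews.

```python
import math

def jukes_cantor_distance(p):
    """JC69 correction: d = -(3/4) * ln(1 - 4p/3), valid for p < 0.75,
    where p is the observed proportion of differing sites."""
    if not 0.0 <= p < 0.75:
        raise ValueError("p must be in [0, 0.75) for JC69")
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

# For small p the correction is mild; it grows as sequences saturate.
for p in (0.05, 0.20, 0.50):
    print(p, round(jukes_cantor_distance(p), 4))
```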
Abstract:
The process of liquid silicon infiltration is investigated for channels with radii from 0.25 to 0.75 mm drilled in compact carbon preforms. The advantage of this setup is that the study of the phenomenon is simplified. For comparison purposes, attempts are made to work out a framework for evaluating the accuracy of simulations. The approach relies on dimensionless numbers involving the properties of the surface reaction. It turns out that complex hydrodynamic behavior derived from Newton's second law can be made consistent with Lattice-Boltzmann simulations. The experiments give clear evidence that the growth of silicon carbide proceeds in two different stages, and the basic mechanisms are highlighted. Lattice-Boltzmann simulations prove to be an effective tool for the description of the growing phase; namely, essential experimental constraints can be implemented. As a result, the existing models are useful for gaining more insight into the process of reactive infiltration into porous media in the first stage of penetration, i.e. up to pore closure because of surface growth. A way to incorporate the resistance from the chemical reaction into Darcy's law is also proposed.
Abstract:
We propose and analyze two different Bayesian online algorithms for learning in discrete Hidden Markov Models and compare their performance with the already known Baldi-Chauvin Algorithm. Using the Kullback-Leibler divergence as a measure of generalization, we draw learning curves in simplified situations for these algorithms and compare their performance.
Abstract:
We present and analyze three different online algorithms for learning in discrete Hidden Markov Models (HMMs) and compare their performance with the Baldi-Chauvin Algorithm. Using the Kullback-Leibler divergence as a measure of the generalization error we draw learning curves in simplified situations and compare the results. The performance for learning drifting concepts of one of the presented algorithms is analyzed and compared with the Baldi-Chauvin algorithm in the same situations. A brief discussion about learning and symmetry breaking based on our results is also presented. © 2006 American Institute of Physics.
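The Kullback-Leibler divergence used as the generalization measure in these two abstracts can be sketched for discrete distributions as follows; the "true" and learned distributions shown are hypothetical.

```python
import math

def kl_divergence(p, q):
    """D_KL(p || q) = sum_i p_i * log(p_i / q_i).
    Assumes q_i > 0 wherever p_i > 0; terms with p_i == 0 contribute 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)

p = [0.5, 0.3, 0.2]      # "true" emission distribution (hypothetical)
q = [0.4, 0.35, 0.25]    # learner's current estimate (hypothetical)
print(kl_divergence(p, q))   # nonnegative; 0 iff q == p
```

Tracking this quantity as training proceeds yields the learning curves the abstracts describe.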
Abstract:
The paper compares the basic assumptions and methodology of the Von Neumann model, developed for purely abstract theoretical purposes, and those of the Leontief model, designed originally for practical applications. Study of the similar mathematical structures of the Von Neumann model and the closed, stationary Leontief model, with a unit length of production period, often leads to the false conclusion that the latter is just a simplified version of the former. It is argued that the economic assumptions of the two models are quite different, which makes such an assertion unfounded. Technical choice and joint production are indispensable features of the Von Neumann model, and the assumption of unitary length of production period excludes the possibility of taking service flows explicitly into account. All these features are completely alien to the Leontief model, however.
It is shown that the two models are in fact special cases of a more general stock-flow stationary model, reduced to forms containing only flow variables.
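A minimal numerical sketch of the closed, stationary case discussed above: in a closed Leontief system x = Ax, a nontrivial stationary solution exists when the dominant (Perron) eigenvalue of A equals 1, and the associated eigenvector gives the stationary activity levels. The 3-sector coefficient matrix below is hypothetical.

```python
import numpy as np

# Hypothetical closed input-output matrix; columns sum to 1, so the
# dominant eigenvalue is exactly 1 and x = Ax has a nontrivial solution.
A = np.array([[0.3, 0.4, 0.1],
              [0.5, 0.1, 0.6],
              [0.2, 0.5, 0.3]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)          # index of the Perron root
x = np.abs(eigvecs[:, k].real)       # Perron vector is nonnegative
x /= x.sum()                         # normalize stationary activity levels
print(eigvals[k].real, x)
```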