830 results for Global sensitivity analysis


Relevance: 80.00%

Abstract:

Objective: To compare health service cost and length of stay between a traditional and an accelerated diagnostic approach to the assessment of acute coronary syndromes (ACS) among patients who presented to the emergency department (ED) of a large tertiary hospital in Australia.

Design, setting and participants: This historically controlled study analysed data from two independent patient cohorts presenting to the ED with potential ACS. The first cohort of 938 patients was recruited in 2008–2010 and assessed using the traditional diagnostic approach detailed in the national guideline. The second cohort of 921 patients was recruited in 2011–2013 and assessed with the accelerated diagnostic approach known as the Brisbane protocol. The Brisbane protocol applied early serial troponin testing at 0 and 2 h after presentation to the ED, compared with testing at 0 and 6 h in the traditional assessment process. The Brisbane protocol also defined a low-risk group of patients in whom no objective testing was performed. A decision tree model was used to compare the expected cost and length of stay in hospital between the two approaches, and probabilistic sensitivity analysis was used to account for model uncertainty.

Results: Compared with the traditional diagnostic approach, the Brisbane protocol was associated with a reduced expected cost of $1229 (95% CI −$1266 to $5122) and a reduced expected length of stay of 26 h (95% CI −14 to 136 h). The Brisbane protocol allowed physicians to discharge a higher proportion of low- and intermediate-risk patients from the ED within 4 h (72% vs 51%). Sensitivity analysis suggested the Brisbane protocol had a high probability of being both cost-saving and time-saving.

Conclusions: This study provides some evidence of cost savings from a decision to adopt the Brisbane protocol, with benefits for the hospital as well as for patients and their families.
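
As a rough illustration of the probabilistic sensitivity analysis described above, the sketch below propagates hypothetical cost distributions for the two diagnostic pathways through a Monte Carlo loop and reports the probability that the accelerated pathway is cost-saving. All distribution parameters are invented placeholders, not the study's data.

```python
# Monte Carlo sketch of a two-arm probabilistic sensitivity analysis.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # Monte Carlo draws over the uncertain inputs

# Hypothetical per-patient cost distributions (gamma: non-negative, skewed)
cost_traditional = rng.gamma(shape=20.0, scale=400.0, size=n)  # mean ~ $8000
cost_accelerated = rng.gamma(shape=20.0, scale=340.0, size=n)  # mean ~ $6800

saving = cost_traditional - cost_accelerated
lo, hi = np.percentile(saving, [2.5, 97.5])
print(f"expected saving ${saving.mean():,.0f} (95% interval {lo:,.0f} to {hi:,.0f})")
print(f"P(accelerated pathway is cost-saving) = {(saving > 0).mean():.2f}")
```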

Relevance: 80.00%

Abstract:

A finite-gain differential amplifier is used along with a few passive RC elements to simulate an inductor. Methods for obtaining a low-Q inductance and a frequency-dependent high-Q inductance are described. A sensitivity analysis with respect to variations in the amplifier gain is also included.
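
A minimal sketch of the classical sensitivity computation the abstract refers to, using a generic gain-dependent simulated-inductance expression L(A) = R1·R2·C·A/(1+A). The exact expression depends on the circuit topology, which the abstract does not give, so this form is purely illustrative.

```python
# Classical sensitivity S_A^L = (A/L) * dL/dA of a simulated inductance
# with respect to the amplifier gain A (illustrative L(A), not the paper's).
import sympy as sp

A, R1, R2, C = sp.symbols("A R1 R2 C", positive=True)
L = R1 * R2 * C * A / (1 + A)             # hypothetical simulated inductance
S = sp.simplify((A / L) * sp.diff(L, A))  # classical sensitivity function
print(S)  # -> 1/(A + 1): the sensitivity vanishes as the gain grows large
```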

Relevance: 80.00%

Abstract:

This paper proposes a new multi-stage mine production timetabling (MMPT) model to optimise open-pit mine production operations, including drilling, blasting and excavating, under real-time mining constraints. The MMPT problem is formulated as a mixed integer programming model and can be solved optimally for small MMPT instances by IBM ILOG CPLEX. Due to its NP-hardness, an improved shifting-bottleneck-procedure algorithm based on an extended disjunctive graph is developed to solve large MMPT instances effectively and efficiently. Extensive computational experiments validate the proposed algorithm, which is able to obtain near-optimal operational timetables for mining equipment units efficiently. Its advantages are demonstrated by sensitivity analysis under various real-life scenarios. The proposed MMPT methodology is a promising tool for the mining industry because it is straightforwardly modelled as a standard scheduling problem, solved efficiently by the heuristic algorithm, and flexibly extended by adopting additional industrial constraints.
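
The toy model below sketches the flavour of such an MMPT formulation: three blocks each pass through drill, blast and excavate stages, each stage has a single equipment unit, and big-M disjunctive constraints sequence the blocks on each unit. It uses PuLP with the bundled CBC solver rather than CPLEX, and all durations are made up.

```python
# Tiny illustrative MIP for multi-stage mine production timetabling.
import pulp

blocks = [0, 1, 2]
stages = ["drill", "blast", "excavate"]
dur = {("drill", b): 3 for b in blocks} | {("blast", b): 1 for b in blocks} \
      | {("excavate", b): 4 for b in blocks}
M = 100  # big-M for the disjunctive (equipment) constraints

prob = pulp.LpProblem("mmpt", pulp.LpMinimize)
start = pulp.LpVariable.dicts("s", [(st, b) for st in stages for b in blocks],
                              lowBound=0)
makespan = pulp.LpVariable("Cmax", lowBound=0)

for b in blocks:
    # stage precedence within a block: drill -> blast -> excavate
    prob += start[("blast", b)] >= start[("drill", b)] + dur[("drill", b)]
    prob += start[("excavate", b)] >= start[("blast", b)] + dur[("blast", b)]
    prob += makespan >= start[("excavate", b)] + dur[("excavate", b)]

for st in stages:
    # one equipment unit per stage: sequence any two blocks on it
    for i in blocks:
        for j in blocks:
            if i < j:
                y = pulp.LpVariable(f"y_{st}_{i}_{j}", cat="Binary")
                prob += start[(st, i)] + dur[(st, i)] <= start[(st, j)] + M * (1 - y)
                prob += start[(st, j)] + dur[(st, j)] <= start[(st, i)] + M * y

prob += makespan  # objective: minimise the makespan
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("makespan:", pulp.value(makespan))
```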

Relevance: 80.00%

Abstract:

The paper presents an innovative approach to modelling the causal relationships of human errors in rail crack incidents (RCI) from a managerial perspective. A Bayesian belief network is developed to model RCI by considering the human errors of designers, manufacturers, operators and maintainers (DMOM) and the causal relationships involved. A set of dependent variables whose combinations express the relevant functions performed by each DMOM participant is used to model these relationships. A total of 14 RCI on Hong Kong's mass transit railway (MTR) from 2008 to 2011 are used to illustrate the application of the model. Bayesian inference is used to conduct an importance analysis assessing the impact of the participants' errors, and sensitivity analysis is then employed to gauge the effect of an increased probability of occurrence of human errors on RCI. Finally, strategies for human error identification and for the mitigation of RCI are proposed. The identification of the maintainer's ability in the case study as the most important factor influencing the probability of RCI implies a priority need to strengthen the maintenance management of the MTR system, and suggests that improving the inspection ability of maintainers is likely to be an effective strategy for RCI risk mitigation.
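
A minimal Bayesian-belief-network sketch in the spirit of the DMOM model, with two human-error nodes feeding a rail-crack-incident node; all conditional probabilities are invented for illustration (the paper elicits its model from the 14 MTR incidents). It uses pgmpy, whose class names may differ slightly across versions.

```python
# Toy two-parent Bayesian belief network with a crude sensitivity probe.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

def p_rci(p_designer_err, p_maintainer_err):
    model = BayesianNetwork([("DesignerError", "RCI"), ("MaintainerError", "RCI")])
    model.add_cpds(
        TabularCPD("DesignerError", 2, [[1 - p_designer_err], [p_designer_err]]),
        TabularCPD("MaintainerError", 2, [[1 - p_maintainer_err], [p_maintainer_err]]),
        TabularCPD(
            "RCI", 2,
            # columns: (D=0,M=0), (D=0,M=1), (D=1,M=0), (D=1,M=1) -- invented CPT
            [[0.99, 0.70, 0.80, 0.40],   # P(no incident | parent states)
             [0.01, 0.30, 0.20, 0.60]],  # P(incident | parent states)
            evidence=["DesignerError", "MaintainerError"], evidence_card=[2, 2],
        ),
    )
    return VariableElimination(model).query(["RCI"], show_progress=False).values[1]

# sensitivity probe: bump each error probability in turn and watch P(RCI)
print("baseline          :", round(p_rci(0.05, 0.10), 4))
print("designer bumped   :", round(p_rci(0.25, 0.10), 4))
print("maintainer bumped :", round(p_rci(0.05, 0.30), 4))
```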

Relevance: 80.00%

Abstract:

This research treats the lateral impact behaviour of composite columns, which are finding increasing use as bridge piers and building columns. It offers (1) innovative experimental methods for testing structural columns, (2) dynamic computer simulation techniques as a viable tool for the analysis and design of such columns, and (3) significant new information on their performance for use in design. The research outcomes will help protect lives and property against the risk of vehicular impacts, whether accidental or intentional.

Relevance: 80.00%

Abstract:

The determination of the overconsolidation ratio (OCR) of clay deposits is an important task in geotechnical engineering practice. This paper examines the potential of a support vector machine (SVM) for predicting the OCR of clays from piezocone penetration test data. The SVM is a statistical learning technique based on the structural risk minimization principle, which minimizes both the error and weight terms. The five input variables used in the SVM model for predicting OCR are the corrected cone resistance (qt), vertical total stress (σv), hydrostatic pore pressure (u0), pore pressure at the cone tip (u1), and pore pressure just above the cone base (u2). A sensitivity analysis has been performed to investigate the relative importance of each input parameter; it shows that qt is the in situ measurement most strongly influenced by OCR, followed by σv, u0, u2, and u1. A comparison between the SVM and some traditional interpretation methods is also presented. The results of this study show that the SVM approach has the potential to be a practical tool for determining the OCR of clays.
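
A small sketch of the modelling setup, assuming a standard RBF-kernel support vector regression on the five inputs named above; the feature matrix and target below are synthetic stand-ins for real piezocone records, and the hyperparameters are arbitrary.

```python
# SVM regression on five synthetic piezocone features (qt, sigma_v, u0, u1, u2).
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# columns: qt, sigma_v, u0, u1, u2 -- ranges are loose, illustrative bounds
X = rng.uniform([500, 50, 0, 100, 50], [5000, 400, 300, 2000, 1500], size=(200, 5))
ocr = 1 + 0.002 * X[:, 0] - 0.003 * X[:, 1] + rng.normal(0, 0.2, 200)  # toy target

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, ocr)
print("training R^2:", round(model.score(X, ocr), 3))
```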

Relevance: 80.00%

Abstract:

In this paper, an approach for obtaining the penetration depth and section modulus of a cantilever sheet pile wall using an inverse reliability method is described. The proposed procedure employs the inverse first-order reliability method to obtain the design penetration depth and section modulus of the steel sheet pile wall such that the reliability of the wall against each failure mode meets a desired level of safety. A sensitivity analysis is conducted to assess the effect of uncertainties in the design parameters on the reliability of cantilever sheet pile walls. The analysis is performed by treating the backfill soil properties, the depth of the water table from the top of the wall, the yield strength of steel and the section modulus of the steel pile as random variables. Two limit states are considered, viz. rotational and flexural failure of the wall. The results are used to develop a set of reliability-based design charts for different coefficients of variation of the backfill friction angle (5%, 10% and 15%). System reliability considerations, in terms of series and parallel systems, are also studied.
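
The snippet below sketches the inverse-reliability idea on a toy limit state: compute the Hasofer-Lind index beta by constrained minimization in standard normal space, then bisect on the design parameter (embedment depth) until beta reaches its target. The limit-state function and all numbers are placeholders, not the paper's rotational or flexural limit states.

```python
# Inverse-FORM sketch: find the embedment depth that achieves a target beta.
import numpy as np
from scipy.optimize import minimize

mu = np.array([30.0, 2.0])   # means: friction angle (deg), water-table depth (m)
sd = np.array([3.0, 0.4])    # standard deviations

def g(x, depth):
    # toy limit state: capacity grows with friction angle and embedment depth,
    # demand grows with water-table depth (illustrative coefficients only)
    phi, hw = x
    return 0.8 * phi + 5.0 * depth - 12.0 * hw - 15.0

def beta(depth):
    # Hasofer-Lind index: minimize |u| subject to g(mu + sd*u, depth) = 0
    cons = {"type": "eq", "fun": lambda u: g(mu + sd * u, depth)}
    res = minimize(lambda u: u @ u, x0=np.zeros(2), constraints=cons)
    return np.sqrt(res.fun)

# inverse step: bisect on embedment depth until beta reaches the target 3.0
lo_d, hi_d = 3.0, 10.0   # beta(3.0) = 0 for this toy g; beta grows with depth
for _ in range(40):
    mid = 0.5 * (lo_d + hi_d)
    lo_d, hi_d = (mid, hi_d) if beta(mid) < 3.0 else (lo_d, mid)
print(f"required embedment depth ~ {mid:.2f} m for target beta = 3.0")
```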

Relevance: 80.00%

Abstract:

In this study, the stability of anchored cantilever sheet pile walls in sandy soils is investigated using reliability analysis. Targeted stability is formulated as an optimization problem in the framework of an inverse first-order reliability method. A sensitivity analysis is conducted to investigate the effect of the parameters influencing the stability of the wall. Backfill soil properties, soil-steel interface friction angle, depth of the water table from the top of the wall, total depth of embedment below the dredge line, yield strength of steel, section modulus of the steel sheet pile, and anchor pull are all treated as random variables. The sheet pile wall system is modeled as a series combination of failure modes. Penetration depth, anchor pull, and section modulus are calculated for various target component and system reliability indices based on three limit states: rotational failure about the position of the anchor rod, expressed in terms of a moment ratio; sliding failure, expressed in terms of a force ratio; and flexural failure of the steel sheet pile wall, expressed in terms of a section modulus ratio. Reliability-based design charts are proposed that account for the failure criteria as well as the variability in the parameters. The results of the study are compared with studies in the literature.
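
As a quick numerical illustration of the series-system view taken above, the sketch below combines three hypothetical component reliability indices into first-order bounds on the system failure probability (fully correlated vs. independent failure modes).

```python
# First-order series-system bounds from component reliability indices.
import numpy as np
from scipy.stats import norm

betas = {"rotation": 3.2, "sliding": 3.5, "flexure": 3.0}   # hypothetical values
pf = {k: norm.sf(b) for k, b in betas.items()}              # component P_f

lower = max(pf.values())                                    # fully correlated modes
upper = 1.0 - np.prod([1.0 - p for p in pf.values()])       # independent modes
print(f"system P_f in [{lower:.2e}, {upper:.2e}]")
print(f"system beta in [{norm.isf(upper):.2f}, {norm.isf(lower):.2f}]")
```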

Relevance: 80.00%

Abstract:

Bottlenecked operations in complex coal rail systems cost mining companies millions of dollars. To address this issue, this paper investigates a real-world coal rail system and aims to optimise the coal railing operations under constraints of limited resources (e.g., a limited number of locomotives and wagons). In the literature, most studies have considered the train scheduling problem on a single-track railway network to be strongly NP-hard and have therefore developed metaheuristics as the main solution method. In this paper, a new mathematical programming model is formulated and implemented in an optimization programming language based on a constraint programming (CP) approach. A new depth-first-search technique is developed and embedded in the CP model to obtain optimised coal railing timetables efficiently. Computational experiments demonstrate that high-quality solutions are obtainable in industry-scale applications. To support decision making, a sensitivity analysis is conducted for different scenarios and specific criteria.

Keywords: train scheduling, rail transportation, coal mining, constraint programming
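
A toy constraint-programming timetable in the same spirit: coal trains with release times sequenced over a shared single-track segment, minimizing makespan. It uses Google OR-Tools CP-SAT rather than the ILOG CP stack named in the paper, and all numbers are illustrative.

```python
# Minimal CP timetable: sequence coal trains on one single-track segment.
from ortools.sat.python import cp_model

m = cp_model.CpModel()
horizon, run = 200, 12        # planning horizon and single-track transit time
ready = [0, 5, 15, 20, 30]    # earliest departure time of each coal train

starts, intervals = [], []
for t, r in enumerate(ready):
    s = m.NewIntVar(r, horizon - run, f"start_{t}")
    intervals.append(m.NewIntervalVar(s, run, s + run, f"run_{t}"))
    starts.append(s)

m.AddNoOverlap(intervals)     # one train on the single-track segment at a time

makespan = m.NewIntVar(0, horizon, "makespan")
for s in starts:
    m.Add(makespan >= s + run)
m.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(m) == cp_model.OPTIMAL:
    print("departures:", [solver.Value(s) for s in starts],
          "makespan:", solver.Value(makespan))
```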

Relevance: 80.00%

Abstract:

Through the example of a spacecraft equipment deck, which is generally of honeycomb sandwich construction, it is shown that the modal energy distribution can be used as an effective guideline in improving the deck's frequencies to meet the restrictions imposed upon it. The kinetic energy distribution is employed as a basis for redistributing the various packages on the deck. The strain energy distribution is used to identify areas that can be stiffened by bonding "doublers" onto the face sheets, with the doubler thickness obtained from a sensitivity analysis.
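
A minimal sketch of the kinetic-energy-distribution idea: for a lumped mass matrix M and mode shape phi, the fraction of modal kinetic energy carried by each degree of freedom (proportional to m_i·phi_i²) indicates where moving mass, i.e. packages, is most effective. The matrices below are random stand-ins for a finite element model of the deck.

```python
# Modal kinetic energy fractions from a generalized eigenproblem K*phi = w^2*M*phi.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 6
M = np.diag(rng.uniform(1.0, 5.0, n))      # lumped package masses (stand-in)
A = rng.uniform(-1, 1, (n, n))
K = A @ A.T + n * np.eye(n)                # symmetric positive-definite stiffness

w2, phi = eigh(K, M)                       # generalized eigensolution
mode = phi[:, 0]                           # fundamental mode shape

ke = M.diagonal() * mode**2                # modal KE carried by each DOF
print("kinetic energy fractions:", np.round(ke / ke.sum(), 3))
```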

Relevance: 80.00%

Abstract:

In this work, an analytical model is proposed for fatigue crack propagation in plain concrete, based on an exponential population-growth law in conjunction with principles of dimensional analysis and self-similarity. The model takes into account parameters such as loading history, fracture toughness, crack length, loading ratio and structural size. The predicted results are compared with experimental crack growth data for constant and variable amplitude loading and are found to show good agreement as well as to capture the size effect. Using this model, a sensitivity analysis is carried out to study the effect of the various parameters that influence fatigue failure.
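
For orientation, the sketch below numerically integrates a generic Paris-type growth law da/dN = C·(ΔK)^m; the paper's model is richer (loading history, loading ratio, size effect), and the constants here are illustrative, not fitted values.

```python
# Cycle-by-cycle integration of a generic Paris-type crack growth law.
import numpy as np

C, m, Y = 1e-10, 3.0, 1.12   # hypothetical Paris constants and geometry factor
dsig = 100.0                  # stress range (MPa), illustrative
a, a_crit = 1e-3, 20e-3       # initial and critical crack lengths (m)
N, dN = 0, 100                # cycle counter and integration step

while a < a_crit:
    dK = Y * dsig * np.sqrt(np.pi * a)   # stress intensity range, MPa*sqrt(m)
    a += C * dK**m * dN                  # explicit Euler step over dN cycles
    N += dN
print(f"cycles to grow from 1 mm to 20 mm: ~{N:,}")
```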

Relevance: 80.00%

Abstract:

Earlier source localization algorithms mostly used non-planar arrays. In scenarios such as human-computer or human-television interaction, however, the microphones must be placed on the computer monitor or television front panel, i.e., a planar array must be used. The algorithm proposed in [1] is a linear closed-form source localization algorithm (LCF algorithm) based on time differences of arrival (TDOAs) obtained from the data collected by the microphones; it assumes a non-planar array. In the current work, the LCF algorithm is applied to planar arrays. The relationship between the error in the source location estimate and the perturbation in the TDOAs is derived using first-order perturbation analysis and validated using simulations. Since erroneous TDOAs perturb both the coefficient matrix and the data matrix used to obtain the source location, a total least squares solution for source localization is also proposed. A sensitivity analysis of the source localization algorithm for planar and non-planar arrays is carried out by introducing perturbations in the TDOAs and the microphone locations. It is shown that, for the same perturbation in the TDOAs or microphone locations, the error in the source location estimate is smaller for the planar array than for the particular non-planar array considered. The location of the reference microphone is shown to be important for obtaining an accurate source location estimate with the LCF algorithm.
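
A small localization baseline for the planar-array setting: given range differences relative to a reference microphone, recover the source by nonlinear least squares. This is a generic solver for illustration, not the LCF algorithm of [1], and the geometry is invented.

```python
# TDOA source localization on a planar array via nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

mics = np.array([[0.0, 0.0, 0.0],     # reference microphone
                 [0.5, 0.0, 0.0],
                 [0.0, 0.4, 0.0],
                 [0.5, 0.4, 0.0],
                 [0.25, 0.2, 0.0]])   # planar, panel-mounted array (made up)
source = np.array([1.0, 2.0, 3.0])

ranges = np.linalg.norm(mics - source, axis=1)
rdiff = ranges[1:] - ranges[0]        # range differences = c * TDOA
rdiff += np.random.default_rng(3).normal(0, 1e-3, rdiff.size)  # perturbed TDOAs

def residual(x):
    r = np.linalg.norm(mics - x, axis=1)
    return (r[1:] - r[0]) - rdiff

# initial guess on the correct side of the array plane resolves the mirror ambiguity
est = least_squares(residual, x0=np.array([0.5, 0.5, 1.0])).x
print("estimate:", np.round(est, 3), " error:", np.linalg.norm(est - source))
```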

Relevance: 80.00%

Abstract:

A weighted-least-squares method using a sensitivity-analysis technique is proposed for the estimation of parameters in water distribution systems. The parameters considered are the Hazen-Williams coefficients of the pipes. The objective function is the sum of the weighted squares of the differences between the computed and observed values of the variables. The weighted-least-squares method can elegantly handle multiple loading conditions with mixed types of measurements such as heads and consumptions, different sets and numbers of measurements for each loading condition, and modifications in the network configuration due to the inclusion or exclusion of pipes affected by valve operations in each loading condition. Uncertainty in the parameter estimates can also be obtained. The method is applied to the estimation of parameters in a metropolitan urban water distribution system in India.
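
A single-pipe sketch of the weighted-least-squares idea: estimate a Hazen-Williams coefficient C from noisy head-loss observations, each weighted by its measurement precision. A real calibration couples this with a full network hydraulic solver; the constants below are illustrative.

```python
# Weighted least squares fit of a Hazen-Williams coefficient for one pipe,
# using the head-loss relation h = K * Q^1.852 / C^1.852.
import numpy as np
from scipy.optimize import minimize_scalar

K, C_true = 1.2e6, 120.0              # lumped pipe constant, "true" coefficient
Q = np.linspace(0.02, 0.10, 8)        # observed pipe flows (m^3/s)
rng = np.random.default_rng(7)
h_obs = K * Q**1.852 / C_true**1.852 + rng.normal(0, 0.05, Q.size)  # noisy heads
w = np.full(Q.size, 1 / 0.05**2)      # weights = 1 / measurement variance

def wsse(C):
    return np.sum(w * (h_obs - K * Q**1.852 / C**1.852) ** 2)

C_hat = minimize_scalar(wsse, bounds=(50, 200), method="bounded").x
print(f"estimated C = {C_hat:.1f} (true {C_true})")
```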

Relevance: 80.00%

Abstract:

This paper deals with the evaluation of the component-laminate load-carrying capacity, i.e., calculating the loads that cause the failure of the individual layers and of the component-laminate as a whole in a four-bar mechanism. The component-laminate load-carrying capacity is evaluated using the Tsai-Wu-Hahn failure criterion for various layups. The reserve factor of each ply in the component-laminate is calculated using the maximum resultant force and the maximum resultant moment occurring at different time steps at the joints of the mechanism. Here, all component bars of the mechanism are made of fiber-reinforced laminates and have thin rectangular cross-sections. They could, in general, be pre-twisted and/or possess initial curvature, either by design or by defect, and they are linked to each other by means of revolute joints. We restrict ourselves to linear materials with small strains within each elastic body (beam). Each component of the mechanism is modeled as a beam based on geometrically nonlinear 3-D elasticity theory. The component problems are thus split into 2-D analyses of reference beam cross-sections and nonlinear 1-D analyses along the three beam reference curves. For the thin rectangular cross-sections considered here, the 2-D cross-sectional nonlinearity is also overwhelming: such sections constitute a limiting case between thin-walled open and closed sections, thus inviting the nonlinear phenomena observed in both. The strong elastic couplings of anisotropic composite laminates complicate the model further. However, a powerful mathematical tool called the Variational Asymptotic Method (VAM) not only enables such a dimensional reduction but also provides asymptotically correct analytical solutions to the nonlinear cross-sectional analysis. These closed-form solutions are used here in conjunction with numerical techniques for the rest of the problem, yielding predictions more quickly and accurately than would otherwise be possible. Local 3-D stress, strain and displacement fields for representative sections in the component bars are recovered from the stress resultants of the 1-D global beam analysis. A numerical example is presented which illustrates the failure of each component-laminate and of the mechanism as a whole.
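
The snippet below sketches the ply reserve-factor computation with the Tsai-Wu criterion: scale the ply stresses by R until the failure index reaches 1 and solve the resulting quadratic for R. Strengths and stresses are generic carbon/epoxy-like placeholders, not the paper's recovered fields, and the interaction term F12 is taken at its common default.

```python
# Ply reserve factor R from the Tsai-Wu failure criterion.
import numpy as np

# ply strengths (MPa): tension/compression along 1 and 2, in-plane shear
Xt, Xc, Yt, Yc, S = 1500.0, 1200.0, 50.0, 200.0, 70.0
F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
F12 = -0.5 * np.sqrt(F11 * F22)        # common default interaction term

s1, s2, s6 = 400.0, 20.0, 30.0         # illustrative ply stresses (MPa)

# Tsai-Wu under proportional scaling: a*R^2 + b*R = 1, take the positive root
a = F11*s1**2 + F22*s2**2 + F66*s6**2 + 2*F12*s1*s2
b = F1*s1 + F2*s2
R = (-b + np.sqrt(b**2 + 4*a)) / (2*a)
print(f"reserve factor R = {R:.2f}  (R > 1 means the ply does not fail)")
```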

Relevance: 80.00%

Abstract:

Three classification techniques, namely K-means cluster analysis (KCA), fuzzy cluster analysis (FCA) and Kohonen neural networks (KNN), were employed to group 25 microwatersheds of the Kherthal watershed, Rajasthan, into homogeneous groups as a basis for suitable conservation and management practices. Ten mainly morphological parameters, namely drainage density (Dd), bifurcation ratio (Rb), stream frequency (Fu), length of overland flow (Lo), form factor (Rf), shape factor (Bs), elongation ratio (Re), circulatory ratio (Rc), compactness coefficient (Cc) and texture ratio (T), are used for the classification. The optimal number of groups is chosen based on two cluster validation indices, Davies-Bouldin and Dunn's. Comparative analysis revealed that 13 of the 25 microwatersheds (52%) are grouped identically by KCA, FCA and KNN; 17 of 25 (68%) by KCA and FCA; 16 of 25 (64%) by FCA and KNN; and 15 of 25 (60%) by KNN and KCA. The KNN sensitivity analysis shows that the effect of the number of epochs (1000, 3000, 5000) and of the learning rate (0.01, 0.1-0.9) on the total squared error is significant, even though no fixed trend is observed. The sensitivity studies also revealed that the microwatersheds occupy all groups, though in different numbers per group, when the number of groups is increased from 5 to 6, 7 and 8.
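
To make the validation-index step concrete, the sketch below scores K-means partitions of a stand-in 25x10 feature matrix with the Davies-Bouldin index (lower is better); the real study applies this to the ten morphological parameters listed above.

```python
# Choosing the number of clusters with the Davies-Bouldin index.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(5)
X = StandardScaler().fit_transform(rng.normal(size=(25, 10)))  # stand-in features

for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(f"k = {k}: DB index = {davies_bouldin_score(X, labels):.3f}")
```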