830 results for Global sensitivity analysis


Relevance: 90.00%

Abstract:

As an initial step in establishing mechanistic relationships between environmental variability and recruitment in Atlantic cod Gadus morhua along the coast of the western Gulf of Maine, we assessed transport success of larvae from major spawning grounds to nursery areas with particle tracking using the unstructured grid model FVCOM (finite volume coastal ocean model). In coastal areas, dispersal of early planktonic life stages of fish and invertebrate species is highly dependent on the regional dynamics and its variability, which must be captured by our models. With state-of-the-art forcing for the year 1995, we evaluate the sensitivity of particle dispersal to the timing and location of spawning, the spatial and temporal resolution of the model, and the vertical mixing scheme. A 3 d frequency for the release of particles is necessary to capture the effect of circulation variability in a dispersal pattern averaged over the spawning season. The analysis of sensitivity to model setup showed that a higher resolution mesh, tidal forcing, and current variability do not change the general pattern of connectivity, but do tend to increase within-site retention. Our results indicate strong downstream connectivity among spawning grounds and higher chances for successful transport from spawning areas closer to the coast. The model run for the January egg release indicates 1 to 19% within-spawning-ground retention of initial particles, which may be sufficient to sustain local populations. A systematic sensitivity analysis still needs to be conducted to determine the minimum mesh and forcing resolution that adequately resolves the complex dynamics of the western Gulf of Maine. Other sources of variability, i.e. large-scale upstream forcing and the biological environment, also need to be considered in future studies of the interannual variability in transport and survival of the early life stages of cod.
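As a rough illustration of the transport-success calculation described above, the following sketch advects passive particles in a prescribed velocity field and tallies a connectivity matrix between release sites and nursery boxes. It is not the FVCOM Lagrangian module: the flow field, site coordinates and nursery polygons are hypothetical placeholders.

```python
# Minimal sketch of Lagrangian particle tracking between spawning and nursery
# polygons. NOT the FVCOM particle-tracking module: velocity field, release
# sites and nursery boxes are hypothetical placeholders used only to show how
# a connectivity (transport-success) matrix is assembled.
import numpy as np

def velocity(x, y, t):
    """Placeholder coastal flow: steady alongshore drift plus a weak eddy."""
    u = -0.10 + 0.05 * np.sin(2 * np.pi * y / 50e3)   # m/s
    v = 0.02 * np.cos(2 * np.pi * x / 50e3)           # m/s
    return u, v

release_sites = {"siteA": (0.0, 0.0), "siteB": (30e3, 10e3)}   # hypothetical
nursery_boxes = {"north": (-60e3, -30e3, 0e3, 40e3),           # xmin,xmax,ymin,ymax
                 "south": (-60e3, -30e3, -40e3, 0e3)}

dt, n_steps, n_particles = 3600.0, 24 * 30, 200       # 30 d drift, hourly steps
rng = np.random.default_rng(0)
connectivity = {s: {b: 0.0 for b in nursery_boxes} for s in release_sites}

for site, (x0, y0) in release_sites.items():
    x = x0 + rng.normal(0, 1e3, n_particles)           # small release-patch spread
    y = y0 + rng.normal(0, 1e3, n_particles)
    for k in range(n_steps):
        u, v = velocity(x, y, k * dt)
        x, y = x + u * dt, y + v * dt                  # forward-Euler advection
    for box, (xmin, xmax, ymin, ymax) in nursery_boxes.items():
        inside = (x >= xmin) & (x <= xmax) & (y >= ymin) & (y <= ymax)
        connectivity[site][box] = inside.mean()        # fraction transported

print(connectivity)
```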

Relevance: 90.00%

Abstract:

AIMS Metformin use has been associated with a decreased risk of some cancers, although data on head and neck cancer (HNC) are scarce. We explored the relation between the use of antidiabetic drugs and the risk of HNC. METHODS We conducted a case-control analysis in the UK-based Clinical Practice Research Datalink (CPRD) of people with incident HNC between 1995 and 2013 below the age of 90 years. Six controls per case were matched on age, sex, calendar time, general practice and number of years of active history in the CPRD prior to the index date. Other potential confounders including body mass index (BMI), smoking, alcohol consumption and comorbidities were also evaluated. The final analyses were adjusted for BMI, smoking and diabetes mellitus (or diabetes duration in a sensitivity analysis). Results are presented as odds ratios (ORs) with 95% confidence intervals (CIs). RESULTS Use of metformin was not associated with a statistically significantly altered risk of HNC overall (1-29 prescriptions: adjusted OR 0.87, 95% CI 0.61-1.24; ≥30 prescriptions: adjusted OR 0.80, 95% CI 0.53-1.22), nor was long-term use of sulphonylureas (adjusted OR 0.87, 95% CI 0.59-1.30) or any insulin use (adjusted OR 0.92, 95% CI 0.63-1.35). However, we found a (statistically non-significant) decreased risk of laryngeal cancer associated with long-term metformin use (adjusted OR 0.41, 95% CI 0.17-1.03). CONCLUSIONS In this population-based study, the use of antidiabetic drugs was not associated with a materially altered risk of HNC. Our data suggest a protective effect of long-term metformin use for laryngeal cancer.
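For readers unfamiliar with the reported measures, the sketch below computes an odds ratio with a Wald 95% confidence interval from a 2x2 table. It is only illustrative: the study itself used conditional logistic regression on matched case-control sets with covariate adjustment, and the counts here are invented, not CPRD data.

```python
# Illustrative odds ratio with a Wald 95% CI from a 2x2 exposure-by-case table.
# The counts are made-up numbers, not data from the CPRD analysis.
import math

def odds_ratio_ci(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    or_ = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)
    se_log_or = math.sqrt(1 / exposed_cases + 1 / unexposed_cases +
                          1 / exposed_controls + 1 / unexposed_controls)
    lo = math.exp(math.log(or_) - 1.96 * se_log_or)
    hi = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, lo, hi

# Hypothetical counts: metformin-exposed vs unexposed among cases and controls.
print("OR %.2f (95%% CI %.2f-%.2f)" % odds_ratio_ci(40, 360, 290, 2110))
```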

Relevance: 90.00%

Abstract:

BACKGROUND The Endoscopic Release of Carpal Tunnel Syndrome (ECTR) is a minimally invasive approach for the treatment of Carpal Tunnel Syndrome. There is scepticism regarding the safety of this technique, based on the assumption that this is a rather "blind" procedure and on the high number of severe complications that have been reported in the literature. PURPOSE To evaluate whether there is evidence supporting a higher risk after ECTR in comparison to the conventional open release (OCTR). METHODS We searched MEDLINE (January 1966 to November 2013), EMBASE (January 1980 to November 2013), the Cochrane Neuromuscular Disease Group Specialized Register (November 2013) and CENTRAL (2013, issue 11, in The Cochrane Library). We hand-searched reference lists of included studies. We included all randomized or quasi-randomized controlled trials (e.g. studies using alternation, date of birth, or case record number) that compare any ECTR with any OCTR technique. Safety was assessed by the incidence of major, minor and total number of complications, recurrences, and re-operations. The total time needed before return to work or return to daily activities was also assessed. We synthesized data using a random-effects meta-analysis in STATA. We conducted a sensitivity analysis for rare events using a binomial likelihood. We judged the conclusiveness of the meta-analysis by calculating its conditional power. CONCLUSIONS ECTR is associated with less time off work or away from daily activities. The assessment of major complications, reoperations and recurrence of symptoms does not favor either of the interventions. There is an uncertain advantage of ECTR with respect to total minor complications (more transient paresthesia but fewer skin-related complications). Future studies are unlikely to alter these findings because of the rarity of the outcome. The effect of a learning curve might be responsible for reduced recurrences and reoperations with ECTR in studies that are more recent, although formal statistical analysis failed to provide evidence for such an association. LEVEL OF EVIDENCE I.
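The pooling step mentioned above can be illustrated with a minimal DerSimonian-Laird random-effects meta-analysis of log odds ratios. The review used STATA (with a binomial-likelihood sensitivity analysis for rare events); the per-study effect sizes and standard errors below are invented purely to show the mechanics of the pooling.

```python
# Sketch of DerSimonian-Laird random-effects pooling of log odds ratios.
# Study ORs and standard errors are hypothetical, not the review's data.
import numpy as np

log_or = np.log(np.array([0.70, 1.10, 0.85, 0.95]))   # hypothetical study ORs
se     = np.array([0.40, 0.35, 0.50, 0.30])           # hypothetical SE(log OR)

w_fixed = 1.0 / se**2
theta_fixed = np.sum(w_fixed * log_or) / np.sum(w_fixed)
Q = np.sum(w_fixed * (log_or - theta_fixed) ** 2)      # Cochran's Q
df = len(log_or) - 1
C = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / C)                          # between-study variance

w_re = 1.0 / (se**2 + tau2)
theta_re = np.sum(w_re * log_or) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print("pooled OR %.2f (95%% CI %.2f-%.2f), tau^2 = %.3f" %
      (np.exp(theta_re),
       np.exp(theta_re - 1.96 * se_re),
       np.exp(theta_re + 1.96 * se_re),
       tau2))
```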

Relevance: 90.00%

Abstract:

The Advisory Committee on Immunization Practices (ACIP) develops written recommendations for the routine administration of vaccines to children and adults in the U.S. civilian population. The ACIP is the only entity in the federal government that makes such recommendations. The ACIP elaborates on the selection of its members and rules out concerns regarding its integrity, but fails to provide information about the importance of economic analysis in vaccine selection. ACIP recommendations can have large health and economic consequences. Emphasis on economic evaluation in health is a likely response to severe pressures on federal and state health budgets. This study describes the economic aspects considered by the ACIP while sanctioning a vaccine, and reviews the economic evaluations (our economic data) provided for vaccine deliberations. A five-year study period, from 2004 to 2009, is adopted. Publicly available data from the ACIP web database are used. The Drummond et al. (2005) checklist serves as a guide to assess the quality of the economic evaluations presented. The Drummond et al. checklist is comprehensive; hence it is unrealistic to expect every ACIP deliberation to meet all of its criteria. For practical purposes we have selected seven of the criteria provided by Drummond et al. that we judge to be most significant. Twenty-four data points were obtained over the five-year period. Our results show that, of the twenty-four data points (economic evaluations), only five received a score of six; that is, six of the seven items on the list were met. None of the data points received a perfect score of seven. Seven of the twenty-four data points received a score of five. Only one of the economic analyses received the minimum score of two. The type-of-economic-evaluation, model, and ICER/QALY criteria were each met at a rate of 0.875 (87.5%), the highest among the seven criteria studied. The perspective criterion was met at 0.583 (58.3%), followed by the source and sensitivity analysis criteria, both at 0.541 (54.1%). The discounting criterion was met at 0.250 (25.0%). Economic analysis is not a novel concept to the ACIP. It has been practiced and presented at these meetings on a regular basis for more than five years. The ACIP's stated goal is to utilize good-quality epidemiologic, clinical and economic analyses to help policy makers choose among the alternatives presented and thus reach a better-informed decision. As seen in our study, the economic analyses presented over the years are inconsistent. The large variability, coupled with the lack of a standardized format, may compromise the utility of the economic information for decision-making. While making recommendations, the ACIP takes into account all available information about a vaccine. Thus it is vital that standardized, high-quality economic information is provided at the ACIP meetings. Our study may provide a call for the ACIP to further investigate deficiencies within the system and thereby improve the economic evaluation data presented.
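As a back-of-the-envelope check, the proportions quoted above are consistent with the following criteria-met counts out of the 24 economic evaluations; the counts are inferred from the reported percentages, not taken from the study.

```python
# Rough check of the reported criteria-met rates, assuming the denominator is
# the 24 economic evaluations reviewed. Counts are inferred from the reported
# proportions, not study data.
n = 24
inferred_counts = {"evaluation type / model / ICER-QALY": 21,   # 21/24 = 0.875
                   "perspective": 14,                            # 14/24 ~ 0.583
                   "source / sensitivity analysis": 13,          # 13/24 ~ 0.542
                   "discounting": 6}                              # 6/24  = 0.250
for criterion, k in inferred_counts.items():
    print(f"{criterion}: {k}/{n} = {k/n:.3f}")
```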

Relevance: 90.00%

Abstract:

Marine Isotope Stage (MIS) 11 (424-374 ka) was characterized by a protracted deglaciation and an unusually long climatic optimum. It remains unclear to what degree the climate development during this interglacial reflects the unusually weak orbital forcing or greenhouse gas trends. Previously, arguments about the duration and timing of the MIS11 climatic optimum and about the pace of the deglacial warming were based on a small number of key records, which appear to show regional differences. In order to obtain a global signal of climate evolution during MIS11, we compiled a database of 78 sea surface temperature (SST) records from 57 sites spanning MIS11, aligned these individually to a common time frame on the basis of benthic (N = 28) or planktonic (N = 31) stable oxygen isotope curves, and subjected 48 of them to an empirical orthogonal function (EOF) analysis. The analysis revealed a high commonality among all records, with the principal SST trend explaining almost 49% of the variability. This trend indicates that on the global scale, the surface ocean underwent rapid deglacial warming during Termination V, in pace with the carbon dioxide rise, followed by a broad SST optimum centered at ~410 kyr. The second EOF, which explained ~18% of the variability, revealed the existence of a different SST trend, characterized by a delayed onset of the temperature optimum during MIS11 at ~398 kyr, followed by a prolonged warm period lasting beyond 380 kyr. This trend is most consistently manifested in the mid-latitude North Atlantic and Mediterranean Sea and is here attributed to the strength of the Atlantic meridional overturning circulation. A sensitivity analysis indicates that these results are robust to record selection and to age-model uncertainties of up to 3-6 kyr, but more sensitive to SST seasonal attribution and SST uncertainties >1 °C. In order to validate the CCSM3 (Community Climate System Model, version 3) predictive potential, the annual and seasonal SST anomalies recorded in a total of 74 proxy records were compared with runs for three time slices representing orbital configuration extremes during the peak interglacial of MIS11. The modeled SST anomalies are characterized by a significantly lower variance compared to the reconstructions. Nevertheless, significant correlations between proxy and model data are found in comparisons on the seasonal basis, indicating that the model captures part of the long-term variability induced by astronomical forcing, which appears to have left a detectable signature in SST trends.
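The EOF decomposition underlying these results can be sketched as an SVD of the record-by-age anomaly matrix. The array below is random surrogate data standing in for the 48 aligned SST records; it only shows how explained-variance fractions such as the ~49% (EOF1) and ~18% (EOF2) quoted above would be computed.

```python
# Minimal EOF (principal component) decomposition of SST records on a common
# age model via SVD of the anomaly matrix. Random surrogate data only.
import numpy as np

rng = np.random.default_rng(1)
n_records, n_ages = 48, 200
sst = rng.normal(size=(n_records, n_ages))            # surrogate SST matrix

anom = sst - sst.mean(axis=1, keepdims=True)          # remove each record's mean
u, s, vt = np.linalg.svd(anom, full_matrices=False)

explained = s**2 / np.sum(s**2)                       # variance fraction per EOF
eof1_time_series = vt[0]                              # common temporal pattern
eof1_loadings = u[:, 0] * s[0]                        # per-record amplitude
print("EOF1 explains %.1f%% of the variance" % (100 * explained[0]))
```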

Relevance: 90.00%

Abstract:

The development of a global instability analysis code coupling a time-stepping approach, as applied to the solution of BiGlobal and TriGlobal instability analysis [1, 2], and finite-volume-based spatial discretization, as used in standard aerodynamics codes, is presented. The key advantage of the time-stepping method over matrix-formulation approaches is that the former provides a solution to the computer-storage issues associated with the latter methodology. To date, both approaches are successfully in use to analyze instability in complex geometries, although their relative advantages have never been quantified. The ultimate goal of the present work is to address this issue in the context of spatial discretization schemes typically used in industry. The time-stepping approach of Chiba [3] has been implemented in conjunction with two direct numerical simulation algorithms, one based on the high-order methods typically used in this context and another based on low-order methods representative of those in common use in industry. The two codes have been validated with solutions of the BiGlobal EVP, and it has been shown that small errors in the base flow do not significantly affect the results. As a result, a three-dimensional compressible unsteady second-order code for global linear stability has been successfully developed, based on finite-volume spatial discretization and a time-stepping method, with the ability to study complex geometries by means of unstructured and hybrid meshes.
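A schematic of the time-stepping (matrix-free) idea is given below: the leading eigenvalues of the linearized operator are recovered from the action of the exponential propagator, which in practice is approximated by integrating the linearized equations with the flow solver. Here the propagator is a toy dense matrix exponential, not a finite-volume DNS code.

```python
# Matrix-free global stability sketch: Arnoldi (ARPACK) iteration on the action
# of exp(A*T). "propagate" is a stand-in for a call to the linearized DNS code.
import numpy as np
from scipy.linalg import expm
from scipy.sparse.linalg import LinearOperator, eigs

n, T = 200, 0.5
rng = np.random.default_rng(2)
A = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n)) - 1.5 * np.eye(n)  # toy stable operator
phi = expm(A * T)                                      # exact propagator (toy stand-in)

def propagate(q):
    """One call = integrate the linearized equations from q over time T."""
    return phi @ q

op = LinearOperator((n, n), matvec=propagate, dtype=np.float64)
mu, _ = eigs(op, k=5, which="LM")                      # Ritz values of exp(A*T)
sigma = np.log(mu) / T                                 # recovered eigenvalues of A
print("leading growth rates:", np.sort(sigma.real)[::-1])
```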

Relevance: 90.00%

Abstract:

Recent research into the implementation of logic programming languages has demonstrated that global program analysis can be used to speed up execution by an order of magnitude. However, currently such global program analysis requires the program to be analysed as a whole: separate compilation of modules is not supported. We describe and empirically evaluate a simple model for extending global program analysis to support separate compilation of modules. Importantly, our model supports context-sensitive program analysis and multi-variant specialization of procedures in the modules.

Relevance: 90.00%

Abstract:

A sensitivity analysis of the multiplication factor, keff, to the cross section data has been carried out for the MYRRHA critical configuration in order to identify the most relevant reactions. With these results, a further analysis of the 238Pu and 56Fe cross sections has been performed, comparing the evaluations provided for these nuclides in the JEFF-3.1.2 and ENDF/B-VII.1 libraries. The effect in MYRRHA of the differences between evaluations is then analysed, and the source of the differences is presented. With these results, recommendations for the 56Fe and 238Pu evaluations are suggested. These calculations have been performed with SCALE6.1 and MCNPX-2.7e.
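The sensitivity coefficients referred to above can be illustrated with the direct-perturbation definition S = (dk/k)/(dσ/σ); the sketch below evaluates it by finite differences between two hypothetical transport calculations and is not a SCALE/TSUNAMI or MCNP workflow.

```python
# Direct-perturbation estimate of a k_eff sensitivity coefficient,
# S = (dk/k)/(dsigma/sigma). The k_eff values and the 1% perturbation are
# invented for illustration only.
def sensitivity(k_nominal, k_perturbed, rel_xs_change):
    """First-order sensitivity of k_eff to a relative cross-section change."""
    return ((k_perturbed - k_nominal) / k_nominal) / rel_xs_change

# Hypothetical example: +1% on the 56Fe elastic cross section moves k_eff
# from 1.00000 to 0.99980, giving S ~ -0.02.
print(sensitivity(1.00000, 0.99980, 0.01))
```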

Relevance: 90.00%

Abstract:

Advanced control techniques like V2, Vout hysteresis or V2Ic can strongly reduce the required output capacitance in PowerSoC converters. Techniques to analyze power converters based on the analysis of the frequency response are not suitable for ripple-based controllers that use fast-scale dynamics to control the power stage. This paper proves that the use of discrete modeling together with Floquet theory is a very powerful tool to model the system and derive stable-region diagrams for sensitivity analysis. It is applied to V2Ic control, validating experimentally that Floquet theory accurately predicts subharmonic oscillations. This method is applied to several ripple-based controllers, providing higher accuracy when compared with other techniques based on the frequency response. The paper experimentally validates the usefulness of the discrete modeling and the Floquet theory on a 5 MHz Buck converter with V2Ic control.
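The Floquet criterion used here can be sketched as follows: the eigenvalues (Floquet multipliers) of the one-cycle monodromy matrix of the sampled-data model must lie inside the unit circle, otherwise sub-harmonic oscillation appears. The 2x2 matrix below is a toy example, not the small-signal model of the V2Ic-controlled buck converter.

```python
# Floquet stability check for a switched converter described by a discrete
# (sampled) small-signal model. The monodromy matrix is a toy example; in
# practice it is assembled from the state-transition matrices of each switching
# sub-interval plus the saltation matrices at the switching instants.
import numpy as np

Phi = np.array([[0.35, 0.40],      # hypothetical monodromy (one-cycle) matrix
                [-0.50, 0.20]])

multipliers = np.linalg.eigvals(Phi)
print("Floquet multipliers:", multipliers)
if np.all(np.abs(multipliers) < 1.0):
    print("periodic orbit stable (no sub-harmonic oscillation predicted)")
else:
    print("a multiplier leaves the unit circle: sub-harmonic oscillation expected")
```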

Relevance: 90.00%

Abstract:

Ripple-based controls can strongly reduce the required output capacitance in PowerSoC converters thanks to a very fast dynamic response. Unfortunately, these controls are prone to sub-harmonic oscillations, and several parameters affect the stability of these systems. This paper derives and validates a simulation-based modeling and stability analysis of a closed-loop V2Ic control applied to a 5 MHz Buck converter, using discrete modeling and Floquet theory to predict stability. This allows sensitivity analyses to be derived in order to design robust systems. The work is extended to different V2 architectures using the same methodology.
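Building on the same idea, a stable-region diagram is obtained by sweeping a design parameter and recording where all Floquet multipliers stay inside the unit circle; the parameterized one-cycle map below is a toy stand-in for the converter model.

```python
# Turning the Floquet check into a stable-region sweep: vary one parameter,
# rebuild the one-cycle (monodromy) matrix and test the multipliers.
# The parameterized matrix is a toy model, not the converter's actual map.
import numpy as np

def monodromy(gain):
    """Hypothetical one-cycle map whose damping degrades as 'gain' grows."""
    return np.array([[0.3 + 0.4 * gain, 0.4],
                     [-0.5, 0.2 + 0.3 * gain]])

for gain in np.linspace(0.0, 2.0, 9):
    stable = np.all(np.abs(np.linalg.eigvals(monodromy(gain))) < 1.0)
    print(f"gain = {gain:.2f}: {'stable' if stable else 'unstable'}")
```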

Relevance: 90.00%

Abstract:

In this paper a novel bidirectional multiple-port dc/dc transformer topology is presented. The novel dc/dc transformer concept is based on the Series Resonant Converter (SRC) topology operated at its resonant frequency point. This allows a higher switching frequency to be adopted and enables high-efficiency, high-power-density operation. The feasibility of the proposed concept is verified on a 300 W, 700 kHz three-port prototype with 390 V input voltage and 48 V and 12 V output voltages. A peak overall efficiency of 93% is measured at full load. Very good load and cross-regulation characteristics of the converter are observed over the whole load range, from full load to open circuit. A sensitivity analysis of the resonant capacitance is also performed, showing only a very slight deterioration in converter performance when the resonant capacitance is changed by ±30% of its nominal value.
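The reported ±30% capacitance sensitivity can be put in perspective with the series-resonant tank relation f_r = 1/(2π·sqrt(Lr·Cr)); the component values below are assumptions chosen to give roughly 700 kHz, not the prototype's actual values.

```python
# How the series-resonant tank frequency moves when the resonant capacitance
# drifts by +/-30% of nominal. Lr and Cr are assumed values, not the prototype's.
import math

Lr = 1.0e-6                      # H, assumed resonant inductance
Cr = 52e-9                       # F, assumed resonant capacitance (~700 kHz tank)
for scale in (0.7, 1.0, 1.3):
    fr = 1.0 / (2 * math.pi * math.sqrt(Lr * Cr * scale))
    print(f"Cr x {scale:.1f}: f_r = {fr/1e3:.0f} kHz")
```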

Relevance: 90.00%

Abstract:

To better understand the destruction mechanisms of wake vortices behind aircraft, the inviscid point-vortex stability method used by Crow is here compared with viscous modal global stability analysis of the linearized Navier-Stokes equations acting on a two-dimensional basic flow, i.e. BiGlobal stability analysis. The fact that the BiGlobal method is viscous, and uses a finite-area vortex model, gives rise to results somewhat different from those of the point-vortex model. It adds more parameters to the problem, but is more realistic.
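For reference, the inviscid point-vortex description reduces the pair dynamics to mutual induction, giving the classical descent speed Γ/(2πb); the circulation and spacing below are generic illustrative values, not those of a particular aircraft.

```python
# Two-dimensional point-vortex sketch of a counter-rotating wake-vortex pair:
# each vortex is advected by the velocity induced by the other, so the pair
# descends at Gamma/(2*pi*b). Values are generic, for illustration only.
import math

Gamma = 400.0        # m^2/s, circulation magnitude of each vortex (assumed)
b = 40.0             # m, vortex spacing (assumed)

w_descent = Gamma / (2 * math.pi * b)    # induced downward velocity of the pair
print(f"pair descent speed: {w_descent:.2f} m/s")
```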

Relevance: 90.00%

Abstract:

This article presents a new and computationally efficient method of analysis of a railway track modelled as a continuous beam of 2N spans supported by elastic vertical springs. The main feature of this method is an important reduction in computational effort with respect to standard matrix methods of structural analysis. In this article, the whole structure is considered to be a repetition of a single unit. The analysis presented is applied to a simple railway track model, i.e. to a repetitive beam supported on vertical springs (sleepers). The proposed method of analysis is based on the general theory of spatially periodic structures. The main feature of this theory is the possibility of applying the Discrete Fourier Transform (DFT) in order to reduce a large system of q(2N + 1) linear stiffness equilibrium equations to a set of 2N + 1 uncoupled systems of q equations each. In this way, a dramatic reduction of the computational effort of solving the large system of equations is achieved. This fact is particularly important in the analysis of railway track structures, in which N is a very large number (of the order of several thousand) and q = 2 (the vertical displacement and the rotation) is very small. The proposed method allows us to easily obtain the exact solution given by Samartín [1], i.e. the continuous-beam railway track response. The comparison between the proposed method and other methods of analysis of railway tracks, such as those of Lorente de Nó and Zimmermann-Timoshenko, clearly shows the accuracy of the results obtained with the proposed method, even for low values of N. In addition, identical results between the proposed and the Lorente methods have been found, although the proposed method seems to be simpler to apply and computationally more efficient than the Lorente one. Small but significant differences occur between these two methods and the one developed by Zimmermann-Timoshenko. This article also presents a detailed sensitivity analysis of the vertical displacement of the sleepers. Although standard matrix methods of structural analysis can handle this railway model, one of the objectives of this article is to show the efficiency of the DFT method with respect to standard matrix structural analysis. A comparative analysis between standard matrix structural analysis and the proposed method (DFT), in terms of computational time, input, output and also software programming, is carried out. Finally, a URL link to a MATLAB computer program listing, based on the proposed method, is given.
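The decoupling exploited by the method can be sketched on a scalar (q = 1) circulant analogue: the FFT diagonalizes a circulant stiffness matrix, so each harmonic is solved independently. The stiffness coefficients and load below are hypothetical, and the real track model has q = 2 degrees of freedom per sleeper.

```python
# DFT idea for spatially periodic structures: a (block-)circulant stiffness
# matrix is diagonalized by the FFT, so K u = f decouples per Fourier harmonic.
# Scalar chain (q = 1) with hypothetical stiffnesses, standing in for the
# q = 2 (deflection + rotation) railway-track model of the article.
import numpy as np

n = 2 * 64 + 1                               # 2N + 1 supports (toy size)
k_spring, k_couple = 10.0, -3.0              # hypothetical stiffness coefficients

# First column of the circulant stiffness matrix: diagonal plus nearest-neighbour
# coupling with periodic (cyclic) closure.
c = np.zeros(n)
c[0], c[1], c[-1] = k_spring, k_couple, k_couple

f = np.zeros(n)
f[n // 2] = 1.0                              # unit wheel load on the middle sleeper

# Circulant systems are diagonalized by the DFT: K u = f  =>  u_hat = f_hat / lam.
lam = np.fft.fft(c)                          # eigenvalues of the circulant matrix
u = np.real(np.fft.ifft(np.fft.fft(f) / lam))

# Reference solution with the assembled matrix, to confirm the decoupling.
K = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
print(np.allclose(K @ u, f))                 # True
```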

Relevance: 90.00%

Abstract:

In this article, a model for the determination of displacements, strains, and stresses of a submarine pipeline during its construction is presented. Typically, polyethylene outfall pipelines are the ones treated by this model. The process is carried out from an initial floating situation to the final laying position on the seabed. The following control variables are considered in the laying process: the axial load in the pipe, the flooded inner length, and the distance of the control barge from the coast. External loads such as self-weight, dead loads, and forces due to currents and small waves are also taken into account. This paper describes both the conceptual framework for the proposed model and its practical application in a real engineering situation. The authors also consider how the model might be used as a tool to study how sensitive the behavior of the pipeline is to small changes in the values of the control variables. A detailed description of the actions is considered, especially the ones related to the marine environment, such as buoyancy, current, and sea waves. The structural behavior of the pipeline is simulated in the framework of a geometrically nonlinear dynamic analysis. The pipeline is assumed to be a two-dimensional Navier-Bernoulli beam. In the nonlinear analysis an updated Lagrangian formulation is used, and special care is taken regarding the numerical aspects of seabed contact, follower forces due to external water pressures, and dynamic actions. The paper concludes by describing the implementation of the proposed techniques, using the ANSYS computer program with a number of subroutines developed by the authors. This implementation permits simulation of the two-dimensional structural behavior of the pipe during the whole construction process. A sensitivity analysis of the bending moments, axial forces, and stresses for different values of the control variables is carried out. Using the techniques described, the engineer may optimize the construction steps in the pipe-laying process.

Relevance: 90.00%

Abstract:

The design of shell and spatial structures represents an important challenge even with the use of modern computer technology. If we concentrate on concrete shell structures, many problems must be faced, such as the conceptual and structural disposition, optimal shape design, analysis, construction methods, details, etc., and all these problems are interconnected. As an example, shape optimization requires the use of several disciplines such as structural analysis, sensitivity analysis, optimization strategies and geometrical design concepts. Similar comments can be applied to other space structures such as steel trusses with single or double shape and tension structures. In relation to the analysis, the Finite Element Method appears to be the most extended and versatile technique used in practice. In the application of this method several issues arise. First, the pertinent shell theory or, alternatively, the degenerated 3-D solid approach should be chosen. According to this choice, the suitable FE model has to be adopted, i.e. the displacement-, stress- or mixed-formulated element. The good behavior of shell structures under dead loads, which are carried towards the supports mainly by compressive stresses, is impaired by the high imperfection sensitivity usually exhibited by these structures. This last effect is particularly important if large-deformation and material nonlinearities of the shell interact unfavorably, as can be the case for thin reinforced shells. In this respect the study of the stability of the shell represents a compulsory step in the analysis. Therefore there are currently very active fields of research, such as the different descriptions of consistent nonlinear shell models given by Simo, Fox and Rifai, Mantzenmiller, and Buchter and Ramm, among others; the consistent formulation of efficient tangent stiffnesses, as presented by Ortiz and by Schweizerhof and Wriggers, with application to concrete shells exhibiting creep behavior given by Scordelis and coworkers; and, finally, the development of numerical techniques needed to trace the nonlinear response of the structure. The objective of this paper is concentrated on the last research aspect, i.e. the presentation of a state of the art of the existing solution techniques for nonlinear analysis of structures. In this presentation the following excellent reviews on this subject will be mainly used.
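As a minimal illustration of the incremental-iterative solution techniques that such a review covers, the sketch below applies load-controlled Newton-Raphson iteration to a one-degree-of-freedom shallow two-bar truss. Geometry and stiffness are arbitrary; the example also shows why pure load control cannot trace past a limit (snap-through) point, which is what motivates arc-length methods.

```python
# Incremental-iterative Newton-Raphson solution of a one-DOF shallow (von Mises)
# two-bar truss, as a toy example of path-following in nonlinear analysis.
# Parameters are arbitrary; the load is kept below the limit (snap-through)
# point, beyond which load control fails and arc-length methods are required.
import numpy as np

a, h, EA = 1.0, 0.1, 1000.0              # half-span, rise, axial stiffness (assumed)
L0 = np.hypot(a, h)

def f_int(u):
    """Vertical internal force conjugate to the downward apex displacement u."""
    L = np.hypot(a, h - u)
    return 2.0 * EA * (L0 - L) / L0 * (h - u) / L

u, P = 0.0, 0.0
for dP in [0.03] * 10:                   # ten load increments up to P = 0.30
    P += dP
    for _ in range(20):                  # Newton iterations on the residual
        r = P - f_int(u)
        if abs(r) < 1e-10:
            break
        k_t = (f_int(u + 1e-7) - f_int(u - 1e-7)) / 2e-7   # FD tangent stiffness
        u += r / k_t
    print(f"P = {P:.2f} -> u = {u:.5f}")
```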