967 results for Railroad safety, Bayesian methods, Accident modification factor, Countermeasure selection
Abstract:
Graduate Program in Agronomy (Genetics and Plant Breeding) - FCAV
Abstract:
This thesis presents a creative and practical approach to dealing with the problem of selection bias. Selection bias may be the most vexing problem in program evaluation, or in any line of research that attempts to assert causality. Some of the greatest minds in economics and statistics have scrutinized the problem of selection bias, and the resulting approaches – Rubin's Potential Outcome Approach (Rosenbaum and Rubin, 1983; Rubin, 1991, 2001, 2004) and Heckman's Selection Model (Heckman, 1979) – are widely accepted and used as the best fixes. These solutions to the bias that arises, in particular, from self-selection are imperfect, and many researchers, when feasible, reserve their strongest causal inference for data from experimental rather than observational studies. The innovative aspect of this thesis is to propose a data transformation that allows measuring and testing, in an automatic and multivariate way, the presence of selection bias. The approach involves the construction of a multi-dimensional conditional space of the X matrix in which the bias associated with the treatment assignment has been eliminated. Specifically, we propose the use of a partial dependence analysis of the X-space as a tool for investigating the dependence relationship between a set of observable pre-treatment categorical covariates X and a treatment indicator variable T, in order to obtain a measure of bias according to their dependence structure. The measure of selection bias is then expressed in terms of the inertia due to the dependence between X and T that has been eliminated. Given the measure of selection bias, we propose a multivariate test of imbalance to check whether the detected bias is significant, using the asymptotic distribution of the inertia due to T (Estadella et al., 2005) and preserving the multivariate nature of the data. Further, we propose the use of a clustering procedure as a tool to find groups of comparable units on which to estimate local causal effects, and the use of the multivariate test of imbalance as a stopping rule in choosing the best cluster solution. The method is nonparametric: it does not call for modeling the data based on some underlying theory or assumption about the selection process, but instead uses the existing variability within the data and lets the data speak. The idea of proposing this multivariate approach to measuring selection bias and testing balance comes from the observation that, in applied research, all aspects of multivariate balance not represented in univariate variable-by-variable summaries are ignored. The first part contains an introduction to evaluation methods as part of public and private decision processes and a review of the evaluation methods literature. Attention is focused on Rubin's Potential Outcome Approach, matching methods, and, briefly, Heckman's Selection Model. The second part focuses on some limitations of conventional methods, with particular attention to the problem of how to test balance correctly. The third part contains the proposed original contribution, a simulation study that checks the performance of the method for a given dependence setting, and an application to a real data set. Finally, we discuss, conclude, and outline future perspectives.
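As a rough illustration of the kind of computation involved, the Python sketch below collapses the categorical covariates into profiles, builds a profile-by-treatment contingency table, and reports the total inertia (chi-square divided by n) together with a chi-square p-value. The profile construction and the chi-square test are generic stand-ins for the thesis's partial dependence analysis and for the asymptotic inertia test of Estadella et al. (2005), not the thesis's actual implementation.

    # Illustrative sketch: quantify X-T dependence as inertia (chi-square / n)
    # on a contingency table of covariate profiles vs. treatment indicator.
    import pandas as pd
    from scipy.stats import chi2_contingency

    def imbalance_inertia(X: pd.DataFrame, T: pd.Series):
        """Measure dependence between categorical covariates X and treatment T."""
        # Collapse the categorical covariates into a single multivariate profile.
        profiles = X.astype(str).agg("|".join, axis=1)
        table = pd.crosstab(profiles, T)            # profiles x treatment counts
        chi2, p_value, dof, _ = chi2_contingency(table)
        inertia = chi2 / len(T)                     # total inertia due to T
        return inertia, p_value

    # Toy example: two categorical covariates and a binary treatment indicator.
    X = pd.DataFrame({"sex": ["m", "f", "m", "f", "m", "f"],
                      "edu": ["hi", "lo", "hi", "hi", "lo", "lo"]})
    T = pd.Series([1, 1, 1, 0, 0, 0], name="T")
    inertia, p = imbalance_inertia(X, T)
    print(f"inertia due to T: {inertia:.3f}, p-value: {p:.3f}")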
Abstract:
In this thesis, three measurements of the top-antitop differential cross section at a center-of-mass energy of 7 TeV are presented, as a function of the transverse momentum, the mass, and the rapidity of the top-antitop system. The analysis has been carried out on a data sample of about 5 fb⁻¹ recorded with the ATLAS detector. The events have been selected with a cut-based approach in the "one lepton plus jets" channel, where the lepton can be either an electron or a muon. The most relevant backgrounds (multi-jet QCD and W+jets) have been extracted using data-driven methods; the others (Z+jets, diboson and single top) have been simulated with Monte Carlo techniques. The final, background-subtracted distributions have been corrected, using unfolding methods, for detector and selection effects. Finally, the results have been compared with the theoretical predictions. The measurements are dominated by the systematic uncertainties and show no relevant deviation from the Standard Model predictions.
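As a simplified illustration of the final correction step (a generic bin-by-bin unfolding, not the specific ATLAS procedure, with all numbers invented): backgrounds are subtracted and each bin is scaled by the Monte Carlo ratio of generated to reconstructed events.

    import numpy as np

    data       = np.array([1200., 950., 480., 160.])   # observed events per bin (illustrative)
    background = np.array([ 300., 220., 110.,  40.])   # data-driven + simulated backgrounds
    mc_truth   = np.array([1000., 800., 400., 130.])   # MC generated ("truth") spectrum
    mc_reco    = np.array([ 850., 700., 360., 115.])   # MC spectrum after detector + selection

    signal   = data - background
    unfolded = signal * (mc_truth / mc_reco)            # correct for detector/selection effects

    # Normalise to obtain a differential cross-section shape (arbitrary units here).
    d_sigma = unfolded / unfolded.sum()
    print(d_sigma)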
Abstract:
The motion of lung tumors during respiration makes the accurate delivery of radiation therapy to the thorax difficult because it increases the uncertainty of the target position. The adoption of four-dimensional computed tomography (4D-CT) has allowed us to determine how a tumor moves with respiration for each individual patient. Using information acquired during a 4D-CT scan, we can define the target, visualize motion, and calculate dose during the planning phase of the radiotherapy process. One image data set that can be created from the 4D-CT acquisition is the maximum-intensity projection (MIP). The MIP can be used as a starting point to define the volume that encompasses the motion envelope of the moving gross target volume (GTV). Because of the close relationship that exists between the MIP and the final target volume, we investigated four MIP data sets created with different methodologies (three using various 4D-CT sorting implementations, and one using all available cine CT images) to compare target delineation. It has been observed that changing the 4D-CT sorting method leads to the selection of a different collection of images; however, the clinical implications of changing the constituent images on the resultant MIP data set are not clear. There has not been a comprehensive study that compares target delineation based on different 4D-CT sorting methodologies in a patient population. We selected a collection of patients who had previously undergone thoracic 4D-CT scans at our institution and who had lung tumors that moved at least 1 cm. We then generated the four MIP data sets and automatically contoured the target volumes. In doing so, we identified cases in which the MIP generated from a 4D-CT sorting process under-represented the motion envelope of the target volume by more than 10% relative to the MIP generated from all of the cine CT images. The 4D-CT methods suffered from duplicate image selection and might not choose the maximum-extent images. Based on our results, we suggest using a MIP generated from the full cine CT data set to ensure a representative, inclusive tumor extent and to avoid geometric miss.
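The MIP construction itself is straightforward; assuming a stack of co-registered cine CT (or 4D-CT phase) volumes on the same grid, a minimal sketch is the voxel-wise maximum across the stack, so a moving high-density target traces out its motion envelope.

    import numpy as np

    def maximum_intensity_projection(images):
        """images: iterable of 3-D arrays (z, y, x) on the same grid."""
        stack = np.stack(list(images), axis=0)   # shape: (n_images, z, y, x)
        return stack.max(axis=0)                 # voxel-wise maximum

    # Illustrative use with random volumes standing in for cine CT images.
    phases = [np.random.randint(-1000, 1000, size=(4, 8, 8)) for _ in range(10)]
    mip = maximum_intensity_projection(phases)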
Abstract:
OBJECTIVES Respondent-driven sampling (RDS) is a new data collection methodology used to estimate characteristics of hard-to-reach groups, such as the HIV prevalence in drug users. Many national public health systems and international organizations rely on RDS data. However, RDS reporting quality and available reporting guidelines are inadequate. We carried out a systematic review of RDS studies and present Strengthening the Reporting of Observational Studies in Epidemiology for RDS Studies (STROBE-RDS), a checklist of essential items to present in RDS publications, justified by an explanation and elaboration document. STUDY DESIGN AND SETTING We searched the MEDLINE (1970-2013), EMBASE (1974-2013), and Global Health (1910-2013) databases to assess the number and geographical distribution of published RDS studies. STROBE-RDS was developed based on STROBE guidelines, following Guidance for Developers of Health Research Reporting Guidelines. RESULTS RDS has been used in over 460 studies from 69 countries, including the USA (151 studies), China (70), and India (32). STROBE-RDS includes modifications to 12 of the 22 items on the STROBE checklist. The two key areas that required modification concerned the selection of participants and statistical analysis of the sample. CONCLUSION STROBE-RDS seeks to enhance the transparency and utility of research using RDS. If widely adopted, STROBE-RDS should improve global infectious diseases public health decision making.
Abstract:
The genomic era brought by recent advances in next-generation sequencing technology makes genome-wide scans of natural selection a reality. Currently, almost all statistical tests and analytical methods for identifying genes under selection are performed on an individual-gene basis. Although these methods have the power to identify genes subject to strong selection, they have limited power to discover genes targeted by moderate or weak selection forces, which are crucial for understanding the molecular mechanisms of complex phenotypes and diseases. The recent availability and rapid growth of gene network and protein-protein interaction databases accompanying the genomic era open avenues for enhancing the power to discover genes under natural selection. The aim of the thesis is to explore and develop normal-mixture-model-based methods for leveraging gene network information to enhance the power of natural selection target gene discovery. The results show that the developed statistical method, which combines the posterior log odds of the standard normal mixture model and the Guilt-By-Association score of the gene network in a naïve Bayes framework, has the power to discover genes under moderate or weak selection that bridge the genes under strong selection, helping our understanding of the biology underlying complex diseases and related natural selection phenotypes.
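A minimal sketch of the combination idea: under a naïve Bayes independence assumption, the posterior log odds that a gene is a selection target is the sum of the log odds from a two-component normal mixture fitted to the gene-level selection statistic and a log odds term derived from a Guilt-By-Association (GBA) network score. The mixture parameters and the GBA term below are purely illustrative, not those estimated in the thesis.

    import numpy as np
    from scipy.stats import norm

    def mixture_log_odds(z, pi1=0.05, mu0=0.0, sd0=1.0, mu1=2.5, sd1=1.0):
        """Posterior log odds of the 'under selection' component for statistic z."""
        num = pi1 * norm.pdf(z, mu1, sd1)
        den = (1 - pi1) * norm.pdf(z, mu0, sd0)
        return np.log(num) - np.log(den)

    def combined_log_odds(z, gba_log_odds):
        """Naive Bayes combination: add the network-based log odds contribution."""
        return mixture_log_odds(z) + gba_log_odds

    # A gene with a moderate selection statistic but strongly selected neighbours.
    print(combined_log_odds(z=1.2, gba_log_odds=1.8))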
Abstract:
The selection of metrics for ecosystem restoration programs is critical for improving the quality of monitoring programs and characterizing project success. Moreover, it is often difficult to balance the importance of multiple ecological, social, and economic metrics. The metric selection process is complex and must simultaneously take into account monitoring data, environmental models, socio-economic considerations, and stakeholder interests. We propose multicriteria decision analysis (MCDA) methods, broadly defined, for the selection of optimal sets of metrics to enhance the evaluation of ecosystem restoration alternatives. Two MCDA methods, a multiattribute utility analysis (MAUT) and a probabilistic multicriteria acceptability analysis (ProMAA), are applied and compared for a hypothetical case study of a river restoration involving multiple stakeholders. Overall, MCDA results in a systematic, unbiased, and transparent solution, informing the evaluation of restoration alternatives. The two methods provide comparable results in terms of selected metrics. However, because ProMAA can consider probability distributions for the weights and the utility values of metrics for each criterion, it is suggested as the best option if data uncertainty is high. Despite increasing the complexity of the metric selection process, MCDA improves upon the current ad hoc decision practice based on consultations with stakeholders and experts, and it encourages transparent and quantitative aggregation of data and judgement, increasing the transparency of decision making in restoration projects. We believe that MCDA can enhance the overall sustainability of ecosystems by addressing both ecological and societal needs.
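For the MAUT side, a minimal weighted-additive-utility sketch is shown below; the criteria, weights, and utilities are invented for illustration rather than the values elicited in the paper.

    # Each candidate metric set gets a score as the weighted sum of
    # single-criterion utilities scaled to [0, 1].
    criteria = ["ecological", "social", "economic"]
    weights  = {"ecological": 0.5, "social": 0.3, "economic": 0.2}

    # Utility of each candidate metric set under each criterion (0 = worst, 1 = best).
    metric_sets = {
        "set_A": {"ecological": 0.9, "social": 0.4, "economic": 0.6},
        "set_B": {"ecological": 0.6, "social": 0.8, "economic": 0.7},
    }

    def maut_score(utilities, weights):
        return sum(weights[c] * utilities[c] for c in weights)

    ranking = sorted(metric_sets,
                     key=lambda s: maut_score(metric_sets[s], weights),
                     reverse=True)
    print(ranking)  # highest-utility metric set first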
Abstract:
While the elegance and efficiency of enzymatic catalysis have long tempted chemists and biochemists with reductionist leanings to try to mimic the functions of natural enzymes in much smaller peptides, such efforts have only rarely produced catalysts with biologically interesting properties. However, the advent of genetic engineering and hybridoma technology and the discovery of catalytic RNA have led to new and very promising alternative means of biocatalyst development. Synthetic chemists have also had some success in creating nonpeptide catalysts with certain enzyme-like characteristics, although their rates and specificities are generally much poorer than those exhibited by the best novel biocatalysts based on natural structures. A comparison of the various approaches from theoretical and practical viewpoints is presented. It is suggested that, given our current level of understanding, the most fruitful methods may incorporate both iterative selection strategies and rationally chosen small perturbations, superimposed on frameworks designed by nature.
Abstract:
Vol. 5 issued by the National League for Nursing, Division of Nursing Education.
Abstract:
Recent work on the numerical solution of stochastic differential equations (SDEs) has focused on the development of numerical methods with good stability and order properties. These numerical implementations have been made with fixed stepsize, but there are many situations in which a fixed stepsize is not appropriate. In the numerical solution of ordinary differential equations, much work has been carried out on developing robust implementation techniques using variable stepsize. It has been necessary, in the deterministic case, to consider the best choice for an initial stepsize, as well as to develop effective strategies for stepsize control; the same, of course, must be done in the stochastic case. In this paper, proportional integral (PI) control is applied to a variable stepsize implementation of an embedded pair of stochastic Runge-Kutta methods used to obtain numerical solutions of nonstiff SDEs. For stiff SDEs, the embedded pair of the balanced Milstein and balanced implicit methods is implemented in variable stepsize mode using a predictive controller for the stepsize change. The extension of these stepsize controllers, from a digital filter theory point of view, to PI with derivative (PID) control is also implemented. The implementations show the improvement in efficiency that can be attained when using these control theory approaches compared with the regular stepsize change strategy.
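A minimal sketch of one common form of PI stepsize control for an embedded pair follows; the gains kI and kP, the safety factor, and the step-change limits are illustrative defaults, not the values tuned in the paper.

    def pi_step_control(h, err, err_prev, tol, kI=0.3, kP=0.4,
                        safety=0.9, h_min_fac=0.2, h_max_fac=5.0):
        """Return the proposed next stepsize given local error estimates."""
        err = max(err, 1e-14)            # guard against a zero error estimate
        err_prev = max(err_prev, 1e-14)
        factor = safety * (tol / err) ** kI * (err_prev / err) ** kP
        factor = min(max(factor, h_min_fac), h_max_fac)   # limit step changes
        return h * factor

    # Accept the step if err <= tol, otherwise redo it with the smaller h.
    h_next = pi_step_control(h=0.01, err=2e-4, err_prev=1.5e-4, tol=1e-3)
    print(h_next)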
Abstract:
The manufacture of copper alloy flat rolled metals involves hot and cold rolling operations, together with annealing and other secondary processes, to transform castings (mainly slabs and cakes) into shapes such as strip, plate, and sheet. Production is mainly to customer orders in a wide range of specifications for dimensions and properties. However, order quantities are often small, and so process planning plays an important role in this industry. Much research has been done in the past on the technology of flat rolling and the details of the operations; however, there is little or no evidence of research into the planning of processes for this type of manufacture. Practical observation in a number of rolling mills has established the type of manual process planning traditionally used in this industry. This manual approach, however, has inherent drawbacks, being particularly dependent on individual planners who gain their knowledge over a long span of practical experience. The introduction of the retrieval CAPP approach to this industry was a first step towards reducing these problems, but it could not provide a long-term answer because of the need for an experienced planner to supervise the generation of any plan. It also fails to take account of the dynamic nature of the parameters involved in planning, such as the availability of resources, operating conditions, and variations in costs. The other alternative is the use of a generative approach to planning in the rolling mill context. In this thesis, generative methods are developed for the selection of optimal routes for single orders and then for batches of orders, bearing in mind equipment restrictions, production costs, and material yield. The batch order process planning involves the use of a special cluster analysis algorithm for the optimal grouping of orders. This research concentrates on cold-rolling operations. A prototype model of the proposed CAPP system, including both single order and batch order planning options, has been developed and tested on real order data in the industry. The results were satisfactory and compared very favourably with the existing manual and retrieval methods.
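As a rough illustration of batch grouping, the sketch below clusters orders with similar specifications so that one route can serve a whole batch; it uses a generic hierarchical clustering on invented order attributes (alloy, gauge, width), not the special algorithm developed in the thesis.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Each row: [alloy code, final gauge (mm), final width (mm)] for one order.
    orders = np.array([
        [1, 0.50, 300.0],
        [1, 0.55, 310.0],
        [2, 1.20, 600.0],
        [2, 1.25, 620.0],
        [1, 0.52, 305.0],
    ])

    # Scale the columns so no single attribute dominates the distance.
    scaled = (orders - orders.mean(axis=0)) / orders.std(axis=0)
    tree = linkage(scaled, method="ward")
    batches = fcluster(tree, t=2, criterion="maxclust")   # ask for two batches
    print(batches)   # cluster label per order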
Abstract:
Environmental impacts usually extend beyond the boundaries of a single company, which is why purchasing decisions also play an important role in enforcing environmental criteria in a supply chain context. Many examples could be cited in which an alternative is environmentally advantageous according to a given criterion but, viewed across the supply chain as a whole, is environmentally harmful. Measuring environmental impacts at the supply chain level, however, poses serious challenges, and the topic has inspired substantial research and development. One area in which significant research results have been achieved is the incorporation of environmental criteria into supplier evaluation. Joining this stream of research, the authors investigate how to determine, in one of the most widely used supplier evaluation methods, the weighted scoring system, the weight at which a given criterion becomes a decision-influencing factor. To this end, they apply the DEA (Data Envelopment Analysis) composite indicators (CI) method, and they use linear programming theory to establish the importance of the common weight of the criteria. _____ Management decisions often have an environmental effect not just within the company but outside it as well, which is why the supply chain context is highlighted in the literature. Measuring the environmental impact of supply decisions raises many problems from both methodological and practical points of view. This has inspired a rapidly growing literature, with many studies focusing on how to incorporate environmental issues into supplier evaluation. This paper contributes to this stream of research by developing a method to help with weight selection. In the authors' paper, the method of Data Envelopment Analysis (DEA) is used to study the extension of traditional supplier selection methods with environmental factors. The selection of the weight system can control the result of the selection process.
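A minimal sketch of the underlying weighted-scoring question, at what environmental weight does the preferred supplier change, is shown below; the suppliers, scores, and simple weight scan are invented for illustration and stand in for the paper's DEA/linear programming derivation.

    import numpy as np

    # Rows: suppliers, columns: [price, quality, environment], scores in [0, 1].
    scores = np.array([
        [0.9, 0.8, 0.2],   # supplier A: strong on price/quality, weak on environment
        [0.7, 0.7, 0.9],   # supplier B: weaker economics, strong environmental profile
    ])
    base_weights = np.array([0.6, 0.4])   # price and quality weights before adding environment

    def winner(env_weight):
        # Re-normalise so the three weights sum to one.
        w = np.append(base_weights * (1 - env_weight), env_weight)
        return np.argmax(scores @ w)

    # Scan environmental weights and report the smallest one that flips the decision.
    for ew in np.linspace(0, 1, 101):
        if winner(ew) != winner(0.0):
            print(f"environment becomes decision-influencing at weight ~{ew:.2f}")
            break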
Abstract:
Infrared selection is a potentially powerful way to identify heavily obscured AGNs missed in even the deepest X-ray surveys. Using a 24 μm-selected sample in GOODS-S, we test the reliability and completeness of three infrared AGN selection methods: (1) IRAC color-color selection, (2) IRAC power-law selection, and (3) IR-excess selection; we also evaluate a number of IR-excess approaches. We find that the vast majority of non-power-law IRAC color-selected AGN candidates in GOODS-S have colors consistent with those of star-forming galaxies. Contamination by star-forming galaxies is most prevalent at low 24 μm flux densities (~100 μJy) and high redshifts (z ~ 2), but the fraction of potential contaminants is still high (~50%) at 500 μJy, the highest flux density probed reliably by our survey. AGN candidates selected via a simple, physically motivated power-law criterion ("power-law galaxies," or PLGs), however, appear to be reliable. We confirm that the IR-excess methods successfully identify a number of AGNs, but we also find that such samples may be significantly contaminated by star-forming galaxies. Adding only the secure Spitzer-selected PLG, color-selected, IR-excess, and radio/IR-selected AGN candidates to the deepest X-ray-selected AGN samples directly increases the number of known X-ray AGNs (84) by 54%-77%, and implies an increase in the number of 24 μm-detected AGNs of 71%-94%. Finally, we show that the fraction of MIR sources dominated by an AGN decreases with decreasing MIR flux density, but only down to f_24μm = 300 μJy. Below this limit, the AGN fraction levels out, indicating that a nonnegligible fraction (~10%) of faint 24 μm sources (the majority of which are missed in the X-ray) are powered not by star formation, but by the central engine. The fraction of all AGNs (regardless of their MIR properties) exceeds 15% at all 24 μm flux densities.
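A minimal sketch of a power-law selection cut of this kind: fit f_nu proportional to nu^alpha to the four IRAC flux densities in log-log space and keep sources whose spectra rise steadily to the red. The flux densities below are invented, and the slope threshold of alpha <= -0.5 and the omission of any goodness-of-fit check stand in for the paper's exact criterion.

    import numpy as np

    irac_wavelengths_um = np.array([3.6, 4.5, 5.8, 8.0])
    irac_freq_hz = 3.0e14 / irac_wavelengths_um          # c / lambda, with c in um/s

    def power_law_agn(flux_ujy, alpha_max=-0.5):
        """Return (is_candidate, alpha) from a log-log least-squares fit."""
        alpha, _ = np.polyfit(np.log10(irac_freq_hz), np.log10(flux_ujy), deg=1)
        return alpha <= alpha_max, alpha

    # A source whose flux density rises monotonically with wavelength.
    is_plg, alpha = power_law_agn(np.array([40.0, 60.0, 95.0, 160.0]))
    print(is_plg, round(alpha, 2))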
Abstract:
BACKGROUND: Although guidelines emphasize low-density lipoprotein cholesterol (LDL-C) lowering as an essential strategy for cardiovascular risk reduction, achieving target levels may be difficult. PATIENTS AND METHODS: The authors conducted a prospective, controlled, open-label trial examining the effectiveness and safety of high-dose fluvastatin or a standard dosage of simvastatin plus ezetimibe, both with an intensive guideline-oriented cardiac rehabilitation program, in achieving the new ATP III LDL-C targets in patients with proven coronary artery disease. 305 consecutive patients were enrolled in the study. Patients were divided into two groups: the simvastatin (40 mg/d) plus ezetimibe (10 mg/d) group and the fluvastatin-only group (80 mg/d). Patients in both study groups received the treatment for 21 days in addition to nonpharmacological measures, including advanced physical, dietary, psychosocial, and educational activities. RESULTS: After 21 days of treatment, a significant reduction in LDL-C was found in both study groups compared with the initial values; however, the reduction in LDL-C was significantly stronger in the simvastatin plus ezetimibe group: simvastatin plus ezetimibe treatment decreased LDL-C to a mean level of 57.7 +/- 1.7 mg/dl, while fluvastatin achieved a reduction to 84.1 +/- 2.4 mg/dl (p < 0.001). In the simvastatin plus ezetimibe group, 95% of the patients reached the target level of LDL-C < 100 mg/dl. This percentage was significantly higher than in patients treated with fluvastatin alone (75%; p < 0.001). The greater effectiveness of simvastatin plus ezetimibe was even more pronounced when considering the optional goal of LDL-C < 70 mg/dl (75% vs. 32%, respectively; p < 0.001). There was no difference in the occurrence of adverse events between the two groups. CONCLUSION: Simvastatin 40 mg/d plus ezetimibe 10 mg/d, against the background of a guideline-oriented standardized intensive cardiac rehabilitation program, can achieve 95% effectiveness in reaching challenging lipid-lowering goals (LDL-C < 100 mg/dl) in patients at high cardiovascular risk.
Abstract:
Polymer binder modification with inorganic nanomaterials (NM) could be an efficient solution for controlling the matrix flammability of polymer concrete (PC) materials without sacrificing other important properties. Occupational exposures can occur all along the life cycle of an NM and of "nanoproducts", from research through scale-up, product development, manufacturing, and end of life. The main objective of the present study is to analyse and compare different qualitative risk assessment methods during the production of polymer mortars (PM) with NM. The laboratory-scale production process was divided into three main phases (pre-production, production, and post-production), which allowed the assessment methods to be tested in different situations. The risk assessment of the PM manufacturing process was carried out using qualitative analyses based on: the French Agency for Food, Environmental and Occupational Health & Safety method (ANSES); Control Banding Nanotool (CB Nanotool); the Ecole Polytechnique Fédérale de Lausanne method (EPFL); the Guidance working safely with nanomaterials and nanoproducts (GWSNN); the Istituto Superiore per la Prevenzione e la Sicurezza del Lavoro (Italy) method (ISPESL); the Precautionary Matrix for Synthetic Nanomaterials (PMSN); and Stoffenmanager Nano. The different methods applied produced different final results: in phases 1 and 3 the risk tends to be classified as medium-high, while for phase 2 the most common result is a medium level. It is necessary to improve the use of qualitative methods by defining narrower criteria for selecting the method for each assessed situation, bearing in mind that uncertainty is also a relevant factor when dealing with risks related to the nanotechnology field.