502 results for BENCHMARKS
Abstract:
STUDY QUESTION: What is the effect of the minimally invasive surgical treatment of endometriosis on health and on quality of work life (e.g. working performance) of affected women? SUMMARY ANSWER: Absence from work, performance loss and the general negative impact of endometriosis on the job are reduced significantly by laparoscopic surgery. WHAT IS KNOWN ALREADY: The benefits of surgery overall, and of the laparoscopic method in particular, for treating endometriosis have been described before. However, previous studies focus on medical benchmarks without including the patient's perspective in a quantitative manner. STUDY DESIGN, SIZE, DURATION: A retrospective questionnaire-based survey covering 211 women with endometriosis and a history of specific laparoscopic surgery in a Swiss university hospital and tertiary care center. Data were returned anonymously and were collected from the beginning of 2012 until March 2013. PARTICIPANTS/MATERIALS, SETTING, METHODS: Women diagnosed with endometriosis and with at least one specific laparoscopic surgery in the past were enrolled in the study. The study investigated the effect of the minimally invasive surgery on the health and quality of work life of the affected women. Questions used were obtained from the World Endometriosis Research Foundation (WERF) Global Study on Women's Health (GSWH) instrument. The questionnaire was shortened and adapted for the purpose of the present study. MAIN RESULTS AND THE ROLE OF CHANCE: Of the 587 women invited to participate in the study, 232 (232/587 = 40%) returned the questionnaires. Twenty-one questionnaires were excluded due to incomplete data and 211 sets (211/587 = 36%) were included in the study. Our data show that 62% (n = 130) of the study population declared endometriosis as influencing the job during the period prior to surgery, compared with 28% after surgery (P < 0.001). The mean (maximal) absence from work due to endometriosis was reduced from 2.0 (4.9) to 0.5 (1.4) hours per week (P < 0.001). The mean (maximal) loss in working performance was 5.7% (12.6%) after the surgery, compared with 17.5% (30.5%) before this treatment (P < 0.001). LIMITATIONS, REASONS FOR CAUTION: The mediocre response rate of the study weakens the representativeness of the investigated population. Considering the anonymous setting, a non-responder investigation was not performed. A bias due to selection, information and negativity effects within a retrospective survey cannot be excluded, although study-sensitive questions were provided in multiple ways. The absence of a control group (sham group; e.g. patients undergoing specific diagnostic laparoscopy without treatment) is a further limitation of the study. WIDER IMPLICATIONS OF THE FINDINGS: Our study shows that indicated minimally invasive surgery has a clear positive effect on the wellbeing and working performance of women suffering from moderate to severe endometriosis. Furthermore, national net savings in indirect costs at the present number of surgeries are estimated at €10.7 million per year. In an idealized setting (i.e. without any diagnosis delay) this figure could be more than doubled. STUDY FUNDING/COMPETING INTERESTS: The study was performed on behalf of the University Hospital of Bern (Inselspital), one of the leading Swiss tertiary care centers. The authors do not declare any competing interests.
Abstract:
Risk management today has moved from being the topic of top-level conferences and media discussions to being a permanent item on the board and top management agenda. Several new directives and regulations in Switzerland, Germany and the EU make it obligatory for firms to have a risk management strategy and to transparently disclose the risk management process to their stakeholders. Shareholders, insurance providers, banks, media, analysts, employees, suppliers and other stakeholders expect board members to be proactive in knowing the critical risks facing their organization and to provide them with reasonable assurance regarding the management of those risks. In this environment, however, the lack of standards and training opportunities makes this task difficult for board members. This book, with the help of real-life examples, analysis of drivers, interpretation of the Swiss legal requirements, and information based on international benchmarks, tries to reach out to the forward-looking leaders of today's businesses. The authors have brought together their years of scientific and practical experience in risk management, Swiss law and board membership to provide board members with practical solutions in risk management. The hope is that this book will dispel the fear surrounding risk management in the minds of company leadership and help them make risk-savvy decisions in the quest to achieve their strategic objectives.
Abstract:
Objectives. Minimal Important Differences (MIDs) establish benchmarks for interpreting mean differences in clinical trials involving quality-of-life outcomes and inform discussions of clinically meaningful change in patient status. As such, the purpose of this study was to assess MIDs for the Functional Assessment of Cancer Therapy–Melanoma (FACT-M). Methods. A prospective validation study of the FACT-M was performed with 273 patients with stage I to IV melanoma. FACT-M, Karnofsky Performance Status (KPS), and Eastern Cooperative Oncology Group Performance Status (ECOG-PS) scores were obtained at baseline and 3 months following enrollment. Anchor- and distribution-based methods were used to assess MIDs, and the correspondence between MID ranges derived from each method was evaluated. Results. This study indicates that an approximate range for MIDs of the FACT-M subscales is between 5 and 8 points for the Trial Outcome Index, 4 and 5 points for the Melanoma Combined Subscale, 2 and 4 points for the Melanoma Subscale, and 1 and 2 points for the Melanoma Surgery Subscale. Each method produced similar but not identical ranges of MIDs. Conclusions. The properties of the anchor instrument employed to derive MIDs directly affect the resulting MID ranges and point values. When MIDs are offered as supportive evidence of a clinically meaningful change, the anchor instrument used to derive thresholds should be clearly stated, along with evidence supporting the choice of that instrument as the most appropriate for the domain of interest. In this analysis, the KPS was a more appropriate measure than the ECOG-PS for assessing MIDs.
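As a hedged illustration only (not the study's actual analysis code), the sketch below shows the two families of MID estimates referred to above: a distribution-based estimate computed from the spread of baseline scores, and an anchor-based estimate contrasting patients whose anchor rating changed by one category with those who stayed stable. The column names, data layout and reliability coefficient are assumptions, not values from the study.

```python
# Hedged sketch of distribution- and anchor-based MID estimation (illustrative only;
# column names and the reliability value are assumptions, not taken from the study).
import numpy as np

def distribution_based_mids(baseline_scores, reliability=0.85):
    """Distribution-based MID candidates from baseline FACT-M scores."""
    sd = np.std(baseline_scores, ddof=1)
    return {"half_sd": 0.5 * sd,                      # common 0.5-SD rule of thumb
            "sem": sd * np.sqrt(1.0 - reliability)}   # standard error of measurement

def anchor_based_mid(df, score_change="fact_m_change", anchor_change="kps_category_change"):
    """Anchor-based MID: mean score change in patients who moved one anchor category
    (e.g. one KPS category) minus the mean change in patients whose anchor was stable.
    `df` is assumed to be a pandas DataFrame with one row per patient."""
    moved_one = df.loc[df[anchor_change].abs() == 1, score_change].mean()
    stable = df.loc[df[anchor_change] == 0, score_change].mean()
    return moved_one - stable
```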
Abstract:
The purpose of this study was to analyze the effects on employee health of Integrated Health Solutions (IHS), an employee wellness program that had been implemented for one year on the corporate campus of a major private-sector petrochemical company in Houston, TX. Chronic diseases are the leading causes of morbidity and mortality in the United States and are the most preventable of all health problems. The costs of chronic diseases in the working-age adult population include not only health problems and a decrease in quality of life, but also increases in the cost of health care and in costs to businesses and employers, both direct and indirect. These emerging costs to employers, together with the fact that adults now spend the majority of their waking hours at the office, have increased interest in worksite health promotion programs that address many of the behavioral factors leading to chronic conditions. Implementing and evaluating programs aimed at promoting health and decreasing the prevalence of chronic diseases at worksites is therefore very important. Data came from existing records collected by IHS staff during employee biometric screenings at the company in 2010 and 2011. Data from employees who participated in screenings in both 2010 and 2011 were grouped into a cohort by IHS staff. One-tailed t-tests were conducted to determine whether there were significant improvements in the biometric measures of body fat percentage, BMI, waist circumference, systolic and diastolic blood pressure, total, HDL, and LDL cholesterol levels, triglycerides, blood glucose levels, and cardiac risk ratios. Sensitivity analyses were conducted to determine whether program outcomes differed when stratified by age, gender, job type, and time between screenings. Mean differences for the variables from 2010 to 2011 were small and not always in the desired direction for health improvement indicators. The t-tests showed significant improvements in HDL, cardiac risk ratio, and glucose levels. There were significant increases in cholesterol, LDL, and diastolic blood pressure. For the IHS program, gender, job type, and time between screenings appear to be possible modifiers of program effectiveness. When program outcome measures were stratified by these factors, the results suggest that corporate employees had better outcomes than field employees, males had better outcomes overall than females, and more positive program effects were seen for employees with less time between their two screenings. Recommendations for the program based on the results include ensuring the validity of instruments, providing initial and periodic training in measurement procedures and equipment handling, using normative data or benchmarks to decrease the chance of biased estimates of program effectiveness, measuring behaviors as well as biometric and physiologic statuses and changes, and collecting level-of-engagement data.
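Purely as a hedged sketch of the paired, one-tailed comparison described above (the cohort table and column names are assumptions, not the IHS data set), a single biometric measure could be tested like this:

```python
# Minimal sketch of a paired one-tailed t-test on 2010 vs. 2011 biometric screenings.
# Column names ("ldl_2010", "ldl_2011", ...) are illustrative assumptions.
from scipy import stats

def paired_one_tailed(cohort, col_2010, col_2011, improvement="decrease"):
    """Test whether the 2011 measurement improved relative to 2010 for one biometric.
    `cohort` is assumed to be a pandas DataFrame with one row per employee."""
    alternative = "less" if improvement == "decrease" else "greater"
    result = stats.ttest_rel(cohort[col_2011], cohort[col_2010], alternative=alternative)
    return result.statistic, result.pvalue

# Example calls: LDL should decrease, HDL should increase.
# t, p = paired_one_tailed(cohort, "ldl_2010", "ldl_2011", improvement="decrease")
# t, p = paired_one_tailed(cohort, "hdl_2010", "hdl_2011", improvement="increase")
```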
Abstract:
Although the permanently to seasonally ice-covered Arctic Ocean is a unique and sensitive component of the Earth's climate system, knowledge of its long-term climate history remains very limited due to the restricted number of pre-Quaternary sedimentary records. During Polarstern Expedition PS87/2014, we discovered multiple submarine landslides over a distance of >350 km along Lomonosov Ridge. Removal of younger sediments from steep headwalls has led to exhumation of Miocene to early Quaternary sediments close to the seafloor, allowing the retrieval of such old sediments with gravity cores. Multi-proxy biomarker analyses of these gravity cores reveal for the first time that the late Miocene central Arctic Ocean was relatively warm (4-7°C) and ice-free during summer, whereas sea ice occurred during spring and autumn/winter. A comparison of our proxy data with Miocene climate simulations seems to favour relatively high late Miocene atmospheric CO2 concentrations. These findings from the Arctic region provide new benchmarks for ground-truthing global climate reconstructions and modeling.
Abstract:
Pollen analyses have proven capable of deciphering rapid vegetational and climatic shifts in Neogene sedimentary records. Here, a c. 21-kyr-long transgression-regression cycle from the Lower Austrian locality of Stetten is analysed in detail to evaluate climatic benchmarks for the early phase of the Middle Miocene Climate Optimum and to estimate the pace of environmental change. Based on the Coexistence Approach, a very clear signal of seasonality can be reconstructed. A warm and wet summer season, with c. 204-236 mm of precipitation during the wettest month, contrasted with a rather dry winter season with precipitation of c. 9-24 mm during the driest month. The mean annual temperature ranged between 15.7 and 20.8 °C, with about 9.6-13.3 °C during the cold season and 24.7-27.9 °C during the warmest month. In contrast, today's climate of this area, with an annual temperature of 9.8 °C and 660 mm of rainfall, is characterized by a cold winter season (mean temperature: -1.4 °C, mean precipitation: 39 mm) and a summer mean temperature of 19.9 °C (mean precipitation: 84 mm). Different modes of environmental shift shaped the composition of the vegetation. Within a few millennia, marshes and salt marshes with abundant Cyperaceae rapidly graded into Taxodiaceae swamps. This quick but gradual process was interrupted by swift marine ingressions which took place on a decadal to centennial scale. The transgression is accompanied by blooms of dinoflagellates and of the green algae Prasinophyta and by an increase in Abies and Picea. Afterwards, the retreat of the sea and the progradation of estuarine and wetland settings were again gradual processes. Despite a clear sedimentological cyclicity, which is related to the 21-kyr precessional forcing, the climate data show little variation. This missing pattern might be due to buffering of the precession-related climate signal by the subtropical vegetation. Another explanation could be the method-inherent broad range of climate-parameter estimates, which could mask small-scale climatic changes.
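For readers unfamiliar with the Coexistence Approach mentioned above, the sketch below illustrates its core step under stated assumptions: the estimate for a climate parameter is taken as the interval in which all recorded taxa (via their nearest living relatives) can coexist, i.e. the intersection of their tolerance ranges. The taxa and tolerance values are invented for illustration and are not the study's calibration data.

```python
# Hedged sketch of the Coexistence Approach intersection step (illustrative values only).
def coexistence_interval(tolerances):
    """tolerances: dict mapping taxon -> (min, max) tolerance for one climate parameter.
    Returns the coexistence interval (lower, upper), or None if the ranges do not overlap."""
    lower = max(lo for lo, _ in tolerances.values())
    upper = min(hi for _, hi in tolerances.values())
    return (lower, upper) if lower <= upper else None

# Invented mean-annual-temperature tolerances in °C, for illustration only.
mat_ranges = {"Taxodiaceae": (14.0, 27.0), "Abies": (5.0, 21.0), "Cyperaceae": (10.0, 28.0)}
print(coexistence_interval(mat_ranges))  # -> (14.0, 21.0)
```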
Abstract:
Determining spent nuclear fuel isotopic content as accurately as possible is gaining importance due to its safety and economic implications. Since higher burn-ups are nowadays achievable through increased initial enrichments, more efficient burn-up strategies within the reactor cores and extended irradiation periods, establishing and improving computational methodologies is mandatory in order to carry out reliable criticality and isotopic prediction calculations. Several codes (WIMSD5, SERPENT 1.1.7, SCALE 6.0, MONTEBURNS 2.0 and MCNP-ACAB) and methodologies are tested here and compared against consolidated benchmarks (the OECD/NEA light-water-moderated pin cell) with the purpose of validating them and reviewing the state of isotopic prediction capabilities. These preliminary comparisons suggest what can generally be expected of these codes when applied to real problems. In the present paper, SCALE 6.0 and MONTEBURNS 2.0 are used to model the reported geometries, material compositions and burn-up history of the Spanish Vandellós II reactor, cycles 7-11, and to reproduce the measured isotopic compositions after irradiation and decay times. We analyze comparisons between the measurements and the results of each code for several levels of geometric modelling detail, using different libraries and cross-section treatment methodologies. The power and flux normalization method implemented in MONTEBURNS 2.0 is discussed and a new normalization strategy is developed for this and similar problems; further options are included to reproduce the temperature distributions of the materials within the fuel assemblies, and a new code is introduced to automate series of simulations and to manage material information between them. In order to have a realistic confidence level in the prediction of spent fuel isotopic content, we have estimated uncertainties using our MCNP-ACAB system. This depletion code, which combines the neutron transport code MCNP and the inventory code ACAB, propagates uncertainties in the nuclide inventory, assessing the potential impact of uncertainties in the basic nuclear data: cross-sections, decay data and fission yields.
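The uncertainty estimation described above couples MCNP and ACAB; purely as a hedged, simplified sketch of the general Monte Carlo propagation idea (not the actual MCNP-ACAB coupling), nuclear-data inputs can be sampled from an assumed covariance and a depletion calculation repeated per sample. The function `run_depletion` is a hypothetical stand-in for the transport/inventory calculation.

```python
# Hedged sketch of Monte Carlo propagation of nuclear-data uncertainties to the
# nuclide inventory. `run_depletion` is a hypothetical placeholder, not a real code.
import numpy as np

def propagate_xs_uncertainty(xs_mean, xs_cov, run_depletion, n_samples=100, seed=0):
    """xs_mean: nominal cross-section vector; xs_cov: assumed covariance matrix;
    run_depletion(xs): returns the end-of-irradiation nuclide inventory for one sample."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(xs_mean, xs_cov, size=n_samples)
    inventories = np.array([run_depletion(xs) for xs in samples])  # (n_samples, n_nuclides)
    return inventories.mean(axis=0), inventories.std(axis=0, ddof=1)
```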
Abstract:
Owing to the complexity of Ambient Assisted Living (AAL) systems and platforms, the evaluation of AAL solutions is a complex task that will challenge researchers for years to come. However, the analysis and comparison of proposed solutions is paramount to enable us to assess research results in this area. We have thus organized an international contest called EvAAL: Evaluating AAL Systems through Competitive Benchmarking. Its aims are to raise interest within the research and developer communities in the multidisciplinary research fields enabling AAL, and to create benchmarks for the evaluation and comparison of AAL systems.
Abstract:
The liberalization of electricity markets more than ten years ago in the vast majority of developed countries has introduced the need to model and forecast electricity prices and volatilities, both in the short and in the long term. Thus, there is a need for methodology that is able to deal with the most important features of electricity price series, which are well known for presenting not only structure in the conditional mean but also time-varying conditional variances. In this work we propose a new model, which allows conditionally heteroskedastic common factors to be extracted from the vector of electricity prices. These common factors are estimated jointly with their relationship to the original vector of series and with the dynamics affecting both their conditional mean and variance. The estimation of the model is carried out under the state-space formulation. The proposed model is applied to extract seasonal common dynamic factors as well as common volatility factors for electricity prices, and the estimation results are used to forecast electricity prices and their volatilities in the Spanish zone of the Iberian Market. Several simplified/alternative models are also considered as benchmarks to illustrate that the proposed approach is superior to all of them in terms of explanatory and predictive power.
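As a deliberately simplified, hedged illustration of the idea (not the paper's joint state-space estimator), a common factor can be extracted from the panel of price series by principal components and its time-varying conditional variance captured with a GARCH(1,1) fit; the data layout is assumed, and the `arch` package is used for the volatility fit.

```python
# Simplified two-step sketch: principal-component common factor + GARCH(1,1) volatility.
# This illustrates the modelling idea only, not the joint state-space estimation
# proposed in the paper. `prices` is assumed to be a DataFrame of price series.
import numpy as np
import pandas as pd
from arch import arch_model

def common_factor_volatility(prices: pd.DataFrame):
    centred = prices - prices.mean()
    _, _, vt = np.linalg.svd(centred.values, full_matrices=False)
    factor = pd.Series(centred.values @ vt[0], index=prices.index)  # first principal component
    fit = arch_model(factor, vol="GARCH", p=1, q=1).fit(disp="off")
    return factor, fit.conditional_volatility  # common factor and its conditional volatility
```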
Abstract:
The need to refine models for best-estimate calculations, based on good-quality experimental data, has been expressed in many recent meetings in the field of nuclear applications. The modeling needs arising in this respect should not be limited to the currently available macroscopic methods but should be extended to next-generation analysis techniques that focus on more microscopic processes. One of the most valuable databases identified for thermal-hydraulics modeling was developed by the Nuclear Power Engineering Corporation (NUPEC), Japan. From 1987 to 1995, NUPEC performed steady-state and transient critical power and departure from nucleate boiling (DNB) test series based on equivalent full-size mock-ups. Considering the reliability not only of the measured data but also of other relevant parameters such as the system pressure, inlet sub-cooling and rod surface temperature, these test series supplied the first substantial database for the development of truly mechanistic and consistent models for boiling transition and critical heat flux. Over the last few years the Pennsylvania State University (PSU), under the sponsorship of the U.S. Nuclear Regulatory Commission (NRC), has prepared, organized, conducted and summarized the OECD/NRC Full-size Fine-mesh Bundle Tests (BFBT) Benchmark. The international benchmark activities have been conducted in cooperation with the Nuclear Energy Agency of the Organisation for Economic Co-operation and Development (NEA/OECD) and the Japan Nuclear Energy Safety (JNES) organization, Japan. Consequently, JNES has made the Boiling Water Reactor (BWR) NUPEC database available for the purposes of the benchmark. Based on the success of the OECD/NRC BFBT benchmark, JNES has also decided to release the data based on the NUPEC Pressurized Water Reactor (PWR) subchannel and bundle tests for a follow-up international benchmark entitled the OECD/NRC PWR Subchannel and Bundle Tests (PSBT) benchmark. This paper presents an application of the joint Penn State University/Technical University of Madrid (UPM) version of the well-known subchannel code COBRA-TF, namely CTF, to the critical power and departure from nucleate boiling (DNB) exercises of the OECD/NRC BFBT and PSBT benchmarks.
Abstract:
The importance of safety in the application of nuclear technology pervades every task associated with the use of this energy source, from the design phase through operation to the subsequent decommissioning and waste management. At all of these stages, computational simulation tools play an essential role as a guide for design, as support during operation, and for predicting the isotopic evolution of reactor materials. The continuous improvement in computational resources since the middle of the twentieth century, together with advances in the calculation methods employed, makes it possible to treat the complexity of these situations in ever greater detail, a level of detail that in the past was simply discarded for lack of computing capacity or adequate tools. The present work focuses on the development of a neutronic calculation method for light water reactors based on corrected diffusion theory, with pin-by-pin level of detail, considering more energy groups than the traditional fast and thermal pair, and modelling the three-dimensional geometry of the reactor core. The capability of simulating both steady-state situations, with an optional criticality search, and the evolution of the neutron flux during transients has been included, together with an adaptive time-step algorithm to improve the performance of the simulations. An optimization study has been carried out on the calculation methods used to solve the diffusion equation, covering both the source iteration loop and the linear system solvers employed in the inner iterations. Moreover, the amount of memory and computing time required to solve full-core problems on a fine mesh makes it necessary to parallelize the calculation; a subdomain decomposition based on the alternating Schwarz method, combined with a nodal acceleration, has been applied. The diffusion approximation must be corrected if the results are to be reproduced with an accuracy close to that obtained with the transport equation. The interface discontinuity factors used for this correction cannot, in practice, be calculated and stored for every possible configuration of a fuel pin of a given composition inside the reactor. For this reason, a parametrization of the discontinuity factor as a function of the neighbourhood has been studied, which would allow this factor to be treated as just another cross-section, parametrized in terms of significant values of the environment of the material pin. In addition, coupling with thermal-hydraulic codes has also been considered, which allows multiphysics simulations to be performed and more realistic results to be produced. Taking into account the nuclear industry's growing demand for realistic results to be supplied together with their confidence margins, the capability has been developed to obtain the sensitivities of the results through the calculation of the adjoint flux, and subsequently to propagate cross-section uncertainties to full-core calculations. All of this work has been integrated into the COBAYA3 code, which forms part of the code platform developed in the European NURESIM project of the 6th Framework Programme.
The developments carried out have been verified with respect to their capability to model the problem at hand, and the implementation in the code has been validated numerically against the data of the OECD Nuclear Energy Agency benchmark on an accidental transient in a PWR with UO2/MOX fuel, as well as against other LWR benchmarks defined in the European NURESIM and NURISP projects.
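As a hedged, toy-scale sketch of the kind of diffusion criticality calculation the abstract refers to (it is not the COBAYA3 implementation), the snippet below solves a one-group, one-dimensional finite-difference diffusion problem for the multiplication factor by power iteration over the fission source; all cross-section values are invented.

```python
# Toy one-group, 1-D diffusion criticality solve by power (source) iteration.
# Illustrative only: invented cross-sections, zero-flux boundaries, direct inner solves.
import numpy as np

def k_eigenvalue(D=1.0, sig_a=0.10, nu_sig_f=0.12, length=100.0, n=200, tol=1e-8):
    h = length / n
    # Loss operator: -D d^2/dx^2 + Sigma_a, discretized with zero-flux boundaries.
    A = (np.diag(np.full(n, 2.0 * D / h**2 + sig_a))
         + np.diag(np.full(n - 1, -D / h**2), 1)
         + np.diag(np.full(n - 1, -D / h**2), -1))
    phi, k = np.ones(n), 1.0
    for _ in range(500):
        phi_new = np.linalg.solve(A, nu_sig_f * phi / k)   # inner solve with previous source
        k_new = k * np.sum(nu_sig_f * phi_new) / np.sum(nu_sig_f * phi)
        if abs(k_new - k) < tol:
            break
        phi, k = phi_new, k_new
    return k_new, phi_new / np.linalg.norm(phi_new)

k_eff, flux = k_eigenvalue()
print(f"k-effective ≈ {k_eff:.5f}")
```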
Abstract:
Modeling the evolution of the state of program memory during program execution is critical to many parallelization techniques. Current memory analysis techniques either provide very accurate information but run prohibitively slowly, or produce very conservative results. An approach based on abstract interpretation is presented for analyzing programs at compile time, which can accurately determine many important program properties such as aliasing, logical data structures and shape. These properties are known to be critical for transforming a single-threaded program into a version that can be run on multiple execution units in parallel. The analysis is shown to be of polynomial complexity in the size of the memory heap. Experimental results for benchmarks in the Jolden suite are given. These results show that in practice the analysis method is efficient and is capable of accurately determining shape information in programs that create and manipulate complex data structures.
Abstract:
Several models for context-sensitive analysis of modular programs have been proposed, each with different characteristics and representing different trade-offs. The advantage of these context-sensitive analyses is that they provide information which is potentially more accurate than that provided by context-free analyses. Such information can then be applied to validating/debugging the program and/or to specializing the program in order to obtain important performance improvements. Some very preliminary experimental results have also been reported for some of these models, providing initial evidence of their potential. However, further experimentation, which is needed in order to understand the many issues left open and to show that the proposed models scale and are usable in the context of large, real-life modular programs, was left as future work. The aim of this paper is two-fold. On one hand, we provide an empirical comparison of the different models proposed in previous work, as well as experimental data on the different choices left open in those designs. On the other hand, we explore the scalability of these models by using larger modular programs as benchmarks. The results have been obtained from a realistic implementation of the models, integrated in a production-quality compiler (CiaoPP/Ciao). Our experimental results shed light on the practical implications of the different design choices and of the models themselves. We also show that context-sensitive analysis of modular programs is indeed feasible in practice, and that in certain critical cases it provides better performance results than those achievable by analyzing the whole program at once, especially in terms of memory consumption and when reanalyzing after making changes to a program, as is often the case during program development.
Abstract:
In this paper we propose a complete scheme for automatic exploitation of independent and-parallelism in CLP programs. We first discuss the new problems involved, which arise from the different properties of the independence notions applicable to CLP. We then show how independence can be derived from a number of standard analysis domains for CLP. Finally, we perform a preliminary evaluation of the efficiency, accuracy, and effectiveness of the approach by implementing a parallelizing compiler for CLP based on the proposed ideas and applying it to a number of CLP benchmarks.
Abstract:
Abstract interpreters rely on the existence of a fixpoint algorithm that calculates a least upper bound approximation of the semantics of the program. Usually, that algorithm is described in terms of the particular language under study and therefore is not directly applicable to programs written in a different source language. In this paper we introduce a generic, block-based, and uniform representation of the program control flow graph and a language-independent fixpoint algorithm that can be applied to a variety of languages and, in particular, Java. Two major characteristics of our approach are accuracy (obtained through a top-down, context-sensitive approach) and reasonable efficiency (achieved by means of memoization and dependency tracking techniques). We have also implemented the proposed framework and show some initial experimental results for standard benchmarks, which further support the feasibility of the solution adopted.
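As a hedged sketch of the kind of language-independent fixpoint computation described above (the interface names are illustrative, not the paper's actual API), a worklist algorithm can be written once against an explicit block-level control flow graph, with the abstract domain supplying only a transfer function, a join, and a bottom element:

```python
# Generic worklist fixpoint over a block-based control flow graph (illustrative sketch).
def fixpoint(blocks, succs, transfer, join, bottom, entry, entry_state):
    """blocks: iterable of block ids; succs[b]: successors of block b;
    transfer(b, s): abstract output of block b for input state s;
    join(s1, s2): least upper bound of two abstract states; bottom: least element."""
    state_in = {b: bottom for b in blocks}
    state_out = {b: bottom for b in blocks}
    state_in[entry] = entry_state
    worklist = [entry]
    while worklist:
        b = worklist.pop()
        out = transfer(b, state_in[b])
        if out != state_out[b]:                 # block's output changed: propagate
            state_out[b] = out
            for s in succs.get(b, ()):
                merged = join(state_in[s], out)
                if merged != state_in[s]:
                    state_in[s] = merged
                    worklist.append(s)
    return state_in, state_out

# Tiny usage example: reachability as a trivial abstract domain (bottom=False, join=or).
blocks = ["entry", "loop", "exit"]
succs = {"entry": ["loop"], "loop": ["loop", "exit"], "exit": []}
reach_in, _ = fixpoint(blocks, succs, transfer=lambda b, s: s,
                       join=lambda a, b: a or b, bottom=False,
                       entry="entry", entry_state=True)
print(reach_in)  # {'entry': True, 'loop': True, 'exit': True}
```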