962 results for Risk sources
Abstract:
Drought is a natural phenomenon that originates from a decline in precipitation relative to an average and results in insufficient water availability for some activity. The growing pressure exerted on water resources has aggravated the impacts of drought and, at the same time, has triggered water scarcity situations in many parts of the planet. Countries with a Mediterranean climate are especially vulnerable to droughts, and their water-dependent economic growth gives rise to significant impacts. Reducing the impacts of drought requires reducing vulnerability to droughts, which comes from more efficient management and better preparedness. For this, it is very important to have information about the impacts and the scope of this natural phenomenon. This research addresses the subject of drought impacts: it considers all the types of impact that may occur and also compares their effects in two countries (Spain and Chile). To this end, impact attribution models are proposed that are capable of measuring the economic losses caused by the lack of water. The proposed models have an econometric basis that includes variables that are key when evaluating the impacts, such as a variable related to water availability, together with variables of a different nature to distinguish the effects caused by other sources of variation. These models are adapted to each phase of the study. First, the direct impacts on irrigation are measured and a randomness factor is introduced into the model to evaluate the economic risk of drought. This is done at two geographic levels (province and Agricultural Demand Unit), and at the latter level not only the water supply risk but also the water demand risk is introduced. The introduction of the risk perspective into the model yields an economic risk management tool that can be used for planning strategies. Next, an extension of the econometric model is developed to measure the impacts on the agricultural sector (direct impacts on irrigated and rainfed farming and indirect impacts on the agri-food industry); for this purpose the model is adapted and concatenated elasticities between the lack of water and the secondary impacts are calculated. Finally, an econometric model is proposed for the Chilean case study and the impact of droughts associated with the La Niña phenomenon is evaluated. Overall, the results show the value provided by more precise knowledge of the impacts, since the damage actually caused by the lack of water is often overestimated. The indirect impacts of drought confirm their scope, while they are dampened as we approach the macroeconomic level. In the case of Chile, its different management shows the role that the El Niño and La Niña phenomena play in the prices of the country's main crops and in the growth of the sector. To reduce the losses and their scope, more mitigation measures should be put forward that focus their effort on efficient management of the resource. In addition, prevention must play a very important role in reducing the risks that may be suffered in situations of scarcity.
ABSTRACT Drought is a natural phenomenon that originates from a decrease in rainfall relative to the average and results in water shortages for some activities. The increasing pressure on water resources has augmented the impact of droughts, just as water scarcity has become an additional problem in many parts of the planet. Countries with a Mediterranean climate are especially vulnerable to drought, and their water-dependent economic growth leads to significant impacts. To reduce the negative impacts it is necessary to deal with drought vulnerability, and to achieve this objective more efficient management is needed. The availability of information about the impacts and the scope of droughts therefore becomes highly important. This research attempts to encompass the issue of drought impacts; it characterizes all impact types that may occur and also compares their effects in two different countries (Spain and Chile). Impact attribution models are proposed in order to measure the economic losses caused by the lack of water. The proposed models are based on econometric approaches and include key variables for measuring the impacts. Variables related to water availability, crop prices or time trends are included to distinguish the effects caused by each of the possible sources. These models are adapted for each part of the study. First, the direct impacts on irrigation are measured and a source of variability is introduced into the model to assess the economic risk of drought. This is performed at two geographic levels (provincial and Agricultural Demand Unit). At the latter, not only the supply risk but also the water demand risk is considered. The introduction of the risk perspective into the model results in a risk management tool that can be used for planning strategies. Then, an extension of the econometric model is developed to measure the impacts on the agricultural sector (direct impacts on irrigated and rainfed production and indirect impacts on the agri-food industry). For this aim the model is adapted and concatenated elasticities between the lack of water and the impacts are estimated. Finally, an econometric model is proposed for the Chilean case study to evaluate the impact of droughts, especially those associated with the El Niño Southern Oscillation. The overall results show the value of more precise knowledge of the impacts, which often tend to be overestimated. The models allow accurate measurement of the losses due to the lack of water. The indirect impacts of drought confirm their scope, while also being diluted as we approach the macroeconomic variables. In the case of Chile, the country's different management strategies show the role of the ENSO phenomenon in main crop prices and in economic trends. More mitigation measures focused on efficient resource management are necessary to reduce drought losses. In addition, prevention must play an important role in reducing the risks that may be suffered due to shortages.
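A minimal sketch of the kind of impact-attribution regression described above, assuming a hypothetical log-linear specification in which irrigated production value is explained by water availability, a crop price index and a time trend; the variable names, data and coefficients are illustrative placeholders, not the models or estimates of the thesis.

```python
import numpy as np

# Hypothetical yearly observations for one province (illustrative data only).
water = np.array([0.95, 0.80, 0.55, 1.00, 0.70, 0.60, 0.90, 1.05])   # water allocation index
price = np.array([1.00, 1.02, 1.10, 0.98, 1.05, 1.12, 1.01, 0.97])   # crop price index
trend = np.arange(len(water))                                         # technology / time trend
output = np.array([100, 92, 70, 103, 85, 78, 97, 106])                # irrigated production value

# Log-linear attribution model: ln(output) = b0 + b1*ln(water) + b2*ln(price) + b3*trend
X = np.column_stack([np.ones_like(trend), np.log(water), np.log(price), trend])
y = np.log(output)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# b1 is the elasticity of production with respect to water availability: the percentage loss in
# output attributable to a 1% shortfall in water, holding the other sources of variation constant.
print("estimated water elasticity:", round(beta[1], 3))
```

Separating a price and trend term from the water term is what allows losses to be attributed to the lack of water rather than to other sources of variation, which is the role the abstract assigns to the non-water covariates.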
Abstract:
The road to the automation of agricultural processes passes through the safe operation of autonomous vehicles. This requirement is already a fact for ground mobile units, but it has still not been well defined for aerial robots (UAVs), mainly because the regulations and legislation are quite diffuse or even nonexistent. Therefore, defining a common and global policy is the challenge to tackle, and this characterization has to be addressed from field experience. Accordingly, this paper presents the work done in this direction, based on the analysis of the most common sources of hazard when using UAVs for agricultural tasks. The work, based on the ISO 31000 standard, has been carried out by applying a three-step structure that integrates the identification, assessment and reduction procedures. The present paper describes how this method has been applied to analyze previous accidents and malfunctions during UAV operations in order to obtain real failure causes. This has made it possible to highlight common risks and hazardous sources and to propose specific guards and safety measures for the agricultural context.
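A toy sketch of the three-step structure mentioned above (identification, assessment, reduction), assuming a simple likelihood-by-severity scoring of hazard sources logged from field experience; the hazard entries, scales and tolerance threshold are made up for illustration and are not taken from the paper.

```python
# Step 1: identification - hazard sources logged from UAV field operations (illustrative entries).
hazards = [
    {"source": "GPS signal loss near tree lines", "likelihood": 4, "severity": 3},
    {"source": "battery depletion during spraying pass", "likelihood": 3, "severity": 4},
    {"source": "bystander entering the treated field", "likelihood": 2, "severity": 5},
]

# Step 2: assessment - rank hazards by a likelihood x severity risk index.
for h in hazards:
    h["risk"] = h["likelihood"] * h["severity"]
hazards.sort(key=lambda h: h["risk"], reverse=True)

# Step 3: reduction - flag entries above a tolerance threshold for specific guards or safety measures.
TOLERANCE = 10
for h in hazards:
    action = "define guard / safety measure" if h["risk"] > TOLERANCE else "accept and monitor"
    print(f'{h["risk"]:>2}  {h["source"]:<40}  {action}')
```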
Abstract:
On 12 January 2010, an earthquake struck the city of Port-au-Prince, the capital of Haiti. The earthquake reached a magnitude of Mw 7.0 and the epicenter was located near the town of Léogâne, approximately 25 km west of the capital. The earthquake occurred in the boundary region separating the Caribbean plate and the North American plate. This plate boundary is dominated by left-lateral strike-slip motion and compression, and accommodates about 20 mm/yr of slip, with the Caribbean plate moving eastward with respect to the North American plate (DeMets et al., 2000). Initially, the location and focal mechanism of the earthquake seemed to involve straightforward accommodation of oblique relative motion between the Caribbean and North American plates along the Enriquillo-Plantain Garden fault zone (EPGFZ); however, Hayes et al. (2010) combined seismological observations, geologic field data and space geodetic measurements to show that, instead, the rupture process involved slip on multiple faults. In addition, the authors showed that the remaining shallow shear strain will be released in future surface-rupturing earthquakes on the EPGFZ. In December 2010, a Spanish cooperation project financed by the Universidad Politécnica de Madrid started with a clear objective: the evaluation of seismic hazard and risk in Haiti and its application to seismic design, urban planning, emergency and resource management. One of the tasks of the project was devoted to the vulnerability assessment of the current building stock and the estimation of seismic risk scenarios. The study was carried out following the capacity spectrum method as implemented in the software SELENA (Molina et al., 2010). The method requires a detailed classification of the building stock into predominant building typologies (according to the materials of the structure and walls, the number of stories and the age of construction) and the use of the building (residential, commercial, etc.). The knowledge of the soil characteristics of the city and the simulation of a scenario earthquake then provide the seismic risk scenarios (damaged buildings). The initial results of the study show that one of the largest sources of uncertainty comes from the difficulty of achieving a precise classification of building typologies, due to craft construction carried out without any regulations. It is also observed that, although the occurrence of large earthquakes usually helps to decrease the vulnerability of cities through the collapse of low-quality buildings and the reconstruction of seismically designed buildings, in the case of Port-au-Prince the seismic risk in most of the districts remains high, showing very vulnerable areas. Therefore, the local authorities have to direct their efforts towards the quality control of new buildings, the reinforcement of the existing building stock, the establishment of seismic codes and the development of emergency planning, also through the education of the population.
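As a rough illustration of how damage probabilities enter a risk scenario of this kind, the sketch below evaluates a lognormal fragility curve for one hypothetical building typology under a scenario demand; the median capacity, dispersion and building count are invented values, not calibrated parameters from SELENA or from the Port-au-Prince study.

```python
from math import log, sqrt, erf

def fragility(sd, median_sd, beta):
    """Probability of reaching or exceeding a damage state given a spectral displacement sd,
    using a lognormal fragility curve: standard normal CDF of ln(sd / median_sd) / beta."""
    z = log(sd / median_sd) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical unreinforced-masonry typology: median capacity 4 cm, dispersion 0.7 (illustrative).
scenario_sd = 6.0   # spectral displacement demanded by the scenario earthquake, in cm
p_extensive = fragility(scenario_sd, median_sd=4.0, beta=0.7)

buildings = 12000   # buildings of this typology in a district (made-up count)
print(f"expected buildings at or beyond extensive damage: {p_extensive * buildings:.0f}")
```

In a full capacity spectrum workflow the demand would come from intersecting the building capacity curve with the scenario response spectrum, and the calculation would be repeated per typology, damage state and district.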
Abstract:
New European directives have proposed the direct application of compost and digestate produced from municipal solid wastes as organic matter sources in agricultural soils. Therefore, information about phosphorus leaching from these residues when they are applied to the soil is increasingly important. Leaching experiments were conducted to determine the P mobility in compost and digestate mixtures, supplying amounts equivalent to 100 kg P ha-1 to three different types of soil. The tests were performed in accordance with CEN/TS 14405:2004, analyzing the maximum dissolved reactive P and the kinetic rate in the leachate. P biowaste fractionation indicated that digestate has a higher level of available P than compost has. In contrast, P losses in leaching experiments with soil-compost mixtures were higher than in soil-digestate mixtures. For both wastes, there was no correlation between the dissolved reactive P lost and the water-soluble P. The interaction between soil and waste, the long experimentation time, and the volume of leachate obtained caused the waste's wettability to become an influential parameter in P leaching behavior. The overall conclusion is that kinetic data analysis provides valuable information concerning the sorption mechanism that can be used for predicting the large-scale behavior of soil systems.
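A small sketch of the kind of kinetic data analysis invoked in the conclusion, assuming a simple first-order release model fitted to cumulative leached P versus the liquid-to-solid ratio; the data points and fitted parameters are invented for illustration and do not come from the CEN/TS 14405 tests reported above.

```python
import numpy as np
from scipy.optimize import curve_fit

# Cumulative dissolved reactive P in the leachate (mg/kg) vs liquid-to-solid ratio (L/kg) - illustrative.
ls_ratio = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
p_leached = np.array([0.8, 1.4, 2.6, 3.7, 4.6, 5.3, 5.5])

def first_order(ls, p_max, k):
    """First-order release model: P(LS) = p_max * (1 - exp(-k * LS))."""
    return p_max * (1.0 - np.exp(-k * ls))

(p_max, k), _ = curve_fit(first_order, ls_ratio, p_leached, p0=[5.0, 1.0])
print(f"maximum leachable P ~ {p_max:.2f} mg/kg, rate constant k ~ {k:.2f} per unit L/S")
```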
Abstract:
This paper presents a model for determining value at operational risk (OpVaR) in electric utilities, with the aim of confirming the versatility of the Bank for International Settlements (BIS) proposals. The model intends to open a new methodological approach in risk management, paying special attention to underlying operational sources of risk.
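A compact sketch of one common way to produce an OpVaR figure, assuming a loss distribution approach (Poisson event frequency, lognormal severity, Monte Carlo aggregation of annual losses); this is a generic BIS-style construction, not the specific model of the paper, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters for one operational risk source in a utility (not from the paper).
lam = 12.0               # expected number of loss events per year (Poisson frequency)
mu, sigma = 10.0, 1.2    # lognormal severity parameters (natural log of loss in monetary units)

def simulate_annual_losses(n_years=50_000):
    counts = rng.poisson(lam, size=n_years)
    # Sum a lognormal severity draw for each event in each simulated year.
    return np.array([rng.lognormal(mu, sigma, size=c).sum() for c in counts])

losses = simulate_annual_losses()
op_var_999 = np.quantile(losses, 0.999)   # 99.9% operational value at risk
print(f"simulated OpVaR (99.9%): {op_var_999:,.0f}")
```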
Abstract:
Robotics has evolved exponentially in recent decades, allowing current systems to perform extremely complex tasks with great precision, reliability and speed. However, this development has been associated with a greater degree of specialization and particularization of the technologies involved, which are very efficient in specific, controlled situations but incapable in changing, dynamic and unstructured environments. For this reason, the development of robotics must move towards endowing systems with the capacity to adapt to circumstances, to understand the changes they observe and to be flexible when interacting with the environment. These are the characteristics of human interaction with the environment, the ones that allow human beings to survive and that can provide a system with sufficient intelligence and capability to operate autonomously and independently in a real environment. This adaptability is especially important in the handling of risks and uncertainties, since it is the mechanism that allows threats to be contextualized and evaluated in order to provide an adequate response. Thus, for example, when a person moves and interacts with the environment, they do not evaluate obstacles as a function of their position, speed or dynamics (as traditional robotic systems do), but by estimating the potential risk that these elements pose to the person. This evaluation is achieved by combining two human psychophysical processes: on the one hand, human perception analyzes the relevant elements of the environment, trying to understand their nature from behavior patterns, associated properties or other distinctive features. On the other hand, as a second level of evaluation, understanding this nature allows the human being to know or estimate the relationship of these elements with himself, as well as their implications in terms of the level of risk. Establishing these semantic relationships, called cognition, is the only way to define the risk level in absolute terms and to generate an adequate response to it, not necessarily proportional but coherent with the risk being faced. The research presented in this thesis describes the work carried out to transfer this methodology of analysis and operation to robotics. It has focused especially on the navigation of aerial robots, designing and implementing human-inspired procedures to guarantee its safety. To this end, the human mechanisms of perception, cognition and reaction involved in risk management have been studied and evaluated. How stimuli are captured, processed and transformed by the psychological, sociological and anthropological conditioning factors of human beings has also been analyzed. Finally, how these factors motivate and trigger human reactions to dangers has also been analyzed. As a result of this study, all these processes, behaviors and determinants of human conduct have been reproduced in a framework structured on the basis of analogous factors. It employs the knowledge obtained experimentally in the form of algorithms, techniques and strategies, emulating human behavior in the same circumstances.
Designed, implemented and validated both in simulation and with real data, this framework proposes an innovative way, in methodology as well as in procedure, of understanding and reacting to the potential threats of a robotic mission. ABSTRACT Robotics has undergone a great revolution in the last decades. Nowadays this technology is able to perform really complex tasks with a high degree of accuracy and speed; however, this is only true in precisely defined situations with fully controlled variables. Since the real world is dynamic, changing and unstructured, flexible and non context-dependent systems are required. The ability to understand situations, acknowledge changes and balance reactions is required by robots to successfully interact with their surroundings in a fully autonomous fashion. In fact, it is those very processes that define human interactions with the environment. Social relationships, driving or risk/uncertainty management: in all these activities and systems, context understanding and adaptability are what allow human beings to survive. Contrary to traditional robotics, people do not evaluate obstacles according to their position but according to the potential risk their presence implies. In this sense, human perception looks for information which goes beyond location, speed and dynamics (the usual data used in traditional obstacle avoidance systems). Specific features in the behaviour of a particular element allow the understanding of that element's nature and therefore the comprehension of the risk posed by it. This process defines the second main difference between traditional obstacle avoidance systems and human behaviour: the ability to understand a situation/scenario makes it possible to grasp the implications of the elements and their relationship with the observer. Establishing these semantic relationships, named cognition, is the only way to estimate the actual danger level of an element. Furthermore, only the application of this knowledge allows the generation of coherent, suitable and adjusted responses to deal with any risk faced. The research presented in this thesis summarizes the work done towards translating these human cognitive/reasoning procedures to the field of robotics. More specifically, the work has been focused on employing human-based methodologies to enable aerial robots to navigate safely. To this effect, human perception, cognition and reaction processes concerning risk management have been experimentally studied, as well as the acquisition and processing of stimuli. How psychological, sociological and anthropological factors modify, balance and give shape to those stimuli has been researched. And finally, the way in which these factors motivate human behaviour according to different mindsets and priorities has been established. This associative workflow has been reproduced by establishing an equivalent structure and defining similar factors and sources. Besides, all the knowledge obtained experimentally has been applied in the form of algorithms, techniques and strategies which emulate the analogous human behaviours. As a result, a framework capable of understanding and reacting in response to stimuli has been implemented and validated.
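A highly simplified sketch of the idea that obstacles are ranked by estimated risk rather than by raw position, assuming a hypothetical score that combines proximity, closing speed and a semantic weight for what the element is believed to be; the classes, weights and formula are invented and only illustrate the perception/cognition split described above, not the thesis framework itself.

```python
# Cognition: a semantic weight for the perceived nature of each element (illustrative values).
NATURE_WEIGHT = {"static structure": 0.3, "vehicle": 0.7, "person": 1.0, "unknown": 0.8}

def risk_level(distance_m, closing_speed_ms, nature):
    """Toy risk score: closer, faster-approaching and more sensitive elements score higher."""
    proximity = 1.0 / max(distance_m, 1.0)
    approach = max(closing_speed_ms, 0.0)
    return NATURE_WEIGHT.get(nature, NATURE_WEIGHT["unknown"]) * proximity * (1.0 + approach)

# Perception: features observed for each element around the aerial robot (illustrative values).
obstacles = [
    ("antenna mast", 15.0, 0.0, "static structure"),
    ("delivery drone", 40.0, 6.0, "vehicle"),
    ("field worker", 25.0, 1.0, "person"),
]

# React to the highest-risk element first, with a response coherent with (not proportional to) the score.
ranked = sorted(obstacles, key=lambda o: risk_level(*o[1:]), reverse=True)
for name, d, v, nature in ranked:
    print(f"{name:<15} risk={risk_level(d, v, nature):.3f}")
```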
Abstract:
Container terminals are complex systems in which a large number of economic actors interact to offer high-quality services under strict planning and economic objectives. The so-called "next-generation terminals" are designed to serve the mega-vessels, which require productivity rates reaching 300 moves/hour. These terminals have to satisfy high standards, given that competition among terminals is intense. Ensuring the reliability of berth schedules is key to attracting clients, as well as to minimizing the time that the vessel remains in port. Operations planning is more complex than in the past, and the tolerances for possible errors are smaller. In this context, operational disruptions must be reduced to a minimum. The main causes of these operational disturbances, and therefore of uncertainty, are identified and characterized in this research. There is a series of factors that, when interacting with the infrastructure and/or the operations, trigger failure or stoppage modes. The former may result not only in service delays but may also have collateral effects on the terminal's reputation, or even consume management time, all of which represents an impact for the terminal. In the immediate future, the monitoring of operational variables shows great potential for qualitatively improving the operations management and planning models of terminals, whose level of automation is steadily increasing. The combination of expert judgement with instruments that provide short- and long-term data is fundamental for the development of decision-support tools, since in this way they will be adapted to the actual climatic and operational conditions that exist at each site. For the short term, a methodology is proposed for obtaining forecasts of operational parameters in container terminals. In addition, a case study has been developed in which the proposed model is applied to obtain forecasts of vessel productivity. This work is based entirely on data provided by a Spanish semi-automated terminal. Regarding the long term, the work analyses how to manage, evaluate and mitigate the effect of operational disruptions through risk assessment, an interesting way to evaluate the effect that uncertain but likely events may have on the long-term productivity of the terminal. In addition, a definition of operational risk is proposed, together with a discussion of the terms that most faithfully represent the nature of the activities and, finally, guidelines are provided for managing the results obtained. Container terminals are complex systems where a large number of factors and stakeholders interact to provide high-quality services under rigid planning schedules and economic objectives. The so-called next-generation terminals are conceived to serve the new mega-vessels, which demand productivity rates of up to 300 moves/hour. These terminals need to satisfy high standards because competition among terminals is fierce. Ensuring reliability in berth scheduling is key to attracting clients, as well as to reducing to a minimum the time that vessels stay in the port.
Because of the aforementioned, operations planning is becoming more complex, and the tolerances for errors are smaller. In this context, operational disturbances must be reduced to a minimum. The main sources of operational disruption, and thus of uncertainty, are identified and characterized in this study. External drivers interact with the infrastructure and/or the activities, resulting in failure or stoppage modes. The latter may result not only in operational delays but also in collateral and reputational damage or loss of time (especially management time), all of which implies an impact for the terminal. In the near future, the monitoring of operational variables has great potential to bring a qualitative improvement to the operations management and planning models of terminals, which use increasing levels of automation. The combination of expert criteria with instruments that provide short- and long-run data is fundamental for the development of tools to guide decision-making, since they will be adapted to the real climatic and operational conditions that exist on site. For the short term, a method is proposed to obtain operational parameter forecasts in container terminals. To this end, a case study is presented in which forecasts of vessel performance are obtained. This research has been based entirely on data gathered from a semi-automated container terminal in Spain. On the other hand, it is analyzed how to manage, evaluate and mitigate disruptions in the long term by means of risk assessment, an interesting approach to evaluate the effect of uncertain but likely events on the long-term throughput of the terminal. In addition, a definition for operational risk evaluation in port facilities is proposed, along with a discussion of the terms that better represent the nature of the activities involved, and finally, guidelines to manage the results obtained are provided.
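A small sketch of the long-term risk-assessment idea described above, assuming each class of operational disruption is characterized by an annual event frequency and a downtime per event, and that lost crane-hours are translated into lost moves; the disruption classes, rates and productivity figure are invented placeholders, not data from the Spanish terminal or the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical disruption classes: (events per year, mean hours lost per event).
disruptions = {
    "high wind stoppage": (18, 3.0),
    "equipment breakdown": (10, 5.0),
    "berth schedule conflict": (6, 2.0),
}

MOVES_PER_HOUR = 120        # illustrative aggregate quay productivity
YEARS = 20_000              # Monte Carlo sample of operating years

lost_moves = np.zeros(YEARS)
for rate, mean_hours in disruptions.values():
    events = rng.poisson(rate, size=YEARS)
    # Exponentially distributed downtime per event, summed over each simulated year.
    lost_hours = np.array([rng.exponential(mean_hours, size=n).sum() for n in events])
    lost_moves += lost_hours * MOVES_PER_HOUR

print(f"expected annual lost moves: {lost_moves.mean():,.0f}")
print(f"1-in-20-year loss (95th percentile): {np.quantile(lost_moves, 0.95):,.0f} moves")
```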
Abstract:
Multielectrode recording techniques were used to record ensemble activity from 10 to 16 simultaneously active CA1 and CA3 neurons in the rat hippocampus during performance of a spatial delayed-nonmatch-to-sample task. Extracted sources of variance were used to assess the nature of two different types of errors that accounted for 30% of total trials. The two types of errors included ensemble “miscodes” of sample phase information and errors associated with delay-dependent corruption or disappearance of sample information at the time of the nonmatch response. Statistical assessment of trial sequences and associated “strength” of hippocampal ensemble codes revealed that miscoded error trials always followed delay-dependent error trials in which encoding was “weak,” indicating that the two types of errors were “linked.” It was determined that the occurrence of weakly encoded, delay-dependent error trials initiated an ensemble encoding “strategy” that increased the chances of being correct on the next trial and avoided the occurrence of further delay-dependent errors. Unexpectedly, the strategy involved “strongly” encoding response position information from the prior (delay-dependent) error trial and carrying it forward to the sample phase of the next trial. This produced a miscode type error on trials in which the “carried over” information obliterated encoding of the sample phase response on the next trial. Application of this strategy, irrespective of outcome, was sufficient to reorient the animal to the proper between trial sequence of response contingencies (nonmatch-to-sample) and boost performance to 73% correct on subsequent trials. The capacity for ensemble analyses of strength of information encoding combined with statistical assessment of trial sequences therefore provided unique insight into the “dynamic” nature of the role hippocampus plays in delay type memory tasks.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
Background. Genetic influences have been shown to play a major role in determining the risk of alcohol dependence (AD) in both women and men; however, little attention has been directed to identifying the major sources of genetic variation in AD risk. Method. Diagnostic telephone interview data from young adult Australian twin pairs born between 1964 and 1971 were analyzed. Cox regression models were fitted to interview data from a total of 2708 complete twin pairs (690 MZ female, 485 MZ male, 500 DZ female, 384 DZ male, and 649 DZ female/male pairs). Structural equation models were fitted to determine the extent of residual genetic and environmental influences on AD risk while controlling for effects of sociodemographic and psychiatric predictors on risk. Results. Risk of AD was increased in males, in Roman Catholics, in those reporting a history of major depression, social anxiety problems, and conduct disorder, or (in females only) a history of suicide attempt and childhood sexual abuse; but was decreased in those reporting Baptist, Methodist, or Orthodox religion, in those who reported weekly church attendance, and in university-educated males. After allowing for the effects of sociodemographic and psychiatric predictors, 47 % (95 % CI 28-55) of the residual variance in alcoholism risk was attributable to additive genetic effects, 0 % (95 % CI 0-14) to shared environmental factors, and 53 % (95 % CI 45-63) to non-shared environmental influences. Conclusions. Controlling for other risk factors, substantial residual heritability of AD was observed, suggesting that psychiatric and other risk factors play a minor role in the inheritance of AD.
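As a back-of-the-envelope illustration of how twin data separate genetic and environmental variance, the sketch below applies the classical Falconer decomposition to hypothetical MZ and DZ twin-pair correlations; the study itself fits full structural equation models with sociodemographic and psychiatric covariates, so this is only the simplest version of the logic, and the correlations used here are invented.

```python
def falconer_ace(r_mz, r_dz):
    """Classical twin-study estimates: additive genetic (a2), shared environment (c2) and
    non-shared environment (e2) variance components from MZ and DZ twin correlations."""
    a2 = 2.0 * (r_mz - r_dz)      # heritability
    c2 = (2.0 * r_dz) - r_mz      # shared environmental variance
    e2 = 1.0 - r_mz               # non-shared environment (includes measurement error)
    return a2, c2, e2

# Hypothetical liability correlations for alcohol dependence (illustrative values only).
a2, c2, e2 = falconer_ace(r_mz=0.48, r_dz=0.24)
print(f"A = {a2:.2f}, C = {c2:.2f}, E = {e2:.2f}")
```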
Abstract:
In a dividend imputation tax system, equity investors have three potential sources of return: dividends, capital gains and franking (tax) credits. However, the standard procedures for estimating the market risk premium (MRP) for use in the capital asset pricing model, ignore the value of franking credits. Officer (1994) notes that if franking credits do affect the corporate cost of capital, their value must be added to the standard estimates of MRP. In the present paper, we explicitly derive the relationship between the value of franking credits (gamma) and the MRP. We show that the standard parameter estimates that have been adopted in practice (especially by Australian regulators) violate this deterministic mathematical relationship. We also show how information on dividend yields and effective tax rates bounds the values that can be reasonably used for gamma and the MRP. We make recommendations for how estimates of the MRP should be adjusted to reflect the value of franking credits in an internally consistent manner.
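A rough numerical illustration of the kind of adjustment discussed above, assuming a fully franked dividend yield, a corporate tax rate and a value of gamma; the gross-up used here (franking credit = dividend x t_c / (1 - t_c)) is the standard imputation arithmetic, but the parameter values and the resulting MRP are illustrative assumptions, not the paper's derivation or estimates.

```python
# Illustrative inputs (not the paper's estimates).
dividend_yield = 0.04      # fully franked cash dividend yield
capital_gain = 0.06        # expected capital gain component of the market return
risk_free = 0.05
t_c = 0.30                 # corporate tax rate
gamma = 0.50               # assumed market value of one dollar of franking credits

# Franking credit attached to a fully franked dividend, per unit of market value.
franking_yield = dividend_yield * t_c / (1.0 - t_c)

mrp_ex_credits = dividend_yield + capital_gain - risk_free
mrp_with_credits = mrp_ex_credits + gamma * franking_yield

print(f"franking credit yield:  {franking_yield:.4f}")
print(f"MRP ignoring credits:   {mrp_ex_credits:.4f}")
print(f"MRP including credits:  {mrp_with_credits:.4f}")
```

The point of the illustration is the internal-consistency requirement stressed in the abstract: whatever value is assumed for gamma must be reflected in the MRP estimate, since the two parameters are linked deterministically through the franking credit yield.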
Abstract:
Reliable, comparable information about the main causes of disease and injury in populations, and how these are changing, is a critical input for debates about priorities in the health sector. Traditional sources of information about the descriptive epidemiology of diseases, injuries and risk factors are generally incomplete, fragmented and of uncertain reliability and comparability. Lack of a standardized measurement framework to permit comparisons across diseases and injuries, as well as risk factors, and failure to systematically evaluate data quality have impeded comparative analyses of the true public health importance of various conditions and risk factors. As a consequence, the impact of major conditions and hazards on population health has been poorly appreciated, often leading to a lack of public health investment. Global disease and risk factor quantification improved dramatically in the early 1990s with the completion of the first Global Burden of Disease Study. For the first time, the comparative importance of over 100 diseases and injuries, and ten major risk factors, for global and regional health status could be assessed using a common metric (Disability-Adjusted Life Years) which simultaneously accounted for both premature mortality and the prevalence, duration and severity of the non-fatal consequences of disease and injury. As a consequence, mental health conditions and injuries, for which non-fatal outcomes are of particular significance, were identified as being among the leading causes of disease/injury burden worldwide, with clear implications for policy, particularly prevention. A major achievement of the Study was the complete global descriptive epidemiology, including incidence, prevalence and mortality, by age, sex and region, of over 100 diseases and injuries. National applications, further methodological research and an increase in data availability have led to improved national, regional and global estimates for 2000, but substantial uncertainty around the disease burden caused by major conditions, including HIV, remains. The rapid implementation of cost-effective data collection systems in developing countries is a key priority if global public policy to promote health is to be more effectively informed.
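For reference, the common metric mentioned above combines fatal and non-fatal outcomes in a single number. The sketch below shows the basic undiscounted, unweighted DALY arithmetic, DALY = YLL + YLD, with made-up figures for a single hypothetical condition; the Global Burden of Disease Study applies further refinements (age weighting, discounting, comorbidity handling) not shown here.

```python
def dalys(deaths, life_expectancy_at_death, incident_cases, disability_weight, avg_duration_years):
    """Basic DALY arithmetic: years of life lost (YLL) plus years lived with disability (YLD),
    without age weighting or discounting."""
    yll = deaths * life_expectancy_at_death
    yld = incident_cases * disability_weight * avg_duration_years
    return yll + yld, yll, yld

# Hypothetical condition in a hypothetical population (illustrative numbers only).
total, yll, yld = dalys(deaths=2_000, life_expectancy_at_death=30.0,
                        incident_cases=50_000, disability_weight=0.20, avg_duration_years=5.0)
print(f"YLL = {yll:,.0f}, YLD = {yld:,.0f}, DALYs = {total:,.0f}")
```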
Risk of serious NSAID-related gastrointestinal events during long-term exposure: a systematic review
Abstract:
Objective: Exposure to non-steroidal anti-inflammatory drugs (NSAIDs) is associated with an increased risk of serious gastrointestinal (GI) events compared with non-exposure. We investigated whether that risk is sustained over time. Data sources: Cochrane Controlled Trials Register (to 2002); MEDLINE, EMBASE, Derwent Drug File and Current Contents (1999-2002); manual searching of reviews (1999-2002). Study selection: From 479 search results reviewed and 221 articles retrieved, seven studies of patients exposed to prescription non-selective NSAIDs for more than 6 months and reporting time-dependent serious GI event rates were selected for quantitative data synthesis. These were stratified into two groups by study design. Data extraction: Incidence of GI events and number of patients at specific time points were extracted. Data synthesis: Meta-regression analyses were performed. Change in risk was evaluated by testing whether the slope of the regression line declined over time. Four randomised controlled trials (RCTs) provided evaluable data from five NSAID arms (aspirin, naproxen, two ibuprofen arms, and diclofenac). When the RCT data were combined, a small significant decline in annualised risk was seen: -0.005% (95% CI, -0.008% to -0.001%) per month. Sensitivity analyses were conducted because there was disparity within the RCT data. The pooled estimate from three cohort studies showed no significant decline in annualised risk over periods up to 2 years: -0.003% (95% CI, -0.008% to 0.003%) per month. Conclusions: Small decreases in risk over time were observed; these were of negligible clinical importance. For patients who need long-term (> 6 months) treatment, precautionary measures should be considered to reduce the net probability of serious GI events over the anticipated treatment duration. The effect of intermittent versus regular daily therapy on long-term risk needs further investigation.
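A minimal sketch of the meta-regression step described in the data synthesis, assuming each study arm contributes annualised GI event risks observed at several exposure durations and that the slope of risk against time is estimated by least squares; the event rates below are invented, not the extracted trial data, and the published analysis pools arms with appropriate weighting rather than fitting a single unweighted line.

```python
import numpy as np

# (months of exposure, annualised serious GI event risk in %) for one hypothetical NSAID arm.
months = np.array([6, 9, 12, 18, 24], dtype=float)
risk_pct = np.array([1.60, 1.55, 1.52, 1.49, 1.45])

# Slope of annualised risk against time; a value near zero means the risk is sustained with exposure.
slope, intercept = np.polyfit(months, risk_pct, deg=1)
print(f"change in annualised risk: {slope:+.4f} percentage points per month")
```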
Abstract:
In the last decades, increasing scientific evidence has correlated the regular consumption of (poly)phenol-rich foods with a potential reduction of chronic disease incidence and mortality. However, epidemiological evidence on the role of (poly)phenol intake against the risk of some chronic diseases is promising, but not conclusive. In this framework, a proper approach to (poly)phenol research is required, using a step-by-step strategy. The plant kingdom produces an overwhelming array of structurally diverse secondary metabolites, among which flavonoids and related phenolic and (poly)phenolic compounds constitute one of the most numerous and widely distributed groups of natural products. To date, more than 8000 structures have been classified as members of the phytochemical class of (poly)phenols, and among them over 4000 flavonoids have been identified. For this reason, a detailed characterization of food (poly)phenolics is essential to identify the compounds that will likely enter the human body upon consumption, to predict the metabolites that will be generated and to unravel the potential effects of phenolic-rich food sources on human health. In the first part of this work the attention was focused on the phenolic characterization of fruit and vegetable supplements, considering the increasing attention recently addressed to the so-called "nutraceuticals", and on the main coffee industry by-product, namely coffee silverskin. The interest in (poly)phenols is then extended to their metabolism within the human body, which is paramount in the framework of their putative health-promoting effects. Like all nutrients and non-nutrients, once introduced through the diet, (poly)phenols are subjected to an intense metabolism, able to convert the native compounds into similar conjugated forms, as well as into smaller and deeply modified molecules, which in turn can be further conjugated. Although great strides have been made in the last decades, some steps of (poly)phenol metabolism remain unclear and are interesting points of research. In the second part of this work the research was focused on a specific bran fraction, namely aleurone, added to feed pellets and to bread in order to investigate the absorption, metabolism and bioavailability of its phenolic compounds in animals and humans, with a preliminary in vitro step to determine their potential bioaccessibility. This part outlines the best approaches to assess the bioavailability of specific phenolics in several experimental models. The physiological mechanisms explaining the epidemiological and observational data on phenolics and health are still far from being unraveled or understood in full. Many published results on phenolic actions at the cell level are biased by the fact that aglycones or native compounds have been used, not considering the previously mentioned chemical and biological transformations. In the last part of this thesis work, a new approach to the investigation of (poly)phenol bioactivity is proposed, consisting of a medium- to long-term treatment of animals with a (poly)phenol source, in this specific case resveratrol, the detection of its metabolites to determine their possible tissue-specific accumulation, and the evaluation of specific parameters and/or mechanisms of action at the target tissue level.
To conclude, this PhD work has contributed to advancing the field, as novel sources of (poly)phenols have been described, the bioavailability of (poly)phenols contained in a novel specific bran fraction used as an ingredient has been evaluated in animals and in humans, and, finally, the tissue accumulation of specific (poly)phenol metabolites and the evaluation of specific parameters and/or mechanisms of action have been carried out. For these reasons, this PhD work should be considered an example of an adequate approach to the investigation of (poly)phenols and of their bioactivity, unavoidable in the process of unequivocally defining their effects on human health.
Abstract:
The thesis presents a two-dimensional Risk Assessment Method (RAM) where the assessment of risk to the groundwater resources incorporates both the quantification of the probability of the occurrence of contaminant source terms and the assessment of the resultant impacts. The approach emphasizes the need for a greater dependency on the potential pollution sources, rather than the traditional approach where assessment is based mainly on the intrinsic geo-hydrologic parameters. The risk is calculated using Monte Carlo simulation methods whereby random pollution events were generated to the same distribution as historically occurring events or an a priori potential probability distribution. Integrated mathematical models then simulate contaminant concentrations at the predefined monitoring points within the aquifer. The spatial and temporal distributions of the concentrations were calculated from repeated realisations, and the number of times a user-defined concentration magnitude was exceeded is quantified as a risk. The method was set up by integrating MODFLOW-2000, MT3DMS and a FORTRAN-coded risk model, and automated using a DOS batch processing file. GIS software was employed in producing the input files and for the presentation of the results. The functionalities of the method, as well as its sensitivities to the model grid sizes, contaminant loading rates, length of stress periods, and the historical frequencies of occurrence of pollution events, were evaluated using hypothetical scenarios and a case study. Chloride-related pollution sources were compiled and used as indicative potential contaminant sources for the case study. At any active model cell, if a randomly generated number is less than the probability of pollution occurrence, then the risk model will generate a synthetic contaminant source term as an input into the transport model. The results of the applications of the method are presented in the form of tables, graphs and spatial maps. Varying the model grid sizes indicates no significant effects on the simulated groundwater head. The simulated frequency of daily occurrence of pollution incidents is also independent of the model dimensions. However, the simulated total contaminant mass generated within the aquifer, and the associated volumetric numerical error, appear to increase with increasing grid sizes. Also, the migration of the contaminant plume advances faster with coarse grid sizes than with finer grid sizes. The number of daily contaminant source terms generated, and consequently the total mass of contaminant within the aquifer, increases in a nonlinear proportion to the increasing frequency of occurrence of pollution events. The risk of pollution from a number of sources all occurring by chance together was evaluated, and quantitatively presented as risk maps. This capability to combine the risk to a groundwater feature from numerous potential sources of pollution proved to be a great asset of the method, and a large benefit over contemporary risk and vulnerability methods.
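A stripped-down sketch of the risk quantification loop described above: random pollution events are generated at a source cell according to a historical daily occurrence probability, a placeholder transport response gives the resulting end-of-period concentration at a monitoring point, and risk is reported as the fraction of realisations exceeding a user-defined threshold. The real method couples this loop to MODFLOW-2000/MT3DMS rather than to the toy exponential response used here, and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

P_EVENT_PER_DAY = 0.02      # historical daily probability of a pollution event at the source cell
DAYS = 365                  # length of one simulated stress period (one realisation)
THRESHOLD = 250.0           # user-defined chloride concentration limit at the monitoring point (mg/L)
REALISATIONS = 5_000

def concentration_at_well(event_days):
    """Placeholder for the transport model: each event adds a decaying contribution
    to the end-of-period concentration at the monitoring point (illustrative only)."""
    ages = DAYS - event_days
    return 40.0 * np.exp(-ages / 120.0).sum()

exceedances = 0
for _ in range(REALISATIONS):
    # Generate synthetic source terms: days on which a random number falls below the event probability.
    event_days = np.nonzero(rng.random(DAYS) < P_EVENT_PER_DAY)[0]
    if concentration_at_well(event_days) > THRESHOLD:
        exceedances += 1

print(f"risk of exceeding {THRESHOLD} mg/L: {exceedances / REALISATIONS:.3f}")
```

Repeating this count at every monitoring point and mapping the exceedance fractions is what produces the risk maps referred to in the abstract.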