8 results for Constant Relative Risk Aversion

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

Objective: To investigate the association between the four traditional coronary heart disease (CHD) risk factors (hypertension, smoking, hypercholesterolemia, and diabetes) and outcomes of first acute coronary syndrome (ACS). Methods: Data were drawn from the ISACS Archives. The study population consisted of 70,953 patients with a first ACS but without prior CHD. Primary outcomes were the patients' age at hospital presentation and 30-day all-cause mortality. Risk ratios for mortality among subgroups were calculated using a balancing strategy based on inverse probability weighting. Trends were evaluated with Pearson's correlation coefficient (r). Results: For fatal ACS (n=6097), exposure to at least one traditional CHD risk factor ranged from 77.6% in women to 74.5% in men. The presence of all four CHD risk factors lowered the age at the time of the ACS event and death by nearly half a decade compared with the absence of any traditional risk factor, in both women (from 67.1±12.0 to 61.9±10.3 years; r=-0.089, P<0.001) and men (from 62.8±12.2 to 58.9±9.9 years; r=-0.096, P<0.001). By contrast, there was an inverse association between the number of traditional CHD risk factors and 30-day mortality. Mortality rates in women ranged from 7.7% with four traditional CHD risk factors to 16.3% with no traditional risk factors (r=0.073, P<0.001); the corresponding rates in men were 4.8% and 11.5% (r=0.078, P<0.001). The risk ratios for individuals with at least one CHD risk factor vs. those with no traditional risk factors were 0.72 (95% CI: 0.65-0.79) in women and 0.64 (95% CI: 0.59-0.70) in men. This association was consistent across patient subgroups managed with guideline-recommended therapeutic options. Conclusions: The vast majority of patients who die of ACS have been exposed to traditional CHD risk factors. Patients with CHD risk factors die much earlier in life, but they have a lower relative risk of 30-day mortality than those with no traditional CHD risk factors, even in the context of equitable evidence-based treatment after hospital admission.
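The risk ratios above rest on an inverse-probability-weighting balance between exposure groups. As a minimal sketch of how such a weighted risk ratio can be computed (the column names, the logistic propensity model, and the data layout are illustrative assumptions, not the ISACS pipeline):

```python
# Hedged sketch of an inverse-probability-weighted risk ratio, assuming a
# pandas DataFrame with binary exposure ("any_risk_factor"), binary outcome
# ("death_30d"), and a list of confounder columns. All names are illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def ipw_risk_ratio(df: pd.DataFrame, exposure: str, outcome: str,
                   confounders: list[str]) -> float:
    # 1. Propensity score: P(exposure = 1 | confounders).
    ps_model = LogisticRegression(max_iter=1000)
    ps_model.fit(df[confounders], df[exposure])
    ps = ps_model.predict_proba(df[confounders])[:, 1]

    # 2. Inverse-probability weights balance the two exposure groups.
    w = np.where(df[exposure] == 1, 1.0 / ps, 1.0 / (1.0 - ps))

    # 3. Weighted outcome risk in each group, then their ratio.
    exposed = (df[exposure] == 1).to_numpy()
    risk1 = np.average(df.loc[exposed, outcome], weights=w[exposed])
    risk0 = np.average(df.loc[~exposed, outcome], weights=w[~exposed])
    return risk1 / risk0
```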

Relevance: 100.00%

Abstract:

The relation between intercepted light and orchard productivity has generally been considered linear, although this dependence seems to be governed more by the planting system than by light intensity. At the whole-plant level, an increase in irradiance does not always translate into higher productivity. One reason is the plant's intrinsic inefficiency in using energy: in full light, generally only 5-10% of the total incoming energy is allocated to net photosynthesis. Preserving or improving this efficiency is therefore pivotal for scientists and fruit growers. Even though a conspicuous amount of energy is reflected or transmitted, plants cannot avoid absorbing photons in excess. Over-excitation of chlorophyll promotes the production of reactive species, increasing the risk of photoinhibition. The dangerous consequences of photoinhibition have forced plants to evolve a complex, multilevel machinery able to dissipate excess energy by quenching it as heat (non-photochemical quenching, NPQ), by moving electrons (the water-water cycle, cyclic transport around PSI, the glutathione-ascorbate cycle, and photorespiration), and by scavenging the reactive species generated. The price plants pay for this equipment is the consumption of CO2 and reducing power, with a consequent decrease in photosynthetic efficiency, both because some photons are not used for carboxylation and because an effective loss of CO2 and reducing power occurs. Net photosynthesis increases with light up to the saturation point; additional PPFD does not improve carboxylation but raises the contribution of the alternative energy-dissipation pathways, together with ROS production and photoinhibition risk. Even this wide photo-protective apparatus, however, is not always able to cope with the excess incoming energy, and photodamage occurs. Any event that increases the photon pressure and/or decreases the efficiency of these photo-protective mechanisms (e.g., thermal stress, water or nutritional deficiency) can intensify photoinhibition. In nature, only a small fraction of photosystems is usually found damaged, thanks to an effective, efficient, but energy-consuming recovery system. Since damaged PSII is quickly repaired at an energetic cost, it would be interesting to investigate how much PSII recovery costs in terms of plant productivity. This PhD dissertation aims to improve our knowledge of the several strategies plants deploy to manage incoming energy, and of the implications of excess light for photodamage in peach. The thesis is organized in three scientific units. In the first section, a new rapid, non-intrusive, whole-tissue, and universal technique for determining functional PSII was implemented and validated on different kinds of plants: C3 and C4 species, woody and herbaceous plants, wild type and a chlorophyll b-less mutant, monocots and dicots. In the second unit, using a singular experimental orchard named the "Asymmetric orchard", the relation between light environment and photosynthetic performance, water use, and photoinhibition was investigated in peach at the whole-plant level; furthermore, the effect of variations in photon pressure on energy management was considered at the single-leaf level. In the third section, the quenching-analysis method suggested by Kornyeyev and Hendrickson (2007) was validated on peach and then applied in the field, where the influence of a moderate reduction in light and water on peach photosynthetic performance, water requirements, energy management, and photoinhibition was studied.
Using solar energy as the fuel for life is intrinsically risky for plants because of the constant, high risk of photodamage. This dissertation tries to highlight the complex relation between plants, and peach in particular, and light, analysing the principal strategies plants have developed to manage incoming light so as to derive the greatest possible benefit while minimizing the risks. First, the new method proposed for determining functional PSII, based on P700 redox kinetics, appears to be a valid, non-intrusive, universal, and field-applicable technique, also because it probes the whole leaf tissue in depth rather than just the first leaf layers, as fluorescence does. The fluorescence parameter Fv/Fm gives a good estimate of functional PSII, but only when data obtained from the adaxial and abaxial leaf surfaces are averaged. In addition, the energy-quenching analysis proposed by Kornyeyev and Hendrickson (2007), combined with the photosynthesis model proposed by von Caemmerer (2000), is a powerful tool for analysing and studying, even in the field, the relation between the plant and environmental factors such as water, temperature and, above all, light. The "Asymmetric" training system is a good way to study the relations among light energy, photosynthetic performance, and water use in the field. At the whole-plant level, net carboxylation increases with PPFD up to a saturation point. Excess light, rather than improving photosynthesis, may intensify water and thermal stress, leading to stomatal limitation. Furthermore, too much light does not promote any improvement in net carboxylation but causes PSII damage: in the most light-exposed plants, about 50-60% of total PSII is inactivated. At the single-leaf level, net carboxylation increases up to the saturation point (1000-1200 μmol m-2 s-1), and excess light is dissipated by non-photochemical quenching and by non-net-carboxylative electron transports. The latter follow a pattern quite similar to the Pn/PPFD curve, reaching saturation at almost the same photon flux density. At middle-to-low irradiance, NPQ seems to be limited by lumen pH, because the incoming photon pressure is not sufficient to generate the optimal lumen pH for full activation of violaxanthin de-epoxidase (VDE). Peach leaves try to cope with excess light by increasing the non-net-carboxylative transports. As PPFD rises, the xanthophyll cycle is activated more and more, and the rate of non-net-carboxylative transports is reduced. Some of these alternative transports, such as the water-water cycle, cyclic transport around PSI, and the glutathione-ascorbate cycle, are able to generate additional H+ in the lumen, supporting VDE activation when light would otherwise be limiting. Moreover, the alternative transports seem to act as an important dissipative route when high temperature and sub-optimal conductance heighten the risk of photoinhibition. In peach, a moderate reduction in water and light does not decrease net carboxylation; rather, by diminishing the incoming light and the evapo-transpirative demand of the environment, it lowers stomatal conductance and improves water-use efficiency. Therefore, by lowering light intensity to levels that are still non-limiting, water could be saved without compromising net photosynthesis. The quenching analysis is able to partition the absorbed energy among the several utilization, photoprotection, and photo-oxidation pathways. When recovery is permitted, only a few PSII remain unrepaired, although more net PSII damage is recorded in plants kept in full light.
In this experiment too, under over-saturating light the main dissipation pathway is non-photochemical quenching; at middle-to-low irradiance it appears pH-limited, and other transports, such as photorespiration and the alternative transports, are used to support photoprotection and to help create the optimal trans-thylakoid ΔpH for violaxanthin de-epoxidase. These alternative pathways become the main quenching mechanisms in very low-light environments. Another aspect highlighted by this study is the role of NPQ as a dissipative pathway when conductance becomes severely limiting. The observation that only a small amount of damaged PSII is seen in nature indicates the presence of an effective and efficient recovery mechanism that masks the real photodamage occurring during the day. At the single-leaf level, when repair is not allowed, leaves in full light are twofold more photoinhibited than shaded ones. Therefore, light in excess of the photosynthetic optimum does not promote net carboxylation but increases water loss and PSII damage. The greater the photoinhibition, the more photosystems must be repaired, and consequently the more energy and dry matter must be allocated to this essential activity. Since above the saturation point net photosynthesis is constant while photoinhibition increases, it would be interesting to investigate what photodamage costs in terms of tree productivity. Another aspect of pivotal importance, to be widened further, is the combined influence of light and other environmental parameters, such as water status, temperature and nutrition, on the management of light, water and photosynthates in peach.
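The Pn/PPFD saturation behaviour described above (saturation near 1000-1200 μmol m-2 s-1 at the single-leaf level) is commonly modelled with a non-rectangular hyperbola. A minimal sketch under that standard model follows; the model choice and the parameter values are illustrative assumptions, not the thesis's fitted curve:

```python
# Hedged sketch of a non-rectangular-hyperbola light-response curve Pn(PPFD).
# phi = apparent quantum yield, pmax = light-saturated gross rate
# (umol CO2 m-2 s-1), theta = curvature, rd = dark respiration.
import numpy as np

def net_photosynthesis(ppfd, phi=0.05, pmax=20.0, theta=0.8, rd=1.5):
    # Smaller root of theta*P^2 - (phi*I + pmax)*P + phi*I*pmax = 0,
    # minus dark respiration.
    a = phi * ppfd + pmax
    gross = (a - np.sqrt(a**2 - 4.0 * theta * phi * ppfd * pmax)) / (2.0 * theta)
    return gross - rd

ppfd = np.linspace(0.0, 2000.0, 9)
print(np.round(net_photosynthesis(ppfd), 2))  # flattens well below 2000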

Relevance: 100.00%

Abstract:

In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected over many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated through individual-level epidemiological studies, which avoid the biases carried by aggregated analyses. Starting from the collected disease counts, and from expected disease counts calculated by means of reference-population disease rates, an SMR is derived in each area as the MLE under a Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low, either because the population underlying the area is small or because the disease under study is rare. Disease-mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classical and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method focused on multiple-testing control, without abandoning the preliminary-study perspective that an analysis of SMR indicators is asked to serve. We implement control of the FDR, a quantity widely used to address multiple-comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based on knowledge of the p-values alone (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated through a traditional p-value have weak power in small areas, where the expected number of disease cases is small. Moreover, the tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is the proposal of a fully Bayesian hierarchical model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian disease-mapping models, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating a single test (i.e., a test in a single area) by means of all the observations in the map under study, rather than by the single observation alone. This improves test power in small areas and addresses more appropriately the spatial-correlation structure, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) in each area. An estimate of the expected FDR conditional on the data can be calculated for any set of b_i's corresponding to areas declared at high risk (where the null hypothesis is rejected) by averaging the b_i's themselves. This estimated FDR can be used to provide an easy decision rule for selecting high-risk areas, i.e., selecting as many areas as possible such that the estimated FDR does not exceed a prefixed value; we call these the estimated-FDR-based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model retains the interesting feature of providing an estimate of the relative-risk values, as in the Besag, York and Mollié model (1991). A simulation study was set up to evaluate the model's performance in terms of accuracy of FDR estimation, sensitivity and specificity of the decision rule, and goodness of the relative-risk estimates. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true, and the risk level in the latter areas. In summarizing the simulation results, we always consider FDR estimation over sets constituted by all b_i's below a threshold t. We show graphs of the estimated FDR and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation: by varying the threshold, the practitioner willing to apply the model can learn which FDR values can be estimated accurately (from the closeness between the estimated and the true FDR). By plotting the computed sensitivity and specificity (both known by simulation) against the estimated FDR, we can check the sensitivity and specificity of the corresponding decision rules. To investigate the degree of over-smoothing of the relative-risk estimates, we compare box-plots of such estimates in the high-risk areas (known by simulation), obtained both with our model and with the classic Besag, York and Mollié model. All these summary tools are worked out for all simulated scenarios (54 in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in the scenarios with small areas, low risk levels, and spatially correlated risks, which are our primary aim. In such scenarios we obtain good FDR estimates for all values less than or equal to 0.10. The sensitivity of the estimated-FDR-based decision rules is generally low, but their specificity is high; in these scenarios, selection rules based on an estimated FDR of 0.05 or 0.10 can be suggested. In cases where the number of true alternative hypotheses (true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and a decision rule based on an estimated FDR of 0.15 gains power while maintaining high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels, the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of a rule based on an estimated FDR of 0.05. In such scenarios, decision rules based on an estimated FDR of 0.05 or, even worse, 0.10 cannot be recommended, because the true FDR is actually much higher. As regards relative-risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative-risk values and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
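A minimal sketch of the estimated-FDR rule described above, assuming the posterior null probabilities b_i have already been obtained from MCMC (the array values below are illustrative):

```python
# Hedged sketch: estimate the expected FDR of a selection of high-risk areas
# by averaging the posterior null probabilities b_i over the rejected set.
import numpy as np

def estimated_fdr(b: np.ndarray, threshold: float):
    """b[i] = posterior probability that area i carries no excess risk
    (the null hypothesis). An area is declared high-risk when b[i] falls
    below the threshold; the expected FDR of that selection, conditional
    on the data, is the mean of the selected b[i]'s."""
    rejected = b < threshold
    if not rejected.any():
        return 0.0, rejected
    return float(b[rejected].mean()), rejected

b = np.array([0.01, 0.03, 0.20, 0.60, 0.90])   # illustrative posteriors
fdr_hat, flagged = estimated_fdr(b, threshold=0.25)
print(fdr_hat)   # 0.08: expected proportion of false discoveries
```

The decision rule then amounts to raising the threshold t as far as possible while the estimated FDR stays at or below the prefixed target (e.g. 0.05 or 0.10).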

Relevance: 100.00%

Abstract:

Kidney transplantation is the best treatment option for restoring excretory and endocrine kidney function in patients with end-stage renal disease. The success of the transplant is linked to the genetic compatibility between donor and recipient, and to progress in surgery and immunosuppressive therapy. Numerous studies have established the importance of innate immunity in transplantation tolerance; in particular, natural killer (NK) cells are a cell population involved in defense against infectious agents and tumor cells. NK cells express on their surface the Killer-cell Immunoglobulin-like Receptors (KIR), which, by recognizing and binding MHC class I antigens, prevent the killing of autologous cells. In the context of solid-organ transplantation, and of the kidney in particular, recent studies show some correlation between KIR/HLA incompatibility and transplant outcome, an interesting perspective especially with regard to the setting of immunosuppressive therapy. The purpose of this study was therefore to assess whether the incompatibility between the recipient's KIR receptors and the donor's HLA class I ligands could be a useful predictor for improving the survival of the transplanted kidney, and also for selecting patients who might benefit from a reduced immunosuppressive regimen. One hundred and thirteen patients who received a kidney transplant between 1999 and 2005 were enrolled. Genomic DNA was extracted for each of them and for their donors, and genotyping of HLA-A, -B, -C and of 14 KIR genes was carried out. Data analysis was conducted as two case-control studies: one aimed at assessing the outcome of acute rejection, the other at assessing long-term transplant outcome. The results showed that two genes, KIR2DS1 and KIR3DS1, are associated with the development of acute rejection (p = 0.02 and p = 0.05, respectively). The presence of the KIR2DS3 gene is associated with a better course of serum creatinine and glomerular filtration rate (MDRD) over time (4 and 5 years after transplantation, p < 0.05), while in the presence of its ligand the serum creatinine and MDRD trends seem to worsen in the long term. The analysis performed on the population, stratified by whether renal function deteriorated in the long term or not, showed that the absence of the KIR2DL1 gene is strongly associated with a 20% increase in the creatinine value at 5 years, with a relative risk of having a creatinine level above the 5-year median of 2.7 (95% CI: 1.7788-2.6631). Finally, a kidney negative for HLA-A3/A11, compared with a positive one, in patients carrying KIR3DL2, showed a relative risk of having a serum creatinine above the median at 5 years after transplantation of 0.6609 (95% CI: 0.4529-0.9643), suggesting a protective effect of the absence of this ligand.
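As a reference for the relative-risk figures quoted above, a minimal sketch of the standard relative-risk point estimate with a log-method 95% CI from a 2x2 table; the counts are illustrative, not the study's data:

```python
# Hedged sketch: relative risk from a 2x2 exposure/outcome table with a
# 95% confidence interval computed on the log scale.
import math

def relative_risk(a: int, b: int, c: int, d: int):
    """a/b = events/non-events among exposed (e.g. KIR gene absent),
    c/d = events/non-events among unexposed (gene present)."""
    rr = (a / (a + b)) / (c / (c + d))
    se_log = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

print(relative_risk(30, 20, 15, 48))  # illustrative counts, RR ~ 2.5
```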

Relevance: 100.00%

Abstract:

Drug-induced liver injury is commonly referred to by the English acronym DILI. Paracetamol is the most common cause of DILI, followed by antibiotics, NSAIDs, and antitubercular drugs. NSAIDs in particular are among the drug classes most widely used in therapy. Numerous case reports describe patients who developed fatal liver injury during NSAID treatment, and several of these drugs have been withdrawn from the market following severe adverse hepatic reactions. The most recent NSAID hepatotoxicity signal concerns nimesulide; in some European countries, such as Finland, Spain, and Ireland, nimesulide has been withdrawn from the market because of its association with a high frequency of hepatotoxicity. On the basis of the data available so far, the European Medicines Agency (EMA) has recently concluded that the benefits of the drug outweigh its risks; a possible increase in the hepatotoxicity risk associated with nimesulide nevertheless remains an open and much-debated question. Other drug classes that can cause acute liver injury, whose incidence is not always well defined, include antibiotics such as amoxicillin and macrolides, statins, and antidepressants. The objective of this study was to determine the relative risk of drug-induced liver injury for drugs with a prevalence of use of at least 6% in the Italian population. A case-control study was designed, based on interviews with patients admitted to wards of several Italian hospitals. Our study showed that drug-induced liver injury involves numerous pharmacological classes and that the reporting of such reactions is statistically significant for numerous active substances. Preliminary data showed a statistically significant odds ratio for nimesulide, NSAIDs, some antibiotics such as macrolides, and paracetamol.
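Since the study reports odds ratios from a case-control design, a minimal sketch of the standard odds-ratio estimate with a Woolf (log-method) 95% CI may help; the 2x2 counts are illustrative, not the study's data:

```python
# Hedged sketch: odds ratio from a case-control 2x2 table with a 95% CI
# computed on the log scale (Woolf method).
import math

def odds_ratio(a: int, b: int, c: int, d: int):
    """a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

print(odds_ratio(24, 76, 10, 90))  # OR ~ 2.8 for the illustrative counts
```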

Relevance: 30.00%

Abstract:

In Performance-Based Earthquake Engineering (PBEE), evaluating the seismic performance (or seismic risk) of a structure at a given site has gained major attention, especially in the past decade. One of the objectives of PBEE is to quantify the seismic reliability of a structure at a site under future random earthquakes. For that purpose, Probabilistic Seismic Demand Analysis (PSDA) is used as a tool to estimate the Mean Annual Frequency (MAF) of exceeding a specified value of a structural Engineering Demand Parameter (EDP). This dissertation focuses mainly on using the average of a number of spectral-acceleration ordinates over an interval of periods, Sa,avg(T1,...,Tn), as a scalar ground-motion Intensity Measure (IM) when assessing the seismic performance of inelastic structures. Since the interval of periods over which Sa,avg is computed reflects the greater or lesser influence of higher vibration modes on the inelastic response, it is appropriate to speak of improved IMs. The results obtained with these improved IMs are compared with conventional elastic scalar IMs (e.g., the pseudo-spectral acceleration Sa(T1) or the peak ground acceleration, PGA) and with an advanced inelastic scalar IM (the inelastic spectral displacement, Sdi). The advantages of the improved IMs are: (i) "computability" of the seismic hazard according to traditional Probabilistic Seismic Hazard Analysis (PSHA), because ground-motion prediction models are already available for Sa(Ti), and hence existing models can be employed to assess the hazard in terms of Sa,avg; and (ii) "efficiency", i.e., smaller variability of the structural response, which was minimized in order to identify the optimal period range over which to compute Sa,avg. More work is needed to assess the also-desirable "sufficiency" and "scaling robustness" properties, which are not addressed in this dissertation. For ordinary records (i.e., without pulse-like effects), however, the improved IMs are found to be more accurate than the elastic and inelastic IMs. For structural demands dominated by the first mode of vibration, the gain from using Sa,avg can be negligible relative to the conventional Sa(T1) and the advanced Sdi; for structural demands with significant higher-mode contribution, an improved scalar IM that incorporates higher modes should be used. In order to fully understand the influence of the IM on the seismic risk, a simplified closed-form expression for the probability of exceeding a limit-state capacity was chosen as a reliability measure under seismic excitation and implemented for Reinforced Concrete (RC) frame structures. This closed-form expression is particularly useful for the seismic assessment and design of structures, since it accounts for the uncertainty in the generic variables, structural "demand" and "capacity", as well as the uncertainty in the seismic excitation. The adopted framework employs nonlinear Incremental Dynamic Analysis (IDA) procedures to estimate the variability of the structural response (demand) under seismic excitation, conditioned on the IM. The seismic-risk estimate obtained from the simplified closed-form expression is affected by the choice of IM: the final seismic risk is not constant across IMs, although it remains of the same order of magnitude. Possible reasons concern the assumed nonlinear model or the insufficiency of the selected IM. Since it is impossible to state what the "real" probability of exceeding a limit state is by looking at the total risk alone, the only viable route is to optimize the desirable properties of the IM.
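Sa,avg(T1,...,Tn) is commonly defined in the literature as the geometric mean of the spectral ordinates over the chosen period range; a minimal sketch under that assumption (the response spectrum and the period bounds below are illustrative, not the dissertation's):

```python
# Hedged sketch: averaged spectral acceleration as the geometric mean of
# Sa over the periods falling in [t_lo, t_hi].
import numpy as np

def sa_avg(periods: np.ndarray, sa: np.ndarray, t_lo: float, t_hi: float) -> float:
    """Geometric mean of the spectral ordinates with t_lo <= T <= t_hi."""
    mask = (periods >= t_lo) & (periods <= t_hi)
    return float(np.exp(np.mean(np.log(sa[mask]))))

periods = np.linspace(0.1, 3.0, 30)     # s
sa = 1.2 * np.exp(-periods)             # stand-in response spectrum, g
# e.g. averaging from 0.2*T1 to 1.5*T1 with T1 = 1.0 s:
print(round(sa_avg(periods, sa, 0.2, 1.5), 3))
```

The choice of [t_lo, t_hi] is exactly the tuning knob the abstract refers to: stretching the upper bound brings higher-mode and period-elongation effects into the IM.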

Relevance: 30.00%

Abstract:

The research is organized in two sections. In the first, after a historical overview of suicide and a reading of the relevant Italian statistics, complemented by an analysis of the main sociological theories and of the main psychopathological and clinical-psychological aspects, the results of numerous scientific studies on the complementary topic of equivocal deaths are examined, with particular reference to the at-risk categories represented by the elderly, prisoners, airplane pilots, individuals engaging in autoerotic asphyxia or Russian roulette, those who provoke police forces into shooting ("suicide by cop"), and road suicides. The investigative and forensic aspects of suicides and equivocal deaths are then examined, with particular reference to the psychological autopsy technique, analyzing its origins and evolution, its field of application, and its methodological aspects. In the second section, the topic of suicides and equivocal deaths is explored further with the contribution of professionals from different disciplines with expertise in psychological autopsy and judicial investigations. Using the qualitative "Delphi" technique, they were presented with an initial draft of a psychological autopsy protocol, together with its application procedures, so that it could be revised and adapted to Italian operational needs on the basis of the experts' specific professional and multidisciplinary experience. The data collected led to the formulation of a psychological autopsy protocol based on general, specific, and concluding open-ended questions that can be posed, according to the prescribed procedures, to the people who were emotionally significant to the victim for whom this investigative tool is to be used.

Relevance: 30.00%

Abstract:

This thesis is the result of a project devoted to a crucial topic in finance: default risk, whose measurement and modelling have gained increasing relevance in recent years. We investigate the main issues related to the default phenomenon from both a methodological and an empirical perspective. The topics of default predictability and default correlation are treated with constant attention to the modelling solutions and with a critical review of the literature. From the methodological point of view, our analysis results in the proposal of a new class of models, called Poisson Autoregression with Exogenous Covariates (PARX). PARX models, which include both autoregressive and exogenous components, are able to capture the dynamics of default-count time series, characterized by persistence of shocks and slowly decaying autocorrelation. The application of different PARX models to the monthly default counts of US industrial firms over the period 1982-2011 provides empirical insight into default dynamics and supports the identification of the main default predictors at the aggregate level.
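As an illustration of the model class, a minimal sketch of a PARX-type recursion with one lag of each component; the exact specification, the nonnegative link for the covariate, and the parameter values are assumptions for illustration, not the thesis's estimated model:

```python
# Hedged sketch of a Poisson autoregression with an exogenous covariate:
#   y_t ~ Poisson(lam_t),
#   lam_t = omega + alpha*y_{t-1} + beta*lam_{t-1} + gamma*f(x_{t-1}),
# with f >= 0 so the intensity stays positive. The feedback term beta
# lets shocks persist, matching the slowly decaying autocorrelation.
import numpy as np

rng = np.random.default_rng(0)

def simulate_parx(n: int, x: np.ndarray,
                  omega=0.5, alpha=0.4, beta=0.3, gamma=0.2) -> np.ndarray:
    y = np.zeros(n, dtype=int)
    lam = np.zeros(n)
    lam[0] = omega
    y[0] = rng.poisson(lam[0])
    for t in range(1, n):
        lam[t] = omega + alpha * y[t-1] + beta * lam[t-1] + gamma * np.exp(x[t-1])
        y[t] = rng.poisson(lam[t])
    return y

x = rng.standard_normal(360)        # illustrative macro covariate
counts = simulate_parx(360, x)      # e.g. 30 years of monthly default counts
print(counts[:12])
```

With alpha + beta < 1 the count process is stable; the exogenous term shifts the intensity with the (transformed) covariate, which is how aggregate default predictors enter the model.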