902 results for Costs-consequences analysis


Relevance:

30.00%

Publisher:

Abstract:

This paper examines the notions of illusions and beliefs, discussing some advantages offered by the study of these phenomena based on the concepts of superstitious behavior, superstition and superstitious rules. Among these advantages, the study highlights the possibility of researching these relationships at different levels of analysis, not only the individual but also the cultural level. Focusing on the cultural level, the paper presents Cultural Materialism as an anthropological proposal for considering these phenomena on the basis of adaptive principles. It also discusses the experimental analysis of cultural practices and points out how it can help to understand how people in groups behave as if they were effectively controlling the surrounding environment (when, sometimes, in fact, they are not). The paper offers an integrative proposal that facilitates behavior analysts' dialogue with social psychologists and suggests some routes for the cultural analysis of illusions and beliefs.

Relevance:

30.00%

Publisher:

Abstract:

Background: Microbiological studies frequently involve exchanges of strains between laboratories and/or stock centers. The integrity of exchanged strains is vital for archival reasons and to ensure reproducible experimental results. For at least 50 years, one of the most common means of shipping bacteria has been to inoculate bacterial samples into agar stabs. Long-term cultures in stabs exhibit genetic instabilities, and one common instability is in rpoS. The sigma factor RpoS accumulates in response to several stresses and in stationary phase. One consequence of RpoS accumulation is competition with the vegetative sigma factor σ70. Under nutrient-limiting conditions, mutations in rpoS or in genes that regulate its expression tend to accumulate. Here, we investigate whether short-term storage and mailing of cultures in stabs results in genetic heterogeneity.

Results: We found that samples of the E. coli K-12 strain MC4100TF exchanged on three separate occasions by mail between our laboratories became heterogeneous. Reconstruction studies indicated that LB stabs exhibited mutations previously found in GASP studies in stationary-phase LB broth. At least 40% of reconstructed stocks, and an equivalent proportion of the actually mailed stocks, contained these mutations. Mutants with low RpoS levels emerged within 7 days of incubation in the stabs. Sequence analysis of ten of these segregants revealed that each harboured one of three different rpoS mutations. These mutants displayed the classical phenotypes of bacteria lacking rpoS. The genetic stability of MC4100TF was also tested on filter disks embedded in glycerol; under these conditions, GASP mutants emerged only after a 3-week period. We also confirm that the intrinsically high RpoS level in MC4100TF is mainly due to the presence of an IS1 insertion in rssB.

Conclusions: Given that many E. coli strains contain high RpoS levels similar to MC4100TF, the integrity of such strains during transfers and storage is questionable. Variations in important collections may be due to storage- and transfer-related issues. These results raise important questions about the integrity of bacterial archives and transferred strains, help explain inter-laboratory variation such as that observed in the ECOR collection, and indicate a need for the development of better methods of strain transfer.

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: To evaluate the efficacy of radiotherapy (RT) with a total dose of 20 Gy (RT 20 Gy) in the treatment of Graves' ophthalmopathy. METHODS: A systematic review and meta-analysis of randomized controlled trials was performed comparing RT 20 Gy, with or without glucocorticoids, to clinical treatments for Graves' ophthalmopathy. The MEDLINE, EMBASE and Cochrane Library databases and recent relevant journals were searched. Relevant reports were reviewed by two reviewers. Response to radiotherapy was defined as clinical success according to each trial. We also evaluated quality of life and whether RT produces fewer side effects than other treatments. RESULTS: A total of 8 randomized controlled trials (439 patients) were identified. In the subgroup analysis, the overall response rates were better for: RT 20 Gy plus glucocorticoids vs glucocorticoids alone, OR=17.5 (95% CI 1.85-250, p=0.04); RT 20 Gy vs sham RT, OR=3.15 (95% CI 1.59-6.23, p=0.003); and RT 20 Gy plus intravenous glucocorticoids vs RT 20 Gy plus oral glucocorticoids, OR=4.15 (95% CI 1.34-12.87, p=0.01). There were no differences between RT 20 Gy and other fractionations, or between RT 20 Gy and glucocorticoids alone. RT 20 Gy, with or without glucocorticoids, showed an improvement in diplopia grade, visual acuity, optic neuropathy, lid width, proptosis and ocular motility. No difference was seen for costs, intraocular pressure or quality of life. CONCLUSION: Our data show that RT 20 Gy should be offered as a valid therapeutic option to patients with moderate to severe ophthalmopathy. The effectiveness of orbital radiotherapy can be increased by synergistic interaction with glucocorticoids. Moreover, RT 20 Gy improves many ocular symptoms, with the exception of intraocular pressure, without any difference in quality of life or costs.
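As a hedged illustration of the arithmetic behind the reported effect sizes, the sketch below computes an odds ratio and its Wald 95% confidence interval from a 2x2 response table; the counts are invented for illustration, not taken from the trials above.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and approximate 95% CI for a 2x2 table:
    a = responders in treatment, b = non-responders in treatment,
    c = responders in control,   d = non-responders in control."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: 30/40 responders under RT+GC vs 15/40 under GC alone
or_, lo, hi = odds_ratio_ci(30, 10, 15, 25)
```

A pooled meta-analytic OR would additionally weight per-trial log-ORs (e.g. inverse-variance weighting), which is omitted here.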

Relevance:

30.00%

Publisher:

Abstract:

Studies involving cDNA amplified fragment length polymorphism (cDNA-AFLP) have often used polyacrylamide gels with radiolabeled primers to establish the best primer combinations and to analyze and recover transcript-derived fragments. The use of an automatic sequencer to establish the best primer combinations is convenient because it saves time, reduces costs and the risks of contamination with radioactive material and acrylamide, and allows objective band-matching and more precise evaluation of transcript-derived fragment intensities. This study aimed to examine the gene expression of commercial cultivars of P. guajava subjected to water stress and mechanical injury, combining analyses by automatic sequencer with fluorescent kits for polyacrylamide gel electrophoresis. First, 64 combinations of EcoRI and MseI primers were tested. The ten combinations with the highest number of polymorphic fragments were then selected for transcript-derived fragment recovery and cluster analysis, involving 45 saplings of P. guajava. Two groups were obtained, one composed of the control saplings and another formed by the saplings undergoing stress, with no clear distinction between stress treatments. The results revealed the convenience of using a combination of automatic sequencer and fluorescent kits for polyacrylamide gel electrophoresis to examine gene expression profiles. The Unweighted Pair Group Method with Arithmetic Mean (UPGMA) analysis using Euclidean distances points to a similar induced response mechanism in P. guajava undergoing water stress and mechanical injury.
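A minimal sketch of the average-linkage (UPGMA-style) grouping on Euclidean distances described above; the band-intensity profiles are invented, and a real analysis would use dedicated clustering software rather than this toy implementation.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def upgma_clusters(points, n_clusters):
    # Start with singleton clusters, then repeatedly merge the pair
    # with the smallest average inter-cluster distance (average linkage)
    # until n_clusters remain.
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = sum(euclidean(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# hypothetical 2-band intensity profiles: the first three resemble
# control saplings, the last three resemble stressed saplings
profiles = [(0.10, 0.20), (0.20, 0.10), (0.15, 0.15),
            (1.00, 1.10), (1.10, 0.90), (0.95, 1.05)]
groups = upgma_clusters(profiles, 2)
```

Cutting the hierarchy at two clusters mirrors the control-versus-stressed split reported in the abstract.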

Relevance:

30.00%

Publisher:

Abstract:

Master's degree programme in Sustainable Management of Fishery Resources (Máster en Gestión Sostenible de Recursos Pesqueros).

Relevance:

30.00%

Publisher:

Abstract:

In the present thesis a thorough multiwavelength analysis of a number of galaxy clusters known to be experiencing a merger event is presented. The bulk of the thesis consists of the analysis of deep radio observations of six merging clusters, which host extended radio emission on the cluster scale. A combined optical and X-ray analysis is performed in order to obtain a detailed and comprehensive picture of the cluster dynamics and possibly derive hints about the properties of the ongoing merger, such as the mass ratio, geometry and time scale involved. The combination of the high-quality radio, optical and X-ray data allows us to investigate the implications of the ongoing merger for the cluster radio properties, focusing on the phenomenon of cluster-scale diffuse radio sources, known as radio halos and relics. A total of six merging clusters was selected for the present study: A3562, A697, A209, A521, RXCJ 1314.4-2515 and RXCJ 2003.5-2323. All of them were known, or suspected, to possess extended radio emission on the cluster scale, in the form of a radio halo and/or a relic. High-sensitivity radio observations were carried out for all clusters using the Giant Metrewave Radio Telescope (GMRT) at low frequency (i.e. ≤ 610 MHz), in order to test for the presence of a diffuse radio source and/or analyse in detail the properties of the hosted extended radio emission. For three clusters, the GMRT information was combined with higher-frequency data from Very Large Array (VLA) observations. A re-analysis of the optical and X-ray data available in the public archives was carried out for all sources. Proprietary deep XMM-Newton and Chandra observations were used to investigate the merger dynamics in A3562. Thanks to our multiwavelength analysis, we were able to confirm the existence of a radio halo and/or a relic in all clusters, and to connect their properties and origin to the reconstructed merging scenario for most of the investigated cases.

• The existence of a small, low-power radio halo in A3562 was successfully explained in the theoretical framework of the particle re-acceleration model for the origin of radio halos, which invokes the re-acceleration of pre-existing relativistic electrons in the intracluster medium by merger-driven turbulence.

• A giant radio halo was found in the massive galaxy cluster A209, which has likely undergone a past major merger and is currently experiencing a new merging process in a direction roughly orthogonal to the old merger axis. A giant radio halo was also detected in A697, whose optical and X-ray properties may be suggestive of a strong merger event along the line of sight. Given the cluster mass and the kind of merger, the existence of a giant radio halo in both clusters is expected in the framework of the re-acceleration scenario.

• A radio relic was detected at the outskirts of A521, a highly dynamically disturbed cluster which is accreting a number of small mass concentrations. A possible explanation for its origin requires the presence of a merger-driven shock front at the location of the source. The spectral properties of the relic may support this interpretation and require a Mach number M ≲ 3 for the shock.

• The galaxy cluster RXCJ 1314.4-2515 is exceptional and unique in hosting two peripheral relic sources, extending on the Mpc scale, and a central small-size radio halo. The existence of these sources requires an ongoing energetic merger. Our combined optical and X-ray investigation suggests that a strong merging process between two or more massive subclumps may be ongoing in this cluster. Thanks to forthcoming optical and X-ray observations, we will reconstruct the merger dynamics in detail and derive its energetics, to be related to the energy necessary for the particle re-acceleration in this cluster.

• Finally, RXCJ 2003.5-2323 was found to possess a giant radio halo. This source is among the largest, most powerful and most distant (z=0.317) halos imaged so far. Unlike other radio halos, it shows a very peculiar morphology with bright clumps and filaments of emission, whose origin might be related to the relatively high redshift of the hosting cluster. Although very little optical and X-ray information is available about the cluster dynamical state, the results of our optical analysis suggest the presence of two massive substructures which may be interacting with the cluster. Forthcoming observations in the optical and X-ray bands will allow us to confirm the expected high merging activity in this cluster.

Throughout the present thesis a cosmology with H0 = 70 km s−1 Mpc−1, Ωm = 0.3 and ΩΛ = 0.7 is assumed.
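As a small numeric aside, the quoted cosmology (H0 = 70 km s−1 Mpc−1, Ωm = 0.3, ΩΛ = 0.7) fixes the luminosity distance of a z = 0.317 cluster such as RXCJ 2003.5-2323. A sketch of that integral, using the flat-universe relation D_L = (1+z) (c/H0) ∫ dz'/E(z') with simple trapezoidal quadrature, is:

```python
import math

C_KM_S = 299792.458  # speed of light, km/s

def luminosity_distance_mpc(z, h0=70.0, om=0.3, ol=0.7, steps=10000):
    """Luminosity distance in a flat LCDM cosmology, in Mpc."""
    def inv_e(zz):
        # 1/E(z) with E(z) = sqrt(Om (1+z)^3 + OL)
        return 1.0 / math.sqrt(om * (1 + zz) ** 3 + ol)
    dz = z / steps
    integral = 0.5 * (inv_e(0.0) + inv_e(z))
    for i in range(1, steps):
        integral += inv_e(i * dz)
    integral *= dz
    dc = (C_KM_S / h0) * integral   # comoving distance, Mpc
    return (1 + z) * dc             # luminosity distance (flat universe)

dl = luminosity_distance_mpc(0.317)
```

This yields roughly 1.7 Gpc, which is what makes the halo one of the most distant imaged at the time of the thesis.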

Relevance:

30.00%

Publisher:

Abstract:

The relation between intercepted light and orchard productivity has been considered linear, although this dependence seems to depend more on the planting system than on light intensity. At the whole-plant level, an increase in irradiance does not always improve productivity. One of the reasons can be the plant's intrinsic inefficiency in using energy: generally, in full light only 5-10% of the total incoming energy is allocated to net photosynthesis. Preserving or improving this efficiency therefore becomes pivotal for scientists and fruit growers. Even though a conspicuous amount of energy is reflected or transmitted, plants cannot avoid absorbing photons in excess. Chlorophyll over-excitation promotes the production of reactive species, increasing the risk of photoinhibition. The dangerous consequences of photoinhibition forced plants to evolve a complex, multilevel machinery able to dissipate the energy excess as heat (non-photochemical quenching), to move electrons (the water-water cycle, cyclic transport around PSI, the glutathione-ascorbate cycle and photorespiration) and to scavenge the reactive species generated. The price plants must pay for this equipment is the use of CO2 and reducing power, with a consequent decrease in photosynthetic efficiency, both because some photons are not used for carboxylation and because an effective loss of CO2 and reducing power occurs. Net photosynthesis increases with light until the saturation point; additional PPFD does not improve carboxylation but raises the contribution of the alternative pathways to energy dissipation, and also ROS production and photoinhibition risks. The wide photo-protective apparatus is not always able to cope with the excessive incoming energy, and photodamage then occurs. Any event that increases the photon pressure and/or decreases the efficiency of the described photo-protective mechanisms (e.g. thermal stress, water or nutritional deficiency) can exacerbate photoinhibition.

In nature only a small fraction of damaged photosystems is usually found, because of the effective, efficient and energy-consuming recovery system. Since damaged PSII is quickly repaired at an energetic expense, it would be interesting to investigate how much PSII recovery costs in terms of plant productivity. This PhD dissertation aims to improve our knowledge of the several strategies used to manage the incoming energy, and of the implications of light excess for photodamage, in peach. The thesis is organized in three scientific units. In the first section, a new, rapid, non-intrusive, whole-tissue and universal technique for the determination of functional PSII was implemented and validated on different kinds of plants: C3 and C4 species, woody and herbaceous plants, wild type and a chlorophyll b-less mutant, and monocot and dicot plants. In the second unit, using a "singular" experimental orchard named the "Asymmetric orchard", the relation between light environment and photosynthetic performance, water use and photoinhibition was investigated in peach at the whole-plant level; furthermore, the effect of photon pressure variation on energy management was considered on single leaves. In the third section, the quenching-analysis method suggested by Kornyeyev and Hendrickson (2007) was validated on peach. Afterwards, it was applied in the field, where the influence of a moderate reduction in light and water on peach photosynthetic performance, water requirements, energy management and photoinhibition was studied. Using solar energy as the fuel for life is intrinsically risky for the plant, given the constantly high risk of photodamage. This dissertation tries to highlight the complex relation existing between plants, in particular peach, and light, analysing the principal strategies plants have developed to manage the incoming light so as to derive the maximum possible benefit while minimizing the risks.

In the first instance, the new method proposed for the determination of functional PSII, based on P700 redox kinetics, seems to be a valid, non-intrusive, universal and field-applicable technique, also because it measures the whole leaf tissue in depth, rather than only the first leaf layers as fluorescence does. The fluorescence parameter Fv/Fm gives a good estimate of functional PSII, but only when data obtained from the adaxial and abaxial leaf surfaces are averaged. In addition to this method, the energy-quenching analysis proposed by Kornyeyev and Hendrickson (2007), combined with the photosynthesis model proposed by von Caemmerer (2000), is a powerful tool to analyse and study, even in the field, the relation between the plant and environmental factors such as water and temperature, but first of all light. The "Asymmetric" training system is a good way to study the relations between light energy, photosynthetic performance and water use in the field. At the whole-plant level, net carboxylation increases with PPFD up to a saturation point. Light in excess, rather than improving photosynthesis, may exacerbate water and thermal stress, leading to stomatal limitation. Furthermore, too much light does not promote an improvement in net carboxylation but PSII damage: in the most light-exposed plants, about 50-60% of the total PSII is inactivated. At the single-leaf level, net carboxylation increases up to the saturation point (1000-1200 μmol m-2 s-1) and the light excess is dissipated by non-photochemical quenching and by non-net-carboxylative transports. The latter follow a pattern quite similar to the Pn/PPFD curve, reaching saturation at almost the same photon flux density. At middle-low irradiance, NPQ seems to be limited by lumen pH, because the incoming photon pressure is not enough to generate the optimal lumen pH for the full activation of violaxanthin de-epoxidase (VDE). Peach leaves try to cope with the light excess by increasing the non-net-carboxylative transports.

As PPFD rises, the xanthophyll cycle is increasingly activated and the rate of non-net-carboxylative transports is reduced. Some of these alternative transports, such as the water-water cycle, the cyclic transport around PSI and the glutathione-ascorbate cycle, are able to generate additional H+ in the lumen in order to support VDE activation when light would otherwise be limiting. Moreover, the alternative transports seem to act as an important dissipative route when high temperature and sub-optimal conductance increase the risk of photoinhibition. In peach, a moderate reduction in water and light does not cause a decrease in net carboxylation but, by diminishing the incoming light and the environmental evapo-transpirative demand, it lowers stomatal conductance, improving water-use efficiency. Therefore, by lowering light intensity to levels that are still non-limiting, water could be saved without compromising net photosynthesis. The quenching analysis is able to partition the absorbed energy among the several utilization, photoprotection and photo-oxidation pathways. When recovery is permitted, only a few PSII remain unrepaired, although more net PSII damage is recorded in plants placed in full light. In this experiment as well, under over-saturating light the main dissipation pathway is non-photochemical quenching; at middle-low irradiance it seems to be pH-limited, and other routes, such as photorespiration and the alternative transports, are used to support photoprotection and to contribute to creating the optimal trans-thylakoidal ΔpH for violaxanthin de-epoxidase. These alternative pathways become the main quenching mechanisms in very low-light environments. Another aspect pointed out by this study is the role of NPQ as a dissipative pathway when conductance becomes severely limiting. The fact that in nature only a small amount of damaged PSII is seen indicates the presence of an effective and efficient recovery mechanism that masks the real photodamage occurring during the day.

At the single-leaf level, when repair is not allowed, leaves in full light are twofold more photoinhibited than shaded ones. Therefore, light in excess of the photosynthetic optimum does not promote net carboxylation but increases water loss and PSII damage. The more photoinhibition there is, the more photosystems must be repaired, and consequently the more energy and dry matter must be allocated to this essential activity. Since above the saturation point net photosynthesis is constant while photoinhibition increases, it would be interesting to investigate what photodamage costs in terms of tree productivity. Another aspect of pivotal importance to be further explored is the combined influence of light and other environmental parameters, such as water status, temperature and nutrition, on the management of light, water and photosynthate in peach.
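A simplified sketch of the kind of fluorescence-based energy partitioning used in the quenching analysis; this follows the common three-way Hendrickson-style partitioning (the full Kornyeyev and Hendrickson (2007) scheme adds further terms for photoinactivated PSII, omitted here), and the fluorescence values are invented for illustration.

```python
def partition_absorbed_light(fs, fm_prime, fm):
    """Partition the quantum yield of absorbed light into three fates:
    photochemistry (Phi_PSII), regulated heat dissipation (Phi_NPQ)
    and fluorescence plus constitutive dissipation (Phi_f,D).
    fs = steady-state fluorescence, fm_prime = light-adapted maximal
    fluorescence, fm = dark-adapted maximal fluorescence."""
    phi_psii = 1.0 - fs / fm_prime        # used for photochemistry
    phi_npq = fs / fm_prime - fs / fm     # regulated thermal dissipation
    phi_fd = fs / fm                      # fluorescence + basal dissipation
    return phi_psii, phi_npq, phi_fd

# invented PAM readings for a sunlit peach leaf
phi_psii, phi_npq, phi_fd = partition_absorbed_light(400.0, 900.0, 1800.0)
```

By construction the three yields sum to 1, which is what allows the absorbed energy to be fully accounted for among the pathways.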

Relevance:

30.00%

Publisher:

Abstract:

This project gives a brief overview of several concepts, such as Renewable Energy Resources, Distributed Energy Resources and Distributed Generation, and describes the general architecture of an electrical microgrid, either isolated or connected to the Medium Voltage network. Moreover, it focuses on a project carried out by the GRECDH Department in collaboration with the CITCEA Department, both belonging to the Universitat Politècnica de Catalunya, concerning isolated microgrids employing renewable energy resources in two communities in northern Peru. Several solutions found using optimization software for different generation systems (wind and photovoltaic) and different energy-demand scenarios are commented on and analyzed from an electrical point of view. Furthermore, some proposals are made to improve microgrid performance, in particular to increase the voltage at each load connected to the microgrid. The extra costs required by the proposed solutions are calculated and their effect on the total microgrid cost is taken into account; finally, some considerations are made about the impact of the project on the population and on people's daily life.
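To make the voltage-improvement discussion concrete, here is a hedged sketch of the usual approximate feeder voltage-drop estimate, ΔV ≈ I(R cos φ + X sin φ); the load and line parameters below are invented and are not taken from the Peruvian microgrids studied in the project.

```python
import math

def feeder_voltage_drop(p_kw, pf, v_nom, r_ohm, x_ohm):
    """Approximate voltage drop along a single-phase feeder.
    p_kw = load active power, pf = power factor, v_nom = nominal
    voltage (V), r_ohm / x_ohm = line resistance and reactance."""
    i = (p_kw * 1000.0) / (v_nom * pf)          # load current, A
    phi = math.acos(pf)
    dv = i * (r_ohm * math.cos(phi) + x_ohm * math.sin(phi))
    return dv, 100.0 * dv / v_nom               # drop in V and in %

# invented example: 3 kW load, pf 0.9, 230 V nominal, 0.8 + j0.1 ohm line
dv, dv_pct = feeder_voltage_drop(3.0, 0.9, 230.0, 0.8, 0.1)
```

Reducing R (larger conductor cross-section) or shortening the feeder lowers ΔV, which is essentially the trade-off behind the extra costs evaluated in the project.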

Relevance:

30.00%

Publisher:

Abstract:

The present study carried out an analysis of rural landscape changes. In particular, it focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision-making in land planning. A literature review reveals a general lack of studies dealing with the modelling of the rural built environment, hence a theoretical modelling approach for this purpose is needed. Advances in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, the phenomenon of urbanization has determined the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main types of transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in building numbers. The specific aim of the present study is to propose a methodology for the development of a spatial model that allows the identification of the driving forces that acted on building allocation. In fact, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology consists of several conceptual steps covering the different aspects of the development of a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the choice of the most suitable algorithm in relation to the statistical theory and methods used, and the calibration and evaluation of the model.

A different combination of factors in the various parts of the territory generated more or less favourable conditions for building allocation, and the existence of buildings represents the evidence of such an optimum. Conversely, the absence of buildings expresses a combination of agents which is not suitable for building allocation. Presence or absence of buildings can thus be adopted as indicators of these driving conditions, since they express the action of the driving forces in the land-suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modelling techniques, provides evidence of which driving forces are involved in the allocation dynamic, and an insight into their level of influence on the process. GIS software, by means of spatial analysis tools, makes it possible to associate the concept of presence and absence with point features, generating a point process. Presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real, existing buildings; conversely, absences represent locations where buildings do not exist, and are therefore generated by a stochastic mechanism. Possible driving forces are selected, and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for the analysis of the explanatory variables and for the identification of the key driving variables behind the site-selection process for new building allocation. The model developed by following this methodology is applied to a case study to test the validity of the methodology. In particular, the study area chosen for testing is the New District of Imola, characterized by a prevailing agricultural production vocation and where the transformation dynamics occurred intensively.

The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The calibration of the model was carried out on spatial data regarding the periurban and rural parts of the study area within the 1975-2005 time period, by means of a generalised linear model. The resulting output of the model fit is a continuous grid surface whose cells assume values, ranging from 0 to 1, of the probability of building occurrence across the rural and periurban parts of the study area. The response variable thus assesses the changes in the rural built environment that occurred in this time interval, and is correlated to the selected explanatory variables by means of a generalized linear model using logistic regression. By comparing the probability map obtained from the model to the actual rural building distribution in 2005, the interpretative capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends which occurred in other study areas, and with reference to different time intervals, depending on the availability of data. The use of suitable data in terms of time, information and spatial resolution, and the costs related to data acquisition, pre-processing and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short- to medium-range future scenarios for the distribution of the rural built environment in the study area. In order to predict future scenarios, it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the calibration time interval.
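A toy sketch of the presence/absence logistic-regression idea described above; the predictors, site coordinates and labels are invented, and the hand-rolled gradient-descent fit merely stands in for the proper GLM calibration software a real study would use.

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Tiny logistic-regression fit by stochastic gradient descent.
    xs = list of feature tuples, ys = 0/1 presence labels."""
    w = [0.0] * len(xs[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y                    # gradient of log-loss
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def predict(w, b, x):
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))      # probability of presence

# hypothetical predictors per site: (distance to road [km], slope [%]);
# buildings tend to be present near roads on gentle slopes
sites = [(0.1, 2), (0.2, 1), (0.3, 3), (2.5, 12), (3.0, 15), (2.8, 10)]
present = [1, 1, 1, 0, 0, 0]
w, b = fit_logistic(sites, present)
```

Evaluating `predict` on a grid of cells would produce exactly the kind of 0-to-1 probability surface described in the abstract.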

Relevance:

30.00%

Publisher:

Abstract:

The hydrologic risk (and the hydro-geologic one, closely related to it) is, and has always been, a very relevant issue, due to the severe consequences that floods, and waters in general, may provoke in terms of human and economic losses. Floods are natural phenomena, often catastrophic, and cannot be avoided, but their damages can be reduced if they are predicted sufficiently in advance. For this reason, flood forecasting plays an essential role in hydro-geological and hydrological risk prevention. Thanks to the development of sophisticated meteorological, hydrologic and hydraulic models, flood forecasting has made significant progress in recent decades; nonetheless, models are imperfect, which means that we are still left with a residual uncertainty about what will actually happen. In this thesis, this type of uncertainty is what will be discussed and analyzed. In operational problems, it is possible to affirm that the ultimate aim of a forecasting system is not to reproduce the river's behavior: this is only a means for reducing the uncertainty associated with what will happen as a consequence of a precipitation event. In other words, the main objective is to assess whether or not preventive interventions should be adopted and which operational strategy may represent the best option. The main problem for a decision maker is to interpret model results and translate them into an effective intervention strategy. To make this possible, it is necessary to clearly define what is meant by uncertainty, since in the literature confusion is often made on this issue. Therefore, the first objective of this thesis is to clarify this concept, starting from a key question: should the choice of the intervention strategy be based on an evaluation of the model prediction, in terms of its ability to represent reality, or on an evaluation of what will actually happen, on the basis of the information given by the model forecast?

Once the previous idea has been made unambiguous, the other main concern of this work is to develop a tool that can provide effective decision support, making objective and realistic risk evaluations possible. In particular, such a tool should be able to provide an uncertainty assessment that is as accurate as possible. This primarily means three things: it must be able to correctly combine all the available deterministic forecasts, it must assess the probability distribution of the predicted quantity, and it must quantify the flooding probability. Furthermore, given that the time available to implement prevention strategies is often limited, the flooding probability has to be linked to the time of occurrence. For this reason, it is necessary to quantify the flooding probability within a time horizon related to the time required to implement the intervention strategy, and it is also necessary to assess the probability distribution of the flooding time.
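The last requirement, quantifying the flooding probability within the decision horizon, can be sketched from an ensemble of forecast hydrographs; the members, threshold and horizon below are invented for illustration and do not represent any real forecasting chain.

```python
def flood_probability(ensemble, threshold, horizon_steps):
    """Fraction of ensemble members whose forecast water level exceeds
    the flooding threshold at least once within the first
    `horizon_steps` time steps of the forecast."""
    hits = sum(
        1 for member in ensemble
        if any(level >= threshold for level in member[:horizon_steps])
    )
    return hits / len(ensemble)

# four hypothetical forecast members (water level in metres, hourly steps)
ensemble = [
    [2.0, 2.4, 2.9, 3.3, 3.6],
    [2.0, 2.2, 2.5, 2.7, 2.8],
    [2.1, 2.6, 3.1, 3.5, 3.8],
    [1.9, 2.1, 2.3, 2.4, 2.5],
]
p_flood = flood_probability(ensemble, threshold=3.0, horizon_steps=4)
```

Restricting the check to `horizon_steps` is what ties the probability to the time needed to implement the intervention, as argued in the text; the first exceedance times of the members would likewise give an empirical distribution of the flooding time.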

Relevance:

30.00%

Publisher:

Abstract:

Life is full of uncertainties. Legal rules should have a clear intention, motivation and purpose in order to diminish daily uncertainties. However, practice shows that their consequences are complex and hard to predict. For instance, tort law has the general objectives of deterring future negligent behavior and compensating the victims of someone else's negligence. Achieving these goals are particularly difficult in medical malpractice cases. To start with, when patients search for medical care they are typically sick in the first place. In case harm materializes during the treatment, it might be very hard to assess if it was due to substandard medical care or to the patient's poor health conditions. Moreover, the practice of medicine has a positive externality on the society, meaning that the design of legal rules is crucial: for instance, it should not result in physicians avoiding practicing their activity just because they are afraid of being sued even when they acted according to the standard level of care. The empirical literature on medical malpractice has been developing substantially in the past two decades, with the American case being the most studied one. Evidence from civil law tradition countries is more difficult to find. The aim of this thesis is to contribute to the empirical literature on medical malpractice, using two civil law countries as a case-study: Spain and Italy. The goal of this thesis is to investigate, in the first place, some of the consequences of having two separate sub-systems (administrative and civil) coexisting within the same legal system, which is common in civil law tradition countries with a public national health system (such as Spain, France and Portugal). When this holds, different procedures might apply depending on the type of hospital where the injury took place (essentially whether it is a public hospital or a private hospital). 
Therefore, a patient injured in a public hospital should file a claim in administrative courts, while a patient suffering an identical medical accident in a private hospital should file a claim in civil courts. A natural question the reader might pose is: why should both administrative and civil courts decide medical malpractice cases? Moreover, can this specialization of courts influence how judges decide such cases? In the past few years there has been a general concern with patient safety, which is currently on the agenda of several national governments. Some initiatives have been taken at the international level with the aim of preventing harm to patients during treatment and care. A negligently injured patient might bring a claim against the health care provider in order to be compensated for the economic loss and for pain and suffering. In several European countries, health care is mainly provided by a public national health system, which means that if a patient harmed in a public hospital succeeds in a claim against the hospital, public expenditures increase because the State takes part in the litigation process. This poses a problem in a context of increasing national health expenditures and public debt. In Italy, with the aim of increasing patient safety, some regions implemented a monitoring system for medical malpractice claims. If properly implemented, this reform should also allow for a reduction in medical malpractice insurance costs. This thesis is organized as follows. Chapter 1 reviews the empirical literature on medical malpractice, covering studies on the outcomes and merit of claims, costs, and defensive medicine. Chapter 2 presents an empirical analysis of medical malpractice claims reaching the Spanish Supreme Court. The focus is on reversal rates for civil and administrative decisions. Administrative decisions appealed by the plaintiff have the highest reversal rates. 
The results show a bias in lower administrative courts, which tend to decide in favor of the State. We provide a detailed explanation for these results, which may lie in the organization of administrative judges' careers. Chapter 3 assesses predictors of compensation in medical malpractice cases appealed to the Spanish Supreme Court and investigates the amount of damages awarded to patients. The results show horizontal equity between administrative and civil decisions (controlling for observable case characteristics) and vertical inequity (patients suffering more severe injuries tend to receive higher payouts). To carry out these analyses, a database of medical malpractice decisions appealed to the Administrative and Civil Chambers of the Spanish Supreme Court from 2006 to 2009, designated the Spanish Supreme Court Medical Malpractice Dataset (SSCMMD), was created. A description of how the SSCMMD was built, and of the Spanish legal system, is also presented. Chapter 4 contains an empirical investigation of the effect of a monitoring system for medical malpractice claims on insurance premiums. In Italy, some regions adopted this policy in different years, while others did not. The study uses data on insurance premiums from Italian public hospitals for the years 2001-2008; this is a significant difference from most previous studies, which use the insurance company as the unit of analysis. Although insurance premiums rose from 2001 to 2008, the increase was lower in regions that adopted a monitoring system for medical claims. Possible implications of this system are also discussed. Finally, Chapter 5 discusses the main findings, outlines possible future research, and concludes.
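The kind of descriptive statistic behind the reversal-rate analysis of Chapter 2 can be illustrated with a small sketch. The records below are invented for demonstration, not SSCMMD data, and the grouping keys (chamber, appellant) are hypothetical field names.

```python
# Hypothetical illustration: reversal rates of appealed decisions,
# broken down by chamber and by who appealed. Records are invented.
from collections import defaultdict

def reversal_rates(cases):
    """cases: iterable of dicts with 'chamber', 'appellant', 'reversed' keys.
    Returns {(chamber, appellant): share of decisions reversed}."""
    counts = defaultdict(lambda: [0, 0])  # key -> [reversals, total]
    for c in cases:
        key = (c["chamber"], c["appellant"])
        counts[key][0] += int(c["reversed"])
        counts[key][1] += 1
    return {k: rev / tot for k, (rev, tot) in counts.items()}

cases = [
    {"chamber": "administrative", "appellant": "plaintiff", "reversed": True},
    {"chamber": "administrative", "appellant": "plaintiff", "reversed": True},
    {"chamber": "administrative", "appellant": "plaintiff", "reversed": False},
    {"chamber": "civil", "appellant": "plaintiff", "reversed": False},
    {"chamber": "civil", "appellant": "defendant", "reversed": True},
]
rates = reversal_rates(cases)
```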

Relevância:

30.00%

Publicador:

Resumo:

The surface electrocardiogram (ECG) is an established diagnostic tool for detecting abnormalities in the electrical activity of the heart. The interest of the ECG, however, extends beyond diagnosis. In recent years, studies in cognitive psychophysiology have related heart rate variability (HRV) to memory performance and mental workload. The aim of this thesis was to analyze the variability of rhythms derived from the surface ECG at two different time scales: the discrete-event time scale typical of beat-related features (Objective I), and the “continuous” time scale of sources separated from the ECG (Objective II), in selected scenarios relevant to psychophysiological and clinical research, respectively. Objective I) Joint time-frequency and non-linear analysis of HRV was carried out with the goal of assessing psychophysiological workload (PPW) in response to tasks engaging working memory. Results from fourteen healthy young subjects suggest the potential use of the proposed indices for discriminating PPW levels in response to varying memory-search task difficulty. Objective II) A novel source-cancellation method based on morphology clustering was proposed for estimating the atrial wavefront in atrial fibrillation (AF) from body surface potential maps. A strong direct correlation between the spectral concentration (SC) of the atrial wavefront and the temporal variability of the spectral distribution was shown in persistent AF patients, suggesting that with higher SC a shorter observation time is required to collect the spectral distribution from which the fibrillatory rate is estimated. This could save time and costs in clinical decision-making. The results held for reduced lead sets, suggesting that a simplified setup could also be considered, further reducing costs. In designing the methods of this thesis, an online signal-processing approach was adopted, with the goal of contributing to real-world applicability. 
An algorithm for automatic assessment of ambulatory ECG quality, and an automatic ECG delineation algorithm were designed and validated.
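A minimal sketch of the spectral-concentration idea (not the thesis's exact definition): the fraction of signal power in a narrow band around the dominant fibrillatory frequency of an atrial signal. The sampling rate, band width, AF frequency range and the synthetic test signal are all illustrative assumptions.

```python
# Sketch: spectral concentration (SC) as the share of power in a narrow
# band around the dominant peak of an atrial signal's spectrum.
import numpy as np

def spectral_concentration(signal, fs, band_hz=1.0):
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    # restrict the peak search to a typical AF range (4-12 Hz, assumed here)
    af = (freqs >= 4) & (freqs <= 12)
    peak = freqs[af][np.argmax(spectrum[af])]
    in_band = (freqs >= peak - band_hz / 2) & (freqs <= peak + band_hz / 2)
    return peak, spectrum[in_band].sum() / spectrum.sum()

# synthetic "atrial" signal: a 6 Hz tone plus broadband noise
fs = 100.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 6.0 * t) + 0.1 * rng.standard_normal(t.size)
peak, sc = spectral_concentration(x, fs)
```

A clean periodic signal concentrates nearly all its power in the band around the peak (SC close to 1), while a disorganized one spreads power across the spectrum, which is what makes SC a useful organization index.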

Relevância:

30.00%

Publicador:

Resumo:

Targeted control and execution of organic solid-state reactions is made possible, among other things, by detailed knowledge of packing effects. In this work, the combined use of single-crystal X-ray analysis and high-resolution solid-state NMR on selected examples provided deeper understanding of, and insight into, the reaction mechanisms of organic solid-state reactions at the molecular level. In the topotactic [2+2] photodimerization of cinnamic acid, intermediates were isolated and structurally characterized. In particular, static deuteron and 13C-CPMAS NMR spectra unambiguously revealed dynamic hydrogen bonds that transiently break the centrosymmetry of the reaction product. Further confirmation was subsequently obtained by high-temperature X-ray analysis, resolving the apparent contradiction between the NMR and X-ray results. Esterification of the cinnamic acid removes these hydrogen bonds and thus preserves the centrosymmetry of the photodimer. Furthermore, approaches to structure control in solids based on molecular recognition of the hydroxyl-pyridine (OH-N) heterosynthon in co-crystals are described; above all, the stability of the synthon in the presence of functional groups capable of competitive hydrogen bonding was established. By extending this approach, the molecular specificity of the hydroxyl-pyridine (OH-N) heterosynthon in simultaneous co-crystallization with several components was successfully demonstrated. Using the co-crystallization of trans-1,2-bis(4-pyridyl)ethylene (bpe) with resorcinol (res) in the presence of trans-1,2-bis(4-pyridyl)ethane (bpet) as an example, intermediates of the solid-state reactions and novel polymorphs were isolated, and the reaction pathway was fully elucidated by X-ray analysis. 
It was also shown that the resorcinol template can be removed from the target compounds. Furthermore, a rare, non-ideal single-crystal-to-single-crystal rearrangement of trans-1,2-bis(4-pyridyl)ethylene (bpe) with resorcinol (res) was carried out. In all cases, the questions concerning the structure and dynamics of the compounds studied could only be answered unambiguously and comprehensively through the combined use of X-ray analysis and NMR spectroscopy at comparable temperatures.

Relevância:

30.00%

Publicador:

Resumo:

The importance of banks and financial markets lies in the fact that they promote economic efficiency by allocating savings to profitable investment opportunities. An efficient banking system is a key determinant of financial stability. The theory of market failure forms the basis for understanding financial regulation. Following the detrimental economic and financial consequences in the aftermath of the crisis, academics and policymakers turned their attention to the construction of an appropriate regulatory and supervisory framework for the banking sector. This dissertation aims at understanding the impact of regulation and supervision on banks’ performance, focusing on two emerging market economies, Turkey and Russia. It examines the way in which regulations matter for financial stability and banking performance from a law & economics perspective. A review of the theory of banking regulation, particularly as applied to emerging economies, shows that the efficiency of certain regulatory solutions is open to debate. Therefore, in the context of emerging countries, whether a certain approach is efficient or not is presented as an empirical question to which this dissertation tries to find an answer.

Relevância:

30.00%

Publicador:

Resumo:

Modern food systems are characterized by high energy intensity as well as by the production of large amounts of waste, residuals and food losses. This inefficiency has major consequences in terms of GHG emissions, waste disposal, and natural resource depletion. The research hypothesis is that residual biomass could contribute to the energy needs of food systems if recovered as an integrated renewable energy source (RES), leading to a significant reduction of the impacts of food systems, primarily in terms of fossil fuel consumption and GHG emissions. In order to assess these effects, a comparative life cycle assessment (LCA) was conducted to compare two food systems: a fossil fuel-based system and an integrated system using residuals as a RES for self-consumption. The food product under analysis was peach nectar, from cultivation to end-of-life. The aim of this LCA is twofold. On the one hand, it allows an evaluation of the energy inefficiencies related to agro-food waste. On the other hand, it illustrates how the integration of bioenergy into food systems could effectively contribute to reducing this inefficiency. Data about inputs and waste generated were collected mainly through literature review and databases. The energy balance, GHG emissions (Global Warming Potential) and waste generation were analyzed in order to identify the relative requirements and contributions of the different segments. An evaluation of the energy “lost” through the different categories of waste made it possible to detail the consequences associated with its management and/or disposal. The results should provide an insight into the impacts associated with inefficiencies within food systems. The comparison provides a measure of the potential reuse of wasted biomass and of the amount of energy recoverable, which could represent a first step toward specific policies on the integration of bioenergy for self-consumption.
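The comparison logic can be sketched as follows, under hypothetical stage-level inventories: primary energy and GWP are aggregated per life-cycle stage, and in the integrated scenario recovered residual biomass displaces part of the fossil input to processing. All stage values and the displacement share are invented for illustration; they are not results from the thesis.

```python
# Illustrative comparison of two life-cycle scenarios. Every number here
# is hypothetical: the point is the aggregation structure, not the data.

def totals(stages):
    """stages: dict stage -> (primary energy MJ, GWP kg CO2-eq) per unit product."""
    energy = sum(e for e, _ in stages.values())
    gwp = sum(g for _, g in stages.values())
    return energy, gwp

fossil = {
    "cultivation": (2.0, 0.15),
    "processing": (3.5, 0.25),
    "packaging": (1.5, 0.10),
    "end_of_life": (0.5, 0.05),
}

# integrated scenario: residual biomass covers an assumed 40% of processing
share = 0.4
integrated = dict(fossil)
e, g = fossil["processing"]
integrated["processing"] = (e * (1 - share), g * (1 - share))

e_fossil, g_fossil = totals(fossil)
e_integrated, g_integrated = totals(integrated)
```

In a full LCA the displaced burden would come from characterized inventory data rather than a single share, but the scenario-level accounting reduces to this kind of stage-wise sum.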