15 results for Costs-consequences analysis
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
In the present thesis a thorough multiwavelength analysis of a number of galaxy clusters known to be experiencing a merger event is presented. The bulk of the thesis consists of the analysis of deep radio observations of six merging clusters which host extended radio emission on the cluster scale. A combined optical and X–ray analysis is performed in order to obtain a detailed and comprehensive picture of the cluster dynamics and possibly derive hints about the properties of the ongoing merger, such as the mass ratio, geometry and time scale involved. The combination of the high quality radio, optical and X–ray data allows us to investigate the implications of the ongoing merger for the cluster radio properties, focusing on the phenomenon of cluster-scale diffuse radio sources, known as radio halos and relics. A total of six merging clusters was selected for the present study: A3562, A697, A209, A521, RXCJ 1314.4–2515 and RXCJ 2003.5–2323. All of them were known, or suspected, to possess extended radio emission on the cluster scale, in the form of a radio halo and/or a relic. High sensitivity radio observations were carried out for all clusters using the Giant Metrewave Radio Telescope (GMRT) at low frequency (i.e. ≤ 610 MHz), in order to test for the presence of a diffuse radio source and/or analyse in detail the properties of the hosted extended radio emission. For three clusters, the GMRT information was combined with higher frequency data from Very Large Array (VLA) observations. A re–analysis of the optical and X–ray data available in the public archives was carried out for all sources. Proprietary deep XMM–Newton and Chandra observations were used to investigate the merger dynamics in A3562. Thanks to our multiwavelength analysis, we were able to confirm the existence of a radio halo and/or a relic in all clusters, and to connect their properties and origin to the reconstructed merging scenario in most of the investigated cases.
• The existence of a small size, low power radio halo in A3562 was successfully explained within the theoretical framework of the particle re–acceleration model for the origin of radio halos, which invokes the re–acceleration of pre–existing relativistic electrons in the intracluster medium by merger–driven turbulence.
• A giant radio halo was found in the massive galaxy cluster A209, which has likely undergone a past major merger and is currently experiencing a new merging process in a direction roughly orthogonal to the old merger axis. A giant radio halo was also detected in A697, whose optical and X–ray properties may be suggestive of a strong merger event along the line of sight. Given the cluster mass and the kind of merger, the existence of a giant radio halo in both clusters is expected in the framework of the re–acceleration scenario.
• A radio relic was detected at the outskirts of A521, a highly dynamically disturbed cluster which is accreting a number of small mass concentrations. A possible explanation for its origin requires the presence of a merger–driven shock front at the location of the source. The spectral properties of the relic may support such an interpretation and require a Mach number M ≲ 3 for the shock.
• The galaxy cluster RXCJ 1314.4–2515 is exceptional and unique in hosting two peripheral relic sources, extending on the Mpc scale, and a central small size radio halo. The existence of these sources requires an ongoing energetic merger. Our combined optical and X–ray investigation suggests that a strong merging process between two or more massive subclumps may be ongoing in this cluster. Thanks to forthcoming optical and X–ray observations, we will reconstruct the merger dynamics in detail and derive its energetics, to be related to the energy necessary for the particle re–acceleration in this cluster.
• Finally, RXCJ 2003.5–2323 was found to possess a giant radio halo.
This source is among the largest, most powerful and most distant (z=0.317) halos imaged so far. Unlike other radio halos, it shows a very peculiar morphology, with bright clumps and filaments of emission, whose origin might be related to the relatively high redshift of the hosting cluster. Although very little optical and X–ray information is available on the cluster dynamical state, the results of our optical analysis suggest the presence of two massive substructures which may be interacting with the cluster. Forthcoming observations in the optical and X–ray bands will allow us to confirm the expected high merging activity in this cluster. Throughout the present thesis a cosmology with H0 = 70 km s⁻¹ Mpc⁻¹, Ωm = 0.3 and ΩΛ = 0.7 is assumed.
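Statements about halo power at a given redshift rest on converting the measured flux density into a rest-frame radio power under the assumed cosmology. A standard form of this conversion (not spelled out in the abstract; S_ν is the flux density, α the spectral index, and D_L the luminosity distance for the flat ΛCDM parameters quoted above) is:

```latex
P_{\nu} = 4\pi\, D_L^2(z)\, S_{\nu}\, (1+z)^{-(1+\alpha)},
\qquad
D_L(z) = (1+z)\,\frac{c}{H_0}\int_0^z
\frac{dz'}{\sqrt{\Omega_m (1+z')^3 + \Omega_\Lambda}}
```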
Abstract:
The relation between intercepted light and orchard productivity has been considered linear, although this dependence seems to be more subordinate to the planting system than to light intensity. At the whole plant level, an increase in irradiance does not always determine an improvement in productivity. One of the reasons can be the plant's intrinsic inefficiency in using energy: generally, in full light, only 5–10% of the total incoming energy is allocated to net photosynthesis. Preserving or improving this efficiency therefore becomes pivotal for scientists and fruit growers. Even though a conspicuous amount of energy is reflected or transmitted, plants cannot avoid absorbing photons in excess. Chlorophyll over-excitation promotes the production of reactive species, increasing the risk of photoinhibition. The dangerous consequences of photoinhibition forced plants to evolve a complex, multilevel machinery able to dissipate the energy excess by quenching heat (Non Photochemical Quenching), moving electrons (the water-water cycle, cyclic transport around PSI, the glutathione-ascorbate cycle and photorespiration) and scavenging the generated reactive species. The price plants must pay for this equipment is the use of CO2 and reducing power, with a consequent decrease of photosynthetic efficiency, both because some photons are not used for carboxylation and because an effective loss of CO2 and reducing power occurs. Net photosynthesis increases with light until the saturation point; additional PPFD does not improve carboxylation but raises the efficiency of the alternative pathways of energy dissipation, as well as ROS production and photoinhibition risks. Even the wide photo-protective apparatus, however, is not always able to cope with the excessive incoming energy, and photodamage therefore occurs. Any event increasing the photon pressure and/or decreasing the efficiency of the described photo-protective mechanisms (i.e. thermal stress, water and nutritional deficiency) can emphasize photoinhibition.
In nature, only a small amount of damaged photosystems is typically found, because of an effective, efficient and energy consuming recovery system. Since damaged PSII is quickly repaired at an energy expense, it would be interesting to investigate how much PSII recovery costs in terms of plant productivity. This PhD dissertation aims to improve the knowledge of the several strategies adopted for managing the incoming energy, and of the implications of light excess for photo-damage, in peach. The thesis is organized in three scientific units. In the first section, a new rapid, non-intrusive, whole-tissue and universal technique for the determination of functional PSII was implemented and validated on different kinds of plants: C3 and C4 species, woody and herbaceous plants, wild type and a chlorophyll b-less mutant, monocots and dicots. In the second unit, using a singular experimental orchard named “Asymmetric orchard”, the relation between light environment and photosynthetic performance, water use and photoinhibition was investigated in peach at the whole plant level; furthermore, the effect of photon pressure variation on energy management was considered at the single leaf level. In the third section, the quenching analysis method suggested by Kornyeyev and Hendrickson (2007) was validated on peach. Afterwards it was applied in the field, where the influence of a moderate light and water reduction on peach photosynthetic performance, water requirements, energy management and photoinhibition was studied. Using solar energy as fuel for life is intrinsically risky for the plant, given the constantly high risk of photodamage. This dissertation tries to highlight the complex relation existing between plants, in particular peach, and light, analysing the principal strategies plants have developed to manage the incoming light so as to derive the maximum possible benefit while minimizing the risks.
In the first instance, the new method proposed for functional PSII determination, based on P700 redox kinetics, seems to be a valid, non-intrusive, universal and field-applicable technique, also because it probes the whole leaf tissue in depth, rather than only the first leaf layers as fluorescence does. The fluorescence Fv/Fm parameter gives a good estimate of functional PSII, but only when data obtained from the adaxial and abaxial leaf surfaces are averaged. In addition to this method, the energy quenching analysis proposed by Kornyeyev and Hendrickson (2007), combined with the photosynthesis model proposed by von Caemmerer (2000), is a powerful tool to analyse and study, even in the field, the relation between the plant and environmental factors such as water, temperature and, first of all, light. The “Asymmetric” training system is a good way to study the relations between light energy, photosynthetic performance and water use in the field. At the whole plant level, net carboxylation increases with PPFD up to a saturation point. Light in excess, rather than improving photosynthesis, may emphasize water and thermal stress, leading to stomatal limitation. Furthermore, too much light does not promote an improvement in net carboxylation but PSII damage: in fact, in the most light-exposed plants about 50–60% of the total PSII is inactivated. At the single leaf level, net carboxylation increases up to a saturation point (1000–1200 μmol m⁻² s⁻¹) and the light excess is dissipated by non-photochemical quenching and by non net carboxylative transports. The latter follow a pattern quite similar to the Pn/PPFD curve, reaching their saturation point at almost the same photon flux density. At middle-low irradiance, NPQ seems to be lumen pH limited, because the incoming photon pressure is not enough to generate the optimum lumen pH for the full activation of violaxanthin de-epoxidase (VDE). Peach leaves try to cope with the light excess by increasing the non net carboxylative transports.
As PPFD rises, the xanthophyll cycle is more and more activated and the rate of non net carboxylative transports is reduced. Some of these alternative transports, such as the water-water cycle, the cyclic transport around PSI and the glutathione-ascorbate cycle, are able to generate additional H+ in the lumen, supporting VDE activation when light is limiting. Moreover, the alternative transports seem to act as an important dissipative pathway when high temperature and sub-optimal conductance emphasize the photoinhibition risks. In peach, a moderate water and light reduction does not determine a decrease in net carboxylation but, by diminishing the incoming light and the environmental evapo-transpiration demand, stomatal conductance decreases, improving water use efficiency. Therefore, by lowering light intensity to non-limiting levels, water could be saved without compromising net photosynthesis. The quenching analysis is able to partition the absorbed energy among the several utilization, photoprotection and photo-oxidation pathways. When recovery is permitted, only a few PSII remain unrepaired, although more net PSII damage is recorded in plants placed in full light. Also in this experiment, in over-saturating light the main dissipation pathway is non-photochemical quenching; at middle-low irradiance it seems to be pH limited, and other transports, such as photorespiration and the alternative transports, are used to support photoprotection and to contribute to creating the optimal trans-thylakoidal ΔpH for violaxanthin de-epoxidase. These alternative pathways become the main quenching mechanisms in very low light environments. Another aspect pointed out by this study is the role of NPQ as a dissipative pathway when conductance becomes severely limiting. The evidence that in nature only a small amount of damaged PSII is seen indicates the presence of an effective and efficient recovery mechanism that masks the real photodamage occurring during the day.
At the single leaf level, when repair is not allowed, leaves in full light are twofold more photoinhibited than shaded ones. Therefore, light in excess of the photosynthetic optimum does not promote net carboxylation but increases water loss and PSII damage. The greater the photoinhibition, the more photosystems must be repaired, and consequently the more energy and dry matter must be allocated to this essential activity. Since above the saturation point net photosynthesis is constant while photoinhibition increases, it would be interesting to investigate how much photodamage costs in terms of tree productivity. Another aspect of pivotal importance to be further investigated is the combined influence of light and other environmental parameters, such as water status, temperature and nutrition, on peach light, water and photosynthate management.
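The quenching analysis referred to above partitions the absorbed light among photochemistry, regulated thermal dissipation and constitutive losses, starting from chlorophyll fluorescence parameters. A minimal sketch of the basic fluorescence partitioning is below; the fluorescence values are hypothetical, and the Kornyeyev and Hendrickson (2007) extension, which further splits the dissipative term into photoinhibitory and fast-relaxing components, is not reproduced here:

```python
def quenching_partition(Fs, Fm_prime, Fm):
    """Partition absorbed light into the fraction used by PSII photochemistry,
    the fraction dissipated by regulated non-photochemical quenching (NPQ),
    and the fraction lost to fluorescence plus constitutive dissipation (f,D).
    Fs: steady-state fluorescence; Fm_prime: light-adapted maximal
    fluorescence; Fm: dark-adapted maximal fluorescence."""
    phi_psii = 1.0 - Fs / Fm_prime       # photochemical yield
    phi_npq = Fs / Fm_prime - Fs / Fm    # regulated thermal dissipation yield
    phi_fd = Fs / Fm                     # fluorescence + constitutive losses
    return phi_psii, phi_npq, phi_fd

# hypothetical mid-morning values for a sunlit leaf
phi_psii, phi_npq, phi_fd = quenching_partition(Fs=500.0, Fm_prime=900.0, Fm=2000.0)
```

By construction the three yields sum to one, which is what allows the absorbed energy budget to be closed in the field.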
Abstract:
The present study carried out an analysis of rural landscape changes. In particular, the study focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A literature review reveals a general lack of studies dealing with the modelling of the rural built environment, hence a theoretical modelling approach for this purpose is needed. Advancements in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, the phenomenon of urbanization has determined the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main types of transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in building numbers. The specific aim of the present study is to propose a methodology for the development of a spatial model that allows the identification of the driving forces that acted on building allocation. In fact, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology comprises several conceptual steps covering the different aspects related to the development of a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the choice of the most suitable algorithm in relation to the statistical theory and method used, and the calibration and evaluation of the model.
A different combination of factors in various parts of the territory generated more or less favourable conditions for building allocation, and the existence of buildings represents the evidence of such an optimum. Conversely, the absence of buildings expresses a combination of agents which is not suitable for building allocation. Presence or absence of buildings can therefore be adopted as indicators of such driving conditions, since they represent the expression of the action of the driving forces in the land suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modelling techniques, provides evidence of which driving forces are involved in the allocation dynamics and an insight into their level of influence on the process. GIS software, by means of spatial analysis tools, makes it possible to associate the concept of presence and absence with point features, generating a point process. Presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real existing buildings; conversely, absences represent locations where buildings do not exist, and are therefore generated by a stochastic mechanism. Possible driving forces are selected and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for the analysis of the explanatory variables and for the identification of the key driving variables behind the site selection process for new building allocation. The model developed by following the methodology is applied to a case study to test the validity of the methodology. In particular, the study area for the testing of the methodology is the New District of Imola, characterized by a prevailing agricultural production vocation and where transformation dynamics occurred intensively.
The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The calibration of the model was carried out on spatial data regarding the periurban and rural parts of the study area within the 1975-2005 time period, by means of a generalised linear model. The resulting output of the model fit is a continuous grid surface whose cells assume probability values, ranging from 0 to 1, of building occurrence across the rural and periurban parts of the study area. Hence the response variable assesses the changes in the rural built environment that occurred in that time interval, and is correlated to the selected explanatory variables by means of a generalized linear model using logistic regression. By comparing the probability map obtained from the model to the actual rural building distribution in 2005, the interpretation capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends which occurred in other study areas, and with reference to different time intervals, depending on the availability of data. The use of suitable data in terms of time, information and spatial resolution, and the costs related to data acquisition, pre-processing and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short/medium-range future scenarios for the distribution of the rural built environment in the study area. In order to predict future scenarios, it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the calibration time interval.
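The presence/absence logistic model described above can be sketched in a few lines. Everything below is synthetic and illustrative: the two driving forces (distance to the nearest road and terrain slope) and all coefficients are hypothetical, and the GLM is fitted by plain iteratively reweighted least squares rather than by any specific GIS package:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# hypothetical driving forces: distance to nearest road (km), terrain slope (%)
dist_road = rng.uniform(0.0, 5.0, n)
slope = rng.uniform(0.0, 30.0, n)
# synthetic allocation process: presence favoured near roads, on gentle slopes
true_logit = 2.0 - 1.2 * dist_road - 0.08 * slope
presence = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

X = np.column_stack([np.ones(n), dist_road, slope])  # design matrix, intercept first
beta = np.zeros(3)
for _ in range(25):  # IRLS (Newton) iterations for the logistic GLM
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    w = p * (1.0 - p)                      # working weights
    beta += np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (presence - p))

# fitted probability of building occurrence at each location, in [0, 1]
prob = 1.0 / (1.0 + np.exp(-X @ beta))
```

Evaluated over a regular grid of cells instead of sample points, `prob` is exactly the kind of 0-to-1 probability surface the abstract describes, and the signs and magnitudes of `beta` quantify each driving force's influence.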
Abstract:
Hydrologic risk (and the closely related hydro-geologic risk) is, and has always been, a very relevant issue, due to the severe consequences, in terms of human and economic losses, that may be caused by flooding or by waters in general. Floods are natural phenomena, often catastrophic, and cannot be avoided; but their damages can be reduced if they are predicted sufficiently in advance. For this reason, flood forecasting plays an essential role in hydro-geological and hydrological risk prevention. Thanks to the development of sophisticated meteorological, hydrologic and hydraulic models, flood forecasting has made significant progress in recent decades; nonetheless, models are imperfect, which means that we are still left with a residual uncertainty on what will actually happen. In this thesis, this type of uncertainty is what will be discussed and analyzed. In operational problems, it is possible to affirm that the ultimate aim of forecasting systems is not to reproduce the river behaviour: this is only a means for reducing the uncertainty associated with what will happen as a consequence of a precipitation event. In other words, the main objective is to assess whether or not preventive interventions should be adopted and which operational strategy may represent the best option. The main problem for a decision maker is to interpret model results and translate them into an effective intervention strategy. To make this possible, it is necessary to clearly define what is meant by uncertainty, since in the literature confusion is often made on this issue. Therefore, the first objective of this thesis is to clarify this concept, starting with a key question: should the choice of the intervention strategy to adopt be based on the evaluation of the model prediction, i.e. on its ability to represent reality, or on the evaluation of what will actually happen on the basis of the information given by the model forecast?
Once the previous idea is made unambiguous, the other main concern of this work is to develop a tool that can provide effective decision support, making objective and realistic risk evaluations possible. In particular, such a tool should be able to provide an uncertainty assessment as accurate as possible. This means primarily three things: it must be able to correctly combine all the available deterministic forecasts, it must assess the probability distribution of the predicted quantity, and it must quantify the flooding probability. Furthermore, given that the time to implement prevention strategies is often limited, the flooding probability will have to be linked to the time of occurrence. For this reason, it is necessary to quantify the flooding probability within a time horizon related to that required to implement the intervention strategy, and it is also necessary to assess the probability of the flooding time.
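The notion of a flooding probability tied to a time horizon can be made concrete with a toy ensemble. The numbers below are entirely hypothetical (a fabricated 200-member ensemble of hourly water-level forecasts and an arbitrary dike level); the point is only that the probability of flooding *within* lead time t is the fraction of members whose running maximum has exceeded the threshold by step t, which is non-decreasing in t:

```python
import numpy as np

rng = np.random.default_rng(1)
members, horizon = 200, 48          # ensemble size, hourly lead-time steps
base = 2.0 + 0.05 * np.arange(horizon)   # deterministic rising limb (m)
# each member adds an accumulating random perturbation to the base forecast
ensemble = base + rng.normal(0.0, 0.4, (members, horizon)).cumsum(axis=1) * 0.1

threshold = 3.5                     # hypothetical dike level (m)
# has each member flooded by step t? (running maximum vs threshold)
exceeded = np.maximum.accumulate(ensemble, axis=1) > threshold
p_flood_by_t = exceeded.mean(axis=0)     # flooding probability within lead time t
p_48h = p_flood_by_t[-1]                 # flooding probability within 48 h
```

Because exceedance is cumulative, `p_flood_by_t` can only grow with lead time, which is what lets a decision maker compare it against the time needed to implement a given intervention strategy.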
Abstract:
Life is full of uncertainties. Legal rules should have a clear intention, motivation and purpose in order to diminish daily uncertainties. However, practice shows that their consequences are complex and hard to predict. For instance, tort law has the general objectives of deterring future negligent behavior and compensating the victims of someone else's negligence. Achieving these goals is particularly difficult in medical malpractice cases. To start with, when patients seek medical care they are typically sick in the first place. In case harm materializes during the treatment, it might be very hard to assess whether it was due to substandard medical care or to the patient's poor health conditions. Moreover, the practice of medicine has a positive externality on society, meaning that the design of legal rules is crucial: for instance, it should not result in physicians avoiding practicing their activity just because they are afraid of being sued even when they acted according to the standard level of care. The empirical literature on medical malpractice has developed substantially in the past two decades, with the American case being the most studied one. Evidence from civil law tradition countries is more difficult to find. The aim of this thesis is to contribute to the empirical literature on medical malpractice, using two civil law countries as case studies: Spain and Italy. The goal of this thesis is to investigate, in the first place, some of the consequences of having two separate sub-systems (administrative and civil) coexisting within the same legal system, which is common in civil law tradition countries with a public national health system (such as Spain, France and Portugal). Where this holds, different procedures might apply depending on the type of hospital where the injury took place (essentially, whether it is a public or a private hospital).
Therefore, a patient injured in a public hospital should file a claim in administrative courts, while a patient suffering an identical medical accident in a private hospital should file a claim in civil courts. A natural question the reader might pose is: why should both administrative and civil courts decide medical malpractice cases? Moreover, can this specialization of courts influence how judges decide medical malpractice cases? In the past few years there has been a general concern about patient safety, which is currently on the agenda of several national governments. Some initiatives have been taken at the international level with the aim of preventing harm to patients during treatment and care. A negligently injured patient might bring a claim against the health care provider with the aim of being compensated for the economic loss and for pain and suffering. In several European countries, health care is mainly provided by a public national health system, which means that if a patient harmed in a public hospital succeeds in a claim against the hospital, public expenditure increases, because the State takes part in the litigation process. This poses a problem in a context of increasing national health expenditure and public debt. In Italy, with the aim of increasing patient safety, some regions implemented a monitoring system for medical malpractice claims. If properly implemented, this reform should also allow for a reduction in medical malpractice insurance costs. This thesis is organized as follows. Chapter 1 provides a review of the empirical literature on medical malpractice, where studies on the outcomes and merit of claims, costs and defensive medicine are presented. Chapter 2 presents an empirical analysis of medical malpractice claims reaching the Spanish Supreme Court. The focus is on reversal rates for civil and administrative decisions. Administrative decisions appealed by the plaintiff have the highest reversal rates.
The results show a bias in lower administrative courts, which tend to favour the State side. We provide a detailed explanation for these results, which may rely on the career organization of administrative judges. Chapter 3 assesses the predictors of compensation in medical malpractice cases appealed to the Spanish Supreme Court and investigates the amount of damages awarded to patients. The results show horizontal equity between administrative and civil decisions (controlling for observable case characteristics) and vertical inequity (patients suffering more severe injuries tend to receive higher payouts). In order to carry out these analyses, a database of medical malpractice decisions appealed to the Administrative and Civil Chambers of the Spanish Supreme Court from 2006 to 2009, designated the Spanish Supreme Court Medical Malpractice Dataset (SSCMMD), was created. A description of how the SSCMMD was built, and of the Spanish legal system, is presented as well. Chapter 4 includes an empirical investigation of the effect of a monitoring system for medical malpractice claims on insurance premiums. In Italy, some regions adopted this policy in different years, while others did not. The study uses data on insurance premiums from Italian public hospitals for the years 2001-2008. This is a significant difference, as most studies use the insurance company as the unit of analysis. Although insurance premiums rose from 2001 to 2008, the increase was lower in the regions adopting a monitoring system for medical claims. Possible implications of this system are also discussed. Finally, Chapter 5 discusses the main findings, describes possible future research and concludes.
Abstract:
The surface electrocardiogram (ECG) is an established diagnostic tool for the detection of abnormalities in the electrical activity of the heart. The interest of the ECG, however, extends beyond the diagnostic purpose. In recent years, studies in cognitive psychophysiology have related heart rate variability (HRV) to memory performance and mental workload. The aim of this thesis was to analyze the variability of surface ECG derived rhythms at two different time scales: the discrete-event time scale, typical of beat-related features (Objective I), and the “continuous” time scale of the separated sources in the ECG (Objective II), in selected scenarios relevant to psychophysiological and clinical research, respectively. Objective I) Joint time-frequency and non-linear analysis of HRV was carried out, with the goal of assessing psychophysiological workload (PPW) in response to working memory engaging tasks. Results from fourteen healthy young subjects suggest the potential use of the proposed indices in discriminating PPW levels in response to varying memory-search task difficulty. Objective II) A novel source-cancellation method based on morphology clustering was proposed for the estimation of the atrial wavefront in atrial fibrillation (AF) from body surface potential maps. A strong direct correlation between the spectral concentration (SC) of the atrial wavefront and the temporal variability of the spectral distribution was shown in persistent AF patients, suggesting that with higher SC a shorter observation time is required to collect the spectral distribution from which the fibrillatory rate is estimated. This could be time and cost effective in clinical decision-making. The results held for reduced lead sets, suggesting that a simplified setup could also be considered, further reducing the costs. In designing the methods of this thesis, an online signal processing approach was adopted, with the goal of contributing to real-world applicability.
An algorithm for automatic assessment of ambulatory ECG quality, and an automatic ECG delineation algorithm were designed and validated.
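As an illustration of the HRV side of Objective I, the classical frequency-domain step can be sketched on a synthetic tachogram. The RR series below is fabricated (a 0.1 Hz low-frequency and a 0.25 Hz respiratory oscillation on a 0.8 s mean interval); the thesis's joint time-frequency and non-linear indices are not reproduced here, only the plain LF/HF band-power computation that underlies them:

```python
import numpy as np

rng = np.random.default_rng(2)
# hypothetical ~5-minute RR-interval series (s) with LF and HF oscillations
t_beat = np.cumsum(np.full(400, 0.8))            # beat occurrence times (s)
rr = (0.8
      + 0.03 * np.sin(2 * np.pi * 0.10 * t_beat)  # LF component (~0.1 Hz)
      + 0.02 * np.sin(2 * np.pi * 0.25 * t_beat)  # HF / respiratory component
      + 0.005 * rng.standard_normal(400))         # beat-to-beat noise

fs = 4.0  # even resampling frequency (Hz), a common choice for HRV spectra
t_uniform = np.arange(t_beat[0], t_beat[-1], 1.0 / fs)
rr_uniform = np.interp(t_uniform, t_beat, rr) - rr.mean()

# windowed periodogram of the evenly resampled tachogram
spec = np.abs(np.fft.rfft(rr_uniform * np.hanning(len(rr_uniform)))) ** 2
freq = np.fft.rfftfreq(len(rr_uniform), 1.0 / fs)

lf = spec[(freq >= 0.04) & (freq < 0.15)].sum()  # low-frequency band power
hf = spec[(freq >= 0.15) & (freq < 0.40)].sum()  # high-frequency band power
lf_hf_ratio = lf / hf
```

With the amplitudes chosen here the LF band dominates, so `lf_hf_ratio` exceeds one; on real recordings this ratio is one of the standard markers tracked across workload conditions.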
Abstract:
The importance of banks and financial markets relies on the fact that they promote economic efficiency by allocating savings efficiently to profitable investment opportunities. An efficient banking system is a key determinant of financial stability. The theory of market failure forms the basis for understanding financial regulation. Following the detrimental economic and financial consequences in the aftermath of the crisis, academics and policymakers started to focus their attention on the construction of an appropriate regulatory and supervisory framework for the banking sector. This dissertation aims at understanding the impact of regulation and supervision on banks’ performance, focusing on two emerging market economies, Turkey and Russia. It examines the way in which regulations matter for financial stability and banking performance from a law & economics perspective. A review of the theory of banking regulation, particularly as applied to emerging economies, shows that the efficiency of certain solutions regarding banking regulation is open to debate. Therefore, in the context of emerging countries, whether a certain approach is efficient or not will be presented as an empirical question to which this dissertation will try to find an answer.
Abstract:
Modern food systems are characterized by high energy intensity as well as by the production of large amounts of waste, residuals and food losses. This inefficiency has major consequences in terms of GHG emissions, waste disposal and natural resource depletion. The research hypothesis is that residual biomass could contribute to the energy needs of food systems if recovered as an integrated renewable energy source (RES), leading to a significant reduction of the impacts of food systems, primarily in terms of fossil fuel consumption and GHG emissions. In order to assess these effects, a comparative life cycle assessment (LCA) has been conducted to compare two different food systems: a fossil fuel-based system and an integrated system using residuals as RES for self-consumption. The food product under analysis is peach nectar, from cultivation to end-of-life. The aim of this LCA is twofold. On the one hand, it allows an evaluation of the energy inefficiencies related to agro-food waste. On the other hand, it illustrates how the integration of bioenergy into food systems could effectively contribute to reducing this inefficiency. Data about inputs and waste generated have been collected mainly through literature review and databases. Energy balance, GHG emissions (Global Warming Potential) and waste generation have been analyzed in order to identify the relative requirements and contribution of the different segments. An evaluation of the energy “lost” through the different categories of waste made it possible to detail the consequences associated with their management and/or disposal. The results should provide an insight into the impacts associated with inefficiencies within food systems. The comparison provides a measure of the potential reuse of wasted biomass and of the amount of energy recoverable, which could represent a first step towards the formulation of specific policies on the integration of bioenergy for self-consumption.
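The comparative balance at the core of this LCA can be sketched as simple stage-wise bookkeeping. All figures below are hypothetical placeholders, not inventory data from the thesis: a per-stage primary energy demand for the fossil-based chain, a fixed amount of that demand met by recovered residual biomass in the integrated chain, and a single generic emission factor:

```python
# hypothetical primary energy demand per functional unit of peach nectar (MJ)
stages = {
    "cultivation": 1.8,
    "processing": 2.4,
    "packaging": 1.1,
    "distribution": 0.9,
    "end_of_life": 0.3,
}
fossil_total = sum(stages.values())          # fossil fuel-based system

residual_recovery = 1.5                      # MJ met by residual biomass (hypothetical)
integrated_total = fossil_total - residual_recovery   # integrated RES system

ef_fossil = 0.075                            # kg CO2-eq per MJ, hypothetical factor
ghg_fossil = fossil_total * ef_fossil
ghg_integrated = integrated_total * ef_fossil

saving = 1.0 - integrated_total / fossil_total   # relative fossil-energy saving
```

Even this toy version shows the structure of the comparison: the fossil-energy saving and the GHG saving move together because both are driven by the share of demand displaced by the recovered residuals.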
Resumo:
Italy is registering a fast increase in its low-income population. Academics and policy makers consider income inequality a key determinant of low or inadequate healthy food consumption. The objective is therefore to understand how to overcome the agro-food chain barriers to healthy food production, commercialisation and consumption for the population at risk of poverty (ROP) in Italy. The study adopts a market-oriented food chain approach, focusing the research on ROP consumers, processing industries and retailers. The empirical investigation adopts a qualitative methodology with an explorative approach: the actors are investigated through four focus groups with consumers and 27 face-to-face semi-structured interviews with industry and retailer representatives. The results integrate the perceptions of each actor into an overall chain approach. The analysis shows that all agro-food actors lack an adequate understanding of what defines healthy food. Food industries and retailers also show poor awareness of the ROP consumer segment. In addition, they perceive that the high cost of producing healthy food conflicts with the low economic returns expected from the ROP segment. These aspects induce scarce interest in investing in commercialisation strategies for healthy food aimed at ROP consumers. ROP consumers themselves face further notable barriers to adopting healthy diets, caused, among other things, by strongly negative personal attitudes and a lack of motivation. These personal barriers are also negatively influenced by several external socio-economic factors. Solutions to overcome the barriers should rely on improving the internal relations of the agro-food chain in order to identify successful strategies for increasing interest in low-cost healthy food. In particular, the focus should be on improved collaboration on innovation adoption and marketing strategies, considering ROP consumers' preferences and needs.
External political intervention is instead necessary to fill the knowledge and regulatory gaps on healthy food issues.
Resumo:
Climate-change-related impacts, notably coastal erosion, inundation and flooding from sea level rise and storms, will increase in the coming decades, raising the risks for coastal populations. Further recourse to coastal armoring and other engineered defenses to reduce risk will exacerbate threats to coastal ecosystems. Alternatively, the protection services provided by healthy ecosystems are emerging as a key element in climate adaptation and disaster risk management. I examined two distinct approaches to coastal defense on the basis of their ecological and ecosystem conservation value. First, I analyzed the role of coastal ecosystems in providing services for hazard risk reduction. The value of coral reefs in wave attenuation was quantitatively demonstrated using a meta-analysis approach. The results indicate that coral reefs can provide wave attenuation comparable to hard engineered artificial defenses, and at lower cost. Conservation and restoration of existing coral reefs are therefore cost-effective management options for disaster risk reduction. Second, I evaluated the possibility of enhancing the ecological value of artificial coastal defense structures (CDS) as habitats for marine communities. I documented the suitability of CDS to support native, ecologically relevant, habitat-forming canopy algae, exploring the feasibility of enhancing the ecological value of CDS by promoting the growth of desired species. Juveniles of Cystoseira barbata can be successfully transplanted to both natural and artificial habitats, and are affected neither by the lack of surrounding adult algal individuals nor by substratum orientation. Transplantation success was, however, limited by biotic disturbance from macrograzers on CDS compared with natural habitats. Future work should explore the reasons behind the different ecological functioning of artificial and natural habitats, unraveling the factors and mechanisms that cause it.
Understanding the functioning of the systems associated with artificial habitats is key to allowing environmental managers to identify proper mitigation options and to forecast the impact of alternative coastal development plans.
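A central step in such a wave-attenuation meta-analysis is computing a per-study effect and pooling it across studies. A minimal sketch, with entirely invented wave heights and sample sizes (the thesis's actual dataset and effect-size metric are not reproduced here), might look like this:

```python
# Sketch of one meta-analysis step: per-study wave attenuation computed from
# incident (hi) and transmitted (ht) wave heights, pooled as a sample-size-
# weighted mean. All numbers are invented placeholders.
studies = [
    # (incident wave height m, transmitted wave height m, n observations)
    (2.0, 0.6, 12),
    (1.5, 0.5, 8),
    (2.4, 0.9, 15),
]

def attenuation(hi, ht):
    """Fraction of incident wave height removed by the reef."""
    return 1.0 - ht / hi

total_n = sum(n for _, _, n in studies)
mean_attenuation = sum(attenuation(hi, ht) * n
                       for hi, ht, n in studies) / total_n
```

A pooled attenuation of this kind is what makes reefs directly comparable, in a single number, with the performance figures reported for engineered breakwaters.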
Resumo:
This thesis proposes an integrated, holistic approach to the study of neuromuscular fatigue, in order to encompass all the causes and consequences underlying the phenomenon. Starting from the metabolic processes occurring at the cellular level, the reader is guided toward the physiological changes at the motoneuron and motor unit level, and from these to the more general biomechanical alterations. Chapter 1 reports the various definitions of fatigue across several contexts. Chapter 2 studies extensively the electrophysiological changes in terms of motor unit behavior and descending neural drive to the muscle, as well as the biomechanical adaptations induced. Chapter 3 reports a study based on the observation of temporal features extracted from sEMG signals, highlighting the need for a more robust and reliable indicator during fatiguing tasks. Therefore, in Chapter 4, a novel bi-dimensional parameter is proposed. The study of sEMG-based indicators also opened a scenario on the neurophysiological mechanisms underlying fatigue. For this purpose, Chapter 5 presents a protocol designed for the analysis of motor unit-related parameters during prolonged fatiguing contractions. In particular, two methodologies were applied to multichannel sEMG recordings of isometric contractions of the Tibialis Anterior muscle: the state-of-the-art technique for sEMG decomposition, and a coherence analysis on MU spike trains. The importance of a multi-scale approach is finally highlighted in the context of the evaluation of cycling performance, where fatigue is one of the limiting factors. The last chapter of this thesis can be considered a paradigm: physiological, metabolic, environmental, psychological and biomechanical factors influence the performance of a cyclist, and only when all of these are considered together in a novel, integrative way is it possible to derive a clear model and make correct assessments.
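The idea of tracking fatigue in two dimensions can be illustrated with a classic amplitude/frequency pair: windowed RMS amplitude and zero-crossing rate, which typically rise and fall, respectively, as a contraction fatigues. This is only a generic sketch on a synthetic signal, not the thesis's actual bi-dimensional parameter:

```python
import math

FS = 1000  # sampling rate, Hz (assumed)

def rms(window):
    """Root-mean-square amplitude of one signal window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def zero_crossing_rate(window):
    """Sign changes per second, a crude proxy for spectral content."""
    crossings = sum(1 for a, b in zip(window, window[1:]) if a * b < 0)
    return crossings / (len(window) / FS)

def fatigue_track(signal, win=1000):
    """One (rms, zcr) point per non-overlapping window."""
    return [(rms(signal[i:i + win]), zero_crossing_rate(signal[i:i + win]))
            for i in range(0, len(signal) - win + 1, win)]

# Synthetic 10 s "sEMG-like" signal: frequency compresses and amplitude grows
# over time, the typical myoelectric signature of fatigue.
signal = [(1.0 + 0.05 * t) * math.sin(2 * math.pi * (80.0 - 0.5 * t) * t)
          for t in (i / FS for i in range(10 * FS))]
track = fatigue_track(signal)   # amplitude rises, zero-crossing rate falls
```

Plotting these points in the (amplitude, frequency-proxy) plane yields a trajectory whose drift over time is what a bi-dimensional fatigue indicator tries to capture.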
Resumo:
From the late 1980s, the automation of sequencing techniques and the spread of computers gave rise to a flourishing number of new molecular structures and sequences, and to a proliferation of new databases in which to store them. Three computational approaches are presented here, able to analyse the massive amount of publicly available data in order to answer important biological questions. The first strategy studies the incorrect assignment of the first AUG codon in a messenger RNA (mRNA), due to the incomplete determination of its 5' end sequence. An extension of the mRNA 5' coding region was identified in 477 human loci, out of all known human mRNAs analysed, using an automated expressed sequence tag (EST)-based approach. Proof-of-concept confirmation was obtained by in vitro cloning and sequencing for GNB2L1, QARS and TDP2, and the consequences for functional studies are discussed. The second approach analyses codon bias, the phenomenon in which distinct synonymous codons are used with different frequencies, and, after integration with a gene expression profile, estimates the total number of codons present across all expressed mRNAs (named here the "codonome value") in a given biological condition. Systematic analyses across different pathological and normal human tissues and multiple species show a surprisingly tight correlation between the codon bias and the codonome bias. The third approach is used to study the expression of genes implicated in human autism spectrum disorder (ASD). ASD-implicated genes sharing microRNA response elements (MREs) for the same microRNA are co-expressed in brain samples from healthy and ASD-affected individuals. The differential expression of a recently identified long non-coding RNA, which has four MREs for the same microRNA, could disrupt the equilibrium of this network, but further analyses and experiments are needed.
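The codonome idea described above — codon counts per coding sequence, weighted by expression and summed over the transcriptome — can be sketched as follows. The sequences and expression values are invented toy examples, and the thesis's exact weighting scheme is not reproduced:

```python
from collections import Counter

def codon_counts(cds):
    """Count codons in an in-frame coding sequence (length multiple of 3)."""
    assert len(cds) % 3 == 0, "CDS length must be a multiple of 3"
    return Counter(cds[i:i + 3] for i in range(0, len(cds), 3))

def codonome(transcriptome):
    """Expression-weighted codon counts summed across all expressed mRNAs."""
    total = Counter()
    for cds, expression in transcriptome:
        for codon, n in codon_counts(cds).items():
            total[codon] += n * expression
    return total

# Toy transcriptome: (coding sequence, expression level)
toy = [("ATGGCTGCTTAA", 10),   # highly expressed gene
       ("ATGGCGTAA", 1)]       # lowly expressed gene
profile = codonome(toy)
```

In this toy profile the alanine codon GCT dominates GCG purely because of the expression weighting, which is exactly the effect that distinguishes the codonome from plain per-gene codon bias.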
Resumo:
The research activity of this doctoral thesis concerned complex tribological systems of industrial interest, for which the dominant wear mechanisms were identified by means of failure analysis. For each of them, improved solutions were studied on the basis of laboratory tribological tests. Traditional quenched and tempered steels are widely used in the manufacture of track links for earth-moving machines. The possibility of using the new medium-carbon microalloyed steels would allow a considerable simplification of the production cycle and cost benefits; part of the thesis therefore concerned the study of the tribological behavior of these steels. The tribological study of hydraulic motors was also addressed, with the aim of improving their wear resistance and hence their service life. Bench tests were performed to evaluate the main wear mechanisms, together with laboratory tests designed to reproduce the real operating conditions, evaluating surface modification techniques capable of reducing component wear. Several types of Thermal Spray coatings were analyzed, in terms of both deposition method (AFS-APS) and deposited metal alloys (Ni, Mo, Cu/Al). Finally, tribological contacts in the packaging sector were characterized, where the use of austenitic stainless steels is in some cases mandatory. AISI 316L stainless steel is widely used in applications requiring high corrosion resistance, but its low wear resistance limits its use in tribological applications. In this context, a tribological problem concerning automatic machines for the dosing of pharmaceutical powders was analyzed.
Alternative solutions were studied, involving both the complete replacement of the materials of the tribological pair and the identification of innovative surface modification techniques, such as low-temperature carburizing, also followed by the deposition of a hydrogenated amorphous carbon (a-C:H) coating.
Resumo:
This research primarily represents a contribution to the lobbying regulation research arena. It introduces an index which, for the first time, attempts to measure the direct compliance costs of lobbying regulation. The Cost Indicator Index (CII) offers a brand-new platform for qualitative and quantitative assessment of adopted lobbying laws and of proposals for such laws, in both the comparative and the sui generis dimension. The CII is not only the sole new tool introduced in the last decade; it is the only tool available for comparative assessment of the costs of lobbying regulations. Besides this contribution, the research introduces an additional theoretical framework for complementary qualitative analysis of lobbying laws. The Ninefold theory allows a more structured assessment and classification of lobbying regulations, by indication of both benefits and costs. Lastly, this research introduces the Cost-Benefit Labels (CBL). These labels might improve an ex-ante lobbying regulation impact assessment procedure, primarily in the sui generis perspective. In its final part, the research focuses on four South East European countries (Slovenia, Serbia, Montenegro and Macedonia), brings them into the discussion for the first time, and calculates their CPI and CII scores. The special focus of the application is on Serbia, whose proposal for the Law on Lobbying is extensively analysed in qualitative and quantitative terms, taking into consideration the country's specific political and economic circumstances. Although the results obtained are of an indicative nature, the CII will probably find its place within the academic and policymaking arena, and will hopefully contribute to a better understanding of lobbying regulations worldwide.
Resumo:
This dissertation seeks to improve the use of direct democracy in order to minimize agency costs. It first explains why insights from corporate governance can help to improve constitutional law, and then identifies the relevant insights that can make direct democracy more efficient. To accomplish this, the dissertation examines a number of questions. What are the key similarities between corporate and constitutional law? Do these similarities create agency problems that are similar enough for a comparative analysis to yield valuable insights? Once the utility of corporate governance insights is established, the dissertation answers two further questions. Are initiatives necessary to minimize agency costs if referendums are already provided for? And must the results of direct democracy be binding in order for agency costs to be minimized?