15 results for level of fault-tolerance
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
In this work we study the relation between crustal heterogeneities and complexities in fault processes. The first kind of heterogeneity considered involves the concept of asperity. The presence of an asperity in the hypocentral region of the M = 6.5 earthquake of June 17, 2000 in the South Iceland Seismic Zone was invoked to explain the change in seismicity pattern before and after the mainshock: in particular, the spatial distribution of foreshock epicentres trends NW, while the strike of the main fault is N7°E and aftershocks trend accordingly; foreshock depths were typically greater than average aftershock depths. A model is devised which simulates the presence of an asperity in terms of a spherical inclusion within a softer elastic medium in a transform domain, with a deviatoric stress field imposed at remote distances (compressive NE-SW, tensile NW-SE). An isotropic compressive stress component is induced outside the asperity in the direction of the compressive stress axis, and a tensile component in the direction of the tensile axis; as a consequence, fluid flow is inhibited in the compressive quadrants while it is favoured in the tensile quadrants. Within the asperity the isotropic stress vanishes but the deviatoric stress increases substantially, without any significant change in the principal stress directions. Hydrofracture processes in the tensile quadrants and viscoelastic relaxation at depth may contribute to lowering the effective rigidity of the medium surrounding the asperity. According to the present model, foreshocks may be interpreted as induced, close to the brittle-ductile transition, by high-pressure fluids migrating upwards within the tensile quadrants; this process increases the deviatoric stress within the asperity, which eventually fails, becoming the hypocenter of the mainshock on the optimally oriented fault plane. In the second part of our work we study the complexities induced in fault processes by the layered structure of the crust. In the first model proposed we study the case in which fault bending takes place in a shallow layer. The problem can be addressed in terms of a deep vertical planar crack interacting with a shallower inclined planar crack. An asymptotic study of the singular behaviour of the dislocation density at the interface reveals that the density distribution has an algebraic singularity at the interface of degree ω between -1 and 0, depending on the dip angle of the upper crack section and on the rigidity contrast between the two media. From the welded boundary condition at the interface between media 1 and 2, a stress drop discontinuity condition is obtained which can be fulfilled if the stress drop in the upper medium is lower than required for a planar through-going surface: as a corollary, a vertically dipping strike-slip fault at depth may cross the interface with a sedimentary layer, provided that the shallower section is suitably inclined (fault "refraction"); this result has important implications for our understanding of the complexity of the fault system in the SISZ; in particular, we may understand the observed offset of secondary surface fractures with respect to the strike direction of the seismic fault. The results of this model also suggest that further fractures can develop in the opposite quadrant, and so a second model describing fault branching in the upper layer is proposed.
Like the previous model, this model can be applied only when the stress drop in the shallow layer is lower than the value prescribed for a vertical planar crack surface. Alternative solutions must be considered if the stress drop in the upper layer is higher than in the other layer, which may be the case when anelastic processes relax deviatoric stress in layer 2. In such a case one through-going crack cannot fulfil the welded boundary conditions and unwelding of the interface may take place. We have solved this problem within the theory of fracture mechanics, employing the boundary element method. The fault terminates against the interface in a T-shaped configuration, whose segments interact with each other: the lateral extent of the unwelded surface can be computed in terms of the main fault parameters, and the resulting stress field in the shallower layer can be modelled. A wide stripe of high and nearly uniform shear stress develops above the unwelded surface, whose width is controlled by the lateral extension of unwelding. Secondary shear fractures may then open within this stripe, according to the Coulomb failure criterion, and the depth of open fractures opening in mixed mode may be computed and compared with the well-studied fault complexities observed in the field. In the absence of the T-shaped décollement structure, the stress concentration above the seismic fault would be much higher and narrower, and difficult to reconcile with observations.
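For reference, the abstract above invokes the Coulomb failure criterion for the opening of secondary shear fractures; the criterion itself is not written out in the abstract, but its standard change-of-stress form (quoted here from general usage, as an assumption about the convention used, not from the thesis itself) is

```latex
\Delta\mathrm{CFS} \;=\; \Delta\tau \;+\; \mu'\,\Delta\sigma_n
```

where Δτ is the change in shear stress resolved on the candidate fracture plane, Δσ_n the change in normal stress (positive when the plane is unclamped, in this sign convention), and μ' the effective friction coefficient; failure is promoted where ΔCFS > 0. The thesis may use an absolute-stress form or a different sign convention.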
Abstract:
The research was carried out to investigate the main elements of the salt stress response in two strawberry cultivars, Elsanta and Elsinore. Plants were grown under 0, 10, 20 and 40 mM NaCl for 80 days. Salinity dramatically affected growth in both cultivars, although Elsinore appeared to be more impaired than Elsanta. Moreover, a significant reduction of leaf photosynthesis, evaporation, and stomatal conductance was recorded 24 h after the stress was applied in both cultivars, whereas physiological functions were differentially restored after acclimation. However, cv. Elsanta had more efficient leaf gas exchange and water status than cv. Elsinore. In general, fruit yield was reduced upon salinization, whereas fruit quality, in terms of fruit taste, aroma, appearance, total soluble solids and titratable acidity, did not change and was even enhanced under moderate salinity. On the other hand, fruit quality was impaired under severe salt stress. Fruit antioxidant content and antioxidant capacity were enhanced significantly by increasing salt concentration in both cultivars. The oxidative effects of the stress were characterized by measurements of some enzymatic activities and of lipid peroxidation. Consistently, an increase in superoxide dismutase (SOD), catalase (CAT) and peroxidase (POD) activities and a higher content of proline and soluble proteins were observed in cv. Elsinore compared with cv. Elsanta. The increase coincided with a decrease in lipid peroxidation. The research confirmed that, although strawberry cultivars are sensitive to salinity, differences between cultivars exist; the experiment revealed that cv. Elsanta could withstand severe salt stress, which was lethal to cv. Elsinore. The parameters measured in the previous experiment were then proposed as early screening tools for the salt stress response in nine strawberry genotypes. The results showed that, whereas Elsanta and Elsinore showed a lower dry weight reduction at 40 mM NaCl, Naiad, Kamila, and Camarosa were the least salt-sensitive cultivars among those screened.
Abstract:
According to much evidence, observing objects activates two types of information: structural properties, i.e., the visual information about the structural features of objects, and function knowledge, i.e., the conceptual information about their skilful use. Many studies so far have focused on the role played by these two kinds of information during object recognition and on their neural underpinnings. However, to the best of our knowledge, no study so far has focused on the different activation of this information (structural vs. function) during object manipulation and conceptualization, depending on the age of participants and on the level of object familiarity (familiar vs. non-familiar). Therefore, the main aim of this dissertation was to investigate how actions and concepts related to familiar and non-familiar objects may vary across development. To pursue this aim, four studies were carried out. A first study led to the creation of the Familiar and Non-Familiar Stimuli Database, a set of everyday objects classified by Italian pre-schoolers, school-age children, and adults, useful for verifying how object knowledge is modulated by age and frequency of use. A parallel study demonstrated that factors such as sociocultural dynamics may affect the perception of objects. Specifically, data on familiarity, naming, function, use and frequency of use of the objects in the Familiar and Non-Familiar Stimuli Database were collected from Dutch and Croatian children and adults. The last two studies on object interaction and language provide further evidence in support of the literature on affordances and on the link between affordances and the cognitive process of language from a developmental point of view, supporting the perspective of situated cognition and emphasizing the crucial role of human experience.
Abstract:
This doctoral thesis presents a project carried out in secondary schools located in the city of Ferrara with the primary objective of demonstrating the effectiveness of an intervention based on Well-Being Therapy (Fava, 2016) in reducing alcohol use and improving lifestyles. In the first part (chapters 1-3), an introduction to risky behaviors and unhealthy lifestyles in adolescence is presented, followed by an examination of the phenomenon of binge drinking and of the concept of psychological well-being. In the second part (chapters 4-6), the experimental study is presented. A three-arm cluster randomized controlled trial including three test periods was implemented. The study involved eleven classes that were randomly assigned to receive a well-being intervention (WBI), a lifestyle intervention (LI) or no intervention (NI). Results were analyzed by linear mixed models and mixed-effects logistic regression with the aim of testing the efficacy of WBI in comparison with LI and NI. The AUDIT-C total score increased more in NI than in WBI (p=0.008) and LI (p=0.003) at 6 months. The odds of being classified as an at-risk drinker were lower in WBI (OR 0.01; 95%CI 0.01–0.14) and LI (OR 0.01; 95%CI 0.01–0.03) than in NI at 6 months. The odds of using e-cigarettes at 6 months (OR 0.01; 95%CI 0.01–0.35) and cannabis at post-test (OR 0.01; 95%CI 0.01–0.18) were lower in WBI than in NI. Sleep hours at night decreased more in NI than in WBI (p = 0.029) and LI (p = 0.006) at 6 months. Internet addiction scores decreased more in WBI (p = 0.003) and LI (p = 0.004) at post-test in comparison with NI. Conclusions about the obtained results, limitations of the study, and future implications are discussed. In the seventh chapter, the data of the project collected during the pandemic are presented and compared with those from recent literature.
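For readers unfamiliar with the analysis named above, a minimal sketch of fitting a linear mixed model with classes as the clustering unit (as in a cluster-randomized trial) is shown below. The data are simulated and the column names (audit_c, time, arm, class_id) are hypothetical placeholders, not the study's actual dataset or code.

```python
# Minimal sketch: linear mixed model with a random intercept per class,
# mimicking a three-arm cluster-randomized trial. All data are simulated
# placeholders; column names are hypothetical, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for class_id in range(11):                      # eleven classes, as in the abstract
    arm = ["WBI", "LI", "NI"][class_id % 3]
    class_effect = rng.normal(scale=0.5)        # cluster-level random effect
    for student in range(20):
        for time in (0, 1, 2):                  # baseline, post-test, 6-month follow-up
            audit_c = 2 + 0.3 * time * (arm == "NI") + class_effect + rng.normal()
            rows.append({"audit_c": audit_c, "time": time, "arm": arm, "class_id": class_id})
df = pd.DataFrame(rows)

# Fixed effects: time, trial arm and their interaction; random intercept for each class.
result = smf.mixedlm("audit_c ~ time * arm", data=df, groups=df["class_id"]).fit()
print(result.summary())
```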
1° level of automation: the effectiveness of adaptive cruise control on driving and visual behaviour
Abstract:
The research activities allowed the analysis of driver assistance systems, namely Advanced Driver Assistance Systems (ADAS), in relation to road safety. The study is structured according to several evaluation steps, related to specific on-site tests carried out with different samples of users, grouped according to their driving experience with the ACC. The evaluation steps concern:
• The testing mode and the choice of suitable instrumentation to detect the driver's behaviour in relation to the ACC.
• The analysis modes and the outputs to be obtained, i.e.:
- distribution of attention and inattention;
- mental workload;
- the Perception-Reaction Time (PRT), the Time To Collision (TTC) and the Time Headway (TH).
The main purpose is to assess the interaction between vehicle drivers and ADAS, highlighting the inattention and the variation of workload they induce in the driving task. The research project considered the use of a system for monitoring visual behavior (ASL Mobile Eye-XG - ME), a GPS-based system that recorded the kinematic data of the vehicle (Racelogic Video V-BOX) and a tool for reading brain activity (Electroencephalographic System - EEG). During the analytical phase, a second important research objective emerged: the creation of a graphical interface that would allow the frame count limit to be exceeded, making the labeling of the driver's points of view faster and more effective. The results show a complete and exhaustive picture of the vehicle-driver interaction. It has been possible to highlight the main sources of criticality related to the user and the vehicle, in order to concretely reduce the accident rate. In addition, the use of mathematical-computational methodologies for the analysis of experimental data allowed the optimization and verification of the analytical processes with neural networks, enabling an effective comparison between the manual and automatic methodologies.
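The Time To Collision (TTC) and Time Headway (TH) indicators listed above have standard kinematic definitions; a minimal sketch of how they can be computed from gap and speed data is shown below, assuming the thesis uses these conventional definitions (variable names are illustrative, not the instrumentation's actual output format).

```python
# Minimal sketch of the standard kinematic definitions of TTC and TH.
# gap_m: distance to the lead vehicle [m]; speeds in [m/s].

def time_headway(gap_m: float, v_follow_ms: float) -> float:
    """TH = gap / follower speed; undefined (infinite) when the follower is stopped."""
    return gap_m / v_follow_ms if v_follow_ms > 0 else float("inf")

def time_to_collision(gap_m: float, v_follow_ms: float, v_lead_ms: float) -> float:
    """TTC = gap / closing speed; infinite when the gap is not closing."""
    closing = v_follow_ms - v_lead_ms
    return gap_m / closing if closing > 0 else float("inf")

# Example: 25 m gap, follower at 20 m/s, lead vehicle at 15 m/s.
print(time_headway(25.0, 20.0))              # 1.25 s
print(time_to_collision(25.0, 20.0, 15.0))   # 5.0 s
```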
Abstract:
The wide diffusion of cheap, small, and portable sensors integrated in an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly and promptly analyzed, can be exploited to build new intelligent and pervasive services that have the potential to improve people's quality of life in a variety of domains such as entertainment, health-care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality-of-service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and by identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and we present Quasit, its prototype implementation, offering a scalable and extensible platform that can be used by researchers to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, by performing a large experimental study on the prototype of our novel LAAR dynamic replication technique. Our modeling, prototyping, and experimental work demonstrates that, by providing data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
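The abstract's idea of weaker, application-aware quality guarantees can be illustrated with a toy policy that actively replicates only the stream operators whose loss would hurt the application's declared quality the most, within a resource budget. This is only a sketch of the general idea of partial fault-tolerance; it is not the actual LAAR algorithm or the Quasit API, and all names and weights are invented.

```python
# Toy sketch of partial fault-tolerance: replicate only the most "valuable"
# operators until a replication budget is exhausted. Not the LAAR algorithm
# or the Quasit API; names, costs and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class Operator:
    name: str
    quality_loss_if_failed: float  # application-declared impact of losing this operator
    replication_cost: float        # extra resources needed for an active replica

def choose_replicas(operators: list[Operator], budget: float) -> set[str]:
    """Greedily pick operators with the best impact/cost ratio within the budget."""
    chosen: set[str] = set()
    remaining = budget
    ranked = sorted(operators,
                    key=lambda o: o.quality_loss_if_failed / o.replication_cost,
                    reverse=True)
    for op in ranked:
        if op.replication_cost <= remaining:
            chosen.add(op.name)
            remaining -= op.replication_cost
    return chosen

ops = [Operator("alert-detector", 0.9, 2.0),
       Operator("aggregator", 0.5, 1.0),
       Operator("logger", 0.1, 1.0)]
print(choose_replicas(ops, budget=3.0))  # {'aggregator', 'alert-detector'}
```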
Abstract:
The Peer-to-Peer network paradigm is drawing the attention of both end users and researchers for its features. P2P networks shift from the classic client-server approach to a high level of decentralization, where there is no central control and all the nodes should be able not only to request services, but to provide them to other peers as well. While on one hand such a high level of decentralization might lead to interesting properties like scalability and fault tolerance, on the other hand it implies many new problems to deal with. A key feature of many P2P systems is openness, meaning that everybody is potentially able to join a network with no need for subscription or payment systems. The combination of openness and lack of central control makes it feasible for a user to free-ride, that is, to increase their own benefit by using services without allocating resources to satisfy other peers' requests. One of the main goals when designing a P2P system is therefore to achieve cooperation between users. Given the nature of P2P systems, based on simple local interactions of many peers having partial knowledge of the whole system, an interesting way to achieve desired properties at the system scale might consist in obtaining them as emergent properties of the many interactions occurring at the local node level. Two methods are typically used to face the problem of cooperation in P2P networks: 1) engineering emergent properties when designing the protocol; 2) studying the system as a game and applying Game Theory techniques, especially to find Nash Equilibria in the game and to reach them, making the system stable against possible deviant behaviors. In this work we present an evolutionary framework to enforce cooperative behaviour in P2P networks that is an alternative to both methods mentioned above. Our approach is based on an evolutionary algorithm inspired by computational sociology and evolutionary game theory, consisting in having each peer periodically try to copy another peer which is performing better (a minimal sketch of this copy rule is given after this abstract). The proposed algorithms, called SLAC and SLACER, draw inspiration from tag systems originating in computational sociology; the main idea behind the algorithms is that low-performance nodes copy high-performance ones. The algorithm is run locally by every node and leads to an evolution of the network both from the topology and from the nodes' strategy point of view. Initial tests with a simple Prisoner's Dilemma application show how SLAC is able to bring the network to a state of high cooperation independently of the initial network conditions. Interesting results are obtained when studying the effect of cheating nodes on the SLAC algorithm. In fact, in some cases selfish nodes rationally exploiting the system for their own benefit can actually improve system performance from the cooperation-formation point of view. The final step is to apply our results to more realistic scenarios. We focused our efforts on studying and improving the BitTorrent protocol. BitTorrent was chosen not only for its popularity but also because it has many points in common with the SLAC and SLACER algorithms, ranging from the game-theoretical inspiration (tit-for-tat-like mechanism) to the swarm topology.
We discovered fairness, meant as the ratio between uploaded and downloaded data, to be a weakness of the original BitTorrent protocol, and we drew inspiration from the knowledge of cooperation formation and maintenance mechanisms derived from the development and analysis of SLAC and SLACER to improve fairness and tackle free-riding and cheating in BitTorrent. We produced an extension of BitTorrent called BitFair that has been evaluated through simulation and has shown its ability to enforce fairness and tackle free-riding and cheating nodes.
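The evolutionary copy rule summarized in the abstract (a node periodically compares itself with a randomly chosen peer and, if it is performing worse, copies the better performer's strategy and links, with occasional mutation) can be sketched as follows. This is a simplified illustration of the idea behind SLAC as described above, not the published algorithm's exact pseudocode; the game, utilities and parameters are invented.

```python
# Simplified sketch of the SLAC-style evolutionary rule: a node compares its
# utility with a random peer and, if worse, copies the peer's strategy and
# neighbourhood (linking to it), with a small chance of mutation.
import random

def slac_step(nodes, utilities, strategies, links, mutation_rate=0.01):
    """One asynchronous update of a randomly chosen node (illustrative only)."""
    i = random.choice(nodes)
    j = random.choice([n for n in nodes if n != i])
    if utilities[j] > utilities[i]:
        strategies[i] = strategies[j]        # copy the better peer's strategy
        links[i] = set(links[j]) | {j}       # adopt the peer's neighbourhood and link to it
    if random.random() < mutation_rate:
        strategies[i] = random.choice(["cooperate", "defect"])    # mutate strategy
    if random.random() < mutation_rate:
        links[i] = {random.choice([n for n in nodes if n != i])}  # rewire to a random node

# Tiny illustrative state: four nodes with made-up utilities. In a real simulation
# the utilities would be recomputed from game payoffs after every step.
nodes = [0, 1, 2, 3]
utilities = {0: 0.2, 1: 0.9, 2: 0.5, 3: 0.1}
strategies = {n: random.choice(["cooperate", "defect"]) for n in nodes}
links = {n: {(n + 1) % 4} for n in nodes}
for _ in range(100):
    slac_step(nodes, utilities, strategies, links)
print(strategies, links)
```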
Abstract:
Recently, an ever-increasing degree of automation has been observed in most industrial automation processes. This increase is motivated by the demand for systems with high performance in terms of quality of the products/services generated, productivity, efficiency and low costs in design, realization and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as constituted by a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda, buy boxed products such as food or cigarettes, and so on. Another indication of their complexity is that the consortium of machine producers has estimated around 350 types of manufacturing machines. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; in particular, a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the “packaging valley”. Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often, this is the case in large-scale systems organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still attributable to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties associated with it. Apart from the activity inherent in the automation of the machine cycles, the supervisory system is called upon to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and to the different operational scenarios; obtaining a high quality of the final product through the verification of the correctness of the processing; guiding the operator in charge of the machine to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; managing in real time information on diagnostics, as a support to the maintenance operations of the machine. The facilities that designers can directly find on the market, in terms of software component libraries, in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, by focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. Traditionally, in the field of analog and digital control, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different way, usually very “unstructured”. No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. Probably this difference is due to the different “dynamical framework” of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been adopting this approach, as testified by IEC standards such as IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years it has been possible to note considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model-Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also deal with other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems.
This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS, together with reliable mechanical elements, an increasing number of electronic devices are also present, which are more vulnerable by their own nature. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of a processing unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults in the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and, if necessary, reconfiguring the control system so that faults are tolerated. On this topic, important improvements to formal verification of logic control, fault diagnosis and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2 a survey of the state of the software engineering paradigm applied to industrial automation is discussed. Chapter 3 presents an architecture for industrial automated systems based on the new concept of Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader in understanding some crucial points of Chapter 5; Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approach presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
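The shift from time-driven to discrete-event dynamics mentioned in this abstract can be made concrete with a toy example: a plant modelled as a finite automaton over events, against which an observed event trace is checked for admissibility. The states, events and transitions below are invented for illustration and do not come from the thesis.

```python
# Toy discrete-event model: a finite automaton over plant events, used to check
# whether an observed event trace is admissible. Purely illustrative; the event
# set and states are invented, not taken from the thesis.
transitions = {
    ("idle", "start"): "working",
    ("working", "done"): "idle",
    ("working", "fault"): "faulty",
    ("faulty", "reset"): "idle",
}

def run(trace, state="idle"):
    """Replay an event trace; return the final state, or None if a transition is missing."""
    for event in trace:
        state = transitions.get((state, event))
        if state is None:
            return None  # trace not admissible in the model
    return state

print(run(["start", "fault", "reset", "start", "done"]))  # 'idle'
print(run(["start", "start"]))                            # None (inadmissible)
```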
Abstract:
The study defines a new farm classification and identifies arable land management practices. These aspects and several indicators are taken into account to estimate the sustainability level of farms, for both organic and conventional regimes. The data source is the Italian Farm Accountancy Data Network (RICA) for the years 2007-2011, which samples structural and economic information. Environmental data have been added to better describe the farm context. The new farm classification describes holdings by general information and farm structure. The general information comprises the adopted regime and the farm location in terms of administrative region, slope and phyto-climatic zone. The farm structure describes the presence of the main productive processes and land covers, which are recorded in the FADN database. The farms, grouped by homogeneous farm structure or farm typology, are evaluated in terms of sustainability. The farm model MAD has been used to estimate a list of indicators. They mainly describe the environmental and economic areas of sustainability. Finally, arable land is taken into account to identify arable land management and crop rotations. Each arable plot has been classified by crop pattern. Crop rotation management has then been analysed by spatial and temporal approaches. The analysis reports high variability within regimes. The farm structure influences indicator levels more than the regime does, and it is not always possible to compare the two regimes. However, some differences between organic and conventional agriculture have been found. Organic farm structures show different frequencies and geographical locations than conventional ones. Different connections between arable land and farm structures have also been identified.
Abstract:
This thesis is focused on the paleomagnetic rotation pattern inside the deforming zone of strike-slip faults, and on the kinematics and geodynamics describing it. The paleomagnetic investigation carried out along both the Liquiñe-Ofqui Fault Zone (LOFZ) and the fore-arc sliver (38°-42°S, southern Chile) revealed an asymmetric rotation pattern. East of the LOFZ and adjacent to it, rotations are up to 170° clockwise (CW) and fade out ~10 km east of the fault. West of the LOFZ at 42°S (Chiloé Island) and around 39°S (Villarrica domain), systematic counterclockwise (CCW) rotations have been observed, while at 40°-41°S (Ranco-Osorno domain) and adjacent to the LOFZ, CW rotations reach up to 136° before evolving to CCW rotations at ~30 km from the fault. These data suggest a direct relation with subduction interface plate coupling. Zones of high coupling yield a wide deforming zone (~30 km) west of the LOFZ characterized by CW rotations. Low coupling implies a weak LOFZ and a fore-arc dominated by CCW rotations related to NW-sinistral fault kinematics. The rotation pattern is consistent with quasi-continuous crustal kinematics. However, it seems unlikely that lower crustal flow can control block rotation in the upper crust, considering the cold and thick fore-arc crust. I suggest that rotations are a consequence of forces applied directly on both the block edges and along the main fault, within the upper crust. Farther south, in the Austral Andes (54°S), I measured the anisotropy of magnetic susceptibility (AMS) of 22 Upper Cretaceous to Upper Eocene sites from the internal domains of the Magallanes fold-thrust belt. The data document continuous compression from the Early Cretaceous until the Late Oligocene. AMS data also show that the tectonic inversion of Jurassic extensional faults during the Late Cretaceous compressive phase may have controlled the Cenozoic kinematic evolution of the Magallanes fold-thrust belt, yielding slip partitioning.
Abstract:
The challenging requirements set on new full-composite aeronautical structures are mostly related to the demonstration of the damage tolerance capability of their primary structures, required by the airworthiness authorities. While composite structures inherently demonstrate exceptional fatigue properties, in real-life working conditions a number of external factors can lead to impact damage, drastically reducing their fatigue resistance due to fiber delamination, disbonding or breakage. This PhD thesis aims to contribute to a better understanding of the behavior of primary composite aeronautical structures after near-edge impacts, which are inevitable during the service life of an aircraft. The behavior of CFRP structures after impacts is only one small piece of the big picture, which is the certification of CFRP-built aircraft, where several other parameters need to be evaluated in order to fulfill the airworthiness requirements. These parameters are also discussed in this thesis in order to give a better understanding of the complex task of CFRP structure certification, in which the behavior of the impacted structure plays an important role. An experimental and numerical campaign was carried out in order to determine the level of delamination damage in CFRP specimens after near-edge impacts. By calibrating the numerical model with experimental data, it was possible, for different configurations and energy levels, to predict the extent of delamination in a CFRP structure and to estimate its residual static strength using a very simple but robust technique. The original contribution of this work to the analysis of CFRP structures is the creation of a model that could be applicable to a wide range of thicknesses and stacking sequences of CFRP structures, thus potentially being suitable for industrial application as well.
Abstract:
Heat stress negatively affects wheat performance during its entire cycle, particularly during the reproductive stage. In view of climate change and the prediction of a continued increase in temperature in the near future, it is urgent to concentrate efforts on discovering novel genetic sources able to improve the resilience of wheat to heat stress. In this direction, this study comprised two different experiments in durum wheat to identify novel QTLs suitable to be applied in marker-assisted selection for heat tolerance. Chlorophyll fluorescence (ChlF) is a valuable indicator of plant response to environmental changes, allowing a detailed assessment of PSII activity thanks to its non-invasive measurement and suitability for high-throughput phenotyping. In the first study (Chapter 2), the Light-Induced Fluorescence Transient (LIFT) method was used to collect ChlF data to map QTLs for ChlF-related traits during the vegetative growth stage in durum wheat under heat stress conditions. Our results provide evidence that LIFT consistently measures ChlF at the throughput level required for a Genome-Wide Association Study (GWAS), combined with the high accuracy needed to identify genomic regions affecting PSII activity. The 50 QTLs identified for ChlF-related traits under heat stress mostly clustered into five chromosome hotspots unrelated to phenology, a feature that makes these QTLs a valuable asset for marker-assisted breeding programs across different latitudes. In the second study (Chapter 3), a set of 183 accessions suitable for GWAS was exposed to optimal and high temperature during two crop seasons under field conditions. Important agronomic traits were evaluated in order to identify valuable QTLs for grain yield (GY) and its components. The GWAS analysis identified several QTLs in the single years as well as in the joint analysis. Of the total QTLs identified, 13 QTL clusters can be highlighted as affecting heat tolerance across different years and/or different traits.
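A GWAS of the kind described above essentially tests, marker by marker, whether genotype explains variation in a phenotype. A deliberately simplified single-marker scan is sketched below on simulated data; it omits the kinship and population-structure corrections that a real durum wheat GWAS would include, and all names and figures are placeholders, not the study's data.

```python
# Deliberately simplified single-marker association scan: one linear regression
# of phenotype on genotype per marker, on simulated data. A real GWAS would also
# correct for population structure and kinship; names here are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_plants, n_markers = 183, 500            # panel size echoes the abstract; markers invented
geno = pd.DataFrame(rng.integers(0, 3, size=(n_plants, n_markers)),
                    columns=[f"snp_{k}" for k in range(n_markers)])
trait = 0.8 * geno["snp_42"] + rng.normal(size=n_plants)   # one truly associated marker

pvalues = {}
for marker in geno.columns:
    X = sm.add_constant(geno[marker].astype(float))
    pvalues[marker] = sm.OLS(trait, X).fit().pvalues[marker]

print(pd.Series(pvalues).sort_values().head(5))  # snp_42 should rank first
```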
Abstract:
In chronic pain, opioids represent the gold-standard analgesics, but their use is hampered by the development of several side effects, such as analgesic tolerance and opioid-induced hyperalgesia. Evidence has shown that many molecular mechanisms (changes in opioid receptors, neurotransmitter release, and glia/microglia activation) are involved in their appearance, as well as in chronic pain. Recently, a crucial role has been proposed for oxidative stress and the proteasome in chronic pain and in treatment-related side effects. To better elucidate these aspects, the aim of this PhD thesis was to investigate the effects of opioids on cell oxidative stress, the antioxidant enzymatic machinery, and proteasome expression and activity in vitro. In addition, the involvement of the proteasome in the development of chronic pain conditions was investigated in vivo, using an experimental model of oxaliplatin-induced neuropathy (OXAIN). Data showed that morphine, fentanyl, buprenorphine and tapentadol alter ROS production differently. The ROS-increasing effect of morphine is not shared by the other opioids, suggesting that the different pharmacological profiles could influence this parameter. Moreover, these drugs produced different alterations of the β2 trypsin-like and β5 chymotrypsin-like activities. In fact, while morphine and fentanyl increased the proteolytic activity after prolonged exposure, a different picture was observed for buprenorphine and tapentadol, suggesting that the level of MOR agonism could be strongly related to proteasome activation. In vivo studies revealed that rats treated with oxaliplatin showed a significant increase in β5 activity in the thalamus (TH) and somatosensory cortex (SSCx). Moreover, a selective up-regulation of β5 and LMP7 subunit gene expression was assessed in the SSCx. Furthermore, our study revealed that oprozomib, a selective β5 inhibitor, normalized the up-regulation of spinal prodynorphin gene expression induced by oxaliplatin, and reversed mechanical/thermal allodynia and mechanical hyperalgesia in oxaliplatin-treated rats. These results underline the role of the proteasome in OXAIN and suggest new pharmacological targets to counteract it.
Abstract:
There are various methods to analyse waste, which differ from each other according to the level of detail of the composition. Waste composed of plastic and used for packaging, for example, can be classified by the chemical composition of the polymer used for the specific product. At a more basic level, before dividing waste according to the specific chemical material of which it is composed, it is possible and also important to classify it according to the material category. So, while the secondary aim is to consider the particular polymer that constitutes a plastic waste, or what kind of natural polymer composes a specific wooden waste, the first aim is to classify the material category of the product that makes up the waste: whether it is made of wood, plastic, glass, metal, or organic matter. There are no specific instruments or chemical tests to make this subdivision, only manual recognition of the material that makes up the product or waste. The first step of this study is the recognition of the materials of which the waste is composed; the second is the quantification of separately collected and unsorted waste produced in the area under study; the third is a mass balance of the waste fractions sent for recovery, in order to obtain information on the quantities that can be effectively recovered and made ready for a new life cycle as raw material; the fourth and last step is an environmental assessment that provides information on the environmental cost of the recovery process. This process scheme is applied to various specific kinds of separately collected waste generated in a specific area, with the aim of finding an analysis model applicable to other areas in order to improve knowledge of recovery technologies.
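The mass-balance step described above amounts to simple bookkeeping over the collected fractions; a minimal sketch with invented figures is shown below to make the kind of quantity involved explicit (the tonnages are placeholders, not the study's data).

```python
# Minimal mass-balance sketch: for each separately collected fraction, compare
# the amount sent to recovery with the amount actually recovered as raw material.
# All tonnages are invented placeholders, not data from the study.
collected_t = {"plastic": 120.0, "glass": 300.0, "wood": 80.0}   # tonnes sent to recovery
recovered_t = {"plastic": 78.0, "glass": 285.0, "wood": 60.0}    # tonnes recovered

for fraction, sent in collected_t.items():
    recovered = recovered_t[fraction]
    rate = recovered / sent
    print(f"{fraction}: sent {sent:.1f} t, recovered {recovered:.1f} t "
          f"({rate:.0%} effective recovery)")
```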