13 results for Humanitarianism and complex emergencies
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
In this thesis, the evolution of methods for analyzing techno-social systems is reported through the various research experiences directly undertaken. The first case presented is a study based on data mining of a word-association dataset named Human Brain Cloud: its validation is addressed and, also through non-trivial modeling, a better understanding of language properties is presented. A real complex-system experiment is then introduced: the WideNoise experiment, carried out in the context of the EveryAware European project. The course of the project and of the experiment is illustrated, and the data analysis is presented. The Experimental Tribe platform for social computation is then introduced. It was conceived to help researchers implement web experiments, and also aims to catalyze the cumulative growth of experimental methodologies and the standardization of the tools cited above. In the last part, three further research experiences already carried out on the Experimental Tribe platform are discussed in detail, from the design of the experiment to the analysis of the results and, eventually, to the modeling of the systems involved. The experiments are: CityRace, on the measurement of human strategies for coping with traffic; laPENSOcosì, aiming to unveil the structure of political opinion; and AirProbe, implemented again within the EveryAware project framework, which consisted in monitoring the shift in opinion about air quality of a community informed about local air pollution. In the end, the evolution of methods for investigating techno-social systems emerges, together with the opportunities and the threats offered by this new scientific path.
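The word-association mining mentioned above can be illustrated with a minimal sketch. This is not the Human Brain Cloud dataset or code; the data and function names below are purely hypothetical, showing only the general idea of turning cue→response pairs into a weighted directed graph and querying basic statistics.

```python
from collections import defaultdict, Counter

def build_association_graph(pairs):
    """Build a weighted directed graph from (cue, response) word pairs.

    graph[cue][response] counts how often that association was given.
    """
    graph = defaultdict(Counter)
    for cue, response in pairs:
        graph[cue][response] += 1
    return graph

def out_degree(graph, word):
    """Number of distinct responses recorded for a cue word."""
    return len(graph[word])

# Toy data, purely illustrative
pairs = [("sun", "moon"), ("sun", "light"), ("moon", "night"), ("sun", "moon")]
g = build_association_graph(pairs)
print(out_degree(g, "sun"))  # distinct responses recorded for "sun"
```

On a real word-association dataset, distributions of such degrees and weights are the kind of raw statistics from which language properties can be inferred.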
Abstract:
Understanding the complex relationships between the quantities measured by volcanic monitoring networks and shallow magma processes is a crucial step forward for the comprehension of volcanic processes and a more realistic evaluation of the associated hazard. This question is very relevant at Campi Flegrei, a quiescent volcanic caldera immediately north-west of Napoli (Italy). The system's activity shows high fumarole release and periodic slow ground movement (bradyseism) accompanied by high seismicity. This activity, together with the high population density and the presence of military and industrial buildings, makes Campi Flegrei one of the areas with the highest volcanic hazard in the world. In this context my thesis has focused on the magma dynamics due to the refilling of shallow magma chambers, and on the geophysical signals, detectable by seismic, deformation and gravimetric monitoring networks, that are associated with these phenomena. Indeed, the refilling of magma chambers is a process that frequently occurs just before a volcanic eruption; therefore, the ability to identify these dynamics by means of recorded-signal analysis is important for evaluating the short-term volcanic hazard. The space-time evolution of the dynamics due to the injection of new magma into the magma chamber has been studied by performing numerical simulations with, and implementing additional features in, the code GALES (Longo et al., 2006), recently developed and still being upgraded at the Istituto Nazionale di Geofisica e Vulcanologia in Pisa (Italy). GALES is a finite element code based on a two-dimensional, transient physico-mathematical model able to treat fluids as multiphase homogeneous mixtures, from compressible to incompressible. The fundamental equations of mass, momentum and energy balance are discretised both in time and space using the Galerkin Least-Squares and discontinuity-capturing stabilisation techniques.
The physical properties of the mixture are computed as a function of the local conditions of magma composition, pressure and temperature. The model's features enable the study of a broad range of phenomena characterising pre- and syn-eruptive magma dynamics in a wide domain, from the volcanic crater to the deep magma feeding zones. The study of the displacement field associated with the simulated fluid dynamics has been carried out with a numerical code developed by the geophysics group at University College Dublin (O'Brien and Bean, 2004b), with whom we started a very profitable collaboration. In this code, seismic wave propagation in heterogeneous media with a free surface (e.g. the Earth's surface) is simulated using a discrete elastic lattice in which particle interactions are governed by Hooke's law. This method makes it possible to account for medium heterogeneities and complex topography. The initial and boundary conditions for the simulations have been defined within a coordinated project (INGV-DPC 2004-06 V3_2 "Research on active volcanoes, precursors, scenarios, hazard and risk - Campi Flegrei"), to which this thesis contributes and in which many researchers experienced on Campi Flegrei in the volcanological, seismic, petrological, geochemical and other fields collaborate. Numerical simulations of magma and rock dynamics have been coupled as described in the thesis. The first part of the thesis consists of a parametric study aimed at understanding the effect of the presence of carbon dioxide in magma on the convection dynamics. Indeed, the presence of this volatile was relevant in many Campi Flegrei eruptions, including some commonly considered as references for future activity of this volcano. A set of simulations has been performed considering an elliptical, compositionally uniform magma chamber, refilled from below by a magma with a volatile content equal to or different from that of the resident magma.
To do this, a multicomponent non-ideal magma saturation model (Papale et al., 2006) that considers the simultaneous presence of CO2 and H2O has been implemented in GALES. Results show that the presence of CO2 in the incoming magma increases its buoyancy, promoting convection and mixing. The simulated dynamics produce pressure transients with frequency and amplitude within the sensitivity range of modern geophysical monitoring networks such as the one installed at Campi Flegrei. In the second part, simulations more closely related to the Campi Flegrei volcanic system have been performed. The simulated system has been defined on the basis of conditions consistent with the bulk of knowledge of Campi Flegrei and in particular of the Agnano-Monte Spina eruption (4100 B.P.), commonly considered as the reference for a future high-intensity eruption in this area. The magmatic system has been modelled as a long dyke refilling a small shallow magma chamber; magmas with trachytic and phonolitic compositions and variable H2O and CO2 contents have been considered. The simulations have been carried out changing the conditions of magma injection, the system configuration (magma chamber geometry, dyke size) and the composition and volatile content of the resident and refilling magmas, in order to study the influence of these factors on the simulated dynamics. The simulation results allow each step of the ascent of gas-rich magma into the denser magma to be followed, highlighting the details of magma convection and mixing. In particular, the presence of more CO2 in the deep magma results in more efficient and faster dynamics. Through these simulations the variation of the gravimetric field has been determined. Afterwards, the space-time distribution of stress resulting from the numerical simulations has been used as the boundary condition for simulations of the displacement field imposed by the magmatic dynamics on the rocks.
The properties of the simulated domain (rock density, P- and S-wave velocities) have been based on literature data from active and passive tomographic experiments, obtained through a collaboration with A. Zollo at the Department of Physics of the Federico II University in Napoli. The elasto-dynamic simulations allow the variations of the space-time distribution of deformation and the seismic signal associated with the studied magmatic dynamics to be determined. In particular, the results show that these dynamics induce deformations similar to those measured at Campi Flegrei, and seismic signals with energies concentrated in the frequency bands typically observed in volcanic areas. The present work shows that an approach based on the solution of the equations describing the physics of the processes within a magmatic fluid and the surrounding rock system is able to recognise and describe the relationships between geophysical signals detectable at the surface and deep magma dynamics. Therefore, the results suggest that the combined study of geophysical data and information from numerical simulations may, in the near future, allow a more efficient evaluation of the short-term volcanic hazard.
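The discrete elastic lattice idea mentioned above (particles coupled by Hookean springs) can be sketched in one dimension. This is not the O'Brien and Bean code, only a toy illustration under simple assumptions: unit lattice spacing, clamped ends instead of a free surface, and an explicit leapfrog time step.

```python
import numpy as np

def lattice_wave_step(u, u_prev, k, m, dt):
    """One explicit time step for a 1D chain of equal masses coupled by
    Hookean springs: m * a_i = k * (u_{i+1} - 2 u_i + u_{i-1})."""
    u_next = np.empty_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + (k * dt**2 / m) * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_next[0] = u_next[-1] = 0.0  # clamped ends (free surface not modelled here)
    return u_next

# Toy run: a displacement pulse released with zero initial velocity
n, k, m, dt = 101, 1.0, 1.0, 0.1
u = np.zeros(n)
u[n // 2] = 1.0
u_prev = u.copy()  # equal past state -> zero initial velocity
for _ in range(200):
    u, u_prev = lattice_wave_step(u, u_prev, k, m, dt), u
```

The pulse splits into two waves travelling along the chain; in the real 2D/3D lattice method, spatially varying `k` and `m` encode medium heterogeneities and an irregular top boundary encodes topography.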
Abstract:
In this work we discuss the secondary market for life insurance policies in the United States of America. First, we give an overview of the life settlement market: how it came into existence, its growth prospects and the ethical issues it raises. Second, we discuss the characteristics of the different life insurance products present in the market and describe how life settlements are originated. Life settlement transactions tend to be long and complex and require the involvement of a number of parties. Moreover, a direct investment in life insurance policies is fraught with a number of practical issues and entails risks that are not directly related to longevity. This may reduce the efficiency of a direct investment in physical policies. For these reasons, a synthetic longevity market has evolved. The number of parties involved in a synthetic longevity transaction is typically smaller, and the broker-dealer transferring the longevity exposure retains most or all of the risks a physical investment entails. Finally, we describe the main methods used in the market to evaluate life settlement investments and the role of life expectancy providers.
Abstract:
It is well known that the best grape quality can be achieved only through an optimal source/sink ratio. Vine balance is in fact a key parameter in controlling berry sugar, acidity and secondary metabolite content (Howell, 2001; Vanden Heuvel et al., 2004). Although yield reduction and quality improvement are not always strictly related, cluster thinning is considered a technique that can lead to improvements in grape sugar and anthocyanin composition (Dokoozlian and Hirschfelt, 1995; Guidoni et al., 2002). Among the several microclimatic variables that may affect grape composition, the effect of cluster light exposure and temperature, which probably act in a synergistic and complex way, has been widely explored, showing positive though sometimes contradictory results (Spayd et al., 2001; Tarara et al., 2008). Pre-bloom and véraison defoliation are very efficient techniques for inducing microclimatic modification of the cluster; furthermore, pre-bloom defoliation also induces a lower berry set percentage. On this basis, the aim of the first experiment of the thesis was to verify, in cv Sangiovese, the effects on ripening and berry composition of management techniques that may increase the source/sink ratio and/or promote light incidence on the berries throughout grape ripening. An integrated agronomic, biochemical and microarray approach aims to understand which mechanisms are involved in berry composition and may be conditioned during ripening in vines submitted to three treatments. In particular, the treatments compared were: a) cluster thinning (increasing the source/sink ratio); b) leaf removal at véraison (increasing cluster light exposure); c) pre-bloom defoliation (increasing the source/sink ratio and cluster light exposure).
Vine response to leaf removal at véraison was further evaluated in the second experiment on three different varieties (Cabernet Sauvignon, Nero d'Avola, Raboso Piave), chosen for their different genetic traits in terms of anthocyanin amount and composition. The integrated agronomic, biochemical and microarray approach, employed in order to understand the mechanisms involved in the berry composition of Sangiovese vines submitted to management techniques that may increase the source/sink ratio and induce microclimatic changes, yielded interesting results. This research confirmed the main role of the source/sink ratio in conditioning sugar metabolism, and also revealed that carbohydrate availability is crucial in triggering anthocyanin biosynthesis. The situation of pre-bloom defoliation is more complex, with source/sink and increased cluster light effects combining to determine the final berry composition. It follows that the application of pre-bloom defoliation may be risky, as it is too dependent on seasonal conditions (rain and temperature) and on the physiological vine response (leaf area recovery, photosynthetic compensation, lateral regrowth). Early induced stress conditions could leave clusters at véraison at a disadvantage in triggering optimal berry ripening processes compared with untreated vines. These conditions could be maintained until harvest if none of the previously described physiological recovery occurs. Certainly, the increase in light exposure linked to the defoliation treatments showed a positive and solid effect on flavonol biosynthesis, as under our conditions temperature did not differ greatly among treatments. Apart from this last aspect, which could also be confirmed for véraison defoliation, microclimatic changes by themselves seemed unable to induce any modification in berry composition.
Further studies are necessary to understand whether the peculiar anthocyanin and flavonol composition detected with véraison defoliation could play an important role in both the colour intensity and the stability of wines.
Abstract:
Recently, an ever-increasing degree of automation has been observed in most industrial processes. This increase is motivated by the demand for systems with high performance in terms of quality of the products/services generated, productivity, efficiency and low costs of design, realisation and maintenance. This trend towards complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as consisting of a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda and buy boxed products such as food or cigarettes. Another indication of their complexity is the fact that the consortium of machine producers has estimated that there are around 350 types of manufacturing machine. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often this is the case in large-scale systems, organised in a modular and distributed manner.
Even if the success of a modern AMS from a functional and behavioural point of view is still attributable to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial, because of the large number of duties associated with it. Apart from the activity inherent in the automation of the machine cycles, the supervisory system is called on to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the operator in charge of the machine to promptly and carefully take the actions needed to establish or restore optimal operating conditions; and managing diagnostic information in real time, as a support for machine maintenance operations. The facilities that designers can find directly on the market, in terms of software component libraries, in fact provide adequate support for the implementation of both top-level and bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices. What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focusing on the cross-cutting functionalities characterising the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs.
Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method for dealing organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have traditionally been adopted for a long time, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different, usually very "unstructured", way. No clear distinction is made between functions and implementations, or between functional architectures and technological architectures and platforms. This difference is probably due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to enlighten the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to deep confusion between the functional view and the technological view. In industrial-automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasised and profitably realised in the so-called object-oriented methodologies.
Industrial automation has lately been adopting this approach, as testified by the IEC 61131-3 and IEC 61499 standards, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years it has been possible to note a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also handle other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems. This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS an increasing number of electronic devices are present alongside reliable mechanical elements, and these devices are more vulnerable by their own nature. The problem of diagnosis and fault isolation in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and reconfiguring the control system so as to guarantee satisfactory performance.
The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, important improvements to formal verification of logic control, fault diagnosis and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2 a survey of the state of the software engineering paradigms applied to industrial automation is presented. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to obtain better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of formal software verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results on Discrete Event Systems which should help the reader to understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
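A core operation behind the Discrete Event Systems verification mentioned above is reachability analysis over a finite automaton: a safety property holds if no forbidden state is reachable from the initial one. The sketch below is a generic illustration, not the thesis's architecture; the actuator model and its states/events are hypothetical.

```python
from collections import deque

def reachable_states(transitions, start):
    """Breadth-first reachability over a discrete event system modelled as a
    finite automaton: transitions maps (state, event) -> next state."""
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        for (state, _event), nxt in transitions.items():
            if state == s and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Hypothetical actuator model: can a 'fault' state ever be reached?
T = {("idle", "start"): "running",
     ("running", "stop"): "idle",
     ("running", "overheat"): "fault"}
print("fault" in reachable_states(T, "idle"))  # True: the model admits a fault
```

In a verification workflow, the same search would be run on the synchronous product of plant and controller models, and the "fault" membership test becomes the safety check.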
Abstract:
Weaning is an important and complex step involving many stresses that interfere deeply with feed intake, gastro-intestinal tract (GIT) development and adaptation to the weaning diet in young pigs. The health of the pig at weaning, its nutrition in the immediate post-weaning period, and the physical, microbiological and psychological environment are all factors that interact to determine feed intake and subsequent growth. GIT disorders, infections and diarrhoea increase at the time of weaning; in fact, pathogens such as enterotoxigenic Escherichia coli (ETEC) are major causes of mucosal damage in post-weaning disease, contributing to diarrhoea in suckling and post-weaned pigs. The European ban on antibiotic growth promoters (AGP) in 2006 has stimulated research on the mechanisms of GIT disorders and on nutritional approaches for preventing or reducing such disturbances while avoiding AGPs. Concerning these aspects, five studies are presented here, based on the interplay among nutrition, genomics, immunity and physiology, with the aim of clarifying some of these problematic issues around the weaning period in piglets. The first three evaluate the effects of threonine- or tryptophan-enriched diets on gut defence and health as possible alternatives to AGPs in the gut. The fourth focuses on a possible immunological function related to the development of the stomach. The fifth is a pilot study on gastric sensing and the orexigenic signal given by fasting or re-feeding conditions. Although some results are controversial, it appears that both tryptophan and threonine supplementation in weaning diets have a preventive role in E. coli post-weaning diarrhoea and favourable effects in the gut, especially in relation to the ETEC-susceptible genotype. Although the stomach is believed to be an almost aseptic organ, it shows an immune activity related to mucosal maturation. Moreover, an orexigenic role of both the oxyntic and the pyloric mucosa is shown, together with its possible relation to nutrient-sensing stimuli.
Abstract:
Extrusion is a process used to form long products of constant cross-section, in a great variety of shapes, from simple billets. Aluminum alloys are the materials most widely processed in the extrusion industry owing to their deformability and a field of applications that ranges from building to aerospace and from design to the automotive industry. These diverse applications imply different requirements, which can be fulfilled by the wide range of alloys and treatments, from critical structural applications to high-quality surfaces and aesthetic appearance. Whichever is the critical aspect, both depend directly on microstructure. The extrusion process is moreover marked by high deformations and complex strain gradients, making the control of microstructure evolution difficult; at present it has not yet been fully achieved. Nevertheless, Finite Element modeling has reached maturity and can therefore start to be used as a tool for the investigation and prediction of microstructure evolution. This thesis analyses and models the evolution of microstructure throughout the entire extrusion process for 6XXX-series aluminum alloys. The core phase of the work was the development of specific tests to investigate microstructure evolution and validate the model implemented in a commercial FE code. Alongside this, two essential activities were carried out for a correct calibration of the model, beyond the simple search for contour parameters, thus leading to the understanding and control of both the code and the process. In this direction, activities were also conducted to build critical know-how on the interpretation of microstructure and extrusion phenomena. It is believed, in fact, that the analysis of microstructure evolution alone, regardless of its relevance to the technological aspects of the process, would be of little use for industry as well as ineffective for the interpretation of the results.
Abstract:
The goal of many plant scientists' research is to explain natural phenotypic variation in terms of simple changes in DNA sequence. DNA-based molecular markers are extensively used for the construction of genome-wide molecular maps and to perform genetic analyses of simple and complex traits. The PhD thesis was divided into two main research lines according to the different approaches adopted. The first research line analyses the genetic diversity of an Italian apple germplasm collection for the identification of markers tightly linked to target genes by an association genetics method. This made it possible to identify synonym and homonym accessions, as well as triploids. The fruit red skin color trait was used to test the reliability of the genetic approaches in this species. The second line concerns the development of molecular markers closely linked to the Rvi13 and Rvi5 scab resistance genes, previously mapped on apple chromosomes 10 and 17 respectively, using the traditional linkage mapping method. Both regions have been fine-mapped with various types of markers that could be used for marker-assisted selection in future breeding programs and to isolate the two resistance genes.
Abstract:
With the increasing importance that nanotechnologies have in everyday life, it is not difficult to realise that even a single molecule, if properly designed, can be a device able to perform useful functions: such a chemical species is called a chemosensor, that is, a molecule of abiotic origin that signals the presence of matter or energy. Signal transduction is the mechanism by which the interaction of a sensor with an analyte yields a measurable form of energy. When designing a chemosensor, we need to take into account a "communication requirement" between its three components: the receptor unit, responsible for selective analyte binding; the spacer, which controls the geometry of the system and modulates the electronic interaction between the receptor and the signalling unit; and the signalling unit itself, whose physico-chemical properties change upon complexation. A luminescent chemosensor communicates a variation of the physico-chemical properties of the receptor unit with a luminescence output signal. This thesis work consists in the characterisation of new molecular and nanoparticle-based systems which can be used as sensitive materials for the construction of new optical transduction devices able to provide information about the concentration of analytes in solution. In particular, two directions were taken. The first is to continue the development of new chemosensors, which is the first step in the construction of reliable and efficient devices; in particular, the work focuses on chemosensors for metal ions for biomedical and environmental applications. The second is to study more efficient and complex organised systems, such as derivatised silica nanoparticles. These systems can potentially have a higher sensitivity than molecular systems and present many advantages, such as the possibility of being ratiometric, higher Stokes shifts and a higher signal-to-noise ratio.
Abstract:
The surface properties of minerals have important implications in geology, the environment, industry and biotechnology, and in certain respects for research on the origin of life. This research project aims to widen the knowledge of the nanoscale surface properties of chlorite and phlogopite by means of advanced methodologies, and also to investigate the interaction of fundamental biomolecules, such as nucleotides, RNA, DNA and the amino acid glycine, with the surface of the selected phyllosilicates. Multiple advanced and complex experimental approaches based on scanning probe microscopy and spatially resolved spectroscopy were used and in some cases specifically developed. The results demonstrate that chlorite exposes at the surface atomically flat terraces with 0.5 nm steps, typically generated by the fragmentation of the octahedral (brucite-type) sheet of the interlayer. This fragmentation at the nanoscale generates high anisotropy and inhomogeneity, with surface type and isomorphous cationic substitutions determining variations of the effective surface potential difference, measured in air between the TOT surface and the interlayer brucitic sheet, ranging between 50-100 mV and 400-500 mV. The surface potential was identified as the driving force of the observed high affinity of the surface for the fundamental biomolecules, such as single molecules of nucleotides, DNA, RNA and amino acids. Phlogopite was also observed to present an extended, atomically flat surface, featuring negative surface potential values of some hundreds of millivolts and no significant local variations. The phlogopite surface was sometimes observed to present curvature features that may be ascribed to local substitutions of the interlayer cations or to the presence of a crystal lattice mismatch or structural defects, such as stacking faults or dislocation loops. The surface chemistry was found to be similar to that of the bulk.
The study of the interaction with nucleotides and glycine revealed a lower affinity compared with the brucite-like surface of chlorite.
Abstract:
Evidence accumulated over the last ten years has demonstrated that, in a variety of organisms, a large proportion of the mitochondrial respiratory chain complexes is arranged in supramolecular assemblies called supercomplexes or respirasomes. Besides conferring a kinetic advantage (substrate channeling) and being required for the assembly and stability of Complex I, indirect considerations support the view that supercomplexes may also prevent excessive formation of reactive oxygen species (ROS) by the respiratory chain. Following this line of thought, we decided to directly investigate ROS production by Complex I under conditions in which the complex is either arranged as a component of the supercomplex I1III2 or dissociated as an individual enzyme. The study was carried out both in bovine heart mitochondrial membranes and in reconstituted proteoliposomes composed of Complexes I and III, in which the supramolecular organization of the respiratory assemblies was impaired by: (i) treating either bovine heart mitochondria or the liposome-reconstituted supercomplex I-III with dodecyl maltoside; (ii) reconstituting Complexes I and III at a high phospholipid-to-protein ratio. The results of this investigation provide experimental evidence that ROS production is strongly increased in either model, supporting the view that disruption or prevention of the association between Complex I and Complex III, by whatever means, enhances the generation of superoxide by Complex I. This is the first demonstration that dissociation of the supercomplex I1III2 in the mitochondrial membrane is a cause of oxidative stress originating from Complex I. Previous work in our laboratory demonstrated that lipid peroxidation can dissociate the supramolecular assemblies; here we therefore confirm the preliminary conclusion that primary causes of oxidative stress may perpetuate ROS generation through a vicious circle in which supercomplex dissociation is a major determinant.
Abstract:
The present work reports the outcome of the GIMEMA CML WP study CML0811, an independent trial investigating nilotinib as front-line treatment in chronic-phase chronic myeloid leukemia (CML). In addition, the results of the proteomic analysis of CD34+ cells collected at CML diagnosis, compared with their counterpart from healthy donors, are reported. Our study confirmed that nilotinib is highly effective in preventing progression to the accelerated/blast phase, a condition that is still associated with high mortality rates today. Despite the relatively short follow-up, cardiovascular issues, particularly atherosclerotic adverse events (AEs), have emerged, and the frequency of these AEs may counterbalance the anti-leukemic efficacy. The deep molecular response rates in our study compare favorably with those obtained with imatinib in historic cohorts, and confirm the findings of the company-sponsored ENESTnd study. Considering the increase in deep molecular response rates over time that we observed, a significant proportion of patients will be candidates for treatment discontinuation in the coming years, with a higher probability of remaining disease-free in the long term. The additional and complex changes we found at the proteomic level in CML CD34+ cells should be taken into account in the investigation of novel targeted therapies aimed at the eradication of the disease.
Abstract:
Theories and numerical modeling are fundamental tools for understanding, optimizing and designing present and future laser-plasma accelerators (LPAs). Laser evolution and plasma wave excitation in an LPA driven by a weakly relativistically intense, short-pulse laser propagating in a preformed parabolic plasma channel are studied analytically in 3D, including the effects of pulse steepening and energy depletion. At higher laser intensities, the process of electron self-injection in the nonlinear bubble wake regime is studied by means of fully self-consistent Particle-in-Cell (PIC) simulations. Considering a non-evolving laser driver propagating with a prescribed velocity, the geometrical properties of the non-evolving bubble wake are studied. For a range of parameters of interest for laser-plasma acceleration, the dependence of the self-injection threshold in the non-evolving wake on laser intensity and wake velocity is characterized. Due to the nonlinear and complex nature of the physics involved, computationally challenging numerical simulations are required to model laser-plasma accelerators operating at relativistic laser intensities. The numerical and computational optimizations that, combined in the codes INF&RNO and INF&RNO/quasi-static, make it possible to accurately model multi-GeV laser wakefield acceleration stages on present supercomputing architectures are discussed. The PIC code jasmine, capable of efficiently running laser-plasma simulations on Graphics Processing Unit (GPU) clusters, is presented. GPUs deliver exceptional performance for PIC codes, but the core algorithms had to be redesigned to satisfy the constraints imposed by the intrinsic parallelism of the architecture. The simulation campaigns run with the code jasmine to model the recent LPA experiments with the INFN-FLAME and CNR-ILIL laser systems are also presented.
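To illustrate the kind of core kernel a PIC code must execute for every particle at every timestep, the following is a minimal sketch of the standard Boris particle push, the momentum-update scheme used in most PIC codes. This is an illustrative, generic sketch in normalized units; the function name, arguments and normalization are assumptions for illustration and are not taken from jasmine or INF&RNO.

```python
import numpy as np

def boris_push(u, E, B, dt, q_over_m=-1.0):
    """One Boris step: advance normalized momentum u = gamma*v/c
    under electric field E and magnetic field B (normalized units).
    All arrays are shaped (N, 3), one row per particle."""
    # First half of the electric impulse
    um = u + 0.5 * q_over_m * dt * E
    # Magnetic rotation (exactly preserves |u| when E = 0)
    gamma = np.sqrt(1.0 + np.sum(um**2, axis=1, keepdims=True))
    t = 0.5 * q_over_m * dt * B / gamma
    s = 2.0 * t / (1.0 + np.sum(t**2, axis=1, keepdims=True))
    u_prime = um + np.cross(um, t)
    u_plus = um + np.cross(u_prime, s)
    # Second half of the electric impulse
    return u_plus + 0.5 * q_over_m * dt * E

# Example: a single particle with no fields keeps its momentum
u = np.array([[0.1, 0.0, 0.0]])
zero = np.zeros((1, 3))
u_new = boris_push(u, zero, zero, 0.01)
```

This per-particle arithmetic is embarrassingly parallel, which is why it maps well to GPUs; the redesign effort mentioned above concerns steps such as current deposition, where many particles write to the same grid cells and naive parallelization produces memory conflicts.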