15 results for probability and reinforcement proportion
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The design process of any electric vehicle system has to be oriented towards the best energy efficiency, together with the constraint of maintaining comfort in the vehicle cabin. The main aim of this study is to find the best thermal management solution in terms of HVAC efficiency without compromising occupant comfort and internal air quality. An Arduino-controlled low-cost sensor system was developed, compared against reference instrumentation (average R-squared of 0.92), and then used to characterise the vehicle cabin in real parking and driving trials. Data on the energy use of the HVAC were retrieved from the car's On-Board Diagnostic port. Energy savings using recirculation can reach 30%, but pollutant concentrations in the cabin build up in this operating mode. Moreover, the temperature profile appeared strongly non-uniform, with air temperature differences of up to 10 °C. Optimisation methods often require a high number of runs to find the optimal configuration of the system. Fast models proved to be beneficial for this task, while CFD-1D models are usually slower despite the higher level of detail they provide. In this work, the collected dataset was used to train a fast ML model of both cabin and HVAC using linear regression. The average scaled RMSE over all trials is 0.4%, while computation time is 0.0077 ms for each second of simulated time on a laptop computer. Finally, a reinforcement learning environment was built with OpenAI Gym and Stable-Baselines3, using the built-in Proximal Policy Optimisation algorithm to update the policy and seek the best compromise between comfort, air quality and energy reward terms. The learning curves show an oscillating behaviour overall, with only 2 experiments behaving as expected, even if too slowly. This result leaves large room for improvement, ranging from reward function engineering to expansion of the ML model.
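The reward described above trades off comfort, air-quality and energy terms. A minimal sketch of such a combined scalar reward, with purely hypothetical weights, setpoints and scales (the abstract does not report them), might look like:

```python
# Hypothetical sketch of a combined reward for a cabin/HVAC RL environment,
# mirroring the comfort, air-quality and energy terms mentioned in the abstract.
# All weights, setpoints and normalisation scales are illustrative assumptions.

def cabin_reward(temp_c, co2_ppm, hvac_power_w,
                 t_set=22.0, co2_ref=1000.0, p_ref=3000.0,
                 w_comfort=1.0, w_air=1.0, w_energy=1.0):
    """Scalar reward penalising deviation from the temperature setpoint,
    excess CO2 concentration and HVAC energy use."""
    comfort_pen = abs(temp_c - t_set) / 5.0          # ~1 at a 5 degC offset
    air_pen = max(0.0, co2_ppm - co2_ref) / co2_ref  # 0 below the reference
    energy_pen = hvac_power_w / p_ref                # normalised power draw
    return -(w_comfort * comfort_pen + w_air * air_pen + w_energy * energy_pen)
```

In a Gym-style environment this would be the value returned by `step()`; PPO then balances the three penalties against each other through the weights.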
Abstract:
This work provides a forward step in the study and comprehension of the relationships between stochastic processes and a certain class of integro-partial differential equations, which can be used to model anomalous diffusion and transport in statistical physics. In the first part, we took the reader through the fundamental notions of probability and stochastic processes, as well as stochastic integration and stochastic differential equations. In particular, within the study of H-sssi processes, we focused on fractional Brownian motion (fBm) and its discrete-time increment process, fractional Gaussian noise (fGn), which provide examples of non-Markovian Gaussian processes. The fGn, together with stationary FARIMA processes, is widely used in the modelling and estimation of long memory, or long-range dependence (LRD). Time series manifesting long-range dependence are often observed in nature, especially in physics, meteorology and climatology, but also in hydrology, geophysics, economics and many other fields. We studied LRD in depth, giving many real-data examples, providing statistical analyses and introducing parametric estimation methods. Then, we introduced the theory of fractional integrals and derivatives, which indeed turns out to be very appropriate for studying and modelling systems with long-memory properties. After introducing the basic concepts, we provided many examples and applications. For instance, we investigated the relaxation equation with distributed-order time-fractional derivatives, which describes models characterised by a strong memory component and can be used to model relaxation in complex systems deviating from the classical exponential Debye pattern. Then, we focused on the study of generalisations of the standard diffusion equation, passing through the preliminary study of the fractional forward drift equation. Such generalisations have been obtained by using fractional integrals and derivatives of distributed orders.
In order to find a connection between the anomalous diffusion described by these equations and long-range dependence, we introduced and studied the generalized grey Brownian motion (ggBm), which is actually a parametric class of H-sssi processes whose marginal probability density functions evolve in time according to a partial integro-differential equation of fractional type. The ggBm is, of course, non-Markovian. Throughout the work, we have remarked many times that, starting from a master equation of a probability density function f(x,t), it is always possible to define an equivalence class of stochastic processes with the same marginal density function f(x,t). All these processes provide suitable stochastic models for the starting equation. In studying the ggBm, we focused on a subclass made up of processes with stationary increments. The ggBm has been defined canonically in the so-called grey noise space. However, we have been able to provide a characterization independent of the underlying probability space. We also pointed out that the generalized grey Brownian motion is a direct generalization of a Gaussian process, and in particular it generalizes Brownian motion and fractional Brownian motion as well. Finally, we introduced and analyzed a more general class of diffusion-type equations related to certain non-Markovian stochastic processes. We started from the forward drift equation, which was made non-local in time by the introduction of a suitably chosen memory kernel K(t). The resulting non-Markovian equation has been interpreted in a natural way as the evolution equation of the marginal density function of a random time process l(t). We then considered the subordinated process Y(t)=X(l(t)), where X(t) is a Markovian diffusion. The corresponding time evolution of the marginal density function of Y(t) is governed by a non-Markovian Fokker-Planck equation which involves the same memory kernel K(t).
We developed several applications and derived the exact solutions. Moreover, we considered different stochastic models for the given equations, providing path simulations.
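As a small numerical illustration of the long-range dependence discussed above: the autocovariance of unit-variance fGn is gamma(k) = 0.5(|k+1|^(2H) - 2|k|^(2H) + |k-1|^(2H)), which for H > 1/2 decays like H(2H-1)k^(2H-2) and is therefore not summable. A short sketch (the value H = 0.8 is an assumed example):

```python
# Autocovariance of unit-variance fractional Gaussian noise (fGn) and its
# slow, hyperbolic decay for H > 1/2, the hallmark of long-range dependence.

def fgn_autocov(k, H):
    """gamma(k) = 0.5*(|k+1|**(2H) - 2*|k|**(2H) + |k-1|**(2H)), lag k >= 0."""
    return 0.5 * (abs(k + 1)**(2*H) - 2 * abs(k)**(2*H) + abs(k - 1)**(2*H))

H = 0.8
# For large k the autocovariance matches the power law H*(2H-1)*k**(2H-2);
# for H = 1/2 (ordinary Brownian increments) it vanishes at every lag k >= 1.
tail_ratio = fgn_autocov(10_000, H) / (H * (2*H - 1) * 10_000**(2*H - 2))
```

The `tail_ratio` is essentially 1, confirming the hyperbolic tail; summing gamma(k) over k therefore diverges, which is precisely the LRD property estimated for the real time series studied in the thesis.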
Abstract:
The ever-increasing demand from users who want high-quality broadband services while on the move is straining the efficiency of current spectrum allocation paradigms, leading to an overall feeling of spectrum scarcity. In order to circumvent this problem, two possible solutions are being investigated: (i) implementing new technologies, such as Cognitive Radios, capable of accessing temporarily/locally unused bands without interfering with the licensed services; (ii) releasing some spectrum bands thanks to new services providing higher spectral efficiency, e.g., DVB-T, and allocating them to new wireless systems. These two approaches are promising, but they also pose novel coexistence and interference management challenges. In particular, the deployment of devices such as Cognitive Radios, characterized by the inherently unplanned, irregular and random locations of the network nodes, requires advanced mathematical techniques to explicitly model their spatial distribution. In such a context, system performance and optimization are strongly dependent on this spatial configuration. On the other hand, allocating some released spectrum bands to other wireless services poses severe coexistence issues with all the pre-existing services on the same or adjacent spectrum bands. In this thesis, these methodologies for better spectrum usage are investigated. In particular, using Stochastic Geometry theory, a novel mathematical framework is introduced for cognitive networks, providing a closed-form expression for coverage probability and a single-integral form for the average downlink rate and Average Symbol Error Probability. Then, focusing on more regulatory aspects, interference challenges between DVB-T and LTE systems are analysed, proposing a versatile methodology for their proper coexistence.
Moreover, the studies performed inside the CEPT SE43 working group on the amount of spectrum potentially available to Cognitive Radios and an analysis of the Hidden Node problem are provided. Finally, a study on the extension of cognitive technologies to Hybrid Satellite Terrestrial Systems is proposed.
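As a generic illustration of the stochastic-geometry approach (not the thesis's actual framework, whose expressions are not given in the abstract), the coverage probability of a fixed-distance link in a Poisson field of Rayleigh-faded interferers can be estimated by Monte Carlo and checked against the classical Poisson bipolar closed form P(SIR > T) = exp(-lam*pi*r0^2*T^(2/alpha)*Gamma(1+2/alpha)*Gamma(1-2/alpha)), which reduces to exp(-lam*pi*r0^2*sqrt(T)*pi/2) for alpha = 4. All parameter values below are assumptions:

```python
import numpy as np

def coverage_prob(lam=1e-3, alpha=4.0, sir_th=1.0, r0=10.0,
                  radius=500.0, trials=4000, seed=1):
    """Monte Carlo P(SIR > sir_th) for a link of length r0 whose receiver sits
    at the centre of a disc of interferers drawn from a PPP of density lam,
    with unit-mean exponential (Rayleigh) fading power on every link."""
    rng = np.random.default_rng(seed)
    area = np.pi * radius**2
    covered = 0
    for _ in range(trials):
        signal = rng.exponential() * r0**(-alpha)
        n = rng.poisson(lam * area)              # number of interferers
        r = radius * np.sqrt(rng.random(n))      # uniform points in the disc
        interference = np.sum(rng.exponential(size=n) * r**(-alpha))
        covered += signal > sir_th * interference
    return covered / trials

# Poisson bipolar closed form for alpha = 4, T = 1, lam = 1e-3, r0 = 10
p_mc = coverage_prob()
p_theory = float(np.exp(-1e-3 * np.pi * 10.0**2 * np.pi / 2))
```

Truncating the interference field at a finite disc radius is a simulation convenience; with alpha = 4 the neglected far-field contribution is negligible, so the Monte Carlo estimate agrees closely with the closed form.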
Abstract:
Spatial prediction of hourly rainfall via radar calibration is addressed. The change of support problem (COSP), arising when the spatial supports of different data sources do not coincide, is faced in a non-Gaussian setting; in fact, hourly rainfall in the Emilia-Romagna region, in Italy, is characterized by an abundance of zero values and right-skewness of the distribution of positive amounts. Rain gauge direct measurements at sparsely distributed locations and hourly cumulated radar grids are provided by ARPA-SIMC Emilia-Romagna. We propose a three-stage Bayesian hierarchical model for radar calibration, exploiting rain gauges as the reference measure. Rain probability and amounts are modeled via linear relationships with radar in the log scale; spatially correlated Gaussian effects capture the residual information. We employ a probit link for rainfall probability and a Gamma distribution for rainfall positive amounts; the two steps are joined via a two-part semicontinuous model. Three model specifications differently addressing the COSP are presented; in particular, a stochastic weighting of all radar pixels, driven by a latent Gaussian process defined on the grid, is employed. Estimation is performed via MCMC procedures implemented in C, linked to the R software. Communication and evaluation of probabilistic, point and interval predictions are investigated. A non-randomized PIT histogram is proposed for correctly assessing calibration and coverage of two-part semicontinuous models. Predictions obtained with the different model specifications are evaluated via graphical tools (Reliability Plot, Sharpness Histogram, PIT Histogram, Brier Score Plot and Quantile Decomposition Plot), proper scoring rules (Brier Score, Continuous Ranked Probability Score) and consistent scoring functions (Root Mean Square Error and Mean Absolute Error, addressing the predictive mean and median, respectively).
Calibration is reached, and the inclusion of neighbouring information slightly improves predictions. All specifications outperform a benchmark model with uncorrelated effects, confirming the relevance of spatial correlation for modeling rainfall probability and accumulation.
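The two-part semicontinuous structure described above (a probit occurrence model joined with a Gamma amount model) yields a predictive mean of the form P(wet) * E[amount | wet]. A minimal sketch with hypothetical coefficients (the fitted values are not reported in the abstract):

```python
import math

def probit(eta):
    """Standard normal CDF, i.e. the inverse of the probit link."""
    return 0.5 * (1.0 + math.erf(eta / math.sqrt(2.0)))

def predictive_mean(log_radar, b0=-0.5, b1=0.8, g0=0.2, g1=0.9):
    """Two-part predictive mean: P(rain > 0) * E[rain | rain > 0].
    Coefficients b*, g* are hypothetical stand-ins for the fitted values."""
    p_wet = probit(b0 + b1 * log_radar)     # occurrence part (probit link)
    mu_wet = math.exp(g0 + g1 * log_radar)  # Gamma mean (log link)
    return p_wet * mu_wet
```

Both parts depend linearly on the log-scale radar value, as in the model described above; the spatially correlated Gaussian effects would enter these linear predictors as additional site-specific terms.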
Abstract:
Fibre Reinforced Concretes are innovative composite materials whose applications are growing considerably nowadays. Being composite materials, their performance depends on the mechanical properties of both components, fibre and matrix, and, above all, on the interface. The variables to account for in the mechanical characterization of the material can be intrinsic to the material itself, i.e. fibre and concrete type, or external factors, i.e. environmental conditions. The first part of the research presented is focused on the experimental and numerical characterization of the interface properties and short-term response of fibre reinforced concretes with macro-synthetic fibres (MSFRCs). The experimental database produced represents the starting point for numerical model calibration and validation, with two principal purposes: the calibration of a local constitutive law, and the calibration and validation of a model predictive of the whole material response. In the perspective of the design of sustainable admixtures, the optimization of the matrix of cement-based fibre reinforced composites is realized through partial substitution of the cement. In the second part of the research, the effect of time-dependent phenomena on the MSFRC response is studied. An extended experimental campaign of creep tests is performed, analysing the effect of time and temperature variations in different loading conditions. Based on the results achieved, a numerical model able to account for the viscoelastic nature of both concrete and reinforcement, together with the environmental conditions, is calibrated within the LDPM framework. Different types of regression models are also elaborated, correlating the mechanical properties investigated (bond strength and residual flexural behaviour for the short-term analysis, and the creep coefficient over time for the time-dependent behaviour) with the variables investigated.
The experimental studies carried out emphasize the several aspects influencing the material's mechanical performance, also allowing the identification of those properties that the numerical approach should consider in order to be reliable.
Abstract:
The evaluation of the structural performance of existing concrete buildings, built according to standards and with materials quite different from those available today, requires procedures and methods able to cover the lack of data about mechanical material properties and reinforcement detailing. To this end, detailed inspections and tests on materials are required, including tests on drilled cores; on the other hand, it is generally accepted that non-destructive testing (NDT) cannot be used as the only means to obtain structural information, but it can be used in conjunction with destructive testing (DT) through a representative correlation between DT and NDT. The aim of this study is to verify the accuracy of some correlation formulas available in the literature between the measured parameters, i.e. rebound index, ultrasonic pulse velocity and compressive strength (SonReb Method). To this end, a significant number of DT and NDT tests have been performed on many school buildings located in Cesena (Italy). The above relationships have been assessed on site by correlating NDT results to the strength of cores drilled in adjacent locations. Concrete compressive strength assessed by means of NDT methods and evaluated with correlation formulas has the advantage that it can be implemented and used for future applications in a much simpler way than other methods, even if its accuracy is strictly limited to the analysis of concretes having the same characteristics as those used for calibration. This limitation warranted a search for a different evaluation method for the non-destructive parameters obtained on site. To this aim, a methodology of neural identification of compressive strength is presented. Artificial Neural Networks (ANNs) suitable for the specific analysis were chosen, taking into account the developments presented in the literature in this field. The networks were trained and tested in order to arrive at a more reliable strength identification methodology.
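SonReb correlation formulas of the kind assessed above are commonly of the double power-law form Rc = a * Ir^b * V^c (rebound index Ir, ultrasonic pulse velocity V, compressive strength Rc), which becomes linear in log space. The sketch below fits such a formula by least squares on synthetic data; the coefficient values are illustrative assumptions, not the thesis's calibrated ones:

```python
import numpy as np

def fit_sonreb(ir, v, rc):
    """Fit Rc = a * Ir**b * V**c by ordinary least squares on log Rc.
    Returns (a, b, c)."""
    X = np.column_stack([np.ones_like(ir), np.log(ir), np.log(v)])
    coef, *_ = np.linalg.lstsq(X, np.log(rc), rcond=None)
    return np.exp(coef[0]), coef[1], coef[2]

# synthetic check: data generated from known coefficients is recovered
rng = np.random.default_rng(0)
ir = rng.uniform(20, 50, 40)          # rebound index
v = rng.uniform(3500, 4800, 40)       # pulse velocity, m/s
rc = 1.2e-9 * ir**1.4 * v**2.6        # MPa, hypothetical law
a, b, c = fit_sonreb(ir, v, rc)
```

The log-space fit makes the calibration a plain linear regression, which is why such formulas are simple to implement but, as noted above, only valid for concretes similar to those used in the calibration set.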
Abstract:
This PhD work aimed to design, develop, and characterize gelatin-based scaffolds for the repair of defects in the musculoskeletal system. Gelatin is a biopolymer widely used for pharmaceutical and medical applications, thanks to its biodegradability and biocompatibility. It is obtained from collagen via thermal denaturation or chemical-physical degradation. Despite its high potential as a biomaterial, gelatin exhibits poor mechanical properties and low resistance in aqueous environments. Crosslinking treatments and enrichment with reinforcement materials are thus required for biomedical applications. In this work, gelatin-based scaffolds were prepared following three different strategies: films were prepared through the solvent casting method, the electrospinning technique was applied for the preparation of porous mats, and 3D porous scaffolds were prepared through freeze-drying. The results obtained on films highlighted the influence of pH, crosslinking and reinforcement with montmorillonite (MMT) on the structure, stability and mechanical properties of gelatin and MMT/gelatin composites. The information acquired on the effect of crosslinking in different conditions was used to optimize the preparation procedure of electrospun and freeze-dried scaffolds. A successful method was developed to prepare gelatin nanofibrous scaffolds electrospun from acetic acid/water solution and stabilized with a non-toxic crosslinking agent, genipin, able to preserve their original morphology after exposure to water. Moreover, the co-electrospinning technique was used to prepare nanofibrous scaffolds with variable content of gelatin and polylactic acid. Preliminary in vitro tests indicated that the scaffolds are suitable for cartilage tissue engineering, and that their potential applications can be extended to cartilage-bone interface tissue engineering. Finally, 3D porous gelatin scaffolds, enriched with calcium phosphate, were prepared with the freeze-drying method.
The results indicated that the crystallinity of the inorganic phase influences porosity, interconnectivity and mechanical properties. Preliminary in vitro tests show good osteoblast response in terms of proliferation and adhesion on all the scaffolds.
Abstract:
This research thesis investigates the effect that principles/values produce on the parameter in constitutional legitimacy review, in order to verify their implications for legality, in terms of predictability and certainty. In particular, having outlined the connection between constitutional principles and values, and having reconstructed, according to the theory of the legal order, the relationship between values and normativity, the thesis analyses the effects produced, at the interpretative level, by the opening of the constitutional parameter to the logic of values, emphasising the consequences for the constitutional review of legislation. Having identified the link between principles and values in the functional capacity of the former to realise fundamental rights, the thesis stresses how the broader realisation of constitutional principles-values could occur at the expense of statute law and legal certainty, in an inversely proportional relationship. This appears evident from the privileged perspective of criminal law, where a substantive legality, read in the light of criteria of adequacy and reasonable proportion, though closer to the demands of justice in the concrete case, risks, if pushed into interpretative excess, invading the field of the legislator, the only body entitled to make value choices.
Abstract:
This study investigates the optimisation of the in-plane shear strength of co-cured lap joints of a self-reinforced unidirectional thermoplastic composite (recycled low-density polyethylene reinforced with ultra-high-molecular-weight polyethylene fibres), by relating this strength to the hot-pressing process parameters used to form the joint (pressure, temperature, time and overlap length). The chemical structure of the matrix was analysed to check for potential degradation due to its recycled origin. Matrix and reinforcement were thermally characterised to define the joint processing temperature window to be studied. The curing conditions of the specimens were designed according to a Response Surface Design of Experiments methodology, and the relationship between joint shear strength and the corresponding curing parameters was obtained through a regression equation generated by the Ordinary Least Squares method. The tensile behaviour of the material was analysed micro- and macromechanically. Chemical analysis of the matrix showed no carboxylic groups that would indicate degradation by chain branching and crosslinking arising from recycling. The proposed test methodologies proved effective and may serve as a basis for the development of technical standards. It was shown that joints with an optimal shear strength of 6.88 MPa can be obtained when processed at 1 bar, 115 °C, 5 min and 12 mm overlap. Fracture analysis revealed that shear failure of the joints was preceded by multiple longitudinal cracks induced by successive debondings, both inside and outside the joint, due to the transverse stress accumulated in the joint, which is proportional to its length.
Temperature proved to be the most relevant processing parameter for joint performance, which is little affected by variations in pressure and curing time.
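The Response Surface / Ordinary Least Squares step described above can be sketched generically as fitting a second-order polynomial in (coded) process parameters. The data and coefficients below are synthetic stand-ins, not the thesis measurements:

```python
import numpy as np

def fit_quadratic_rs(x1, x2, y):
    """OLS fit of y ~ b0 + b1*x1 + b2*x2 + b11*x1**2 + b22*x2**2 + b12*x1*x2,
    the standard second-order response surface in two factors."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# synthetic example: two coded factors (e.g. temperature, time), strength in MPa
rng = np.random.default_rng(42)
t = rng.uniform(-1, 1, 60)    # coded temperature
p = rng.uniform(-1, 1, 60)    # coded time
y = 6.5 + 0.8 * t - 0.3 * p - 1.1 * t**2 - 0.2 * t * p   # hypothetical surface
beta = fit_quadratic_rs(t, p, y)
```

With a fitted surface of this form, the optimal operating point is found where the gradient vanishes, which is how a response-surface design locates the process settings that maximise joint strength.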
Abstract:
This thesis consists of three essays on information economics. I explore how information is strategically communicated or designed by senders who aim to influence the decisions of a receiver. In the first chapter, I study a cheap talk game between two imperfectly informed experts and a decision maker. The experts receive noisy signals about the state and sequentially communicate the relevant information to the decision maker. I refine the self-serving belief system under uncertainty and characterise the most informative equilibrium that might arise in such environments. In the second chapter, I consider the case where a decision maker seeks advice from a biased expert who also cares about establishing a reputation for being competent. The expert has incentives to misreport her information, but she faces a trade-off between the gain from misrepresentation and the potential reputation loss. I show that the equilibrium is fully revealing if the expert is not too biased and not too highly reputable. If there is competition between two experts, information transmission is always improved. However, with more than two experts the result is ambiguous, and it depends on the players' prior belief over states. In the last chapter, I consider a model of strategic communication where a privately and imperfectly informed sender can persuade a receiver. The sender may receive favorable or unfavorable private information about her preferred state. I describe two mechanisms that are adopted in real-life situations and theoretically improve equilibrium informativeness given the sender's private information: first, a policy that imposes symmetry constraints on the choice of experiments; second, an approval strategy characterised by a low precision threshold, above which the receiver accepts the sender with positive probability, and a higher one, above which the sender is accepted with certainty.
Abstract:
Massive Internet of Things is expected to play a crucial role in Beyond 5G (B5G) wireless communication systems, offering seamless connectivity among heterogeneous devices without human intervention. However, given the exponential proliferation of smart devices and IoT networks, relying solely on terrestrial networks may not fully meet the demanding IoT requirements in terms of bandwidth and connectivity, especially in areas where terrestrial infrastructures are not economically viable. To unleash the full potential of 5G and B5G networks and enable seamless connectivity everywhere, the 3GPP envisions the integration of Non-Terrestrial Networks (NTNs) into the terrestrial ones starting from Release 17. However, this integration process requires modifications to the 5G standard to ensure reliable communications despite typical satellite channel impairments. In this framework, this thesis proposes techniques at the Physical and Medium Access Control layers that require minimal adaptations of the current NB-IoT standard for operation via NTN. First, the satellite impairments are evaluated and a detailed link budget analysis is provided. Then, analyses at the link and system levels are conducted. In the former case, a novel algorithm leveraging time-frequency analysis is proposed to detect orthogonal preambles and estimate the signals' arrival time. Besides, the effects of collisions on the detection probability and Bit Error Rate are investigated, and Non-Orthogonal Multiple Access approaches are proposed for the random access and data phases. The system-level analysis evaluates the performance of random access in case of congestion. Various access parameters are tested in different satellite scenarios, and the performance is measured in terms of access probability and the time required to complete the procedure.
Finally, a heuristic algorithm is proposed to jointly design the access and data phases, determining the number of satellite passages, the Random Access Periodicity, and the number of uplink repetitions that maximize the system's spectral efficiency.
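The link budget analysis mentioned above can be illustrated with a minimal fragment: free-space path loss plus a simple carrier-to-noise-density estimate. The numbers used here (600 km slant range, S-band carrier, EIRP and G/T figures) are illustrative assumptions, not the thesis's values:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / 299_792_458.0)

def cn0_dbhz(eirp_dbw, gt_dbk, fspl, k_dbwhzk=-228.6):
    """C/N0 in dBHz: EIRP + G/T - path loss - Boltzmann's constant (in dB)."""
    return eirp_dbw + gt_dbk - fspl - k_dbwhzk

# example: LEO satellite at 600 km slant range, 2 GHz carrier
loss = fspl_db(600e3, 2e9)          # roughly 154 dB
link = cn0_dbhz(30.0, -10.0, loss)  # hypothetical EIRP = 30 dBW, G/T = -10 dB/K
```

In a full NB-IoT NTN budget, additional terms (atmospheric and polarisation losses, fading margins, and the large Doppler and delay variations typical of LEO links) would be subtracted from this ideal figure.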
Abstract:
Marine soft-bottom systems show high variability across multiple spatial and temporal scales. Natural and anthropogenic sources of disturbance act together in affecting benthic sedimentary characteristics and species distribution. The description of such spatial variability is required to understand the ecological processes behind it. However, in order to obtain a better estimate of spatial patterns, methods that take into account the complexity of the sedimentary system are required. This PhD thesis aims to give a significant contribution both to improving the methodological approaches to the study of biological variability in soft-bottom habitats and to increasing knowledge of the effect that different processes (both natural and anthropogenic) could have on the benthic communities of a large area in the North Adriatic Sea. Beta diversity is a measure of the variability in species composition, and Whittaker's index has become the most widely used measure of beta diversity. However, application of the Whittaker index to soft-bottom assemblages of the Adriatic Sea highlighted its sensitivity to rare species (species recorded in a single sample). This over-weighting of rare species induces biased estimates of heterogeneity, making it difficult to compare assemblages containing a high proportion of rare species. In benthic communities, the unusually large number of rare species is frequently attributed to a combination of sampling errors and insufficient sampling effort. In order to reduce the influence of rare species on the measure of beta diversity, I have developed an alternative index based on simple probabilistic considerations. It turns out that this probability index is an ordinary Michaelis-Menten transformation of Whittaker's index, but it behaves more favourably when species heterogeneity increases. The suggested index therefore seems appropriate when comparing patterns of complexity in marine benthic assemblages.
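For concreteness: Whittaker's index is beta_w = gamma / mean(alpha), the pooled species richness over the mean per-sample richness. The thesis's probability index is stated to be a Michaelis-Menten transformation of beta_w, but its exact form is not given in the abstract; the transform sketched below, with a hypothetical half-saturation constant, only illustrates the general saturating shape:

```python
# Whittaker's beta diversity and a Michaelis-Menten-type rescaling of it.
# The rescaling here is a hypothetical illustration of the saturating form,
# not the thesis's actual probability index.

def whittaker_beta(samples):
    """samples: list of sets, each the species observed in one sample."""
    gamma = len(set().union(*samples))                    # pooled richness
    mean_alpha = sum(len(s) for s in samples) / len(samples)
    return gamma / mean_alpha

def mm_transform(beta_w, k=1.0):
    """Michaelis-Menten rescaling of the excess beta (hypothetical constant k)."""
    excess = beta_w - 1.0          # 0 when all samples share the same species
    return excess / (k + excess)   # saturates at 1 instead of growing unbounded

samples = [{"a", "b", "c"}, {"a", "b", "d"}, {"a", "e"}]
```

A bounded, saturating index of this shape is less distorted by singletons: one extra rare species inflates beta_w additively, but moves the transformed value less and less as heterogeneity grows.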
Although the new index makes an important contribution to the study of biodiversity in sedimentary environments, it remains to be seen which processes, and at what scales, influence benthic patterns. The ability to predict the effects of ecological phenomena on benthic fauna highly depends on both the spatial and temporal scales of variation. Once defined, implicitly or explicitly, these scales influence the questions asked, the methodological approaches and the interpretation of results. Problems often arise when representative samples are not taken and results are over-generalized, as can happen when results from small-scale experiments are used for resource planning and management. Such issues, although globally recognized, are far from being resolved in the North Adriatic Sea. This area is potentially affected by both natural (e.g. river inflow, eutrophication) and anthropogenic (e.g. gas extraction, fish trawling) sources of disturbance. Although a few studies in this area aimed at understanding which of these processes mainly affect macrobenthos, they were conducted at a small spatial scale, as they were designed to examine local changes in benthic communities or particular species. However, in order to better describe all the putative processes occurring in the entire area, a high sampling effort performed at a large spatial scale is required. The sedimentary environment of the western part of the Adriatic Sea was extensively studied in this thesis. I have described, in detail, spatial patterns both in terms of sedimentary characteristics and macrobenthic organisms, and have suggested putative processes (natural or of human origin) that might affect the benthic environment of the entire area. In particular, I have examined the effect of offshore gas platforms on benthic diversity and tested their effect over a background of natural spatial variability.
The results obtained suggest that natural processes in the North Adriatic, such as river outflow and eutrophication, show an inter-annual variability that might have important consequences on benthic assemblages, affecting for example their spatial pattern moving away from the coast and along a north-to-south gradient. Depth-related factors, such as food supply, light, temperature and salinity, play an important role in explaining large-scale benthic spatial variability (i.e., affecting both abundance patterns and beta diversity). Nonetheless, more locally, effects probably related to organic enrichment or pollution from the Po river input have been observed. All these processes, together with a few human-induced sources of variability (e.g. fishing disturbance), have a greater effect on macrofauna distribution than any effect related to the presence of gas platforms. The main effect of gas platforms is restricted to small spatial scales and related to a change in habitat complexity due to natural dislodgement, or structure cleaning, of the mussels that colonize their legs. The accumulation of mussels on the sediment reasonably affects benthic infauna composition. All the components of the study presented in this thesis highlight the need to carefully consider methodological aspects related to the study of sedimentary habitats. With particular regard to the North Adriatic Sea, a multi-scale analysis along natural and anthropogenic gradients was useful for detecting the influence of all the processes affecting the sedimentary environment. In the future, applying a similar approach may lead to an unambiguous assessment of the state of the benthic community in the North Adriatic Sea. Such an assessment may be useful in understanding whether any anthropogenic source of disturbance has a negative effect on the marine environment and, if so, in planning sustainable strategies for proper management of the affected area.
Abstract:
The theory of the 3D multipole probability tomography method (3D GPT), which images the source poles, dipoles, quadrupoles and octopoles of a geophysical vector or scalar field dataset, is developed. A geophysical dataset is assumed to be the response of an aggregation of poles, dipoles, quadrupoles and octopoles. These physical sources are used to reconstruct, without a priori assumptions, the most probable position and shape of the true buried geophysical sources, by determining the location of their centres and the critical points of their boundaries, such as corners, wedges and vertices. This theory is then adapted to the geoelectrical, gravity and self-potential methods. A few synthetic examples using simple geometries and three field examples are discussed in order to demonstrate the notably enhanced resolution power of the new approach. First, the application to a field example related to a dipole-dipole geoelectrical survey carried out in the archaeological park of Pompei is presented. The survey aimed to recognize remains of the ancient Roman urban network, including roads, squares and buildings, buried under the thick pyroclastic cover that fell during the 79 AD Vesuvius eruption. The revealed anomaly structures are ascribed to well-preserved remnants of aligned walls of Roman edifices, buried and partially destroyed by the 79 AD Vesuvius pyroclastic fall. Then, a field example related to a gravity survey carried out in the volcanic area of Mount Etna (Sicily, Italy) is presented, aimed at imaging as accurately as possible the differential mass density structure within the first few km of depth inside the volcanic apparatus. An assemblage of vertical prismatic blocks appears to be the most probable gravity model of the Etna apparatus within the first 5 km of depth below sea level. Finally, an experimental SP dataset collected in the Mt.
Somma-Vesuvius volcanic district (Naples, Italy) is analysed in order to define the location and shape of the sources of two SP anomalies of opposite sign detected in the north-western sector of the surveyed area. The modelled sources are interpreted as the polarization state induced by an intense hydrothermal convective flow mechanism within the volcanic apparatus, from the free surface down to about 3 km depth b.s.l.
Abstract:
The present work reports the outcome of the GIMEMA CML WP study CML0811, an independent trial investigating nilotinib as front-line treatment in chronic-phase chronic myeloid leukemia (CML). Moreover, the results of the proteomic analysis of CD34+ cells collected at CML diagnosis, compared to their counterpart from healthy donors, are reported. Our study confirmed that nilotinib is highly effective in preventing progression to the accelerated/blast phase, a condition that today is still associated with high mortality rates. Despite the relatively short follow-up, cardiovascular issues, particularly atherosclerotic adverse events (AEs), have emerged, and the frequency of these AEs may counterbalance the anti-leukemic efficacy. The deep molecular response rates in our study compare favorably to those obtained with imatinib in historic cohorts, and confirm the findings of the company-sponsored ENESTnd study. Considering the increasing rates of deep molecular response over time that we observed, a significant proportion of patients will be candidates for treatment discontinuation in the coming years, with a higher probability of remaining disease-free in the long term. The presence of the additional and complex changes we found at the proteomic level in CML CD34+ cells should be taken into account in the investigation of novel targeted therapies aimed at the eradication of the disease.