29 results for DETAILED ANALYSIS
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
During the last few years, several methods have been proposed to study and evaluate characteristic properties of the human skin using non-invasive approaches. Mostly, these methods cover aspects related either to dermatology, to analyse skin physiology and evaluate the effectiveness of medical treatments of skin diseases, or to dermocosmetics and cosmetic science, to evaluate, for example, the effectiveness of anti-aging treatments. For these purposes a routine-based approach must be followed. Although very accurate and high-resolution measurements can be achieved with conventional methods, such as optical or mechanical profilometry, their use is quite limited, primarily because of the high cost of the required instrumentation, which is in turn usually cumbersome; both factors limit their suitability for routine analysis. This thesis investigates the feasibility of a non-invasive skin characterization system based on the analysis of capacitive images of the skin surface. The system relies on a portable CMOS capacitive device which provides a capacitance map of the skin micro-relief at a resolution of 50 microns/pixel. To extract characteristic features of the skin topography, image-analysis techniques such as watershed segmentation and wavelet analysis have been used to detect the main structures of interest: the wrinkles and plateaus of the typical micro-relief pattern. To validate the method, the features extracted from a dataset of skin capacitive images, acquired during dermatological examinations of a group of healthy volunteers, have been compared with the age of the subjects involved, showing good correlation with the skin ageing effect. A detailed analysis of the output of the capacitive sensor, compared with optical profilometry of silicone replicas of the same skin area, has revealed both the potential and some limitations of this technology. Applications to follow-up studies, as needed to objectively evaluate the effectiveness of treatments in a routine manner, are also discussed.
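As an illustration of the segmentation step described above, the sketch below applies a marker-based watershed to a grayscale capacitive map in order to separate wrinkle furrows from plateau regions. The toolchain (scikit-image), the file name and the percentile thresholds are assumptions made for this example; the thesis does not specify its implementation.

```python
# Minimal sketch (not the thesis code): marker-based watershed segmentation
# of a skin micro-relief image into plateau and wrinkle regions.
import numpy as np
from skimage import io, filters, segmentation

# Hypothetical file name; any grayscale capacitive map would do.
image = io.imread("capacitive_map.png", as_gray=True)

# Wrinkles appear as dark furrows: use the gradient magnitude as the relief surface.
gradient = filters.sobel(image)

# Seed markers: bright, flat plateau interiors vs. dark furrow bottoms.
markers = np.zeros_like(image, dtype=np.int32)
markers[image > np.percentile(image, 75)] = 1   # plateau seeds
markers[image < np.percentile(image, 25)] = 2   # wrinkle seeds

labels = segmentation.watershed(gradient, markers)
wrinkle_fraction = np.mean(labels == 2)
print(f"Estimated wrinkle area fraction: {wrinkle_fraction:.2%}")
```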
Abstract:
Forecasting the time, location, nature, and scale of volcanic eruptions is one of the most urgent tasks of modern applied volcanology. The reliability of probabilistic forecasting procedures is strongly related to the reliability of the input information provided, which implies objective criteria for interpreting the historical and monitoring data. For this reason, both detailed analysis of past data and more basic research into the processes of volcanism are fundamental parts of a continuous information-gain process; in this way the precursors of eruptions can be better interpreted in terms of their physical meaning, with associated uncertainties. This should lead to better predictions of the nature of eruptive events. In this work we have studied different problems associated with long- and short-term eruption forecasting. First, we discuss different approaches for the analysis of the eruptive history of a volcano, most of them generally applied for long-term eruption forecasting purposes; furthermore, we present a model based on the characteristics of a Brownian passage-time process to describe recurrent eruptive activity, and apply it to long-term, time-dependent eruption forecasting (Chapter 1). Conversely, in an effort to define further monitoring parameters as input data for short-term eruption forecasting in probabilistic models (for example, the Bayesian Event Tree for eruption forecasting, BET_EF), we analyze some characteristics of the typical seismic activity recorded at active volcanoes; in particular, we use methodologies that may be applied to analyze long-period (LP) events (Chapter 2) and volcano-tectonic (VT) seismic swarms (Chapter 3); our analyses are in general oriented toward tracking phenomena that can provide information about magmatic processes. Finally, we discuss some possible ways to integrate the results presented in Chapter 1 (for long-term EF) and Chapters 2 and 3 (for short-term EF) into the BET_EF model (Chapter 4).
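For reference, the Brownian passage-time (inverse Gaussian) distribution commonly used to describe recurrence intervals has the density below, with mean recurrence time \( \mu \) and aperiodicity \( \alpha \). This is the standard form from the statistical literature, quoted here only to make the Chapter 1 model concrete; the thesis may parameterize it differently.

$$
f(t;\mu,\alpha) \;=\; \sqrt{\frac{\mu}{2\pi\,\alpha^{2}\,t^{3}}}\;
\exp\!\left[-\frac{(t-\mu)^{2}}{2\,\mu\,\alpha^{2}\,t}\right],
\qquad t>0 .
$$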
Abstract:
In this research work I analyzed the instrumental seismicity of Southern Italy in the area including the Lucanian Apennines and the Bradano foredeep, making use of the most recent seismological database available to date. I examined the seismicity that occurred during the period between 2001 and 2006, considering 514 events with magnitudes M ≥ 2.0. In the first part of the work, P- and S-wave arrival times recorded by the Italian National Seismic Network (RSNC), operated by the Istituto Nazionale di Geofisica e Vulcanologia (INGV), were re-picked along with those of the SAPTEX temporary array (2001–2004). For some events located in the Upper Val d'Agri, I also used data from the Eni-Agip oil company seismic network. I computed the VP/VS ratio, obtaining a value of 1.83, and carried out an analysis of the one-dimensional (1D) velocity model that approximates the seismic structure of the study area. After this preliminary analysis, making use of the records obtained in the SeSCAL experiment, I extended the database by hand-picking new arrival times. My final dataset consists of 15,666 P- and 9,228 S-arrival times associated with 1,047 earthquakes with magnitude ML ≥ 1.5. I computed 162 fault-plane solutions and composite focal mechanisms for closely located events. I investigated the stress-field orientation by inverting focal mechanisms belonging to the Lucanian Apennines and the Pollino Range, both areas characterized by more concentrated background seismicity. Moreover, I applied the double-difference (DD) technique to improve the earthquake locations. Considering these results and different datasets available in the literature, I carried out a detailed analysis of individual sub-areas and of a swarm (November 2008) recorded by the SeSCAL array. The relocated seismicity appears concentrated within the upper crust and is mostly clustered along the Lucanian Apennine chain. In particular, two well-defined clusters were located in the Potentino and in the Abriola-Pietrapertosa sector (central Lucanian region). Their hypocentral depths are slightly greater than those observed beneath the chain. I suggest that these two seismic features are representative of the transition from the inner portion of the chain, with NE-SW extension, to the external margin, characterized by dextral strike-slip kinematics. In the easternmost part of the study area, below the Bradano foredeep and the Apulia foreland, the seismicity is generally deeper and more scattered and is associated with the Murge uplift and with the small structures present in the area. I also observed a small NE-SW-oriented structure in the Abriola-Pietrapertosa area (activated by a swarm in November 2008) that could act as a barrier to the propagation of a potential rupture of an active NW-SE-striking fault system. The focal mechanisms computed in this study are largely normal and strike-slip solutions, and their tensional axes (T-axes) have a general NE-SW orientation. Thanks to the denser coverage of seismic stations and the detailed analysis, this study is a further contribution to the understanding of the seismogenesis and state of stress of the Southern Apennines region, providing important input for seismotectonic zoning and seismic hazard assessment.
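A standard way to obtain an average VP/VS ratio of the kind quoted above is a Wadati diagram, in which S–P arrival-time differences are regressed against P travel times; the relation below is textbook material and is shown only as an illustration of the principle, not necessarily the exact procedure adopted in this work.

$$
t_S - t_P \;=\; \left(\frac{V_P}{V_S} - 1\right)\left(t_P - t_0\right),
$$

where \( t_0 \) is the event origin time, so the slope of the regression yields \( V_P/V_S - 1 \).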
Abstract:
The aim of this thesis is to apply a multilevel regression model in the context of household surveys. The hierarchical structure in this type of data is characterized by many small groups. In recent years, comparative and multilevel analyses in the field of perceived health have grown in number. The purpose of this thesis is to develop a multilevel analysis with three levels of hierarchy for the Physical Component Summary outcome in order to: evaluate the magnitude of the within- and between-group variance at each level (individual, household and municipality); explore which covariates affect perceived physical health at each level; compare model-based and design-based approaches in order to establish the informativeness of the sampling design; and estimate a quantile regression for hierarchical data. The target population is Italian residents aged 18 years and older. Our study shows a high degree of homogeneity among level-1 units belonging to the same group, with an intraclass correlation of 27% in a level-2 null model. Almost all of the variance is explained by level-1 covariates. In fact, in our model the explanatory variables having the greatest impact on the outcome are disability, inability to work, age and chronic diseases (18 pathologies). An additional analysis was performed using a novel procedure, the Linear Quantile Mixed Model, here termed Multilevel Linear Quantile Regression. This makes it possible to describe the conditional distribution of the response more generally through the estimation of its quantiles, while accounting for the dependence among the observations. This represented a great advantage of our models with respect to classic multilevel regression. The median regression with random effects proves to be more efficient than the mean regression in representing the central tendency of the outcome. A more detailed analysis of the conditional distribution of the response at other quantiles highlighted a differential effect of some covariates along the distribution.
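For clarity, the intraclass correlation quoted above (27% in the level-2 null model) is the usual variance ratio of a random-intercept null model; the expression below is the standard two-level definition, not a formula specific to this thesis.

$$
\rho \;=\; \frac{\sigma^{2}_{u}}{\sigma^{2}_{u} + \sigma^{2}_{e}},
$$

where \( \sigma^{2}_{u} \) is the between-group (household) variance of the random intercepts and \( \sigma^{2}_{e} \) is the within-group (individual) residual variance.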
Abstract:
The PhD thesis was developed in the framework of the Innovar H2020 project. This project aimed at using genomics, transcriptomics and phenotyping techniques to update the varietal registration procedures used in Europe for the Value for Cultivation and Use (VCU) and Distinctness, Uniformity and Stability (DUS) protocols. The phenotypic and genotypic diversity of a durum wheat panel was assessed for different agronomic traits connected with wheat development, disease resistance and spike fertility. A panel of 253 durum wheat varieties was characterized for VCU and DUS traits and genotyped with the Illumina 90K SNP Chip array (Wang et al., 2014). A genome-wide association study (GWAS) was performed, detecting strong QTLs also confirmed by the literature. Candidate genes were identified for each trait, and molecular markers will be developed for marker-assisted selection in breeding programs. As for disease resistance, the panel was evaluated for resistance to Soil-Borne Cereal Mosaic Virus (SBCMV). A major QTL responsible for durum wheat resistance, sbm2, was detected on chromosome 2B (Maccaferri et al., 2011). The sbm2 interval was explored by fine mapping on a segregant population using KASP markers and by RNA-Seq analysis, detecting candidate genes involved in the plant-pathogen interaction. As regards yield-related traits, a detailed analysis was performed on the GNI-2A QTL (Milner et al., 2016), responsible for increased spike fertility. Fine-mapping analysis performed on the durum panel identified hox2 as a strong candidate gene, encoding a transcription factor. The gene is a paralogue of GNI-1 (Sakuma et al., 2019) and carries a 4 kbp deletion responsible for an increased number of florets per spikelet. To conclude, the thesis reported herein presents a complete characterization of agronomic and disease-resistance traits in modern durum wheat varieties. The results obtained will augment the information available for each variety, identifying informative molecular markers for breeding purposes and QTLs/candidate genes responsible for different agronomic traits.
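As a rough illustration of the single-marker association scan that underlies a GWAS (real pipelines additionally correct for population structure and kinship, typically with mixed models, which is not shown here), the Python sketch below regresses a phenotype on each SNP dosage. File and column names are hypothetical.

```python
# Minimal single-marker GWAS scan (illustration only, no structure/kinship correction).
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical inputs: rows = varieties; SNPs coded as 0/1/2 dosages.
genotypes = pd.read_csv("snp_matrix.csv", index_col=0)
phenotype = pd.read_csv("phenotypes.csv", index_col=0)["florets_per_spikelet"]

pvalues = {}
for snp in genotypes.columns:
    X = sm.add_constant(genotypes[snp])          # intercept + marker dosage
    fit = sm.OLS(phenotype, X, missing="drop").fit()
    pvalues[snp] = fit.pvalues[snp]

# Manhattan-style summary: -log10(p) per marker, strongest associations first.
scores = -np.log10(pd.Series(pvalues))
print(scores.sort_values(ascending=False).head())
```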
Abstract:
Galaxy clusters occupy a special position in the cosmic hierarchy as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters form by accretion of matter and merging between smaller units. During merger events, shocks are driven by the gravity of the dark matter into the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a "revision" of the physics of the ICM. The bulk of the present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster centre) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of the magnetic fields and into the acceleration of high-energy particles via the shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) are best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high-energy particles in the ICM. The physics involved in this scenario is very complex and model details are difficult to test; however, this model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint on the statistical properties of Radio Halos (and of non-thermal phenomena in general) which, however, have not yet been addressed by present models. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we address the following main questions:
• Is it possible to model "self-consistently" the evolution of these sources together with that of the parent clusters?
• How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency?
• How many Radio Halos are expected to form in the Universe? At which redshift is the bulk of these sources expected?
• Is it possible to reproduce, in the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters?
• Is it possible to constrain the magnetic field intensity and profile in galaxy clusters, and the energetics of turbulence in the ICM, from the comparison between model expectations and observations?
Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For this reason we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM which are relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of the magnetic field and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations carried out in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of the fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8. In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters and assume that during a merger a fraction of the PdV work done by the infalling subclusters in passing through the most massive one is injected in the form of magnetosonic waves. Then the processes of stochastic acceleration of the relativistic electrons by these waves, and the properties of the ensuing synchrotron (Radio Halos) and inverse Compton (IC, hard X-ray) emission of merging clusters, are computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the most massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass. The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, LX) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter also allow us to derive the evolution of the probability to form Radio Halos as a function of cluster mass and redshift. The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power of ∼ 10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ∼ 100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequency and allows the design of future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present a work in progress on a "revision" of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ∼ 0.05–0.2) with our ongoing GMRT Radio Halo Pointed Observations of 50 X-ray-luminous galaxy clusters (at z ∼ 0.2–0.4) and discuss the possibility of testing our model expectations with the number counts of Radio Halos at z ∼ 0.05–0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an "average" size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the formation process of clusters, while a more detailed analysis of the physics of cluster mergers and of the injection of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is, however, well beyond the aim of this PhD thesis. On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (RH) of Radio Halos and their radio power, and between RH and the cluster mass within the Radio Halo region, MH. In particular, this last "geometrical" MH–RH correlation allows us to "observationally" overcome the limitation of the "average" size of Radio Halos. Thus in this Chapter, by making use of this "geometrical" correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio-emitting region. This is a new powerful tool of investigation, and we show that all the observed correlations (PR–RH, PR–MH, PR–T, PR–LX, . . . ) are now well understood in the context of the re-acceleration model.
In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster, and this immediately means that the fraction of the cluster volume which is radio emitting increases with cluster mass and thus that the non-thermal component in clusters is not self-similar.
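The spectral cut-off discussed above can be summarized by the generic synchrotron scaling below: the maximum electron energy is set by the balance between the acceleration and radiative-loss timescales, and the observed cut-off frequency grows with the square of the corresponding Lorentz factor. This is the standard scaling, not a relation derived in this thesis.

$$
\nu_{\rm cut} \;\propto\; \frac{\gamma_{\rm max}^{2}\, B}{1+z},
\qquad
\tau_{\rm acc}(\gamma_{\rm max}) \simeq \tau_{\rm loss}(\gamma_{\rm max}),
$$

so that more efficient acceleration (shorter \( \tau_{\rm acc} \)) pushes the cut-off to higher observing frequencies.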
Abstract:
The scaling down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. The classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete, as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand. The most promising solution for future chip interconnects is the Network on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs, which is installed in the Sony PlayStation 3, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors. On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes, etc. SoC manufacturers such as ST Microelectronics, Samsung and Philips, and universities such as Bologna University, M.I.T., Berkeley and others, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers in the switch of design methodology and speed up the development of new NoC-based systems on chip. In this Thesis we provide an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose:
• a detailed, simulation-based analysis of the Spidergon NoC, an ST Microelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel computing world. Here we propose a detailed analysis of this NoC topology and its routing algorithms. Furthermore, we propose aEqualized, a new routing algorithm designed to optimize the use of the resources of the network while also increasing its performance (a generic routing sketch is given after this list);
• a methodology flow based on modified publicly available tools that, combined, can be used to design, model and analyze any kind of System on Chip;
• a detailed analysis of an ST Microelectronics proprietary transport-level protocol that the author of this Thesis helped to develop;
• a simulation-based comprehensive comparison of different network interface designs proposed by the author and the researchers at the AST lab, in order to integrate shared-memory and message-passing based components on a single System on Chip;
• a powerful and flexible solution to address the timing closure exception issue in the design of synchronous Networks on Chip. Our solution is based on relay-station repeaters and reduces the power and area demands of NoC interconnects while also reducing their buffer needs;
• a solution to simplify the design of NoCs while also increasing their performance and reducing their power and area consumption. We propose to replace complex and slow virtual channel-based routers with multiple, flexible, small multi-plane ones. This solution allows us to reduce the area and power dissipation of any NoC while also increasing its performance, especially when resources are reduced.
This Thesis has been written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department of Columbia University in the City of New York.
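The sketch below gives a concrete flavour of routing on a Spidergon-like topology, modelled here as a ring of N nodes in which each node also has a link to the diametrically opposite node, with a simple shortest-path, across-first next-hop rule. It is a generic illustration under that topological assumption, not the ST Microelectronics implementation nor the aEqualized algorithm analysed in the Thesis.

```python
# Generic across-first next-hop routing on a Spidergon-like topology:
# node i is linked to (i+1) % n, (i-1) % n and the opposite node (i + n//2) % n.
def spidergon_next_hop(current: int, dest: int, n: int) -> int:
    """Return the next node on a shortest path from current to dest."""
    assert n % 2 == 0, "the topology assumes an even number of nodes"
    delta = (dest - current) % n            # clockwise distance to destination
    if delta == 0:
        return current                      # already at destination
    if delta <= n // 4:
        return (current + 1) % n            # short clockwise hop along the ring
    if delta >= 3 * n // 4:
        return (current - 1) % n            # short counter-clockwise hop
    return (current + n // 2) % n           # otherwise take the across link first

# Example: route a packet from node 0 to node 9 on a 16-node network.
node, path = 0, [0]
while node != 9:
    node = spidergon_next_hop(node, 9, 16)
    path.append(node)
print(path)   # -> [0, 8, 9]
```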
Abstract:
Water is a safe, harmless, and environmentally benign solvent. From an eco-sustainable chemistry perspective, the use of water instead of organic solvents is preferred in order to decrease environmental contamination. Moreover, water has unique physical and chemical properties, such as a high dielectric constant and a high cohesive energy density compared to most organic solvents. The different interactions between water and substrates make water an interesting candidate as a solvent or co-solvent from both an industrial and a laboratory perspective. In this regard, organic reactions in aqueous media are of current interest. In addition, from a practical and synthetic standpoint, a great advantage of using water is immediately evident, since it does not require any preliminary drying process. This thesis is founded on this aspect of chemical research, with particular attention to the mechanisms that control the outcome of organo- and bio-catalysis. The first part of the study focused on the aldol reaction. In particular, the stereoselectivity of the condensation reaction between 3-pyridinecarbaldehyde and cyclohexanone, catalyzed by morpholine and 4-tert-butyldimethylsilyloxyproline, using water as the sole solvent, was analyzed for the first time. Interest in this area has resulted in countless works in the literature concerning the use of proline derivatives as effective catalysts in aqueous organic reactions. These studies showed good enantio- and diastereoselectivities, but they did not present an in-depth study of the reaction mechanism. The analysis of the diastereomeric ratios of the products through the Eyring equation allowed the activation parameters (ΔΔH≠ and ΔΔS≠) of the diastereomeric reaction paths, and the different types of catalysis, to be compared. While morpholine showed a constant diastereomeric ratio at all temperatures, O(TBS)-L-proline showed a non-linear Eyring plot, with two linear trends and an inversion temperature (Tinv) at 53 °C, which denotes the presence of solvation effects by water. A pH-dependent study allowed two different reaction mechanisms to be identified and, in the case of O(TBS)-L-proline, confirmed the formation of an enamine species as a key element in the stereoselective process. Moreover, the possibility of using 6-aminopenicillanic acid (6-APA) as an amino acid-type catalyst for the aldol condensation between cyclohexanone and aromatic aldehydes was studied. A detailed analysis of the catalyst's behavior in different organic solvents and at different pH values proved its potential as a candidate for green catalysis. The best results were obtained under neat conditions, where 6-APA proved to be an effective catalyst in terms of yields. The catalyst's performance in terms of enantio- and diastereoselectivity was impaired by the competition between two different catalytic mechanisms: one via an imine-enamine mechanism and one via Brønsted-acid catalysis. The last part of the thesis was dedicated to enzymatic catalysis, with particular attention to the use of an enzyme belonging to the alcohol dehydrogenase class, Horse Liver Alcohol Dehydrogenase (HLADH), which was selected and used in the enantioselective reduction of aldehydes to enantiopure arylpropylic alcohols. This enzyme showed excellent responsiveness to this type of aldehyde and a good tolerance toward organic solvents. Moreover, the fast keto-enol equilibrium of this class of aldehydes, which induces racemization of the stereocentre, allows a dynamic kinetic resolution (DKR) to give the enantiopure alcohol. By analyzing the different reaction parameters, especially the pH and the amount of enzyme, and by adding a small percentage of organic solvent, it was possible to control all the parameters involved in the reaction. The excellent enantioselectivity of HLADH, together with the DKR of the arylpropionic aldehydes, allowed the corresponding alcohols to be obtained in quantitative yields and with an optical purity ranging from 64% to >99%.
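The Eyring-type analysis mentioned above rests on the standard relation between the ratio of the rate constants of two competing diastereomeric pathways and their differential activation parameters; in its usual linearized form, a plot of ln(d.r.) versus 1/T is a straight line unless solvation effects or a change of mechanism introduce a break at an inversion temperature. The subscripts below are generic labels for the two pathways, not an assignment made in the thesis.

$$
\ln\frac{k_{1}}{k_{2}} \;=\; -\frac{\Delta\Delta H^{\ddagger}}{RT} \;+\; \frac{\Delta\Delta S^{\ddagger}}{R},
$$

where, under kinetic control, \( k_{1}/k_{2} \) equals the diastereomeric ratio of the products.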
Abstract:
The research is part of a survey of the hydraulic and geotechnical conditions of river embankments funded by the Reno River Basin Regional Technical Service of the Emilia-Romagna Region. The hydraulic safety of the Reno River, one of the main rivers in north-eastern Italy, is indeed of primary importance to the Emilia-Romagna regional administration. The large longitudinal extent of the banks (several hundreds of kilometres) has generated great interest in non-destructive geophysical methods which, compared to other methods such as drilling, allow the faster and often less expensive acquisition of high-resolution data. The present work aims to assess Ground Penetrating Radar (GPR) for the detection of local non-homogeneities (mainly stratigraphic contacts, cavities and conduits) inside the embankments of the Reno River and its tributaries, taking into account supplementary data collected with traditional destructive tests (boreholes, cone penetration tests, etc.). A comparison with non-destructive methodologies such as electrical resistivity tomography (ERT), Multi-channel Analysis of Surface Waves (MASW) and FDEM induction was also carried out, in order to verify the usability of GPR and to integrate various geophysical methods into the regular maintenance and inspection of the embankments. The first part of this thesis is dedicated to the state of the art concerning the geographic, geomorphologic and geotechnical characteristics of the embankments of the Reno River and its tributaries, as well as to a description of some geophysical applications on embankments of European and North American rivers, which served as the bibliographic basis for this thesis. The second part is an overview of the geophysical methods employed in this research (with particular attention to GPR), also reporting their theoretical basis and examining in depth some techniques for geophysical data analysis and representation when applied to river embankments. The subsequent chapters, following the main scope of this research, namely to highlight the advantages and drawbacks of applying Ground Penetrating Radar to the embankments of the Reno River and its tributaries, show the results obtained by analyzing different cases that could lead to the formation of weakness zones and, subsequently, to embankment failure. Among the advantages, a considerable acquisition speed and a spatial resolution of the obtained data unmatched by other methodologies were recorded. With regard to the drawbacks, factors related to attenuation losses during wave propagation, due to the varying content of clay, silt and sand, as well as surface effects, significantly limited the correlation between GPR profiles and geotechnical information and therefore compromised the embankment safety assessment. In summary, Ground Penetrating Radar could represent a suitable tool for checking river dike conditions, but its use is significantly limited by the geometric and geotechnical characteristics of the levees of the Reno River and its tributaries. In fact, only the shallower part of the embankment could be investigated, and the information obtained relates only to changes in electrical properties, without any quantitative measurement. Furthermore, GPR is ineffective for a preliminary assessment of embankment safety conditions, whereas for detailed campaigns at shallow depth, which aim to achieve immediate results with optimal precision, its use is strongly recommended. The cases in which a multidisciplinary approach was tested reveal an optimal interplay of the various geophysical methodologies employed: producing qualitative results in the preliminary phase (FDEM), assuring a quantitative and highly reliable description of the subsoil (ERT) and, finally, providing fast and highly detailed analysis (GPR). As a recommendation for future research, the simultaneous use of several geophysical devices to assess the safety conditions of river embankments is strongly suggested, especially when facing a likely flood event, when the entire extension of the embankments must be investigated.
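For context, GPR depth estimates of the kind discussed above follow from the two-way travel time of the radar pulse and the wave velocity in the soil, which depends on the relative dielectric permittivity; the standard relations are shown below. High clay content increases electrical conductivity and hence attenuation, which is the limitation noted above.

$$
v \;\simeq\; \frac{c}{\sqrt{\varepsilon_r}},
\qquad
d \;=\; \frac{v\, t_{2w}}{2},
$$

where \( c \) is the speed of light in vacuum, \( \varepsilon_r \) the relative dielectric permittivity of the soil, \( t_{2w} \) the two-way travel time and \( d \) the reflector depth.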
Abstract:
The importance of organizational issues in assessing the success of international development projects has not yet been fully considered. After a brief overview, in the 1st chapter, of the main actors involved in international cooperation, the 2nd chapter presents an analysis of the literature on the definition of project success, focused on success criteria and success factors, carried out by surveying the contributions of different authors and approaches. Traditionally, projects were perceived as successful when they met time, budget and performance goals, assuming a basic similarity among projects (universalistic approach). However, starting from a non-universalistic approach, the importance of organizational effectiveness, in terms of Relations Sustainability, emerged as a dimension able to define and assess project success. Identifying the factors that influence the relationships between and within organizations consequently becomes a priority. In the 3rd chapter, starting from a literature survey, the different analytical approaches related to inter- and intra-organizational relationships are analysed. They involve two different groups: the first includes studies focused on the type of organizational relationship structure (Supply Chains, Networks, Clusters and Industrial Districts); the second includes approaches related to general theories of firm relationships (Transaction Cost Economics, Resource-Based View, Organization Theory). The variables and logical frameworks provided by these different theoretical contributions are compared and classified in order to find possible connections and/or juxtapositions. Since an exhaustive collection of the literature on the subject is impossible, the main goal is to underline the existence of potentially overlapping and/or complementary approaches by examining the contributions of different representative authors. The survey showed, first of all, many variables in common between approaches coming from different disciplines; furthermore, the non-overlapping variables can be integrated, contributing to a broader picture of the variables influencing organizational relations; in particular, a theoretical design for the identification of connections between inter- and intra-organizational relations was made possible. The results obtained in the 3rd chapter help to define a general theoretical framework linking the different interpretative variables. Based on extensive research contributions on the factors influencing the relations between organizations, the 4th chapter expands the analysis of the influence of variables such as Human Resource Management, Organizational Climate, Psychological Contract and KSA (Knowledge, Skills, Abilities) on Relations Sustainability. A detailed analysis of these relations is provided and research hypotheses are built. According to this new framework, in the 5th chapter a statistical analysis was performed to qualify and quantify the influence of Organizational Climate on Relations Sustainability. To this end, Structural Equation Modeling (SEM) was adopted as the method for the definition of the latent variables and the measurement of their relations. The results obtained are satisfactory. Finding an effective strategy to motivate respondents to participate in the survey currently seems to be one of the major obstacles to implementing the analysis, since organizational performance is not specifically required by project evaluation guidelines and it represents an increase in project-related transaction costs. The explicit introduction of organizational performance into project presentation guidelines should be explored as an opportunity to increase the chances of success of these projects.
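For readers unfamiliar with SEM, models of the kind estimated in the 5th chapter can be written in the generic latent-variable form below (standard LISREL-style notation, not the specific model of the thesis): observed indicators load on latent constructs such as Organizational Climate and Relations Sustainability, and a structural equation links the latent variables.

$$
\mathbf{y} = \boldsymbol{\Lambda}\,\boldsymbol{\eta} + \boldsymbol{\varepsilon},
\qquad
\boldsymbol{\eta} = \mathbf{B}\,\boldsymbol{\eta} + \boldsymbol{\Gamma}\,\boldsymbol{\xi} + \boldsymbol{\zeta},
$$

where \( \mathbf{y} \) are the observed indicators, \( \boldsymbol{\eta} \) and \( \boldsymbol{\xi} \) the endogenous and exogenous latent variables, \( \boldsymbol{\Lambda} \) the loading matrix, and \( \boldsymbol{\varepsilon}, \boldsymbol{\zeta} \) the error terms.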
Abstract:
The aim of this PhD thesis, developed in the framework of the Italian Agroscenari research project, is to compare current irrigation volumes in two study areas in Emilia-Romagna with the likely irrigation demand under climate change conditions. This comparison was carried out between the reference period 1961-1990, as defined by the WMO, and the 2021-2050 period, for which multi-model climate projections over the two study areas were available. The climate projections were analyzed in terms of their impact on irrigation demand and adaptation strategies for fruit and horticultural crops in the study area of Faenza, with a detailed analysis for the kiwifruit vine, and for horticultural crops in the Piacenza plain, focusing on the irrigation water needs of tomato. We produced downscaled climate projections (based on the IPCC A1B emission scenario) for the two study areas. The climate change impacts for the period 2021-2050 on crop irrigation water needs and other agrometeorological indices were assessed by means of the Criteria water balance model, in its two available versions, Criteria BdP (local) and Geo (spatial), with different levels of detail. In general, for both areas we found an irrigation demand increase of about +10% when comparing the 2021-2050 period with the reference years 1961-1990, but no substantial differences with respect to more recent years (1991-2008), mainly because a projected increase in spring precipitation compensates for the projected higher summer temperatures and evapotranspiration. As a consequence, no dramatic increase in irrigation volumes with respect to the current volumes is forecast.
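Crop irrigation requirements in models of this family are typically derived from a daily soil water balance; the generic form below is shown only to illustrate the approach, and the actual Criteria formulation includes additional terms and crop-specific coefficients.

$$
S_{t+1} \;=\; S_{t} + P_{t} + I_{t} - ET_{t} - D_{t} - R_{t},
$$

where \( S \) is the soil water storage, \( P \) precipitation, \( I \) irrigation, \( ET \) evapotranspiration, \( D \) deep drainage and \( R \) surface runoff on day \( t \).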
Abstract:
This PhD Thesis is devoted to the accurate analysis of the physical properties of Active Galactic Nuclei (AGN) and the AGN/host-galaxy interplay. Due to the broad-band AGN emission (from radio to hard X-rays), a multi-wavelength approach is mandatory. Our research is carried out over the COSMOS field, within the context of the XMM-Newton wide-field survey. To date, the COSMOS field is a unique area for comprehensive multi-wavelength studies, allowing us to define a large and homogeneous sample of QSOs with well-sampled spectral coverage and to keep selection effects under control. Moreover, the broad-band information contained in the COSMOS database is well suited to a detailed analysis of AGN SEDs, bolometric luminosities and bolometric corrections. In order to investigate the nature of both obscured (Type-2) and unobscured (Type-1) AGN, the observational approach is complemented with theoretical modelling of the AGN/galaxy co-evolution. The X-ray to optical properties of an X-ray selected Type-1 AGN sample are discussed in the first part. The relationship between X-ray and optical/UV luminosities, parametrized by the spectral index αox, provides a first indication of the nature of the central engine powering the AGN. Since a Type-1 AGN outshines the surrounding environment, it is extremely difficult to constrain the properties of its host galaxy. Conversely, in Type-2 AGN the host-galaxy light is the dominant component of the optical/near-IR SEDs, severely affecting the recovery of the intrinsic AGN emission. Hence a multi-component SED-fitting code is developed to disentangle the emission of the stellar population of the galaxy from that associated with mass accretion. Bolometric corrections, luminosities, stellar masses and star-formation rates, correlated with the morphology of Type-2 AGN hosts, are presented in the second part, while the final part concerns a physically motivated model for the evolution of spheroidal galaxies with a central SMBH. The model is able to reproduce two important stages of galaxy evolution, namely the obscured cold phase and the subsequent quiescent hot phase.
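The spectral index αox mentioned above is conventionally defined as the slope of a hypothetical power law connecting the monochromatic luminosities at 2500 Å and 2 keV; sign conventions vary in the literature, and the numerical factor below simply reflects the frequency ratio of the two bands.

$$
\alpha_{\mathrm{ox}} \;=\; \frac{\log\left(L_{2\,\mathrm{keV}}/L_{2500\,\text{\AA}}\right)}{\log\left(\nu_{2\,\mathrm{keV}}/\nu_{2500\,\text{\AA}}\right)}
\;\approx\; 0.384\,\log\!\left(\frac{L_{2\,\mathrm{keV}}}{L_{2500\,\text{\AA}}}\right).
$$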
Abstract:
The research addressed in this thesis concerns an examination of the EU-level and national experiences in Europe regarding VAT avoidance and VAT anti-avoidance, in order to identify the experiences that can be drawn upon by China, which should begin to take appropriate action to face the growing spread of VAT avoidance phenomena, particularly in the context of the ongoing reform of the Chinese VAT towards the modern model. To this end, the thesis first analyzed in detail the following main issues on the basis of the EU-level experience and the national experiences of certain EU Member States: the definition of tax avoidance in general (in this regard, the most relevant points are the differences between tax avoidance and other related concepts such as tax saving, tax evasion and sham transactions, and the intrinsic relationship between tax avoidance and other related concepts such as fraud on the law and abuse of law); the definition of VAT avoidance (in this regard, the most relevant points are the aspects of particular interest for the purposes of defining VAT avoidance); the principles and methodologies of VAT avoidance; the application of the various anti-avoidance measures addressing VAT avoidance in the EU legal order and in the domestic legal orders of some of the main EU Member States (in this regard, the most relevant points are the application of the Halifax principle, as a general anti-avoidance rule based on the principle of prohibition of abuse of law, and the consideration of the other anti-avoidance solutions applicable to VAT, including legislative corrections and specific anti-avoidance clauses); and the effects of VAT anti-avoidance and the limits within which the tax authority may exercise VAT anti-avoidance powers so as to protect the legitimate interests of taxable persons, such as legal certainty and contractual autonomy. The thesis then puts forward proposals for improving the VAT anti-avoidance solutions in the Chinese tax system, after presenting the main current regimes of the Chinese VAT and analyzing the current situation regarding VAT avoidance and anti-avoidance in the Chinese tax system.
Abstract:
The supply of mineral resources and the protection of the environment are often regarded as opposing and irreconcilable activities, but in reality they represent two essential needs of modern societies. Since georesources are non-renewable, they must be exploited efficiently, using tools that guarantee the environmental, social and economic sustainability of extractive operations. The need to protect the territory and to improve the quality of life of local communities requires public administrations to implement measures for the requalification of degraded areas; however, until the early 1990s the sector legislation did not provide instruments for this purpose, which led to the proliferation of disused and abandoned extraction sites without environmental recovery interventions. This research work provides innovative contributions to the sustainable planning and design of extractive activities, through the adoption of a multidisciplinary approach to the subject and the expert use of Geographic Information Systems, in particular GRASS GIS. Following an in-depth analysis of the tools and procedures adopted in the planning of extractive activities in Italy, a survey method and an expert system were developed for the prediction and control of ground vibrations induced by blasting in open-pit quarries, which make it possible to optimize blast design and the vibration monitoring system thanks to specific operational tools implemented in GRASS GIS. To support a more effective programming of territorial requalification interventions, a procedure was developed for the selection of disused sites and of potential requalification interventions, which optimizes planning activities by identifying interventions characterized by high environmental, economic and social sustainability. The results obtained demonstrate the need for an expert approach to the planning and design of extractive activities, increasing their sustainability through the adoption of more efficient operational tools.
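Predictive tools for blast-induced ground vibration commonly rely on an empirical scaled-distance attenuation law fitted to monitoring data; a widely used square-root scaled-distance form is shown below as an illustration, and the specific formulation and site constants adopted in this work may differ.

$$
\mathrm{PPV} \;=\; K \left(\frac{R}{\sqrt{Q}}\right)^{-\beta},
$$

where PPV is the peak particle velocity, \( R \) the distance from the blast, \( Q \) the maximum charge per delay, and \( K, \beta \) site-specific constants obtained by regression on measured vibrations.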
Abstract:
The dynamics of a passive back-to-back test rig have been characterised, leading to a multi-coordinate approach for the analysis of arbitrary test configurations. Universal joints have been introduced into a typical pre-loaded back-to-back system in order to produce an oscillating torsional moment in a test specimen. Two different arrangements have been investigated using a frequency-based sub-structuring approach: the receptance method. A numerical model has been developed in accordance with this theory, allowing the interconnection of systems with two coordinates and closed multi-loop schemes. The model calculates the receptance functions and the modal and deflected shapes of a general system. Closed-form expressions have been developed for the following individual elements: a servomotor, a damped continuous shaft and a universal joint. Numerical results for specific cases have been compared with published data in the literature and with experimental measurements undertaken in the present work. Due to the complexity of the universal joint and its oscillating dynamic effects, a more detailed analysis of this component has been carried out. Two models have been presented. The first represents the joint as two inertias connected by a massless cross-piece. The second, derived from the dynamic analysis of a spherical four-link mechanism, considers the contribution of the floating element and its gyroscopic effects. An investigation into non-linear behaviour has led to a time-domain model that uses the fourth-order Runge-Kutta method for the resolution of the dynamic equations. It has been demonstrated that the torsional receptances of a universal joint, derived using the simple model, result in the representation of the joint as an equivalent variable inertia. In order to verify the model, a test rig has been built and experimental validation undertaken. The variable inertia of a universal joint has led to a novel application of the component as a passive device for balancing inertia variations in slider-crank mechanisms.
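As a minimal sketch of the kind of time-domain integration mentioned above, the Python code below applies the classical fourth-order Runge-Kutta scheme to a single-degree-of-freedom torsional oscillator with an angle-dependent inertia, a toy stand-in for the universal-joint behaviour. The coefficients and the inertia law are illustrative assumptions, not the model developed in the thesis.

```python
# Illustrative RK4 integration of J(theta)*theta'' + c*theta' + k*theta = T0*sin(w*t),
# a toy torsional oscillator with an angle-dependent ("variable") inertia.
import numpy as np

J0, dJ = 1.0, 0.2          # mean inertia and variation amplitude [kg m^2] (assumed)
c, k = 0.5, 50.0           # damping [N m s/rad] and stiffness [N m/rad] (assumed)
T0, w = 2.0, 5.0           # excitation amplitude [N m] and frequency [rad/s] (assumed)

def inertia(theta):
    # Simple 2-per-revolution inertia variation, qualitatively like a Cardan joint.
    return J0 + dJ * np.cos(2.0 * theta)

def f(t, y):
    # State y = [theta, omega]; the dJ/dtheta coupling term is neglected here,
    # purely to keep this a short numerical illustration of the RK4 scheme.
    theta, omega = y
    torque = T0 * np.sin(w * t) - c * omega - k * theta
    return np.array([omega, torque / inertia(theta)])

def rk4_step(t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

t, y, h = 0.0, np.array([0.0, 0.0]), 1e-3
for _ in range(20000):     # 20 s of simulated time
    y = rk4_step(t, y, h)
    t += h
print(f"theta(t={t:.1f} s) = {y[0]:.4f} rad")
```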