12 results for FGGE-Equator '79 - First GARP Global Experiment
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The effect of soil incorporation of seven Meliaceae derivatives (six commercial neem cakes and leaves of Melia azedarach L.) on C and N dynamics and on nutrient availability to micropropagated GF677 rootstock was investigated. In a first laboratory incubation experiment the derivatives showed different N mineralization dynamics, generally well predicted by their C:N ratio and only partly by their initial N concentration. All derivatives increased microbial biomass C, thus representing a source of C for the soil microbial population. Soil addition of all neem cakes (8 g kg-1) and melia leaves (16 g kg-1) had a positive effect on plant growth and increased root N uptake and leaf green colour of micropropagated GF677 plants. In addition, the neem cakes characterized by higher nutrient concentrations increased P and K concentrations in shoots and leaves 68 days after the amendment. In another experiment, soil incorporation of 15N-labelled melia leaves (16 g kg-1) had no effect on the total amount of plant N; however, the percentage of melia-derived N in treated plants ranged between 0.8% and 34% during the experiment. At the end of the growing season, about 7% of the N added as melia leaves was recovered in the plant, while 70% of it was still present in the soil. Real C mineralization and the priming effect induced by the addition of the derivatives were quantified by a natural 13C abundance method. The real C mineralization of the derivatives ranged between 22% and 40% of added C. All the derivatives studied induced a positive priming effect and, 144 days after the amendment, the amount of C primed corresponded to 26% of added C for all the derivatives. Despite this substantial priming effect, the C balance of the soil 144 days after the amendment was always positive.
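As an illustration of the natural 13C abundance approach mentioned above, the sketch below partitions soil-respired CO2 into residue-derived and native C with a two-pool isotope mixing equation and derives a priming estimate; the function names and all delta-13C and CO2-C values are hypothetical placeholders, not data from the thesis.

```python
# Minimal sketch of natural 13C abundance partitioning: separate
# residue-derived CO2 from native soil CO2 and estimate the priming effect.
# All numbers below are hypothetical placeholders.

def residue_derived_fraction(d13c_co2, d13c_soil, d13c_residue):
    """Two-pool isotopic mixing: fraction of respired CO2-C that
    originates from the added residue."""
    return (d13c_co2 - d13c_soil) / (d13c_residue - d13c_soil)

def priming_effect(co2_amended, co2_control, f_residue):
    """Extra native soil C mineralised because of the amendment
    (positive value = positive priming)."""
    native_c_amended = co2_amended * (1.0 - f_residue)
    return native_c_amended - co2_control

# Hypothetical example values (per-mil d13C, mg CO2-C kg-1 soil)
f = residue_derived_fraction(d13c_co2=-22.0, d13c_soil=-26.0, d13c_residue=-14.0)
pe = priming_effect(co2_amended=950.0, co2_control=600.0, f_residue=f)
print(f"residue-derived fraction: {f:.2f}, primed C: {pe:.0f} mg C kg-1")
```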
Abstract:
The study of tides and their interactions with the complex dynamics of the global ocean represents a crucial challenge in ocean modelling. This thesis aims to deepen this study from a dynamical point of view, analysing the tidal effects on the general circulation of the ocean. We perform several experiments with a mesoscale-permitting global ocean model forced by both atmospheric fields and the astronomical tidal potential, and we implement two parametrizations to include in the model tidal phenomena that are currently unresolved, with particular emphasis on the topographic wave drag for locally dissipating internal waves. An additional experiment using a mesoscale-resolving configuration is used to compare the simulated tides at different resolutions with observed data. We find that the accuracy of modelled tides strongly depends on the region and harmonic component of interest, even though the increased resolution improves the modelled topography and resolves more intense internal waves. We then focus on the impact of tides in the Atlantic Ocean and find that tides weaken the overturning circulation during the analysed period from 1981 to 2007, even though the interannual differences strongly change in both amplitude and phase. The zonally integrated momentum balance shows that tides change the water stratification at the zonal boundaries, modifying the pressure and therefore the geostrophic balance over the entire basin. Finally, we describe the overturning circulation in the Mediterranean Sea, computing the meridional and zonal streamfunctions in both the Eulerian and residual frameworks. The circulation is characterised by different cells, and their forcing processes are described with particular emphasis on the role of mesoscale processes and of a transient climatic event. We complete the description of the overturning circulation by giving evidence, for the first time, of the connection between the meridional and zonal cells.
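The meridional overturning streamfunctions discussed above are typically obtained by integrating the meridional velocity zonally and then vertically. The sketch below is a minimal, generic version of that calculation on a regular grid; the variable names, grid, and toy data are assumptions and do not reproduce the model diagnostics used in the thesis.

```python
import numpy as np

def meridional_streamfunction(v, dx, dz):
    """Eulerian meridional overturning streamfunction (in Sverdrup).

    v  : meridional velocity [m/s], shape (nz, ny, nx)
    dx : zonal grid spacing   [m],  shape (ny, nx)
    dz : layer thickness      [m],  shape (nz,)
    """
    zonal_transport = (v * dx[None, :, :]).sum(axis=2)   # (nz, ny), m^2/s
    layer_transport = zonal_transport * dz[:, None]      # (nz, ny), m^3/s
    psi = np.cumsum(layer_transport, axis=0)             # integrate from the surface down
    return psi / 1.0e6                                    # m^3/s -> Sv

# Hypothetical usage on a toy grid
nz, ny, nx = 30, 50, 60
v = np.random.randn(nz, ny, nx) * 0.01
dx = np.full((ny, nx), 1.0e5)
dz = np.full(nz, 100.0)
psi = meridional_streamfunction(v, dx, dz)
```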
Abstract:
Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise determination of the hypocentral parameters is the first step in discriminating whether a given seismic event is natural or not. If a specific event is considered suspicious by the majority of the States Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is expected to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high-quality seismological systems are thought to be capable of detecting and locating the very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques: the first, known as the double-difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by high relative accuracy, although the absolute location of the whole cluster remains uncertain. We eliminate this problem by introducing a priori information: the known location of a selected event. The second technique concerns the reliable estimation of back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we have used cross-correlation of digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between waveforms of a pair of events at the same station, at the global scale, and on the similarity between waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests of the reliability of our location techniques based on simulations, we have applied both methodologies to real seismic events. The DDJHD technique has been applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. Initially, the algorithm was applied to the differences among the original arrival times of the P phases, so the cross-correlation was not used. We found that the considerable geometrical spreading noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, taken as our reference) was substantially reduced by the application of our technique.
This is what we expected, since the methodology was applied to a sequence of events for which we can assume a real closeness among the hypocenters, which belong to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed or at least reduced. The introduction of the cross-correlation did not bring evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the use of cross-correlation did not substantially improve the precision of the manual pickings. Probably the pickings reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. As a further explanation for the limited benefit of the cross-correlation, it should be noted that the events in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to the cross-correlation, we performed signal interpolation in order to improve the time resolution. The algorithm thus developed was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in bad SNR conditions). Another notable feature of our procedure is that it does not require long processing times, so the user can check the results immediately. During a field survey, this feature makes possible a quasi-real-time check, allowing immediate optimization of the array geometry if so suggested by the results at an early stage.
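As an illustration of the waveform cross-correlation and interpolation steps described above, the sketch below estimates the relative delay between two digitized signals and refines it with a parabolic fit around the correlation peak; the function, sampling rate, and synthetic wavelet are hypothetical and stand in for the actual processing chain of the thesis.

```python
import numpy as np

def relative_delay(w1, w2, dt):
    """Estimate the time shift of w2 relative to w1 (seconds) by
    cross-correlation, refined with parabolic (three-point) interpolation
    of the correlation peak for sub-sample precision."""
    w1 = (w1 - w1.mean()) / w1.std()
    w2 = (w2 - w2.mean()) / w2.std()
    cc = np.correlate(w2, w1, mode="full")
    k = np.argmax(cc)
    lag = k - (len(w1) - 1)
    if 0 < k < len(cc) - 1:
        y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
        denom = y0 - 2.0 * y1 + y2
        if denom != 0.0:
            lag += 0.5 * (y0 - y2) / denom
    return lag * dt

# Hypothetical usage: the same wavelet recorded at two sensors, 0.35 s apart
dt = 0.01                                    # 100 samples per second
t = np.arange(0, 10, dt)
wavelet = np.exp(-((t - 4.0) ** 2) / 0.05) * np.sin(2 * np.pi * 5 * t)
shifted = np.roll(wavelet, 35)               # 35 samples = 0.35 s
print(relative_delay(wavelet, shifted, dt))  # ~0.35
```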
Abstract:
This doctoral work gains deeper insight into the dynamics of knowledge flows within and across clusters, unfolding their features, directions and strategic implications. Alliances, networks and personnel mobility are acknowledged as the three main channels of inter-firm knowledge flows, thus offering three heterogeneous measures to analyze the phenomenon. The interplay between the three channels and the richness of available research methods has allowed for the elaboration of three different papers and perspectives. The common empirical setting is the IT cluster in Bangalore, chosen for its distinctive features as a high-tech cluster and for its steady double-digit yearly growth around the service-based business model. The first paper deploys both a firm-level and a tie-level analysis, exploring the cases of four domestic companies and two MNCs active in the cluster, according to a cluster-based perspective. The distinction between business-domain knowledge and technical knowledge emerges from the qualitative evidence and is further confirmed by quantitative analyses at the tie level. At the firm level, the degree of specialization seems to influence the kind of knowledge shared, while at the tie level both the frequency of interaction and the governance mode prove to determine differences in the distribution of knowledge flows. The second paper zooms out and considers the inter-firm networks; focusing in particular on the role of the cluster boundary, internal and external networks are analyzed in terms of their size, long-term orientation and degree of exploration. The research method is purely qualitative and allows for the observation of the evolving strategic role of the internal network: from exploitation-based to exploration-based. Moreover, a causal pattern is emphasized, linking the evolution and features of the external network to the evolution and features of the internal network. The final paper addresses the softer and more micro-level side of knowledge flows: personnel mobility. A social capital perspective is developed here, which considers both the acquisition and the loss of employees as building inter-firm ties, thus enhancing the company's overall social capital. Negative binomial regression analyses at the dyad level test the significant impact of cluster affiliation (cluster firms vs non-cluster firms), industry affiliation (IT firms vs non-IT firms) and foreign affiliation (MNCs vs domestic firms) in shaping the uneven distribution of personnel mobility, and thus of knowledge flows, among companies.
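To illustrate the kind of dyad-level negative binomial regression described above, the sketch below fits a count model of personnel moves between firm pairs on three affiliation indicators using statsmodels; the data frame, column names, and generated data are entirely hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dyad-level data: one row per ordered pair of firms,
# 'moves' counts employees moving from firm i to firm j.
rng = np.random.default_rng(0)
n = 500
dyads = pd.DataFrame({
    "moves":        rng.poisson(1.2, n),    # placeholder mobility counts
    "same_cluster": rng.integers(0, 2, n),  # both firms in the Bangalore cluster
    "both_it":      rng.integers(0, 2, n),  # both firms in the IT industry
    "mnc_involved": rng.integers(0, 2, n),  # at least one MNC in the dyad
})

# Negative binomial regression of personnel mobility on the three affiliations
model = smf.negativebinomial(
    "moves ~ same_cluster + both_it + mnc_involved", data=dyads
).fit(disp=False)
print(model.summary())
```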
Abstract:
The research project presented in this dissertation is about text and memory. The title of the work is "Text and memory between Semiotics and Cognitive Science: an experimental setting about remembering a movie". The object of the research is the relationship between texts, or "textuality" - to use a more general semiotic term - and memory. The goal is to analyze the link between those semiotic artifacts that a culture defines as autonomous meaningful objects - namely texts - and the cognitive performance of memory that allows them to be remembered. An active dialogue between Semiotics and Cognitive Science is the theoretical paradigm in which this research is set; the main intent is to establish a productive alignment between the "theory of text" developed in Semiotics and the "theory of memory" outlined in Cognitive Science. In particular, the research is an attempt to study how human subjects remember and/or misremember a film, as a specific case study; in semiotics, films are "cinematographic texts". The research is based on the production of a corpus of data gained through the qualitative method of interviewing. After an initial screening of a full-length feature film, each participant in the experiment was interviewed twice, according to a pre-established set of questions: the first interview immediately after the screening, the follow-up interview three months later. The purpose of this design is to elicit two types of recall from the participants. In order to conduct a comparative inquiry, three films were used in the experimental setting. Each film was watched by thirteen subjects, each of whom was interviewed twice; the corpus of data therefore consists of seventy-eight interviews. The present dissertation presents the results of the investigation of these interviews. It is divided into six main parts. Chapter one presents a theoretical framework about the two main issues: memory and text. The issue of memory is introduced through research carried out in the fields of Cognitive Science and Neuroscience, and a possible relationship with a semiotic approach is developed at the same time. The theoretical debate about textuality, which characterizes the field of Semiotics, is examined in the same chapter. Chapter two deals with methodology, showing the process of definition of the whole method used for the production of the corpus of data. The interview is explored in detail: how it was designed, what results were expected, and what the main underlying hypotheses are. Chapter three begins the investigation of the answers given by the spectators. It examines the phenomenon of the outstanding details of the process of remembering, trying to define them in semiotic terms; it also investigates the most remembered scenes in the movies. Chapter four considers how the spectators deal with the whole narrative and, at the same time, what they think about the global meaning of the film. Chapter five is about affects, and tries to define the role of emotions in the processes of comprehension and remembering. Chapter six presents a study of how the spectators account for a single scene of the movie. The complete work offers a broad perspective on the semiotic issue of textuality, drawing on both semiotic and cognitive competence. At the same time it presents a new outlook on the issue of memory, opening several directions for research.
Abstract:
This thesis is about three major aspects of the identification of top quarks. First comes the understanding of their production mechanisms and decay channels and how to translate theoretical formulae into programs that can simulate such physical processes using Monte Carlo techniques. In particular, the author has been involved in the introduction of the POWHEG generator in the framework of the ATLAS experiment. POWHEG is now fully used as the benchmark program for the simulation of ttbar pair production and decay, along with MC@NLO and AcerMC: this is shown in chapter one. The second chapter illustrates the ATLAS detector and its sub-systems, such as the calorimeters and muon chambers. It is very important to evaluate their efficiency in order to fully understand what happens during the passage of radiation through the detector and to use this knowledge in the calculation of final quantities such as the ttbar production cross section. The last part of this thesis concerns the evaluation of this quantity using the so-called "golden channel" of ttbar decays, which yields one energetic charged lepton, four particle jets and a significant amount of missing transverse energy due to the neutrino. The most important systematic errors arising from the various parts of the calculation are studied in detail. Jet energy scale, trigger efficiency, Monte Carlo models, reconstruction algorithms and the luminosity measurement are examples of what can contribute to the uncertainty on the cross section.
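As a rough illustration of how such a counting-experiment cross section and its uncertainty are assembled, the sketch below combines event counts, signal efficiency, integrated luminosity, and a set of relative systematic uncertainties; every number and name is a hypothetical placeholder, not an ATLAS result.

```python
import math

def cross_section(n_obs, n_bkg, eff, lumi):
    """sigma = (N_obs - N_bkg) / (efficiency * integrated luminosity)."""
    return (n_obs - n_bkg) / (eff * lumi)

def relative_uncertainty(n_obs, n_bkg, rel_systs):
    """Poisson statistical term combined in quadrature with the relative
    systematic uncertainties (e.g. JES, trigger, MC model, luminosity)."""
    stat = math.sqrt(n_obs) / (n_obs - n_bkg)
    return math.sqrt(stat ** 2 + sum(s ** 2 for s in rel_systs))

# Hypothetical numbers: 1000 selected events, 420 expected background,
# 10% signal efficiency, 35 pb^-1 of data
sigma = cross_section(n_obs=1000, n_bkg=420, eff=0.10, lumi=35.0)   # pb
rel = relative_uncertainty(1000, 420, rel_systs=[0.06, 0.02, 0.05, 0.04])
print(f"sigma = {sigma:.0f} pb  +/- {100 * rel:.0f}%")
```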
Abstract:
This PhD thesis addresses the topic of large-scale interactions between climate and marine biogeochemistry. To this end, centennial simulations are performed under present and projected future climate conditions with a coupled ocean-atmosphere model containing a complex marine biogeochemistry model. The role of marine biogeochemistry in the climate system is investigated first. Phytoplankton absorption of solar radiation in the upper ocean enhances sea surface temperatures and upper-ocean stratification. The associated increase in ocean latent heat losses raises atmospheric temperatures and water vapor. Atmospheric circulation is modified at tropical and extratropical latitudes, with impacts on precipitation, incoming solar radiation, and ocean circulation which cause upper-ocean heat content to decrease at tropical latitudes and to increase at middle latitudes. Marine biogeochemistry is tightly related to physical climate variability, which may vary in response to internal natural dynamics or to external forcing such as anthropogenic carbon emissions. Wind changes associated with the North Atlantic Oscillation (NAO), the dominant mode of climate variability in the North Atlantic, affect ocean properties by means of momentum, heat, and freshwater fluxes. Changes in upper-ocean temperature and mixing affect the spatial structure and seasonality of North Atlantic phytoplankton through light and nutrient limitations. These changes affect the capability of the North Atlantic Ocean to absorb atmospheric CO2 and to fix it in sinking particulate organic matter. Low-frequency NAO phases determine a delayed response of ocean circulation, temperature and salinity, which in turn affects stratification and marine biogeochemistry. In 20th- and 21st-century simulations, natural wind fluctuations in the North Pacific, related to the two dominant modes of atmospheric variability, affect the spatial structure and the magnitude of the phytoplankton spring bloom through changes in upper-ocean temperature and mixing. The impacts of human-induced emissions in the 21st century are generally larger than natural climate fluctuations, with the phytoplankton spring bloom starting one month earlier than in the 20th century and with ~50% lower magnitude. This PhD thesis advances the knowledge of bio-physical interactions within the global climate, highlighting the intrinsic coupling between the physical climate and the biosphere, and providing a framework on which future studies of Earth System change can be built.
Abstract:
In this thesis we describe in detail the Monte Carlo simulation (LVDG4) built to interpret the experimental data collected by LVD and to measure the muon-induced neutron yield in iron and liquid scintillator. A full Monte Carlo simulation, based on the Geant4 (v 9.3) toolkit, has been developed and validation tests have been performed. We used LVDG4 to determine the active vetoing and shielding power of LVD. The idea was to evaluate the feasibility of hosting a dark matter detector in its innermost part, called the Core Facility (LVD-CF). The first conclusion is that LVD is a good moderator, but the iron supporting structure produces a large number of neutrons near the core. The second conclusion is that, if LVD is used as an active veto for muons, the neutron flux in the LVD-CF is reduced by a factor of 50, to the same order of magnitude as the neutron flux in the deepest laboratory in the world, Sudbury. Finally, the muon-induced neutron yield has been measured. In liquid scintillator we found $(3.2 \pm 0.2) \times 10^{-4}$ n/g/cm$^2$, in agreement with previous measurements performed at different depths and with the general trend predicted by theoretical calculations and Monte Carlo simulations. Moreover, we present the first measurement, to our knowledge, of the neutron yield in iron: $(1.9 \pm 0.1) \times 10^{-3}$ n/g/cm$^2$. This measurement provides an important check for the Monte Carlo modelling of neutron production in the heavy materials that are often used as shielding in low-background experiments.
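To make the yield definition explicit, the sketch below computes a muon-induced neutron yield as neutrons produced per muon and per unit of column density traversed; the counts, average track length, and density are hypothetical placeholders chosen only to give a value of the same order as the one quoted above.

```python
def neutron_yield(n_neutrons, n_muons, mean_track_cm, density_g_cm3):
    """Muon-induced neutron yield:
    Y = N_n / (N_mu * <track length> * density)  [n / muon / (g cm^-2)]."""
    return n_neutrons / (n_muons * mean_track_cm * density_g_cm3)

# Hypothetical numbers for a liquid-scintillator volume
y_ls = neutron_yield(n_neutrons=4.0e4,
                     n_muons=5.0e5,
                     mean_track_cm=300.0,    # average muon path in the scintillator
                     density_g_cm3=0.85)     # typical liquid-scintillator density
print(f"yield = {y_ls:.1e} n / muon / (g cm^-2)")
```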
Abstract:
The surprising discovery of the X(3872) resonance by the Belle experiment in 2003, and its subsequent confirmation by BaBar, CDF and D0, opened up a new chapter of QCD studies and puzzles. Since then, detailed experimental and theoretical studies have been performed in an attempt to determine and explain the properties of this state. At the end of 2009 the world's largest and highest-energy particle accelerator, the Large Hadron Collider (LHC), started its operation at the CERN laboratory in Geneva. One of the main experiments at the LHC is CMS (Compact Muon Solenoid), a general-purpose detector designed to address a wide range of physical phenomena, in particular the search for the Higgs boson, the only still-unconfirmed element of the Standard Model (SM) of particle interactions, and for new physics beyond the SM itself. Even though CMS was designed to study high-energy events, its high-resolution central tracker and excellent muon spectrometer make it an optimal tool to study the X(3872) state. This thesis presents the results of a series of studies of the X(3872) state performed with the CMS experiment. Already with the first year's worth of data, a clear peak for the X(3872) was identified, and the measurement of the cross-section ratio with respect to the Psi(2S) was performed. With the increased statistics collected during 2011 it has been possible to study the cross-section ratio between the X(3872) and the Psi(2S) in bins of transverse momentum and to separate their prompt and non-prompt components.
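As an illustration of the efficiency-corrected yield ratio underlying such a measurement, the sketch below forms R from fitted X(3872) and Psi(2S) yields and efficiencies in a single transverse-momentum bin; the numbers and the simple error propagation are hypothetical, not CMS results.

```python
import math

def yield_ratio(n_x, n_x_err, eff_x, n_psi, n_psi_err, eff_psi):
    """Efficiency-corrected ratio of X(3872) to Psi(2S) yields in one pT bin,
    with the statistical uncertainty propagated from the fitted yields."""
    r = (n_x / eff_x) / (n_psi / eff_psi)
    rel_err = math.hypot(n_x_err / n_x, n_psi_err / n_psi)
    return r, r * rel_err

# Hypothetical fitted yields and efficiencies in a single pT bin
r, err = yield_ratio(n_x=900, n_x_err=90, eff_x=0.12,
                     n_psi=12000, n_psi_err=300, eff_psi=0.15)
print(f"R = {r:.3f} +/- {err:.3f}")
```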
Abstract:
This thesis is concerned with the role played by software tools in the analysis and dissemination of linguistic corpora and with their contribution to a more widespread adoption of corpora in different fields. Chapter 1 contains an overview of some of the most relevant corpus analysis tools available today, presenting their most interesting features and some of their drawbacks. Chapter 2 begins with an explanation of the reasons why none of the available tools appears to satisfy the requirements of the user community and then continues with a technical overview of the current status of the new system developed as part of this work. This presentation is followed by highlights of the features that make the system appealing to users and corpus builders (i.e. scholars willing to make their corpora available to the public). The chapter concludes with an indication of future directions for the project and information on the current availability of the software. Chapter 3 describes the design of an experiment devised to evaluate the usability of the new system in comparison to another corpus tool. Usage of the tool was tested in the context of a documentation task performed on a real assignment during a translation class in a master's degree course. In chapter 4 the findings of the experiment are presented on two levels of analysis: first, a discussion of how participants interacted with and evaluated the two corpus tools in terms of interface and interaction design, usability and perceived ease of use; then, an analysis of how users interacted with the corpora to complete the task and what kinds of queries they submitted. Finally, some general conclusions are drawn and areas for future work are outlined.
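As a minimal illustration of the kind of query a corpus analysis tool answers, the sketch below produces a keyword-in-context (KWIC) concordance over a tokenized text; it is a toy stand-in for the much richer functionality of the systems discussed, not the software developed in the thesis.

```python
import re

def kwic(text, keyword, window=4):
    """Return keyword-in-context lines: `window` tokens of left and
    right context around every occurrence of `keyword`."""
    tokens = re.findall(r"\w+|[^\w\s]", text.lower())
    key = keyword.lower()
    lines = []
    for i, tok in enumerate(tokens):
        if tok == key:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left:>40} [{tok}] {right}")
    return lines

# Hypothetical usage on a tiny text
sample = ("Corpora are collections of texts. A corpus tool lets users "
          "query a corpus and inspect every concordance line.")
for line in kwic(sample, "corpus"):
    print(line)
```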
Abstract:
AMS-02 has been pursuing major scientific goals for a year and a half: a final configuration for dark matter searches has been achieved, allowing the study of the all-important antiparticle-to-particle ratios, which will probably provide the first corroborated dark matter signals. Even though primary cosmic-ray fluxes are subject to many sources of uncertainty, some statements can be made, and have been written down, about dark matter properties: DM should be a heavy Majorana fermion or a spin-0 or spin-1 boson, with a mass from about 1 TeV to 10 TeV - opening a new TeV-scale search era - capable of producing enhancements of the antiparticle fluxes at high energies, both for positrons and for antiprotons. All the observations, direct and indirect, point to these new paradigms or can be traced back to them quite easily. These enhancements fall squarely within the research window of AMS-02, allowing the experiment to test every currently credible theory. An investigation of the dark boson associated with the Sommerfeld effect will also be possible, through substructures in the antiparticle-to-particle ratios. The first major AMS-02 measurement is the positron fraction: an official paper will be submitted in a few months, in which the correct behaviour of the apparatus will be reviewed and the full positron fraction will be analyzed up to 200 GeV. In this respect, one of the objectives of this work is to test the capability and versatility of AMS-02 in these dark matter searches, thanks to its orbital temporal (and geomagnetic) stability. This goal has been accomplished: the experiment is very stable in time, so that the temporal error associated with the positron fraction measurement is compatible with zero, offering an exceptional opportunity to measure CR antiparticle-to-particle ratios.
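To make the estimator explicit, the sketch below computes the positron fraction in a single energy bin together with its binomial statistical uncertainty; the event counts are hypothetical placeholders, not AMS-02 data.

```python
import math

def positron_fraction(n_pos, n_ele):
    """Positron fraction f = N(e+) / (N(e+) + N(e-)) in one energy bin,
    with its binomial statistical uncertainty."""
    n_tot = n_pos + n_ele
    f = n_pos / n_tot
    err = math.sqrt(f * (1.0 - f) / n_tot)
    return f, err

# Hypothetical counts in a single energy bin
f, err = positron_fraction(n_pos=1200, n_ele=8800)
print(f"positron fraction = {f:.3f} +/- {err:.3f}")
```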
Abstract:
The Zero Degree Calorimeter (ZDC) of the ATLAS experiment at CERN is placed in the TAN of the LHC collider, covering the pseudorapidity region above 8.3. It is composed of two calorimeters, each longitudinally segmented into four modules, located 140 m from the interaction point (IP), exactly on the beam axis. The ZDC can detect neutral particles produced in pp collisions and is a tool for diffractive physics. Here we present results on the forward photon energy distribution obtained using p-p collision data at $\sqrt{s} = 7$ TeV. First, pi0 reconstruction is used to calibrate the detector with photons; then we show results on the forward photon energy distribution in p-p collisions, together with the same distribution obtained using MC generators. Finally, a comparison between data and MC is shown.
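As an illustration of the pi0-based calibration mentioned above, the sketch below evaluates the two-photon invariant mass m = sqrt(2 E1 E2 (1 - cos(theta))); the photon energies and opening angle are hypothetical values chosen to land near the pi0 mass.

```python
import math

def diphoton_mass(e1, e2, opening_angle_rad):
    """Invariant mass of two (massless) photons:
    m = sqrt(2 * E1 * E2 * (1 - cos(theta)))."""
    return math.sqrt(2.0 * e1 * e2 * (1.0 - math.cos(opening_angle_rad)))

# Hypothetical photon pair from a forward pi0 (energies in GeV)
m = diphoton_mass(e1=120.0, e2=90.0, opening_angle_rad=0.0013)
print(f"m(gamma gamma) = {m * 1000:.1f} MeV")  # close to the pi0 mass (~135 MeV)
```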