11 results for approximate calculation of sums

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

Galaxy clusters occupy a special position in the cosmic hierarchy, as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters form by accretion of matter and by merging between smaller units. During merger events, shocks are driven by the gravity of the dark matter in the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), a finding of great importance as it calls for a "revision" of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster centre) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources is one of the most challenging goals of the theory of the ICM.

Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of magnetic fields and into the acceleration of high-energy particles via the shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) are best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high-energy particles in the ICM. The physics involved in this scenario is very complex and model details are difficult to test; however, the model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should also produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint in the statistical properties of Radio Halos (and of non-thermal phenomena in general) which, however, have not yet been addressed by present models.

The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we shall address the following main questions:

• Is it possible to model "self-consistently" the evolution of these sources together with that of the parent clusters?
• How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency?
• How many Radio Halos are expected to form in the Universe? At which redshift is the bulk of these sources expected?
• Is it possible to reproduce, in the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between the thermal and non-thermal properties of galaxy clusters?
• Is it possible to constrain the magnetic field intensity and profile in galaxy clusters and the energetics of turbulence in the ICM from the comparison between model expectations and observations?

Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For this reason we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations we have carried out in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of the fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission.

The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena as expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8. In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters and assume that, during a merger, a fraction of the PdV work done by the infalling subclusters in passing through the most massive one is injected in the form of magnetosonic waves. The stochastic acceleration of relativistic electrons by these waves and the properties of the ensuing synchrotron (Radio Halos) and inverse Compton (IC, hard X-ray) emission of merging clusters are then computed under the assumption of a constant rms magnetic field strength averaged over the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the more massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one.

In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass. The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, LX) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter also allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift. The most relevant finding is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power ∼ 10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ∼ 100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequencies, and it allows the design of future radio surveys.

Based on the results of Chapt. 6, in Chapt. 7 we present a work in progress on a "revision" of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ≈ 0.05-0.2) with our ongoing GMRT Radio Halos Pointed Observations of 50 X-ray luminous galaxy clusters (at z ≈ 0.2-0.4) and discuss the possibility of testing our model expectations with the number counts of Radio Halos at z ≈ 0.05-0.4.

The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an "average" size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed within the PS formalism used to describe the cluster formation process, while a more detailed analysis of the physics of cluster mergers and of the injection of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is well beyond the aim of this PhD thesis. On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (RH) of Radio Halos and their radio power, and between RH and the cluster mass within the Radio Halo region, MH. In particular, this last "geometrical" MH-RH correlation allows us to "observationally" overcome the limitation of the "average" size of Radio Halos. Thus in this Chapter, by making use of this "geometrical" correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio-emitting region. This is a new and powerful tool of investigation, and we show that all the observed correlations (PR - RH, PR - MH, PR - T, PR - LX, ...) become well understood in the context of the re-acceleration model. In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately means that the fraction of the cluster volume which is radio emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.
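To make the cut-off argument concrete, here is a minimal numerical sketch (our own illustration in Python, not the thesis' calculation; the acceleration rate chi and the fiducial values are assumptions): the maximum electron Lorentz factor follows from balancing a systematic acceleration term against synchrotron and inverse Compton losses, and sets the synchrotron break frequency.

    import numpy as np

    SIGMA_T = 6.652e-25   # Thomson cross section [cm^2]
    MEC2    = 8.187e-7    # electron rest energy [erg]
    C       = 2.998e10    # speed of light [cm/s]

    def gamma_break(chi, B_muG, z):
        """Balance dg/dt = chi*g (turbulent acceleration) against
        dg/dt = -b*g^2 (synchrotron + IC losses); IC off the CMB enters
        through the equivalent field 3.25*(1+z)^2 microgauss."""
        B_eff2 = B_muG**2 + (3.25 * (1.0 + z)**2)**2       # [muG^2]
        U = B_eff2 * 1e-12 / (8.0 * np.pi)                 # [erg/cm^3]
        b = (4.0 / 3.0) * SIGMA_T * C * U / MEC2           # [1/s]
        return chi / b

    def nu_break_GHz(chi, B_muG, z):
        """Synchrotron break frequency, ~4.2e-9 * g^2 * B[muG] GHz."""
        return 4.2e-9 * gamma_break(chi, B_muG, z)**2 * B_muG

    chi = 1.0 / (1e8 * 3.156e7)   # assumed acceleration rate, ~1/(10^8 yr)
    print(nu_break_GHz(chi, 1.0, 0.2))   # ~0.5 GHz for B ~ 1 muG at z = 0.2

A slower acceleration rate or a higher redshift pushes the break below the observing band, which is the sense in which low-frequency surveys should catch many more (less efficient) Radio Halos.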

Relevance: 100.00%

Abstract:

Society's increasing aversion to technological risks requires the development of inherently safer and environmentally friendlier processes, while still assuring the economic competitiveness of industrial activities. The different forms of impact (e.g. environmental, economic and societal) are frequently characterized by conflicting reduction strategies and must be taken into account holistically in order to identify the optimal solutions in process design. Though the literature reports an extensive discussion of strategies and specific principles, quantitative assessment tools are required to identify the marginal improvements of alternative design options, to allow the trade-off among contradictory aspects and to prevent the "risk shift". In the present work a set of integrated quantitative tools for design assessment (i.e. a design support system) was developed. The tools are specifically dedicated to the implementation of sustainability and inherent safety in process and plant design activities, with respect to chemical and industrial processes in which substances dangerous for humans and the environment are used or stored. The tools are mainly devoted to application in the "conceptual" and "basic design" stages, when the project is still open to changes (due to the large number of degrees of freedom) which may include strategies to improve sustainability and inherent safety. The set of developed tools covers different phases of the design activities throughout the lifecycle of a project (inventories, process flow diagrams, preliminary plant layout plans). The development of such tools makes a substantial contribution towards filling the present gap in the availability of sound supports for implementing safety and sustainability in the early phases of process design.

The proposed decision support system is based on a set of leading key performance indicators (KPIs), which ensure the assessment of the economic, societal and environmental impacts of a process (i.e. its sustainability profile). The KPIs are based on impact models (including complex ones), but are quick and easy to apply in practice. Their full evaluation is possible even starting from the limited data available during early process design. Innovative reference criteria were developed to compare and aggregate the KPIs on the basis of the actual site-specific impact burden and the sustainability policy. Particular attention was devoted to the development of reliable criteria and tools for the assessment of inherent safety in the different stages of the project lifecycle. The assessment follows an innovative approach to the analysis of inherent safety, based both on the calculation of the expected consequences of potential accidents and on the evaluation of the hazards related to equipment. The methodology overcomes several problems present in previously proposed methods for quantitative inherent safety assessment (use of arbitrary indexes, subjective judgement, built-in assumptions, etc.). A specific procedure was defined for the assessment of the hazards related to the formation of undesired substances in chemical systems undergoing "out of control" conditions. In the assessment of layout plans, "ad hoc" tools were developed to account for the hazard of domino escalation and for safety economics.

The effectiveness and value of the tools were demonstrated by application to a large number of case studies concerning different kinds of design activities (choice of materials; design of the process, of the plant and of the layout) and different types of processes/plants (chemical industry, storage facilities, waste disposal). An experimental survey (analysis of the thermal stability of the isomers of nitrobenzaldehyde) provided the input data necessary to demonstrate the method for the inherent safety assessment of materials.
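As a toy illustration of the KPI aggregation idea (a minimal sketch; the indicator names, reference burdens and weights are our assumptions, not the thesis' actual criteria), each raw impact is normalised against a site-specific reference and the normalised indicators are combined according to the sustainability policy:

    def normalise(value, reference):
        """Scale a raw impact against a site-specific reference burden."""
        return value / reference

    def aggregate(kpis, weights):
        """Policy-weighted sum of the normalised KPIs."""
        return sum(weights[k] * v for k, v in kpis.items())

    kpis = {
        "economic":      normalise(1.2e6, 2.0e6),    # annualised cost [EUR/yr]
        "societal":      normalise(3.0e-6, 1.0e-5),  # individual risk [1/yr]
        "environmental": normalise(450.0, 1000.0),   # CO2-eq emissions [t/yr]
    }
    weights = {"economic": 0.4, "societal": 0.3, "environmental": 0.3}
    print(aggregate(kpis, weights))   # lower is better under this convention

Comparing such a score across alternative design options is what allows the trade-off between conflicting impacts without resorting to an arbitrary index.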

Relevance: 100.00%

Abstract:

Nowadays, computing is migrating from traditional high-performance and distributed computing to pervasive and utility computing based on heterogeneous networks and clients. The current trend suggests that future IT services will rely on distributed resources and on the fast communication of heterogeneous contents. The success of this new range of services is directly linked to the effectiveness of the infrastructure in delivering them. The communication infrastructure will be an aggregation of different technologies, even though the current trend suggests the emergence of a single IP-based transport service.

Optical networking is a key technology for answering the increasing requests for dynamic bandwidth allocation and for configuring multiple topologies over the same physical-layer infrastructure. However, optical networks today are still "far" from being directly configurable to offer network services and need to be enriched with more "user-oriented" functionalities. Current Control Plane architectures only facilitate efficient end-to-end connectivity provisioning and certainly cannot meet future network service requirements, e.g. the coordinated control of resources. The overall objective of this work is to provide the network with improved usability and accessibility of the services offered by the optical layer. More precisely, the definition of a service-oriented architecture is the enabling technology that allows user applications to benefit from advanced services over an underlying dynamic optical layer. The definition of a service-oriented networking architecture based on advanced optical network technologies facilitates access by users and applications to abstracted levels of information regarding the offered advanced network services. This thesis faces the problem of defining such a Service Oriented Architecture and its relevant building blocks, protocols and languages. In particular, this work focuses on the use of the SIP protocol as an inter-layer signalling protocol, which defines the Session Plane in conjunction with the Network Resource Description language.

On the other hand, an advanced optical network must accommodate high data bandwidth with different granularities. Currently, two main technologies are emerging to promote the development of the future optical transport network: Optical Burst Switching and Optical Packet Switching. The two technologies promise to provide all-optical burst or packet switching, respectively, instead of the current circuit switching. However, the electronic domain is still present in the scheduling, forwarding and routing decisions. Because of the high optical transmission rates, the burst or packet scheduler faces a difficult challenge; consequently, a high-performance, timing-focused design of both the memory and the forwarding logic is needed. This open issue is faced in this thesis by proposing a highly efficient implementation of a burst and packet scheduler. The main novelty of the proposed implementation is that the scheduling problem is turned into the simple calculation of a min/max function, whose complexity is almost independent of the traffic conditions.
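As a rough sketch of how scheduling can collapse into a min/max evaluation (our reading of this claim; the thesis' actual hardware design is not reproduced here), a horizon-based burst scheduler without void filling only needs the earliest-free output channel:

    def schedule(arrival_time, horizons):
        """horizons[i]: time at which output channel i becomes free.
        Pick the channel that frees up earliest (a min over the channels)
        and accept the burst only if it is free by the arrival time."""
        ch = min(range(len(horizons)), key=lambda i: horizons[i])
        return ch if horizons[ch] <= arrival_time else None   # None = drop

    horizons = [4.0, 1.5, 3.2]        # per-channel availability times [us]
    print(schedule(2.0, horizons))    # -> 1: channel 1 is free at t = 1.5

The cost of the min does not depend on how many bursts were scheduled before, which is the sense in which the complexity is almost independent of the traffic conditions.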

Relevance: 100.00%

Abstract:

The object of the present study is the process of gas transport in nano-sized materials, i.e. systems having structural elements of the order of nanometers. The aim of this work is to advance the understanding of the gas transport mechanism in such materials, for which traditional models are often not suitable, by providing a correct interpretation of the relationship between diffusive phenomena and structural features. This result would allow the development of new materials with permeation properties tailored to the specific application, especially in packaging systems. The methods used to achieve this goal combine a detailed experimental characterization with different simulation techniques.

The experimental campaign concerned the determination of oxygen permeability and diffusivity in different sets of organic-inorganic hybrid coatings prepared via the sol-gel technique. The polymeric samples coated with these hybrid layers showed a remarkable enhancement of the barrier properties, which is explained by the strong interconnection at the nano-scale between the organic moiety and the silica domains. An analogous characterization was performed on microfibrillated cellulose films, which present a remarkable barrier effect toward oxygen when dry, while in the presence of water the performance drops significantly. The very low value of water diffusivity at low activities is also an interesting characteristic, related to the structural properties of these films.

Two different simulation approaches were then considered: the diffusion of oxygen through polymer-layered silicates was modeled on a continuum scale with a CFD software, while the properties of n-alkanethiolate self-assembled monolayers on gold were analyzed from a molecular point of view by means of a molecular dynamics algorithm. Modeling transport properties in layered nanocomposites, resulting from the ordered dispersion of impermeable flakes in a 2-D matrix, allowed the enhancement of the barrier effect to be calculated as a function of the platelets' structural parameters, leading to the derivation of a new expression. On this basis, randomly distributed systems were simulated and the results were analyzed to evaluate the different contributions to the overall effect. The study of more realistic three-dimensional geometries revealed a perfect correspondence with the 2-D approximation. A completely different approach was applied to simulate the effect of temperature on oxygen transport through self-assembled monolayers: the structural information obtained from equilibrium MD simulations showed that raising the temperature makes the monolayer less ordered and consequently less crystalline. This disorder produces a decrease in the barrier free energy and lowers the overall resistance to oxygen diffusion, making the monolayer more permeable to small molecules.
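For context on the continuum calculation, the classical Nielsen tortuosity estimate below is the usual baseline for the barrier improvement produced by aligned impermeable platelets (shown only as a reference point; the new expression derived in the thesis is not reproduced here):

    def nielsen_relative_permeability(phi, alpha):
        """P/P0 for a matrix with volume fraction phi of impermeable
        platelets of aspect ratio alpha (width/thickness), aligned
        perpendicular to the diffusive flux."""
        return (1.0 - phi) / (1.0 + 0.5 * alpha * phi)

    print(nielsen_relative_permeability(0.05, 100.0))  # ~0.27, i.e. ~4x barrier

The thesis' simulations go beyond this ideal aligned case by treating randomly distributed platelets and fully 3-D geometries.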

Relevance: 100.00%

Abstract:

In this thesis we present our work on some generalisations of ideas, techniques and physical interpretations typical of integrable models to one of the most outstanding recent advances in theoretical physics: the AdS/CFT correspondences. We have undertaken the problem of testing this conjectured duality from various points of view, but with a clear starting point, integrability, and with a clear and ambitious task in mind: to study the finite-size effects in the energy spectrum of certain string solutions on one side, and in the anomalous dimensions of the gauge theory on the other. Of course, the final goal would be the exact comparison between these two faces of the gauge/string duality.

In a few words, the original part of this work consists in the application of well-known integrability technologies, in large part borrowed from the study of relativistic (1+1)-dimensional integrable quantum field theories, to the highly non-relativistic and much more complicated case of the theories involved in the recently conjectured AdS5/CFT4 and AdS4/CFT3 correspondences. In detail, exploiting the spin-chain nature of the dilatation operator of N = 4 Super Yang-Mills theory, we concentrated our attention on one of the most important sectors, the SL(2) sector, which is also very interesting for the understanding of QCD, by formulating a new type of nonlinear integral equation (NLIE) based on a previously conjectured asymptotic Bethe Ansatz. The solutions of this Bethe Ansatz are characterised by the length L of the corresponding spin chain and by the number s of its excitations. A NLIE allows one, at least in principle, to perform analytical and numerical calculations for arbitrary values of these parameters. The results have been rather exciting. In the important regime of high Lorentz spin, the NLIE clarifies how it reduces to a linear integral equation which governs the subleading order in s, o(s^0). This also holds in the regime L → ∞ with L/ln s finite (the case of long operators). This region of parameters has been particularly investigated in the literature, especially because of an intriguing limit onto the O(6) sigma model defined on the string side.

One of the most powerful methods to keep the finite-size spectrum of an integrable relativistic theory under control is the so-called thermodynamic Bethe Ansatz (TBA). We proposed a highly non-trivial generalisation of this technique to the non-relativistic case of AdS5/CFT4 and made the first steps towards determining its full spectrum - of energies on the AdS side, of anomalous dimensions on the CFT one - at any value of the coupling constant and of the size. At leading order in the size parameter, the calculation of the finite-size corrections is much simpler and does not require the TBA. It consists in deriving, for a non-relativistic case, a method first invented by Lüscher to compute the finite-size effects on the mass spectrum of relativistic theories. We have thus formulated a new version of this approach, adapted to the case of the recently found classical string solutions on AdS4 × CP3, within the new conjecture of an AdS4/CFT3 correspondence. Our results in part confirm the string and algebraic-curve calculations, and in part are completely new; the latter may be better understood through the rapidly evolving developments of this extremely exciting research field.
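For orientation, the high-spin regime discussed above corresponds to the standard large-s expansion of the anomalous dimension of twist operators (a textbook form; its identification with the NLIE's subleading linear equation is our paraphrase of the text):

    \gamma(g, s) = f(g)\,\ln s + B_L(g) + o(s^0), \qquad s \to \infty,

where f(g) is the universal cusp anomalous dimension and the constant term B_L(g) collects the twist-dependent subleading contributions governed by the linear integral equation mentioned in the text.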

Relevance: 100.00%

Abstract:

This thesis is about three major aspects of the identification of top quarks. First comes the understanding of their production mechanism, their decay channels and how to translate theoretical formulae into programs that can simulate such physical processes using Monte Carlo techniques. In particular, the author has been involved in the introduction of the POWHEG generator into the framework of the ATLAS experiment. POWHEG is now fully used as the benchmark program for the simulation of ttbar pair production and decay, along with MC@NLO and AcerMC: this is shown in chapter one. The second chapter illustrates the ATLAS detector and its sub-units, such as the calorimeters and muon chambers. It is very important to evaluate their efficiency in order to fully understand what happens during the passage of radiation through the detector, and to use this knowledge in the calculation of final quantities such as the ttbar production cross section. The last part of this thesis concerns the evaluation of this quantity, deploying the so-called "golden channel" of ttbar decays, which yields one energetic charged lepton, four particle jets and a significant amount of missing transverse energy due to the neutrino. The most important systematic errors arising from the various parts of the calculation are studied in detail. The jet energy scale, trigger efficiency, Monte Carlo models, reconstruction algorithms and the luminosity measurement are examples of what can contribute to the uncertainty on the cross section.
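As a schematic of this final step (a hedged sketch with placeholder numbers, not ATLAS data), the cross section follows from a simple counting-experiment formula, which also shows where the listed systematics enter:

    def xsec_pb(n_obs, n_bkg, efficiency, lumi_pb):
        """sigma = (N_obs - N_bkg) / (eps * L_int), with eps folding in
        acceptance, trigger and reconstruction efficiencies and L_int
        the integrated luminosity in pb^-1."""
        return (n_obs - n_bkg) / (efficiency * lumi_pb)

    print(xsec_pb(n_obs=1200, n_bkg=400, efficiency=0.05, lumi_pb=100.0))
    # -> 160.0 pb; shifts in the efficiency (jet energy scale, trigger,
    # MC model, reconstruction) and in lumi_pb propagate directly.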

Relevance: 100.00%

Abstract:

In order to handle natural disasters, emergency areas are often identified across the territory, close to populated centres. In these areas, rescue services are located, which respond with resources and materials for the relief of the population. A method for the automatic positioning of these centres in case of a flood or an earthquake is presented. The positioning procedure consists of two distinct parts, developed by the research group of Prof Michael G. H. Bell of Imperial College, London, and refined and applied to real cases at the University of Bologna under the coordination of Prof Ezio Todini. Certain requirements need to be observed, such as the maximum number of rescue points as well as the number of people involved. Initially, the candidate points are chosen from those proposed by the local civil protection services. We then calculate all possible routes from each candidate rescue point to all other points, generally using the concept of the "hyperpath", namely a set of paths each of which may be optimal. The attributes of the road network are of fundamental importance, both for the calculation of the ideal distance and for possible delays due to the event, measured in travel-time units. In a second phase, the distances are used to decide the optimum rescue point positions using heuristics. This second part works by "elimination", as sketched below. In the beginning, all points are considered rescue centres. During every iteration we delete one point and calculate the impact this creates. In each case, we delete the point that creates the least impact, until we reach the number of rescue centres we wish to keep.
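The sketch below is our minimal rendering of the elimination heuristic in Python (illustrative only: the real method evaluates hyperpath travel times, while here the impact of removing a centre is simply the increase in total demand-weighted distance):

    def eliminate(candidates, dist, demand, k):
        """Drop, one at a time, the candidate whose removal increases the
        total demand-weighted distance the least, until k centres remain."""
        centres = set(candidates)

        def total_cost(kept):
            return sum(w * min(dist[p][c] for c in kept)
                       for p, w in demand.items())

        while len(centres) > k:
            victim = min(centres, key=lambda c: total_cost(centres - {c}))
            centres.remove(victim)
        return centres

    dist = {"A": {"r1": 2, "r2": 5}, "B": {"r1": 6, "r2": 1}}
    print(eliminate(["r1", "r2"], dist, demand={"A": 100, "B": 80}, k=1))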

Relevance: 100.00%

Abstract:

We have developed a method for locating the sources of volcanic tremor and applied it to a dataset recorded on Stromboli volcano before and after the onset of the February 27th, 2007 effusive eruption. Volcanic tremor has attracted considerable attention from seismologists because of its potential value as a tool for forecasting eruptions and for better understanding the physical processes that occur inside active volcanoes. Commonly used methods to locate volcanic tremor sources are: 1) array techniques, 2) semblance-based methods, 3) calculation of the wavefield amplitude. We have chosen the third approach, using a quantitative modeling of the seismic wavefield. For this purpose, we have calculated the Green's functions (GF) in the frequency domain with the Finite Element Method (FEM). We have used this method because it is well suited to solving elliptic problems, such as elastodynamics in the Fourier domain. The volcanic tremor source is located by determining the source function over a regular grid of points; the best-fit point is chosen as the tremor source location. The source inversion is performed in the frequency domain, using only the wavefield amplitudes. We illustrate the method and its validation on a synthetic dataset, and we show some preliminary results on the Stromboli dataset, which evidence temporal variations of the volcanic tremor sources.
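A minimal sketch of the amplitude-based grid search (our illustration; the thesis computes the Green's functions with FEM, whereas here they are taken as given arrays):

    import numpy as np

    def locate(observed_amp, gf_amp):
        """observed_amp: (n_stations,) tremor amplitudes at one frequency.
        gf_amp: (n_grid, n_stations) Green's-function amplitudes per node.
        For each grid node, fit a scalar source strength by least squares
        and keep the node with the smallest residual."""
        best, best_misfit = None, np.inf
        for i, g in enumerate(gf_amp):
            s = np.dot(g, observed_amp) / np.dot(g, g)   # source strength
            misfit = np.linalg.norm(observed_amp - s * g)
            if misfit < best_misfit:
                best, best_misfit = i, misfit
        return best, best_misfit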

Relevance: 100.00%

Abstract:

Theoretical models are developed for the continuous-wave and pulsed laser incision and cut of thin single and multi-layer films. A one-dimensional steady-state model establishes the theoretical foundations of the problem by combining a power-balance integral with heat flow in the direction of laser motion. In this approach, classical modelling methods for laser processing are extended by introducing multi-layer optical absorption and thermal properties. The calculation domain is consequently divided in correspondence with the progressive removal of individual layers. A second, time-domain numerical model for the short-pulse laser ablation of metals accounts for changes in optical and thermal properties during a single laser pulse. With sufficient fluence, the target surface is heated towards its critical temperature and homogeneous boiling or "phase explosion" takes place. Improvements are seen over previous works with the more accurate calculation of optical absorption and shielding of the incident beam by the ablation products. A third, general time-domain numerical laser processing model combines ablation depth and energy absorption data from the short-pulse model with two-dimensional heat flow in an arbitrary multi-layer structure. Layer removal is the result of both progressive short-pulse ablation and classical vaporisation due to long-term heating of the sample. At low velocity, pulsed laser exposure of multi-layer films comprising aluminium-plastic and aluminium-paper are found to be characterised by short-pulse ablation of the metallic layer and vaporisation or degradation of the others due to thermal conduction from the former. At high velocity, all layers of the two films are ultimately removed by vaporisation or degradation as the average beam power is increased to achieve a complete cut. The transition velocity between the two characteristic removal types is shown to be a function of the pulse repetition rate. An experimental investigation validates the simulation results and provides new laser processing data for some typical packaging materials.
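To fix ideas on the steady-state power balance (a hedged sketch: the layer properties below are illustrative, not the thesis' data), the absorbed power must heat and then vaporise the material swept through the kerf per unit time:

    def required_power(v, kerf, layers, T0=300.0):
        """v: cutting speed [m/s]; kerf: cut width [m]; each layer is a
        dict with thickness t [m], rho [kg/m^3], c [J/(kg K)], removal
        temperature Tv [K], latent heat Lv [J/kg], absorptivity A."""
        P = 0.0
        for L in layers:
            mass_rate = L["rho"] * v * kerf * L["t"]     # removed mass [kg/s]
            q = L["c"] * (L["Tv"] - T0) + L["Lv"]        # energy per kg [J/kg]
            P += mass_rate * q / L["A"]                  # incident power share
        return P

    aluminium = dict(t=9e-6, rho=2700.0, c=900.0, Tv=2740.0, Lv=1.08e7, A=0.1)
    plastic   = dict(t=20e-6, rho=920.0, c=1900.0, Tv=700.0, Lv=1.0e6, A=0.9)
    print(required_power(v=1.0, kerf=50e-6, layers=[aluminium, plastic]))  # [W]

This neglects lateral conduction losses, which the full model accounts for through the heat-flow term in the direction of laser motion.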

Relevance: 100.00%

Abstract:

The growing international concern for human exposure to the magnetic fields generated by electric power lines has unavoidably led to the imposition of legal limits. Respecting these limits implies being able to calculate the generated magnetic field easily and accurately, also in complex configurations; the twisting of phase conductors is such a case. The consolidated exact and approximate theory for a single-circuit twisted three-phase power cable line is reported, along with the proposal of an innovative simplified formula obtained by means of a heuristic procedure. This formula, although dramatically simpler, is proven to be a good approximation of the analytical formula and, at the same time, much more accurate than the approximate formula found in the literature. The double-circuit twisted three-phase power cable line case has been studied following different approaches of increasing complexity and accuracy, and in this framework the effectiveness of the above-mentioned innovative formula is also examined. The experimental verification of the correctness of the twisted double-circuit theoretical analysis has permitted its extension to multiple-circuit twisted three-phase power cable lines. In addition, appropriate 2D and, in particular, 3D numerical codes have been created for simulating real existing overhead power lines and calculating the magnetic field in their vicinity. Finally, an innovative "smart" measurement and evaluation system for the magnetic field is proposed, described and validated; it deals with the experimentally based evaluation of the total magnetic field B generated by multiple sources in complex three-dimensional arrangements, carried out on the basis of the measurement of the three Cartesian field components and their correlation with the field currents via multilinear regression techniques. The ultimate goal is to verify that the magnetic induction intensity is within the prescribed limits.
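For reference, the untwisted baseline against which the twisted-cable formulas are compared can be evaluated with the standard 2-D infinite-conductor approximation (a minimal sketch; the geometry and current below are illustrative):

    import numpy as np

    MU0 = 4.0e-7 * np.pi   # vacuum permeability [H/m]

    def B_rms(point, conductors, I_rms):
        """RMS flux density at `point` (x, y) [m] from infinitely long
        conductors given as (x, y, phase_deg) carrying balanced currents.
        Each conductor contributes B = mu0*I/(2*pi*r), azimuthally."""
        Bx = By = 0j
        for (xc, yc, ph) in conductors:
            I = I_rms * np.exp(1j * np.deg2rad(ph))   # rms current phasor
            dx, dy = point[0] - xc, point[1] - yc
            r2 = dx * dx + dy * dy
            Bx += MU0 * I * (-dy) / (2.0 * np.pi * r2)
            By += MU0 * I * ( dx) / (2.0 * np.pi * r2)
        return np.sqrt(abs(Bx)**2 + abs(By)**2)

    trefoil = [(0.0, 0.1, 0.0), (-0.0866, -0.05, 120.0), (0.0866, -0.05, 240.0)]
    print(B_rms((5.0, 0.0), trefoil, 1000.0))   # [T]; note three-phase cancellation

Twisting adds a longitudinal rotation of the conductor positions, which is what the exact and simplified formulas of the thesis capture.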

Relevance: 100.00%

Abstract:

Geochemical mapping is a valuable tool for the monitoring and management of the territory. It can be used not only in the identification of mineral resources and in geological, agricultural and forestry studies, but also in the monitoring of natural resources, providing solutions to environmental and economic problems. Stream sediments are widely used in the sampling campaigns carried out by governments and research groups worldwide because of their broad representativeness of rocks and soils, the ease of sampling, and the possibility of conducting very detailed surveys. In this context, the environmental role of stream sediments provides a good basis for the implementation of environmental management measures. In fact, the composition of river sediments is an important factor in understanding the complex dynamics that develop within catchment basins, and they therefore represent a critical environmental compartment: they can persistently incorporate pollutants after a contamination process and release them into the biosphere if the environmental conditions change. It is essential to determine whether the concentrations of certain elements, in particular heavy metals, are the result of the natural erosion of rocks containing high concentrations of specific elements, or are generated as residues of human activities in a given study area. This PhD thesis aims to extract the widest possible spectrum of information from an extensive database on the stream sediments of the Romagna rivers. The study involved low- and high-order streams in the mountain and hilly areas, but also the sediments of the floodplain, where intensive agriculture is practised. The geochemical signals recorded by the stream sediments are interpreted in order to reconstruct the natural variability related to the bedrock and soil contributions, the effects of river dynamics and the anomalous sites; the calculation of background values then makes it possible to evaluate the level of degradation of the sediments and to predict the environmental risk.
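As one concrete way to separate natural variability from anomalies (a hedged sketch: the median + 2·MAD rule is a common choice in exploration geochemistry, not necessarily the thesis' exact procedure):

    import numpy as np

    def background_threshold(conc):
        """conc: concentrations of one element in stream sediments [mg/kg].
        Robust background: median + 2 * MAD, with the MAD scaled to a
        sigma equivalent; values above the threshold flag anomalies."""
        med = np.median(conc)
        mad = 1.4826 * np.median(np.abs(conc - med))
        return med + 2.0 * mad

    ni = np.array([22, 25, 24, 30, 27, 26, 95, 23, 28, 24])  # toy data [mg/kg]
    print(background_threshold(ni))   # ~31 mg/kg: the 95 stands out as anomalous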