Abstract:
Given the commercial importance of gilthead sea bream (Sparus aurata) in Italy, the aim of the present study was to evaluate product quality in its nutritional, technological, sensory and freshness aspects. Sea bream production is growing in the Mediterranean, and the evaluation of its quality concerns producers and consumers alike. The culture system greatly influences final product quality. In Italy most sea bream culture is carried out in cages, but there is also production in land-based facilities and in lagoons. In this study, pronounced differences in external appearance were found between culture systems, together with different results for nutritional aspects, fatty acid profiles and mineral content. Some differences in the freshness indices were also found. Furthermore, organoleptic differences between culture systems have been described.
Abstract:
Three finfish species frequently caught in the waters of the Gulf of Manfredonia (Apulia, Italy) were studied in order to determine how flesh composition (proximate composition, fatty acids, macro- and micro-element contents) is affected by the season of catch. The species examined were European hake (Merluccius merluccius), chub mackerel (Scomber japonicus) and horse mackerel (Trachurus trachurus), which were analysed raw for three catch seasons and after cooking for two catch seasons. More precisely, European hake and chub mackerel caught during winter, summer and fall were analysed raw, while the flesh composition of grilled European hake and chub mackerel was studied on fish caught in winter and fall. Horse mackerel from summer and winter catches were analysed both raw and grilled. Furthermore, an overall sensory profile was outlined for each species in two catch seasons and the relevant spider-web diagrams compared. In all, two hundred and eighty fish were analysed during this research project to obtain a nutritional profile of the three species, and one hundred and fifty specimens were used to create complete sensory profiles and compare them among the species. The three finfish species proved to be quite interesting for their proximate, fatty acid, and macro- and micro-element contents. Nutritional and sensory changes with the seasons occurred for chub and horse mackerel only; a high variability of flesh composition seemed to characterise these two species. European hake confirmed its mild sensory profile and good nutritional characteristics, which were not affected by any seasonal effect.
Abstract:
Mycotoxins are contaminants of agricultural products both in the field and during storage, and can enter the food chain through contaminated cereals and through foods (milk, meat and eggs) obtained from animals fed mycotoxin-contaminated feeds. Mycotoxins are genotoxic carcinogens that cause health and economic problems. Ochratoxin A (OTA) and fumonisin B1 (FB1) were classified in 1993 by the International Agency for Research on Cancer as "possibly carcinogenic to humans" (class 2B). To control mycotoxin-induced damage, different strategies have been developed to reduce the growth of mycotoxigenic fungi and to decontaminate and/or detoxify mycotoxin-contaminated foods and animal feeds. The critical points targeted by these strategies are: prevention of mycotoxin contamination; detoxification of mycotoxins already present in food and feed; inhibition of mycotoxin absorption in the gastrointestinal tract; and reduction of mycotoxin-induced damage when absorption occurs. According to FAO, a decontamination process must meet the following requirements to reduce the toxic and economic impact of mycotoxins: it must destroy, inactivate, or remove mycotoxins; it must not produce or leave toxic and/or carcinogenic/mutagenic residues in the final products or in food products obtained from animals fed decontaminated feed; it must be capable of destroying fungal spores and mycelium, to avoid mycotoxin formation under favourable conditions; it should not adversely affect desirable physical and sensory properties of the feedstuff; and it has to be technically and economically feasible. One important approach to the prevention of mycotoxicosis in livestock is the dietary addition of non-nutritive adsorbents that bind mycotoxins and prevent their absorption in the gastrointestinal tract. Activated carbons, hydrated sodium calcium aluminosilicate (HSCAS), zeolites, bentonites and certain clays are the most studied adsorbents, and they possess a high affinity for mycotoxins. In recent years there has been increasing interest in the hypothesis that mycotoxin absorption from consumed food can be inhibited by microorganisms in the gastrointestinal tract; numerous investigators have shown that some dairy strains of lactic acid bacteria (LAB) and bifidobacteria are able to bind aflatoxins effectively. There is also a strong need to prevent mycotoxin-induced damage once the toxin is ingested. Nutritional approaches, such as supplementation with nutrients, food components or additives with protective effects against mycotoxin toxicity, are attracting increasing interest; since mycotoxins are known to cause damage by increasing oxidative stress, the protective properties of antioxidant substances have been extensively investigated. The purpose of the present study was to investigate, in vitro and in vivo, strategies to counteract the mycotoxin threat, particularly in swine husbandry. The Ussing chamber technique was applied in the present study, for the first time, to investigate in vitro the permeability of rat intestinal mucosa to OTA and FB1. Results showed that OTA and FB1 were not absorbed through rat small intestinal mucosa. Since in vivo absorption of both mycotoxins normally occurs, it is evident that under these experimental conditions Ussing diffusion chambers were not able to assess the intestinal permeability of OTA and FB1. A large number of LAB strains isolated from faeces and from different gastrointestinal tract regions of pigs and poultry were then screened for their ability to remove OTA, FB1 and deoxynivalenol (DON) from bacterial medium.
The results of this in vitro study showed a low efficacy of the isolated LAB strains in removing OTA, FB1 and DON from bacterial medium. An in vivo trial in rats was performed to evaluate the effects of in-feed supplementation with a LAB strain, Pediococcus pentosaceus FBB61, in counteracting the toxic effects induced by exposure to OTA-contaminated diets. The study allows the conclusion that feed supplementation with P. pentosaceus FBB61 improves the oxidative status in the liver and lowers OTA-induced oxidative damage in liver and kidney when the diet is contaminated with OTA. This feature of P. pentosaceus FBB61, together with its bactericidal activity against Gram-positive bacteria and its ability to modulate the gut microflora balance in pigs, encourages additional in vivo experiments aimed at better understanding its potential role as a probiotic for farm animals and humans. In the present study, an in vivo trial on weaned piglets fed FB1 allowed the conclusion that feeding 7.32 ppm of FB1 for 6 weeks did not impair growth performance. Deoxynivalenol contamination of feeds was evaluated in an in vivo trial on weaned piglets: the comparison between the growth parameters of piglets fed the DON-contaminated diet and those fed the contaminated diet supplemented with a commercial product did not reach statistical significance, but piglet growth performance was numerically improved when the commercial product was added to the DON-contaminated diet. Further studies are needed to improve knowledge of the intestinal absorption of mycotoxins, of mechanisms for their detoxification in feeds and foods, and of nutritional strategies to reduce mycotoxin-induced damage in animals and humans. A multifactorial approach acting on each of the various steps could be a promising strategy to counteract mycotoxin damage.
Abstract:
The assessment of macroseismic intensity through a formal, transparent and objective procedure, yielding numerical values through rigorous choices and criteria, is a key step and goal for the treatment and use of macroseismic information. Macroseismic data can indeed have important applications in seismotectonic analyses and in seismic hazard assessment. This thesis addressed the problem of formalising intensity estimation, improving both theoretical and practical aspects through three fundamental steps developed in MS-Excel and Matlab environments: i) collection and archiving of the macroseismic dataset; ii) association (membership function) between effects and intensity degrees of the macroseismic scale through the principles of fuzzy-set logic; iii) application of rigorous and objective decision algorithms for the estimation of the final intensity. The entire procedure was applied to seven Italian earthquakes, exploring several options, including methodological ones such as building membership functions by combining the macroseismic information of several earthquakes: Monte Baldo (1876), Valle d'Illasi (1891), Marsica (1915), Santa Sofia (1918), Mugello (1919), Garfagnana (1920) and Irpinia (1930). The results showed good statistical agreement with the intensities of a reference macroseismic catalogue, confirming the validity of the whole methodology. The derived intensities were then used for seismotectonic analyses in the areas of the studied earthquakes. Statistical analysis methods applied to intensity data points (the geographical distribution of the assigned intensities) have proved in the past to be a powerful tool for seismotectonic analysis and characterisation, determining the main parameters (epicentral location, length, width, orientation) of the possible seismogenic source. This thesis improved some aspects of these analysis methodologies through specific applications developed in Matlab, which also allowed the uncertainties associated with the source parameters to be estimated through statistical resampling techniques. A systematic analysis of the studied earthquakes was carried out by combining the various methods for source-parameter estimation with the original intensity data points and with those recalculated through the fuzzy decision procedures. The results made it possible to evaluate the characteristics of the possible sources and to formulate seismotectonic hypotheses that found some circumstantial support in geological and structural-geological data. Some events (1915, 1918, 1920) show a strong stability of the computed parameters (epicentral location and geometry of the possible source) with small associated uncertainties. Other events (1891, 1919 and 1930) showed instead a larger variability both in the epicentral location and in the geometry of the source boxes: for the first event this is probably related to the small size of the intensity dataset, while for the others it is related to the possible multiplicity of seismogenic sources. The bootstrap analysis also highlighted, in some cases, possible asymmetries in the distributions of some parameters (e.g. the azimuth of the possible structure), which might suggest rupture mechanisms on several distinct faults.
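The resampling step mentioned above can be illustrated with a minimal sketch (in Python rather than the thesis Matlab code): bootstrap a set of hypothetical intensity data points and use a simple intensity-weighted barycentre as the epicentre estimator. All coordinates, intensities and the estimator itself are illustrative assumptions, not the thesis data or algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical intensity data points: (lon, lat, assigned intensity).
points = np.array([
    [10.90, 44.20, 7.0], [10.95, 44.25, 8.0], [11.00, 44.22, 8.0],
    [11.05, 44.18, 7.0], [10.85, 44.30, 6.0], [11.10, 44.28, 6.0],
])

def barycentre(sample):
    """Intensity-weighted epicentre estimate from intensity data points."""
    return np.average(sample[:, :2], axis=0, weights=sample[:, 2])

# Bootstrap: resample the data points with replacement, redo the estimate,
# and read the parameter uncertainty off the resulting distribution.
B = 2000
estimates = np.empty((B, 2))
for b in range(B):
    idx = rng.integers(0, len(points), len(points))
    estimates[b] = barycentre(points[idx])

lon_lo, lon_hi = np.percentile(estimates[:, 0], [2.5, 97.5])
lat_lo, lat_hi = np.percentile(estimates[:, 1], [2.5, 97.5])
print("epicentre:", barycentre(points))
print("95% CI lon:", (lon_lo, lon_hi), "lat:", (lat_lo, lat_hi))
```

The same resampling loop applies unchanged to any other source parameter (length, width, azimuth); asymmetric percentile intervals are exactly the kind of signature discussed above for the azimuth distributions.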
Abstract:
Two analytical models are proposed to describe two different mechanisms of lava tube formation. A first model is introduced to describe the development of a solid crust in the central region of the channel, and the formation of a tube when the crust widens until it reaches the levées. The Newtonian assumption is adopted, and the steady-state Navier-Stokes equation in a rectangular conduit is solved. A constant heat flux density assigned at the upper flow surface represents the combined effects of two thermal processes: radiation and convection into the atmosphere. Advective terms are also included through the introduction of velocity into the expression of temperature. Velocity is calculated as an average value over the channel width, so that lateral variations of temperature are neglected. As the upper flow surface cools, a solid layer develops, described as a plastic body having a resistance to shear deformation. If the applied shear stress exceeds this resistance the crust breaks; otherwise, solid fragments present at the flow surface can weld together forming a continuous roof, as happens in the sidewall flow regions. Variations of channel width, ground slope and effusion rate are analyzed, as the parameters that most strongly affect the shear stress values. Crust growth is favored when the channel widens, and tube formation is possible when the ground slope or the effusion rate decrease. Results compare successfully with data obtained from the analysis of pictures of actual flows. The second model describes the formation of a stable, well-defined crust along both channel sides, its growth towards the center and its welding to form the tube roof. The fluid motion is described as in the model above. The thermal budget takes into account conduction into the atmosphere, and advection is included by considering the velocity to depend both on depth and on channel width. The solidified crust has a non-uniform thickness along the channel width. Stresses acting on the crust are calculated using the equations of the elastic thin plate, pinned at its ends. The model allows calculation of the distance at which the crust thickness is able to resist the drag of the underlying fluid and to sustain its own weight, so that the level of the fluid can drop below the tube roof. Viscosity and thermal conductivity have been experimentally investigated using a rotational viscometer. Analyzing samples from Mount Etna (2002), the following results have been obtained: the fluid is Newtonian, and the thermal conductivity is constant over a range of temperatures above the liquidus. At lower temperatures the fluid becomes inhomogeneous, and the experimental techniques employed cannot measure its properties, because the measurements are not reproducible.
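The thesis models solve the full problem in a rectangular conduit; as a rough illustration of why slope and flow depth control the stresses felt by a nascent crust, here is a minimal sketch of the classical steady Newtonian sheet flow down an incline (infinite width, no crust). All parameter values are assumed, order-of-magnitude figures, not the thesis inputs.

```python
import numpy as np

# Illustrative parameters (assumed, roughly basaltic-lava magnitudes).
rho = 2600.0   # density [kg/m^3]
g = 9.81       # gravity [m/s^2]
mu = 1.0e3     # Newtonian viscosity [Pa s]
h = 2.0        # flow depth [m]

def sheet_flow(alpha_deg):
    """Steady Newtonian flow down an infinite incline.

    Velocity profile u(z) = (rho*g*sin(a)/mu) * (h*z - z**2/2),
    with z measured upward from the bed.
    """
    a = np.radians(alpha_deg)
    u_surf = rho * g * np.sin(a) * h**2 / (2 * mu)  # surface velocity
    u_mean = rho * g * np.sin(a) * h**2 / (3 * mu)  # depth-averaged velocity
    q = u_mean * h                                  # discharge per unit width
    tau_bed = rho * g * h * np.sin(a)               # basal shear stress
    return u_surf, u_mean, q, tau_bed

for slope in (1.0, 5.0, 10.0):
    u_s, u_m, q, tau = sheet_flow(slope)
    print(f"slope {slope:4.1f} deg: u_surf={u_s:5.2f} m/s, "
          f"q={q:5.2f} m^2/s, tau_bed={tau:7.0f} Pa")
```

Lowering the slope (or the discharge, via a smaller depth) lowers the shear stress available to tear the plastic crust, consistent with the statement above that tube formation is favored when ground slope or effusion rate decrease.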
Abstract:
Subduction zones are the most favorable places for generating tsunamigenic earthquakes, since friction between the oceanic and continental plates causes strong seismicity there. The topics and methodologies discussed in this thesis are focused on understanding the rupture process of the seismic sources of great earthquakes that generate tsunamis. Tsunamigenesis is controlled by several kinematic characteristics of the parent earthquake, such as the focal mechanism, the depth of the rupture, and the slip distribution over the fault area, as well as by the mechanical properties of the source zone. Each of these factors plays a fundamental role in tsunami generation; therefore, inferring the source parameters of tsunamigenic earthquakes is crucial for understanding the generation of the consequent tsunami and thus for mitigating the risk along the coasts. The typical way to gather information on the source process is the inversion of the available geophysical data. Tsunami data, moreover, are useful for constraining the portion of the fault area that extends offshore, generally close to the trench, which other kinds of data cannot constrain. In this thesis I discuss the rupture process of some recent tsunamigenic events, as inferred by means of an inverse method. First, I present the 2003 Tokachi-Oki (Japan) earthquake (Mw 8.1), for which the slip distribution on the fault was inferred by inverting tsunami waveform, GPS, and bottom-pressure data; the joint inversion of tsunami and geodetic data provided a much better constraint on the slip distribution than the separate inversions of the single datasets. We then studied the earthquake that occurred in 2007 in southern Sumatra (Mw 8.4). By inverting several tsunami waveforms, both in the near and in the far field, we determined the slip distribution and the mean rupture velocity along the causative fault. Since the largest patch of slip was concentrated on the deepest part of the fault, this is the likely reason for the small tsunami waves that followed the earthquake, pointing out how crucial a role the depth of the rupture plays in controlling tsunamigenesis. Finally, we present a new rupture model for the great 2004 Sumatra earthquake (Mw 9.2). We performed a joint inversion of tsunami waveform, GPS and satellite altimetry data to infer the slip distribution, the slip direction, and the rupture velocity on the fault. Furthermore, in this work we present a novel method to estimate, in a self-consistent way, the average rigidity of the source zone. The estimation of the source-zone rigidity is important since it may play a significant role in tsunami generation; particularly for slow earthquakes, a low rigidity value is sometimes necessary to explain how an earthquake with a relatively low seismic moment may generate significant tsunamis. This latter point may be relevant for explaining the mechanics of tsunami earthquakes, one of the open issues in present-day seismology. The investigation of these tsunamigenic earthquakes has underlined the importance of using a joint inversion of different geophysical data to determine the rupture characteristics.
The results shown here have important implications for the implementation of new tsunami warning systems, particularly in the near field, for the improvement of the current ones, and for the planning of inundation maps for tsunami-hazard assessment along coastal areas.
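A minimal sketch of the joint-inversion idea described above, not the thesis method: each dataset is linearly related to slip through its own Green's function matrix, the blocks are scaled by their assumed noise levels so different data types are comparable, and the stacked system is solved with a positivity constraint on slip (here via scipy.optimize.nnls). All matrices and noise values are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Hypothetical linear forward operators: d_i = G_i @ slip.
n_patches = 8
G_tsunami = rng.normal(size=(40, n_patches))  # tsunami waveform kernel
G_gps = rng.normal(size=(12, n_patches))      # GPS offsets kernel
true_slip = np.array([0, 1, 3, 5, 4, 2, 0.5, 0])

d_tsunami = G_tsunami @ true_slip + rng.normal(0, 0.3, 40)
d_gps = G_gps @ true_slip + rng.normal(0, 0.1, 12)

def joint_nnls(blocks):
    """Weighted joint inversion with non-negative slip.

    blocks: list of (G, d, sigma); each block is divided by its sigma so
    that datasets with different units and noise levels weigh fairly.
    """
    A = np.vstack([G / s for G, d, s in blocks])
    b = np.concatenate([d / s for G, d, s in blocks])
    slip, _ = nnls(A, b)
    return slip

slip = joint_nnls([(G_tsunami, d_tsunami, 0.3), (G_gps, d_gps, 0.1)])
print("recovered slip:", np.round(slip, 2))
```

Dropping one of the blocks from the list reproduces a single-dataset inversion, which is exactly the comparison that showed the joint solution to be better constrained.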
Abstract:
Curved mountain belts have always fascinated geologists and geophysicists because of their peculiar structural setting and the geodynamic mechanisms of their formation. The need to study orogenic bends arises from the numerous questions that geologists and geophysicists have tried to answer over the last two decades, such as: what are the mechanisms governing the formation of orogenic bends? Why do they form? Do they develop under particular geological conditions, and if so, what are the most favorable conditions? What are their relationships with the deformational history of the belt? Why is the shape of arcuate orogens in many parts of the Earth so different? What are the factors controlling the shape of orogenic bends? Paleomagnetism has proved to be one of the most effective techniques for documenting the deformation of a curved belt, through the determination of vertical-axis rotations. In fact, the pattern of rotations within a curved belt can reveal the occurrence of bending, and its timing. Nevertheless, paleomagnetic data alone are not sufficient to constrain the tectonic evolution of a curved belt. Usually, structural analysis complements paleomagnetic data by defining the kinematics of a belt through kinematic indicators on brittle fault planes (i.e., slickensides, mineral fiber growth, S-C structures). My research program focused on the study of curved mountain belts through paleomagnetism, in order to define their kinematics, timing, and mechanisms of formation. Structural analysis, performed only in some regions, supported and complemented the paleomagnetic data. In particular, three arcuate orogenic systems were investigated: the Western Alpine Arc (NW Italy), the Bolivian Orocline (Central Andes, NW Argentina), and the Patagonian Orocline (Tierra del Fuego, southern Argentina). The bending of the Western Alpine Arc has so far been investigated using different approaches, though few were based on reliable paleomagnetic data. Results from our paleomagnetic study carried out in the Tertiary Piedmont Basin, located on top of the Alpine nappes, indicate that the Western Alpine Arc is a primary bend that was subsequently tightened by a further ~50° during Aquitanian-Serravallian times (23-12 Ma). This mid-Miocene oroclinal bending, superimposed on a pre-existing Eocene non-rotational arc, is the result of a composite geodynamic mechanism in which slab rollback, mantle flow, and rotating thrust emplacement are intimately linked. Relying on our paleomagnetic and structural evidence, the Bolivian Orocline can be considered a progressive bend whose formation was driven by the along-strike gradient of crustal shortening. The documented clockwise rotations of up to 45° are compatible with a secondary-bending mechanism occurring after Eocene-Oligocene times (30-40 Ma), and are probably related to the widespread shearing taking place between zones of differential shortening. Since ~15 Ma, the activity of N-S left-lateral strike-slip faults in the Eastern Cordillera, at the border with the Altiplano-Puna plateau, has induced counterclockwise rotations of up to ~40° along the fault zone, locally cancelling the regional clockwise rotation. We propose that mid-Miocene strike-slip activity developed in response to a compressive stress (related to body forces) at the plateau margins, caused by the progressive lateral (southward) growth of the Altiplano-Puna plateau, spreading laterally from the over-thickened crustal region of the salient apex.
The growth of plateaux by lateral spreading seems to be a mechanism common to other major plateaux on Earth (e.g., the Tibetan plateau). Results from the Patagonian Orocline represent the first reliable constraint on the timing of bending at the southern tip of South America. They indicate that the Patagonian Orocline has not undergone any significant rotation since early Eocene times (~50 Ma), implying that it may be considered either a primary bend or an orocline formed during the late Cretaceous-early Eocene deformation phase. This result has important implications for the opening of the Drake Passage at ~32 Ma, which is therefore not related to the formation of the Patagonian Orocline but is solely a consequence of Scotia plate spreading. Finally, relying on the results and implications from the study of the Western Alpine Arc, the Bolivian Orocline, and the Patagonian Orocline, general conclusions on the formation of curved mountain belts have been drawn.
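The vertical-axis rotations cited above are, in essence, the difference between an observed mean paleomagnetic declination and the declination expected from a stable-plate reference pole. A minimal sketch with hypothetical site declinations and an assumed expected declination (not the thesis data):

```python
import numpy as np

def mean_declination(decs_deg):
    """Circular mean of site declinations [deg]."""
    d = np.radians(decs_deg)
    return np.degrees(np.arctan2(np.sin(d).mean(), np.cos(d).mean())) % 360

# Hypothetical site declinations and expected reference declination.
site_decs = [38.0, 52.0, 41.0, 47.0, 44.0]
d_expected = 2.0  # from a stable-plate reference pole (assumed)

# Rotation R, wrapped into (-180, 180]; positive means clockwise.
rotation = (mean_declination(site_decs) - d_expected + 180) % 360 - 180
sense = "clockwise" if rotation > 0 else "counterclockwise"
print(f"vertical-axis rotation R = {rotation:+.1f} deg ({sense})")
```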
Abstract:
In this work we study the relation between crustal heterogeneities and complexities in fault processes. The first kind of heterogeneity considered involves the concept of asperity. The presence of an asperity in the hypocentral region of the M = 6.5 earthquake of June 17, 2000, in the South Iceland Seismic Zone was invoked to explain the change in the seismicity pattern before and after the mainshock: in particular, the spatial distribution of foreshock epicentres trends NW, while the strike of the main fault is N7°E and the aftershocks trend accordingly; foreshock depths were typically greater than average aftershock depths. A model is devised which simulates the presence of an asperity in terms of a spherical inclusion within a softer elastic medium, in a transform domain with a deviatoric stress field imposed at remote distances (compressive NE-SW, tensile NW-SE). An isotropic compressive stress component is induced outside the asperity in the direction of the compressive stress axis, and a tensile component in the direction of the tensile axis; as a consequence, fluid flow is inhibited in the compressive quadrants while it is favoured in the tensile quadrants. Within the asperity the isotropic stress vanishes, but the deviatoric stress increases substantially, without any significant change in the principal stress directions. Hydrofracture processes in the tensile quadrants and viscoelastic relaxation at depth may contribute to lowering the effective rigidity of the medium surrounding the asperity. According to the present model, foreshocks may be interpreted as induced, close to the brittle-ductile transition, by high-pressure fluids migrating upwards within the tensile quadrants; this process increases the deviatoric stress within the asperity, which eventually fails, becoming the hypocentre of the mainshock on the optimally oriented fault plane. In the second part of our work we study the complexities induced in fault processes by the layered structure of the crust. In the first model proposed, we study the case in which fault bending takes place in a shallow layer. The problem can be addressed in terms of a deep vertical planar crack interacting with a shallower inclined planar crack. An asymptotic study of the singular behaviour of the dislocation density at the interface reveals that the density distribution has an algebraic singularity of degree ω between -1 and 0 at the interface, depending on the dip angle of the upper crack section and on the rigidity contrast between the two media. From the welded boundary condition at the interface between medium 1 and medium 2, a stress-drop discontinuity condition is obtained, which can be fulfilled if the stress drop in the upper medium is lower than that required for a planar through-going surface: as a corollary, a vertically dipping strike-slip fault at depth may cross the interface with a sedimentary layer, provided that the shallower section is suitably inclined (fault "refraction"). This result has important implications for our understanding of the complexity of the fault system in the SISZ; in particular, we may understand the observed offset of secondary surface fractures with respect to the strike direction of the seismic fault. The results of this model also suggest that further fractures can develop in the opposite quadrant, and so a second model, describing fault branching in the upper layer, is proposed.
Like the previous model, this model can be applied only when the stress drop in the shallow layer is lower than the value prescribed for a vertical planar crack surface. Alternative solutions must be considered if the stress drop in the upper layer is higher than in the other layer, which may be the case when anelastic processes relax the deviatoric stress in layer 2. In such a case, one through-going crack cannot fulfil the welded boundary conditions, and unwelding of the interface may take place. We have solved this problem within the theory of fracture mechanics, employing the boundary element method. The fault terminates against the interface in a T-shaped configuration whose segments interact with each other: the lateral extent of the unwelded surface can be computed in terms of the main fault parameters, and the stress field resulting in the shallower layer can be modelled. A wide stripe of high and nearly uniform shear stress develops above the unwelded surface, whose width is controlled by the lateral extent of the unwelding. Secondary shear fractures may then open within this stripe, according to the Coulomb failure criterion, and the depth of fractures opening in mixed mode can be computed and compared with the well-studied fault complexities observed in the field. In the absence of the T-shaped decollement structure, the stress concentration above the seismic fault would be difficult to reconcile with observations, being much higher and narrower.
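The Coulomb criterion invoked above can be written as dCFS = d(tau) - mu' * d(sigma_n): a candidate plane is brought closer to failure when the resolved shear stress change exceeds the effective friction times the normal stress change. A minimal sketch with assumed stress values, not the thesis boundary-element output:

```python
import numpy as np

def coulomb_failure_stress(tau, sigma_n, mu_eff=0.6):
    """Coulomb failure stress change: dCFS = d(tau) - mu' * d(sigma_n).

    tau     : shear stress change resolved on the candidate plane [MPa]
    sigma_n : normal stress change on the plane (compression positive) [MPa]
    mu_eff  : effective friction coefficient (assumed value)
    Positive dCFS brings the plane closer to failure.
    """
    return tau - mu_eff * sigma_n

# Hypothetical stress changes on three candidate secondary planes.
planes = {"plane A": (0.8, 0.5), "plane B": (0.3, -0.4), "plane C": (0.1, 0.6)}
for name, (tau, sn) in planes.items():
    dcfs = coulomb_failure_stress(tau, sn)
    verdict = "promotes failure" if dcfs > 0 else "inhibits failure"
    print(f"{name}: dCFS = {dcfs:+.2f} MPa ({verdict})")
```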
Abstract:
The work of my thesis focuses on the impact of tsunami waves in limited basins. By limited basins I mean here those basins capable of significantly modifying the tsunami signal with respect to the surrounding open sea. Based on this definition, we consider as limited basins not only harbours but also straits, channels, seamounts and oceanic shelves. I have considered two different examples: the first dealing with the Seychelles Island platform in the Indian Ocean, the second focusing on the Messina Strait and the harbour of the city of Messina itself (Italy). The Seychelles platform is bathymetrically distinct from the surrounding ocean, with depths changing rapidly from 2 km to 70 m over short horizontal distances. The study of the platform response to tsunami propagation is based on the simulation of the mega-event that occurred on 26 December 2004. Based on a hypothesis for the earthquake causative fault, the ensuing tsunami was numerically simulated. I analysed synthetic records at several virtual tide gauges aligned along the direction from the source to the platform. A substantial uniformity of tsunami signals is observed in all the calculated open-ocean tide-gauge records, while the signals calculated at two points on the Seychelles platform show different features, both in the amplitude and in the period of the perturbation. To better understand the frequency content of the different calculated marigrams, a spectral analysis was carried out. In particular, the ratio between the spectrum of the tide-gauge records calculated on the platform and the average spectrum of the tide-gauge records in the open ocean was considered. The main result is that, while in the average open-ocean spectrum the fundamental peak is related to the source, the platform introduces further peaks linked both to the bathymetric configuration and to the coastal geometry. The Messina Strait represents an interesting case because it consists of a sort of channel open both to the north and to the south, and furthermore it contains the limited basin of the Messina harbour. In this case the study was carried out differently from the Seychelles case: the basin was forced along a boundary of the computational domain with sinusoidal functions having different periods within the typical tsunami frequency range. The tsunami was simulated numerically and, in particular, tide-gauge records were calculated for every forcing function at different points both outside and inside the channel and the Messina harbour. Apart from the tide-gauge records in the source region, which almost immediately reach stationarity, all the computed signals in the channel and in the Messina harbour present a transient of variable amplitude followed by a stationary part. Based exclusively on this last part, I calculated the amplification curves for each site, and found that the maximum amplification is obtained for forcing periods of approximately 10 minutes.
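A minimal sketch of the spectral-ratio step described above, with synthetic signals and assumed periods standing in for the thesis simulations: divide the amplitude spectrum of a platform record by an open-ocean reference spectrum, so that peaks added by the local bathymetry stand out over the source peak.

```python
import numpy as np

dt = 30.0                       # sampling interval [s]
t = np.arange(0, 6 * 3600, dt)  # six hours of synthetic record

# Hypothetical signals: the open-ocean record carries the source period
# only; the platform record adds a locally resonant component.
T_source, T_local = 40 * 60.0, 10 * 60.0
ocean = np.sin(2 * np.pi * t / T_source)
platform = ocean + 0.8 * np.sin(2 * np.pi * t / T_local)

freq = np.fft.rfftfreq(len(t), dt)
spec_ocean = np.abs(np.fft.rfft(ocean))
spec_platform = np.abs(np.fft.rfft(platform))

# Spectral ratio: platform spectrum over open-ocean reference spectrum.
ratio = spec_platform / np.maximum(spec_ocean, 1e-6)  # avoid divide-by-zero
peak = freq[1:][np.argmax(ratio[1:])]                 # skip the DC bin
print(f"amplification peak near T = {1 / peak / 60:.1f} min")
```

With these assumed periods the ratio isolates the 10-minute local peak, which is the same logic used above to separate source-related peaks from basin-related ones.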
Abstract:
The theory of the 3D multipole probability tomography method (3D GPT), which images the source poles, dipoles, quadrupoles and octopoles of a geophysical vector or scalar field dataset, is developed. A geophysical dataset is assumed to be the response of an aggregation of poles, dipoles, quadrupoles and octopoles. These physical sources are used to reconstruct, without a priori assumptions, the most probable position and shape of the true buried geophysical sources, by determining the location of their centres and of the critical points of their boundaries, such as corners, wedges and vertices. This theory is then adapted to the geoelectrical, gravity and self-potential methods. A few synthetic examples using simple geometries and three field examples are discussed in order to demonstrate the notably enhanced resolution power of the new approach. First, an application to a field example related to a dipole-dipole geoelectrical survey carried out in the archaeological park of Pompei is presented. The survey was aimed at recognising remains of the ancient Roman urban network, including roads, squares and buildings, buried under the thick pyroclastic cover that fell during the 79 AD Vesuvius eruption. The revealed anomaly structures are ascribed to well-preserved remnants of some aligned walls of Roman edifices, buried and partially destroyed by the 79 AD Vesuvius pyroclastic fall. Then, a field example related to a gravity survey carried out in the volcanic area of Mount Etna (Sicily, Italy) is presented, aimed at imaging as accurately as possible the differential mass-density structure within the first few kilometres of depth inside the volcanic apparatus. An assemblage of vertical prismatic blocks appears to be the most probable gravity model of the Etna apparatus within the first 5 km of depth below sea level. Finally, an experimental self-potential (SP) dataset collected in the Mt. Somma-Vesuvius volcanic district (Naples, Italy) is analysed in order to define the location and shape of the sources of two SP anomalies of opposite sign detected in the northwestern sector of the surveyed area. The modelled sources are interpreted as the polarization state induced by an intense hydrothermal convective flow mechanism within the volcanic apparatus, from the free surface down to about 3 km depth b.s.l.
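The scanning idea behind probability tomography can be illustrated in a strongly simplified form: a single pole source, a 1D self-potential profile, and a normalized cross-correlation between the data and a trial pole kernel used as an occurrence-probability proxy. The full 3D multipole theory is far richer, so the following is only a sketch of the principle, with every value assumed.

```python
import numpy as np

# Hypothetical SP profile generated by a buried point (pole) source.
x_obs = np.linspace(-50, 50, 101)     # station positions [m]
x0, z0, strength = 5.0, 12.0, 800.0   # "true" source (assumed)

def pole_potential(x, xs, zs):
    """Surface potential of a unit point pole buried at (xs, zs)."""
    return 1.0 / np.hypot(x - xs, zs)

data = strength * pole_potential(x_obs, x0, z0)

# Scan a grid of trial source positions; score each with the normalized
# cross-correlation between the data and the trial pole kernel.
xs_grid = np.linspace(-30, 30, 61)
zs_grid = np.linspace(2, 30, 57)
eta = np.zeros((len(zs_grid), len(xs_grid)))
for i, zs in enumerate(zs_grid):
    for j, xs in enumerate(xs_grid):
        k = pole_potential(x_obs, xs, zs)
        eta[i, j] = np.dot(data, k) / (np.linalg.norm(data) * np.linalg.norm(k))

i, j = np.unravel_index(np.argmax(eta), eta.shape)
print(f"most probable pole at x={xs_grid[j]:.1f} m, z={zs_grid[i]:.1f} m, "
      f"eta={eta[i, j]:.3f}")
```

The correlation peaks where the trial kernel matches the data best, which is the sense in which the method locates source centres without a priori assumptions; the multipole extension adds kernels for dipoles, quadrupoles and octopoles to resolve boundary critical points.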
Abstract:
This work is a detailed study of hydrodynamic processes in a defined area, the littoral in front of the Venice Lagoon and its inlets, which are complex morphological areas of interconnection. A finite element hydrodynamic model of the Venice Lagoon and the Adriatic Sea has been developed in order to study the coastal current patterns and the exchanges at the inlets of the Venice Lagoon. This is the first work in this area that tries to model the interaction dynamics by running a model for the lagoon and the Adriatic Sea together. First, the barotropic processes near the inlets of the Venice Lagoon were studied. Data from more than ten tide gauges distributed over the Adriatic Sea were used in the calibration of the simulated water levels. To validate the model results, empirical flux data measured by ADCP probes installed inside the Lido and Malamocco inlets were used, and the exchanges through the three inlets of the Venice Lagoon were analyzed. The comparison between modelled and measured fluxes at the inlets demonstrated the ability of the model to reproduce both tide- and wind-induced water exchanges between the sea and the lagoon. As a second step, the small-scale processes around the inlets that connect the Venice Lagoon with the Northern Adriatic Sea were also investigated by means of 3D simulations. Maps of vorticity were produced, considering the influence of tidal flows and wind stress in the area. A sensitivity analysis was carried out to assess the importance of advection and of the baroclinic pressure gradients in the development of the vortical processes seen along the littoral close to the inlets. Finally, a comparison with real measurements, surface velocity data from HF radar near the Venice inlets, was performed, which allows for a better understanding of the processes and of their seasonal dynamics. The results highlight the predominance of wind and tidal forcing in the coastal area. Wind forcing acts mainly on the mean coastal current, inducing its detachment offshore during Sirocco events and an increase of littoral currents during Bora events. The Bora action is more homogeneous over the whole coastal area, whereas the Sirocco has its strongest impact in the south, near the Chioggia inlet. Tidal forcing at the inlets is mainly barotropic. The sensitivity analysis shows that advection is the main physical process responsible for the persistent vortical structures present along the littoral between the Venice Lagoon inlets. The comparison with HF radar measurements not only permitted a validation of the model results, but also a description of different patterns in specific periods of the year. The success of the 2D and 3D simulations in reproducing the sea surface elevation inside and outside the Venice Lagoon, the tidal flow through the lagoon inlets, and the small-scale phenomena occurring along the littoral indicates that the finite element approach is the most suitable tool for the investigation of coastal processes. For the first time, as shown by the flux modelling, the physical processes that drive the interaction between the two basins were reproduced.
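A minimal sketch of the kind of model-validation step described above, with synthetic series standing in for the modelled and ADCP-measured inlet fluxes (all names, periods and noise levels are assumptions): compute RMSE, bias and correlation between the two series.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical semidiurnal inlet flux [m^3/s]: "measured" vs "modelled".
t = np.arange(0, 15 * 24, 0.5)                   # 15 days, half-hour steps [h]
measured = 8000 * np.sin(2 * np.pi * t / 12.42)  # M2-like tidal flux
modelled = 0.95 * 8000 * np.sin(2 * np.pi * t / 12.42 + 0.05)
measured = measured + rng.normal(0, 300, t.size)  # ADCP measurement noise

def skill(model, obs):
    """Basic agreement metrics between modelled and observed series."""
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    bias = np.mean(model - obs)
    corr = np.corrcoef(model, obs)[0, 1]
    return rmse, bias, corr

rmse, bias, corr = skill(modelled, measured)
print(f"RMSE = {rmse:.0f} m^3/s, bias = {bias:+.0f} m^3/s, r = {corr:.3f}")
```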
Abstract:
The well-being of populations, sustainable resource management, poverty and environmental degradation are strongly interconnected concepts in a world where 20% of the global population consumes more than 75% of natural resources. Since the 1992 Earth Summit in Rio de Janeiro, the strong link between environmental protection and poverty reduction has been affirmed, and the importance of a healthy ecosystem for leading a dignified life has been recognised, especially in the poor rural areas of Africa, Asia and Latin America. Nature, above all for rural populations, represents a precious everyday asset, an essential basis for subsistence and a primary source of income. Alongside this observation there is also the awareness that in recent decades natural ecosystems have been degrading at an impressive rate, unprecedented in the history of the human species: we consume resources faster than the Earth can regenerate them and "metabolise" our waste. Poverty is growing in the same way: at present 1.2 billion people live on less than one dollar a day, while about half the world population survives on less than two dollars a day (UN). The connection between poverty and environment does not depend only on the scarcity of resources, which makes living conditions harder, but also on how those natural resources are managed: in many countries or places where resources are not scarce, the poorest populations have no access to them for political, economic and social reasons. Moreover, if the ecological footprint is compared with a recognised measure of "human development", the United Nations Human Development Index (HDI, see Chapter 2), the relationship clearly shows that what we generally accept as "high development" is very far from the universally accepted concept of sustainable development, since the so-called "developed" countries are those with the largest ecological footprint. If "development" puts pressure on the ecosystems on whose health human well-being directly depends, then the concept of "development" must be revisited, because its consequence is not the well-being of the planet and its populations but environmental degradation and growing social inequality. On one side, then, there is "Western society", which promotes the advancement of technology and industrialisation for the sake of economic growth, squeezing an ever more tired and exhausted ecosystem in order to obtain benefits for only a narrow slice of the world population, which follows a consumerist lifestyle, degrading the environment and submerging it in waste; on the other side there are the families of rural farmers, the "moradores" of the favelas and of the peripheries of the great metropolises of the global South, the landless, the immigrants of the shantytowns, the "waste pickers" of the outskirts of Bombay who survive by scavenging refuse, the refugees of wars fought for the control of resources, the environmentally displaced and the eco-refugees, who live below the poverty line without access to the primary resources needed for survival.
Sustainable environmental management, income generation through the direct valorisation of the ecosystem, and access to natural resources are among the most effective tools for improving people's living conditions; they can also guarantee the distribution of wealth, building a more equitable society, since ecosystem goods and services act as community assets. The correct management of the environment and its resources is therefore of extreme importance in the fight against poverty, and here the role and responsibility of environmental professionals is crucial. Starting from the analysis of the problem of natural resource management and of its close link with poverty, and revisiting the traditional concept of "development" in the light of new schools of thought, the research work presented here aims to suggest solutions and technologies for the sustainable management of natural resources whose objective is the well-being of the poorest populations and of ecosystems, also proposing an evaluation method for choosing the alternatives, solutions or technologies most appropriate to the intervention context. After the analysis of the "state of the Planet" (Chapter 1) and of resources, both at the global and at the regional level, the second Chapter examines the concepts of poverty, of Developing Country, of "sustainable development", and the new schools of thought, from Degrowth theory to the concept of Human Development. From the awareness of real human needs, from the analysis of the state of the environment and of poverty with its different faces in the various countries, and from the acknowledgement of the failure of the economics of growth (today more visible than ever), it can be understood that the solution for defeating poverty and environmental degradation and achieving human development is not consumerism, production, or even technology transfer and industrialisation, but "small is beautiful" (F. Schumacher, 1982): simple lifestyles, the protection of ecosystems and, at the technological level, "appropriate technologies". It is precisely to Appropriate Technologies that the following Chapters (Chapter 4 and Chapter 5) are dedicated. These are simple, low-cost, low-environmental-impact technologies, easily managed by communities, which allow the poorest populations to have access to natural resources. Thanks to their characteristics, they are the technologies that best allow the protection of natural common goods, hence of resources and the environment, favouring and encouraging the participation of local communities and valorising traditional knowledge through the involvement of all the actors, their low cost and their environmental sustainability, thus contributing to the affirmation of human rights and the safeguarding of the environment. The Appropriate Technologies examined are those relating to water supply and water purification, including: fog collection; simple well-drilling methods; treadle pumps and hand pumps for water supply; rainwater harvesting; spring catchment; and simple point-of-use water treatment methods (ceramic filters, sand filters, cloth filters, solar disinfection and solar distillation).
The fifth Chapter presents the Appropriate Technologies for waste management in Developing Countries, describing: solutions for waste collection; solutions for waste disposal; and simple technologies for the recycling of solid waste. The sixth Chapter deals with International Cooperation, Decentralised Cooperation and Human Development projects. Within Cooperation, development projects are understood as those whose objectives are the fight against poverty and the improvement of the living conditions of the beneficiary communities in the Developing Countries involved. Within cooperation and human-development projects, environmental interventions play an important role since, as already noted, poverty and the well-being of populations depend on the health of the ecosystems in which they live: promoting environmental protection and guaranteeing access to drinking water, the correct management of waste and wastewater, and a clean energy supply are necessary conditions for allowing every individual, above all those living in "developing" conditions, to lead a healthy and productive life. In technical and environmental human-development interventions it is therefore important to choose decentralised solutions based on the adoption of Appropriate Technologies, so as to contribute to valorising the environment and protecting community health. Chapters 7 and 8 examine methods for evaluating human-development interventions. Another fundamental aspect of the professional's role is indeed the use of a correct evaluation method for choosing among possible projects, one that takes into account all aspects (social, environmental and economic impacts) and fits disadvantaged contexts such as those considered in this work: a method, that is, allowing a specific evaluation of human-development projects and the identification of the technological and environmental project or intervention most appropriate to each specific context. From the analysis of the various evaluation tools, it was decided to develop a model for the evaluation of environmental interventions in Decentralised Cooperation projects based on Multi-Criteria Analysis and on the Analytic Hierarchy Process. The object of this research was therefore the development of a methodology which, with the mathematical and methodological support of Multi-Criteria Analysis, makes it possible to evaluate the appropriateness and sustainability of environmental Human Development interventions carried out within International Cooperation and Decentralised Cooperation projects through the use of Appropriate Technologies. Chapter 9 proposes the methodology, the calculation model and the criteria on which the evaluation is based. The following chapters (Chapter 10 and Chapter 11) are devoted to testing the methodology on different case studies: the "Progetto ambientale sulla gestione dei rifiuti presso i campi Profughi Saharawi" (Algeria), the "Programa 1 milhão de Cisternas, P1MC" and the "Programa Uma Terra e Duas Águas, P1+2" (Brazilian Semi-Arid region).
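The weight-derivation step of the Analytic Hierarchy Process mentioned above can be sketched minimally as follows; the criteria, the pairwise judgements and the whole example are hypothetical, not the thesis model, which combines Multi-Criteria Analysis with a full evaluation hierarchy.

```python
import numpy as np

# Hypothetical pairwise comparisons (Saaty 1-9 scale) among three criteria:
# environmental impact vs cost vs community manageability.
A = np.array([
    [1.0, 3.0, 2.0],
    [1/3, 1.0, 1/2],
    [1/2, 2.0, 1.0],
])

# Criterion weights = principal eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()  # normalized priority vector

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
RI = 0.58  # Saaty's random index for n = 3
CR = CI / RI
print("weights:", np.round(weights, 3), f"CR = {CR:.3f} (acceptable if < 0.1)")
```

Once consistent weights are available, each alternative technology is scored against the criteria and the weighted scores are aggregated, which is the sense in which the method identifies the most appropriate intervention for a given context.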
Abstract:
In the light of the vast art-historical literature that has arisen in recent years on the painted landscape, on its history and its protagonists, on the chain of influences and the various stylistic declinations that characterise it, a study on the Origins of the genre at the dawn of modernity may seem destined not so much to add new information as to systematise what has emerged so far. And yet the problem of landscape as a semiotic object still has to be clarified. Art historians have essentially set the question aside, behind the idea that pictures of nature are representations devoted to maximum transparency, where "a direct transition from motifs to content" takes place (Panofsky 1939, p. 9). This study recovers and brings back to the surface the question of the meaning of landscape painting. Its purpose is to propose an analysis of landscape as a modern discursive production. Between the sixteenth and seventeenth centuries, when the genre was born, this production manifested itself in four different semiotic forms: the ornament or paraergon (Chapter II); the patch of colour (Chapter III); the horizontal axiology of the topological device (Chapter IV); and the regime of visibility of "seeing through" (Chapter V). The first of these forms belongs to historical continuity, and its analysis offers the opportunity to show that, even as paraergon, landscape is never the aesthetic embellishment of an invariant content but intervenes actively, and in various ways, in the construction of the work's meaning. The other forms instead mark a strong historical discontinuity. In them the modern genre reveals itself as an operator of great transformations, whose meanings emerge in opposition to the classical artistic paradigm. Against the predominance of drawing and figurativity, characteristic of the traditional "instrumental conception of art" (Gombrich 1971), landscape actualises itself as a patch of colour, becoming the spokesman of a modern discourse on the plastic value of painting. Against the "tyranny of the quadrangular format" (Burckhardt 1898), instrument of the traditional liturgical and celebratory conception of art, landscape spreads out on oblong horizontal formats, articulating a secular discourse of painting. Finally, through the framing of vision typical of the regime of visibility of "seeing through" (Stoichita 1993), landscape transforms the contemplation of the world into contemplation of the image of the world. The cognitive device underlying this kind of discursive formation makes landscape the (symbolic) prelude to the birth of modern cartographic knowledge, which would make the reduction of the world to its image the foundation of the method of scientific knowledge of the Earth.