Abstract:
In this work a multidisciplinary study of the December 26th, 2004 Sumatra earthquake has been carried out. We have investigated both the effect of the earthquake on the Earth's rotation and the stress field variations associated with the seismic event. In the first part of the work we quantified the effects of the water mass redistribution associated with the propagation of a tsunami wave on the Earth's pole path and on the length of day (LOD), and applied our modeling results to the tsunami that followed the 2004 giant Sumatra earthquake. We compared the results of our simulations of the instantaneous rotation axis variations with some preliminary, not yet confirmed, instrumental evidence of a pole path perturbation registered just after the occurrence of the earthquake, which showed a step-like discontinuity that cannot be attributed to the effect of a seismic dislocation. Our results show that the perturbation induced by the tsunami on the instantaneous rotation pole is characterized by a step-like discontinuity, compatible with the observations, but its magnitude turns out to be almost one hundred times smaller than the detected one. The LOD variation induced by the water mass redistribution turns out not to be significant, because the total effect is smaller than current measurement uncertainties. In the second part of this thesis we modeled the coseismic and postseismic stress evolution following the Sumatra earthquake. By means of a semi-analytical, viscoelastic, spherical model of global postseismic deformation and a numerical finite-element approach, we performed an analysis of the stress diffusion following the earthquake in the near and far field of the mainshock source. We evaluated the stress changes due to the Sumatra earthquake by projecting the Coulomb stress onto the sequence of aftershocks taken from various catalogues in a time window spanning about two years, and finally analyzed the spatio-temporal pattern. The analyses performed with the semi-analytical and the finite-element modeling give a complex picture of the stress diffusion in the study area after the Sumatra earthquake. We believe that the results obtained with the analytical method suffer heavily from the restrictions imposed on the hypocentral depths of the aftershocks in order to obtain the convergence of the harmonic series of the stress components. In contrast, no such constraints were imposed in the numerical method, so we expect its results to give a more realistic description of the stress variation pattern.
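For reference, the Coulomb stress change projected onto the aftershocks, as described above, has the standard form (the notation below is the usual one and is given only as an illustration, not taken from the thesis):

\[
  \Delta \mathrm{CFF} \;=\; \Delta\tau \;+\; \mu' \,\Delta\sigma_n ,
\]

where \(\Delta\tau\) is the shear-stress change resolved on the receiver fault in its slip direction, \(\Delta\sigma_n\) is the normal-stress change (positive for unclamping), and \(\mu'\) is the effective friction coefficient; a positive \(\Delta\mathrm{CFF}\) brings the receiver fault closer to failure.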
Abstract:
Because of its particular position and complex geological history, the Northern Apennines has been considered a natural laboratory for several kinds of investigation. It is nonetheless difficult to join all the knowledge about the Northern Apennines into a unique picture that explains the structural and geological setting that produced it. The main goal of this thesis is to put together all the information on the deformation of this region, in the crust and at depth, and to describe a geodynamical model that accounts for it. To do so, we analyzed the pattern of deformation in the crust and in the mantle. In both cases the deformation was studied using information recovered from earthquakes, although with different techniques. In particular, the shallower deformation was studied using seismic moment tensor information. For this purpose we used the method described in Arvidsson and Ekstrom (1998), which, by allowing the use of surface waves in the inversion [and not only body waves, as in the Centroid Moment Tensor method (Dziewonski et al., 1981)], makes it possible to determine seismic source parameters for earthquakes with magnitudes as small as 4.0. We applied this tool to the Northern Apennines and, through this activity, built up the Italian CMT dataset (Pondrelli et al., 2006) and the pattern of seismic deformation, using the Kostrov (1974) method on a regular grid of 0.25-degree cells. We obtained a map of lateral variations of the pattern of seismic deformation on different depth layers, taking into account the fact that shallow earthquakes (within 15 km depth) occur everywhere in the region, while most events with deeper hypocenters (15-40 km) occur only in the outer part of the belt, on the Adriatic side. For the analysis of the deep deformation, i.e. that occurring in the mantle, we used the anisotropy information characterizing the structure below the Northern Apennines. Anisotropy is an Earth property that in the crust is due to the presence of aligned fluid-filled cracks or to alternating isotropic layers with different elastic properties, while in the mantle the most important cause of seismic anisotropy is the lattice preferred orientation (LPO) of mantle minerals such as olivine. Olivine is a highly anisotropic mineral and tends to align its fast crystallographic axis (a-axis) parallel to the asthenospheric flow in response to the finite strain induced by geodynamic processes. The seismic anisotropy pattern of a region is measured using the shear-wave splitting phenomenon (the seismological analogue of optical birefringence). Here we apply the Sileny and Plomerova (1996) approach to teleseismic earthquakes recorded at stations located in the study region. The results are analyzed on the basis of their lateral and vertical variations to better define the Earth structure beneath the Northern Apennines. We find different anisotropic domains, a Tuscany one and an Adria one, with a pattern of seismic anisotropy that varies laterally in a way similar to the seismic deformation. Moreover, beneath the Adriatic region the distribution of the splitting parameters is so complex as to require a dedicated analysis. We therefore applied to our data the code of Menke and Levin (2003), which allows searching for models of structures with multilayer anisotropy. We found that the structure beneath the Po Plain is probably even more complicated than expected.
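For reference, the Kostrov (1974) summation mentioned above relates the moment tensors of the earthquakes in a cell to an average seismic strain rate; the notation is the standard one and is shown here only as an illustration:

\[
  \dot{\bar{\varepsilon}}_{ij} \;=\; \frac{1}{2\,\mu\, V\, \Delta t} \sum_{k=1}^{N} M_{ij}^{(k)} ,
\]

where \(\mu\) is the shear modulus, \(V\) the cell volume, \(\Delta t\) the time span of the catalogue, and \(M_{ij}^{(k)}\) the moment tensor of the k-th earthquake falling in the cell.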
On the basis of the results obtained in this thesis, together with those from previous works, we suggest that the slab roll-back that created the Apennines and opened the Tyrrhenian Sea evolved differently at the northern boundary of the Northern Apennines than in its southern part. In particular, the trench retreat developed primarily south of our study region, with an eastward roll-back. In the northern portion of the orogen, after a first stage during which the retreat was perpendicular to the trench, it became oblique with respect to the structure.
Abstract:
Since the end of the 19th century, geodesy has contributed greatly to the knowledge of regional tectonics and fault movement through its ability to measure, at sub-centimetre precision, the relative positions of points on the Earth's surface. Nowadays the systematic analysis of geodetic measurements in actively deforming regions therefore represents one of the most important tools in the study of crustal deformation over different temporal scales [e.g., Dixon, 1991]. This dissertation focuses on motion that can be observed geodetically with classical terrestrial position measurements, particularly triangulation and leveling observations. The work is divided into two sections: an overview of the principal methods for estimating the long-term accumulation of elastic strain from terrestrial observations, and an overview of the principal methods for rigorously inverting surface coseismic deformation fields for source geometry, with tests on synthetic deformation data sets and applications in two tectonically active regions of the Italian peninsula. For the analysis of the long-term accumulation of elastic strain, triangulation data were available from a geodetic network across the Messina Straits area (southern Italy) for the period 1971-2004. From the resulting angle changes, the shear strain rates and the orientation of the principal axes of the strain rate tensor were estimated. The computed average annual shear strain rates for the period between 1971 and 2004 are γ̇1 = 113.89 ± 54.96 nanostrain/yr and γ̇2 = -23.38 ± 48.71 nanostrain/yr, with the orientation of the most extensional strain (θ) at N140.80° ± 19.55°E. These results suggest that the first-order strain field of the area is dominated by extension in the direction perpendicular to the trend of the Straits, supporting the hypothesis that the Messina Straits could represent an area of active, concentrated deformation. The orientation of θ agrees well with GPS deformation estimates calculated over a shorter time interval, is consistent with previous preliminary GPS estimates [D'Agostino and Selvaggi, 2004; Serpelloni et al., 2005], and is also similar to the direction of the 1908 (MW 7.1) earthquake slip vector [e.g., Boschi et al., 1989; Valensise and Pantosti, 1992; Pino et al., 2000; Amoruso et al., 2002]. Thus, the measured strain rate can be attributed to active extension across the Messina Straits, corresponding to a relative extension rate ranging between < 1 mm/yr and ~ 2 mm/yr within the portion of the Straits covered by the triangulation network. These results are consistent with the hypothesis that the Messina Straits is an important active geological boundary between the Sicilian and the Calabrian domains, and they support previous preliminary GPS-based estimates of strain rates across the Straits, which show that the active deformation is distributed over a wider area. Finally, preliminary dislocation modelling has shown that, although the current geodetic measurements do not resolve the geometry of the dislocation models, they resolve well the rate of interseismic strain accumulation across the Messina Straits and give useful information about the locking depth of the shear zone. Geodetic data, namely triangulation and leveling measurements of the 1976 Friuli (NE Italy) earthquake, were available for the inversion of coseismic source parameters.
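As an illustration of how quantities of the kind quoted above can be derived, the following minimal Python sketch computes engineering shear strain rates and the azimuth of the most extensional principal axis from a horizontal strain-rate tensor; the tensor values are purely hypothetical and are not the Messina Straits estimates.

import numpy as np

# Hypothetical horizontal strain-rate tensor (nanostrain/yr) in (east, north) coordinates.
eps = np.array([[ 50.0,  40.0],
                [ 40.0, -60.0]])

# Principal strain rates and directions from the eigen-decomposition of the symmetric tensor
# (numpy returns the eigenvalues in ascending order).
vals, vecs = np.linalg.eigh(eps)
e_min, e_max = vals                 # most contractional and most extensional rates
v_max = vecs[:, 1]                  # direction of the most extensional principal axis

# Engineering shear strain rates (one common convention) and the azimuth of the most
# extensional axis, measured clockwise from north.
gamma1 = eps[0, 0] - eps[1, 1]
gamma2 = 2.0 * eps[0, 1]
theta = np.degrees(np.arctan2(v_max[0], v_max[1])) % 180.0

print(f"e_max = {e_max:.1f}, e_min = {e_min:.1f} nanostrain/yr")
print(f"gamma1 = {gamma1:.1f}, gamma2 = {gamma2:.1f} nanostrain/yr, theta = N{theta:.1f}E")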
From the observed angle and elevation changes, the source parameters of the seismic sequence were estimated in a joint inversion using an algorithm called "simulated annealing". The computed optimal uniform-slip elastic dislocation model consists of a 30° north-dipping, shallow (depth 1.30 ± 0.75 km) fault plane with an azimuth of 273°, accommodating reverse dextral slip of about 1.8 m. The hypocentral location and inferred fault plane of the main event are thus consistent with the activation of Periadriatic overthrusts or other related thrust faults such as the Gemona-Kobarid thrust. The geodetic data set therefore excludes the source solutions of Aoudia et al. [2000], Peruzza et al. [2002] and Poli et al. [2002], which consider the Susans-Tricesimo thrust as the source of the May 6 event. The best-fit source model is instead more consistent with the solution of Pondrelli et al. [2001], which proposed the activation of other thrusts located farther north of the Susans-Tricesimo thrust, probably on Periadriatic-related thrust faults. The main characteristics of the leveling and triangulation data are fit by the optimal single-fault model; that is, these results are consistent with a first-order rupture process characterized by the progressive rupture of a single fault system. A single uniform-slip fault model, however, does not reproduce some minor complexities of the observations, and residual signals not modelled by the optimal single-fault-plane solution remain. In particular, the single-fault-plane model does not reproduce some minor features of the leveling deformation field along route 36, south of the main uplift peak; a second fault seems to be necessary to reproduce these residual signals. By assuming movement along a mapped thrust located southward of the inferred optimal single-plane solution, the residual signal has been successfully modelled. In summary, the inversion results presented in this thesis are consistent with the activation of Periadriatic-related thrusts for the main events of the sequence, and with a minor role of the southward thrust systems of the middle Tagliamento plain.
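A minimal sketch of a simulated-annealing search of the kind described above is given below (Python). The forward model is only a placeholder for an elastic-dislocation prediction of the geodetic observables (e.g. an Okada-type model); all names, the cooling schedule, and the step size are illustrative assumptions, not the settings used in the thesis.

import numpy as np

rng = np.random.default_rng(0)

def misfit(params, d_obs, forward):
    """Sum-of-squares misfit between observed data and model predictions."""
    return float(np.sum((d_obs - forward(params)) ** 2))

def simulated_annealing(d_obs, forward, p0, bounds, n_iter=20000, T0=1.0):
    """Search the parameter space of a fault model (strike, dip, depth, slip, ...).
    bounds is an (n_params, 2) array of lower/upper limits; forward stands in for an
    elastic-dislocation prediction of the leveling/triangulation observables."""
    bounds = np.asarray(bounds, float)
    p = np.asarray(p0, float)
    cost = misfit(p, d_obs, forward)
    best_p, best_cost = p.copy(), cost
    for k in range(n_iter):
        T = T0 * (1.0 - k / n_iter)                      # linear cooling schedule
        step = 0.05 * (bounds[:, 1] - bounds[:, 0])      # perturbation scale per parameter
        trial = np.clip(p + rng.normal(scale=step), bounds[:, 0], bounds[:, 1])
        c = misfit(trial, d_obs, forward)
        # Always accept improvements; accept worse trials with Boltzmann probability.
        if c < cost or rng.random() < np.exp(-(c - cost) / max(T, 1e-9)):
            p, cost = trial, c
            if c < best_cost:
                best_p, best_cost = p.copy(), c
    return best_p, best_cost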
Abstract:
Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements of these waves to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for determining the 3D structure of the Earth's deep interior. Tomographic models obtained at global and regional scales are a fundamental tool for determining the geodynamical state of the Earth, showing clear correlations with other geophysical and geological characteristics. Global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, defining the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. In this work we focus our attention on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines; this sum is often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution to overcome the shortcomings of Fourier analysis. The fundamental idea behind this analysis is to study a signal according to scale. Wavelets are mathematical functions that cut up data into different frequency components and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes that contain multi-scale features, discontinuities, and sharp spikes. Wavelets are essentially used in two ways in the study of geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel for analysis, to extract information about the process. These two types of wavelet application in geophysics are the object of study of this work. We first use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic model; we then apply it to real data, obtaining surface wave phase velocity maps and evaluating its capabilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the continuous wavelet transform in spectral analysis, starting again with synthetic tests to evaluate its sensitivity and capability, and then apply the same analysis to real data to obtain local correlation maps between different models at the same depth or between different profiles of the same model.
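As a small illustration of the first type of application (wavelets as a sparse representation basis), the following Python sketch uses the PyWavelets package on a synthetic 1-D profile; the signal, the wavelet family, and the truncation threshold are arbitrary choices made for this example, not those adopted in the thesis.

import numpy as np
import pywt  # PyWavelets

# Synthetic 1-D profile: a smooth oscillation plus a sharp, localized anomaly.
x = np.linspace(0.0, 1.0, 512)
signal = np.sin(2 * np.pi * 5 * x) + 1.5 * (np.abs(x - 0.6) < 0.02)

# Multiresolution representation: discrete wavelet decomposition.
coeffs = pywt.wavedec(signal, 'db4', level=5)

# Sparse parameterization: keep only the largest ~10% of coefficients and reconstruct.
flat = np.concatenate(coeffs)
thresh = np.quantile(np.abs(flat), 0.90)
coeffs_thr = [pywt.threshold(c, thresh, mode='hard') for c in coeffs]
reconstructed = pywt.waverec(coeffs_thr, 'db4')

# The sharp anomaly survives the truncation because wavelets localize it in space and scale.
print(len(flat), "coefficients in total,", int(np.sum(np.abs(flat) > thresh)), "retained")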
Abstract:
Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters represents the first step in discriminating whether a given seismic event is natural or not. In case a specific event is deemed suspicious by the majority of the States Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high-quality seismological systems are thought to be capable of detecting and locating very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques: the first one, known as the double-difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained with this method are characterized by high relative accuracy, although the absolute location of the whole cluster remains uncertain. We eliminate this problem by introducing a priori information: the known location of a selected event. The second technique concerns reliable estimation of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both of the above techniques, we have used cross-correlation among digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests of the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. Initially, the algorithm was applied to the differences among the original arrival times of the P phases, so the cross-correlation was not used. We found that the considerable spatial spread of the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, assumed as our reference) was markedly reduced by the application of our technique.
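For reference, the double-difference approach mentioned above works with residuals of the form (standard notation, e.g. Waldhauser and Ellsworth, 2000; shown here only as an illustration):

\[
  dr_{k}^{\,ij} \;=\; \bigl(t_{k}^{\,i} - t_{k}^{\,j}\bigr)^{\mathrm{obs}} \;-\; \bigl(t_{k}^{\,i} - t_{k}^{\,j}\bigr)^{\mathrm{calc}} ,
\]

where \(t_{k}^{\,i}\) and \(t_{k}^{\,j}\) are the arrival times of events i and j at station k. Because closely spaced events share most of the path to a common station, minimizing these differential residuals largely cancels the systematic travel-time errors and yields accurate relative locations.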
This is what we expected, since the methodology was applied to a sequence of events whose hypocenters can be supposed to be genuinely close, belonging to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed, or at least reduced. The introduction of the cross-correlation has not brought evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the use of cross-correlation has not substantially improved the precision of the manual picks. Probably the picks reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. As a further explanation of the limited improvement provided by the cross-correlation, it should be noted that the events included in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to the cross-correlation, we performed a signal interpolation in order to improve the time resolution. The algorithm so developed was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in poor SNR conditions). Another remarkable point of our procedure is that it does not require long processing times, so the user can check the results immediately. During a field survey, this feature makes possible a quasi-real-time check, allowing the immediate optimization of the array geometry if suggested by the results at an early stage.
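A minimal sketch of the delay measurement underlying both scales of analysis (cross-correlating two digital waveforms and refining the peak position by interpolation) might look like the following Python function; the names are illustrative, and scipy.signal.correlation_lags is assumed to be available (SciPy 1.6 or later). The three-point parabolic refinement is a simple stand-in for the signal interpolation mentioned above, not the procedure of the thesis.

import numpy as np
from scipy.signal import correlate, correlation_lags

def relative_delay(w1, w2, dt):
    """Delay of waveform w2 relative to w1, in seconds (positive if w2 arrives later)."""
    cc = correlate(w2, w1, mode='full')
    lags = correlation_lags(len(w2), len(w1), mode='full')
    i = int(np.argmax(cc))
    frac = 0.0
    if 0 < i < len(cc) - 1:                      # parabolic interpolation around the peak
        denom = cc[i - 1] - 2.0 * cc[i] + cc[i + 1]
        if denom != 0.0:
            frac = 0.5 * (cc[i - 1] - cc[i + 1]) / denom
    return (lags[i] + frac) * dt

# Example: two noisy copies of the same wavelet, the second delayed by 0.013 s.
dt = 0.01
t = np.arange(0.0, 2.0, dt)
wavelet = np.exp(-((t - 1.0) / 0.05) ** 2) * np.sin(2 * np.pi * 10 * t)
w1 = wavelet + 0.01 * np.random.default_rng(1).normal(size=t.size)
w2 = np.interp(t - 0.013, t, wavelet)
print(f"estimated delay: {relative_delay(w1, w2, dt):.4f} s")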
Abstract:
Today's pet food industry is growing rapidly, with pet owners demanding high-quality diets for their pets. The primary role of diet is to provide enough nutrients to meet metabolic requirements, while giving the consumer a feeling of well-being. Diet nutrient composition and digestibility are of crucial importance for the health and well-being of animals. A recent strategy to improve the quality of food is the use of "nutraceuticals" or "functional foods". At the moment, probiotics and prebiotics are among the most studied and frequently used functional food compounds in pet foods. The present thesis reports results from three different studies. The first study aimed to develop a simple laboratory method to predict pet food digestibility. The developed method was based on the two-step multi-enzymatic incubation assay described by Vervaeke et al. (1989), with some modifications in order to better represent the digestive physiology of dogs. A trial was then conducted to compare the in vivo digestibility of pet foods with the in vitro digestibility obtained using the newly developed method. Correlation coefficients showed a close correlation between the digestibility data for total dry matter and crude protein obtained with the in vivo and in vitro methods (0.9976 and 0.9957, respectively). Ether extract presented a lower correlation coefficient, although close to 1 (0.9098). Based on the present results, the new method could be considered an alternative system for evaluating dog food digestibility, reducing the need for experimental animals in digestibility trials. The second part of the study aimed to isolate from dog faeces a Lactobacillus strain capable of exerting a probiotic effect on the dog intestinal microflora. A L. animalis strain was isolated from the faeces of 17 adult healthy dogs. The isolated strain was first studied in vitro by adding it to a canine faecal inoculum (at a final concentration of 6 Log CFU/mL) incubated in anaerobic serum bottles and syringes simulating the large intestine of dogs. Samples of fermentation fluid were collected at 0, 4, 8, and 24 hours for analysis (ammonia, SCFA, pH, lactobacilli, enterococci, coliforms, clostridia). Subsequently, the L. animalis strain was fed to nine dogs having lactobacilli counts lower than 4.5 Log CFU per g of faeces. The study indicated that the L. animalis strain was able to survive gastrointestinal passage and transiently colonize the dog intestine. Both in vitro and in vivo results showed that the L. animalis strain positively influenced the composition and metabolism of the intestinal microflora of dogs. The third trial investigated in vitro the effects of several non-digestible oligosaccharides (NDO) on dog intestinal microflora composition and metabolism. Substrates were fermented using a canine faecal inoculum incubated in anaerobic serum bottles and syringes. Substrates were added at a final concentration of 1 g/L (inulin, FOS, pectin, lactitol, gluconic acid) or 4 g/L (chicory). Samples of fermentation fluid were collected at 0, 6, and 24 hours for analysis (ammonia, SCFA, pH, lactobacilli, enterococci, coliforms). Gas production was measured throughout the 24 h of the study. Among the tested NDO, lactitol showed the best prebiotic properties: it reduced coliform and increased lactobacilli counts, enhanced microbial fermentation, and promoted the production of SCFA while decreasing BCFA.
All the substrates that were investigated showed one or more positive effects on dog faecal microflora metabolism or composition. Further studies (in particular in vivo studies with dogs) will be needed to confirm the prebiotic properties of lactitol and evaluate its optimal level of inclusion in the diet.
Abstract:
Given the commercial importance of gilthead sea bream (Sparus aurata) in Italy, the aim of the present study was to evaluate its quality in terms of nutritional, technological, sensory, and freshness aspects. Sea bream production is growing in the Mediterranean, and the evaluation of its quality concerns producers and consumers alike. The culture system greatly influences final product quality. In Italy most sea bream culture is carried out in cages, but there is also production in land-based facilities and in lagoons. In this study, differences in external appearance were pronounced. Different results were found for nutritional aspects, fatty acids, and mineral content. Some differences in the freshness indices were also found. Furthermore, organoleptic differences were described between culture systems.
Abstract:
Three finfish species frequently caught in the waters of the Gulf of Manfredonia (Apulia, Italy) were studied in order to determine how flesh composition (proximate, fatty acid, and macro- and micro-element contents) is affected by season. The species examined were European hake (Merluccius merluccius), chub mackerel (Scomber japonicus) and horse mackerel (Trachurus trachurus), which were analysed in the raw state in three catch seasons and after cooking in two catch seasons. More precisely, European hake and chub mackerel caught during winter, summer and fall were analysed in the raw state. The flesh composition of grilled European hake and chub mackerel was studied on fish caught in winter and fall. Horse mackerel from summer and winter catches were analysed in both the raw and the grilled state. Furthermore, an overall sensory profile was outlined for each species in two catch seasons and the corresponding spider-web diagrams were compared. On the whole, two hundred and eighty fish were analysed during this research project in order to obtain a nutritional profile of the three species. A total of one hundred and fifty specimens were used to create complete sensory profiles and to compare them among the species. The three finfish species proved to be quite interesting for their proximate, fatty acid, and macro- and micro-element contents. Nutritional and sensory changes occurred with the seasons for chub and horse mackerel only; a high variability of flesh composition seemed to characterise these two species. European hake confirmed its mild sensory profile and good nutritional characteristics, which were not affected by any seasonal effect.
Abstract:
Mycotoxins are contaminants of agricultural products both in the field and during storage, and they can enter the food chain through contaminated cereals and through foods (milk, meat, and eggs) obtained from animals fed mycotoxin-contaminated feeds. Mycotoxins are genotoxic carcinogens that cause health and economic problems. Ochratoxin A (OTA) and fumonisin B1 (FB1) were classified in 1993 by the International Agency for Research on Cancer as "possibly carcinogenic to humans" (group 2B). To control mycotoxin-induced damage, different strategies have been developed to reduce the growth of mycotoxigenic fungi as well as to decontaminate and/or detoxify mycotoxin-contaminated foods and animal feeds. The critical points targeted by these strategies are: prevention of mycotoxin contamination, detoxification of mycotoxins already present in food and feed, inhibition of mycotoxin absorption in the gastrointestinal tract, and reduction of mycotoxin-induced damage when absorption occurs. As indicated by FAO, a decontamination process must meet the following requirements to reduce the toxic and economic impact of mycotoxins: it must destroy, inactivate, or remove mycotoxins; it must not produce or leave toxic and/or carcinogenic/mutagenic residues in the final products or in food products obtained from animals fed decontaminated feed; it must be capable of destroying fungal spores and mycelium in order to avoid mycotoxin formation under favorable conditions; it should not adversely affect desirable physical and sensory properties of the feedstuff; and it has to be technically and economically feasible. One important approach to the prevention of mycotoxicosis in livestock is the addition to the diet of non-nutritional adsorbents that bind mycotoxins, preventing their absorption in the gastrointestinal tract. Activated carbons, hydrated sodium calcium aluminosilicate (HSCAS), zeolites, bentonites, and certain clays are the most studied adsorbents, and they possess a high affinity for mycotoxins. In recent years, there has been increasing interest in the hypothesis that the absorption of mycotoxins from consumed food can be inhibited by microorganisms in the gastrointestinal tract. Numerous investigators have shown that some dairy strains of LAB and bifidobacteria are able to bind aflatoxins effectively. There is also a strong need to prevent mycotoxin-induced damage once the toxin has been ingested. Nutritional approaches, such as supplementation with nutrients, food components, or additives with protective effects against mycotoxin toxicity, are attracting increasing interest. Since mycotoxins are known to produce damage by increasing oxidative stress, the protective properties of antioxidant substances have been extensively investigated. The purpose of the present study was to investigate, in vitro and in vivo, strategies to counteract the mycotoxin threat, particularly in swine husbandry. The Ussing chamber technique was applied in the present study, for the first time, to investigate in vitro the permeability of OTA and FB1 through rat intestinal mucosa. Results showed that OTA and FB1 were not absorbed through the rat small intestinal mucosa. Since in vivo absorption of both mycotoxins normally occurs, it is evident that under these experimental conditions Ussing diffusion chambers were not able to assess the intestinal permeability of OTA and FB1. A large number of LAB strains isolated from feces and different gastrointestinal tract regions of pigs and poultry were screened for their ability to remove OTA, FB1, and DON from bacterial medium.
The results of this in vitro study showed a low efficacy of the isolated LAB strains in removing OTA, FB1, and DON from bacterial medium. An in vivo trial in rats was performed to evaluate the effects of in-feed supplementation with a LAB strain, Pediococcus pentosaceus FBB61, in counteracting the toxic effects induced by exposure to OTA-contaminated diets. The study allows us to conclude that feed supplementation with P. pentosaceus FBB61 ameliorates the oxidative status of the liver and lowers OTA-induced oxidative damage in liver and kidney when the diet is contaminated by OTA. This feature of P. pentosaceus FBB61, together with its bactericidal activity against Gram-positive bacteria and its ability to modulate gut microflora balance in pigs, encourages additional in vivo experiments to better understand the potential role of P. pentosaceus FBB61 as a probiotic for farm animals and humans. In the present study, an in vivo trial on weaned piglets fed FB1 allows us to conclude that feeding 7.32 ppm of FB1 for 6 weeks did not impair growth performance. Deoxynivalenol (DON) contamination of feeds was evaluated in an in vivo trial on weaned piglets. The comparison between the growth parameters of piglets fed the DON-contaminated diet and those fed the contaminated diet supplemented with a commercial product did not reach significance, but piglet growth performance was numerically improved when the commercial product was added to the DON-contaminated diet. Further studies are needed to improve knowledge of the intestinal absorption of mycotoxins, the mechanisms for their detoxification in feeds and foods, and nutritional strategies to reduce mycotoxin-induced damage in animals and humans. A multifactorial approach acting on each of these steps could be a promising strategy to counteract mycotoxin damage.
Abstract:
The assessment of macroseismic intensity according to a formal procedure that is transparent, objective, and able to yield numerical values through rigorous choices and criteria represents a step towards, and a goal for, the treatment and use of macroseismic information. Macroseismic data can indeed have important applications in seismotectonic analyses and in seismic hazard assessment. This thesis has addressed the problem of formalizing intensity estimation, improving both theoretical and practical aspects through three fundamental steps developed in the MS-Excel and Matlab environments: i) the collection and archiving of the macroseismic dataset; ii) the association (membership function) between effects and intensity degrees of the macroseismic scale through the principles of fuzzy-set logic; iii) the application of rigorous and objective decision algorithms for the final intensity estimate. The whole procedure was applied to seven Italian earthquakes, exploiting various possibilities, including methodological ones, such as the construction of membership functions by combining the macroseismic information of several earthquakes: Monte Baldo (1876), Valle d'Illasi (1891), Marsica (1915), Santa Sofia (1918), Mugello (1919), Garfagnana (1920) and Irpinia (1930). The results show good statistical agreement with the intensities of a reference macroseismic catalogue, confirming the validity of the whole methodology. The derived intensities were then used for seismotectonic analyses in the areas of the studied earthquakes. Statistical analysis methods applied to intensity distributions (the geographical distribution of the assigned intensities) have proved in the past to be a powerful tool for seismotectonic analysis and characterization, determining the main parameters (epicentral location, length, width, orientation) of the possible seismogenic source. This thesis has improved some aspects of these analysis methodologies through specific applications developed in Matlab, which have also made it possible to estimate the uncertainties associated with the source parameters by means of statistical resampling techniques. A systematic analysis of the studied earthquakes was carried out by combining the various methods for estimating source parameters with the original intensity distributions and with those recomputed through the fuzzy decision procedures. The results made it possible to evaluate the characteristics of the possible sources and to formulate seismotectonic hypotheses that found some circumstantial support in geological and structural-geological data. Some events (1915, 1918, 1920) show strong stability of the computed parameters (epicentral location and geometry of the possible source), with small associated uncertainties. Other events (1891, 1919 and 1930) instead showed greater variability both in the epicentral location and in the geometry of the source boxes: for the first event this is probably related to the limited size of the intensity dataset, while for the others it is related to the possible multiplicity of seismogenic sources. The bootstrap analysis also highlighted, in some cases, possible asymmetries in the distributions of some parameters (e.g., the azimuth of the possible structure), which could suggest rupture mechanisms on several distinct faults.
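As an illustration of the statistical resampling step mentioned above, the following Python sketch bootstraps an intensity data set to obtain an uncertainty range for the azimuth of the possible source. The azimuth proxy used here (the principal direction of the intensity-weighted point cloud) is purely illustrative and is not the estimator used in the thesis; the percentile summary also ignores the 0/180-degree wrap-around of azimuths.

import numpy as np

rng = np.random.default_rng(42)

def source_azimuth(lon, lat, intensity):
    """Azimuth (degrees from north) of the long axis of the intensity distribution,
    taken as the principal direction of the intensity-weighted point cloud."""
    w = intensity / intensity.sum()
    x = lon - np.average(lon, weights=w)
    y = lat - np.average(lat, weights=w)
    cov = np.cov(np.vstack([x, y]), aweights=w)
    _, vecs = np.linalg.eigh(cov)
    vx, vy = vecs[:, -1]                        # eigenvector of the largest eigenvalue
    return np.degrees(np.arctan2(vx, vy)) % 180.0

def bootstrap_azimuth(lon, lat, intensity, n_boot=2000):
    """Resample intensity points with replacement; return the 16th, 50th, 84th percentiles."""
    n = lon.size
    estimates = np.empty(n_boot)
    for k in range(n_boot):
        idx = rng.integers(0, n, size=n)
        estimates[k] = source_azimuth(lon[idx], lat[idx], intensity[idx])
    return np.percentile(estimates, [16, 50, 84])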
Abstract:
Two analytical models are proposed to describe two different mechanisms of lava tube formation. A first model describes the development of a solid crust in the central region of the channel and the formation of a tube when the crust widens until it reaches the levées. A Newtonian rheology is assumed and the steady-state Navier-Stokes equation in a rectangular conduit is solved. A constant heat flux density assigned at the upper flow surface represents the combined effects of two thermal processes: radiation and convection into the atmosphere. Advective terms are also included by introducing velocity into the expression for temperature. The velocity is calculated as an average value over the channel width, so that lateral variations of temperature are neglected. As the upper flow surface cools, a solid layer develops, described as a plastic body with a resistance to shear deformation. If the applied shear stress exceeds this resistance the crust breaks; otherwise, solid fragments present at the flow surface can weld together, forming a continuous roof, as happens in the sidewall flow regions. Variations of channel width, ground slope, and effusion rate are analyzed, as parameters that strongly affect the shear stress values. Crust growth is favored when the channel widens, and tube formation is possible when the ground slope or the effusion rate decreases. The results are successfully compared with data obtained from the analysis of pictures of actual flows. The second model describes the formation of a stable, well-defined crust along both channel sides, its growth towards the center, and its welding to form the tube roof. The fluid motion is described as in the model above. The thermal budget takes into account conduction into the atmosphere, and advection is included by considering a velocity that depends on both depth and channel width. The solidified crust has a non-uniform thickness across the channel width. The stresses acting on the crust are calculated using the equations of an elastic thin plate pinned at its ends. The model allows the calculation of the distance at which the crust is thick enough to resist the drag of the underlying fluid and to sustain its own weight, so that the fluid level can drop below the tube roof. Viscosity and thermal conductivity have been experimentally investigated using a rotational viscometer. For samples from Mount Etna (2002), the following results were obtained: the fluid is Newtonian and the thermal conductivity is constant in a temperature range above the liquidus. At lower temperatures the fluid becomes non-homogeneous, and the experimental techniques used are not able to determine its properties, because measurements are not reproducible.
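A sketch of the governing relations assumed in this class of models is given below; the notation is generic and is not taken from the thesis. The first equation is the steady, gravity-driven Stokes balance for a Newtonian fluid in the rectangular conduit, the second is the prescribed heat-flux condition at the upper flow surface, and the third expresses the plastic-crust criterion (the crust remains unbroken while the applied shear stress stays below its yield strength):

\[
  \mu \left( \frac{\partial^{2} u}{\partial y^{2}} + \frac{\partial^{2} u}{\partial z^{2}} \right) = -\rho g \sin\alpha ,
  \qquad
  -k \left. \frac{\partial T}{\partial z} \right|_{\mathrm{surface}} = q ,
  \qquad
  \tau \;\le\; \tau_{y} ,
\]

where \(u\) is the along-slope velocity (zero on the conduit walls), \(\alpha\) the ground slope, \(q\) the assigned surface heat-flux density, and \(\tau_{y}\) the yield strength of the plastic crust.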
Abstract:
Subduction zones are the most favorable places for the generation of tsunamigenic earthquakes, where friction between the oceanic and continental plates causes strong seismicity. The topics and methodologies discussed in this thesis are focused on understanding the rupture process of the seismic sources of great earthquakes that generate tsunamis. Tsunamigenesis is controlled by several kinematic characteristics of the parent earthquake, such as the focal mechanism, the depth of the rupture, and the slip distribution over the fault area, as well as by the mechanical properties of the source zone. Each of these factors plays a fundamental role in tsunami generation. Therefore, inferring the source parameters of tsunamigenic earthquakes is crucial to understanding the generation of the resulting tsunami and thus to mitigating the risk along the coasts. The typical way to gather information about the source process is to invert the available geophysical data. Tsunami data, moreover, are useful for constraining the portion of the fault that extends offshore, generally close to the trench, which other kinds of data are not able to constrain. In this thesis I discuss the rupture process of some recent tsunamigenic events, as inferred by means of an inverse method. I first present the 2003 Tokachi-Oki (Japan) earthquake (Mw 8.1). In this study the slip distribution on the fault was inferred by inverting tsunami waveform, GPS, and bottom-pressure data. The joint inversion of tsunami and geodetic data provides a much better constraint on the slip distribution than the separate inversions of the single datasets. We then studied the earthquake that occurred in 2007 in southern Sumatra (Mw 8.4). By inverting several tsunami waveforms, both in the near and in the far field, we determined the slip distribution and the mean rupture velocity along the causative fault. Since the largest patch of slip was concentrated on the deepest part of the fault, this is the likely reason for the small tsunami waves that followed the earthquake, pointing out the crucial role that the depth of the rupture plays in controlling tsunamigenesis. Finally, we present a new rupture model for the great 2004 Sumatra earthquake (Mw 9.2). We performed a joint inversion of tsunami waveform, GPS, and satellite altimetry data to infer the slip distribution, the slip direction, and the rupture velocity on the fault. Furthermore, in this work we present a novel method to estimate, in a self-consistent way, the average rigidity of the source zone. The estimation of the source-zone rigidity is important since it may play a significant role in tsunami generation; in particular, for slow earthquakes a low rigidity value is sometimes necessary to explain how an earthquake with a relatively low seismic moment may generate a significant tsunami. This latter point may be relevant for explaining the mechanics of tsunami earthquakes, one of the open issues in present-day seismology. The investigation of these tsunamigenic earthquakes has underlined the importance of using a joint inversion of different geophysical data sets to determine the rupture characteristics.
The results shown here have important implications for the implementation of new tsunami warning systems (particularly in the near field), for the improvement of the current ones, and for the planning of inundation maps for tsunami-hazard assessment along coastal areas.
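For reference, the role of rigidity discussed above follows directly from the definition of the seismic moment:

\[
  M_0 = \mu\, A\, \bar{u}
  \qquad\Longrightarrow\qquad
  \bar{u} = \frac{M_0}{\mu\, A} ,
\]

so that, for a given seismic moment \(M_0\) and fault area \(A\), a lower rigidity \(\mu\) implies a larger average slip \(\bar{u}\) and hence larger seafloor displacement, which is why a low rigidity in the shallow source zone can help explain how relatively low-moment (slow) earthquakes generate disproportionately large tsunamis.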