853 results for Small Area Estimation
Abstract:
The thermal limits of individual animals were originally proposed as a link between animal physiology and thermal ecology. Although this link is valid in theory, the evaluation of physiological tolerances involves some problems that are the focus of this study. One rationale was that heating rates should influence upper critical limits, so that ecological thermal limits need to take experimental heating rates into account. In addition, if thermal limits are not surpassed in experiments, subsequent tests of the same individual should yield similar results or produce evidence of hardening. Finally, several uncontrolled variables, such as time under experimental conditions and handling procedures, may affect results. To analyze these issues we conducted an integrative study of upper critical temperatures in a single species, the ant Atta sexdens rubropilosa, an animal model providing large numbers of individuals of diverse sizes but similar genetic makeup. Our specific aims were to test 1) the influence of heating rates on the experimental evaluation of upper critical temperature, 2) the assumptions of absence of physical damage and of reproducibility, and 3) sources of variance often overlooked in the thermal-limits literature; and 4) to introduce some experimental approaches that may help researchers separate physiological from methodological issues. The upper thermal limits were influenced by both heating rates and body mass. In the latter case, the effect was physiological rather than methodological. The critical temperature decreased during subsequent tests performed on the same individual ants, even one week after the initial test. Accordingly, upper thermal limits may have been overestimated by our (and typical) protocols. Heating rates, body mass, procedures independent of temperature, and other variables may affect the estimation of upper critical temperatures.
Therefore, based on our data, we offer suggestions to enhance the quality of measurements, and recommendations for authors aiming to compile and analyze databases from the literature.
Abstract:
Most studies measuring plant transpiration, especially in woody fruit crops, rely on methods that supply heat to the trunk. This study aimed to calibrate the Thermal Dissipation Probe (TDP) method to estimate transpiration, to study the effects of natural thermal gradients, and to determine the relation between outside stem diameter and xylem area in young 'Valencia' orange plants. TDPs were installed in 40 fifteen-month-old orange plants grown in 500 L boxes in a greenhouse. A correction of the natural thermal differences (DTN), estimated from two unheated probes, was tested. The area of the conductive section was related to the outside diameter of the stem by means of polynomial regression. The equation for estimating sap flow was calibrated against lysimeter measurements of a representative plant. The angular coefficient of the sap-flow equation was adjusted by minimizing the absolute deviation between the estimated sap flow and the daily transpiration measured by lysimeter. Based on these results, it was concluded that the TDP method, with the adjusted calibration and the DTN correction, was effective in assessing transpiration.
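The calibration step described above (adjusting the angular coefficient so that the absolute deviation against the lysimeter measurements is minimized) can be sketched as a simple grid search. This is a minimal illustration, not the study's procedure or data: the flow-index and transpiration values below are hypothetical.

```python
# Sketch of the calibration: choose the slope (angular coefficient) k of
# the sap-flow equation that minimizes the total absolute deviation
# between TDP-derived flow (k * index) and lysimeter transpiration.

def calibrate_slope(sap_flow_index, lysimeter_transp, candidates):
    """Return (k, deviation) minimizing sum |k * index_i - transp_i|."""
    best_k, best_dev = None, float("inf")
    for k in candidates:
        dev = sum(abs(k * x - t) for x, t in zip(sap_flow_index, lysimeter_transp))
        if dev < best_dev:
            best_k, best_dev = k, dev
    return best_k, best_dev

# Hypothetical daily values: dimensionless TDP flow index vs. litres/day.
index = [0.8, 1.1, 1.4, 0.9, 1.3]
transp = [1.6, 2.3, 2.7, 1.9, 2.6]
k, dev = calibrate_slope(index, transp, [i / 100 for i in range(100, 301)])
print(round(k, 2), round(dev, 2))  # → 2.0 0.3
```

Minimizing the absolute (L1) rather than squared deviation makes the fitted coefficient less sensitive to occasional outlier days in the lysimeter record.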
Abstract:
Despite the great importance of soybean in Brazil, there have been few applications of soybean crop modeling under Brazilian conditions. Thus, the objective of this study was to use modified crop models to estimate the depleted and potential soybean yields in Brazil. The climatic variables used in the modified crop-model simulations were temperature, insolation and rainfall. The data set was taken from 33 counties (28 in São Paulo state and 5 in neighboring states). Among the modifications, corrections to the estimation of soybean leaf area were proposed, including corrections for temperature, shading, senescence, CO2 and biomass partitioning; the methods of inputting the climatic variables into the model simulations were also reconsidered. Depleted yields were estimated through a water balance, from which the depletion coefficient was derived. It can be concluded that the adapted soybean crop growth model can be used to predict depleted and potential soybean yields, and also to indicate better locations and periods for cultivation.
Abstract:
The measurement of mesozooplankton biomass in the ocean requires analytical procedures that destroy the samples. Alternatively, methods that estimate biomass from optical systems and appropriate conversion factors could be a compromise between the accuracy of analytical methods and the need to preserve the samples for further taxonomic studies. Converting the digitized body area of an organism, as recorded by an optical counter or a camera, into individual biomass has been suggested as a suitable method to estimate total biomass. In this study, crustacean mesozooplankton from subtropical waters were analyzed, and individual dry weight and body area were compared. The relationships obtained agreed with biomass measurements from a previous study in Antarctic waters. Gelatinous mesozooplankton from subtropical and Antarctic waters were also sampled and processed for body area and biomass. As expected, differences between crustacean and gelatinous plankton were highly significant: transparent gelatinous organisms have a lower dry weight per unit area. Therefore, to estimate biomass from digitized images, pattern recognition discerning, at least, between crustaceans and gelatinous forms is required.
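Area-to-biomass conversions of this kind are commonly expressed as a power law, W = a * A^b, fitted as a straight line in log-log space. The sketch below illustrates that fitting procedure only; the coefficient, exponent and data are invented, not the study's measurements.

```python
import math

# Fit W = a * A^b by ordinary least squares on log(W) = log(a) + b*log(A).

def fit_power_law(areas, weights):
    """Return (a, b) of the power law relating body area to dry weight."""
    xs = [math.log(x) for x in areas]
    ys = [math.log(y) for y in weights]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic, noise-free data generated from W = 0.05 * A^1.5 (mg, mm^2):
areas = [0.5, 1.0, 2.0, 4.0]
weights = [0.05 * A ** 1.5 for A in areas]
a, b = fit_power_law(areas, weights)
print(round(a, 3), round(b, 3))  # → 0.05 1.5
```

With real measurements the crustacean and gelatinous groups would be fitted separately, reflecting the significant difference in dry weight per unit area reported above.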
Abstract:
In this work we propose a new approach for preliminary epidemiological studies of Standardized Mortality Ratios (SMRs) collected over many spatial regions. A preliminary study of SMRs aims to formulate hypotheses to be investigated via individual-level epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from observed disease counts, with expected disease counts calculated by means of reference population disease rates, an SMR is derived in each area as the MLE under a Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because the population underlying the area is small or because the disease under study is rare. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classical and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method focused on multiple-testing control, without however abandoning the preliminary-study perspective that an analysis of SMR indicators is meant to serve. We implement control of the False Discovery Rate (FDR), a quantity widely used to address multiple-comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous.
The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a hierarchical fully Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating a single test (i.e. a test in a single area) by means of all observations in the map under study, rather than just the single observation. This improves the power of tests in small areas and addresses more appropriately the spatial-correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) in each area. An estimate of the expected FDR conditional on the data (the estimated FDR) can be calculated for any set of b_i's relative to areas declared at high risk (where the null hypothesis is rejected) by averaging the b_i's themselves. The estimated FDR provides an easy decision rule for selecting high-risk areas: select as many areas as possible such that the estimated FDR does not exceed a prefixed value; we call these estimated-FDR-based decision (or selection) rules. The sensitivity and specificity of such rules depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation of the FDR produces a loss of specificity. Moreover, our model has the interesting feature of still being able to provide estimates of the relative risk values, as in the Besag, York and Mollié model (1991).
A simulation study was set up to evaluate the model's performance in terms of accuracy of FDR estimation, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true, and the risk level in the latter areas. In summarizing the simulation results we always consider FDR estimation in sets constituted by all b_i's below a threshold t. We show graphs of the estimated FDR and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. Varying the threshold, we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (by the closeness between the estimated and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the estimated FDR, we can check the sensitivity and specificity of the corresponding estimated-FDR-based decision rules. To investigate the over-smoothing of relative-risk estimates, we compare box-plots of such estimates in high-risk areas (known by simulation) obtained by both our model and the classic Besag, York and Mollié model. All these summary tools are worked out for all simulated scenarios (54 in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in the scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of the estimated-FDR-based decision rules is generally low, but specificity is high. In such scenarios, selection rules based on an estimated FDR of 0.05 or 0.10 can be suggested.
In cases where the number of true alternative hypotheses (the number of truly high-risk areas) is small, FDR values up to 0.15 are also well estimated, and a decision rule based on an estimated FDR of 0.15 gains power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels, the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of a rule based on an estimated FDR of 0.05. In such scenarios, decision rules based on an estimated FDR of 0.05 or, even worse, 0.10 cannot be suggested, because the true FDR is actually much higher. As regards relative-risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative-risk values and the FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
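The estimated-FDR-based selection rule described above has a simple mechanical form: sort areas by their posterior null probabilities b_i, and keep adding areas (from the most to the least suspicious) as long as the average of the selected b_i's, which estimates the FDR of the selected set, stays within the target. The following is a minimal sketch with hypothetical b_i values, not output of the actual MCMC model.

```python
# Estimated-FDR-based selection of high-risk areas: b[i] is the posterior
# probability of the null hypothesis (absence of risk) in area i. The
# estimated FDR of a selected set is the mean of its b values; since the
# b's are added in ascending order, this running mean is non-decreasing,
# so we can stop at the first violation of the target.

def fdr_selection(b, target):
    """Return indices of areas declared high-risk under the FDR rule."""
    order = sorted(range(len(b)), key=lambda i: b[i])
    selected, running_sum = [], 0.0
    for i in order:
        running_sum += b[i]
        if running_sum / (len(selected) + 1) <= target:
            selected.append(i)
        else:
            break
    return selected

# Hypothetical posterior null probabilities for seven areas:
b = [0.01, 0.02, 0.60, 0.04, 0.30, 0.90, 0.08]
high_risk = fdr_selection(b, target=0.05)
print(high_risk)  # → [0, 1, 3, 6]
```

Note how area 6 (b = 0.08 > 0.05) is still selected: what is controlled is the average null probability of the whole selected set, not each area individually, which is exactly what distinguishes FDR control from per-test thresholding.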
Abstract:
Subduction zones, where friction between oceanic and continental plates causes strong seismicity, are the most likely places for tsunamigenic earthquakes to occur. The topics and methodologies discussed in this thesis focus on understanding the rupture process of the seismic sources of great tsunamigenic earthquakes. Tsunamigenesis is controlled by several kinematic characteristics of the parent earthquake, such as the focal mechanism, the depth of the rupture and the slip distribution along the fault area, as well as by the mechanical properties of the source zone. Each of these factors plays a fundamental role in tsunami generation. Therefore, inferring the source parameters of tsunamigenic earthquakes is crucial to understanding the generation of the consequent tsunami and thus to mitigating the risk along the coasts. The typical way to gather information on the source process is to invert the available geophysical data. Tsunami data, moreover, are useful for constraining the portion of the fault area that extends offshore, generally close to the trench, which other kinds of data are unable to constrain. In this thesis I discuss the rupture process of some recent tsunamigenic events, as inferred by means of an inverse method. I present the 2003 Tokachi-Oki (Japan) earthquake (Mw 8.1), for which the slip distribution on the fault was inferred by inverting tsunami waveform, GPS and bottom-pressure data. The joint inversion of tsunami and geodetic data constrained the slip distribution on the fault much better than the separate inversions of the single datasets. We then studied the earthquake that occurred in 2007 in southern Sumatra (Mw 8.4). By inverting several tsunami waveforms, in both the near and the far field, we determined the slip distribution and the mean rupture velocity along the causative fault.
Since the largest patch of slip was concentrated in the deepest part of the fault, this is the likely reason for the small tsunami waves that followed the earthquake, pointing out how crucial a role the depth of the rupture plays in controlling tsunamigenesis. Finally, we present a new rupture model for the great 2004 Sumatra earthquake (Mw 9.2). We performed a joint inversion of tsunami waveform, GPS and satellite altimetry data to infer the slip distribution, the slip direction and the rupture velocity on the fault. Furthermore, in this work we present a novel method to estimate, in a self-consistent way, the average rigidity of the source zone. Estimating the source-zone rigidity is important since it may play a significant role in tsunami generation; in particular, for slow earthquakes, a low rigidity value is sometimes necessary to explain how an earthquake with a relatively low seismic moment may generate a significant tsunami. This latter point may be relevant for explaining the mechanics of tsunami earthquakes, one of the open issues in present-day seismology. The investigation of these tsunamigenic earthquakes has underlined the importance of jointly inverting different geophysical data to determine the rupture characteristics. The results shown here have important implications for the implementation of new tsunami warning systems (particularly in the near field), for the improvement of the current ones, and for the planning of inundation maps for tsunami-hazard assessment along coastal areas.
Abstract:
Prehension is an act of coordinated reaching and grasping. The reaching component brings the hand to the object to be grasped (transport phase); the grasping component refers to the shaping of the hand according to the object's features (grasping phase) (Jeannerod, 1981). Reaching and grasping involve different muscles (proximal and distal, respectively) and are controlled by different parietofrontal circuits (Jeannerod et al., 1995): a medial circuit, involving areas of the superior parietal lobule and dorsal premotor area 6 (PMd) (the dorsomedial visual stream), is mainly concerned with reaching; a lateral circuit, involving the inferior parietal lobule and ventral premotor area 6 (PMv) (the dorsolateral visual stream), with grasping. Area V6A is located in the caudalmost part of the superior parietal lobule, so it belongs to the dorsomedial visual stream; it contains neurons sensitive to visual stimuli (Galletti et al. 1993, 1996, 1999) as well as cells sensitive to the direction of gaze (Galletti et al. 1995) and cells showing saccade-related activity (Nakamura et al. 1999; Kutz et al. 2003). Area V6A also contains arm-reaching neurons, likely involved in controlling the direction of the arm during movements towards objects in peripersonal space (Galletti et al. 1997; Fattori et al. 2001). The present results confirm this finding and demonstrate that during reach-to-grasp movements V6A neurons are also modulated by the orientation of the wrist. Experiments were approved by the Bioethical Committee of the University of Bologna and were performed in accordance with national laws on the care and use of laboratory animals and with the European Communities Council Directive of 24 November 1986 (86/609/EEC), recently revised by the Council of Europe guidelines (Appendix A of Convention ETS 123). Experiments were performed in two awake Macaca fascicularis.
Each monkey was trained to sit in a primate chair with the head restrained and to perform reaching and grasping arm movements in complete darkness while gazing at a small fixation point. The object to be grasped was a handle that could take different orientations. We recorded neural activity from 163 neurons; 116/163 (71%) neurons were modulated by the reach-to-grasp task during the execution of the forward movement toward the target (epoch MOV), 111/163 (68%) during the pulling of the handle (epoch HOLD), and 102/163 during the execution of the backward movement (epoch M2) (t-test, p ≤ 0.05). About 45% of the tested cells turned out to be sensitive to the orientation of the handle (one-way ANOVA, p ≤ 0.05). To study how the distal components of the movement, such as hand preshaping during reaching for the handle, could influence the neuronal discharge, we compared neuronal activity during reaching movements towards the same spatial location in reach-to-point and reach-to-grasp tasks. Both tasks required proximal arm movements; only the reach-to-grasp task required distal movements to orient the wrist and shape the hand to grasp the handle. In all, 56% of V6A cells showed significant differences in neural discharge (one-way ANOVA, p ≤ 0.05) between the reach-to-point and reach-to-grasp tasks during MOV, 54% during HOLD, and 52% during M2. These data show that reaching and grasping are processed by the same population of neurons, providing evidence that the coordination of reaching and grasping takes place much earlier than previously thought, i.e., in the parieto-occipital cortex. The data reported here agree with the results of lesions to the medial posterior parietal cortex in both monkeys and humans, and with recent imaging data in humans, all of which indicate a functional coupling in the control of reaching and grasping by the medial parietofrontal circuit.
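The modulation analyses above rely on one-way ANOVA across conditions. As a minimal illustration (with made-up spike counts, not the recorded data), the F statistic can be computed by hand; in practice the p-value would then be read from an F distribution with the corresponding degrees of freedom (e.g. via scipy.stats).

```python
# One-way ANOVA F statistic: ratio of between-group to within-group
# mean squares. Groups here stand in for firing rates under three
# hypothetical handle orientations.

def one_way_anova_F(groups):
    """Return F = (SS_between/df_between) / (SS_within/df_within)."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical spike counts for three handle orientations:
F = one_way_anova_F([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
print(F)  # → 3.0
```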
Abstract:
This work is a detailed study of hydrodynamic processes in a defined area: the littoral in front of the Venice Lagoon and its inlets, which are complex morphological areas of interconnection. A finite element hydrodynamic model of the Venice Lagoon and the Adriatic Sea has been developed in order to study the coastal current patterns and the exchanges at the inlets of the Venice Lagoon. This is the first work in this area that tries to model the interaction dynamics by running a model for the lagoon and the Adriatic Sea together. First, the barotropic processes near the inlets of the Venice Lagoon have been studied. Data from more than ten tide gauges distributed over the Adriatic Sea have been used in the calibration of the simulated water levels. To validate the model results, flux data measured by ADCP probes installed inside the Lido and Malamocco inlets have been used, and the exchanges through the three inlets of the Venice Lagoon have been analyzed. The comparison between modelled and measured fluxes at the inlets demonstrated the ability of the model to reproduce both tide- and wind-induced water exchanges between the sea and the lagoon. As a second step, the small-scale processes around the inlets that connect the Venice Lagoon with the Northern Adriatic Sea have been investigated by means of 3D simulations. Maps of vorticity have been produced, considering the influence of tidal flows and wind stress in the area. A sensitivity analysis has been carried out to assess the importance of advection and of the baroclinic pressure gradients in the development of the vortical processes seen along the littoral close to the inlets. Finally, a comparison with real measurements, namely surface velocity data from HF radar near the Venice inlets, has been performed, which allows for a better understanding of the processes and their seasonal dynamics. The results outline the predominance of wind and tidal forcing in the coastal area.
Wind forcing acts mainly on the mean coastal current, inducing its detachment offshore during Sirocco events and an increase of the littoral currents during Bora events. The Bora action is more homogeneous over the whole coastal area, whereas the Sirocco has its strongest impact in the south, near the Chioggia inlet. Tidal forcing at the inlets is mainly barotropic. The sensitivity analysis shows that advection is the main physical process responsible for the persistent vortical structures present along the littoral between the Venice Lagoon inlets. The comparison with HF radar measurements not only permitted a validation of the model results, but also a description of different patterns in specific periods of the year. The success of the 2D and 3D simulations in reproducing the sea surface elevation (SSE) inside and outside the Venice Lagoon, the tidal flow through the lagoon inlets, and the small-scale phenomena occurring along the littoral indicates that the finite element approach is the most suitable tool for the investigation of coastal processes. For the first time, as shown by the flux modeling, the physical processes that drive the interaction between the two basins were reproduced.
Abstract:
A study of maar-diatreme volcanoes has been performed by inversion of gravity and magnetic data. The geophysical inverse problem has been solved by means of the damped nonlinear least-squares method. To ensure stability and convergence of the solution of the inverse problem, a mathematical tool consisting of data weighting and model scaling has been worked out. Theoretical gravity and magnetic modeling of maar-diatreme volcanoes has been conducted in order to obtain information that can be used for a simple, rough qualitative and/or quantitative interpretation. This information also serves as a priori information for designing models for the inversion and/or for assisting the interpretation of the inversion results. The results of the theoretical modeling have been used to roughly estimate the heights and the dip angles of the walls of eight Eifel maar-diatremes, each taken as a whole. Inverse modeling has been conducted for the Schönfeld Maar (magnetics) and the Hausten-Morswiesen Maar (gravity and magnetics). The geometrical parameters of these maars, as well as the density and magnetic properties of the rocks filling them, have been estimated. For a reliable interpretation of the inversion results, besides the knowledge from theoretical modeling, other tools such as field transformations and spectral analysis were used for complementary information. Geologic models, based on the synthesis of the respective interpretation results, are presented for the two maars mentioned above. The results give more insight into the genesis, physics and posteruptive development of maar-diatreme volcanoes. A classification of maar-diatreme volcanoes into three main types has been elaborated. Relatively high magnetic anomalies are indicative of scoria cones embedded within maar-diatremes, provided they are not caused by a strong remanent component of the magnetization.
Smaller (weaker) secondary gravity and magnetic anomalies superimposed on the main anomaly of a maar-diatreme, especially in the boundary areas, are indicative of subsidence processes, which probably occurred in the late sedimentation phase of the posteruptive development. Contrary to postulates referring to kimberlite pipes, there is no general systematic relation between diameter and height, nor between the geophysical anomaly and the dimensions of the maar-diatreme volcanoes. Although both maar-diatreme volcanoes and kimberlite pipes are products of phreatomagmatism, they probably formed in different thermodynamic and hydrogeological environments. In the case of kimberlite pipes, large amounts of magma and groundwater, certainly supplied by deep and large reservoirs, interacted under high pressure and temperature conditions. This led to a long-lasting phreatomagmatic process and hence to the formation of large structures. Concerning maar-diatreme and tuff-ring-diatreme volcanoes, the phreatomagmatic process takes place through an interaction between magma from small and shallow magma chambers (probably segregated magmas) and small amounts of near-surface groundwater, under low pressure and temperature conditions. This leads to eruptions of shorter duration and consequently to structures of smaller size in comparison with kimberlite pipes. Nevertheless, the results show that the diameter-to-height ratio for 50% of the studied maar-diatremes is around 1, while the dip angle of the diatreme walls is similar to that of kimberlite pipes and lies between 70 and 85°. Note that these numerical characteristics, especially the dip angle, hold for the maars whose diatremes, as estimated by modeling, have the shape of a truncated cone. This indicates that the diatreme cannot be completely resolved by inversion.
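The damping idea behind the damped least-squares inversion mentioned above can be illustrated on a toy problem. At each iteration of such a scheme, one solves normal equations of the form (J^T J + lam^2 I) x = J^T r, where the damping parameter lam stabilizes the step at the price of shrinking it. The Jacobian and residuals below are invented for illustration; the real inversion also applies data weighting and model scaling, which are not shown here.

```python
# Damped least squares on a tiny 2-parameter linear toy problem:
# solve (J^T J + lam^2 * I) x = J^T r by Cramer's rule (2x2 system).

def damped_lsq(J, r, lam):
    """Return the damped least-squares solution for a 2-parameter model."""
    m = len(J)
    # Normal-equation matrix A = J^T J + lam^2 * I and RHS b = J^T r:
    A = [[sum(J[k][i] * J[k][j] for k in range(m))
          + (lam ** 2 if i == j else 0.0) for j in range(2)] for i in range(2)]
    b = [sum(J[k][i] * r[k] for k in range(m)) for i in range(2)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

J = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy Jacobian (3 data, 2 params)
r = [1.0, 2.0, 3.0]                        # toy residual vector
x0 = damped_lsq(J, r, lam=0.0)    # undamped: ordinary least squares
x1 = damped_lsq(J, r, lam=10.0)   # heavy damping shrinks the step
print(x0, [round(v, 3) for v in x1])
```

Increasing lam trades resolution for stability, which is why the abstract pairs damping with data weighting and model scaling to keep the inversion well conditioned.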
Abstract:
The objective of this thesis work is the refined estimation of source parameters. To this purpose we used two different approaches, one in the frequency domain and the other in the time domain. In the frequency domain, we analyzed the P- and S-wave displacement spectra to estimate the spectral parameters, namely the corner frequencies and the low-frequency spectral amplitudes. We used a parametric modeling approach combined with a multi-step, non-linear inversion strategy that includes corrections for attenuation and site effects. The iterative multi-step procedure was applied to about 700 microearthquakes in the moment range 10^11 to 10^14 N·m, recorded by the dense, wide-dynamic-range seismic networks operating in the Southern Apennines (Italy). The analysis of source parameters is often complicated when we are not able to model the propagation accurately. In this case the empirical Green's function approach is a very useful tool to study seismic source properties: Empirical Green's Functions (EGFs) allow the contribution of propagation and site effects to the signal to be represented without using approximate velocity models. An EGF is a recorded three-component set of time histories of a small earthquake whose source mechanism and propagation path are similar to those of the master event. Thus, in the time domain, the deconvolution method of Vallée (2004) was applied to retrieve the relative source time functions (RSTFs) and to accurately estimate source size and rupture velocity. This technique was applied to 1) a large event, the Mw 6.3 2009 L'Aquila mainshock (Central Italy); 2) moderate events, a cluster of earthquakes of the 2009 L'Aquila sequence with moment magnitudes between 3 and 5.6; and 3) a small event, the Mw 2.9 Laviano mainshock (Southern Italy).
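Corner frequencies and low-frequency spectral amplitudes of the kind estimated above are commonly obtained by fitting an omega-square (Brune-type) source model, S(f) = Omega0 / (1 + (f/fc)^2), to the displacement spectrum. The sketch below fits synthetic, noise-free data by grid search; the actual multi-step procedure in the study additionally corrects for attenuation and site effects, which are omitted here.

```python
# Grid-search fit of the omega-square model S(f) = Omega0 / (1 + (f/fc)^2):
# for each trial corner frequency fc, the optimal Omega0 follows by linear
# least squares, since the model is linear in Omega0.

def fit_brune(freqs, spec, fc_grid):
    """Return (Omega0, fc) minimizing the squared misfit to spec."""
    best = (None, None, float("inf"))
    for fc in fc_grid:
        shape = [1.0 / (1.0 + (f / fc) ** 2) for f in freqs]
        omega0 = sum(s * h for s, h in zip(spec, shape)) \
                 / sum(h * h for h in shape)
        misfit = sum((s - omega0 * h) ** 2 for s, h in zip(spec, shape))
        if misfit < best[2]:
            best = (omega0, fc, misfit)
    return best[0], best[1]

freqs = [0.5 * k for k in range(1, 41)]            # 0.5-20 Hz
true_omega0, true_fc = 2.0e-6, 5.0                 # synthetic source values
spec = [true_omega0 / (1 + (f / true_fc) ** 2) for f in freqs]
omega0, fc = fit_brune(freqs, spec, fc_grid=[0.5 * k for k in range(1, 41)])
print(omega0, fc)  # recovers the true parameters on noise-free data
```

On real spectra, Omega0 scales with the seismic moment while fc constrains the source radius, which is why these two parameters are the targets of the frequency-domain analysis.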
Abstract:
For most individuals, terrestrial radioactivity is the major contributor to the total dose, and it is mostly due to the radionuclides 238U, 232Th and 40K. Indoor radioactivity in particular is principally due to 222Rn, a radioactive noble gas in the decay chain of 238U and the second leading cause of lung cancer after cigarette smoking. The Vulsini Volcanic District is a well-known Quaternary volcanic area located between northern Latium and southern Tuscany (Central Italy). It is characterized by a high natural radiation background resulting from the high concentrations of 238U, 232Th and 40K in the volcanic products. In this context, subduction-related metasomatic enrichment of incompatible elements in the mantle source, coupled with magma differentiation within the upper crust, has given rise to U-, Th- and K-enriched melts. Almost every ancient village and town located in this part of Italy was built with volcanic rocks belonging to the Vulsini Volcanic District. The radiological risk of living in this area has been estimated considering separately: a. the risk associated with buildings made of volcanic products and built on volcanic rock substrates; b. the risk associated with soil characteristics. The former was evaluated both using direct indoor 222Rn measurements and simulations of "standard rooms" built with the tuffs and lavas of the Vulsini Volcanic District investigated in this work. The latter was carried out using in situ measurements of 222Rn activity in soil gases. A radon risk map for the village of Bolsena was developed using soil radon measurements integrated with geological information. Data on airborne radioactivity in ambient aerosol at two elevated stations in Emilia Romagna (Northern Italy) under the influence of the Fukushima plume were collected, effective doses were calculated, and an extensive comparison between doses associated with artificial and natural sources in different areas is described and discussed.
Abstract:
In den letzten drei Jahrzehnten sind Fernerkundung und GIS in den Geowissenschaften zunehmend wichtiger geworden, um die konventionellen Methoden von Datensammlung und zur Herstellung von Landkarten zu verbessern. Die vorliegende Arbeit befasst sich mit der Anwendung von Fernerkundung und geographischen Informationssystemen (GIS) für geomorphologische Untersuchungen. Durch die Kombination beider Techniken ist es vor allem möglich geworden, geomorphologische Formen im Überblick und dennoch detailliert zu erfassen. Als Grundlagen werden in dieser Arbeit topographische und geologische Karten, Satellitenbilder und Klimadaten benutzt. Die Arbeit besteht aus 6 Kapiteln. Das erste Kapitel gibt einen allgemeinen Überblick über den Untersuchungsraum. Dieser umfasst folgende morphologische Einheiten, klimatischen Verhältnisse, insbesondere die Ariditätsindizes der Küsten- und Gebirgslandschaft sowie das Siedlungsmuster beschrieben. Kapitel 2 befasst sich mit der regionalen Geologie und Stratigraphie des Untersuchungsraumes. Es wird versucht, die Hauptformationen mit Hilfe von ETM-Satellitenbildern zu identifizieren. Angewandt werden hierzu folgende Methoden: Colour Band Composite, Image Rationing und die sog. überwachte Klassifikation. Kapitel 3 enthält eine Beschreibung der strukturell bedingten Oberflächenformen, um die Wechselwirkung zwischen Tektonik und geomorphologischen Prozessen aufzuklären. Es geht es um die vielfältigen Methoden, zum Beispiel das sog. Image Processing, um die im Gebirgskörper vorhandenen Lineamente einwandfrei zu deuten. Spezielle Filtermethoden werden angewandt, um die wichtigsten Lineamente zu kartieren. Kapitel 4 stellt den Versuch dar, mit Hilfe von aufbereiteten SRTM-Satellitenbildern eine automatisierte Erfassung des Gewässernetzes. Es wird ausführlich diskutiert, inwieweit bei diesen Arbeitsschritten die Qualität kleinmaßstäbiger SRTM-Satellitenbilder mit großmaßstäbigen topographischen Karten vergleichbar ist. 
Furthermore, hydrological parameters are recorded through a qualitative and quantitative analysis of the discharge regime of individual wadis. The origin of the drainage systems is interpreted on the basis of geomorphological and geological evidence. Chapter 5 deals with the assessment of the hazard posed by episodic wadi floods. The probability of their annual occurrence, and of the occurrence of strong floods at intervals of several years, is traced back to 1921 in a historical review. The importance of rain-bearing low-pressure systems that develop over the Red Sea and can generate runoff is investigated with the help of the IDW (Inverse Distance Weighted) method. Further rain-bringing weather situations are examined using Meteosat infrared images. Particular attention is paid to the period 1990-1997, during which heavy rainfall events triggered wadi floods. Flood events and flood levels are determined from hydrographic data (gauge measurements). Land use and settlement structure in the catchment of a wadi are also taken into account. Chapter 6 deals with the different coastal forms on the western side of the Red Sea, for example erosional forms, constructional forms and submerged forms. The final part addresses the stratigraphy and chronology of submarine terraces on coral reefs, as well as a comparison with other such terraces on the Egyptian Red Sea coast west and east of the Sinai Peninsula.
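The IDW method used for the rainfall analysis interpolates a value at an unsampled location as a distance-weighted mean of the surrounding stations. A minimal sketch, with hypothetical gauge positions and an arbitrary power parameter:

```python
def idw(x, y, samples, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from (xi, yi, zi) samples."""
    num = den = 0.0
    for xi, yi, zi in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:                  # exactly on a sample point
            return zi
        w = d2 ** (-power / 2.0)       # weight = 1 / distance**power
        num += w * zi
        den += w
    return num / den

# hypothetical rain gauges: 0 mm and 10 mm at two stations
gauges = [(0.0, 0.0, 0.0), (2.0, 0.0, 10.0)]
print(idw(1.0, 0.0, gauges))   # midpoint -> 5.0
```

Larger power parameters make the estimate more local; power = 2 is the common default for sparse desert rainfall networks.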
Resumo:
Arid regions are dominated, to a much larger degree than humid regions, by major catastrophic events. Although most of Egypt lies within the great hot desert belt, it experiences, especially in the north, torrential rainfall that causes flash floods across the Sinai Peninsula. Flash floods in hot deserts are characterized by high velocity and short duration with a sharp discharge peak. Floods may carry large sediment loads that threaten fields and settlements in the wadis, and even the people living there. The extreme spottiness of rare heavy rainfall, well known to desert people everywhere, precludes any efficient forecasting; as long as the available data still reflect pre-satellite methods, the chances of developing a flood warning system for the desert seem remote. The relatively short flood-to-peak interval characteristic of desert floods presents an additional impediment to the efficient use of warning systems. The present thesis contains an introduction and five chapters. Chapter one describes the physical setting of the study area, covering the geological setting, such as the outcrop lithology and the deposits. The alluvial deposits of Wadi Moreikh were analyzed using OSL dating to establish their depositional ages and the palaeoclimatic conditions. The chapter also covers the stratigraphy and the structural geology, including the main faults and folds. In addition, it presents the present-day climatic conditions such as temperature, humidity, wind and evaporation, as well as the soil types and the natural vegetation cover of the study area, mapped by unsupervised classification of ETM+ images. Chapter two presents the morphometric analysis of the main basins and their drainage networks in the study area. It is divided into three parts: the first part covers the morphometric analysis of the drainage networks, which were extracted from two main sources, topographic maps and DEM data.
Basins and drainage networks are major factors influencing flash floods; most of the elements affecting the network were studied, such as stream order, bifurcation ratio, stream lengths, stream frequency, drainage density, and drainage patterns. The second part of this chapter presents the morphometric analysis of the basins, covering area, dimensions, shape and surface, while the third part covers the morphometric analysis of the alluvial fans which form most of the El-Qaá plain. Chapter three addresses surface runoff through an analysis of rainfall and losses. The main subject of this chapter is rainfall, studied in detail as the primary driver of runoff; all rainfall characteristics are considered, such as rainfall type, distribution, intensity, duration, frequency, and the relationship between rainfall and runoff. The second part of the chapter concerns the estimation of water losses by evaporation and infiltration, which together are the main losses directly affecting the magnitude of runoff. Finally, chapter three discusses the factors influencing desert runoff and the runoff generation mechanism. Chapter four is concerned with the assessment of flood hazard, for which it is important to estimate runoff and to create a map of affected areas. The chapter consists of four main parts: the first presents runoff estimation, the different estimation methods and their variables, such as the runoff coefficient, lag time, time of concentration, runoff volume, and flash flood frequency analysis. The second part presents the extreme event analysis. The third part shows the map of affected areas for each basin and the flash flood severity classes; here the DEM was used to extract the drainage networks and to identify the main streams, which are normally more dangerous than the others. Finally, the fourth part presents the risk zone map of the whole study area, which is of high interest for planning activities.
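The basic morphometric indices named above follow simple Horton/Strahler definitions; the sketch below shows the arithmetic with hypothetical stream counts, lengths and basin area, not data from the study area.

```python
def bifurcation_ratios(stream_counts):
    """Rb between successive Strahler orders: N_u / N_(u+1)."""
    return [stream_counts[i] / stream_counts[i + 1]
            for i in range(len(stream_counts) - 1)]

def drainage_density(total_stream_length_km, basin_area_km2):
    """Dd = total stream length / basin area (km per km2)."""
    return total_stream_length_km / basin_area_km2

def stream_frequency(total_stream_count, basin_area_km2):
    """Fs = number of stream segments / basin area (per km2)."""
    return total_stream_count / basin_area_km2

# hypothetical basin: stream counts for Strahler orders 1..4
counts = [60, 24, 8, 2]
rb = bifurcation_ratios(counts)           # [2.5, 3.0, 4.0]
dd = drainage_density(120.0, 60.0)        # 2.0 km per km2
fs = stream_frequency(sum(counts), 60.0)  # ~1.57 per km2
```

High drainage density and high bifurcation ratios both point to basins that concentrate runoff quickly, which is why these indices feed into the flash flood hazard assessment of chapter four.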
Chapter five, the last chapter, is concerned with flash flood hazard mitigation. It consists of three main parts. The first covers flood prediction and the methods that can be used to predict and forecast floods. The second part aims to determine the methods best suited to mitigating flood hazard in arid zones, and especially in the study area. The third part outlines a development perspective for the study area, indicating the places in the El-Qaá plain suitable for economic activities.
Resumo:
Five different methods were critically examined to characterize the pore structure of the silica monoliths. The mesopore characterization was performed using: a) the classical BJH method on nitrogen sorption data, which yielded overestimated values in the mesopore distribution and was improved by using the NLDFT method; b) the ISEC method implementing the PPM and PNM models, which were developed especially for monolithic silicas; contrary to particulate supports, these show two inflection points in the ISEC curve, enabling the calculation of the pore connectivity, a measure of the mass transfer kinetics in the mesopore network; c) mercury porosimetry, using newly recommended mercury contact angle values. The results of the characterization of the mesopores of monolithic silica columns by the three methods indicated that all of them were useful with respect to the volume-based pore size distribution, but only the ISEC method with the implemented PPM and PNM models gave the number-averaged pore size and distribution and the pore connectivity values. The characterization of the flow-through pores was performed by two different methods: a) mercury porosimetry, which was used not only to estimate the average flow-through pore size but also to assess entrapment; it was found that mass transfer from the flow-through pores to the mesopores was not hindered in the case of small flow-through pores with a narrow distribution; b) liquid permeability, where the average flow-through pore size was obtained via existing equations and improved by additional methods developed according to the Hagen-Poiseuille rules. The result was that the column back pressure is governed not by the flow-through pore size but, most decisively, by the surface area to volume ratio of the silica skeleton: the monolith with the lowest ratio will be the most permeable.
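A Hagen-Poiseuille-type estimate of an equivalent flow-through pore diameter from permeability data can be sketched as follows. Treating the flow-through pores as a bundle of straight cylindrical capillaries is a deliberate simplification, and the numerical values are invented for illustration, not taken from the columns studied here.

```python
import math

def equivalent_pore_diameter(viscosity_pa_s, column_length_m,
                             linear_velocity_m_s, pressure_drop_pa):
    """Capillary diameter d from Hagen-Poiseuille: u = d^2 * dP / (32 * eta * L)."""
    return math.sqrt(32.0 * viscosity_pa_s * column_length_m
                     * linear_velocity_m_s / pressure_drop_pa)

# hypothetical column: water-like eluent (1 mPa s), 10 cm length,
# 1 mm/s linear velocity, 32 bar pressure drop
d = equivalent_pore_diameter(1.0e-3, 0.1, 1.0e-3, 3.2e6)
print(d)   # 1e-6 m, i.e. an equivalent diameter of ~1 micrometre
```

Inverting the same relation shows why, for a fixed equivalent diameter, the back pressure scales with skeleton surface-to-volume ratio rather than with the nominal flow-through pore size alone.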
The flow-through pore characterization results obtained by mercury porosimetry and liquid permeability were compared with those from imaging and image analysis. All of these methods enable a reliable characterization of the flow-through pore diameters of monolithic silica columns, but special care should be taken with the chosen theoretical model. The measured pore characterization parameters were then linked with the mass transfer properties of monolithic silica columns. As indicated by the ISEC results, no restrictions in mass transfer resistance were noticed in the mesopores, owing to their high connectivity. The mercury porosimetry results also gave evidence that no restrictions occur for mass transfer from the flow-through pores to the mesopores in small-scaled silica monoliths with a narrow distribution. The optimum regimes of the pore structural parameters for the given target parameters in HPLC separations were then predicted. It was found that a low mass transfer resistance in the mesopore volume is achieved when the nominal diameter of the number-averaged mesopore size distribution is approximately an order of magnitude larger than the molecular radius of the analyte. The effective diffusion coefficient of an analyte molecule in the mesopore volume depends strongly on the nominal pore diameter of the number-averaged pore size distribution. The mesopore size therefore has to be adapted to the molecular size of the analyte, in particular for peptides and proteins. The study of the flow-through pores of silica monoliths demonstrated that the surface-to-volume ratio of the skeleton and the external porosity are decisive for the column efficiency, the latter being independent of the flow-through pore diameter. The flow-through pore characteristics were assessed by direct and indirect approaches, and theoretical column efficiency curves were derived.
The study showed that, next to the surface-to-volume ratio, the total porosity and its distribution between the flow-through pores and mesopores have a substantial effect on the column plate number, especially as the extent of adsorption increases. The column efficiency increases with decreasing flow-through pore diameter, decreases with increasing external porosity, and increases with increasing total porosity. However, this tendency has a limit due to the heterogeneity of the studied monolithic samples. We found that the maximum efficiency of the studied monolithic research columns could be reached at a skeleton diameter of approximately 0.5 µm. Furthermore, when the intention is to maximize the column efficiency, more homogeneous monoliths should be prepared.
Resumo:
Small-scale dynamic stochastic general equilibrium (DSGE) models have been treated as the benchmark in much of the monetary policy literature, given their ability to explain the impact of monetary policy on output, inflation and financial markets. The empirical failure of New Keynesian models is partly due to the Rational Expectations (RE) paradigm, which imposes a tight structure on the dynamics of the system: under this hypothesis, the agents are assumed to know the data generating process. In this paper, we propose the econometric analysis of New Keynesian DSGE models under an alternative expectations-generating paradigm, which can be regarded as an intermediate position between rational expectations and learning, namely an adapted version of the "Quasi-Rational" Expectations (QRE) hypothesis. Given the agents' statistical model, we build a pseudo-structural form from the baseline system of Euler equations, imposing that the lag length of the reduced form is the same as in the `best' statistical model.
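The "best statistical model" step, choosing the agents' reduced-form lag length by an information criterion before imposing it on the pseudo-structural form, can be sketched for a univariate AR(p) as below. The OLS fit and AIC formula are standard; the simulated series and the maximum lag are illustrative assumptions, not the specification used in the paper.

```python
import numpy as np

def ar_aic(y, p):
    """Fit AR(p) with intercept by OLS and return its AIC."""
    T = len(y) - p
    lags = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(T), lags])
    beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    resid = y[p:] - X @ beta
    sigma2 = resid @ resid / T
    return T * np.log(sigma2) + 2 * (p + 1)

def best_lag(y, p_max=4):
    """Lag length minimising the AIC: a stand-in for the 'best' model."""
    aics = [ar_aic(y, p) for p in range(1, p_max + 1)]
    return 1 + int(np.argmin(aics))

# simulate an AR(1) series as illustrative agent data
rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()
p_star = best_lag(y)
```

In the multivariate QRE setting the same idea applies with a VAR in place of the AR(p): the selected lag length then fixes the order of the reduced form that the pseudo-structural Euler-equation system is required to match.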