5 results for Humid regions.

in ArchiMeD - Elektronische Publikationen der Universität Mainz - Germany


Relevance:

60.00%

Publisher:

Abstract:

Arid regions are dominated, to a much larger degree than humid regions, by major catastrophic events. Although most of Egypt lies within the great hot desert belt, it experiences, especially in the north, occasional torrential rainfall, which causes flash floods all over the Sinai Peninsula. Flash floods in hot deserts are characterized by high velocity and short duration with a sharp discharge peak. Floods may carry large sediment loads that threaten fields and settlements in the wadis, and even the people living there. The extreme spottiness of rare heavy rainfall, well known to desert people everywhere, precludes efficient forecasting. Thus, as long as the available data still reflect pre-satellite methods, the chances of developing a warning system for desert floods seem remote. The relatively short flood-to-peak interval, a characteristic of desert floods, presents an additional impediment to the efficient use of warning systems. The present thesis contains an introduction and five chapters. Chapter one describes the physical setting of the study area, including the geological setting, such as the outcrop lithology and the deposits. The alluvial deposits of Wadi Moreikh were analyzed using OSL dating to determine their age and the palaeoclimatic conditions. The chapter also covers the stratigraphy and the structural geology, including the main faults and folds. In addition, it presents the present climate conditions, such as temperature, humidity, wind and evaporation, as well as the soil types and natural vegetation cover of the study area, derived from an unsupervised classification of ETM+ images. Chapter two presents the morphometric analysis of the main basins and their drainage networks in the study area. It is divided into three parts: the first part covers the morphometric analysis of the drainage networks, which were extracted from two main sources, topographic maps and DEM images.
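Two of the standard morphometric parameters used in this kind of drainage-network analysis, the bifurcation ratio and the drainage density, reduce to simple arithmetic. A minimal sketch with hypothetical numbers (the stream counts, lengths and areas below are illustrative, not values from the thesis):

```python
# Illustrative sketch of two standard morphometric parameters.
# All input numbers are hypothetical, not data from the study area.

def bifurcation_ratio(stream_counts):
    """Mean bifurcation ratio Rb = N_u / N_(u+1), averaged over Strahler orders."""
    ratios = [stream_counts[i] / stream_counts[i + 1]
              for i in range(len(stream_counts) - 1)]
    return sum(ratios) / len(ratios)

def drainage_density(total_stream_length_km, basin_area_km2):
    """Drainage density Dd = total channel length / basin area (km per km^2)."""
    return total_stream_length_km / basin_area_km2

# Hypothetical basin: stream counts for Strahler orders 1..4
counts = [120, 28, 6, 1]
print(round(bifurcation_ratio(counts), 2))
print(round(drainage_density(350.0, 420.0), 2))
```

High bifurcation ratios and high drainage densities are typically read as indicators of flashy, flood-prone basins, which is why these parameters matter for the hazard assessment in later chapters.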
Basins and drainage networks are major factors influencing flash floods; most of the elements affecting the network were studied, such as stream order, bifurcation ratio, stream lengths, stream frequency, drainage density, and drainage patterns. The second part of this chapter presents the morphometric analysis of the basins, including their area, dimensions, shape and surface, while the third part covers the morphometric analysis of the alluvial fans which form most of the El-Qaá plain. Chapter three addresses surface runoff through an analysis of rainfall and losses. Rainfall, as the main driver of runoff, is studied in detail: all its characteristics are considered, such as rainfall types, distribution, intensity, duration, frequency, and the relationship between rainfall and runoff. The second part of this chapter concerns the estimation of water losses by evaporation and infiltration, which together are the main losses directly affecting the magnitude of runoff. Finally, chapter three discusses the factors influencing desert runoff and the runoff generation mechanism. Chapter four is concerned with the assessment of flood hazard; for this it is important to estimate runoff and to create a map of the affected areas. The chapter consists of four main parts: the first part covers runoff estimation, the different methods to estimate runoff, and its variables, such as the runoff coefficient, lag time, time of concentration, runoff volume, and the frequency analysis of flash floods. The second part presents the extreme event analysis. The third part shows the map of affected areas for every basin and the flash flood hazard degrees; here the DEM was used to extract the drainage networks and to determine the main streams, which are normally more dangerous than the others. Finally, the fourth part presents the risk zone map of the total study area, which is of high interest for planning activities.
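Runoff variables of the kind listed for chapter four, such as the time of concentration and the peak discharge, are commonly estimated with textbook formulas. A hedged sketch using the Kirpich equation and the rational method; the coefficients and basin values below are illustrative assumptions, and the thesis may well use different estimation methods:

```python
# Hedged sketch of two textbook runoff estimates; inputs are hypothetical.

def time_of_concentration_kirpich(length_m, slope):
    """Kirpich formula: Tc in minutes, channel length in metres, slope in m/m."""
    return 0.0195 * length_m ** 0.77 * slope ** -0.385

def peak_discharge_rational(c, intensity_mm_h, area_km2):
    """Rational method: Q = C * i * A / 3.6 gives Q in m^3/s
    for i in mm/h and A in km^2."""
    return c * intensity_mm_h * area_km2 / 3.6

tc = time_of_concentration_kirpich(5000.0, 0.02)  # hypothetical wadi reach
q = peak_discharge_rational(0.6, 25.0, 120.0)     # hypothetical basin
print(round(tc, 1), round(q, 1))
```

The rational method is a crude upper-bound style estimate; for arid basins with high transmission losses, more elaborate rainfall-runoff models are usually preferred, which is consistent with the chapter treating several methods and a separate loss analysis.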
Chapter five, the last chapter, concerns flash flood hazard mitigation. It consists of three main parts: the first covers flood prediction and the methods which can be used to predict and forecast floods; the second aims to determine the methods best suited to mitigating flood hazard in the arid zone, and especially in the study area; and the third presents a development perspective for the study area, indicating the places in the El-Qaá plain suitable for economic activities.

Relevance:

20.00%

Publisher:

Abstract:

A broad variety of solid-state NMR techniques were used to investigate the chain dynamics in several polyethylene (PE) samples, including ultrahigh molecular weight PEs (UHMW-PEs) and low molecular weight PEs (LMW-PEs). By changing the processing history, i.e. melt/solution crystallization and drawing processes, these samples acquire different morphologies, leading to different molecular dynamics. Due to their long-chain nature, the molecular dynamics of polyethylene can be divided into local fluctuations and long-range motion. With the help of NMR, these different kinds of molecular dynamics can be monitored separately. In this work the local chain dynamics in the non-crystalline regions of polyethylene samples were investigated by measuring the 1H-13C heteronuclear dipolar coupling and the 13C chemical shift anisotropy (CSA). By analyzing the motionally averaged 1H-13C heteronuclear dipolar coupling and 13C CSA, information about the local anisotropy and geometry of the motion was obtained. Taking advantage of the large difference between the 13C T1 relaxation times in the crystalline and non-crystalline regions of PEs, the 1D 13C MAS exchange experiment was used to investigate the cooperative chain motion between these regions. The different chain organizations in the non-crystalline regions were used to explain the relationship between the local fluctuations and the long-range motion of the samples. Put simply, the cooperative chain motion between the crystalline and non-crystalline regions of PE results in the experimentally observed diffusive behavior of the PE chain. The morphological influences on the diffusive motion are discussed; the morphological factors include lamellar thickness, chain organization in the non-crystalline regions, and chain entanglements. The thermodynamics of the diffusive motion in melt- and solution-crystallized UHMW-PEs is discussed, revealing entropy-controlled features of the chain diffusion in PE.
This thermodynamic consideration explains the counterintuitive relationship between the local fluctuations and the long-range motion of the samples. Using the chain diffusion coefficient, the rates of the jump motion in the crystals of melt-crystallized PE have been calculated. A concept of "effective" jump motion is proposed to explain the difference between the values derived from the chain diffusion coefficients and those reported in the literature. The observations of this thesis clearly demonstrate the strong relationship between sample morphology and chain dynamics. The sample morphologies governed by the processing history impose different spatial constraints on the molecular chains, leading to different features of the local and long-range chain dynamics. Knowledge of the morphological influence on the microscopic chain motion has many implications for our understanding of the alpha-relaxation process in PE and related phenomena such as crystal thickening, the drawability of PE, the easy creep of PE fibers, etc.
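The step from a chain diffusion coefficient to a crystalline jump rate can be illustrated with a simple one-dimensional random-walk picture, D = a²k/2. The numbers below (jump length taken as half the PE unit-cell c-axis, and the diffusion coefficient itself) are illustrative assumptions for the sketch, not results from the thesis:

```python
# Hedged back-of-envelope: jump rate from a chain diffusion coefficient
# via a 1D random-walk picture, D = a^2 * k / 2. All values are assumed.

def jump_rate_from_diffusion(d_m2_s, jump_length_m):
    """1D random walk: k = 2 * D / a^2, in jumps per second."""
    return 2.0 * d_m2_s / jump_length_m ** 2

a = 1.27e-10  # assumed jump length: roughly c/2 of the PE unit cell (~1.27 A)
d = 1.0e-17   # assumed chain-diffusion coefficient in m^2/s
print(f"{jump_rate_from_diffusion(d, a):.2e}")  # jumps per second
```

The "effective" jump concept mentioned above addresses exactly the gap such a naive estimate leaves: not every crystalline jump contributes to net chain displacement, so rates inferred this way can differ from directly measured ones.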

Relevance:

20.00%

Publisher:

Abstract:

Peptides presented by MHC class I molecules for CTL recognition are derived mainly from cytosolic proteins. For antigen presentation on the cell surface, epitopes require correct processing by cytosolic and ER proteases, efficient TAP transport, and MHC class I binding affinity. The efficiency of epitope generation depends not only on the epitope itself, but also on its flanking regions. In this project, the influence of the C-terminal region of the model epitope SIINFEKL (S8L) from chicken ovalbumin (aa 257-264) on antigen processing was investigated. S8L is a well-characterized epitope presented on the murine MHC class I molecule H-2Kb. The Flp-In 293Kb cell line was transfected with different constructs, each enabling the expression of the S8L sequence with different defined C-terminal flanking regions. The constructs differed at the first two C-terminal positions after the S8L epitope, the so-called P1' and P2' positions. At these sites, all 20 amino acids were exchanged consecutively and tested for their influence on H-2Kb/S8L presentation on the cell surface of the Flp-In 293Kb cells. This complex was detected by immunostaining and flow cytometry. The prevailing assumption is that proteasomal cleavages are exclusively responsible for generating the final C-termini of CTL epitopes. Nevertheless, recent publications showed that TPPII (tripeptidyl peptidase II) is required for the generation of the correct C-terminus of the HLA-A3-restricted HIV epitope Nef(73-82). Against this background, the dependence of S8L generation on proteasomal cleavage of the designed constructs was characterized using proteasomal inhibitors. The results indicate that the amino acid flanking the C-terminus of an epitope is crucial for proteasomal cleavage. Furthermore, partially proteasome-independent S8L generation from specific S8L-precursor peptides was observed.
Hence, the possibility that other endo- or carboxy-peptidases existing in the cytosol could be involved in the correct trimming of the C-terminus of antigenic peptides for MHC class I presentation was investigated by performing specific knockdowns and using inhibitors against the target peptidases. In parallel, a purification strategy to identify the novel peptidase was established. The purified peaks showing endopeptidase activity were further analyzed by mass spectrometry, and some candidate peptidases (e.g. Lon) were identified, which remain to be further characterized.

Relevance:

20.00%

Publisher:

Abstract:

This thesis investigates the nucleon structure probed by the electromagnetic interaction. Among the most basic observables reflecting the electromagnetic structure of the nucleon are the form factors, which have been studied by means of elastic electron-proton scattering with ever-increasing precision for several decades. In the timelike region, corresponding to proton-antiproton annihilation into an electron-positron pair, the present experimental information is much less accurate. However, high-precision form factor measurements are planned for the near future. About 50 years after the first pioneering measurements of the electromagnetic form factors, polarization experiments stirred up the field, since their results were found to be in striking contradiction to the findings of previous form factor investigations based on unpolarized measurements. Triggered by the conflicting results, a whole new field emerged studying the influence of two-photon exchange corrections to elastic electron-proton scattering, which appeared to be the most likely explanation of the discrepancy. The main part of this thesis deals with theoretical studies of two-photon exchange, which is investigated particularly with regard to form factor measurements in the spacelike as well as in the timelike region. An extraction of the two-photon amplitudes in the spacelike region through a combined analysis of the results of unpolarized cross section measurements and polarization experiments is presented. Furthermore, predictions of the two-photon exchange effects on the e+p/e-p cross section ratio are given for several new experiments which are currently ongoing. The two-photon exchange corrections are also investigated in the timelike region, in the process pbar{p} -> e+ e-, by means of two factorization approaches. These corrections are found to be smaller than those obtained for the spacelike scattering process.
The influence of the two-photon exchange corrections on cross section measurements, as well as on asymmetries that allow direct access to the two-photon exchange contribution, is discussed. Furthermore, one of the factorization approaches is applied to investigate the two-boson exchange effects in parity-violating electron-proton scattering. In the last part of this work, the process pbar{p} -> pi0 e+e- is analyzed with the aim of determining the form factors in the so-called unphysical timelike region below the two-nucleon production threshold. For this purpose, a phenomenological model is used which provides a good description of the available data on the real photoproduction process pbar{p} -> pi0 gamma.
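The spacelike analysis builds on the one-photon-exchange (Rosenbluth) reduced cross section, sigma_red = epsilon*G_E^2 + tau*G_M^2, which is linear in epsilon; two-photon exchange appears as a deviation from that linearity. A minimal sketch using the standard dipole parametrization, which is a well-known textbook approximation and not the fits used in the thesis:

```python
# Hedged illustration of the one-photon-exchange (Rosenbluth) reduced
# cross section with the standard dipole form factors. Textbook
# approximations only; not the parametrizations fitted in the thesis.

MP = 0.9383    # proton mass in GeV
MU_P = 2.793   # proton magnetic moment in nuclear magnetons

def dipole(q2):
    """Standard dipole form factor; Q^2 in GeV^2."""
    return (1.0 + q2 / 0.71) ** -2

def reduced_cross_section(q2, epsilon):
    """sigma_red = epsilon * G_E^2 + tau * G_M^2, with tau = Q^2 / (4 M^2)."""
    tau = q2 / (4.0 * MP ** 2)
    ge = dipole(q2)
    gm = MU_P * dipole(q2)
    return epsilon * ge ** 2 + tau * gm ** 2

# In one-photon exchange sigma_red is exactly linear in epsilon
# (the Rosenbluth plot); two-photon exchange bends this line.
print(round(reduced_cross_section(1.0, 0.5), 4))
```

Because the slope of the Rosenbluth plot gives G_E^2, even a small epsilon-dependent two-photon correction distorts the extracted G_E/G_M ratio at large Q^2, which is the discrepancy with the polarization data described above.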

Relevance:

20.00%

Publisher:

Abstract:

In many industries, for example the automotive industry, digital mock-ups are used to verify the design and function of a product on a virtual prototype. One use case is checking the safety clearances of individual components, the so-called clearance analysis. Engineers determine for specific components whether they maintain a given safety distance to the surrounding components, both at rest and during motion. If components fall below the safety distance, their shape or position must be changed. For this, it is important to know exactly which regions of the components violate the safety distance.

In this work we present a solution for the real-time computation of all regions between two geometric objects that fall below the safety distance. Each object is given as a set of primitives (e.g. triangles). For every instant at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety distance and call it the set of all tolerance-violating primitives. We present a comprehensive solution that can be divided into the following three major topics.

In the first part of this work we investigate algorithms that check, for two triangles, whether they are tolerance-violating. We present several approaches for triangle-triangle tolerance tests and show that specialized tolerance tests perform significantly better than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for computing all tolerance-violating primitives, our dual-space approach consistently proves to be the fastest.

The second part of this work deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure consisting of a flat hierarchical data structure and several uniform grids. To guarantee efficient running times, it is especially important to take the required safety distance into account in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of pairs of primitives to be tested. Furthermore, we develop strategies for recognizing primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. In our benchmarks we show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects, each consisting of many hundreds of thousands of primitives.

In the third part we present a novel, memory-optimized data structure for managing the cell contents of the uniform grids used above. We call this data structure Shrubs. Previous approaches to the memory optimization of uniform grids mainly rely on hashing methods, but these do not reduce the memory consumption of the cell contents. In our application, neighbouring cells often have similar contents. Our approach is able to losslessly compress the memory footprint of the cell contents of a uniform grid, exploiting the redundant cell contents, to one fifth of the original size, and to decompress it at runtime.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Beyond pure clearance analysis, we show applications for various path-planning problems.
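The core primitive-level question, deciding whether two triangles come closer than the safety distance without a full distance computation, can be illustrated with a much-simplified conservative filter. This is not the dual-space test developed in the thesis, just a bounding-sphere pre-test of the kind such pipelines use to skip exact work:

```python
# Much-simplified sketch of the tolerance-test idea (NOT the thesis's
# dual-space test): cheaply rule out tolerance violations between two
# triangles before any exact distance computation is attempted.
import math

def centroid(tri):
    return [sum(v[i] for v in tri) / 3.0 for i in range(3)]

def radius(tri, c):
    return max(math.dist(v, c) for v in tri)

def maybe_within_tolerance(tri_a, tri_b, d):
    """Conservative filter: False means the triangles certainly keep the
    safety distance d; True means an exact test would still be needed."""
    ca, cb = centroid(tri_a), centroid(tri_b)
    ra, rb = radius(tri_a, ca), radius(tri_b, cb)
    # Bounding spheres farther apart than d => no tolerance violation.
    return math.dist(ca, cb) - ra - rb < d

t1 = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
t2 = [(10, 0, 0), (11, 0, 0), (10, 1, 0)]
print(maybe_within_tolerance(t1, t2, 0.5))  # far apart -> False
```

The filter only errs on the safe side: it never discards a genuinely tolerance-violating pair, which is the property any such pruning step in a clearance-analysis pipeline must preserve.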