891 results for Units of measurement


Relevance: 90.00%

Abstract:

This thesis presents different techniques designed to drive a swarm of robots through an a priori unknown environment, moving the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two theories, used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both theories study the interactions between entities (also called agents or units) in Multi-Agent Systems (MAS); the first belongs to the Artificial Intelligence context and the second to the Distributed Systems context. Each theory, from its own point of view, exploits the emergent behaviour that arises from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm are exploited to overcome and minimize difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common main target. Another aim of this work is to show the importance of the information shared between the units of the group and of the communication topology, since sharing keeps the environmental information detected by each agent up to date across the swarm. Swarm Intelligence is applied through the Particle Swarm Optimization (PSO) algorithm, exploiting its features as a navigation system. Graph Theory is applied through Consensus, using the agreement protocol to keep the units in a desired, controlled formation. This approach preserves the power of PSO while controlling part of its random behaviour with a distributed control algorithm such as Consensus.
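
The agreement (consensus) protocol mentioned above drives each agent's state toward the average over its neighbourhood in the communication graph. A minimal sketch, assuming a hypothetical four-agent ring topology, scalar states and a hand-picked step size (none of these details are taken from the thesis):

```python
import numpy as np

# Adjacency matrix of a hypothetical 4-agent ring communication topology.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A       # graph Laplacian

x = np.array([0.0, 2.0, 5.0, 9.0])   # initial scalar states (e.g. one coordinate)
eps = 0.1                            # step size; must satisfy eps < 1/max_degree

for _ in range(200):
    x = x - eps * (L @ x)            # x_i <- x_i + eps * sum_j a_ij (x_j - x_i)

print(x)  # all states converge to the average of the initial states (4.0)
```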

Relevance: 90.00%

Abstract:

Environmental decay in porous masonry materials, such as brick and mortar, is a widespread problem concerning both new and historic masonry structures. The decay mechanisms are quite complex, depending upon several interconnected parameters and on the interaction with the specific micro-climate. Materials undergo aesthetic and substantial changes in character; yet while many studies have been carried out, the mechanical aspect has been largely understudied despite its real importance from the structural viewpoint. A quantitative assessment of masonry material degradation and of how it affects the load-bearing capacity of masonry structures appears to be missing. The research work carried out, limiting attention to brick masonry, addresses this issue through an experimental laboratory approach via different integrated testing procedures, both non-destructive and mechanical, together with monitoring methods. Attention was focused on the transport of moisture and salts and on the damaging effects caused by the crystallization of two different salts, sodium chloride and sodium sulphate. Many series of masonry specimens, very different in size and purpose, were used to track the damage process from its beginning and to monitor its evolution over a number of years. At the same time, suitable testing techniques (non-destructive, minimally invasive, analytical, and monitoring) were validated for these purposes. The specimens were exposed to different aggressive agents (in terms of type of salt, brine concentration, artificial vs. open-air natural ageing, …), tested by different means (qualitative vs. quantitative, non-destructive vs. mechanical testing, point measurements vs. wide areas, …), and had different sizes (1-, 2-, 3-header-thick walls, full-scale walls vs. small-size specimens, brick columns and triplets vs. small walls, masonry specimens vs. single units of brick and mortar prisms, …). Different advanced testing methods and novel monitoring techniques were applied in an integrated, holistic approach for a quantitative assessment of the masonry health state.

Relevance: 90.00%

Abstract:

The application of two low-temperature thermochronometers [fission-track and (U-Th)/He analyses, both on apatite] to various tectonostratigraphic units of the Menderes and Alanya Massifs of Turkey has provided significant new constraints on the understanding of their structural evolution. The Menderes Massif of western Anatolia is one of the largest metamorphic core complexes on Earth. The integration of the geochronometric dataset presented in this dissertation with preexisting ones from the literature delineates three groups of samples within the Menderes Massif. In the northern and southern regions, the massif experienced Late Oligocene-Early Miocene tectonic denudation and surface uplift, whereas data from the central region are younger, with most ages ranging between the Middle and Late Miocene. The results of this study are consistent with a symmetric exhumation of the Menderes Massif. The Alanya Massif of SW Anatolia presents a typical nappe pile consisting of thrust sheets with contrasting metamorphic histories. Petrological and geochronological data clearly indicate that the tectonometamorphic evolution of the Alanya Massif started in the Late Cretaceous with the northward subduction of an ‘Alanya ocean’ under the Tauride plate. As an effect of the closure of the İzmir–Ankara–Erzincan ocean, northward backthrusting during the Paleocene-Early Eocene created the present stacking order. Apatite fission-track ages from this study range from 31.8 to 26.8 Ma (Late Rupelian-Early Chattian) and point to a previously unrecognized mid-Oligocene cooling/exhumation episode. (U-Th)/He analyses on zircon crystals obtained from the island of Cyprus show that the Late Cretaceous trondhjemites of the Troodos Massif did not record a significant cooling event. Instead, results for the Late Triassic turbiditic sandstones of the Vlambouros Formation show that the Mamonia mélange was never buried deeply enough to reach the closure temperature of the ZHe radiometric system (ca. 200°C), thus retaining the Paleozoic signature of a previous sedimentary cycle.

Relevance: 90.00%

Abstract:

Lattice Quantum Chromodynamics (LQCD) is the preferred tool for obtaining non-perturbative results from QCD in the low-energy regime. It has by now entered the era in which high-precision calculations for a number of phenomenologically relevant observables at the physical point, with dynamical quark degrees of freedom and controlled systematics, become feasible. Despite these successes, there are still quantities where control of systematic effects is insufficient. The subject of this thesis is the exploration of the potential of today's state-of-the-art simulation algorithms for non-perturbatively $\mathcal{O}(a)$-improved Wilson fermions to produce reliable results in the chiral regime and at the physical point, both for zero and non-zero temperature. Important in this context is control over the chiral extrapolation. This thesis is concerned with two particular topics, namely the computation of hadronic form factors at zero temperature, and the properties of the phase transition in the chiral limit of two-flavour QCD.

The electromagnetic iso-vector form factor of the pion provides a platform to study systematic effects and the chiral extrapolation for observables connected to the structure of mesons (and baryons). Mesonic form factors are computationally simpler than their baryonic counterparts but share most of the systematic effects. This thesis contains a comprehensive study of the form factor in the regime of low momentum transfer $q^2$, where the form factor is connected to the charge radius of the pion. A particular emphasis is on the region very close to $q^2=0$, which has not been explored so far, neither in experiment nor in LQCD. The results for the form factor close the gap between the smallest spacelike $q^2$-value available so far and $q^2=0$, and reach an unprecedented accuracy with full control over the main systematic effects. This enables the model-independent extraction of the pion charge radius. The results for the form factor and the charge radius are used to test chiral perturbation theory ($\chi$PT) and are thereby extrapolated to the physical point and the continuum. The final result in units of the hadronic radius $r_0$ is
$$ \left\langle r_\pi^2 \right\rangle^{\rm phys}/r_0^2 = 1.87 \: \left(^{+12}_{-10}\right)\left(^{+\:4}_{-15}\right) \quad \textnormal{or} \quad \left\langle r_\pi^2 \right\rangle^{\rm phys} = 0.473 \: \left(^{+30}_{-26}\right)\left(^{+10}_{-38}\right)(10) \: \textnormal{fm}^2 \;, $$
which agrees well with the results from other measurements in LQCD and experiment. Note that this is the first continuum-extrapolated result for the charge radius from LQCD which has been extracted from measurements of the form factor in the region of small $q^2$.

The order of the phase transition in the chiral limit of two-flavour QCD and the associated transition temperature are the last unknown features of the phase diagram at zero chemical potential. The two possible scenarios are a second-order transition in the $O(4)$ universality class or a first-order transition. Since direct simulations in the chiral limit are not possible, the transition can only be investigated by simulating at non-zero quark mass with a subsequent chiral extrapolation, guided by the universal scaling in the vicinity of the critical point. The thesis presents the setup and first results from a study on this topic. The study provides an ideal platform to test the potential and limits of today's simulation algorithms at finite temperature. The results from a first scan at a constant zero-temperature pion mass of about 290~MeV are promising, and it appears that simulations down to physical quark masses are feasible. Of particular relevance for the order of the chiral transition is the strength of the anomalous breaking of the $U_A(1)$ symmetry at the transition point. It can be studied by looking at the degeneracies of the correlation functions in scalar and pseudoscalar channels. For the temperature scan reported in this thesis, the breaking is still pronounced in the transition region and the symmetry becomes effectively restored only above $1.16\:T_C$. The thesis also provides an extensive outline of research perspectives and includes a generalisation of the standard multi-histogram method to explicitly $\beta$-dependent fermion actions.
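
For context, the charge radius quoted above is the slope of the form factor at vanishing momentum transfer; in the spacelike convention $Q^2 = -q^2 \ge 0$ this is the textbook relation (not specific to this thesis)
$$ F_\pi(Q^2) = 1 - \frac{\left\langle r_\pi^2 \right\rangle}{6}\, Q^2 + \mathcal{O}(Q^4), \qquad \left\langle r_\pi^2 \right\rangle = -6 \,\left. \frac{\mathrm{d} F_\pi(Q^2)}{\mathrm{d} Q^2} \right|_{Q^2=0} \;, $$
which is why measurements very close to $Q^2 = 0$ constrain $\left\langle r_\pi^2 \right\rangle$ model-independently.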

Relevance: 90.00%

Abstract:

Since historical times, coastal areas throughout the eastern Mediterranean have been exposed to tsunami hazard. For many decades, knowledge about palaeotsunamis was based solely on historical accounts. However, results from timeline analyses reveal different characteristics affecting the quality of the dataset (i.e. distribution of data, temporal thinning backward of events, local periodization phenomena) that emphasize the fragmentary character of the historical data. As an increasing number of geo-scientific studies give convincing examples of well-dated tsunami signatures not reported in catalogues, the incompleteness of the record is a major problem for palaeotsunami research. While the compilation of historical data allows a first approach to identifying areas vulnerable to tsunamis, it must not be regarded as reliable for hazard assessment. Considering the increasing economic significance of coastal regions (e.g. for mass tourism) and the constantly growing coastal population, our knowledge of the local, regional and supraregional tsunami hazard along Mediterranean coasts has to be improved. For setting up a reliable tsunami risk assessment and developing risk-mitigation strategies, it is of major importance (i) to identify areas at risk and (ii) to estimate the intensity and frequency of potential events. This approach is most promising when based on palaeotsunami research seeking to detect areas of high palaeotsunami hazard, to calculate recurrence intervals and to document palaeotsunami destructiveness in terms of wave run-up, inundation and long-term coastal change. Within the past few years, geo-scientific studies on palaeotsunami events have provided convincing evidence that, throughout the Mediterranean, ancient harbours were subject to strong tsunami-related disturbance or destruction. Constructed to protect ships from storm and wave activity, harbours provide especially sheltered and quiescent environments and have thus turned out to be valuable geo-archives for tsunamigenic high-energy impacts on coastal areas. Directly exposed to the Hellenic Trench and extensive local fault systems, coastal areas in the Ionian Sea and the Gulf of Corinth hold a considerably high risk of tsunami events. Geo-scientific and geoarchaeological studies carried out in the environs of the ancient harbours of Krane (Cefalonia Island), Lechaion (Corinth, Gulf of Corinth) and Kyllini (western Peloponnese) comprised on-shore and near-shore vibracoring and subsequent sedimentological, geochemical and microfossil analyses of the recovered sediments. Geophysical methods like electrical resistivity tomography and ground-penetrating radar were applied in order to detect subsurface structures and to verify stratigraphical patterns derived from vibracores over long distances. The overall geochronological framework of each study area is based on radiocarbon dating of biogenic material and age determination of diagnostic ceramic fragments. The results presented within this study provide distinct evidence of multiple palaeotsunami landfalls in the investigated areas. Tsunami signatures encountered in the environs of Krane, Lechaion and Kyllini include (i) coarse-grained allochthonous marine sediments intersecting silt-dominated quiescent harbour deposits and/or shallow marine environments, (ii) disturbed microfaunal assemblages and/or (iii) distinct geochemical fingerprints as well as (iv) geo-archaeological destruction layers and (v) extensive units of beachrock-type calcarenitic tsunamites.
For Krane, geochronological data yielded termini ad or post quem (maximum ages) for tsunami event generations dated to 4150 ± 60 cal BC, ~3200 ± 110 cal BC, ~650 ± 110 cal BC, and ~930 ± 40 cal AD. Results for Lechaion suggest that the harbour was hit by strong tsunami impacts in the 8th-6th century BC, the 1st-2nd century AD and the 6th century AD. At Kyllini, the harbour site was affected by tsunami impact between the late 7th and early 4th century BC and between the 4th and 6th century AD. In the case of Lechaion and Kyllini, the final destruction of the harbour facilities also seems to be related to tsunami impact. Comparing the tsunami signals obtained for each study area with geo-scientific data on palaeotsunami events from other sites indicates that the investigated harbour sites represent excellent geo-archives for supra-regional mega-tsunamis.

Relevance: 90.00%

Abstract:

The electric charge of the neutron is closely linked to the question of whether charge quantization exists: if the neutron carried a charge, charge could not be quantized in units of the elementary charge e.

Charge quantization is not contained in electrodynamics or the minimal Standard Model, so a possible neutron charge would not contradict them. It does, however, follow from the extensions of these models: the so-called Grand Unified Theories predict the possibility of proton decay, which is only possible if charge is quantized.

By measuring an electric charge of the neutron, these theories can therefore be tested.

Within this work, an apparatus was developed with which the electric charge of the neutron can be measured, based on the principle of a measurement from 1988. With a liquid neutron mirror made of Fomblin, a liquid mirror was employed for neutrons for the first time ever. Through this and other improvements, the sensitivity of the apparatus was increased by a factor of 5 compared to the 1988 experiment. A possible charge of the neutron can be measured with a sensitivity of δq_n = 2.15·10^(−20)·e/√day.

The measurement of the electric charge is scheduled for winter 2014. By then, the precision is to be improved to δq_n = 1.4·10^(−21)·e/√day.
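
The quoted sensitivities are per square root of a day of data taking, so the statistical uncertainty falls as 1/√t with measurement time. A minimal sketch of this standard scaling (the target value below is chosen purely for illustration):

```python
# A sensitivity quoted "per square-root day" averages down statistically:
# after t days of data taking, the uncertainty is s / sqrt(t).

def days_to_reach(s_per_sqrt_day: float, target: float) -> float:
    """Days of averaging needed to push the uncertainty down to `target`."""
    return (s_per_sqrt_day / target) ** 2

s = 2.15e-20  # e/sqrt(day), the sensitivity of the apparatus described above
print(days_to_reach(s, 1.0e-20))  # ~4.6 days for an illustrative 1.0e-20 e target
```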

Relevance: 90.00%

Abstract:

The rapid development of the computer industry through the steady shrinking of transistors is quickly approaching the limit of silicon technology, beyond which tunnelling processes in the transistors no longer permit further miniaturization and higher packing density in processors. The future of computer technology lies in the processing of quantum information. For the development of quantum computers, the detection and targeted manipulation of single spins in solids is of utmost importance. Standard spin-detection methods such as ESR, however, only allow the detection of spin ensembles. The idea that should enable the readout of single spins is to perform the manipulation separately from the detection.

The NV⁻ centre is a special lattice defect in diamond that can be used as an atomic, optically readable magnetic-field sensor. By measuring its fluorescence, it should be possible to detect the manipulation of other, optically undetectable "dark spins" in the immediate vicinity of the NV centre via spin-spin coupling. The proposed model of the quantum computer is based on N@C60 enclosed in SWCNTs. These peapods, as the units of fullerenes with trapped nitrogen packed into carbon nanotubes are called, are to form the basis of the computational units of a truly scalable quantum computer. The computations carried out in them with the nitrogen electron spin are to be read out optically via the near-surface NV centres (of diamond plates) above which they are to be positioned.

The primary goal of the present work was to optically detect the coupling of near-surface single NV centres to the optically undetectable spins of radical molecules on the diamond surface by means of ODMR coupling experiments, and thereby to take decisive steps towards the realization of a quantum register.

An ODMR setup still in its development stage was rebuilt, and its existing functionality was verified on commercial NV-centre-rich nanodiamonds. In the next step, the efficiency and mode of measurement were adapted to the detection and manipulation of single NV centres implanted near the surface (< 7 nm deep) of diamond plates. A very large part of the work, which can only partially be described here, consisted of adapting the existing control software to the demands of practical measurement. Subsequently, the correct function of all implemented pulse sequences and other software improvements was verified by measurements on near-surface implanted single NV centres. The setup was also extended by the components required for double-resonance measurements, such as a controllable electromagnet and an RF signal source. Taking the thermal stability of N@C60 into account, an optical cryostat was also planned, built, integrated into the setup, and characterized for future experiments.

The spin-spin coupling experiments were carried out with the oxygen-stable galvinoxyl radical as a model system for coupling. Via the coupling to an NV centre, the RF spectrum of the coupled radical spin was observed. A Rabi nutation of the coupled spin could also be recorded.

Further aspects of the peapod measurement and surface implantation were also considered. It was investigated whether NV detection is disturbed by the SWCNTs, peapods or fullerenes. It turned out that the components of the planned quantum computer, except for the C60 clusters, are not detectable in an ODMR measurement configuration and will not disturb the NV measurement. It was also considered which types of commercial diamond plates are suitable for surface implantation; a density of implanted NV centres suitable for the coupling measurements was estimated, and an implantation at the estimated density was examined.

Relevance: 90.00%

Abstract:

Metallic nanoparticles and their oxides (e.g., ZnO NP, TiO2 NP and Fe2O3 NP) are widely used as additives in tyre production, catalysts, food, pharmaceuticals and cosmetics owing to their chemical and physical properties. A continuous rise in industrial use (~1663 tonnes in 2025) with increased release into the environment is expected, which will inevitably lead to increased uptake via the respiratory epithelium. Metal fume fever is a known adverse health effect of inhaled metal-oxide-containing aerosols (e.g., ZnO). Immune reactions such as inflammation are frequently associated with the generation of reactive oxygen species (ROS), which in turn can lead to DNA damage. Three possible causes of genotoxicity are assumed: direct interaction of nanoparticles with intracellular structures, interaction of ions from dissociated particles with intracellular structures, and the generation of ROS initiated by particles or ions.

The present study addresses the mechanisms of genotoxicity of ZnO nanoparticles (ZnO NP), as an example of metallic nanoparticles, in the respiratory epithelium. It specifically analysed the intracellular uptake and distribution of ZnO NP, their toxicity, their DNA-damaging potential, and the activation of the DNA damage response (DDR).

Hardly any internalized ZnO NP could be detected by TEM. Within the first seconds after treatment with ZnO NP, a strong increase in the intracellular Zn2+ concentration was measured spectrofluorometrically. In untreated cells, Zn2+ was localized in granular structures. Treatment with ZnO NP led to an accumulation of Zn2+ in these structures. Over time, the Zn2+ ions relocated to the cytoplasm as well as to the nuclei and mitochondria. No colocalization of Zn2+ with early endosomes or the endoplasmic reticulum was observed. Pre-treatment of the cells with diethylenetriaminepentaacetic acid (DTPA), an extracellular chelating agent, prevented the intracellular rise of Zn2+ after treatment with the particles.

Treatment with ZnO NP resulted in a time- and dose-dependent reduction of cell viability, while intracellular ROS concentrations rose slightly within the first 30 min and then continuously until the end of the measurement. In addition, the mitochondrial membrane potential decreased, while the number of early-apoptotic cells increased in a time-dependent manner.

DNA double-strand breaks (DNA DSB) were visualized by immunofluorescence staining of γH2A.X foci and could be detected after treatment with ZnO NP. Pre-treatment with the radical scavenger N-acetyl-L-cysteine (NAC) resulted in strongly reduced intracellular ROS concentrations and few DNA DSB. DNA damage was completely prevented by pre-treatment with DTPA.

Activation of the DDR was examined by analysing ATM, ATR, Chk1, Chk2, p53 and p21 by Western blot and ELISA after treatment with ZnO NP. The ATR/Chk1 pathway was not activated by ZnO NP. Chelation of Zn2+ resulted in reduced activation of the ATM/Chk2 pathway, whereas scavenging of ROS had no effect on ATM/Chk2 pathway activation.

In summary, exposure to ZnO NP resulted in ROS generation, reduced viability and decreased mitochondrial membrane potential, and initiated early apoptosis in a time-dependent manner. ZnO NP dissociated extracellularly and were rapidly internalized as Zn2+ via unknown mechanisms. The Zn2+ ions accumulated in the cytoplasm and especially in the mitochondria and the nucleus. DDR signalling was activated by ZnO NP but not inhibited by NAC; DTPA, in contrast, completely inhibited DDR activation. Treatment with ZnO NP induced DNA DSB; inhibition of ROS reduced the DNA DSB, and chelation of Zn2+ prevented their formation.

These data argue for particle dissociation and the Zn2+ released thereby as the main mediator of the genotoxicity of metallic nanoparticles.

Relevance: 90.00%

Abstract:

The main objective of this study is to reveal the housing patterns in Cairo, one of the most rapidly urbanizing cities in the developing world. The study outlines the evolution of the housing problem and its influencing factors in Egypt generally and in Cairo specifically, taking into account the political transition from the national state economy to the open-door policy and the neo-liberal period, and finally the housing situation after the January 2011 Revolution. The resulting housing patterns in Cairo Governorate were identified as (1) squatter settlements, (2) semi-informal settlements, (3) deteriorated inner pockets, and (4) formal settlements.

The study concluded that the housing patterns in Cairo reflect a multifaceted problem resulting in: (1) the imbalance between the high demand for affordable housing units for low-income families and the oversupply of upper-income housing, (2) the vast expansion of informal areas on both agricultural and desert lands, (3) the deterioration of the old parts of Cairo without upgrading or appropriate replacement of the housing structure, and (4) the high vacancy rate of newly constructed apartments.

The evolution and development of the current housing problem were attributed to a number of factors: demographic factors, represented by the rapid growth of the population associated with urbanization under the dictates of poverty, and the progressive increase in the prices of both buildable land and building materials. The study underlined that the current pattern of population density in Cairo Governorate is a direct result of the current housing problems. Around the depopulation core of the city, a ring of areas with relatively stable population density has developed, while population densification, at the expense of the depopulation core, characterizes the peripheries of the city. The population density in relation to the built-up area was examined using a Landsat-7 ETM+ image (176/039), acquired on 24 August 2006 and considered an ideal source for land-cover classification in Cairo since it is compatible with the 2006 population census.

Considering that the socio-economic setting is a driving force of change in housing demand and an outcome of the accumulated housing problems, the socio-economic deprivations of the inhabitants of Cairo Governorate were analyzed. Small administrative units in Cairo were categorized into four classes based on the Socio-Economic Opportunity Index (SEOI), an index developed from multiple domains focusing on the economic, educational and health situation of the residential population. The results show four levels of deprivation which are consistent with the existing housing patterns. Informal areas on state-owned land fall into the first category, the "severely deprived" level. Ex-formal areas or deteriorated inner pockets are characterized as "deprived" urban quarters. Semi-informal areas on agricultural land concentrate in the third category of "medium deprived" settlements. Formal or planned areas mostly fall into the fourth category, the "less deprived" parts of Cairo Governorate.

For a better understanding of the differences and similarities among the various housing patterns, four areas based on the smallest administrative units (shiakhat) were selected for a detailed study. These areas are: (1) El-Ma’desa, representing a severely deprived squatter settlement; (2) Ain el-Sira, an example of an ex-formal deprived area; (3) El-Marg el-Qibliya, a typical semi-informal and medium deprived settlement; and (4) El-Nozha, representing a formal and less deprived area.

The analysis at the shiakhat level reveals how socio-economic characteristics and unregulated urban growth are reflected in the morphological characteristics of the housing patterns, in terms of street network, types of residential buildings and types of housing tenure, as well as in the functional characteristics, in terms of land-use mix and its degree of compatibility. It is concluded that the provision of and accessibility to public services serves as a performance measure, exposing the dysfunctional structure dominating squatter and semi-informal settlements on the one hand and the ample public services and accessibility of formal areas on the other.
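
The SEOI-style classification described above can be illustrated with a toy composite index; the domain names come from the abstract, while the data, equal weights, min-max normalization and equal-width class bands are illustrative assumptions, not the thesis's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 8  # hypothetical administrative units
domains = {
    "economic": rng.random(n_units),
    "education": rng.random(n_units),
    "health": rng.random(n_units),
}

def min_max(x):
    # rescale a domain indicator to [0, 1]
    return (x - x.min()) / (x.max() - x.min())

# equal-weight average of the normalized domains -> opportunity score in [0, 1]
score = np.mean([min_max(v) for v in domains.values()], axis=0)

labels = ["severely deprived", "deprived", "medium deprived", "less deprived"]
classes = [labels[min(int(s * 4), 3)] for s in score]  # four equal-width bands
for s, c in zip(score, classes):
    print(f"{s:.2f}  {c}")
```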

Relevance: 90.00%

Abstract:

Background: Leg edema is a common manifestation of various underlying pathologies. Reliable measurement tools are required to quantify edema and monitor therapeutic interventions. The aim of the present work was to investigate the reproducibility of optoelectronic leg volumetry over a 3-week period and to eliminate daytime-related within-individual variability. Methods: Optoelectronic leg volumetry was performed in 63 hairdressers (mean age 45 ± 16 years, 85.7% female) in standing position, twice within a minute for each leg, and was repeated after 3 weeks. Both lower-leg (legBD) and whole-limb (limbBF) volumetry were analysed. Reproducibility was expressed as analytical and within-individual coefficients of variation (CVA, CVW) and as intra-class correlation coefficients (ICC). Results: A total of 492 leg volume measurements were analysed. Both legBD and limbBF volumetry were highly reproducible, with CVA of 0.5% and 0.7%, respectively. Within-individual reproducibility of legBD and limbBF volumetry over the three-week period was high (CVW 1.3% for both; ICC 0.99 for both). At both visits, the second measurement revealed a significantly higher volume than the first, with a mean increase of 7.3 ± 14.1 ml (0.33% ± 0.58%) for legBD and 30.1 ± 48.5 ml (0.52% ± 0.79%) for limbBF volume. A significant linear correlation was found between absolute and relative leg volume differences and the difference in the exact time of day of measurement between the two study visits (P < .001). A time-correction formula determined from this relationship permitted further improvement of the CVW. Conclusions: Leg volume changes can be reliably assessed by optoelectronic leg volumetry at a single time point and over a 3-week period. However, volumetry results are biased by orthostatic and daytime-related volume changes. The bias from daytime-related volume changes can be minimized by a time-correction formula.
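
As a rough illustration of the statistics above, a within-individual CV from paired measurements and a regression-based time-of-day correction can be sketched as follows; the data are synthetic and the linear form of the correction is an assumption, not the study's published formula:

```python
import numpy as np

rng = np.random.default_rng(1)
v1 = 3000 + 300 * rng.standard_normal(63)      # visit-1 volumes (ml), synthetic
dt_hours = rng.uniform(-4, 4, 63)              # time-of-day gap between visits
v2 = v1 + 8.0 * dt_hours + 20 * rng.standard_normal(63)  # assumed 8 ml/h drift

def cv_within(a, b):
    """Within-individual CV from paired measurements (root-mean-square method)."""
    sd_pair = np.abs(a - b) / np.sqrt(2)
    return np.sqrt(np.mean((sd_pair / ((a + b) / 2)) ** 2))

print(f"CVw raw:            {cv_within(v1, v2):.2%}")

# Linear time-of-day correction: regress the paired difference on the time gap
# and subtract the fitted trend from the second measurement.
slope, intercept = np.polyfit(dt_hours, v2 - v1, 1)
v2_corrected = v2 - (slope * dt_hours + intercept)
print(f"CVw time-corrected: {cv_within(v1, v2_corrected):.2%}")
```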

Relevance: 90.00%

Abstract:

The PM3 semiempirical quantum-mechanical method was found to systematically describe intermolecular hydrogen bonding in small polar molecules. PM3 shows charge transfer from the donor to the acceptor molecule on the order of 0.02-0.06 units of charge when strong hydrogen bonds are formed. The PM3 method is predictive: calculated hydrogen bond energies with an absolute magnitude greater than 2 kcal mol⁻¹ suggest that the global minimum is a hydrogen-bonded complex, while absolute energies less than 2 kcal mol⁻¹ imply that other van der Waals complexes are more stable. The geometries of the PM3 hydrogen-bonded complexes agree with high-resolution spectroscopic observations, gas electron diffraction data, and high-level ab initio calculations. The main limitations of the PM3 method are the underestimation of hydrogen bond lengths by 0.1-0.2 Å for some systems and the underestimation of reliable experimental hydrogen bond energies by approximately 1-2 kcal mol⁻¹. The PM3 method predicts that ammonia is a good hydrogen bond acceptor and a poor hydrogen donor when interacting with neutral molecules. Electronegativity differences between F, N, and O predict that donor strength follows the order F > O > N and acceptor strength follows the order N > O > F. In the calculations presented in this article, the PM3 method mirrors these electronegativity differences, predicting the F-H···N bond to be the strongest and the N-H···F bond the weakest. It appears that the PM3 Hamiltonian is able to model hydrogen bonding because of the reduction of two-center repulsive forces brought about by the parameterization of the Gaussian core-core interactions. The ability of the PM3 method to model intermolecular hydrogen bonding means that reasonably accurate quantum-mechanical calculations can be applied to small biological systems.

Relevance: 90.00%

Abstract:

Pesiqta Rabbati is a unique homiletic midrash that follows the liturgical calendar in its presentation of homilies for festivals and special Sabbaths. This article attempts to utilize Pesiqta Rabbati in order to present a global theory of the literary production of rabbinic/homiletic literature. With respect to Pesiqta Rabbati, it explores such areas as dating; textual witnesses; its integrative apocalyptic meta-narrative; describing and mapping the structure of the text; internal and external constraints that impacted upon the text; text-linguistic analysis; form analysis (problems in the texts and linguistic gap-filling); transmission of the text; strict formalization of a homiletic unit; deconstructing and reconstructing homiletic midrashim based upon form-analytic units of the homily; Neusner’s documentary hypothesis; surface structures of the homiletic unit; and textual variants. The suggested methodology may assist scholars in their production of editions of midrashic works, by eliminating superfluous material, and in their decoding and defining of ancient texts.

Relevance: 90.00%

Abstract:

The purpose of this investigation was to describe the use of linezolid in pediatric inpatient facilities. A retrospective multicenter survey including data from nine participating tertiary-care pediatric inpatient facilities in Germany and Austria was undertaken. Data on 126 off-label linezolid treatment courses administered to 108 patients were documented. The survey covers linezolid treatment for a broad spectrum of clinical indications in children of all age groups; the median age was 6.8 years (interquartile range 0.6-15.5 years; range 0.1-21.2 years; ten patients were older than 18 years but were treated in pediatric inpatient units). Of the 126 treatment courses, 27 (21%) were administered to preterm infants, 64 (51%) to pediatric oncology patients, and 5% to patients soon after liver transplantation. In 25% of courses, the infection was related to a medical device. Intravenous linezolid treatment was started after intensive pre-treatment (up to 11 other antibiotics for a median duration of 14 days) and was changed to enteral administration in only 4% of all intravenous courses. In 39 (53%) of 74 courses administered to children older than 1 week and younger than 12 years of age, the dose was not adjusted to age-related pharmacokinetic parameters. In only 17 courses (13%) was a pediatric infectious disease consultant involved in the clinical decision-making. Linezolid seemed to have contributed to a favorable outcome in 70% of all treatment courses in this survey. Although retrospective, this survey generates valuable data on the off-label use of linezolid and highlights several important clinical aspects in which the use of this rescue antibiotic in children might be improved.

Relevance: 90.00%

Abstract:

The electron Monte Carlo (eMC) dose calculation algorithm available in the Eclipse treatment planning system (Varian Medical Systems) is based on the macro MC method and uses a beam model applicable to Varian linear accelerators. This leads to limitations in accuracy if eMC is applied to non-Varian machines. In this work, eMC is generalized to also allow accurate dose calculations for electron beams from Elekta and Siemens accelerators. First, changes made in the previous study to use eMC for low electron beam energies of Varian accelerators are applied. Then, a generalized beam model is developed using a main electron source and a main photon source representing electrons and photons from the scattering foil, respectively; an edge source of electrons; a transmission source of photons; and a line source of electrons and photons representing the particles from the scrapers or inserts and head-scatter radiation. Regarding the macro MC dose calculation algorithm, the transport code for the secondary particles is improved. The macro MC dose calculations are validated against corresponding dose calculations using EGSnrc in homogeneous and inhomogeneous phantoms. The validation of the generalized eMC is carried out by comparing calculated and measured dose distributions in water for Varian, Elekta and Siemens machines for a variety of beam energies, applicator sizes and SSDs. The comparisons are performed in units of cGy per MU. Overall, agreement between calculated and measured dose distributions for all machine types and all combinations of parameters investigated is found to be within 2% or 2 mm. The results of the dose comparisons suggest that the generalized eMC is now suitable to calculate dose distributions for Varian, Elekta and Siemens linear accelerators with sufficient accuracy in the range of the investigated combinations of beam energies, applicator sizes and SSDs.
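
The "2% or 2 mm" agreement quoted above is a composite dose-difference / distance-to-agreement criterion. A minimal 1D sketch with synthetic profiles and a simplified nearest-point search (an illustration of the criterion, not the validation code used in the study):

```python
import numpy as np

def passes_2pct_2mm(x, d_calc, d_meas, dd=0.02, dta=2.0, tol=1e-2):
    """Per-point test: dose difference within dd of the maximum dose, or a
    measured point with (nearly) the same dose within dta millimetres."""
    ok = np.zeros(d_calc.shape, dtype=bool)
    for i, (xi, di) in enumerate(zip(x, d_calc)):
        if abs(di - d_meas[i]) <= dd * d_meas.max():       # dose-difference test
            ok[i] = True
            continue
        close = np.abs(d_meas - di) <= tol * d_meas.max()  # same-dose candidates
        ok[i] = close.any() and np.min(np.abs(x[close] - xi)) <= dta
    return ok

x = np.linspace(0.0, 50.0, 501)              # positions in mm
d_meas = np.exp(-((x - 20.0) / 12.0) ** 2)   # synthetic measured profile
d_calc = np.exp(-((x - 20.5) / 12.0) ** 2)   # calculated profile, shifted 0.5 mm
print(f"{passes_2pct_2mm(x, d_calc, d_meas).mean():.1%} of points pass")
```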

Relevance: 90.00%

Abstract:

Following an extensive survey of sources on urban development and comparative analyses of Bratislava and other major Central European cities and Slovak regional centres, Divinsky completed a detailed study of Bratislava's spatial structure using the most recent approaches of the so-called Belgian school. He also produced an intraurban regionalisation of Bratislava as a multi-structural interactive model, mapped and characterised by the cardinal parameters, processes, trends and inequalities of population and housing in each spatial element of the model. The field survey entailed a seven-month physical investigation of the territory using a "street by street, block by block, house by house and locality by locality" system to ensure that no areas were missed. A second field survey was carried out two years later to check on transformations. An important feature of the research was the concept of the morphological city, defined as "a continuously built-up area of all urban functions (i.e. excluding agricultural lands and forests lying outside the city which serve for half-day recreation) made up of spatial-structural units fulfilling certain criteria". The most important criterion was a minimum population density per unit of no less than 650 persons per square kilometre, except in the case of units totally surrounded by units of higher densities, where it could be lower. The morphological city as defined here includes only 36% of the territory of the administrative city but 95% of the population, giving a much higher population density that better reflects the urban reality of Bratislava.
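
The delimitation rule described above can be sketched as a simple filter over spatial-structural units; the units, densities and neighbour lists are invented, and reading "surrounded by units of higher densities" as "all neighbours meet the threshold" is an assumption about the criterion:

```python
THRESHOLD = 650.0  # persons per square kilometre

# Hypothetical spatial-structural units with population densities and neighbours.
density = {"A": 900.0, "B": 400.0, "C": 1200.0, "D": 150.0, "E": 100.0}
neighbours = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"],
              "D": ["C", "E"], "E": ["D"]}

dense = {u for u, d in density.items() if d >= THRESHOLD}
# Exception: a low-density unit counts if all its neighbours meet the threshold.
morphological_city = dense | {
    u for u in density
    if u not in dense and all(n in dense for n in neighbours[u])
}
print(sorted(morphological_city))  # ['A', 'B', 'C']
```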