907 results for Units of landscape


Relevance: 90.00%

Abstract:

Within the Schleswig-Holstein study area, 39,712 topographic depressions were detected using ESRI ArcMap 9.3 and 10.0. Data preparation was followed by further calculations in MATLAB R2010b. Each object was spatially intersected with its individual attributes, including area, perimeter, centroid coordinates, depth and maximum depth of the depression, and shape factors such as roundness, convexity and elongation. The methods presented were aimed at answering three questions: Are negative landforms suitable for distinguishing and identifying landscape units and ice advances? Is there a coupling between depressions in the present-day topography and deep geological structures? Can sinks of different origin be separated on the basis of their shape characteristics? The classification of the major landscape units is based on the assumption that young moraine areas, their outwash plains and old moraine areas can each be delimited by characteristic closed depressions such as dead-ice kettles, lakes, etc. Such depressions are normally rather rare in nature but are considered typical of former glacial landscapes. The aim was to differentiate the main geological units, ice advances and moraine areas of the last glaciations. A detection grid based on square cells was used for the analysis. The results show that using depressions alone to classify landscape units yields overall accuracies of up to 71.4%, meaning that roughly three out of four detection cells can be assigned correctly. Young moraines, old moraines, periglacial outwash plains and Holocene areas can be distinguished from one another and assigned correctly with high confidence using the depressions. This shows that certain sink forms are indeed typical of the respective units.
The sinks detected in the first step were spatially intersected with additional geological information to investigate to what extent natural depressions are of purely glacial origin or whether their expression is also linked to deep geological structures. 25,349 (63.88%) of all sinks are smaller than 10,000 m², lie in young moraine areas and can presumably be attributed to glacial and periglacial influences. 2,424 depressions lie within areas of subglacial channels. 1,529 detected depressions lie within subsidence areas, of which 1,033 are located within the marshlands in the west. 919 large structures over 1 km in size along the North Sea can, among other things, be matched particularly well with compaction zones of Elsterian channels. 344 of these depressions are also associated with tunnel valleys in the subsurface. This parallelism of depressions and the tunnel valleys, which are in places more than 100 m deep, can be attributed to sediment compaction. A connection with the decomposition of postglacial organic material is also conceivable. In addition, negative landforms showing links to near-surface fault structures were detected within a distance of 10 km around the Miocene-active flanks of the Glückstadt Graben. This is an indication of graben activity during and towards the end of the glaciation and during the Holocene. Many of these fault-related sinks are also associated with tunnel valleys. Accordingly, three interacting processes are identified that can be linked to the formation of the depressions. One possible interpretation is that the eastern flank of the Glückstadt Graben responded to the load of the Elsterian ice sheet, while subglacial drainage channels formed simultaneously along the zones of weakness. During the warm stages these were largely filled with peat and unconsolidated sediments.
The glacier advances of the late Weichselian reactivated the flanks, and the unconsolidated material was additionally excavated by the ice, creating large lakes such as the Großer Plöner See. In total, 29 large depressions of 5 km or more in size were identified in Schleswig-Holstein that are at least partly connected to basin subsidence and activity of the graben flanks, or even originate from them. The final sub-study dealt with differentiating sinks according to their potential genesis and with distinguishing natural from artificial depressions. For this purpose a DEM covering a total area of 252 km² in northern Lower Saxony was used. The results show that glacially formed depressions have good roundness values, and elongation and eccentricity likewise indicate rather compact forms. Linear negative structures are often rivers or oxbows and can be identified as Holocene structures. In contrast to the potentially natural sink forms, artificially created depressions tend to be angular or irregular and usually do not tend towards compact shapes. Three main classes of topographic depressions could be identified and separated from one another: potentially glacial sinks (dead-ice forms); rivers, side channels and oxbows; and artificial sinks. Classifying sinks by shape parameters is a useful instrument for distinguishing different types and for excluding artificial sinks before processing in geological applications. However, the results were found to depend essentially on the resolution of the underlying elevation model.
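The shape factors named in the abstract (roundness, elongation) can be sketched with the standard circularity and axis-ratio definitions; the exact formulas used in the thesis are not given, so this minimal Python illustration is an assumption:

```python
import math

def shape_factors(area, perimeter, major_axis, minor_axis):
    """Common form descriptors for a closed depression outline.
    These are the standard definitions; the thesis may use variants."""
    roundness = 4.0 * math.pi * area / perimeter ** 2   # 1.0 for a perfect circle
    elongation = major_axis / minor_axis                # 1.0 for a compact form
    return roundness, elongation

# A circular, kettle-hole-like depression of radius 50 m:
r = 50.0
round_c, elong_c = shape_factors(math.pi * r**2, 2 * math.pi * r, 2 * r, 2 * r)

# An elongated, channel-like form approximated as a 400 m x 40 m rectangle:
round_r, elong_r = shape_factors(400 * 40, 2 * (400 + 40), 400, 40)
```

Compact glacial forms score near 1.0 on both measures, while channel-like forms combine low roundness with high elongation, which is the contrast the classification exploits.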

Relevance: 90.00%

Abstract:

Lattice Quantum Chromodynamics (LQCD) is the preferred tool for obtaining non-perturbative results from QCD in the low-energy regime. It has by now entered the era in which high-precision calculations for a number of phenomenologically relevant observables at the physical point, with dynamical quark degrees of freedom and controlled systematics, become feasible. Despite these successes there are still quantities where control of systematic effects is insufficient. The subject of this thesis is the exploration of the potential of today's state-of-the-art simulation algorithms for non-perturbatively $\mathcal{O}(a)$-improved Wilson fermions to produce reliable results in the chiral regime and at the physical point, both for zero and non-zero temperature. Important in this context is control over the chiral extrapolation. This thesis is concerned with two particular topics, namely the computation of hadronic form factors at zero temperature, and the properties of the phase transition in the chiral limit of two-flavour QCD.

The electromagnetic iso-vector form factor of the pion provides a platform to study systematic effects and the chiral extrapolation for observables connected to the structure of mesons (and baryons). Mesonic form factors are computationally simpler than their baryonic counterparts but share most of the systematic effects. This thesis contains a comprehensive study of the form factor in the regime of low momentum transfer $q^2$, where the form factor is connected to the charge radius of the pion. A particular emphasis is on the region very close to $q^2=0$, which has not been explored so far, neither in experiment nor in LQCD. The results for the form factor close the gap between the smallest spacelike $q^2$-value available so far and $q^2=0$, and reach an unprecedented accuracy with full control over the main systematic effects. This enables the model-independent extraction of the pion charge radius.
The results for the form factor and the charge radius are used to test chiral perturbation theory ($\chi$PT) and are thereby extrapolated to the physical point and the continuum. The final result in units of the hadronic radius $r_0$ is
$$ \left\langle r_\pi^2 \right\rangle^{\rm phys}/r_0^2 = 1.87 \: \left(^{+12}_{-10}\right)\left(^{+\:4}_{-15}\right) \quad \textnormal{or} \quad \left\langle r_\pi^2 \right\rangle^{\rm phys} = 0.473 \: \left(^{+30}_{-26}\right)\left(^{+10}_{-38}\right)(10) \: \textnormal{fm}^2 \;, $$
which agrees well with the results from other measurements in LQCD and experiment. Note that this is the first continuum-extrapolated result for the charge radius from LQCD which has been extracted from measurements of the form factor in the region of small $q^2$.

The order of the phase transition in the chiral limit of two-flavour QCD and the associated transition temperature are the last unknown features of the phase diagram at zero chemical potential. The two possible scenarios are a second-order transition in the $O(4)$ universality class or a first-order transition. Since direct simulations in the chiral limit are not possible, the transition can only be investigated by simulating at non-zero quark mass with a subsequent chiral extrapolation, guided by the universal scaling in the vicinity of the critical point. The thesis presents the setup and first results from a study on this topic. The study provides an ideal platform to test the potential and limits of today's simulation algorithms at finite temperature. The results from a first scan at a constant zero-temperature pion mass of about 290 MeV are promising, and it appears that simulations down to physical quark masses are feasible. Of particular relevance for the order of the chiral transition is the strength of the anomalous breaking of the $U_A(1)$ symmetry at the transition point.
It can be studied by looking at the degeneracies of the correlation functions in scalar and pseudoscalar channels. For the temperature scan reported in this thesis the breaking is still pronounced in the transition region and the symmetry becomes effectively restored only above $1.16\:T_C$. The thesis also provides an extensive outline of research perspectives and includes a generalisation of the standard multi-histogram method to explicitly $\beta$-dependent fermion actions.
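The connection between the form factor at small $q^2$ and the charge radius rests on the standard low-momentum expansion $F(Q^2) \approx 1 - Q^2 \langle r_\pi^2\rangle/6$ (spacelike $Q^2 = -q^2$), so $\langle r_\pi^2\rangle$ follows from the fitted slope at the origin. A generic numerical sketch of that slope extraction (function name and synthetic data are illustrative, not the $\chi$PT-based analysis of the thesis):

```python
import numpy as np

HBARC_FM = 0.1973269804  # hbar*c in GeV*fm, to convert GeV^-2 -> fm^2

def charge_radius_sq(q2, F, degree=2):
    """Extract <r^2> (in fm^2) from the slope of F at Q^2 = 0.
    q2 is spacelike momentum transfer in GeV^2; F is dimensionless.
    A low-order polynomial fit stands in for the full chi-PT fit."""
    coeffs = np.polyfit(q2, F, degree)
    slope = np.polyval(np.polyder(coeffs), 0.0)   # dF/dQ^2 at Q^2 = 0, in GeV^-2
    return -6.0 * slope * HBARC_FM**2

# Synthetic form-factor data generated with <r^2> = 0.45 fm^2:
r2_true = 0.45
q2 = np.linspace(0.0, 0.1, 20)                    # GeV^2
F = 1.0 - q2 * r2_true / (6.0 * HBARC_FM**2)
r2_est = charge_radius_sq(q2, F)
```

Data points very close to $Q^2 = 0$ pin down this slope directly, which is why the region emphasized in the thesis enables a model-independent extraction.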

Relevance: 90.00%

Abstract:

Since historical times, coastal areas throughout the eastern Mediterranean have been exposed to tsunami hazard. For many decades, knowledge about palaeotsunamis was based solely on historical accounts. However, results from timeline analyses reveal different characteristics affecting the quality of the dataset (i.e. distribution of data, temporal thinning backward of events, local periodization phenomena) that emphasize the fragmentary character of the historical data. As an increasing number of geo-scientific studies give convincing examples of well-dated tsunami signatures not reported in catalogues, the incompleteness of the record is a major problem for palaeotsunami research. While the compilation of historical data allows a first approach to identifying areas vulnerable to tsunamis, it must not be regarded as a reliable basis for hazard assessment. Considering the increasing economic significance of coastal regions (e.g. for mass tourism) and the constantly growing coastal population, our knowledge of the local, regional and supraregional tsunami hazard along Mediterranean coasts has to be improved. For setting up a reliable tsunami risk assessment and developing risk mitigation strategies, it is of major importance (i) to identify areas at risk and (ii) to estimate the intensity and frequency of potential events. This approach is most promising when based on palaeotsunami research seeking to detect areas of high palaeotsunami hazard, to calculate recurrence intervals and to document palaeotsunami destructiveness in terms of wave run-up, inundation and long-term coastal change. Within the past few years, geo-scientific studies of palaeotsunami events have provided convincing evidence that throughout the Mediterranean ancient harbours were subject to strong tsunami-related disturbance or destruction.
Constructed to protect ships from storm and wave activity, harbours provide especially sheltered and quiescent environments and thus turn out to be valuable geo-archives for tsunamigenic high-energy impacts on coastal areas. Directly exposed to the Hellenic Trench and extensive local fault systems, coastal areas in the Ionian Sea and the Gulf of Corinth carry a considerably high risk of tsunami events. Geo-scientific and geoarchaeological studies carried out in the environs of the ancient harbours of Krane (Cefalonia Island), Lechaion (Corinth, Gulf of Corinth) and Kyllini (western Peloponnese) comprised on-shore and near-shore vibracoring and subsequent sedimentological, geochemical and microfossil analyses of the recovered sediments. Geophysical methods such as electrical resistivity tomography and ground-penetrating radar were applied in order to detect subsurface structures and to verify stratigraphical patterns derived from vibracores over long distances. The overall geochronological framework of each study area is based on radiocarbon dating of biogenic material and age determination of diagnostic ceramic fragments. Results presented in this study provide distinct evidence of multiple palaeotsunami landfalls in the investigated areas. Tsunami signatures encountered in the environs of Krane, Lechaion and Kyllini include (i) coarse-grained allochthonous marine sediments intersecting silt-dominated quiescent harbour deposits and/or shallow marine environments, (ii) disturbed microfaunal assemblages and/or (iii) distinct geochemical fingerprints, as well as (iv) geo-archaeological destruction layers and (v) extensive units of beachrock-type calcarenitic tsunamites. For Krane, geochronological data yielded termini ad or post quem (maximum ages) for tsunami event generations dated to 4150 ± 60 cal BC, ~ 3200 ± 110 cal BC, ~ 650 ± 110 cal BC, and ~ 930 ± 40 cal AD.
Results for Lechaion suggest that the harbour was hit by strong tsunami impacts in the 8th-6th century BC, the 1st-2nd century AD and the 6th century AD. At Kyllini, the harbour site was affected by tsunami impact between the late 7th and early 4th century BC and between the 4th and 6th century AD. In the case of Lechaion and Kyllini, the final destruction of the harbour facilities also seems to be related to tsunami impact. Comparing the tsunami signals obtained for each study area with geo-scientific data on palaeotsunami events from other sites indicates that the investigated harbour sites represent excellent geo-archives for supra-regional mega-tsunamis.

Relevance: 90.00%

Abstract:

The main objective of this study is to reveal the housing patterns in Cairo, one of the most rapidly urbanizing cities in the developing world. The study outlines the evolution of the housing problem and its influencing factors in Egypt generally and in Cairo specifically. It takes into account the political transition from the national state economy to the open door policy, the neo-liberal period and finally the housing situation after the January 2011 Revolution. The resulting housing patterns in Cairo Governorate were identified as (1) squatter settlements, (2) semi-informal settlements, (3) deteriorated inner pockets, and (4) formal settlements.

The study concluded that the housing patterns in Cairo reflect a multifaceted problem, resulting in: (1) an imbalance between the high demand for affordable housing units for low-income families and the oversupply of upper-income housing, (2) the vast expansion of informal areas on both agricultural and desert lands, (3) the deterioration of the old parts of Cairo without upgrading or appropriate replacement of the housing structure, and (4) the high vacancy rate of newly constructed apartments.

The evolution and development of the current housing problem were attributed to a number of factors: demographic factors, represented by the rapid growth of the population associated with urbanization under the dictates of poverty, and the progressive increase in the prices of both buildable land and building materials. The study underlined that the current pattern of population density in Cairo Governorate is a direct result of the current housing problems. Around the depopulating core of the city, a ring of relatively stable areas in terms of population density has developed. Population densification, at the expense of the depopulating core, characterizes the peripheries of the city.
The population density in relation to the built-up area was examined using a Landsat-7 ETM+ image (176/039). The image was acquired on 24 August 2006 and is considered an ideal source for land cover classification in Cairo since it is compatible with the 2006 population census.

Considering that the socio-economic setting is a driving force of change in housing demand and an outcome of the accumulated housing problems, the socio-economic deprivations of the inhabitants of Cairo Governorate were analyzed. Small administrative units in Cairo were categorized into four classes based on the Socio-Economic Opportunity Index (SEOI), developed from multiple domains focusing on the economic, educational and health situation of the residential population. The results show four levels of deprivation which are consistent with the existing housing patterns. Informal areas on state-owned land fall into the first category, the "severely deprived" level. Ex-formal areas or deteriorated inner pockets are characterized as "deprived" urban quarters. Semi-informal areas on agricultural land concentrate in the third category of "medium deprived" settlements. Formal or planned areas mostly belong to the fourth category, the "less deprived" parts of Cairo Governorate.

For a better understanding of the differences and similarities among the various housing patterns, four areas based on the smallest administrative unit, the shiakha, were selected for a detailed study.
These areas are: (1) El-Ma'desa, representing a severely deprived squatter settlement; (2) Ain el-Sira, an example of an ex-formal deprived area; (3) El-Marg el-Qibliya, selected as a typical semi-informal and medium deprived settlement; and (4) El-Nozha, representing a formal and less deprived area.

The analysis at shiakha level reveals how the socio-economic characteristics and the unregulated urban growth are reflected in the morphological characteristics of the housing patterns in terms of street network and types of residential buildings as well as types of housing tenure. They are also reflected in the functional characteristics in terms of land use mix and its degree of compatibility. It is concluded that the provision of and accessibility to public services is a performance measure that distinguishes the dysfunctional structure dominating squatter and semi-informal settlements on the one hand from the ample public services and accessibility of formal areas on the other.
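The abstract names the SEOI domains and the four deprivation levels but not the index construction itself; as an illustrative sketch only, assuming equal domain weights and equal-width class cut points (both assumptions are mine, not the study's):

```python
def seoi(economic, education, health):
    """Toy composite of three domain scores, each pre-scaled to the 0..1 range.
    The actual SEOI may weight or combine the domains differently."""
    return (economic + education + health) / 3.0

def deprivation_class(score):
    """Map a 0..1 opportunity score onto the four levels named in the text.
    Equal-width cut points are an illustrative choice."""
    if score < 0.25:
        return "severely deprived"
    if score < 0.50:
        return "deprived"
    if score < 0.75:
        return "medium deprived"
    return "less deprived"

# A unit scoring low in all three domains lands in the first category:
low_unit = deprivation_class(seoi(0.1, 0.2, 0.15))
```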

Relevance: 90.00%

Abstract:

A key challenge for land change science is linking land cover information to human-environment interactions over larger spatial areas. Crucial information on land use types and the people involved is still lacking. In Lao PDR, a country facing rapid and multilevel land change processes, this lack of information hinders evidence-based policy- and decision-making. We present a new approach for describing landscape mosaics at the national level and relate it to village-level Population Census information. Results showed that swidden agricultural landscapes, involving 17% of the population, dominate 28% of the country, while permanent agricultural landscapes involve 74% of the population in 29% of the country.

Relevance: 90.00%

Abstract:

The PM3 semiempirical quantum-mechanical method was found to systematically describe intermolecular hydrogen bonding in small polar molecules. PM3 shows charge transfer from the donor to acceptor molecules on the order of 0.02-0.06 units of charge when strong hydrogen bonds are formed. The PM3 method is predictive; calculated hydrogen bond energies with an absolute magnitude greater than 2 kcal mol⁻¹ suggest that the global minimum is a hydrogen-bonded complex, while absolute energies less than 2 kcal mol⁻¹ imply that other van der Waals complexes are more stable. The geometries of the PM3 hydrogen-bonded complexes agree with high-resolution spectroscopic observations, gas electron diffraction data, and high-level ab initio calculations. The main limitations of the PM3 method are the underestimation of hydrogen bond lengths by 0.1-0.2 Å for some systems and the underestimation of reliable experimental hydrogen bond energies by approximately 1-2 kcal mol⁻¹. The PM3 method predicts that ammonia is a good hydrogen bond acceptor and a poor hydrogen donor when interacting with neutral molecules. Electronegativity differences between F, N, and O predict that donor strength follows the order F > O > N and acceptor strength follows the order N > O > F. In the calculations presented in this article, the PM3 method mirrors these electronegativity differences, predicting the F-H···N bond to be the strongest and the N-H···F bond the weakest. It appears that the PM3 Hamiltonian is able to model hydrogen bonding because of the reduction of two-center repulsive forces brought about by the parameterization of the Gaussian core-core interactions. The ability of the PM3 method to model intermolecular hydrogen bonding means reasonably accurate quantum-mechanical calculations can be applied to small biologic systems.

Relevance: 90.00%

Abstract:

Pesiqta Rabbati is a unique homiletic midrash that follows the liturgical calendar in its presentation of homilies for festivals and special Sabbaths. This article attempts to utilize Pesiqta Rabbati in order to present a global theory of the literary production of rabbinic/homiletic literature. In respect to Pesiqta Rabbati it explores such areas as dating, textual witnesses, integrative apocalyptic meta-narrative, describing and mapping the structure of the text, internal and external constraints that impacted upon the text, text linguistic analysis, form-analysis: problems in the texts and linguistic gap-filling, transmission of text, strict formalization of a homiletic unit, deconstructing and reconstructing homiletic midrashim based upon form-analytic units of the homily, Neusner’s documentary hypothesis, surface structures of the homiletic unit, and textual variants. The suggested methodology may assist scholars in their production of editions of midrashic works by eliminating superfluous material and in their decoding and defining of ancient texts.

Relevance: 90.00%

Abstract:

The purpose of this investigation was to describe the use of linezolid in pediatric inpatient facilities. A retrospective multicenter survey including data from nine participating tertiary care pediatric inpatient facilities in Germany and Austria was undertaken. Data on 126 off-label linezolid treatment courses administered to 108 patients were documented. The survey comprises linezolid treatment in a broad spectrum of clinical indications to children of all age groups; the median age was 6.8 years (interquartile range 0.6-15.5 years; range 0.1-21.2 years; ten patients were older than 18 years of age but were treated in pediatric inpatient units). Of the 126 treatment courses, 27 (21%) were administered to preterm infants, 64 (51%) to pediatric oncology patients, and 5% to patients soon after liver transplantation. In 25%, the infection was related to a medical device. Linezolid iv treatment was started after intensive pre-treatment (up to 11 other antibiotics for a median duration of 14 days) and changed to enteral administration in only 4% of all iv courses. In 39 (53%) of 74 courses administered to children older than 1 week and younger than 12 years of age, the dose was not adjusted to age-related pharmacokinetic parameters. In only 17 courses (13%) was a pediatric infectious disease consultant involved in the clinical decision algorithm. Linezolid seemed to have contributed to a favorable outcome in 70% of all treatment courses in this survey. Although retrospective, this survey generates interesting data on the off-label use of linezolid and highlights several important clinical aspects in which the use of this rescue antibiotic in children might be improved.

Relevance: 90.00%

Abstract:

Data on antimicrobial use play a key role in the development of policies for the containment of antimicrobial resistance. On-farm data could provide a detailed overview of the antimicrobial use, but technical and methodological aspects of data collection and interpretation, as well as data quality need to be further assessed. The aims of this study were (1) to quantify antimicrobial use in the study population using different units of measurement and contrast the results obtained, (2) to evaluate data quality of farm records on antimicrobial use, and (3) to compare data quality of different recording systems. During 1 year, data on antimicrobial use were collected from 97 dairy farms. Antimicrobial consumption was quantified using: (1) the incidence density of antimicrobial treatments; (2) the weight of active substance; (3) the used daily dose and (4) the used course dose for antimicrobials for intestinal, intrauterine and systemic use; and (5) the used unit dose, for antimicrobials for intramammary use. Data quality was evaluated by describing completeness and accuracy of the recorded information, and by comparing farmers' and veterinarians' records. Relative consumption of antimicrobials depended on the unit of measurement: used doses reflected the treatment intensity better than weight of active substance. The use of antimicrobials classified as high priority was low, although under- and overdosing were frequently observed. Electronic recording systems allowed better traceability of the animals treated. Recording drug name or dosage often resulted in incomplete or inaccurate information. Veterinarians tended to record more drugs than farmers. The integration of veterinarian and farm data would improve data quality.
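The dose-based units of measurement described above all normalize an amount of active substance by a standard dose and by the population at risk. A generic sketch of this idea (the function name and the herd numbers are illustrative, not the study's definitions, which are set per antimicrobial class and administration route):

```python
def treatment_incidence(amount_mg, dose_mg_per_kg_day, animal_kg, n_animals, days):
    """Used daily doses per animal-day: how intensively a herd was treated.
    amount_mg: total active substance used over the period.
    dose_mg_per_kg_day: the (used or defined) daily dose per kg body weight."""
    daily_doses = amount_mg / (dose_mg_per_kg_day * animal_kg)  # animal-days of treatment
    return daily_doses / (n_animals * days)

# Illustrative herd: 600,000 mg used in one year on 60 cows of 600 kg,
# with a daily dose of 10 mg/kg -> 100 animal-days of treatment.
ti = treatment_incidence(600000.0, 10.0, 600.0, 60, 365)
```

This is why the abstract notes that used doses reflect treatment intensity better than weight of active substance: a gram of a low-dose, high-priority drug represents far more treatment days than a gram of a high-dose drug.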

Relevance: 90.00%

Abstract:

The electron Monte Carlo (eMC) dose calculation algorithm available in the Eclipse treatment planning system (Varian Medical Systems) is based on the macro MC method and uses a beam model applicable to Varian linear accelerators. This leads to limitations in accuracy if eMC is applied to non-Varian machines. In this work eMC is generalized to also allow accurate dose calculations for electron beams from Elekta and Siemens accelerators. First, changes made in the previous study to use eMC for low electron beam energies of Varian accelerators are applied. Then, a generalized beam model is developed using a main electron source and a main photon source representing electrons and photons from the scattering foil, respectively, an edge source of electrons, a transmission source of photons and a line source of electrons and photons representing the particles from the scrapers or inserts and head scatter radiation. Regarding the macro MC dose calculation algorithm, the transport code of the secondary particles is improved. The macro MC dose calculations are validated with corresponding dose calculations using EGSnrc in homogeneous and inhomogeneous phantoms. The validation of the generalized eMC is carried out by comparing calculated and measured dose distributions in water for Varian, Elekta and Siemens machines for a variety of beam energies, applicator sizes and SSDs. The comparisons are performed in units of cGy per MU. Overall, a general agreement between calculated and measured dose distributions for all machine types and all combinations of parameters investigated is found to be within 2% or 2 mm. The results of the dose comparisons suggest that the generalized eMC is now suitable to calculate dose distributions for Varian, Elekta and Siemens linear accelerators with sufficient accuracy in the range of the investigated combinations of beam energies, applicator sizes and SSDs.
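The 2% or 2 mm agreement criterion used above can be illustrated with a deliberately simplified 1-D pass/fail check. This is only the basic idea, under assumptions of my own (local dose normalization, discrete points); clinical gamma analysis interpolates profiles and combines the two criteria into a single index:

```python
def passes_2pct_2mm(x_meas, d_meas, x_calc, d_calc, dose_tol=0.02, dist_tol=2.0):
    """Simplified dose-difference / distance-to-agreement test on 1-D profiles.
    A measured point passes if some calculated point within dist_tol (mm)
    agrees with it to within dose_tol (fraction of the local measured dose)."""
    results = []
    for xm, dm in zip(x_meas, d_meas):
        ok = any(
            abs(xc - xm) <= dist_tol and abs(dc - dm) <= dose_tol * dm
            for xc, dc in zip(x_calc, d_calc)
        )
        results.append(ok)
    return results

# Illustrative profiles in cGy/MU at positions 0, 10, 20 mm:
res = passes_2pct_2mm([0.0, 10.0, 20.0], [100.0, 80.0, 50.0],
                      [0.0, 10.0, 20.0], [101.0, 79.0, 60.0])
```

The first two points agree within 2%; the third disagrees by 20% with no nearby agreeing point, so it fails.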

Relevance: 90.00%

Abstract:

Anaerobic digestion of food scraps has the potential to accomplish waste minimization, energy production, and compost or humus production. At Bucknell University, removal of food scraps from the waste stream could reduce municipal solid waste transportation costs and landfill tipping fees, and provide methane and humus for use on campus. To determine the suitability of food waste produced at Bucknell for high-solids anaerobic digestion (HSAD), a year-long characterization study was conducted. Physical and chemical properties, waste biodegradability, and annual production of biodegradable waste were assessed. Bucknell University food and landscape waste was digested at pilot-scale for over a year to test performance at low and high loading rates, ease of operation at 20% solids, benefits of codigestion of food and landscape waste, and toprovide digestate for studies to assess the curing needs of HSAD digestate. A laboratory-scale curing study was conducted to assess the curing duration required to reduce microbial activity, phytotoxicity, and odors to acceptable levels for subsequent use ofhumus. The characteristics of Bucknell University food and landscape waste were tested approximately weekly for one year, to determine chemical oxygen demand (COD), total solids (TS), volatile solids (VS), and biodegradability (from batch digestion studies). Fats, oil, and grease and total Kjeldahl nitrogen were also tested for some food waste samples. Based on the characterization and biodegradability studies, Bucknell University dining hall food waste is a good candidate for HSAD. During batch digestion studies Bucknell University food waste produced a mean of 288 mL CH4/g COD with a 95%confidence interval of 0.06 mL CH4/g COD. 
The addition of landscape waste for digestion increased methane production from both food and landscape waste; however, because the landscape waste biodegradability was extremely low the increase was small.Based on an informal waste audit, Bucknell could collect up to 100 tons of food waste from dining facilities each year. The pilot-scale high-solids anaerobic digestion study confirmed that digestion ofBucknell University food waste combined with landscape waste at a low organic loading rate (OLR) of 2 g COD/L reactor volume-day is feasible. During low OLR operation, stable reactor performance was demonstrated through monitoring of biogas production and composition, reactor total and volatile solids, total and soluble chemical oxygendemand, volatile fatty acid content, pH, and bicarbonate alkalinity. Low OLR HSAD of Bucknell University food waste and landscape waste combined produced 232 L CH4/kg COD and 229 L CH4/kg VS. When OLR was increased to high loading (15 g COD/L reactor volume-day) to assess maximum loading conditions, reactor performance became unstable due to ammonia accumulation and subsequent inhibition. The methaneproduction per unit COD also decreased (to 211 L CH4/kg COD fed), although methane production per unit VS increased (to 272 L CH4/kg VS fed). The degree of ammonia inhibition was investigated through respirometry in which reactor digestate was diluted and exposed to varying concentrations of ammonia. Treatments with low ammoniaconcentrations recovered quickly from ammonia inhibition within the reactor. The post-digestion curing process was studied at laboratory-scale, to provide a preliminary assessment of curing duration. Digestate was mixed with woodchips and incubated in an insulated container at 35 °C to simulate full-scale curing self-heatingconditions. Degree of digestate stabilization was determined through oxygen uptake rates, percent O2, temperature, volatile solids, and Solvita Maturity Index. 
Phytotoxicity was determined through observation of volatile fatty acid and ammonia concentrations. Stabilization of organics and elimination of phytotoxic compounds (after 10–15 days of curing) preceded significant reductions of volatile sulfur compounds (hydrogen sulfide, methanethiol, and dimethyl sulfide) after 15–20 days of curing. Bucknell University food waste has high biodegradability and is suitable for high-solids anaerobic digestion; however, it has a low C:N ratio, which can result in ammonia accumulation under some operating conditions. The low biodegradability of Bucknell University landscape waste limits the amount of bioavailable carbon it can contribute, making it unsuitable for use as a cosubstrate to increase the C:N ratio of food waste. Additional research is indicated to identify cosubstrates with higher biodegradabilities that may allow successful HSAD of Bucknell University food waste at high OLRs. Candidate cosubstrates include office paper, field residues, and grease trap waste. A brief curing period of less than 3 weeks was sufficient to produce viable humus from digestate produced by low OLR HSAD of food and landscape waste.
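The C:N limitation noted above can be illustrated with a simple blending calculation over dry masses and elemental mass fractions. The carbon and nitrogen fractions below are typical literature-style values chosen for illustration, not measurements from the study:

```python
def blend_c_to_n(masses, carbon_fracs, nitrogen_fracs):
    """C:N ratio of a mixture, from dry masses and C/N mass fractions."""
    total_c = sum(m * c for m, c in zip(masses, carbon_fracs))
    total_n = sum(m * n for m, n in zip(masses, nitrogen_fracs))
    return total_c / total_n

# Assumed compositions: food waste C:N ~ 15 (C 45%, N 3%);
# office paper C:N ~ 200 (C 40%, N 0.2%)
food_only = blend_c_to_n([1.0], [0.45], [0.03])               # C:N = 15
mix = blend_c_to_n([1.0, 1.0], [0.45, 0.40], [0.03, 0.002])   # C:N rises to ~27
```

Under these assumed values, a 1:1 blend with a high-C:N cosubstrate roughly doubles the C:N ratio, which is the rationale for the cosubstrate candidates named above.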

Relevance: 90.00%

Abstract:

Following an extensive survey of sources on urban development and comparative analyses of Bratislava and other major Central European cities and Slovak regional centres, Divinsky completed a detailed study of Bratislava's spatial structure using the most recent approaches of the so-called Belgian school. He also produced an intraurban regionalisation of Bratislava as a multi-structural interactive model, mapped and characterised by the cardinal parameters, processes, trends and inequalities of population and housing in each spatial element of the model. The field survey entailed a seven-month physical investigation of the territory using a "street by street, block by block, house by house and locality by locality" system to ensure that no areas were missed. A second field survey was carried out two years later to check on transformations. An important feature of the research was the concept of the morphological city, defined as "a continuously built-up area of all urban functions (i.e. excluding agricultural lands and forests lying outside the city which serve for half-day recreation) made up of spatial-structural units fulfilling certain criteria". The most important criterion was a minimum population density per unit of no less than 650 persons per square kilometre, except in the case of units totally surrounded by units of higher densities, where it could be lower. The morphological city as defined here includes only 36% of the territory of the administrative city but 95% of the population, giving a much higher population density which better reflects the urban reality of Bratislava.
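The density criterion defining the morphological city, including the exception for enclosed units, can be sketched as a filter over spatial units. The unit populations, areas, and adjacencies below are hypothetical, chosen only to exercise both the threshold rule and the exception:

```python
def morphological_city(units, neighbors, min_density=650.0):
    """Select spatial units belonging to the morphological city.

    units:     {unit_id: (population, area_km2)}
    neighbors: {unit_id: set of adjacent unit ids}
    A unit qualifies if its density >= min_density persons/km2, or if
    every neighbouring unit has a higher density than its own (the
    'totally surrounded by higher-density units' exception).
    """
    density = {u: pop / area for u, (pop, area) in units.items()}
    selected = set()
    for u, d in density.items():
        if d >= min_density:
            selected.add(u)
        elif neighbors[u] and all(density[v] > d for v in neighbors[u]):
            selected.add(u)
    return selected

# Hypothetical units: B is below the threshold but fully enclosed by
# denser units A and C; D and E sit on the low-density fringe.
units = {"A": (2000, 1.0), "B": (300, 1.0), "C": (1300, 1.0),
         "D": (100, 1.0), "E": (100, 1.0)}
neighbors = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"},
             "D": {"C", "E"}, "E": {"D"}}
city = morphological_city(units, neighbors)  # {"A", "B", "C"}
```

Unit B is admitted by the enclosure exception, while the fringe units D and E are excluded, mirroring how the definition trims low-density administrative territory.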

Relevance: 90.00%

Abstract:

In this research the supportive role of the family in coping with everyday problems was studied using two large data sets. The results show the importance of the structural aspect of social support. Mapping individual preferences to support referents showed the crucial role of spouse and parents in solving everyday problems. The individual choices of particular support referents could be fairly accurately predicted from knowledge of the composition of the family, in both categorical regression and logit models. By contrast, a wide range of socioeconomic, social and demographic indicators proved far less predictive of the criterion variable. Residence in small cities and indicators of extreme occupational strata were particularly predictive of the choice of support referent. The supportive role of the family was also traced in the personal projects of young adults, which were seen as ecological, natural and dynamic middle-level units of analysis of personality. Different aspects of personal projects, including reliance on social support referents, turned out to be highly interrelated. On the one hand, expectations of support were determined by the content of the project; on the other, expected social support also influenced the content of the project. Sivuha sees this as one of the ways others can enter self-structures.
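The logit prediction of support-referent choice from family composition can be illustrated with a minimal sketch; the intercept and coefficient below are assumed for illustration and are not estimates from the study's data:

```python
import math

def logit_probability(intercept, coefs, features):
    """P(naming a given support referent) under a binary logit model.

    coefs/features are parallel lists for family-composition indicators
    (e.g. has_spouse = 1/0). All parameter values here are hypothetical.
    """
    z = intercept + sum(b * x for b, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-z))

# Assumed effect: having a spouse strongly raises the odds of naming
# the spouse as the support referent.
p_with_spouse = logit_probability(-2.0, [3.5], [1])
p_without_spouse = logit_probability(-2.0, [3.5], [0])
```

Under these assumed parameters the predicted probability jumps from roughly 0.12 to 0.82, the kind of compositional effect the abstract describes.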

Relevance: 90.00%

Abstract:

Today electronic portal imaging devices (EPIDs) are used primarily to verify patient positioning. However, they also have potential as 2D dosimeters and could be used for transit dosimetry or dose reconstruction. It has been shown that such devices, especially liquid-filled ionization chambers, have a stable dose response relationship which can be described in terms of the physical properties of the EPID and the pulsed linac radiation. For absolute dosimetry, however, an accurate method of calibration to an absolute dose is needed. In this work, we concentrate on calibration against dose in a homogeneous water phantom. Using a Monte Carlo model of the detector we calculated dose spread kernels in units of absolute dose per incident energy fluence and compared them to calculated dose spread kernels in water at different depths. The energy of the incident pencil beams varied between 0.5 and 18 MeV. At the depth of dose maximum in water for a 6 MV beam (1.5 cm) and for an 18 MV beam (3.0 cm) we observed large absolute differences between water and detector dose above an incident energy of 4 MeV, but only small relative differences in the most frequent energy range of the beam energy spectra. It is shown that for a 6 MV beam the absolute reference dose measured at 1.5 cm water depth differs from the absolute detector dose by 3.8%. At a depth of 1.2 cm in water, however, the relative dose differences are almost constant between 2 and 6 MeV. The effects of changes in the energy spectrum of the beam on the dose responses in water and in the detector are also investigated. We show that differences larger than 2% can occur for different beam qualities of the incident photon beam behind water slabs of different thicknesses. It is therefore concluded that for high-precision dosimetry such effects have to be taken into account.
Nevertheless, the precise information about the dose response of the detector provided in this Monte Carlo study forms the basis of extracting directly the basic radiometric quantities photon fluence and photon energy fluence from the detector's signal using a deconvolution algorithm. The results are therefore promising for future application in absolute transit dosimetry and absolute dose reconstruction.
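The proposed deconvolution step, recovering fluence from the detector signal given a known dose spread kernel, can be sketched in one dimension by spectral division. The Gaussian kernel and beam profile below are hypothetical stand-ins for the study's Monte Carlo kernels, and a practical implementation would need regularization (e.g. Wiener filtering) against measurement noise:

```python
import numpy as np

def deconvolve_fluence(signal, kernel):
    """Recover incident fluence from a detector signal by FFT division.

    Assumes periodic (circular) convolution and a noise-free signal; real
    data would require a regularized (Wiener-style) division instead.
    """
    return np.real(np.fft.ifft(np.fft.fft(signal) / np.fft.fft(kernel)))

n = 64
x = np.arange(n)
fluence = np.where((x > 20) & (x < 40), 1.0, 0.0)    # idealized beam profile
kernel = np.exp(-0.5 * ((x - n // 2) / 1.5) ** 2)    # hypothetical spread kernel
kernel /= kernel.sum()                               # normalize to unit area

# Forward model: detector signal = fluence convolved with the kernel
signal = np.real(np.fft.ifft(np.fft.fft(fluence) * np.fft.fft(kernel)))
recovered = deconvolve_fluence(signal, kernel)       # matches fluence closely
```

In this noise-free setting the division inverts the forward convolution almost exactly; the width of the kernel in frequency space governs how badly noise would be amplified in practice.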

Relevance: 90.00%

Abstract:

The Western Escarpment of the Andes at 18.30°S (Arica area, northern Chile) is a classic example of a transient state in landscape evolution. This part of the Andes is characterized by the presence of >10,000 km2 plains that formed between the Miocene and the present, and >1500 m deeply incised valleys. Although processes in these valleys scale the rates of landscape evolution, determinations of ages of incision, and more importantly, interpretations of possible controls on valley formation have been controversial. This paper uses morphometric data and observations, stratigraphic information, and estimates of sediment yields for the time interval between ca. 7.5 Ma and present to illustrate that the formation of these valleys was driven by two probably unrelated components. The first component is a phase of base-level lowering with magnitudes of ∼300–500 m in the Coastal Cordillera. This period of base-level change in the Arica area, which started at ca. 7.5 Ma according to stratigraphic data, caused the trunk streams to dissect headward into the plains. The headward erosion interpretation is based on the presence of well-defined knickzones in stream profiles and the decrease in valley widths from the coast toward these knickzones. The second component is a change in paleoclimate. This interpretation is based on (1) the increase in the size of the largest alluvial boulders (from dm to m scale) with distal sources during the last 7.5 m.y., and (2) the calculated increase in minimum fluvial incision rates from ∼0.2 mm/yr between ca. 7.5 Ma and 3 Ma to ∼0.3 mm/yr subsequently. These trends suggest an increase in effective water discharge for systems sourced in the Western Cordillera (distal source). During the same time, however, valleys with headwaters in the coastal region (local source) lack any evidence of fluvial incision. This implies that the Coastal Cordillera became hyperarid sometime after 7.5 Ma.
Furthermore, between 7.5 Ma and present, the sediment yields have been consistently higher in the catchments with distal sources (∼15 m/m.y.) than in the headwaters of rivers with local sources (<7 m/m.y.). The positive correlation between sediment yields and the altitude of the headwaters (distal versus local sources) seems to reflect the effect of orographic precipitation on surface erosion. It appears that base-level change in the coastal region, in combination with an increase in the orographic effect of precipitation, has controlled the topographic evolution of the northern Chilean Andes.
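The incision rates quoted above are time-averaged depth-over-duration figures; a quick arithmetic check shows that the two rate intervals together are consistent with the >1500 m valley depths:

```python
def mean_incision_rate_mm_per_yr(depth_m, duration_myr):
    """Time-averaged incision rate from total incision depth and duration."""
    return depth_m * 1000.0 / (duration_myr * 1e6)

# Convert the study's quoted rates back into total incision (mm/yr x Myr -> m):
early_incision_m = 0.2 * (7.5 - 3.0) * 1000  # ~0.2 mm/yr from 7.5 to 3 Ma
late_incision_m = 0.3 * 3.0 * 1000           # ~0.3 mm/yr from 3 Ma to present
total_incision_m = early_incision_m + late_incision_m  # ~1800 m of incision
```

About 900 m of incision in each interval sums to roughly 1800 m, matching the order of magnitude of the >1500 m valley depths reported for the escarpment.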