891 results for spatially explicit individual-based model
Abstract:
In Italy, the process of de-institutionalisation and the implementation of mental health care models have been characterised by a lack of evaluation. In particular, no initiatives have been undertaken to monitor the activities related to the care of patients with psychiatric disorders. The aim of this thesis is therefore to carry out a comparative evaluation of mental health care pathways in the Departments of Mental Health and Pathological Addiction of the Emilia-Romagna region, using indicators derived from routine administrative data flows. The data needed to construct the indicators were obtained through a data linkage of the regional routine administrative flows of hospital discharge records, of the community activities of the Mental Health Centres, and of pharmaceutical prescriptions, with reference to the year 2010. The indicators were computed for all patients with a principal psychiatric diagnosis and then broken down by diagnostic category according to the ICD9-CM. The set of indicators examined includes treated prevalence and incidence rates of mental disorders, hospitalisation rates, re-hospitalisation at 7 and 30 days after discharge from psychiatric wards, hospital-community continuity of care, adherence to treatment, and drug consumption and prescribing appropriateness. Some problems emerged in reconstructing hospital-community continuity of care, together with some limitations of the indicators based on drug prescriptions. The calculation of indicators based on routine administrative flows proved feasible, albeit with limitations related to the quality, completeness and accuracy of the available data. Implementing these indicators on a large scale (regional and national) and on a regular basis could be an opportunity to set up a system for the surveillance, monitoring and evaluation of psychiatric care in the Departments of Mental Health (DSM).
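As a rough illustration of how such indicators can be derived from linked administrative records, the sketch below computes a 30-day re-admission rate with pandas; the table layout and column names are hypothetical, not those of the Emilia-Romagna flows.

```python
# Minimal sketch (hypothetical column names) of one indicator from linked
# hospital-discharge records: re-admission within 30 days of discharge.
import pandas as pd

discharges = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "admission_date": pd.to_datetime(
        ["2010-01-10", "2010-02-01", "2010-03-05", "2010-06-20"]),
    "discharge_date": pd.to_datetime(
        ["2010-01-20", "2010-02-10", "2010-03-15", "2010-06-30"]),
})

# Sort each patient's stays chronologically and compare the next admission
# with the current discharge date.
discharges = discharges.sort_values(["patient_id", "admission_date"])
next_admission = discharges.groupby("patient_id")["admission_date"].shift(-1)
gap_days = (next_admission - discharges["discharge_date"]).dt.days
discharges["readmitted_30d"] = gap_days.le(30)

# Crude indicator: share of discharges followed by a re-admission within 30 days.
rate_30d = discharges["readmitted_30d"].mean()
print(f"30-day re-admission rate: {rate_30d:.1%}")
```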
Abstract:
Several countries have acquired, over the past decades, large amounts of area-covering airborne electromagnetic (AEM) data. The contribution of airborne geophysics to both groundwater resource mapping and management has increased dramatically, proving how appropriate these systems are for efficient, large-scale groundwater surveying. We start with the processing and inversion of two AEM datasets from two different systems collected over the Spiritwood Valley Aquifer area, Manitoba, Canada: the AeroTEM III survey (commissioned by the Geological Survey of Canada in 2010) and the "Full waveform VTEM" dataset, collected and tested over the same survey area during the fall of 2011. We demonstrate that, in the presence of multiple datasets comprising both AEM and ground data, careful processing, inversion, post-processing, data integration and data calibration is the proper approach for providing reliable and consistent resistivity models. Our approach can be of interest to many end users, ranging from geological surveys and universities to private companies, which often own large geophysical databases to be interpreted for geological and/or hydrogeological purposes. In this study we investigate in depth the role of integrating several complementary types of geophysical data collected over the same survey area. We show that data integration can improve inversions, reduce ambiguity and deliver high-resolution results. We further use the final, most reliable resistivity models as a solid basis for building a knowledge-driven 3D geological voxel-based model. A voxel approach allows a quantitative understanding of the hydrogeological setting of the area and can be further used to estimate aquifer volumes (i.e. the potential amount of groundwater resources) as well as to support hydrogeological flow-model predictions. In addition, we investigated the impact of an AEM dataset on hydrogeological mapping and 3D hydrogeological modeling, compared to having only a ground-based TEM dataset and/or only borehole data.
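For readers unfamiliar with smooth AEM inversion, here is a minimal, generic sketch of one damped least-squares (Tikhonov) step of the kind such workflows rely on; the operator G, data d and roughness matrix L are synthetic placeholders and do not correspond to the AeroTEM III or VTEM forward models.

```python
# Illustrative sketch only: a single damped least-squares (Tikhonov) step
# of the kind used in smooth 1D resistivity inversion.
import numpy as np

rng = np.random.default_rng(0)
n_layers, n_data = 20, 30
G = rng.normal(size=(n_data, n_layers))        # linearised forward operator (Jacobian)
m_true = np.linspace(1.0, 3.0, n_layers)       # synthetic log-resistivity model
d = G @ m_true + 0.05 * rng.normal(size=n_data)

# First-difference roughness operator enforcing vertical smoothness.
L = np.diff(np.eye(n_layers), axis=0)
lam = 1.0                                      # regularisation weight

# Solve (G^T G + lam^2 L^T L) m = G^T d
A = G.T @ G + lam**2 * L.T @ L
m_est = np.linalg.solve(A, G.T @ d)
print("recovered model:", np.round(m_est, 2))
```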
Abstract:
Calcium fluoride (CaF2) is one of the key lens materials in deep-ultraviolet microlithography because of its transparency at 193 nm and its nearly perfect optical isotropy. Its physical and chemical properties make it suitable for lens fabrication, and a key feature of CaF2 is its extreme laser stability. After exposing CaF2 to 193 nm laser irradiation at high fluences, a loss in optical performance is observed, which is related to radiation-induced defect structures in the material. The initial rapid damage process is well understood as the formation of radiation-induced point defects; however, after long irradiation times of up to two months, permanent damage of the crystals is observed. Based on experimental results, these permanent radiation-induced defect structures are identified as metallic Ca colloids. The properties of point defects in CaF2 and their stabilization in the crystal bulk are calculated with density functional theory (DFT). Because the stabilization of the point defects and the formation of metallic Ca colloids are diffusion-driven processes, the diffusion coefficients for the vacancy (F center) and the interstitial (H center) in CaF2 are determined with the nudged elastic band method. The optical properties of Ca colloids in CaF2 are obtained from Mie theory, and their formation energy is determined. Based on the experimental observations and the theoretical description of radiation-induced point defects and defect structures, a diffusion-based model for laser-induced material damage in CaF2 is proposed, which also includes a mechanism for the annealing of laser damage.
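As a worked illustration of how a migration barrier obtained from a nudged-elastic-band calculation enters a diffusion coefficient, the snippet below evaluates an Arrhenius law; the barrier and prefactor are placeholder numbers, not the F- or H-center values determined in the thesis.

```python
# Arrhenius estimate of a defect diffusion coefficient from an NEB barrier.
import numpy as np

k_B = 8.617333e-5          # Boltzmann constant in eV/K
E_a = 0.7                  # placeholder migration barrier in eV
D_0 = 1e-3                 # placeholder attempt prefactor in cm^2/s

def diffusion_coefficient(T):
    """Arrhenius diffusion coefficient D(T) = D0 * exp(-Ea / (kB * T))."""
    return D_0 * np.exp(-E_a / (k_B * T))

for T in (300, 500, 800):
    print(f"T = {T:4d} K  ->  D = {diffusion_coefficient(T):.3e} cm^2/s")
```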
Abstract:
This work focuses on a legal analysis of the model of network-based cooperation among the national authorities of the Member States within the Area of Freedom, Security and Justice (AFSJ), with the aim of assessing its contribution, prospects and potential. The discussion is divided into two parts, preceded by a brief theoretical premise centred on the notion of a network and its legal significance. The first part reconstructs the development of network-based cooperation, highlighting both the contingent factors and the legal and structural factors underlying the networking of the justice and security sectors. In particular, some critical remarks are developed concerning the operation of the legal instruments implementing the principle of mutual recognition and of those implementing the principle of availability of information. The purpose is to highlight the obstacles that frequently prevent cooperation procedures from succeeding, and to understand the potential and the critical issues arising from the use of networks in the concrete application of those procedures. The second part focuses on the main networks active in the fields of justice and security, with particular attention to their operating mechanisms. It is divided into two sections, dealing respectively with (a) networks supporting the application of judicial assistance procedures and mutual recognition instruments, and (b) networks operating in the field of information cooperation, which facilitate the exchange of operational and technical information in crime prevention and law enforcement, especially in the area of protecting the licit economy. The work concludes by reconstructing the features of a European network model and the role it plays in the exercise of the European Union's competences in the fields of justice and security.
Abstract:
Persons affected by Down syndrome show a heterogeneous phenotype that includes developmental defects and cognitive and haematological disorders. Premature, accelerated aging and the consequent development of age-associated diseases such as Alzheimer's disease (AD) appear to be the cause of the higher late-life mortality of DS persons. Down syndrome is caused by the complete or partial trisomy of chromosome 21, but it is not clear whether the molecular alterations of the disease are triggered by the specific functions of a limited number of genes on chromosome 21 or by the disruption of genetic homeostasis due to the presence of a trisomic chromosome. As epigenomic studies can help to shed light on this issue, here we used the Infinium HumanMethylation450 BeadChip to analyse blood DNA methylation patterns of 29 persons affected by Down syndrome (DSP), using their healthy siblings (DSS) and mothers (DSM) as controls. In this way we obtained a family-based model that allowed us to monitor possible confounding effects on DNA methylation patterns deriving from genetic and environmental factors. We showed that defects in DNA methylation map to genes involved in developmental, neurological and haematological pathways. These genes are enriched on chromosome 21 but also localize in the rest of the genome, suggesting that the trisomy of specific genes on chromosome 21 induces a cascade of events that engages many genes on other chromosomes and results in a global alteration of genomic function. We also analysed the methylation status of three target regions localized at the promoter (Ribo) and at the 5' sequences of the 18S and 28S regions of the rDNA, identifying differentially methylated CpG sites. In conclusion, we identified an epigenetic signature of Down syndrome in blood cells that supports a link between developmental defects and the disease phenotype, including segmental premature aging.
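A minimal sketch of the family-based contrast described above might look as follows: a paired test on simulated Infinium 450k beta values for a single CpG, comparing each DSP individual with their sibling. The data are simulated and the thesis pipeline is not reproduced here.

```python
# Paired sibling comparison of methylation beta values for one CpG site.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_pairs = 29
beta_dss = rng.beta(5, 5, size=n_pairs)                       # unaffected siblings
beta_dsp = np.clip(beta_dss + 0.05 + 0.02 * rng.normal(size=n_pairs), 0, 1)

# Paired test on the sibling-matched differences.
t_stat, p_value = stats.ttest_rel(beta_dsp, beta_dss)
mean_delta_beta = np.mean(beta_dsp - beta_dss)
print(f"mean delta-beta = {mean_delta_beta:.3f}, p = {p_value:.3g}")
```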
Abstract:
Forest models are tools for explaining and predicting the dynamics of forest ecosystems. They simulate forest behavior by integrating information on the underlying processes in trees, soil and atmosphere. Bayesian calibration is the application of probability theory to parameter estimation; it is a method, applicable to all models, that quantifies output uncertainty and identifies key parameters and variables. This study aims at testing the Bayesian calibration procedure on different types of forest models, to evaluate their performance and the uncertainties associated with them. In particular, we aimed at 1) applying a Bayesian framework to calibrate forest models and test their performance in different biomes and different environmental conditions, 2) identifying and solving structure-related issues in simple models, and 3) identifying the advantages of the additional information made available when calibrating forest models with a Bayesian approach. In Chapter 2 we applied the Bayesian framework to calibrate the Prelued model on eight Italian eddy-covariance sites. The ability of Prelued to reproduce the estimated Gross Primary Productivity (GPP) was tested over contrasting natural vegetation types that represented a wide range of climatic and environmental conditions. The issues related to Prelued's multiplicative structure were the main topic of Chapter 3: several different MCMC-based procedures were applied within a Bayesian framework to calibrate the model, and their performances were compared. A more complex model was applied in Chapter 4, focusing on the application of the physiology-based model HYDRALL to the forest ecosystem of Lavarone (IT), to evaluate the importance of additional information in the calibration procedure and its impact on model performance, model uncertainty, and parameter estimation. Overall, the Bayesian technique proved to be an excellent and versatile tool for successfully calibrating forest models of different structure and complexity, on different kinds and numbers of variables and with different numbers of parameters involved.
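To make the calibration idea concrete, here is a compact Metropolis sampler applied to a toy light-use-efficiency GPP model; it mirrors the prior-times-likelihood logic explored by MCMC but is not the Prelued or HYDRALL code.

```python
# Random-walk Metropolis calibration of a single parameter of a toy GPP model.
import numpy as np

rng = np.random.default_rng(42)
par = np.linspace(1, 20, 100)                      # absorbed PAR (arbitrary units)
eps_true = 0.8                                     # "true" light-use efficiency
obs = eps_true * par + rng.normal(0, 1.0, par.size)  # synthetic GPP observations

def log_posterior(eps, sigma=1.0):
    if eps <= 0 or eps > 5:                        # flat prior on (0, 5]
        return -np.inf
    resid = obs - eps * par
    return -0.5 * np.sum((resid / sigma) ** 2)     # Gaussian log-likelihood

chain, eps = [], 2.0
log_p = log_posterior(eps)
for _ in range(5000):
    prop = eps + 0.05 * rng.normal()               # random-walk proposal
    log_p_prop = log_posterior(prop)
    if np.log(rng.uniform()) < log_p_prop - log_p: # Metropolis acceptance rule
        eps, log_p = prop, log_p_prop
    chain.append(eps)

burned = np.array(chain[1000:])                    # discard burn-in
print(f"posterior mean = {burned.mean():.3f} +/- {burned.std():.3f}")
```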
Abstract:
For the registration of pharmaceuticals, a comprehensive analysis of their genotoxic potential is required. Because of the multitude of genotoxic mechanisms and the resulting types of damage, a tiered testing design is defined by the ICH guideline "Guidance on genotoxicity testing and data interpretation for pharmaceuticals intended for human use S2(R1)" in order to identify all genotoxic compounds. The standard test battery is of limited use in the early phase of drug development because of its low throughput and the limited amount of compound available. Moreover, in vitro genotoxicity tests in mammalian cells have relatively low specificity. A complete safety assessment requires in vivo carcinogenicity testing; however, these test systems are costly and time-consuming. New research approaches therefore aim at improving predictivity and at detecting genotoxic potential already in the early phase of drug development. High content imaging (HCI) technology offers an approach to improve throughput compared with the standard test battery. In addition, a cell-based model has the advantage of generating data relatively quickly while requiring only small amounts of compound. HCI-based test systems therefore enable testing in the early phase of pharmaceutical drug development. The aim of this study was to develop a new, specific and sensitive HCI-based in vitro test system for genotoxins and progenotoxins using HepG2 cells. Because of their limited metabolic capacity, a combined system consisting of HepG2 cells and a metabolic activation system was established for testing progenotoxic compounds. Based on a previous gene expression profiling study (Boehme et al., 2011) and a literature search, the following nine DNA damage response proteins were selected as putative markers of compound-induced genotoxicity: p-p53 (Ser15), p21, p-H2AX (Ser139), p-Chk1 (Ser345), p-ATM (Ser1981), p-ATR (Ser428), p-CDC2 (Thr14/Tyr15), GADD45A and p-Chk2 (Thr68). The expression or activation of these proteins was determined 48 h after treatment with the (pro-)genotoxic compounds (cyclophosphamide, 7,12-dimethylbenz[a]anthracene, aflatoxin B1, 2-acetylaminofluorene, methyl methanesulfonate, actinomycin D, etoposide) and the non-genotoxic compounds (D-mannitol, phenformin hydrochloride, progesterone) using HCI technology. The best classification was achieved with the following five of the original nine putative marker proteins: p-p53 (Ser15), p21, p-H2AX (Ser139), p-Chk1 (Ser345) and p-ATM (Ser1981). In the second part of this work, the five selected proteins were tested with compounds recommended by the European Centre for the Validation of Alternative Methods (ECVAM) for assessing the performance of new or modified in vitro genotoxicity tests. This new test system achieved a sensitivity of 80% and a specificity of 86%, resulting in a predictivity of 84%. The synergistic effect of these five proteins allows genotoxic compounds that induce DNA damage through a variety of different mechanisms to be identified with a high success rate.
In summary, a highly predictive test system with metabolic activation for a broad spectrum of potentially genotoxic compounds was generated. Owing to its high throughput, the short time required and the small amount of compound needed, it is suitable for compound prioritisation and selection in the lead-optimisation phase of drug development and, moreover, provides mechanistic clues to the genotoxic mode of action of the test compound.
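The quoted performance figures follow from a simple confusion-matrix calculation; the sketch below shows the definitions of sensitivity, specificity and overall predictivity (accuracy) with illustrative counts that are not the study's raw data and do not reproduce its exact percentages.

```python
# Classifier performance from a binary confusion matrix of HCI calls
# versus reference labels (counts below are arbitrary examples).
def performance(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)                     # genotoxins correctly flagged
    specificity = tn / (tn + fp)                     # non-genotoxins correctly cleared
    predictivity = (tp + tn) / (tp + fn + tn + fp)   # overall concordance
    return sensitivity, specificity, predictivity

# Example: 20 genotoxic and 14 non-genotoxic reference compounds.
sens, spec, pred = performance(tp=16, fn=4, tn=12, fp=2)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}, predictivity = {pred:.0%}")
```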
Abstract:
A characteristic neuropathological hallmark of Alzheimer's disease (AD), the most common form of dementia in humans, is the occurrence of senile plaques in the brains of patients. The neurotoxic A-beta peptide is the main component of these deposits. A shift in the expression equilibrium of the APP-competing proteases BACE-1 and ADAM10 in favour of the beta-secretase BACE-1 contributes to the pathologically increased A-beta generation. The aim of this dissertation was to identify molecular mechanisms that contribute to a pathologically altered balance of APP cleavage and thus to the onset and progression of AD. In addition, compounds were to be identified that can restore the physiological balance of APP processing by influencing the gene expression of one of the two proteases and are therefore of therapeutic use. In a screening of 704 transcription factors, 23 factors were found that influenced the ratio of ADAM10 to BACE-1 promoter activity. Two of these molecular factors were investigated as examples with respect to their mechanism of action: the transcription factor "X box binding protein-1" (XBP-1), which regulates the so-called unfolded protein response (UPR), increased the expression of ADAM10 in cell culture experiments, and the amount of this factor was significantly reduced in AD patients compared with healthy, age-matched controls. In contrast, the senescence-associated transcription factor "T box 2" (Tbx2) reduced the amount of ADAM10 in SH-SY5Y cells, and the expression of this factor itself was increased in post-mortem cortex tissue from AD patients. In addition to the transcription factors, three microRNAs (miRNA 103, 107 and 1306) that reduce the expression of human ADAM10 were bioinformatically predicted and experimentally validated in a cooperation with the Helmholtz Zentrum München. In the course of this work, endogenous factors were thus identified that regulate the amount of ADAM10 and are consequently potentially involved in the development of the disturbed homeostasis of APP processing. With regard to an A-beta-mediated pathology, AD should therefore also be understood as a multifactorial disease in which different regulators can contribute to disturbed APP processing and thus to pathologically increased A-beta generation. A pharmacological increase of ADAM10 gene expression would lead to the release of neuroprotective APPs-alpha and, at the same time, to reduced A-beta generation. A further aim of this work was therefore the evaluation of compounds with therapeutic potential with respect to increased ADAM10 expression. Of 640 FDA-approved drugs from a compound library, 23 compounds were identified that significantly increased the amount of ADAM10 while the expression of BACE-1 and APP remained unaffected. In collaboration with the Institute of Pathology (Johannes Gutenberg University Mainz), a cell-culture-based model was established to investigate the ability of the potential candidate compounds to cross the blood-brain barrier (BBB). Of the 23 drugs, nine could be characterised as BBB-permeable in the established model; these remaining drugs thus fulfil the basic requirements for an AD therapeutic. Besides APP, ADAM10 cleaves a large number of other substrates with different functions in the cell.
For example, the cell adhesion molecule neuroligin-1 (NL-1), which is processed by ADAM10, regulates the synaptic function of excitatory neurons. For this reason, the assessment of potential therapy-related side effects is very important. During a research stay at the University of Tokyo, a retinoid-induced increase of ADAM10 in primary rat cortical neurons was found to be accompanied not only by increased alpha-secretase APP processing but also by increased cleavage of NL-1. This suggests that treatment with the retinoid acitretin affects not only APP cleavage by ADAM10 but also the regulation of glutamatergic neurons through the cleavage of NL-1. These findings should be further analysed in a suitable Alzheimer animal model in order to draw conclusions about a safe therapeutic approach based on increased ADAM10 gene expression.
Abstract:
Sub-grid scale (SGS) models are required in large-eddy simulations (LES) in order to model the influence of the unresolved small scales, i.e. the flow at the smallest scales of turbulence, on the resolved scales. In the following work two SGS models are presented and analyzed in depth in terms of accuracy through several LESs with different spatial resolutions, i.e. grid spacings. The first part of this thesis focuses on the basic theory of turbulence, the governing equations of fluid dynamics and their adaptation to LES. Furthermore, two important SGS models are presented: one is the dynamic eddy-viscosity model (DEVM), developed by Germano et al. (1991), while the other is the explicit algebraic SGS model (EASSM) by Marstorp et al. (2009). In addition, some details about the implementation of the EASSM in a pseudo-spectral Navier-Stokes code (Chevalier et al., 2007) are presented. The performance of the two aforementioned models is investigated in the following chapters by means of LES of a channel flow, with friction Reynolds numbers from $Re_\tau=590$ up to $Re_\tau=5200$, at relatively coarse resolutions. Data from each simulation are compared to baseline DNS data. The results show that, in contrast to the DEVM, the EASSM has promising potential for flow predictions at high friction Reynolds numbers: the higher the friction Reynolds number, the better the EASSM behaves and the worse the DEVM performs. The better performance of the EASSM is attributed to its ability to capture flow anisotropy at the small scales through a correct formulation of the SGS stresses. Moreover, a considerable reduction in the required computational resources can be achieved using the EASSM compared to the DEVM. The EASSM therefore combines accuracy and computational efficiency, implying that it has a clear potential for industrial CFD usage.
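For orientation, the sketch below implements the classical (non-dynamic) Smagorinsky closure from which the DEVM is derived; the dynamic model determines the coefficient on the fly, and the EASSM additionally models the anisotropic part of the SGS stress, which this snippet deliberately omits.

```python
# Static Smagorinsky SGS eddy viscosity: nu_t = (Cs * Delta)^2 * |S|.
import numpy as np

def smagorinsky_nu_t(grad_u, delta, cs=0.17):
    """SGS eddy viscosity from the resolved velocity-gradient tensor.

    grad_u : array (..., 3, 3), du_i/dx_j at each grid point
    delta  : filter width (grid spacing)
    """
    s_ij = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))         # resolved strain rate
    s_mag = np.sqrt(2.0 * np.sum(s_ij * s_ij, axis=(-1, -2)))   # |S| = sqrt(2 S_ij S_ij)
    return (cs * delta) ** 2 * s_mag

# Single-point example with a simple shear du/dy = 100 1/s and delta = 0.01 m.
grad_u = np.zeros((3, 3))
grad_u[0, 1] = 100.0
nu_t = smagorinsky_nu_t(grad_u, delta=0.01)
print(f"nu_t = {float(nu_t):.3e} m^2/s")
```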
Abstract:
Experimental measurements are used to characterize the anisotropy of flow stress in extruded magnesium alloy AZ31 sheet during uniaxial tension tests at temperatures between 350°C and 450°C and strain rates ranging from 10^-5 to 10^-2 s^-1. The sheet exhibits lower flow stress and higher tensile ductility when loaded with the tensile axis perpendicular to the extrusion direction than when it is loaded parallel to the extrusion direction. This anisotropy is found to be grain-size, strain-rate and temperature dependent, but only weakly dependent on texture. A microstructure-based model (D. E. Cipoletti, A. F. Bower, P. E. Krajewski, Scr. Mater., 64 (2011) 931–934) is used to explain the origin of the anisotropic behavior. In contrast to room-temperature behavior, where anisotropy is principally a consequence of the low resistance to slip on the basal slip system, the elevated-temperature anisotropy is found to be caused by the grain structure of the extruded sheet. The grains are elongated parallel to the extrusion direction, leading to a smaller effective grain size perpendicular to the extrusion direction. As a result, grain boundary sliding occurs more readily when the material is loaded perpendicular to the extrusion direction.
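A back-of-the-envelope sketch of the grain-shape argument: assuming a generic grain-boundary-sliding relation of the form strain rate proportional to sigma^n / d^p (exponents and prefactor below are placeholders, not values fitted to AZ31), a smaller effective grain size transverse to the extrusion direction yields a lower flow stress at the same applied strain rate.

```python
# Direction-dependent flow stress from a generic grain-boundary-sliding law.
n, p, A = 2.0, 2.0, 1.0e-6        # stress exponent, grain-size exponent, prefactor

def flow_stress(strain_rate, d_eff):
    """Invert strain_rate = A * sigma**n / d_eff**p for the flow stress sigma."""
    return (strain_rate * d_eff**p / A) ** (1.0 / n)

rate = 1e-3                        # applied strain rate (1/s)
sigma_parallel = flow_stress(rate, d_eff=25.0)   # loading along extrusion, larger d_eff
sigma_perp = flow_stress(rate, d_eff=10.0)       # loading transverse, smaller d_eff
print(f"sigma_parallel / sigma_perp = {sigma_parallel / sigma_perp:.2f}")
```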
Abstract:
This paper aims to deepen the search for ecosystem-like concepts in indigenous societies by highlighting the importance of place names used by Quechua indigenous farmers from the central Bolivian Andes. Villagers from two communities in the Tunari Mountain Range were asked to list, describe, map and categorize the places they knew on their community's territory. The results show that place names capture spatially explicit units which integrate biotic and abiotic nature and humans, and that there is an emphasis on topographic terms, highlighting the importance of geodiversity. The farmers' perspective differs from the classical view of ecosystems because they 'humanize' places, considering them living beings with agency. Consequently, they do not distinguish between natural and cultural heritage. Their perspective of the environment is that of a personalized, dynamic relationship with the elements of the natural world, which are perceived as living entities. A practical implication of the findings for sustainable development is that, since place names establish the links between people and the elements of the landscape, toponymy is a tool for ecosystem management rooted in indigenous knowledge. Because place names refer to holistic units that are linked with people's experience and are spatially explicit, they can be used as an entry point for an intercultural dialogue towards more sustainable land management.
Abstract:
Soil erosion models and soil erosion risk maps are often used as indicators to assess potential soil erosion in order to assist policy decisions. This paper presents the scientific basis of the soil erosion risk map of Switzerland and its application in policy and practice. Linking a USLE/RUSLE-based model approach (AVErosion), founded on multiple-flow algorithms and the unit contributing area concept, with an extremely precise, high-resolution digital terrain model (2 m × 2 m grid) in a GIS allows a realistic assessment of the potential soil erosion risk on single plots, uniformly and comprehensively for the agricultural area of Switzerland (862,579 ha in the valley area and the lower mountain regions). National, i.e. small-scale, soil erosion prediction has thus reached a level heretofore possible only in smaller catchment areas or on single plots. Validation was carried out using soil loss data from field mappings of soil erosion damage obtained through long-term monitoring in different test areas. 45% of the evaluated agricultural area of Switzerland was classified as low potential erosion risk, 12% as moderate potential erosion risk, and 43% as high potential erosion risk. However, many of the areas classified as high potential erosion risk are located at the transition from the valley to the mountain zone, where much of the land is used as permanent grassland, which drastically lowers its current erosion risk. The present soil erosion risk map serves on the one hand to identify and prioritise high-erosion-risk areas, and on the other hand to promote awareness amongst farmers and authorities. It was published on the internet and will be made available to the authorities in digital form. It is intended as a tool for simplifying and standardising enforcement of the legal framework for soil erosion prevention in Switzerland. The work therefore provides a successful example of cooperation between science, policy and practice.
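The core of any USLE/RUSLE-based map is a cell-by-cell multiplication of factor rasters; the toy sketch below shows that step with placeholder grids and illustrative risk-class thresholds, not the AVErosion inputs or the Swiss classification limits.

```python
# USLE/RUSLE raster calculation: A = R * K * LS * C * P per grid cell.
import numpy as np

rng = np.random.default_rng(7)
shape = (4, 4)                                   # a 4 x 4 toy raster
R = np.full(shape, 95.0)                         # rainfall erosivity
K = rng.uniform(0.2, 0.4, shape)                 # soil erodibility
LS = rng.uniform(0.5, 4.0, shape)                # slope length/steepness (from the DTM)
C = rng.uniform(0.05, 0.3, shape)                # cover management
P = np.ones(shape)                               # support practice

A = R * K * LS * C * P                           # potential soil loss (t/ha/yr)

# Classify into three risk classes (thresholds are illustrative only).
risk = np.digitize(A, bins=[2.0, 8.0])           # 0 = low, 1 = moderate, 2 = high
print(A.round(1))
print(risk)
```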
Abstract:
Vietnam has developed rapidly over the past 15 years; however, progress has not been uniformly distributed across the country. The availability, adequate visualization and analysis of spatially explicit data on socio-economic and environmental aspects can support both research and policy towards sustainable development. Applying appropriate mapping techniques allows important information to be gleaned from tabular socio-economic data. Spatial analysis of socio-economic phenomena can yield insights into locally specific patterns and processes that cannot be generated by non-spatial applications. This paper presents techniques and applications for developing and analyzing spatially highly disaggregated socio-economic datasets. A number of examples show how such information can support informed decision-making and research in Vietnam.
Abstract:
The effect of shot particles on the high-temperature, low-cycle fatigue of a hybrid fiber/particulate metal-matrix composite (MMC) was studied. Two hybrid composites with the general composition A356/35% SiC particle/5% fiber (one without shot) were tested. It was found that shot particles acting as stress concentrators had little effect on fatigue performance. Fibers with a high silica content appeared more likely to debond from the matrix. Final failure of the composite was found to occur preferentially in the matrix: SiC particles fracture progressively during fatigue testing, leading to higher stress in the matrix and to final failure by matrix overload. A continuum-mechanics-based model was developed to predict fatigue failure based on the tensile properties of the matrix and particles. By accounting for matrix yielding and recovery, composite creep and the particle strength distribution, failure of the composite was predicted.
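One ingredient such a model typically needs is a statistical description of particle strength; the sketch below uses a Weibull form with placeholder parameters (not the calibrated values of this study) to estimate the fraction of SiC particles fractured at a given local stress.

```python
# Weibull estimate of the fraction of particles fractured at a given stress.
import math

def fraction_fractured(stress, sigma_0=450.0, m=6.0):
    """Weibull failure probability P = 1 - exp(-(stress/sigma_0)**m)."""
    return 1.0 - math.exp(-((stress / sigma_0) ** m))

for s in (200.0, 350.0, 500.0):                  # local particle stress in MPa
    print(f"stress = {s:5.0f} MPa -> fractured fraction = {fraction_fractured(s):.2f}")
```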
Abstract:
Riparian zones are dynamic, transitional ecosystems between aquatic and terrestrial ecosystems with well-defined vegetation and soil characteristics. Developing an all-encompassing definition for riparian ecotones is challenging because of their high variability; however, all riparian ecotones depend on two primary factors: the watercourse and its associated floodplain. Previous approaches to riparian boundary delineation have used fixed-width buffers, but this methodology has proven inadequate because it only takes the watercourse into consideration and ignores critical geomorphology and the associated vegetation and soil characteristics. Our approach offers advantages over previously used methods by utilizing: the geospatial modeling capabilities of ArcMap GIS; a better sampling technique along the watercourse that can distinguish the 50-year floodplain, which is the optimal hydrologic descriptor of riparian ecotones; the Soil Survey Geographic (SSURGO) and National Wetland Inventory (NWI) databases to distinguish contiguous areas beyond the 50-year floodplain; and land use/cover characteristics associated with the delineated riparian zones. The model uses spatial data readily available from federal and state agencies and geospatial clearinghouses. An accuracy assessment was performed to assess the impact of varying the 50-year flood height, changing the DEM spatial resolution (1, 3, 5 and 10 m), and positional inaccuracies in the National Hydrography Dataset (NHD) streams layer on the boundary placement of the delineated variable-width riparian ecotone areas. The result of this study is a robust, automated GIS-based model, attached to ESRI ArcMap software, that delineates and classifies variable-width riparian ecotones.
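Conceptually, the central delineation step can be reduced to comparing ground elevation with the 50-year flood stage above the stream; the toy NumPy grid below stands in for the DEM and stream layers handled in the ArcMap model.

```python
# Flag DEM cells lying at or below the 50-year flood stage as riparian.
import numpy as np

dem = np.array([[10.0, 10.5, 11.5, 13.0],
                [ 9.8, 10.2, 11.0, 12.5],
                [ 9.5,  9.9, 10.8, 12.2]])         # ground elevations (m)
stream_elev = 9.5                                  # water-surface elevation of the reach
flood_height_50yr = 1.2                            # 50-year flood stage above the stream (m)

riparian = dem <= stream_elev + flood_height_50yr  # boolean mask of the flood-prone zone
print(riparian.astype(int))
```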