937 results for grid-based approximation


Relevance: 80.00%

Abstract:

Seismic data are difficult to analyze, and classical mathematical tools reveal strong limitations in exposing hidden relationships between earthquakes. In this paper, we study earthquake phenomena from the perspective of complex systems. Global seismic data covering the period from 1962 to 2011 are analyzed. The events, characterized by their magnitude, geographic location, and time of occurrence, are divided into groups, either according to the Flinn-Engdahl (F-E) seismic regions of Earth or using a rectangular grid based on latitude and longitude coordinates. Two methods of analysis are considered and compared. In the first, the distributions of magnitudes are approximated by Gutenberg-Richter (G-R) distributions and the fitted parameters are used to reveal relationships among regions. In the second, the mutual information is calculated and adopted as a measure of similarity between regions. In both cases, clustering analysis is used to generate visualization maps that provide an intuitive and useful representation of the complex relationships present in seismic data. Such relationships might not be perceived on classical geographic maps; therefore, the generated charts are a valid alternative to other visualization tools for understanding the global behavior of earthquakes.
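The G-R fitting step can be illustrated with Aki's maximum-likelihood estimator of the b-value, a standard seismological formula. This is a generic sketch on a synthetic catalogue, not the paper's actual procedure or data:

```python
import math
import random

def gr_b_value(magnitudes, m_c):
    """Maximum-likelihood b-value (Aki's estimator) for events at or above
    the completeness magnitude m_c: b = log10(e) / (mean(M) - m_c)."""
    above = [m for m in magnitudes if m >= m_c]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - m_c)

# Synthetic catalogue: G-R with b = 1 implies magnitudes above m_c are
# exponentially distributed with rate b * ln(10)
random.seed(0)
mags = [4.0 + random.expovariate(math.log(10) * 1.0) for _ in range(5000)]
b = gr_b_value(mags, 4.0)  # should recover a value close to 1.0
```

A region-by-region comparison, as in the paper's first method, would apply such a fit to each F-E region or grid cell and then cluster regions on the fitted parameters.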

Relevance: 80.00%

Abstract:

Earthquakes are associated with negative events such as large numbers of casualties, destruction of buildings and infrastructure, and the emergence of tsunamis. In this paper, we apply Multidimensional Scaling (MDS) analysis to earthquake data. MDS is a set of techniques that produce spatial or geometric representations of complex objects, such that objects perceived as similar or distinct are placed nearby or distant on the MDS maps. The interpretation of the charts is based on the resulting clusters, since MDS produces a different locus for each similarity measure. In this study, over three million seismic occurrences, covering the period from January 1, 1904 to March 14, 2012, are analyzed. The events, characterized by their magnitude and spatiotemporal distributions, are divided into groups, either according to the Flinn–Engdahl seismic regions of Earth or using a rectangular grid based on latitude and longitude coordinates. Space-time and space-frequency correlation indices are proposed to quantify the similarities among events. MDS has the advantage of avoiding sensitivity to the non-uniform spatial distribution of seismic data resulting from poorly instrumented areas, and it is well suited for assessing the dynamics of complex systems. MDS maps prove to be an intuitive and useful visual representation of the complex relationships among seismic events, which may not be perceived on traditional geographic maps. Therefore, MDS constitutes a valid alternative to classic visualization tools for understanding the global behavior of earthquakes.
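The geometric embedding MDS performs can be sketched with classical (Torgerson) scaling, the textbook variant; the paper's own correlation indices and software are not reproduced here, and the three "regions" below are hypothetical points on a line:

```python
import numpy as np

def classical_mds(dist, k=2):
    """Classical (Torgerson) MDS: embed n items in k dimensions so that
    pairwise Euclidean distances approximate the input dissimilarities."""
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    b = -0.5 * j @ (dist ** 2) @ j         # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:k]     # keep the largest eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# Dissimilarities between three collinear "regions" at positions 0, 3, 5
d = np.array([[0.0, 3.0, 5.0],
              [3.0, 0.0, 2.0],
              [5.0, 2.0, 0.0]])
coords = classical_mds(d, k=1)  # 1D map recovering the pairwise distances
```

On real seismic data the dissimilarity matrix would come from the proposed space-time or space-frequency correlation indices rather than from Euclidean positions.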

Relevance: 80.00%

Abstract:

ABSTRACT Quantitative evaluations of species distributional congruence make it possible to assess previously proposed biogeographic regionalizations and even to identify undetected areas of endemism. The geography of Northwestern Argentina offers ideal conditions for the study of species distributional patterns, since the boundaries of a diverse group of biomes converge in a relatively small region that also harbors a diverse mammal fauna. In this paper we applied a grid-based explicit method to recognize Patterns of Distributional Congruence (PDCs) and Areas of Endemism (AEs), and the species (native but non-endemic, and endemic, respectively) that determine them. We also relate these distributional patterns to traditional biogeographic divisions of the study region and to a very recent phytogeographic study, and we reconsider areas previously rejected as 'spurious'. Finally, we assessed the generality of the patterns found. The analysis resulted in 165 consensus areas, characterized by seven species of marsupials, 28 species of bats, and 63 species of rodents, which represent a large percentage of the total species (10, 41, and 73, respectively). Twenty-five percent of the species that characterize consensus areas are endemic to the study region and define six AEs in the strict sense, while 12 PDCs are mainly defined by widely distributed species. While detailed quantitative analyses of plant species distribution data by other authors do not result in units that correspond to Cabrera's phytogeographic divisions at this spatial scale, analyses of animal species distribution data do. We were able to identify previously unknown, meaningful faunal patterns and to define more accurately those already identified.
We identify PDCs and AEs that form Eastern Andean Slopes Patterns, Western High Andes Patterns, and Merged Eastern and Western Andean Slopes Patterns, some of which are reinterpreted in the light of known patterns of the endemic vascular flora. Endemism does not decline towards the south, but it does decline towards the west of the study region. Peaks of endemism are found in the eastern Andean slopes in Jujuy and Tucumán/Catamarca, and in the western Andean biomes in Tucumán/Catamarca. The principal habitat type for endemic small mammal species is the eastern humid Andean slopes. Nevertheless, arid/semi-arid biomes and humid landscapes are represented by the same number of AEs. Rodent species define 15 of the 18 General Patterns, and there is only one in which they do not participate at all. Clearly, at this spatial scale, non-flying mammals, particularly rodents, are biogeographically more informative than flying mammals (bats).
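The core of any grid-based explicit method is rasterizing occurrence records into cells and scoring overlap between species' cell sets. The sketch below illustrates only that generic idea with hypothetical records and a simple Jaccard threshold; it is not the study's consensus-area analysis:

```python
def grid_cells(occurrences, cell=1.0):
    """Map species occurrence points (species, lat, lon) to sets of
    grid-cell indices at the given cell size in degrees."""
    cells = {}
    for species, lat, lon in occurrences:
        key = (int(lat // cell), int(lon // cell))
        cells.setdefault(species, set()).add(key)
    return cells

def congruent_pairs(cells, min_overlap=0.5):
    """Species pairs whose cell sets overlap by at least min_overlap
    (Jaccard index), a crude proxy for distributional congruence."""
    names = sorted(cells)
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            inter = len(cells[a] & cells[b])
            union = len(cells[a] | cells[b])
            if union and inter / union >= min_overlap:
                pairs.append((a, b))
    return pairs

# Hypothetical records: two co-occurring species, one with a wider range
occs = [("rodent_a", -24.3, -65.4), ("rodent_a", -24.8, -65.9),
        ("bat_b", -24.1, -65.2), ("bat_b", -27.0, -58.0)]
pairs = congruent_pairs(grid_cells(occs))
```

Real endemism analyses score candidate areas rather than species pairs and build consensus areas from many such comparisons, but the grid-cell representation is the shared starting point.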

Relevance: 80.00%

Abstract:

A 0.125-degree raster, or grid-based, Geographic Information System with data on tsetse, trypanosomosis, animal production, agriculture, and land use has recently been developed in Togo. This paper addresses the problem of generating tsetse distribution and abundance maps from remotely sensed data using a restricted amount of field data. A discriminant analysis model is tested using contemporary tsetse data and remotely sensed, low-resolution data acquired from the National Oceanic and Atmospheric Administration and Meteosat platforms. A split-sample technique is adopted, in which a randomly selected part of the field-measured data (the training set) serves to predict the other part (the predicted set). The results obtained are then compared with field-measured data for the corresponding grid squares. Depending on the size of the training set, the percentage of concordant predictions varies from 80 to 95 for distribution figures and from 63 to 74 for abundance. These results confirm the potential of satellite data and multivariate analysis for predicting not only tsetse distribution but, more importantly, tsetse abundance. This opens up new avenues, because satellite predictions and field data may be combined to strengthen or substitute for one another and thus reduce the cost of field surveys.
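The split-sample technique can be sketched as follows, with a simple nearest-centroid classifier standing in for the paper's discriminant analysis and a single synthetic predictor (e.g. a vegetation index) in place of the satellite channels:

```python
import random

def split_sample_concordance(data, train_frac=0.5, seed=1):
    """Split-sample check: fit class centroids on a random training half,
    predict the held-out half, return the fraction of concordant
    predictions. data is a list of (predictor_value, class_label)."""
    rng = random.Random(seed)
    shuffled = list(data)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    train, test = shuffled[:cut], shuffled[cut:]
    # One centroid per class, from the training set only
    sums, counts = {}, {}
    for x, label in train:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    centroids = {lab: sums[lab] / counts[lab] for lab in sums}
    hits = sum(1 for x, label in test
               if min(centroids, key=lambda lab: abs(x - centroids[lab])) == label)
    return hits / len(test)

# Synthetic grid squares: well-separated presence/absence distributions
random.seed(2)
data = ([(random.gauss(0.2, 0.05), "absent") for _ in range(100)] +
        [(random.gauss(0.6, 0.05), "present") for _ in range(100)])
rate = split_sample_concordance(data)
```

Varying `train_frac` reproduces the paper's design of measuring how concordance depends on the size of the training set.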

Relevance: 80.00%

Abstract:

BACKGROUND. Bioinformatics is commonly presented as a well-assorted list of available web resources. Although the diversity of services is positive in general, the proliferation of tools, their dispersion, and their heterogeneity complicate the integrated exploitation of this data-processing capacity. RESULTS. To facilitate the construction of software clients and make integrated use of this variety of tools, we present a modular programmatic application interface (MAPI) that provides the functionality needed for a uniform representation of Web Service metadata descriptors, including the management and invocation protocols of the services they represent. This document describes the main functionality of the framework and how it can be used to facilitate the deployment of new software under a unified structure of bioinformatics Web Services. A notable feature of MAPI is the modular organization of functionality into different modules associated with specific tasks. This means that only the modules needed by a client have to be installed, and that module functionality can be extended without re-writing the software client. CONCLUSIONS. The potential utility and versatility of the software library have been demonstrated by the implementation of several currently available clients that cover different aspects of integrated data processing, ranging from service discovery to service invocation, with advanced features such as workflow composition and asynchronous service calls to multiple types of Web Services, including those registered in repositories (e.g. GRID-based, SOAP, BioMOBY, R-bioconductor, and others).

Relevance: 80.00%

Abstract:

This paper reviews methods for the inventory of below-ground biotas in the humid tropics, to document the (hypothesized) loss of soil biodiversity associated with deforestation and agricultural intensification at forest margins. The biotas were grouped into eight categories, each corresponding to a major functional group considered important or essential to soil function. An accurate inventory of soil organisms can assist in ecosystem management and help sustain agricultural production. The advantages and disadvantages of transect-based and grid-based sampling methods are discussed, illustrated by published protocols ranging from the original "TSBF transect", through versions developed for the Alternatives to Slash-and-Burn Project (ASB), to the final schemes (with variants) adopted by the Conservation and Sustainable Management of Below-Ground Biodiversity Project (CSM-BGBD). Consideration is given to the place and importance of replication in below-ground biological sampling, and it is argued that the new sampling protocols are inclusive, i.e. designed to sample all eight biotic groups in the same field exercise; spatially scaled, i.e. providing biodiversity data at site, locality, landscape, and regional levels and linking the data to land use and land cover; and statistically robust, as shown by a partial randomization of plot locations for sampling.
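The geometric contrast between the two sampling designs can be sketched at the level of point layout; the actual TSBF and CSM-BGBD protocols specify far more (monolith sizes, depths, sub-sampling), so this shows only the spatial skeleton, with hypothetical spacings:

```python
def transect_points(origin, spacing, n):
    """Sampling points at regular intervals along a single transect line."""
    x0, y0 = origin
    return [(x0 + i * spacing, y0) for i in range(n)]

def grid_points(origin, spacing, nx, ny):
    """Sampling points on a regular nx-by-ny grid."""
    x0, y0 = origin
    return [(x0 + i * spacing, y0 + j * spacing)
            for i in range(nx) for j in range(ny)]

# Same sampling effort (12 points), different spatial coverage:
# a long narrow strip versus a compact two-dimensional block
t = transect_points((0.0, 0.0), 5.0, 12)
g = grid_points((0.0, 0.0), 5.0, 4, 3)
```

For equal effort, the transect spans a longer environmental gradient while the grid captures two-dimensional spatial structure, which is the trade-off the reviewed protocols negotiate.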

Relevance: 80.00%

Abstract:

The interconnection of loads and small-scale generation forms a new type of distribution system, the microgrid. Microgrids can be operated together with the utility grid or autonomously as an island. These small grids represent a new paradigm for the construction of low-voltage distribution systems. Microgrids in distribution systems can become small, controllable units that react immediately to changes in the system. In addition, microgrids can offer special properties such as increased reliability, reduced losses, voltage sag correction, and uninterruptible supply. The goals of this thesis are to explain the operating principles of a microgrid, to clarify the main ideas and positive features of microgrids, to establish and substantiate their advantages, and to explain why they are now so popular all over the world. The practical aims of the thesis are to construct a laboratory test setup of a microgrid based on two inverters from SMA Technologie AG and to test all the main operating modes and parameters of the microgrid. A further purpose is to test the main component of the microgrid, the battery inverter, which controls all processes and energy flows inside the microgrid and communicates with the main grid. Based on the data obtained, the main contribution of the thesis is an assessment of the established microgrid from the points of view of reliability, economy, and simplicity of operation, together with an evaluation of the advisability of its use under different conditions. Moreover, the thesis provides recommendations and advice for future investigations of the system built.

Relevance: 80.00%

Abstract:

The conventional treatment of domestic wastewater currently leads to the depletion of natural resources and the pollution of receiving environments. The use of a more global, ecosystemic approach such as Ecological Sanitation, which aims to close the cycles of water and of the nutrients (phosphorus and nitrogen) contained in excreta by reusing them in agriculture, would make it possible to improve this situation ecologically. However, this emerging paradigm is rarely taught to the planning professionals responsible for its implementation, especially regarding its application in developed northern countries. To improve the planning of this type of system, these professionals must therefore be informed of the most appropriate practices to adopt. To this end, a model planning scenario was developed from an exhaustive literature review and from data analysis, based on the recommendations of the approach for the context studied. This scenario will help professionals better understand Ecological Sanitation and its implementation, and thus represents a starting point for the interdisciplinary and participatory discussions the approach requires. In conclusion, there are still many gaps in the information available on the use of alternative treatments in northern climates and on their acceptance by users. Moreover, legislative frameworks remain a considerable obstacle to the implementation of such a system. This research nevertheless helps demystify the approach for professionals and could help modify certain legislative frameworks to integrate its philosophy.

Relevance: 80.00%

Abstract:

The mythical figure of the double appears in most cultures in archetypal forms that reflect the experience of the individual's division into antithetical or complementary positions. In Gothic and fantastic literature, the myth lends itself to creating a feeling of anguish and horror that underscores the problems and mysteries of the subject's split. This analysis proposes to group narratives of the double into two categories of thematic occurrence, based on their textual treatment: the appearance of the double through homonymy on the one hand, and through pseudonymy on the other. This will ultimately lead to comments on the author's perception of himself and of the creative process. Since the problem of division lies at the heart of the early theoretical developments of psychology and psychoanalysis, a Lacanian and post-structuralist analytical framework is applied in this research. The works discussed are Paul Auster's New York Trilogy, Stephen King's The Dark Half, Edgar Allan Poe's William Wilson, Fyodor Dostoevsky's The Double, and Vladimir Nabokov's Despair.

Relevance: 80.00%

Abstract:

Agriculture plays a central role in the Earth system. It contributes to the greenhouse effect through emissions of CO2, CH4, and N2O; it can cause soil degradation and eutrophication and alter regional water cycles; and it will itself be strongly affected by climate change. Since all of these processes are closely linked through the underlying nutrient and water fluxes, they should be considered within a single consistent modelling approach. Nevertheless, a lack of data and insufficient process understanding have until recently prevented this at the global scale. This thesis presents the first version of such a consistent global modelling approach, with an emphasis on simulating agricultural yields and the resulting N2O emissions. This emphasis was chosen because correctly representing plant growth is an essential prerequisite for simulating all other processes, and because current and potential agricultural yields are important driving forces of land-use change and will be strongly affected by climate change. The second focus is the estimation of agricultural N2O emissions, since no process-based N2O model has so far been applied at the global scale. The existing agro-ecosystem model Daycent was chosen as the basis for the global modelling. Besides creating the simulation environment, the required global data sets for soil parameters, climate, and agricultural management were first compiled. Since no global database of planting dates is yet available, and since planting dates will shift with climate change, a routine for computing planting dates was developed. Its results show good agreement with the FAO crop calendars that are available for some crops and countries.
The Daycent model was then parameterized and calibrated for yield simulation of wheat, rice, maize, soybean, millet, pulses, potato, cassava, and cotton. The simulation results show that Daycent correctly captures the most important climate, soil, and management effects on yield formation. Computed country averages agree well with FAO data (R2 = 0.66 for wheat, rice, and maize; R2 = 0.32 for soybean), and spatial yield patterns largely correspond to the observed distribution of crops and to subnational statistics. The modelling of agricultural N2O emissions with Daycent was preceded by a statistical analysis of N2O and NO emission measurements from natural and agricultural ecosystems. The parameters identified as significant for N2O (fertilization rate, soil carbon content, soil pH, texture, crop type, fertilizer type) and for NO (fertilization rate, soil nitrogen content, climate) largely agree with the results of an earlier analysis. For emissions from soils under natural vegetation, for which no such statistical analysis previously existed, soil carbon content, soil pH, bulk density, drainage, and vegetation type have a significant influence on N2O emissions, while NO emissions depend significantly on soil carbon content and vegetation type. Based on the statistical models derived from these results, global emissions from arable soils amount to 3.3 Tg N/y for N2O and 1.4 Tg N/y for NO. Such statistical models are useful for computing estimates and uncertainty ranges of N2O and NO emissions from a large number of measurements. The dynamics of soil nitrogen, influenced in particular by plant growth, climate change, and land-use change, can, however, only be accounted for by applying process-oriented models.
To model N2O emissions with Daycent, its trace-gas module was first extended by a more detailed computation of nitrification and denitrification and by accounting for freeze-thaw emissions. This revised model version was then tested against N2O emission measurements under various climates and crops. Both the dynamics and the totals of the N2O emissions are reproduced satisfactorily, with model efficiencies for monthly means between 0.1 and 0.66 for most sites. Based on the revised model version, N2O emissions were computed for the previously parameterized crops. Emission rates and crop-specific differences largely agree with values reported in the literature. Fertilizer-induced emissions, currently estimated by the IPCC at 1.25 +/- 1% of the amount of fertilizer applied, range from 0.77% (rice) to 2.76% (maize). The sum of the computed emissions from agricultural soils amounts to 2.1 Tg N2O-N/y for the mid-1990s, which agrees with estimates from other studies.
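The fertilizer-induced emission factor compared against the IPCC default is a simple difference ratio: emissions from a fertilized plot minus emissions from an unfertilized control, divided by the nitrogen applied. A minimal sketch with hypothetical plot values:

```python
def fertilizer_induced_ef(e_fertilized, e_control, n_applied):
    """Fertilizer-induced emission factor as a percentage of applied N:
    100 * (E_fertilized - E_control) / N_applied.
    Emissions in kg N2O-N/ha/y, N applied in kg N/ha."""
    return 100.0 * (e_fertilized - e_control) / n_applied

# Hypothetical example: 2.5 kg N2O-N/ha/y from a fertilized plot,
# 1.0 from the unfertilized control, 120 kg N/ha applied
ef = fertilizer_induced_ef(2.5, 1.0, 120.0)  # -> 1.25 %
```

The crop-specific range quoted above (0.77% for rice to 2.76% for maize) is this quantity computed from the model's simulated fertilized and background emissions.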

Relevance: 80.00%

Abstract:

This thesis presents three important results in visual object recognition based on shape. (1) A new algorithm (RAST: Recognition by Adaptive Subdivisions of Transformation space) is presented that has lower average-case complexity than any known recognition algorithm. (2) It is shown, both theoretically and empirically, that representing 3D objects as collections of 2D views (the "View-Based Approximation") is feasible and affects the reliability of 3D recognition systems no more than other commonly made approximations. (3) The problem of recognition in cluttered scenes is considered from a Bayesian perspective; the commonly used "bounded-error error measure" is demonstrated to correspond to an independence assumption. It is shown that by better modeling the statistical properties of real scenes, objects can be recognized more reliably.
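The adaptive-subdivision idea behind RAST can be illustrated in one dimension as branch-and-bound over translations: recursively split the transformation interval, bound the number of model-to-image matches achievable anywhere in a cell, and prune cells that cannot beat the best match count found so far. This toy sketch is not the thesis's algorithm, which operates on higher-dimensional transformation spaces:

```python
def matches_upper_bound(model, image, lo, hi, eps=0.1):
    """Upper bound on matches achievable by any translation t in [lo, hi]:
    a model point can match if some image point lies in the widened
    interval [p + lo - eps, p + hi + eps]."""
    return sum(1 for p in model
               if any(p + lo - eps <= q <= p + hi + eps for q in image))

def rast_1d(model, image, lo, hi, eps=0.1, tol=1e-3):
    """Toy 1D recognition by adaptive subdivision of transformation space."""
    best = (0, None)                       # (match count, translation)
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        ub = matches_upper_bound(model, image, a, b, eps)
        if ub <= best[0]:
            continue                       # prune: cannot beat current best
        if b - a < tol:
            best = (ub, 0.5 * (a + b))     # cell small enough: accept
        else:
            m = 0.5 * (a + b)
            stack += [(a, m), (m, b)]      # subdivide the cell
    return best

model = [0.0, 1.0, 2.5]
image = [5.0, 6.0, 7.5, 9.0]   # the model translated by 5, plus clutter
score, t = rast_1d(model, image, 0.0, 10.0)
```

The pruning step is what gives the approach its favorable average-case behavior: large regions of transformation space are discarded after a single cheap bound computation.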

Relevance: 80.00%

Abstract:

The complexity of current and emerging architectures gives users options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow-water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers, and in some cases floating-point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on a core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition, and non-standard task-to-core mappings can dramatically alter performance. Finding this out, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
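The interpolation step of such a benchmark-driven model can be sketched as follows; the benchmark numbers below are hypothetical, not measurements from the Cray XE6:

```python
from bisect import bisect_left

def interpolate_runtime(benchmarks, n):
    """Predict runtime for problem size n by linear interpolation between
    benchmarked sizes; benchmarks is a list of (size, seconds) sorted by
    size. Sizes outside the measured range are clamped to the endpoints."""
    sizes = [s for s, _ in benchmarks]
    if n <= sizes[0]:
        return benchmarks[0][1]
    if n >= sizes[-1]:
        return benchmarks[-1][1]
    i = bisect_left(sizes, n)
    (s0, t0), (s1, t1) = benchmarks[i - 1], benchmarks[i]
    return t0 + (t1 - t0) * (n - s0) / (s1 - s0)

# Hypothetical measured loop-kernel times for three grid sizes
bench = [(128, 0.8), (256, 3.1), (512, 12.5)]
t = interpolate_runtime(bench, 384)  # midway between the 256 and 512 runs
```

A full model of this kind would hold one such table per execution scenario (decomposition, affinity, node population) and per work type (compute loops vs halo exchange), summing the interpolated parts for a deployment prediction.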

Relevance: 80.00%

Abstract:

Existing distributed hydrologic models are complex and computationally demanding to use as rapid-forecasting policy-decision tools, or even as classroom educational tools. In addition, platform dependence, specific input/output data structures, and the lack of dynamic data interaction with pluggable software components inside existing proprietary frameworks restrict these models to specialized user groups. RWater is a web-based hydrologic analysis and modeling framework that utilizes the widely used R software within the HUBzero cyberinfrastructure of Purdue University. RWater is designed as an integrated framework for distributed hydrologic simulation, together with subsequent parameter optimization and visualization schemes. RWater provides a platform-independent web-based interface, flexible data-integration capacity, grid-based simulations, and user extensibility. RWater uses RStudio to simulate hydrologic processes on raster-based data obtained through conventional GIS pre-processing. The program integrates the Shuffled Complex Evolution (SCE) algorithm for parameter optimization. Moreover, RWater enables users to produce descriptive statistics and visualizations of the outputs at different temporal resolutions. The applicability of RWater will be demonstrated by application to two watersheds in Indiana for multiple rainfall events.

Relevance: 80.00%

Abstract:

Work presented at the XXXV CNMAC, Natal-RN, 2014.

Relevance: 80.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)