Abstract:
The Southern Hemisphere Westerly Winds (SWW) have been suggested to exert a critical influence on global climate through wind-driven upwelling of deep water in the Southern Ocean and the potentially resulting atmospheric CO2 variations. The investigation of the temporal and spatial evolution of the SWW, along with its forcings and feedbacks, remains a significant challenge in climate research. In this study, the evolution of the SWW under orbital forcing from the early Holocene (9 kyr BP) to pre-industrial modern times is examined with transient experiments using the comprehensive coupled global climate model CCSM3. Analyses of the model results suggest that the annual and seasonal mean SWW underwent an overall strengthening and poleward-shifting trend during the course of the early-to-late Holocene under the influence of orbital forcing, except in austral spring, when the SWW exhibited an opposite trend, shifting towards the equator.
Abstract:
Conceptualization of groundwater flow systems is necessary for water resources planning. Geophysical, hydrochemical and isotopic characterization methods were used to investigate the groundwater flow system of a multi-layer fractured sedimentary aquifer along the coastline in southwestern Nicaragua. A geologic survey was performed across the 46 km² catchment. Electrical resistivity tomography (ERT) was applied along a 4.4 km transect parallel to the main river channel to identify fractures and determine aquifer geometry. Additionally, three cross sections in the lower catchment and two on hillslopes in the upper part of the catchment were surveyed using ERT. Stable water isotopes, chloride and silica were analyzed in spring, river, well and piezometer samples during the dry and wet seasons of 2012. Indications of moisture recycling were found, although identification of the source areas needs further investigation. The upper-middle catchment area is formed by fractured shale/limestone on top of compact sandstone. The lower catchment area comprises an alluvial unit about 15 m thick overlying a fractured shale unit. Two major groundwater flow systems were identified: a deep one in the shale unit, recharged in the upper-middle catchment area, and a shallow one flowing in the alluvial unit and recharged locally in the lower catchment area. Recharged precipitation displaces older groundwater along the catchment in a piston-flow mechanism. Geophysical methods in combination with hydrochemical and isotopic tracers provide information over different scales and resolutions, allowing an integrated analysis of groundwater flow systems. This approach provides integrated surface and subsurface information where remoteness, accessibility and costs prohibit installation of groundwater monitoring networks.
Abstract:
Although conventional sediment parameters (mean grain size, sorting, and skewness) and provenance have typically been used to infer sediment transport pathways, most freshwater, brackish, and marine environments are also characterized by abundant sediment constituents of biological, and possibly anthropogenic and volcanic, origin that can provide additional insight into local sedimentary processes. The biota will be spatially distributed according to its response to environmental parameters such as water temperature, salinity, dissolved oxygen, organic carbon content, grain size, and intensity of currents and tidal flow, whereas the presence of anthropogenic and volcanic constituents will reflect proximity to source areas and whether they are fluvially or aerially transported. Because each of these constituents has a unique environmental signature, it is a more precise proxy for its source area than the conventional sedimentary process indicators. This San Francisco Bay Coastal System study demonstrates that by applying a multi-proxy approach, the primary sites of sediment transport can be identified. Many of these sites are far from where the constituents originated, showing that sediment transport is widespread in the region. Although not often used, identifying and interpreting the distribution of naturally occurring and allochthonous biologic, anthropogenic, and volcanic sediment constituents is a powerful tool to aid in the investigation of sediment transport pathways in other coastal systems.
Abstract:
During two field campaigns (austral springs 2011 and 2012), the sedimentary architecture of a polar gravel-spit system on the northern coast of Potter Peninsula (Area 4) was revealed using ground-penetrating radar (GPR; Geophysical Survey Systems, Inc. SIR-3000). A total of 47 profiles were collected using a mono-static 200 MHz antenna operated in common-offset mode, with the trace increment set to 0.05 m. A differential global positioning system (dGPS, Leica GS09) was used to obtain topographic information along the GPR lines. GPR data are provided in RADAN format; dGPS coordinates are provided in ASCII format; the projection is UTM (WGS 84, zone 21S).
Abstract:
We map the weekly position of the Antarctic Polar Front (PF) in the Southern Ocean over a 12-year period (2002-2014) using satellite sea surface temperature (SST) estimated from cloud-penetrating microwave radiometers. Our study advances previous efforts to map the PF using hydrographic and satellite data and provides a unique realization of the PF at weekly resolution across all longitudes. The mean path of the PF is asymmetric; its latitudinal position spans from 44 to 64° S along its circumpolar path. SST at the PF ranges from 0.6 to 6.9 °C, reflecting the large spread in latitudinal position. The average intensity of the front is 1.7 °C per 100 km, with intensity ranging from 1.4 to 2.3 °C per 100 km. Front intensity is significantly correlated with the depth of bottom topography, suggesting that the front intensifies over shallow bathymetry. Realizations of the PF are consistent with the corresponding surface expressions of the PF estimated using expendable bathythermograph data in the Drake Passage and Australian and African sectors. The climatological mean position of the PF is similar, though not identical, to previously published estimates. As the PF is a key indicator of physical circulation, surface nutrient concentration, and biogeography in the Southern Ocean, future studies of physical and biogeochemical oceanography in this region will benefit from the provided data set.
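As a numerical illustration of the front-intensity metric used above (the SST change per 100 km), the sketch below computes a meridional SST gradient on a synthetic section; the grid spacing and SST values are invented for the example and are not taken from the data set.

```python
import numpy as np

def front_intensity(sst, dy_km):
    """Meridional SST gradient expressed in deg C per 100 km.

    sst   : 1-D array of SST (deg C) sampled along a meridian
    dy_km : spacing between samples in km (hypothetical value below)
    """
    grad = np.gradient(sst, dy_km)   # deg C per km
    return np.abs(grad) * 100.0      # deg C per 100 km

# Synthetic meridional section: a 2 deg C drop over 200 km
sst = np.linspace(6.0, 4.0, 9)       # 9 samples spanning 200 km
dy = 25.0                            # 25 km spacing between samples
intensity = front_intensity(sst, dy)
print(intensity.max())               # ~1.0 deg C per 100 km
```

A front detected this way would be where the intensity exceeds a chosen threshold; the real analysis works on two-dimensional satellite fields rather than a single section.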
Abstract:
The statistical distributions of different software properties have been thoroughly studied in the past, including software size, complexity and the number of defects. In the case of object-oriented systems, these distributions have been found to obey a power law, a common statistical distribution also found in many other fields. However, we have found that for some statistical properties, the behavior does not entirely follow a power law, but a mixture between a lognormal and a power law distribution. Our study is based on the Qualitas Corpus, a large compendium of diverse Java-based software projects. We have measured the Chidamber and Kemerer metrics suite for every file of every Java project in the corpus. Our results show that the range of high values for the different metrics follows a power law distribution, whereas the rest of the range follows a lognormal distribution. This is a pattern typical of so-called double Pareto distributions, also found in empirical studies for other software properties.
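The mixed tail behaviour described above can be illustrated with synthetic data: the sketch below combines a lognormal body with a Pareto tail and recovers the tail's density exponent with the standard maximum-likelihood (Hill-type) estimator. All sample sizes, thresholds and exponents are invented for the example and are not the paper's measured values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Body: lognormal draws; tail: classical Pareto draws above x_min.
# numpy's pareto() is a Lomax; (1 + pareto) * x_min gives the classical
# form with survival exponent a, i.e. density exponent a + 1.
body = rng.lognormal(mean=1.0, sigma=0.5, size=9000)
x_min = 20.0
tail = x_min * (1.0 + rng.pareto(a=2.5, size=1000))
data = np.concatenate([body, tail])

# MLE of the power-law density exponent for the values above x_min
tail_sample = data[data >= x_min]
alpha_hat = 1.0 + len(tail_sample) / np.sum(np.log(tail_sample / x_min))
print(round(alpha_hat, 2))   # close to a + 1 = 3.5
```

In the study itself the fitted quantities are Chidamber and Kemerer metric values per file; the same estimator applied below and above the threshold is what distinguishes the lognormal range from the power-law range.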
Abstract:
Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, as well as the need for greater processing power and fault tolerance, motivated the interconnection of electronic devices; many of these networks can transfer data at high speed. The concept of distributed systems emerged to describe systems whose different parts execute on several nodes that interact with each other via a communication network. Java's popularity, facilities and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of the RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, the RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification is being developed under the Java Community Process (JSR-302). Its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither the RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed.
The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) in order to define appropriate abstractions to overcome this problem; currently there is no formal specification. The aim of this thesis is to develop a communication middleware suitable for the development of distributed hard real-time systems in Java, based on the integration of the RMI (Remote Method Invocation) model with the HRTJ profile. It has been designed and implemented with the main requirements in mind: predictability and reliability of the timing behavior and of the resource usage. The design starts with the definition of a computational model which identifies, among other things, the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from execution itself by defining two phases and a specific threading mechanism that guarantees suitable timing behavior. It also includes mechanisms to monitor the functional and timing behavior. It provides independence from the network protocol by defining a network interface and protocol modules. The JRMP protocol was modified to include the two phases, non-functional parameters, and message size optimizations. Although serialization is one of the fundamental operations for ensuring proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization scheme that introduces a new compiler to generate optimized code according to the computational model.
The proposed solution has the advantage of allowing us to schedule the communications and to adjust the memory usage at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time in each functional block) and the network usage (actual consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.
Abstract:
The use of modular or ‘micro’ maximum power point tracking (MPPT) converters at module level in series association, commercially known as “power optimizers”, allows the individual adaptation of each panel to the load, solving part of the problems related to partial shadows and different tilt and/or orientation angles of the photovoltaic (PV) modules. This is particularly relevant in building-integrated PV systems. This paper presents analytical studies of the behaviour of cascaded MPPT converters, together with evaluation test results of a prototype developed under a Spanish national research project. On the one hand, this work focuses on the development of new expressions that can be used to characterize the behaviour of individual MPPT converters applied to each module and connected in series in a typical grid-connected PV system. On the other hand, a novel characterization method for MPPT converters is developed, and experimental results for the prototype are obtained when individual partial shading is applied and the converters are connected in a typical grid-connected PV array.
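A minimal sketch of the series "power optimizer" principle the paper builds on: each module-level converter delivers its module's maximum power, the string current is common to all converters, and the converter output voltages therefore divide the bus voltage in proportion to those powers. All numbers below are invented for illustration.

```python
# Hypothetical series string of three module-level MPPT converters.
module_mpp_watts = [300.0, 300.0, 150.0]   # third module partially shaded
bus_voltage = 400.0                        # fixed by the grid-tie inverter

# Each converter exports its module's MPP power; with a common series
# current, the string current follows from the total power.
total_power = sum(module_mpp_watts)
string_current = total_power / bus_voltage  # common series current (A)

# Each converter's output voltage adjusts so that V_i = P_i / I_string.
converter_voltages = [p / string_current for p in module_mpp_watts]
print(string_current)       # 1.875 A
print(converter_voltages)   # [160.0, 160.0, 80.0] V, summing to 400 V
```

The shaded module's converter simply runs at a lower output voltage instead of dragging the whole string to its current, which is the adaptation the abstract describes.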
Abstract:
Membrane systems are computationally equivalent to Turing machines. However, their distributed and massively parallel nature yields polynomial solutions to problems whose traditional solutions are non-polynomial. It is therefore important to develop dedicated hardware and software implementations exploiting these two features of membrane systems. In distributed implementations of P systems, a communication bottleneck problem has arisen: as the number of membranes grows, the network becomes congested. The purpose of distributed architectures is to reach a compromise between the massively parallel character of the system and the evolution step time needed to transit from one configuration of the system to the next, solving the communication bottleneck problem. The goal of this paper is twofold. Firstly, to survey in a systematic and uniform way the main results regarding how membranes can be placed on processors in order to obtain a software/hardware simulation of P systems in a distributed environment. Secondly, we improve some results on the membrane dissolution problem, prove that it is connected, and discuss the possibility of simulating this property in the distributed model. All this yields an improvement in the implementation of system parallelism, since it increases the parallelism of the external communication among processors. The proposed ideas improve on previous architectures that tackle the communication bottleneck problem, reducing the total time of an evolution step, increasing the number of membranes that can run on a processor, and reducing the number of processors.
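The compromise between parallelism and communication can be illustrated with a toy cost model (not the paper's own analysis): rule application parallelises across processors, while external communication congests as the processor count grows, so an intermediate number of processors minimises the evolution-step time. All time constants below are invented.

```python
import math

def step_time(M, K, t_apply=1.0, t_comm=4.0):
    """Hypothetical evolution-step time for M membranes on K processors.

    ceil(M/K) * t_apply : rule application, shared among K processors
    K * t_comm          : external communication, congesting with K
    """
    return math.ceil(M / K) * t_apply + K * t_comm

M = 256
times = {K: step_time(M, K) for K in (1, 2, 4, 8, 16, 32)}
best_K = min(times, key=times.get)
print(best_K, times[best_K])   # an intermediate K wins: 8 processors, 64.0
```

Under this toy model, too few processors waste parallelism and too many saturate the network, which is exactly the bottleneck trade-off the surveyed architectures try to balance.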
Abstract:
An extended 3D distributed model based on distributed circuit units for the simulation of triple-junction solar cells under realistic light distributions has been developed. Special emphasis has been placed on the capability of the model to accurately account for current mismatch and chromatic aberration effects. The model has been validated, as shown by the good agreement between experimental and simulation results, for different light spot characteristics including spectral mismatch and irradiance non-uniformities. The model is then used to predict the performance of a triple-junction solar cell for a light spot corresponding to a real optical architecture, in order to illustrate its suitability in assisting the analysis and design of concentrator systems.
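A minimal sketch of the current-mismatch effect the model captures: the subcells of a triple-junction cell are series-connected, so the stack's photocurrent is limited by the least-generating subcell, and chromatic aberration that starves one junction limits the whole cell. The photocurrent values below are invented for the example.

```python
def limiting_current(subcell_photocurrents):
    """Series connection: the stack conducts the smallest subcell current."""
    return min(subcell_photocurrents)

# Well-matched spectrum: the three junctions generate similar currents.
print(limiting_current([14.1, 14.0, 14.2]))   # 14.0 (e.g. mA/cm^2)

# Chromatic aberration shifts light away from the middle junction,
# which then limits the entire stack.
print(limiting_current([14.1, 11.5, 14.2]))   # 11.5: middle junction limits
```

The full distributed-circuit model resolves this effect locally across the cell area, rather than with a single lumped minimum as in this sketch.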