845 results for distributed computation
Abstract:
Changes in phenotypic traits, such as mollusc shells, are indicative of variations in selective pressure along environmental gradients. Recently, increased sea surface temperature (SST) and ocean acidification (OA) due to increased levels of carbon dioxide in seawater have been described as selective agents that may affect the biological processes underlying shell formation in calcifying marine organisms. The benthic snail Concholepas concholepas (Muricidae) is widely distributed along the Chilean coast, and so is naturally exposed to a strong physical-chemical latitudinal gradient. In this study, based on elliptical Fourier analysis, we assess changes in shell morphology (outline analysis) in juvenile C. concholepas collected at northern (23°S), central (33°S) and southern (39°S) locations off the Chilean coast. The shell morphology of individuals collected in the northern and central regions corresponds to extreme morphotypes, in agreement with the observed regional differences in shell apex outlines, the high reclassification success of individuals (discriminant function analysis) collected in these regions, and the scaling relationship in shell weight variability among regions. However, these extreme morphotypes showed similar patterns of mineralization of calcium carbonate forms (calcite and aragonite). Geographical variability in the shell shape of C. concholepas described by discriminant functions was partially explained by environmental variables (pCO2, SST). This suggests an influence of corrosive waters, such as upwelled and fresh waters penetrating the coastal ocean, on spatial variation in shell morphology. Changes in the proportion of calcium carbonate forms precipitated by C. concholepas across the shell, and their susceptibility to corrosive coastal waters, are discussed.
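To make the outline-analysis step concrete, here is a minimal sketch using complex Fourier descriptors, a close relative of the elliptical Fourier analysis (Kuhl-Giardina) applied in the study. The outline below is synthetic, and the function name and harmonic count are illustrative choices, not taken from the paper.

```python
# Minimal outline-analysis sketch using complex Fourier descriptors, a close
# relative of elliptical Fourier analysis; shown only to make "outline
# analysis" concrete. The outline here is synthetic, not shell data.
import numpy as np

def fourier_descriptors(x, y, n_harmonics=10):
    """Normalized Fourier descriptors of a closed outline sampled at equal steps."""
    z = np.asarray(x) + 1j * np.asarray(y)
    coeffs = np.fft.fft(z)
    # Drop the DC term (position) and scale by the first harmonic (size),
    # so the remaining magnitudes describe shape only.
    return np.abs(coeffs[2 : 2 + n_harmonics]) / np.abs(coeffs[1])

# Synthetic "shell-like" outline: an ellipse with a small bump.
t = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
x = 2.0 * np.cos(t) + 0.1 * np.cos(5 * t)
y = 1.0 * np.sin(t)
print(fourier_descriptors(x, y))
```

Descriptors like these can then feed a discriminant function analysis of the kind the study uses to reclassify individuals by region.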
Abstract:
The gradual increase in atmospheric CO2 partial pressure (pCO2) has thrown carbonate chemistry off balance and lowered seawater pH in marine ecosystems, a process termed ocean acidification (OA). Anthropogenic OA is postulated to affect the physiology of many marine calcifying organisms, yet the susceptibility and the metabolic pathways of change in most calcifying animals are still far from well understood. In this work, the effects of exposure to elevated pCO2 were characterized in the gills and hepatopancreas of Crassostrea gigas using integrated proteomic and metabolomic approaches. Metabolic responses indicated that high CO2 exposure mainly caused disturbances in energy metabolism and osmotic regulation, marked by differentially altered ATP, glucose, glycogen, amino acids and organic osmolytes; the depletion of ATP in gills and the accumulation of ATP, glucose and glycogen in the hepatopancreas accounted for the difference in energy distribution between the two tissues. Proteomic responses suggested that OA affects not only energy and primary metabolism, stress responses and calcium homeostasis in both tissues, but also nucleotide metabolism in gills and cytoskeleton structure in the hepatopancreas. This study demonstrated that combining proteomics and metabolomics can provide an insightful view of the effects of OA on the oyster C. gigas. BIOLOGICAL SIGNIFICANCE: To our knowledge, few studies have focused on the responses induced by elevated pCO2 at both the protein and metabolite levels. The Pacific oyster C. gigas, widely distributed throughout most of the world's oceans, is a model organism for marine environmental science. In the present study, an integrated metabolomic and proteomic approach was used to elucidate the effects of ocean acidification on the Pacific oyster C. gigas, shedding light on the physiological responses of marine mollusks to OA stress.
Abstract:
Deep-water ecosystems are characterized by relatively low carbonate concentration values and, due to ocean acidification (OA), these habitats might be among the first to be exposed to undersaturated conditions in the forthcoming years. However, until now, very few studies have tested how cold-water coral (CWC) species react to such changes in seawater chemistry. The present work investigates the mid-term effect of decreased pH on calcification of the two branching CWC species most widely distributed in the Mediterranean, Lophelia pertusa and Madrepora oculata. No significant effects were observed on the skeletal growth rate, microdensity or porosity of either species after 6 months of exposure. However, while the calcification rate of M. oculata was similar for all colony fragments, a heterogeneous skeletal growth pattern was observed in L. pertusa, with younger nubbins showing higher growth rates than older ones. A higher energy demand is expected in these young, fast-growing fragments and, therefore, a reduction in calcification might be noticed in them earlier during long-term exposure to acidified conditions.
Abstract:
Macrocystis pyrifera is a widely distributed, highly productive seaweed. It is known to use bicarbonate (HCO3-) from seawater in photosynthesis, and the main mechanism of utilization is attributed to the external catalyzed dehydration of HCO3- by the surface-bound enzyme carbonic anhydrase (CAext). Here, we examined other putative HCO3- uptake mechanisms in M. pyrifera under pHT 9.00 (HCO3-:CO2 = 940:1) and pHT 7.65 (HCO3-:CO2 = 51:1). Rates of photosynthesis, and internal CA (CAint) and CAext activity, were measured following the application of AZ, which inhibits CAext, and DIDS, which inhibits a different HCO3- uptake system acting via an anion exchange (AE) protein. We found that the main mechanism of HCO3- uptake by M. pyrifera is via an AE protein, regardless of the HCO3-:CO2 ratio, with CAext making little contribution. Inhibiting the AE protein led to a 55%-65% decrease in photosynthetic rates. Inhibiting both the AE protein and CAext at pHT 9.00 led to 80%-100% inhibition of photosynthesis, whereas at pHT 7.65, passive CO2 diffusion supported 33% of photosynthesis. CAint was active at pHT 7.65 and 9.00, and its activity was always higher than that of CAext, because of its role in dehydrating HCO3- to supply CO2 to RuBisCO. Interestingly, the main mechanism of HCO3- uptake in M. pyrifera differs from that in the other Laminariales studied (a CAext-catalyzed reaction), and we suggest that species-specific knowledge of carbon uptake mechanisms is required in order to elucidate how seaweeds might respond to future changes in HCO3-:CO2 due to ocean acidification.
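The quoted HCO3-:CO2 ratios follow from pH through the first dissociation equilibrium of carbonic acid, [HCO3-]/[CO2] = K1/[H+] = 10^(pH - pK1*). A minimal sketch, assuming an illustrative seawater pK1* (the exact value depends on temperature and salinity and is not given in the abstract):

```python
# How the quoted HCO3-:CO2 ratios relate to pH via the first dissociation of
# carbonic acid: [HCO3-]/[CO2] = 10^(pH - pK1*). The pK1* below is an
# illustrative seawater figure (it varies with temperature and salinity),
# not a value taken from the paper.
PK1_STAR = 5.95  # assumed stoichiometric pK1* for seawater

def hco3_to_co2_ratio(ph, pk1=PK1_STAR):
    return 10 ** (ph - pk1)

for ph in (7.65, 9.00):
    print(f"pHT {ph}: HCO3-:CO2 = {hco3_to_co2_ratio(ph):.0f}:1")
# pHT 7.65 gives roughly 50:1 and pHT 9.00 roughly 1100:1, the same order as
# the paper's 51:1 and 940:1 (exact figures depend on the constants used).
```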
Abstract:
The literature on the use of free trade agreements (FTAs) has been growing recently because encouraging the use of existing FTAs is becoming more important than increasing their number. In this paper, we discuss some practical issues in the computation of FTA utilization rates, which provide a useful measure of how much FTA schemes are used in trade. For example, when using certificates-of-origin data on FTA utilization in exports, there are several points about which we should be more careful than when using customs data on FTA utilization in imports. Our practical guidance on the computation of FTA utilization rates will be helpful when computing such rates and when examining their determinants empirically.
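As a concrete reading of the measure, here is a minimal sketch of an import-side utilization rate: the share of FTA-eligible import value that actually entered under the FTA tariff. The record layout and field names are invented for illustration and are not the paper's data.

```python
# Minimal sketch of an import-side FTA utilization rate, computed over a list
# of hypothetical customs records; field names are illustrative only.

def fta_utilization_rate(records):
    """Share of FTA-eligible import value that actually entered under the FTA tariff."""
    eligible = sum(r["value"] for r in records if r["fta_eligible"])
    used = sum(r["value"] for r in records if r["fta_eligible"] and r["entered_under_fta"])
    return used / eligible if eligible else 0.0

records = [
    {"value": 100.0, "fta_eligible": True,  "entered_under_fta": True},
    {"value":  50.0, "fta_eligible": True,  "entered_under_fta": False},
    {"value":  30.0, "fta_eligible": False, "entered_under_fta": False},
]
print(fta_utilization_rate(records))  # 100 / 150, about 0.67
```

On the export side, certificates-of-origin data measure certificate issuance rather than preferential entry, so both the numerator and the denominator must be defined with more care, which is the kind of practical issue the paper discusses.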
Abstract:
Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. In addition, the evolution of communication networks and paradigms, together with the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices; many of these networks can transfer data at high speed. Distributed systems emerged as systems whose parts execute on several nodes that interact with each other via a communication network. Java's popularity, facilities and platform independence have made it an interesting language for the real-time and embedded community, which motivated the development of the RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques; however, RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification is being developed under the Java Community Process (JSR-302); its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed. The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) to define appropriate abstractions to overcome this problem; currently there is no formal specification. The aim of this thesis is to develop a communication middleware suitable for the development of distributed hard real-time systems in Java, based on the integration of the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented with the main requirements in mind: predictability and reliability of timing behavior and resource usage. The design starts with the definition of a computational model which identifies, among other things, the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from execution itself by defining two phases and a specific threading mechanism that guarantees suitable timing behavior. It also includes mechanisms to monitor functional and timing behavior, and it provides independence from the network protocol by defining a network interface and modules. The JRMP protocol was modified to include the two phases, non-functional parameters, and message-size optimizations.
Although serialization is one of the fundamental operations for ensuring proper data transmission, current implementations are not suitable for hard real-time systems, and no alternatives exist. This thesis proposes a predictable serialization approach that introduces a new compiler to generate optimized code according to the computational model. The proposed solution has the advantage of allowing communications to be scheduled and memory usage to be adjusted at compilation time. To validate the design and the implementation, a demanding validation process was carried out with emphasis on functional behavior, memory usage, processor usage (the end-to-end response time and the response time in each functional block) and network usage (actual consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.
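To illustrate the predictability idea behind the proposed serialization (though not the thesis's compiler-generated Java code), here is a minimal sketch in which a fixed wire format makes the message size, and a bound on the serialization work, known before runtime; the message layout is invented for illustration.

```python
# Sketch of the "predictable serialization" idea: a fixed wire format makes
# the message size (and, roughly, the serialization cost) known at build time,
# which is what allows communications to be scheduled and memory to be bounded
# in advance. Concept only; the thesis uses a compiler that generates such
# code automatically for Java/RMI, which is not reproduced here.
import struct

# Hypothetical message layout: a 4-byte id and an 8-byte reading, big-endian.
MSG_FORMAT = ">i d"
MSG_SIZE = struct.calcsize(MSG_FORMAT)  # fixed 12 bytes, known before runtime

def serialize(sensor_id: int, reading: float) -> bytes:
    # No dynamic structures are walked, so the worst-case time is bounded.
    return struct.pack(MSG_FORMAT, sensor_id, reading)

def deserialize(payload: bytes) -> tuple:
    return struct.unpack(MSG_FORMAT, payload)

msg = serialize(7, 3.14)
assert len(msg) == MSG_SIZE
print(deserialize(msg))  # (7, 3.14)
```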
Abstract:
The use of modular or 'micro' maximum power point tracking (MPPT) converters at module level in series association, commercially known as "power optimizers", allows the individual adaptation of each panel to the load, solving part of the problems related to partial shadows and different tilt and/or orientation angles of the photovoltaic (PV) modules. This is particularly relevant in building-integrated PV systems. This paper presents useful behavioural analytical studies of cascaded MPPT converters and evaluation test results of a prototype developed under a Spanish national research project. On the one hand, this work focuses on developing new, useful expressions that identify the behaviour of individual MPPT converters applied to each module and connected in series in a typical grid-connected PV system. On the other hand, a novel characterization method for MPPT converters is developed, and experimental results of the prototype are obtained under individual partial shading, with the converters connected in a typical grid-connected PV array.
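For context, here is a sketch of the classic perturb-and-observe loop that per-module MPPT converters commonly implement; the panel model and numbers are invented for illustration, and the paper's converter design and analytical expressions are not reproduced.

```python
# Minimal perturb-and-observe MPPT sketch: perturb the operating voltage,
# keep going if power rises, reverse if it falls. This is the generic
# textbook algorithm, shown only to make the per-module tracker concrete.

def panel_power(v):
    # Hypothetical panel P-V curve with a single maximum (illustrative only).
    i = max(0.0, 8.0 * (1.0 - (v / 36.0) ** 7))  # crude I(V) roll-off
    return v * i

def perturb_and_observe(v=20.0, step=0.5, iters=200):
    p_prev = panel_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = panel_power(v)
        if p < p_prev:              # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"~MPP near {v_mpp:.1f} V, {p_mpp:.1f} W")
```

Running one such tracker per module is what lets a shaded or mis-oriented panel settle at its own maximum power point instead of dragging down the whole series string.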
Abstract:
Nowadays, computing platforms consist of a very large number of components that must be supplied with different voltage levels and power requirements. Even a very small platform, like a handheld computer, may contain more than twenty different loads and voltage regulators. The power delivery designers of these systems are required to provide, in a very short time, the right power architecture that optimizes performance and meets electrical specifications as well as cost and size targets. The appropriate selection of the architecture and converters directly defines the performance of a given solution. Therefore, the designer needs to be able to evaluate a significant number of options in order to know with good certainty whether the selected solutions meet the size, energy efficiency and cost targets. The difficulty of selecting the right solution arises from the wide range of power conversion products provided by different manufacturers, ranging from discrete components (to build converters) to complete power conversion modules that employ different manufacturing technologies. Consequently, in most cases it is not possible to analyze all the alternatives (combinations of power architectures and converters) that can be built; the designer has to select a limited number of converters in order to simplify the analysis. To overcome these difficulties, this thesis proposes a new design methodology for power supply systems that integrates evolutionary computation techniques to make it possible to analyze a large number of possibilities. This exhaustive analysis helps the designer to quickly define a set of feasible solutions and select the best performance trade-off for each application. The proposed approach consists of two key steps, one for the automatic generation of architectures and the other for the optimized selection of components; the thesis details the implementation of both. The usefulness of the methodology is corroborated by contrasting results on real problems and on experiments designed to test the limits of the algorithms.
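As a hint of what the component-selection step can look like, here is a toy genetic-algorithm sketch that picks one converter per load from a hypothetical catalog, trading cost against efficiency. The catalog, fitness function and GA parameters are all invented for illustration; the thesis's architecture-generation step and actual models are not shown.

```python
# Toy evolutionary selection of converters: one catalog entry per load,
# fitness rewards efficiency and penalizes cost. Illustration only.
import random

CATALOG = [  # (cost, efficiency) per candidate converter, invented values
    (2.0, 0.85), (3.5, 0.90), (5.0, 0.93), (8.0, 0.96),
]
N_LOADS = 6

def fitness(genome):
    cost = sum(CATALOG[g][0] for g in genome)
    avg_eff = sum(CATALOG[g][1] for g in genome) / N_LOADS
    return avg_eff * 100.0 - cost      # reward efficiency, penalize cost

def evolve(pop_size=40, generations=60, p_mut=0.1):
    pop = [[random.randrange(len(CATALOG)) for _ in range(N_LOADS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_LOADS)    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:           # point mutation
                child[random.randrange(N_LOADS)] = random.randrange(len(CATALOG))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

The value of the evolutionary step is exactly what the abstract claims: it lets a designer sweep a combinatorial space (catalog size raised to the number of loads) that would be infeasible to enumerate by hand.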
Abstract:
With electricity consumption increasing within the United States, new paradigms of delivering electricity are required in order to meet demand. One promising option is the increased use of distributed power generation. Already a growing percentage of electricity generation, distributed generation locates the power plant physically close to the consumer, avoiding transmission and distribution losses as well as providing the possibility of combined heat and power. Despite the possible efficiency gains, regulators and utilities have been reluctant to implement distributed generation, creating numerous technical, regulatory, and business barriers. Certain governments, most notably California, are making concerted efforts to overcome these barriers in order to ensure distributed generation plays a part as the country meets demand while shifting to cleaner sources of energy.
Abstract:
Work on distributed data management commenced shortly after the introduction of the relational model in the mid-1970s. The 1970s and 1980s were very active periods for the development of distributed relational database technology, and claims were made that within the following ten years centralized databases would be an "antique curiosity" and most organizations would move toward distributed database managers [1]. That prediction has certainly come true, and all commercial DBMSs today are distributed.
Abstract:
The problem of fairly distributing the capacity of a network among a set of sessions has been widely studied. In this problem, each session connects a source and a destination via a single path, and its goal is to maximize its assigned transmission rate (i.e., its throughput). Since the links of the network have limited bandwidths, some criterion has to be defined to fairly distribute their capacity among the sessions. A popular criterion is max-min fairness which, in short, guarantees that each session i gets a rate λi such that no session s can increase λs without causing another session s′ to end up with a rate λs′ < λs. Many max-min fair algorithms have been proposed, both centralized and distributed. However, to our knowledge, all proposed distributed algorithms require control data to be continuously transmitted in order to recompute the max-min fair rates when needed (because none of them has mechanisms to detect convergence to the max-min fair rates). In this paper we propose B-Neck, a distributed max-min fair algorithm that is also quiescent. This means that, in the absence of changes (i.e., session arrivals or departures), once the max-min rates have been computed, B-Neck stops generating network traffic. Quiescence is a key design concept of B-Neck, because B-Neck routers are capable of detecting and notifying changes in the convergence conditions of the max-min fair rates. As far as we know, B-Neck is the first distributed max-min fair algorithm that does not require a continuous injection of control traffic to compute the rates. The correctness of B-Neck is formally proved, and extensive simulations are conducted, showing that B-Neck converges relatively fast and behaves well in the presence of sessions arriving and departing.
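For reference, here is a centralized water-filling sketch of the max-min fair rates themselves, i.e., the quantity B-Neck computes distributedly and quiescently. This is the classical baseline, not B-Neck, and the topology below is invented for illustration.

```python
# Centralized water-filling computation of max-min fair rates: repeatedly find
# the most constrained (bottleneck) link, give its fair share to every session
# crossing it, then remove those sessions and that link.

def max_min_fair(capacity, sessions):
    """capacity: {link: bandwidth}; sessions: {session: [links on its path]}."""
    rate = {}
    active = set(sessions)
    cap = dict(capacity)
    while active:
        # Each remaining link's equal share among active sessions crossing it.
        share = {
            l: cap[l] / sum(1 for s in active if l in sessions[s])
            for l in cap
            if any(l in sessions[s] for s in active)
        }
        bottleneck = min(share, key=share.get)
        r = share[bottleneck]
        for s in [s for s in active if bottleneck in sessions[s]]:
            rate[s] = r                   # sessions through the bottleneck saturate
            active.remove(s)
            for l in sessions[s]:
                cap[l] -= r               # capacity left over for the others
        del cap[bottleneck]
    return rate

links = {"A": 10.0, "B": 6.0}
paths = {"s1": ["A"], "s2": ["A", "B"], "s3": ["B"]}
print(dict(sorted(max_min_fair(links, paths).items())))
# {'s1': 7.0, 's2': 3.0, 's3': 3.0}: link B is the bottleneck for s2 and s3,
# and s1 takes what remains of link A.
```

The distributed challenge the paper addresses is achieving this same fixed point without a central view, and, unlike prior algorithms, knowing when it has been reached so control traffic can stop.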
Abstract:
An extended 3D distributed model based on distributed circuit units has been developed for the simulation of triple-junction solar cells under realistic light-distribution conditions. Special emphasis has been placed on the capability of the model to accurately account for current mismatch and chromatic aberration effects. The model has been validated, as shown by the good agreement between experimental and simulation results, for different light spot characteristics including spectral mismatch and irradiance non-uniformities. The model is then used to predict the performance of a triple-junction solar cell for a light spot corresponding to a real optical architecture, in order to illustrate its suitability for assisting concentrator system analysis and design.
Abstract:
The consideration of real operating conditions in the design and optimization of a multijunction solar cell receiver-concentrator assembly is indispensable. Such a requirement involves the need for suitable modeling and simulation tools to complement the experimental work and circumvent its well-known burdens and restrictions. Three-dimensional distributed models have been demonstrated in the past to be a powerful choice for the analysis of distributed phenomena in single- and dual-junction solar cells, as well as for the design of strategies to minimize solar cell losses when operating under high concentrations. In this paper, we present the application of these models to the analysis of triple-junction solar cells under real operating conditions. The impact of different chromatic aberration profiles on the short-circuit current of triple-junction solar cells is analyzed in detail using the developed distributed model. Current spreading conditions the impact of a given chromatic aberration profile on the solar cell I-V curve. The focus is on determining the role of current spreading in the connection between the photocurrent profile, subcell voltage and current, and the sheet resistance of the semiconductor layers.
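As background for why chromatic aberration matters, here is a lumped (zero-dimensional) sketch of current matching in a series triple-junction stack: all subcells carry one common current, so the subcell whose photocurrent is depressed by aberration caps the stack current. It assumes ideal-diode subcells with invented parameters; the paper's 3D distributed model resolves these effects spatially and includes current spreading, which this sketch ignores.

```python
# Lumped current-matching sketch for a series triple-junction stack: one
# common current, voltages add, and the lowest-photocurrent subcell limits
# the stack. Ideal-diode subcells with invented parameters; illustration only.
import math

KT_Q = 0.0257  # thermal voltage at about 300 K, volts

def subcell_voltage(i, i_ph, i0):
    # Ideal single-diode law i = i_ph - i0*(exp(v/Vt) - 1), solved for v.
    return KT_Q * math.log((i_ph - i) / i0 + 1.0)

def stack_voltage(i, photocurrents, i0=1e-12):
    # Series connection: the same current flows, subcell voltages add.
    return sum(subcell_voltage(i, i_ph, i0) for i_ph in photocurrents)

# Chromatic aberration scenario: the middle subcell loses 20% photocurrent.
i_ph = [0.14, 0.112, 0.14]  # amps; the 0.112 A subcell is the limiting one
for i in (0.0, 0.05, 0.10, 0.111):
    print(f"I = {i:.3f} A -> V = {stack_voltage(i, i_ph):.3f} V")
print(f"in this ideal model the stack current cannot exceed ~{min(i_ph):.3f} A")
```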