908 results for "Optimal reactive power planning in interconnected multi-area power systems"
Abstract:
Modeling is a process by which models of real-world processes are obtained through simplification. However, the estimates produced by a model carry uncertainty that must be evaluated. A sensitivity analysis can improve confidence in the results, yet this step is often skipped, mainly because of the effort such an analysis involves. Moreover, when building a model, a balance must be kept between obtaining results that are as accurate as possible and keeping the model as simple as possible. For this reason, once a model has been built, it is essential to check whether processes initially left out need to be included. Ecosystem services are the processes through which ecosystems sustain and fulfill human well-being. The importance of ecosystem services and their associated benefits, together with the need to manage them well, has stimulated the emergence of models and tools to quantify them. InVEST (Integrated Valuation of Ecosystem Services and Tradeoffs) is one such tool for calculating ecosystem services, developed by the Natural Capital Project (Stanford University, USA). As a result of the growing interest in quantifying ecosystem services, increased application of InVEST is expected. The research developed in this thesis aims to support the important phases that follow the creation of a model, covering the following two studies. The first is the application of a sensitivity analysis to the model in a specific catchment using the most suitable methodology.
The second concerns the in-stream processes currently not included in the model, through the creation and application of a methodology to study the role these processes play in the InVEST nutrient retention model in the study area. The results of this thesis will contribute to understanding the uncertainty involved in the modeling process. They will also highlight the need to check a model's behavior before using it and when interpreting the results obtained. The work in this thesis will contribute to improving the InVEST platform, an important tool in the field of ecosystem services. This work will benefit future users of the tool, whether researchers (in future research) or practitioners (in future decision-making or ecosystem management work). ABSTRACT Modeling is the process of idealizing real-world situations through simplifications in order to obtain a model. However, model estimates carry uncertainties that have to be evaluated formally. The role of sensitivity analysis (SA) is to apportion model output uncertainty among the inputs, and it can increase confidence in a model; nevertheless, it is often omitted from modeling, usually because of the effort it involves. In addition, the balance between accuracy and simplicity is not easy to strike. For this reason, when a model is developed, it is necessary to test it in order to understand its behavior and to include, if necessary, more complexity to obtain a better response. Ecosystem services are the conditions and processes through which natural ecosystems, and their constituent species, sustain and fulfill human life. The relevance of ecosystem services, and the need to better manage them and their associated benefits, have stimulated the emergence of models and tools to measure them.
InVEST (Integrated Valuation of Ecosystem Services and Tradeoffs) is one of these ecosystem-service-specific tools, developed by the Natural Capital Project (Stanford University, USA). As a result of the growing interest in measuring ecosystem services, the use of InVEST is anticipated to grow substantially in the coming years. However, beyond model development, building a model involves other crucial stages, such as its evaluation and application, in order to validate its estimates. The work developed in this thesis aims to help in this relevant and imperative phase of the modeling process, and does so in two ways. The first is to conduct a sensitivity analysis of the model, which consists of choosing and applying a methodology in a study area and analyzing the results obtained. The second is related to the in-stream processes that the current model does not represent, and consists of creating and applying a methodology for testing the role of streams in the InVEST nutrient retention model in a case study, analyzing the results obtained. The results of this thesis will contribute to the understanding of the uncertainties involved in the modeling process. They will also illustrate the need to check the behavior of every model before putting it into production, and the importance of understanding model behavior so that results can be interpreted correctly in light of uncertainty. The work in this thesis will contribute to improving the InVEST platform, an important tool in the field of ecosystem services. It will benefit future users, whether researchers (in their future research) or practitioners (in their future ecosystem conservation or management decisions).
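The sensitivity-analysis step this abstract describes can be illustrated with a minimal one-at-a-time (OAT) sketch. The toy nutrient-export function and all parameter names below are illustrative stand-ins, not InVEST's actual model or API:

```python
# A minimal sketch of one-at-a-time (OAT) sensitivity analysis, using a toy
# nutrient-export function as a stand-in for a real model such as InVEST's
# nutrient retention model (function and parameter names are hypothetical).

def toy_nutrient_export(load, retention_eff, runoff_index):
    """Toy model: exported nutrient mass after landscape retention."""
    return load * (1.0 - retention_eff) * runoff_index

def oat_sensitivity(model, baseline, perturbation=0.10):
    """Relative change in output when each input is perturbed by +10%,
    holding all other inputs at their baseline values."""
    base_out = model(**baseline)
    sensitivities = {}
    for name, value in baseline.items():
        perturbed = dict(baseline)
        perturbed[name] = value * (1.0 + perturbation)
        sensitivities[name] = (model(**perturbed) - base_out) / base_out
    return sensitivities

baseline = {"load": 100.0, "retention_eff": 0.4, "runoff_index": 0.8}
sens = oat_sensitivity(toy_nutrient_export, baseline)
# load and runoff_index scale the output linearly, so a +10% change in either
# moves the output by +10%; retention_eff enters as (1 - e), so its effect differs.
```

Variance-based methods (e.g., Sobol indices) are the more rigorous choice when inputs interact, but OAT conveys the basic idea of apportioning output change to individual inputs.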
Abstract:
The aim of this study is to determine the yield and composition of the essential oil of cornmint (Mentha arvensis L.) grown in the irrigation area of Santiago del Estero, Argentina. Field tests were carried out under irrigation conditions, harvesting when 70% flowering was reached (in summer and at the end of winter). Essential oil yield was 2% in the first cut and 1.6% in the second; the major constituents of the essential oil were menthol, menthone, isomenthone and menthofuran. In both cuts a high concentration of menthol was obtained, although in winter the menthol content decreased and the concentration of menthofuran increased. It is concluded that the summer harvest produces a higher yield and better quality of essential oil.
Abstract:
We reconstructed vegetation responses to climate oscillations, fire and human activities since the last glacial maximum in inland NW Iberia, where previous paleoecological research is scarce. Extremely sparse and open vegetation composed of steppic grasslands and heathlands with scattered pioneer trees suggests very cold and dry conditions during the Oldest Dryas, unsuitable for tree survival in the surroundings of the study site. Slight woodland expansion during the Bølling/Allerød was interrupted by the Younger Dryas cooling. Pinewoods dominated for most of the early Holocene, when a marked increase in fire activity occurred. Deciduous trees expanded later, reaching their maximum representation during the mid-Holocene. Enhanced fire activity and the presence of coprophilous fungi around 6400-6000 cal yr BP suggest an early human occupation around the site. However, extensive deforestation only started at 4500 cal yr BP, when fire was used to clear the tree canopy. Final replacement of woodlands with heathlands, grasslands and cereal crops occurred from 2700 cal yr BP onwards due to land-use intensification. Our paleoecological record can help efforts aimed at restoring the natural vegetation by indicating which communities were dominant at the onset of heavy human impact, thus promoting the recovery of currently rare oak and alder stands.
Abstract:
Distributed computing models typically assume that every process in the system has a distinct identifier (ID) or that each process is programmed differently; such a system is called eponymous. In this kind of distributed system, unique IDs help solve problems: they can be embedded in messages to make them traceable (i.e., to identify which process a message is sent to or from), facilitating message transmission; several problems (leader election, consensus, etc.) can be solved without prior knowledge of network properties if processes have unique IDs; the register of a process will not be overwritten by other processes when that process announces itself; and unique IDs are useful for breaking symmetry. Hence, eponymous systems have significantly influenced the distributed computing community, both in theory and in practice. However, unique IDs also have disadvantages: they can leak information about the network (e.g., its size); processes in the system have no privacy; and assigning unique IDs is costly in bulk production (e.g., of sensors). This motivates homonymous systems: a system in which some processes share the same ID and are programmed identically is called homonymous. Furthermore, a system in which all processes share the same ID, or have no ID at all, is called anonymous. In homonymous or anonymous distributed systems, the symmetry problem (i.e., how to tell which process a message was sent from) is the main obstacle in the design of algorithms. This thesis proposes different symmetry-breaking methods (e.g., random functions, counting techniques) to solve agreement problems. Agreement is a fundamental problem in distributed computing comprising a family of abstractions. In this thesis, we focus on the design of consensus, set agreement, and broadcast algorithms in anonymous and homonymous distributed systems.
First, the fault-tolerant broadcast abstraction is studied in anonymous systems with reliable or fair-lossy communication channels, treated separately. Two classes of anonymous failure detectors, AΘ and AP∗, are proposed; both of them, together with a previously proposed failure detector ψ, are implemented and used to enrich the system model in order to implement the broadcast abstraction. Then, in the study of the consensus abstraction, it is proved that the failure detector class AΩ′ is strictly weaker than AΩ and that AΩ′ is implementable. The first implementation of consensus in anonymous asynchronous distributed systems augmented with AΩ′, where a majority of processes do not crash, is presented. Finally, a generalization of consensus, k-set agreement, is studied, along with the weakest failure detector L that solves it in asynchronous message-passing systems where processes may crash and recover, processes may have equal identities (homonyms), and complete initial knowledge of the membership is not available.
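As one concrete example of the randomized symmetry breaking mentioned above, the following sketch simulates identical anonymous processes electing a leader by repeated coin flips. This is a self-contained illustration, not code from the thesis; in a real anonymous system the "did anyone flip heads?" check would itself require a communication round:

```python
# Hypothetical simulation of randomized symmetry breaking among anonymous
# processes: in each round every surviving process flips a fair coin, and if
# at least one flips heads, only the heads-flippers survive. With probability 1
# this leaves a single survivor, breaking the initial symmetry.

import random

def elect_leader(n, seed=1):
    """Simulate n identical (anonymous) processes electing a leader."""
    rng = random.Random(seed)
    # Indices exist only for the simulation's bookkeeping: the processes
    # themselves are identical and carry no IDs.
    alive = list(range(n))
    rounds = 0
    while len(alive) > 1:
        rounds += 1
        heads = [p for p in alive if rng.random() < 0.5]
        if heads:  # symmetry broken this round: tails-flippers drop out
            alive = heads
    return alive[0], rounds

leader, rounds = elect_leader(8)
```

The expected number of rounds is logarithmic in n, since on average roughly half of the surviving processes drop out per round.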
Abstract:
Although context could be exploited to improve performance, elasticity and adaptation in most distributed systems that adopt the publish/subscribe (P/S) communication model, only a few researchers have focused on the area of context-aware matching in P/S systems and have explored its implications in domains with highly dynamic context, such as wireless sensor networks (WSNs) and IoT-enabled applications. Most adopted P/S models are context-agnostic or do not differentiate context from other application data. In this article, we present a novel context-aware P/S model. SilboPS manages context explicitly, focusing on minimizing network overhead in domains with recurrent context changes related, for example, to mobile ad hoc networks (MANETs). Our approach helps to efficiently share and use sensor data coming from ubiquitous WSNs across a plethora of applications intent on using these data to build context awareness. Specifically, we empirically demonstrate that decoupling a subscription from the changing context in which it is produced, and leveraging contextual scoping in the filtering process, notably reduces the (un)subscription cost per node while improving the global performance/throughput of the network of brokers without altering the cost of SIENA-like topology changes.
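The decoupling idea described above can be sketched in a few lines: a subscription carries a context scope separate from its content filter, so a change of context touches only the scope and leaves the installed filter (and any routing state keyed on it) intact. None of this reflects SilboPS's actual API; it is a hypothetical illustration of the principle:

```python
# Hypothetical sketch of context-scoped matching in a context-aware P/S system.
# The class and field names are ours, not SilboPS's.

class Subscription:
    def __init__(self, content_filter, context_scope):
        self.content_filter = content_filter  # predicate on the event payload
        self.context_scope = context_scope    # predicate on the publisher context

    def matches(self, event, context):
        # An event is delivered only if it matches the content filter AND was
        # produced in a context falling inside the subscription's scope.
        return self.context_scope(context) and self.content_filter(event)

# Subscribe to temperature alerts, but only from sensors in zone "building-A".
sub = Subscription(
    content_filter=lambda e: e["temp"] > 30,
    context_scope=lambda c: c["zone"] == "building-A",
)
assert sub.matches({"temp": 35}, {"zone": "building-A"})
assert not sub.matches({"temp": 35}, {"zone": "building-B"})

# When the subscriber's context changes, only the scope is replaced; the
# content filter itself is untouched, which is what keeps the
# (un)subscription cost low under recurrent context changes.
sub.context_scope = lambda c: c["zone"] == "building-B"
assert sub.matches({"temp": 35}, {"zone": "building-B"})
```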
Abstract:
Background: Component-based diagnosis on multiplex platforms is widely used in food allergy but its clinical performance has not been evaluated in nut allergy. Objective: To assess the diagnostic performance of a commercial protein microarray in the determination of specific IgE (sIgE) in peanut, hazelnut, and walnut allergy. Methods: sIgE was measured in 36 peanut-allergic, 36 hazelnut-allergic, and 44 walnut-allergic patients by ISAC 112, and subsequently, sIgE against available components was determined by ImmunoCAP in patients with negative ISAC results. ImmunoCAP was also used to measure sIgE to Ara h 9, Cor a 8, and Jug r 3 in a subgroup of lipid transfer protein (LTP)-sensitized nut-allergic patients (positive skin prick test to LTP-enriched extract). sIgE levels by ImmunoCAP were compared with ISAC ranges. Results: Most peanut-, hazelnut-, and walnut-allergic patients were sensitized to the corresponding nut LTP (Ara h 9, 66.7%; Cor a 8, 80.5%; Jug r 3, 84%, respectively). However, ISAC did not detect sIgE in 33.3% of peanut-allergic patients, 13.9% of hazelnut-allergic patients, or 13.6% of walnut-allergic patients. sIgE determination by ImmunoCAP detected sensitization to Ara h 9, Cor a 8, and Jug r 3 in, respectively, 61.5% of peanut-allergic patients, 60% of hazelnut-allergic patients, and 88.3% of walnut-allergic patients with negative ISAC results. In the subgroup of peach LTP-sensitized patients, Ara h 9 sIgE was detected in more cases by ImmunoCAP than by ISAC (94.4% vs 72.2%, P<.05). Similar rates of Cor a 8 and Jug r 3 sensitization were detected by both techniques. Conclusions: The diagnostic performance of ISAC was adequate for hazelnut and walnut allergy but not for peanut allergy. sIgE sensitivity against Ara h 9 in ISAC needs to be improved.
Abstract:
Syntax denotes a rule system that allows one to predict the sequencing of communication signals. Despite its significance for both human speech processing and animal acoustic communication, the representation of syntactic structure in the mammalian brain has not been studied electrophysiologically at the single-unit level. In the search for a neuronal correlate for syntax, we used playback of natural and temporally destructured complex species-specific communication calls—so-called composites—while recording extracellularly from neurons in a physiologically well defined area (the FM–FM area) of the mustached bat’s auditory cortex. Even though this area is known to be involved in the processing of target distance information for echolocation, we found that units in the FM–FM area were highly responsive to composites. The finding that neuronal responses were strongly affected by manipulation in the time domain of the natural composite structure lends support to the hypothesis that syntax processing in mammals occurs at least at the level of the nonprimary auditory cortex.
Abstract:
We report characterization of a human T-cell lymphotropic virus type II (HTLV-II) isolated from an interleukin 2-dependent CD8 T-cell line derived from peripheral blood mononuclear cells of a healthy, HTLV-II-seropositive female Bakola Pygmy, aged 59, living in a remote equatorial forest area in south Cameroon. This HTLV-II isolate, designated PYGCAM-1, reacted in an indirect immunofluorescence assay with HTLV-II and HTLV-I polyclonal antibodies and with an HTLV-I/II gp46 monoclonal antibody but not with HTLV-I gag p19 or p24 monoclonal antibodies. The cell line produced HTLV-I/II p24 core antigen and retroviral particles. The entire env gene (1462 bp) and most of the long terminal repeat (715 bp) of the PYGCAM-1 provirus were amplified by the polymerase chain reaction using HTLV-II-specific primers. Comparison with the long terminal repeat and envelope sequences of prototype HTLV-II strains indicated that PYGCAM-1 belongs to the subtype B group, as it has only 0.5-2% nucleotide divergence from HTLV-II B strains. The finding of antibodies to HTLV-II in sera taken from the father of the woman in 1984 and from three unrelated members of the same population strongly suggests that PYGCAM-1 is a genuine HTLV-II that has been present in this isolated population for a long time. The low genetic divergence of this African isolate from American isolates raises questions about the genetic variability over time and the origin and dissemination of HTLV-II, hitherto considered to be predominantly a New World virus.
Abstract:
"The purpose of this thesis is to present a history of stained glass used in the Denver area in the past hundred years. It has been necessary to present a sufficient background on the history of the art since its heritage in the twelfth century so that the evolution can be properly understood. Different movements, first in Europe and later in America and Europe, have influenced the art and brought it to its present state in the Denver area"
Abstract:
We carry out a seismic noise study based on array measurements at three sites in the Málaga basin, South Spain, for the further estimation of shear wave velocity profiles. For this purpose, we use both the H/V method and the f–k technique in order to characterize the different materials present in the zone, i.e., Quaternary sediments and Pliocene sedimentary rocks above the bedrock. The H/V analysis shows frequency peaks going from 1 Hz, in areas close to the border of the basin, to 0.3 Hz in places located toward the center of the formation. The f–k analysis allows us to obtain the dispersion curves associated with each site and, subsequently, to estimate the Vs profiles by inversion of the respective group velocities. In this way, the basin basement can be characterized by S-wave velocities greater than 2000 m/s. Regarding the basin fill, it is divided into three layers defined by different wave velocity intervals. The shallowest one is characterized by velocities ranging from 150 to 400 m/s and comprises the Quaternary sediments, while velocities going from 550–700 to 1200–1600 m/s characterize the two underlying layers composed of Pliocene sediments. Finally, the information provided by the three Vs profiles is integrated in a 2D cross-section of the basin to have a spatial view of its sedimentary structure. The results obtained here, in addition to providing useful information about the infill of the basin near the metropolitan area of Málaga, will be very helpful for future seismic zonation studies in the region.
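The H/V computation described above can be sketched as follows on synthetic data. This is a bare-bones illustration (the function names and the synthetic records are ours, not the study's processing chain); real H/V studies average smoothed spectra over many noise windows:

```python
# Minimal sketch of the horizontal-to-vertical (H/V) spectral ratio on
# synthetic three-component noise. All names and data here are illustrative.

import numpy as np

def hv_ratio(north, east, vertical, fs):
    """H/V spectral ratio from three noise components sampled at fs Hz."""
    freqs = np.fft.rfftfreq(len(vertical), d=1.0 / fs)
    n_spec = np.abs(np.fft.rfft(north))
    e_spec = np.abs(np.fft.rfft(east))
    v_spec = np.abs(np.fft.rfft(vertical))
    horizontal = np.sqrt((n_spec**2 + e_spec**2) / 2.0)  # quadratic mean of N and E
    return freqs[1:], horizontal[1:] / v_spec[1:]        # drop the DC term

# Synthetic 60 s record at 50 Hz: the horizontal components carry extra
# energy at 1 Hz, so the H/V curve shows a pronounced peak near 1 Hz,
# mimicking a site resonance like those reported above.
fs = 50.0
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(0)
vert = rng.standard_normal(t.size)
resonance = 5.0 * np.sin(2.0 * np.pi * 1.0 * t)
north = vert + resonance + 0.5 * rng.standard_normal(t.size)
east = vert + resonance + 0.5 * rng.standard_normal(t.size)
freqs, hv = hv_ratio(north, east, vert, fs)
```

The frequency of such a peak, combined with a velocity estimate, constrains sediment thickness, which is why H/V peaks shifting from 1 Hz to 0.3 Hz toward the basin center indicate a deepening fill.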
Abstract:
The distribution and composition of Amphipoda assemblages were analysed off the coasts of Alicante (Spain, Western Mediterranean), a disturbed area affected by several co-occurring anthropogenic impacts. Although differences among sampled stations were mainly related to natural parameters, anthropogenic activities were linked with changes in amphipod assemblages. Expansion of the Port of Alicante, a sewage outfall and a high salinity brine discharge could be causing the disappearance of amphipods at stations closer to these disturbances. However, the completion of port enlargement works and mitigatory dilution of the brine discharge have led to the recovery of the amphipod assemblage. Among the natural parameters, depth determines the distribution of some of the species. While Siphonoecetes sabatieri was abundant at shallow stations, Ampelisca spp., Photis longipes, Pseudolirius kroyeri, Apherusa chiereghinii and Phtisica marina were more abundant at deeper stations. Grain size and percentage of organic matter also influenced amphipod distribution, resulting in changes in species composition and in the relative percentages of different trophic groups. Species such as Ampelisca brevicornis, Perioculodes longimanus, Urothoe hesperiae and Urothoe elegans were more abundant at stations with a high content of fine sand. Carnivorous species, mainly of the family Oedicerotidae, were more abundant at those stations with a low organic matter content, while detritivorous species were more abundant at stations with a higher mud content. Among 62 identified species, three were reported for the first time from the Spanish Mediterranean coast, two species were recorded for the second time and a new species of Siphonoecetes was found, Siphonoecetes (Centraloecetes) bulborostrum. These results confirm the need for further data on amphipods from the Mediterranean Spanish coast.