881 results for grid databases
Abstract:
Smart grid research has tended to be compartmentalised, with notable contributions from economics, electrical engineering and science and technology studies. However, there is an acknowledged and growing need for an integrated systems approach to the evaluation of smart grid initiatives. The capacity to simulate and explore smart grid possibilities on various scales is key to such an integrated approach but existing models – even if multidisciplinary – tend to have a limited focus. This paper describes an innovative and flexible framework that has been developed to facilitate the simulation of various smart grid scenarios and the interconnected social, technical and economic networks from a complex systems perspective. The architecture is described and related to realised examples of its use, both to model the electricity system as it is today and to model futures that have been envisioned in the literature. Potential future applications of the framework are explored, along with its utility as an analytic and decision support tool for smart grid stakeholders.
Abstract:
Land cover data derived from satellites are commonly used to prescribe inputs to models of the land surface. Since such data inevitably contain errors, quantifying how uncertainties in the data affect a model's output is important. To do so, a spatial distribution of possible land cover values is required to propagate through the model's simulation. However, at large scales, such as those required for climate models, such spatial modelling can be difficult. Also, computer models often require land cover proportions at sites larger than the original map scale as inputs, and it is the uncertainty in these proportions that this article discusses. This paper describes a Monte Carlo sampling scheme that generates realisations of land cover proportions from the posterior distribution implied by a Bayesian analysis combining spatial information in the land cover map and its associated confusion matrix. The technique is computationally simple and has been applied previously to the Land Cover Map 2000 for the region of England and Wales. This article demonstrates the ability of the technique to scale up to large (global) satellite-derived land cover maps and reports its application to the GlobCover 2009 data product. The results show that, in general, the GlobCover data possesses only small biases, with the largest belonging to non-vegetated surfaces. In vegetated surfaces, the most prominent area of uncertainty is Southern Africa, which represents a complex heterogeneous landscape. It is also clear from this study that greater resources need to be devoted to the construction of comprehensive confusion matrices.
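To make the idea concrete, the following toy sketch (hypothetical 3-class data; the real scheme also exploits spatial information in the map, which is omitted here) resamples a plausible true class for each mapped pixel from a confusion-matrix posterior and aggregates the draws into realisations of a site's land cover proportions:

import numpy as np

# Hypothetical confusion matrix: entry [i, j] counts validation pixels of
# true class i that the map labels as class j.
confusion = np.array([[80, 15, 5],
                      [10, 70, 20],
                      [5, 10, 85]], dtype=float)

# Posterior P(true class | mapped class), assuming a uniform class prior:
# normalise each column of the confusion matrix.
posterior = confusion / confusion.sum(axis=0, keepdims=True)

rng = np.random.default_rng(42)
mapped = rng.integers(0, 3, size=2_000)  # mapped class of each pixel at one site

def sample_proportions(mapped, posterior, n_realisations=100):
    """Draw Monte Carlo realisations of the site's true class proportions."""
    n_classes = posterior.shape[0]
    out = np.empty((n_realisations, n_classes))
    for r in range(n_realisations):
        true = np.array([rng.choice(n_classes, p=posterior[:, m]) for m in mapped])
        out[r] = np.bincount(true, minlength=n_classes) / len(true)
    return out

props = sample_proportions(mapped, posterior)
print(props.mean(axis=0), props.std(axis=0))  # proportions and their spread

The spread of the sampled proportions is exactly the uncertainty that such a scheme propagates through the land surface model.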
Abstract:
Spectroscopic catalogues, such as GEISA and HITRAN, do not yet include information on the water vapour continuum that pervades the visible, infrared and microwave spectral regions. This is partly because, in some spectral regions, there are rather few laboratory measurements in conditions close to those in the Earth's atmosphere; hence understanding of the characteristics of the continuum absorption is still emerging. This is particularly so in the near-infrared and visible, where there has been renewed interest and activity in recent years. In this paper we present a critical review focusing on recent laboratory measurements in two near-infrared window regions (centred on 4700 and 6300 cm⁻¹) and include reference to the window centred on 2600 cm⁻¹, where more measurements have been reported. The rather few available measurements have used Fourier transform spectroscopy (FTS), cavity ring-down spectroscopy, optical-feedback cavity-enhanced laser spectroscopy and, in very narrow regions, calorimetric interferometry. These systems have different advantages and disadvantages. FTS can measure the continuum across both these and neighbouring windows; by contrast, the cavity laser techniques are limited to fewer wavenumbers but have a much higher inherent sensitivity. The available results present a diverse view of the characteristics of continuum absorption, with differences in continuum strength exceeding a factor of 10 in the cores of these windows. In individual windows, the temperature dependence of the water vapour self-continuum differs significantly in the few sets of measurements that allow such an analysis. The available data also indicate that the temperature dependence differs significantly between different near-infrared windows. These pioneering measurements provide an impetus for further measurements. Improvements and/or extensions of existing techniques would aid progress towards a full characterisation of the continuum; as an example, we report pilot measurements of the water vapour self-continuum using a supercontinuum laser source coupled to an FTS. Such improvements, as well as additional measurements and analyses in other laboratories, would enable the inclusion of the water vapour continuum in future spectroscopic databases, and therefore allow more reliable forward modelling of the radiative properties of the atmosphere. It would also allow a more confident assessment of different theoretical descriptions of the underlying cause or causes of continuum absorption.
Abstract:
Motivation: DNA assembly programs classically perform an all-against-all comparison of reads to identify overlaps, followed by a multiple sequence alignment and generation of a consensus sequence. If the aim is to assemble a particular segment, instead of a whole genome or transcriptome, a target-specific assembly is a more sensible approach. GenSeed is a Perl program that implements a seed-driven recursive assembly consisting of cycles comprising a similarity search, read selection and assembly. The iterative process results in a progressive extension of the original seed sequence. GenSeed was tested and validated on many applications, including the reconstruction of nuclear genes or segments, full-length transcripts, and extrachromosomal genomes. The robustness of the method was confirmed through the use of a variety of DNA and protein seeds, including short sequences derived from SAGE and proteome projects.
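As an illustration of the seed-driven cycle (similarity search, read selection, assembly, extension), here is a deliberately naive sketch; GenSeed itself delegates similarity search and assembly to external programs, for which this toy string matcher merely stands in:

def extend_seed(seed, reads, k=5, max_cycles=50):
    contig = seed
    for _ in range(max_cycles):
        # Similarity search + read selection: keep reads sharing a k-mer
        # with either end of the current contig.
        ends = {contig[:k], contig[-k:]}
        selected = [r for r in reads if any(e in r for e in ends)]
        extended = False
        # "Assembly": naively merge reads that overlap a contig end exactly.
        for r in selected:
            if r.startswith(contig[-k:]) and len(r) > k:
                contig += r[k:]               # extend to the right
                extended = True
            elif r.endswith(contig[:k]) and len(r) > k:
                contig = r[:-k] + contig      # extend to the left
                extended = True
        if not extended:                      # no further progress: stop
            break
    return contig

reads = ["GATTACAGATT", "AGATTCCGTA", "CCGTAGGCAT"]
print(extend_seed("GATTACAGATT", reads))      # GATTACAGATTCCGTAGGCAT

Each cycle can recruit reads that matched nothing in the previous cycle, which is what progressively walks the contig outwards from the original seed.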
Abstract:
In 2006 the Route load balancing algorithm was proposed and compared to other techniques for optimizing process allocation in grid environments. This algorithm schedules tasks of parallel applications considering computer neighborhoods (where distance is defined by network latency). Route presents good results for large environments, although there are cases where the neighbors have neither enough computational capacity nor a communication system capable of serving the application. In those situations Route migrates tasks until they stabilize in a grid area with enough resources. This migration may take a long time, which reduces overall performance. In order to shorten this stabilization time, this paper proposes RouteGA (Route with Genetic Algorithm support), which considers historical information on parallel application behavior, as well as computer capacities and loads, to optimize the scheduling. This information is extracted by monitors and summarized in a knowledge base used to quantify the resource occupation of tasks. This information then parameterizes a genetic algorithm responsible for optimizing the task allocation. Results confirm that RouteGA outperforms the load balancing carried out by the original Route, which had previously outperformed other scheduling algorithms from the literature.
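A minimal sketch of such a genetic-algorithm stage, with hypothetical monitoring data (per-task cost from the knowledge base, per-machine capacity and current load); each chromosome maps tasks to machines and fitness is the negated makespan:

import random

task_cost    = [4.0, 2.0, 7.0, 1.0, 3.0, 5.0]  # historical CPU demand per task
capacity     = [1.0, 2.0, 1.5]                 # relative speed of each machine
current_load = [0.5, 1.0, 0.2]                 # time of work already queued

def fitness(assign):
    """Negated makespan: finish time of the busiest machine (higher is better)."""
    t = list(current_load)
    for task, m in enumerate(assign):
        t[m] += task_cost[task] / capacity[m]
    return -max(t)

def evolve(pop_size=40, generations=200, p_mut=0.1):
    n_tasks, n_machines = len(task_cost), len(capacity)
    pop = [[random.randrange(n_machines) for _ in range(n_tasks)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]         # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_tasks)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:         # mutation: random reassignment
                child[random.randrange(n_tasks)] = random.randrange(n_machines)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, -fitness(best))                     # allocation and its makespan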
Abstract:
The aim of task scheduling is to minimize the makespan of applications, making the best possible use of shared resources. Applications have requirements which call for customized environments for their execution. One way to provide such environments is to use virtualization on demand. This paper presents two schedulers based on integer linear programming which schedule virtual machines (VMs) in grid resources and tasks on these VMs. The schedulers differ from previous work by the joint scheduling of tasks and VMs and by considering the impact of the available bandwidth on the quality of the schedule. Experiments show the efficacy of the schedulers in scenarios with different network configurations.
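The flavour of such a formulation can be sketched as a small integer program (hypothetical data, and a much-simplified stand-in for the schedulers in the paper), written here with the PuLP library; the transfer term is where available bandwidth enters the schedule quality:

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

tasks, vms, hosts = range(4), range(2), range(2)
exec_time = [[3, 5], [2, 4], [6, 6], [1, 2]]  # running time of task t on VM v
transfer  = [4, 1]            # data transfer time per host, i.e. its bandwidth

prob = LpProblem("joint_vm_task_schedule", LpMinimize)
x = LpVariable.dicts("x", (tasks, vms), cat=LpBinary)   # task -> VM placement
y = LpVariable.dicts("y", (vms, hosts), cat=LpBinary)   # VM -> host placement
makespan = LpVariable("makespan", lowBound=0)

for t in tasks:               # every task runs on exactly one VM
    prob += lpSum(x[t][v] for v in vms) == 1
for v in vms:                 # every VM is instantiated on exactly one host
    prob += lpSum(y[v][h] for h in hosts) == 1
for v in vms:                 # each VM's work plus transfers bounds the makespan
    prob += (lpSum(exec_time[t][v] * x[t][v] for t in tasks)
             + lpSum(transfer[h] * y[v][h] for h in hosts)) <= makespan

prob += makespan              # objective: minimise the makespan
prob.solve()
print(makespan.value(), [(t, v) for t in tasks for v in vms if x[t][v].value() == 1])

Placing tasks and VMs in a single optimisation lets the solver trade a faster VM against a better-connected host, which is the advantage the abstract claims over scheduling the two separately.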
Abstract:
The InteGrade project is a multi-university effort to build a novel grid computing middleware based on the opportunistic use of resources belonging to user workstations. The InteGrade middleware currently enables the execution of sequential, bag-of-tasks, and parallel applications that follow the BSP or the MPI programming models. This article presents the lessons learned over the last five years of the InteGrade development and describes the solutions achieved concerning the support for robust application execution. The contributions cover the related fields of application scheduling, execution management, and fault tolerance. We present our solutions, describing their implementation principles and evaluation through the analysis of several experimental results.
Abstract:
Selective Estrogen Receptor Modulators (SERMs) have been developed, but their selectivity towards the receptor subtypes (ERα or ERβ) is not well understood. Based on the three-dimensional structural properties of the ligand binding domains, a model that takes this aspect into account was developed via molecular interaction fields and consensus principal component analysis (GRID/CPCA).
Abstract:
The work presented in this thesis concerns the dimensioning of an Energy Storage System (ESS) to be used as an energy buffer for a grid-connected PV plant. This ESS should help manage the PV plant so that it injects electricity into the grid according to the requirements of the grid System Operator. The goal is a final production of no less than 1300 kWh/kWp with a maximum ESS budget of 0.9 €/Wp. The PV plant will be sited on Martinique Island and connected to the main grid. This grid is small, so perturbations in PV generation due to clouds are no longer negligible. A software simulation tool, incorporating a model of the PV plant production, the ESS and the required pattern of electricity injection into the grid, has been developed in MS Excel. This tool has been used to optimize the relevant parameters defining the ESS so that the feed-in of electricity into the grid can be controlled to fulfil the conditions given by the System Operator. The inputs to this simulation tool are, besides the conditions given by the System Operator on the allowed injection pattern, production data from a similar PV plant in a nearby location and the variables defining the ESS. The PV production data are from a site with climate and weather conditions similar to those on Martinique Island, and hence give information on short-term insolation variations as well as the expected annual electricity production. The ESS capacity and the injected electric energy are the main figures to compare in an economic study of the whole plant. Hence, Net Present Value, Benefit-to-Cost and payback period analyses are carried out as functions of the ESS capacity. The conclusion of this work is that it is possible to obtain the requested injection pattern by using an ESS, and that the ESS can be designed within an acceptable budget. The capacity of the ESS to pair with the PV system depends on the priorities among the final output characteristics, and on which economic parameter is chosen as a priority.
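A toy version of that economic comparison (hypothetical figures throughout; in the thesis the revenue follows from the MS Excel simulation of the injection pattern, replaced here by a simple saturating curve):

import math

def annual_revenue(capacity_kwh, ceiling_eur=40_000.0, tau_kwh=400.0):
    # Toy assumption: revenue saturates once the buffer smooths most
    # cloud-induced deviations from the allowed injection pattern.
    return ceiling_eur * (1 - math.exp(-capacity_kwh / tau_kwh))

def economics(capacity_kwh, cost_eur_per_kwh=300.0, rate=0.05, years=20):
    invest = capacity_kwh * cost_eur_per_kwh   # ESS investment cost
    rev = annual_revenue(capacity_kwh)
    discounted = sum(rev / (1 + rate) ** t for t in range(1, years + 1))
    npv = discounted - invest                  # Net Present Value
    bcr = discounted / invest                  # Benefit-to-Cost ratio
    payback = invest / rev                     # simple payback period, years
    return npv, bcr, payback

for cap in (200, 500, 1000):                   # candidate ESS capacities in kWh
    npv, bcr, payback = economics(cap)
    print(f"{cap} kWh: NPV {npv:,.0f} EUR, B/C {bcr:.2f}, payback {payback:.1f} y")

Sweeping the capacity like this reproduces the trade-off the thesis studies: a larger buffer tracks the required injection pattern better, but past some size the extra capacity no longer pays for itself.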
Abstract:
We take a broad view that ultimately Grid- or Web-services must be located via personalised, semantic-rich discovery processes. We argue that such processes must rely on the storage of arbitrary metadata about services that originates from both service providers and service users. Examples of such metadata are reliability metrics, quality of service data, or semantic service description markup. This paper presents UDDI-MT, an extension to the standard UDDI service directory approach that supports the storage of such metadata via a tunnelling technique that ties the metadata store to the original UDDI directory. We also discuss the use of a rich, graph-based RDF query language for syntactic queries on this data. Finally, we analyse the performance of each of these contributions in our implementation.
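A small sketch of the metadata store and query step, here using rdflib and SPARQL (the paper's own graph-based RDF query language may differ); the service key and metric names are hypothetical:

from rdflib import Graph, Literal, Namespace, URIRef

MT = Namespace("http://example.org/uddi-mt#")
g = Graph()

# Third-party metadata attached to a service, keyed by its UDDI service key.
service = URIRef("uddi:hypothetical-service-key-0001")
g.add((service, MT.reliability, Literal(0.97)))     # user-reported metric
g.add((service, MT.meanResponseMs, Literal(120)))   # quality-of-service datum
g.add((service, MT.description, Literal("stock quote service")))

# Discovery query: services whose reported reliability exceeds a threshold.
q = """
PREFIX mt: <http://example.org/uddi-mt#>
SELECT ?s ?r WHERE {
    ?s mt:reliability ?r .
    FILTER (?r > 0.9)
}
"""
for row in g.query(q):
    print(row.s, row.r)

Keeping such statements in a separate graph tied to UDDI keys mirrors the tunnelling idea in the abstract: the standard directory stays untouched while arbitrary provider- and user-supplied metadata accumulates alongside it.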