945 results for Graph databases
Abstract:
We present an efficient graph-based algorithm for quantifying the similarity of household-level energy use profiles, using a notion of similarity that allows for small time shifts when comparing profiles. Experimental results on a real smart meter data set demonstrate that in cases of practical interest our technique is far faster than the existing method for computing the same similarity measure. Having a fast algorithm for measuring profile similarity improves the efficiency of tasks such as clustering of customers and cross-validation of forecasting methods using historical data. Furthermore, we apply a generalisation of our algorithm to produce substantially better household-level energy use forecasts from historical smart meter data.
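The abstract does not specify the paper's similarity measure. As a rough, hypothetical illustration of comparing profiles while tolerating small time shifts, here is a minimal band-constrained dynamic-time-warping sketch; the window parameter `w` and the toy profiles are assumptions, not the authors' algorithm:

```python
# Sketch: shift-tolerant profile distance via a Sakoe-Chiba-banded DTW.
# Illustration of the general idea only, not the paper's algorithm.

def banded_dtw(a, b, w=2):
    """Distance between profiles a, b, allowing alignments shifted by <= w steps."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Two half-hourly load profiles that differ only by a one-slot shift
p1 = [0.2, 0.2, 1.5, 1.6, 0.3, 0.2]
p2 = [0.2, 1.5, 1.6, 0.3, 0.2, 0.2]
print(banded_dtw(p1, p2, w=1))  # 0.0: the one-slot shift is absorbed by the band
```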
Abstract:
A model based on graph isomorphisms is used to formalize software evolution. Step by step, we narrow the search space through an informed selection of attributes based on the current state of the art in software engineering, and generate a seed solution. We then traverse the resulting space using graph isomorphisms and other set operations over the vertex sets. The new solutions preserve the desired attributes. The goal of defining an isomorphism-based search mechanism is to construct predictors of evolution that can facilitate the automation of the 'software factory' paradigm. The model allows for automation via software tools implementing the concepts.
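As a minimal, hypothetical illustration of the operations the model relies on (checking graph isomorphism and set operations over vertex sets), a small networkx sketch; the toy graphs and the choice of networkx are assumptions, not the paper's tooling:

```python
# Sketch: checking whether two design graphs are isomorphic with networkx.
import networkx as nx

g1 = nx.Graph([("a", "b"), ("b", "c"), ("c", "a")])   # a triangle
g2 = nx.Graph([(1, 2), (2, 3), (3, 1)])               # same shape, relabelled

print(nx.is_isomorphic(g1, g2))        # True: structure preserved
print(set(g1.nodes) & set(g2.nodes))   # set operations over vertex sets
```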
Abstract:
In this paper, Bond Graphs are employed to develop a novel mathematical model of conventional switched-mode DC-DC converters valid for both continuous and discontinuous conduction modes. A unique-causality bond graph model of the hybrid system is suggested, in which the switch and the diode are represented by a Modulated Transformer with a binary input and a resistor with fixed conductance causality. The operation of the diode is controlled using an if-then function within the model. The extracted hybrid model is implemented on Boost and Buck converters whose operation changes from CCM to DCM and back to CCM. The vector fields of the models show validity over a wide operating area, and comparison with simulations of the converters in PSPICE reveals the high accuracy of the proposed model, with the Normalised Root Mean Square Error and the Maximum Absolute Error remaining adequately low. The model is also experimentally tested on a Buck topology.
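As a rough illustration of the if-then diode logic described above, a minimal simulation sketch of a Buck converter that moves between CCM and DCM; the component values, explicit-Euler integration and textbook state equations are generic assumptions, not the paper's bond graph model:

```python
# Sketch: buck converter with an if-then diode rule, so one set of state
# equations covers both CCM and DCM. Generic textbook illustration.
Vin, L, C, R = 12.0, 100e-6, 47e-6, 50.0      # assumed component values
fs, duty, dt = 50e3, 0.3, 1e-7                # switching freq, duty cycle, step

iL, vC, t = 0.0, 0.0, 0.0
while t < 5e-3:
    switch_on = (t * fs) % 1.0 < duty
    if switch_on:
        diL = (Vin - vC) / L
    elif iL > 0.0:                 # diode conducts while inductor current flows
        diL = -vC / L
    else:                          # DCM: diode blocks, inductor current clamped
        diL = 0.0
        iL = 0.0
    dvC = (iL - vC / R) / C
    iL += diL * dt
    vC += dvC * dt
    t += dt

# With this light load the converter runs in DCM, so vC exceeds duty * Vin.
print(f"output ~ {vC:.2f} V (ideal CCM value would be {duty * Vin:.2f} V)")
```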
Abstract:
This article is concerned with the risks associated with the monopolisation of information that is available from a single source only. Although there is a longstanding consensus that sole-source databases should not receive protection under the EU Database Directive, and there are legislative provisions to ensure that lawful users have access to a database's contents, Ryanair v PR Aviation challenges this assumption by affirming that the use of non-protected databases can be restricted by contract. Owners of non-protected databases can contractually exclude lawful users from taking the benefit of statutorily permitted uses, because such databases are not covered by the legislation that declares this kind of contract null and void. We argue that this judgment is not consistent with the legislative history and can have a profound impact on the functioning of the digital single market, where new information services, such as meta-search engines or price-comparison websites, base their operation on the systematic extraction and re-utilisation of materials available from online sources. This is an issue that the Commission should address in a forthcoming evaluation of the Database Directive.
Abstract:
The present work describes a new tool that helps bidders improve their competitive bidding strategies. It is an easy-to-use graphical tool that makes more complex decision analysis techniques usable in the field of competitive bidding. The graphical tool described here moves away from previous bidding models, which attempt to describe the result of an auction or tender process by studying each possible bidder with probability density functions. As an illustration, the tool is applied to three practical cases. Theoretical and practical conclusions on the great potential breadth of application of the tool are also presented.
Abstract:
Spectroscopic catalogues, such as GEISA and HITRAN, do not yet include information on the water vapour continuum that pervades the visible, infrared and microwave spectral regions. This is partly because, in some spectral regions, there are rather few laboratory measurements in conditions close to those in the Earth's atmosphere; hence understanding of the characteristics of the continuum absorption is still emerging. This is particularly so in the near-infrared and visible, where there has been renewed interest and activity in recent years. In this paper we present a critical review focusing on recent laboratory measurements in two near-infrared window regions (centred on 4700 and 6300 cm⁻¹) and include reference to the window centred on 2600 cm⁻¹, where more measurements have been reported. The rather few available measurements have used Fourier transform spectroscopy (FTS), cavity ring-down spectroscopy, optical-feedback cavity-enhanced laser spectroscopy and, in very narrow regions, calorimetric interferometry. These systems have different advantages and disadvantages. FTS can measure the continuum across both these and neighbouring windows; by contrast, the cavity laser techniques are limited to fewer wavenumbers, but have a much higher inherent sensitivity. The available results present a diverse view of the characteristics of continuum absorption, with differences in continuum strength exceeding a factor of 10 in the cores of these windows. In individual windows, the temperature dependence of the water vapour self-continuum differs significantly in the few sets of measurements that allow an analysis. The available data also indicate that the temperature dependence differs significantly between different near-infrared windows. These pioneering measurements provide an impetus for further measurements. Improvements and/or extensions of existing techniques would aid progress towards a full characterisation of the continuum; as an example, we report pilot measurements of the water vapour self-continuum using a supercontinuum laser source coupled to an FTS. Such improvements, as well as additional measurements and analyses in other laboratories, would enable the inclusion of the water vapour continuum in future spectroscopic databases, and therefore allow more reliable forward modelling of the radiative properties of the atmosphere. It would also allow a more confident assessment of different theoretical descriptions of the underlying cause or causes of continuum absorption.
Abstract:
Motivation: DNA assembly programs classically perform an all-against-all comparison of reads to identify overlaps, followed by a multiple sequence alignment and generation of a consensus sequence. If the aim is to assemble a particular segment, instead of a whole genome or transcriptome, a target-specific assembly is a more sensible approach. GenSeed is a Perl program that implements a seed-driven recursive assembly consisting of cycles comprising a similarity search, read selection and assembly. The iterative process results in a progressive extension of the original seed sequence. GenSeed was tested and validated on many applications, including the reconstruction of nuclear genes or segments, full-length transcripts, and extrachromosomal genomes. The robustness of the method was confirmed through the use of a variety of DNA and protein seeds, including short sequences derived from SAGE and proteome projects.
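A minimal sketch of the seed-driven cycle (similarity search, read selection, assembly, extension); the k-mer read selection and the greedy overlap step below are toy stand-ins for the external search and assembly tools the real Perl program invokes:

```python
# Sketch of a seed-driven recursive assembly loop in the style of GenSeed.
# The similarity search and assembly steps are toy stand-ins for the
# external tools (e.g. a BLAST-style search and an assembler).

def extend_seed(seed, reads, max_cycles=10, k=15):
    contig = seed
    for _ in range(max_cycles):
        # 1. similarity search: select reads sharing a k-mer with the contig
        kmers = {contig[i:i + k] for i in range(len(contig) - k + 1)}
        selected = [r for r in reads
                    if any(r[i:i + k] in kmers for i in range(len(r) - k + 1))]
        # 2. "assembly": naive greedy extension by suffix-prefix overlap
        extended = contig
        for r in selected:
            for ov in range(min(len(extended), len(r)) - 1, k - 1, -1):
                if extended.endswith(r[:ov]):
                    extended += r[ov:]
                    break
        # 3. stop when the contig no longer grows
        if len(extended) == len(contig):
            break
        contig = extended
    return contig

reads = ["GATTACAGGT", "CAGGTTTCCA", "TTCCAGGGAT"]
print(extend_seed("AAGATTACA", reads, k=5))  # seed grows across three cycles
```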
Abstract:
Chagas disease is nowadays the most serious parasitic health problem. The disease is caused by Trypanosoma cruzi. The great number of deaths and the insufficient effectiveness of drugs against this parasite have alarmed the scientific community worldwide. In an attempt to overcome this problem, a model for the design and prediction of new antitrypanosomal agents was obtained. It uses a mixed approach, containing simple descriptors based on fragments and topological substructural molecular design descriptors. The data set was made up of 188 compounds, 99 of them with characterized antitrypanosomal activity and 88 belonging to other pharmaceutical categories. The model showed sensitivity, specificity and accuracy values above 85%. Quantitative fragmental contributions were also calculated. Then, to confirm the quality of the model, 15 structures of molecules tested as antitrypanosomal compounds (not included in this study) were predicted, taking into account the information on the calculated fragmental contributions mentioned above. The model showed an accuracy of 100%, which means that the "in silico" methodology developed by our team is promising for the rational design of new antitrypanosomal drugs. (C) 2009 Wiley Periodicals, Inc. J Comput Chem 31: 882-894, 2010
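Sensitivity, specificity and accuracy here are the standard confusion-matrix statistics; a minimal sketch of their computation (the toy labels are invented for illustration):

```python
# Sketch: the three statistics quoted for the classification models.
def classification_stats(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn)          # active compounds correctly flagged
    specificity = tn / (tn + fp)          # inactive compounds correctly passed
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy

# toy example: 1 = antitrypanosomal, 0 = other pharmaceutical category
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
print(classification_stats(y_true, y_pred))  # (0.75, 0.75, 0.75)
```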
Abstract:
The increasing resistance of Mycobacterium tuberculosis to the existing drugs has alarmed the worldwide scientific community. In an attempt to overcome this problem, two models for the design and prediction of new antituberculosis agents were obtained. The first used a mixed approach, containing descriptors based on fragments and topological substructural molecular design (TOPS-MODE) descriptors. The other model used a combination of two-dimensional (2D) and three-dimensional (3D) descriptors. A data set of 167 compounds with great structural variability, 72 of them antituberculosis agents and 95 compounds belonging to other pharmaceutical categories, was analyzed. The first model showed sensitivity, specificity, and accuracy values above 80%, and the second one showed values higher than 75% for these statistical indices. Subsequently, 12 structures of imidazoles not included in this study were designed, taking into account the two models. In both cases accuracy was 100%, showing that the in silico methodology we developed is promising for the rational design of antituberculosis drugs.
Abstract:
The problem of scheduling a parallel program, represented by a weighted directed acyclic graph (DAG), onto a set of homogeneous processors so as to minimize the completion time of the program has been extensively studied as an academic optimization problem arising when optimizing the execution time of parallel algorithms on parallel computers. In this paper, we propose an application of Ant Colony Optimization (ACO) to the multiprocessor scheduling problem (MPSP). In the MPSP, no preemption is allowed and each operation demands a setup time on the machines; the problem seeks to compose a schedule that minimizes the total completion time. Since exact solution methods are not feasible for most such problems, we rely on heuristics to find solutions. In this novel heuristic search approach to multiprocessor scheduling based on the ACO algorithm, a collection of agents cooperates to effectively explore the search space. A computational experiment is conducted on a suite of benchmark applications. Comparing the results obtained by our algorithm with those of previous heuristic algorithms shows that the ACO algorithm exhibits competitive performance with a small error ratio.
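A minimal sketch of the general ACO idea for DAG scheduling on homogeneous processors; the pheromone update rule, parameters and toy task graph are generic choices, not the paper's formulation:

```python
# Sketch: ant colony optimization for DAG scheduling on homogeneous processors.
import random

# toy DAG: task -> (duration, set of predecessors)
tasks = {"A": (2, set()), "B": (3, {"A"}), "C": (2, {"A"}),
         "D": (4, {"B", "C"}), "E": (1, {"C"})}
n_procs, n_ants, n_iters, rho = 2, 10, 50, 0.1
tau = {(t, p): 1.0 for t in tasks for p in range(n_procs)}  # pheromone trails

def build_schedule():
    """One ant: repeatedly pick a (ready task, processor) pair, biased by pheromone."""
    done, finish, proc_free, placement = {}, {}, [0.0] * n_procs, []
    while len(done) < len(tasks):
        ready = [t for t in tasks
                 if t not in done and tasks[t][1].issubset(done)]
        choices = [(t, p) for t in ready for p in range(n_procs)]
        weights = [tau[c] for c in choices]
        t, p = random.choices(choices, weights)[0]
        start = max(proc_free[p],
                    max((finish[u] for u in tasks[t][1]), default=0.0))
        finish[t] = start + tasks[t][0]
        proc_free[p] = finish[t]
        done[t] = p
        placement.append((t, p))
    return placement, max(finish.values())

best, best_len = None, float("inf")
for _ in range(n_iters):
    for _ in range(n_ants):
        placement, makespan = build_schedule()
        if makespan < best_len:
            best, best_len = placement, makespan
    for c in tau:                       # evaporate, then reinforce the best tour
        tau[c] *= (1.0 - rho)
    for c in best:
        tau[c] += 1.0 / best_len
print(best_len, best)
```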
Abstract:
The traveling salesman problem looks very simple but is an important combinatorial problem. In this thesis I try to find the shortest tour in which each city is visited exactly once before returning to the starting city, using a multilevel graph partitioning approach. The traveling salesman problem is difficult in itself, belonging to the NP-complete problems, and multilevel graph partitioning also belongs to the NP-complete problems. I solve the problem using the k-means partitioning algorithm, which divides it into multiple partitions; each partition is solved separately, and its solution is used to improve the overall tour by applying the Lin-Kernighan algorithm. In this way I obtained optimal solutions, which shows that solving the traveling salesman problem through a graph partitioning scheme works well for this NP-complete problem and allows this intractable problem to be solved within a few minutes.
Keywords: Graph Partitioning Scheme, Traveling Salesman Problem.
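A minimal sketch of the partition-then-stitch scheme: k-means clustering of city coordinates, a nearest-neighbour tour inside each cluster, and naive concatenation of the cluster tours. The Lin-Kernighan refinement pass the thesis uses to improve the stitched tour is omitted here:

```python
# Sketch: TSP via partitioning -- cluster cities, tour each cluster, stitch.
import math, random

def kmeans(points, k, iters=20):
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for pt in points:
            i = min(range(k), key=lambda c: math.dist(pt, centers[c]))
            clusters[i].append(pt)
        centers = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return [cl for cl in clusters if cl]

def nn_tour(cities):
    """Greedy nearest-neighbour tour within one partition."""
    tour, rest = [cities[0]], set(cities[1:])
    while rest:
        nxt = min(rest, key=lambda c: math.dist(tour[-1], c))
        tour.append(nxt)
        rest.remove(nxt)
    return tour

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(60)]
tour = [c for cluster in kmeans(cities, 4) for c in nn_tour(cluster)]
length = sum(math.dist(tour[i], tour[(i + 1) % len(tour)])
             for i in range(len(tour)))
print(f"stitched tour length: {length:.2f}")
```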