969 results for Data Flow Algorithm
Abstract:
Representation error arises from the inability of the forecast model to accurately simulate the climatology of the truth. We present a rigorous framework for understanding this error of representation. The framework shows that the lack of an inverse in the relationship between the true climatology (true attractor) and the forecast climatology (forecast attractor) leads to the error of representation. A new gain matrix for the data assimilation problem is derived that illustrates the proper approaches one may take to perform Bayesian data assimilation when the observations are of states on one attractor but the forecast model resides on another. This new data assimilation algorithm is the optimal scheme when the distributions on the true and forecast attractors are each Gaussian and a linear map exists between them. The results of this theory are illustrated in a simple Gaussian multivariate model.
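For a concrete picture, here is a minimal numpy sketch of a linear-Gaussian analysis update of this kind, where the forecast state is related to the observed true state by an assumed linear map M. All operators below (M, H, B, R) are illustrative placeholders, not the paper's actual gain:

```python
import numpy as np

# Sketch of a linear-Gaussian analysis step when observations are of
# states on the true attractor while the forecast lives on another,
# related by an ASSUMED linear map M (x_true ~ M @ x_fcst).
rng = np.random.default_rng(0)
n, m, p = 4, 4, 2                    # forecast dim, true dim, obs dim
M = rng.standard_normal((m, n))      # assumed linear attractor map
H = rng.standard_normal((p, m))      # observation operator on true states
B = np.eye(n)                        # forecast-error covariance
R = 0.1 * np.eye(p)                  # observation-error covariance

HM = H @ M                           # effective observation operator
K = B @ HM.T @ np.linalg.inv(HM @ B @ HM.T + R)  # gain in forecast space

x_f = rng.standard_normal(n)         # forecast state
y = rng.standard_normal(p)           # observation of the true state
x_a = x_f + K @ (y - HM @ x_f)       # analysis (posterior mean)
print(x_a)
```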
Abstract:
We describe the public ESO near-IR variability survey (VVV) scanning the Milky Way bulge and an adjacent section of the mid-plane where star formation activity is high. The survey will take 1929 h of observations with the 4-m VISTA telescope during 5 years (2010-2014), covering ~10^9 point sources across an area of 520 deg^2, including 33 known globular clusters and ~350 open clusters. The final product will be a deep near-IR atlas in five passbands (0.9-2.5 μm) and a catalogue of more than 10^6 variable point sources. Unlike single-epoch surveys that, in most cases, only produce 2-D maps, the VVV variable star survey will enable the construction of a 3-D map of the surveyed region using well-understood distance indicators such as RR Lyrae stars and Cepheids. It will yield important information on the ages of the populations. The observations will be combined with data from MACHO, OGLE, EROS, VST, Spitzer, HST, Chandra, INTEGRAL, WISE, Fermi LAT, XMM-Newton, GAIA and ALMA for a complete understanding of the variable sources in the inner Milky Way. This public survey will provide data available to the whole community and therefore will enable further studies of the history of the Milky Way, its globular cluster evolution, and the population census of the Galactic Bulge and center, as well as investigations of the star-forming regions in the disk. The combined variable star catalogues will have important implications for theoretical investigations of the pulsation properties of stars. © 2009 Elsevier B.V. All rights reserved.
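The 3-D mapping rests on standard-candle arithmetic via the distance modulus; the sketch below shows the computation under an assumed, illustrative absolute magnitude for an RR Lyrae star (not a value from the paper):

```python
# Distance modulus m - M = 5*log10(d / 10 pc), solved for the distance d.
def distance_pc(apparent_mag: float, absolute_mag: float) -> float:
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# a hypothetical RR Lyrae observed at m = 14.5 with assumed M = +0.5
print(distance_pc(14.5, 0.5))  # ~6300 pc, i.e. roughly toward the bulge
```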
Abstract:
One of the top ten most influential data mining algorithms, k-means, is known for being simple and scalable. However, it is sensitive to the initialization of prototypes and requires that the number of clusters be specified in advance. This paper shows that evolutionary techniques conceived to guide the application of k-means can be more computationally efficient than systematic (i.e., repetitive) approaches that try to get around the above-mentioned drawbacks by repeatedly running the algorithm from different configurations for the number of clusters and initial positions of prototypes. To do so, a modified version of a (k-means based) fast evolutionary algorithm for clustering is employed. Theoretical complexity analyses for the systematic and evolutionary algorithms under interest are provided. Computational experiments and statistical analyses of the results are presented for artificial and text mining data sets. © 2010 Elsevier B.V. All rights reserved.
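The contrast between the two strategies can be sketched as follows. This is a simplified stand-in for the paper's evolutionary algorithm, with assumed fitness and mutation operators: a population of candidate prototype sets with varying k is evolved and locally refined, instead of systematically restarting k-means over every (k, seed) pair:

```python
import numpy as np

rng = np.random.default_rng(1)
# toy data with three loose clusters
X = rng.standard_normal((300, 2)) + rng.choice([-3, 0, 3], size=(300, 1))

def kmeans_step(X, C):
    """One Lloyd iteration: assign points, recompute prototypes."""
    labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
    C = np.array([X[labels == j].mean(0) if (labels == j).any() else C[j]
                  for j in range(len(C))])
    return C, labels

def sse(X, C, labels):
    return ((X - C[labels]) ** 2).sum()

# population: candidate solutions with different k and initial prototypes
pop = [X[rng.choice(len(X), k, replace=False)] for k in rng.integers(2, 8, 10)]
for _ in range(20):                       # generations
    scored = []
    for C in pop:
        for _ in range(2):                # local refinement: 2 k-means steps
            C, labels = kmeans_step(X, C)
        scored.append((sse(X, C, labels) * len(C) ** 0.5, C))  # crude k penalty
    scored.sort(key=lambda t: t[0])
    survivors = [C for _, C in scored[:5]]
    # mutation: perturb survivors' prototypes to refill the population
    pop = survivors + [C + 0.3 * rng.standard_normal(C.shape) for C in survivors]

print(len(scored[0][1]), "clusters in the best candidate")
```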
Abstract:
Single-page applications have historically been subject to strong market forces driving fast development and deployment at the expense of quality control and changeable code, which are important factors for maintainability. In this report we develop two functionally equivalent applications using AngularJS and React and compare their maintainability as defined by ISO/IEC 9126. AngularJS and React represent two distinct approaches to web development, with AngularJS being a general framework providing rich base functionality and React a small specialized library for efficient view rendering. The quality comparison was accomplished by calculating the Maintainability Index for each application. Version control analysis was used to determine quality indicators during development and subsequent maintenance, where new functionality was added in two steps. The results show no major differences in maintainability in the initial applications. As more functionality is added, the Maintainability Index decreases faster in the AngularJS application, indicating a steeper increase in complexity compared to the React application. Source code analysis reveals that changes in data flow require significantly larger modifications of the AngularJS application due to its inherent architecture for data flow. We conclude that frameworks are useful when they facilitate development of known requirements but less so when applications and systems grow in size.
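For reference, here is a sketch of the classic three-metric Maintainability Index formula; the report may use a different variant (e.g. the 0-100 normalized form), and the inputs below are illustrative:

```python
import math

def maintainability_index(halstead_volume: float,
                          cyclomatic_complexity: float,
                          lines_of_code: int) -> float:
    """Classic (Oman & Hagemeister style) three-metric MI formula."""
    return (171
            - 5.2 * math.log(halstead_volume)
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(lines_of_code))

# A module growing in size and complexity sees its MI fall:
print(maintainability_index(1200, 10, 300))   # smaller module
print(maintainability_index(4800, 25, 900))   # after new functionality
```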
Abstract:
In this work we study the survival cure rate model proposed by Yakovlev (1993), considered in a competing risks setting. Covariates are introduced for modeling the cure rate, and we allow some covariates to have missing values. We consider only the cases in which the missing covariates are categorical, and we implement the EM algorithm via the method of weights for maximum likelihood estimation. We present a Monte Carlo simulation experiment to compare the properties of the estimators based on this method with those of the estimators under the complete-case scenario. In this experiment we also evaluate the impact on the parameter estimates of increasing the proportion of immune individuals and of censoring among the non-immune ones. We demonstrate the proposed methodology with a real data set involving the time until graduation for the undergraduate Statistics program of the Universidade Federal do Rio Grande do Norte.
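A hedged sketch of the method of weights follows: each record with a missing categorical covariate is expanded into one pseudo-record per category, weighted in the E-step by the posterior probability of that category, and the M-step performs a weighted ML fit. For brevity this fits only a logistic model for the cure probability with a missing binary covariate, not the full cure rate likelihood:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def weighted_logistic_fit(X, y, w, iters=25):
    """Weighted logistic regression via Newton-Raphson (the M-step)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ beta)
        grad = X.T @ (w * (y - p))
        hess = (X * (w * p * (1 - p))[:, None]).T @ X
        beta += np.linalg.solve(hess + 1e-8 * np.eye(len(beta)), grad)
    return beta

# toy data: binary covariate z, missing for ~30% of subjects
n = 400
z = rng.integers(0, 2, n)
y = (rng.random(n) < sigmoid(-0.5 + 1.2 * z)).astype(float)  # "cured" flag
missing = rng.random(n) < 0.3
pz1 = z[~missing].mean()            # P(z=1) estimated from complete cases

beta = np.zeros(2)
for _ in range(30):                 # EM iterations
    # E-step: expand missing records into z=0 and z=1 pseudo-records
    Xc = np.column_stack([np.ones((~missing).sum()), z[~missing]])
    X0 = np.column_stack([np.ones(missing.sum()), np.zeros(missing.sum())])
    X1 = np.column_stack([np.ones(missing.sum()), np.ones(missing.sum())])
    ym = y[missing]
    l0 = (1 - pz1) * np.where(ym == 1, sigmoid(X0 @ beta), 1 - sigmoid(X0 @ beta))
    l1 = pz1 * np.where(ym == 1, sigmoid(X1 @ beta), 1 - sigmoid(X1 @ beta))
    w1 = l1 / (l0 + l1)             # posterior weight of category z=1
    X = np.vstack([Xc, X0, X1])
    yy = np.concatenate([y[~missing], ym, ym])
    w = np.concatenate([np.ones(len(Xc)), 1 - w1, w1])
    # M-step: weighted maximum likelihood
    beta = weighted_logistic_fit(X, yy, w)

print("estimated coefficients:", beta)
```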
Abstract:
In this paper, a multi-objective approach for observing the performance of distribution systems with embedded generators in the steady state, based on heuristics and power system analysis, is proposed. The proposed hybrid performance index describes the quality of the operating state in each considered distribution network configuration. In order to represent the system state, the loss allocation in the distribution systems, based on the Z-bus loss allocation method and a compensation-based power flow algorithm, is determined. An investigation of the impact of the integration of embedded generators on the overall performance of the distribution systems in the steady state is also performed. Results obtained from several case studies are presented and discussed. Copyright © 2004 John Wiley & Sons, Ltd.
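A minimal sketch of Z-bus loss allocation (in the spirit of the method referenced above, with illustrative data): with nodal injected currents I and bus impedance matrix Z = R + jX, total losses Re{I^H Z I} = I^H R I split into per-bus shares L_k = Re{conj(I_k) * sum_j R_kj I_j}:

```python
import numpy as np

Z = np.array([[0.10 + 0.20j, 0.05 + 0.10j],
              [0.05 + 0.10j, 0.15 + 0.30j]])   # toy 2-bus Zbus (p.u.)
I = np.array([1.0 - 0.3j, -0.8 + 0.2j])        # injected currents (p.u.)

R = Z.real
L = np.real(np.conj(I) * (R @ I))              # per-bus loss shares
print("per-bus allocation:", L, " total:", L.sum())
print("check total losses:", np.real(np.conj(I) @ (R @ I)))
```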
Abstract:
The increasing complexity of applications has demanded hardware that is ever more flexible and able to achieve higher performance. Traditional hardware solutions have not succeeded in meeting these applications' constraints. General-purpose processors have inherent flexibility, since they perform several tasks; however, they cannot reach high performance when compared to application-specific devices. Conversely, since application-specific devices perform only a few tasks, they achieve high performance but have less flexibility. Reconfigurable architectures emerged as an alternative to the traditional approaches and have become an area of rising interest over the last decades. The purpose of this new paradigm is to modify the device's behavior according to the application. Thus, it is possible to balance flexibility and performance and to meet the applications' constraints. This work presents the design and implementation of a coarse-grained hybrid reconfigurable architecture for stream-based applications. The architecture, named RoSA, consists of reconfigurable logic attached to a processor. Its goal is to exploit the instruction-level parallelism of data-flow-intensive applications to accelerate their execution on the reconfigurable logic. The instruction-level parallelism extraction is done at compile time; thus, this work also presents an optimization phase for the RoSA architecture to be included in the GCC compiler. To design the architecture, this work also presents a methodology based on hardware reuse of datapaths, named RoSE. RoSE views the reconfigurable units in terms of reusability levels, which provides area savings and datapath simplification. The architecture was implemented in a hardware description language (VHDL) and validated through simulations and prototyping. For performance characterization, benchmarks were used, demonstrating a speedup of 11x on the execution of some applications.
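The compile-time ILP extraction mentioned above can be illustrated by level-scheduling a dataflow graph: operations with no mutual dependencies land in the same level and can run in parallel on the reconfigurable units. The graph and names below are illustrative, not RoSA's actual compiler pass:

```python
deps = {                      # op -> operations it depends on
    "a": [], "b": [], "c": ["a", "b"],
    "d": ["a"], "e": ["c", "d"],
}

levels, placed = [], set()
while len(placed) < len(deps):
    ready = [op for op, pre in deps.items()
             if op not in placed and all(p in placed for p in pre)]
    levels.append(ready)      # all ops in one level execute in parallel
    placed.update(ready)

print(levels)                 # [['a', 'b'], ['c', 'd'], ['e']]
```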
Abstract:
The use of middleware technology in various types of systems, in order to abstract low-level details related to the distribution of application logic, is increasingly common. Among the many systems that can benefit from using these components, we highlight distributed systems, where communication between software components located on different physical machines must be supported. An important issue related to communication between distributed components is the provision of mechanisms for managing quality of service. This work presents a metamodel for modeling component-based middleware in order to provide an application with the abstraction of communication between the components involved in a data stream, regardless of their location. Another feature of the metamodel is the possibility of self-adaptation of the communication mechanism, either by updating the values of its configuration parameters or by replacing it with another mechanism when the specified quality-of-service restrictions are not being met. To this end, monitoring of the communication state is planned (applying techniques such as a feedback control loop), analyzing the related performance metrics. The Model-Driven Development paradigm was used to generate the implementation of a middleware that serves as a proof of concept of the metamodel, along with the configuration and reconfiguration policies related to the dynamic adaptation processes. Accordingly, the metamodel associated with the communication configuration process was defined. The MDD application also comprises the definition of the following transformations: from the architectural model of the middleware to Java code, and from the configuration model to XML.
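A minimal sketch of the planned feedback control loop, with assumed mechanism names, metrics and thresholds: monitor a QoS metric, first retune the active mechanism's parameters, then replace the mechanism if the restriction still cannot be met:

```python
import random

# All names, metrics and thresholds here are ASSUMED for illustration.
LATENCY_LIMIT_MS = 50.0                   # QoS restriction to enforce

class Mechanism:
    def __init__(self, name, base_latency):
        self.name, self.base_latency, self.buffer_size = name, base_latency, 64
    def measure_latency(self):            # stand-in for real monitoring
        return self.base_latency + random.uniform(0, 20) - 0.05 * self.buffer_size

active = Mechanism("tcp-stream", base_latency=60.0)
fallback = Mechanism("udp-stream", base_latency=30.0)

for _ in range(10):                       # feedback control loop
    if active.measure_latency() <= LATENCY_LIMIT_MS:
        continue                          # QoS met: nothing to adapt
    if active.buffer_size < 512:
        active.buffer_size *= 2           # 1st: update configuration parameters
    else:
        active, fallback = fallback, active   # 2nd: replace the mechanism

print("active mechanism:", active.name, "buffer:", active.buffer_size)
```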
Abstract:
The increasing complexity of integrated circuits has boosted the development of communication architectures like Networks-on-Chip (NoCs) as an architectural alternative for the interconnection of Systems-on-Chip (SoCs). Networks-on-Chip support component reuse, parallelism and scalability, enhancing reusability in dedicated application designs. In the literature, many proposals have been made, suggesting different configurations for network-on-chip architectures. Among the networks-on-chip considered, the IPNoSys architecture is a non-conventional one, since it allows operations to be executed while the communication process is performed. This study aims to evaluate the execution of data-flow-based applications on IPNoSys, focusing on their adaptation to its design constraints. Data-flow-based applications are characterized by continuous streams of data on which operations are executed. We expect this type of application to benefit from running on IPNoSys, because its programming model is similar to the network's execution model. Based on the observed behavior of these applications running on IPNoSys, changes were made to the network's execution model, enabling a form of instruction-level parallelism. To this end, implementations of dataflow applications were analyzed and compared.
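The execute-while-communicating idea can be caricatured as a packet that carries pending operations and applies one per routing hop. This only abstracts the IPNoSys model; the actual packet format and instruction set are not detailed in the abstract:

```python
import operator

OPS = {"+": operator.add, "*": operator.mul}

def route_and_execute(value, program, path):
    """Apply one (op, operand) pair per hop along the packet's path."""
    for hop, (op, operand) in zip(path, program):
        value = OPS[op](value, operand)
        print(f"router {hop}: value -> {value}")
    return value

# a packet computes ((2 + 3) * 4) while traversing routers r0 -> r1
route_and_execute(2, [("+", 3), ("*", 4)], ["r0", "r1"])
```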
Abstract:
Distribution systems with distributed generation require new analysis methods, since the networks are no longer passive. Two of the main problems in this new scenario are network reconfiguration and loss allocation. This work presents a distribution systems graphic simulator, developed with reconfiguration functions and a special focus on loss allocation, both considering the presence of distributed generation. The simulator uses a fast and robust power flow algorithm based on the current-summation backward-forward technique. The reconfiguration problem is solved through a heuristic methodology, and the loss allocation function, based on the Zbus method, is presented as an attached result for each obtained configuration. Results are presented and discussed, highlighting the ease of analysis offered by the graphic simulator, an excellent tool for planning and operation engineers and very useful for training. © 2004 IEEE.
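A compact sketch of the current-summation backward-forward sweep on a toy 3-bus radial feeder (illustrative per-unit data): the backward sweep accumulates branch currents from the leaves toward the substation, and the forward sweep updates voltage drops from the substation outward:

```python
import numpy as np

V0 = 1.0 + 0j                              # substation voltage (p.u.)
z = [0.02 + 0.04j, 0.03 + 0.06j]           # branch impedances: 0-1, 1-2
S = [0.5 + 0.2j, 0.3 + 0.1j]               # loads at buses 1 and 2 (p.u.)

V = np.array([V0, V0, V0], dtype=complex)
for _ in range(20):                        # sweep iterations
    # backward: load currents, then accumulate toward the root
    I_load = [np.conj(S[k] / V[k + 1]) for k in range(2)]
    I_branch = [I_load[0] + I_load[1], I_load[1]]
    # forward: voltage drops from the substation outward
    V_new = V.copy()
    V_new[1] = V[0] - z[0] * I_branch[0]
    V_new[2] = V_new[1] - z[1] * I_branch[1]
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

print("bus voltages:", np.round(V, 5))
```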
Abstract:
Low flexibility and reliability in the operation of radial distribution networks require those systems to be built with extra equipment, such as sectionalising switches, so that the network can be reconfigured and its operation quality improved. Thus, sectionalising switches are used for fault isolation and for configuration management (reconfiguration). Moreover, distribution systems are being impacted by the increasing insertion of distributed generators. Hence, distributed generation has become one of the relevant parameters in the evaluation of system reconfiguration. Distributed generation may affect distribution network operation in various ways, causing noticeable impacts depending on its location. The loss allocation problem thus becomes more important considering the possibility of open access to the distribution networks. In this work, a graphic simulator for distribution networks with reconfiguration and loss allocation functions is presented. The reconfiguration problem is solved through a heuristic methodology, using a robust power flow algorithm based on the current-summation backward-forward technique and considering distributed generation. Four different loss allocation methods (Zbus, Direct Loss Coefficient, Substitution and Marginal Loss Coefficient) are implemented and compared. Results for a 32-bus medium-voltage distribution network are presented and discussed.
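Of the four methods compared, the substitution method is the simplest to sketch: an agent's loss share is the change in total losses when it is removed from the system. The loss function below is only an illustrative stand-in for a full power flow run:

```python
def losses(injections):
    """Stand-in for total losses returned by a power flow run (p.u.)."""
    return 0.05 * sum(p * p for p in injections.values())

base = {"load1": -0.5, "load2": -0.3, "dg1": 0.4}
without_dg = {k: v for k, v in base.items() if k != "dg1"}

# substitution method: share = losses with the agent - losses without it
share_dg1 = losses(base) - losses(without_dg)
print("loss share of dg1:", round(share_dg1, 4))  # > 0: dg1 adds losses here
```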
Abstract:
Unbalance and harmonics are two major distortions in three-phase distribution systems. In this paper, an investigation into unbalance phenomena in distribution networks using instantaneous space vector theory is presented. A power oscillation index (POI) and an effective power factor (PFe) are calculated at the network nodes for several unbalanced loading conditions. For system analysis, a general power flow algorithm for three-phase four-wire radial distribution networks, based on the backward-forward technique, is applied. Results obtained from several case studies using medium- and low-voltage test feeders with unbalanced loads are presented and discussed. © 2010 Praise Worthy Prize S.r.l. - All rights reserved.
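A small sketch of the phenomenon the space-vector analysis targets: for a balanced load the instantaneous three-phase power is constant, while unbalance makes it oscillate at twice the fundamental frequency. The exact POI and PFe definitions are the paper's own, so the ratio printed below is only an assumed illustration:

```python
import numpy as np

t = np.linspace(0, 0.04, 2000)                    # two 50 Hz cycles
w = 2 * np.pi * 50
Va, Vb, Vc = 1.0, 1.0, 1.0                        # balanced voltages (p.u.)
Ia, Ib, Ic = 1.0, 0.6, 0.8                        # unbalanced currents (p.u.)
v = [Va*np.cos(w*t), Vb*np.cos(w*t - 2*np.pi/3), Vc*np.cos(w*t + 2*np.pi/3)]
i = [Ia*np.cos(w*t), Ib*np.cos(w*t - 2*np.pi/3), Ic*np.cos(w*t + 2*np.pi/3)]

p = sum(vk * ik for vk, ik in zip(v, i))          # instantaneous power
oscillation = (p.max() - p.min()) / 2             # amplitude of the 2w ripple
print("mean power:", round(float(p.mean()), 4))
print("assumed oscillation ratio:", round(float(oscillation / p.mean()), 4))
```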