814 results for swd: Benchmark


Relevance:

10.00%

Publisher:

Abstract:

The two largest causes of battery consumption on mobile devices are the display and network operations. Since most applications need to share data and communicate with remote servers, communications should be as lightweight and efficient as possible. In network communication, serialization plays a central role as the process of converting an object into a stream of bytes. One of the most popular data-interchange formats is JSON (JavaScript Object Notation). This paper presents a survey on JSON parsers in mobile scenarios. The aim of the survey is to find the most efficient JSON parser in mobile communications, characterised by a high transfer rate of small amounts of data. In the performance benchmark we compare the time required to read and write data with several popular JSON parser implementations, such as Gson, Jackson and org.json. The results of this survey are relevant to anyone who needs to select an efficient parser for mobile communication.
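Gson, Jackson and org.json are Java libraries, so the benchmark in the paper runs on the JVM; purely as a minimal sketch of the read/write timing methodology, here is the same micro-benchmark shape using Python's standard json module. The payload, iteration count and timing scheme are assumptions of this illustration, not the paper's setup.

```python
import json
import time

# Small payload, matching the mobile scenario described above:
# frequent transfers of small amounts of data.
payload = {"id": 42, "name": "sensor-a", "values": [1.5, 2.5, 3.5]}
encoded = json.dumps(payload)

N = 100_000  # arbitrary iteration count for this sketch

start = time.perf_counter()
for _ in range(N):
    json.dumps(payload)   # write path: object -> text
write_time = time.perf_counter() - start

start = time.perf_counter()
for _ in range(N):
    json.loads(encoded)   # read path: text -> object
read_time = time.perf_counter() - start

print(f"write: {write_time:.3f}s  read: {read_time:.3f}s over {N} iterations")
```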

Relevance:

10.00%

Publisher:

Abstract:

This paper deals with a hierarchical structure composed of an event-based supervisor at the higher level and two distinct proportional integral (PI) controllers at the lower level. The controllers, namely a fuzzy PI controller and a fractional-order PI controller, are applied to a variable speed wind energy conversion system with a doubly-fed induction generator. The event-based supervisor determines the operational state of the wind energy conversion system among four possibilities: park, start-up, generating or brake, and sends that state to the controllers at the lower level. In the start-up state, the controllers act only on the electric torque while the pitch angle is kept at zero. In the generating state, the controllers must act on the pitch angle of the blades in order to keep the electric power around its nominal value, thus ensuring that the safety conditions required for integration into the electric grid are met. Comparisons between the fuzzy PI and fractional-order PI pitch controllers applied to a wind turbine benchmark model are given, and simulation results obtained with Matlab/Simulink are shown. From a closed-loop point of view, the fuzzy PI controller gives a smoother response at the expense of a larger number of pitch-angle variations, implying frequent switches between operational states. On the other hand, the fractional-order PI controller gives an oscillatory response with less control effort, reducing the switches between operational states. (C) 2015 Elsevier Ltd. All rights reserved.
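The abstract does not give the supervisor's transition rules or the controller gains, so the sketch below only illustrates the two-level structure it describes: an event-based supervisor selecting among the four operational states, and a PI controller that acts on the pitch angle only in the generating state. All thresholds and gains are placeholders.

```python
# Two-level structure: event-based supervisor above, PI controller below.
PARK, START_UP, GENERATING, BRAKE = "park", "start-up", "generating", "brake"

def supervisor(wind_speed, fault):
    """Map measurements and events to one of the four operational states."""
    if fault:
        return BRAKE
    if wind_speed < 3.0:    # below cut-in speed (placeholder threshold)
        return PARK
    if wind_speed < 5.0:    # accelerating toward the generating region
        return START_UP
    return GENERATING

class PIPitchController:
    def __init__(self, kp=0.5, ki=0.1, dt=0.01):   # placeholder gains
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, power_error):
        """Regulate electric power around its nominal value via pitch."""
        self.integral += power_error * self.dt
        return self.kp * power_error + self.ki * self.integral

controller = PIPitchController()
state = supervisor(wind_speed=12.0, fault=False)
# In start-up only the electric torque is actuated and the pitch angle
# stays at zero; pitch actuation happens in the generating state.
pitch = controller.step(power_error=0.05) if state == GENERATING else 0.0
print(state, pitch)
```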

Relevance:

10.00%

Publisher:

Abstract:

This paper addresses an onshore variable speed wind turbine with a doubly fed induction generator operating under supervisory control. The control architecture is equipped with an event-based supervisor at the supervision level, and with either a fuzzy proportional integral controller or a discrete adaptive linear quadratic controller at the execution level. The supervisory control assesses the operational state of the variable speed wind turbine and sends the state to the execution level. The controllers operate in the full load region to extract energy from the wind at full power while ensuring the safety conditions required to inject that energy into the electric grid. A comparison between simulations of the proposed controllers, with the supervisory control included, on the variable speed wind turbine benchmark model is presented to assess the advantages of these controllers. (C) 2015 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
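As an illustration of the discrete linear quadratic controller family named above (the adaptation mechanism and the wind turbine benchmark model are not reproduced), here is a minimal discrete LQ gain computation obtained by fixed-point iteration of the Riccati equation; the system matrices and weights are placeholders of this sketch.

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iterations=500):
    """Iterate the discrete algebraic Riccati equation and return the
    state-feedback gain K for the control law u = -K x."""
    P = Q.copy()
    for _ in range(iterations):
        P = (A.T @ P @ A
             - A.T @ P @ B @ np.linalg.inv(R + B.T @ P @ B) @ B.T @ P @ A
             + Q)
    return np.linalg.inv(R + B.T @ P @ B) @ B.T @ P @ A

A = np.array([[1.0, 0.01],    # placeholder discrete-time dynamics,
              [0.0, 1.0]])    # not the turbine model
B = np.array([[0.0], [0.01]])
Q = np.eye(2)                 # placeholder state and input weights
R = np.array([[1.0]])
print("LQ gain:", dlqr_gain(A, B, Q, R))
```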

Relevance:

10.00%

Publisher:

Abstract:

Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space becomes even harder if, beyond performance and area, we also consider metrics like performance and area efficiency, where the designer seeks the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space based on experimental execution results for a particular benchmark of algorithms, our approach is to analyse the algorithms formally, considering the main architectural aspects, and to determine how each architectural aspect relates to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach, we carried out a theoretical analysis of a dense matrix multiplication algorithm and derived an equation relating the number of execution cycles to the architectural parameters. Based on this equation, a many-core architecture was designed. The results indicate that a 100 mm² integrated circuit implementation of the proposed architecture in a 65 nm technology achieves 464 GFLOPs (double precision floating-point) for a memory bandwidth of 16 GB/s, corresponding to a performance efficiency of 71%. In a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, corresponding to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
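The derived equation is not quoted in the abstract. The sketch below shows the generic shape such a model can take, execution cycles as the maximum of compute cycles and external-memory transfer cycles for a blocked dense matrix multiplication; the traffic term and every numeric value are assumptions of this illustration, not the paper's model.

```python
def matmul_cycles(n, cores, flops_per_core_per_cycle,
                  block, bandwidth_bytes_per_cycle, word_bytes=8):
    """Cycle-count model for C = A @ B with n x n matrices on a
    many-core chip with block x block tiles in per-core local memory."""
    # Compute side: 2*n^3 floating-point operations spread over the cores.
    compute = 2 * n**3 / (cores * flops_per_core_per_cycle)
    # Memory side: A and B tiles are each streamed from external memory
    # about n/block times, while C is written once.
    traffic_words = 2 * n**3 / block + n**2
    memory = traffic_words * word_bytes / bandwidth_bytes_per_cycle
    # The slower of the two sides bounds the execution time.
    return max(compute, memory)

# Illustrative numbers only (not the paper's 65 nm / 45 nm designs):
print(matmul_cycles(n=4096, cores=256, flops_per_core_per_cycle=2,
                    block=128, bandwidth_bytes_per_cycle=16))
```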

Relevance:

10.00%

Publisher:

Abstract:

Clustering ensemble methods produce a consensus partition of a set of data points by combining the results of a collection of base clustering algorithms. In the evidence accumulation clustering (EAC) paradigm, the clustering ensemble is transformed into a pairwise co-association matrix, thus avoiding the label correspondence problem, which is intrinsic to other clustering ensemble schemes. In this paper, we propose a consensus clustering approach based on the EAC paradigm, which is not limited to crisp partitions and fully exploits the nature of the co-association matrix. Our solution determines probabilistic assignments of data points to clusters by minimizing a Bregman divergence between the observed co-association frequencies and the corresponding co-occurrence probabilities expressed as functions of the unknown assignments. We additionally propose an optimization algorithm to find a solution under any double-convex Bregman divergence. Experiments on both synthetic and real benchmark data show the effectiveness of the proposed approach.
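The first step of the EAC paradigm, building the co-association matrix from the ensemble, is standard and concrete enough to sketch; the paper's contribution, minimizing a Bregman divergence to obtain probabilistic assignments, is not reproduced here.

```python
import numpy as np

def co_association(partitions):
    """Entry (i, j) is the fraction of base partitions in which points
    i and j fall in the same cluster, so no label correspondence between
    the base clusterings is ever needed."""
    n = len(partitions[0])
    C = np.zeros((n, n))
    for labels in partitions:
        labels = np.asarray(labels)
        C += (labels[:, None] == labels[None, :]).astype(float)
    return C / len(partitions)

# Toy ensemble: three base clusterings of five points, with
# incompatible label numbering across partitions.
ensemble = [[0, 0, 1, 1, 1],
            [0, 0, 0, 1, 1],
            [1, 1, 0, 0, 2]]
print(co_association(ensemble))
```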

Relevance:

10.00%

Publisher:

Abstract:

CO2 capture from gaseous effluents is one of the great challenges faced by chemical and environmental engineers, as the increase in CO2 levels in the Earth's atmosphere might be responsible for dramatic climate changes. Among the existing capture technologies, the only proven and mature one is chemical absorption using aqueous amine solutions. However, since this process is somewhat expensive, it is important to choose the most efficient and, at the same time, least expensive solvents. For this purpose, a pilot test facility was assembled, comprising an absorption column, a stripping column, a heat exchanger between the two columns, a reboiler for the stripping column, pumping systems, surge tanks and all the necessary instrumentation and control systems. Several aqueous amine solutions were tested in this facility and, from the set of six amines tested, diethanolamine turned out to be the most economical choice, as it showed a higher CO2 loading capacity (0.982 mol of CO2 per mol of amine) and the lowest price per litre (25.70 €/L), even when compared with monoethanolamine, the benchmark solvent, whose price per litre is 30.50 €/L.
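A minimal sketch of the arithmetic behind the "most economical choice" claim, combining loading capacity with price. Only the diethanolamine figures (0.982 mol/mol, 25.70 €/L) come from the abstract; the molar concentration of the solution is an assumed placeholder.

```python
def price_per_mol_co2(price_eur_per_litre, loading_mol_per_mol,
                      amine_mol_per_litre):
    """Euros spent per mol of CO2 the solution can absorb: the lower,
    the more economical the solvent."""
    return price_eur_per_litre / (loading_mol_per_mol * amine_mol_per_litre)

# Diethanolamine figures from the abstract; a 2 mol/L solution is an
# illustrative assumption, not a value reported in the paper.
print(f"{price_per_mol_co2(25.70, 0.982, 2.0):.2f} EUR per mol CO2")
```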

Relevance:

10.00%

Publisher:

Abstract:

In this paper we address the problem of computing multiple roots of a system of nonlinear equations through the global optimization of an appropriate merit function. The search for a global minimizer of the merit function is carried out by a metaheuristic known as harmony search, which does not require any derivative information. The multiple roots of the system are determined sequentially over several iterations of a single run, with the merit function modified by penalty terms that create repulsion areas around previously computed minimizers. A repulsion algorithm based on a multiplicative penalty function is proposed. Preliminary numerical experiments on a benchmark set of problems show the effectiveness of the proposed method.
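The exact multiplicative penalty is defined in the paper and not reproduced here; this sketch assumes one plausible form simply to show how repulsion areas around already-located roots reshape the merit function (the harmony search metaheuristic itself is omitted).

```python
import numpy as np

def merit(x, F, found_roots, radius=0.5, strength=10.0):
    """Sum-of-squares merit ||F(x)||^2 multiplied by penalty factors
    that grow near previously computed roots (assumed penalty form)."""
    value = np.sum(F(x) ** 2)
    for r in found_roots:
        d = np.linalg.norm(x - r)
        if d < radius:
            value *= 1.0 + strength * (radius - d)  # repulsion area around r
    return value

# Toy system x^2 - 1 = 0 with roots at +1 and -1:
F = lambda x: np.array([x[0] ** 2 - 1.0])
roots = [np.array([1.0])]                  # root found in an earlier iteration
print(merit(np.array([1.05]), F, roots))   # penalised: close to a known root
print(merit(np.array([-1.0]), F, roots))   # zero: an as-yet-unknown root
```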

Relevance:

10.00%

Publisher:

Abstract:

With the discovery of the Higgs boson at the Large Hadron Collider, the attention of the high energy physics community has turned to understanding the properties of the Higgs boson, together with the hope of finding more scalars during run 2. In this work we discuss scenarios where a combination of three decays, involving the 125 GeV Higgs boson, the Z boson and at least one more scalar, gives rise to an indisputable signal of CP violation. We use a complex two-Higgs doublet model as a reference model and present some benchmark points that have passed all current experimental and theoretical constraints and that have cross sections large enough to be probed during run 2.

Relevance:

10.00%

Publisher:

Abstract:

Feature discretization (FD) techniques often yield adequate and compact representations of the data, suitable for machine learning and pattern recognition problems. Compared to the original features, these representations usually decrease training time and yield higher classification accuracy, while allowing humans to better understand and visualize the data. This paper proposes two new FD techniques. The first is based on the well-known Linde-Buzo-Gray quantization algorithm coupled with a relevance criterion, and is able to perform unsupervised, supervised, or semi-supervised discretization. The second works in supervised mode and is based on maximizing the mutual information between each discrete feature and the class label. Our experimental results on standard benchmark datasets show that these techniques scale up to high-dimensional data, attaining in many cases better accuracy than existing unsupervised and supervised FD approaches, while using fewer discretization intervals.
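As a sketch of the criterion behind the second technique, choosing cut points that maximize the mutual information between the discretized feature and the class label, here is a single-threshold version; the paper's full procedure and stopping rules are not reproduced.

```python
import numpy as np
from collections import Counter

def mutual_information(x_disc, y):
    """Plug-in estimate of I(X; Y) from two discrete label sequences."""
    n = len(y)
    mi = 0.0
    for (xv, yv), nxy in Counter(zip(x_disc, y)).items():
        px = sum(v == xv for v in x_disc) / n
        py = sum(v == yv for v in y) / n
        mi += (nxy / n) * np.log((nxy / n) / (px * py))
    return mi

def best_threshold(x, y):
    """Pick the midpoint between sorted feature values that maximizes
    the mutual information of the binarized feature with the label."""
    xs = np.sort(x)
    candidates = (xs[:-1] + xs[1:]) / 2
    return max(candidates, key=lambda t: mutual_information(x > t, y))

x = np.array([0.1, 0.3, 0.35, 0.8, 0.9, 1.1])
y = np.array([0, 0, 0, 1, 1, 1])
print(best_threshold(x, y))   # 0.575, the cut separating the two classes
```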

Relevance:

10.00%

Publisher:

Abstract:

In machine learning and pattern recognition tasks, feature discretization techniques may have several advantages. The discretized features may hold enough information for the learning task at hand while ignoring minor fluctuations that are irrelevant or harmful to that task. The discretized features also have more compact representations that may yield both better accuracy and lower training time compared to the original features. However, in many cases, mainly with medium and high-dimensional data, the large number of features usually implies some redundancy among them. Thus, we may further apply feature selection (FS) techniques to the discrete data, keeping the most relevant features while discarding the irrelevant and redundant ones. In this paper, we propose relevance and redundancy criteria for supervised feature selection on discrete data. These criteria are applied to the bin-class histograms of the discrete features. The experimental results, on public benchmark data, show that the proposed criteria can achieve better accuracy than widely used relevance and redundancy criteria, such as mutual information and the Fisher ratio.
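The paper's histogram-based criteria are not reproduced here; the sketch below shows the generic greedy relevance-minus-redundancy selection loop on discrete features, using mutual information as the baseline criterion that the paper compares against.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def select_features(X, y, k):
    """Greedily pick k columns of the discrete matrix X, maximizing
    relevance to y minus mean redundancy to features already chosen."""
    remaining = list(range(X.shape[1]))
    selected = []
    while len(selected) < k:
        def score(j):
            relevance = mutual_info_score(y, X[:, j])
            redundancy = (np.mean([mutual_info_score(X[:, s], X[:, j])
                                   for s in selected]) if selected else 0.0)
            return relevance - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

X = np.array([[0, 0, 1],   # toy discrete data: feature 0 matches the
              [0, 1, 1],   # class, feature 2 is its mirror image,
              [1, 0, 0],   # feature 1 is noise
              [1, 1, 0]])
y = np.array([0, 0, 1, 1])
print(select_features(X, y, k=2))
```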

Relevance:

10.00%

Publisher:

Abstract:

The Evidence Accumulation Clustering (EAC) paradigm is a clustering ensemble method which derives a consensus partition from a collection of base clusterings obtained with different algorithms. It collects, from the partitions in the ensemble, a set of pairwise observations about the co-occurrence of objects in the same cluster, and uses these co-occurrence statistics to derive a similarity matrix referred to as the co-association matrix. The Probabilistic Evidence Accumulation for Clustering Ensembles (PEACE) algorithm is a principled approach for extracting a consensus clustering from the observations encoded in the co-association matrix, based on a probabilistic model of the co-association matrix parameterized by the unknown assignments of objects to clusters. In this paper we extend the PEACE algorithm by deriving the consensus solution through a maximum a posteriori (MAP) approach with Dirichlet priors defined over the unknown probabilistic cluster assignments. In particular, we study the positive regularization effect of the Dirichlet priors on the final consensus solution with both synthetic and real benchmark data.
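In broad strokes (the concrete likelihood is the one defined for PEACE in the paper, not reproduced here), a MAP extension of this kind adds a Dirichlet log-prior over each object's assignment probabilities to the log-likelihood of the co-association observations:

```latex
\hat{Y} = \arg\max_{Y}\; \log p(C \mid Y)
        + \sum_{i=1}^{n}\sum_{k=1}^{K} (\alpha_k - 1)\,\log y_{ik},
\qquad y_{ik} \ge 0, \quad \sum_{k=1}^{K} y_{ik} = 1,
```

where C is the co-association matrix, y_ik the probabilistic assignment of object i to cluster k, and alpha the Dirichlet hyperparameters; for alpha_k > 1 the prior pulls solutions away from degenerate hard assignments, which is consistent with the positive regularization effect the paper studies.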

Relevance:

10.00%

Publisher:

Abstract:

The energy sustainability of the planet is an ongoing concern and, in this regard, energy efficiency is essential to reducing consumption in every sector of activity. In the residential sector, inappropriate user behaviour combined with ignorance of the consumption of individual appliances are factors that prevent a reduction in energy consumption. An important tool here is consumption monitoring, in particular non-intrusive monitoring, which has economic advantages over intrusive monitoring although it raises some challenges in load disaggregation. This document therefore addresses non-intrusive monitoring: a tool was developed for disaggregating residential loads, especially appliances with high consumption. To this end, the aggregate electricity, water and gas consumption of six dwellings in the municipality of Vila Nova de Gaia was monitored. By incorporating the water and gas vectors in addition to electricity, it was shown that the performance of the appliance disaggregation algorithm can increase for appliances that simultaneously use electricity and water, or electricity and gas. Energy efficiency is also part of this work: energy efficiency measures were implemented for one of the dwellings under study, in order to determine which ones exhibited the greatest savings potential as well as the shortest payback periods. In general, the proposed objectives were achieved, and it is expected that in the near future non-intrusive consumption monitoring will become a reference solution for the energy sustainability of the residential sector.
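A toy sketch of the multi-vector idea above: an appliance that draws electricity and water at the same time (a washing machine, say) becomes easier to identify when a step in aggregate power coincides with a step in aggregate water flow. The signals and thresholds below are placeholders, not the dissertation's algorithm or data.

```python
import numpy as np

def coinciding_events(power, water, power_step=500.0, water_step=2.0):
    """Return sample indices where both aggregate signals jump together,
    hinting at an appliance that uses electricity and water at once."""
    dp = np.diff(power)
    dw = np.diff(water)
    return np.where((dp > power_step) & (dw > water_step))[0] + 1

power = np.array([100, 110, 700, 705, 710, 200])   # watts (toy trace)
water = np.array([0.0, 0.1, 3.0, 3.1, 3.0, 0.2])   # litres/min (toy trace)
print(coinciding_events(power, water))   # [2]: a joint power+water event
```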

Relevance:

10.00%

Publisher:

Abstract:

Dissertation submitted to obtain the Master's degree in Informatics Engineering

Relevance:

10.00%

Publisher:

Abstract:

Dissertation submitted to Universidade Nova de Lisboa to obtain the Master's degree in Electrical and Computer Engineering

Relevance:

10.00%

Publisher:

Abstract:

Dissertation submitted to obtain the Master's degree in Informatics Engineering