167 results for Faustino, Mário


Relevance: 10.00%

Abstract:

Clustering ensemble methods produce a consensus partition of a set of data points by combining the results of a collection of base clustering algorithms. In the evidence accumulation clustering (EAC) paradigm, the clustering ensemble is transformed into a pairwise co-association matrix, thus avoiding the label correspondence problem, which is intrinsic to other clustering ensemble schemes. In this paper, we propose a consensus clustering approach based on the EAC paradigm, which is not limited to crisp partitions and fully exploits the nature of the co-association matrix. Our solution determines probabilistic assignments of data points to clusters by minimizing a Bregman divergence between the observed co-association frequencies and the corresponding co-occurrence probabilities expressed as functions of the unknown assignments. We additionally propose an optimization algorithm to find a solution under any double-convex Bregman divergence. Experiments on both synthetic and real benchmark data show the effectiveness of the proposed approach.
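As a concrete illustration of the EAC paradigm described above, the following sketch builds the co-association matrix from a toy ensemble and evaluates the squared-error instance of a Bregman divergence between the observed co-association frequencies and the co-occurrence probabilities implied by soft assignments. The function names and the tiny example are illustrative, not taken from the paper's implementation.

```python
# Evidence accumulation step: C[i][j] is the fraction of base clusterings
# that place points i and j in the same cluster.
def co_association(partitions, n):
    """partitions: list of label lists, each of length n."""
    C = [[0.0] * n for _ in range(n)]
    for labels in partitions:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    C[i][j] += 1.0
    m = float(len(partitions))
    return [[C[i][j] / m for j in range(n)] for i in range(n)]

def squared_error_objective(C, Y):
    """Squared-error (a Bregman divergence) between the co-association
    frequencies C and the co-occurrence probabilities
    P[i][j] = sum_k Y[i][k] * Y[j][k] of soft assignments Y."""
    n, k = len(Y), len(Y[0])
    total = 0.0
    for i in range(n):
        for j in range(n):
            p = sum(Y[i][c] * Y[j][c] for c in range(k))
            total += (C[i][j] - p) ** 2
    return total

# Two base partitions that agree (up to label permutation): the
# co-association entries are 0 or 1, and the matching crisp assignment
# attains zero objective.
parts = [[0, 0, 1, 1], [1, 1, 0, 0]]
C = co_association(parts, 4)
Y = [[1, 0], [1, 0], [0, 1], [0, 1]]
print(squared_error_objective(C, Y))  # → 0.0
```

Note how the label correspondence problem disappears: the two base partitions use opposite labels, yet the co-association matrix is the same.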

Relevance: 10.00%

Abstract:

Project work submitted to the Escola Superior de Teatro e Cinema in fulfilment of the requirements for the degree of Master in Theatre, specialization in Directing (Encenação).

Relevance: 10.00%

Abstract:

The Chaves basin is a pull-apart tectonic depression emplaced on granites, schists, and graywackes, and filled with a sedimentary sequence of variable thickness. It is a rather complex structure, as it includes an intricate network of faults and hydrogeological systems. The topography of the basement of the Chaves basin remains unclear, as no drill hole has ever intersected the bottom of the sediments, and resistivity surveys suffer from severe equivalence issues arising from the geological setting. In this work, a joint inversion approach for 1D resistivity and gravity data, designed for layered environments, is used to combine the consistent spatial distribution of the gravity data with the depth sensitivity of the resistivity data. A comparison between the results of inverting each data set individually and the results of the joint inversion shows that, although the joint inversion has more difficulty adjusting to the observed data, it provides more realistic and geologically meaningful models than the individual inversions. This work contributes to a better understanding of the Chaves basin, while further examining both the advantages and the difficulties of applying the joint inversion of gravity and resistivity data.
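The essence of a joint inversion is that one layered model must fit both data sets at once; a minimal sketch of such a joint objective, assuming a simple weighted sum of mean squared misfits (the paper's 1D layered parameterisation and solver are more specific than this generic form):

```python
def joint_misfit(pred_grav, obs_grav, pred_res, obs_res, w=1.0):
    """Sum of the mean squared misfits of the gravity and resistivity
    data sets, with a trade-off weight w on the resistivity term."""
    g = sum((p - o) ** 2 for p, o in zip(pred_grav, obs_grav)) / len(obs_grav)
    r = sum((p - o) ** 2 for p, o in zip(pred_res, obs_res)) / len(obs_res)
    return g + w * r

# A layered model that fits both data sets perfectly has zero joint misfit.
print(joint_misfit([1.0, 2.0], [1.0, 2.0], [30.0], [30.0]))  # → 0.0
```

Minimising such a combined objective is what forces the model to honour the spatial consistency of gravity and the depth sensitivity of resistivity simultaneously.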

Relevance: 10.00%

Abstract:

We consider a general coupling of two identical chaotic dynamical systems and obtain the conditions for synchronization. We consider two types of synchronization: complete synchronization and delayed synchronization. We then study four couplings with different behaviors regarding their ability to synchronize either completely or with delay: the symmetric linear coupled system, the commanded linear coupled system, the commanded coupled system with delay, and the symmetric coupled system with delay. The values of the coupling strength for which a coupling synchronizes define its window of synchronization. We obtain analytically the windows of complete synchronization and apply the result to the couplings that admit complete synchronization. We also obtain analytically the window of chaotic delayed synchronization for the only coupling considered that admits a chaotic delayed synchronization, the commanded coupled system with delay. Finally, we use four different free chaotic dynamics (based on the tent map, the logistic map, a three-piecewise linear map, and a cubic-like map) to observe numerically the analytically predicted windows.
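A window of synchronization can be observed numerically with a symmetric linear coupling of two tent maps; the diffusive coupling form below is an assumption for illustration, not necessarily the exact systems of the paper. Since the tent map has Lipschitz constant 2, the difference e_n = x_n - y_n satisfies |e_{n+1}| <= |1 - 2c| * 2 * |e_n|, so complete synchronization is expected for c in (1/4, 3/4):

```python
def tent(x):
    # Full-height tent map on [0, 1].
    return 2 * x if x < 0.5 else 2 * (1 - x)

def coupled_orbit(x0, y0, c, steps):
    """Symmetric linear coupling: each system is pulled toward the
    other's image by the coupling strength c."""
    x, y = x0, y0
    for _ in range(steps):
        fx, fy = tent(x), tent(y)
        x, y = (1 - c) * fx + c * fy, (1 - c) * fy + c * fx
    return x, y

# Inside the predicted window (c = 0.3) the difference contracts by a
# factor of at most |1 - 0.6| * 2 = 0.8 per step and dies out.
x, y = coupled_orbit(0.123, 0.567, 0.3, 200)
print(abs(x - y) < 1e-9)  # → True

# Outside the window (c = 0.05) the difference is expanded instead and
# typically stays of order one.
x, y = coupled_orbit(0.123, 0.567, 0.05, 200)
print(abs(x - y))
```

Scanning c over (0, 1) with this script traces out the synchronization window numerically, mirroring the analytical prediction.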

Relevance: 10.00%

Abstract:

Final Master's project submitted in fulfilment of the requirements for the degree of Master in Electronics and Telecommunications Engineering.

Relevance: 10.00%

Abstract:

Single-processor architectures are unable to provide the performance required by high-performance embedded systems. Parallel processing based on general-purpose processors can achieve these performances, but with a considerable increase in the required resources. However, in many cases, simplified optimized parallel cores can be used instead of general-purpose processors, achieving better performance at lower resource utilization. In this paper, we propose a configurable many-core architecture to serve as a co-processor for high-performance embedded computing on Field-Programmable Gate Arrays. The architecture consists of an array of configurable simple cores with support for floating-point operations, interconnected by a configurable interconnection network. For each core it is possible to configure the size of the internal memory, the supported operations, and the number of interfacing ports. The architecture was tested on a ZYNQ-7020 FPGA executing several parallel algorithms. The results show that the proposed many-core architecture achieves better performance than a parallel general-purpose processor and that up to 32 floating-point cores can be implemented in a ZYNQ-7020 SoC FPGA.

Relevance: 10.00%

Abstract:

Sparse matrix-vector multiplication (SMVM) is a fundamental operation in many scientific and engineering applications. Sparse matrices often have thousands of rows and columns where most of the entries are zero, with the non-zero data spread over the matrix. This lack of data locality reduces the effectiveness of the data cache in general-purpose processors, considerably lowering their performance efficiency compared to what is achieved with dense matrix multiplication. In this paper, we propose a parallel processing solution for SMVM on a many-core architecture. The architecture is tested with known benchmarks using a ZYNQ-7020 FPGA. It is scalable in the number of core elements and limited only by the available memory bandwidth. It achieves performance efficiencies of up to almost 70% and better performance than previous FPGA designs.
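The SMVM kernel itself is simple to state in software; below is a minimal reference version in the standard compressed sparse row (CSR) layout. The hardware design uses its own internal storage, so CSR here is only the usual software baseline for comparison.

```python
def csr_spmv(values, col_idx, row_ptr, x):
    """y = A @ x for a matrix A stored as CSR arrays: values holds the
    non-zeros row by row, col_idx their column indices, and row_ptr the
    offset where each row starts."""
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

# A = [[4, 0, 1],
#      [0, 2, 0],
#      [3, 0, 5]]
values  = [4.0, 1.0, 2.0, 3.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(csr_spmv(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # → [5.0, 2.0, 8.0]
```

The indirect access `x[col_idx[k]]` is exactly what defeats the data cache on general-purpose processors and what the many-core design targets.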

Relevance: 10.00%

Abstract:

Partial dynamic reconfiguration of FPGAs can be used to implement complex applications using the concept of virtual hardware. In this work we used partial dynamic reconfiguration to implement a JPEG decoder with reduced area. The image decoding process was adapted to be implemented on the FPGA fabric using this technique. The architecture was tested on a low-cost ZYNQ-7020 FPGA that supports dynamic reconfiguration. The results show that the proposed solution needs only 40% of the resources utilized by a static implementation. In exchange for the saved internal FPGA resources, the dynamic solution is about 9x slower than the static one; a throughput of 7 images per second is achievable with the proposed partial dynamic reconfiguration solution.

Relevance: 10.00%

Abstract:

The cleaning of syngas is one of the most important challenges in the development of technologies based on the gasification of biomass. Tar is an undesired byproduct because, once condensed, it can cause fouling and plugging and damage the downstream equipment. Thermochemical methods for tar destruction, which include catalytic cracking and thermal cracking, are intrinsically attractive because they are energetically efficient, require no moving parts, and produce no byproducts. The main difficulty with these methods is the tendency of tar to polymerize at high temperatures. An alternative to tar removal is the complete combustion of the syngas in a porous burner directly as it leaves the particle capture system. In this context, the main aim of this study is to evaluate the destruction of the tar present in syngas from biomass gasification by combustion in porous media. A gas mixture was used to emulate the syngas, which included toluene as a tar surrogate. Initially, CHEMKIN was used to assess the potential of the proposed solution. The calculations revealed the complete destruction of the tar surrogate over a wide range of operating conditions and indicated that the most important reactions in the toluene conversion are C6H5CH3 + OH <-> C6H5CH2 + H2O, C6H5CH3 + OH <-> C6H4CH3 + H2O, and C6H5CH3 + O <-> OC6H4CH3 + H, and that the formation of toluene can occur through C6H5CH2 + H <-> C6H5CH3. Subsequently, experimental tests were performed in a porous burner fired with pure methane and syngas for two equivalence ratios and three flow velocities. In these tests, the toluene concentration in the syngas varied from 50 to 200 g/Nm³. In line with the CHEMKIN calculations, the results revealed that toluene was almost completely destroyed under all tested conditions and that the process did not affect the performance of the porous burner regarding the emissions of CO, hydrocarbons, and NOx.

Relevance: 10.00%

Abstract:

This paper proposes a multi-agent implementation of a management system for the automated negotiation of electricity allocation for charging electric vehicles (EVs) and simulates its performance. The widespread existence of charging infrastructures capable of autonomous operation is recognised as a major driver towards the mass adoption of EVs by mobility consumers. Conflicting requirements from both the power grid and EV owners require automated middleman aggregator agents to intermediate all operations between these parties, for example, bidding and negotiation. Multi-agent systems are designed to provide distributed, modular, coordinated and collaborative management systems; they therefore seem suitable to address the management of such complex charging infrastructures. Our solution consists of virtual agents to be integrated into the management software of a charging infrastructure. We start by modelling the multi-agent architecture using a federated, hierarchically layered setup, as well as the agents' behaviours and interactions. Each layer comprises several components, for example, databases, decision-making and auction mechanisms. The implementation of the multi-agent platform and auction rules, and of models for battery dynamics, is also addressed. Four scenarios were predefined to assess the management system's performance under realistic usage conditions, considering different profiles for EV owners, different infrastructure configurations and usage, and different loads on the utility grid (using real data from the concession holder of the Portuguese electricity transmission grid). Simulations carried out with the four scenarios validate the performance of the modelled system while complying with all the requirements. Although all of these have been performed for a single charging station, a multi-agent design may in the future be applied to the higher-level problem of distributing energy among charging stations. Copyright (c) 2014 John Wiley & Sons, Ltd.
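As a toy illustration of the aggregator's role, the sketch below allocates a limited grid capacity among bidding EV agents with a simple greedy sealed-bid rule. The paper's auction mechanisms and agent platform are considerably richer than this, and all identifiers here are hypothetical.

```python
def allocate(bids, capacity_kw):
    """bids: dict ev_id -> (price_per_kw, requested_kw).
    Greedy allocation by descending price until capacity runs out."""
    allocation = {}
    remaining = capacity_kw
    for ev, (price, req) in sorted(bids.items(), key=lambda kv: -kv[1][0]):
        granted = min(req, remaining)
        allocation[ev] = granted
        remaining -= granted
    return allocation

bids = {"ev1": (0.30, 7.0), "ev2": (0.45, 11.0), "ev3": (0.25, 7.0)}
# ev2 bids highest and is served in full; ev1 gets the remaining 4 kW.
print(allocate(bids, 15.0))  # → {'ev2': 11.0, 'ev1': 4.0, 'ev3': 0.0}
```

A real aggregator would additionally account for battery dynamics, grid load forecasts and owner profiles, which is precisely what the simulated scenarios in the paper vary.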

Relevance: 10.00%

Abstract:

Feature discretization (FD) techniques often yield adequate and compact representations of the data, suitable for machine learning and pattern recognition problems. These representations usually decrease the training time and yield higher classification accuracy, while allowing humans to better understand and visualize the data, as compared to the use of the original features. This paper proposes two new FD techniques. The first is based on the well-known Linde-Buzo-Gray quantization algorithm, coupled with a relevance criterion, and is able to perform unsupervised, supervised, or semi-supervised discretization. The second works in supervised mode and is based on the maximization of the mutual information between each discrete feature and the class label. Our experimental results on standard benchmark datasets show that these techniques scale up to high-dimensional data, attaining in many cases better accuracy than existing unsupervised and supervised FD approaches, while using fewer discretization intervals.
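The first technique builds on Linde-Buzo-Gray quantization; a minimal 1D version of that quantization step (plain Lloyd/LBG iterations of nearest-codeword assignment and centroid update, without the paper's relevance criterion) looks like this:

```python
def lbg_discretize(xs, n_bins, iters=50):
    """Discretize a 1D feature by LBG/Lloyd quantization: each value is
    replaced by the index of its nearest codeword."""
    lo, hi = min(xs), max(xs)
    # Initialise codewords uniformly over the data range.
    codes = [lo + (hi - lo) * (i + 0.5) / n_bins for i in range(n_bins)]
    for _ in range(iters):
        # Assign each value to its nearest codeword.
        groups = [[] for _ in codes]
        for x in xs:
            k = min(range(len(codes)), key=lambda c: abs(x - codes[c]))
            groups[k].append(x)
        # Move each codeword to the centroid of its group.
        codes = [sum(g) / len(g) if g else codes[k]
                 for k, g in enumerate(groups)]
    return [min(range(len(codes)), key=lambda c: abs(x - codes[c])) for x in xs]

xs = [0.1, 0.15, 0.2, 5.0, 5.1, 9.8, 10.0]
print(lbg_discretize(xs, 3))  # → [0, 0, 0, 1, 1, 2, 2]
```

The paper's method additionally steers this quantization with a relevance criterion, which is what enables the supervised and semi-supervised modes.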

Relevance: 10.00%

Abstract:

In machine learning and pattern recognition tasks, the use of feature discretization techniques may have several advantages. The discretized features may hold enough information for the learning task at hand, while ignoring minor fluctuations that are irrelevant or harmful for that task. The discretized features have more compact representations that may yield both better accuracy and lower training time, as compared to the use of the original features. However, in many cases, especially with medium- and high-dimensional data, the large number of features usually implies some redundancy among them. Thus, we may further apply feature selection (FS) techniques to the discrete data, keeping the most relevant features while discarding the irrelevant and redundant ones. In this paper, we propose relevance and redundancy criteria for supervised feature selection techniques on discrete data. These criteria are applied to the bin-class histograms of the discrete features. The experimental results, on public benchmark data, show that the proposed criteria can achieve better accuracy than widely used relevance and redundancy criteria, such as mutual information and the Fisher ratio.
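For context, the mutual-information baseline mentioned above can be computed directly from the bin-class histogram of a discrete feature; the sketch below is that standard baseline, not the paper's proposed criteria:

```python
from math import log2

def mutual_information(feature, labels):
    """MI (in bits) between a discrete feature and the class label,
    estimated from the joint bin-class counts."""
    n = len(feature)
    joint, fmarg, cmarg = {}, {}, {}
    for f, c in zip(feature, labels):
        joint[(f, c)] = joint.get((f, c), 0) + 1
        fmarg[f] = fmarg.get(f, 0) + 1
        cmarg[c] = cmarg.get(c, 0) + 1
    mi = 0.0
    for (f, c), cnt in joint.items():
        p = cnt / n
        mi += p * log2(p / ((fmarg[f] / n) * (cmarg[c] / n)))
    return mi

# A binary feature that perfectly separates two equiprobable classes
# carries exactly one bit of information about the label.
print(mutual_information([0, 0, 1, 1], ["a", "a", "b", "b"]))  # → 1.0
```

Relevance ranks features by such a score against the label; redundancy applies an analogous score between pairs of features, so that near-duplicates of an already-selected feature are discarded.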

Relevance: 10.00%

Abstract:

This paper proposes an FPGA-based architecture for onboard hyperspectral unmixing. The method, based on Vertex Component Analysis (VCA), has several advantages: it is unsupervised, fully automatic, and works without a dimensionality reduction (DR) pre-processing step. The architecture was designed for a low-cost Xilinx Zynq board with a Zynq-7020 SoC FPGA, whose programmable logic is based on the Artix-7 FPGA, and was tested using real hyperspectral datasets. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform for implementing high-performance, low-cost embedded systems.

Relevance: 10.00%

Abstract:

The Evidence Accumulation Clustering (EAC) paradigm is a clustering ensemble method that derives a consensus partition from a collection of base clusterings obtained using different algorithms. It collects from the partitions in the ensemble a set of pairwise observations about the co-occurrence of objects in the same cluster and uses these co-occurrence statistics to derive a similarity matrix, referred to as the co-association matrix. The Probabilistic Evidence Accumulation for Clustering Ensembles (PEACE) algorithm is a principled approach for extracting a consensus clustering from the observations encoded in the co-association matrix, based on a probabilistic model for the co-association matrix parameterized by the unknown assignments of objects to clusters. In this paper we extend the PEACE algorithm by deriving a consensus solution according to a MAP approach with Dirichlet priors defined over the unknown probabilistic cluster assignments. In particular, we study the positive regularization effect of the Dirichlet priors on the final consensus solution with both synthetic and real benchmark data.

Relevance: 10.00%

Abstract:

The localization of magma melting areas at the bottom of the lithosphere in extensional volcanic domains is poorly understood. Large polygenetic volcanoes of long duration and their associated magma chambers suggest that melting at depth may be focused at specific points within the mantle. To validate the hypothesis that the magma feeding a mafic crust comes from permanent localized crustal reservoirs, it is necessary to map the fossilized magma flow within the crustal planar intrusions. Using anisotropy of magnetic susceptibility (AMS), we obtained magmatic flow vectors from 34 alkaline basaltic dykes from the São Jorge, São Miguel and Santa Maria islands in the Azores Archipelago, a hot-spot-related triple junction. The dykes contain titanomagnetite showing a wide spectrum of solid solution, ranging from Ti-rich to Ti-poor compositions with vestiges of maghemitization. Most of the dykes exhibit a normal magnetic fabric. The orientation of the magnetic lineation k1 axis is more variable than that of the k3 axis, which is generally well grouped. The dykes of São Jorge and São Miguel show a predominance of subhorizontal magmatic flows. In Santa Maria the deduced flow pattern is less systematic, changing from subhorizontal in the southern part of the island to oblique in the north. These results suggest that the ascent of magma beneath the Azores islands occurs predominantly above localized melting sources, with the magma then collected within shallow magma chambers. According to this concept, dykes in the upper levels of the crust propagate laterally away from these magma chambers, thus feeding the lava flows observed at the surface.