949 results for Multi microprocessor applications
Abstract:
This paper compares the effectiveness of the Tsallis entropy over the classic Boltzmann-Gibbs-Shannon entropy for general pattern recognition, and proposes a multi-q approach to improve pattern analysis using entropy. A series of experiments were carried out for the problem of classifying image patterns. Given a dataset of 40 pattern classes, the goal of our image case study is to assess how well the different entropies can be used to determine the class of a newly given image sample. Our experiments show that the Tsallis entropy using the proposed multi-q approach has great advantages over the Boltzmann-Gibbs-Shannon entropy for pattern classification, boosting image recognition rates by a factor of 3. We discuss the reasons behind this success, shedding light on the usefulness of the Tsallis entropy and the multi-q approach. (C) 2012 Elsevier B.V. All rights reserved.
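A minimal sketch (not taken from the paper) of how a multi-q Tsallis feature vector could be built from a normalized histogram. The Tsallis entropy S_q = (1 - sum p_i^q)/(q - 1) reduces to the Boltzmann-Gibbs-Shannon entropy as q -> 1; the particular q values and the toy histogram below are illustrative assumptions:

```python
import math

def tsallis_entropy(p, q):
    """Tsallis entropy S_q of a probability distribution p.
    For q == 1 it reduces to the Boltzmann-Gibbs-Shannon entropy
    (natural-log base)."""
    if q == 1.0:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p if pi > 0)) / (q - 1.0)

def multi_q_features(p, qs):
    """Feature vector: one Tsallis entropy value per q (the multi-q idea)."""
    return [tsallis_entropy(p, q) for q in qs]

# Illustrative histogram from a hypothetical image patch, normalized
hist = [12, 3, 7, 20, 8]
total = sum(hist)
p = [h / total for h in hist]
features = multi_q_features(p, qs=[0.5, 1.0, 1.5, 2.0])
```

In a classifier, such a vector would replace the single entropy value used with the classic BGS entropy, which is the gist of the multi-q approach.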
Abstract:
This paper presents a structural damage detection methodology based on genetic algorithms and dynamic parameters. Three chromosomes are used to codify an individual in the population. The first and second chromosomes locate and quantify damage, respectively. The third permits the self-adaptation of the genetic parameters. The natural frequencies and mode shapes are used to formulate the objective function. A numerical analysis was performed for several truss structures under different damage scenarios. The results have shown that the methodology can reliably identify damage scenarios using noisy measurements and that it results in only a few misidentified elements. (C) 2012 Civil-Comp Ltd and Elsevier Ltd. All rights reserved.
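The abstract does not give the objective function explicitly; a plausible sketch, assuming a common formulation that combines relative natural-frequency residuals with (1 - MAC) terms, where the modal assurance criterion (MAC) measures the agreement between measured and model mode shapes:

```python
def objective(freqs_meas, freqs_model, modes_meas, modes_model):
    """Toy damage-identification objective: relative frequency residuals
    plus (1 - MAC) terms. A GA would minimize this over candidate
    damage locations/extents encoded in the chromosomes."""
    def mac(a, b):
        # Modal assurance criterion: 1 means identical shapes (up to scale)
        num = sum(x * y for x, y in zip(a, b)) ** 2
        den = sum(x * x for x in a) * sum(y * y for y in b)
        return num / den
    f_term = sum(abs(fm - fc) / fm for fm, fc in zip(freqs_meas, freqs_model))
    m_term = sum(1.0 - mac(a, b) for a, b in zip(modes_meas, modes_model))
    return f_term + m_term
```

A perfect damage hypothesis reproduces the measured frequencies and mode shapes, driving the objective to zero.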
Abstract:
Breakthrough advances in microprocessor technology and efficient power management have altered the course of processor development with the emergence of multi-core processor technology, delivering a higher level of processing. The utilization of many-core technology has boosted the computing power provided by clusters of workstations or SMPs, offering large computational power at an affordable cost using solely commodity components. Different implementations of message-passing libraries and system software (including operating systems) are installed in such cluster and multi-cluster computing systems. To guarantee correct execution of a message-passing parallel application in a computing environment other than the one in which it was originally developed, a review of the application code is needed. In this paper, a hybrid communication interfacing strategy is proposed to execute a parallel application on a group of computing nodes belonging to different clusters or multi-clusters (computing systems that may be running different operating systems and MPI implementations), interconnected with public or private IP addresses, and responding interchangeably to user execution requests. Experimental results demonstrate the feasibility and effectiveness of the proposed strategy through the execution of benchmark parallel applications.
Abstract:
Current SoC design trends are characterized by the integration of a larger number of IPs targeting a wide range of application fields. Such multi-application systems are constrained by a set of requirements. In this scenario, networks-on-chip (NoCs) are becoming more important as the on-chip communication structure. Designing an optimal NoC that satisfies the requirements of each individual application requires the specification of a large set of configuration parameters, leading to a wide solution space. It has been shown that IP mapping is one of the most critical parameters in NoC design, strongly influencing SoC performance. IP mapping has been solved for single-application systems using single- and multi-objective optimization algorithms. In this paper we propose the use of a multi-objective adaptive immune algorithm (M(2)AIA), an evolutionary approach, to solve the multi-application NoC mapping problem. Latency and power consumption were adopted as the target objective functions. To assess the efficiency of our approach, our results are compared with those of genetic and branch-and-bound multi-objective mapping algorithms. We tested 11 well-known benchmarks, including random and real applications, combining up to 8 applications on the same SoC. The experimental results show that M(2)AIA reduces power consumption and latency by, on average, 27.3% and 42.1% compared to the branch-and-bound approach and by 29.3% and 36.1% compared to the genetic approach.
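The multi-objective comparison above rests on Pareto dominance over (latency, power) pairs; a small illustrative sketch with made-up objective values, showing how a non-dominated front of candidate mappings could be extracted:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated (latency, power) points."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

# Hypothetical (latency, power) evaluations of four candidate IP mappings
candidates = [(10.0, 5.0), (8.0, 7.0), (12.0, 6.0), (8.0, 5.0)]
front = pareto_front(candidates)
```

Evolutionary multi-objective algorithms such as the immune and genetic approaches compared in the paper iterate selection and variation while maintaining exactly this kind of non-dominated set.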
Abstract:
Nanocomposite fibers based on multi-walled carbon nanotubes (MWCNT) and poly(lactic acid) (PLA) were prepared by solution blow spinning (SBS). Fiber morphology was characterized by scanning electron microscopy (SEM) and optical microscopy (OM). Electrical, thermal, surface and crystalline properties of the spun fibers were evaluated, respectively, by conductivity measurements (4-point probe), thermogravimetric analysis (TGA), differential scanning calorimetry (DSC), contact angle measurements and X-ray diffraction (XRD). OM analysis of the spun mats showed a poor dispersion of MWCNT in the matrix; however, dispersion in solution increased during spinning, where droplets of PLA solution loaded with MWCNT were pulled by the pressure drop at the nozzle, producing PLA fibers filled with MWCNT. Good electrical conductivity and hydrophobicity can be achieved at low carbon nanotube contents. When only 1 wt% MWCNT was added to low-crystalline PLA, surface conductivity of the composites increased from 5 x 10(-8) to 0.46 S/cm. Addition of MWCNT can slightly influence the degree of crystallinity of PLA fibers, as shown by XRD and DSC. Thermogravimetric analysis showed that MWCNT loading can decrease the onset degradation temperature of the composites, which was attributed to the catalytic effect of metallic residues in the MWCNT. Moreover, it was demonstrated that hydrophilicity slightly increased with increasing MWCNT content. These results show that solution blow spinning can also be used to produce nanocomposite fibers with many potential applications, such as in sensors and biosensors.
Abstract:
In this work, a new enrichment space to accommodate jumps in the pressure field at immersed interfaces in finite element formulations is proposed. The new enrichment adds two degrees of freedom per element that can be eliminated by means of static condensation. The new space is tested and compared with the classical P1 space and with the space proposed by Ausas et al. (Comput. Methods Appl. Mech. Eng., Vol. 199, pp. 1019-1031, 2010) in several problems involving jumps in the viscosity and/or the presence of singular forces at interfaces not conforming with the element edges. The combination of this enrichment space with another enrichment that accommodates discontinuities in the pressure gradient has also been explored, exhibiting excellent results in problems involving jumps in the density or the volume forces. Copyright (c) 2011 John Wiley & Sons, Ltd.
Abstract:
In multi-label classification, examples can be associated with multiple labels simultaneously. The task of learning from multi-label data can be addressed by methods that transform the multi-label classification problem into several single-label classification problems. The binary relevance approach is one of these methods, where the multi-label learning task is decomposed into several independent binary classification problems, one for each label in the set of labels, and the final labels for each example are determined by aggregating the predictions from all binary classifiers. However, this approach fails to consider any dependency among the labels. Aiming to accurately predict label combinations, in this paper we propose a simple approach that enables the binary classifiers to discover existing label dependency by themselves. An experimental study using decision trees, a kernel method as well as Naive Bayes as base-learning techniques shows the potential of the proposed approach to improve the multi-label classification performance.
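A compact sketch of the binary relevance decomposition described above: one independent binary classifier per label, with predictions aggregated into the final label set. A toy 1-nearest-neighbour learner stands in for the decision trees, kernel method, or Naive Bayes used in the paper; the data and class names are illustrative:

```python
def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class OneNN:
    """Toy base learner: 1-nearest-neighbour binary classifier."""
    def fit(self, X, y):
        self.X, self.y = X, y
        return self
    def predict(self, x):
        i = min(range(len(self.X)), key=lambda j: euclid(self.X[j], x))
        return self.y[i]

class BinaryRelevance:
    """Decompose a multi-label task into one binary problem per label;
    aggregate the per-label predictions into the final label set."""
    def fit(self, X, Y, n_labels):
        self.models = []
        for l in range(n_labels):
            y = [1 if l in labels else 0 for labels in Y]  # label l vs rest
            self.models.append(OneNN().fit(X, y))
        return self
    def predict(self, x):
        return [l for l, m in enumerate(self.models) if m.predict(x) == 1]

# Toy dataset: each example carries a *set* of labels
X = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
Y = [{0}, {1}, {0, 1}]
br = BinaryRelevance().fit(X, Y, n_labels=2)
```

The weakness the paper addresses is visible in the structure: each `OneNN` sees only its own label column, so no inter-label dependency is modelled unless, as proposed, the classifiers are given a way to discover it.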
Abstract:
This work proposes a novel texture descriptor based on fractal theory. The method builds on the Bouligand-Minkowski descriptors. We decompose the original image recursively into four equal parts. In each recursion step, we estimate the average and the deviation of the Bouligand-Minkowski descriptors computed over each part. We then extract entropy features from both the averages and the deviations. The proposed descriptors are obtained by concatenating these measures. The method is tested in a classification experiment on well-known datasets, namely Brodatz and Vistex. The results demonstrate that the novel technique achieves better results than classical and state-of-the-art texture descriptors, such as Local Binary Patterns, Gabor wavelets and co-occurrence matrices.
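The recursive decomposition can be sketched as follows. Note the hedges: `descriptor` below is a simple placeholder (mean intensity) standing in for the actual Bouligand-Minkowski computation, and the final entropy-extraction step is omitted, so this only illustrates the quadrant recursion and the per-level average/deviation statistics:

```python
def quadrants(img):
    """Split a 2-D grid (list of lists) into four equal parts."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[row[:w] for row in img[:h]], [row[w:] for row in img[:h]],
            [row[:w] for row in img[h:]], [row[w:] for row in img[h:]]]

def descriptor(img):
    """Placeholder for the Bouligand-Minkowski descriptor of one block
    (here simply the mean intensity, NOT the real fractal computation)."""
    vals = [v for row in img for v in row]
    return sum(vals) / len(vals)

def mean_std(xs):
    m = sum(xs) / len(xs)
    return m, (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def features(img, depth):
    """Recursively decompose into quadrants; at each level record the
    average and deviation of the per-part descriptors, concatenated."""
    feats = []
    level = [img]
    for _ in range(depth):
        level = [q for blk in level for q in quadrants(blk)]
        m, s = mean_std([descriptor(blk) for blk in level])
        feats.extend([m, s])
    return feats
```

Each extra recursion level quadruples the number of blocks and appends two more values, so the feature length grows linearly with depth while the analysis scale shrinks geometrically.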
Abstract:
This presentation will give examples of how multi-parameter platforms have been used in a variety of applications, ranging from shallow coastal on-line observatories down to measurements in the deepest ocean trenches. The focus will be on projects in which optode technology (primarily for CO2 and O2) has served to study different aspects of the carbon system, including primary production/consumption, air-sea exchange, leakage detection from underwater storage of CO2, and measurements from moving platforms like gliders and ferries. The performance of recently developed pH optodes will als
Abstract:
An accurate estimation of the number of people entering/leaving a controlled area is an interesting capability for automatic surveillance systems. Potential applications where this technology can be applied include those related to security, safety, energy saving or fraud control. In this paper we present a novel configuration of a multi-sensor system combining both visual and range data, specially suited for troublesome scenarios such as public transportation. The approach applies probabilistic estimation filters on raw sensor data to create intermediate-level hypotheses that are later fused using a certainty-based integration stage. Promising results have been obtained in several tests performed on a realistic test-bed scenario under variable lighting conditions.
Abstract:
As distributed collaborative applications and architectures adopt policy-based management for tasks such as access control, network security and data privacy, the management and consolidation of a large number of policies is becoming a crucial component of such policy-based systems. In large-scale distributed collaborative applications like web services, there is a need to analyze policy interactions and to integrate policies. In this thesis, we propose and implement EXAM-S, a comprehensive environment for policy analysis and management, which can be used to perform a variety of functions such as policy property analysis, policy similarity analysis, and policy integration. As part of this environment, we have proposed and implemented new techniques for the analysis of policies that build on a deep study of state-of-the-art techniques. Moreover, we propose an approach for solving the heterogeneity problems that usually arise when analyzing policies belonging to different domains. Our work focuses on the analysis of access control policies written in the XACML (eXtensible Access Control Markup Language) dialect. We consider XACML policies because XACML is a rich language that can represent many policies of interest to real-world applications and is gaining widespread adoption in industry.
Abstract:
During the last years we have witnessed an exponential growth of scientific discoveries in catalysis by gold, and many applications have been found for Au-based catalysts. In the literature there are several studies concerning the use of gold-based catalysts for environmental applications, and good results are reported for the catalytic combustion of different volatile organic compounds (VOCs). Recently it has also been established that gold-based catalysts are potentially capable of being effectively employed in fuel cells in order to remove CO traces by preferential CO oxidation in H2-rich streams. Bi-metallic catalysts have attracted increasing attention because of their markedly different properties from either of the constituent metals and, above all, their enhanced catalytic activity, selectivity and stability. In the literature there are several studies demonstrating the beneficial effect of adding an iron component to supported gold catalysts in terms of enhanced activity, selectivity, resistance to deactivation and prolonged catalyst lifetime. In this work we tried to develop a methodology for the preparation of iron-stabilized gold nanoparticles with controlled size and composition, particularly in terms of obtaining an intimate contact between the different phases, since it is well known that the catalytic behaviour of multi-component supported catalysts is strongly influenced by the size of the metal particles and by their reciprocal interaction. Ligand-stabilized metal clusters, with nanometric dimensions, are possible precursors for the preparation of catalytically active nanoparticles with controlled dimensions and compositions. Among these, metal carbonyl clusters are quite attractive, since they can be prepared with several different sizes and compositions and, moreover, they decompose under very mild conditions.
A novel preparation method was developed during this thesis for the preparation of iron and gold/iron supported catalysts, using bi-metallic carbonyl clusters as precursors of highly dispersed nanoparticles over TiO2 and CeO2, which are widely considered two of the most suitable supports for gold nanoparticles. Au/FeOx catalysts were prepared by employing the bi-metallic carbonyl cluster salts [NEt4]4[Au4Fe4(CO)16] (Fe/Au=1) and [NEt4][AuFe4(CO)16] (Fe/Au=4), and for comparison FeOx samples were prepared by employing the homometallic [NEt4][HFe3(CO)11] cluster. These clusters were prepared by Prof. Longoni's research group (Department of Physical and Inorganic Chemistry, University of Bologna). Particular attention was dedicated to the optimization of a suitable thermal treatment in order to achieve, apart from a good Au and Fe metal dispersion, the formation of appropriate species with good catalytic properties. An in-depth IR study was carried out in order to understand the physical interaction between the clusters and the different supports and to detect the occurrence of chemical reactions between them at any stage of the preparation. Characterization by BET, XRD, TEM, H2-TPR, ICP-AES and XPS was performed in order to investigate the catalyst properties, with particular attention to the interaction between Au and Fe and its influence on the catalytic activity. This novel preparation method resulted in small metallic gold nanoparticles surrounded by highly dispersed iron oxide species, essentially in an amorphous phase, on both TiO2 and CeO2. The results presented in this thesis confirmed that FeOx species can stabilize small Au particles, since, keeping the gold content constant but introducing a higher iron amount, a higher metal dispersion was achieved.
Partial encapsulation of gold atoms by iron species was observed, since the Au/Fe surface ratio was found to be much lower than the bulk ratio, and a strong interaction between gold and oxide species, both iron oxide and the supports, was achieved. The prepared catalysts were tested in the total oxidation of VOCs, using toluene and methanol as probe molecules for aromatics and alcohols, respectively, and in the PROX reaction. Different performances were observed for the titania and ceria catalysts in both toluene and methanol combustion. Toluene combustion on the titania catalyst was found to be enhanced by increasing the iron loading, while only a moderate effect on FeOx-Ti activity was achieved by Au addition. In this case toluene combustion was improved due to a higher oxygen mobility, stemming from enhanced oxygen activation by FeOx and Au/FeOx dispersed on titania. On the contrary, ceria activity was strongly decreased in the presence of FeOx, while the introduction of gold was found to moderate the detrimental effect of the iron species. In fact, the excellent performance of ceria is due to its ability to adsorb toluene and O2. Since toluene activation is the determining factor for its oxidation, the partial coverage of the ceria sites responsible for toluene adsorption by FeOx species finely dispersed on the surface resulted in worse efficiency in toluene combustion. Better results were obtained for both ceria and titania catalysts in methanol total oxidation. In this case, the performances achieved on the differently supported catalysts indicate that oxygen mobility is the determining factor in this reaction. The introduction of gold on both TiO2 and CeO2 catalysts led to a higher oxygen mobility, due to the weakening of both Fe-O and Ce-O bonds, and consequently to enhanced methanol combustion. The catalytic activity was found to depend strongly on oxygen mobility and followed the same trend observed for catalyst reducibility.
Regarding the CO PROX reaction, it was observed that the Au/FeOx titania catalysts are less active than the ceria ones, due to the lower reducibility of titania compared to ceria. In fact, the availability of lattice oxygen involved in the PROX reaction is much higher in the latter catalysts. However, the CO PROX performances observed for the ceria catalysts are not particularly high compared to data reported in the literature, probably due to the very low Au/Fe surface ratio achieved with this preparation method. CO preferential oxidation was found to depend strongly on Au particle size, but also on surface oxygen reducibility, which in turn depends on the different oxide species that can be formed using different thermal treatment conditions or by varying the iron loading over the support.
Abstract:
The objective of this thesis work is the refined estimation of source parameters. To this purpose we used two different approaches, one in the frequency domain and the other in the time domain. In the frequency domain, we analyzed the P- and S-wave displacement spectra to estimate the spectral parameters, i.e. the corner frequencies and the low-frequency spectral amplitudes. We used a parametric modeling approach combined with a multi-step, non-linear inversion strategy that includes the correction for attenuation and site effects. The iterative multi-step procedure was applied to about 700 microearthquakes in the moment range 10^11-10^14 N·m, recorded at the dense, wide-dynamic-range seismic networks operating in the Southern Apennines (Italy). The analysis of the source parameters is often complicated when we are not able to model the propagation accurately. In this case the empirical Green function approach is a very useful tool to study the seismic source properties. In fact, Empirical Green Functions (EGFs) allow representing the contribution of propagation and site effects to the signal without using approximate velocity models. An EGF is a recorded three-component set of time histories of a small earthquake whose source mechanism and propagation path are similar to those of the master event. Thus, in the time domain, the deconvolution method of Vallée (2004) was applied to calculate the relative source time functions (RSTFs) and to accurately estimate source size and rupture velocity. This technique was applied to (1) a large event, the Mw 6.3 2009 L'Aquila mainshock (Central Italy); (2) moderate events, a cluster of earthquakes of the 2009 L'Aquila sequence with moment magnitudes ranging between 3 and 5.6; and (3) a small event, the Mw 2.9 Laviano mainshock (Southern Italy).
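The spectral parameters mentioned above (corner frequency and low-frequency level) are commonly tied together through the standard omega-square source model; a sketch under that assumption, which is not necessarily the exact parametrization used in the thesis:

```python
import math

def displacement_spectrum(f, omega0, fc, t_star=0.0):
    """Omega-square displacement source spectrum, with an optional
    whole-path attenuation factor exp(-pi * f * t*):
    Omega(f) = Omega0 / (1 + (f/fc)^2) * exp(-pi f t*)."""
    return omega0 / (1.0 + (f / fc) ** 2) * math.exp(-math.pi * f * t_star)

def seismic_moment(omega0, rho, v, r, radiation=0.55):
    """Far-field point-source relation from low-frequency level to
    seismic moment: M0 = 4 pi rho v^3 r Omega0 / R, where R is the
    average radiation-pattern coefficient (0.55 is a common S-wave value)."""
    return 4.0 * math.pi * rho * v ** 3 * r * omega0 / radiation
```

Fitting `omega0` and `fc` (and an attenuation term) to observed spectra is the kind of parametric, multi-step non-linear inversion the abstract describes; at f = fc the model amplitude is exactly half the low-frequency plateau.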
Abstract:
The evolution of embedded electronics applications forces electronics system designers to match ever increasing requirements. This evolution pushes the computational power of digital signal processing systems, as well as the energy required to accomplish the computations, due to the increasing mobility of such applications. Current approaches to matching these requirements rely on the adoption of application-specific signal processors. Such devices exploit powerful accelerators, which are able to meet both performance and energy requirements. On the other hand, the high specificity of such accelerators often results in a lack of flexibility, which affects non-recurring engineering costs, time to market, and market volumes. The state of the art mainly proposes two solutions to overcome these issues with the ambition of delivering reasonable performance and energy efficiency: reconfigurable computing and multi-processor computing. Both solutions benefit from post-fabrication programmability, which results in increased flexibility. Nevertheless, the gap between these approaches and dedicated hardware is still too wide for many application domains, especially when targeting the mobile world. In this scenario, flexible and energy-efficient acceleration can be achieved by merging these two computational paradigms in order to address all the constraints introduced above. This thesis focuses on the exploration of the design and application spectrum of reconfigurable computing, exploited as application-specific accelerators for multi-processor systems-on-chip. More specifically, it introduces a reconfigurable digital signal processor featuring a heterogeneous set of reconfigurable engines, and a homogeneous multi-core system exploiting three different flavours of reconfigurable and mask-programmable technologies as implementation platforms for application-specific accelerators.
In this work, the various trade-offs concerning the utilization of multi-core platforms and the different configuration technologies are explored, characterizing the design space of the proposed approach in terms of programmability, performance, energy efficiency and manufacturing costs.
Abstract:
This work presents exact algorithms for Resource Allocation and Cyclic Scheduling Problems (RA&CSPs). Cyclic scheduling problems arise in a number of application areas, such as hoist scheduling, mass production, compiler design (implementing scheduling loops on parallel architectures), software pipelining, and embedded system design. The RA&CS problem concerns time and resource assignment to a set of activities, to be indefinitely repeated, subject to precedence and resource capacity constraints. In this work we present two constraint programming frameworks addressing two different types of cyclic problems. First, we consider the disjunctive RA&CSP, where the allocation problem involves unary resources. Instances are described through the Synchronous Data-flow (SDF) Model of Computation. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow graphs onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms. We propose an exact (complete) algorithm for the computation of a maximum-throughput mapping of applications specified as SDFGs onto multi-core architectures. Results show that the approach can handle realistic instances in terms of size and complexity. Next, we tackle the Cyclic Resource-Constrained Scheduling Problem (CRCSP). We propose a Constraint Programming approach based on modular arithmetic: in particular, we introduce a modular precedence constraint and a global cumulative constraint, along with their filtering algorithms. Many traditional approaches to cyclic scheduling operate by fixing the period value and then solving a linear problem in a generate-and-test fashion. Conversely, our technique is based on a non-linear model and tackles the problem as a whole: the period value is inferred from the scheduling decisions.
The proposed approaches have been tested on a number of non-trivial synthetic instances and on a set of realistic industrial instances, achieving good results on practical-size problems.
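A minimal illustration of the modular precedence idea used in cyclic scheduling (not the thesis's actual filtering algorithm): in a schedule repeated with period lambda, a precedence from activity i to activity j with iteration distance k requires s_j >= s_i + d_i - k * lambda, since the consuming occurrence of j runs k periods after the producing occurrence of i. The instance numbers below are made up:

```python
def modular_precedence_ok(s, d, edges, period):
    """Check modular precedence constraints for a cyclic schedule:
    for each edge (i, j, k), activity j must start no earlier than
    s[i] + d[i] - k * period, where k is the iteration distance."""
    return all(s[j] >= s[i] + d[i] - k * period for i, j, k in edges)

# Toy instance: j consumes the output that i produced one iteration
# earlier (k = 1); d gives durations, s the start times within a period.
s = {0: 0, 1: 2}
d = {0: 4, 1: 3}
edges = [(0, 1, 1)]
```

Note how the period enters the constraint non-linearly through the product k * period: shrinking the period tightens every modular precedence, which is why the approach described above can infer the period from the scheduling decisions instead of fixing it in a generate-and-test loop.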