868 results for Multi microprocessor applications
Abstract:
This paper presents a novel approach to the automatic classification of very large data sets composed of terahertz pulse transient signals, highlighting their potential use in biochemical, biomedical, pharmaceutical and security applications. Two different types of THz spectra are considered in the classification process. First, a binary classification study of poly-A and poly-C ribonucleic acid samples is performed. This is then contrasted with a difficult multi-class classification problem of spectra from six different powder samples that, although fairly indistinguishable in the optical spectrum, possess a few discernible spectral features in the terahertz part of the spectrum. Classification is performed using a complex-valued extreme learning machine algorithm that takes into account features in both the amplitude and the phase of the recorded spectra. Classification speed and accuracy are contrasted with those achieved using a support vector machine classifier. The study systematically compares the classifier performance achieved after adopting different Gaussian kernels when separating amplitude and phase signatures. The two signatures are presented as feature vectors for both training and testing purposes. The study confirms the utility of complex-valued extreme learning machine algorithms for the classification of the very large data sets generated with current terahertz imaging spectrometers. The classifier can take into consideration heterogeneous layers within an object, as would be required within a tomographic setting, and is sufficiently robust to detect patterns hidden inside noisy terahertz data sets. The proposed study opens up the opportunity to establish complex-valued extreme learning machine algorithms as new chemometric tools that will assist the wider proliferation of terahertz sensing technology for chemical sensing, quality control, security screening and clinical diagnosis. Furthermore, the proposed algorithm should also be very useful in other applications requiring the classification of very large datasets.
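A minimal sketch of the kind of classifier described above is given below: an extreme learning machine with random complex input weights, a fixed nonlinear hidden layer, and output weights obtained via a pseudo-inverse. The feature construction (complex vectors combining amplitude and phase), hidden-layer size and activation are illustrative assumptions, not the exact choices of the paper.

```python
# Illustrative complex-valued extreme learning machine (C-ELM) classifier.
# X: (n_samples, n_features) complex features built from amplitude and phase;
# T: (n_samples, n_classes) one-hot targets. Shapes and names are hypothetical.
import numpy as np

def train_celm(X, T, n_hidden=100, rng=np.random.default_rng(0)):
    n_features = X.shape[1]
    # Random complex input weights and biases (fixed, never trained)
    W = rng.standard_normal((n_features, n_hidden)) + 1j * rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden) + 1j * rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)              # hidden-layer activations (complex)
    beta = np.linalg.pinv(H) @ T        # output weights via Moore-Penrose pseudo-inverse
    return W, b, beta

def predict_celm(X, W, b, beta):
    H = np.tanh(X @ W + b)
    scores = np.real(H @ beta)          # real part of the outputs as class scores
    return np.argmax(scores, axis=1)
```

Training reduces to a single linear solve, which is why ELM-type classifiers are attractive for the very large data sets mentioned above.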
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include caches, memory, network controllers and, in some cases, floating point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on each core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Finding this out, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
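The following is a minimal sketch of the benchmark-driven idea described above: measured kernel times for a few problem sizes are interpolated to estimate the cost of an arbitrary decomposition. The benchmark numbers, the fixed halo-exchange term and the function names are placeholders, not results from the cited work.

```python
# Predict per-timestep cost from application-specific benchmarks (toy numbers).
import numpy as np

# Hypothetical measured compute time (seconds) versus local subdomain size (cells)
bench_cells = np.array([1e4, 1e5, 1e6, 1e7])
bench_time  = np.array([0.002, 0.018, 0.21, 2.4])

def predict_step_time(global_cells, n_tasks, halo_time_per_task=1e-3):
    """One timestep: interpolated loop cost plus a flat halo-exchange term."""
    local_cells = global_cells / n_tasks
    compute = np.interp(local_cells, bench_cells, bench_time)
    return compute + halo_time_per_task

# Compare two deployment scenarios for the same global problem size
print(predict_step_time(4e6, n_tasks=32), predict_step_time(4e6, n_tasks=64))
```

In the real model each term would additionally depend on the task-to-core mapping, since shared caches and memory controllers change the effective per-core bandwidth.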
Abstract:
Understanding the origin of the properties of metal-supported metal thin films is important for the rational design of bimetallic catalysts and other applications, but it is generally difficult to separate effects related to strain from those arising from interface interactions. Here we use density functional theory (DFT) to examine the structure and electronic behavior of few-layer palladium films on the rhenium (0001) surface, where there is negligible interfacial strain and therefore other effects can be isolated. Our DFT calculations predict stacking sequences and interlayer separations in excellent agreement with quantitative low-energy electron diffraction experiments. By theoretically simulating the Pd core-level X-ray photoemission spectra (XPS) of the films, we are able to interpret and assign the basic features of both low-resolution and high-resolution XPS measurements. The core levels at the interface shift to more negative energies, rigidly following the shifts in the same direction of the valence d-band center. We demonstrate that the valence band shift at the interface is caused by charge transfer from Re to Pd, which occurs mainly to valence states of hybridized s-p character rather than to the Pd d-band. Since the d-band filling is roughly constant, there is a correlation between the d-band center shift and its bandwidth. The resulting effect of this charge transfer on the valence d-band is thus analogous to the application of a lateral compressive strain on the adlayers. Our analysis suggests that charge transfer should be considered when describing the origin of core and valence band shifts in other metal/metal adlayer systems.
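As a worked illustration of the d-band analysis referred to above, the sketch below computes a d-band centre and filling from a projected density of states. The Gaussian DOS is a made-up stand-in, not DFT output from the paper.

```python
# Toy d-band centre and filling from a projected density of states n_d(E).
import numpy as np

E   = np.linspace(-10.0, 5.0, 2000)                  # energy grid (eV, E_F = 0)
n_d = np.exp(-0.5 * ((E + 2.0) / 1.5) ** 2)          # hypothetical d-projected DOS

d_centre  = np.trapz(E * n_d, E) / np.trapz(n_d, E)  # first moment of the d-band
occ       = E <= 0.0
d_filling = np.trapz(n_d[occ], E[occ]) / np.trapz(n_d, E)

print(f"d-band centre: {d_centre:.2f} eV, filling: {d_filling:.2f}")
```

With a roughly constant filling, a downward shift of the centre must be accompanied by a broader band, which is the correlation the abstract describes.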
Abstract:
Datasets containing information to locate and identify water bodies have been generated from data locating static-water-bodies with resolution of about 300 m (1/360 deg) recently released by the Land Cover Climate Change Initiative (LC CCI) of the European Space Agency. The LC CCI water-bodies dataset has been obtained from multi-temporal metrics based on time series of the backscattered intensity recorded by ASAR on Envisat between 2005 and 2010. The new derived datasets provide coherently: distance to land, distance to water, water-body identifiers and lake-centre locations. The water-body identifier dataset locates the water bodies assigning the identifiers of the Global Lakes and Wetlands Database (GLWD), and lake centres are defined for in-land waters for which GLWD IDs were determined. The new datasets therefore link recent lake/reservoir/wetlands extent to the GLWD, together with a set of coordinates which locates unambiguously the water bodies in the database. Information on distance-to-land for each water cell and the distance-to-water for each land cell has many potential applications in remote sensing, where the applicability of geophysical retrieval algorithms may be affected by the presence of water or land within a satellite field of view (image pixel). During the generation and validation of the datasets some limitations of the GLWD database and of the LC CCI water-bodies mask have been found. Some examples of the inaccuracies/limitations are presented and discussed. Temporal change in water-body extent is common. Future versions of the LC CCI dataset are planned to represent temporal variation, and this will permit these derived datasets to be updated.
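A minimal sketch of how distance-to-water and distance-to-land grids can be derived from a binary static-water mask is shown below. It uses a Euclidean distance transform on a toy mask; the real products are computed on the ~300 m LC CCI global grid with the GLWD identifiers attached afterwards.

```python
# Derive distance-to-water (for land cells) and distance-to-land (for water
# cells) from a binary water mask using a Euclidean distance transform.
import numpy as np
from scipy.ndimage import distance_transform_edt

water = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]], dtype=bool)   # toy mask: True = water cell

dist_to_water = distance_transform_edt(~water)  # for each land cell, distance to nearest water
dist_to_land  = distance_transform_edt(water)   # for each water cell, distance to nearest land
```

Distances here are in grid cells; converting to metres would require the cell size (about 300 m for the dataset described above).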
Abstract:
1,3-beta-Glucan depolymerizing enzymes have considerable biotechnological applications, including biofuel production, feedstock chemicals and pharmaceuticals. Here we describe a comprehensive functional characterization and low-resolution structure of a hyperthermophilic laminarinase from Thermotoga petrophila (TpLam). We determine TpLam's enzymatic mode of operation, which specifically cleaves internal beta-1,3-glucosidic bonds. The enzyme most frequently attacks the bond between the 3rd and 4th residues from the non-reducing end, producing glucose, laminaribiose and laminaritriose as major products. Far-UV circular dichroism demonstrates that TpLam is formed mainly by beta structural elements, and the secondary structure is maintained after incubation at 90 degrees C. The structure, resolved by small-angle X-ray scattering, reveals a multi-domain architecture with a V-shaped envelope, in which a catalytic domain is flanked by two carbohydrate-binding modules.
Abstract:
Large-scale simulations of parts of the brain using detailed neuronal models, aimed at improving our understanding of brain functions, are becoming a reality with the use of supercomputers and large clusters. However, the high acquisition and maintenance costs of these computers, including physical space, air conditioning and electrical power, limit the number of simulations of this kind that scientists can perform. Modern commodity graphics cards, based on the CUDA platform, contain graphics processing units (GPUs) composed of hundreds of processors that can simultaneously execute thousands of threads and thus constitute a low-cost solution for many high-performance computing applications. In this work, we present a CUDA algorithm that enables the execution, on multiple GPUs, of simulations of large-scale networks composed of biologically realistic Hodgkin-Huxley neurons. The algorithm represents each neuron as a CUDA thread, which solves the set of coupled differential equations that model the neuron. Communication among neurons located on different GPUs is coordinated by the CPU. We obtained speedups of 40 for the simulation of 200k neurons receiving random external input, and speedups of 9 for a network with 200k neurons and 20M neuronal connections, on a single computer with two graphics boards holding two GPUs each, compared with a modern quad-core CPU.
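The sketch below shows the per-neuron update that each thread would perform. In the paper one neuron maps to one CUDA thread; here each NumPy array element plays that role, using the textbook Hodgkin-Huxley equations with simple Euler integration. Parameters, time step and the random input are illustrative assumptions, not those of the cited work.

```python
# Vectorised stand-in for the per-thread Hodgkin-Huxley update.
import numpy as np

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3      # membrane capacitance and conductances
ENa, EK, EL = 50.0, -77.0, -54.4            # reversal potentials (mV)

def hh_step(V, m, h, n, I_ext, dt=0.01):
    """Advance all neurons by one Euler step of the Hodgkin-Huxley equations."""
    am = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    an = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(V + 65.0) / 80.0)
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    V = V + dt * (I_ext - I_ion) / C
    m = m + dt * (am * (1.0 - m) - bm * m)
    h = h + dt * (ah * (1.0 - h) - bh * h)
    n = n + dt * (an * (1.0 - n) - bn * n)
    return V, m, h, n

# 200k neurons with random external input, echoing the benchmark described above
N = 200_000
V = np.full(N, -65.0); m = np.full(N, 0.05); h = np.full(N, 0.6); n = np.full(N, 0.32)
for _ in range(100):
    V, m, h, n = hh_step(V, m, h, n, I_ext=np.random.uniform(0.0, 10.0, N))
```

On a GPU the same arithmetic runs once per thread, and only the synaptic exchange between neurons on different devices needs CPU coordination, as the abstract describes.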
Abstract:
In the age of multimedia, portable electronic devices such as mobile phones, personal digital assistants and handheld gaming systems have increased the demand for high-performance displays with low-cost production. Inkjet printing of color optical filters (COF) for LCD applications appears to be an interesting alternative for decreasing production costs. The advantages of inkjet printing technology are that it is fast, accurate, easy to run and cheaper than other technologies. In this master's thesis work, we used techniques from various disciplines such as optical microscopy, rheology, inkjet printing, profilometry and colorimetry. The specific aim of the thesis was to investigate the feasibility of using company-A pigment formulation in inkjet production of COF for active-matrix LCD applications. Ideal viscosity parameters were determined to be in the range of 10 to 20 mPa·s for easy inkjet printing at room temperature. The red pigments used are fully dispersed in the solvent and show an excellent homogeneous distribution after printing. Thickness investigations revealed that the printed COF were equal to or slightly thicker than typically manufactured ones. The colorimetry investigations demonstrated color coordinates very close to the NTSC red standard. LED backlighting appears to be a valuable solution to combine with the printed COF with regard to the spectrum and color analysis. The results of this thesis will increase the understanding of inkjet printing company-A pigments to produce COF for LCD applications.
Abstract:
Exploiting solar energy technology for both heating and cooling has the potential to meet an appreciable portion of the energy demand in buildings throughout the year. By developing an integrated, multi-purpose solar energy system that can operate all twelve months of the year, a high utilisation factor can be achieved, which translates into more economical systems. However, there are still some techno-economic barriers to the general commercialisation and market penetration of such technologies. These are associated with high system and installation costs, significant system complexity, and a lack of knowledge about system implementation and expected performance. A sorption heat pump module that can be integrated directly into a solar thermal collector has therefore been developed in order to tackle these market barriers. It has been designed for the development of cost-effective, pre-engineered solar energy system kits that can provide both heating and cooling. This thesis summarises characterisation studies of the operation of individual sorption modules, of sorption module integrated solar collectors, and of a full solar heating and cooling system employing sorption module integrated collectors. Key performance indicators for the individual sorption modules showed cooling delivery for 6 hours at an average power of 40 W and a temperature lift of 21°C. Upon integration of the sorption modules into a solar collector, measured solar-radiation-to-cooling-energy conversion efficiencies (solar cooling COP) were between 0.10 and 0.25, with average cooling powers between 90 and 200 W/m² of collector aperture area. Further investigation of the sorption module integrated collectors in a full solar heating and cooling system yielded electrical cooling COPs ranging from 1.7 to 12.6, with an average of 10.6 over the test period. Additionally, simulations were performed to determine the energy and cost saving potential of various system sizes over a full year of operation for a 140 m² single-family dwelling located in Madrid, Spain. The simulations yielded an annual solar fraction of 42% and potential cost savings of €386 per annum for a solar heating and cooling installation employing 20 m² of sorption integrated collectors.
Abstract:
Dynamic composition of services provides the ability to build complex distributed applications at run time by combining existing services, thus coping with a large variety of complex requirements that cannot be met by individual services alone. However, with the increasing number of available services that differ in granularity (the amount of functionality provided) and quality, selecting the best combination of services becomes very complex. In response, this paper addresses the challenges of service selection and makes a twofold contribution. First, a rich representation of compositional planning knowledge is provided, allowing the expression of multiple decompositions of tasks at arbitrary levels of granularity. Second, two distinct search-space reduction techniques are introduced; applying them prior to service selection yields a significant improvement in selection performance in terms of execution time, as demonstrated by experimental results.
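The following sketch illustrates the general pattern of reducing the search space before selection. The paper's two specific reduction techniques are not reproduced here; as a stand-in, candidates dominated on every quality (lower is better) are pruned per task before combinations are enumerated. Task names, services and QoS tuples are made up.

```python
# Prune dominated candidates per task, then pick the cheapest valid combination.
from itertools import product

def dominated(a, b):
    """True if b is at least as good as a on every quality and strictly better on one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def prune(candidates):
    return {task: [c for c in cands
                   if not any(dominated(c["qos"], o["qos"]) for o in cands)]
            for task, cands in candidates.items()}

def select(candidates):
    """Pick one service per task minimising total cost (first QoS component)."""
    tasks = list(candidates)
    best = min(product(*(candidates[t] for t in tasks)),
               key=lambda combo: sum(c["qos"][0] for c in combo))
    return dict(zip(tasks, best))

candidates = {  # hypothetical tasks with (cost, response time) qualities
    "payment":  [{"name": "payA", "qos": (3, 120)}, {"name": "payB", "qos": (5, 200)}],
    "shipping": [{"name": "shipA", "qos": (2, 300)}, {"name": "shipB", "qos": (4, 250)}],
}
print(select(prune(candidates)))
```

Pruning shrinks the combinatorial product before the expensive selection step, which is the source of the execution-time improvement claimed above.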
Abstract:
In this article we use factor models to describe a certain class of covariance structure for financial time series models. More specifically, we concentrate on situations where the factor variances are modeled by a multivariate stochastic volatility structure. We build on previous work by allowing the factor loadings, in the factor model structure, to have a time-varying structure and to capture changes in asset weights over time, motivated by applications with multiple time series of daily exchange rates. We explore and discuss potential extensions to the models exposed here in the prediction area. This discussion leads to open issues on real-time implementation and natural model comparisons.
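A hedged simulation sketch of the model class described above is given below: a few common factors with AR(1) log-variance (stochastic volatility) drive the observed series, and the factor loadings drift slowly over time. All parameter values are illustrative, not estimates from the article.

```python
# Simulate a toy factor model with stochastic volatility and time-varying loadings.
import numpy as np

rng = np.random.default_rng(1)
T, p, k = 500, 5, 2                                  # time points, series, factors

h = np.zeros((T, k))                                 # factor log-volatilities
B = rng.normal(size=(p, k))                          # initial factor loadings
y = np.zeros((T, p))                                 # observed returns
for t in range(1, T):
    h[t] = 0.95 * h[t - 1] + 0.2 * rng.normal(size=k)    # AR(1) stochastic volatility
    B = B + 0.01 * rng.normal(size=(p, k))               # slowly drifting loadings
    f = np.exp(h[t] / 2.0) * rng.normal(size=k)          # factor draws
    y[t] = B @ f + 0.1 * rng.normal(size=p)              # factors plus idiosyncratic noise
```

The implied conditional covariance of the series at time t is B diag(exp(h_t)) Bᵀ plus the idiosyncratic variances, which is the covariance structure the abstract refers to.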
Abstract:
This work addresses issues related to the analysis and development of multivariable predictive controllers based on bilinear multi-models. Monovariable and multivariable linear Generalized Predictive Control (GPC) is reviewed, highlighting its properties, key features and industrial applications. Bilinear GPC, the basis for the development of this thesis, is presented using the time-step quasilinearization approach. Some results obtained with this controller are presented to show its better performance compared with linear GPC, since bilinear models better represent the dynamics of certain processes. Because it is an approximation, time-step quasilinearization introduces a prediction error, which limits the performance of this controller as the prediction horizon increases. To minimize this error, bilinear GPC with iterative compensation is presented, seeking better performance than the classic bilinear GPC, and results of the iterative compensation algorithm are shown. The use of multi-models is discussed in order to correct the deficiency of controllers based on a single model when they are applied to cases with large operating ranges. Methods of measuring the distance between models, also called metrics, are the main contribution of this thesis. Several applications to simulated distillation columns, which closely reproduce the behaviour of real columns, are presented, and the results are satisfactory.
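The sketch below illustrates, under simplifying assumptions, where the prediction error mentioned above comes from: for a single-input bilinear model y(k+1) = a·y(k) + b·u(k) + c·y(k)·u(k), time-step quasilinearization freezes the bilinear term at the current output, and the mismatch with the true bilinear response grows with the horizon. The coefficients and horizon are toy values, not from the thesis.

```python
# Quasilinearised versus exact multi-step prediction for a toy bilinear model.
import numpy as np

a, b, c = 0.9, 0.5, 0.05            # toy bilinear model coefficients

def predict_quasilinear(y0, u_seq):
    """Multi-step prediction with the bilinear term frozen at y0."""
    b_eff = b + c * y0              # effective input gain at the current time step
    y, preds = y0, []
    for u in u_seq:
        y = a * y + b_eff * u
        preds.append(y)
    return np.array(preds)

def predict_exact(y0, u_seq):
    """True bilinear response, used to quantify the prediction error."""
    y, preds = y0, []
    for u in u_seq:
        y = a * y + b * u + c * y * u
        preds.append(y)
    return np.array(preds)

u = np.ones(10)                     # candidate future control sequence
error = predict_exact(1.0, u) - predict_quasilinear(1.0, u)   # grows along the horizon
print(error)
```

Iterative compensation, as described above, would re-predict using the previously predicted outputs to shrink this error before the control move is computed.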
Abstract:
We propose a new paradigm for collective learning in multi-agent systems (MAS) as a solution to the problem in which several agents acting over the same environment must learn how to perform tasks simultaneously, based on feedback given by each of the other agents. We introduce the proposed paradigm in the form of a reinforcement learning algorithm, naming it reinforcement learning with influence values. While learning from rewards, each agent evaluates the relation between the current state and/or the action executed at this state (its current belief) together with the reward obtained after all interacting agents perform their actions. The reward is a result of the interference of the others. The agent considers the opinions of all its colleagues when attempting to change the values of its states and/or actions. The idea is that the system, as a whole, must reach an equilibrium in which all agents are satisfied with the obtained results, meaning that the values of the state/action pairs match the reward obtained by each agent. This dynamic way of setting the values of states and/or actions makes this new reinforcement learning paradigm the first to naturally incorporate the fact that the presence of other agents makes the environment dynamic. As a direct result, we implicitly include the internal state, the actions and the rewards obtained by all the other agents in the internal state of each agent. This makes our proposal the first complete solution to the conceptual problem that arises when applying reinforcement learning to multi-agent systems, which is caused by the difference between the environment and agent models. Based on the proposed model, we create the IVQ-learning algorithm, which is exhaustively tested in repeated games with two, three and four agents, and in stochastic games that require cooperation or collaboration. This algorithm proves to be a good option for obtaining solutions that guarantee convergence to the optimal Nash equilibrium in cooperative problems. The experiments performed clearly show that the proposed paradigm is theoretically and experimentally superior to traditional approaches. Moreover, the creation of this new paradigm broadens the set of reinforcement learning applications in MAS: besides applying the algorithm to traditional learning problems in MAS, such as task coordination in multi-robot systems, it becomes possible to apply reinforcement learning to problems that are essentially collaborative.
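The following is a heavily hedged sketch of the flavour of the idea: a standard temporal-difference update augmented with a term aggregated from the other agents' evaluations ("opinions") of the joint outcome. The exact IVQ-learning update rule of the thesis is not reproduced here; names, the influence aggregation and all parameters are illustrative only.

```python
# Illustrative Q-learning agent whose update mixes its own TD error with the
# mean of influence values reported by the other agents.
import numpy as np

class InfluenceAgent:
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, beta=0.05):
        self.Q = np.zeros((n_states, n_actions))
        self.alpha, self.gamma, self.beta = alpha, gamma, beta

    def act(self, s, eps=0.1, rng=np.random):
        """Epsilon-greedy action selection."""
        if rng.rand() < eps:
            return rng.randint(self.Q.shape[1])
        return int(self.Q[s].argmax())

    def update(self, s, a, r, s_next, influences):
        """influences: evaluations of (s, a) sent by the other agents."""
        td = r + self.gamma * self.Q[s_next].max() - self.Q[s, a]
        self.Q[s, a] += self.alpha * td + self.beta * np.mean(influences)
```

The influence term is what couples each agent's value function to the opinions of its colleagues, pushing the group towards a jointly satisfactory equilibrium as described above.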
Abstract:
Since equipment maintenance is the major cost factor in industrial plants, the development of fault prediction techniques is very important. Three-phase induction motors are key electrical equipment in industrial applications, mainly because of their low cost and great robustness; however, they are not protected from fault types such as shorted windings and broken bars. Several acquisition, processing and signal analysis approaches are applied to improve their diagnosis. The most effective techniques use current sensors and current signature analysis. In this dissertation, starting from these sensors, signal analysis is performed through Park's vector, which provides good visualization capability. Fault data acquisition is an arduous task; therefore, a methodology for database construction is developed. Park's transform is applied in the stationary reference frame for machine modeling and for solving the machine's differential equations. Fault detection requires a detailed analysis of variables and their influences, which makes the diagnosis more complex. Pattern recognition allows systems to be generated automatically, based on patterns and concepts in the data that in most cases are undetectable by specialists, thus supporting decision tasks. Classification algorithms with diverse learning paradigms (k-Nearest Neighbors, Neural Networks, Decision Trees and Naïve Bayes) are used for pattern recognition of machine faults. Multi-classifier systems are used to reduce classification errors; homogeneous algorithms (Bagging and Boosting) and heterogeneous algorithms (Vote, Stacking and Stacking C) are investigated. Results show the effectiveness of the constructed model for fault modeling, as well as the possibility of using multi-classifier algorithms for fault classification.
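A minimal sketch of the Park's vector computation used in current-signature analysis is shown below: the three stator currents are mapped to the (id, iq) plane, whose pattern (a circle for a healthy machine, a distorted shape under fault) can then be fed to the classifiers listed above. The signals here are synthetic, and the supply frequency is an assumption.

```python
# Map three-phase stator currents to the Park's vector components (id, iq).
import numpy as np

def parks_vector(ia, ib, ic):
    i_d = np.sqrt(2.0 / 3.0) * ia - ib / np.sqrt(6.0) - ic / np.sqrt(6.0)
    i_q = (ib - ic) / np.sqrt(2.0)
    return i_d, i_q

t = np.linspace(0.0, 0.1, 5000)                       # 100 ms of signal
w = 2.0 * np.pi * 60.0                                # assumed 60 Hz supply
ia = np.cos(w * t)
ib = np.cos(w * t - 2.0 * np.pi / 3.0)
ic = np.cos(w * t + 2.0 * np.pi / 3.0)
i_d, i_q = parks_vector(ia, ib, ic)                   # healthy machine -> circular pattern
```

Features extracted from the (id, iq) trajectory would form the input vectors for the k-NN, neural network, decision tree and Naïve Bayes classifiers, and for the multi-classifier ensembles, described above.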