952 results for parallel systems
Abstract:
Current scientific applications produce large amounts of data, and the processing, handling, and analysis of such data require large-scale computing infrastructures such as clusters and grids. Studies in this area aim to improve the performance of data-intensive applications by optimizing data accesses. To achieve this goal, distributed storage systems have adopted techniques such as data replication, migration, distribution, and access parallelism. The main drawback of those studies, however, is that they do not take application behavior into account when optimizing data access. This limitation motivated this paper, which applies strategies to support the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information on past executions. To accomplish this, the approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties. Knowing these properties, the approach selects modeling techniques to represent the series and perform predictions, which are later used to optimize data access operations. The approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that the approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
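As a concrete illustration of the series-based prediction step, the sketch below fits a simple autoregressive model to a trace of per-window read volumes and produces a one-step-ahead forecast. This is a minimal stand-in assuming least-squares AR fitting; the paper's actual classification of series properties and selection among modeling techniques is richer, and the `reads` trace is hypothetical.

```python
import numpy as np

def fit_ar(series, order=3):
    """Fit an autoregressive model of the given order by least squares."""
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_next(series, coeffs):
    """One-step-ahead prediction from the last `order` observations."""
    order = len(coeffs)
    return float(np.dot(series[-order:], coeffs))

# Hypothetical trace of per-window read volumes (MB) for one application.
reads = np.array([120, 130, 128, 140, 150, 149, 160, 171, 170, 182], float)
coeffs = fit_ar(reads)
print(predict_next(reads, coeffs))  # e.g. prefetch roughly this much next
```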
Abstract:
This paper presents a new parallel methodology for calculating the determinant of matrices of order n, with computational complexity O(n), using the Gauss-Jordan elimination method and Chio's rule as references. We present our methodology step by step in clear mathematical language, demonstrating how to calculate the determinant of a matrix of order n analytically. We also present a computational model with one sequential algorithm and one parallel algorithm, both given in pseudo-code.
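For reference, a minimal sequential sketch of Chio's condensation follows. It is not the paper's parallel algorithm, but it shows the 2x2-minor step a parallel version would distribute: all (n-1)^2 minors of one condensation step are independent of each other.

```python
import numpy as np

def det_chio(A):
    """Determinant by Chio's condensation (sequential reference version)."""
    A = np.array(A, dtype=float)
    n = len(A)
    if n == 1:
        return A[0, 0]
    if A[0, 0] == 0:  # swap in a row with a nonzero leading entry
        for r in range(1, n):
            if A[r, 0] != 0:
                A[[0, r]] = A[[r, 0]]
                return -det_chio(A)
        return 0.0
    # 2x2 minors anchored at the pivot: D[i][j] = a00*a[i+1][j+1] - a[i+1][0]*a[0][j+1]
    D = A[0, 0] * A[1:, 1:] - np.outer(A[1:, 0], A[0, 1:])
    return det_chio(D) / A[0, 0] ** (n - 2)

print(det_chio([[2, 1, 3], [0, 4, 1], [5, 2, 0]]))  # -59, matches np.linalg.det
```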
Abstract:
Classical Pavlovian fear conditioning to painful stimuli has provided the generally accepted view of a core system, centered in the central amygdala, that organizes fear responses. Ethologically based models using other sources of threat likely to be encountered in a natural environment, such as predators or aggressive dominant conspecifics, have challenged this concept of a unitary core circuit for fear processing. We discuss here what the ethologically based models have told us about the neural systems organizing fear responses. We explore the concept that parallel paths process different classes of threats, and that these paths influence distinct regions in the periaqueductal gray, a critical element for the organization of all kinds of fear responses. Despite this parallel processing of different kinds of threats, we also discuss an interesting emerging view that common cortical-hippocampal-amygdalar paths seem to be engaged in fear conditioning to painful stimuli, to predators and, perhaps, to aggressive dominant conspecifics as well. Overall, the aim of this review is to bring into focus a more global and comprehensive view of the systems organizing fear responses.
Abstract:
Parallel kinematic structures are considered very suitable architectures for positioning and orienting the tools of robotic mechanisms. However, developing dynamic models for this kind of system is often a difficult task: the direct application of traditional robotics methods for modelling and analysing such systems usually does not lead to efficient and systematic algorithms. This work addresses that issue by presenting a modular approach to generating the dynamic model and showing how, through some convenient modifications, these methods can be made more applicable to parallel structures as well. Kane's formulation for obtaining the dynamic equations is shown to be one of the easiest ways to deal with redundant coordinates and kinematic constraints, so that a suitable choice of a set of coordinates allows the remainder of the modelling procedure to be computer aided. The advantages of this approach are discussed through the modelling of a 3-DOF parallel asymmetric mechanism.
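For context, Kane's equations in their particle form are sketched below; this is the textbook statement, not the paper's specific derivation for the 3-DOF mechanism.

```latex
% Kane's equations: for each generalized speed u_r, the generalized active
% force and the generalized inertia force sum to zero.
\[
  F_r + F_r^{*} = 0, \qquad r = 1, \dots, n,
\]
\[
  F_r = \sum_i \mathbf{v}_r^{P_i} \cdot \mathbf{R}_i, \qquad
  F_r^{*} = \sum_i \mathbf{v}_r^{P_i} \cdot \left(-m_i\,\mathbf{a}_{P_i}\right),
\]
% where v_r^{P_i} = \partial v_{P_i} / \partial u_r are the partial velocities,
% R_i is the resultant force on particle P_i, and a_{P_i} its acceleration.
% For parallel mechanisms, loop-closure constraints restrict the admissible u_r.
```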
Abstract:
Methods: We conducted a phase I, multicenter, randomized, double-blind, placebo-controlled, 10-arm parallel study involving healthy adults to evaluate the safety and immunogenicity of non-adjuvanted and adjuvanted influenza A (H1N1) 2009 candidate vaccines. Subjects received two intramuscular injections of one of the candidate vaccines administered 21 days apart. Antibody responses were measured by hemagglutination-inhibition assay before and 21 days after each vaccination. The three co-primary immunogenicity endpoints were a seroprotection rate >70%, a seroconversion rate >40%, and a >2.5-fold increase in the geometric mean titer. Results: A total of 266 participants were enrolled in the study. No deaths or serious adverse events were reported. The most commonly solicited local and systemic adverse events were injection-site pain and headache, respectively; only three subjects (1.1%) reported severe injection-site pain. Four 2009 influenza A (H1N1) inactivated monovalent candidate vaccines met all three requirements after a single dose: 15 μg of hemagglutinin antigen without adjuvant; 7.5 μg with aluminum hydroxide, MPL, and squalene; 3.75 μg with aluminum hydroxide and MPL; and 3.75 μg with aluminum hydroxide and squalene. Conclusions: Adjuvant systems can be safely used in influenza vaccines, including the adjuvant monophosphoryl lipid A (MPL) derived from Bordetella pertussis combined with squalene and aluminum hydroxide, MPL with aluminum hydroxide, and squalene with aluminum hydroxide.
Abstract:
Cutting and packing problems arise in a variety of industries, including garment making, woodworking, and shipbuilding. Irregular shape packing is a special case which admits irregular items and is much more complex due to the geometry of the items. To ensure that items do not overlap and that no item in the layout protrudes from the container, the collision-free region concept was adopted: it represents all feasible translations for a new item to be inserted into a container with already placed items. To construct a feasible layout, the collision-free region for each item is determined through a sequence of Boolean operations over polygons. To improve the speed of the algorithm, a parallel version of the layout construction was proposed and applied within a simulated annealing algorithm used to solve bin packing problems. Tests were performed to determine the speedup of the parallel version over the serial algorithm.
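The sketch below illustrates the parallel layout-construction idea under simplifying assumptions: the `approx_nfp` stand-in dilates each placed item by the new item's bounding-circle radius instead of computing an exact no-fit polygon, the container itself stands in for the inner-fit polygon, and the shapely library is used for the Boolean operations (the paper does not specify its geometry library).

```python
from multiprocessing import Pool
from shapely.geometry import Polygon
from shapely.ops import unary_union

def approx_nfp(placed_poly, radius):
    """Conservative stand-in for a true no-fit polygon: dilate the placed
    item by the new item's bounding-circle radius (exact NFPs come from
    Minkowski-sum style constructions over the actual geometry)."""
    return placed_poly.buffer(radius)

def collision_free_region(container_ifp, placed, radius, workers=4):
    # Each no-fit region is independent of the others, so they can be
    # computed in parallel; the layout step then subtracts their union
    # from the inner-fit region (translations keeping the item inside).
    with Pool(workers) as pool:
        nfps = pool.starmap(approx_nfp, [(p, radius) for p in placed])
    return container_ifp.difference(unary_union(nfps))

if __name__ == "__main__":
    container = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
    placed = [Polygon([(2, 2), (4, 2), (4, 4), (2, 4)])]
    free = collision_free_region(container, placed, radius=1.0)
    print(free.area)  # feasible placement area for the new item
```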
Abstract:
In this work the growth and the magnetic properties of the transition metals molybdenum, niobium, and iron, and of the highly magnetostrictive C15 Laves phases of the RFe2 compounds (R: rare-earth metals, here Tb, Dy, and Tb0.3Dy0.7), deposited on alpha-Al2O3 (sapphire) substrates are analyzed. In addition to (11-20) (a-plane) oriented sapphire substrates, mainly (10-10) (m-plane) oriented substrates were used. These show pronounced faceting after high-temperature annealing in air. Atomic force microscopy (AFM) measurements reveal a dependence of the height, width, and angle of the facets on the annealing temperature. The observed deviations of the facet angles from the theoretical values of the sapphire (10-1-2) and (10-11) surfaces are explained by cross-sectional high-resolution transmission electron microscopy (HR-TEM) measurements. These show the plain formation of the (10-11) surface, while the second, energy-reduced (10-1-2) facet has a curved shape produced by atomic steps of (10-1-2) layers and is formed completely only at the facet ridges and valleys. Thin films of Mo and Nb deposited by means of molecular beam epitaxy (MBE) exhibit non-twinned, (211)-oriented epitaxial growth on both non-faceted and faceted sapphire m-plane, as shown by X-ray and TEM evaluations. In the case of faceted sapphire, the two bcc crystals overgrow the facets homogeneously: the bcc (111) surface is nearly parallel to the sapphire (10-11) facet and the Mo/Nb (100) surface is nearly parallel to the sapphire (10-1-2) surface. (211)-oriented Nb templates on sapphire m-plane can be used for the non-twinned, (211)-oriented growth of RFe2 films by MBE. Again, the quality of the RFe2 films grown on faceted sapphire is almost equal to that of films on the non-faceted substrate. For comparison, thin RFe2 films of the established (110) and (111) orientations were prepared. Magnetic and magnetoelastic measurements performed in a self-designed setup reveal the high quality of the samples; no difference between samples with undulated and flat morphology is observed. In addition to the preparation of continuous, undulating thin films on faceted sapphire m-plane, nanoscopic structures of Nb and Fe were prepared by shallow-incidence MBE. The formation of the nanostructures can be explained by shadowing of the atomic beam by the facets combined with de-wetting of the metals on the heated sapphire surface. Accordingly, the nanostructures form at the facet ridges and overgrow them. The morphology of the structures can be varied via the deposition conditions, as was shown for Fe: the shapes range from spherical nanodots a few tens of nanometers in diameter, strung like a pearl necklace, through oval nanodots a few hundred nanometers long, to continuous nanowires. Magnetization measurements reveal uniaxial magnetic anisotropy with the easy axis of magnetization parallel to the facet ridges. The shape of the hysteresis depends on the morphology of the structures. The magnetization reversal processes of the spherical and oval nanodots were simulated by micromagnetic modelling and can be explained by the formation of magnetic vortices.
Abstract:
The production, segregation, and migration of melt and aqueous fluids (henceforth called liquid) play an important role in the transport of mass and energy within the mantle and crust of the Earth. Many properties of large-scale liquid migration processes, such as the permeability of a rock matrix or the initial segregation of newly formed liquid from the host rock, depend on the grain-scale distribution and behaviour of the liquid. Although the general mechanisms of liquid distribution at the grain scale are well understood, the influence of possibly important modifying processes such as static recrystallization, deformation, and chemical disequilibrium on the liquid distribution is not well constrained. For this thesis, analogue experiments were used that allowed the interplay of these different mechanisms to be investigated in situ. In high-temperature environments where melts are produced, the grain-scale distribution in "equilibrium" is fully determined by the liquid fraction and the ratio between the solid-solid and solid-liquid surface energies. The latter is commonly expressed as the dihedral or wetting angle between two grains and the liquid phase (Chapter 2). The interplay of this "equilibrium" liquid distribution with ongoing surface-energy-driven recrystallization is investigated in Chapters 4 and 5 with experiments using norcamphor plus ethanol as the liquid. Ethanol in contact with norcamphor forms a wetting angle of about 25°, similar to the reported angles of rock-forming minerals in contact with silicate melt. The experiments in Chapter 4 show that previously reported disequilibrium features such as trapped liquid lenses, fully wetted grain boundaries, and large liquid pockets can be explained by the interplay of the liquid with ongoing recrystallization. Closer inspection of dihedral angles in Chapter 5 reveals that the wetting angles are themselves modified by grain coarsening: ongoing recrystallization constantly moves liquid-filled triple junctions, dynamically altering the wetting angles as a function of the triple-junction velocity. A polycrystalline aggregate at elevated temperature will therefore always display a range of equilibrium and dynamic wetting angles, rather than the single wetting angle previously assumed. For the deformation experiments, partially molten KNO3-LiNO3 samples were used in addition to norcamphor-ethanol samples (Chapter 6). Three deformation regimes were observed. At high bulk liquid fractions (>10 vol.%) the aggregate deformed by compaction and granular flow. At "moderate" liquid fractions, the aggregate deformed mainly by grain boundary sliding (GBS) localized into conjugate shear zones. At low liquid fractions, the grains of the aggregate formed a supporting framework that deformed internally by crystal-plastic deformation or diffusion creep. Liquid segregation was most efficient during framework deformation, while GBS led to slow liquid segregation or even liquid dispersion in the deforming areas.
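For reference, the standard textural-equilibrium relation linking the dihedral (wetting) angle to the surface-energy ratio is sketched below; the numerical value for norcamphor-ethanol follows from the 25° angle quoted above.

```latex
% Equilibrium dihedral angle theta at a liquid-filled triple junction:
\[
  \cos\!\left(\frac{\theta}{2}\right) = \frac{\gamma_{ss}}{2\,\gamma_{sl}},
\]
% where gamma_ss and gamma_sl are the solid-solid and solid-liquid surface
% energies. For norcamphor-ethanol, theta ~ 25 degrees implies
% gamma_ss / gamma_sl ~ 1.95.
```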
Abstract:
The term "brain imaging" identifies a set of techniques for analyzing the structure and/or functional behavior of the brain in normal and/or pathological conditions. These techniques are widely used in the study of brain activity. Beyond clinical usage, the analysis of brain activity is gaining popularity in other emerging fields, e.g., brain-computer interfaces (BCI) and the study of cognitive processes. In these contexts, classical solutions (e.g., fMRI, PET-CT) can be unfeasible due to their low temporal resolution, high cost, and limited portability. For these reasons, alternative low-cost techniques are the object of research, typically based on simple recording hardware and an intensive data processing pipeline. Typical examples are electroencephalography (EEG) and electrical impedance tomography (EIT), where the electric potential at the patient's scalp is recorded by high-impedance electrodes. In EEG the potentials are generated directly by neuronal activity, while in EIT they result from the injection of small currents at the scalp. To retrieve meaningful insights into brain activity from the measurements, EIT and EEG rely on detailed knowledge of the underlying electrical properties of the body, obtained from numerical models of the electric field distribution therein. The inhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a trade-off between physical accuracy and technical feasibility that currently severely limits the capabilities of these techniques. Moreover, the processing of the recorded data requires computationally intensive regularization techniques, which is problematic for applications with strict timing constraints (such as BCI). This work focuses on the parallel implementation of a workflow for EEG and EIT data processing. The resulting software is accelerated using multi-core GPUs, in order to deliver solutions in reasonable times and address the requirements of real-time BCI systems, without over-simplifying the complexity and accuracy of the head models.
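As an illustration of the kind of regularized inversion such a workflow accelerates, the sketch below solves a Tikhonov-regularized linear problem. This is a generic example, not the thesis's actual pipeline, and the lead-field dimensions are hypothetical; swapping `numpy` for `cupy` runs the same code on a GPU.

```python
import numpy as np  # swap for `import cupy as np` to run on a GPU

def tikhonov_solve(L, b, lam=1e-2):
    """Regularized estimate x = argmin ||Lx - b||^2 + lam * ||x||^2.

    L: lead-field matrix mapping sources to electrode potentials;
    b: measured scalp potentials. The closed-form normal equations are
    used here; lam sets the accuracy/stability trade-off.
    """
    n = L.shape[1]
    return np.linalg.solve(L.T @ L + lam * np.eye(n), L.T @ b)

# Hypothetical sizes: 64 electrodes, 500 source amplitudes.
L = np.random.randn(64, 500)
b = np.random.randn(64)
x = tikhonov_solve(L, b)
```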
Abstract:
This thesis explores the capabilities of heterogeneous multi-core systems based on multiple graphics processing units (GPUs) in a standard desktop framework. Multi-GPU accelerated desk-side computers are an appealing alternative to other high-performance computing (HPC) systems: being composed of commodity hardware components fabricated in large quantities, their price-performance ratio is unparalleled in the world of high-performance computing. Essentially bringing "supercomputing to the masses", this opens up new possibilities for application fields where investing in HPC resources had previously been considered unfeasible. One of these is the field of bioelectrical imaging, a class of medical imaging technologies that occupy a low-cost niche next to million-dollar systems like functional magnetic resonance imaging (fMRI). In the scope of this work, several computational challenges encountered in bioelectrical imaging are tackled with this new kind of computing resource, striving to help these methods approach their true potential. Specifically, the following main contributions were made. Firstly, a novel dual-GPU implementation of parallel triangular matrix inversion (TMI) is presented, addressing a crucial kernel in the computation of multi-mesh head models for encephalographic (EEG) source localization. This includes not only a highly efficient implementation of the routine itself, achieving excellent speedups over an optimized CPU implementation, but also a novel GPU-friendly compressed storage scheme for triangular matrices. Secondly, a scalable multi-GPU solver for non-Hermitian linear systems was implemented. It is integrated into a simulation environment for electrical impedance tomography (EIT) that requires the frequent solution of complex systems with millions of unknowns, a task this solver can perform within seconds; in terms of computational throughput it outperforms not only a highly optimized multi-CPU reference but related GPU-based work as well. Finally, a GPU-accelerated graphical EEG real-time source localization software was implemented. Thanks to the acceleration, it can meet real-time requirements at unprecedented anatomical detail while running more complex localization algorithms. Additionally, a novel implementation to extract anatomical priors from static magnetic resonance (MR) scans has been included.
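A minimal sketch of compressed storage for triangular matrices follows, using the classic row-major packed layout. The thesis's GPU-friendly scheme is presumably organized differently (e.g. blocked for coalesced access), so this only conveys the core idea of roughly halving the memory footprint.

```python
import numpy as np

def packed_index(i, j):
    """Flat index of element (i, j), j <= i, of a lower-triangular matrix
    stored row by row: rows 0..i-1 hold i*(i+1)/2 entries in total."""
    return i * (i + 1) // 2 + j

def pack_lower(A):
    """Row-major packed copy of the lower triangle: n*(n+1)/2 entries."""
    n = A.shape[0]
    return A[np.tril_indices(n)]

A = np.tril(np.arange(16, dtype=float).reshape(4, 4))
p = pack_lower(A)
assert p[packed_index(2, 1)] == A[2, 1]  # ~half the storage of dense form
```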
Abstract:
The main objective of this work was to investigate the impact of different hybridization concepts and levels of hybridization on the fuel economy of a standard road vehicle, where both conventional and non-conventional hybrid architectures are treated in exactly the same way from the point of view of overall energy-flow optimization. Hybrid component models were developed and presented in detail, together with simulation results, mainly for the NEDC cycle. The analysis was performed on four different parallel hybrid powertrain concepts: Hybrid Electric Vehicle (HEV), High-Speed Flywheel Hybrid Vehicle (HSF-HV), Hydraulic Hybrid Vehicle (HHV), and Pneumatic Hybrid Vehicle (PHV). To compare the different hybrid systems equitably, the comparison was also performed on the basis of the same usable energy storage capacity (625 kJ for the HEV, HSF-HV, and HHV); in the case of the pneumatic hybrid system, the maximum storage capacity was limited by the size of the system in order to comply with the packaging requirements of the vehicle. The simulations were performed within the IAV GmbH VeLoDyn software simulator, based on the MATLAB/Simulink package. An advanced cycle-independent control strategy, the Equivalent Consumption Minimization Strategy (ECMS), was implemented in the hybrid supervisory control unit to solve the power management problem for all hybrid powertrain solutions. To maintain the state of charge within the desired boundaries during different cycles, and to facilitate easy implementation and recalibration of the control strategy for very different hybrid systems, a charge-sustaining algorithm was added to the ECMS framework. Also, a Variable Shift Pattern VSP-ECMS algorithm was proposed as an extension of ECMS, so as to include gear selection in the minimization of the (energy) cost function of the hybrid system. Further, a cycle-based energetic analysis was performed in all the simulated cases, and the results are reported in the corresponding chapters.
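The sketch below shows the core ECMS decision at a single time step, assuming a toy affine fuel-rate map and a fixed equivalence factor `s` (in the thesis, the charge-sustaining algorithm would adapt `s` to hold the state of charge). All numbers are illustrative, not values from the work.

```python
import numpy as np

def ecms_split(p_demand, s=2.5, q_lhv=43e6):
    """Pick the engine/battery power split minimizing equivalent consumption:
    J = m_dot_fuel(P_eng) + s * P_batt / Q_lhv, with s converting electrical
    power into an equivalent fuel rate (Q_lhv: fuel lower heating value, J/kg).
    """
    def m_dot_fuel(p_eng):  # kg/s; toy affine map, 35% peak efficiency
        return np.where(p_eng > 0, 1e-4 + p_eng / (0.35 * q_lhv), 0.0)

    p_batt = np.linspace(-20e3, 20e3, 401)       # candidate battery powers, W
    p_eng = np.maximum(p_demand - p_batt, 0.0)   # engine supplies the rest
    J = m_dot_fuel(p_eng) + s * p_batt / q_lhv   # equivalent fuel rate, kg/s
    k = np.argmin(J)
    return p_eng[k], p_batt[k]

print(ecms_split(30e3))  # chosen split for a 30 kW power request
```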
Abstract:
In recent years, in parallel with the expansion of the organic sector, there has been growing interest in alternative models for guaranteeing the integrity and authenticity of organic products. Groups of small farmers around the world have begun to develop alternative approaches to address the problems associated with third-party certification. These practices are known as Participatory Guarantee Systems (PGS). Such models: (i) are based on the IFOAM organic certification standards, (ii) concern all the producers of a rural community, (iii) involve the inclusion of a wide variety of actors, and (iv) aim to minimize bureaucracy and costs by simplifying verification procedures and incorporating an element of environmental and social education for both producers and consumers. The objectives of this research are to: • describe how participatory guarantee systems work; • indicate the advantages of their adoption in developing countries and elsewhere; • illustrate the case of the Rede Ecovida de Agroecologia (Brazil); • offer a reflection concerning consumers and their trust in the PGS model. The theoretical framework draws on Convention Theory; on this basis, a consumer questionnaire was constructed to test the appropriateness of the theoretical hypotheses. The final results concern estimates of consumers' current level of knowledge, trust, and willingness to buy PGS products in the areas considered. Building on this research, it will be possible to adapt and export the empirical model to other countries with different economies, in order to understand the potential scope of application of participatory guarantee systems.
Abstract:
The promising developments in routine nanofabrication and the increasing knowledge of the working principles of new classes of highly sensitive, label-free, and potentially cost-effective bio-nanosensors for detecting molecules in liquid environments have rapidly increased the possibility of developing portable sensor devices. Such devices could have a great impact on many application fields, such as health care, the environment, and food production, thanks to the intrinsic ability of these biosensors to detect, monitor, and study events at the nanoscale. Moreover, there is a growing demand for low-cost, compact readout structures able to perform accurate preliminary tests on biosensors and/or routine tests under varying experimental conditions, without requiring skilled personnel or bulky laboratory instruments. This thesis focuses on analysing, designing, and testing novel implementations of bio-nanosensors in layered hybrid systems, where microfluidic devices and microelectronic systems are fused in compact printed circuit board (PCB) technology. In particular, the manuscript presents hybrid systems for two validating cases, based on nanopore and nanowire technology, demonstrating features not covered by state-of-the-art technologies and built around two custom integrated circuits (ICs). For the nanopore interface system, an automatic setup has been developed for the concurrent formation of bilayer lipid membranes, combined with a custom parallel readout electronic system, creating a complete portable platform for studies of nanopores or ion channels. For the nanowire readout hybrid interface, two systems have been developed that perform parallel, real-time, complex impedance measurements based on the lock-in technique, as well as impedance spectroscopy measurements. This makes it possible to experimentally enrich the information obtained from the bio-nanosensors by concurrently acquiring impedance magnitude and phase, thus probing the capacitive contributions of bioanalytical interactions at the biosensor surface.
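A minimal digital lock-in sketch follows: mixing the measured signal with quadrature references at the excitation frequency and averaging over an integer number of periods recovers the magnitude and phase that such readout systems acquire concurrently. The sampling parameters are illustrative, not those of the custom ICs.

```python
import numpy as np

def lock_in(signal, fs, f_ref):
    """Recover magnitude and phase at f_ref by digital lock-in detection:
    mix with quadrature references, then average (an ideal low-pass when
    the record spans an integer number of periods)."""
    t = np.arange(len(signal)) / fs
    i = 2 * np.mean(signal * np.cos(2 * np.pi * f_ref * t))  # in-phase
    q = 2 * np.mean(signal * np.sin(2 * np.pi * f_ref * t))  # quadrature
    return np.hypot(i, q), np.arctan2(-q, i)

fs, f = 100e3, 1e3                       # 100 kHz sampling, 1 kHz excitation
t = np.arange(10000) / fs                # exactly 100 excitation periods
x = 0.5 * np.cos(2 * np.pi * f * t + 0.3) + 0.05 * np.random.randn(t.size)
print(lock_in(x, fs, f))                 # approx (0.5, 0.3) despite the noise
```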
Abstract:
Massive parallel robots (MPRs) driven by discrete actuators are force-regulated robots that undergo continuous motions despite being commanded through only a finite number of states. Designing a real-time controller for such systems requires fast and efficient methods for solving their inverse static analysis (ISA), which is a challenging problem and the subject of this thesis. In particular, five artificial intelligence methods are proposed to investigate the online computation and the generalization error of the ISA problem for a class of MPRs featuring three-state force actuators and one degree of revolute motion.
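The thesis does not specify the five methods here; as a generic illustration of learning an ISA map, the sketch below trains a small neural network to invert a hypothetical forward static model. It is a continuous relaxation: true three-state actuators would make this a classification problem, and `forward_statics` is a smooth placeholder, not the robot model from the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def forward_statics(u):
    """Hypothetical smooth forward map: actuator states -> pose."""
    W = np.array([[0.7, -0.2, 0.1], [0.1, 0.9, 0.2], [0.4, 0.3, 0.8]])
    return np.tanh(u @ W)

rng = np.random.default_rng(0)
U = rng.uniform(-1, 1, size=(5000, 3))      # sampled actuator commands
X = forward_statics(U)                       # resulting end-effector poses
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000).fit(X, U)

pose = forward_statics(np.array([[0.2, -0.5, 0.8]]))
u_hat = net.predict(pose)                    # online ISA estimate
print(np.abs(forward_statics(u_hat) - pose).max())  # generalization check
```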
Abstract:
Due to its practical importance and inherent complexity, the optimisation of distribution networks for supplying drinking water has been the subject of extensive study for the past 30 years. The optimisation concerns sizing the pipes in the water distribution network (WDN), optimising specific parts of the network such as pumps and tanks, or analysing and optimising the reliability of a WDN. In this thesis, the author analyses two different WDNs (the Anytown and Cabrera city networks), solving a multi-objective optimisation problem (MOOP). The two main objectives in both cases were the minimisation of energy cost (€) or energy consumption (kWh), together with the total number of pump switches (TNps) during a day. For this purpose, a decision-support-system generator for multi-objective optimisation, GANetXL, developed by the Centre for Water Systems at the University of Exeter, was used. GANetXL works by calling the EPANET hydraulic solver each time a hydraulic analysis is required. The main algorithm used was NSGA-II, a second-generation multi-objective optimisation algorithm, which produced the Pareto fronts for each configuration. The first experiment carried out concerned the Anytown network, a large network with a pumping station of four fixed-speed parallel pumps boosting the water supply. The main intervention was to replace these pumps with variable-speed-driven pumps (VSDPs), installing inverters capable of varying their speed during the day; this achieved substantial energy and cost savings along with a reduction in the number of pump switches. The results of this research are illustrated thoroughly in chapter 7, with comments and a variety of graphs for the different configurations. The second experiment concerned the Cabrera network, a smaller WDN with a single fixed-speed (FS) pump. The optimisation problem was the same: minimising energy consumption and, in parallel, minimising TNps, using the same optimisation tool (GANetXL). The main scope was to carry out several different experiments covering a wide variety of configurations: different pumps (this time keeping the FS mode), different tank levels, different pipe diameters, and different emitter coefficients. These different modes produced a large number of results, which are compared in chapter 8. In conclusion, the optimisation of WDNs is a very interesting field with a vast space of options: a large number of algorithms to choose from, different techniques and configurations, and different decision-support-system generators. The researcher has to be ready to "roam" among these choices until a satisfactory result shows that a good optimisation point has been reached.
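As a sketch of how the two objectives could be evaluated for one NSGA-II candidate, the function below computes daily energy cost and the number of pump switches from an hourly schedule. In the actual setup, the power draw would come from an EPANET hydraulic run invoked through GANetXL; the schedule, power, and tariff values here are purely illustrative.

```python
import numpy as np

def objectives(schedule, power_kw, tariff):
    """Evaluate the two objectives for one pump over 24 hours.

    schedule: 24 hourly on/off states (FS pump) or relative speeds in [0, 1];
    power_kw, tariff: hourly electrical power draw (kW) and price (EUR/kWh).
    """
    schedule = np.asarray(schedule, dtype=float)
    energy_cost = float(np.sum(schedule * power_kw * tariff))          # EUR/day
    n_switches = int(np.count_nonzero(np.diff((schedule > 0).astype(int))))  # TNps
    return energy_cost, n_switches

sched = [0]*6 + [1]*4 + [0]*4 + [1]*6 + [0]*4       # 24 hourly on/off states
cost, tnps = objectives(sched, power_kw=np.full(24, 55.0),
                        tariff=np.full(24, 0.12))
print(cost, tnps)  # each candidate trades energy cost against switching
```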