26 results for High Performance Teams
at Universidad Politécnica de Madrid
Abstract:
Twelve commercially available edible marine algae from France, Japan and Spain and the certified reference material (CRM) NIES No. 9 Sargassum fulvellum were analyzed for total arsenic and arsenic species. Total arsenic concentrations were determined by inductively coupled plasma atomic emission spectrometry (ICP-AES) after microwave digestion and ranged from 23 to 126 μg g−1. Arsenic species in the algal samples were extracted with deionized water by microwave-assisted extraction, with extraction efficiencies from 49 to 98% in terms of total arsenic. The presence of eleven arsenic species was studied using high performance liquid chromatography–ultraviolet photo-oxidation–hydride generation–atomic fluorescence spectrometry (HPLC–(UV)–HG–AFS) methods developed for this purpose, using both anion and cation exchange chromatography. Glycerol and phosphate sugars were found in all algal samples analyzed, at concentrations between 0.11 and 22 μg g−1, whereas sulfonate and sulfate sugars were only detected in three of them (0.6–7.2 μg g−1). Regarding toxic arsenic species, low concentrations of dimethylarsinic acid (DMA) (<0.9 μg g−1) and generally high arsenate (As(V)) concentrations (up to 77 μg g−1) were found in most of the algae studied. These results highlight the need to perform speciation analysis and to introduce appropriate legislation limiting the content of toxic arsenic species in these food products.
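As an illustration only (the figures below are hypothetical, not data from the paper), the extraction efficiency quoted above is simply the arsenic recovered in the water extract expressed as a percentage of total arsenic:

```python
# Illustrative sketch: extraction efficiency "in terms of total arsenic",
# i.e. arsenic in the extract as a percentage of total arsenic.
# The sample values are hypothetical, not results from the paper.
def extraction_efficiency(extracted_ug_g: float, total_ug_g: float) -> float:
    return 100.0 * extracted_ug_g / total_ug_g

print(f"{extraction_efficiency(62.0, 126.0):.0f}% recovered")  # -> 49%
```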
Abstract:
In this paper we show how the efficiency of multibody system (MBS) simulations can be improved in two different ways, considering both an explicit and an implicit semi-recursive formulation. The explicit method is based on a double velocity transformation that involves the solution of a redundant but compatible system of equations. The high computational cost of this operation has been drastically reduced by taking into account the sparsity pattern of the system; to this end, the method introduces MA48, a high performance mathematical library provided by the Harwell Subroutine Library. In the second method proposed in this paper, depending on the case, between 70 and 85% of the computation time is devoted to the evaluation of force derivatives with respect to the relative position and velocity vectors. Since the evaluation of these derivatives can be decomposed into concurrent tasks, a straightforward parallel implementation that distributes the workload among the cores of a quad-core processor, keeping all of them busy, has led to a substantial improvement, with a speedup of 3.2 and a large reduction in computation time through near-ideal CPU usage.
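A back-of-the-envelope check (not from the paper) of what such a stage speedup buys overall: if only the derivative evaluation is parallelized, Amdahl's law bounds the whole-simulation gain. Reading the 3.2 figure as the speedup of the parallel stage is an assumption here.

```python
# Sketch: Amdahl's-law estimate of overall speedup when only the
# force-derivative stage (70-85% of the time, per the abstract) is
# accelerated. Treating 3.2 as the stage speedup is an assumption.
def overall_speedup(parallel_fraction: float, stage_speedup: float) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / stage_speedup)

for p in (0.70, 0.85):
    print(f"parallel fraction {p:.0%} -> overall speedup {overall_speedup(p, 3.2):.2f}x")
```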
Abstract:
Compilation techniques such as those portrayed by the Warren Abstract Machine (WAM) have greatly improved the speed of execution of logic programs. The research presented herein is geared towards providing additional performance to logic programs through the use of parallelism, while preserving the conventional semantics of logic languages. Two areas to which special attention is given are the preservation of sequential performance and storage efficiency, and the use of low overhead mechanisms for controlling parallel execution. Accordingly, the techniques used for supporting parallelism are efficient extensions of those which have brought high inferencing speeds to sequential implementations. At a lower level, special attention is also given to design and simulation detail and to the architectural implications of the execution model behavior. This paper offers an overview of the basic concepts and techniques used in the parallel design, the simulation tools used, and some of the results obtained to date.
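To make the parallelism notion concrete, here is a minimal sketch (purely illustrative, not the paper's machinery) of independent and-parallelism: goals that share no unbound variables can run concurrently without changing the sequential semantics. The goal functions are hypothetical stand-ins for Prolog goals.

```python
# Illustrative sketch of independent and-parallelism: in a conjunction
# p(X), q(Y) where the goals share no variables, both can be evaluated
# in parallel and the conjunction succeeds with both answers.
from concurrent.futures import ThreadPoolExecutor

def goal_p(x: int) -> int:   # hypothetical stand-in for goal p(X)
    return x * 2

def goal_q(y: int) -> int:   # hypothetical stand-in for goal q(Y)
    return y + 1

with ThreadPoolExecutor() as pool:
    fp, fq = pool.submit(goal_p, 3), pool.submit(goal_q, 10)
    print(fp.result(), fq.result())  # same answers as sequential execution
```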
Abstract:
In recent years a lot of research has been invested in parallel processing of numerical applications. However, parallel processing of symbolic and AI applications has received less attention. This paper presents a system for parallel symbolic computing, named ACE, based on the logic programming paradigm. ACE is a computational model for the full Prolog language, capable of exploiting Or-parallelism and Independent And-parallelism. In this paper we focus on the implementation of the and-parallel part of the ACE system (called &ACE) on a shared memory multiprocessor, describing its organization, some optimizations, and presenting some performance figures, proving the ability of &ACE to efficiently exploit parallelism.
Abstract:
Driven by the need to precisely simulate, with an explicit non-linear finite element code, the impact phenomena that may occur inside a jet engine turbine, four new material models are postulated. Each one is calibrated for one of four high-performance alloys that can be encountered in a modern jet engine.

A new uncoupled material model for high strain and ballistic applications is proposed. Based on a Johnson-Cook type model, the proposed formulation introduces the effect of the third deviatoric invariant by means of three different Lode angle dependent functions. The Lode dependent functions are added to both the plasticity and the failure models. The postulated model is calibrated for a 6061-T651 aluminium alloy with data taken from the literature. The fracture pattern predictability of the JCX material model is shown by performing numerical simulations of various quasi-static and dynamic tests.

As an extension of the above-mentioned model, a modification of the thermal softening behaviour due to phase transformation temperatures is developed (JCXt). Additionally, a Lode angle dependent flow stress is defined. By analysing the phase diagram and the high temperature tests performed, the phase transformation temperatures of the FV535 stainless steel are determined, and the postulated material model constants for this steel are calibrated.

A coupled elastoplastic-damage material model for high strain and ballistic applications is presented (JCXd). A Lode angle dependent function is added to the equivalent plastic strain to failure definition of the Johnson-Cook failure criterion. The weakening in the elastic law and in the Johnson-Cook type constitutive relation implicitly introduces the Lode angle dependency in the elastoplastic behaviour. The material model is calibrated for the precipitation hardened Inconel 718 nickel-base superalloy. The combination of a Lode angle dependent failure criterion with weakened constitutive equations is proven to predict the fracture patterns of the mechanical tests performed and to provide reliable results.

Finally, a transversely isotropic material model for directionally solidified alloys is presented. The proposed yield function is based on a single linear transformation of the stress tensor, where the linear operator weighs the degree of anisotropy of the yield function. The elastic behaviour, as well as the hardening, is considered isotropic, and a Johnson-Cook type relation is adopted to model the hardening. Failure is modelled with the Cockcroft-Latham failure criterion. A material vector included in the model implementation allows the reference orientation to be aligned with any direction the user may need. The model is calibrated for the MAR-M 247 directionally solidified nickel-base superalloy.
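For orientation, the textbook Johnson-Cook flow stress that the JCX-type models build on is sketched below; the functions f(θ) and g(θ) stand in for the thesis' three Lode angle dependent functions, whose exact form and placement are not given in this abstract.

```latex
% Textbook Johnson-Cook flow stress:
\sigma_y = \left(A + B\,\varepsilon_p^{\,n}\right)
           \left(1 + C\ln\dot{\varepsilon}^{*}\right)
           \left(1 - T^{*m}\right)
% JCX-type extension (sketch only): Lode angle dependent functions
% enter both the plasticity and the failure model, e.g.
\sigma_y^{\mathrm{JCX}} = \sigma_y\, f(\theta), \qquad
\varepsilon_f^{\mathrm{JCX}} = \varepsilon_f\bigl(\eta,\dot{\varepsilon}^{*},T^{*}\bigr)\, g(\theta)
```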
Abstract:
Major ampullate (MA) dragline silk supports spider orb webs, combining strength and extensibility in the toughest biomaterial. MA silk evolved ~376 MYA, and identifying how evolutionary changes in proteins influenced silk mechanics is crucial for biomimetics, but is hindered by high spinning plasticity. We use supercontraction to remove that variation and characterize MA silk across the spider phylogeny. We show that mechanical performance is conserved within, but divergent among, major lineages, evolving in correlation with discrete changes in proteins. Early MA silk tensile strength improved rapidly with the origin of GGX amino acid motifs and increased repetitiveness. Tensile strength then maximized in basal entelegyne spiders, ~230 MYA. Toughness subsequently improved through increased extensibility within orb spiders, coupled with the origin of a novel protein (MaSp2). Key changes in MA silk proteins therefore correlate with the sequential evolution of high performance orb spider silk and could aid the design of biomimetic fibers.
Abstract:
Mersenne Twister (MT) uniform random number generators are key cores for hardware acceleration of Monte Carlo simulations. In this work, two different architectures are studied: besides the classical table-based architecture, a different architecture based on a circular buffer and especially targeting FPGAs is proposed. A 30% performance improvement has been obtained when compared to the fastest previous work. The applicability of the proposed MT architectures has been proven in a high performance Gaussian RNG.
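For reference, a minimal software model (illustrative only, unrelated to the paper's hardware design) of the classical table-based MT19937 organization the abstract refers to: a 624-word state table twisted in batches, which is precisely the structure the proposed circular-buffer architecture reorganizes for FPGAs.

```python
class MT19937:
    """Minimal table-based MT19937 (software reference, not the FPGA design)."""

    def __init__(self, seed: int):
        self.mt = [0] * 624   # the classical 624-word state table
        self.index = 624      # force a twist on first use
        self.mt[0] = seed & 0xFFFFFFFF
        for i in range(1, 624):
            self.mt[i] = (1812433253 * (self.mt[i - 1] ^ (self.mt[i - 1] >> 30)) + i) & 0xFFFFFFFF

    def _twist(self):
        """Regenerate the whole table in one batch (the 'table-based' step)."""
        for i in range(624):
            y = (self.mt[i] & 0x80000000) | (self.mt[(i + 1) % 624] & 0x7FFFFFFF)
            self.mt[i] = self.mt[(i + 397) % 624] ^ (y >> 1)
            if y & 1:
                self.mt[i] ^= 0x9908B0DF
        self.index = 0

    def next_u32(self) -> int:
        if self.index >= 624:
            self._twist()
        y = self.mt[self.index]
        self.index += 1
        # Tempering improves the equidistribution of the raw state words.
        y ^= y >> 11
        y ^= (y << 7) & 0x9D2C5680
        y ^= (y << 15) & 0xEFC60000
        return (y ^ (y >> 18)) & 0xFFFFFFFF

print(MT19937(5489).next_u32())  # 3499211612 for the reference seed 5489
```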
Abstract:
The dome-shaped Fresnel-Köhler concentrator is a novel optical design for photovoltaic applications. It is based on two previous successful CPV optical designs, the FK concentrator with a flat Fresnel lens and the dome-shaped Fresnel lens system developed by Daido Steel, resulting in a superior concentrator. This optical concentrator is able to achieve large concentration factors, high tolerance (i.e. acceptance angle) and high optical efficiency, three key issues when dealing with photovoltaic applications. Moreover, its irradiance is distributed very evenly over the cell surface. The concentrator has shown outstanding simulation results, achieving an effective concentration-acceptance product (CAP) value of 0.72, on-axis optical efficiency over 85% and good irradiance uniformity on the cell provided by Köhler integration. Furthermore, due to its high tolerance, we present the dome-shaped Fresnel-Köhler concentrator as a cost-effective CPV optical design. All this makes this concentrator superior to other conventional competitors in the current market.
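A small numerical illustration (the concentration values are hypothetical), assuming the usual CPV definition of the concentration-acceptance product, CAP = √Cg · sin α, relating the reported CAP of 0.72 to the acceptance angle α at a given geometric concentration Cg:

```python
# Sketch, assuming the usual CPV definition CAP = sqrt(Cg) * sin(alpha).
# CAP = 0.72 is the value reported in the abstract; the Cg values are
# illustrative, not from the paper.
import math

CAP = 0.72
for cg in (500, 1000, 2000):
    alpha = math.degrees(math.asin(CAP / math.sqrt(cg)))
    print(f"Cg = {cg:4d}x -> acceptance angle ~ {alpha:.2f} deg")
```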
Abstract:
- PV and HCPV compete in the utility market
- PV cost reduction has been dramatic through volume
- A complete off-the-shelf optics solution by Evonik and LPI
- Based on the best-in-class design: the FK concentrator
Abstract:
In this work, the power management techniques implemented in a high-performance node for Wireless Sensor Networks (WSN) based on a RAM-based FPGA are presented. This new custom node architecture is intended for high-end WSN applications that include complex sensor management like video cameras, high compute demanding tasks such as image encoding or robust encryption, and/or higher data bandwidth needs. For these complex processing tasks, while still meeting low power design requirements, it can be shown that the combination of different techniques, such as extensive HW algorithm mapping, smart management of power islands to selectively switch components on and off, smart and low-energy partial reconfiguration, and an adequate set of energy-saving modes and wake-up options, may yield energy results that compete with and improve on the energy usage of the typical low power microcontrollers used in many WSN node architectures. In fact, results show that higher complexity tasks favor HW based platforms, while the flexibility achieved through dynamic and partial reconfiguration techniques is comparable to that of SW based solutions.
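The underlying energy argument can be sketched with hypothetical numbers (none are from the paper): even if an FPGA draws more power than a low power microcontroller, finishing a complex task much faster can lower the energy per task.

```python
# Sketch with hypothetical numbers: energy per task E = P * t.
# A higher-power device that finishes sooner can use less energy overall.
def energy_uj(power_mw: float, time_ms: float) -> float:
    """Energy per task in microjoules (mW * ms = uJ)."""
    return power_mw * time_ms

mcu  = energy_uj(power_mw=30.0,  time_ms=500.0)  # low power, long runtime
fpga = energy_uj(power_mw=300.0, time_ms=10.0)   # higher power, short runtime
print(f"MCU: {mcu:.0f} uJ per task, FPGA: {fpga:.0f} uJ per task")
```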
Abstract:
Quantum Key Distribution is carving its place among the tools used to secure communications. While a difficult technology, it enjoys benefits that set it apart from the rest, the most prominent being its provable security based on the laws of physics. QKD requires not only the mastering of signals at the quantum level, but also classical processing to extract a secret key from them. This postprocessing has been customarily studied in terms of efficiency, a figure of merit that offers a biased view of the performance of real devices. Here we argue that throughput is the significant magnitude in practical QKD, especially in the case of high speed devices, where the differences are more marked, and give some examples contrasting the usual postprocessing schemes with new ones from modern coding theory. A good understanding of these implications is very important for the design of modern QKD devices.
Abstract:
While for years traditional wireless sensor nodes have been based on ultra-low power microcontrollers with sufficient but limited computing power, the complexity and number of tasks of today's applications are constantly increasing. Increasing the node duty cycle is not feasible in all cases, so in many cases more computing power is required. This extra computing power may be achieved either with more powerful microcontrollers, at the cost of higher power consumption, or, in general, with any solution capable of accelerating task execution. At this point, hardware based solutions, and in particular FPGAs, appear as a candidate technology: although their power use is higher than that of lower power devices, execution time is reduced, so overall energy consumption can be lower. In order to demonstrate this, an innovative WSN node architecture is proposed. This architecture is based on a high performance, high capacity, state-of-the-art FPGA, which combines the advantages of the intrinsic acceleration provided by the parallelism of hardware devices, the use of partial reconfiguration capabilities, and a careful power-aware management system, to show that energy savings can be achieved for certain higher-end applications. Finally, comprehensive tests have been performed to validate the platform in terms of performance and power consumption, and to prove that better energy efficiency than processor based solutions can be achieved, for instance, when encryption is imposed by the application requirements.
Abstract:
As embedded systems evolve, problems inherent to the technology become important limitations. In less than ten years, chips will exceed the maximum allowed power consumption, affecting performance, since, even though the resources available per chip are increasing, the frequency of operation has stalled. Besides, as the level of integration increases, it is difficult to keep defect density under control, so new fault tolerant techniques are required. In this demo work, a new dynamically adaptable virtual architecture (ARTICo3), which allows dynamic and context-aware use of resources, is implemented on a high performance wireless sensor node (HiReCookie) to run an image processing application.
Abstract:
This paper presents power converter architectures and circuit topologies which can be used to meet the requirements of a high performance transformer rectifier unit in aircraft applications, namely: high power factor with low THD, high efficiency and high power density. The voltage and power levels demanded by this application are: three-phase line-to-neutral input voltage of 115 or 230 V AC rms (360–800 Hz), output voltage of 28 V DC or 270 V DC (new grid value), and output power of up to tens of kilowatts.
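The link between the "high power factor with low THD" requirement can be made explicit with the textbook relation (not specific to this paper) PF = DPF / √(1 + THD²), where DPF is the displacement power factor and THD the input current distortion as a fraction:

```python
# Sketch of the textbook relation between total power factor and input
# current THD for a rectifier front end: PF = DPF / sqrt(1 + THD^2).
# The THD values below are illustrative, not from the paper.
import math

def power_factor(dpf: float, thd: float) -> float:
    return dpf / math.sqrt(1.0 + thd ** 2)

for thd_pct in (5, 10, 30):
    print(f"THD = {thd_pct:2d}% -> PF = {power_factor(1.0, thd_pct / 100):.4f}")
```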
Abstract:
The postprocessing or secret-key distillation process in quantum key distribution (QKD) mainly involves two well-known procedures: information reconciliation and privacy amplification. Information or key reconciliation has been customarily studied in terms of efficiency. During this procedure, some information needs to be disclosed to reconcile discrepancies in the exchanged keys. The leakage of information is lower bounded by a theoretical limit, and is usually parameterized by the reconciliation efficiency (or inefficiency), i.e. the ratio of information disclosed over the Shannon limit. Most techniques for reconciling errors in QKD try to optimize this parameter. For instance, the well-known Cascade (probably the most widely used procedure for reconciling errors in QKD) was recently shown to have an average efficiency of 1.05 at the cost of high interactivity (number of exchanged messages). Modern coding techniques, such as rate-adaptive low-density parity-check (LDPC) codes, were also shown to achieve similar efficiency values exchanging only one message, or even better values with little interactivity and shorter block-length codes.
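A small worked example (the QBER values are illustrative) of how the efficiency parameter translates into disclosed information, using the usual model in which the leakage per key bit is f · h(QBER), with h the binary entropy and f = 1 the Shannon limit:

```python
# Sketch: leakage per key bit modeled as f * h(QBER), where h is the
# binary entropy and f = 1 is the Shannon limit. f = 1.05 is the Cascade
# average efficiency quoted above; the QBER values are illustrative.
import math

def h2(p: float) -> float:
    """Binary entropy in bits (for 0 < p < 1)."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for qber in (0.01, 0.02, 0.05):
    shannon = h2(qber)        # minimum leakage per bit
    cascade = 1.05 * shannon  # leakage at efficiency f = 1.05
    print(f"QBER {qber:.0%}: Shannon {shannon:.4f} bits/bit, "
          f"f=1.05 -> {cascade:.4f} bits/bit")
```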