861 results for High performance concrete
Abstract:
This thesis develops high-performance real-time signal processing modules for direction of arrival (DOA) estimation in localization systems. It proposes highly parallel algorithms for performing subspace decomposition and polynomial rooting, which are traditionally implemented using sequential algorithms. The proposed algorithms address the emerging need for real-time localization in a wide range of applications. As the antenna array size increases, the complexity of the signal processing algorithms grows, making it increasingly difficult to satisfy real-time constraints. This thesis addresses real-time implementation by proposing parallel algorithms that offer considerable improvement over traditional algorithms, especially for systems with a larger number of antenna array elements. Singular value decomposition (SVD) and polynomial rooting are two computationally complex steps and act as the bottleneck to achieving real-time performance. The proposed algorithms are suitable for implementation on field-programmable gate arrays (FPGAs), single instruction multiple data (SIMD) hardware, or application-specific integrated circuits (ASICs), which offer a large number of processing elements that can be exploited for parallel processing. The designs proposed in this thesis are modular, easily expandable, and easy to implement. First, the thesis proposes a fast-converging SVD algorithm. The proposed method reduces the number of iterations it takes to converge to the correct singular values, thus achieving closer to real-time performance. A general algorithm and a modular system design are provided, making it easy for designers to replicate and extend the design to larger matrix sizes. Moreover, the method is highly parallel, which can be exploited on the hardware platforms mentioned earlier. A fixed-point implementation of the proposed SVD algorithm is presented. The FPGA design is pipelined to the maximum extent to increase the maximum achievable frequency of operation. The system was developed with the objective of achieving high throughput, and the various modern FPGA cores used to maximize performance are described in detail. Finally, a parallel polynomial rooting technique based on Newton's method, applicable exclusively to root-MUSIC polynomials, is proposed. Unique characteristics of the root-MUSIC polynomial's complex dynamics were exploited to derive this rooting method. The technique is parallel and converges to the desired roots within a fixed number of iterations, making it suitable for rooting polynomials of large degree. We believe this is the first time the complex dynamics of the root-MUSIC polynomial have been analyzed to derive such an algorithm. In all, the thesis addresses two major bottlenecks in a direction of arrival estimation system by providing simple, high-throughput, parallel algorithms.
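The abstract does not give the rooting algorithm itself; as a rough sketch of the per-root parallelism Newton's method allows (the polynomial and function names below are illustrative, not from the thesis), each unit-circle starting point can be iterated independently:

```python
import numpy as np

def newton_root(coeffs, z0, iters=20, tol=1e-12):
    """Newton iteration z <- z - p(z)/p'(z) from one starting point."""
    dcoeffs = np.polyder(coeffs)
    z = z0
    for _ in range(iters):
        step = np.polyval(coeffs, z) / np.polyval(dcoeffs, z)
        z -= step
        if abs(step) < tol:
            break
    return z

# Root-MUSIC signal roots lie on or near the unit circle, so seeds are
# placed on it; each seed could map to its own processing element.
coeffs = np.array([1.0, -2.5, 2.5, -1.0])       # toy polynomial with unit-circle roots
seeds = np.exp(2j * np.pi * np.arange(8) / 8)   # evenly spaced unit-circle seeds
roots = {complex(np.round(newton_root(coeffs, z0), 6)) for z0 in seeds}
print(roots)
```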
Abstract:
Occupational exposures to organic solvents, specifically acetonitrile and methanol, have the potential to cause serious long-term health effects. In the laboratory, these solvents are used extensively in protocols involving high performance liquid chromatography (HPLC). Operators of HPLC equipment may be exposed to these organic solvents when local exhaust ventilation is not employed properly or is not available, which can be the case in many settings. The objective of this research was to characterize the various sites of vapor release in the HPLC process and then to determine the relative influence of a novel vapor recovery system on the overall exposure of laboratory personnel. The effectiveness of steps to reduce environmental solvent vapor concentrations was assessed by measuring exposure levels of acetonitrile and methanol before and after installation of the vapor recovery system. For acetonitrile, the difference in concentration was not statistically significant (p = 0.938); moreover, exposure after the intervention was actually higher than before it. For methanol, the difference in concentration was likewise not statistically significant (p = 0.278), indicating that exposure to methanol after the intervention was neither significantly higher nor lower than before it. Thus, installation of the vapor recovery device did not result in a statistically significant reduction in exposures in the settings encountered, and acetonitrile exposure actually increased.
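The abstract reports p-values without naming the test; a minimal sketch of the kind of before/after comparison implied, assuming Welch's two-sample t-test and entirely made-up concentration values:

```python
from scipy import stats

# Hypothetical breathing-zone concentrations (ppm); not the study's data.
before = [0.42, 0.55, 0.38, 0.61, 0.47]
after  = [0.51, 0.44, 0.66, 0.58, 0.49]

# Welch's t-test: is mean exposure different after the intervention?
t, p = stats.ttest_ind(before, after, equal_var=False)
print(f"t = {t:.3f}, p = {p:.3f}")  # a large p means no significant difference
```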
Abstract:
Twelve commercially available edible marine algae from France, Japan and Spain and the certified reference material (CRM) NIES No. 9 Sargassum fulvellum were analyzed for total arsenic and arsenic species. Total arsenic concentrations were determined by inductively coupled plasma atomic emission spectrometry (ICP-AES) after microwave digestion and ranged from 23 to 126 μg g⁻¹. Arsenic species in the alga samples were extracted with deionized water by microwave-assisted extraction, with extraction efficiencies from 49 to 98% in terms of total arsenic. The presence of eleven arsenic species was studied by newly developed high performance liquid chromatography–ultraviolet photo-oxidation–hydride generation atomic fluorescence spectrometry (HPLC–(UV)–HG–AFS) methods, using both anion and cation exchange chromatography. Glycerol and phosphate sugars were found in all alga samples analyzed, at concentrations between 0.11 and 22 μg g⁻¹, whereas sulfonate and sulfate sugars were only detected in three of them (0.6–7.2 μg g⁻¹). Regarding toxic arsenic species, low concentrations of dimethylarsinic acid (DMA) (<0.9 μg g⁻¹) and generally high arsenate (As(V)) concentrations (up to 77 μg g⁻¹) were found in most of the algae studied. The results highlight the need to perform speciation analysis and to introduce appropriate legislation limiting the content of toxic arsenic species in these food products.
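Extraction efficiency here is water-extractable arsenic as a fraction of total arsenic; a small worked example with hypothetical values:

```python
# Hypothetical values, not from the paper: total As by ICP-AES and the
# arsenic recovered in the water extract, both in ug/g dry weight.
total_as = 52.0
extracted_as = 38.5

efficiency = 100 * extracted_as / total_as
print(f"extraction efficiency = {efficiency:.0f}%")  # 74%, inside the reported 49-98% range
```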
Abstract:
High performance materials are needed for the reconstruction of such a singular building as a cathedral, since, in addition to special mechanical properties, high self-compactability, high durability, and high surface quality are specified. Because of the project's specifications, polypropylene fiber-reinforced, self-compacting concrete was selected by the engineering office. The low quality of local materials and the lack of experience in applying macro polypropylene fiber for structural reinforcement with these component materials required the development of a pretesting program. To optimize the mix design, performance was evaluated against technical, economic, and constructability criteria. Since the addition of fibers reduces concrete self-compactability, many trials were run to determine the optimal mix proportions. The variables studied were paste volume; an aggregate skeleton of two or three fractions plus limestone filler; and fiber type and dosage. Two mix designs were selected from the preliminary results. The first served as the reference for self-compactability and mechanical properties. The second was an optimized mix with a cement content reduced by 20 kg/m³ and a fiber dosage of 1 kg/m³. For these mix designs, extended testing was carried out to measure compressive and flexural strength, modulus of elasticity, toughness, and water permeability resistance.
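As a data-layout illustration only (the abstract states just the cement reduction and fiber dosage, so the reference cement content below is invented), the two selected mixes could be recorded as:

```python
from dataclasses import dataclass

@dataclass
class MixDesign:
    name: str
    cement: float        # kg/m^3
    fiber_dosage: float  # kg/m^3 of macro polypropylene fiber

# Hypothetical reference cement content; the abstract only says the
# optimized mix cuts cement by 20 kg/m^3 at a 1 kg/m^3 fiber dosage.
reference = MixDesign("reference", cement=380.0, fiber_dosage=1.0)
optimized = MixDesign("optimized", cement=reference.cement - 20.0, fiber_dosage=1.0)
print(optimized)
```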
Abstract:
This paper shows how the efficiency of multibody system (MBS) simulations can be improved in two different ways, considering both an explicit and an implicit semi-recursive formulation. The explicit method is based on a double velocity transformation that involves the solution of a redundant but compatible system of equations. The high computational cost of this operation has been drastically reduced by taking into account the sparsity pattern of the system; to this end, the method introduces MA48, a high performance mathematical library provided by the Harwell Subroutine Library. The second method proposed in this paper is motivated by the observation that, depending on the case, between 70 and 85% of the computation time is devoted to evaluating the derivatives of the forces with respect to the relative position and velocity vectors. Since evaluating these derivatives can be decomposed into concurrent tasks, the main contribution here is a straightforward parallel implementation that distributes the workload across the cores of a quad-core processor, keeping them all busy and achieving a speedup of 3.2 through near-ideal CPU usage.
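Since the force-derivative blocks are independent, the parallelization pattern is essentially a map over bodies; a minimal sketch (the function name and toy workload are invented, not from the paper):

```python
from concurrent.futures import ProcessPoolExecutor
import math

def force_derivative(body_index):
    """Stand-in for one block of derivatives of forces with respect to
    relative positions/velocities; here just a dummy numeric workload."""
    return sum(math.sin(body_index * k) for k in range(1, 200_000))

if __name__ == "__main__":
    bodies = range(32)
    # Independent tasks: one derivative block per body, spread over 4 cores.
    with ProcessPoolExecutor(max_workers=4) as pool:
        derivatives = list(pool.map(force_derivative, bodies))
    print(len(derivatives), "derivative blocks evaluated")
```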
Abstract:
Compilation techniques such as those portrayed by the Warren Abstract Machine (WAM) have greatly improved the speed of execution of logic programs. The research presented herein is geared towards providing additional performance to logic programs through the use of parallelism, while preserving the conventional semantics of logic languages. Two areas to which special attention is given are the preservation of sequential performance and storage efficiency, and the use of low-overhead mechanisms for controlling parallel execution. Accordingly, the techniques used for supporting parallelism are efficient extensions of those which have brought high inferencing speeds to sequential implementations. At a lower level, special attention is also given to design and simulation detail and to the architectural implications of the execution model's behavior. This paper offers an overview of the basic concepts and techniques used in the parallel design, the simulation tools used, and some of the results obtained to date.
Abstract:
In recent years a lot of research has been invested in the parallel processing of numerical applications. However, parallel processing of symbolic and AI applications has received less attention. This paper presents a system for parallel symbolic computing, named ACE, based on the logic programming paradigm. ACE is a computational model for the full Prolog language, capable of exploiting Or-parallelism and Independent And-parallelism. In this paper we focus on the implementation of the and-parallel part of the ACE system (called &ACE) on a shared memory multiprocessor, describing its organization and some optimizations, and presenting performance figures that prove the ability of &ACE to efficiently exploit parallelism.
Abstract:
Motivated by the need for precise simulation of the impact phenomena that may occur inside a jet engine turbine using an explicit non-linear finite element code, four new material models are postulated. Each one is calibrated for one of four high-performance alloys that can be encountered in a modern jet engine. First, a new uncoupled material model for high strain rate and ballistic applications is proposed. Based on a Johnson-Cook type model, the proposed formulation introduces the effect of the third deviatoric invariant by means of three different Lode angle dependent functions, which are added to both the plasticity and failure models. The postulated model (JCX) is calibrated for a 6061-T651 aluminium alloy with data taken from the literature, and its fracture pattern predictability is demonstrated through numerical simulations of various quasi-static and dynamic tests. As an extension of this model, a modification of the thermal softening behaviour accounting for phase transformation temperatures is developed (JCXt), and a Lode angle dependent flow stress is defined. By analysing the phase diagram and the high temperature tests performed, the phase transformation temperatures of the FV535 stainless steel are determined, and the material model constants for this steel are calibrated. Next, a coupled elastoplastic-damage material model for high strain rate and ballistic applications is presented (JCXd). A Lode angle dependent function is added to the equivalent-plastic-strain-to-failure definition of the Johnson-Cook failure criterion, and the weakening of the elastic law and of the Johnson-Cook type constitutive relation implicitly introduces the Lode angle dependency into the elastoplastic behaviour. This model is calibrated for the precipitation-hardened Inconel 718 nickel-base superalloy; the combination of a Lode angle dependent failure criterion with weakened constitutive equations is shown to predict the fracture patterns of the mechanical tests performed and to provide reliable results. Finally, a transversely isotropic material model for directionally solidified alloys is presented. The proposed yield function is based on a single linear transformation of the stress tensor, where the linear operator weighs the degree of anisotropy of the yield function. The elastic behaviour, as well as the hardening, are considered isotropic; to model the hardening, a Johnson-Cook type relation is adopted. A material vector included in the model implementation allows reorienting the reference direction to any other the user may need, and failure is modelled with the Cockcroft-Latham criterion. The model is calibrated for the MAR-M 247 directionally solidified nickel-base superalloy.
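For context, the baseline Johnson-Cook flow stress that these formulations extend has the standard form below; the factor f(θ) merely stands in for the thesis's three Lode angle dependent functions, whose exact form the abstract does not give:

```latex
\sigma_y = \left(A + B\,\bar{\varepsilon}_p^{\,n}\right)
           \left(1 + C \ln \dot{\varepsilon}^{*}\right)
           \left(1 - T^{*m}\right) f(\theta),
\qquad
T^{*} = \frac{T - T_{\mathrm{room}}}{T_{\mathrm{melt}} - T_{\mathrm{room}}}
```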
Abstract:
Major ampullate (MA) dragline silk supports spider orb webs, combining strength and extensibility in the toughest biomaterial. MA silk evolved ~376 MYA, and identifying how evolutionary changes in proteins influenced silk mechanics is crucial for biomimetics, but is hindered by high spinning plasticity. We use supercontraction to remove that variation and characterize MA silk across the spider phylogeny. We show that mechanical performance is conserved within, but divergent among, major lineages, evolving in correlation with discrete changes in proteins. Early MA silk tensile strength improved rapidly with the origin of GGX amino acid motifs and increased repetitiveness. Tensile strength then maximized in basal entelegyne spiders, ~230 MYA. Toughness subsequently improved through increased extensibility within orb spiders, coupled with the origin of a novel protein (MaSp2). Key changes in MA silk proteins therefore correlate with the sequential evolution of high performance orb spider silk and could aid the design of biomimetic fibers.
Abstract:
Mersenne Twister (MT) uniform random number generators are key cores for the hardware acceleration of Monte Carlo simulations. In this work, two different architectures are studied: besides the classical table-based architecture, a new architecture based on a circular buffer, especially targeting FPGAs, is proposed. A 30% performance improvement has been obtained compared to the fastest previous work. The applicability of the proposed MT architectures has been proven in a high performance Gaussian RNG.
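The abstract does not detail the buffer design, but the state being stored is the standard 624-word MT19937 table; a minimal software rendering of the recurrence that a table-based or circular-buffer datapath implements:

```python
# Minimal MT19937 core: the 624-word state is what a hardware design keeps
# in block RAM (table-based) or streams through a circular buffer.
N, M = 624, 397
MATRIX_A, UPPER, LOWER = 0x9908B0DF, 0x80000000, 0x7FFFFFFF

def mt_init(seed):
    mt = [seed & 0xFFFFFFFF]
    for i in range(1, N):
        mt.append((1812433253 * (mt[-1] ^ (mt[-1] >> 30)) + i) & 0xFFFFFFFF)
    return mt

def mt_next(mt, i):
    """Update word i of the state in place and return one tempered output."""
    y = (mt[i] & UPPER) | (mt[(i + 1) % N] & LOWER)
    mt[i] = mt[(i + M) % N] ^ (y >> 1) ^ (MATRIX_A if y & 1 else 0)
    y = mt[i]
    y ^= y >> 11
    y ^= (y << 7) & 0x9D2C5680
    y ^= (y << 15) & 0xEFC60000
    return (y ^ (y >> 18)) & 0xFFFFFFFF

state = mt_init(5489)                         # reference seed
print([mt_next(state, i) for i in range(3)])  # 3499211612, ...
```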
Abstract:
The dome-shaped Fresnel-Köhler concentrator is a novel optical design for photovoltaic applications. It is based on two previous successful CPV optical designs, the FK concentrator with a flat Fresnel lens and the dome-shaped Fresnel lens system developed by Daido Steel, resulting in a superior concentrator. This optical concentrator achieves large concentration factors, high tolerance (i.e., acceptance angle), and high optical efficiency, three key issues in photovoltaic applications. Moreover, the irradiance it delivers is distributed very evenly across the cell surface. The concentrator has shown outstanding simulation results, achieving an effective concentration-acceptance product (CAP) of 0.72, on-axis optical efficiency over 85%, and good irradiance uniformity on the cell provided by Köhler integration. Furthermore, owing to its high tolerance, we present the dome-shaped Fresnel-Köhler concentrator as a cost-effective CPV optical design, superior to conventional competitors in the current market.
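The CAP figure ties geometric concentration to acceptance angle, conventionally as CAP = sqrt(Cg)·sin(α); a quick check of what CAP = 0.72 implies (the 1000x concentration chosen below is ours, not the paper's):

```python
import math

CAP = 0.72   # effective concentration-acceptance product reported
Cg = 1000    # hypothetical geometric concentration (suns)

alpha = math.degrees(math.asin(CAP / math.sqrt(Cg)))
print(f"acceptance angle ~ {alpha:.2f} deg at {Cg}x")  # about 1.30 deg
```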
Abstract:
- PV and HCPV compete in the utility market
- PV cost reduction has been dramatic through volume
- A complete off-the-shelf optics solution by Evonik and LPI
- Based on the best-in-class design: the FK concentrator
Abstract:
This work presents the power management techniques implemented in a high-performance Wireless Sensor Network (WSN) node based on a RAM-based FPGA. This new custom node architecture is intended for high-end WSN applications that include complex sensor management (such as video cameras), computationally demanding tasks such as image encoding or robust encryption, and/or higher data bandwidth needs. For these complex processing tasks, while still meeting low-power design requirements, the combination of techniques such as extensive hardware algorithm mapping, smart management of power islands to selectively switch components on and off, smart low-energy partial reconfiguration, and an adequate set of energy-saving modes and wake-up options can match or improve on the energy usage of the typical low-power microcontrollers found in many WSN node architectures. Indeed, the results show that higher-complexity tasks favor hardware-based platforms, while the flexibility achieved by dynamic and partial reconfiguration techniques can be comparable to software-based solutions.
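As a back-of-the-envelope illustration of the trade-off described (all power and timing figures below are invented, not measurements from the paper), a hardware-mapped task can win on energy despite a higher active power if it finishes much faster and the node then sleeps:

```python
# Energy per duty cycle = active power x active time + sleep power x idle time.
PERIOD = 1.0    # s, one duty cycle
P_SLEEP = 0.05  # mW, hypothetical sleep power for both platforms

def cycle_energy_mj(p_active_mw, t_active_s):
    return p_active_mw * t_active_s + P_SLEEP * (PERIOD - t_active_s)

mcu  = cycle_energy_mj(p_active_mw=30.0,  t_active_s=0.80)  # slow software encoder
fpga = cycle_energy_mj(p_active_mw=120.0, t_active_s=0.05)  # fast HW-mapped encoder
print(f"MCU: {mcu:.2f} mJ/cycle   FPGA: {fpga:.2f} mJ/cycle")
```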