870 results for floating wire


Relevance: 10.00%

Publisher:

Abstract:

Dissertation prepared for the degree of Master in Civil Engineering, Specialization Area of Structures

Relevance: 10.00%

Publisher:

Abstract:

Preemptions account for a non-negligible overhead during system execution. There has been a substantial amount of research on estimating the delay incurred through the loss of working sets in the processor state (caches, registers, TLBs), and some on avoiding preemptions or limiting the preemption cost. We present an algorithm that reduces preemptions by further delaying the start of execution of high-priority tasks in fixed-priority scheduling. Our approach takes advantage of the floating non-preemptive regions model and exploits the fact that, during the schedule, the relative task phasing will differ from the worst-case scenario in terms of admissible preemption deferral. Furthermore, approximations that reduce the complexity of the proposed approach are presented. A substantial set of experiments demonstrates that the approach and its approximations improve over existing work, in particular for high-utilisation systems, where savings of up to 22% in the number of preemptions are attained.
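As an illustration of the kind of analysis involved, the sketch below (a simplified model, not the paper's algorithm) estimates, per task, how long a lower-priority task could defer an incoming preemption: the deferral is bounded by the minimum slack of the higher-priority tasks it could block. The task-set fields and the response-time iteration are standard fixed-priority scheduling constructs, not taken from the paper.

```python
# Hypothetical sketch: bounding the admissible preemption deferral
# (a floating non-preemptive region length) per task under
# fixed-priority scheduling. Field names C, T, D are illustrative.

def response_time(task, higher):
    """Classic fixed-priority response-time iteration (no blocking)."""
    r = task["C"]
    while True:
        # ceil(r / T) computed with integer arithmetic as -(-r // T)
        r_new = task["C"] + sum(-(-r // h["T"]) * h["C"] for h in higher)
        if r_new == r:
            return r
        r = r_new

def admissible_deferral(tasks):
    """Each task may run non-preemptively for at most the minimum
    slack (D - R) of any higher-priority task it could block."""
    tasks = sorted(tasks, key=lambda t: t["T"])  # rate-monotonic order
    result = {}
    for i, t in enumerate(tasks):
        higher = tasks[:i]
        slacks = [h["D"] - response_time(h, tasks[:tasks.index(h)])
                  for h in higher]
        result[t["name"]] = min(slacks) if slacks else float("inf")
    return result
```

With no higher-priority task the deferral is unbounded; otherwise the tightest higher-priority slack wins.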

Relevance: 10.00%

Publisher:

Abstract:

Floating-point computing with more than one TFLOP of peak performance is already a reality in recent Field-Programmable Gate Arrays (FPGAs). General-Purpose Graphics Processing Units (GPGPUs) and recent many-core CPUs have also taken advantage of recent technological innovations in integrated circuit (IC) design and have dramatically improved their peak performances. In this paper, we compare the trends of these computing architectures for high-performance computing and survey these platforms in the execution of algorithms belonging to different scientific application domains. Trends in peak performance, power consumption and sustained performance for particular applications show that the gap between FPGAs and both GPUs and many-core CPUs is widening, moving FPGAs away from high-performance computing with intensive floating-point calculations. FPGAs remain competitive for custom floating-point or fixed-point representations, for smaller input sizes of certain algorithms, for combinational logic problems and for parallel map-reduce problems. © 2014 Technical University of Munich (TUM).

Relevance: 10.00%

Publisher:

Abstract:

The main objective of this work was to evaluate the hypothesis that greater transfer stability also leads to a smaller volume of fumes. Using an Ar + 25% CO2 blend as shielding gas and keeping the average current, wire feed speed and welding speed constant, bead-on-plate welds were carried out with plain carbon steel solid wire. The welding voltage was scanned to progressively vary the transfer stability. Using two conditions of low stability and one of high stability, fume generation was evaluated by means of the AWS F1.2:2006 standard. The influence of these conditions on fume morphology and composition was also verified. A condition with greater transfer stability does not generate a smaller quantity of fume, despite producing less spatter. Other factors, such as short-circuit current, arcing time, droplet diameter and arc length, are the likely governing factors, but in an interrelated way. Metal transfer stability influences neither the composition nor the size/morphology of fume particulates. (c) 2014 Elsevier B.V. All rights reserved.

Relevance: 10.00%

Publisher:

Abstract:

In Part I of the present work we describe the viscosity measurements performed on tris(2-ethylhexyl) trimellitate, or 1,2,4-benzenetricarboxylic acid tris(2-ethylhexyl) ester (TOTM), up to 65 MPa and at six temperatures from (303 to 373) K, using a new vibrating-wire instrument. The main aim is to contribute to the proposal of that liquid as a potential reference fluid for high viscosity at high pressure and high temperature. The present Part II reports the density measurements of TOTM necessary not only to compute the viscosity data presented in Part I, but also as complementary data for the mentioned proposal. The density measurements were obtained using a vibrating U-tube densimeter, model DMA HP, with a model DMA5000 as reading unit, both instruments from Anton Paar GmbH. The measurements were performed along five isotherms from (293 to 373) K and at eleven different pressures up to 68 MPa. As far as the authors are aware, the viscosity and density results are the first above atmospheric pressure to be published for TOTM. Due to TOTM's high viscosity, its density data were corrected for the viscosity effect on the U-tube density measurements. This effect was estimated using two Newtonian viscosity standard liquids, 20 AW and 200 GW. The density data were correlated with temperature and pressure using a modified Tait equation, with deviations within ±0.25%. The expanded uncertainty of the present density results is estimated as ±0.2% at a 95% confidence level. Furthermore, the isothermal compressibility, κ_T, and the isobaric thermal expansivity, α_p, were obtained by differentiation of the modified Tait equation used for correlating the density data. The corresponding uncertainties, at a 95% confidence level, are estimated to be less than ±1.5% and ±1.2%, respectively.
No isobaric thermal expansivity or isothermal compressibility data for TOTM were found in the literature. (C) 2014 Elsevier B.V. All rights reserved.
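The modified Tait correlation mentioned above has the standard form ρ(T, p) = ρ0(T) / (1 − C ln[(B(T) + p)/(B(T) + p0)]), and the isothermal compressibility follows by differentiation: κ_T = C / {[1 − C ln((B + p)/(B + p0))] (B + p)}. The sketch below illustrates this form with placeholder coefficients; the C value, B(T) and reference-density functions are assumptions, not the fitted TOTM parameters.

```python
import math

# Sketch of a modified Tait density correlation and the isothermal
# compressibility derived from it. All coefficients are illustrative
# placeholders, NOT the paper's fitted TOTM parameters.

C_TAIT = 0.088            # dimensionless Tait parameter (assumed)
P_REF = 0.1               # reference pressure, MPa

def b_of_t(t_k):
    """Hypothetical linear temperature dependence of B(T), in MPa."""
    return 300.0 - 0.5 * (t_k - 293.15)

def rho_ref(t_k):
    """Hypothetical reference density at P_REF, in kg/m^3."""
    return 990.0 - 0.75 * (t_k - 293.15)

def density(t_k, p_mpa):
    """rho(T, p) from the modified Tait form."""
    b = b_of_t(t_k)
    return rho_ref(t_k) / (1.0 - C_TAIT * math.log((b + p_mpa) / (b + P_REF)))

def kappa_t(t_k, p_mpa):
    """Isothermal compressibility (1/MPa) by differentiating the Tait form."""
    b = b_of_t(t_k)
    f = 1.0 - C_TAIT * math.log((b + p_mpa) / (b + P_REF))
    return C_TAIT / (f * (b + p_mpa))
```

At the reference pressure the logarithm vanishes and the correlation returns the reference density exactly, which is a convenient sanity check on any fitted parameter set.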

Relevance: 10.00%

Publisher:

Abstract:

Feature selection is a central problem in machine learning and pattern recognition. On large datasets (in terms of dimension and/or number of instances), using search-based or wrapper techniques can be computationally prohibitive. Moreover, many filter methods based on relevance/redundancy assessment also take a prohibitively long time on high-dimensional datasets. In this paper, we propose efficient unsupervised and supervised feature selection/ranking filters for high-dimensional datasets. These methods use low-complexity relevance and redundancy criteria, applicable to supervised, semi-supervised, and unsupervised learning, and are able to act as pre-processors for computationally intensive methods, focusing their attention on smaller subsets of promising features. The experimental results, with up to 10^5 features, show the time efficiency of our methods, with lower generalization error than state-of-the-art techniques, while being dramatically simpler and faster.
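A minimal sketch of a low-complexity relevance/redundancy filter (an illustrative greedy criterion, not the paper's exact method): features are ranked by relevance to the target, penalized by their mean redundancy with the features already selected, using correlation as a cheap proxy for both quantities.

```python
# Illustrative greedy relevance/redundancy feature ranking.
# Correlation stands in for the (unspecified) low-complexity criteria.

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def rank_features(features, target, k):
    """features: dict name -> list of values; returns k selected names,
    most promising first."""
    relevance = {f: abs(corr(v, target)) for f, v in features.items()}
    selected = []
    while len(selected) < k and len(selected) < len(features):
        def score(f):
            red = (sum(abs(corr(features[f], features[s])) for s in selected)
                   / len(selected)) if selected else 0.0
            return relevance[f] - red   # relevant but not redundant
        best = max((f for f in features if f not in selected), key=score)
        selected.append(best)
    return selected
```

Each step costs one pass over the remaining features, which is what makes such filters usable as pre-processors for heavier wrapper methods.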

Relevance: 10.00%

Publisher:

Abstract:

The internal impedance of a wire is a function of frequency. In a conductor where the conductivity is sufficiently high, the displacement current density can be neglected. In this case, the conduction current density is given by the product of the electric field and the conductivity. One of the high-frequency effects is the skin effect (SE). The fundamental problem with the SE is that it attenuates the higher-frequency components of a signal.
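The skin depth quantifies this: δ = sqrt(2 / (ωμσ)), so the effective conduction cross-section shrinks as frequency grows and the AC resistance rises. The sketch below computes δ and a common shell approximation of the AC resistance of a round wire; these are textbook formulas, not specific to the work above.

```python
import math

# Skin depth delta = sqrt(2 / (omega * mu * sigma)) and the resulting
# AC resistance of a round wire, approximating the current as flowing
# in a surface shell of thickness delta at high frequency.

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def skin_depth(freq_hz, sigma, mu_r=1.0):
    """Skin depth in metres for conductivity sigma (S/m)."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * MU0 * mu_r * sigma))

def ac_resistance(freq_hz, sigma, diameter, length, mu_r=1.0):
    """Shell approximation of AC resistance (ohms) of a round wire."""
    delta = skin_depth(freq_hz, sigma, mu_r)
    radius = diameter / 2.0
    if delta >= radius:              # low frequency: full cross-section
        return length / (sigma * math.pi * radius ** 2)
    area = math.pi * (radius ** 2 - (radius - delta) ** 2)
    return length / (sigma * area)
```

For copper (σ ≈ 5.8e7 S/m), δ is about 9 mm at 50 Hz but only tens of micrometres at 1 MHz, which is why the higher-frequency components of a signal see a larger resistance.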

Relevance: 10.00%

Publisher:

Abstract:

15th International Conference on Mixed Design of Integrated Circuits and Systems, pp. 177–180, Poznań, Poland

Relevance: 10.00%

Publisher:

Abstract:

This paper presents a model for the simulation of an offshore wind system with a rectifier input-voltage malfunction on one phase. The offshore wind system model comprises a variable-speed wind turbine supported on a floating platform, equipped with a permanent magnet synchronous generator using a full-power four-level neutral-point-clamped converter. The link from the offshore floating platform to the onshore electrical grid is made through a light high-voltage direct-current submarine cable. The drive train is modeled by a three-mass model. Considerations are offered on the use of the model in a smart grid context. The domino effect of the rectifier voltage malfunction is presented as a case study to show the capabilities of the model. (C) 2015 Elsevier Ltd. All rights reserved.
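As an illustration of the three-mass drive train mentioned above, the sketch below integrates a generic torsional model (turbine, hub/gearbox and generator inertias coupled by shaft stiffness and damping) with a forward-Euler step. All parameter names and values are assumptions, not the paper's.

```python
# Generic three-mass torsional drive train model (illustrative only):
# inertias J1 (turbine), J2 (hub/gearbox), J3 (generator) coupled by
# two shafts with stiffness k and damping d, forward-Euler integration.

def drive_train_step(state, t_turb, t_gen, p, dt):
    """state: (w1, w2, w3, th12, th23) = angular speeds and shaft twists.
    t_turb: aerodynamic torque, t_gen: generator (load) torque."""
    w1, w2, w3, th12, th23 = state
    t12 = p["k12"] * th12 + p["d12"] * (w1 - w2)   # shaft 1-2 torque
    t23 = p["k23"] * th23 + p["d23"] * (w2 - w3)   # shaft 2-3 torque
    dw1 = (t_turb - t12) / p["J1"]
    dw2 = (t12 - t23) / p["J2"]
    dw3 = (t23 - t_gen) / p["J3"]
    return (w1 + dt * dw1, w2 + dt * dw2, w3 + dt * dw3,
            th12 + dt * (w1 - w2), th23 + dt * (w2 - w3))
```

Stepping this in a loop alongside converter and grid models is the usual way such a drive train enters a full system simulation.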

Relevance: 10.00%

Publisher:

Abstract:

Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider metrics like performance and area efficiency, where the designer aims for the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to make a formal analysis of the algorithms considering the main architectural aspects and to determine how each particular architectural aspect is related to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach we carried out a theoretical analysis of a dense matrix multiplication algorithm and determined an equation that relates the number of execution cycles to the architectural parameters. Based on this equation a many-core architecture has been designed. The results obtained indicate that a 100 mm^2 integrated circuit design of the proposed architecture, using 65 nm technology, is able to achieve 464 GFLOPs (double-precision floating-point) for a memory bandwidth of 16 GB/s. This corresponds to a performance efficiency of 71%.
Considering 45 nm technology, a 100 mm^2 chip attains 833 GFLOPs, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
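The flavor of such an equation can be sketched with a roofline-style estimate (an assumed model, not the paper's exact cycle-count equation): attainable matrix-multiplication performance is bounded by the compute peak and by external-memory bandwidth, where the local memory fixes the tile size and hence the arithmetic intensity of a blocked multiplication.

```python
import math

# Roofline-style sketch for blocked dense matmul on a many-core:
# with three b x b tiles (A, B, C) resident in local memory, a blocked
# multiplication performs roughly b flops per word moved from external
# memory, so performance = min(peak, intensity * bandwidth_in_words).

def attainable_gflops(peak_gflops, bw_gb_s, local_mem_kb, word_bytes=8):
    """Attainable GFLOPs given peak, external bandwidth and local memory."""
    words = local_mem_kb * 1024 // word_bytes
    b = int(math.sqrt(words // 3))      # largest tile edge: 3 tiles resident
    intensity = b                        # ~b flops per word for blocked matmul
    bw_gword = bw_gb_s / word_bytes      # GB/s -> Gword/s
    return min(peak_gflops, intensity * bw_gword)
```

The model captures the qualitative trade-off in the abstract: at a fixed 16 GB/s, only a large enough local memory (hence tile size) lets the architecture approach its compute peak rather than being bandwidth-bound.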

Relevance: 10.00%

Publisher:

Abstract:

Dissertation for the degree of Master in Electrotechnical Engineering, Energy Branch

Relevance: 10.00%

Publisher:

Abstract:

Single-processor architectures are unable to provide the required performance for high-performance embedded systems. Parallel processing based on general-purpose processors can achieve this performance, but with a considerable increase in required resources. However, in many cases, simplified optimized parallel cores can be used instead of general-purpose processors, achieving better performance at lower resource utilization. In this paper, we propose a configurable many-core architecture to serve as a co-processor for high-performance embedded computing on Field-Programmable Gate Arrays. The architecture consists of an array of configurable simple cores, with support for floating-point operations, interconnected by a configurable interconnection network. For each core it is possible to configure the size of the internal memory, the supported operations and the number of interfacing ports. The architecture was tested on a ZYNQ-7020 FPGA in the execution of several parallel algorithms. The results show that the proposed many-core architecture achieves better performance than a parallel general-purpose processor and that up to 32 floating-point cores can be implemented in a ZYNQ-7020 SoC FPGA.

Relevance: 10.00%

Publisher:

Abstract:

The paper reports viscosity measurements of compressed liquid dipropyl (DPA) and dibutyl (DBA) adipates obtained with two vibrating-wire sensors developed in our group. The vibrating-wire instruments were operated in the forced oscillation, or steady-state, mode. The viscosity measurements of DPA were carried out at pressures up to 18 MPa and temperatures from (303 to 333) K, and those of DBA up to 65 MPa and temperatures from (303 to 373) K, covering a total viscosity range from (1.3 to 8.3) mPa·s. The required density data of the liquid samples were obtained in our laboratory using an Anton Paar vibrating-tube densimeter and were reported in a previous paper. The viscosity results were correlated with density using a modified hard-spheres scheme. The root mean square deviation of the data from the correlation is less than (0.21 and 0.32)%, and the maximum absolute relative deviations are within (0.43 and 0.81)%, for DPA and DBA respectively. No viscosity data for either adipate could be found in the literature. Independent viscosity measurements were also performed, at atmospheric pressure, using an Ubbelohde capillary in order to compare with the vibrating-wire results. The expanded uncertainty of these results is estimated as ±1.5% at a 95% confidence level. The two data sets agree within the uncertainty of both methods. © 2015 Published by Elsevier B.V.

Relevance: 10.00%

Publisher:

Abstract:

The integrated numerical tool SWAMS (Simulation of Wave Action on Moored Ships) is used to simulate the behavior of a moored container carrier inside Sines' Harbour. The interaction of waves, wind, currents, the floating ship and its moorings is discussed. Several scenarios, differing in harbour layout and in wind and wave conditions, are compared. The harbour layouts correspond to proposed alternatives for the future expansion of Sines' Terminal XXI that include the extension of the East breakwater and of the quay. Additionally, the influence of wind on the behavior of the moored ship and the effect of pre-tensioning the mooring lines were analyzed. Hydrodynamic forces acting on the ship are determined using a modified version of the WAMIT model. This modified model uses the Haskind relations and the non-linear wave field inside the harbour, obtained with the finite element numerical model BOUSS-WMH (Boussinesq Wave Model for Harbors), to obtain the wave forces on the ship. The time series of the moored ship motions and of the forces on the moorings are obtained using the BAS solver. © 2015 Taylor & Francis Group, London.

Relevance: 10.00%

Publisher:

Abstract:

Hyperspectral imaging has become one of the main topics in remote sensing. Hyperspectral images comprise hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating large data volumes of several GB per flight. This high spectral resolution can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is one of the most important tasks for hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive and power-hungry, which compromises their use in applications under on-board constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point processing performance, compact size, huge memory bandwidth, and relatively low cost of these units, which make them appealing for on-board data processing. In this paper, we propose a parallel implementation of an augmented Lagrangian based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The efficient implementation of the SISAL method presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses.
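As a minimal illustration of the linear mixing model that unmixing inverts (not of SISAL itself, which recovers the endmembers without pure pixels), the sketch below mixes two endmember spectra and recovers the abundance by projection; this closed form exists only for the two-endmember toy case.

```python
# Linear mixing model toy example: a pixel spectrum is a convex
# combination of endmember spectra; with two endmembers the abundance
# is recovered by projecting the pixel onto the segment e1-e2.

def mix(e1, e2, a):
    """Pixel spectrum as the convex combination a*e1 + (1-a)*e2."""
    return [a * x + (1 - a) * y for x, y in zip(e1, e2)]

def unmix2(pixel, e1, e2):
    """Recover the abundance of e1 by projection onto the e1-e2 segment."""
    d = [x - y for x, y in zip(e1, e2)]
    num = sum((p - y) * dx for p, y, dx in zip(pixel, e2, d))
    den = sum(dx * dx for dx in d)
    a = num / den
    return min(1.0, max(0.0, a))  # enforce non-negativity and sum-to-one
```

Real scenes have many endmembers and noisy mixed pixels, which is what turns this projection into the constrained optimization problem that SISAL solves on the GPU.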