872 results for High Performance Computing


Relevance: 100.00%

Abstract:

The loss of prestressing force over time influences the long-term deflection of a prestressed concrete element. Prestress losses are inherently complex due to the interaction of concrete creep, concrete shrinkage, and steel relaxation. Implementing advanced materials such as ultra-high performance concrete (UHPC) further complicates the estimation of prestress losses because the material models change depending on the curing regime. Past research shows compressive creep is "locked in" when UHPC cylinders are subjected to thermal treatment before being loaded in compression. However, the current precasting manufacturing process typically loads the element (through prestressing strand release from the prestressing bed) before the element is taken to the curing facility. Members of many ages are stored until curing can be applied to all of them at once. This research was conducted to determine the impact of variable curing times on the prestress losses, and hence deflections, of UHPC members. Three UHPC beams, a rectangular section, a modified bulb tee section, and a pi-girder, were assessed for losses and deflections using an incremental time-step approach and material models specific to UHPC based on compressive creep and shrinkage testing. Results show that although it is important for prestressed UHPC beams to be thermally treated, to "lock in" material properties, the timing of thermal treatment leads to negligible differences in long-term deflections. Results also show that for UHPC elements that are thermally treated, changes in deflection are caused only by external loads, because prestress losses are "locked in" following thermal treatment.
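The incremental time-step idea can be illustrated with a minimal sketch. All names, the hyperbolic time-development function, and the material constants below are illustrative assumptions, not the study's UHPC models: losses from creep and shrinkage accrue step by step, and stop accruing once thermal treatment "locks in" the material properties.

```python
def prestress_loss(n, f_c, phi_ult, eps_sh, E_p, days=365, treat_day=None):
    """Toy incremental time-step prestress loss (MPa), summed day by day.

    n        modular ratio E_p / E_c (assumed)
    f_c      concrete stress at the strand level, MPa (assumed)
    phi_ult  ultimate creep coefficient (illustrative)
    eps_sh   ultimate shrinkage strain (illustrative)
    E_p      strand modulus, MPa
    After thermal treatment on day `treat_day`, creep and shrinkage are
    treated as "locked in" and no further time-dependent loss accrues.
    """
    loss = 0.0
    g_prev = 0.0
    for day in range(1, days + 1):
        if treat_day is not None and day > treat_day:
            break                       # properties locked in by treatment
        g = day / (10.0 + day)          # assumed hyperbolic time development
        frac = g - g_prev               # share of the ultimate value this step
        loss += n * f_c * phi_ult * frac    # creep increment
        loss += E_p * eps_sh * frac         # shrinkage increment
        g_prev = g
    return loss
```

In this toy model an earlier lock-in simply truncates the accumulation; the thesis' finding is that, for realistic treatment timings, the resulting difference in long-term deflection is negligible.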

Relevance: 100.00%

Abstract:

The drugs studied in this work have reportedly been used to commit drug-facilitated sexual assault (DFSA), commonly known as "date rape". Detection of the drugs was performed using high-performance liquid chromatography with ultraviolet detection (HPLC/UV), and identification with high-performance liquid chromatography-mass spectrometry (HPLC/MS) using selected ion monitoring (SIM). The objective of this study was to develop a single HPLC method for the simultaneous detection, identification, and quantitation of these drugs. The following drugs were simultaneously analyzed: gamma-hydroxybutyrate (GHB), scopolamine, lysergic acid diethylamide, ketamine, flunitrazepam, and diphenhydramine. The results showed increased sensitivity with electrospray (ES) ionization versus atmospheric pressure chemical ionization (APCI) in HPLC/MS: HPLC/ES/MS was approximately six times more sensitive than HPLC/APCI/MS and about fifty times more sensitive than HPLC/UV. A limit of detection (LOD) of 100 ppb was achieved for drug analysis using this method. The average squared linear regression coefficient of correlation (r2) was 0.933 for HPLC/UV and 0.998 for HPLC/ES/MS. The detection limits achieved by this method allowed for the detection of drug dosages used in beverage tampering, so the method can be used to screen beverages suspected of drug tampering. The results also demonstrated that solid-phase microextraction (SPME) did not improve sensitivity as an extraction technique when compared to direct injection of the drug standards.
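The reported r2 values come from linear calibration curves (detector response versus concentration). A minimal sketch of the underlying least-squares fit, with a hypothetical function name and illustrative data points:

```python
def calibration(concs, responses):
    """Least-squares calibration line response = m*conc + b, plus r^2.

    concs      standard concentrations (e.g. ppb)
    responses  detector responses (e.g. peak areas)
    """
    n = len(concs)
    mx = sum(concs) / n
    my = sum(responses) / n
    sxx = sum((x - mx) ** 2 for x in concs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(concs, responses))
    m = sxy / sxx                        # slope (sensitivity)
    b = my - m * mx                      # intercept
    ss_res = sum((y - (m * x + b)) ** 2 for x, y in zip(concs, responses))
    ss_tot = sum((y - my) ** 2 for y in responses)
    r2 = 1.0 - ss_res / ss_tot           # coefficient of determination
    return m, b, r2
```

An r2 close to 1 (as the 0.998 reported for HPLC/ES/MS) indicates the response is very nearly linear over the calibrated range.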

Relevance: 100.00%

Abstract:

Concrete substructures are often subjected to environmental deterioration, such as sulfate and acid attack, which leads to severe damage and causes structural degradation or even failure. To improve the durability of concrete, High Performance Concrete (HPC), in which cement is partially replaced with pozzolanic materials, has become widely used. However, HPC degradation mechanisms in sulfate and acidic environments are not completely understood. It is therefore important to evaluate the performance of HPC in such conditions and to predict concrete service life by establishing degradation models. This study began with a review of available environmental data in the State of Florida. A total of seven bridges were inspected. Concrete cores were taken from the bridge piles and subjected to microstructural analysis using a Scanning Electron Microscope (SEM). Ettringite was found to be the product of sulfate attack under sulfate and acidic conditions. To quantitatively analyze the concrete deterioration level, an image processing program was designed in Matlab, and crack percentage (Acrack/Asurface) was used to evaluate deterioration. Thereafter, correlation analysis was performed between five related variables and concrete deterioration: environmental sulfate concentration and bridge age were positively correlated with deterioration, while environmental pH level was negatively correlated. Besides environmental conditions, a concrete property factor, derived from laboratory testing data, was also included in the equation. Experimental tests implemented an accelerated expansion test under a controlled environment, with specimens of eight different mix designs, so that the effect of the pozzolanic replacement rate could be taken into account in the empirical equation. The empirical equation was then validated against existing bridges.
Results show that the proposed equations compare well with field test results, with a maximum deviation of ±20%. Two examples showing how to use the proposed equations are provided to guide practical implementation. In conclusion, the proposed approach of relating microcracks to deterioration improves on existing diffusion and sorption models, since sulfate attack causes cracking in concrete. The imaging technique provided in this study can also be used to quantitatively analyze concrete samples.
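The crack-percentage metric (Acrack/Asurface) can be sketched as a simple threshold-and-count over a grayscale image. This is an illustrative stand-in for the idea, not the study's Matlab program; the function name and threshold are assumptions.

```python
def crack_percentage(gray, threshold=60):
    """Crack percentage = 100 * (crack pixels) / (surface pixels).

    gray: 2-D list of 0-255 grayscale values (e.g. from a SEM image).
    Pixels darker than `threshold` are counted as crack; a real
    implementation would also denoise and morphologically clean the mask.
    """
    total = sum(len(row) for row in gray)
    crack = sum(1 for row in gray for px in row if px < threshold)
    return 100.0 * crack / total
```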

Relevance: 100.00%

Abstract:

Increasingly strict regulations on greenhouse gas emissions make fuel economy a pressing factor for automotive manufacturers. Lightweighting and engine downsizing are two strategies pursued to achieve the target. In this context, materials play a key role, since the thermo-mechanical loads they can sustain limit engine efficiency and component weight. The piston is one of the most stressed engine components and is traditionally made of Al alloys, whose weakness is maintaining adequate mechanical properties at high temperature, due to overaging and softening. The enhancement of the strength-to-weight ratio of Al alloys at high temperature was investigated through two approaches: increasing strength at high temperature or reducing alloy density. Several conventional and high performance Al-Si and Al-Cu alloys were characterized from a microstructural and mechanical point of view, investigating the effects of chemical composition, addition of transition elements, and heat treatment optimization in the temperature range specific to piston operation. Among the Al-Cu alloys, the research outlines the potential of two innovative Al-Cu-Li(-Ag) alloys, typically adopted for structural aerospace components. Moreover, due to the increased probability of abnormal combustion in high performance spark-ignition engines, the second part of the dissertation deals with the study of knocking damage on Al pistons. Thanks to cooperation with Ferrari S.p.A. and the Fluid Machinery Research Group - Unibo, several bench tests were carried out under controlled knocking conditions. Knocking damage mechanisms were investigated through failure analysis techniques, from visual analysis up to detailed SEM investigations.
These activities made it possible to relate piston knocking damage to engine parameters, with the final aim of developing an on-board knock controller able to increase engine efficiency without compromising engine functionality. Finally, attempts were made to quantify the knock-induced damage and provide a numerical relation with engine working conditions.

Relevance: 100.00%

Abstract:

The study analyses the calibration process of a newly developed high-performance plug-in hybrid electric passenger car powertrain. The complexity of modern powertrains and increasingly restrictive regulations on pollutant emissions are the primary challenges for the calibration of a vehicle's powertrain. In addition, OEM managers need to know as early as possible whether the vehicle under development will meet its target technical features (emissions included). This leads to the need for advanced calibration methodologies that keep powertrain development robust, timely, and cost-effective. The suggested solution is virtual calibration, which allows the control functions of a powertrain to be tuned before the powertrain is built. The aim of this study is to virtually calibrate the hybrid control unit functions in order to optimize pollutant emissions and fuel consumption. Starting from a model of the conventional vehicle, the powertrain is hybridized and integrated with emissions and aftertreatment models. After validation, the hybrid control unit strategies are optimized using the Model-in-the-Loop testing methodology. The calibration activities will then proceed with the implementation of a Hardware-in-the-Loop environment, which will allow the Engine and Transmission control units to be tested and calibrated effectively and in a time- and cost-saving manner.
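The core of Model-in-the-Loop calibration, tuning control functions against a plant model before hardware exists, can be sketched as a parameter sweep over a toy powertrain model. Everything here is hypothetical: the function names, the convex fuel/emissions maps, and the single scalar control setting stand in for the real control-function parameters.

```python
def plant(u):
    """Toy powertrain model: (fuel, emissions) for control setting u.

    The quadratic maps are assumed for illustration; a real MiL setup
    would run a full vehicle + aftertreatment simulation over a cycle.
    """
    fuel = (u - 2.0) ** 2 + 1.0
    emissions = (u - 3.0) ** 2 + 0.5
    return fuel, emissions

def mil_calibrate(model, candidates, w_fuel=1.0, w_em=1.0):
    """Model-in-the-Loop sweep: evaluate each candidate setting on the
    model and return the one with the lowest weighted cost."""
    def cost(u):
        fuel, em = model(u)
        return w_fuel * fuel + w_em * em
    return min(candidates, key=cost)
```

With equal weights, the toy optimum lands between the fuel-optimal and emissions-optimal settings, which is exactly the trade-off the real calibration negotiates.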

Relevance: 100.00%

Abstract:

The scope of this dissertation is to study the transport phenomena of small molecules in polymers and membranes for gas separation applications, with particular attention to energy efficiency and environmental sustainability. This work seeks to contribute to the development of new competitive selective materials through the characterization of novel organic polymers such as CANALs and ROMPs, as well as through the combination of selective materials into mixed matrix membranes (MMMs), to make membrane technologies competitive with traditional ones. Kinetic and thermodynamic aspects of the transport properties were investigated in ideal and non-ideal scenarios, such as mixed-gas experiments. The information gathered contributed to the fundamental understanding of phenomena such as CO2-induced plasticization and physical aging. Among the most significant results, ZIF-8/PPO MMMs provided materials whose permeability and selectivity for He/CO2 separation were higher than those of the pure materials. The CANALs feature a norbornyl benzocyclobutene backbone and thereby introduce a third typology of ladder polymers to the gas separation field, expanding the structural diversity of microporous materials. Because their rigid backbone is completely hydrocarbon-based and non-polar, CANALs are an ideal model system for investigating structure-property correlations. ROMPs were synthesized by ring opening metathesis living polymerization, which allowed the formation of bottlebrush polymers. CF3-ROMP proved to be ultrapermeable to CO2, with unprecedented plasticization resistance. Mixed-gas experiments in glassy polymers showed that solubility-selectivity controls the separation efficiency of materials under multicomponent conditions.
Finally, it was determined that the plasticization pressure is not an intrinsic property of a material and does not represent a state of the system, but rather results from the contributions of the solubility and diffusivity coefficients within the framework of the solution-diffusion model.
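The closing statement rests on the solution-diffusion model, in which permeability factors into a solubility coefficient and a diffusivity coefficient (P = S x D), and ideal selectivity correspondingly factors into solubility-selectivity and diffusivity-selectivity. A worked sketch with illustrative, unitless numbers:

```python
def permeability(S, D):
    """Solution-diffusion model: permeability = solubility * diffusivity."""
    return S * D

def selectivity(S_a, D_a, S_b, D_b):
    """Ideal selectivity of gas a over gas b, factored into its
    solubility-selectivity and diffusivity-selectivity contributions."""
    sol_sel = S_a / S_b
    diff_sel = D_a / D_b
    return sol_sel * diff_sel
```

The factorization is why a shift in solubility alone (for example under mixed-gas conditions) can control the overall separation efficiency even when diffusivities barely change.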

Relevance: 100.00%

Abstract:

High energy efficiency and high performance are the key requirements for Internet of Things (IoT) end-nodes. Exploiting clusters of multiple programmable processors has recently emerged as a suitable solution to address this challenge. However, one of the main bottlenecks for multi-core architectures is the instruction cache: private caches suffer from data replication and waste area, while fully shared caches lack scalability and form a bottleneck for the operating frequency. Hence, we propose a hybrid solution in which a larger shared cache (L1.5) backs small private caches (L1) connected to it through a low-latency interconnect. Performance is still limited by capacity misses when the L1 is small, so we add a sequential prefetch from L1 to L1.5 that improves performance with little area overhead. Moreover, to cut the critical path and improve timing, we optimized the core instruction fetch stage with non-blocking transfers, adopting a 4 x 32-bit ring buffer FIFO and adding a pipeline stage for conditional branches. We present a detailed comparison of the performance and energy efficiency of instruction cache architectures recently proposed for Parallel Ultra-Low-Power clusters. On average, when executing a set of real-life IoT applications, our two-level cache improves performance by up to 20% while losing 7% energy efficiency with respect to the private cache. Compared to a shared cache system, it improves performance by up to 17% with the same energy efficiency. Finally, up to 20% timing (maximum frequency) improvement and software control enable the two-level instruction cache with prefetch to adapt to various battery-powered use cases, balancing high performance and energy efficiency.
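The two-level hierarchy with sequential prefetch can be sketched as a toy LRU trace simulator; this is an illustration of the idea only, not the evaluated hardware, and all sizes and names are assumptions. On every L1 miss, the sequentially next line is also brought into the shared L1.5, so straight-line code streams hit in L1.5 instead of going to memory.

```python
def run_trace(addrs, l1_size=4, l15_size=16, prefetch=True):
    """Toy model of a per-core private L1 backed by a shared L1.5.

    Caches are LRU lists of line addresses. On an L1 miss the line is
    looked up in L1.5, then memory; with `prefetch`, the sequentially
    next line is also brought into L1.5 on every L1 miss.
    Returns (l1_hits, l15_hits, memory_fetches).
    """
    l1, l15 = [], []
    l1_hits = l15_hits = mem = 0

    def touch(cache, size, line):
        # Move `line` to the MRU position, evicting anything beyond `size`.
        if line in cache:
            cache.remove(line)
        cache.insert(0, line)
        del cache[size:]

    for a in addrs:
        if a in l1:
            l1_hits += 1
        else:
            if a in l15:
                l15_hits += 1
            else:
                mem += 1
            if prefetch:
                touch(l15, l15_size, a + 1)   # sequential prefetch into L1.5
        touch(l15, l15_size, a)
        touch(l1, l1_size, a)
    return l1_hits, l15_hits, mem
```

On a purely sequential fetch stream the prefetcher converts nearly all memory fetches into L1.5 hits, which is the capacity-miss relief the abstract describes.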

Relevance: 100.00%

Abstract:

This Doctoral Thesis aims to study and develop advanced, highly efficient battery chargers for full electric and plug-in electric cars. The document is strictly industry-oriented and relies on automotive standards and regulations. In the first part, a general overview of wireless power transfer battery chargers (WPTBCs) and a deep investigation of international standards are carried out. Then, given the rapidly increasing attention paid to WPTBCs by the automotive industry and the need to minimize weight, size, and number of components, this work focuses on architectures that realize a single stage for on-board power conversion, avoiding a DC/DC converter upstream of the battery. Building on the state of the art, the following sections focus on two stages of the architecture: the resonant tank and the primary DC/AC inverter. To reach maximum transfer efficiency while minimizing the weight and size of the vehicle assembly, a coordinated system-level design procedure for the resonant tank, along with an innovative control algorithm for the primary DC/AC inverter, is proposed. The presented solutions are generalized and adapted to the best trade-off compensation network topologies: Series-Series and Series-Parallel. To assess the effectiveness of the above-mentioned objectives, validation and testing are performed in a simulation environment, while experimental test benches are carried out in collaboration with Delft University of Technology (TU Delft).
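For a Series-Series compensation network, one basic resonant-tank sizing step is choosing the capacitance that resonates the coil inductance at the operating frequency. A minimal sketch with illustrative values (a 200 uH coil and the ~85 kHz band commonly used for vehicle wireless power transfer); the function name is an assumption:

```python
import math

def series_comp_capacitance(L, f0):
    """Series compensation: capacitance that resonates coil inductance
    L (henries) at operating frequency f0 (hertz).

    From the resonance condition w0 = 1/sqrt(L*C):  C = 1 / (w0^2 * L).
    """
    w0 = 2.0 * math.pi * f0
    return 1.0 / (w0 ** 2 * L)

# Illustrative sizing: 200 uH primary coil at 85 kHz
C = series_comp_capacitance(200e-6, 85e3)
```

The real design procedure coordinates this choice with coil geometry, load range, and the inverter control law, but the resonance condition anchors it.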

Relevance: 100.00%

Abstract:

This work deals with the development of calibration procedures and control systems to improve the performance and efficiency of modern spark ignition turbocharged engines. The algorithms developed are used to optimize and manage the spark advance and the air-to-fuel ratio, controlling knock and the exhaust gas temperature at the turbine inlet. The work described falls within the activity that the research group started in previous years with the industrial partner Ferrari S.p.A. The first chapter deals with the development of a control-oriented engine simulator based on a neural network approach, with which the main combustion indexes can be simulated. The second chapter deals with the development of a procedure to calibrate the spark advance and the air-to-fuel ratio offline, so that the engine runs under knock-limited conditions and at the maximum admissible exhaust gas temperature at the turbine inlet. This procedure is then converted into a model-based control system and validated with a Software-in-the-Loop approach using the engine simulator developed in the first chapter. Finally, it is implemented on rapid control prototyping hardware to manage combustion in steady-state and transient operating conditions at the test bench. The third chapter deals with the study of an innovative and cheap sensor for in-cylinder pressure measurement: a piezoelectric washer that can be installed between the spark plug and the engine head. The signal generated by this kind of sensor is studied, and a specific algorithm is developed to adjust the value of the knock index in real time. Finally, with the engine simulator developed in the first chapter, it is demonstrated that the innovative sensor can be coupled with the control system described in the second chapter, and that the performance obtained can match that reachable with standard in-cylinder pressure sensors.
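Running an engine at the knock limit is commonly handled by a fast-retard/slow-advance spark strategy: retard the spark sharply when knock is detected, then creep back toward the calibrated advance. A toy sketch of such a closed loop (hypothetical gains and names, not the thesis' model-based controller):

```python
def knock_controller(base_sa, knock_events, gain_retard=2.0,
                     gain_advance=0.25, max_sa=None):
    """Toy closed-loop knock control on spark advance (degrees).

    knock_events: iterable of booleans (True = knock detected that cycle).
    Returns the spark advance applied at each cycle: fast retard on
    knock, slow recovery toward the base calibration otherwise.
    """
    max_sa = base_sa if max_sa is None else max_sa
    sa = base_sa
    history = []
    for knock in knock_events:
        history.append(sa)                      # advance applied this cycle
        if knock:
            sa -= gain_retard                   # fast retard on knock
        else:
            sa = min(max_sa, sa + gain_advance) # slow re-advance, capped
    return history
```

The asymmetry of the gains is the design point: knock must be suppressed within a cycle or two, while recovery can be gradual to avoid oscillating around the knock limit.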

Relevance: 100.00%

Abstract:

This research activity aims at providing a reliable estimation of particular state variables and parameters concerning the dynamics and performance optimization of a MotoGP-class motorcycle, integrating the classical model-based approach with new methodologies involving artificial intelligence. The first topic of the research focuses on the estimation of the thermal behavior of the MotoGP carbon braking system. Numerical tools are developed to assess the instantaneous surface temperature distribution in the motorcycle's front brake discs. Within this application, other important brake parameters are identified using Kalman filters, such as the disc convection coefficient and the power distribution in the disc-pad contact region. Subsequently, a physical model of the brake is built to estimate the instantaneous braking torque. However, the results obtained with this approach are strongly limited by the knowledge of the friction coefficient (μ) between the disc rotor and the pads. Since the value of μ is a highly nonlinear function of many variables (namely temperature, pressure, and angular velocity of the disc), an analytical model for friction coefficient estimation is impractical to establish. To overcome this challenge, an innovative hybrid solution is implemented, combining the benefits of artificial intelligence (AI) with the classical model-based approach: the disc temperature estimated by the thermal model is processed by a machine learning algorithm that outputs the actual value of the friction coefficient, thus improving the braking torque computation performed by the physical model of the brake. Finally, the last topic of this research activity concerns the development of an AI algorithm to estimate the current sideslip angle of the motorcycle's front tire.
While a single-track motorcycle kinematic model and IMU accelerometer signals theoretically enable sideslip calculation, the presence of accelerometer noise leads to a significant drift over time. To address this issue, a long short-term memory (LSTM) network is implemented.
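The drift problem can be demonstrated with a toy example: integrating a biased, noisy accelerometer signal produces a velocity error that grows roughly as bias x time, which is why a purely kinematic sideslip estimate degrades and a learned estimator is brought in. All numbers and names below are illustrative assumptions.

```python
import random

def integrate_velocity(true_accel, bias=0.05, noise=0.01, dt=0.01, seed=0):
    """Integrate a biased, noisy accelerometer signal (assumed sensor
    model); returns (true_velocity, estimated_velocity) per sample."""
    rng = random.Random(seed)
    v_true = v_est = 0.0
    hist = []
    for a in true_accel:
        meas = a + bias + rng.gauss(0.0, noise)  # constant bias + white noise
        v_true += a * dt
        v_est += meas * dt
        hist.append((v_true, v_est))
    return hist

hist = integrate_velocity([0.0] * 1000)   # 10 s of a stationary vehicle
drift = abs(hist[-1][1] - hist[-1][0])    # grows roughly as bias * time
```

Even with zero true motion, the estimate walks away from the truth, and the error at 10 s is roughly bias x 10 s; this unbounded growth is what the LSTM-based estimator is meant to avoid.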

Relevance: 100.00%

Abstract:

Gaze estimation has gained interest in recent years as an important cue for inferring the internal cognitive state of humans. Whether in the form of a 3D gaze vector or a point of gaze (PoG), gaze estimation has been applied in various fields, such as human-robot interaction, augmented reality, medicine, aviation, and automotive. In the latter field, as part of Advanced Driver-Assistance Systems (ADAS), it allows the development of cutting-edge systems capable of mitigating road accidents by monitoring driver distraction. Gaze estimation can also be used to enhance the driving experience, for instance in autonomous driving, and to improve comfort through augmented reality components commanded by the driver's eyes. Although several high-performance real-time inference works already exist, only a few can work with just an RGB camera on computationally constrained devices, such as microcontrollers. This work aims to develop a low-cost, efficient, and high-performance embedded system capable of estimating the driver's gaze using deep learning and an RGB camera. The proposed system achieves near-SOTA performance with about 90% less memory footprint. Its ability to generalize to unseen environments was evaluated through a live demonstration, in which high performance and near real-time inference were obtained using a webcam and a Raspberry Pi 4.
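The geometric link between the two outputs mentioned above, 3D gaze vector and point of gaze, is a ray-plane intersection: the PoG is where the gaze ray meets the screen plane. A minimal sketch under an assumed camera/screen geometry (the function name and frame convention are assumptions, not the thesis' model):

```python
def point_of_gaze(origin, direction, screen_z=0.0):
    """Intersect the 3-D gaze ray with the screen plane z = screen_z.

    origin, direction: (x, y, z) tuples in an assumed frame where the
    screen lies in the z = screen_z plane. Returns the (x, y) PoG, or
    None if the ray is parallel to the plane or points away from it.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dz == 0:
        return None                     # ray parallel to the screen plane
    t = (screen_z - oz) / dz
    if t <= 0:
        return None                     # gaze points away from the screen
    return (ox + t * dx, oy + t * dy)
```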

Relevance: 100.00%

Abstract:

The idea of Grid Computing originated in the nineties and found concrete applications in contexts like the SETI@home project, where many computers (offered by volunteers) cooperated within a Grid environment, performing distributed computations that analyzed radio signals in search of extraterrestrial life. The Grid was composed of traditional personal computers, but with the emergence of the first mobile devices, such as Personal Digital Assistants (PDAs), researchers started theorizing the inclusion of mobile devices in Grid Computing; although impressive theoretical work was done, the idea was discarded due to the (mainly technological) limitations of the mobile devices available at the time. Decades have passed, and mobile devices are now far more performant and numerous, leaving a great amount of resources on smartphones and tablets untapped. Here we propose a solution for performing distributed computations in a Grid Computing environment that uses both desktop and mobile devices, exploiting resources from day-to-day mobile users that would otherwise go unused. The work starts with an introduction to Grid Computing, the evolution of mobile devices, the idea of integrating such devices into the Grid, and how to convince device owners to participate. The tone then becomes more technical, with an explanation of how Grid Computing actually works, followed by the technical challenges of integrating mobile devices into the Grid. Next, the model that constitutes the solution offered by this study is explained, followed by a chapter on the realization of a prototype that proves the feasibility of distributed computations over a Grid composed of both mobile and desktop devices. To conclude, future developments and ideas to improve this project are presented.
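Distributing work across heterogeneous desktop and mobile workers can be sketched as a greedy queue that always hands the next task to the device that will become free first, accounting for its speed. The names, speeds, and cost model are hypothetical, not the study's scheduler:

```python
import heapq

def schedule(tasks, workers):
    """Greedy list scheduling over heterogeneous devices.

    tasks:   list of task costs in abstract work units
    workers: dict name -> speed (work units per second); mobile devices
             would typically have lower speeds than desktops.
    Returns dict name -> list of assigned task indices.
    """
    # Min-heap of (time the worker becomes free, worker name).
    heap = [(0.0, name) for name in sorted(workers)]
    heapq.heapify(heap)
    assignment = {name: [] for name in workers}
    for i, cost in enumerate(tasks):
        free_at, name = heapq.heappop(heap)      # earliest-free worker
        assignment[name].append(i)
        heapq.heappush(heap, (free_at + cost / workers[name], name))
    return assignment
```

A real mobile Grid would also weigh battery level, connectivity, and churn (devices joining and leaving), but the earliest-free-worker queue is the core dispatch loop.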

Relevance: 100.00%

Abstract:

One of the challenges in scientific visualization is to generate software libraries suitable for the large-scale data emerging from tera-scale simulations and instruments. We describe the efforts currently under way at SDSC and NPACI to address these challenges. The scope of the SDSC project spans data handling, graphics, visualization, and scientific application domains. Components of the research focus on the following areas: intelligent data storage, layout, and handling, using associated “Floor-Plan” metadata; performance optimization on parallel architectures; extension of SDSC’s scalable, parallel, direct volume renderer to allow perspective viewing; and interactive rendering of fractional images (“imagelets”), which facilitates the examination of large datasets. These concepts are coordinated within a data-visualization pipeline, which operates on component data blocks sized to fit within the available computing resources. A key feature of the scheme is that the metadata tagging the data blocks can be propagated and applied consistently: at the disk level, in distributing the computations across parallel processors, in “imagelet” composition, and in feature tagging. The work reflects the emerging challenges and opportunities presented by the ongoing progress in high-performance computing (HPC) and the deployment of the data, computational, and visualization Grids.
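The metadata-driven pipeline idea, tagging each data block with a small summary so later stages can act on blocks without reading their contents, can be sketched as follows. The field names and the min/max summary are illustrative assumptions, not the actual "Floor-Plan" format:

```python
def make_blocks(data, block_size):
    """Split a dataset into blocks, each tagged with summary metadata
    that downstream pipeline stages can propagate and consult."""
    blocks = []
    for start in range(0, len(data), block_size):
        chunk = data[start:start + block_size]
        meta = {"offset": start, "size": len(chunk),
                "min": min(chunk), "max": max(chunk)}   # assumed summary
        blocks.append({"meta": meta, "data": chunk})
    return blocks

def render_pipeline(blocks, threshold):
    """Return offsets of blocks worth rendering: blocks whose metadata
    proves they contain no value above `threshold` are skipped without
    ever touching their data."""
    return [b["meta"]["offset"] for b in blocks if b["meta"]["max"] >= threshold]
```

The payoff is the one described in the abstract: because the tags travel with the blocks, the same skip/route decisions can be made consistently at the disk level, across parallel processors, and during composition.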