23 results for Low-power applications
Abstract:
The study presented here concerns non-conventional LASER welding applications and is organized into three main strands. In the first, the feasibility of fusion welding with a continuous-wave LASER was assessed on Aluminum Foam Sandwich panels and on aluminium-foam-filled tubes. The study highlighted numerous operating guidelines concerning the issues involved in welding the outer skins of the components, and demonstrated the feasibility of an integrated LASER joining approach (welding followed by a post-weld heat treatment) for producing complete joints of foam-filled tubular parts with restoration of the cellular structure at the joint interface. The second strand concerns the application of a very low power LASER source, operating in short-pulse regime, to the welding of high-carbon steel. The study showed that this type of source, usually employed for ablation and marking operations, can also be applied to the welding of sub-millimetric thicknesses. In this phase the role of the process parameters on the joint shape was highlighted and the feasibility window of the process was defined. The study was completed by investigating the possibility of applying a post-weld LASER treatment to soften any hardened zones. The last strand focused on the use of high power density sources (60 MW/cm^2) for deep-penetration welding of structural steels.
The experimental activity and the analysis of the results were carried out using Design of Experiments techniques to assess the precise role of all process parameters, and numerous considerations on hot-crack formation were put forward.
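The Design of Experiments approach mentioned above can be sketched as a two-level full factorial plan. The factors, levels and response used below (power, welding speed, focal position) are hypothetical illustration values, not the actual parameters of the study:

```python
# Illustrative sketch of a 2^3 full factorial Design of Experiments plan.
# All factor names and levels are assumed examples, not the thesis' data.
from itertools import product

factors = {
    "power_W":    (800, 1200),   # hypothetical low/high levels
    "speed_mm_s": (20, 40),
    "focus_mm":   (-1.0, 0.0),
}

# Every combination of low/high levels: 2^3 = 8 experimental runs.
design = [dict(zip(factors, levels)) for levels in product(*factors.values())]

def main_effect(design, responses, factor):
    """Average response at the high level minus average at the low level."""
    lo, hi = factors[factor]
    hi_avg = sum(r for run, r in zip(design, responses)
                 if run[factor] == hi) / (len(design) / 2)
    lo_avg = sum(r for run, r in zip(design, responses)
                 if run[factor] == lo) / (len(design) / 2)
    return hi_avg - lo_avg
```

Main effects computed this way are the first step of the kind of parameter-role assessment the abstract describes; interaction effects follow the same contrast logic.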
Abstract:
In the present thesis a thorough multiwavelength analysis of a number of galaxy clusters known to be experiencing a merger event is presented. The bulk of the thesis consists of the analysis of deep radio observations of six merging clusters, which host extended radio emission on the cluster scale. A composite optical and X–ray analysis is performed in order to obtain a detailed and comprehensive picture of the cluster dynamics and possibly derive hints about the properties of the ongoing merger, such as the involved mass ratio, geometry and time scale. The combination of the high quality radio, optical and X–ray data allows us to investigate the implications of the ongoing merger for the cluster radio properties, focusing on the phenomenon of cluster-scale diffuse radio sources, known as radio halos and relics. A total of six merging clusters was selected for the present study: A3562, A697, A209, A521, RXCJ 1314.4–2515 and RXCJ 2003.5–2323. All of them were known, or suspected, to possess extended radio emission on the cluster scale, in the form of a radio halo and/or a relic. High sensitivity radio observations were carried out for all clusters using the Giant Metrewave Radio Telescope (GMRT) at low frequency (i.e. ≤ 610 MHz), in order to test the presence of a diffuse radio source and/or analyse in detail the properties of the hosted extended radio emission. For three clusters, the GMRT information was combined with higher frequency data from Very Large Array (VLA) observations. A re–analysis of the optical and X–ray data available in the public archives was carried out for all sources. Proprietary deep XMM–Newton and Chandra observations were used to investigate the merger dynamics in A3562. Thanks to our multiwavelength analysis, we were able to confirm the existence of a radio halo and/or a relic in all clusters, and to connect their properties and origin to the reconstructed merging scenario for most of the investigated cases.
• The existence of a small size and low power radio halo in A3562 was successfully explained in the theoretical framework of the particle re–acceleration model for the origin of radio halos, which invokes the re–acceleration of pre–existing relativistic electrons in the intracluster medium by merger–driven turbulence.
• A giant radio halo was found in the massive galaxy cluster A209, which has likely undergone a past major merger and is currently experiencing a new merging process in a direction roughly orthogonal to the old merger axis. A giant radio halo was also detected in A697, whose optical and X–ray properties may be suggestive of a strong merger event along the line of sight. Given the cluster mass and the kind of merger, the existence of a giant radio halo in both clusters is expected in the framework of the re–acceleration scenario.
• A radio relic was detected at the outskirts of A521, a highly dynamically disturbed cluster which is accreting a number of small mass concentrations. A possible explanation for its origin requires the presence of a merger–driven shock front at the location of the source. The spectral properties of the relic may support such an interpretation and require a Mach number M ≲ 3 for the shock.
• The galaxy cluster RXCJ 1314.4–2515 is exceptional and unique in hosting two peripheral relic sources, extending on the Mpc scale, and a central small size radio halo. The existence of these sources requires the presence of an ongoing energetic merger. Our combined optical and X–ray investigation suggests that a strong merging process between two or more massive subclumps may be ongoing in this cluster. Thanks to forthcoming optical and X–ray observations, we will reconstruct in detail the merger dynamics and derive its energetics, to be related to the energy necessary for the particle re–acceleration in this cluster.
• Finally, RXCJ 2003.5–2323 was found to possess a giant radio halo.
This source is among the largest, most powerful and most distant (z = 0.317) halos imaged so far. Unlike other radio halos, it shows a very peculiar morphology with bright clumps and filaments of emission, whose origin might be related to the relatively high redshift of the hosting cluster. Although very little optical and X–ray information is available about the cluster dynamical stage, the results of our optical analysis suggest the presence of two massive substructures which may be interacting with the cluster. Forthcoming observations in the optical and X–ray bands will allow us to confirm the expected high merging activity in this cluster. Throughout the present thesis a cosmology with H0 = 70 km s−1 Mpc−1, Ωm = 0.3 and ΩΛ = 0.7 is assumed.
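Under the quoted cosmology, the distance to the most remote halo in the sample can be computed by direct integration. The sketch below evaluates the luminosity distance at z = 0.317 for a flat ΛCDM model with H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7; it is an illustration of the standard formula, not a calculation taken from the thesis:

```python
# Luminosity distance in a flat LCDM cosmology (illustrative sketch):
# D_L = (1+z) * (c/H0) * integral_0^z dz'/E(z'), trapezoidal rule.
from math import sqrt

C_KM_S = 299792.458           # speed of light, km/s
H0, OM, OL = 70.0, 0.3, 0.7   # cosmology assumed throughout the thesis

def E(z):
    """Dimensionless Hubble rate E(z) for flat LCDM."""
    return sqrt(OM * (1.0 + z) ** 3 + OL)

def luminosity_distance_mpc(z, steps=10000):
    """Numerically integrate the comoving distance, then scale by (1+z)."""
    dz = z / steps
    integral = sum((1.0 / E(i * dz) + 1.0 / E((i + 1) * dz)) * dz / 2.0
                   for i in range(steps))
    return (1.0 + z) * (C_KM_S / H0) * integral
```

For z = 0.317 this yields a luminosity distance of roughly 1.65 Gpc, which is the scale on which "most distant halos imaged so far" should be read.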
Abstract:
The development of the digital electronics market is founded on the continuous reduction of transistor size, to reduce the area, power and cost and to increase the computational performance of integrated circuits. This trend, known as technology scaling, is approaching the nanometer range. The uncertainty of the lithographic process in the manufacturing stage increases as transistor sizes scale down, resulting in larger parameter variations in future technology generations. Furthermore, the exponential relationship between leakage current and threshold voltage limits the scaling of the threshold and supply voltages, increasing the power density and creating local thermal issues, such as hot spots, thermal runaway and thermal cycles. In addition, the introduction of new materials and the smaller device dimensions reduce transistor robustness, which, combined with high temperatures and frequent thermal cycles, speeds up wear-out processes. These effects are no longer addressable at the process level alone. Consequently, deep sub-micron devices will require solutions involving several design levels, such as system and logic, and new approaches called Design For Manufacturability (DFM) and Design For Reliability. The purpose of these approaches is to bring awareness of device reliability and manufacturability into the early design stages, in order to introduce logic and system solutions able to cope with yield and reliability loss. The ITRS roadmap suggests the following research steps to integrate design for manufacturability and reliability into the standard automated CAD design flow: i) the implementation of new analysis algorithms able to predict the thermal behavior of the system and its impact on power and speed performance; ii) high-level wear-out models able to predict the mean time to failure (MTTF) of the system; iii) statistical performance analysis able to predict the impact of process variation, both random and systematic. The new analysis tools have to be developed alongside new logic and system strategies to cope with future challenges, for instance: i) thermal management strategies that increase the reliability and lifetime of devices by acting on tunable parameters, such as supply voltage or body bias; ii) error detection logic able to interact with compensation techniques such as Adaptive Supply Voltage (ASV), Adaptive Body Bias (ABB) and error recovery, in order to increase yield and reliability; iii) architectures that are fundamentally resistant to variability, including locally asynchronous designs, redundancy, and error-correcting signal encodings (ECC). The literature already features works addressing the prediction of the MTTF, papers focusing on thermal management in general-purpose chips, and publications on statistical performance analysis. In my PhD research activity, I investigated the need for thermal management in future embedded low-power Network-on-Chip (NoC) devices. I developed a thermal analysis library, which has been integrated in a NoC cycle-accurate simulator and in an FPGA-based NoC simulator. The results have shown that an accurate layout distribution can avoid the onset of hot spots in a NoC chip. Furthermore, the application of thermal management can reduce the temperature and the number of thermal cycles, increasing system reliability. The thesis therefore advocates the need to integrate thermal analysis in the first design stages of embedded NoC design. Later on, I focused my research on the development of a statistical process variation analysis tool able to address both random and systematic variations. The tool was used to analyze the impact of self-timed asynchronous logic stages in an embedded microprocessor. The results confirmed the capability of self-timed logic to increase manufacturability and reliability. Furthermore, we used the tool to investigate the suitability of low-swing techniques for NoC system communication under process variations. In this case we discovered the superior robustness of low-swing links to systematic process variation; they show a good response to compensation techniques such as ASV and ABB. Hence low-swing signaling is a good alternative to standard CMOS communication in terms of power, speed, reliability and manufacturability. In summary, my work proves the advantage of integrating a statistical process variation analysis tool in the first stages of the design flow.
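The kind of statistical analysis described above can be illustrated with a tiny Monte Carlo experiment: gate delay under random threshold-voltage variation, and the effect of an ASV-style supply-voltage knob. The alpha-power delay model and all numerical values below are illustrative assumptions, not the thesis' actual tool or data:

```python
# Monte Carlo sketch: delay spread under random Vth variation, and how
# raising Vdd (Adaptive Supply Voltage) reduces the relative sensitivity.
# Model and numbers are assumed for illustration only.
import random

random.seed(0)
ALPHA = 1.3                       # velocity-saturation exponent (assumed)
VTH_MEAN, VTH_SIGMA = 0.30, 0.03  # threshold voltage statistics, V (assumed)

def delay(vdd, vth):
    """Alpha-power-law gate delay (arbitrary units)."""
    return vdd / (vdd - vth) ** ALPHA

def relative_spread(vdd, samples=20000):
    """std(delay) / mean(delay) under Gaussian Vth variation."""
    ds = [delay(vdd, random.gauss(VTH_MEAN, VTH_SIGMA)) for _ in range(samples)]
    mean = sum(ds) / len(ds)
    var = sum((d - mean) ** 2 for d in ds) / len(ds)
    return var ** 0.5 / mean

spread_nominal = relative_spread(1.0)   # nominal supply
spread_asv = relative_spread(1.2)       # ASV-boosted supply
```

Because the relative delay sensitivity scales roughly as ALPHA·σ/(Vdd − Vth), the boosted supply shows a visibly smaller spread, which is the intuition behind using ASV/ABB as compensation knobs.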
Abstract:
The quest for universal memory is driving the rapid development of memories with superior all-round capabilities in non-volatility, high speed, high endurance and low power. The memory subsystem accounts for a significant share of the cost and power budget of a computer system. Current DRAM-based main memory systems are starting to hit the power and cost limit. To resolve this issue, the industry is improving existing technologies such as Flash and exploring new ones. Among those new technologies is Phase Change Memory (PCM), which overcomes some of the shortcomings of Flash, such as durability and scalability. This alternative non-volatile memory technology, which uses the resistance contrast in phase-change materials, offers more density relative to DRAM, and can help to increase the main memory capacity of future systems while remaining within the cost and power constraints. Chalcogenide materials can suitably be exploited for manufacturing phase-change memory devices. Charge transport in the amorphous chalcogenide GST used for memory devices is modeled using two contributions: hopping of trapped electrons and motion of band electrons in extended states. Crystalline GST exhibits an almost Ohmic I(V) curve. In contrast, amorphous GST shows a high resistance at low biases while, above a threshold voltage, a transition takes place from a highly resistive to a conductive state, characterized by a negative differential-resistance behavior. A clear and complete understanding of the threshold behavior of the amorphous phase is fundamental for exploiting such materials in the fabrication of innovative non-volatile memories. The type of feedback that produces the snapback phenomenon is described as a filamentation in energy, controlled by electron–electron interactions between trapped electrons and band electrons. The model thus derived is implemented within a state-of-the-art simulator.
An analytical version of the model is also derived and is useful for discussing the snapback behavior and the scaling properties of the device.
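The two-contribution transport picture described above can be sketched in a generic field-assisted hopping plus band-drift form; this is an illustrative textbook-style expression with our own notation, not necessarily the exact model derived in the thesis:

```latex
\[
J \;=\; J_{\mathrm{hop}} + J_{\mathrm{band}}
  \;=\; q\, n_T\, \nu_0\, d\,
        e^{-E_A/kT}\,
        \sinh\!\left(\frac{qFd}{2kT}\right)
  \;+\; q\, n_B\, \mu_B\, F ,
\]
```

where the first term models the field-assisted hopping of trapped electrons between localized states (trap density \(n_T\), attempt frequency \(\nu_0\), inter-trap distance \(d\), activation energy \(E_A\)) and the second term the drift of band electrons in extended states (density \(n_B\), mobility \(\mu_B\)) under the electric field \(F\). The threshold/snapback behavior arises when feedback makes the populations \(n_T\) and \(n_B\) field-dependent.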
Abstract:
In this thesis, the industrial problem of controlling a Permanent Magnet Synchronous Motor in a sensorless configuration has been addressed, and in particular the task of estimating the unknown "parameters" necessary for the application of standard motor control algorithms. Several techniques have been proposed in the literature to cope with this task; among them, the approach based on model-based nonlinear observers has been followed. The mechanical dynamics have been neglected in the motor model on practical and physical grounds, so only the electromagnetic dynamics have been used for the observer design. The first observer proposed is based on the stator current and stator flux dynamics described in a generic rotating reference frame. The stator flux dynamics are known apart from their initial conditions, which are estimated, together with the speed (also unknown), by means of adaptive theory. The second observer proposed is based on the stator current and rotor flux dynamics described in a self-aligning reference frame. The rotor flux dynamics are described in the stationary reference frame using polar coordinates instead of the classical Cartesian coordinates, through the estimation of the amplitude and speed of the rotor flux. The stability proof is derived in a singular perturbation framework, which allows the current estimation errors to be used as a measure of the rotor flux estimation errors. The stability properties have been derived using a specific theory for systems with time-scale separation, which guarantees semi-global practical stability. For the two observers, ideal and realistic simulations have been performed to prove their effectiveness; in the realistic simulations the effects of inverter nonlinearities have been introduced, showing the already known problems of model-based observers in low-speed applications.
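As a reference for the observer design discussed above, the electromagnetic dynamics of a surface-mounted PMSM in the stationary frame can be written schematically as follows; the notation is ours, and the thesis' exact formulation (rotating or self-aligning frames, polar coordinates) may differ:

```latex
\[
\dot{\lambda}_{\alpha\beta} \;=\; u_{\alpha\beta} - R_s\, i_{\alpha\beta},
\qquad
\lambda_{\alpha\beta} \;=\; L\, i_{\alpha\beta}
  + \varphi \begin{bmatrix} \cos\theta \\ \sin\theta \end{bmatrix},
\qquad
\dot{\theta} \;=\; \omega ,
\]
```

where \(\lambda_{\alpha\beta}\) is the stator flux, \(u_{\alpha\beta}\) the stator voltage, \(i_{\alpha\beta}\) the measured stator current, \(R_s\) the stator resistance, \(L\) the stator inductance, \(\varphi\) the permanent-magnet flux amplitude, \(\theta\) the rotor electrical angle and \(\omega\) the electrical speed. The sensorless problem consists in reconstructing \(\theta\) and \(\omega\) from \(u_{\alpha\beta}\) and \(i_{\alpha\beta}\) alone, which is why the flux initial conditions and the speed appear as the unknowns estimated by the observers.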
Abstract:
Despite the many issues faced in the past, the evolutionary trend of silicon has kept its constant pace. Today an ever-increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable by the many-core paradigm is limited by several factors. Memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely limit the potential computational capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural explorations and validate design choices. In this thesis we focus on the aforementioned aspects: a flexible and accurate Virtual Platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations, focusing on the instruction caching architecture and on hybrid HW/SW synchronization mechanisms. Besides architectural implications, another issue of embedded systems is considered: energy efficiency. Near-Threshold Computing (NTC) is a key research area in the Ultra-Low-Power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and mitigates thermal bottlenecks. The physical implications of modern deep sub-micron technology severely limit the performance and reliability of modern designs. Reliability becomes a major obstacle when operating in NTC; in particular, memory operation becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome reliability issues and, at the same time, improve energy efficiency by means of aggressive voltage scaling when allowed by workload requirements. Variability is another great drawback of near-threshold operation. The greatly increased sensitivity to threshold voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture.
By means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.
Abstract:
This thesis investigates interactive scene reconstruction and understanding using RGB-D data only. Indeed, we believe that depth cameras will remain, in the near future, a cheap and low-power 3D sensing alternative, suitable for mobile devices as well. Therefore, our contributions build on top of state-of-the-art approaches to achieve advances in three main challenging scenarios, namely mobile mapping, large-scale surface reconstruction and semantic modeling. First, we will describe an effective approach to Simultaneous Localization And Mapping (SLAM) on platforms with limited resources, such as a tablet device. Unlike previous methods, dense reconstruction is achieved by reprojection of RGB-D frames, while local consistency is maintained by deploying relative bundle adjustment principles. We will show quantitative results comparing our technique to the state-of-the-art, as well as detailed reconstructions of various environments ranging from rooms to small apartments. Then, we will address large-scale surface modeling from depth maps exploiting parallel GPU computing. We will develop a real-time camera tracking method based on the popular KinectFusion system and an online surface alignment technique capable of counteracting drift errors and closing small loops. We will show very high quality meshes outperforming existing methods on publicly available datasets as well as on data recorded with our RGB-D camera, even in complete darkness. Finally, we will move to our Semantic Bundle Adjustment framework, which effectively combines object detection and SLAM in a unified system. Though the mathematical framework we will describe is not restricted to a particular sensing technology, in the experimental section we will refer, again, only to RGB-D sensing. We will discuss successful implementations of our algorithm, showing the benefit of joint object detection, camera tracking and environment mapping.
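The reprojection step at the heart of dense RGB-D mapping can be sketched in a few lines: a depth pixel is back-projected to a 3D point with the pinhole model, moved by a rigid camera motion, and reprojected into another view. The intrinsics and pose below are made-up illustration values, not those of any specific sensor or of the thesis' system:

```python
# Minimal pinhole back-projection / reprojection sketch for a depth pixel.
# Intrinsics (FX, FY, CX, CY) and the rigid motion are assumed examples.
from math import cos, sin

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5  # assumed pinhole intrinsics

def back_project(u, v, depth):
    """Pixel (u, v) with metric depth -> 3D point in camera coordinates."""
    return ((u - CX) * depth / FX, (v - CY) * depth / FY, depth)

def transform(p, yaw=0.0, t=(0.0, 0.0, 0.0)):
    """Rigid motion: rotation about the y axis by `yaw`, then translation t."""
    x, y, z = p
    xr = cos(yaw) * x + sin(yaw) * z
    zr = -sin(yaw) * x + cos(yaw) * z
    return (xr + t[0], y + t[1], zr + t[2])

def project(p):
    """3D point in camera coordinates -> pixel (u, v)."""
    x, y, z = p
    return (FX * x / z + CX, FY * y / z + CY)
```

Warping every pixel of a depth frame this way into the current view is what "dense reconstruction by reprojection of RGB-D frames" amounts to, with relative bundle adjustment keeping the chain of poses locally consistent.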
Abstract:
This thesis focuses on energy efficiency in wireless networks from the transmission and information-diffusion points of view. In particular, on one hand, communication efficiency is investigated, attempting to reduce the consumption during transmissions, while on the other hand the energy efficiency of the procedures required to distribute information among wireless nodes in complex networks is taken into account. As far as energy-efficient communications are concerned, an innovative transmission scheme reusing signals of opportunity is introduced. This kind of signal has never previously been studied in the literature for communication purposes. The aim is to provide a way of transmitting information with energy consumption close to zero. On the theoretical side, starting from a general communication channel model subject to a limited input amplitude, the theme of low-power transmission signals is tackled from the perspective of stating sufficient conditions for the capacity-achieving input distribution to be discrete. Finally, the focus is shifted towards the design of energy-efficient algorithms for the diffusion of information. In particular, the endeavours are aimed at solving an estimation problem distributed over a wireless sensor network. The proposed solutions are deeply analyzed both to ensure their energy efficiency and to guarantee their robustness against losses during the diffusion of information (more generally, against truncation of the information diffusion).
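The amplitude-constrained capacity problem mentioned above can be stated schematically as follows (the notation is ours, and the thesis' channel model is more general than the example cited):

```latex
\[
C(A) \;=\; \max_{F_X :\ \operatorname{supp}(F_X) \subseteq [-A,\,A]} I(X; Y),
\]
```

i.e. the mutual information is maximized over input distributions \(F_X\) supported on the interval \([-A, A]\). For the scalar Gaussian channel, the classical result by Smith (1971) shows that the maximizing distribution is discrete with a finite number of mass points; sufficient conditions of this kind, for the broader class of channels considered in the thesis, are what the abstract refers to.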