7 results for Micron
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Despite the many issues faced in the past, silicon technology has kept evolving at a constant pace. Today an ever-increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable by the many-core paradigm is limited by several factors. Limited memory bandwidth, combined with inefficient synchronization mechanisms, can severely constrain the achievable computational performance. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural exploration and to validate design choices. In this thesis we focus on the aforementioned aspects: a flexible and accurate Virtual Platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations, focusing on the instruction-cache architecture and on hybrid HW/SW synchronization mechanisms. Besides architectural implications, another issue of embedded systems is considered: energy efficiency. Near Threshold Computing (NTC) is a key research area in the Ultra-Low-Power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and mitigates thermal bottlenecks. The physical implications of modern deep sub-micron technology severely limit the performance and reliability of modern designs. Reliability becomes a major obstacle when operating in the NTC regime: memory operation in particular becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome reliability issues and, at the same time, improve energy efficiency by means of aggressive voltage scaling when allowed by workload requirements. Variability is another great drawback of near-threshold operation. The greatly increased sensitivity to threshold-voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture. By means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.
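The abstract does not detail how the voltage-scaling decision is taken; as a rough illustration of the general idea (scaling the supply down only as far as the workload deadline and a memory-reliability floor allow), a minimal sketch might look as follows. All voltage levels, frequencies and thresholds below are hypothetical placeholders, not the thesis's actual mechanism.

```python
# Hypothetical sketch: pick the lowest supply voltage that still meets the
# workload deadline, never going below a reliability floor for SRAM operation.
# Voltage levels, frequencies and the reliability floor are illustrative only.

V_LEVELS = [(0.5, 100e6), (0.7, 300e6), (0.9, 600e6), (1.1, 800e6)]  # (Vdd [V], f [Hz])
V_RELIABILITY_FLOOR = 0.6   # below this, assume unprotected SRAM becomes unreliable

def pick_operating_point(cycles_needed, deadline_s, memory_protected=False):
    """Return the lowest-energy (Vdd, f) pair that meets the deadline."""
    for vdd, freq in V_LEVELS:                      # levels sorted by increasing Vdd
        if not memory_protected and vdd < V_RELIABILITY_FLOOR:
            continue                                # skip unreliable operating points
        if cycles_needed / freq <= deadline_s:
            return vdd, freq                        # first feasible point is lowest energy
    return V_LEVELS[-1]                             # fall back to the fastest point

print(pick_operating_point(cycles_needed=2e6, deadline_s=0.01))
```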
Abstract:
The conversion coefficients from air kerma to the ICRU operational dose-equivalent quantities for ENEA's realization of the X-radiation qualities L10-L35 of the ISO "Low Air-Kerma rate" series (L), N10-N40 of the ISO "Narrow spectrum" series (N) and H10-H60 of the ISO "High Air-Kerma rate" series (H), plus two beams at 5 kV and 7.5 kV, were determined by utilising X-ray spectrum measurements. The pulse-height spectra were measured using a planar high-purity germanium (HPGe) spectrometer and unfolded to fluence spectra using a stripping procedure, then validated against Monte Carlo-generated data of the spectrometer response. The portable HPGe detector has a diameter of 8.5 mm and a thickness of 5 mm. The entrance window of the crystal is collimated by a 0.5 mm thick aluminum ring to an open diameter of 6.5 mm. The crystal is mounted at a distance of 5 mm from the beryllium window (thickness 25.4 micron). The Monte Carlo method (MCNP-4C) was used to calculate the efficiency, escape and Compton curves of the planar HPGe detector in the 5-60 keV energy range. These curves were used for the determination of the photon spectra produced by the X-ray machine SEIFERT ISOVOLT 160 kV, in order to allow a precise characterization of photon beams in the low-energy range, according to ISO 4037. The detector was modelled with the MCNP computer code and validated with experimental data. To verify the measurement and stripping procedures, the first and second half-value layers and the air-kerma rate were calculated from the count spectra and compared with the values measured using a free-air ionization chamber. For each radiation quality, the spectrum was characterized by the parameters given in ISO 4037-1. The conversion coefficients from air kerma to the ICRU operational quantities Hp(10), Hp(0.07), H'(0.07) and H*(10) were calculated using monoenergetic conversion coefficients. The results are discussed with respect to ISO 4037-4 and compared with published results for low-energy X-ray spectra. The main motivation for this work was the lack of a treatment of the low photon-energy region (from a few keV up to about 60 keV).
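As a sketch of the final step described above — averaging monoenergetic conversion coefficients over the measured quality — the mean coefficient can be computed as a kerma-weighted average over the unfolded fluence spectrum. The numeric arrays below are placeholders only; in practice the monoenergetic coefficients and the mass energy-transfer coefficients of air would be interpolated from the ISO 4037-3 tables and standard compilations.

```python
import numpy as np

# Placeholder data: energy bins [keV], unfolded fluence spectrum [a.u.],
# mass energy-transfer coefficients of air [a.u.] and monoenergetic
# conversion coefficients h_k(E) [Sv/Gy] (placeholder values, not tabulated ones).
E     = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
phi   = np.array([0.1, 0.8, 1.0, 0.6, 0.2])      # fluence per bin
mu_tr = np.array([4.6, 1.3, 0.52, 0.26, 0.15])   # (mu_tr/rho) of air, placeholder
h_k   = np.array([0.41, 0.77, 0.99, 1.10, 1.17])  # h_k(E), placeholder

# Air-kerma spectrum: K(E) proportional to phi(E) * E * (mu_tr/rho)(E)
kerma = phi * E * mu_tr

# Spectrum-averaged conversion coefficient: kerma-weighted mean of h_k(E)
h_mean = np.sum(h_k * kerma) / np.sum(kerma)
print(f"mean conversion coefficient: {h_mean:.3f} Sv/Gy")
```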
Abstract:
During the last few years, several methods have been proposed to study and evaluate characteristic properties of the human skin using non-invasive approaches. Mostly, these methods cover aspects related either to dermatology, to analyze skin physiology and evaluate the effectiveness of medical treatments of skin diseases, or to dermocosmetics and cosmetic science, to evaluate, for example, the effectiveness of anti-aging treatments. For these purposes a routine-friendly approach must be followed. Although very accurate and high-resolution measurements can be achieved using conventional methods, such as optical or mechanical profilometry, their use is quite limited, primarily because of the high cost of the required instrumentation, which is also usually cumbersome; this highlights some of the limitations for a routine-based analysis. This thesis aims to investigate the feasibility of a non-invasive skin characterization system based on the analysis of capacitive images of the skin surface. The system relies on a portable CMOS capacitive device which provides a 50 micron/pixel resolution capacitance map of the skin micro-relief. In order to extract characteristic features of the skin topography, image analysis techniques, such as watershed segmentation and wavelet analysis, have been used to detect the main structures of interest: the wrinkles and plateaus of the typical micro-relief pattern. In order to validate the method, the features extracted from a dataset of skin capacitive images acquired during dermatological examinations of a group of healthy volunteers have been compared with the age of the subjects involved, showing good correlation with the skin-ageing effect. Detailed analysis of the output of the capacitive sensor, compared with optical profilometry of silicone replicas of the same skin area, has revealed the potential and some limitations of this technology. Also, applications to follow-up studies, as needed to objectively evaluate the effectiveness of treatments in a routine manner, are discussed.
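As an illustration of the watershed segmentation step mentioned above, a minimal marker-based watershed on a capacitance map could be sketched with scikit-image as follows. The percentile thresholds, the scalar feature, and the assumption that wrinkles appear as low-capacitance furrows between higher-value plateaus are illustrative choices, not the thesis's actual pipeline.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, segmentation

def segment_microrelief(capacitance_map):
    """Split a 2D capacitance map into wrinkle furrows and plateaus.

    Assumes wrinkles show up as low-value furrows between higher-value
    plateaus; the percentile thresholds are illustrative placeholders.
    """
    img = capacitance_map.astype(float)

    # Elevation map for the watershed: gradient magnitude of the image.
    elevation = filters.sobel(img)

    # Seed markers: 1 = wrinkle furrow, 2 = plateau.
    markers = np.zeros_like(img, dtype=int)
    markers[img < np.percentile(img, 20)] = 1
    markers[img > np.percentile(img, 80)] = 2

    # Flood the elevation map from the seeds.
    labels = segmentation.watershed(elevation, markers)

    wrinkle_fraction = np.mean(labels == 1)   # simple scalar feature
    return labels, wrinkle_fraction

# Example usage on a synthetic 64x64 map:
rng = np.random.default_rng(0)
demo = ndi.gaussian_filter(rng.random((64, 64)), sigma=3)
labels, frac = segment_microrelief(demo)
print(f"wrinkle area fraction: {frac:.2f}")
```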
Abstract:
The development of the digital electronics market is founded on the continuous reduction of transistor size, which reduces the area, power and cost of integrated circuits while increasing their computational performance. This trend, known as technology scaling, is approaching nanometer dimensions. The uncertainty of the lithographic process in the manufacturing stage grows as transistor size scales down, resulting in larger parameter variations in future technology generations. Furthermore, the exponential relationship between leakage current and threshold voltage is limiting the scaling of threshold and supply voltages, increasing the power density and creating local thermal issues, such as hot spots, thermal runaway and thermal cycles. In addition, the introduction of new materials and the smaller device dimensions are reducing transistor robustness, which, combined with high temperatures and frequent thermal cycles, speeds up wear-out processes. These effects are no longer addressable only at the process level. Consequently, deep sub-micron devices will require solutions spanning several design levels, such as system and logic, and new approaches known as Design For Manufacturability (DFM) and Design For Reliability. The purpose of these approaches is to bring awareness of device reliability and manufacturability into the early design stages, in order to introduce logic and system techniques able to cope with yield and reliability loss. The ITRS roadmap suggests the following research steps to integrate design for manufacturability and reliability into the standard automated CAD design flow: i) the implementation of new analysis algorithms able to predict the system's thermal behavior and its impact on power and speed; ii) high-level wear-out models able to predict the mean time to failure (MTTF) of the system; iii) statistical performance analysis able to predict the impact of process variation, both random and systematic. The new analysis tools have to be developed alongside new logic and system strategies to cope with future challenges, for instance: i) thermal management strategies that increase the reliability and lifetime of devices by acting on tunable parameters, such as supply voltage or body bias; ii) error-detection logic able to interact with compensation techniques such as Adaptive Supply Voltage (ASV), Adaptive Body Bias (ABB) and error recovery, in order to increase yield and reliability; iii) architectures that are fundamentally resistant to variability, including locally asynchronous designs, redundancy, and error-correcting signal encodings (ECC). The literature already features works addressing MTTF prediction, papers focusing on thermal management in general-purpose chips, and publications on statistical performance analysis. In my PhD research activity, I investigated the need for thermal management in future embedded low-power Network-on-Chip (NoC) devices. I developed a thermal analysis library, which has been integrated into a cycle-accurate NoC simulator and into an FPGA-based NoC simulator. The results have shown that an accurate layout distribution can avoid the onset of hot spots in a NoC chip. Furthermore, the application of thermal management can reduce the temperature and the number of thermal cycles, increasing system reliability. Therefore the thesis advocates the need to integrate thermal analysis into the first design stages of embedded NoC design.
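The thermal analysis library itself is not described in the abstract; as a hedged illustration of the kind of model such libraries typically evaluate, each tile can be approximated by a lumped thermal RC node integrated with an explicit Euler step. The resistance, capacitance, time step and power trace below are placeholders, not values from the thesis.

```python
# Minimal lumped thermal-RC sketch: one node per NoC tile, explicit Euler step.
# dT/dt = P/C - (T - T_amb) / (R * C); R, C and the power trace are placeholders.

T_AMB = 45.0      # ambient/package temperature [C]
R_TH  = 2.0       # thermal resistance to ambient [K/W]
C_TH  = 0.05      # thermal capacitance [J/K]
DT    = 1e-3      # time step [s]

def step_temperature(T, power):
    """Advance one tile's temperature by one explicit Euler step."""
    return T + DT * (power / C_TH - (T - T_AMB) / (R_TH * C_TH))

# Example: a tile alternating between busy (2 W) and idle (0.2 W) phases.
T = T_AMB
for cycle in range(4):
    for _ in range(500):                 # 0.5 s busy
        T = step_temperature(T, 2.0)
    for _ in range(500):                 # 0.5 s idle
        T = step_temperature(T, 0.2)
    print(f"cycle {cycle}: T = {T:.1f} C")
```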
Later on, I focused my research on the development of a statistical process-variation analysis tool able to address both random and systematic variations. The tool was used to analyze the impact of self-timed asynchronous logic stages in an embedded microprocessor. As a result, we confirmed the capability of self-timed logic to increase manufacturability and reliability. Furthermore, we used the tool to investigate the suitability of low-swing techniques for NoC system communication under process variations. In this case we found that low-swing links have superior robustness to systematic process variation and respond well to compensation techniques such as ASV and ABB. Hence low-swing signalling is a good alternative to standard CMOS communication in terms of power, speed, reliability and manufacturability. In summary, my work proves the advantage of integrating a statistical process-variation analysis tool into the first stages of the design flow.
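As a sketch of what a statistical process-variation analysis reduces to in its simplest form, one can Monte Carlo-sample a systematic (per-die) and a random (per-gate) threshold-voltage component and propagate them through a simple alpha-power delay model. The model and all numbers below are illustrative placeholders, not the tool described in the thesis.

```python
import numpy as np

# Alpha-power-law gate delay: delay ~ Vdd / (Vdd - Vth)**alpha
VDD, VTH_NOM, ALPHA = 1.0, 0.35, 1.3
SIGMA_SYS, SIGMA_RND = 0.02, 0.03        # systematic / random Vth sigma [V]
N_DICE, N_GATES = 10_000, 64             # Monte Carlo samples, gates per critical path

rng = np.random.default_rng(42)

def gate_delay(vth):
    return VDD / (VDD - vth) ** ALPHA    # arbitrary units

# Systematic component: one offset per die; random component: one per gate.
vth = (VTH_NOM
       + rng.normal(0.0, SIGMA_SYS, size=(N_DICE, 1))
       + rng.normal(0.0, SIGMA_RND, size=(N_DICE, N_GATES)))

path_delay = gate_delay(vth).sum(axis=1)           # critical-path delay per die
nominal    = N_GATES * gate_delay(VTH_NOM)

# Parametric yield: fraction of dice within a 10% timing guard band.
yield_frac = np.mean(path_delay <= 1.10 * nominal)
print(f"mean slowdown: {path_delay.mean() / nominal:.3f}x, yield: {yield_frac:.1%}")
```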
Abstract:
Nanotechnology entails the manufacturing and manipulation of matter at length scales ranging from single atoms to micron-sized objects. The ability to address properties on the biologically relevant nanometer scale has made nanotechnology attractive for Nanomedicine. This is perceived as a great opportunity in healthcare, especially in diagnostics and therapeutics and, more generally, for the development of personalized medicine. Nanomedicine has the potential to enable early detection and prevention, and to improve diagnosis, mass screening, treatment and follow-up of many diseases. From the biological standpoint, nanomaterials match the typical size of naturally occurring functional units or components of living organisms and, for this reason, enable more effective interaction with biological systems. Nanomaterials have the potential to influence functionality and cell fate in the regeneration of organs and tissues. To this aim, nanotechnology provides an arsenal of techniques for intervening in, fabricating, and modulating the environment where cells live and function. Unconventional micro- and nano-fabrication techniques allow patterning biomolecules and biocompatible materials down to feature sizes of a few nanometers. Patterning is not simply the deterministic placement of a material; in a broader sense it allows the controlled fabrication of structures and gradients of different natures. Gradients are emerging as one of the key factors guiding cell adhesion, proliferation, migration and even differentiation in the case of stem cells. The main goal of this thesis has been to devise a nanotechnology-based strategy and tools to spatially and temporally control biologically relevant phenomena in vitro which are important in some fields of medical research.
Abstract:
In this Thesis, we study the physical properties and the cosmic evolution of AGN and their host galaxies since z∼3. Our analysis exploits samples of star-forming galaxies detected with Herschel at far-IR wavelengths (from 70 up to 500 micron) in different extragalactic surveys, such as COSMOS and the deep GOODS (South and North) fields. The broad-band ancillary data available in COSMOS and the GOODS fields allow us to complement the Herschel and Spitzer photometry with multi-wavelength ancillary data. We perform a multi-component SED-fitting decomposition to decouple the emission due to star formation from that due to AGN accretion, and to estimate both host-galaxy parameters (such as the stellar mass M* and the star formation rate SFR) and intrinsic nuclear bolometric luminosities. We use the individual estimates of AGN bolometric luminosity obtained through the SED-fitting decomposition to reconstruct the redshift evolution of the AGN bolometric luminosity function since z∼3. The resulting trends are used to estimate the overall AGN accretion-rate density at different cosmic epochs and to trace the first ever estimate of the AGN accretion history from an IR survey. Later on, we focus our study on the connection between AGN accretion and integrated galaxy properties. We analyse the relationships of AGN accretion with galaxy properties in the SFR-M* plane and at different cosmic epochs. Finally, we infer which parameter best correlates with AGN accretion, comparing our results with previous studies and discussing their physical implications in the context of current scenarios of AGN/galaxy evolution.
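In its simplest form, the SED-fitting decomposition amounts to fitting the observed photometry with a non-negative combination of a star-formation-heated dust template and an AGN torus template; a toy version of that step is sketched below. The band set, fluxes and templates are placeholders, and the actual analysis uses full template libraries, redshifted models and more components.

```python
import numpy as np
from scipy.optimize import nnls

# Placeholder bands [micron] and observed fluxes [mJy].
wavelengths = np.array([8.0, 24.0, 70.0, 160.0, 250.0, 350.0, 500.0])
observed    = np.array([1.2, 3.5, 28.0, 55.0, 48.0, 30.0, 15.0])

# Toy templates (placeholders): a cold, star-formation-heated dust component
# peaking in the far-IR and a warmer AGN torus component peaking in the mid-IR.
sf_template  = np.array([0.02, 0.08, 0.55, 1.00, 0.85, 0.55, 0.27])
agn_template = np.array([0.60, 1.00, 0.45, 0.12, 0.04, 0.02, 0.01])

# Non-negative least-squares decomposition of the observed SED.
A = np.column_stack([sf_template, agn_template])
(coeff_sf, coeff_agn), residual = nnls(A, observed)

agn_fraction = (coeff_agn * agn_template).sum() / (A @ [coeff_sf, coeff_agn]).sum()
print(f"SF norm: {coeff_sf:.1f}, AGN norm: {coeff_agn:.1f}, "
      f"AGN fraction of total flux: {agn_fraction:.1%}")
```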
Abstract:
The ceramic-on-ceramic articular coupling is increasingly used in hip replacement surgery because of its excellent tribological properties. However, the brittleness of ceramic is a cause of mechanical failures. We therefore carried out a series of studies to identify an effective method for the early diagnosis of ceramic failure. We analyzed explanted ceramic components and found a pre-fracture wear pattern suggesting the dispersion of ceramic particles into the joint space. For early diagnosis we validated a method based on microanalysis of the synovial fluid. To validate the method we performed needle aspiration in 12 well-functioning prostheses (negative controls) and compared the results of 39 prostheses with signs of breakage with those of 7 without signs of breakage. To identify patients at risk of breakage, the demographic data of 26 patients with broken ceramic were compared with 49 controls comparable in terms of demographics, ceramic type and prosthesis type. Finally, a systematic review of the literature on the diagnosis of ceramic breakage was carried out. In the aspirate, the presence of at least 11 ceramic particles smaller than 3 micron, or of one particle larger than 3 micron, per observation field is a sign of ceramic breakage. The needle-aspiration method has 100% sensitivity and 88% specificity in predicting ceramic breakage. In the group of broken ceramics a higher number of prosthesis malpositionings was found compared with the controls (p=0.001). Noise in a ceramic prosthesis should raise suspicion of failure and prompt a CT scan and a needle aspiration. Compared with the literature, our method proves to be the most effective.