9 results for Low threshold current densities
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Hybrid technologies, thanks to the convergence of integrated microelectronic devices and a new class of microfluidic structures, could open new perspectives on the way nanoscale events are discovered, monitored and controlled. The key point of this thesis is to evaluate the impact of such an approach on ion-channel High Throughput Screening (HTS) platforms. This approach offers promising opportunities for the development of new classes of sensitive, reliable and cheap sensors. There are numerous advantages in embedding microelectronic readout structures tightly coupled to the sensing elements. On the one hand, the signal-to-noise ratio is increased as a result of scaling. On the other, the readout miniaturization allows sensors to be organized into arrays, increasing the capability of the platform in terms of the number of acquired data points, as required in the HTS approach, to improve sensing accuracy and reliability. However, accurate interface design is required to establish efficient communication between ionic and electronic signals. The work presented in this thesis shows a first example of a complete parallel readout system with single-ion-channel resolution, using a compact and scalable hybrid architecture suitable for interfacing to large arrays of sensors, ensuring simultaneous signal recording and smart control of the signal-to-noise ratio versus bandwidth trade-off. More specifically, an array of microfluidic polymer structures, hosting artificial lipid bilayer blocks in which single ion-channel pores are embedded, is coupled with an array of ultra-low-noise current amplifiers for signal amplification and data processing. As a demonstrative working example, the platform was used to acquire the ultra-small currents arising from single non-covalent molecular binding events between alpha-hemolysin pores and beta-cyclodextrin molecules in artificial lipid membranes.
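As a side note on the signal-to-noise ratio versus bandwidth trade-off mentioned in the abstract, a textbook sketch for a generic resistive-feedback transimpedance readout (an illustrative approximation, not the specific amplifier developed in the thesis) is

\[
S_I(f) \approx \frac{4kT}{R_f} + \left(2\pi f\, C_{in}\, e_n\right)^2,
\qquad
i_{rms}(B) \approx \sqrt{\frac{4kT}{R_f}\,B + \frac{(2\pi C_{in} e_n)^2}{3}\,B^3},
\]

where \(R_f\) is the feedback resistance, \(C_{in}\) the total input capacitance, \(e_n\) the amplifier input voltage noise and \(B\) the recording bandwidth. Widening \(B\) degrades the achievable SNR faster than linearly, which is why a per-channel control of the noise/bandwidth operating point is valuable in single ion-channel recordings.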
Abstract:
The improvement of devices provided by nanotechnology has put forward new classes of sensors, called bio-nanosensors, which are very promising for the detection of biochemical molecules in a large variety of applications. Their use in lab-on-a-chip systems could give rise to new opportunities in many fields, from health-care and bio-warfare to environmental monitoring and high-throughput screening for the pharmaceutical industry. Bio-nanosensors have great advantages in terms of cost, performance, and parallelization. Indeed, they require very low quantities of reagents and improve the overall signal-to-noise ratio, owing to the increase of the binding signal variation with respect to area and the reduction of stray capacitances. At the same time, they give rise to new challenges, such as the need to design high-performance, low-noise integrated electronic interfaces. This thesis concerns the design of high-performance advanced CMOS interfaces for electrochemical bio-nanosensors. The main focus of the thesis is: 1) critical analysis of noise in sensing interfaces, 2) devising new techniques for noise reduction in discrete-time approaches, and 3) developing new architectures for low-noise, low-power sensing interfaces. The manuscript reports a multi-project activity focusing on low-noise design and presents two integrated circuits (ICs) developed as examples of advanced CMOS interfaces for bio-nanosensors. The first project concerns a low-noise current-sensing interface for DC and transient measurements of electrophysiological signals. The focus of this research activity is the noise optimization of the electronic interface. A new noise reduction technique has been developed so as to realize an integrated CMOS interface with performance comparable to state-of-the-art instrumentation. The second project aims to realize a stand-alone, high-accuracy electrochemical impedance spectroscopy interface. The system is tailored for conductivity-temperature-depth sensors in environmental applications, as well as for bio-nanosensors. It is based on a band-pass delta-sigma technique and combines low-noise performance with low-power requirements.
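For context on noise in discrete-time (switched-capacitor) sensing interfaces, the sampled thermal-noise floor of a single sampling capacitor is the classical \(kT/C\) limit (a general result, not the specific noise-reduction technique developed in the thesis):

\[
\overline{v_n^2} = \frac{kT}{C}
\quad\Rightarrow\quad
v_{n,rms} \approx 64\,\mu\mathrm{V} \;\text{for}\; C = 1\,\mathrm{pF} \;\text{at}\; T = 300\,\mathrm{K},
\]

which is the quantity that techniques such as oversampling or correlated double sampling act upon when lowering the input-referred noise of a discrete-time interface.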
Abstract:
The quench characteristics of second-generation (2G) YBCO Coated Conductor (CC) tapes are of fundamental importance for the design and safe operation of superconducting cables and magnets based on this material. Their ability to transport high current densities at high temperature, up to 77 K, and at very high fields, over 20 T, together with the increasing knowledge of their manufacturing, which is reducing their cost, is pushing the use of this innovative material into numerous applications, from high-field magnets for research to motors and generators, as well as cables. The aim of this Ph.D. thesis is the experimental analysis and numerical simulation of quench in superconducting HTS tapes and coils. A measurement facility for the characterization of superconducting tapes and coils was designed, assembled and tested. The facility consists of a cryostat, a cryocooler, a vacuum system, resistive and superconducting current leads and signal feedthroughs. Moreover, the data acquisition system and the software for critical current and quench measurements were developed. A 2D model was developed using the finite element code COMSOL Multiphysics®. The problem of modeling the high aspect ratio of the tape is tackled by multiplying the tape thickness by a constant factor and compensating the heat and electrical balance equations by introducing a material anisotropy. The model was then validated against the results of a 1D quench model based on a non-linear electric circuit coupled to a thermal model of the tape, against measurements from the literature, and against critical current and quench measurements made in the cryogenic facility. Finally, the model was extended to the study of coils and windings through the definition of homogenized tape and stack properties. The procedure allows the definition of a multi-scale hierarchical model able to simulate the windings with different degrees of detail.
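One plausible reading of the thickness-scaling approach described above (a sketch under assumptions; the exact compensation used in the thesis may differ): if the through-thickness coordinate is stretched as \(y' = \gamma y\), so that the modeled thickness becomes \(\gamma\) times the real one, the temperature field solving

\[
\rho C_p \frac{\partial T}{\partial t}
= \frac{\partial}{\partial x}\!\left(k_x \frac{\partial T}{\partial x}\right)
+ \frac{\partial}{\partial y}\!\left(k_y \frac{\partial T}{\partial y}\right) + Q
\]

is preserved on the expanded geometry if the through-thickness conductivity is replaced by the anisotropic value \(\gamma^2 k_y\), while the in-plane conductivity, heat capacity and heat sources are left unchanged (an analogous scaling applies to the electrical balance equation). This keeps the element aspect ratio manageable for the finite element mesh without altering the computed quench dynamics.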
Abstract:
The development of the digital electronics market is founded on the continuous reduction of transistor size to reduce area, power and cost, and to increase the computational performance of integrated circuits. This trend, known as technology scaling, is approaching the nanometer scale. The lithographic process in the manufacturing stage is becoming more uncertain as transistor sizes scale down, resulting in larger parameter variations in future technology generations. Furthermore, the exponential relationship between leakage current and threshold voltage is limiting the scaling of the threshold and supply voltages, increasing the power density and creating local thermal issues, such as hot spots, thermal runaway and thermal cycles. In addition, the introduction of new materials and the smaller device dimensions are reducing transistor robustness, which, combined with high temperatures and frequent thermal cycles, speeds up wear-out processes. These effects are no longer addressable only at the process level. Consequently, deep sub-micron devices will require solutions involving several design levels, such as system and logic, and new approaches called Design For Manufacturability (DFM) and Design For Reliability. The purpose of these approaches is to bring awareness of device reliability and manufacturability into the early design stages, in order to introduce logic and systems able to cope with yield and reliability loss. The ITRS roadmap suggests the following research steps to integrate design for manufacturability and reliability into the standard CAD automated design flow: i) the implementation of new analysis algorithms able to predict the system thermal behavior and its impact on power and speed performance; ii) high-level wear-out models able to predict the mean time to failure (MTTF) of the system; iii) statistical performance analysis able to predict the impact of process variation, both random and systematic. The new analysis tools have to be developed alongside new logic and system strategies to cope with the future challenges, for instance: i) thermal management strategies that increase the reliability and lifetime of the devices by acting on tunable parameters, such as supply voltage or body bias; ii) error detection logic able to interact with compensation techniques such as Adaptive Supply Voltage (ASV), Adaptive Body Bias (ABB) and error recovery, in order to increase yield and reliability; iii) architectures that are fundamentally resistant to variability, including locally asynchronous designs, redundancy, and error-correcting signal encodings (ECC). The literature already features works addressing the prediction of the MTTF, papers focusing on thermal management in general-purpose chips, and publications on statistical performance analysis. In my PhD research activity, I investigated the need for thermal management in future embedded low-power Network-on-Chip (NoC) devices. I developed a thermal analysis library that has been integrated in a NoC cycle-accurate simulator and in an FPGA-based NoC simulator. The results have shown that an accurate layout distribution can avoid the onset of hot spots in a NoC chip. Furthermore, the application of thermal management can reduce the temperature and the number of thermal cycles, increasing the system reliability. The thesis therefore advocates the need to integrate thermal analysis into the first design stages of embedded NoC design.
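As an illustration of the kind of lumped thermal model a cycle-accurate NoC simulator can embed, the following is a minimal, hypothetical sketch (names, parameters and the RC abstraction itself are assumptions, not the library developed in the thesis): each tile is a thermal node that exchanges heat with its neighbours and with the ambient, and its temperature is advanced once per simulation interval from the power trace.

```python
# Minimal lumped-RC thermal sketch for a grid of NoC tiles.
# Hypothetical illustration only: names, parameters and the RC abstraction
# are assumptions, not the thermal analysis library developed in the thesis.
import numpy as np

def thermal_step(T, P, dt, C_th, R_amb, R_lat, T_amb=318.0):
    """Advance tile temperatures T (2D array, K) by one interval dt (s).

    P     : per-tile dissipated power (W), same shape as T
    C_th  : per-tile thermal capacitance (J/K)
    R_amb : tile-to-ambient thermal resistance (K/W)
    R_lat : lateral tile-to-tile thermal resistance (K/W)
    """
    # Heat exchanged with the four lateral neighbours (adiabatic borders).
    lat = np.zeros_like(T)
    lat[1:, :]  += (T[:-1, :] - T[1:, :]) / R_lat
    lat[:-1, :] += (T[1:, :] - T[:-1, :]) / R_lat
    lat[:, 1:]  += (T[:, :-1] - T[:, 1:]) / R_lat
    lat[:, :-1] += (T[:, 1:] - T[:, :-1]) / R_lat
    # Explicit-Euler energy balance: generated power, lateral spreading,
    # loss towards the ambient/heat sink.
    return T + (P + lat - (T - T_amb) / R_amb) * dt / C_th

# Example: 4x4 mesh with one hot tile; stepping the model shows how the
# power map (i.e. the layout) determines whether a hot spot develops.
T = np.full((4, 4), 318.0)
P = np.full((4, 4), 0.2)
P[1, 1] = 1.5                      # hypothetical power map (W)
for _ in range(1000):
    T = thermal_step(T, P, dt=1e-3, C_th=1e-3, R_amb=30.0, R_lat=8.0)
```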
Later on, I focused my research on the development of a statistical process variation analysis tool able to address both random and systematic variations. The tool was used to analyze the impact of self-timed asynchronous logic stages in an embedded microprocessor. The results confirmed the capability of self-timed logic to increase manufacturability and reliability. Furthermore, we used the tool to investigate the suitability of low-swing techniques for NoC system communication under process variations. In this case we found a superior robustness of low-swing links to systematic process variation, together with a good response to compensation techniques such as ASV and ABB. Hence low-swing signaling is a good alternative to standard CMOS communication in terms of power, speed, reliability and manufacturability. In summary, my work proves the advantage of integrating a statistical process variation analysis tool into the first stages of the design flow.
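As a minimal illustration of how a statistical analysis can separate random and systematic contributions, the sketch below runs a Monte Carlo over a hypothetical logic path in which the systematic component is shared by all stages of a die while the random component is independent per stage; all names and numbers are assumptions, not the tool or models used in the thesis.

```python
# Monte Carlo sketch of path delay under random + systematic process variation.
# Illustrative only; names and numbers are assumptions, not the thesis tool.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_stages = 10_000, 5
nominal_ps, sigma_rand_ps, sigma_sys_ps = 100.0, 5.0, 8.0

# Systematic component: one shift per die, shared by every stage on that die.
systematic = rng.normal(0.0, sigma_sys_ps, n_trials)
# Random component: drawn independently for each stage and each die.
path_delay = sum(nominal_ps + systematic + rng.normal(0.0, sigma_rand_ps, n_trials)
                 for _ in range(n_stages))

timing_yield = np.mean(path_delay < 550.0)   # fraction of dies meeting a 550 ps target
print(f"estimated timing yield: {timing_yield:.3f}")
```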
Abstract:
Several activities were conducted during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned the feasibility study of an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board that we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic sources of a PMT (Photo-Multiplier Tube). The low-power analog acquisition allows multiple channels of the PMT to be sampled simultaneously at different gain factors, in order to increase the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been realized to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After the validation of the whole front-end architecture, this feature would probably be integrated into a common mixed-signal ASIC (Application Specific Integrated Circuit). The volatile nature of the FPGA configuration memory required the integration of a flash ISP (In-System Programming) memory and a smart architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory the behavior of the LIRA chip was investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA. PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to equipment on loan from Roma University and INFN, a full readout chain equivalent to that of NEMO Phase 1 was installed. These tests showed good behavior of the digital electronics, which were able to receive and execute commands issued from the PC console and to answer back with a reply. The remotely configurable logic also behaved well and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase 2 tower, in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front-end while inheriting most of the digital logic present in the current DAQ board discussed in this thesis. Concerning the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. This chip is a matrix of 4096 active pixel sensors with deep N-well implantations meant for charge collection and for shielding the analog electronics from digital noise.
The chip integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) line facility at CERN in Geneva (CH). The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which allowed about 90 million events to be stored in 7 equivalent days of beam live-time. My activities basically concerned the realization of a firmware interface to and from the MAPS chip, in order to integrate it into the general DAQ system. Thereafter I worked on the DAQ software to implement a proper Slow Control interface for the APSEL4D. Several APSEL4D chips with different thinnings were tested during the test beam. Those thinned to 100 and 300 um showed an overall efficiency of about 90% with a threshold of 450 electrons. The test beam also allowed the resolution of the pixel sensor to be estimated, providing good results consistent with the pitch/sqrt(12) formula. The intrinsic MAPS resolution was extracted from the width of the residual plot, taking the multiple scattering effect into account.
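The pitch/sqrt(12) figure quoted above follows from assuming a binary (hit/no-hit) readout in which the reconstructed position is the pixel centre and the true impact point is uniformly distributed over the pitch \(p\):

\[
\sigma^2 = \frac{1}{p}\int_{-p/2}^{p/2} x^2\,dx = \frac{p^2}{12}
\quad\Rightarrow\quad
\sigma = \frac{p}{\sqrt{12}},
\]

so the measured residual width is compared with this value after accounting for the telescope pointing resolution and the multiple-scattering contribution, as done in the analysis described above.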
Abstract:
The PhD project focused on the study of poultry welfare conditions and their improvement. The project work was divided into 3 main research activities. A) Field evaluation of the rearing conditions of meat chickens kept in intensive farms. Considering the lack of published reports on the overall Italian rearing conditions of broiler chickens, a survey was carried out to assess the welfare conditions of broilers reared by the most important poultry companies in Italy, to verify whether they are in accordance with the recommendations given in the European proposal COM (2005) 221 final. Chicken farm conditions, carcass lesions and meat quality were investigated. 1. The densities currently used in Italy are in accordance with the European proposal COM 221 final (2005), which suggests keeping broilers at a density lower than 30-32 kg live weight/m2 and not exceeding 38-40 kg live weight/m2. 2. The mortality rates in summer and winter agree with the mortality score calculated following the formula reported in the EU proposal COM 221 final (2005). 3. The incidence of damaged carcasses was very low and did not seem related to the stocking density. 4. The FPD scores were generally above the maximum limit advised by the EU proposal COM 221 final (2005), although the stocking densities were lower than 30-32 kg live weight per m2. 5. It can be stated that the control of the environmental conditions, particularly litter quality, appears to be a key issue in controlling the onset of foot pad dermatitis. B) Manipulation of several farm parameters, such as litter material and depth, stocking density and light regimen, to improve chicken welfare conditions in the winter season. 1. Even though 2 different stocking densities were established in this study, the performances achieved by the chickens were almost identical among groups. 2. The FCR was significantly better in Standard conditions than in birds reared in Welfare conditions, with lower stocking density, more litter material and a light program of 16 hours light and 8 hours dark. 3. In our trial, the Standard groups showed a higher content of moisture, nitrogen and ammonia released from the litter. Therefore it can be assumed that the environmental characteristics were positively changed by the improvements in the rearing conditions adopted for the Welfare groups. 4. In the Welfare groups the exhausted litters of the pens were drier and the broilers showed a lower occurrence of FPD. 5. The prevalence of hock burn lesions, like FPD, is high with poor litter quality. 6. The combined effect of a lower stocking density, a greater amount of litter material and a photoperiod similar to the natural one positively influenced the welfare status of the chickens; as a matter of fact, the occurrence of FPD in the Welfare groups was the lowest, keeping the score under the European threshold of the proposal COM 221 final (2005). C) The purpose of the third research activity was to study the effect of high or low stocking density of broiler chickens, different types of litter and the adoption of a short or long lighting regimen on broiler welfare, through the evaluation of their productivity and the incidence of foot pad dermatitis during the hot season. 1. The feed efficiency was better for the Low Density than for the High Density broilers. 2. The appearance of FPD was not influenced by stocking density. 3. The foot examination revealed that lesions occurred more in birds maintained on chopped wheat straw than on wood shavings. 4.
In conclusion, the adoption of a short light regimen, similar to that occurring in nature during summer, reduces the feed intake without modifying the growth rate, thus improving the feed efficiency. Foot pad lesions were affected neither by stocking density nor by light regimen, whereas wood shavings exerted a favourable effect in preserving the foot pads in good condition. D) A study was carried out to investigate more widely the possible role of 25-hydroxycholecalciferol supplemented in the diet of a commercial laying hen strain (Lohmann Brown), in comparison with diets supplemented with D3 or with D3 + 25-hydroxycholecalciferol. Egg traits during a productive cycle, as well as the bone characteristics of the layers, were also evaluated to determine whether vitamin D3 may enhance the welfare status of the birds. 1. The weight of the egg and of its components is often greater in hens fed a diet enriched with 25-hydroxycholecalciferol. 2. Since the eggs of the treated groups are heavier and a larger amount of shell is needed, a direct effect on shell strength is observed. 3. At 30 and at 50 wk of age, hens fed 25-hydroxycholecalciferol exhibited greater values of bone breaking force. 4. The radiographic density values obtained in the trial are always higher in hens fed 25-hydroxycholecalciferol in both treatments: supplemented for the whole laying cycle (25D3) or from 40 weeks of age onward (D3+25D3).
Abstract:
The "sustainability" concept relates to the prolonging of human economic systems with as little detrimental impact on ecological systems as possible. Construction that exhibits good environmental stewardship and practices that conserve resources in a manner that allow growth and development to be sustained for the long-term without degrading the environment are indispensable in a developed society. Past, current and future advancements in asphalt as an environmentally sustainable paving material are especially important because the quantities of asphalt used annually in Europe as well as in the U.S. are large. The asphalt industry is still developing technological improvements that will reduce the environmental impact without affecting the final mechanical performance. Warm mix asphalt (WMA) is a type of asphalt mix requiring lower production temperatures compared to hot mix asphalt (HMA), while aiming to maintain the desired post construction properties of traditional HMA. Lowering the production temperature reduce the fuel usage and the production of emissions therefore and that improve conditions for workers and supports the sustainable development. Even the crumb-rubber modifier (CRM), with shredded automobile tires and used in the United States since the mid 1980s, has proven to be an environmentally friendly alternative to conventional asphalt pavement. Furthermore, the use of waste tires is not only relevant in an environmental aspect but also for the engineering properties of asphalt [Pennisi E., 1992]. This research project is aimed to demonstrate the dual value of these Asphalt Mixes in regards to the environmental and mechanical performance and to suggest a low environmental impact design procedure. In fact, the use of eco-friendly materials is the first phase towards an eco-compatible design but it cannot be the only step. The eco-compatible approach should be extended also to the design method and material characterization because only with these phases is it possible to exploit the maximum potential properties of the used materials. Appropriate asphalt concrete characterization is essential and vital for realistic performance prediction of asphalt concrete pavements. Volumetric (Mix design) and mechanical (Permanent deformation and Fatigue performance) properties are important factors to consider. Moreover, an advanced and efficient design method is necessary in order to correctly use the material. A design method such as a Mechanistic-Empirical approach, consisting of a structural model capable of predicting the state of stresses and strains within the pavement structure under the different traffic and environmental conditions, was the application of choice. In particular this study focus on the CalME and its Incremental-Recursive (I-R) procedure, based on damage models for fatigue and permanent shear strain related to the surface cracking and to the rutting respectively. It works in increments of time and, using the output from one increment, recursively, as input to the next increment, predicts the pavement conditions in terms of layer moduli, fatigue cracking, rutting and roughness. This software procedure was adopted in order to verify the mechanical properties of the study mixes and the reciprocal relationship between surface layer and pavement structure in terms of fatigue and permanent deformation with defined traffic and environmental conditions. The asphalt mixes studied were used in a pavement structure as surface layer of 60 mm thickness. 
The performance of the pavement was compared to the performance of the same pavement structure where different kinds of asphalt concrete were used as the surface layer. In comparison to a conventional asphalt concrete, three eco-friendly materials, two warm mix asphalts and a rubberized asphalt concrete, were analyzed. The first two chapters summarize the steps necessary to satisfy the sustainable pavement design procedure. In Chapter I the problem of eco-compatible asphalt pavement design is introduced. Low-environmental-impact materials such as Warm Mix Asphalt and Rubberized Asphalt Concrete are described in detail. In addition, the value of a rational asphalt pavement design method is discussed. Chapter II underlines the importance of a thorough laboratory characterization based on appropriate material selection and performance evaluation. In Chapter III, CalME is introduced through a specific explanation of the different design approaches it provides, and specifically of the I-R procedure. In Chapter IV, the experimental program is presented with an explanation of the laboratory test devices adopted. The fatigue and rutting performances of the study mixes are shown in Chapters V and VI respectively. From these laboratory test data, the CalME I-R model parameters for the master curve, fatigue damage and permanent shear strain were evaluated. Lastly, in Chapter VII, the results of the simulations of asphalt pavement structures with different surface layers are reported. For each pavement structure, the total surface cracking, the total rutting, the fatigue damage and the rutting depth in each bound layer were analyzed.
Abstract:
The quest for universal memory is driving the rapid development of memories with superior all-round capabilities in non-volatility, high speed, high endurance and low power. The memory subsystem accounts for a significant share of the cost and power budget of a computer system. Current DRAM-based main memory systems are starting to hit the power and cost limit. To resolve this issue, the industry is improving existing technologies such as Flash and exploring new ones. Among these new technologies is Phase Change Memory (PCM), which overcomes some of the shortcomings of Flash, such as durability and scalability. This alternative non-volatile memory technology, which uses resistance contrast in phase-change materials, offers more density relative to DRAM and can help increase the main memory capacity of future systems while remaining within the cost and power constraints. Chalcogenide materials can suitably be exploited for manufacturing phase-change memory devices. Charge transport in the amorphous chalcogenide GST used for memory devices is modeled using two contributions: hopping of trapped electrons and motion of band electrons in extended states. Crystalline GST exhibits an almost Ohmic I(V) curve. In contrast, amorphous GST shows a high resistance at low biases while, above a threshold voltage, a transition takes place from a highly resistive to a conductive state, characterized by a negative differential-resistance behavior. A clear and complete understanding of the threshold behavior of the amorphous phase is fundamental for exploiting such materials in the fabrication of innovative non-volatile memories. The type of feedback that produces the snapback phenomenon is described as a filamentation in energy controlled by electron-electron interactions between trapped electrons and band electrons. The model thus derived is implemented within a state-of-the-art simulator. An analytical version of the model is also derived and is useful for discussing the snapback behavior and the scaling properties of the device.
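For reference, the trap-limited hopping contribution in amorphous chalcogenides is often written in the literature in a field-activated form such as (a commonly used expression, not necessarily the exact two-contribution model derived in the thesis)

\[
J_{\mathrm{hop}} \approx 2\,q\,N_T\,\frac{\Delta z}{\tau_0}\,
\exp\!\left(-\frac{E_C - E_F}{kT}\right)
\sinh\!\left(\frac{qF\,\Delta z}{2kT}\right),
\]

where \(N_T\) is the trap density, \(\Delta z\) the average inter-trap distance, \(\tau_0\) the attempt-to-escape time and \(F\) the electric field; the total current then adds the contribution of band electrons in extended states, \(J \approx J_{\mathrm{hop}} + J_{\mathrm{band}}\).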
Abstract:
The prospect of the continuous multiplication of lifestyles, the obsolescence of traditional typological diagrams, and the usability of spaces at different territorial scales impose on contemporary architecture the search for new models of living. Limited densities in urban development have produced the erosion of territory and the increase of harmful emissions and energy consumption. High-density housing cannot refuse the social emergency of ensuring high-quality, low-cost dwellings for a new target population: students, temporary workers, key workers, foreigners, young couples without children, large families and, in general, people who carry out public services. Social housing strategies have become particularly relevant in regenerating high-density urban outskirts. The choice of this research topic derives from the desire to deal with the recent accommodation emergency from different perspectives, with a view to contributing to the current literature by proposing some tools for a correct design of social housing, ensuring good-quality, cost-effective, and eco-sustainable solutions from the concept phase, through management and maintenance, until the end of the building life cycle. The purpose of the thesis is to define a framework of guidelines that become effective instruments to be used in designing social housing. They should also integrate the existing regulations and are mainly intended for those who work in this sector. They also aim to support students who have to cope with this particular residential theme, as well as the users themselves. From the scientific evidence of the recent specialized literature and the solutions adopted in case studies within the selected metropolitan areas of Milan, London and São Paulo, it is possible to identify the principles of this new design approach, in which the connection between typology, morphology and technology pursues the goal of a high living standard.