21 results for VLSI CAD
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Recent developments in piston engine technology have increased performance very significantly. Turbocharged/turbo-compound diesel engines fuelled by jet fuels offer excellent performance. The focal point of this thesis is the transformation of the FIAT 1900 jtd common rail diesel engine for installation on general aviation aircraft such as the CESSNA 172. All considerations about the diesel engine are supported by studies carried out in the laboratories of the II Faculty of Engineering in Forlì. This work, mostly experimental, concerns the transformation of the automotive FIAT 1900 jtd (4 cylinders, turbocharged, common rail diesel) into an aircraft engine. The design philosophy of the aluminium alloy crankcase of the spark ignition engine has been transferred to the diesel version, while the pistons and the head of the FIAT 1900 jtd are kept in the aircraft engine. Different solutions have been examined in this work: a first version with the cylinders in a 90° V, which can develop up to 300 CV and weighs 30 kg without auxiliaries and the turbocharging group; a second version, a development of the original 1900 cc diesel with an optimized crankshaft in 300M special steel, verified against the aircraft requirements; a third version with an augmented stroke and a total displacement of 2500 cc, which turns out to be about 30% heavier; and a last version, a 1600 cc diesel running at 5000 rpm with a reduced stroke and capable of more than 200 CV, inspired by the Yamaha R1 motorcycle engine. The diesel aircraft engine design keeps the 82 mm bore, while the stroke is reduced to 64.6 mm, so the engine size is reduced along with the weight. The crankcase, in GD AlSi 9 MgMn alloy, weighs 8.5 kg. Crankshaft, rods and accessories have been redesigned to comply with aircraft standards. The result is that the overall size increases by only 8% with respect to the spark ignition Yamaha engine, while the crankcase weight increases by 53%, even though the bore of the diesel version is 11% larger. The original FIAT 1900 jtd piston has been slightly modified, with the combustion chamber reworked for a compression ratio of 15:1. The piston material is the aluminium alloy A390.0-T5, commonly used in the automotive field; the piston weighs 0.5 kg in the diesel engine. The crankshaft is verified against torsional vibrations according to the Lloyd's Register of Shipping requirements; the 300M special steel crankshaft weighs 14.5 kg in total. The result is a very small and light engine that may be certified for general aviation: the engine weight, without the supercharger, air inlet assembly, auxiliary generators and high-pressure pump body, is 44.7 kg, and the total engine weight, with a lightened HP pump body and a titanium alloy turbocharger, is less than 100 kg; the total displacement is 1365 cm³ and the estimated output power is 220 CV. The direct conversion of an automotive piston engine to aircraft use pays too large a weight penalty. The main aircraft requirement is to optimize the power-to-weight ratio in order to obtain compact and fast engines for aeronautical use; this 1600 cc common rail diesel engine version demonstrates that these results can be reached.
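Taking the figures quoted above at face value, a quick back-of-the-envelope check of the resulting power-to-weight ratio and specific power (my arithmetic on the abstract's numbers, not a figure from the thesis):

$$\frac{P}{m} \approx \frac{220\ \text{CV}}{100\ \text{kg}} = 2.2\ \text{CV/kg}, \qquad \frac{P}{V_d} \approx \frac{220\ \text{CV}}{1.365\ \text{l}} \approx 161\ \text{CV/l}$$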
Abstract:
The goal of a microvascular mandibular reconstruction is to give the patient the best achievable aesthetic and functional result. This can be accomplished by using a microvascular fibula/iliac crest flap together with a titanium reconstruction plate that acts as a guide during the shaping of the flap, recreating a parabolic contour as close as possible to the patient's original mandibular profile. Manual, craftsman-like intraoperative shaping of a reconstruction plate is operator dependent and, even in very skilled hands or with the aid of stereolithographic anatomical models, is not always as precise as desired and often does not lead to the expected postoperative results. The aim of our study was therefore to exploit modern CAD-CAM technologies to produce custom-made reconstruction plates, designed directly on the computer, that recreate the patient's original bone profile.
Abstract:
Additive Manufacturing (AM) is nowadays considered an important alternative to traditional manufacturing processes. The literature attributes several advantages to AM technology, such as design flexibility, and its use is increasing in automotive, aerospace and biomedical applications. As a systematic literature review suggests, AM is sometimes coupled with voxelization, mainly for representation and simulation purposes. Voxelization can be defined as a volumetric representation technique based on the discretization of the model with hexahedral elements, as pixels do in a 2D image. Voxels are used to simplify geometric representation, store intricate details of the interior, and speed up geometric and algebraic manipulation. Compared to the boundary representation used in common CAD software, the inherent advantages of voxels are magnified in specific applications such as lattice or topologically optimized structures for visualization or simulation purposes. Such structures can only be manufactured by AM, due to their complex topology. After an accurate review of the existing literature, this project aims to exploit the potential of the voxelization algorithm to develop optimized Design for Additive Manufacturing (DfAM) tools. The final aim is to manipulate and support mechanical simulations of lightweight, optimized structures ready to be manufactured with AM, with particular attention to automotive applications. A voxel-based methodology is developed for efficient structural simulation of lattice structures. Moreover, thanks to an optimized smoothing algorithm specific to voxel-based geometries, a topologically optimized, voxelized structure can be transformed into a surface triangulated mesh file ready for the AM process. In addition, a modified panel code is developed for simple CFD simulations using voxels as the discretization unit, to understand the fluid dynamic behaviour of industrial components for preliminary aerodynamic performance evaluation. The developed design tools and methodologies fit the automotive industry's need to accelerate and increase the efficiency of the design workflow from the conceptual idea to the final product.
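As a minimal illustration of the voxel representation concept described above (a sketch assuming a point-sampled model; the function name and parameters are hypothetical, and a real DfAM pipeline would voxelize surfaces or volumes, not just points):

```python
import numpy as np

def voxelize_points(points, pitch):
    """Map a point cloud (N x 3 array) onto a boolean voxel grid.

    The model is discretized into hexahedral cells of edge length
    `pitch`; a cell is marked occupied if any point falls inside it.
    """
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / pitch).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid, origin

# Usage: occupied voxels of a random 10 mm cloud at 1 mm pitch
pts = np.random.rand(1000, 3) * 10.0
grid, origin = voxelize_points(pts, pitch=1.0)
print(grid.shape, grid.sum())
```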
Abstract:
The theme of the house, and more generally of dwelling, has returned to the centre of social debate more than it has in the technical-architectural field. The limits of the proposals that have typically been developed in our cities in the recent past are quite evident: proposals very often unable to take into account the many dimensions that the evolution of customs and of the urban and social structure has induced in the residential sphere as well, dimensions linked to changed working conditions, to the cultural and religious diversity of newly settled ethnic groups, to the structure of family units (where these still exist) and to many other factors. As needs have changed, once composed within the structure of the family, desires and demands have changed too, while the regulatory apparatus has remained structured around outdated social and economic models. The theme therefore takes on, today more than ever, connotations with strong relations between functional, technological and symbolic issues. Prompted by these general observations, the research started from an analysis of cases built in the historical period in which the post-war housing emergency in Italy came to an end, with the intent of reconsidering the vital approach that had been deployed in those dramatic circumstances, while already aware that its subsequent development would be much more circumscribed. After a quick look at the typological and architectural substance of those interventions, to draw suggestions capable of pointing to a credible new prototype to investigate, the thesis carried out a comparative analysis of the tools available today for communicating and managing the design, and focused on the potential of the new information technologies (IT). One cannot fail to observe that they have changed not only the way we live, work, produce documents and exchange information, but also the way we control the design process. The phenomenon is still under way, but it is entirely evident that design activity too, even in a sector such as the building industry, characterized by considerable inertia and reluctance to innovate, has undergone profound transformations thanks to the new technologies (transformations already begun with the advent of CAD), which have accelerated the progressive change in the procedures for digital representation and documentation of the design. The research and experimentation therefore concentrated on this theme, judging that the Integrated Project Database (IPDB) is probably destined to replace the concept of CAD (as used so far in the building sector, understood as a tool for digital processing, mainly but not only graphical). In a first design experience, the potential and characteristics of BIM (Building Information Model) were then explored, to verify whether it really proves capable of constituting an information archive supporting the design throughout the building's life cycle, able to define its virtual three-dimensional model starting from its components and to collect information on geometries, on the physical characteristics of materials, on construction cost estimates, on evaluations of the performance of materials and components, on maintenance deadlines, and on contracts and tendering procedures.
The research analyses the structuring of the design of a residential building and presents a theoretical model construction aimed at the communication and management of planning, open to all the actors involved in the building process and based on the potential of the parametric approach.
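To make the idea of an integrated project database concrete, here is a minimal sketch of how a BIM-style component record might collect the information classes listed above (geometry, material, cost, maintenance, procurement). All class and field names are hypothetical illustrations, not an actual BIM/IFC schema:

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """One building component in a BIM-style integrated project database."""
    name: str
    geometry: dict                     # parametric dimensions in metres
    material: str
    unit_cost_eur: float               # construction cost estimate
    maintenance_interval_years: int    # maintenance deadline
    contract_ref: str = ""             # tendering/contract information

@dataclass
class BuildingModel:
    components: list = field(default_factory=list)

    def estimated_cost(self) -> float:
        return sum(c.unit_cost_eur for c in self.components)

model = BuildingModel()
model.components.append(Component(
    name="external wall panel", geometry={"w": 1.2, "h": 3.0, "t": 0.3},
    material="precast concrete", unit_cost_eur=850.0,
    maintenance_interval_years=20))
print(model.estimated_cost())
```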
Abstract:
Computer aided design of Monolithic Microwave Integrated Circuits (MMICs) depends critically on active device models that are accurate, computationally efficient, and easily extracted from measurements or device simulators. Empirical models of active electron devices, which are based on actual device measurements, do not provide a detailed description of the electron device physics. However, they are numerically efficient and quite accurate. These characteristics make them very suitable for MMIC design in the framework of commercially available CAD tools. In the empirical model formulation it is very important to separate linear memory effects (parasitic effects) from the nonlinear effects (intrinsic effects). Thus an empirical active device model is generally described by an extrinsic linear part which accounts for the parasitic passive structures connecting the nonlinear intrinsic electron device to the external world. An important task circuit designers deal with is evaluating the ultimate potential of a device for specific applications. In fact, once the technology has been selected, the designer has to choose the best device for the particular application and for the different blocks composing the overall MMIC. Thus, in order to accurately reproduce the behaviour of devices of different sizes, good scalability properties of the model are required. Another important aspect of empirical modelling of electron devices is the mathematical (or equivalent circuit) description of the nonlinearities inherently associated with the intrinsic device. Once the model has been defined, the proper measurements for the characterization of the device are performed in order to identify the model. Hence, the correct measurement of the device nonlinear characteristics (in the device characterization phase) and their reconstruction (in the identification or even simulation phase) are two of the most important aspects of empirical modelling. This thesis presents an original contribution to nonlinear electron device empirical modelling, treating the issues of model scalability and of the reconstruction of the device nonlinear characteristics. The scalability of an empirical model strictly depends on the scalability of the linear extrinsic parasitic network, which should possibly maintain the link between technological process parameters and the corresponding device electrical response. Since lumped parasitic networks, together with simple linear scaling rules, cannot provide accurate scalable models, the literature resorts either to complicated technology-dependent scaling rules or to computationally inefficient distributed models. This thesis shows how the above mentioned problems can be avoided through the use of commercially available electromagnetic (EM) simulators. They enable the actual device geometry and material stratification, as well as losses in the dielectrics and electrodes, to be taken into account for any given device structure and size, providing an accurate description of the parasitic effects which occur in the device passive structure. It is shown how the electron device behaviour can be described as an equivalent two-port intrinsic nonlinear block connected to a linear distributed four-port passive parasitic network, which is identified by means of the EM simulation of the device layout, allowing for better frequency extrapolation and scalability properties than conventional empirical models.
Concerning the reconstruction of the nonlinear electron device characteristics, a data approximation algorithm has been developed for exploitation in the framework of empirical table look-up nonlinear models. The approach is based on the strong analogy between time-domain signal reconstruction from a set of samples and the continuous approximation of device nonlinear characteristics on the basis of a finite grid of measurements. According to this criterion, nonlinear empirical device modelling can be carried out by applying, in the sampled voltage domain, typical methods of time-domain sampling theory.
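The sampling analogy can be made concrete with a one-dimensional sketch: reconstructing a continuous I(V) characteristic from a uniform grid of measured samples with the classical cardinal-series (sinc) interpolator. This only illustrates the principle under simplifying assumptions (uniform grid, band-limited characteristic); real table look-up models work on 2-D bias grids, and the data and function name below are invented:

```python
import numpy as np

def sinc_reconstruct(v_grid, i_samples, v_query):
    """Approximate a nonlinear I(V) characteristic measured on a uniform
    voltage grid, reusing the sampled-signal reconstruction formula
    i(v) = sum_k i_k * sinc((v - v_k) / dv)."""
    dv = v_grid[1] - v_grid[0]
    # np.sinc(x) is sin(pi x) / (pi x)
    kernel = np.sinc((v_query[:, None] - v_grid[None, :]) / dv)
    return kernel @ i_samples

v_grid = np.linspace(0.0, 1.0, 21)            # measured bias grid (V)
i_meas = 1e-3 * np.tanh(8 * (v_grid - 0.5))   # fake I-V samples (A)
v_fine = np.linspace(0.0, 1.0, 201)
i_fine = sinc_reconstruct(v_grid, i_meas, v_fine)
```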
Abstract:
The development of the digital electronics market is founded on the continuous reduction of transistor size, to reduce area, power and cost and to increase the computational performance of integrated circuits. This trend, known as technology scaling, is approaching the nanometre scale. The lithographic process in the manufacturing stage is becoming more uncertain as transistor sizes scale down, resulting in larger parameter variations in future technology generations. Furthermore, the exponential relationship between leakage current and threshold voltage is limiting threshold and supply voltage scaling, increasing the power density and creating local thermal issues such as hot spots, thermal runaway and thermal cycles. In addition, the introduction of new materials and the smaller device dimensions are reducing transistor robustness, which, combined with high temperatures and frequent thermal cycles, speeds up wear-out processes. These effects are no longer addressable at the process level alone. Consequently, deep sub-micron devices will require solutions involving several design levels, such as system and logic, and new approaches called Design For Manufacturability (DFM) and Design For Reliability. The purpose of these approaches is to bring awareness of device reliability and manufacturability into the early design stages, in order to introduce logic and system techniques able to cope with the yield and reliability loss. The ITRS roadmap suggests the following research steps to integrate design for manufacturability and reliability into the standard CAD automated design flow: i) new analysis algorithms able to predict the system's thermal behaviour and its impact on power and speed performance; ii) high-level wear-out models able to predict the mean time to failure (MTTF) of the system; iii) statistical performance analysis able to predict the impact of process variation, both random and systematic. The new analysis tools have to be developed alongside new logic and system strategies to cope with the future challenges, for instance: i) thermal management strategies that increase the reliability and lifetime of the devices by acting on some tunable parameter, such as supply voltage or body bias; ii) error detection logic able to interact with compensation techniques such as Adaptive Supply Voltage (ASV), Adaptive Body Bias (ABB) and error recovery, in order to increase yield and reliability; iii) architectures that are fundamentally resistant to variability, including locally asynchronous designs, redundancy, and error correcting signal encodings (ECC). The literature already features works addressing the prediction of the MTTF, papers focusing on thermal management in general purpose chips, and publications on statistical performance analysis. In my PhD research activity, I investigated the need for thermal management in future embedded low-power Network on Chip (NoC) devices. I developed a thermal analysis library, which has been integrated into a NoC cycle-accurate simulator and into an FPGA-based NoC simulator. The results have shown that an accurate layout distribution can avoid the onset of hot spots in a NoC chip. Furthermore, the application of thermal management can reduce the temperature and the number of thermal cycles, increasing system reliability. The thesis therefore advocates the need to integrate thermal analysis into the first design stages of embedded NoC design.
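For orientation, one common way a compact thermal analysis can be embedded in a simulator is as a lumped thermal RC network integrated in time. The sketch below is a generic illustration of that technique, not the thesis's library; all names and values are made up:

```python
import numpy as np

def thermal_step(T, P, G, g_amb, C, T_amb, dt):
    """One explicit-Euler step of a lumped thermal RC network.

    Each tile i is a node with heat capacity C[i] (J/K), power input
    P[i] (W), conductance G[i, j] (W/K) to neighbouring tiles and
    g_amb[i] (W/K) to ambient.
    """
    flow_neigh = (G * (T[:, None] - T[None, :])).sum(axis=1)
    dT = (P - flow_neigh - g_amb * (T - T_amb)) / C
    return T + dt * dT

# 4-tile toy NoC: tile 0 dissipates most of the power
n = 4
G = np.zeros((n, n))
for i in range(n - 1):                  # chain of neighbouring tiles
    G[i, i + 1] = G[i + 1, i] = 0.5
g_amb = np.full(n, 0.1)
C = np.full(n, 2.0)
P = np.array([3.0, 0.5, 0.5, 0.5])
T = np.full(n, 25.0)                    # start at ambient (deg C)
for _ in range(20000):
    T = thermal_step(T, P, G, g_amb, C, 25.0, dt=0.01)
print(T)  # steady-state estimate; the hottest tile is the hot-spot candidate
```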
Later on, I focused my research on the development of a statistical process variation analysis tool able to address both random and systematic variations. The tool was used to analyse the impact of self-timed asynchronous logic stages in an embedded microprocessor. The results confirmed the capability of self-timed logic to increase manufacturability and reliability. Furthermore, we used the tool to investigate the suitability of low-swing techniques for NoC system communication under process variations. In this case we discovered the superior robustness of low-swing links to systematic process variation, together with their good response to compensation techniques such as ASV and ABB. Hence low-swing signalling is a good alternative to standard CMOS communication in terms of power, speed, reliability and manufacturability. In summary, my work proves the advantage of integrating a statistical process variation analysis tool into the first stages of the design flow.
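The core of a statistical performance analysis of this kind can be suggested with a Monte Carlo sketch: sample a varying device parameter, propagate it through a delay model, and read off percentile delay and timing yield. This is a generic illustration with invented parameter values, not the tool developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def gate_delay(vth, vdd=1.0, k=1e-3, c_load=1e-15, alpha=1.3):
    """Alpha-power-law style delay estimate:
    t ~ C * Vdd / (k * (Vdd - Vth)^alpha). Illustrative constants."""
    return c_load * vdd / (k * (vdd - vth) ** alpha)

# random threshold-voltage variation with a hypothetical sigma
vth = rng.normal(loc=0.3, scale=0.03, size=100_000)
delay = gate_delay(vth)
t_999 = np.quantile(delay, 0.999)
yield_est = np.mean(delay < 1.15 * np.median(delay))
print(f"99.9th-percentile delay: {t_999:.3e} s, timing yield: {yield_est:.3f}")
```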
Abstract:
The ongoing innovation of the microwave transistor technologies used in the implementation of microwave circuits has to be supported by the study and development of proper design methodologies which, depending on the application, fully exploit the technology's potential. After the choice of the technology to be used in the particular application, the circuit designer has few degrees of freedom when carrying out his design; in most cases, due to technological constraints, foundries develop and provide customized processes optimized for a specific performance such as power, low noise, linearity, broadband operation, etc. For these reasons circuit design is always a compromise, a search for the best trade-off between the desired performances. This approach becomes crucial in the design of microwave systems for satellite applications: the tight space constraints impose reaching the best performance under electrical and thermal conditions properly de-rated with respect to the maximum ratings of the technology used, in order to ensure adequate levels of reliability. In particular, this work is about one of the most critical components in the front end of a satellite antenna, the High Power Amplifier (HPA). The HPA is the main source of power dissipation and therefore the element that weighs most heavily on the space, weight and cost of telecommunication equipment; it is clear from the above that design strategies addressing the optimization of power density, efficiency and reliability are of major concern. Many papers and publications demonstrate different methods for the design of power amplifiers, showing that very good levels of output power, efficiency and gain can be obtained. Starting from existing knowledge, the target of the research activities summarized in this dissertation was to develop a design methodology capable of optimizing power amplifier performance while complying with all the constraints imposed by space applications, taking the thermal behaviour into account in the same manner as power and efficiency. After a review of the existing theories of power amplifier design, the first section of this work describes the effectiveness of a methodology based on the accurate control and shaping of the dynamic load line, explaining all the steps in the design of two different kinds of high power amplifiers. Taking the trade-off between the main performances and reliability as the target of the design activity, we demonstrate that the expected results can be obtained by working on the characteristics of the load line at the intrinsic terminals of the selected active device. The methodology proposed in this first part assumes that the designer has an accurate electrical model of the device available; the variety of publications on this topic shows how difficult it is to build a CAD model capable of taking into account all the non-ideal phenomena that occur when the amplifier operates at such high frequency and power levels. For this reason, especially for the emerging Gallium Nitride (GaN) technology, the second section describes a new approach to power amplifier design, based on the experimental characterization of the intrinsic load line by means of a low-frequency, high-power measurement bench. Thanks to the possibility of developing my PhD in an academic spin-off, MEC - Microwave Electronics for Communications, the results of this activity have been applied to important research programmes requested by space agencies, with the aim of supporting technology transfer from universities to industry and promoting science-based entrepreneurship. For these reasons the proposed design methodology is illustrated with many experimental results.
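For reference, the textbook class-A load-line estimate (a standard first-order relation, not the thesis's specific methodology) links the optimum intrinsic load and the maximum output power to the device limits $V_{DD}$, $V_{knee}$ and $I_{max}$:

$$R_{opt} = \frac{2\,(V_{DD}-V_{knee})}{I_{max}}, \qquad P_{out,max} = \frac{(V_{DD}-V_{knee})\,I_{max}}{4}$$

Shaping the dynamic load line at the intrinsic terminals amounts to steering the voltage-current trajectory seen by the device towards such optimal conditions while respecting the de-rated limits.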
Abstract:
The present dissertation deals with methodologies and techniques of industrial and mechanical design. The author intends to give a complete picture of the world of design, presenting the theories of Quality Function Deployment and TRIZ, and other methods such as planning, budgeting, Value Analysis and Engineering, Concurrent Engineering, Design for Assembly and Manufacturing, etc., together with their application to five concrete cases. In these cases, design techniques such as CAD, CAS, CAM and rendering, which are ways to transform an idea into reality, are also illustrated. The most important contribution of the work, however, is the birth of a new methodology, arising from a comparison between QFD and TRIZ and their integration with other methodologies, such as Time and Cost Analysis, learned and practised during an important experience in a very famous Italian automotive factory.
Abstract:
The aim of this doctoral thesis is to develop a genetic-algorithm-based optimization method to find the best conceptual design architecture of an aero piston engine for given design specifications. Nowadays, the conceptual design of turbine airplanes starts with the aircraft specifications, and then the turbofan or turboprop best suited to the specific application is chosen. In the field of aeronautical piston engines, which has been dormant for several decades as interest shifted towards turbine aircraft, new materials with increased performance and properties have opened new possibilities for development. Moreover, the engine's modularity, given by the cylinder unit, makes it possible to design a specific engine for a given application. In many real engineering problems the number of design variables may be very high, with several non-linearities needed to describe the behaviour of the phenomena. In this case the objective function has many local extremes, but the designer is usually interested in the global one. Stochastic and evolutionary optimization techniques, such as genetic algorithms, may offer reliable solutions to such design problems within acceptable computational time. The optimization algorithm developed here can be employed in the first phase of the preliminary design of an aeronautical piston engine. It is a single-objective genetic algorithm which, starting from the given design specifications, finds the engine propulsive system configuration of minimum mass that satisfies the geometrical, structural and performance constraints. The algorithm reads the project specifications as input data, namely the maximum crankshaft and propeller shaft speeds and the maximum pressure in the combustion chamber. The design variable bounds, which describe the solution domain from the geometrical point of view, are introduced too. In the Matlab® Optimization environment the objective function to be minimized is defined as the sum of the masses of the engine propulsive components. Each individual generated by the genetic algorithm is the assembly of the flywheel, the vibration damper, and as many pistons, connecting rods and cranks as there are cylinders. The fitness is evaluated for each individual of the population, and then the genetic operators are applied: reproduction, mutation, selection and crossover. In the reproduction step an elitist method is applied, in order to protect the fittest individuals from disruptive mutation and recombination, so that they survive undamaged into the next generation. Finally, once the best individual is found, the optimal dimensions of the components are saved to an Excel® file, in order to build an automatic 3D CAD model of each component of the propulsive system, providing a direct preview of the final product while still in the engine's preliminary design phase. To show the performance of the algorithm and validate the optimization method, an actual engine is taken as a case study: the 1900 JTD Fiat Avio, a four-cylinder, four-stroke diesel. Many verifications are made on the mechanical components of the engine, in order to test their feasibility and decide their survival through the generations. A system of inequalities is used to describe the non-linear relations between the design variables and to check the components under static and dynamic load configurations.
The geometrical boundaries of the design variables are taken from actual engine data and similar design cases. Among the many simulations run for algorithm testing, twelve have been chosen as representative of the distribution of the individuals. Then, as an example, for each simulation the corresponding 3D models of the crankshaft and the connecting rod have been automatically built. In spite of morphological differences among the components, the mass is almost the same. The results show a significant mass reduction (almost 20% for the crankshaft) compared with the original configuration, and the method has shown acceptable robustness. The algorithm developed here is shown to be a valid method for the preliminary design optimization of an aeronautical piston engine. In particular, the procedure is able to analyse quite a wide range of design solutions, rejecting those that cannot fulfil the feasibility design specifications. This optimization algorithm could boost aeronautical piston engine development, speeding up the production rate and joining modern computational performance and technological awareness to long-standing traditional design experience.
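The overall loop described above (constrained mass minimization with elitism) can be sketched compactly. This is a generic single-objective GA in Python with invented objective and constraint functions, not the thesis's Matlab® implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def mass(x):
    """Hypothetical objective: total mass as a linear function of the
    design vector x (e.g. bore, stroke, crank web thickness)."""
    return x @ np.array([1.0, 2.0, 0.5])

def feasible(x):
    """Stand-in for the structural/geometric inequality checks."""
    return (x > 0).all() and x.sum() < 10.0

def ga(pop=40, gens=200, nvar=3, lo=0.1, hi=5.0, elite=2):
    P = rng.uniform(lo, hi, (pop, nvar))
    for _ in range(gens):
        f = np.array([mass(x) if feasible(x) else 1e9 for x in P])
        P = P[np.argsort(f)]                         # best individuals first
        children = []
        while len(children) < pop - elite:
            a, b = P[rng.integers(0, pop // 2, 2)]   # pick two good parents
            w = rng.random(nvar)
            child = w * a + (1 - w) * b              # blend crossover
            child += rng.normal(0, 0.05, nvar)       # mutation
            children.append(np.clip(child, lo, hi))
        P = np.vstack([P[:elite], children])         # elitism
    f = np.array([mass(x) if feasible(x) else 1e9 for x in P])
    return P[np.argmin(f)], f.min()

best, fbest = ga()
print(best, fbest)
```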
Abstract:
This work presents exact, hybrid algorithms for mixed resource allocation and scheduling problems; in general terms, these consist in assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of embedded system design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part explained by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods, while compensating for their weaknesses. In this work, we first consider an allocation and scheduling problem over the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search. Next, we face allocation and scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address allocation and scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on practical-size problems, demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
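To fix ideas about the problem structure (not the exact hybrid CP/OR methods of the thesis), here is a deliberately simple greedy list-scheduling sketch for precedence-connected activities on a finite set of identical resources; all names and data are illustrative:

```python
def list_schedule(durations, preds, capacity):
    """Greedy list scheduling of precedence-connected activities on
    `capacity` identical resources; returns the start time of each task."""
    n = len(durations)
    start, end = [None] * n, [None] * n
    done, t, running = set(), 0, {}        # running: task -> finish time
    while len(done) < n:
        # release resources whose tasks have finished by time t
        for task, fin in list(running.items()):
            if fin <= t:
                done.add(task)
                del running[task]
        ready = [i for i in range(n)
                 if start[i] is None and all(p in done for p in preds[i])]
        for i in ready:
            if len(running) < capacity:    # a resource is free: start task
                start[i], end[i] = t, t + durations[i]
                running[i] = end[i]
        # advance to the next completion event
        t = min(running.values()) if running else t + 1
    return start

# three tasks, task 2 waits for 0 and 1, two processors
print(list_schedule([3, 2, 4], {0: [], 1: [], 2: [0, 1]}, capacity=2))
# -> [0, 0, 3]
```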
Abstract:
Continuous developments in integrated circuit fabrication have brought frequent upheavals in the design, implementation and scaling of electronic devices, as well as in the way they are used. Even though Moore's law has anticipated and characterized this trend over recent decades, it currently faces enormous limitations, which can only be overcome through a different approach to chip production, consisting in practice of stacking several layers vertically, electrically connected through special vias. On the single layer, networks on chip have been proposed to overcome the deep scaling limitations of shared communication structures. This thesis is mainly set in the context of the emerging high-performance multicore platforms based on 3D NoCs, in which the network on chip is extended in all three directions. The goal of this work is to provide a set of tools and techniques for building and characterizing a three-dimensional platform, as demonstrated in the realization of the 3D NoC test chip fabricated at the IMEC foundry. The first contribution consists of an accurate characterization of the vertical interconnects (TSVs, the special vias that cross the entire substrate of the die), the characterization of 3D routers (in which one or more ports are extended in the vertical direction), and the setup of a 3D design flow using entirely 2D CAD tools. This first step allowed detailed analyses of both the cost and the various implications. The second contribution is the development of functional blocks needed to guarantee the correct operation of the 3D NoC in the presence of faults in the TSVs (fault-tolerant links) and of thermal drift among the clock trees of the different dies (independent clock trees). This contribution consists of the following circuit solutions: a 3D fault-tolerant link, reconfigurable Look-Up Tables, and a mesochronous synchronizer. The first is essentially a vertical bus equipped with spare TSVs, used to replace faulty vias, plus the control logic to perform test and reconfiguration. The second is a high-performance, low-cost reconfigurable Look-Up Table, needed both to balance the traffic in the NoC and to bypass unrepairable links. The third circuit solution is a mesochronous synchronizer needed to guarantee synchronization in data transfers from one layer to another in 3D NoCs. The third contribution of this thesis is the realization of a high-performance multicore interface for 3D memories (stacked 3D DRAM), together with an architectural exploration of the benefits and cost of this new system, in which main memory is no longer the bottleneck of the whole system. The fourth and last contribution is the realization of a 3D NoC test chip at the IMEC foundry, and of a full-custom circuit for characterizing the variability of the RC parameters of the vertical interconnects.
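As a toy software model of the spare-TSV reconfiguration idea (the actual design is on-chip test-and-reconfigure logic; the function below and its data are invented for illustration):

```python
def remap_tsvs(fault_map, n_data, n_spare):
    """Assign each data bit of a vertical link to a working TSV,
    shifting onto spares when a fault is detected.

    `fault_map` lists, for all n_data + n_spare physical TSVs,
    whether the built-in test flagged them as faulty.
    """
    good = [i for i, faulty in enumerate(fault_map) if not faulty]
    if len(good) < n_data:
        return None          # link not repairable -> must be bypassed
    return good[:n_data]     # logical bit k travels on physical TSV good[k]

# 8-bit link with 2 spares, TSVs 3 and 5 faulty
assignment = remap_tsvs(
    fault_map=[False, False, False, True, False, True,
               False, False, False, False],
    n_data=8, n_spare=2)
print(assignment)  # [0, 1, 2, 4, 6, 7, 8, 9]
```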
Abstract:
The work of the present thesis is focused on the implementation of microelectronic voltage sensing devices, with the purpose of transmitting and extracting analog information between devices of different nature at short distances or upon contact. Initially, chip-to-chip communication was studied, and circuitry for 3D capacitive coupling was implemented. Such circuits allow communication between dies fabricated in different technologies. Due to their novelty, they are not standardized and are currently not supported by standard CAD tools. To overcome this burden, a novel approach for the characterization of such communicating links has been proposed, resulting in shorter design times and increased accuracy. Communication between an integrated circuit (IC) and a probe card has been extensively studied as well. Today wafer probing is a costly test procedure with many drawbacks, which could be overcome by a different communication approach such as capacitive coupling. For this reason wireless wafer probing has been investigated as an alternative to standard on-contact wafer probing. Interfaces between integrated circuits and biological systems have also been investigated. Active electrodes for simultaneous electroencephalography (EEG) and electrical impedance tomography (EIT) have been implemented for the first time in a 0.35 um process. The number of wires has been minimized by sharing the analog outputs and supply on a single wire, thus implementing electrodes that require only 4 wires for their operation. Minimizing the wires reduces the cable weight and thus limits the patient's discomfort. The physical channel for communication between an IC and a biological medium is the electrode itself. As this is a crucial point for biopotential acquisition, large efforts have been made to investigate the different electrode technologies and geometries, and an electromagnetic model is presented to characterize the properties of the electrode-to-skin interface.
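For context, a commonly used lumped equivalent of the electrode-skin interface (a standard textbook form, not necessarily the specific model developed in the thesis) is

$$Z(\omega) = R_s + \frac{R_{ct}}{1 + j\omega R_{ct} C_{dl}}$$

where $R_s$ is the series (gel/tissue) resistance, $R_{ct}$ the charge-transfer resistance and $C_{dl}$ the double-layer capacitance of the electrode-skin contact.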
Abstract:
INTRODUCTION. Late chronic allograft dysfunction (CAD) is one of the most concerning issues in the management of patients (pts) with a renal transplant (tx). The humoral immune response seems to play an important role in CAD pathogenesis. AIM OF THE STUDY. To identify the causes of late chronic allograft dysfunction. METHODS. This study (March 2004 - August 2011) enrolled pts who underwent renal biopsy (BR) because of CAD (increase of creatininemia (s-Cr) >30% and/or proteinuria >1 g/day at least one year after tx). BR were classified according to the 1997/2005 Banff classification. Histological evaluation of C4d (positive if >25%), glomerulitis, tubulitis, intimal arteritis, atrophy/fibrosis and arteriolar hyalinosis was performed. Anti-HLA antibody testing at BR was an inclusion criterion. Pts were divided into two groups: with or without transplant glomerulopathy (CTG). RESULTS. Evaluated BR: 93/109. BR indication: impaired s-Cr (52/93), proteinuria (23/93), both (18/93). Time tx-BR: 7.4±6.3 yrs; s-Cr at BR: 2.7±1.4 mg/dl.

                            CTG group (n=49)   not-CTG group (n=44)   p
Time tx-BR (yrs)            9.3±6.7            5.3±5.2                0.002
Follow-up post-BR (yrs)     2.7±1.8            4.1±1.4                0.0001
s-Cr at BR (mg/dl)          2.9±1.3            2.4±1.5                NS
Rate (%) of pts:
  Proteinuria at BR         61%                25%                    0.0004
  C4d+                      84%                25%                    <0.0001
  Ab anti-HLA+              71%                30%                    0.0001
  C4d+ and/or Ab anti-HLA   92%                43%                    0.0001
  Glomerulitis              76%                16%                    <0.0001
  Tubulitis                 6%                 32%                    0.0014
  Intimal arteritis         18%                0%                     0.002
  Arteriolar hyalinosis     65%                50%                    NS
  Atrophy/fibrosis          80%                77%                    NS
  Graft survival            45%                86%                    0.00005

Histological diagnosis: CTG group (n=49): chronic rejection 94%; IgA recurrence + humoral activity 4%; grade IIA acute rejection + humoral activity 2%. Not-CTG group (n=44): GN recurrence 27%; IF/TA 23%; acute rejection 23%; BKV nephritis 9%; mild non-specific alterations 18%. CONCLUSIONS. CTG is the morphological lesion mainly related to CAD. In 92% of cases it is associated with markers of immunological activity. It causes graft failure within five years after diagnosis in 55% of pts.
Abstract:
1. Mandibular reconstruction. Mandibular reconstruction is commonly performed using a free fibula flap. The conventional (indirect) Computer Aided Design and Computer Aided Manufacturing method involves the preoperative manual shaping of a standard osteosynthesis plate on a stereolithographic model of the mandible. An innovative direct CAD-CAM method comprises three phases: 1) virtual planning, 2) computer aided design of the mandibular cutting guide, the fibula cutting guide and the osteosynthesis plate, and 3) computer aided manufacturing of the three customized surgical devices. Seven mandibular reconstructions were carried out with the direct method. The results achieved and the planning procedures are described and discussed. The computer aided design and computer aided manufacturing technique facilitates accurate mandibular reconstruction and brings a statistically significant improvement over the conventional method. 2. Oral cavity and oropharynx. A standard reconstructive method for the oral cavity and oropharynx is described. 163 patients affected by cancer of the oral cavity and oropharynx were treated from 1992 to 2012, performing a total of 175 free flaps. The surgical strategy is described in terms of flap choice, shaping and insetting. Two-dimensional templates are used to plan a three-dimensional reconstruction with the best functional and aesthetic result. The templates, flap choice and insetting are described for each region. Complications and functional outcomes were evaluated systematically. The results showed good functional recovery with the reconstructive techniques described. A reconstructive algorithm based on standard templates is proposed.
Abstract:
Ultrasound imaging is widely used in medical diagnostics as it is the fastest, least invasive and least expensive imaging modality. However, ultrasound images are intrinsically difficult to interpret. In this scenario, Computer Aided Detection (CAD) systems can support physicians during diagnosis by providing a second opinion. This thesis discusses efficient ultrasound processing techniques for computer aided medical diagnostics, focusing on two major topics: (i) Ultrasound Tissue Characterization (UTC), aimed at characterizing and differentiating between healthy and diseased tissue; (ii) Ultrasound Image Segmentation (UIS), aimed at detecting the boundaries of anatomical structures to automatically measure organ dimensions and compute clinically relevant functional indices. Research on UTC produced a CAD tool for prostate cancer detection to improve the biopsy protocol. In particular, this thesis contributes: (i) the development of a robust classification system; (ii) the exploitation of parallel computing on GPUs for real-time performance; (iii) the introduction of an innovative semi-supervised learning algorithm and of a novel supervised/semi-supervised learning scheme for CAD system training, which improve system performance while reducing the data collection effort and avoiding wasting the collected data. The tool provides physicians with a risk map highlighting suspect tissue areas, allowing them to perform a lesion-directed biopsy. Clinical validation demonstrated the system's validity as a diagnostic support tool and its effectiveness at reducing the number of biopsy cores required for an accurate diagnosis. For UIS, the research developed a heart disease diagnostic tool based on real-time 3D echocardiography. The thesis contributions to this application are: (i) the development of an automated GPU-based level-set segmentation framework for 3D images; (ii) the application of this framework to myocardium segmentation. Experimental results showed the high efficiency and flexibility of the proposed framework. Its effectiveness as a tool for quantitative analysis of 3D cardiac morphology and function was demonstrated through clinical validation.
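To suggest the level-set idea behind such a segmentation framework, here is a deliberately minimal 2-D sketch of one explicit evolution step (the real framework is 3-D, GPU-parallel and uses image-driven speed terms; the constant speed and data below are invented):

```python
import numpy as np

def level_set_step(phi, speed, dt=0.1):
    """One explicit update of the level-set evolution
    dphi/dt = speed * |grad phi|, where phi < 0 marks the interior."""
    gy, gx = np.gradient(phi)
    return phi + dt * speed * np.sqrt(gx**2 + gy**2)

# shrink a circle: phi = signed distance to a radius-10 circle
y, x = np.mgrid[0:64, 0:64]
phi = np.sqrt((x - 32.0)**2 + (y - 32.0)**2) - 10.0
for _ in range(50):
    phi = level_set_step(phi, speed=0.5)  # positive speed shrinks the contour
print((phi < 0).sum())                    # remaining interior area in pixels
```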