27 results for Saliva collection devices and methods

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

Human movement analysis (HMA) aims to measure the abilities of a subject to stand or to walk. In the field of HMA, tests are performed daily in research laboratories, hospitals and clinics, aiming to diagnose a disease, distinguish between disease entities, monitor the progress of a treatment and predict the outcome of an intervention [Brand and Crowninshield, 1981; Brand, 1987; Baker, 2006]. To achieve these purposes, clinicians and researchers use measurement devices such as force platforms, stereophotogrammetric systems, accelerometers and baropodometric insoles. This thesis focuses on the force platform (FP) and in particular on the quality assessment of FP data. The principal objective of our work was the design and experimental validation of a portable system for the in situ calibration of FPs. The thesis is structured as follows.
Chapter 1. Description of the physical principles underlying the functioning of an FP: how these principles are used to create force transducers, such as strain gauges and piezoelectric transducers. Then, description of the two categories of FPs, three- and six-component, of signal acquisition (hardware structure) and of signal calibration. Finally, a brief description of the use of FPs in HMA, for balance or gait analysis.
Chapter 2. Description of inverse dynamics, the most common method used in the field of HMA. This method uses the signals measured by an FP to estimate kinetic quantities, such as joint forces and moments. These variables cannot be measured directly without very invasive techniques; consequently, they can only be estimated using indirect techniques, such as inverse dynamics. Finally, a brief description of the sources of error present in gait analysis.
Chapter 3. State of the art in FP calibration. The selected literature is divided into sections, each describing: systems for the periodic control of FP accuracy; systems for error reduction in FP signals; systems and procedures for the construction of an FP. In particular, a calibration system designed by our group, based on the theoretical method proposed by ?, is described in detail. This system was the starting point for the new system presented in this thesis.
Chapter 4. Description of the new system, divided into its parts: 1) the algorithm; 2) the device; and 3) the calibration procedure for correctly performing the calibration process. The characteristics of the algorithm were optimized by a simulation approach, whose results are presented here. In addition, the different versions of the device are described.
Chapter 5. Experimental validation of the new system, achieved by testing it on 4 commercial FPs. The effectiveness of the calibration was verified by measuring, before and after calibration, the accuracy of the FPs in measuring the centre of pressure of an applied force. The new system can estimate local and global calibration matrices; using these matrices, the non-linearity of the FPs was quantified and locally compensated. Furthermore, a non-linear calibration is proposed, which compensates the non-linear effect in the FP functioning due to the bending of its upper plate. The experimental results are presented.
Chapter 6. Influence of FP calibration on the estimation of kinetic quantities with the inverse dynamics approach.
Chapter 7. The conclusions of this thesis are presented: the need for a calibration of FPs and the consequent enhancement in kinetic data quality.
Appendix: Calibration of the load cell (LC) used in the presented system. Different calibration set-ups of a 3D force transducer are presented, and the optimal set-up is proposed, with particular attention to the compensation of non-linearities. The optimal set-up is verified by experimental results.
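
The centre-of-pressure (COP) accuracy check of Chapter 5 can be made concrete with a short sketch. The following Python fragment is a minimal illustration, not the thesis algorithm: it applies a generic 6x6 calibration matrix to the raw six-component output of a plate and computes the COP from the calibrated wrench. The sign conventions and the surface offset z0 differ between commercial plates and are assumptions here.

import numpy as np

def calibrated_wrench(C, raw):
    """Map raw six-component readings to calibrated loads [Fx,Fy,Fz,Mx,My,Mz]."""
    return C @ raw

def centre_of_pressure(F, M, z0=0.0):
    """COP on the plate surface; z0 is the (plate-dependent) offset between
    the top surface and the transducer reference frame."""
    Fx, Fy, Fz = F
    Mx, My, _ = M
    return (-My - Fx * z0) / Fz, (Mx - Fy * z0) / Fz

C = np.eye(6)                                          # identity: uncalibrated plate
raw = np.array([10.0, 5.0, 700.0, 20.0, -15.0, 1.0])   # [N, N, N, Nm, Nm, Nm]
w = calibrated_wrench(C, raw)
print(centre_of_pressure(w[:3], w[3:]))                # (x, y) in metres

Replacing the identity with a matrix estimated in situ is precisely what shifts the measured COP toward its true location.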

Relevance: 100.00%

Abstract:

Computer aided design of Monolithic Microwave Integrated Circuits (MMICs) depends critically on active device models that are accurate, computationally efficient, and easily extracted from measurements or device simulators. Empirical models of active electron devices, which are based on actual device measurements, do not provide a detailed description of the electron device physics. However, they are numerically efficient and quite accurate. These characteristics make them very suitable for MMIC design in the framework of commercially available CAD tools. In the empirical model formulation it is very important to separate linear memory effects (parasitic effects) from the nonlinear effects (intrinsic effects). Thus an empirical active device model is generally described by an extrinsic linear part which accounts for the parasitic passive structures connecting the nonlinear intrinsic electron device to the external world. An important task circuit designers deal with is evaluating the ultimate potential of a device for specific applications. In fact, once the technology has been selected, the designer chooses the best device for the particular application and the best device for the different blocks composing the overall MMIC. Thus, in order to accurately reproduce the behaviour of different-sized devices, good scalability properties of the model are required. Another important aspect of empirical modelling of electron devices is the mathematical (or equivalent circuit) description of the nonlinearities inherently associated with the intrinsic device. Once the model has been defined, the proper measurements for the characterization of the device are performed in order to identify the model. Hence, the correct measurement of the device nonlinear characteristics (in the device characterization phase) and their reconstruction (in the identification or even simulation phase) are two of the most important aspects of empirical modelling. This thesis presents an original contribution to nonlinear electron device empirical modelling, treating the issues of model scalability and reconstruction of the device nonlinear characteristics. The scalability of an empirical model strictly depends on the scalability of the linear extrinsic parasitic network, which should possibly maintain the link between technological process parameters and the corresponding device electrical response. Since lumped parasitic networks, together with simple linear scaling rules, cannot provide accurate scalable models, the literature offers either complicated technology-dependent scaling rules or computationally inefficient distributed models. This thesis shows how the above mentioned problems can be avoided through the use of commercially available electromagnetic (EM) simulators. They enable the actual device geometry and material stratification, as well as losses in the dielectrics and electrodes, to be taken into account for any given device structure and size, providing an accurate description of the parasitic effects which occur in the device passive structure. It is shown how the electron device behaviour can be described as an equivalent two-port intrinsic nonlinear block connected to a linear distributed four-port passive parasitic network, which is identified by means of the EM simulation of the device layout, allowing for better frequency extrapolation and scalability properties than conventional empirical models.
Concerning the reconstruction of the nonlinear electron device characteristics, a data approximation algorithm has been developed for use in the framework of empirical table look-up nonlinear models. The approach is based on the strong analogy between time-domain signal reconstruction from a set of samples and the continuous approximation of device nonlinear characteristics on the basis of a finite grid of measurements. According to this criterion, nonlinear empirical device modelling can be carried out by applying, in the sampled voltage domain, typical methods of time-domain sampling theory.
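
The analogy between sample-based signal reconstruction and table look-up modelling invoked above can be illustrated with a hedged sketch: a continuous I(V) characteristic is rebuilt from a uniform grid of samples by Whittaker-Shannon (sinc) interpolation, exactly as a band-limited time-domain signal is rebuilt from its samples. The tanh characteristic below is a hypothetical stand-in for measured data, not a device from the thesis.

import numpy as np

def sinc_reconstruct(v_grid, i_samples, v_query):
    """Evaluate the sinc-interpolated characteristic at the query voltages."""
    dv = v_grid[1] - v_grid[0]                  # uniform grid spacing (V)
    # one shifted sinc kernel per measured sample (np.sinc is normalised)
    kernels = np.sinc((v_query[:, None] - v_grid[None, :]) / dv)
    return kernels @ i_samples

v_grid = np.linspace(-1.0, 1.0, 21)             # measurement grid (V)
i_samples = np.tanh(3.0 * v_grid)               # hypothetical I-V samples (A)
v_query = np.linspace(-0.9, 0.9, 7)
print(sinc_reconstruct(v_grid, i_samples, v_query))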

Relevance: 100.00%

Abstract:

Analytics is the technology that manipulates data to produce information capable of changing the world we live in every day. Over the last decade, analytics has been widely used to cluster people's behaviour and predict their preferences for items to buy, music to listen to, movies to watch and even electoral choices. The most advanced companies have succeeded in steering people's behaviour using analytics. Despite this evidence of the power of analytics, it is rarely applied to the big data collected within supply chain systems (i.e. distribution networks, storage systems and production plants). This PhD thesis explores the fourth research paradigm (i.e. the generation of knowledge from data) applied to supply chain system design and operations management. An ontology defining the entities and the metrics of supply chain systems is used to design data structures for data collection in supply chain systems. The consistency of this data is ensured by mathematical demonstrations inspired by factory physics theory. The availability, quantity and quality of the data within these data structures define different decision patterns. Ten decision patterns are identified, and validated in the field, to address ten different classes of design and control problems in supply chain systems research.
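
As a hedged illustration of what data structures designed from an ontology might look like in practice, the sketch below defines a typed record for one supply-chain entity and a simple metric over it; the entity, its fields and the metric are illustrative assumptions, not the ontology developed in the thesis.

from dataclasses import dataclass

@dataclass
class StorageSystem:
    """One ontology entity (hypothetical): a storage node of the network."""
    location: str
    capacity_units: int
    stock_units: int

    def saturation(self) -> float:
        """A simple metric: fraction of the capacity currently occupied."""
        return self.stock_units / self.capacity_units

warehouse = StorageSystem(location="Bologna", capacity_units=10_000, stock_units=7_250)
print(f"saturation = {warehouse.saturation():.2%}")   # 72.50%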

Relevance: 100.00%

Abstract:

Deep Neural Networks (DNNs) have revolutionized a wide range of applications beyond traditional machine learning and artificial intelligence fields, e.g., computer vision, healthcare, natural language processing and others. At the same time, edge devices have become central in our society, generating an unprecedented amount of data which could be used to train data-hungry models such as DNNs. However, the potentially sensitive or confidential nature of gathered data poses privacy concerns when storing and processing them in centralized locations. To this purpose, decentralized learning decouples model training from the need to directly access raw data, by alternating on-device training and periodic communication. The ability to distill knowledge from decentralized data, however, comes at the cost of facing more challenging learning settings, such as coping with heterogeneous hardware and network connectivity, statistical diversity of data, and ensuring verifiable privacy guarantees. This Thesis proposes an extensive overview of the decentralized learning literature, including a novel taxonomy and a detailed description of the most relevant system-level contributions for privacy, communication efficiency, data and system heterogeneity, and poisoning defense. Next, this Thesis presents the design of an original solution to tackle communication efficiency and system heterogeneity, and empirically evaluates it in federated settings. For communication efficiency, an original method, specifically designed for Convolutional Neural Networks, is also described and evaluated against the state of the art. Furthermore, this Thesis provides an in-depth review of recently proposed methods to tackle the performance degradation introduced by data heterogeneity, followed by empirical evaluations on challenging data distributions, highlighting the strengths and possible weaknesses of the considered solutions. Finally, this Thesis presents a novel perspective on the use of Knowledge Distillation as a means of optimizing decentralized learning systems in settings characterized by data or system heterogeneity. Our vision of relevant future research directions closes the manuscript.
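
A minimal sketch can make the on-device training / periodic communication loop concrete. The fragment below implements a FedAvg-style round over toy linear models: each client takes a local gradient step on its private data, and only the parameters are averaged. Model, data and uniform weighting are assumptions for illustration; they are not the solutions proposed in the Thesis.

import numpy as np

def local_step(w, X, y, lr=0.1):
    """One on-device gradient step for least-squares regression (stand-in model)."""
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(5):                               # five devices with private data
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=50)))

w_global = np.zeros(2)
for _ in range(20):                              # communication rounds
    updates = [local_step(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(updates, axis=0)          # server-side averaging
print(w_global)                                  # approaches w_true; raw data never shared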

Relevance: 100.00%

Abstract:

Proper hazard identification has become progressively more difficult to achieve, as witnessed by several major accidents that took place in Europe, such as the ammonium nitrate explosion at Toulouse (2001) and the vapour cloud explosion at Buncefield (2005), whose accident scenarios had not been considered in the site safety cases. Furthermore, the rapid renewal of industrial technology has brought about the need to upgrade hazard identification methodologies. Accident scenarios of emerging technologies, which are not yet properly identified, may remain unidentified until they take place for the first time. The consideration of atypical scenarios deviating from normal expectations of unwanted events or from worst-case reference scenarios is thus extremely challenging. A specific method named Dynamic Procedure for Atypical Scenarios Identification (DyPASI) was developed as a complementary tool to bow-tie identification techniques. The main aim of the methodology is to provide an easier but comprehensive hazard identification of the industrial process analysed, by systematizing information from early signals of risk related to past events, near misses and inherent studies. DyPASI was validated on two examples of new and emerging technologies: Liquefied Natural Gas regasification and Carbon Capture and Storage. The study broadened the knowledge of the related emerging risks and, at the same time, demonstrated that DyPASI is a valuable tool to obtain a complete and updated overview of potential hazards. Moreover, in order to tackle the underlying causes of atypical accidents, three methods for the development of early warning indicators were assessed: the Resilience-based Early Warning Indicator (REWI) method, the Dual Assurance method and the Emerging Risk Key Performance Indicator method. REWI was found to be the most complementary and effective of the three, demonstrating that its synergy with DyPASI would be an adequate strategy to improve hazard identification methodologies towards the capture of atypical accident scenarios.

Relevance: 100.00%

Abstract:

This thesis proposes an integrated, holistic approach to the study of neuromuscular fatigue, in order to encompass all the causes and all the consequences underlying the phenomenon. Starting from the metabolic processes occurring at the cellular level, the reader is guided toward the physiological changes at the motoneuron and motor unit level, and from there to the more general biomechanical alterations. Chapter 1 reports the various definitions of fatigue spanning several contexts. Chapter 2 extensively reviews the electrophysiological changes in terms of motor unit behaviour and descending neural drive to the muscle, as well as the biomechanical adaptations they induce. Chapter 3 reports a study based on the observation of temporal features extracted from sEMG signals, which highlighted the need for a more robust and reliable indicator during fatiguing tasks. Therefore, in Chapter 4, a novel bi-dimensional parameter is proposed. The study of sEMG-based indicators also opened a scenario on the neurophysiological mechanisms underlying fatigue. For this purpose, Chapter 5 presents a protocol designed for the analysis of motor unit-related parameters during prolonged fatiguing contractions. In particular, two methodologies have been applied to multichannel sEMG recordings of isometric contractions of the Tibialis Anterior muscle: the state-of-the-art technique for sEMG decomposition and a coherence analysis of MU spike trains. The importance of a multi-scale approach is finally highlighted in the context of the evaluation of cycling performance, where fatigue is one of the limiting factors. In particular, the last chapter of this thesis can be considered a paradigm: physiological, metabolic, environmental, psychological and biomechanical factors influence the performance of a cyclist, and only when all of these are considered together in a novel integrative way is it possible to derive a clear model and make correct assessments.
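
As a hedged companion to the discussion of sEMG features, the sketch below computes two classical fatigue indicators, the RMS amplitude and the mean spectral frequency (whose downward drift over a sustained contraction is a standard fatigue marker); it is not the novel bi-dimensional parameter of Chapter 4, and the synthetic signal stands in for real recordings.

import numpy as np
from scipy.signal import welch

fs = 2048.0                                    # sampling rate (Hz), typical for sEMG
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)
emg = np.sin(2 * np.pi * 80 * t) + 0.5 * rng.normal(size=t.size)   # toy signal

rms = np.sqrt(np.mean(emg ** 2))               # time-domain amplitude feature
f, pxx = welch(emg, fs=fs, nperseg=1024)       # power spectral density
mnf = np.sum(f * pxx) / np.sum(pxx)            # mean frequency (spectral feature)
print(f"RMS = {rms:.3f}, MNF = {mnf:.1f} Hz")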

Relevance: 100.00%

Abstract:

This thesis provides a thorough theoretical background in network theory and shows novel applications to real problems and data. In the first chapter a general introduction to network ensembles is given, and the relations with "standard" equilibrium statistical mechanics are described. Moreover, an entropy measure is used to analyze the statistical properties of integrated PPI-signalling-mRNA expression networks in different cases. In the second chapter multilayer networks are introduced to evaluate and quantify the correlations between real interdependent networks. Multiplex networks describing citation-collaboration interactions and patterns in colorectal cancer are presented. The last chapter is entirely dedicated to control theory and its relation with network theory. We characterise how the structural controllability of a network is affected by the fraction of low in-degree and low out-degree nodes. Finally, we present a novel approach to the controllability of multiplex networks.
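
The structural-controllability analysis of the last chapter builds on a computable criterion: by the minimum-input theorem of Liu et al., the minimum number of driver nodes of a directed network equals N minus the size of a maximum matching of its bipartite representation. The sketch below applies this standard result (not the thesis's multiplex extension) with networkx.

import networkx as nx

def n_driver_nodes(edges, n):
    """Minimum number of driver nodes for a digraph on nodes 0..n-1."""
    B = nx.Graph()
    B.add_nodes_from((f"out{u}" for u in range(n)), bipartite=0)
    B.add_nodes_from((f"in{v}" for v in range(n)), bipartite=1)
    B.add_edges_from((f"out{u}", f"in{v}") for u, v in edges)
    top = {f"out{u}" for u in range(n)}
    matching = nx.bipartite.hopcroft_karp_matching(B, top_nodes=top)
    matched = sum(1 for node in matching if node in top)   # matching size
    return max(n - matched, 1)

edges = [(0, 1), (1, 2), (2, 3), (1, 4)]        # small example digraph
print(n_driver_nodes(edges, 5))                 # -> 2 driver nodes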

Relevance: 100.00%

Abstract:

The idea of matching the resources spent in the acquisition and encoding of natural signals to their intrinsic information content has driven nearly a decade of research under the name of compressed sensing. In this doctoral dissertation we develop some extensions and improvements upon the foundations of this technique, by modifying the random sensing matrices on which the signals of interest are projected in order to achieve different objectives. Firstly, we propose two methods for the adaptation of sensing matrix ensembles to the second-order moments of natural signals. These techniques leverage the maximisation of different proxies for the quantity of information acquired by compressed sensing, and are efficiently applied to the encoding of electrocardiographic tracks with minimum-complexity digital hardware. Secondly, we focus on the possibility of using compressed sensing as a method to provide a partial, yet cryptanalysis-resistant form of encryption; in this context, we show how a random matrix generation strategy with a controlled amount of perturbations can be used to distinguish between multiple user classes with different quality of access to the encrypted information content. Finally, we explore the application of compressed sensing to the design of a multispectral imager, by implementing an optical scheme that entails a coded aperture array and Fabry-Pérot spectral filters. The signal recoveries obtained by processing real-world measurements show promising results that leave room for an improvement of the sensing matrix calibration problem in the devised imager.
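
A minimal, generic compressed-sensing loop (with a plain random ensemble, not the adapted or perturbed ensembles studied in the dissertation) may help fix ideas: a k-sparse signal is acquired through m < n random projections and recovered by orthogonal matching pursuit.

import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))     # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)               # random sensing matrix
print(np.linalg.norm(omp(A, A @ x, k) - x))            # near-zero recovery error

Adapting the statistics of A to the second-order moments of the signal class, as done in the dissertation, raises the information captured per measurement at equal m.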

Relevance: 100.00%

Abstract:

In this thesis, a TCAD approach for the investigation of charge transport in amorphous silicon dioxide is presented for the first time. The proposed approach is used to investigate high-voltage thick TEOS silicon-oxide capacitors embedded in the back-end inter-level dielectric layers for galvanic insulation applications. In the first part of this thesis, a detailed review of the main physical and chemical properties of silicon dioxide and of the main physical models for the description of charge transport in insulators is presented. In the second part, the characterization of high-voltage MIM structures under different high-field stress conditions up to breakdown is presented. The main physical mechanisms responsible for the observed results are then discussed in detail. The third part is dedicated to the implementation of a TCAD approach capable of describing charge transport in silicon dioxide layers, in order to gain insight into the microscopic physical mechanisms responsible for the leakage current in MIM structures. In particular, I investigated and modeled the role of charge injection at the contacts and of charge build-up due to trapping and de-trapping mechanisms in the oxide layer, for the purpose of understanding its behavior under DC and AC stress conditions. In addition, oxide breakdown due to impact ionization of carriers has been taken into account in order to have a complete representation of the oxide behavior at very high fields. Numerical simulations have been compared against experiments to quantitatively validate the proposed approach. In the last part of the thesis, the proposed approach has been applied to simulate breakdown in realistic structures under different stress conditions. The TCAD tool has been used to carry out a detailed analysis of the most relevant physical quantities, in order to gain a detailed understanding of the main mechanisms responsible for breakdown and to guide design optimization.
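
As a hedged illustration of one standard contact-injection model for SiO2 at high fields, the sketch below evaluates a Fowler-Nordheim tunnelling law, J = A E^2 exp(-B/E). The thesis treats injection, trapping and impact ionization in far more detail; the coefficients used here are textbook-order-of-magnitude assumptions, not extracted parameters.

import numpy as np

A_FN = 1.5e-6    # A/V^2, illustrative pre-factor
B_FN = 2.4e10    # V/m, illustrative slope for a ~3 eV SiO2 barrier

def j_fowler_nordheim(E):
    """Injected current density (A/m^2) versus oxide field E (V/m)."""
    return A_FN * E**2 * np.exp(-B_FN / E)

for E in (5e8, 8e8, 1e9):                       # fields approaching breakdown
    print(f"E = {E:.1e} V/m -> J = {j_fowler_nordheim(E):.3e} A/m^2")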

Relevance: 100.00%

Abstract:

Natural events are a widely recognized hazard for industrial sites where relevant quantities of hazardous substances are handled, due to the possible generation of cascading events resulting in severe technological accidents (Natech scenarios). Natural events may damage storage and process equipment containing hazardous substances, which may be released, leading to major accident scenarios known as Natech events. The need to assess the risk associated with Natech scenarios is growing, and methodologies have been developed to allow the quantification of Natech risk, considering both point sources and linear sources such as pipelines. A key element of these procedures is the use of vulnerability models providing an estimate of the damage probability of equipment or pipeline segments as a result of the impact of the natural event. Therefore, the first aim of the PhD project was to outline the state of the art of vulnerability models for equipment and pipelines subject to natural events such as floods, earthquakes and wind. Moreover, the present PhD project also aimed at the development of new vulnerability models in order to fill some gaps in the literature. In particular, vulnerability models for vertical equipment subject to wind and to flood were developed. Finally, in order to improve the calculation of Natech risk for linear sources, an original methodology was developed for the quantitative risk assessment of pipelines subject to earthquakes. Overall, the results obtained are a step forward in the quantitative risk assessment of Natech accidents. The tools developed open the way to the inclusion of new equipment in the analysis of Natech events, and the methodology for the assessment of linear risk sources such as pipelines provides an important tool for a more accurate and comprehensive assessment of Natech risk.
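
Vulnerability models of the kind discussed above are commonly cast in probit form: the damage probability is P = Phi(Y - 5), with probit value Y = k1 + k2 ln(i) for a natural-event intensity i (e.g. flood depth or peak ground acceleration). The sketch below shows this general form with placeholder coefficients; these are not the models developed in the thesis.

from math import log
from statistics import NormalDist

def damage_probability(intensity, k1, k2):
    """Probit vulnerability model: probability of equipment damage."""
    y = k1 + k2 * log(intensity)            # probit value
    return NormalDist().cdf(y - 5.0)        # conventional probit-to-probability map

for depth in (0.5, 1.0, 2.0):               # hypothetical flood depths (m)
    print(depth, round(damage_probability(depth, k1=5.0, k2=1.5), 3))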

Relevance: 100.00%

Abstract:

Bioelectronic interfaces have significantly advanced in recent years, offering potential treatments for vision impairments, spinal cord injuries, and neurodegenerative diseases. However, the classical neurocentric vision has driven technological development toward neurons. Emerging evidence highlights the critical role of glial cells in the nervous system. Among them, astrocytes significantly influence neuronal networks throughout life and are implicated in several neuropathological states. Although they are incapable of firing action potentials, astrocytes communicate through diverse calcium (Ca2+) signalling pathways, crucial for cognitive functions and the regulation of brain blood flow. Current bioelectronic devices are primarily designed to interface neurons and are unsuitable for studying astrocytes. Graphene, with its unique electrical, mechanical and biocompatibility properties, has emerged as a promising neural interface material. However, its use as an electrode interface to modulate astrocyte functionality remains unexplored. The aim of this PhD work was to exploit graphene oxide (GO) and reduced GO (rGO)-coated electrodes to control Ca2+ signalling in astrocytes by electrical stimulation. We discovered that distinct Ca2+ dynamics can be evoked in astrocytes, in vitro and in brain slices, depending on the conductive/insulating properties of rGO/GO electrodes. Stimulation by rGO electrodes induces an intracellular Ca2+ response with sharp peaks of oscillations ("P-type"), due exclusively to Ca2+ release from intracellular stores. Conversely, astrocytes stimulated by GO electrodes show a slower and sustained Ca2+ response ("S-type"), largely mediated by external Ca2+ influx through specific ion channels. Astrocytes respond faster than neurons and activate distinct G-Protein Coupled Receptor intracellular signalling pathways. We propose a resistive/insulating model, hypothesizing that the different conductivity of the substrate influences the electric field at the cell/electrolyte or cell/material interface, favouring, respectively, Ca2+ release from intracellular stores or extracellular Ca2+ influx. This research provides a simple tool to selectively control distinct Ca2+ signals in brain astrocytes for neuroscience and bioelectronic medicine.

Relevance: 100.00%

Abstract:

Advanced cell cultures are developing rapidly in biomedical research. Nowadays, various approaches and technologies are being used; however, these culturing systems suffer from increasing complexity, high costs, and limited ease of customization. We present two versatile and cost-effective methods for developing culturing systems that integrate 3D cell culture and microfluidic platforms. Firstly, for drug screening applications, many high-quality cell spheres of homogeneous size and shape are required. Conventional approaches usually offer little control over the size and geometry of cell spheres and require sample collection and manipulation. To overcome this difficulty, in this study, hundreds of spheroids of several cell lines were generated using multi-well plates that housed our microdevices. Tumor spheroids grow at a uniform rate (in scaffolded or scaffold-free environments) and can be harvested at will. Microscopy imaging is done in real time during or after the culture. After in situ immunostaining, fluorescence imaging can be conducted while keeping the spatial distribution of spheroids in the microwells. Drug effects were successfully observed through viability, growth, and morphological investigations. Secondly, we fabricated a microfluidic device suitable for directed and selective cell culture treatments. The microfluidic device was used to reproduce and confirm in vitro investigations carried out using normal culture methods, using a microglia cell line. The device layout and the syringe pump system, entirely designed in our lab, successfully allowed culture growth and medium flow regulation. Solution flows can be finely controlled, allowing selective treatments and immunofluorescence in a single chamber. To conclude, we propose the development of two culturing platforms (microstructured well devices and an in-flow microfluidic chip), which are the result of separate scientific investigations but share the primary goal of performing treatments in a reproducible manner. Our devices should improve future studies on drug exposure testing, representing adjustable and versatile cell culture systems.

Relevance: 100.00%

Abstract:

Great strides have been made in the last few years in the pharmacological treatment of neuropsychiatric disorders, with the introduction into therapy of several new and more efficient agents, which have improved the quality of life of many patients. Despite these advances, a large percentage of patients are still considered "non-responders" to therapy, drawing no benefit from it. Moreover, these patients have a peculiar therapeutic profile, due to the very frequent use of polypharmacy in the attempt to obtain satisfactory remission of the multiple aspects of psychiatric syndromes. Therapy is heavily individualised and switching from one therapeutic agent to another is quite frequent. One of the main problems of this situation is the possibility of unwanted or unexpected pharmacological interactions, which can occur both during polypharmacy and during switching. Simultaneous administration of psychiatric drugs can easily lead to interactions if one of the administered compounds influences the metabolism of the others. Impaired CYP450 function due to inhibition of the enzyme is frequent. Other metabolic pathways, such as glucuronidation, can also be affected. Therapeutic Drug Monitoring (TDM) of psychotropic drugs is an important tool for treatment personalisation and optimisation. It deals with the determination of the plasma levels of parent drugs and metabolites, in order to monitor them over time and to compare these findings with clinical data. This allows chemical-clinical correlations to be established (such as those between administered dose and therapeutic and side effects), which are essential to obtain the maximum therapeutic efficacy while minimising side and toxic effects. The importance of developing sensitive and selective analytical methods for the determination of the administered drugs and their main metabolites is therefore evident, in order to obtain reliable data that can correctly support clinical decisions. During the three years of the Ph.D. program, several HPLC-based analytical methods were developed, validated and successfully applied to the TDM of psychiatric patients undergoing treatment with drugs belonging to the following classes: antipsychotics, antidepressants and anxiolytics-hypnotics. The biological matrices processed were: blood, plasma, serum, saliva, urine, hair and rat brain. Among antipsychotics, both atypical and classical agents have been considered, such as haloperidol, chlorpromazine, clotiapine, loxapine, risperidone (and 9-hydroxyrisperidone), clozapine (as well as N-desmethylclozapine and clozapine N-oxide) and quetiapine. While the need for accurate TDM of schizophrenic patients is increasingly recognized by psychiatrists, only in the last few years has the same attention been paid to the TDM of depressed patients. This is leading to the acknowledgment that depression pharmacotherapy can greatly benefit from the accurate application of TDM. For this reason, the research activity has also focused on first- and second-generation antidepressant agents, such as tricyclic antidepressants, trazodone and m-chlorophenylpiperazine (m-cpp), paroxetine and its three main metabolites, venlafaxine and its active metabolite, and the most recent antidepressant introduced onto the market, duloxetine. Among anxiolytics-hypnotics, benzodiazepines are very often involved in the pharmacotherapy of depression for the relief of anxious components; for this reason, it is useful to monitor these drugs, especially in cases of polypharmacy.
The results obtained during these three years of the Ph.D. program are reliable, and the developed HPLC methods are suitable for the qualitative and quantitative determination of CNS drugs in biological fluids for TDM purposes.
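
The quantitative core of any such HPLC method is a calibration curve. As a minimal sketch (with hypothetical numbers, not data from the developed methods), peak area is regressed against spiked calibrator concentrations and a patient-sample concentration is back-calculated from its measured peak area.

import numpy as np

conc = np.array([10.0, 50.0, 100.0, 250.0, 500.0])   # calibrators (ng/mL)
area = np.array([0.9, 4.8, 10.2, 24.7, 50.4])        # measured peak areas (a.u.)

slope, intercept = np.polyfit(conc, area, 1)         # least-squares line
r2 = np.corrcoef(conc, area)[0, 1] ** 2              # linearity check

unknown_area = 13.6                                   # hypothetical patient sample
unknown_conc = (unknown_area - intercept) / slope
print(f"r^2 = {r2:.4f}, sample = {unknown_conc:.1f} ng/mL")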

Relevance: 100.00%

Abstract:

Osteoarthritis (OA), or degenerative joint disease (DJD), is a pathology which affects the synovial joints and is characterised by a focal loss of articular cartilage and a subsequent bony reaction of the subchondral and marginal bone. Its etiology is best explained by a multifactorial model including age, sex, genetic and systemic factors, other predisposing diseases and functional stress. In this study the results of the investigation of a modern identified skeletal collection are presented. In particular, we focus on the relationship between the presence of OA at the various joints. The joint modifications have been analysed using a new methodology that allows the scoring of different degrees of expression of the features considered.
Materials and Methods. The sample examined comes from the Sassari identified skeletal collection (part of the "Frassetto collections"). The individuals were born between 1828 and 1916 and died between 1918 and 1932. Information about sex and age is known for all the individuals. The occupation is known for 173 males and 125 females. Data concerning the occupation of the individuals indicate a preindustrial and rural society. OA was diagnosed when eburnation (EB) or loss of morphology (LM) was present, or when at least two of the following were present: marginal lipping (ML), exostosis (EX) or erosion (ER). For each articular surface affected, a "mean score" was calculated, reflecting the severity of the alterations. A further score was calculated for each joint. In the analyses, sexes and age classes were always kept separate. For the statistical analyses, non-parametric tests were used.
Results. The results show an increase of OA with age in all the joints analyzed, in particular around 50 and 60 years. The shoulder, the hip and the knee are the joints most affected with ageing, while the ankle is the least affected; the correlation values confirm this result. The lesion showing the strongest correlation with age is ML. In our sample, males are more frequently and more severely affected by OA than females, particularly at the upper limbs, while the hip and knee are similarly affected in the two sexes. Lateralization shows some positive results, in particular in the right shoulder of males and in various articular surfaces, especially of the upper limb, in both males and females; articular surfaces and joints are almost always lateralized to the right. Occupational analyses did not show remarkable results, probably because of the homogeneity of the sample: males, although performing different activities, were almost all employed in physically stressful work. No higher prevalence of knee and hip OA was found in farm workers with respect to the other males.
Discussion and Conclusion. In this work we propose a methodology for scoring the different features necessary to diagnose OA, which allows the severity of joint degeneration to be investigated. This method is easier than the one proposed by Buikstra and Ubelaker (1994), but at the same time allows a quite detailed recording of the features. The epidemiological results can be interpreted quite simply and are in accordance with other studies; the interpretation of the occupational results is more difficult, because many questions concerning the activities performed by the individuals of the collection during their lifespan cannot be resolved. Because of this, caution is suggested in the interpretation of bioarcheological specimens.
With this work we hope to contribute to the discussion on the puzzling problem of the etiology of OA. The possibility of studying identified skeletons will add important data to the description of the osseous features of OA, enriching the medical documentation based on different criteria. Even though we are aware that the clinical diagnosis is different from the palaeopathological one, we think our work will be useful in clarifying some epidemiological as well as pathological aspects of OA.
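
The diagnostic rule stated in Materials and Methods translates directly into code. The sketch below encodes it as written; the "mean score" shown is one plausible reading of the severity index (the average degree of expression over the recorded features) and is an assumption.

def has_oa(features):
    """features: dict mapping EB, LM, ML, EX, ER to a degree of expression (0 = absent).
    OA is diagnosed if EB or LM is present, or at least two of ML, EX, ER are."""
    if features.get("EB", 0) > 0 or features.get("LM", 0) > 0:
        return True
    minor = sum(1 for f in ("ML", "EX", "ER") if features.get(f, 0) > 0)
    return minor >= 2

def mean_score(features):
    """Average degree of expression over the recorded features (assumed index)."""
    return sum(features.values()) / len(features)

surface = {"EB": 0, "LM": 0, "ML": 2, "EX": 1, "ER": 0}
print(has_oa(surface), mean_score(surface))    # True 0.6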

Relevance: 100.00%

Abstract:

The progress of electron device integration has proceeded for more than 40 years following the well-known Moore's law, which states that the density of transistors on a chip doubles every 24 months. This trend has been made possible by the downsizing of MOSFET dimensions (scaling); however, new issues and new challenges are arising, and the conventional "bulk" architecture is becoming inadequate to face them. In order to overcome the limitations of conventional structures, the research community is preparing different solutions that need to be assessed. Possible solutions currently under scrutiny are:
• devices incorporating materials with properties different from those of silicon for the channel and the source/drain regions;
• new architectures such as Silicon-On-Insulator (SOI) transistors: the body thickness of Ultra-Thin-Body SOI devices is a new design parameter, and it permits Short-Channel-Effects to be kept under control without adopting high doping levels in the channel.
Among the solutions proposed in order to overcome the difficulties related to scaling, we can highlight heterojunctions at the channel edge, obtained by adopting for the source/drain regions materials with a band-gap different from that of the channel material. This solution allows the injection velocity of the particles travelling from the source into the channel to be increased, and therefore improves the performance of the transistor in terms of provided drain current. The first part of this thesis work addresses the use of heterojunctions in SOI transistors: chapter 3 outlines the basics of heterojunction theory and the adoption of this approach in older technologies such as heterojunction bipolar transistors; moreover, the modifications introduced in the Monte Carlo code in order to simulate conduction band discontinuities are described, as well as the simulations performed on one-dimensional simplified structures in order to validate them. Chapter 4 presents the results obtained from the Monte Carlo simulations performed on double-gate SOI transistors featuring conduction band offsets between the source and drain regions and the channel. In particular, attention has been focused on the drain current and on internal quantities such as inversion charge, potential energy and carrier velocities. Both graded and abrupt discontinuities have been considered. The scaling of device dimensions and the adoption of innovative architectures have consequences on power dissipation as well. In SOI technologies the channel is thermally insulated from the underlying substrate by a SiO2 buried-oxide layer; this SiO2 layer has a thermal conductivity two orders of magnitude lower than that of silicon, and it impedes the dissipation of the heat generated in the active region. Moreover, the thermal conductivity of thin semiconductor films is much lower than that of bulk silicon, due to phonon confinement and boundary scattering. All these aspects cause severe self-heating effects (SHE), which detrimentally impact the carrier mobility and therefore the saturation drive current of high-performance transistors; as a consequence, thermal device design is becoming a fundamental part of integrated circuit engineering. The second part of this thesis discusses the problem of self-heating in SOI transistors. Chapter 5 describes the causes of heat generation and dissipation in SOI devices, and it provides a brief overview of the methods that have been proposed to model these phenomena.
In order to understand how this problem impacts the performance of different SOI architectures, three-dimensional electro-thermal simulations have been applied to the analysis of SHE in planar single- and double-gate SOI transistors as well as FinFETs featuring the same isothermal electrical characteristics. In chapter 6 the same simulation approach is extensively employed to study the impact of SHE on the performance of a FinFET representative of the high-performance transistor of the 45 nm technology node. Its effects on the ON-current, the maximum temperatures reached inside the device and the thermal resistance associated with the device itself, as well as the dependence of SHE on the main geometrical parameters, have been analyzed. Furthermore, the consequences on self-heating of technological solutions such as raised S/D extension regions or a reduction of fin height are explored as well. Finally, conclusions are drawn in chapter 7.
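
A back-of-the-envelope sketch shows why the buried oxide dominates self-heating: modelling the BOX as a 1D conductive slab gives a thermal resistance R_th = t / (k A), and the channel temperature rise is dT = P * R_th. The geometry and dissipated power below are illustrative assumptions; the thesis relies on full 3D electro-thermal simulation.

k_sio2 = 1.4          # W/(m K), ~two orders of magnitude below silicon (~148)
t_box = 100e-9        # m, buried-oxide thickness (assumed)
area = 1e-12          # m^2, device footprint (assumed 1 um x 1 um)
power = 0.5e-3        # W, dissipated in the active region (assumed)

r_th = t_box / (k_sio2 * area)        # 1D conduction through the BOX
print(f"R_th = {r_th:.2e} K/W, dT = {power * r_th:.1f} K")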