948 results for Multi-layer devices
Abstract:
Graphene, in single-layer or multi-layer form, holds great promise for future electronics and high-temperature applications. Resistance to oxidation, an important property for high-temperature applications, has not yet been extensively investigated. Controlled thinning of multi-layer graphene (MLG), e.g., by plasma or laser processing, is another challenge, since existing methods produce non-uniform thinning or introduce undesirable defects in the basal plane. We report here that heating to extremely high temperatures (exceeding 2000 K) and controllable layer-by-layer burning (thinning) can be achieved by low-power laser processing of suspended high-quality MLG in air in a "cold-wall" reactor configuration. In contrast, localized laser heating of supported samples results in non-uniform graphene burning at much higher rates. Fully atomistic molecular dynamics simulations were also performed to reveal details of the oxidation mechanisms leading to uniform layer-by-layer graphene gasification. The extraordinary resistance of MLG to oxidation paves the way to novel high-temperature applications such as continuum light sources or scaffolding materials.
Abstract:
Hierarchical multi-label classification is a complex classification task in which the classes are hierarchically structured and each example may simultaneously belong to more than one class at each level of the hierarchy. In this paper, we extend our previous work, in which we investigated a new local-based classification method that incrementally trains a multi-layer perceptron for each level of the classification hierarchy. Predictions made by the neural network at a given level are used as inputs to the neural network responsible for prediction at the next level. We compare the proposed method with a state-of-the-art global decision-tree induction method and two local decision-tree induction methods on several hierarchical multi-label classification datasets. A thorough experimental analysis shows that our method obtains results competitive with a robust global method in terms of both precision and recall.
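For illustration, a minimal sketch of the level-by-level scheme described above: one MLP per hierarchy level, with the previous level's predictions appended to that level's input. The use of scikit-learn, the toy data, and all sizes are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical sketch: per-level MLPs whose predictions cascade to the next level.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                      # toy feature matrix
Y_levels = [rng.integers(0, 2, size=(200, k))       # toy multi-label targets,
            for k in (3, 5)]                        # one matrix per hierarchy level

models, inputs = [], X
for Y in Y_levels:
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(inputs, Y)                              # train this level's MLP
    preds = np.asarray(clf.predict_proba(inputs))   # class-membership scores
    inputs = np.hstack([X, preds])                  # feed predictions to next level
    models.append(clf)
```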
Abstract:
In the present study we use multivariate analysis techniques to discriminate signal from background in the fully hadronic decay channel of ttbar events. We give a brief introduction to the role of the top quark in the Standard Model and a general description of the CMS experiment at the LHC. We used the CMS computing and software infrastructure to generate and prepare the data samples used in this analysis. We tested the performance of three different classifiers on our data samples and used the selection obtained with the Multi-Layer Perceptron classifier to estimate the statistical and systematic uncertainties on the cross-section measurement.
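A minimal sketch of the signal/background discrimination idea, assuming a generic probabilistic MLP and toy Gaussian data; this is not the CMS/TMVA code, and the cut value is illustrative only:

```python
# Illustrative sketch: train an MLP on toy "signal" vs "background" samples
# and apply a selection cut on its output.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
signal = rng.normal(loc=1.0, size=(1000, 4))        # toy kinematic variables
background = rng.normal(loc=0.0, size=(1000, 4))
X = np.vstack([signal, background])
y = np.array([1] * 1000 + [0] * 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=600,
                    random_state=0).fit(X_tr, y_tr)

scores = mlp.predict_proba(X_te)[:, 1]              # discriminant output in [0, 1]
selection = scores > 0.8                            # cut to suppress background
print("selected events:", selection.sum(),
      "signal efficiency:", (selection & (y_te == 1)).sum() / (y_te == 1).sum())
```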
Abstract:
The goal of this thesis is the application of opto-electronic numerical simulation to heterojunction silicon solar cells featuring an all-back-contact architecture (Interdigitated Back Contact Hetero-Junction, IBC-HJ). The studied structure exhibits both metal contacts, emitter and base, at the back surface of the cell, with the objective of reducing the optical losses caused by the shadowing of the front contact in conventional photovoltaic devices. Overall, IBC-HJ cells are promising low-cost alternatives to monocrystalline wafer-based solar cells featuring front- and back-contact schemes: in IBC-HJ devices the high-concentration doping diffusions are replaced by low-temperature deposition processes of thin amorphous silicon layers. A further advantage of IBC solar cells over conventional architectures is the possibility of low-cost assembly of photovoltaic modules, since all contacts are on the same side. A preliminary extensive literature survey was helpful in highlighting the specific critical aspects of IBC-HJ solar cells as well as the state of the art of their modeling, processing, and practical device performance. To analyze IBC-HJ devices, a two-dimensional (2-D) numerical simulation flow was set up, adopting a commercial device simulator based on the finite-difference method (Sentaurus Device by Synopsys) to solve numerically the whole set of equations governing electrical transport in semiconductor materials. The first activity carried out in this work was the definition of a 2-D geometry corresponding to the simulation domain and the specification of the electrical and optical properties of the materials. To calculate the main figures of merit of the investigated solar cells, the spatially resolved photon absorption rate map was computed with an optical simulator. Optical simulations were performed using two different methods, depending on the geometrical features of the front interface of the solar cell: the transfer matrix method (TMM) and ray tracing (RT). The first method models light propagation by plane waves within one-dimensional spatial domains, under the assumption that the device consists of stacks of parallel layers with planar interfaces; TMM is therefore suitable for simulating thin multi-layer anti-reflection coatings that reduce the amount of light reflected at the front interface. Ray tracing is required for three-dimensional optical simulations of upright-pyramid textured surfaces, which are widely adopted to significantly reduce the reflection at the front surface. The optical generation profiles are interpolated onto the electrical grid of the device simulator, which solves the carrier transport equations, coupled with the Poisson and continuity equations, in a self-consistent way. The main figures of merit are calculated by post-processing the output data of the device simulation. After validating the simulation methodology by comparing simulation results with literature data, the ultimate efficiency of the IBC-HJ architecture was calculated. Accounting for all optical losses, IBC-HJ solar cells reach a theoretical maximum efficiency above 23.5% (without texturing at the front interface), higher than that of both standard homojunction crystalline silicon (Homogeneous Emitter, HE) and front-contact heterojunction (Heterojunction with Intrinsic Thin layer, HIT) solar cells.
However, it is clear that the critical aspects of this structure are mainly due to the defect density and the poor carrier mobility of the amorphous silicon layers. Lastly, the influence of the most critical geometrical and physical parameters on the main figures of merit was investigated by applying the numerical simulation flow set up in the first part of the thesis. The simulations highlighted that the carrier mobility and defect density in amorphous silicon may lead to a potentially significant reduction of the conversion efficiency.
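To make the TMM step above concrete, a minimal transfer-matrix sketch for a planar layer stack at normal incidence, using the standard characteristic-matrix formalism; the refractive indices and thicknesses below are illustrative, not the thesis's actual stack:

```python
# Minimal TMM sketch: reflectance of a planar multi-layer stack at normal incidence.
import numpy as np

def reflectance(n_list, d_list, wavelength):
    """n_list: refractive indices [ambient, layers..., substrate];
    d_list: thicknesses of the inner layers (same units as wavelength)."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_list[1:-1], d_list):
        delta = 2 * np.pi * n * d / wavelength          # phase thickness
        layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ layer                                   # accumulate stack matrix
    n0, ns = n_list[0], n_list[-1]
    B, C = M @ np.array([1, ns])
    r = (n0 * B - C) / (n0 * B + C)                     # amplitude reflection coeff.
    return abs(r) ** 2

# Example: quarter-wave AR-like coating (n = 2.0, 75 nm) on silicon at 600 nm
print(reflectance([1.0, 2.0, 3.9], [75.0], 600.0))      # near-zero reflectance
```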
Deposition and Characterization of Fluorocarbon- and Siloxane-Based Plasma Polymer Films
Abstract:
In this work, fluorocarbon-based and organosilicon plasma polymer films were deposited and investigated with respect to their structural and functional properties. Both material systems are of great scientific and technological interest in coating technology. The films were deposited by plasma-enhanced chemical vapor deposition (PECVD) in parallel-plate reactors. The studies on fluorocarbon plasma polymerization focused on the preparation of ultra-thin films, i.e., films less than 5 nm thick. This was realized by pulsed plasma excitation and the use of a gas mixture of trifluoromethane (CHF3) and argon. The bonding structure of the films was analyzed as a function of the input power, which determines the degree of fragmentation of the monomers in the plasma. For this purpose, X-ray photoelectron spectroscopy (XPS), atomic force microscopy (AFM), time-of-flight secondary ion mass spectrometry (ToF-SIMS), and X-ray reflectometry (XRR) were employed. The deposited films were found to exhibit homogeneous growth behavior and no pronounced interface regions toward the substrate or the surface. The XPS analyses indicate that chain-linking reactions of CF2 radicals in the plasma play an important role in the film formation process. Furthermore, it was shown that the chosen coating process enables a targeted reduction of the wettability of various substrates. Film thicknesses of less than 3 nm suffice to achieve a Teflon-like surface character with surface energies around 20 mN/m. This opens up new application possibilities for ultra-thin fluorocarbon films, as demonstrated by an example from the field of nano-optics. For the organosilicon films based on the monomer hexamethyldisiloxane (HMDSO), the first task was to identify the process parameters that determine their organic or glass-like character. To this end, the influence of power input and of the addition of oxygen as a reactive gas on the elemental composition of the films was investigated. At low plasma powers and oxygen flows, predominantly carbon-rich films are deposited, which was attributed to a lower fragmentation of the hydrocarbon groups. It was found that varying the oxygen fraction in the process gas allows very precise control of the film properties. Using secondary neutral mass spectrometry (SNMS), the process feasibility and analytical quantifiability of alternating layer systems composed of polymer-like and glass-like layers were demonstrated. The hydrogen content could be determined from the intensity ratio of Si:H molecules to Si atoms in the SNMS spectrum. Furthermore, it was shown that the deposition of HMDSO-based gradient films can achieve a significant reduction of friction and wear in elastomer components.
Abstract:
Graphs are widely used for data representation, especially in areas where information about the interconnectivity and topology of the data is as important as the data itself, if not more so. Each application area has its own requirements, both in terms of the model that represents the data and in terms of a language expressive enough to query and transform the data. It is increasingly common to analyze data coming from different systems, or to analyze characteristics of the same system by observing it at different granularities, at different times, or considering different relations. Our goal has therefore been to create a model that can represent data simply and effectively in all of these situations. More specifically, the model makes it possible to analyze not only a single network but multiple networks, relating them to one another. We also defined an algebra whose operators allow queries over this model. The definition of the model and of the operators was guided mainly by the social network case study, while remaining general enough to support other types of analysis. We then studied the operators in more depth, identifying properties useful for optimization, reasoning about implementation details, and providing high-level algorithms. To make the definition of the model and operators more concrete, leaving no room for ambiguity, an implementation was also produced, and this thesis provides its description.
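Purely as an illustration of the kind of model described above, a toy sketch of several named networks over a shared node set, with two algebra-style operators; the representation and operator names are hypothetical, not the thesis's actual algebra:

```python
# Hypothetical multi-network sketch: named layers of edges plus toy operators.
from collections import defaultdict

class MultiNetwork:
    def __init__(self):
        self.layers = defaultdict(set)          # layer name -> set of edges

    def add_edge(self, layer, u, v):
        self.layers[layer].add((u, v))

    def select(self, layer, pred):
        """Algebra-style selection: keep edges of `layer` satisfying `pred`."""
        return {e for e in self.layers[layer] if pred(*e)}

    def intersect(self, layer_a, layer_b):
        """Relate two networks: edges present in both layers."""
        return self.layers[layer_a] & self.layers[layer_b]

g = MultiNetwork()
g.add_edge("friendship", "ann", "bob")
g.add_edge("co-author", "ann", "bob")
g.add_edge("friendship", "bob", "eve")
print(g.intersect("friendship", "co-author"))           # {('ann', 'bob')}
print(g.select("friendship", lambda u, v: u == "ann"))  # {('ann', 'bob')}
```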
Abstract:
An invisibility cloak is a device that hides a target by screening it from incident radiation. This intriguing device has attracted much attention since it was first implemented at microwave frequencies in 2006. However, the shortcomings of existing cloak designs prevent them from being widely applied in practice. In this dissertation, we try to remove or alleviate three constraints on practical applications: lossy cloaking media, high implementation complexity, and the small size of hidden objects compared to the incident wavelength. To facilitate cloaking design and experimental characterization, several devices and related techniques for measuring the complex permittivity of dielectric materials at microwave frequencies were developed. In particular, a unique parallel-plate waveguide chamber was set up to automatically map the electromagnetic (EM) field distribution for waves propagating through resonator arrays and cloaking structures. The total scattering cross section of the cloaking structures was derived from the scattered field measured with this apparatus. To overcome the adverse effects of lossy cloaking media, microwave cloaks composed of identical dielectric resonators made of low-loss ceramic materials were designed and implemented. The effective permeability dispersion was provided by tailoring the dielectric resonator filling fractions. The cloak performance was verified by full-wave simulation of the true multi-resonator structures and by experimental measurements of the fabricated prototypes. To reduce the implementation complexity caused by the use of metamaterials for cloaking, we proposed designing 2-D cylindrical cloaks and 3-D spherical cloaks using multi-layer coatings of ordinary dielectric materials (εr > 1). A genetic algorithm was employed to optimize the dielectric profiles of the cloaking shells so as to minimize the scattering cross sections of the cloaked targets. The designed cloaks can easily be scaled to various operating frequencies. The simulation results show that the multi-layer cylindrical cloak substantially outperforms a similarly sized metamaterial-based cloak designed with transformation-optics-based reduced parameters. For the designed spherical cloak, the simulated scattering pattern shows that the total scattering cross section is greatly reduced; in addition, the scattering in specific directions can be significantly reduced. It is shown that the cloaking efficiency for larger targets can be improved by employing lossy materials in the shell. Finally, we propose hiding a target inside a waveguide structure filled with epsilon-near-zero materials only, which are easy to implement in practice. The cloaking efficiency of this method, which was found to increase for larger targets, has been confirmed both theoretically and by simulations.
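A skeleton of the genetic-algorithm search described above, optimizing the per-layer permittivities of a multi-layer shell; the cost function below is a stand-in assumption (in the dissertation it would be the total scattering cross section from a full-wave solver), and all population sizes and bounds are illustrative:

```python
# GA skeleton: evolve a multi-layer permittivity profile to minimize a
# scattering cost. scattering_cost() is a placeholder, not a real EM solver.
import numpy as np

rng = np.random.default_rng(42)
N_LAYERS, POP, GENS = 8, 40, 60

def scattering_cost(eps_profile):
    # Placeholder objective (assumption): reward a smooth graded profile;
    # replace with a Mie-series or full-wave scattering evaluation.
    target = np.linspace(3.0, 1.2, N_LAYERS)
    return float(np.sum((eps_profile - target) ** 2))

pop = rng.uniform(1.0, 5.0, size=(POP, N_LAYERS))     # eps_r > 1 per layer
for _ in range(GENS):
    costs = np.array([scattering_cost(ind) for ind in pop])
    parents = pop[np.argsort(costs)[: POP // 2]]      # truncation selection
    children = []
    for _ in range(POP - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, N_LAYERS)               # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child += rng.normal(0, 0.05, N_LAYERS)        # Gaussian mutation
        children.append(np.clip(child, 1.0, 5.0))
    pop = np.vstack([parents, children])

best = pop[np.argmin([scattering_cost(ind) for ind in pop])]
print("optimized permittivity profile:", np.round(best, 2))
```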
Abstract:
Adsorption of pure nitrogen, argon, acetone, chloroform, and an acetone-chloroform mixture on graphitized thermal carbon black is considered at sub-critical conditions by means of the molecular layer structure theory (MLST). In the present version of the MLST, an adsorbed fluid is treated as a sequence of 2-D molecular layers whose Helmholtz free energies are obtained directly from the analysis of experimental adsorption isotherms of the pure components. The interaction of neighboring layers is accounted for within a mean-field approximation. This approach quantitatively correlates the experimental nitrogen and argon adsorption isotherms both in the monolayer region and in the multi-layer coverage range up to 10 molecular layers. For acetone and chloroform, the approach likewise yields excellent quantitative correlation of the adsorption isotherms, whereas molecular approaches such as non-local density functional theory (NLDFT) fail to describe them. We extend the method to calculate the Helmholtz free energy of an adsorbed mixture using a simple mixing rule, which allows us to predict mixture adsorption isotherms from pure-component adsorption isotherms. The approach, which accounts for the difference in composition between molecular layers, is tested against experimental data for acetone-chloroform mixture (a non-ideal mixture) adsorption on graphitized thermal carbon black at 50 °C.
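The abstract does not spell out the mixing rule; purely as an illustration, an ideal-solution form of the per-layer mixture free energy would read

\[
F_{\mathrm{mix}} = x_1 F_1 + x_2 F_2 + k_B T \left( x_1 \ln x_1 + x_2 \ln x_2 \right),
\]

with \(F_1, F_2\) the pure-component layer free energies extracted from the single-component isotherms and \(x_i\) the layer mole fractions, which the approach allows to differ from layer to layer; the rule actually used in the paper may differ.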
Abstract:
Microelectronic systems are multi-material, multi-layer structures, fabricated and operated under environmental stresses over a wide range of temperatures. Thermal and residual stresses created by thermal mismatches in films and interconnections are a major cause of failure in microelectronic devices. With new device materials, increasing die sizes, and the introduction of new materials for enhanced thermal management, differences in the thermal expansion of the various packaging materials have become exceedingly important and can no longer be neglected. X-ray diffraction is an analytical method that uses a monochromatic characteristic X-ray beam to characterize the crystal structure of a material by measuring the distances between planes in its atomic crystal lattice. As a material is strained, this interplanar spacing is correspondingly altered, and the resulting microscopic strain is used to determine the macroscopic strain. This thesis investigates and describes the theory and implementation of X-ray diffraction for the measurement of residual thermal strains. The design of a computer-controlled stress attachment stage, fully compatible with an Anton Paar heat stage, is detailed. The stresses determined by the diffraction method are compared with bimetallic strip theory and finite element models.
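The measurement principle described above rests on two standard relations (textbook results, not specific to this thesis):

\[
n\lambda = 2\, d_{hkl} \sin\theta, \qquad \varepsilon = \frac{d_{hkl} - d_0}{d_0},
\]

where \(\lambda\) is the X-ray wavelength, \(\theta\) the diffraction angle, \(d_{hkl}\) the measured interplanar spacing, and \(d_0\) its unstrained value; the lattice strain \(\varepsilon\) is then converted to stress through the material's elastic constants (e.g., via the sin²ψ technique).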
Abstract:
The Swiss Institute for Snow and Avalanche Research (SLF) developed SNOWPACK, a multi-layer thermodynamic snow model that simulates the geophysical properties of the snowpack (density, temperature, grain size, water content, etc.), from which a stability index is computed. It has been shown that an adjustment of the microstructure would be necessary for deployment in Canada. The main objective of the present study is to enable the SNOWPACK model to simulate snow grain size more realistically and thus obtain a more accurate prediction of snowpack stability through the grain-size-based index, the Structural Stability Index (SSI). To this end, the model's error (bias) was analyzed against accurate field measurements of grain size obtained with the IRIS instrument (InfraRed Integrated Sphere). Data were collected during the winter of 2014 at two different sites in Canada: Glacier National Park, British Columbia, and Jasper National Park. The Fidelity site was generally subject to equilibrium metamorphism, whereas the Jasper site experienced more pronounced kinetic metamorphism. At each site, stratigraphic density profiles as well as grain size profiles (IRIS) were completed. The Fidelity profiles were complemented with micropenetrometer (SMP) measurements. The analysis of the density profiles showed good agreement with the modeled densities (R² = 0.76), and the strength simulated for the SSI was therefore judged adequate. The instability layers predicted by SNOWPACK were identified using the variation of resistance in the SMP measurements. The analysis of optical grain size revealed a systematic overestimation by the model, in agreement with the literature. The optical grain size error in an equilibrium environment was fairly constant, whereas the error in kinetic environments was more variable. Finally, a climate-oriented approach would represent the best way to correct grain size for stability assessment in Canada.
Abstract:
Manipulation of single cells and particles is important to biology and nanotechnology. Our electrokinetic (EK) tweezers manipulate objects in simple microfluidic devices using gentle fluid and electric forces under vision-based feedback control. In this dissertation, I detail a user-friendly implementation of EK tweezers that allows users to select, position, and assemble cells and nanoparticles. This EK system was used to measure attachment forces between living breast cancer cells, trap single quantum dots with 45 nm accuracy, build nanophotonic circuits, and scan optical properties of nanowires. With a novel multi-layer microfluidic device, EK was also used to guide single microspheres along complex 3D trajectories. The schemes, software, and methods developed here can be used in many settings to precisely manipulate most visible objects, assemble objects into useful structures, and improve the function of lab-on-a-chip microfluidic systems.
Abstract:
Quantum sensors based on coherent matter-waves are precise measurement devices whose ultimate accuracy is achieved with Bose-Einstein condensates (BECs) in extended free fall. This is ideally realized in microgravity environments such as drop towers, ballistic rockets, and space platforms. However, the transition from lab-based BEC machines to robust and mobile sources with comparable performance is a challenging endeavor. Here we report on the realization of a miniaturized setup generating a flux of 4 × 10^5 quantum-degenerate Rb-87 atoms every 1.6 s. Ensembles of 1 × 10^5 atoms can be produced at a 1 Hz rate. This is achieved by loading a cold atomic beam directly into a multi-layer atom chip designed for efficient transfer from laser-cooled to magnetically trapped clouds. The attained flux of degenerate atoms is on par with current lab-based BEC experiments while offering significantly higher repetition rates. Additionally, the flux approaches that of current interferometers employing Raman-type velocity selection of laser-cooled atoms. The compact and robust design allows for mobile operation in a variety of demanding environments and paves the way for transportable high-precision quantum sensors.
Abstract:
Re-creating and understanding the origin of life represents one of the major challenges facing the scientific community. We will never know exactly how life started on planet Earth; however, we can reconstruct the most likely chemical pathways that could have contributed to the formation of the first living systems. Traditionally, prebiotic chemistry has investigated the formation of modern life's precursors and their self-organisation under very specific conditions thought to be 'plausible'. So far, this approach has failed to produce a living system from the bottom up. In the work presented herein, two different approaches are employed to explore the transition from inanimate to living matter. The development of microfluidic technology during the last decades has changed the way traditional chemical and biological experiments are performed. Microfluidics allows the handling of small volumes of reagents with very precise control. The use of microdroplets generated within microfluidic devices is of particular interest to the fields of Origins of Life and Artificial Life. While many efforts have been made to construct cell-like compartments from modern biological constituents, these are usually very difficult to handle. Microdroplets, by contrast, can easily be generated and manipulated at kHz rates, making them suitable for high-throughput experimentation and the analysis of compartmentalised chemical reactions. We therefore decided to develop a microfluidic device capable of manipulating microdroplets in such a way that they can be efficiently mixed, split, and sorted within iterative cycles. Since no microfluidic technology had previously been developed in the Cronin Group, the first chapter of this thesis describes the soft-lithographic methods and techniques developed to fabricate microfluidic devices. Special attention is paid to the generation of water-in-oil microdroplets and to the modules required for their manipulation, such as droplet fusers, splitters, sorters, and single- and multi-layer micromechanical valves. While the first part of this thesis describes the development of a microfluidic platform to assist chemical evolution, finding a compatible set of chemical building blocks capable of reacting to form complex molecules with replicating or catalytic activity proved challenging. Hence, the second part of this thesis focuses on chemistry that could ultimately possess the properties mentioned above. A special focus is placed on the formation of peptide bonds from unactivated amino acids, one of the greatest challenges in prebiotic chemistry. As opposed to classic prebiotic experiments, in which a specific set of conditions is studied to fit a particular hypothesis, we took a different approach: we explored the effects of several parameters at once on a model polymerisation reaction, without constraining the hypotheses on the nature of the optimal conditions or their plausibility. This was facilitated by the development of a new high-throughput automated platform, allowing the exploration of a much larger number of parameters. This led us to discover that peptide bond formation is less challenging than previously imagined. Having established the set of conditions under which peptide bond formation is enhanced, we then explored the co-oligomerisation of different amino acids, aiming for the formation of heteropeptides with different structures or functions.
Finally, we studied the effect of various environmental conditions (rate of evaporation, presence of salts or minerals) on the final product distribution of our oligomeric products.
Abstract:
The modern industrial environment is populated by a myriad of intelligent devices that collaborate to accomplish the numerous business processes in place at production sites. The close collaboration between humans and machines poses interesting new challenges that industry must overcome in order to implement the new digital policies demanded by the industrial transition. The Industry 5.0 movement is a companion revolution to the earlier Industry 4.0, and it relies on three characteristics that any industrial sector should pursue: human centrality, resilience, and sustainability. The fifth industrial revolution cannot be realized without building on the implementation of Industry 4.0-enabled platforms. The common feature in the development of this kind of platform is the need to integrate the information (IT) and operational (OT) layers. Our thesis work focuses on the implementation of a platform addressing all the digitization features foreseen by the fourth industrial revolution, turning the IT/OT convergence inside production plants into an improvement rather than a risk. Furthermore, we added modular features to our platform enabling the Industry 5.0 vision: we supported human centrality through mobile crowdsensing techniques, and resilience and sustainability through pluggable cloud computing services combined with data contributed by the crowd. We achieved important and encouraging results in all the domains in which we conducted our experiments. Our IT/OT convergence-enabled platform exhibits the performance needed to satisfy the strict requirements of production sites. The multi-layer capability of the framework enables the exploitation of data beyond that coming strictly from work machines, allowing a closer interaction between the company, its employees, and its customers.
Abstract:
This paper discusses a multi-layer feedforward (MLF) neural network incident detection model that was developed and evaluated using field data. In contrast to published neural network incident detection models, which relied on simulated or limited field data for model development and testing, the model described in this paper was trained and tested on a real-world dataset of 100 incidents. The model uses speed, flow, and occupancy data measured at dual stations, averaged across all lanes and taken only from time interval t. The off-line performance of the model is reported under both incident and non-incident conditions. The incident detection performance of the model is reported on a validation-test set of 40 incidents independent of the 60 incidents used for training. The false alarm rates of the model are evaluated on non-incident data collected from a freeway section that was videotaped for a period of 33 days. A comparative evaluation between the neural network model and the incident detection model operating on Melbourne's freeways is also presented. The results of this comparison clearly demonstrate the substantial improvement in incident detection performance obtained by the neural network model. The paper also presents additional results demonstrating how model performance can be improved using variable decision thresholds. Finally, the model's fault tolerance under conditions of corrupt or missing data is investigated, and the impact of loop detector failure or malfunction on the performance of the trained model is evaluated and discussed. The results presented in this paper provide a comprehensive evaluation of the developed model and confirm that neural network models can provide fast and reliable incident detection on freeways.
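A minimal sketch of the variable-decision-threshold idea described above: sweep the cut on a trained detector's output and trade detection rate against false alarms. The model, toy data, and threshold values are illustrative stand-ins, not the paper's actual model or dataset:

```python
# Illustrative threshold sweep on a toy incident detector's output.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
# toy inputs: speed, flow, occupancy at upstream/downstream stations
X = rng.normal(size=(2000, 6))
y = (X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=2000) > 1.0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(10,), max_iter=600,
                      random_state=0).fit(X, y)
scores = model.predict_proba(X)[:, 1]

for threshold in (0.5, 0.7, 0.9):
    alarm = scores > threshold
    detection = (alarm & (y == 1)).sum() / max((y == 1).sum(), 1)
    false_alarm = (alarm & (y == 0)).sum() / max((y == 0).sum(), 1)
    print(f"threshold={threshold:.1f}  detection={detection:.2f}  "
          f"false alarms={false_alarm:.3f}")
```

Raising the threshold lowers the false alarm rate at the cost of detection rate, which is the trade-off the paper exploits.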