933 results for Metals - Formability - Simulation methods
Abstract:
Starting induction motors on isolated or weak systems is a highly dynamic process that can cause motor and load damage as well as electrical network fluctuations. Mechanical damage is associated with the high starting current drawn by a ramping induction motor. To compensate for the load increase, the voltage of the electrical system decreases. Different starting methods can be applied to the electrical system to mitigate these and other starting issues. The purpose of this thesis is to build accurate and usable simulation models that can aid the designer in choosing an appropriate motor starting method. The specific case addressed is the situation where a diesel-generator set is used as the electrical supply source for the induction motor. Equivalent models of the most commonly used starting methods are simulated and compared with each other. The main contribution of this thesis is that the motor dynamic impedance is continuously calculated and fed back to the generator model to simulate the coupling of the electrical system. The comparative analysis given by the simulations shows reasonably similar characteristics to other comparative studies. The diesel-generator and induction motor simulations have shown good results and can adequately demonstrate the dynamics for testing and comparing the starting methods. Further work is suggested to refine the equivalent impedance presented in this thesis.
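As an illustration of the feedback idea described above, here is a minimal Python sketch that recomputes the motor's equivalent impedance from slip at every time step and feeds it back as the load on a Thevenin-style generator model. The per-phase equivalent circuit, per-unit normalization and all parameter values are assumptions made for illustration, not taken from the thesis.

```python
# Hypothetical illustration of motor-generator coupling: the motor's
# dynamic impedance is recomputed each step and fed back to the generator.
import numpy as np

# Per-phase steady-state equivalent-circuit parameters (pu), assumed.
R1, X1 = 0.05, 0.15          # stator resistance and leakage reactance
R2, X2 = 0.04, 0.15          # rotor resistance and leakage reactance (referred)
Xm = 5.0                     # magnetizing reactance
Eg, Zg = 1.0, 0.02 + 0.08j   # generator internal EMF (pu) and impedance

def motor_branches(slip):
    """Motor input impedance and rotor branch impedance at a given slip."""
    zr = R2 / max(slip, 1e-4) + 1j * X2           # rotor branch (referred)
    zm = (1j * Xm * zr) / (1j * Xm + zr)          # parallel with magnetizing branch
    return R1 + 1j * X1 + zm, zr

# Crude start-up loop: the motor impedance is "fed back" as the load seen
# by the generator, so the terminal voltage sags during acceleration.
slip, dt, H = 1.0, 0.01, 0.5                      # initial slip, step, inertia const.
for _ in range(3000):
    Zm, zr = motor_branches(slip)
    I = Eg / (Zg + Zm)                            # stator current from coupled circuit
    V = Eg - I * Zg                               # terminal voltage (sags under load)
    I2 = I * (1j * Xm) / (1j * Xm + zr)           # rotor current (current divider)
    Te = abs(I2) ** 2 * R2 / max(slip, 1e-4)      # air-gap torque (pu, simplified)
    Tl = 0.3 * (1.0 - slip) ** 2                  # assumed quadratic load torque
    slip -= dt * (Te - Tl) / (2.0 * H)            # swing-equation-like slip update
    slip = min(max(slip, 1e-4), 1.0)
print(f"final slip {slip:.3f}, terminal voltage {abs(V):.3f} pu")
```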
Abstract:
We describe work in which gold nanoparticles were formed in diamond-like carbon (DLC), thereby generating a Au-DLC nanocomposite. A high-quality, hydrogen-free DLC thin film was formed by filtered vacuum arc plasma deposition, into which gold nanoparticles were introduced using two different methods. The first method was gold ion implantation into the DLC film at a number of decreasing ion energies, distributing the gold over a controllable depth range within the DLC. The second method was co-deposition of gold and carbon, using two separate vacuum arc plasma guns with suitably interleaved repetitive pulsing. Transmission electron microscope images show that the size of the gold nanoparticles obtained by ion implantation is 3-5 nm. For the Au-DLC composite obtained by co-deposition, there were two different nanoparticle sizes, most about 2 nm with some 6-7 nm. Raman spectroscopy indicates that the implanted sample contains a smaller fraction of sp(3) bonding for the DLC, demonstrating that some sp(3) bonds are destroyed by the gold implantation. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4757029]
Abstract:
This work presents numerical simulations of two fluid flow problems involving moving free surfaces: the impacting drop and fluid jet buckling. The viscoelastic model used in these simulations is the eXtended Pom-Pom (XPP) model. To validate the code, numerical predictions of the drop impact problem for Newtonian and Oldroyd-B fluids are presented and compared with other methods. In particular, a benchmark on numerical simulations for an XPP drop impacting on a rigid plate is performed for a wide range of the relevant parameters. Finally, to provide an additional application of free surface flows of XPP fluids, the viscous jet buckling problem is simulated and discussed. (C) 2011 Elsevier B.V. All rights reserved.
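For reference, the Oldroyd-B model used in the validation step can be written in its standard constitutive form (notation assumed, since the abstract does not fix it):

```latex
% Oldroyd-B constitutive equation (standard form; notation assumed).
% \boldsymbol{\tau}: extra-stress tensor; \dot{\boldsymbol{\gamma}}: rate-of-strain
% tensor; \lambda_1, \lambda_2: relaxation and retardation times; \eta_0: total
% viscosity; \stackrel{\nabla}{(\cdot)}: upper-convected derivative.
\boldsymbol{\tau} + \lambda_1 \stackrel{\nabla}{\boldsymbol{\tau}}
  = \eta_0 \left( \dot{\boldsymbol{\gamma}}
  + \lambda_2 \stackrel{\nabla}{\dot{\boldsymbol{\gamma}}} \right)
```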
Abstract:
Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Monitoring water-level networks can give information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physically based mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics, as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied to a case study in a Guarani Aquifer System (GAS) outcrop area located in the southeastern part of Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage, such as the GAS.
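The two-stage idea (a per-well time-series model whose predictions are then spread in space) can be sketched as follows. The AR(1)-with-recharge regression and the inverse-distance interpolation are deliberate simplifications standing in for the transfer-function-noise models and kriging of the actual methodology, and all data are synthetic.

```python
# Minimal sketch: (1) a per-well time-series model, here a simple
# AR(1)-with-recharge regression; (2) spatial interpolation of its
# predictions, here inverse-distance weighting as a crude kriging stand-in.
import numpy as np

rng = np.random.default_rng(0)
wells = {"w1": (0.0, 0.0), "w2": (5.0, 1.0), "w3": (2.0, 4.0)}  # x, y (km)
rain = rng.gamma(2.0, 5.0, size=120)                            # monthly recharge proxy

def fit_ar1_with_input(h, u):
    """Least-squares fit of h[t] = a*h[t-1] + b*u[t] + c."""
    A = np.column_stack([h[:-1], u[1:], np.ones(len(h) - 1)])
    coef, *_ = np.linalg.lstsq(A, h[1:], rcond=None)
    return coef  # a, b, c

preds = {}
for name in wells:
    h = 10 + np.cumsum(rng.normal(0, 0.1, 120)) + 0.02 * rain   # synthetic levels
    a, b, c = fit_ar1_with_input(h, rain)
    preds[name] = a * h[-1] + b * rain[-1] + c                  # one-step-ahead level

def idw(x, y, power=2.0):
    """Interpolate the per-well predictions at an unmonitored point (x, y)."""
    d = np.array([np.hypot(x - wx, y - wy) for wx, wy in wells.values()])
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return np.dot(w, list(preds.values())) / w.sum()

print("predicted level at (3, 2):", round(idw(3.0, 2.0), 2))
```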
Abstract:
Human movement analysis (HMA) aims to measure the ability of a subject to stand or to walk. In the field of HMA, tests are performed daily in research laboratories, hospitals and clinics to diagnose a disease, distinguish between disease entities, monitor the progress of a treatment and predict the outcome of an intervention [Brand and Crowninshield, 1981; Brand, 1987; Baker, 2006]. To achieve these purposes, clinicians and researchers use measurement devices such as force platforms, stereophotogrammetric systems, accelerometers and baropodometric insoles. This thesis focuses on the force platform (FP) and in particular on the quality assessment of FP data. The principal objective of our work was the design and experimental validation of a portable system for the in situ calibration of FPs. The thesis is structured as follows.
Chapter 1. Description of the physical principles underlying the functioning of an FP: how these principles are used to build force transducers, such as strain gauges and piezoelectric transducers; the two categories of FPs, three- and six-component; signal acquisition (hardware structure) and signal calibration; and, finally, a brief overview of the use of FPs in HMA for balance or gait analysis.
Chapter 2. Description of inverse dynamics, the most common method used in HMA. This method uses the signals measured by an FP to estimate kinetic quantities, such as joint forces and moments. These variables cannot be measured directly without highly invasive techniques, so they can only be estimated by indirect approaches such as inverse dynamics. The chapter closes with a brief description of the sources of error in gait analysis.
Chapter 3. State of the art in FP calibration. The selected literature is divided into sections describing: systems for the periodic control of FP accuracy; systems for error reduction in FP signals; and systems and procedures for the construction of an FP. A calibration system previously designed by our group, based on the theoretical method proposed in [?], is described in detail; this system was the starting point for the new system presented in this thesis.
Chapter 4. Description of the new system and its parts: 1) the algorithm; 2) the device; and 3) the calibration procedure required to perform the calibration correctly. The characteristics of the algorithm were optimized by a simulation approach, whose results are presented here. The different versions of the device are also described.
Chapter 5. Experimental validation of the new system on 4 commercial FPs. The effectiveness of the calibration was verified by measuring, before and after calibration, the accuracy of the FPs in locating the center of pressure of an applied force. The new system can estimate local and global calibration matrices; using these, the non-linearity of the FPs was quantified and locally compensated. Furthermore, a non-linear calibration is proposed that compensates the non-linear behavior of the FP caused by the bending of its upper plate. The experimental results are presented.
Chapter 6. Influence of FP calibration on the estimation of kinetic quantities with the inverse dynamics approach.
Chapter 7. Conclusions of the thesis: the need for FP calibration and the consequent enhancement of kinetic data quality.
Appendix: Calibration of the LC used in the presented system. Different calibration set-ups of a 3D force transducer are presented, and the optimal set-up is proposed, with particular attention to the compensation of non-linearities. The optimal set-up is verified by experimental results.
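A least-squares estimate of a global 6x6 calibration matrix, of the kind mentioned in Chapter 5, might be computed as in the following sketch. The reference loads, raw outputs and noise level are synthetic; a local calibration would simply repeat the same fit on loads applied within each region of the plate.

```python
# Minimal sketch of estimating a 6x6 force-platform calibration matrix by
# least squares, given N known reference wrenches F_true (6 x N) and the
# corresponding raw platform outputs V (6 x N). All data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
C_true = np.eye(6) + 0.05 * rng.normal(size=(6, 6))   # unknown "real" matrix
F_true = rng.normal(size=(6, 200))                    # applied reference wrenches
V = np.linalg.solve(C_true, F_true)                   # raw outputs: F_true = C_true @ V
V += 0.001 * rng.normal(size=V.shape)                 # measurement noise

# Least-squares estimate: C_hat = argmin_C ||F_true - C @ V||_F
C_hat = F_true @ np.linalg.pinv(V)
print("max coefficient error:", np.abs(C_hat - C_true).max())
```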
Abstract:
The peer-to-peer (P2P) network paradigm is drawing the attention of both end users and researchers for its features. P2P networks shift from the classic client-server approach to a high level of decentralization, where there is no central control and every node should be able not only to request services but to provide them to other peers as well. While such a high level of decentralization can lead to desirable properties like scalability and fault tolerance, it also implies many new problems to deal with. A key feature of many P2P systems is openness, meaning that anybody can potentially join a network with no need for subscription or payment. The combination of openness and lack of central control makes it feasible for a user to free-ride, that is, to increase its own benefit by using services without allocating resources to satisfy other peers' requests. One of the main goals when designing a P2P system is therefore to achieve cooperation between users. Given that P2P systems are based on simple local interactions of many peers with partial knowledge of the whole system, an interesting way to achieve desired system-scale properties is to obtain them as emergent properties of the many interactions occurring at the local node level. Two methods are typically used to address the problem of cooperation in P2P networks: 1) engineering emergent properties when designing the protocol; 2) studying the system as a game and applying game-theoretic techniques, especially to find Nash equilibria and to reach them, making the system stable against possible deviant behaviors. In this work we present an evolutionary framework to enforce cooperative behaviour in P2P networks that is an alternative to both methods mentioned above. Our approach is based on an evolutionary algorithm inspired by computational sociology and evolutionary game theory, in which each peer periodically tries to copy another peer that is performing better. The proposed algorithms, called SLAC and SLACER, draw inspiration from tag systems originating in computational sociology; the main idea behind them is that low-performance nodes copy high-performance ones. The algorithm is run locally by every node and leads to an evolution of the network both in its topology and in the nodes' strategies. Initial tests with a simple Prisoner's Dilemma application show that SLAC is able to bring the network to a state of high cooperation independently of the initial network conditions. Interesting results are obtained when studying the effect of cheating nodes on the SLAC algorithm: in some cases, selfish nodes rationally exploiting the system for their own benefit can actually improve system performance from the point of view of cooperation formation. The final step is to apply our results to more realistic scenarios. We put our efforts into studying and improving the BitTorrent protocol. BitTorrent was chosen not only for its popularity but because it has many points in common with the SLAC and SLACER algorithms, from its game-theoretic inspiration (a tit-for-tat-like mechanism) to its swarm topology.
We found fairness, defined as the ratio between uploaded and downloaded data, to be a weakness of the original BitTorrent protocol, and we drew on the understanding of cooperation formation and maintenance mechanisms gained from the development and analysis of SLAC and SLACER to improve fairness and to tackle free-riding and cheating in BitTorrent. We produced an extension of BitTorrent, called BitFair, which has been evaluated through simulation and has shown the ability to enforce fairness and to counter free-riding and cheating nodes.
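Our reading of the SLAC copy-and-rewire step can be compressed into the following sketch; the node structure, mutation rates and rewiring rule are simplified stand-ins for the full algorithm.

```python
# Hypothetical compressed sketch of a SLAC-style step: a node compares its
# utility with a randomly chosen node and, if the other performs better,
# copies its strategy and links, with small mutation probabilities.
import random

class Node:
    def __init__(self, i):
        self.i, self.cooperate = i, random.random() < 0.5
        self.links, self.utility = set(), 0.0

def slac_step(nodes, mu_strategy=0.01, mu_link=0.01):
    a, b = random.sample(nodes, 2)
    if b.utility > a.utility:            # copy the better performer
        a.cooperate = b.cooperate
        a.links = set(b.links) | {b.i}   # drop own links, move into b's neighbourhood
    if random.random() < mu_strategy:    # mutate strategy
        a.cooperate = not a.cooperate
    if random.random() < mu_link:        # mutate links: rewire to a random node
        a.links = {random.choice(nodes).i}
```

In a full run, node utilities would be recomputed between steps from the application itself, e.g. Prisoner's Dilemma payoffs accumulated over interactions with neighbours.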
Abstract:
Computer-aided design of Monolithic Microwave Integrated Circuits (MMICs) depends critically on active device models that are accurate, computationally efficient, and easily extracted from measurements or device simulators. Empirical models of active electron devices, which are based on actual device measurements, do not provide a detailed description of the device physics; however, they are numerically efficient and quite accurate. These characteristics make them very suitable for MMIC design in the framework of commercially available CAD tools. In the empirical model formulation it is very important to separate linear memory effects (parasitic effects) from the nonlinear effects (intrinsic effects). Thus, an empirical active device model is generally described by an extrinsic linear part which accounts for the parasitic passive structures connecting the nonlinear intrinsic electron device to the external world. An important task circuit designers face is evaluating the ultimate potential of a device for specific applications: once the technology has been selected, the designer must choose the best device for the particular application and for the different blocks composing the overall MMIC. Thus, in order to accurately reproduce the behaviour of devices of different sizes, good scalability properties of the model are required. Another important aspect of empirical modelling of electron devices is the mathematical (or equivalent-circuit) description of the nonlinearities inherently associated with the intrinsic device. Once the model has been defined, the proper measurements for the characterization of the device are performed in order to identify the model. Hence, the correct measurement of the device nonlinear characteristics (in the characterization phase) and their reconstruction (in the identification or simulation phase) are two of the most important aspects of empirical modelling. This thesis presents an original contribution to nonlinear electron device empirical modelling, treating the issues of model scalability and reconstruction of the device nonlinear characteristics. The scalability of an empirical model strictly depends on the scalability of the linear extrinsic parasitic network, which should preserve the link between technological process parameters and the corresponding device electrical response. Since lumped parasitic networks, together with simple linear scaling rules, cannot provide accurate scalable models, the literature offers either complicated technology-dependent scaling rules or computationally inefficient distributed models. This thesis shows how these problems can be avoided through the use of commercially available electromagnetic (EM) simulators. They enable the actual device geometry and material stratification, as well as losses in the dielectrics and electrodes, to be taken into account for any given device structure and size, providing an accurate description of the parasitic effects which occur in the device passive structure. It is shown how the electron device behaviour can be described as an equivalent two-port intrinsic nonlinear block connected to a linear distributed four-port passive parasitic network, identified by means of the EM simulation of the device layout, allowing for better frequency extrapolation and scalability properties than conventional empirical models.
Concerning the reconstruction of the nonlinear electron device characteristics, a data approximation algorithm has been developed for use in the framework of empirical table look-up nonlinear models. The approach is based on the strong analogy between time-domain signal reconstruction from a set of samples and the continuous approximation of device nonlinear characteristics on the basis of a finite grid of measurements. According to this criterion, nonlinear empirical device modelling can be carried out by applying, in the sampled voltage domain, typical methods of time-domain sampling theory.
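The analogy with sampling theory can be illustrated with a toy one-dimensional reconstruction: a nonlinear characteristic measured on a uniform voltage grid is approximated continuously with a sinc kernel, exactly as a band-limited signal is reconstructed from its time samples. The characteristic, grid and kernel choice below are illustrative assumptions, not the thesis's actual algorithm.

```python
# Sketch of the sampling-theory analogy: sinc reconstruction of a
# synthetic nonlinear I(V) characteristic from a uniform voltage grid.
import numpy as np

Vg = np.linspace(-2.0, 2.0, 41)          # measurement grid (uniform in V)
dV = Vg[1] - Vg[0]
Ig = np.tanh(3 * Vg) * (1 + 0.1 * Vg)    # "measured" nonlinear characteristic

def reconstruct(v):
    """Continuous approximation of I at voltage v from the sampled grid."""
    return np.sum(Ig * np.sinc((v - Vg) / dV))

v_test = 0.33
print("interpolated:", reconstruct(v_test),
      "true:", np.tanh(3 * v_test) * (1 + 0.1 * v_test))
```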
Abstract:
A three-dimensional air pollution model for the short-term simulation of emission, transport and reaction of pollutants is presented. In the finite element simulation of these environmental processes over complex terrain, a mesh generator capable of adapting itself to the topographic characteristics is essential. A local refinement of tetrahedra is used in order to capture the plume rise. A wind field is then computed using a mass-consistent model, perturbing its vertical component to introduce the plume rise effect. Finally, an Eulerian convection-diffusion-reaction model is used to simulate the pollutant dispersion…
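Mass-consistent adjustment is commonly formulated as a divergence-removal step. The following toy 2-D sketch (uniform grid, Jacobi iterations, zero boundary potential, synthetic first-guess wind) shows only the principle; the abstract's model works in 3-D with finite elements on adaptive tetrahedral meshes.

```python
# Toy 2-D mass-consistent adjustment: given a first-guess field (u0, v0),
# solve laplacian(lam) = -div(u0, v0) and set u = u0 + grad(lam), making
# the adjusted field (approximately) divergence-free. Purely illustrative.
import numpy as np

n, h = 64, 1.0
rng = np.random.default_rng(2)
u0 = rng.normal(0, 1, (n, n))            # x-component of first-guess wind
v0 = rng.normal(0, 1, (n, n))            # y-component

div = np.zeros((n, n))
div[1:-1, 1:-1] = ((u0[1:-1, 2:] - u0[1:-1, :-2]) +
                   (v0[2:, 1:-1] - v0[:-2, 1:-1])) / (2 * h)

lam = np.zeros((n, n))
for _ in range(2000):                    # Jacobi sweeps for laplacian(lam) = -div
    lam[1:-1, 1:-1] = 0.25 * (lam[1:-1, 2:] + lam[1:-1, :-2] +
                              lam[2:, 1:-1] + lam[:-2, 1:-1] +
                              h * h * div[1:-1, 1:-1])

u, v = u0.copy(), v0.copy()
u[1:-1, 1:-1] += (lam[1:-1, 2:] - lam[1:-1, :-2]) / (2 * h)
v[1:-1, 1:-1] += (lam[2:, 1:-1] - lam[:-2, 1:-1]) / (2 * h)

div_after = ((u[1:-1, 2:] - u[1:-1, :-2]) +
             (v[2:, 1:-1] - v[:-2, 1:-1])) / (2 * h)
print("mean |div| before:", np.abs(div[1:-1, 1:-1]).mean(),
      "after:", np.abs(div_after).mean())
```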
Abstract:
Which event study methods are best in non-U.S. multi-country samples? Nonparametric tests, especially the rank and generalized sign, are better specified and more powerful than common parametric tests, especially in multi-day windows. The generalized sign test is the best statistic but must be applied to buy-and-hold abnormal returns for correct specification. Market-adjusted and market-model methods with local market indexes, without conversion to a common currency, work well. The results are robust to limiting the samples to situations expected to be problematic for test specification or power. Applying the tests that perform best in simulation to merger announcements produces reasonable results.
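As an illustration, the generalized sign test on buy-and-hold abnormal returns can be computed as in the sketch below; this is our reading of the standard procedure (cf. Cowan 1992), and all returns are simulated.

```python
# Generalized sign test on buy-and-hold abnormal returns (BHARs): compare
# the fraction of positive event-window BHARs against the fraction of
# positive abnormal returns in the estimation period. Simulated data.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 200                                            # sample of event firms
est_ar = rng.normal(0.0, 0.02, (n, 250))           # estimation-period abnormal returns
bhar = np.prod(1 + rng.normal(0.002, 0.05, (n, 5)), axis=1) - 1  # 5-day BHARs

p_hat = (est_ar > 0).mean()                        # expected fraction positive
w = (bhar > 0).sum()                               # observed positives in event window
z = (w - n * p_hat) / np.sqrt(n * p_hat * (1 - p_hat))
print(f"generalized sign z = {z:.2f}, p = {2 * (1 - norm.cdf(abs(z))):.4f}")
```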
Abstract:
The subject of this Ph.D. research thesis is the development and application of multiplexed analytical methods based on bioluminescent whole-cell biosensors. One of the main goals of analytical chemistry is multianalyte testing, in which two or more analytes are measured simultaneously in a single assay. The advantages of multianalyte testing are work simplification, high throughput, and reduction in the overall cost per test. The availability of multiplexed portable analytical systems is of particular interest for on-field analysis of clinical, environmental or food samples, as well as for the drug discovery process. To allow highly sensitive and selective analysis, these devices should combine biospecific molecular recognition with ultrasensitive detection systems. To address the current need for rapid, highly sensitive and inexpensive devices for obtaining more data from each sample, genetically engineered whole-cell biosensors serving as the biospecific recognition element were combined with ultrasensitive bioluminescence detection techniques. Genetically engineered cell-based sensing systems were obtained by introducing into bacterial, yeast or mammalian cells a vector expressing a reporter protein whose expression is controlled by regulatory proteins and promoter sequences. The regulatory protein recognizes the presence of the analyte (e.g., compounds with hormone-like activity, heavy metals…) and consequently activates the expression of the reporter protein, which can be readily measured and directly related to the bioavailable analyte concentration in the sample. Bioluminescence is an ideal detection principle for miniaturized analytical devices and multiplexed assays thanks to its high detectability in small sample volumes, allowing accurate signal localization and quantification. The first chapter of this dissertation discusses the development of improved bioluminescent proteins emitting at different wavelengths, with increased thermostability, improved emission decay kinetics and better spectral resolution. The second chapter focuses on the use of these proteins in the development of whole-cell based assays with improved analytical performance. In particular, since the main drawback of whole-cell biosensors is the high variability of their analyte-specific response, mainly caused by variations in cell viability due to nonspecific effects of the sample matrix, an additional bioluminescent reporter was introduced to correct the analytical response, thus increasing the robustness of the bioassays. The feasibility of combining two or more bioluminescent proteins to obtain biosensors with internal signal correction, or for the simultaneous detection of multiple analytes, was demonstrated by developing a dual-reporter yeast-based biosensor for androgenic activity measurement and a triple-reporter mammalian cell-based biosensor for the simultaneous monitoring of the activation of two CYP450 enzymes involved in cholesterol degradation, using two spectrally resolved intracellular luciferases and a secreted luciferase as a control for cell viability. The third chapter presents the development of a portable multianalyte detection system.
In order to develop a portable system usable outside the laboratory environment, even by non-specialist personnel, cells were immobilized in a new biocompatible and transparent polymeric matrix within a modified clear-bottom black 384-well microtiter plate to obtain a bioluminescent cell array. The cell array was placed in contact with a portable charge-coupled device (CCD) light sensor able to localize and quantify the luminescent signal produced by the different bioluminescent whole-cell biosensors. This multiplexed biosensing platform was successfully used to measure the overall toxicity of a given sample, to obtain dose-response curves for heavy metals, and to detect hormonal activity in clinical samples (PCT/IB2010/050625: “Portable device based on immobilized cells for the detection of analytes.” Michelini E, Roda A, Dolci LS, Mezzanotte L, Cevenini L, 2010). At the end of the dissertation, future development steps are discussed towards a point-of-care testing (POCT) device combining portability, minimal sample pre-treatment and highly sensitive multiplexed assays in a short assay time. In this POCT perspective, field-flow fractionation (FFF) techniques, in particular the gravitational variant (GrFFF), which exploits the Earth's gravitational field to structure the separation, have been investigated for cell fractionation, characterization and isolation. Thanks to the simplicity of its equipment, amenable to miniaturization, the GrFFF technique appears particularly suited for implementation in POCT devices; it may be used as an integrated pre-analytical module that drives target analytes of raw samples to the modules where biospecific recognition reactions based on ultrasensitive bioluminescence detection occur, increasing the overall analytical throughput.
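A minimal sketch of the internal-signal-correction idea discussed in the second chapter: the analyte-specific luminescence is divided by the viability-reporter luminescence before fitting a dose-response curve, so matrix effects on cell viability cancel out. The four-parameter logistic model and all data below are illustrative assumptions, not the assay's actual calibration.

```python
# Ratio correction of a whole-cell biosensor signal, followed by a
# four-parameter logistic (4PL) dose-response fit. Synthetic data.
import numpy as np
from scipy.optimize import curve_fit

dose = np.logspace(-3, 2, 10)                             # analyte concentration
viability = np.array([1.0, 1.0, 0.95, 0.9, 1.0,
                      0.85, 0.9, 0.8, 0.85, 0.8])         # viability-reporter signal
specific = viability * (100 + 900 * dose / (dose + 1.0))  # raw specific signal
corrected = specific / viability                          # ratio removes matrix effect

def four_pl(x, bottom, top, ec50, hill):
    return bottom + (top - bottom) / (1 + (ec50 / x) ** hill)

popt, _ = curve_fit(four_pl, dose, corrected, p0=[100, 1000, 1.0, 1.0])
print("estimated EC50:", round(popt[2], 3))
```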
Abstract:
Ion channels are pore-forming proteins that regulate the flow of ions across biological cell membranes. They are fundamental in generating and regulating the electrical activity of cells in the nervous system and the contraction of muscular cells. Solid-state nanopores are nanometer-scale pores located in electrically insulating membranes; they can be adopted as detectors of specific molecules in electrolytic solutions. Permeation of ions from one electrolytic solution to another, through a protein channel or a synthetic pore, is a process of considerable importance, and realistic analysis of the main dependencies of ion current on the geometrical and compositional characteristics of these structures is highly desirable. The project described in this thesis is an effort to improve the understanding of ion channels by devising computer simulation methods that can predict channel conductance from channel structure. The thesis describes the theory, algorithms and implementation techniques used to develop a novel 3-D numerical simulator of ion channels and synthetic nanopores based on the Brownian Dynamics technique. This numerical simulator can be a valid tool for the study of protein ion channels and synthetic nanopores, allowing investigation, at the atomic level, of the complex electrostatic interactions that determine channel conductance and ion selectivity. Moreover, it provides insight into how parameters like temperature, applied voltage and pore shape influence ion translocation dynamics. Furthermore, it helps make predictions of the conductance of given channel structures, and it adds information, such as the electrostatic potential or the ionic concentrations throughout the simulation domain, that aids the understanding of ion flow through membrane pores.
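The core propagation step of a Brownian Dynamics simulator of this kind is typically an overdamped Langevin (Ermak-McCammon-style) update. The sketch below uses a placeholder force field and toy parameters; a real simulator would interpolate forces from a 3-D electrostatics solution over the channel geometry.

```python
# Overdamped Langevin (Brownian Dynamics) update for one ion: drift from
# the local force scaled by D/kT, plus a Gaussian random displacement.
import numpy as np

kT = 4.11e-21          # thermal energy at ~298 K (J)
D = 1.33e-9            # diffusion coefficient (m^2/s), roughly K+ in water
dt = 1e-12             # time step (s)
rng = np.random.default_rng(4)

def force(r):
    """Placeholder electrostatic force (N); a real simulator would
    interpolate the solution of Poisson's equation on a 3-D grid."""
    return np.array([0.0, 0.0, -1e-12]) - 1e-3 * np.array([r[0], r[1], 0.0])

r = np.array([1e-10, 0.0, 5e-9])       # initial ion position (m)
for _ in range(10000):
    drift = (D / kT) * force(r) * dt
    noise = np.sqrt(2 * D * dt) * rng.normal(size=3)
    r = r + drift + noise
print("final position (nm):", r * 1e9)
```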
Abstract:
Simulations of SiO2 with the pair potential developed by van Beest, Kramer and van Santen (BKS) produce many satisfactory results, but also characteristic weaknesses. In this work the BKS potential is compared with two recently proposed potentials that effectively include many-body interactions: the first approach allows fluctuating charges, the second inducible polarizations on the oxygen atoms. The investigated weaknesses of the BKS potential include the ratio of the two lattice constants a and c across the quartz transition, which BKS describes incorrectly. Cristobalite and tridymite appear unstable with BKS. Furthermore, the BKS density of states shows characteristic deviations from the true density of states, and the transition pressure for the stishovite I-II transition is significantly overestimated. The fluctuating-charge model improves some of these points but reproduces many other properties worse than BKS. The fluctuating-dipole model, by contrast, remedies all the artifacts mentioned. In addition, the pressure-induced phase transition in alpha-quartz is investigated. All potentials find the same structure for quartz II. On subsequent decompression, BKS produces a further phase, while the other two potentials return to alpha-quartz. Furthermore, two methods are developed to determine the piezoelectric constants at constant pressure. The results hint at a possibly non-electrostatic nature of the polarizations in the fluctuating-dipole model. With this interpretation, the fluctuating-dipole potential appears to reproduce all available experimental data best of the three approaches investigated.
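For context, the BKS interaction is the standard Coulomb-plus-Buckingham pair potential; in common notation (fitted Si-O and O-O parameter values omitted):

```latex
% BKS pair potential (standard Coulomb + Buckingham form).
% q_i, q_j: effective charges; A_{ij}, b_{ij}, C_{ij}: fitted parameters
% for the Si-O and O-O pairs.
V_{ij}(r) = \frac{q_i q_j}{4\pi\varepsilon_0\, r}
          + A_{ij}\, e^{-b_{ij} r}
          - \frac{C_{ij}}{r^{6}}
```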
Abstract:
In this work we develop and analyze an adaptive numerical scheme for simulating a class of macroscopic semiconductor models. First, the numerical modelling of semiconductors is reviewed in order to classify the Energy-Transport models that are later simulated in 2D. In this class of models, the flow of charged particles, namely negatively charged electrons and so-called holes (quasi-particles of positive charge), as well as their energy distributions, are described by a coupled system of nonlinear partial differential equations. A considerable difficulty in simulating these convection-dominated equations is posed by the nonlinear coupling, as well as by the fact that local phenomena such as "hot electron effects" are only partially assessable through the given data. The primary variables used in the simulations are the particle density and the particle energy density. The user of these simulations is mostly interested in the current flow through parts of the domain boundary, the contacts. The numerical method considered here uses mixed finite elements as trial functions for the discrete solution; from the user's perspective, the most important property of this discretization is the continuous approximation of the normal fluxes. It is proven that, under certain assumptions on the triangulation, the particle density remains positive in the iterative solution algorithm. Connected to this result, an a priori error estimate for the discrete solution of linear convection-diffusion equations is derived. The local charge-transport phenomena are resolved by an adaptive algorithm based on a posteriori error estimators, and a comparison of different estimators is performed. Additionally, a method to effectively estimate the error in local quantities derived from the solution, so-called "functional outputs", is developed by transferring the dual weighted residual method to mixed finite elements. For a model problem we show how this method can deliver promising results even when standard error estimators completely fail to reduce the error in an iterative mesh-refinement process.
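The dual weighted residual method mentioned above admits a compact schematic statement (standard form; notation assumed): the error in a functional output J is represented by the residual of the discrete solution weighted with the solution z of an associated dual problem,

```latex
% Dual weighted residual (DWR) error representation, schematic form.
% u: exact solution; u_h: discrete solution; z, z_h: exact and discrete
% dual solutions; \rho(u_h)(\cdot): residual functional; \mathcal{T}_h: mesh.
J(u) - J(u_h) \approx \rho(u_h)(z - z_h)
  = \sum_{K \in \mathcal{T}_h} \rho_K(u_h)\,(z - z_h)\big|_K
```

The cell-wise terms on the right are computable once z is approximated, which is what makes the estimator usable to drive mesh refinement towards the output of interest.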
Abstract:
Synthetic Biology is a relatively new discipline, born at the beginning of the new millennium, that brings the typical engineering approach (abstraction, modularity and standardization) to biotechnology. These principles aim to tame the extreme complexity of the various components and to aid the construction of artificial biological systems with specific functions, usually by means of synthetic genetic circuits implemented in bacteria or simple eukaryotes like yeast. The cell becomes a programmable machine whose low-level programming language is made of strings of DNA. This work was performed in collaboration with researchers of the Department of Electrical Engineering of the University of Washington in Seattle and with a student of the Corso di Laurea Magistrale in Ingegneria Biomedica at the University of Bologna, Marilisa Cortesi. During the collaboration I contributed to a Synthetic Biology project already started in the Klavins Laboratory. In particular, I modeled and subsequently simulated a synthetic genetic circuit designed to implement a multicelled behavior in a growing bacterial microcolony. The first chapter introduces the foundations of molecular biology: the structure of nucleic acids, transcription, translation and the methods of regulating gene expression. An introduction to Synthetic Biology completes the section. The second chapter describes the synthetic genetic circuit conceived to make two different groups of cells, termed leaders and followers, emerge spontaneously from an isogenic microcolony of bacteria. The circuit exploits the intrinsic stochasticity of gene expression and intercellular communication via small molecules to break the symmetry in the phenotype of the microcolony. The four modules of the circuit (coin flipper, sender, receiver and follower) and their interactions are then illustrated. The third chapter derives the mathematical representation of the various components of the circuit and makes the several simplifying assumptions explicit. Transcription and translation are modeled as a single step, and gene expression is a function of the intracellular concentrations of the various transcription factors that act on the different promoters of the circuit. A list of the parameters and a justification of their values closes the chapter. The fourth chapter describes the main characteristics of the gro simulation environment, developed by the Self Organizing Systems Laboratory of the University of Washington, and then details a sensitivity analysis performed to pinpoint the desirable characteristics of the various genetic components. The sensitivity analysis uses a cost function based on the fraction of cells in each of the possible states at the end of the simulation and on the wanted outcome. Thanks to a particular kind of scatter plot, the parameters are ranked; starting from an initial condition in which all parameters assume their nominal value, the ranking suggests which parameter to tune in order to reach the goal. Obtaining a microcolony in which almost all cells are in the follower state and only a few in the leader state appears to be the most difficult task: a small number of leader cells struggle to produce enough signal to turn the rest of the microcolony to the follower state. It is possible to obtain a microcolony in which the majority of cells are followers by increasing the production of signal as much as possible.
Reaching the goal of a microcolony split in half between leaders and followers is comparatively easy; the best strategy seems to be a slight increase in the production of the enzyme. To end up with a majority of leaders, instead, it is advisable to increase the basal expression of the coin flipper module. At the end of the chapter, a possible future application of the leader election circuit, the spontaneous formation of spatial patterns in a microcolony, is modeled with the finite state machine formalism. The gro simulations provide insights into the genetic components needed to implement the behavior. In particular, since both examples of pattern formation rely on a local version of leader election, a short-range communication system is essential. Moreover, new synthetic components that allow the growth rate to be reliably downregulated in specific cells without side effects need to be developed. The appendix lists the gro code used to simulate the model of the circuit, a script in the Python programming language used to distribute the simulations on a Linux cluster, and the Matlab code developed to analyze the data.
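The one-step expression model described in the third chapter can be sketched as a single ordinary differential equation in which production is a Hill function of a transcription-factor concentration and decay is first order. The parameter values below are illustrative, not those used in the thesis.

```python
# One-step gene expression model: production follows a Hill function of a
# transcription factor, minus first-order dilution/degradation.
def hill(tf, k, n):
    return tf ** n / (k ** n + tf ** n)

alpha, gamma, k, n = 10.0, 0.5, 2.0, 2.0   # max rate, decay, Hill constant/exponent
dt, protein, tf = 0.01, 0.0, 4.0
for _ in range(2000):                       # forward-Euler integration
    protein += dt * (alpha * hill(tf, k, n) - gamma * protein)
print("steady protein level ~", round(protein, 2),
      "(analytic:", round(alpha * hill(tf, k, n) / gamma, 2), ")")
```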
Abstract:
The goal of this thesis is the application of opto-electronic numerical simulation to heterojunction silicon solar cells featuring an all-back-contact architecture (Interdigitated Back Contact Hetero-Junction, IBC-HJ). The studied structure has both metal contacts, emitter and base, at the back surface of the cell, with the objective of reducing the optical losses due to shadowing by the front contact of conventional photovoltaic devices. Overall, IBC-HJ cells are promising low-cost alternatives to monocrystalline wafer-based solar cells with front- and back-contact schemes; in IBC-HJ devices the high-concentration doping diffusions are replaced by low-temperature deposition of thin amorphous silicon layers. Furthermore, another advantage of IBC solar cells with respect to conventional architectures is the possibility of low-cost assembly of photovoltaic modules, all contacts being on the same side. A preliminary extensive literature survey helped to highlight the specific critical aspects of IBC-HJ solar cells as well as the state of the art of their modeling, processing and practical device performance. In order to analyze IBC-HJ devices, a two-dimensional (2-D) numerical simulation flow was set up. A commercial device simulator based on the finite-difference method, which numerically solves the whole set of equations governing electrical transport in semiconductor materials (Sentaurus Device by Synopsys), was adopted. The first activity carried out during this work was the definition of a 2-D geometry corresponding to the simulation domain and the specification of the electrical and optical properties of the materials. In order to calculate the main figures of merit of the investigated solar cells, the spatially resolved photon absorption rate map was calculated by means of an optical simulator. Optical simulations were performed using two different methods, depending on the geometrical features of the front interface of the solar cell: the transfer matrix method (TMM) and ray tracing (RT). The first method models light propagation by plane waves within one-dimensional spatial domains, under the assumption of devices consisting of stacks of parallel layers with planar interfaces; TMM is also suitable for simulating thin multilayer anti-reflection coatings for reducing the amount of light reflected at the front interface. Ray tracing is required for three-dimensional optical simulations of upright-pyramid textured surfaces, which are widely adopted to significantly reduce the reflection at the front surface. The optical generation profiles are interpolated onto the electrical grid adopted by the device simulator, which solves the carrier transport equations coupled with the Poisson and continuity equations in a self-consistent way. The main figures of merit are calculated by post-processing the output data of the device simulation. After validating the simulation methodology by comparing simulation results with literature data, the ultimate efficiency of the IBC-HJ architecture was calculated. Accounting for all optical losses, IBC-HJ solar cells reach a theoretical maximum efficiency above 23.5% (without texturing at the front interface), higher than that of both standard homojunction crystalline silicon (Homogeneous Emitter, HE) and front-contact heterojunction (Heterojunction with Intrinsic Thin layer, HIT) solar cells.
However, the critical aspects of this structure are mainly the defect density and the poor carrier transport mobility in the amorphous silicon layers. Lastly, the influence of the most critical geometrical and physical parameters on the main figures of merit was investigated by applying the numerical simulation tools set up during the first part of the thesis. Simulations highlighted that the carrier mobility and the defect level in amorphous silicon may lead to a potentially significant reduction of the conversion efficiency.
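As an illustration of the first optical method mentioned above, a compact transfer-matrix calculation at normal incidence is sketched below. The single anti-reflection layer, refractive indices and wavelength are assumptions chosen to show the reflectance reduction, not the stack simulated in the thesis.

```python
# Transfer-matrix-method (TMM) sketch at normal incidence: reflectance of
# a planar layer stack on a silicon substrate via characteristic matrices.
import numpy as np

def tmm_reflectance(n_layers, d_layers, n_in, n_sub, lam):
    """Characteristic-matrix TMM for a stack of planar, non-absorbing layers."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2 * np.pi * n * d / lam           # phase thickness of the layer
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])             # stack + substrate admittance
    r = (n_in * B - C) / (n_in * B + C)           # amplitude reflection coefficient
    return abs(r) ** 2

lam = 600e-9                                      # wavelength (m)
print("bare Si:", round(tmm_reflectance([], [], 1.0, 3.94, lam), 3))
print("with 75 nm layer (n=2.0):",
      round(tmm_reflectance([2.0], [75e-9], 1.0, 3.94, lam), 5))
```

The quarter-wave layer with index close to the square root of the substrate index drives the reflectance to nearly zero at the design wavelength, which is the basic design rule behind anti-reflection coatings.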