962 results for Monte-Carlo Simulation
Abstract:
Hypernuclear physics is currently attracting renewed interest, due to the important role of hypernuclear spectroscopy (hyperon-hyperon and hyperon-nucleon interactions) as a unique tool to describe the baryon-baryon interactions in a unified way and to understand the origin of their short-range part.

Hypernuclear research will be one of the main topics addressed by the PANDA experiment at the planned Facility for Antiproton and Ion Research (FAIR). Thanks to the use of stored $\overline{p}$ beams, copious production of double $\Lambda$ hypernuclei is expected at the PANDA experiment, which will enable high-precision $\gamma$ spectroscopy of such nuclei for the first time. At PANDA, excited states of $\Xi^-$ hypernuclei will be used as a basis for the formation of double $\Lambda$ hypernuclei. For their detection, a dedicated hypernuclear detector setup is planned. This setup consists of a primary nuclear target for the production of $\Xi^{-}+\overline{\Xi}$ pairs, a secondary active target for hypernucleus formation and the identification of associated decay products, and a germanium array detector to perform $\gamma$ spectroscopy.

In the present work, the feasibility of performing high-precision $\gamma$ spectroscopy of double $\Lambda$ hypernuclei at the PANDA experiment has been studied by means of a Monte Carlo simulation. To this end, the design and simulation of the dedicated detector setup and of the mechanism to produce double $\Lambda$ hypernuclei have been optimized, together with the performance of the whole system.
In addition, the production yields of double hypernuclei in excited particle-stable states have been evaluated within a statistical decay model.

A strategy for the unique assignment of various newly observed $\gamma$-transitions to specific double hypernuclei has been successfully implemented by combining the predicted energy spectra of each target with the measurement of the two pion momenta from the subsequent weak decays of a double hypernucleus.

For background handling, a method based on time measurement has also been implemented. However, the percentage of tagged events related to the production of $\Xi^{-}+\overline{\Xi}$ pairs varies between 20% and 30% of the total number of produced events of this type. As a consequence, further considerations have to be made to increase the tagging efficiency by a factor of 2.

The contribution of background reactions to the radiation damage of the germanium detectors has also been studied within the simulation. Additionally, a test to check the degradation of the energy resolution of the germanium detectors in the presence of a magnetic field has been performed. No significant degradation of the energy resolution or of the electronics was observed. A correlation between the rise time and the pulse shape has been used to correct the measured energy.

Based on the present results, one can say that performing $\gamma$ spectroscopy of double $\Lambda$ hypernuclei at the PANDA experiment seems feasible. A further improvement of the statistics is needed for the background rejection studies.
Moreover, a more realistic layout of the hypernuclear detectors has been suggested, using the results of these studies to achieve a better balance between the physical and the technical requirements.
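The kind of $\gamma$-spectroscopy feasibility study described above can be sketched with a toy Monte Carlo: a Gaussian photo-peak on a flat background, histogrammed in 1 keV bins. The peak energy, resolution, and count levels below are illustrative assumptions, not PANDA parameters.

```python
import random

def simulate_spectrum(n_signal, n_background, peak_kev, sigma_kev, seed=1):
    """Toy Monte Carlo of a germanium-detector spectrum: a Gaussian
    photo-peak on a flat background, histogrammed in 1 keV bins (0-3000 keV)."""
    rng = random.Random(seed)
    hist = [0] * 3000
    for _ in range(n_signal):
        e = rng.gauss(peak_kev, sigma_kev)
        if 0 <= e < 3000:
            hist[int(e)] += 1
    for _ in range(n_background):
        e = rng.uniform(0, 3000)
        if e < 3000:
            hist[int(e)] += 1
    return hist

# Hypothetical transition at 1000 keV with 2 keV resolution.
hist = simulate_spectrum(5000, 30000, peak_kev=1000.0, sigma_kev=2.0)
```

With these settings the peak bins collect hundreds of counts against a flat background of roughly ten counts per bin, which is the regime in which a transition can be assigned unambiguously.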
Abstract:
Marking the final explosive burning stage of massive stars, supernovae are one of the most energetic celestial events. Apart from their enormous optical brightness, they are also known to be associated with strong emission of MeV neutrinos, up to now the only proven source of extrasolar neutrinos. Although designed for the detection of high-energy neutrinos, the recently completed IceCube neutrino telescope in the Antarctic ice will have the highest sensitivity of all current experiments for measuring the shape of the neutrino light curve, which lies in the MeV range. This measurement is crucial for the understanding of supernova dynamics. In this thesis, the development of a Monte Carlo simulation for a future low-energy extension of IceCube, called PINGU, is described that investigates the response of PINGU to a supernova. Using this simulation, various detector configurations are analysed and optimised for supernova detection. The prospects of extracting not only the total light curve but also the direction of the supernova and the mean neutrino energy from the data are discussed. Finally, the performance of PINGU is compared to the current capabilities of IceCube.
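The core of such a light-curve study is Poisson counting of signal over detector noise in time bins. A minimal sketch, with a hypothetical signal shape and arbitrary rates (not IceCube/PINGU values):

```python
import math, random

def simulate_light_curve(rate_fn, noise_rate, t_max, dt, seed=7):
    """Toy detector Monte Carlo: in each time bin, draw Poisson counts
    from the supernova signal rate plus a constant dark-noise rate."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's method; adequate for the modest per-bin rates used here.
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    n_bins = round(t_max / dt)
    return [poisson((rate_fn(i * dt) + noise_rate) * dt) for i in range(n_bins)]

# Hypothetical signal shape (arbitrary units): prompt rise, exponential decay.
signal = lambda t: 500.0 * math.exp(-t / 3.0)
counts = simulate_light_curve(signal, noise_rate=20.0, t_max=10.0, dt=0.1)
```

Comparing detector configurations then amounts to rerunning this with different effective noise rates and signal efficiencies and asking how well the binned counts recover the input light curve.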
Abstract:
In this work, computer simulations of nucleation and crystallization processes in colloidal systems were carried out. A combination of Monte Carlo simulation methods and the forward-flux-sampling technique was implemented to study the homogeneous and heterogeneous nucleation of crystals of monodisperse hard spheres. In the moderately supercooled bulk hard-sphere system, we predict the homogeneous nucleation rates and compare the results with other theoretical results and with experimental data. Furthermore, we analyse the crystalline clusters in the nucleation and growth zones, finding that crystalline clusters form in the system in different shapes: small clusters tend to be elongated in an arbitrary direction, while larger clusters are more compact and of ellipsoidal shape.

In the next part, we study heterogeneous nucleation at structured bcc (100) walls. The 2D analysis of the crystalline layers at the wall shows that the structure of the wall plays a decisive role in the crystallization of hard-sphere colloids. We also predict the heterogeneous crystal nucleation rates at various degrees of supersaturation. By analysing the largest clusters at the wall, we additionally estimate the contact angle between crystal cluster and wall. It turns out that such systems are far from the wetting region and that the crystallization process takes place via heterogeneous nucleation.

In the last part of the thesis, we consider the crystallization of Lennard-Jones colloidal systems confined between two planar walls. To study the solidification processes in such a system, we carried out an analysis of the bond-orientational order parameter within the layers.
The results show that there is no hexatic order within a layer, which would indicate a Kosterlitz-Thouless melting scenario. The hysteresis in the heating-freezing curves moreover shows that crystallization is an activated process.
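The elementary move underlying such hard-sphere Monte Carlo simulations is simple: propose a random displacement and accept it only if no overlap is created. A minimal 2D (hard-disk) sketch of that move, without the forward-flux-sampling machinery of the thesis:

```python
import random

def metropolis_hard_disks(n, box, radius, steps, max_disp=0.1, seed=3):
    """Minimal Metropolis Monte Carlo for hard disks in a periodic box:
    trial displacements are accepted only if they create no overlap."""
    rng = random.Random(seed)
    # Start from a sparse square lattice so the initial state is overlap-free.
    side = int(n ** 0.5) + 1
    pos = [((i % side + 0.5) * box / side, (i // side + 0.5) * box / side)
           for i in range(n)]

    def overlaps(k, x, y):
        for j, (xj, yj) in enumerate(pos):
            if j == k:
                continue
            dx = (x - xj + box / 2) % box - box / 2   # minimum image
            dy = (y - yj + box / 2) % box - box / 2
            if dx * dx + dy * dy < (2 * radius) ** 2:
                return True
        return False

    accepted = 0
    for _ in range(steps):
        k = rng.randrange(n)
        x = (pos[k][0] + rng.uniform(-max_disp, max_disp)) % box
        y = (pos[k][1] + rng.uniform(-max_disp, max_disp)) % box
        if not overlaps(k, x, y):
            pos[k] = (x, y)
            accepted += 1
    return pos, accepted / steps

pos, acc = metropolis_hard_disks(16, box=10.0, radius=0.5, steps=2000)
```

In a production simulation the same acceptance rule is combined with a cluster-size order parameter and forward-flux interfaces to reach the rare nucleation events.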
Abstract:
I present a new experimental method called Total Internal Reflection Fluorescence Cross-Correlation Spectroscopy (TIR-FCCS). It is a method that can probe hydrodynamic flows near solid surfaces on length scales of tens of nanometres. Fluorescent tracers flowing with the liquid are excited by evanescent light, produced by epi-illumination through the periphery of a high-NA oil-immersion objective. Due to the fast decay of the evanescent wave, fluorescence only occurs for tracers within ~100 nm of the surface, thus resulting in very high normal resolution. The time-resolved fluorescence intensity signals from two laterally shifted (in flow direction) observation volumes, created by two confocal pinholes, are independently measured and recorded. The cross-correlation of these signals provides important information about the tracers' motion and thus their flow velocity. Due to the high sensitivity of the method, fluorescent species of different sizes, down to single dye molecules, can be used as tracers. The aim of my work was to build an experimental setup for TIR-FCCS and use it to measure the shear rate and slip length of water flowing on hydrophilic and hydrophobic surfaces. However, in order to extract these parameters from the measured correlation curves, a quantitative data analysis is needed. This is not a straightforward task, as the complexity of the problem makes it impossible to derive the analytical expressions for the correlation functions needed to fit the experimental data. Therefore, in order to process and interpret the experimental results, I also describe a new numerical method for analysing the acquired auto- and cross-correlation curves: Brownian Dynamics techniques are used to produce simulated auto- and cross-correlation functions and to fit the corresponding experimental data.
I show how to combine detailed and fairly realistic theoretical modelling of the phenomena with accurate measurements of the correlation functions, in order to establish a fully quantitative method for retrieving the flow properties from the experiments. An importance-sampling Monte Carlo procedure is employed to fit the experimental data. This provides the optimum parameter values together with their statistical error bars. The approach is well suited for both modern desktop PCs and massively parallel computers; the latter make it possible to complete the data analysis within short computing times. I applied this method to study the flow of aqueous electrolyte solution near smooth hydrophilic and hydrophobic surfaces. Generally, no slip is expected on a hydrophilic surface, while on a hydrophobic surface some slippage may exist. Our results show that on both hydrophilic and moderately hydrophobic (contact angle ~85°) surfaces the slip length is ~10-15 nm or lower and, within the limitations of the experiments and the model, indistinguishable from zero.
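The Brownian-dynamics idea behind the two-focus cross-correlation can be sketched in one dimension: tracers drift and diffuse past two Gaussian "observation volumes", and the cross-covariance of the two signals peaks near the transit-time lag. All parameters below are arbitrary illustrative values, not the experimental ones.

```python
import math, random

def bd_cross_correlation(v, D, dt, n_steps, x1=0.0, x2=1.0, w=0.2, seed=5):
    """Brownian-dynamics sketch of two-focus cross-correlation in 1D:
    tracers drift with velocity v and diffuse with coefficient D, while
    two Gaussian observation volumes centred at x1 and x2 record
    fluorescence-like signals. The cross-covariance of the two signals
    should peak near the transit-time lag (x2 - x1) / (v * dt) steps."""
    rng = random.Random(seed)
    f1, f2 = [0.0] * n_steps, [0.0] * n_steps
    for _ in range(200):                      # independent tracer passages
        x = x1 - 1.0                          # start upstream of both foci
        t0 = rng.randrange(n_steps // 2)      # random entry time
        for i in range(t0, n_steps):
            x += v * dt + math.sqrt(2 * D * dt) * rng.gauss(0, 1)
            f1[i] += math.exp(-((x - x1) / w) ** 2)
            f2[i] += math.exp(-((x - x2) / w) ** 2)
    m1, m2 = sum(f1) / n_steps, sum(f2) / n_steps

    def xcov(lag):                            # mean-subtracted cross-covariance
        n = n_steps - lag
        return sum((f1[i] - m1) * (f2[i + lag] - m2) for i in range(n)) / n

    return max(range(1, n_steps // 4), key=xcov)

# v = 1, dt = 0.05, focus separation 1.0: the transit time is 20 steps.
peak_lag = bd_cross_correlation(v=1.0, D=0.01, dt=0.05, n_steps=400)
```

Fitting, as in the thesis, then means varying v (and the slip-dependent flow profile) until the simulated correlation curves match the measured ones; here only the forward simulation is sketched.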
Abstract:
In recent years it has become increasingly important to handle credit risk. Credit risk is the risk associated with the possibility of bankruptcy. More precisely, if a derivative provides for a payment at a certain time T but the counterparty defaults before that time, at maturity the payment cannot be effectively performed, so the owner of the contract loses it entirely or in part. This means that the payoff of the derivative, and consequently its price, depends on the underlying of the basic derivative and on the default risk of the counterparty. To value and to hedge credit risk in a consistent way, one needs to develop a quantitative model. We have studied analytical approximation formulas and numerical methods, such as the Monte Carlo method, in order to calculate the price of a bond. We have illustrated how to obtain fast and accurate pricing approximations by expanding the drift and diffusion as a Taylor series, and we have compared the second- and third-order approximations of the bond and call prices with an accurate Monte Carlo simulation. We have analysed the JDCEV model with constant or stochastic interest rate. We have provided numerical examples that illustrate the effectiveness and versatility of our methods. We have used Wolfram Mathematica and Matlab.
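The benchmark step, comparing an approximation against an accurate Monte Carlo bond price, can be illustrated with a deliberately simpler model than the JDCEV model used in the thesis: the Vasicek short-rate model, which has a closed form to check the simulation against. Parameters are hypothetical.

```python
import math, random

def vasicek_bond_mc(r0, a, b, sigma, T, n_paths=10000, n_steps=100, seed=11):
    """Monte Carlo zero-coupon bond price under the Vasicek short-rate
    model dr = a(b - r)dt + sigma dW: average the pathwise discount
    factor exp(-integral of r) over Euler-discretised paths."""
    rng = random.Random(seed)
    dt = T / n_steps
    total = 0.0
    for _ in range(n_paths):
        r, integral = r0, 0.0
        for _ in range(n_steps):
            integral += r * dt
            r += a * (b - r) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        total += math.exp(-integral)
    return total / n_paths

def vasicek_bond_exact(r0, a, b, sigma, T):
    """Closed-form Vasicek bond price, used here to validate the simulation."""
    B = (1 - math.exp(-a * T)) / a
    A = math.exp((B - T) * (a * a * b - sigma * sigma / 2) / (a * a)
                 - sigma * sigma * B * B / (4 * a))
    return A * math.exp(-B * r0)

# Hypothetical parameters: 3% initial rate, 4% long-run mean, 1-year bond.
mc_price = vasicek_bond_mc(r0=0.03, a=0.5, b=0.04, sigma=0.01, T=1.0)
exact_price = vasicek_bond_exact(0.03, 0.5, 0.04, 0.01, 1.0)
```

For the JDCEV model of the thesis no such simple closed form exists, which is exactly why the Taylor-expansion approximations are compared against the Monte Carlo estimate instead.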
Abstract:
In the first chapter, I develop a panel no-cointegration test which extends Pesaran, Shin and Smith (2001)'s bounds test to the panel framework by considering the individual regressions in a Seemingly Unrelated Regression (SUR) system. This makes it possible to take into account unobserved common factors that contemporaneously affect all units of the panel while providing, at the same time, unit-specific test statistics. Moreover, the approach is particularly suited when the number of individuals in the panel is small relative to the number of time series observations. I develop the algorithm to implement the test and use Monte Carlo simulation to analyze its properties. The small-sample properties of the test are remarkable compared to its single-equation counterpart. I illustrate the use of the test with a test of Purchasing Power Parity in a panel of EU15 countries. In the second chapter of my PhD thesis, I verify the Expectation Hypothesis of the Term Structure (EHTS) in the repurchase agreement (repo) market with a new testing approach. I consider an "inexact" formulation of the EHTS, which models a time-varying component in the risk premia, and I treat the interest rates as a non-stationary cointegrated system. The effect of heteroskedasticity is controlled by means of testing procedures (bootstrap and heteroskedasticity correction) which are robust to variance and covariance shifts over time. I find that the long-run implications of the EHTS are verified. A rolling-window analysis clarifies that the EHTS is only rejected in periods of turbulence in financial markets. The third chapter introduces the Stata command "bootrank", which implements the bootstrap likelihood ratio rank test algorithm developed by Cavaliere et al. (2012). The command is illustrated through an empirical application on the term structure of interest rates in the US.
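The common skeleton of the bootstrap procedures above can be shown with a generic sketch: compute a statistic on the data, impose the null on the sample, resample, and count how often the bootstrap statistic is at least as extreme as the observed one. The toy test below (a recentred mean test) stands in for the far more involved likelihood-ratio rank statistic.

```python
import random, statistics

def bootstrap_pvalue(data, stat_fn, null_transform, n_boot=2000, seed=53):
    """Generic bootstrap-test sketch: recentre the sample so that the
    null hypothesis holds, resample with replacement, and report the
    fraction of bootstrap statistics >= the observed statistic."""
    rng = random.Random(seed)
    observed = stat_fn(data)
    null_data = null_transform(data)
    count = 0
    for _ in range(n_boot):
        sample = [null_data[rng.randrange(len(null_data))] for _ in null_data]
        if stat_fn(sample) >= observed:
            count += 1
    return count / n_boot

# Toy use: test "mean = 0" on data whose true mean is 0.5.
rng = random.Random(1)
data = [rng.gauss(0.5, 1.0) for _ in range(100)]
mu = statistics.mean(data)
p = bootstrap_pvalue(data,
                     stat_fn=lambda d: abs(statistics.mean(d)),
                     null_transform=lambda d: [x - mu for x in d])
```

Bootstrap tests of this form are attractive precisely in the settings discussed above, where the finite-sample distribution of the statistic under heteroskedasticity is unknown.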
Abstract:
At the Mainz Microtron (MAMI), Lambda hypernuclei can be produced in (e,e'K^+) reactions. Detecting the produced kaon in the KAOS spectrometer tags reactions in which a hyperon was created. The spectroscopy of charged pions originating from two-body weak decays of light hypernuclei makes it possible to determine the binding energy of the hyperon in the nucleus with high precision. Besides the direct production of hypernuclei, production through the fragmentation of a highly excited continuum state is also possible, which allows different hypernuclei to be studied in a single experiment. High-resolution magnetic spectrometers are available for the spectroscopy of the decay pions. To calculate the ground-state mass of the hypernuclei from the pion momentum, the hyperfragment must be stopped in the target before it decays. Based on the known cross section of elementary kaon photoproduction, the expected event rate was calculated. A Monte Carlo simulation was developed that includes the fragmentation process and the stopping of the hyperfragments in the target, using a statistical break-up model to describe the fragmentation. This approach makes it possible to predict the expected count rate of decay pions for hydrogen-4-Lambda hypernuclei. In a pilot experiment in 2011, the detection of hadrons with the KAOS spectrometer at a scattering angle of 0° was demonstrated at MAMI for the first time, with pions detected in coincidence. It turned out that, owing to the high background rates of positrons in KAOS, an unambiguous identification of hypernuclei was not possible in this configuration. Based on these findings, the KAOS spectrometer was modified to act as a dedicated kaon tagger.
For this purpose, a lead absorber was mounted in the spectrometer, in which positrons are stopped through shower formation. The effect of such an absorber was studied in a beam test. A simulation based on Geant4 was developed, with which the arrangement of absorber and detectors was optimized and which enabled predictions of the impact on data quality. In addition, the simulation was used to generate individual backtracking matrices for kaons, pions and protons that include the interaction of the particles with the lead wall, and thus allow its effects to be corrected. With the improved setup, a production beam time was carried out in 2012, in which kaons at a 0° scattering angle were successfully detected in coincidence with pions from weak decays. In the momentum spectrum of the decay pions, an excess with a significance corresponding to a p-value of 2.5 x 10^-4 was observed. Based on their momentum, these events can be attributed to the decays of hydrogen-4-Lambda hypernuclei, and the number of detected pions is consistent with the calculated yield.
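The significance of such a peak excess is, in its simplest Gaussian approximation, just the excess over the expected background in units of the background fluctuation. A toy calculation with hypothetical counts (not the actual experimental numbers):

```python
import math

def excess_significance(n_obs, n_bkg):
    """Toy significance estimate for a peak region: Gaussian approximation
    z = (n_obs - n_bkg) / sqrt(n_bkg), converted to a one-sided p-value.
    For small counts a proper Poisson treatment would be needed instead."""
    z = (n_obs - n_bkg) / math.sqrt(n_bkg)
    p = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p

# Hypothetical peak region: 42 observed counts over an expected background of 25.
z, p = excess_significance(42, 25)
```

An excess of this size corresponds to a bit under 3.5 sigma, i.e. a p-value of a few times 10^-4, the order of magnitude quoted above.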
Abstract:
The available results of deep imaging searches for planetary companions around nearby stars provide useful constraints on the frequency of giant planets in very wide orbits. Here we present some preliminary results of a Monte Carlo simulation that compares the published detection limits with generated planetary masses and orbital parameters. This allows us to consider the implications of the null detection from direct imaging techniques for the distributions of mass and semimajor axis derived from the results of other search techniques, and also to check the agreement of the observations with the available planetary formation models.
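The core of such a simulation is drawing planets from assumed mass and semimajor-axis distributions and counting how many would lie above a survey's detection limit. The power-law slope, the axis range, and the contrast curve below are illustrative assumptions, not the published distributions.

```python
import math, random

def detection_fraction(n_draws, mass_limit_fn, seed=17):
    """Draw planet masses from a power law dN/dM ~ M^-1.3 (1-15 M_Jup)
    and semimajor axes log-uniformly (5-500 AU), and count the fraction
    that a survey with the given mass limit vs. separation would detect."""
    rng = random.Random(seed)
    alpha = -1.3
    m_lo, m_hi = 1.0, 15.0
    detected = 0
    for _ in range(n_draws):
        # Inverse-CDF sampling of the truncated power law.
        u = rng.random()
        m = ((m_hi ** (alpha + 1) - m_lo ** (alpha + 1)) * u
             + m_lo ** (alpha + 1)) ** (1 / (alpha + 1))
        a = 10 ** rng.uniform(math.log10(5), math.log10(500))
        if m > mass_limit_fn(a):
            detected += 1
    return detected / n_draws

# Hypothetical contrast curve: the detectable mass drops with separation.
limit = lambda a: 20.0 / math.sqrt(a)
frac = detection_fraction(50000, limit)
```

A null detection then constrains the underlying frequency: if a fraction `frac` of simulated planets is detectable, observing none in N stars bounds the occurrence rate at roughly 1/(N * frac) at the corresponding confidence level.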
Abstract:
Little is known about the learning of the skills needed to perform ultrasound- or nerve stimulator-guided peripheral nerve blocks. The aim of this study was to compare the learning curves of residents trained in ultrasound guidance versus residents trained in nerve stimulation for axillary brachial plexus block. Ten residents with no previous experience in using ultrasound received ultrasound training, and another ten residents with no previous experience in using nerve stimulation received nerve stimulation training. The novices' learning curves were generated by retrospective analysis of data from our electronic anaesthesia database. Individual success rates were pooled, and the institutional learning curve was calculated using a bootstrapping technique in combination with a Monte Carlo simulation procedure. The skills required to perform successful ultrasound-guided axillary brachial plexus block can be learnt faster and lead to a higher final success rate compared to nerve stimulator-guided axillary brachial plexus block.
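A bootstrapped institutional learning curve of the kind described can be sketched as follows: resample residents with replacement, average their success rates at each sequential block number, and take percentile intervals. The synthetic outcome data below are hypothetical, not the study's data.

```python
import random

def bootstrap_learning_curve(residents, n_boot=2000, seed=23):
    """Bootstrap an institutional learning curve: resample residents with
    replacement and average success rates at each block number, returning
    the mean curve with a 95% percentile interval. `residents` is a list
    of per-resident outcome sequences (1 = successful block)."""
    rng = random.Random(seed)
    n_blocks = min(len(r) for r in residents)

    def curve(sample):
        return [sum(r[i] for r in sample) / len(sample) for i in range(n_blocks)]

    boot = []
    for _ in range(n_boot):
        sample = [residents[rng.randrange(len(residents))] for _ in residents]
        boot.append(curve(sample))
    mean, lo, hi = [], [], []
    for i in range(n_blocks):
        col = sorted(c[i] for c in boot)
        mean.append(sum(col) / n_boot)
        lo.append(col[int(0.025 * n_boot)])
        hi.append(col[int(0.975 * n_boot)])
    return mean, lo, hi

# Ten hypothetical residents, 20 sequential blocks each, improving over time.
rng = random.Random(1)
residents = [[1 if rng.random() < 0.5 + 0.02 * i else 0 for i in range(20)]
             for _ in range(10)]
mean, lo, hi = bootstrap_learning_curve(residents)
```

Comparing two training groups then reduces to checking whether their percentile bands separate as the block number grows.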
Abstract:
The present study was conducted to estimate the direct losses due to Neospora caninum in Swiss dairy cattle and to assess the costs and benefits of different potential control strategies. A Monte Carlo simulation spreadsheet module was developed to estimate the direct costs caused by N. caninum, with and without control strategies, and to estimate the costs of these control strategies in a financial analysis. The control strategies considered were "testing and culling of seropositive female cattle", "discontinued breeding with offspring from seropositive cows", "chemotherapeutical treatment of female offspring" and "vaccination of all female cattle". Each parameter in the module that was considered uncertain was described using a probability distribution. The simulations were run with 20,000 iterations over a time period of 25 years. The median annual losses due to N. caninum in the Swiss dairy cow population were estimated to be 9.7 million euros. All control strategies that required yearly serological testing of all cattle in the population produced high costs and thus were not financially profitable. Among the other control strategies, two showed benefit-cost ratios (BCR) >1 and positive net present values (NPV): "discontinued breeding with offspring from seropositive cows" (BCR=1.29, NPV=25 million euros) and "chemotherapeutical treatment of all female offspring" (BCR=2.95, NPV=59 million euros). In economic terms, the best control strategy currently available would therefore be "discontinued breeding with offspring from seropositive cows".
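The financial core of such a simulation is straightforward: draw uncertain yearly benefits (avoided losses) and costs from their distributions, discount them over the horizon, and report median NPV and BCR. The distributions and discount rate below are hypothetical stand-ins for the study's inputs.

```python
import random

def npv_bcr(benefit_fn, cost_fn, rate, years, n_iter=5000, seed=29):
    """Monte Carlo cost-benefit sketch: draw uncertain yearly benefits and
    costs, discount them over the horizon, and return the median net
    present value and median benefit-cost ratio."""
    rng = random.Random(seed)
    npvs, bcrs = [], []
    for _ in range(n_iter):
        pv_b = sum(benefit_fn(rng) / (1 + rate) ** t for t in range(1, years + 1))
        pv_c = sum(cost_fn(rng) / (1 + rate) ** t for t in range(1, years + 1))
        npvs.append(pv_b - pv_c)
        bcrs.append(pv_b / pv_c)
    npvs.sort()
    bcrs.sort()
    return npvs[n_iter // 2], bcrs[n_iter // 2]

# Hypothetical strategy: uncertain avoided losses vs. uncertain yearly costs.
median_npv, median_bcr = npv_bcr(
    benefit_fn=lambda rng: rng.uniform(1.5e6, 3.0e6),
    cost_fn=lambda rng: rng.uniform(0.8e6, 1.2e6),
    rate=0.03, years=25)
```

Running the same calculation once per control strategy is exactly what allows the strategies to be ranked by BCR and NPV as in the study.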
Abstract:
Smoothing splines are a popular approach for non-parametric regression problems. We use periodic smoothing splines to fit a periodic signal plus noise model to data for which we assume there are underlying circadian patterns. In the smoothing spline methodology, choosing an appropriate smoothness parameter is an important step in practice. In this paper, we draw a connection between smoothing splines and REACT estimators that provides motivation for the creation of criteria for choosing the smoothness parameter. The new criteria are compared to three existing methods, namely cross-validation, generalized cross-validation, and generalized maximum likelihood criteria, by a Monte Carlo simulation and by an application to the study of circadian patterns. For most of the situations presented in the simulations, including the practical example, the new criteria outperform the three existing criteria.
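The smoothness-parameter selection problem can be illustrated with a much simpler periodic smoother than a spline: a circular moving average whose window is chosen by leave-one-out cross-validation. This is only a stand-in for the spline and REACT machinery; the signal and noise level are hypothetical.

```python
import math, random

def loocv_choose_window(y, windows):
    """Pick the smoothing window of a circular (periodic) moving-average
    smoother by leave-one-out cross-validation, a toy analogue of
    smoothness-parameter selection for periodic smoothing splines."""
    n = len(y)

    def predict(i, w):
        # Average of the w neighbours on each side, excluding point i.
        vals = [y[(i + k) % n] for k in range(-w, w + 1) if k != 0]
        return sum(vals) / len(vals)

    def score(w):
        return sum((y[i] - predict(i, w)) ** 2 for i in range(n)) / n

    return min(windows, key=score)

# Hypothetical circadian-like signal: period-48 sine plus Gaussian noise.
rng = random.Random(47)
y = [math.sin(2 * math.pi * i / 48) + rng.gauss(0, 0.5) for i in range(192)]
best = loocv_choose_window(y, windows=(1, 2, 4, 8, 16))
```

Too small a window leaves the noise in; too large a window flattens the periodic signal; cross-validation balances the two, which is the same bias-variance trade-off the smoothness parameter controls for splines.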
Abstract:
In many clinical trials evaluating treatment efficacy, it is believed that there may exist a latent treatment-effectiveness lag time after which the medical procedure or chemical compound takes full effect. In this article, semiparametric regression models are proposed and studied to estimate the treatment effect while accounting for such latent lag times. The new models take advantage of the invariance property of the additive hazards model in marginalizing over random effects, so the parameters in the models are easy to estimate and interpret, while the flexibility of leaving the baseline hazard function unspecified is kept. Monte Carlo simulation studies demonstrate the appropriateness of the proposed semiparametric estimation procedure. Data collected in an actual randomized clinical trial, which evaluated the effectiveness of biodegradable carmustine polymers for the treatment of recurrent brain tumors, are analyzed.
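The data-generating side of such a simulation study can be sketched with a piecewise-constant additive hazard: the treatment changes the hazard only after the lag time. The rates and lag below are illustrative, not the article's simulation settings.

```python
import math, random

def sample_event_time(base, beta, lag, treated, rng):
    """Draw an event time from a piecewise-constant additive hazard
    base + beta * 1(t > lag) for treated subjects: a toy version of a
    treatment whose effect kicks in only after a latent lag time.
    Uses inversion of the cumulative hazard."""
    target = -math.log(1.0 - rng.random())    # cumulative hazard at the event
    h1 = base                                 # hazard before the lag
    h2 = base + (beta if treated else 0.0)    # hazard after the lag
    if target <= h1 * lag:
        return target / h1
    return lag + (target - h1 * lag) / h2

rng = random.Random(31)
# beta < 0: the treatment lowers the hazard, but only after lag = 1.0.
control = [sample_event_time(1.0, -0.5, 1.0, False, rng) for _ in range(20000)]
treated = [sample_event_time(1.0, -0.5, 1.0, True, rng) for _ in range(20000)]
```

An estimation procedure is then judged by whether it recovers both beta and the lag from samples like these; here only the simulation of the latent-lag survival times is shown.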
Abstract:
Metals price risk management is a key issue in metal markets because of the uncertainty of commodity price fluctuations, exchange rates and interest rate changes, and the huge price risk faced by both metals' producers and consumers. Thus, it is taken into account by all participants in metal markets, including metals' producers, consumers, merchants, banks, investment funds, speculators and traders. Managing price risk provides stable income for both metals' producers and consumers, so it increases the chance that a firm will invest in attractive projects. The purpose of this research is to evaluate risk management strategies in the copper market. The main tools and strategies of price risk management are hedging and other derivatives such as futures contracts, swaps and options contracts. Hedging is a transaction designed to reduce or eliminate price risk. Derivatives are financial instruments whose returns are derived from other financial instruments, and they are commonly used for managing financial risks. Although derivatives have existed in some form for centuries, their growth has accelerated rapidly during the last 20 years. Nowadays, they are widely used by financial institutions, corporations, professional investors and individuals. This project focuses on the over-the-counter (OTC) market and its products such as exotic options, particularly Asian options. The first part of the project is a description of basic derivatives and risk management strategies. In addition, this part discusses basic concepts of spot and futures (forward) markets, the benefits and costs of risk management, and the risks and rewards of positions in the derivative markets. The second part considers valuations of commodity derivatives.
In this part, the options pricing model DerivaGem is applied to Asian call and put options on London Metal Exchange (LME) copper, because it is important to understand how Asian options are valued and to compare theoretical values of the options with their observed market values. Predicting future trends of copper prices is important and essential to managing market price risk successfully. Therefore, the third part is a discussion of econometric commodity models. Based on this literature review, the fourth part of the project reports the construction and testing of an econometric model designed to forecast the monthly average price of copper on the LME. More specifically, this part aims at showing how LME copper prices can be explained by means of a simultaneous-equation structural model (two-stage least squares regression) connecting supply and demand variables. A simultaneous econometric model for the copper industry is built:

$$Q_t^D = e^{-5.0485}\, P_{t-1}^{-0.1868}\, GDP_t^{1.7151}\, e^{0.0158\, IP_t}$$
$$Q_t^S = e^{-3.0785}\, P_{t-1}^{0.5960}\, T_t^{0.1408}\, P_{OIL(t)}^{-0.1559}\, USDI_t^{1.2432}\, LIBOR_{t-6}^{-0.0561}$$
$$Q_t^D = Q_t^S$$
$$P_{t-1}^{CU} = e^{-2.5165}\, GDP_t^{2.1910}\, e^{0.0202\, IP_t}\, T_t^{-0.1799}\, P_{OIL(t)}^{0.1991}\, USDI_t^{-1.5881}\, LIBOR_{t-6}^{0.0717}$$

where Q_t^D and Q_t^S are the world demand for and supply of copper at time t, respectively. P_{t-1} is the lagged price of copper, which is the focus of the analysis in this part. GDP_t is world gross domestic product at time t, which represents aggregate economic activity. In addition, industrial production should be considered here, so global industrial production growth, denoted IP_t, is included in the model. T_t is the time variable, a useful proxy for technological change. A proxy variable for the cost of energy in producing copper is the price of oil at time t, denoted P_{OIL(t)}. USDI_t is the U.S.
dollar index variable at time t, an important variable for explaining copper supply and copper prices. Finally, LIBOR_{t-6} is the 6-month-lagged one-year London Interbank Offered Rate. Although the model can be applied to other base metals' industries, omitted exogenous variables, such as the price of a substitute or a combined variable related to the prices of substitutes, have not been considered in this study. Based on this econometric model and using a Monte Carlo simulation analysis, the probabilities that the monthly average copper prices in 2006 and 2007 will be greater than a specific strike price of an option are estimated. The final part evaluates risk management strategies, including options strategies, metal swaps and simple options, in relation to the simulation results. The basic options strategies, such as bull spreads, bear spreads and butterfly spreads, created using both call and put options in 2006 and 2007, are evaluated. Consequently, each risk management strategy in 2006 and 2007 is analyzed based on the daily data and the price prediction model. As a result, applications stemming from this project include valuing Asian options, developing a copper price prediction model, forecasting and planning, and decision making for price risk management in the copper market.
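Since Asian options on LME copper settle on an average price, their Monte Carlo valuation averages the underlying along each simulated path. The sketch below prices an arithmetic-average Asian call under geometric Brownian motion; the dynamics and all parameter values are illustrative assumptions, not the DerivaGem model or market data.

```python
import math, random

def asian_call_mc(s0, k, r, sigma, T, n_obs=12, n_paths=20000, seed=37):
    """Monte Carlo price of an arithmetic-average Asian call under
    geometric Brownian motion: average the discounted payoff
    max(mean(S) - K, 0) over simulated paths with n_obs averaging dates."""
    rng = random.Random(seed)
    dt = T / n_obs
    drift = (r - 0.5 * sigma * sigma) * dt
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        s, avg = s0, 0.0
        for _ in range(n_obs):
            s *= math.exp(drift + vol * rng.gauss(0, 1))
            avg += s
        avg /= n_obs
        total += max(avg - k, 0.0)
    return math.exp(-r * T) * total / n_paths

# Hypothetical copper-like contract: USD/tonne, monthly averaging over one year.
price = asian_call_mc(s0=6000, k=6000, r=0.05, sigma=0.25, T=1.0)
```

Because averaging reduces the effective volatility, an at-the-money Asian call is worth noticeably less than the corresponding European call, which is one reason Asian structures are popular with commodity hedgers.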
Abstract:
Civil infrastructure provides essential services for the development of both society and economy. It is very important to manage systems efficiently to ensure sound performance. However, there are challenges in extracting information from the available data, which also necessitates the establishment of methodologies and frameworks to assist stakeholders in the decision-making process. This research proposes methodologies to evaluate system performance by maximizing the use of available information, in an effort to build and maintain sustainable systems. Under the guidance of the holistic problem formulation proposed by Mukherjee and Muga, this research specifically investigates problem-solving methods that measure and analyze metrics to support decision making. Failures are inevitable in system management. A methodology is developed to describe the arrival pattern of failures in order to assist engineers in failure rescues and budget prioritization, especially when funding is limited. It reveals that blockage arrivals are not totally random, while smaller meaningful subsets show good random behavior. In addition, the failure rate over time is analyzed by applying existing reliability models and non-parametric approaches. A scheme is further proposed to depict rates over the lifetime of a given facility system. Further analysis of subsets of the data is also performed, with a discussion of context reduction. Infrastructure condition is another important indicator of system performance. The challenges in predicting facility condition are the estimation of transition probabilities and model sensitivity analysis. Methods are proposed to estimate transition probabilities by investigating the long-term behavior of the model and the relationship between transition rates and probabilities. To integrate heterogeneities, a model sensitivity analysis is performed for the application of a non-homogeneous Markov chain model.
Scenarios are investigated by assuming that the transition probabilities follow a Weibull-regressed function and fall within an interval estimate. For each scenario, multiple cases are simulated using a Monte Carlo simulation. Results show that variations in the outputs are sensitive to the probability regression, while for the interval estimate the outputs show variations similar to the inputs. Life cycle cost analysis and life cycle assessment of a sewer system are performed comparing three different pipe types, namely reinforced concrete pipe (RCP), non-reinforced concrete pipe (NRCP) and vitrified clay pipe (VCP). Life cycle cost analysis is performed for the material extraction, construction and rehabilitation phases. In the rehabilitation phase, a Markov chain model is applied in support of the rehabilitation strategy. In the life cycle assessment, the Economic Input-Output Life Cycle Assessment (EIO-LCA) tools are used to estimate environmental emissions for all three phases. Emissions are then compared quantitatively among alternatives to support decision making.
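The Monte Carlo step over a condition-state Markov chain can be sketched directly: propagate each simulated facility through the transition matrix for a number of years and tabulate the final state distribution. The deterioration matrix below is hypothetical (states can only stay or worsen), not an estimate from the research data.

```python
import random

def simulate_condition(P, start, years, n_runs=10000, seed=41):
    """Monte Carlo over a discrete-time Markov chain of condition states
    (0 = best, len(P)-1 = failed): return the distribution of states
    after `years` annual transitions."""
    rng = random.Random(seed)
    counts = [0] * len(P)
    for _ in range(n_runs):
        s = start
        for _ in range(years):
            u, acc = rng.random(), 0.0
            for j, p in enumerate(P[s]):       # sample the next state
                acc += p
                if u < acc:
                    s = j
                    break
        counts[s] += 1
    return [c / n_runs for c in counts]

# Hypothetical 4-state deterioration matrix; the last state is absorbing.
P = [[0.90, 0.10, 0.00, 0.00],
     [0.00, 0.85, 0.15, 0.00],
     [0.00, 0.00, 0.80, 0.20],
     [0.00, 0.00, 0.00, 1.00]]
dist = simulate_condition(P, start=0, years=20)
```

Scenario analysis as described above amounts to rerunning this with transition probabilities drawn from the Weibull regression or from the interval estimate and comparing the resulting state distributions.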
Abstract:
The physics of the operation of single-electron tunneling devices (SEDs) and single-electron tunneling transistors (SETs), especially of those with multiple nanometer-sized islands, has remained poorly understood despite intensive experimental and theoretical research. This computational study examines the current-voltage (IV) characteristics of multi-island single-electron devices using a newly developed multi-island transport simulator (MITS) that is based on semi-classical tunneling theory and kinetic Monte Carlo simulation. The dependence of device characteristics on physical device parameters is explored, and the physical mechanisms that lead to the Coulomb blockade (CB) and Coulomb staircase (CS) characteristics are proposed. Simulations using MITS demonstrate that the overall IV characteristics of a device with a random distribution of islands result from a complex interplay between the factors that affect the tunneling rates and are fixed a priori (e.g. island sizes, island separations, temperature, gate bias) and the evolving charge state of the system, which changes as the source-drain bias (VSD) is changed. With increasing VSD, a multi-island device has to overcome multiple discrete energy barriers (up-steps) before it reaches the threshold voltage (Vth). Beyond Vth, current flow is rate-limited by slow junctions, which leads to the CS structures in the IV characteristic. Each step in the CS is characterized by a unique distribution of island charges with an associated distribution of tunneling probabilities. MITS simulation studies of one-dimensional (1D) disordered chains show that longer chains are better suited for switching applications, as Vth increases with increasing chain length. They are also able to retain CS structures at higher temperatures better than shorter chains.
In sufficiently disordered 2D systems, we demonstrate that there may exist a dominant conducting path (DCP) for conduction, which makes the 2D device behave as a quasi-1D device. The existence of a DCP is sensitive to the device structure, but is robust with respect to changes in temperature, gate bias, and VSD. A side gate in 1D and 2D systems can effectively control Vth. We argue that devices with smaller island sizes and narrower junctions may be better suited for practical applications, especially at room temperature.
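The rate-limiting role of slow junctions can be illustrated with the waiting-time picture at the heart of kinetic Monte Carlo: each tunneling event occurs after an exponentially distributed waiting time set by its rate, so the traversal time of a sequential chain is dominated by its slowest junction. The rates below are hypothetical, and this sketch omits the charging-energy feedback that MITS models.

```python
import math, random

def kmc_traversal_time(rates, n_events=20000, seed=43):
    """Kinetic Monte Carlo sketch of sequential tunneling through a 1D
    chain of junctions: for each traversal, draw one exponential waiting
    time per junction from its tunneling rate and sum them. The mean
    traversal time is dominated by the slowest junction."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_events):
        t = 0.0
        for rate in rates:
            # Exponential waiting time with mean 1/rate.
            t += -math.log(1.0 - rng.random()) / rate
        total += t
    return total / n_events

# Hypothetical tunneling rates (arbitrary units): one slow junction in four.
fast, slow = 100.0, 1.0
t_chain = kmc_traversal_time([fast, fast, slow, fast])
```

With these rates the mean traversal time is essentially 1/slow (about 1.03 in these units), which is the simple analogue of the statement above that current beyond Vth is rate-limited by slow junctions.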