10 results for Multiple Quantum Well Lasers
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The progress of electron device integration has proceeded for more than 40 years following the well-known Moore's law, which states that the transistor density on chip doubles every 24 months. This trend has been possible thanks to the downsizing of the MOSFET dimensions (scaling); however, new issues and new challenges are arising, and the conventional "bulk" architecture is becoming inadequate to face them. To overcome the limitations of conventional structures, the research community is preparing different solutions, which need to be assessed. Possible solutions currently under scrutiny include:
• devices incorporating materials with properties different from those of silicon for the channel and the source/drain regions;
• new architectures such as Silicon-On-Insulator (SOI) transistors: the body thickness of Ultra-Thin-Body SOI devices is a new design parameter, and it makes it possible to keep Short-Channel-Effects under control without adopting high doping levels in the channel.
Among the solutions proposed to overcome the difficulties related to scaling, we can highlight heterojunctions at the channel edges, obtained by adopting for the source/drain regions materials with a band gap different from that of the channel material. This solution increases the injection velocity of the carriers travelling from the source into the channel, and therefore improves the performance of the transistor in terms of delivered drain current. The first part of this thesis addresses the use of heterojunctions in SOI transistors: chapter 3 outlines the basics of heterojunction theory and the adoption of this approach in older technologies such as heterojunction bipolar transistors; it also describes the modifications introduced in the Monte Carlo code to simulate conduction-band discontinuities, as well as the simulations performed on simplified one-dimensional structures to validate them. Chapter 4 presents the results of the Monte Carlo simulations performed on double-gate SOI transistors featuring conduction-band offsets between the source and drain regions and the channel. In particular, attention has been focused on the drain current and on internal quantities such as inversion charge, potential energy and carrier velocities. Both graded and abrupt discontinuities have been considered. The scaling of device dimensions and the adoption of innovative architectures have consequences for power dissipation as well. In SOI technologies the channel is thermally insulated from the underlying substrate by a buried SiO2 layer; this layer has a thermal conductivity two orders of magnitude lower than that of silicon, and it impedes the dissipation of the heat generated in the active region. Moreover, the thermal conductivity of thin semiconductor films is much lower than that of bulk silicon, due to phonon confinement and boundary scattering. All these aspects cause severe self-heating effects (SHE), which detrimentally impact the carrier mobility and therefore the saturation drive current of high-performance transistors; as a consequence, thermal device design is becoming a fundamental part of integrated-circuit engineering. The second part of this thesis discusses the problem of self-heating in SOI transistors. Chapter 5 describes the causes of heat generation and dissipation in SOI devices, and it provides a brief overview of the methods that have been proposed to model these phenomena.
To understand how this problem affects the performance of different SOI architectures, three-dimensional electro-thermal simulations have been applied to the analysis of SHE in planar single-gate and double-gate SOI transistors as well as FinFETs featuring the same isothermal electrical characteristics. In chapter 6 the same simulation approach is extensively employed to study the impact of SHE on the performance of a FinFET representative of the high-performance transistor of the 45 nm technology node. Its effects on the ON-current, on the maximum temperatures reached inside the device and on the thermal resistance associated with the device itself, as well as the dependence of SHE on the main geometrical parameters, have been analyzed. Furthermore, the consequences for self-heating of technological solutions such as raised S/D extension regions or a reduced fin height are explored as well. Finally, conclusions are drawn in chapter 7.
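To make the feedback loop behind SHE concrete, the following is a minimal fixed-point sketch in Python of how a thermal resistance couples the dissipated power to the channel temperature and hence to the ON-current; all parameter values and the mobility exponent are illustrative assumptions, not numbers from the thesis.

T_AMB = 300.0    # ambient temperature [K]
R_TH = 1.0e5     # assumed device thermal resistance [K/W]
VDD = 1.0        # supply voltage [V]
I0 = 1.0e-3      # assumed isothermal ON-current at T_AMB [A]
ALPHA = 1.5      # assumed phonon-limited mobility exponent: mu ~ T^-ALPHA

def on_current(T):
    # drain current degraded by the mobility reduction at temperature T
    return I0 * (T / T_AMB) ** (-ALPHA)

# fixed-point iteration on T = T_AMB + R_TH * P, with P = VDD * I(T)
T = T_AMB
for _ in range(100):
    P = VDD * on_current(T)     # dissipated power [W]
    T_new = T_AMB + R_TH * P    # steady-state channel temperature [K]
    if abs(T_new - T) < 1e-6:
        break
    T = T_new

print(f"temperature rise: {T - T_AMB:.1f} K")
print(f"ON-current loss vs isothermal: {1 - on_current(T) / I0:.1%}")

The iteration converges to the self-heated operating point, which is why the saturation drive current of a thermally insulated SOI device lies below its isothermal characteristic.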
Abstract:
In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from the collected disease counts and from expected disease counts calculated by means of reference-population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the small population underlying the area or because of the rarity of the disease under study. Disease-mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classic and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method, which focuses on multiple-testing control without abandoning the preliminary-study perspective that an analysis of SMR indicators is expected to serve. We implement control of the FDR, a quantity largely used to address multiple-comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on the knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, the tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a hierarchical full Bayesian model for FDR estimation in testing many null hypotheses of absence of risk. We use concepts from Bayesian disease-mapping models, referring in particular to the Besag, York and Mollié (1991) model, often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating a single test (i.e. a test in a single area) by means of all the observations in the map under study, rather than just by means of the single observation. This improves the power of the test in small areas and addresses more appropriately the spatial-correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities $\hat{\pi}_i$ of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data, $\widehat{FDR}$, can be calculated for any set of $\hat{\pi}_i$'s relative to areas declared at high risk (where the null hypothesis is rejected) by averaging the $\hat{\pi}_i$'s themselves. The $\widehat{FDR}$ can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the $\widehat{FDR}$ is not higher than a prefixed value; we call these $\widehat{FDR}$-based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model retains the interesting feature of providing an estimate of the relative-risk values, as in the Besag, York and Mollié (1991) model. A simulation study was set up to evaluate the performance of the model in terms of accuracy of the FDR estimate, sensitivity and specificity of the decision rule, and goodness of the relative-risk estimates. We chose a real map from which we generated several spatial scenarios, whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true, and the risk level in the latter areas. In summarizing the simulation results we always consider the FDR estimate in the sets constituted by all the $\hat{\pi}_i$'s lower than a threshold t. We show graphs of the $\widehat{FDR}$ and of the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between $\widehat{FDR}$ and true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) versus the $\widehat{FDR}$ we can check the sensitivity and specificity of the corresponding $\widehat{FDR}$-based decision rules. To investigate the over-smoothing of the relative-risk estimates we compare box-plots of such estimates in the high-risk areas (known by simulation), obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all the simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in the scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we obtain good estimates of the FDR for all values lower than or equal to 0.10. The sensitivity of the $\widehat{FDR}$-based decision rules is generally low, but the specificity is high; in such scenarios the use of $\widehat{FDR}$ = 0.05 or $\widehat{FDR}$ = 0.10 based selection rules can be suggested. In cases where the number of true alternative hypotheses (the number of true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and $\widehat{FDR}$ = 0.15 based decision rules gain power while maintaining a high specificity. On the other hand, in the scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of an $\widehat{FDR}$ = 0.05 based decision rule. In such scenarios $\widehat{FDR}$ = 0.05 or, even worse, $\widehat{FDR}$ = 0.10 based decision rules cannot be suggested, because the true FDR is actually much higher. As regards the relative-risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason our model is interesting for its ability to perform both the estimation of relative-risk values and the FDR control, except in the scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
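The $\widehat{FDR}$-based selection rule lends itself to a compact sketch: once the MCMC has produced, for each area, a posterior probability $\hat{\pi}_i$ of the null hypothesis (absence of risk), the estimated FDR of any rejection set is simply the mean of the $\hat{\pi}_i$'s it contains. A minimal Python illustration follows; the function name and the synthetic probabilities are invented for the example, not taken from the thesis.

import numpy as np

def fdr_selection(pi_hat, alpha):
    # Reject the null (declare high risk) in as many areas as possible
    # while the estimated FDR, i.e. the mean posterior null probability
    # over the rejected set, stays <= alpha.
    order = np.argsort(pi_hat)                 # most-likely high-risk areas first
    running_fdr = np.cumsum(pi_hat[order]) / np.arange(1, len(pi_hat) + 1)
    k = np.searchsorted(running_fdr, alpha, side="right")  # largest admissible set
    rejected = order[:k]
    return rejected, (running_fdr[k - 1] if k > 0 else 0.0)

# illustrative posterior null probabilities for 8 areas (assumed MCMC output)
pi_hat = np.array([0.01, 0.02, 0.04, 0.08, 0.30, 0.55, 0.80, 0.95])
areas, fdr_hat = fdr_selection(pi_hat, alpha=0.05)
print("high-risk areas:", areas, "estimated FDR:", round(fdr_hat, 3))

With these numbers the rule rejects the four smallest $\hat{\pi}_i$'s at an estimated FDR of about 0.04, mirroring the trade-off discussed above: a lower alpha buys specificity at the cost of power.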
Abstract:
Abstract:
Our view of Globular Clusters has deeply changed in the last decade. Modern spectroscopic and photometric data have conclusively established that globulars are neither coeval nor monometallic, reopening the issue of the formation of such systems. Their formation is now schematized as a two-step process, during which the polluted matter from the more massive stars of a first generation gives birth, in the cluster innermost regions, to a second generation of stars with the characteristic signature of fully CNO-processed matter. To date, star-to-star variations in the abundances of the light elements (C, N, O, Na) have been observed in stars of all evolutionary phases in all properly studied Galactic globular clusters. Multiple or broad evolutionary sequences have also been observed in nearly all the clusters imaged with good signal-to-noise ratio in the appropriate photometric bands. The body of evidence suggests that spreads in light-element abundances can be fairly well traced by photometric indices including near-ultraviolet passbands, as CNO abundance variations mainly affect wavelengths shorter than ~400 nm owing to the rise of the NH and CN molecular absorption bands. Here we exploit this property of near-ultraviolet photometry to trace internal chemical variations and combine it with low-resolution spectroscopy aimed at deriving carbon and nitrogen abundances, in order to maximize the information on the multiple populations. This approach has proven very effective in (i) detecting multiple populations, (ii) characterizing their global properties (i.e., the relative fraction of stars, their location in the color-magnitude diagram, their spatial distribution, and trends with cluster parameters) and (iii) precisely tagging their chemical properties (i.e., the extension of the C-N anticorrelation, bimodalities in the N content).
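As a concrete example of the kind of near-ultraviolet index alluded to above, one combination widely used in the literature (quoted here for illustration, not necessarily the specific index adopted in this work) is $c_{U,B,I} = (U-B)-(B-I)$: N-rich second-generation stars suffer stronger NH and CN absorption in the U band, so at fixed optical magnitudes they are fainter in U, and the populations separate in $c_{U,B,I}$ along the red-giant branch.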
Abstract:
In the last decades, nanomaterials, and in particular semiconducting nanoparticles (or quantum dots), have gained increasing attention due to their controllable optical properties and potential applications. Silicon nanoparticles (also called silicon nanocrystals, SiNCs) have been extensively studied in recent years, thanks to physical and chemical properties that make them a valid alternative to conventional quantum dots. During my PhD studies I planned new synthetic routes to obtain SiNCs functionalised with molecules that could improve the properties of the nanoparticle. This was certainly challenging, because SiNCs are very susceptible to many reagents and conditions that are often used in organic synthesis: they can be irreversibly quenched in the presence of alkalis, they can be damaged in the presence of oxidants, and they can change their optical properties in the presence of many nitrogen-containing compounds, metal complexes or simple organic molecules. If their surface is not well passivated, oxygen can introduce defect states, or they can aggregate and precipitate in several solvents. Nevertheless, I was able to functionalise SiNCs with different ligands, chromophores, amines, carboxylic acids and poly(ethylene glycol), also improving functionalisation strategies that already existed. This thesis collects the experimental procedures used to synthesise silicon nanocrystals, the strategies adopted to functionalise the nanoparticles effectively with different types of organic molecules, and the characterisation of their surface, physical properties and luminescence (mostly photogenerated, but also electrochemically generated). I also spent a period of 7 months in Leeds (UK), where I learned how to synthesise other cadmium-free quantum dots made of copper, indium and sulphur (CIS QDs). During the last year of my PhD I focused on their functionalisation by ligand-exchange techniques, obtaining the first example of a light-harvesting antenna based on these quantum dots. Part of this thesis is dedicated to them.
Abstract:
The simulation of ultrafast photoinduced processes is a fundamental step towards understanding the underlying molecular mechanism and interpreting/predicting experimental data. Performing a computer simulation of a complex photoinduced process is only possible by introducing some approximations but, in order to obtain reliable results, the need to reduce the complexity must be balanced against the accuracy of the model, which should include all the relevant degrees of freedom and a quantitatively correct description of the electronic states involved in the process. This work presents new computational protocols and strategies for the parameterisation of accurate models of photochemical/photophysical processes based on state-of-the-art multiconfigurational wavefunction-based methods. The ingredients required for a dynamics simulation include the potential energy surfaces (PESs) as well as the electronic-state couplings, which must be mapped across the wide range of geometries visited during the wavepacket/trajectory propagation. The developed procedures make it possible to obtain solid and extended databases while reducing the computational cost as much as possible, thanks to, e.g., specific tuning of the level of theory for different PES regions and/or direct calculation of only the needed components of vectorial quantities (such as gradients or nonadiabatic couplings). The presented approaches were applied to three case studies (azobenzene, pyrene, visual rhodopsin), all requiring an accurate parameterisation but for different reasons. The resulting models and simulations allowed us to elucidate the mechanism and time scale of the internal conversion, reproducing or even predicting new transient experiments. The general applicability of the developed protocols to systems with different peculiarities, and the possibility to parameterise different types of dynamics on an equal footing (classical vs purely quantum), prove that the developed procedures are flexible enough to be tailored to each specific system, and pave the way for exact quantum dynamics with multiple degrees of freedom.
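A schematic Python sketch of the database-building strategy just described, i.e. caching electronic-structure results, tuning the level of theory by PES region, and computing gradients only on demand; the region test, the toy surfaces and all names are illustrative placeholders, not the actual protocol or levels of theory used in the thesis.

database = {}   # geometry key -> {"energy": float, "gradient": list | None}

def near_crossing(geom):
    # placeholder region test standing in for a real crossing-seam detector
    return abs(geom[0]) < 0.5

def electronic_energy(geom, level):
    # stub for an electronic-structure call (a real code would invoke,
    # e.g., a multiconfigurational method); toy harmonic surface here
    scale = 1.0 if level == "high" else 0.9
    return scale * sum(q * q for q in geom)

def electronic_gradient(geom, level):
    # stub gradient call, invoked only when the propagation requests it
    scale = 1.0 if level == "high" else 0.9
    return [2.0 * scale * q for q in geom]

def evaluate(geom, need_gradient=False):
    level = "high" if near_crossing(geom) else "low"   # tune theory per PES region
    key = tuple(round(q, 3) for q in geom)             # coarse-grained cache key
    entry = database.get(key)
    if entry is None:                                  # compute the energy only once
        entry = {"energy": electronic_energy(geom, level), "gradient": None}
        database[key] = entry
    if need_gradient and entry["gradient"] is None:    # gradients on demand only
        entry["gradient"] = electronic_gradient(geom, level)
    return entry

print(evaluate((0.2, 1.0), need_gradient=True))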
Abstract:
In recent years we have witnessed important changes: the Second Quantum Revolution is in the spotlight of many countries, and it is creating a new generation of technologies. To unlock its potential, several countries have launched strategic plans and research programs that finance and set the pace of the research and development of these new technologies (such as the Quantum Flagship and the National Quantum Initiative Act). The increasing pace of technological change is also challenging science education and institutional systems, requiring them to help prepare new generations of experts. This work is placed within physics education research and contributes to this challenge by developing an approach and a course about the Second Quantum Revolution. The aims are to promote quantum literacy and, in particular, to value the Second Quantum Revolution from a cultural and educational perspective. The dissertation is articulated in two parts. In the first, we unpack the Second Quantum Revolution from a cultural perspective and shed light on its main revolutionary aspects, which are elevated to the rank of principles implemented in the design of a course for secondary-school students and prospective and in-service teachers. The design process and the educational reconstruction of the activities are presented, as well as the results of a pilot study conducted to investigate the impact of the approach on students' understanding and to gather feedback to refine and improve the instructional materials. The second part consists of the exploration of the Second Quantum Revolution as a context to introduce some basic concepts of quantum physics. We present the results of an implementation with secondary-school students to investigate whether and to what extent external representations can promote students' understanding and acceptance of quantum physics as a personally reliable description of the world.
Abstract:
This dissertation aims to contribute to the discourse on the governance of smart cities (SC) by examining the collaborative relationships between the various actors involved in developing and implementing SC initiatives. Poorly organized collaboration can lead to conflicts and misunderstandings, resulting in failures in realizing such complex technological initiatives. Hence, capturing the main elements of SC collaboration becomes essential for understanding how such initiatives should be developed and managed. However, the topic has received limited attention in prior research, with fragmented studies on narrow aspects of SC governance. Using Russia as an empirical setting, the study focuses on the interplay of governmental and non-governmental stakeholders in constructing collaborative relationships within SC, covering both the vertical and the horizontal dimensions of their interaction. The overarching goal of this research is to understand how collaborative governance unfolds in the SC context, guided by two research questions: 1) who are the dominant actors in SC and what are their roles? 2) what relationships are forged among them? The dissertation investigates SC initiatives across three different cities – Moscow, Saint Petersburg, and Perm – in the form of an empirical illustration as well as an in-depth case study. The dissertation provides three main contributions. First, it strengthens the link between the SC domain, public governance, and the literature on cross-sectoral collaboration by highlighting 'urban smartness' as a source for generating multiple values. Second, the thesis offers a novel view of the strategic development paths that conceptually shape the SC framework: it connects the techno-centric and human-centric perspectives of SC by showing that they are naturally linked, rather than mutually exclusive. Third, the study illustrates that SC initiatives are contextually dependent, and that this dependence covers specificities of public governance, including the underlying informal mechanisms, which influence the inception, development, and management of SC in the organizational realm.
Abstract:
The presence of multiple stellar populations in globular clusters (GCs) is now well accepted; however, very little is known about their origin. In this Thesis I study how multiple populations formed and evolved by means of customized 3D numerical simulations, in light of the most recent spectroscopic and photometric observations of the Local and high-redshift Universe. Numerical simulations are the perfect tool to interpret these data: hydrodynamic simulations are suited to studying the early phases of GC formation, following the gas behavior in great detail, while N-body codes permit tracing the stellar component. First, we study the formation of second-generation stars in a rotating massive GC. We assume that second-generation stars are formed out of the ejecta of asymptotic giant branch stars (AGBs), diluted by external pristine gas. We find that, for low pristine-gas density, the stars formed mainly out of AGB ejecta rotate faster than the stars formed out of more diluted gas, in qualitative agreement with current observations. Then, assuming a similar setup, we explore whether Type Ia supernovae affect the second-generation star formation and chemical composition. We show that the evolution depends on the density of the infalling gas but that, in general, an iron spread develops, which may explain the spread observed in some massive GCs. Finally, we focus on the long-term evolution of a GC composed of two populations and orbiting in the Milky Way disk. We derive that, for an extended first population and a low-mass second one, the cluster loses almost 98 percent of its initial first-population mass, and the GC mass can be as much as 20 times lower after a Hubble time. Under these conditions, the derived fraction of second-population stars reproduces the observed value, which is one of the strongest constraints on GC mass loss.
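The quoted numbers can be tied together with a line of bookkeeping arithmetic (an illustrative check, assuming for simplicity that the second population loses a negligible amount of mass, an assumption not stated in the abstract): if the first population retains only 2 percent of its initial mass $M_1$ while the second-population mass $M_2$ is kept, a total mass reduction by a factor of 20 requires $(0.02\,M_1 + M_2)/(M_1 + M_2) = 1/20$, i.e. $M_2 \simeq 0.03\,M_1$; the final second-population fraction is then $M_2/(0.02\,M_1 + M_2) \simeq 0.6$, of the same order as the enriched-star fractions observed in Galactic GCs.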
Abstract:
In high-energy hadron collisions, the production at parton level of heavy-flavour quarks (charm and bottom) is described by perturbative Quantum Chromodynamics (pQCD) calculations, given the hard scale set by the quark masses. However, in hadron-hadron collisions, predicting the heavy-flavour hadrons eventually produced requires knowledge of the parton distribution functions as well as an accurate description of the hadronisation process. The latter is taken into account via the fragmentation functions measured at e$^+$e$^-$ colliders or in ep collisions, but several observations in LHC Run 1 and Run 2 data have challenged this picture. In this dissertation I study charm hadronisation in proton-proton collisions at $\sqrt{s}$ = 13 TeV with the ALICE experiment at the LHC, making use of a large-statistics data sample collected during LHC Run 2. The production of heavy flavour in this collision system is discussed, also describing the various hadronisation models implemented in commonly used event generators, which try to reproduce the experimental data, taking into account the unexpected LHC results regarding the enhanced production of charmed baryons. The role of multiple parton interactions (MPI) is also presented, together with how it affects the total charm production as a function of multiplicity. The ALICE apparatus is described before moving to the experimental results, which concern the measurement of the relative production rates of the charm hadrons $\Sigma_c^{0,++}$ and $\Lambda_c^+$, which allow us to study the hadronisation mechanisms of charm quarks and to constrain different hadronisation models. Furthermore, the analysis of D mesons ($D^{0}$, $D^{+}$ and $D^{*+}$) as a function of charged-particle multiplicity and spherocity is shown, investigating the role of multi-parton interactions. This research is relevant both per se and for the mission of the ALICE experiment at the LHC, which is devoted to the study of the Quark-Gluon Plasma.
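As a sketch of what a "relative production rate" measurement involves in practice, a corrected baryon-to-meson yield ratio can be written as below in Python; the raw counts, efficiencies and branching ratios are invented placeholders (the branching ratios are merely reminiscent of the $\Lambda_c^+ \to pK\pi$ and $D^0 \to K\pi$ channels), not ALICE numbers.

import math

def corrected_yield(n_raw, efficiency, branching_ratio):
    # raw signal counts corrected for (acceptance x efficiency) and decay BR
    return n_raw / (efficiency * branching_ratio)

def ratio_with_stat_error(n_num, eff_num, br_num, n_den, eff_den, br_den):
    # baryon-to-meson ratio, e.g. Lambda_c+ / D0, with Poisson errors only
    num = corrected_yield(n_num, eff_num, br_num)
    den = corrected_yield(n_den, eff_den, br_den)
    ratio = num / den
    # relative statistical uncertainties add in quadrature for a ratio
    rel_err = math.sqrt(1.0 / n_num + 1.0 / n_den)
    return ratio, ratio * rel_err

# invented example numbers (NOT ALICE measurements)
r, dr = ratio_with_stat_error(n_num=1200, eff_num=0.02, br_num=0.0628,
                              n_den=50000, eff_den=0.10, br_den=0.0395)
print(f"Lambda_c+/D0 = {r:.3f} +/- {dr:.3f} (stat, illustrative)")

In a real analysis the systematic uncertainties on efficiency and branching ratios dominate the error budget; the sketch only shows why such ratios, rather than absolute yields, are the natural observable for comparing hadronisation models.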