982 results for Simulations, Quantum Models, Resonant Tunneling Diode
Abstract:
A high-frequency physical phase-variable electric machine model was developed using FE analysis. The model was implemented in a machine drive environment with hardware-in-the-loop. The novelty of the proposed model is that it is derived from the actual geometrical and other physical information of the motor, considering each individual turn in the winding. This is the first attempt to develop such a model to obtain high-frequency machine parameters without resorting to the expensive experimental procedures currently in use. The model was used in a dynamic simulation environment to predict inverter-motor interaction, including motor terminal overvoltage, current spikes, and switching effects. In addition, a complete drive model was developed for electromagnetic interference (EMI) analysis and evaluation. This consists of lumped-parameter models of the different system components, such as the cable, inverter, and motor; the lumped-parameter models enable faster simulations. The results were verified by experimental measurements, and excellent agreement was obtained. A change in the winding arrangement and its influence on the motor's high-frequency behavior was also investigated; for an equal number of turns, this was shown to have little effect on the parameter values and on the motor's high-frequency behavior. Accurate prediction of overvoltage and EMI in the design stages of the drive system would reduce the time required for design modifications as well as for the evaluation of EMC compliance issues. The model can be utilized in design optimization and insulation selection for motors. Use of this procedure could prove economical, as it would help designers develop and test new motor designs and evaluate operational impacts in various motor-drive applications.
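The lumped-parameter idea behind the drive model can be illustrated with a toy calculation. The sketch below (all component values are hypothetical placeholders, not taken from the abstract's model) integrates a single series R-L-C section standing in for the cable inductance and motor input capacitance, driven by an ideal inverter voltage step, and reproduces the characteristic near-doubling of the motor terminal voltage:

```python
import numpy as np

def terminal_voltage(R=0.5, L=1e-6, C=1e-9, V_dc=1.0, t_end=2e-6, dt=1e-10):
    """Integrate a series R-L-C lumped cable/motor model driven by an
    ideal inverter voltage step; return the motor-terminal voltage trace."""
    n = int(t_end / dt)
    v = np.zeros(n)   # capacitor (motor terminal) voltage
    i = np.zeros(n)   # cable current
    for k in range(n - 1):
        di = (V_dc - v[k] - R * i[k]) / L   # KVL around the loop
        dv = i[k] / C                        # capacitor charging
        i[k + 1] = i[k] + dt * di
        v[k + 1] = v[k] + dt * dv
    return v

v = terminal_voltage()
print(f"peak terminal voltage: {v.max():.2f} x Vdc")
```

Because the toy circuit is lightly damped, the terminal voltage overshoots to nearly twice the DC-link value, which is the overvoltage phenomenon the abstract refers to.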
Abstract:
Acknowledgements One of us (T. B.) acknowledges many interesting discussions on coupled maps with Professor C. Tsallis. We are also grateful to the anonymous referees for their constructive feedback that helped us improve the manuscript and to the HPCS Laboratory of the TEI of Western Greece for providing the computer facilities where all our simulations were performed. C. G. A. was partially supported by the “EPSRC EP/I032606/1” grant of the University of Aberdeen. This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program “Education and Lifelong Learning” of the National Strategic Reference Framework (NSRF) - Research Funding Program: THALES - Investing in knowledge society through the European Social Fund.
Abstract:
Plasmonic resonant cavities are capable of confining light at the nanoscale, resulting in both enhanced local electromagnetic fields and reduced mode volumes. However, conventional plasmonic resonant cavities suffer large Ohmic losses at metal-dielectric interfaces. Plasmonic near-field coupling plays a key role in the design of photonic components based on resonant cavities because of the possibility of reducing these losses. Here, we study plasmonic near-field coupling in silver nanorod metamaterials treated as resonant nanostructured optical cavities. Reflectance measurements reveal the existence of multiple resonance modes of the nanorod metamaterials, consistent with our theoretical analysis. Furthermore, our numerical simulations show that the electric field at the longitudinal resonances forms standing waves in the nanocavities due to near-field coupling between adjacent nanorods, and that a new hybrid mode emerges due to coupling between the nanorods and a gold-film substrate. We demonstrate that this coupling can be controlled by changing the gap between the silver nanorod array and the gold substrate.
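Treating each nanorod as a Fabry-Perot cavity for the guided plasmon mode gives a quick estimate of where such longitudinal standing-wave resonances should appear. The sketch below uses the generic textbook resonance condition, not the authors' model, and the rod length and effective mode index are arbitrary placeholders:

```python
def fabry_perot_resonances(length_nm, n_eff, m_max=4):
    """Free-space wavelengths (nm) at which a rod of the given length
    supports a longitudinal standing wave with m half-wavelengths of the
    guided mode: m * (lambda / n_eff) / 2 = length."""
    return [2.0 * n_eff * length_nm / m for m in range(1, m_max + 1)]

# e.g. a 400 nm rod with an assumed effective mode index of 2.0
print(fabry_perot_resonances(400.0, 2.0, m_max=3))
```

Higher-order modes (larger m) crowd toward shorter wavelengths, which is why reflectance spectra of such metamaterials show multiple resonances.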
Abstract:
X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].
Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.
As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus our responsibility to optimize the radiation dose of CT examinations. The key to dose optimization is to determine the minimum radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would significantly benefit from effective metrics to characterize radiation dose and image quality for a CT exam. Moreover, if accurate predictions of the radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models to prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement the theoretical models into clinical practice by developing an organ-based dose monitoring system and an image-based noise-addition software tool for protocol optimization.
More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current condition. The study effectively modeled the anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distribution. The dependence of organ dose coefficients on patient size and scanner models was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date with representative age, weight percentile, and body mass index (BMI) range.
With effective quantification of organ dose under constant tube current condition, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with the dose estimated based on Monte Carlo simulations with TCM function explicitly modeled.
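The convolution-based estimate can be sketched in a few lines: the dose deposited at each longitudinal position is the tube-current profile blurred by a kernel describing how dose spreads along the patient axis, then averaged over the organ's extent. Everything below (the profile shape, the kernel, the organ bounds) is an illustrative placeholder, not the thesis's actual kernel or phantom library:

```python
import numpy as np

def organ_dose(ma_profile, kernel, organ_mask, dz=1.0):
    """Convolve a tube-current-modulation profile with a per-slice dose
    kernel, then average the resulting dose field over the organ extent."""
    dose_field = np.convolve(ma_profile, kernel, mode="same") * dz
    return float(dose_field[organ_mask].mean())

z = np.arange(100)                                   # slice index along patient axis
ma = 100 + 50 * np.sin(2 * np.pi * z / 100)          # toy TCM profile (mA)
kernel = np.exp(-np.abs(np.arange(-10, 11)) / 3.0)   # toy longitudinal spread kernel
liver = (z >= 40) & (z < 60)                         # hypothetical organ extent
print(organ_dose(ma, kernel, liver))
```

The estimate is linear in the tube current, so doubling the mA profile doubles the organ dose, which is what makes this kind of prospective prediction fast enough for clinical use.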
Chapter 5 aims to implement the organ dose-estimation framework in clinical practice to develop an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). In the first phase of the study we focused on body CT examinations, so the patient's major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. It outlines the method that was developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.
Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
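Step (1), the image-based noise-addition technique, rests on the fact that quantum noise scales roughly as the inverse square root of dose: reducing dose to a fraction f multiplies the noise standard deviation by 1/sqrt(f), so the extra noise to inject on top of the existing level is sigma_full * sqrt(1/f - 1). The sketch below is a minimal Gaussian-noise version of this idea, not the thesis's validated implementation (which must also account for noise texture and spatial correlation):

```python
import numpy as np

def simulate_low_dose(image, sigma_full, dose_fraction, rng=None):
    """Emulate a reduced-dose scan by injecting zero-mean Gaussian noise.

    If the full-dose image has noise sigma_full, a scan at dose fraction f
    has noise sigma_full / sqrt(f); the additional noise needed is
    sigma_full * sqrt(1/f - 1).
    """
    rng = rng or np.random.default_rng(0)
    sigma_add = sigma_full * np.sqrt(1.0 / dose_fraction - 1.0)
    return image + rng.normal(0.0, sigma_add, size=image.shape)
```

For example, simulating a quarter-dose scan from a full-dose image with 10 HU of noise requires adding noise with a standard deviation of 10·sqrt(3) ≈ 17.3 HU.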
Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.
Abstract:
Uncertainty quantification (UQ) is both an old and a new concept. The current novelty lies in the interactions and synthesis of mathematical models, computer experiments, statistics, field/real experiments, and probability theory, with a particular emphasis on large-scale simulations by computer models. The challenges come not only from the complexity of the scientific questions, but also from the sheer size of the information involved. The focus of this thesis is to provide statistical models that are scalable to the massive data produced in computer experiments and real experiments, through fast and robust statistical inference.
Chapter 2 provides a practical approach for simultaneously emulating/approximating a massive number of functions, with an application to hazard quantification of the Soufrière Hills volcano on Montserrat island. Chapter 3 discusses another problem with massive data, in which the number of observations of a function is large; an exact algorithm that is linear in time is developed for the problem of interpolating methylation levels. Chapters 4 and 5 both concern robust inference for these models. Chapter 4 provides a new robustness criterion for parameter estimation, and several inference approaches are shown to satisfy it. Chapter 5 develops a new prior that satisfies some additional criteria and is therefore proposed for use in practice.
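The "emulate many outputs at once" idea of Chapter 2 can be caricatured in a few lines: project the ensemble of simulator outputs onto a small shared basis and interpolate each retained coefficient over the input space. This is only a generic sketch (an SVD basis plus a simple exponential-kernel interpolant on a 1-D input), not the actual emulator developed in the thesis:

```python
import numpy as np

def fit_basis_emulator(X_train, Y_train, n_basis=2, length=0.3):
    """Emulate many simulator outputs jointly: project runs (rows of
    Y_train) onto a few SVD basis vectors and build one small kernel
    interpolant per retained coefficient over the 1-D inputs X_train."""
    Y_mean = Y_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(Y_train - Y_mean, full_matrices=False)
    basis = Vt[:n_basis]                       # shared output-space basis
    coeffs = (Y_train - Y_mean) @ basis.T      # per-run basis coefficients
    K = np.exp(-np.abs(X_train[:, None] - X_train[None, :]) / length)
    weights = np.linalg.solve(K + 1e-10 * np.eye(len(X_train)), coeffs)

    def predict(x_new):
        k = np.exp(-np.abs(x_new - X_train) / length)
        return Y_mean + (k @ weights) @ basis
    return predict
```

The cost of a prediction scales with the number of retained basis coefficients rather than with the (massive) number of output functions, which is the point of this construction.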
Abstract:
The study of III-nitride materials (InN, GaN and AlN) gained huge research momentum after breakthroughs in the production of light-emitting diodes (LEDs) and laser diodes (LDs) over the past two decades. Last year, the Nobel Prize in Physics was awarded jointly to Isamu Akasaki, Hiroshi Amano and Shuji Nakamura for inventing a new energy-efficient and environmentally friendly light source: the blue light-emitting diode (LED) made from III-nitride semiconductors in the early 1990s. Nowadays, III-nitride materials not only play an increasingly important role in lighting technology, but have also become prospective candidates in other areas, for example radio-frequency (RF) high-electron-mobility transistors (HEMTs) and photovoltaics. These devices require the growth of high-quality III-nitride films, which can be prepared using metal organic vapour phase epitaxy (MOVPE). The main aim of my thesis is to study and develop the growth of III-nitride films, including AlN, u-AlGaN, Si-doped AlGaN, and InAlN, serving as sample wafers for the fabrication of ultraviolet (UV) LEDs, in order to replace the conventional bulky, expensive and environmentally harmful mercury lamp as a new UV light source. For application to UV LEDs, reducing the threading dislocation density (TDD) in AlN epilayers on sapphire substrates is key to achieving high-efficiency AlGaN-based UV emitters. In Chapter 4, after careful and systematic optimisation of the growth conditions, the screw- and edge-type dislocation densities in the AlN were reduced to around 2.2×10⁸ cm⁻² and 1.3×10⁹ cm⁻², respectively, using an optimized three-step process, as estimated by TEM. An atomically smooth surface with an RMS roughness of around 0.3 nm was achieved over a 5×5 µm² AFM scan area. Furthermore, a one-dimensional model of step motion has been proposed to describe the surface morphology evolution, especially the step-bunching feature found under non-optimal conditions.
In Chapter 5, control of alloy composition and the maintenance of compositional uniformity across a growing epilayer surface were demonstrated for the development of u-AlGaN epilayers. Optimized conditions (i.e. a high growth temperature of 1245 °C) produced a uniform and smooth film with a low RMS roughness of around 2 nm over a 20×20 µm² AFM scan. The dopant most commonly used to obtain n-type conductivity in AlxGa1-xN is Si. However, the incorporation of Si has been found to increase strain relaxation and promote unintentional incorporation of other impurities (O and C) during Si-doped AlGaN growth. In Chapter 6, reducing edge-type TDs is observed to be an effective approach to improving the electrical and optical properties of Si-doped AlGaN epilayers. In addition, maximum electron concentrations of 1.3×10¹⁹ cm⁻³ and 6.4×10¹⁸ cm⁻³ were achieved in Si-doped Al0.48Ga0.52N and Al0.6Ga0.4N epilayers, respectively, as measured using the Hall effect. Finally, in Chapter 7, studies on the growth of InAlN/AlGaN multiple quantum well (MQW) structures were performed; exposing the InAlN QW to a higher temperature during the ramp to the growth temperature of the AlGaN barrier (around 1100 °C) causes significant indium (In) desorption. To overcome this issue, a quasi-two-temperature (Q2T) technique was applied to protect the InAlN QW. After optimization, intense emission from the MQWs was observed in the UV spectral range from 320 to 350 nm, as measured by room-temperature photoluminescence.
Abstract:
Optical nanofibres are ultrathin optical fibres with a waist diameter typically smaller than the wavelength of the light guided through them. Cold atoms can couple to the evanescent field of the nanofibre-guided modes, and such systems are emerging as promising technologies for the development of atom-photon hybrid quantum devices. Atoms within the evanescent field region of an optical nanofibre can be probed by sending near- or on-resonant light through the fibre; however, the probe light can detrimentally affect the properties of the atoms. In this paper, we report on the modification of the local temperature of laser-cooled 87Rb atoms in a magneto-optical trap centred around an optical nanofibre when near-resonant probe light propagates through it. A transient absorption technique was used to measure the temperature of the affected atoms, and temperature variations from 160 μK to 850 μK were observed for probe powers ranging from 0 to 50 nW. This effect could have implications for the use of optical nanofibres for probing and manipulating cold or ultracold atoms.
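For context on how microkelvin temperatures like these are extracted: cold-atom thermometry commonly relies on the free-expansion relation σ(t)² = σ₀² + (k_B·T/m)·t², where the cloud width grows ballistically and a linear fit in t² yields T. The sketch below implements only this generic relation, not the paper's specific transient-absorption analysis:

```python
import numpy as np

def fit_temperature(times, sigmas, mass):
    """Infer a cloud temperature from expansion widths using
    sigma(t)^2 = sigma0^2 + (kB * T / mass) * t^2 (linear fit in t^2)."""
    kB = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)
    slope, _ = np.polyfit(times ** 2, sigmas ** 2, 1)
    return mass * slope / kB
```

With synthetic widths generated from a known temperature, the fit recovers that temperature; with real data, the quality of the linear fit in t² is itself a check on whether the expansion is thermal.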
Abstract:
The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components and consists of an atmospheric model, an ocean model and a land-ice model. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation to address shortcomings of global models on regular grids and of limited-area models nested in a forcing data set, with respect to parallel scalability, numerical accuracy and physical consistency. This concept allows one to include the feedback of regional land use information on weather and climate at local and global scales in a consistent way, which is impossible to achieve with traditional limited-area modelling approaches. Here, we present an in-depth evaluation of MPAS with regard to technical aspects of performing model runs and scalability for three medium-size meshes on four different high-performance computing (HPC) sites with different architectures and compilers. We uncover model limitations and identify new aspects of model optimisation that are introduced by the use of unstructured Voronoi meshes. We further demonstrate the performance of MPAS in terms of its capability to reproduce the dynamics of the West African monsoon (WAM) and its associated precipitation in a pilot study. Constrained by available computational resources, we compare 11-month runs for two meshes with observations and a reference simulation from the Weather Research and Forecasting (WRF) model. We show that MPAS can reproduce the atmospheric dynamics on global and local scales in this experiment, but identify a precipitation excess for the West African region. Finally, we conduct extreme scaling tests on a global 3 km mesh with more than 65 million horizontal grid cells on up to half a million cores. We discuss necessary modifications of the model code to improve its parallel performance, both in general and specific to the HPC environment.
We confirm good scaling (70 % parallel efficiency or better) of the MPAS model and provide numbers on the computational requirements for experiments with the 3 km mesh. In doing so, we show that global, convection-resolving atmospheric simulations with MPAS are within reach of current and next generations of high-end computing facilities.
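Parallel-efficiency figures like the 70 % quoted here follow from a simple strong-scaling relation: perfect scaling keeps the product of wall time and core count constant. This helper (with made-up timings) shows the arithmetic:

```python
def parallel_efficiency(t_base, p_base, t_p, p):
    """Strong-scaling efficiency of a run (time t_p on p cores) relative
    to a baseline run (time t_base on p_base cores): perfect scaling
    keeps t * p constant, so the ratio of the two products is the efficiency."""
    return (t_base * p_base) / (t_p * p)

# e.g. a run that is 6x faster on 8x the cores has 75% efficiency
print(parallel_efficiency(t_base=100.0, p_base=1024, t_p=100.0 / 6.0, p=8192))
```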
Abstract:
Variations in global ice volume and temperature over the Cenozoic era have been investigated with a set of one-dimensional (1-D) ice-sheet models. Simulations include three ice sheets representing glaciation in the Northern Hemisphere, i.e. in Eurasia, North America and Greenland, and two separate ice sheets for Antarctic glaciation. The continental mean Northern Hemisphere surface-air temperature has been derived through an inverse procedure from observed benthic δ¹⁸O records. These data have yielded a mutually consistent and continuous record of temperature, global ice volume and benthic δ¹⁸O over the past 35 Ma. The simple 1-D model shows good agreement with a comprehensive 3-D ice-sheet model for the past 3 Ma. On average, differences are only 1.0 °C for temperature and 6.2 m for sea level. Most notably, over the 35 Ma period, the reconstructed ice volume-temperature sensitivity shows a transition from a climate controlled by Southern Hemisphere ice sheets to one controlled by Northern Hemisphere ice sheets. Although the transient behaviour is important, equilibrium experiments show that the relationship between temperature and sea level is linear and symmetric, providing limited evidence for hysteresis. Furthermore, the results compare well with other simulations of Antarctic ice volume and with observed sea level.
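An inverse procedure of this kind can be caricatured as a fixed-point loop: for each observation, nudge the modelled surface-air temperature until a forward model of benthic δ¹⁸O reproduces it. The forward model below is a made-up linear stand-in (warmer climate, lower δ¹⁸O), not the ice-sheet model of the study:

```python
def invert_record(d18o_obs, forward_model, gain=0.5, n_iter=50):
    """For each observed delta-18O value, iterate the temperature until
    the forward model matches it; return the reconstructed temperatures.
    The previous solution seeds the next one, as in a transient record."""
    T = 0.0
    temps = []
    for target in d18o_obs:
        for _ in range(n_iter):
            # model output above target -> climate must be warmer
            T += gain * (forward_model(T) - target)
        temps.append(T)
    return temps

# toy forward model: delta-18O drops ~0.23 per degree of warming (assumed)
toy = lambda T: 3.5 - 0.23 * T
print(invert_record([3.5, 2.58], toy))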
Abstract:
The recently proposed global monsoon hypothesis interprets monsoon systems as part of one global-scale atmospheric overturning circulation, implying a connection between the regional monsoon systems and an in-phase behaviour of all northern hemispheric monsoons on annual timescales (Trenberth et al., 2000). Whether this concept can be applied to past climates and variability on longer timescales is still under debate, because the monsoon systems exhibit different regional characteristics such as different seasonality (i.e. onset, peak, and withdrawal). To investigate the interconnection of different monsoon systems during the pre-industrial Holocene, five transient global climate model simulations have been analysed with respect to the rainfall trend and variability in different sub-domains of the Afro-Asian monsoon region. Our analysis suggests that on millennial timescales with varying orbital forcing, the monsoons do not behave as a tightly connected global system. According to the models, the Indian and North African monsoons are coupled, showing similar rainfall trend and moderate correlation in rainfall variability in all models. The East Asian monsoon changes independently during the Holocene. The dissimilarities in the seasonality of the monsoon sub-systems lead to a stronger response of the North African and Indian monsoon systems to the Holocene insolation forcing than of the East Asian monsoon and affect the seasonal distribution of Holocene rainfall variations. Within the Indian and North African monsoon domain, precipitation solely changes during the summer months, showing a decreasing Holocene precipitation trend. In the East Asian monsoon region, the precipitation signal is determined by an increasing precipitation trend during spring and a decreasing precipitation change during summer, partly balancing each other. 
A synthesis of reconstructions and the model results does not reveal an impact of the different seasonality on the timing of the Holocene rainfall optimum in the different sub-monsoon systems. Rather, they indicate locally inhomogeneous rainfall changes and show that single palaeo-records should not be used to characterise the rainfall change and monsoon evolution for entire monsoon sub-systems.
Abstract:
Routes of migration and exchange are important factors in the debate about how the Neolithic transition spread into Europe. Studying the genetic diversity of livestock can help in tracing back some of these past events. Notably, domestic goat (Capra hircus) did not have any wild progenitors (Capra aegagrus) in Europe before their arrival from the Near East. Studies of mitochondrial DNA have shown that the diversity in European domesticated goats is a subset of that in the wild, underlining the ancestral relationship between both populations. Additionally, an ancient DNA study on Neolithic goat remains has indicated that a high level of genetic diversity was already present early in the Neolithic in northwestern Mediterranean sites. We used coalescent simulations and approximate Bayesian computation, conditioned on patterns of modern and ancient mitochondrial DNA diversity in domesticated and wild goats, to test a series of simplified models of the goat domestication process. Specifically, we ask if domestic goats descend from populations that were distinct prior to domestication. Although the models we present require further analyses, preliminary results indicate that wild and domestic goats are more likely to descend from a single ancestral wild population that was managed 11,500 years before present, and that serial founding events characterise the spread of Capra hircus into Europe.
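The rejection flavour of approximate Bayesian computation used in such studies is easy to sketch: draw parameters from the prior, simulate a summary statistic, and keep the draws whose statistic lands close to the observed one. The toy simulator and prior below are placeholders, not the coalescent machinery of the study:

```python
import numpy as np

def abc_rejection(observed_stat, simulate, prior_draw, n=5000, eps=0.5):
    """Rejection-sampling ABC: accepted parameter draws approximate the
    posterior given the observed summary statistic."""
    rng = np.random.default_rng(1)
    accepted = []
    for _ in range(n):
        theta = prior_draw(rng)
        if abs(simulate(theta, rng) - observed_stat) < eps:
            accepted.append(theta)
    return np.array(accepted)

# toy example: flat prior, noisy identity simulator, observed statistic 5
prior = lambda rng: rng.uniform(0.0, 10.0)
sim = lambda th, rng: th + rng.normal(0.0, 0.5)
print(abc_rejection(5.0, sim, prior).mean())
```

Model comparison, as in the goat-domestication study, follows by running the same rejection step for each candidate model and comparing acceptance rates or posterior fits.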
Abstract:
We present self-consistent, axisymmetric core-collapse supernova simulations performed with the Prometheus-Vertex code for 18 pre-supernova models in the range of 11–28 M⊙, including progenitors recently investigated by other groups. All models develop explosions, but depending on the progenitor structure, they can be divided into two classes. With a steep density decline at the Si/Si–O interface, the arrival of this interface at the shock front leads to a sudden drop of the mass-accretion rate, triggering a rapid approach to explosion. With a more gradually decreasing accretion rate, it takes longer for the neutrino heating to overcome the accretion ram pressure and explosions set in later. Early explosions are facilitated by high mass-accretion rates after bounce and correspondingly high neutrino luminosities combined with a pronounced drop of the accretion rate and ram pressure at the Si/Si–O interface. Because of rapidly shrinking neutron star radii and receding shock fronts after the passage through their maxima, our models exhibit short advection timescales, which favor the efficient growth of the standing accretion-shock instability. The latter plays a supportive role at least for the initiation of the re-expansion of the stalled shock before runaway. Taking into account the effects of turbulent pressure in the gain layer, we derive a generalized condition for the critical neutrino luminosity that captures the explosion behavior of all models very well. We validate the robustness of our findings by testing the influence of stochasticity, numerical resolution, and approximations in some aspects of the microphysics.
Abstract:
The Ran GTPase protein is a guanine nucleotide-binding protein (GNBP) with an acknowledged profile in cancer onset, progression and metastasis. The complex mechanism adopted by GNBPs in exchanging GDP for GTP is an intriguing process and is crucial for Ran viability. The successful completion of the process is a fundamental aspect of propagating downstream signalling events. QM/MM molecular dynamics simulations were employed in this study to provide a deeper mechanistic understanding of the initiation of nucleotide exchange in Ran. Results indicate significant disruption of the metal-binding site upon interaction with RCC1 (the Ran guanine nucleotide exchange factor), culminating in a prominent shift of the divalent magnesium ion. The observed ion drifting is reasoned to occur as a consequence of the complex formation between Ran and RCC1 and is postulated to be a critical factor in the exchange process adopted by Ran. This is the first report to observe and detail such intricate dynamics for a protein of the Ras superfamily.
Abstract:
We present a reformulation of the hairy-probe method for introducing electronic open boundaries that is appropriate for steady-state calculations involving nonorthogonal atomic basis sets. As a check on the correctness of the method we investigate a perfect atomic wire of Cu atoms and a perfect nonorthogonal chain of H atoms. For both atom chains we find that the conductance has a value of exactly one quantum unit and that this is rather insensitive to the strength of coupling of the probes to the system, provided values of the coupling are of the same order as the mean interlevel spacing of the system without probes. For the Cu atom chain we find in addition that away from the regions with probes attached, the potential in the wire is uniform, while within them it follows a predicted exponential variation with position. We then apply the method to an initial investigation of the suitability of graphene as a contact material for molecular electronics. We perform calculations on a carbon nanoribbon to determine the correct coupling strength of the probes to the graphene and obtain a conductance of about two quantum units corresponding to two bands crossing the Fermi surface. We then compute the current through a benzene molecule attached to two graphene contacts and find only a very weak current because of the disruption of the π conjugation by the covalent bond between the benzene and the graphene. In all cases we find that very strong or weak probe couplings suppress the current.
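The "quantum unit" of conductance referred to throughout is the Landauer value G₀ = 2e²/h per open channel, which is straightforward to evaluate from the exact SI constants:

```python
# Landauer conductance quantum: G0 = 2 e^2 / h, one unit per open channel
e = 1.602176634e-19   # elementary charge, C (exact SI value)
h = 6.62607015e-34    # Planck constant, J*s (exact SI value)
G0 = 2.0 * e ** 2 / h
print(G0)             # ~7.748e-05 S, i.e. ~77.5 microsiemens per channel
# two bands crossing the Fermi surface, as for the nanoribbon, give ~2 * G0
```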