973 results for Linear-chain
Abstract:
This thesis divides into two distinct parts, both of which are underpinned by the tight-binding model. The first part covers our implementation of the tight-binding model in conjunction with the Berry phase theory of electronic polarisation to probe the atomistic origins of spontaneous polarisation and piezoelectricity, and to calculate the values and coefficients associated with these phenomena. We first develop an analytic model for the polarisation of a one-dimensional linear chain of atoms. We compare the zincblende and ideal wurtzite structures in terms of effective charges, spontaneous polarisation and piezoelectric coefficients, within a first-nearest-neighbour tight-binding model. We further compare these to real wurtzite structures and conclude that accurate quantitative results are beyond the scope of this model, although qualitative trends can still be described. The second part of this thesis implements the tight-binding model to investigate the effect of local alloy fluctuations in bulk AlGaN alloys and InGaN quantum wells. We calculate the band gap evolution of Al1-xGaxN across the full composition range, compare it with experiment, and fit bowing parameters to the band gap as well as to the conduction and valence band edges. We also investigate the wavefunction character of the valence band edge to determine the composition at which the optical polarisation switches in Al1-xGaxN alloys. Finally, we examine electron and hole localisation in InGaN quantum wells. We show how the built-in field localises the carriers along the c-axis and how local alloy fluctuations strongly localise the highest hole states in the c-plane, while the electrons remain delocalised in the c-plane. We show how this localisation affects the charge density overlap and also investigate the effect of well width fluctuations on the localisation of the electrons.
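As a rough illustration of the Berry-phase polarisation calculation for a one-dimensional chain, the following Python sketch evaluates the discretized Zak phase of the occupied band of a two-band diatomic tight-binding chain; the model (alternating on-site energies and hoppings) and all parameter values are illustrative assumptions, not the thesis model.

import numpy as np

# Minimal sketch, not the thesis code: Berry-phase (Zak-phase) polarisation of a two-band
# one-dimensional tight-binding chain with alternating on-site energies +/-delta and
# alternating hoppings t1, t2. All parameter values below are illustrative.

def bloch_hamiltonian(k, delta, t1, t2):
    # 2x2 Bloch Hamiltonian of the diatomic chain at crystal momentum k (lattice constant 1)
    off = t1 + t2 * np.exp(-1j * k)
    return np.array([[delta, off], [np.conj(off), -delta]])

def zak_phase(delta, t1, t2, nk=400):
    # discretized Berry phase of the occupied (lower) band over the Brillouin zone
    ks = np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)
    u = []
    for k in ks:
        _, v = np.linalg.eigh(bloch_hamiltonian(k, delta, t1, t2))
        u.append(v[:, 0])                 # occupied (lower) band eigenvector
    u.append(u[0])                        # close the loop in k-space
    prod = 1.0 + 0.0j
    for i in range(nk):                   # King-Smith/Vanderbilt product of overlaps
        prod *= np.vdot(u[i], u[i + 1])
    return -np.imag(np.log(prod))

phase = zak_phase(delta=0.5, t1=1.0, t2=0.7)
print(f"Zak phase = {phase:.4f} rad, electronic polarisation = {phase/(2*np.pi):.4f} e per cell")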
Abstract:
While fault-tolerant quantum computation might still be years away, analog quantum simulators offer a way to leverage current quantum technologies to study classically intractable quantum systems. Cutting-edge quantum simulators, such as those utilizing ultracold atoms, are beginning to study physics that surpasses what is classically tractable. As the system sizes of these quantum simulators increase, there are concurrent gains in the complexity and types of Hamiltonians which can be simulated. In this work, I describe advances toward the realization of an adaptable, tunable quantum simulator capable of surpassing classical computation. We simulate long-ranged Ising and XY spin models, which can have arbitrary global transverse and longitudinal fields in addition to individual transverse fields, using a linear chain of up to 24 171Yb+ ions confined in a linear rf Paul trap. Each qubit is encoded in the ground-state hyperfine levels of an ion. Spin-spin interactions are engineered by the application of spin-dependent forces from laser fields, coupling spin to motion. Each spin can be read out independently using state-dependent fluorescence. The results here add yet more tools to an ever-growing quantum simulation toolbox. One of many challenges has been the coherent manipulation of individual qubits. By using a surprisingly large fourth-order Stark shift in a clock-state qubit, we demonstrate the ability to individually manipulate spins and apply independent Hamiltonian terms, greatly increasing the range of quantum simulations which can be implemented. As quantum systems grow beyond the capability of classical numerics, a constant question is how to verify a quantum simulation. Here, I present measurements which may provide useful metrics for large system sizes and demonstrate them in a system of up to 24 ions during a classically intractable simulation. The observed values are consistent with extremely large entangled states, with as much as ~95% of the system entangled. Finally, we use many of these techniques to generate a spin Hamiltonian which fails to thermalize during experimental time scales due to a metastable state, often called a prethermal state. The observed prethermal state is a new form of prethermalization which arises due to long-range interactions and open boundary conditions, even in the thermodynamic limit. This prethermalization is observed in a system of up to 22 spins. We expect that system sizes can be extended up to 30 spins with only minor upgrades to the current apparatus. These results emphasize that, as the technology improves, the techniques and tools developed here can potentially be used to perform simulations which surpass the capability of even the most sophisticated classical techniques, enabling the study of a whole new regime of quantum many-body physics.
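As a rough illustration of the simulated spin models, the following Python sketch builds and exactly diagonalizes a small long-range transverse-field Ising Hamiltonian with power-law couplings; the chain length, coupling exponent and field strength are illustrative assumptions rather than experimental values.

import numpy as np
from functools import reduce

# Minimal sketch with illustrative parameters, not the experimental values: long-range
# transverse-field Ising Hamiltonian H = sum_{i<j} J0/|i-j|^alpha sx_i sx_j + B sum_i sz_i
# on a small chain, built as a dense matrix and exactly diagonalized.

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def op_on_site(op, site, n):
    # tensor product placing `op` on `site` of an n-spin chain
    ops = [I2] * n
    ops[site] = op
    return reduce(np.kron, ops)

def ising_hamiltonian(n, J0=1.0, alpha=1.2, B=0.5):
    H = np.zeros((2**n, 2**n))
    for i in range(n):
        for j in range(i + 1, n):
            H += (J0 / abs(i - j)**alpha) * op_on_site(sx, i, n) @ op_on_site(sx, j, n)
        H += B * op_on_site(sz, i, n)
    return H

energies = np.linalg.eigvalsh(ising_hamiltonian(n=6))
print("ground-state energy:", energies[0])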
Abstract:
Particle-transfer molecular dynamics is used to study the phase equilibria of linear and branched chain molecules. The scaling of the critical temperature with chain length is obtained, and the critical densities are found to decrease with increasing chain length, in agreement with experimental and theoretical results. The phase diagrams of the linear and the branched chain molecules nearly overlap with each other. Moreover, the radial distribution functions of linear and branched chain molecules in the gas phase are very similar, but in the liquid phase they differ between the two kinds of chains.
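For the kind of structural comparison mentioned above, a radial distribution function can be computed from simulated configurations along these lines; this minimal Python sketch uses stand-in random coordinates and a cubic periodic box, not the actual chain-molecule trajectories.

import numpy as np

# Minimal sketch with stand-in data: radial distribution function g(r) of N sites in a
# cubic periodic box, using the minimum-image convention. Real trajectories from the
# particle-transfer molecular dynamics simulations would replace the random coordinates.

def rdf(coords, box, nbins=100):
    n = len(coords)
    rmax = box / 2.0
    edges = np.linspace(0.0, rmax, nbins + 1)
    hist = np.zeros(nbins)
    for i in range(n - 1):
        d = coords[i + 1:] - coords[i]
        d -= box * np.round(d / box)          # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r[r < rmax], bins=edges)[0]
    rho = n / box**3
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    g = hist / (0.5 * n * rho * shell)        # normalise by ideal-gas pair counts
    return 0.5 * (edges[1:] + edges[:-1]), g

coords = np.random.rand(500, 3) * 10.0        # stand-in coordinates, box length 10
r, g = rdf(coords, box=10.0)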
Abstract:
Graft chain propagation rate coefficients (k(p,g)) for grafting AA onto linear low-density polyethylene (LLDPE) in the melt in ESR tubes have been measured via Fourier transform infrared (FTIR) spectroscopy and electron spin resonance (ESR) spectroscopy in the temperature range from 130 to 170 degrees C. To exclude the effect of homopolymerization on the grafting, the LLDPE was pre-irradiated in air by electron beam to generate peroxides and then treated with iodide solution to eliminate one kind of peroxide, hydroperoxide. The monomer conversion is determined by FTIR, and the propagating free-radical concentration is deduced from the double integration of the well-resolved ESR spectra, consisting of nine lines in the melt. An expression for the temperature dependence of k(p,g) is obtained. The magnitude of k(p,g) from the FTIR and ESR analysis is in good agreement with theoretical data deduced from ethylene-AA copolymerization, suggesting that this method can reliably and directly provide the propagation rate coefficient. Comparison of k(p,g) with data extrapolated from solution polymerization at modest temperatures indicates that the extrapolated data might not be entirely suitable for discussing the kinetic behavior in the melt.
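A minimal Python sketch of the data reduction implied here, with hypothetical numbers throughout: the propagation rate coefficient follows from the monomer consumption rate (FTIR conversion) and the propagating-radical concentration (double-integrated ESR signal), and its temperature dependence is then summarized by an Arrhenius fit.

import numpy as np

# Minimal sketch with hypothetical numbers: k_p = R_p / ([M][P*]), where the monomer
# consumption rate R_p comes from FTIR conversion data and the propagating-radical
# concentration [P*] from the double-integrated ESR signal; an Arrhenius fit
# ln k_p = ln A - Ea/(R T) then expresses the temperature dependence.

R_GAS = 8.314                                  # J mol^-1 K^-1

def kp_from_run(t, conversion, M0, radical_conc):
    # instantaneous k_p for one isothermal run (radical concentration taken constant)
    M = M0 * (1.0 - conversion)
    Rp = -np.gradient(M, t)                    # monomer consumption rate, mol L^-1 s^-1
    return Rp / (M * radical_conc)

def arrhenius_fit(T, kp):
    # least-squares fit of ln k_p versus 1/T; returns the pre-exponential factor and Ea
    slope, intercept = np.polyfit(1.0 / T, np.log(kp), 1)
    return np.exp(intercept), -slope * R_GAS

# hypothetical per-temperature averages of k_p (L mol^-1 s^-1) between 130 and 170 degrees C
T = np.array([403.0, 413.0, 423.0, 433.0, 443.0])
kp = np.array([1.0e3, 1.5e3, 2.2e3, 3.1e3, 4.3e3])
A, Ea = arrhenius_fit(T, kp)
print(f"A = {A:.3e} L mol^-1 s^-1, Ea = {Ea/1000:.1f} kJ/mol")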
Abstract:
We investigate entanglement between collective operators of two blocks of oscillators in an infinite linear harmonic chain. These operators are defined as averages over local operators (individual oscillators) in the blocks. On the one hand, this approach of "physical blocks" meets realistic experimental conditions, where measurement apparatuses do not interact with single oscillators but rather with a whole collection of them, i.e., where, in contrast to the usually studied "mathematical blocks", not every possible measurement is allowed. On the other hand, this formalism naturally allows the generalization to blocks which may consist of several noncontiguous regions. We quantify entanglement between the collective operators by a measure based on the Peres-Horodecki criterion and show how it can be extracted and transferred to two qubits. Entanglement between two blocks is found even in the case where none of the oscillators from one block is entangled with an oscillator from the other, showing genuine bipartite entanglement between collective operators. Allowing the blocks to consist of a periodic sequence of subblocks, we verify that entanglement scales at most with the total boundary region. We also apply the approach of collective operators to scalar quantum field theory.
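A minimal Python sketch of this kind of calculation, under illustrative assumptions (a finite ring approximating the infinite chain, arbitrary frequencies and block sizes): the ground-state covariance matrix of the two collective modes is assembled and the logarithmic negativity, a measure based on the Peres-Horodecki criterion, is evaluated; whether it is nonzero depends on the chosen parameters and block geometry.

import numpy as np

# Minimal sketch with illustrative parameters, not the paper's: a ring of N oscillators
# approximates the infinite chain (unit mass, hbar = 1, vacuum variance 1/2). The 4x4
# ground-state covariance matrix of the collective quadratures of two blocks is assembled,
# and the logarithmic negativity of the reduced two-mode Gaussian state is evaluated via
# the Peres-Horodecki (PPT) criterion.

N, omega0, K = 400, 0.1, 1.0
k = np.arange(N)
omega = np.sqrt(omega0**2 + 4.0 * K * np.sin(np.pi * k / N)**2)   # normal-mode frequencies

def corr(i, j, weights):
    # ground-state two-point function; weights = 1/omega for x-x, omega for p-p
    return np.sum(np.cos(2.0 * np.pi * k * (i - j) / N) * weights) / (2.0 * N)

def block_cm(A, B):
    # covariance matrix of the collective quadratures (X_A, P_A, X_B, P_B)
    def pair(S, T, w):
        return sum(corr(i, j, w) for i in S for j in T) / np.sqrt(len(S) * len(T))
    wx, wp = 1.0 / omega, omega
    a1, a2 = pair(A, A, wx), pair(A, A, wp)
    b1, b2 = pair(B, B, wx), pair(B, B, wp)
    c1, c2 = pair(A, B, wx), pair(A, B, wp)
    return np.array([[a1, 0, c1, 0], [0, a2, 0, c2], [c1, 0, b1, 0], [0, c2, 0, b2]])

def log_negativity(sigma):
    A, B, C = sigma[:2, :2], sigma[2:, 2:], sigma[:2, 2:]
    delta = np.linalg.det(A) + np.linalg.det(B) - 2.0 * np.linalg.det(C)
    nu_sq = (delta - np.sqrt(delta**2 - 4.0 * np.linalg.det(sigma))) / 2.0
    return max(0.0, -np.log(2.0 * np.sqrt(nu_sq)))

blockA, blockB = list(range(0, 10)), list(range(10, 20))   # two adjacent 10-site blocks
print("E_N =", log_negativity(block_cm(blockA, blockB)))   # zero or positive, parameter dependent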
Abstract:
For a positive integer $t$, let
\begin{equation*}
\begin{array}{ccccccccc}
(\mathcal{A}_{0},\mathcal{M}_{0}) & \subseteq & (\mathcal{A}_{1},\mathcal{M}_{1}) & \subseteq & \cdots & \subseteq & (\mathcal{A}_{t-1},\mathcal{M}_{t-1}) & \subseteq & (\mathcal{A},\mathcal{M}) \\
\cap & & \cap & & & & \cap & & \cap \\
(\mathcal{R}_{0},\mathcal{M}_{0}^{2}) & & (\mathcal{R}_{1},\mathcal{M}_{1}^{2}) & & \cdots & & (\mathcal{R}_{t-1},\mathcal{M}_{t-1}^{2}) & & (\mathcal{R},\mathcal{M}^{2})
\end{array}
\end{equation*}
be a chain of unitary local commutative rings $(\mathcal{A}_{i},\mathcal{M}_{i})$ with their corresponding Galois ring extensions $(\mathcal{R}_{i},\mathcal{M}_{i}^{2})$, for $i=0,1,\cdots,t$. In this paper we give a construction technique for cyclic, BCH, alternant, Goppa and Srivastava codes over these rings. Although the construction in \cite{AP} was initially given for the local ring $(\mathcal{A},\mathcal{M})$, the new approach presented here offers a choice in selecting the most suitable code from the perspectives of error correction and code rate.
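For orientation, the familiar finite-field special case that such ring-theoretic constructions generalise: if $\alpha$ is a primitive $n$-th root of unity in an extension of $\mathbb{F}_{q}$ with $\gcd(n,q)=1$, the BCH code of designed distance $\delta$ and offset $b$ is the cyclic code of length $n$ with generator polynomial
\begin{equation*}
g(x)=\operatorname{lcm}\bigl(m_{b}(x),\,m_{b+1}(x),\,\ldots,\,m_{b+\delta-2}(x)\bigr),
\end{equation*}
where $m_{i}(x)$ denotes the minimal polynomial of $\alpha^{i}$ over $\mathbb{F}_{q}$, and its minimum distance is at least $\delta$.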
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. Flexible blocking strategies are introduced to further improve mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with double-well potential drift and another with SINE drift. The new algorithm's accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational approximation assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient.
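As a minimal illustration of the two test processes, the following Python sketch simulates the double-well and sine-drift diffusions by the Euler-Maruyama scheme and generates sparse, noisy observations; the drift forms, noise levels and observation schedule are illustrative assumptions, and the smoothing algorithms themselves are not reproduced.

import numpy as np

# Minimal sketch with illustrative parameters: Euler-Maruyama simulation of the two test
# diffusions, dx = f(x) dt + s dW, with a double-well drift f(x) = 4x(1 - x^2) and a sine
# drift f(x) = sin(x), observed sparsely with additive Gaussian noise. The smoothing
# samplers themselves are not reproduced here.

rng = np.random.default_rng(0)

def simulate(drift, x0=0.0, T=20.0, dt=0.01, s=0.5):
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] + drift(x[i]) * dt + s * np.sqrt(dt) * rng.standard_normal()
    return np.linspace(0.0, T, n + 1), x

t, x_dw = simulate(lambda x: 4.0 * x * (1.0 - x**2))    # double-well potential drift
_, x_sin = simulate(np.sin)                             # sine drift

obs_idx = np.arange(0, len(t), 100)                     # sparse observation times
y = x_dw[obs_idx] + 0.2 * rng.standard_normal(len(obs_idx))   # noisy observations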
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. The novel diffusion bridge proposal derived from the variational approximation allows the use of a flexible blocking strategy that further improves mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with double-well potential drift and another with SINE drift. The new algorithm's accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational approximation assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient. © 2011 Springer-Verlag.
Abstract:
Change point estimation is recognized as an essential tool of root cause analysis within quality control programs, as it enables clinical experts to search for potential causes of a change in hospital outcomes more effectively. In this paper, we consider estimation of the time when a linear trend disturbance has occurred in survival time following an in-control clinical intervention in the presence of variable patient mix. To model the process and change point, a linear trend in the survival time of patients who underwent cardiac surgery is formulated using hierarchical models in a Bayesian framework. The data are right censored since the monitoring is conducted over a limited follow-up period. We capture the effect of risk factors prior to the surgery using a Weibull accelerated failure time regression model. We use Markov chain Monte Carlo to obtain posterior distributions of the change point parameters, including the location and the slope of the trend, as well as the corresponding probabilistic intervals and inferences. The performance of the Bayesian estimator is investigated through simulations, and the results show that precise estimates can be obtained when it is used in conjunction with risk-adjusted survival time cumulative sum (CUSUM) control charts for different trend scenarios. In comparison with the alternatives, a step change point model and the built-in CUSUM estimator, the proposed Bayesian estimator yields more accurate and precise estimates over linear trends. These advantages are enhanced when the probability quantification, flexibility and generalizability of the Bayesian change point model are also considered.
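A minimal Python sketch of the assumed data-generating process, with hypothetical parameters throughout: right-censored survival times are drawn from a Weibull accelerated failure time model with a patient-specific risk covariate, and a linear trend in the log survival time begins at the change point.

import numpy as np

# Minimal sketch with hypothetical parameters: right-censored survival times from a Weibull
# accelerated failure time model, log T = mu + beta * risk + trend + sigma * eps, where a
# linear trend starts at the change point tau (patients indexed in order of surgery) and
# monitoring over a limited follow-up window right-censors the observations.

rng = np.random.default_rng(1)

def simulate_patients(n=500, tau=300, slope=-0.002, mu=5.0, beta=-1.5, sigma=0.8, follow_up=90.0):
    patient = np.arange(n)
    risk = rng.beta(2.0, 5.0, size=n)                    # pre-surgery risk score
    trend = np.where(patient > tau, slope * (patient - tau), 0.0)
    eps = -rng.gumbel(0.0, 1.0, size=n)                  # minimum extreme-value error => Weibull T
    T = np.exp(mu + beta * risk + trend + sigma * eps)   # survival time in days
    observed = np.minimum(T, follow_up)                  # right censoring at end of follow-up
    event = (T <= follow_up).astype(int)                 # 1 = death observed, 0 = censored
    return risk, observed, event

risk, time, event = simulate_patients()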
Abstract:
Environmentally benign and economical methods for the preparation of industrially important hydroxy acids and diacids were developed. The carboxylic acids, used in polyesters, alkyd resins, and polyamides, were obtained by the oxidation of the corresponding alcohols with hydrogen peroxide or air catalyzed by sodium tungstate or supported noble metals. These oxidations were carried out using water as the solvent. The alcohols are also a useful alternative to the conventional reactants, hydroxyaldehydes and cycloalkanes. The oxidation of 2,2-disubstituted propane-1,3-diols with hydrogen peroxide catalyzed by sodium tungstate afforded 2,2-disubstituted 3-hydroxypropanoic acids and 1,1-disubstituted ethane-1,2-diols as products. A computational study of the Baeyer-Villiger rearrangement of the intermediate 2,2-disubstituted 3-hydroxypropanals gave in-depth data on the mechanism of the reaction. Linear primary diols having a chain length of at least six carbons were easily oxidized with hydrogen peroxide to linear dicarboxylic acids, catalyzed by sodium tungstate. The Pt/C catalyzed air oxidation of 2,2-disubstituted propane-1,3-diols and linear primary diols afforded the highest yield of the corresponding hydroxy acids, while the Pt, Bi/C catalyzed oxidation of the diols afforded the highest yield of the corresponding diacids. The mechanism of the promoted oxidation was best described by the ensemble effect, and by the formation of a complex of the hydroxy and carboxy groups of the hydroxy acids with bismuth atoms. The Pt, Bi/C catalyzed air oxidation of 2-substituted 2-hydroxymethylpropane-1,3-diols gave 2-substituted malonic acids by the decarboxylation of the corresponding triacids. Activated carbon was the best support and bismuth the most efficient promoter in the air oxidation of 2,2-dialkylpropane-1,3-diols to diacids. In oxidations carried out in organic solvents, barium sulfate could be a valuable alternative to activated carbon as a non-flammable support. In the Pt/C catalyzed air oxidation of 2,2-disubstituted propane-1,3-diols to 2,2-disubstituted 3-hydroxypropanoic acids, the small size of the 2-substituents enhanced the rate of the oxidation. When the potential of the platinum catalyst was not controlled, the highest yield of the diacids in the Pt, Bi/C catalyzed air oxidation of 2,2-dialkylpropane-1,3-diols was obtained in the mass-transfer-controlled regime. The most favorable pH of the reaction mixture for the promoted oxidation was 10. A reaction temperature of 40°C prevented the decarboxylation of the diacids.
Abstract:
The transfer matrix method is known to be well suited for a complete analysis of a lumped as well as distributed element, one-dimensional, linear dynamical system with a marked chain topology. However, general subroutines of the type available for classical matrix methods are not available in the current literature on transfer matrix methods. In the present article, general expressions for various aspects of analysis, viz. the natural frequency equation, modal vectors, forced response and filter performance, have been evaluated in terms of a single parameter, referred to as the velocity ratio. Subprograms have been developed for use with the transfer matrix method for the evaluation of the velocity ratio and related parameters. It is shown that a given system, branched or straight-through, can be completely analysed in terms of these basic subprograms on a stored-program digital computer. It is observed that the transfer matrix method with the velocity ratio approach has certain advantages over the existing general matrix methods in the analysis of one-dimensional systems.
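As a minimal illustration of the transfer matrix method for a chain-topology system (not the article's velocity-ratio subprograms), the following Python sketch assembles the 2x2 transfer matrices of a fixed-free spring-mass chain and locates the natural frequencies from the resulting frequency equation; the chain size and parameters are arbitrary.

import numpy as np

# Minimal sketch with arbitrary parameters, not the article's subprograms: transfer matrix
# analysis of a fixed-free chain of N equal masses m coupled by springs of stiffness k.
# The state vector is z = [displacement, force]; each spring and each mass contributes a
# 2x2 transfer matrix, and zero force at the free end gives the natural frequency equation.

def overall_transfer_matrix(omega, N, m, k):
    T_spring = np.array([[1.0, 1.0 / k], [0.0, 1.0]])
    T = np.eye(2)
    for _ in range(N):
        T_mass = np.array([[1.0, 0.0], [-m * omega**2, 1.0]])
        T = T_mass @ T_spring @ T                # wall side first: spring, then mass
    return T

def frequency_equation(omega, N, m, k):
    # fixed end: x = 0; free end: F = 0  =>  element [1, 1] of the overall matrix vanishes
    return overall_transfer_matrix(omega, N, m, k)[1, 1]

def natural_frequencies(N=5, m=1.0, k=1.0, w_max=3.0, n_scan=3000):
    # scan for sign changes of the frequency equation, then refine each root by bisection
    ws = np.linspace(1e-6, w_max, n_scan)
    vals = np.array([frequency_equation(w, N, m, k) for w in ws])
    roots = []
    for i in np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]:
        lo, hi = ws[i], ws[i + 1]
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if np.sign(frequency_equation(mid, N, m, k)) == np.sign(frequency_equation(lo, N, m, k)):
                lo = mid
            else:
                hi = mid
        roots.append(0.5 * (lo + hi))
    return roots

print(natural_frequencies())                     # the five natural frequencies of the chain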
Abstract:
We report here the formation of plasmid linear multimers promoted by the Red system of phage lambda, using a multicopy plasmid carrying the lambda red alpha and red beta genes under the control of the lambda cI857 repressor. Our observations reveal that the multimerization of plasmid DNA is dependent on the red beta and recA genes, suggesting a concerted role for these functions in the formation of plasmid multimers. The formation of multimers occurred in a recBCD+ sbcB+ xthA+ lon genetic background at a higher frequency than in the isogenic lon+ host cells. The multimers comprised tandem repeats of monomer plasmid DNA. Treatment of purified plasmid DNA with exonuclease III revealed the presence of free double-chain ends in the molecules. Determination of the size of the multimeric DNA by pulsed-field gel electrophoresis revealed that the bulk of the DNA was in the range 50-240 kb, representing approximately 5-24 unit lengths of monomeric plasmid DNA. We provide a conceptual framework for Red-system-promoted formation and enhanced accumulation of plasmid linear multimers in lon mutants of E. coli.
Abstract:
In this paper we consider a decentralized supply chain formation problem for linear multi-echelon supply chains when the managers of the individual echelons are autonomous, rational, and intelligent. At each echelon there is a choice of service providers, and the specific problem we solve is that of determining a cost-optimal mix of service providers so as to achieve a desired level of end-to-end delivery performance. Following a mechanism design approach, the problem can be broken into two sub-problems: (1) design of an incentive compatible mechanism to elicit the true cost functions from the echelon managers; (2) formulation and solution of an appropriate optimization problem using the true cost information. In this paper we propose a novel Bayesian incentive compatible mechanism for eliciting the true cost functions. This improves upon existing solutions in the literature, which are all based on the classical Vickrey-Clarke-Groves mechanisms and require significant incentives to be paid to the echelon managers to achieve dominant strategy incentive compatibility. The proposed solution, which we call SCF-BIC (Supply Chain Formation with Bayesian Incentive Compatibility), significantly reduces the cost of supply chain formation. We illustrate the efficacy of the proposed methodology using the example of a three-echelon manufacturing supply chain.
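As a deliberately simplified illustration of the second sub-problem only (and not of the SCF-BIC mechanism itself), the following Python sketch selects one provider per echelon to minimise total cost subject to an end-to-end lead-time target, with deterministic lead times standing in for the paper's probabilistic delivery performance; all provider data are hypothetical.

from itertools import product

# Minimal sketch with hypothetical data: once truthful costs are known, choose one service
# provider per echelon to minimise total cost subject to an end-to-end lead-time target.
# Deterministic lead times stand in for the paper's probabilistic delivery performance.

echelons = [
    [(10, 4), (14, 2), (8, 6)],      # echelon 1: candidate providers as (cost, lead time)
    [(7, 3), (12, 1)],               # echelon 2
    [(9, 5), (11, 3), (15, 2)],      # echelon 3
]

def cheapest_mix(echelons, max_lead_time):
    best = None
    for choice in product(*echelons):                      # one provider per echelon
        cost = sum(c for c, _ in choice)
        lead = sum(t for _, t in choice)
        if lead <= max_lead_time and (best is None or cost < best[0]):
            best = (cost, lead, choice)
    return best

print(cheapest_mix(echelons, max_lead_time=10))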