26 results for Linear optical quantum computation

in CentAUR: Central Archive University of Reading - UK


Relevance: 100.00%

Abstract:

Sol-gel derived inorganic materials are of interest as hosts for non-linear optically active guest molecules, and they offer particular advantages in the field of non-linear optics. Orientationally ordered glasses have been prepared using a sol-gel system based on tetramethoxysilane, methyltrimethoxysilane and the non-linear optical chromophore Disperse Red 1. The novel technique of photo-induced poling was used to generate enhanced levels of polar order. The level of enhancement is strongly dependent on the extent of gelation, and an optimum preparation time of ∼100 h led to an enhancement factor of ∼5. Films prepared in this manner exhibited a high stability of the polar order.

Relevance: 100.00%

Abstract:

New algorithms and microcomputer programs for generating original multilayer designs (and printing a spectral graph) from refractive-index input are presented. The programs, named TSHEBYSHEV, HERPIN and MULTILAYER-SPECTRUM, have produced new designs of narrow-stopband, non-polarizing edge and Tshebyshev optical filters. The computation procedure is an exact synthesis (as far as that is possible); numerical refinement has not been needed.
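
The original programs are not reproduced here, but the "spectral graph from refractive-index input" step can be sketched with the standard characteristic-matrix (transfer-matrix) calculation for a thin-film stack at normal incidence. The function name `stack_reflectance` and the quarter-wave example stack below are illustrative assumptions, not the paper's designs.

```python
import numpy as np

def stack_reflectance(n_layers, d_layers, n_inc, n_sub, wavelengths):
    """Spectral reflectance of a thin-film multilayer at normal incidence via the
    characteristic-matrix method. n_layers, d_layers: refractive indices and physical
    thicknesses (same units as wavelength), ordered from the incident medium to the substrate."""
    R = np.empty_like(wavelengths, dtype=float)
    for k, lam in enumerate(wavelengths):
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):
            delta = 2.0 * np.pi * n * d / lam          # phase thickness of the layer
            layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
            M = M @ layer
        B, C = M @ np.array([1.0, n_sub])               # admittance of the assembly
        r = (n_inc * B - C) / (n_inc * B + C)            # amplitude reflection coefficient
        R[k] = np.abs(r) ** 2
    return R

# Example (assumed, for illustration): quarter-wave high/low stack centred at 550 nm,
# a simple narrow-stopband design on a glass substrate in air.
lam0 = 550.0
nH, nL = 2.35, 1.38
n_layers = [nH, nL] * 8 + [nH]
d_layers = [lam0 / (4 * n) for n in n_layers]
wl = np.linspace(400.0, 800.0, 401)
R = stack_reflectance(n_layers, d_layers, 1.0, 1.52, wl)
```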

Relevance: 100.00%

Abstract:

Several novel compounds with the non-linear optical chromophore 2-amino-5-nitropyridine (2A5NP) and Keggin polyoxoanions (alpha-isomers), having the general formula (2A5NP)mHn[XM12O40]·xH2O, M = Mo, W, were synthesised. Compounds were obtained with X = P, n = 3, m = 3 and 4, and with X = Si, n = m = 4 (x = 2-6). Thus, for each of the anions [PMo12O40]3- and [PW12O40]3-, two different compounds were obtained, with the same anion and organic counterpart but with a different stoichiometric ratio. These exhibited different charge-transfer properties and thermal stability. All compounds were characterised by spectroscopic and analytical techniques. The single-crystal X-ray diffraction structure of (2A5NP)4H3[PMo12O40]·2.5H2O·0.5C2H5OH showed that the water solvent molecules and the organic chromophores are assembled via infinite one-dimensional chains of hydrogen bonds with the formation of open channels, which accommodate the [PMo12O40]3- anions and ethanol solvent molecules.

Relevance: 100.00%

Abstract:

Current methods for estimating vegetation parameters are generally sub-optimal in the way they exploit information and do not generally consider uncertainties. We look forward to a future where operational data assimilation schemes improve estimates by tracking land surface processes and exploiting multiple types of observations. Data assimilation schemes seek to combine observations and models in a statistically optimal way, taking into account uncertainty in both, but have not yet been much exploited in this area. The EO-LDAS scheme and prototype, developed under ESA funding, is designed to exploit the anticipated wealth of data that will be available under GMES missions, such as the Sentinel family of satellites, to provide improved mapping of land surface biophysical parameters. This paper describes the EO-LDAS implementation and explores some of its core functionality. EO-LDAS is a weak constraint variational data assimilation system. The prototype provides a mechanism for constraint based on a prior estimate of the state vector, a linear dynamic model, and Earth Observation data (top-of-canopy reflectance here). The observation operator is a non-linear optical radiative transfer model for a vegetation canopy with a soil lower boundary, operating over the range 400 to 2500 nm. Adjoint codes for all model and operator components are provided in the prototype by automatic differentiation of the computer codes. In this paper, EO-LDAS is applied to the problem of daily estimation of six of the parameters controlling the radiative transfer operator over the course of a year (> 2000 state vector elements). Zero- and first-order process model constraints are implemented and explored as the dynamic model. The assimilation estimates all state vector elements simultaneously. This is performed in the context of a typical Sentinel-2 MSI operating scenario, using synthetic MSI observations simulated with the observation operator, with uncertainties typical of those achieved by optical sensors assumed for the data. The experiments consider a baseline state vector estimation case where no dynamic constraints are applied, and assess the impact of dynamic constraints on the a posteriori uncertainties. The results demonstrate that reductions in uncertainty by a factor of up to two might be obtained by applying the sorts of dynamic constraints used here. The hyperparameters (dynamic model uncertainty) required to control the assimilation are estimated by a cross-validation exercise. The result of the assimilation is seen to be robust to missing observations, even with quite large data gaps.
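
As a sketch of the weak-constraint variational idea described above (not the EO-LDAS prototype itself), the toy problem below estimates a year-long daily trajectory of a single parameter from sparse, noisy observations by minimising one cost function with a prior term, an observation term and a first-order difference dynamic-model term. The observation operator is replaced by the identity as a stand-in for the radiative transfer model; `obs_op`, the uncertainties and all other names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

n_days = 365
obs_op = lambda x: x                      # placeholder observation operator (assumed)

def cost_and_grad(x, x_b, sigma_b, y, obs_idx, sigma_o, sigma_m):
    # Prior (background) term
    r_b = (x - x_b) / sigma_b
    J = 0.5 * r_b @ r_b
    g = r_b / sigma_b
    # Observation term, applied only where observations exist
    r_o = (obs_op(x)[obs_idx] - y) / sigma_o
    J += 0.5 * r_o @ r_o
    g_o = np.zeros_like(x)
    g_o[obs_idx] = r_o / sigma_o          # valid because obs_op is the identity here
    g += g_o
    # First-order dynamic (smoothness) constraint: x_{t+1} - x_t ~ N(0, sigma_m^2)
    d = np.diff(x) / sigma_m
    J += 0.5 * d @ d
    g[:-1] -= d / sigma_m
    g[1:] += d / sigma_m
    return J, g

# Synthetic example with an observation every 10 days and a flat prior
rng = np.random.default_rng(0)
truth = 0.5 + 0.3 * np.sin(2 * np.pi * np.arange(n_days) / 365.0)
obs_idx = np.arange(0, n_days, 10)
y = truth[obs_idx] + rng.normal(0.0, 0.05, obs_idx.size)
x_b = np.full(n_days, 0.5)

res = minimize(cost_and_grad, x_b, jac=True,
               args=(x_b, 0.5, y, obs_idx, 0.05, 0.02), method="L-BFGS-B")
x_est = res.x
```

Tightening sigma_m strengthens the dynamic constraint and reduces the spread between observations, which is the mechanism behind the uncertainty reduction discussed above.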

Relevance: 30.00%

Abstract:

The length and time scales accessible to optical tweezers make them an ideal tool for the examination of colloidal systems. Embedded high-refractive-index tracer particles in an index-matched hard-sphere suspension provide 'handles' within the system with which to investigate its mechanical behaviour. Passive observations of the motion of a single probe particle give information about the linear response behaviour of the system, which can be linked to the macroscopic frequency-dependent viscous and elastic moduli of the suspension. Separate 'dragging' experiments allow observation of a sample's nonlinear response to an applied stress on a particle-by-particle basis. Optical force measurements have given new data about the dynamics of phase transitions and particle interactions; an example in this study is the transition from liquid-like to solid-like behaviour, and the emergence of a yield stress and other effects attributable to nearest-neighbour caging. The forces needed to break such cages and the frequency of these cage-breaking events are investigated in detail for systems close to the glass transition.
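
One standard route from the passive probe motion to frequency-dependent moduli, as described above, is the generalized Stokes-Einstein relation in its local power-law (Mason) approximation. The sketch below is not necessarily the analysis used by the authors: it assumes a 3D mean-squared displacement and illustrative probe and diffusion parameters.

```python
import numpy as np
from scipy.special import gamma

kB = 1.380649e-23  # Boltzmann constant, J/K

def microrheology_moduli(t, msd, radius, T=295.0):
    """Estimate G'(omega) and G''(omega) from a probe particle's 3D mean-squared
    displacement using the local power-law (Mason) approximation to the
    generalized Stokes-Einstein relation."""
    logt, logm = np.log(t), np.log(msd)
    alpha = np.gradient(logm, logt)                  # local logarithmic slope of the MSD
    omega = 1.0 / t
    Gmag = kB * T / (np.pi * radius * msd * gamma(1.0 + alpha))
    G_storage = Gmag * np.cos(0.5 * np.pi * alpha)   # elastic modulus G'
    G_loss = Gmag * np.sin(0.5 * np.pi * alpha)      # viscous modulus G''
    return omega, G_storage, G_loss

# Sanity check: a purely viscous medium (MSD = 6*D*t) should give G' ~ 0 and G'' = eta*omega.
t = np.logspace(-3, 1, 50)
D = 4.3e-13                                          # illustrative diffusion coefficient, m^2/s
omega, Gp, Gpp = microrheology_moduli(t, 6.0 * D * t, radius=0.5e-6)
```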

Relevance: 30.00%

Abstract:

Matrix isolation IR spectroscopy has been used to study the vacuum pyrolysis of 1,1,3,3-tetramethyldisiloxane (L1), 1,1,3,3,5,5-hexamethyltrisiloxane (L2) and 3H,5H-octamethyltetrasiloxane (L3) at ca. 1000 K in a flow reactor at low pressures. The hydrocarbons CH3, CH4, C2H2, C2H4, and C2H6 were observed as prominent pyrolysis products in all three systems, and amongst the weaker features are bands arising from the methylsilanes Me2SiH2 (for L1 and L2) and Me3SiH (for L3). The fundamental of SiO was also observed very weakly. By use of quantum chemical calculations combined with earlier kinetic models, mechanisms have been proposed involving the intermediacy of the silanones Me2Si=O and MeSiH=O. Model calculations on the decomposition pathways of H3SiOSiH3 and H3SiOSiH2OSiH3 show that silanone elimination is favoured over silylene extrusion.

Relevance: 30.00%

Abstract:

In this paper, new robust nonlinear model construction algorithms for a large class of linear-in-the-parameters models are introduced to enhance model robustness: three algorithms that combine A-optimality, D-optimality or the PRESS statistic (Predicted REsidual Sum of Squares), respectively, with the regularised orthogonal least squares algorithm. A common characteristic of these algorithms is that the inherent computational efficiency associated with the orthogonalisation scheme in orthogonal least squares or regularised orthogonal least squares has been extended, so that the new algorithms are computationally efficient. A numerical example is included to demonstrate the effectiveness of the algorithms. Copyright (C) 2003 IFAC.
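
The sketch below is an illustration of the shared skeleton of such algorithms rather than the paper's exact procedures: forward selection of regressors for a linear-in-the-parameters model, ranked by a regularised orthogonal least squares error reduction ratio, to which a D-optimality or PRESS term would be added in the combined criteria. The function name and data are assumptions.

```python
import numpy as np

def rols_forward_select(P, y, lam=1e-3, n_terms=5):
    """Forward regressor selection by (regularised) orthogonal least squares:
    candidates are Gram-Schmidt orthogonalised against the selected basis and
    ranked by their regularised error reduction ratio."""
    N, M = P.shape
    selected, W = [], []                       # chosen column indices, orthogonal basis vectors
    yty = float(y @ y)
    for _ in range(n_terms):
        best, best_err, best_w = None, -np.inf, None
        for j in range(M):
            if j in selected:
                continue
            w = P[:, j].copy()
            for wk in W:                       # orthogonalise against the selected basis
                w -= (wk @ P[:, j]) / (wk @ wk) * wk
            wtw = float(w @ w)
            if wtw < 1e-12:                    # numerically dependent candidate
                continue
            g = (w @ y) / (wtw + lam)          # regularised orthogonal weight
            err = g * g * (wtw + lam) / yty    # regularised error reduction ratio
            if err > best_err:
                best, best_err, best_w = j, err, w
        if best is None:
            break
        selected.append(best)
        W.append(best_w)
    # Least-squares refit of the original (non-orthogonal) selected regressors
    theta, *_ = np.linalg.lstsq(P[:, selected], y, rcond=None)
    return selected, theta

# Example: recover a sparse linear-in-the-parameters model from noisy data
rng = np.random.default_rng(1)
P = rng.normal(size=(200, 20))
y = 2.0 * P[:, 3] - 1.5 * P[:, 11] + 0.1 * rng.normal(size=200)
terms, theta = rols_forward_select(P, y, n_terms=2)
```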

Relevance: 30.00%

Abstract:

The perspex machine arose from the unification of projective geometry with the Turing machine. It uses a total arithmetic, called transreal arithmetic, that contains real arithmetic and allows division by zero. Transreal arithmetic is redefined here. The new arithmetic has both a positive and a negative infinity, which lie at the extremes of the number line, and a number, nullity, that lies off the number line. We prove that nullity, 0/0, is a number. Hence a number may have one of four signs: negative, zero, positive, or nullity. It is, therefore, impossible to encode the sign of a number in one bit, as floating-point arithmetic attempts to do, resulting in the difficulty of having both positive and negative zeros and NaNs. Transrational arithmetic is consistent with Cantor arithmetic. In an extension to real arithmetic, the product of zero, an infinity, or nullity with its reciprocal is nullity, not unity. This avoids the usual contradictions that follow from allowing division by zero. Transreal arithmetic has a fixed algebraic structure and does not admit options as IEEE floating-point arithmetic does. Most significantly, nullity has a simple semantics that is related to zero. Zero means "no value" and nullity means "no information." We argue that nullity is as useful to a manufactured computer as zero is to a human computer. The perspex machine is intended to offer one solution to the mind-body problem by showing how the computable aspects of mind, and perhaps the whole of mind, relate to the geometrical aspects of body and, perhaps, the whole of body. We review some of Turing's writings and show that he held the view that his machine has spatial properties; in particular, that it has the property of being a 7D lattice of compact spaces. Thus, we read Turing as believing that his machine relates computation to geometrical bodies. We simplify the perspex machine by substituting an augmented Euclidean geometry for projective geometry. This leads to a general-linear perspex machine which is very much easier to program than the original perspex machine. We then show how to map the whole of perspex space into a unit cube. This allows us to construct a fractal of perspex machines with the cardinality of a real-numbered line or space. This fractal is the universal perspex machine. It can solve, in unit time, the halting problem for itself and for all perspex machines instantiated in real-numbered space, including all Turing machines. We cite an experiment that has been proposed to test the physical reality of the perspex machine's model of time, but we make no claim that the physical universe works this way or that it has the cardinality of the perspex machine. We leave it that the perspex machine provides an upper bound on the computational properties of physical things, including manufactured computers and biological organisms, that have a cardinality no greater than the real-number line.
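
A minimal sketch, encoding only the arithmetic rules quoted in the abstract (division by zero gives a signed infinity, 0/0 is nullity, nullity propagates, and the product of zero, an infinity or nullity with its reciprocal is nullity rather than unity). The `NULLITY` sentinel and the function names are illustrative; the full transreal axioms are defined in the paper, not here.

```python
import math

NULLITY = "nullity"                        # stand-in value for the transreal number nullity
INF, NINF = math.inf, -math.inf

def t_div(a, b):
    """Transreal division: total, i.e. defined for every pair of arguments."""
    if a == NULLITY or b == NULLITY:
        return NULLITY
    if b == 0:
        if a == 0:
            return NULLITY                 # 0/0 = nullity
        return INF if a > 0 else NINF      # division by zero gives a signed infinity
    if b in (INF, NINF):
        if a in (INF, NINF):
            return NULLITY                 # infinity / infinity carries no information
        return 0.0                         # finite / infinity = 0
    return a / b

def t_mul(a, b):
    """Transreal multiplication, with zero times an infinity giving nullity."""
    if a == NULLITY or b == NULLITY:
        return NULLITY
    if (a == 0 and b in (INF, NINF)) or (b == 0 and a in (INF, NINF)):
        return NULLITY
    return a * b

# The product of zero, an infinity or nullity with its reciprocal is nullity:
assert t_mul(0, t_div(1, 0)) == NULLITY            # 0 * (1/0)
assert t_mul(INF, t_div(1, INF)) == NULLITY        # infinity * (1/infinity)
assert t_mul(NULLITY, t_div(1, NULLITY)) == NULLITY
```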

Relevance: 30.00%

Abstract:

The climate belongs to the class of non-equilibrium forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows one to compute the response of the system in terms of expectation values of explicit and computable functions of the phase space, averaged over the invariant measure of the unperturbed state. We choose as test bed a classical version of the Lorenz 96 model, which, in spite of its simplicity, has a well-recognized prototypical value, as it is a spatially extended one-dimensional model and presents the basic ingredients of the actual atmosphere, such as dissipation, advection and the presence of an external forcing. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions as well as the integral constraints can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy. Some newly obtained empirical closure equations for such parameters allow one to define these properties as explicit functions of the unperturbed forcing parameter alone for a general class of chaotic Lorenz 96 models. We then verify the theoretical predictions from the outputs of the simulations up to a high degree of precision. The theory is used to explain differences in the response of local and global observables, to define the intensive properties of the system, which do not depend on the spatial resolution of the Lorenz 96 model, and to generalize the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations of general time patterns into changes in the expectation value of the considered observable for finite as well as infinite time. Finally, we propose a simple yet general methodology to study general climate change problems on virtually any time scale by resorting to only well-selected simulations, and by taking full advantage of ensemble methods. The specific case of globally averaged surface temperature response to a general pattern of change of the CO2 concentration is discussed. We believe that the proposed approach may constitute a mathematically rigorous and practically very effective way to approach the problem of climate sensitivity, climate prediction, and climate change from a radically new perspective.
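
For orientation, the sketch below sets up the test bed referred to above, the Lorenz 96 model, and estimates the response of a global observable (the mean energy) to a step change of the forcing F by brute-force ensemble averaging over initial conditions drawn from the unperturbed attractor. This is the quantity whose existence and properties the Ruelle theory characterizes, not the theory's own response formula; the parameter values, ensemble sizes and observable are illustrative assumptions.

```python
import numpy as np

N, F0, dF, dt = 36, 8.0, 0.5, 0.01

def l96_tendency(x, F):
    # dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F  (cyclic indices)
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, F):
    k1 = l96_tendency(x, F)
    k2 = l96_tendency(x + 0.5 * dt * k1, F)
    k3 = l96_tendency(x + 0.5 * dt * k2, F)
    k4 = l96_tendency(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def spin_up(n_members, n_steps=2000, seed=0):
    """Relax an ensemble of perturbed states onto the unperturbed attractor."""
    rng = np.random.default_rng(seed)
    ens = F0 + 0.01 * rng.normal(size=(n_members, N))
    for _ in range(n_steps):
        ens = np.array([rk4_step(x, F0) for x in ens])
    return ens

def response(ens, n_steps=1000):
    """Ensemble-mean change of the mean energy e = mean(x_i^2)/2 after F0 -> F0 + dF at t = 0."""
    pert, ctrl = ens.copy(), ens.copy()
    delta_e = np.empty(n_steps)
    for t in range(n_steps):
        pert = np.array([rk4_step(x, F0 + dF) for x in pert])
        ctrl = np.array([rk4_step(x, F0) for x in ctrl])
        delta_e[t] = 0.5 * (np.mean(pert**2) - np.mean(ctrl**2))
    return delta_e

ens = spin_up(n_members=20)
de = response(ens)   # divide by dF for a finite-difference estimate of the linear response
```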

Relevance: 30.00%

Abstract:

Associative memory networks such as Radial Basis Functions, Neurofuzzy and Fuzzy Logic networks used for modelling nonlinear processes suffer from the curse of dimensionality (COD), in that as the input dimension increases the parameterization, computation cost, training data requirements, etc. increase exponentially. Here a new algorithm is introduced for the construction of Delaunay input-space-partitioned optimal piecewise locally linear models, to overcome the COD as well as to generate locally linear models directly amenable to linear control and estimation algorithms. The training of the model is configured as a new mixture-of-experts network with a new fast decision rule derived using convex set theory. A very fast simulated reannealing (VFSR) algorithm is utilized to search for a globally optimal solution of the Delaunay input space partition. A benchmark non-linear time series is used to demonstrate the new approach.
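
A minimal sketch of the underlying model structure, not the paper's algorithm: the input space is partitioned by a Delaunay triangulation of a set of vertices, a local linear model is fitted to the data falling in each simplex, and prediction looks up the containing simplex. In the paper the vertex locations are optimised by very fast simulated reannealing within a mixture-of-experts formulation; here they are simply assumed.

```python
import numpy as np
from scipy.spatial import Delaunay

class DelaunayPiecewiseLinear:
    def __init__(self, vertices):
        self.tri = Delaunay(vertices)               # Delaunay partition of the input space
        self.coefs = {}

    def fit(self, X, y):
        simplex = self.tri.find_simplex(X)          # which simplex each sample falls in
        Xa = np.hstack([X, np.ones((len(X), 1))])   # affine design matrix [x, 1]
        for s in np.unique(simplex):
            if s < 0:
                continue                            # samples outside the triangulation
            idx = simplex == s
            if idx.sum() >= X.shape[1] + 1:         # enough samples for a local affine fit
                self.coefs[s], *_ = np.linalg.lstsq(Xa[idx], y[idx], rcond=None)
        return self

    def predict(self, X):
        simplex = self.tri.find_simplex(X)
        Xa = np.hstack([X, np.ones((len(X), 1))])
        out = np.full(len(X), np.nan)
        for i, s in enumerate(simplex):
            if s in self.coefs:
                out[i] = Xa[i] @ self.coefs[s]
        return out

# Example: approximate a smooth nonlinear surface with local linear pieces
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(2 * X[:, 0]) * np.cos(X[:, 1])
verts = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0], [0.5, -0.5], [-0.5, 0.5]])
model = DelaunayPiecewiseLinear(verts).fit(X, y)
y_hat = model.predict(X)
```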

Relevance: 30.00%

Abstract:

We provide a unified framework for a range of linear transforms that can be used for the analysis of terahertz spectroscopic data, with particular emphasis on their application to the measurement of leaf water content. The use of linear transforms for filtering, regression, and classification is discussed. For illustration, a classification problem involving leaves at three stages of drought and a prediction problem involving simulated spectra are presented. Issues resulting from scaling the data set are discussed. Using Lagrange multipliers, we arrive at the transform that yields the maximum separation between the spectra and show that this optimal transform is equivalent to computing the Euclidean distance between the samples. The optimal linear transform is compared with the average over all the spectra, as well as with the Karhunen–Loève transform, for discriminating a wet leaf from a dry leaf. We show that taking several principal components into account is equivalent to defining new axes in which the data are to be analyzed. The procedure shows that the coefficients of the Karhunen–Loève transform are well suited to the classification of spectra. This is in line with expectations, as these coefficients are built from the statistical properties of the data set analyzed.
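
A minimal sketch of the kind of linear transform discussed above: a Karhunen-Loeve (principal component) transform of a set of spectra followed by nearest-class-mean classification using Euclidean distance in the transformed space. The synthetic "wet" and "dry" spectra, the frequency axis and the function names are assumptions standing in for the paper's measured terahertz leaf spectra and preprocessing.

```python
import numpy as np

def kl_transform(spectra, n_components):
    """Project spectra (rows = samples, columns = frequency points) onto the
    leading eigenvectors of the data covariance matrix."""
    mean = spectra.mean(axis=0)
    centred = spectra - mean
    cov = np.cov(centred, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)            # eigenvalues in ascending order
    basis = eigvec[:, ::-1][:, :n_components]       # leading principal axes
    return centred @ basis, mean, basis

def classify(coeff, labels, query_coeff):
    """Assign each query to the class whose mean coefficient vector is nearest
    (Euclidean distance) in the Karhunen-Loeve space."""
    classes = np.unique(labels)
    means = np.array([coeff[labels == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(query_coeff[:, None, :] - means[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# Synthetic example: 'wet' and 'dry' spectra differing in broadband absorption
rng = np.random.default_rng(3)
freq = np.linspace(0.2, 2.0, 200)                    # illustrative THz frequency axis
wet = 1.0 + 0.8 * np.exp(-freq) + 0.05 * rng.normal(size=(30, freq.size))
dry = 0.4 + 0.3 * np.exp(-freq) + 0.05 * rng.normal(size=(30, freq.size))
X = np.vstack([wet, dry])
labels = np.array(["wet"] * 30 + ["dry"] * 30)

coeff, mean, basis = kl_transform(X, n_components=3)
pred = classify(coeff, labels, (X - mean) @ basis)   # resubstitution check
```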