983 results for Legendre Polynomial Dipole Moment Generating Function
Simulating quantum interference in a three-level system with perpendicular transition dipole moments
Abstract:
We consider a three-level V-type atomic system with the ground state coupled by a laser field to only one of the excited states, and with the two excited states coupled together by a dc field. Although the dipole moments of the two dipole-allowed transitions are assumed perpendicular, we demonstrate that this system emulates to a large degree a three-level system with parallel dipole moments; the latter is a system that exhibits quantum interference and displays a number of interesting features. As examples, we show that the system can produce extremely large values of the intensity-intensity correlation function, and that its resonance fluorescence spectrum can display ultranarrow lines. The dressed states for this system are identified, and the spectral features are interpreted in terms of transitions among these dressed states. We also show that this system is capable of exhibiting considerable squeezing.
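The dynamics described above can be prototyped with a standard open-quantum-systems toolkit. Below is a minimal sketch using QuTiP (assumed available); the Hamiltonian layout follows the description in the abstract, but the operator names and all parameter values are illustrative choices, not taken from the paper.

```python
# Sketch: steady state and fluorescence spectrum of a V-type three-level
# atom with a laser driving |g>-|e1> and a dc field coupling |e1>-|e2>.
# Parameter values are illustrative only.
import numpy as np
from qutip import basis, steadystate, spectrum

g, e1, e2 = (basis(3, i) for i in range(3))

Omega = 1.0                 # Rabi frequency of the laser on |g> <-> |e1>
J = 0.5                     # dc-field coupling between the excited states
gamma1, gamma2 = 1.0, 1.0   # spontaneous decay rates

# Rotating-frame Hamiltonian, laser on resonance with |g>-|e1>
H = (Omega / 2) * (e1 * g.dag() + g * e1.dag()) \
    + J * (e2 * e1.dag() + e1 * e2.dag())

# Perpendicular dipoles: each excited state decays via its own channel
c_ops = [np.sqrt(gamma1) * g * e1.dag(),
         np.sqrt(gamma2) * g * e2.dag()]

rho_ss = steadystate(H, c_ops)

# Fluorescence spectrum of the driven transition
sigma = g * e1.dag()        # lowering operator |g><e1|
wlist = np.linspace(-5, 5, 400)
spec = spectrum(H, wlist, c_ops, sigma.dag(), sigma)
print(rho_ss.full()[0, 0].real, spec.max())
```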
Abstract:
We have used the Two-Degree Field (2dF) instrument on the Anglo-Australian Telescope (AAT) to obtain redshifts of a sample of z < 3 and 18.0 < g < 21.85 quasars selected from Sloan Digital Sky Survey (SDSS) imaging. These data are part of a larger joint programme between the SDSS and 2dF communities to obtain spectra of faint quasars and luminous red galaxies, namely the 2dF-SDSS LRG and QSO (2SLAQ) Survey. We describe the quasar selection algorithm and present the resulting number counts and luminosity function of 5645 quasars in 105.7 deg². The bright-end number counts and luminosity functions agree well with determinations from the 2dF QSO Redshift Survey (2QZ) data to g ≈ 20.2. However, at the faint end, the 2SLAQ number counts and luminosity functions are steeper (i.e. require more faint quasars) than the final 2QZ results from Croom et al., but are consistent with the preliminary 2QZ results from Boyle et al. Using the functional form adopted for the 2QZ analysis (a double power law with pure luminosity evolution characterized by a second-order polynomial in redshift), we find a faint-end slope of β = -1.78 ± 0.03 if we allow all of the parameters to vary, and β = -1.45 ± 0.03 if we allow only the faint-end slope and normalization to vary (holding all other parameters equal to the final 2QZ values). Over the magnitude range covered by the 2SLAQ survey, our maximum-likelihood fit to the data yields 32 per cent more quasars than the final 2QZ parametrization, but is not inconsistent with other g > 21 deep surveys for quasars. The 2SLAQ data exhibit no well-defined 'break' in the number counts or luminosity function, but do clearly flatten with increasing magnitude. Finally, we find that the shape of the quasar luminosity function derived from 2SLAQ is in good agreement with that derived from Type I quasars found in hard X-ray surveys.
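The 2QZ-style functional form referred to above is easy to write down explicitly. The sketch below encodes a double power law in absolute magnitude with pure luminosity evolution, where the break magnitude evolves as a second-order polynomial in redshift; the faint-end slope β = -1.45 is quoted from the abstract, while every other parameter value is an illustrative placeholder rather than a fitted 2SLAQ or 2QZ value.

```python
# Sketch: double power-law quasar luminosity function with pure
# luminosity evolution (2QZ-style parametrization). Parameter values
# other than beta are illustrative placeholders.
import numpy as np

def M_star(z, M0=-21.6, k1=1.4, k2=-0.3):
    """Break magnitude evolving as a 2nd-order polynomial in redshift."""
    return M0 - 2.5 * (k1 * z + k2 * z**2)

def phi(M, z, phi_star=1e-6, alpha=-3.3, beta=-1.45):
    """Comoving space density per unit magnitude at absolute magnitude M."""
    dM = M - M_star(z)
    return phi_star / (10**(0.4 * (alpha + 1) * dM)
                       + 10**(0.4 * (beta + 1) * dM))

print(phi(np.array([-26.0, -24.0, -22.0]), z=1.5))
```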
Abstract:
Morphology, occlusal surface topography, macrowear, and microwear features of parrotfish pharyngeal teeth were investigated using scanning electron microscopy of whole and sectioned pharyngeal jaws and teeth, in order to relate microstructural characteristics to the function of the pharyngeal mill. Pharyngeal tooth migration is anterior in the lower jaw (fifth ceratobranchial) and posterior in the upper jaw (paired third pharyngobranchials), making the interaction of occlusal surfaces and wear-generating forces complex. The extent of wear can be used to define three regions through which teeth migrate: a region containing newly erupted teeth showing little or no wear; a midregion in which the apical enameloid is swiftly worn; and a region containing teeth with only basal enameloid remaining, which shows low to moderate wear. The shape of the occlusal surface alters as the teeth progress along the pharyngeal jaw, generating conditions that appear suited to the reduction of coral particles. It is likely that the interaction between the coral particles and algal cells during grinding ruptures the algal cells, liberating the cell contents from which parrotfish obtain their nutrients.
Abstract:
In this paper, we present a framework for Bayesian inference in continuous-time diffusion processes. The new method is directly related to the recently proposed variational Gaussian Process approximation (VGPA) approach to Bayesian smoothing of partially observed diffusions. By adopting a basis function expansion (BF-VGPA), both the time-dependent control parameters of the approximating Gaussian process and its moment equations are projected onto a lower-dimensional subspace. This allows us both to reduce the computational complexity and to eliminate the time discretisation used in the previous algorithm. The new algorithm is tested on an Ornstein-Uhlenbeck process. Our preliminary results show that the BF-VGPA algorithm provides reasonably accurate state estimation using a small number of basis functions.
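As a rough illustration of the ingredients involved, the sketch below simulates an Ornstein-Uhlenbeck-type path by Euler-Maruyama and represents a time-dependent drift coefficient in a small radial basis, in the spirit of the basis-function expansion; it is not the authors' BF-VGPA algorithm, and all names and parameter values are assumptions.

```python
# Sketch: Ornstein-Uhlenbeck path via Euler-Maruyama, with a
# time-dependent drift coefficient expanded in a few radial basis
# functions, in the spirit of BF-VGPA. Values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
T, n = 10.0, 1000
dt = T / n
t = np.linspace(0.0, T, n + 1)

sigma = 0.3                        # diffusion coefficient

centres = np.linspace(0.0, T, 8)   # small number of basis functions
ell = 1.0                          # basis length-scale

def A(ti, w):
    """Time-dependent drift coefficient as a radial-basis expansion."""
    return w @ np.exp(-0.5 * ((ti - centres) / ell) ** 2)

w = rng.normal(loc=0.5, scale=0.1, size=centres.size)  # stand-in weights

# Euler-Maruyama for dx = -A(t) x dt + sigma dW
x = np.zeros(n + 1)
for i in range(n):
    x[i + 1] = x[i] - A(t[i], w) * x[i] * dt \
               + sigma * np.sqrt(dt) * rng.normal()
print(x[-5:])
```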
Abstract:
G protein-coupled receptors (GPCRs) are successfully exploited as drug targets. As our understanding of how distinct GPCR subtypes can be generated expands, so do possibilities for therapeutic intervention via these receptors. Receptor activity-modifying proteins (RAMPs) are excellent examples of proteins that enhance diversity in GPCR function. They facilitate the creation of binding pockets, controlling the pharmacology of some GPCRs. Moreover, they have the ability to regulate cell-surface trafficking, internalisation and signalling of GPCRs, creating novel opportunities for drug discovery. RAMPs could be directly targeted by drugs, or advantage could be taken of unique RAMP/GPCR interfaces for generating highly selective ligands.
Abstract:
Automatically generating maps of a measured variable of interest can be problematic. In this work we focus on the monitoring network context, where observations are collected and reported by a network of sensors and are then transformed into interpolated maps for use in decision making. Using traditional geostatistical methods, estimating the covariance structure of data collected in an emergency situation can be difficult. Variogram determination, whether by method-of-moments estimators or by maximum likelihood, is very sensitive to extreme values. Even when a monitoring network is in a routine mode of operation, sensors can sporadically malfunction and report extreme values. If these extreme data destabilise the model, causing the covariance structure of the observed data to be incorrectly estimated, the generated maps will be of little value, and the uncertainty estimates in particular will be misleading. Marchant and Lark [2007] propose a REML estimator for the covariance, which is shown to work on small data sets with a manual selection of the damping parameter in the robust likelihood. We show how this can be extended to the treatment of large data sets, together with an automated approach to all parameter estimation. The projected process kriging framework of Ingram et al. [2007] is extended to allow the use of robust likelihood functions, including the two-component Gaussian and the Huber function. We show how our algorithm is further refined to reduce the computational complexity while at the same time minimising any loss of information. To show the benefits of this method, we use data collected from radiation monitoring networks across Europe. We compare our results to those obtained from traditional kriging methodologies and include comparisons with Box-Cox transformations of the data. We discuss the issue of whether to treat or ignore extreme values, making the distinction between robust methods, which ignore outliers, and transformation methods, which treat them as part of the (transformed) process. Using a case study based on an extreme radiological event over a large area, we show how radiation data collected from monitoring networks can be analysed automatically and then used to generate reliable maps to inform decision making. We show the limitations of the methods and discuss potential extensions to remedy these.
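The Huber function mentioned above is the simplest of the robust ingredients: it leaves small standardized residuals untouched and downweights gross outliers. The sketch below shows only that weighting step, under assumed names; the REML estimation and projected process kriging machinery of the paper are not reproduced.

```python
# Sketch: Huber-type downweighting of standardized residuals, the basic
# building block of the robust likelihoods discussed above.
import numpy as np

def huber_weights(r, c=1.345):
    """Weight 1 inside [-c, c]; decays as c/|r| for gross outliers."""
    r = np.asarray(r, dtype=float)
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / np.maximum(a, 1e-12))

residuals = np.array([0.2, -0.8, 1.1, 6.5, -9.0])  # two gross outliers
print(huber_weights(residuals))
# the malfunctioning-sensor values receive weights well below 1
```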
Abstract:
This thesis first considers the calibration and signal processing requirements of a neuromagnetometer for the measurement of human visual function. Gradiometer calibration using straight wire grids is examined and optimal grid configurations are determined, given realistic constructional tolerances. Simulations show that for a gradiometer balance of 1:10⁴ and a wire spacing error of 0.25 mm, the achievable calibration accuracy is 0.3% in gain, 0.3 mm in position and 0.6° in orientation. Practical results with a 19-channel second-order gradiometer based system exceed this performance. The real-time application of adaptive reference noise cancellation filtering to running-average evoked response data is examined. In the steady state, the filter can be assumed to be driven by a non-stationary step input arising at epoch boundaries. Based on empirical measures of this driving step, an optimal progression for the filter time constant is proposed which improves upon fixed-time-constant filter performance. The incorporation of the time-derivatives of the reference channels was found to improve the performance of the adaptive filtering algorithm by 15-20% for unaveraged data, falling to 5% with averaging. The thesis concludes with a neuromagnetic investigation of evoked cortical responses to chromatic and luminance grating stimuli. The global magnetic field power of evoked responses to the onset of sinusoidal gratings was shown to have distinct chromatic and luminance sensitive components. Analysis of the results, using a single equivalent current dipole model, shows that these components arise from activity within two distinct cortical locations. Co-registration of the resulting current source localisations with MRI shows a chromatically responsive area lying along the midline within the calcarine fissure, possibly extending onto the lingual and cuneal gyri. It is postulated that this area is the human homologue of the primate cortical area V4.
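The adaptive reference noise cancellation step, including the use of time-derivatives of the reference channels, can be sketched with a plain LMS filter. The toy signals, pickup model and step size below are assumptions for illustration, not the thesis' recordings or filter settings.

```python
# Sketch: LMS adaptive reference noise cancellation with the reference
# channel and its time-derivative as filter inputs. Toy data only.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
noise = rng.normal(size=n)                        # environmental noise
ref = 0.8 * noise + 0.2 * np.roll(noise, 1)       # reference-channel pickup
signal = np.sin(2 * np.pi * 0.01 * np.arange(n))  # evoked-response stand-in
primary = signal + noise                          # contaminated channel

dref = np.gradient(ref)                           # derivative reference
X = np.column_stack([ref, dref])

mu = 0.005                                        # LMS step size
w = np.zeros(2)
clean = np.empty(n)
for i in range(n):
    e = primary[i] - w @ X[i]   # error signal = noise-cancelled output
    w += 2 * mu * e * X[i]      # LMS weight update
    clean[i] = e

# residual noise before vs after cancellation
print(np.std(primary - signal), np.std(clean - signal))
```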
Abstract:
Mathematics Subject Classification: 33C10, 33D60, 26D15, 33D05, 33D15, 33D90
Abstract:
Mathematics Subject Classification: 33C60, 33C20, 44A15
Abstract:
2000 Mathematics Subject Classification: Primary 47A48, Secondary 60G12
Abstract:
An iterative Monte Carlo algorithm for evaluating linear functionals of the solution of integral equations with polynomial non-linearity is proposed and studied. The method uses a simulation of branching stochastic processes. It is proved that the mathematical expectation of the introduced random variable is equal to a linear functional of the solution. The algorithm uses the so-called almost optimal density function. Numerical examples are considered. A parallel implementation of the algorithm has also been realised, using the ATHAPASCAN package as an environment for parallel execution. The computational results demonstrate the high parallel efficiency of the presented algorithm and show that it gives a good solution when the almost optimal density function is used as the transition density.
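For a toy version of the idea, consider the scalar equation u = f + c ∫₀¹ u(y)² dy with f = 1 - c, whose minimal solution is u = 1. The branching estimator sketched below terminates at each node with a fixed probability and otherwise spawns two independent copies to estimate the quadratic term unbiasedly; the uniform transition density here stands in for the paper's almost optimal density, and all constants are illustrative.

```python
# Sketch: unbiased branching Monte Carlo estimator for the toy integral
# equation u = f + c * int_0^1 u(y)^2 dy with f = 1 - c, so that the
# minimal solution is u = 1. A uniform transition density stands in for
# the paper's "almost optimal" density.
import random

c = 0.3
f = 1.0 - c
q_term = 0.6              # termination probability (subcritical branching)

def xi():
    """One realization of the estimator; E[xi] = u = 1."""
    if random.random() < q_term:
        return f / q_term
    # branch: u(y)^2 is estimated by a product of two independent copies
    return (c / (1.0 - q_term)) * xi() * xi()

random.seed(0)
N = 200_000
print(sum(xi() for _ in range(N)) / N)   # close to the exact value 1
```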
Abstract:
2000 Mathematics Subject Classification: 62F15.
Abstract:
2000 Mathematics Subject Classification: 62P10, 92C20
Abstract:
We introduce a modification of the familiar cut function by replacing the linear part in its definition by a polynomial of degree p + 1, thus obtaining a sigmoid function called the generalized cut function of degree p + 1 (GCFP). We then study the uniform approximation of the GCFP by smooth sigmoid functions such as the logistic and the shifted logistic functions. The limiting case of the interval-valued Heaviside step function is also discussed; this case necessitates the use of the Hausdorff metric. Numerical examples are presented using the CAS Mathematica.
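The comparison the abstract describes can be set up numerically in a few lines. The paper's exact polynomial of degree p + 1 is not reproduced here; as a stand-in, the sketch uses the cubic smoothstep on [-1, 1] for the polynomial part and estimates the uniform distance to a logistic sigmoid on a grid. The steepness k is an arbitrary choice.

```python
# Sketch: generalized-cut-style function vs. logistic sigmoid, with the
# uniform-norm distance estimated on a grid. The cubic smoothstep is a
# stand-in for the paper's degree-(p+1) polynomial.
import numpy as np

def gcf(x):
    """0 below -1, 1 above 1, cubic polynomial in between."""
    t = np.clip((x + 1) / 2, 0.0, 1.0)
    return t * t * (3 - 2 * t)

def logistic(x, k=3.0):
    return 1.0 / (1.0 + np.exp(-k * x))

x = np.linspace(-4.0, 4.0, 10001)
print(np.max(np.abs(gcf(x) - logistic(x))))  # approx. uniform distance
```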
Abstract:
2000 Mathematics Subject Classification: 30A05, 33E05, 30G30, 30G35, 33E20.