946 results for rotated to zero
Abstract:
We consider the Hamiltonian H of a 3D spinless non-relativistic quantum particle subject to parallel constant magnetic and non-constant electric fields. The operator H has infinitely many eigenvalues of infinite multiplicity embedded in its continuous spectrum. We perturb H by appropriate scalar potentials V and investigate the transformation of these embedded eigenvalues into resonances. First, we assume that the electric potentials are dilation-analytic with respect to the variable along the magnetic field, and obtain an asymptotic expansion of the resonances as the coupling constant ϰ of the perturbation tends to zero. Further, under the assumption that the Fermi Golden Rule holds true, we deduce estimates for the time evolution of the resonance states with and without analyticity assumptions; in the second case we obtain these results as a corollary of suitable Mourre estimates and a recent article by Cattaneo, Graf and Hunziker [11]. Next, we describe sets of perturbations V for which the Fermi Golden Rule is valid at each embedded eigenvalue of H; these sets turn out to be dense in various suitable topologies. Finally, we assume that V decays fast enough at infinity and is of definite sign, introduce the Krein spectral shift function for the operator pair (H+V, H), and study its singularities at the energies which coincide with eigenvalues of infinite multiplicity of the unperturbed operator H.
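For orientation, the Fermi Golden Rule assumption referred to above can be sketched as follows in the textbook setting of a simple embedded eigenvalue; the notation (λ0, ψ, the reduced operator H̄) is ours, and the infinite-multiplicity case treated in the paper requires the more refined analysis described there.

```latex
% Schematic second-order picture for H(\varkappa) = H + \varkappa V near a simple
% embedded eigenvalue \lambda_0 with normalized eigenfunction \psi (assumed notation;
% \bar H denotes H restricted to the orthogonal complement of \psi).
\begin{align*}
  z(\varkappa) &= \lambda_0 + \varkappa \,\langle \psi, V\psi \rangle
      + \varkappa^{2}\,\big\langle V\psi,\ (\lambda_0 - \bar H + i0)^{-1}\, V\psi \big\rangle
      + o(\varkappa^{2}), \\
  \operatorname{Im} z(\varkappa) &\approx -\,\varkappa^{2}\,\Gamma, \qquad
  \Gamma := \pi\,\big\langle V\psi,\ \delta(\bar H - \lambda_0)\, V\psi \big\rangle \ge 0 .
\end{align*}
% The Fermi Golden Rule assumption is the strict inequality \Gamma > 0: the embedded
% eigenvalue then moves into the lower half-plane and becomes a resonance whose
% lifetime is of order \varkappa^{-2}, which underlies the time-decay estimates above.
```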
Abstract:
2000 Mathematics Subject Classification: 37F21, 70H20, 37L40, 37C40, 91G80, 93E20.
Abstract:
Motivation: In any macromolecular polyprotic system - for example protein, DNA or RNA - the isoelectric point - commonly referred to as the pI - can be defined as the point of singularity in a titration curve, corresponding to the solution pH value at which the net overall surface charge - and thus the electrophoretic mobility - of the ampholyte sums to zero. Several modern analytical biochemistry and proteomics methods depend on the isoelectric point as a principal feature for protein and peptide characterization. Protein separation by isoelectric point is a critical part of 2-D gel electrophoresis, a key precursor of proteomics, where discrete spots can be digested in-gel, and proteins subsequently identified by analytical mass spectrometry. Peptide fractionation according to pI is also widely used in current proteomics sample preparation procedures prior to LC-MS/MS analysis. Therefore, accurate theoretical prediction of pI would expedite such analysis. While such pI calculation is widely used, it remains largely untested, motivating our efforts to benchmark pI prediction methods. Results: Using data from the database PIP-DB and one publicly available dataset as our reference gold standard, we have undertaken the benchmarking of pI calculation methods. We find that methods vary in their accuracy and are highly sensitive to the choice of basis set. The machine-learning algorithms, especially the SVM-based algorithm, showed superior performance when studying peptide mixtures. In general, learning-based pI prediction methods (such as Cofactor, SVM and Branca) require a large training dataset and their resulting performance will strongly depend on the quality of that data. In contrast to iterative methods, machine-learning algorithms have the advantage of being able to add new features to improve the accuracy of prediction. Contact: yperez@ebi.ac.uk Availability and Implementation: The software and data are freely available at https://github.com/ypriverol/pIR. Supplementary information: Supplementary data are available at Bioinformatics online.
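As a concrete illustration of the underlying calculation (not one of the benchmarked tools), the net charge of a peptide can be written as a sum of Henderson-Hasselbalch terms and the pI located as its zero by bisection; the pKa values below are one representative set and are an assumption, which is exactly the "basis set" sensitivity the benchmark highlights.

```python
# Sketch of a Henderson-Hasselbalch pI calculation (illustrative pKa set, not
# one of the benchmarked scales; real tools differ mainly in these values).
POS = {"K": 10.5, "R": 12.5, "H": 6.0, "Nterm": 9.0}            # protonated -> +1
NEG = {"D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1, "Cterm": 3.1}   # deprotonated -> -1

def net_charge(seq: str, ph: float) -> float:
    """Net charge of a peptide at a given pH (Henderson-Hasselbalch terms)."""
    charge = 1.0 / (1.0 + 10 ** (ph - POS["Nterm"]))            # N-terminus
    charge -= 1.0 / (1.0 + 10 ** (NEG["Cterm"] - ph))           # C-terminus
    for aa in seq:
        if aa in POS:
            charge += 1.0 / (1.0 + 10 ** (ph - POS[aa]))
        elif aa in NEG:
            charge -= 1.0 / (1.0 + 10 ** (NEG[aa] - ph))
    return charge

def isoelectric_point(seq: str, lo: float = 0.0, hi: float = 14.0) -> float:
    """Bisect for the pH at which the net charge crosses zero (charge is monotone in pH)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if net_charge(seq, mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(isoelectric_point("ACDKRH"), 2))   # toy peptide, illustrative only
```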
Abstract:
Pulses in the form of the Airy function, as solutions to an equation similar to the Schrödinger equation but with the roles of the time and space variables interchanged, are derived. The pulses are generated by an Airy time-varying field at a source point and propagate in vacuum preserving their shape and magnitude. The pulse motion decelerates according to a quadratic law: its velocity changes from infinity at the source point to zero at infinity. These one-dimensional results are extended to the 3D+time case for a similar Airy-Bessel pulse with the same behaviour, the non-diffractive preservation and the deceleration. This pulse is excited by the field at a plane aperture perpendicular to the direction of the pulse propagation. © 2011 IEEE.
Abstract:
It is shown that an electromagnetic wave equation in the time domain reduces, in the paraxial approximation, to an equation similar to the Schrödinger equation but in which the time and space variables play opposite roles. This equation has solutions in the form of time-varying pulses with the Airy function as an envelope. The pulses are generated by a source point with an Airy time-varying field and propagate in vacuum preserving their shape and magnitude. The motion follows a quadratic law, with the velocity changing from infinity at the source point to zero at infinity. These one-dimensional results are extended to the 3D+time case, where a similar Airy-Bessel pulse is excited by the field at a plane aperture. The same behaviour of the pulses, the non-diffractive preservation and their deceleration, is found. © 2011 IEEE.
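In dimensionless variables of our own choosing (a schematic of the structure described in the two abstracts above, not the authors' exact equation or normalization), a Schrödinger-type equation with the propagation distance ζ as the evolution variable and time τ in the "transverse" role admits the standard Airy solution:

```latex
% Schematic 1D form with swapped roles: \zeta is the propagation coordinate,
% \tau the (retarded) time; units and normalization are assumed.
\begin{align*}
  i\,\frac{\partial \psi}{\partial \zeta}
    + \frac{1}{2}\,\frac{\partial^{2} \psi}{\partial \tau^{2}} = 0,
  \qquad
  \psi(\tau,\zeta)
    = \operatorname{Ai}\!\Big(\tau - \frac{\zeta^{2}}{4}\Big)\,
      \exp\!\Big( i\,\frac{\tau\,\zeta}{2} - i\,\frac{\zeta^{3}}{12} \Big).
\end{align*}
% The main lobe follows the parabola \tau \approx \zeta^{2}/4, i.e. \zeta \approx 2\sqrt{\tau},
% so the pulse velocity d\zeta/d\tau \approx 1/\sqrt{\tau} starts unbounded at the source
% and decays to zero: the quadratic, decelerating motion described above.
```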
Abstract:
We investigate the mobility of nonlinear localized modes in a generalized discrete Ginzburg-Landau-type model, describing a one-dimensional waveguide array in an active Kerr medium with intrinsic, saturable gain and damping. It is shown that exponentially localized, traveling discrete dissipative breather-solitons may exist as stable attractors supported only by intrinsic properties of the medium, i.e., in the absence of any external field or symmetry-breaking perturbations. Through an interplay between the gain and damping effects, the moving soliton may overcome the Peierls-Nabarro barrier, present in the corresponding conservative system, by self-induced time-periodic oscillations of its power (norm) and energy (Hamiltonian), yielding exponential decays to zero with different rates in the forward and backward directions. In certain parameter windows, bistability appears between fast modes with small oscillations and slower, large-oscillation modes. The velocities and the oscillation periods are typically related by lattice commensurability and exhibit period-doubling bifurcations to chaotically "walking" modes under parameter variations. If the model is augmented by intersite Kerr nonlinearity, thereby reducing the Peierls-Nabarro barrier of the conservative system, the existence regime for moving solitons increases considerably, and a richer scenario appears including Hopf bifurcations to incommensurately moving solutions and phase-locking intervals. Stable moving breathers also survive in the presence of weak disorder. © 2014 American Physical Society.
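For concreteness, a lattice equation of the general type described might be written as below; the coefficient names and the precise form of the saturable gain term are illustrative assumptions, not the paper's exact model.

```latex
% Illustrative discrete Ginzburg-Landau chain: coupling C, on-site Kerr term,
% linear damping \gamma and saturable gain g_0/(1+|\psi_n|^2/I_s) (assumed forms).
\begin{equation*}
  i\,\dot{\psi}_n + C\big(\psi_{n+1} + \psi_{n-1} - 2\psi_n\big) + |\psi_n|^{2}\psi_n
  \;=\; i\!\left( \frac{g_0}{1 + |\psi_n|^{2}/I_s} - \gamma \right)\!\psi_n ,
  \qquad n \in \mathbb{Z}.
\end{equation*}
% With g_0 = \gamma = 0 this reduces to the conservative discrete NLS equation, whose
% Peierls-Nabarro barrier normally pins traveling localized modes; the gain/damping
% balance is what lets the dissipative breather-solitons discussed above keep moving.
```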
Abstract:
A black-box phase sensitive amplifier based 3R regeneration scheme is proposed for signals in the non-return-to-zero quadrature phase shift keyed (NRZ-QPSK) format. Performance improvements of more than 2 dB are achieved in the presence of input phase distortion.
Abstract:
1 Oxygen and sulphide dynamics were examined, using microelectrode techniques, in meristems and rhizomes of the seagrass Thalassia testudinum at three different sites in Florida Bay, and in the laboratory, to evaluate the potential role of internal oxygen variability and sulphide invasion in episodes of sudden die-off. The sites differed with respect to shoot density and sediment composition, with an active die-off occurring at only one of the sites. 2 Meristematic oxygen content followed similar diel patterns at all sites with high oxygen content during the day and hyposaturation relative to the water column during the night. Minimum meristematic oxygen content was recorded around sunrise and varied among sites, with values close to zero at the die-off site. 3 Gaseous sulphide was detected within the sediment at all sites but at different concentrations among sites and within the die-off site. Spontaneous invasion of sulphide into Thalassia rhizomes was recorded at low internal oxygen partial pressure during darkness at the die-off site. 4 A laboratory experiment showed that the internal oxygen dynamics depended on light availability, and hence plant photosynthesis, and on the oxygen content of the water column controlling passive oxygen diffusion from water column to leaves and belowground tissues in the dark. 5 Sulphide invasion only occurred at low internal oxygen content, and the rate of invasion was highly dependent on the oxygen supply to roots and rhizomes. Sulphide was slowly depleted from the tissues when high oxygen partial pressures were re-established through leaf photosynthesis. Coexistence of sulphide and oxygen in the tissues and the slow rate of sulphide depletion suggest that sulphide reoxidation is not biologically mediated within the tissues of Thalassia. 6 Our results support the hypothesis that internal oxygen stress, caused by low water column oxygen content or poor plant performance governed by other environmental factors, allows invasion of sulphide and that the internal plant oxygen and sulphide dynamics potentially are key factors in the episodes of sudden die-off in beds of Thalassia testudinum . Root anoxia followed by sulphide invasion may be a more general mechanism determining the growth and survival of other rooted plants in sulphate-rich aquatic environments.
Abstract:
The primary purpose of this thesis was to design a logical simulation of a communication sub-block used for the effective communication of digital data between the host and peripheral devices. The module designed is a Serial Interface Engine for the Universal Serial Bus that controls the flow of data between the host and the peripheral devices, with emphasis on the study of timing and control signals and their practical aspects. In this study an attempt was made to realize the data communication in hardware using the Verilog Hardware Description Language, which is supported by most popular logic synthesis tools. Techniques such as Cyclic Redundancy Checks, bit stuffing and Non-Return-to-Zero encoding are implemented in the design to enhance the performance of the module.
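To illustrate two of the techniques named above at the behavioural level (in Python rather than the thesis's Verilog), USB bit stuffing inserts a 0 after six consecutive 1s and NRZI encoding represents a 0 as a level transition and a 1 as no transition; the sketch below models these spec-level rules only, not the Serial Interface Engine design.

```python
# Behavioural sketch of USB bit stuffing and NRZI line coding (Python model,
# not the thesis's Verilog implementation).

def bit_stuff(bits):
    """Insert a 0 after every run of six consecutive 1s (USB bit stuffing)."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 6:          # six 1s in a row -> force a transition
            out.append(0)
            run = 0
    return out

def nrzi_encode(bits, level=1):
    """NRZI as used by USB: a 0 toggles the line level, a 1 keeps it unchanged."""
    out = []
    for b in bits:
        if b == 0:
            level ^= 1        # transition encodes a 0
        out.append(level)     # no transition encodes a 1
    return out

data = [1, 1, 1, 1, 1, 1, 1, 0, 1]   # seven 1s -> one stuffed 0
stuffed = bit_stuff(data)
print(stuffed)                        # [1, 1, 1, 1, 1, 1, 0, 1, 0, 1]
print(nrzi_encode(stuffed))
```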
Abstract:
Modeling studies predict that changes in radiocarbon (14C) reservoir ages of surface waters during the last deglacial episode will reflect changes in both atmospheric 14C concentration and ocean circulation, including the Atlantic Meridional Overturning Circulation. Tests of these models require the availability of accurate 14C reservoir ages in well-dated late Quaternary time series. Here we test two models using plateau-tuned 14C time series in multiple well-placed sediment core age-depth sequences throughout the lower latitudes of the Atlantic Ocean. 14C age plateau tuning in glacial and deglacial sequences provides accurate calendar year ages that differ by as much as 500-2500 years from those based on an assumed global reservoir age of around 400 years. This study demonstrates increases in local Atlantic surface reservoir ages of up to 1000 years during the Last Glacial Maximum, ages that reflect stronger trade winds off Benguela and summer winds off southern Brazil. By contrast, surface water reservoir ages remained close to zero in the Cariaco Basin in the southern Caribbean due to lagoon-style isolation and persistently strong atmospheric CO2 exchange. Later, during the early deglacial (16 ka), reservoir ages decreased to a minimum of 170-420 14C years throughout the South Atlantic, likely in response to the rapid rise in atmospheric pCO2 and Antarctic temperatures occurring then. Changes in the magnitude and geographic distribution of 14C reservoir ages of peak glacial and deglacial surface waters deviate from the results of Franke et al. (2008) but are generally consistent with those of the more advanced ocean circulation model of Butzin et al. (2012).
Abstract:
This study is based on rock mechanical tests of samples from platform carbonate strata to document their petrophysical properties and determine their potential for porosity loss by mechanical compaction. Sixteen core-plug samples, including eleven limestones and five dolostones, from Miocene carbonate platforms on the Marion Plateau, offshore northeast Australia, were tested at vertical effective stresses, σ1′, of 0-70 MPa, with lateral strain kept equal to zero. The samples were deposited as bioclastic facies in platform-top settings having paleo-water depths of <10-90 m. They were variably cemented with low-Mg calcite and five of the samples were dolomitized before burial to present depths of 39-635 m below sea floor with porosities of 8-46%. Ten samples tested under dry conditions had up to 0.22% strain at σ1′ = 50 MPa, whereas six samples tested saturated with brine, under drained conditions, had up to 0.33% strain. The yield strength was reached in five of the plugs. The measured strains show an overall positive correlation with porosity. Vp ranges from 3640 to 5660 m/s and Vs from 1840 to 3530 m/s. Poisson's ratio is 0.20-0.33 and Young's modulus at 30 MPa ranges between 5 and 40 GPa. Water-saturated samples had lower shear moduli and slightly higher P- to S-wave velocity ratios. Creep at constant stress was observed only in samples affected by pore collapse, indicating propagation of microcracks. Although deposited as loose carbonate sand and mud, the studied carbonates acquired reef-like petrophysical properties through early calcite and dolomite cementation. The small strains observed experimentally at 50 MPa indicate that little mechanical compaction would occur at deeper burial. However, as these rocks are unlikely to preserve their present high porosities to 4-5 km depth, further porosity loss would proceed mainly by chemical compaction and cementation.
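As a quick consistency check on the reported elastic parameters (our illustration; the bulk density is an assumption since the abstract does not report one, and the velocity extremes need not come from the same plug, while static moduli from the plug tests may differ from dynamic values), Poisson's ratio and the dynamic moduli follow directly from Vp and Vs:

```python
# Dynamic elastic parameters from P- and S-wave velocities (standard isotropic
# elasticity). The density is an illustrative assumption only.
def dynamic_moduli(vp, vs, rho=2500.0):     # velocities in m/s, rho in kg/m^3
    nu = (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))    # Poisson's ratio
    g = rho * vs**2                                          # shear modulus, Pa
    e = 2.0 * g * (1.0 + nu)                                 # Young's modulus, Pa
    return nu, g / 1e9, e / 1e9                              # moduli in GPa

# Range end-members quoted in the abstract (not necessarily paired in one sample).
for vp, vs in [(3640.0, 1840.0), (5660.0, 3530.0)]:
    nu, g_gpa, e_gpa = dynamic_moduli(vp, vs)
    print(f"Vp={vp:.0f}  Vs={vs:.0f}  nu={nu:.2f}  G={g_gpa:.1f} GPa  E={e_gpa:.1f} GPa")
```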
Abstract:
This study investigated the impact of horizontal merger and acquisition (M&A) events on the stock returns of the participating companies and their competitors, in terms of the creation or destruction of value for those firms in Brazil from 2001 to 2012. First, the event study methodology was used to estimate abnormal returns in stock prices; then a multiple regression analysis was conducted. The results of the event study showed that, splitting the data into sub-periods before and after the crisis, the effects on the target firms differed: negative before, positive after. For the acquirers and competitors, the results were constant: acquirer returns were close to zero, while competitor returns were negative. Furthermore, the regression results for the bidders showed that firms invested in M&A processes to further increase their efficiency. This study also indicated that the leverage of the bidder is important for creating value in acquisitions when the bidder has a higher Tobin’s Q. The results for target firms showed that small firms obtained better returns than large firms.
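As a sketch of the first step of the methodology (a generic market-model event study; the window lengths and the toy data below are illustrative assumptions, not the study's exact specification):

```python
import numpy as np

# Generic market-model event study: estimate alpha/beta on a pre-event window,
# then cumulate abnormal returns over the event window.
def car(stock_ret, market_ret, event_idx, est_win=120, evt_win=(-5, 5)):
    est = slice(event_idx - est_win + evt_win[0], event_idx + evt_win[0])
    beta, alpha = np.polyfit(market_ret[est], stock_ret[est], 1)    # OLS market model
    evt = slice(event_idx + evt_win[0], event_idx + evt_win[1] + 1)
    abnormal = stock_ret[evt] - (alpha + beta * market_ret[evt])    # AR_t
    return abnormal.sum()                                           # CAR over the window

rng = np.random.default_rng(0)                  # toy data just to run the sketch
mkt = rng.normal(0.0, 0.01, 400)
stk = 0.0002 + 1.1 * mkt + rng.normal(0.0, 0.015, 400)
print(f"CAR around day 300: {car(stk, mkt, 300):+.4f}")
```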
Abstract:
<p>Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size, despite huge increases in the value of n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here. </p><p>Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.</p><p>One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.</p><p>Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification.
Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.</p><p>In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models. </p><p>Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data. </p><p>The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo. 
The Markov Chain Monte Carlo method is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov Chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.</p><p>Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.</p>
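For reference, the truncated-normal (Albert-Chib) data augmentation sampler for probit regression discussed in the last paragraph has the standard form sketched below; the N(0, tau2 I) prior and the toy rare-event data are assumptions for illustration, not the chapter's implementation, and on data with very few successes its mixing degrades as analysed there.

```python
import numpy as np
from scipy.stats import truncnorm

# Textbook Albert-Chib data augmentation Gibbs sampler for probit regression:
#   z_i | beta ~ N(x_i' beta, 1), truncated to (0, inf) if y_i = 1, (-inf, 0] if y_i = 0
#   beta | z   ~ N(V X'z, V),  V = (X'X + I/tau2)^{-1}   (assumed N(0, tau2 I) prior)
def probit_da_gibbs(X, y, n_iter=2000, tau2=100.0, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    V = np.linalg.inv(X.T @ X + np.eye(p) / tau2)
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    lo = np.where(y == 1, 0.0, -np.inf)    # truncation bounds for the latent z
    hi = np.where(y == 1, np.inf, 0.0)
    for t in range(n_iter):
        mu = X @ beta
        # standardized truncation limits expected by scipy's truncnorm
        z = truncnorm.rvs(lo - mu, hi - mu, loc=mu, scale=1.0, random_state=rng)
        m = V @ (X.T @ z)
        beta = rng.multivariate_normal(m, V)
        draws[t] = beta
    return draws

# Toy rare-event example: large n, under 1% successes -> expect slow mixing.
rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (rng.normal(size=n) < X @ np.array([-2.5, 0.3])).astype(int)
print("success rate:", y.mean())
print("posterior mean:", probit_da_gibbs(X, y, n_iter=500).mean(axis=0))
```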
Abstract:
<p>The extremal quantile index refers to a quantile index that drifts to zero (or one) as the sample size increases. The three chapters of my dissertation consist of three applications of this concept to three distinct econometric problems. In Chapter 2, I use the concept of the extremal quantile index to derive new asymptotic properties and an inference method for quantile treatment effect estimators when the quantile index of interest is close to zero. In Chapter 3, I rely on the concept of the extremal quantile index to achieve identification at infinity of sample selection models and propose a new inference method. Last, in Chapter 4, I use the concept of the extremal quantile index to define an asymptotic trimming scheme which can be used to control the convergence rate of the estimator of the intercept of binary response models.</p>
Abstract:
<p>I study the link between capital markets and sources of macroeconomic risk. In Chapter 1 I show that expected inflation risk is priced in the cross section of stock returns even after controlling for cash flow growth and volatility risks. Motivated by this evidence, I study a long-run risk model with a built-in inflation non-neutrality channel that allows me to decompose the real stochastic discount factor into news about current and expected cash flow growth, news about expected inflation and news about volatility. The model can successfully price a broad menu of assets and provides a setting for analyzing cross-sectional variation in the expected inflation risk premium. For industries like retail and durable goods, inflation risk can account for nearly a third of the overall risk premium, while the energy industry and a broad commodity index act like inflation hedges. Nominal bonds are exposed to expected inflation risk and have inflation premiums that increase with bond maturity. The price of expected inflation risk was very high during the 1970s and 1980s, but has come down a lot since, being very close to zero over the past decade. On average, the expected inflation price of risk is negative, consistent with the view that periods of high inflation represent a "bad" state of the world and are associated with low economic growth and poor stock market performance. In Chapter 2 I look at the way capital markets react to predetermined macroeconomic announcements. I document significantly higher excess returns on the US stock market on macro release dates compared to days when no macroeconomic news hit the market. Almost the entire equity premium since 1997 has been realized on days when macroeconomic news is released. At high frequency, there is a pattern of returns increasing in the hours prior to the pre-determined announcement time, peaking around the time of the announcement and dropping thereafter.</p>