923 results for Space-time block codes


Relevance: 40.00%

Publisher:

Abstract:

The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because the same noises occur multiple times in the different raw data streams. Originally, these observables were generated manually, starting with LISA as a simple stationary array and then adjusting for the antenna's motion. However, none of the observables survived the flexing of the arms: they no longer led to cancellation with the same structure. The principal component approach, presented by Romano and Woan, is another way of handling these noises; it simplifies the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations that they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which appears in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that an eigendecomposition of this matrix produces two distinct sets of eigenvalues, distinguished by the absence of laser frequency noise from one set. Transforming the raw data using the corresponding eigenvectors also produces data that are free from the laser frequency noises. This result led to the idea that the principal components may actually be TDI observables, since they produce the same outcome, that is, data that are free from laser frequency noise.
The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables, and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. To test the connection between the principal components and the TDI observables, a 10 x 10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed unequal arm lengths and stationary noises with equal variances for each noise type. The results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables, so analysis using principal components should give the same results as analysis using the traditional observables. This was proven by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables. This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, the arm lengths and the noise variances.
Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which appear in the covariance matrix; in our toy-model investigations this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix is destroyed, which affects any computation method that takes advantage of this structure. Separating the two sets of data for the analysis was not necessary, because the laser frequency noises are very large compared to the photodetector noises, which results in a significant suppression of the data containing them after the matrix inversion. In the frequency domain the power spectral density matrices were block diagonal, which simplified the computation of the eigenvalues by allowing it to be done separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and the non-stationarity do not show up, because of the summation in the Fourier transform.
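The eigendecomposition argument can be illustrated with a toy two-channel covariance (an illustrative sketch with assumed variances, not the 10 x 10 matrix used in the thesis):

```python
import numpy as np

# Toy two-channel analogue of the principal component idea (not the actual
# LISA covariance): both readings share the same huge laser frequency noise,
# y1 = n_L + n1 and y2 = n_L + n2, so an eigendecomposition of the noise
# covariance separates a laser-noise-free component. Variances are made up.
sigma_laser2 = 1.0e6   # laser frequency noise variance (dominant)
sigma_det2 = 1.0       # photodetector noise variance

C = np.array([[sigma_laser2 + sigma_det2, sigma_laser2],
              [sigma_laser2, sigma_laser2 + sigma_det2]])

vals, vecs = np.linalg.eigh(C)   # eigenvalues in ascending order
print(vals)            # [1.0, 2e6 + 1]: one eigenvalue is free of laser noise
print(vecs[:, 0])      # ~(1, -1)/sqrt(2): the difference of the readings,
                       # in which the common laser noise cancels exactly
```

The small-eigenvalue eigenvector is the laser-noise-free combination, mirroring how a TDI-like observable emerges from the covariance structure alone.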

Relevance: 40.00%

Publisher:

Abstract:

Slot and van Emde Boas' Invariance Thesis states that a time (respectively, space) cost model is reasonable for a computational model C if there are mutual simulations between Turing machines and C such that the overhead is polynomial in time (respectively, linear in space). The rationale is that, under the Invariance Thesis, complexity classes such as LOGSPACE, P, and PSPACE become robust, i.e. machine independent. In this dissertation, we want to find out whether it is possible to define a reasonable space cost model for the lambda-calculus, the paradigmatic model for functional programming languages. We start by considering an unusual evaluation mechanism for the lambda-calculus, based on Girard's Geometry of Interaction, that was conjectured to be the key ingredient for obtaining a space-reasonable cost model. By a fine complexity analysis of this scheme, based on new variants of non-idempotent intersection types, we disprove this conjecture. Then we change the target of our analysis. We consider a variant of Krivine's abstract machine, a standard evaluation mechanism for the call-by-name lambda-calculus, optimized for space complexity and implemented without any pointers. A fine analysis of the execution of (a refined version of) the encoding of Turing machines into the lambda-calculus allows us to conclude that the space consumed by this machine is indeed a reasonable space cost model. In particular, for the first time we are able to measure sub-linear space complexities as well. Moreover, we transfer this result to the call-by-value case. Finally, we also provide an intersection type system that compositionally characterizes this new reasonable space measure. This is done through a minimal, yet non-trivial, modification of the original de Carvalho type system.
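A minimal call-by-name Krivine machine, with de Bruijn indices and environments as linked tuples, sketches the kind of evaluator being analyzed (this is the textbook machine for closed terms, not the pointer-free, space-optimized variant studied in the dissertation):

```python
# States are (term, environment, stack); environments map de Bruijn indices
# to closures (term, environment). Terms are tagged tuples.
Var = lambda i: ('var', i)
Lam = lambda t: ('lam', t)
App = lambda t, u: ('app', t, u)

def krivine(term):
    env, stack = (), []
    while True:
        tag = term[0]
        if tag == 'app':                 # push the argument closure
            stack.append((term[2], env))
            term = term[1]
        elif tag == 'lam' and stack:     # pop a closure into the environment
            term = term[1]
            env = (stack.pop(), env)
        elif tag == 'var':               # look up the closure and jump to it
            e = env
            for _ in range(term[1]):
                e = e[1]
            term, env = e[0]
        else:                            # lambda with an empty stack: done
            return term, env

K = Lam(Lam(Var(1)))                     # K = \x.\y.x
I = Lam(Var(0))                          # I = \x.x
t, e = krivine(App(App(K, I), Lam(Var(0))))
print(t)                                 # ('lam', ('var', 0)), i.e. I
```

The machine's space usage is dominated by the environment and stack, which is exactly the quantity the dissertation argues can be made a reasonable cost model.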

Relevance: 30.00%

Publisher:

Abstract:

In this paper, space adaptivity is introduced to control the error in the numerical solution of hyperbolic systems of conservation laws. The reference numerical scheme is a new version of the discontinuous Galerkin method, which uses an implicit diffusive term in the direction of the streamlines, for stability purposes. The decision whether to refine or to unrefine the grid in a certain location is taken according to the magnitude of wavelet coefficients, which are indicators of local smoothness of the numerical solution. Numerical solutions of the nonlinear Euler equations illustrate the efficiency of the method. © Springer 2005.
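The refine/unrefine decision can be sketched with the simplest wavelet indicator, the Haar detail coefficient (a toy stand-in for the paper's multiresolution machinery; the grid, "solution" and threshold below are illustrative choices):

```python
import numpy as np

# Haar detail coefficients as a local smoothness indicator: large
# coefficients mark non-smooth regions (refine), small ones mark smooth
# regions (unrefine). The discontinuous profile u mimics a shock.
x = np.linspace(0.0, 1.0, 64)
u = np.where(x < 0.48, 1.0, 0.0)              # a "solution" with a jump

pairs = u.reshape(-1, 2)
details = (pairs[:, 0] - pairs[:, 1]) / 2.0   # Haar detail per coarse cell

threshold = 1e-3
refine = np.abs(details) > threshold
print(np.flatnonzero(refine))                 # only the cell at the jump
```

Everywhere the solution is locally smooth the details vanish, so the grid would be coarsened there and refined only around the discontinuity.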

Relevance: 30.00%

Publisher:

Abstract:

Using series solutions and time-domain evolutions, we probe the eikonal limit of the gravitational and scalar-field quasinormal modes of large black holes and black branes in anti-de Sitter backgrounds. These results are particularly relevant for the AdS/CFT correspondence, since the eikonal regime is characterized by the existence of long-lived modes which (presumably) dominate the decay time scale of the perturbations. We confirm all the main qualitative features of these slowly damped modes as predicted by Festuccia and Liu [G. Festuccia and H. Liu, arXiv:0811.1033.] for the scalar-field (tensor-type gravitational) fluctuations. However, quantitatively we find dimension-dependent correction factors. We also investigate the dependence of the quasinormal mode frequencies on the horizon radius of the black hole (brane) and on the angular momentum (wave number) of vector- and scalar-type gravitational perturbations.

Relevance: 30.00%

Publisher:

Abstract:

Context. The space telescope CoRoT searches for transiting extrasolar planets by continuously monitoring the optical flux of thousands of stars in several fields of view. Aims. We report the discovery of CoRoT-10b, a giant planet on a highly eccentric orbit (e = 0.53 ± 0.04) revolving in 13.24 days around a faint (V = 15.22) metal-rich K1V star. Methods. We used CoRoT photometry, radial velocity observations taken with the HARPS spectrograph, and UVES spectra of the parent star to derive the orbital, stellar, and planetary parameters. Results. We derive a planetary radius of 0.97 ± 0.07 R_Jup and a mass of 2.75 ± 0.16 M_Jup. The bulk density, ρ_p = 3.70 ± 0.83 g cm⁻³, is about 2.8 times that of Jupiter. The core of CoRoT-10b could contain up to 240 M_⊕ of heavy elements. Moving along its eccentric orbit, the planet experiences a 10.6-fold variation in insolation. Owing to the long circularisation time, τ_circ > 7 Gyr, a resonant perturber is not required to excite and maintain the high eccentricity of CoRoT-10b.
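The quoted 10.6-fold insolation variation follows directly from the eccentricity, as a quick check shows:

```python
# Stellar flux scales as 1/r^2, and the orbital distance r ranges from
# a(1-e) at periastron to a(1+e) at apoastron, so the insolation ratio
# depends only on the eccentricity e.
e = 0.53
ratio = ((1 + e) / (1 - e)) ** 2
print(round(ratio, 1))   # 10.6, matching the value quoted in the abstract
```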

Relevance: 30.00%

Publisher:

Abstract:

The CoRoT satellite exoplanetary team announces its sixth transiting planet in this paper. We describe and discuss the satellite observations as well as the complementary ground-based observations, photometric and spectroscopic, carried out to assess the planetary nature of the object and determine its specific physical parameters. The discovery reported here is a "hot Jupiter" planet in an 8.9-day orbit, 18 stellar radii, or 0.08 AU, away from its primary star, which is a solar-type star (F9V) with an estimated age of 3.0 Gyr. The planet mass is close to 3 times that of Jupiter. The star has a metallicity 0.2 dex lower than the Sun's and a relatively high ⁷Li abundance. While the light curve indicates a much higher level of activity than, e.g., the Sun's, there is no spectroscopic sign of activity in, e.g., the Ca II H&K lines.
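The quoted orbital distance is consistent with Kepler's third law; in the check below, the stellar mass and radius (roughly 1 M_sun and 1 R_sun for an F9V star) are assumptions made for illustration, not values quoted in the abstract:

```python
# Kepler's third law in solar units: (a/AU)^3 = (M/M_sun) * (P/yr)^2.
P_yr = 8.9 / 365.25                       # orbital period in years
a_AU = (1.0 * P_yr ** 2) ** (1.0 / 3.0)   # semi-major axis, assuming 1 M_sun
print(round(a_AU, 3))                     # ~0.084 AU, matching "0.08 AU"
print(round(a_AU * 215))                  # in solar radii (1 AU ~ 215 R_sun): 18
```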

Relevance: 30.00%

Publisher:

Abstract:

We present the transition amplitude for a particle moving in a space with two times and D space dimensions, having an Sp(2, R) local symmetry and an SO(D, 2) rigid symmetry. It was obtained from the BRST-BFV quantization with a unique gauge choice. We show that, by constraining the initial and final points of this amplitude to lie on some hypersurface of the D + 2 space, the resulting amplitude reproduces well-known systems in lower dimensions. This work provides an alternative way to derive the effects of two-time physics, where all the results come from a single transition amplitude.

Relevance: 30.00%

Publisher:

Abstract:

The demands for improvement in sound quality and reduction of the noise generated by vehicles are constantly increasing, as are the penalties for the space and weight of the control solutions. A promising approach to cope with this challenge is the use of active structural-acoustic control. Usually, low-frequency noise is transmitted into the vehicle's cabin through structural paths, which raises the need to deal with vibro-acoustic models. This kind of model should allow the inclusion of sensor and actuator models if accurate performance indexes are to be assessed. The challenge thus resides in deriving reasonably sized models that integrate the structural, acoustic, and electrical components and the controller algorithm. The advantage of adequate active-control simulation strategies lies in the cost and time reduction in the development phase. Therefore, the aim of this paper is to present a methodology for simulating vibro-acoustic systems that includes this coupled model in a closed-loop control simulation framework which also takes into account the interaction between the system and the control sensors/actuators. It is shown that neglecting the sensor/actuator dynamics can lead to inaccurate performance predictions.

Relevance: 30.00%

Publisher:

Abstract:

Surface heat treatment of glasses and ceramics using CO2 lasers has attracted the attention of several researchers around the world due to its impact on technological applications, such as lab-on-a-chip devices, diffraction gratings and microlenses. Microlens fabrication on a glass surface has been studied mainly due to its importance in optical devices (fiber coupling, CCD signal enhancement, etc.). The goal of this work is to present a systematic study of the conditions for microlens fabrication, along with the viability of using microlens arrays, recorded on the glass surface, as bidimensional codes for product identification. This would allow the production of codes without any residues (like the fine powder generated by laser ablation) and with resistance to aggressive environments, such as sterilization processes. The microlens arrays were fabricated using a continuous-wave CO2 laser focused on the surface of flat commercial soda-lime silicate glass substrates. The fabrication conditions were studied as a function of laser power, heating time and microlens profile. A He-Ne laser was used as a light source in a qualitative experiment to test the viability of using the microlenses as bidimensional codes.

Relevance: 30.00%

Publisher:

Abstract:

A series of new phenyl-based conjugated copolymers has been synthesized and investigated by vibrational and photoluminescence (PL) spectroscopy. The materials are poly(1,4-phenylene-alt-3,6-pyridazine) (COP-PIR), poly(9,9-dioctylfluorene)-co-quaterphenylene (COP-PPP) and poly[(1,4-phenylene-alt-3,6-pyridazine)-co-(1,4-phenylene-alt-9,9-dioctylfluorene)] (COP-PIR-FLUOR), with 3.5% of fluorene. COP-PPP and COP-PIR-FLUOR have high fluorescence quantum yields in solution. Infrared and Raman spectra were used to check the chemical structure of the compounds. The copolymers exhibit blue emission ranging from 2.8 to 3.6 eV when excited at E_exc = 4.13 eV. Stokes-shift values were estimated on pristine samples in their condensed state from steady-state PL-emission and PL-excitation spectra. They suggest a difference in the torsional angle between the molecular configurations of the polymer blocks at the absorption and PL transitions, and also in the photoexcitation diffusion. Additionally, the time-resolved PL of these materials has been investigated using 100 fs laser pulses at E_exc = 4.64 eV and a streak camera. The results show very fast biexponential kinetics for the two fluorene-based polymers, with decay times below 300 ps, indicating both intramolecular fast radiative recombination and migration of photogenerated electron-hole pairs. By contrast, the PL of COP-PIR is less intense and longer lived, indicating that excitons are confined to the chains in this polymer. (C) 2008 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Publisher:

Abstract:

The objective was to study the flow pattern in a plate heat exchanger (PHE) through residence time distribution (RTD) experiments. The tested PHE had flat plates and was part of a laboratory-scale pasteurization unit. Series-flow and parallel-flow configurations were tested with a variable number of passes and channels per pass. Owing to the small scale of the equipment and the short residence times, it was necessary to take into account the influence of the tracer detection unit on the RTD data. Four theoretical RTD models were adjusted: combined, series combined, generalized convection and axial dispersion. The combined model provided the best fit and was useful to quantify the active and dead space volumes of the PHE and their dependence on its configuration. The results suggest that the axial dispersion model would give good results for a larger number of passes because of the turbulence associated with the changes of pass. This type of study can be useful to compare the hydraulic performance of different plates or to provide data for the evaluation of heat-induced changes that occur in the processing of heat-sensitive products. (C) 2011 Elsevier Ltd. All rights reserved.
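The quantities that such RTD model fits are built on are the moments of the measured E(t) curve; a sketch of their computation, on a synthetic exponential curve for a single ideal stirred tank rather than the paper's PHE data:

```python
import numpy as np

# Mean residence time and dimensionless variance of an RTD curve E(t),
# the statistics compared against models such as axial dispersion.
def trapz(y, x):
    # simple trapezoidal rule (portable across NumPy versions)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

t = np.linspace(0.0, 50.0, 2001)
tau = 5.0
E = np.exp(-t / tau) / tau          # ideal CSTR: E(t) = exp(-t/tau)/tau

area = trapz(E, t)                  # normalization, ~1
t_mean = trapz(t * E, t) / area     # first moment: mean residence time
var = trapz((t - t_mean) ** 2 * E, t) / area
print(round(t_mean, 2))             # 5.0, i.e. close to tau
print(round(var / t_mean ** 2, 2))  # 1.0: dimensionless variance of a CSTR
```

Fitting a model then amounts to choosing its parameters so that the model's moments (and curve shape) match these measured values.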

Relevance: 30.00%

Publisher:

Abstract:

Higher-order (2,4) FDTD schemes used for numerical solutions of Maxwell's equations are focused on diminishing the truncation errors caused by the Taylor series expansion of the spatial derivatives. These schemes use a larger computational stencil, which generally makes use of two constant coefficients, C1 and C2, for the four-point central-difference operators. In this paper we propose a novel way to diminish these truncation errors in order to obtain more accurate numerical solutions of Maxwell's equations. For this purpose, we present a method to individually optimize the pair of coefficients, C1 and C2, based on any desired grid size resolution and time step size. In particular, we are interested in using coarser grid discretizations to be able to simulate electrically large domains. The results of our optimization algorithm show a significant reduction in dispersion error and numerical anisotropy for all modeled grid size resolutions. Numerical simulations of free-space propagation verify the very promising theoretical results. The model is also shown to perform well in more complex, realistic scenarios.
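The role of C1 and C2 can be seen in one dimension: applying the four-point central-difference operator to a plane wave gives an effective wavenumber K(k), and the gap between K and k is the spatial part of the dispersion error. The sketch below uses the standard Taylor-derived pair C1 = 9/8, C2 = -1/24 (the paper instead optimizes the pair per resolution and time step; grid values here are illustrative):

```python
import numpy as np

# Effective wavenumber of the (2,4) four-point stencil acting on exp(ikx):
#   K(k) = (2/dx) * (C1*sin(k*dx/2) + C2*sin(3*k*dx/2)),
# which should approximate k itself.
C1, C2 = 9.0 / 8.0, -1.0 / 24.0   # standard Taylor coefficients
ppw = 10.0                        # points per wavelength
dx = 1.0 / ppw                    # wavelength normalized to 1
k = 2.0 * np.pi                   # exact wavenumber

K = (2.0 / dx) * (C1 * np.sin(k * dx / 2.0) + C2 * np.sin(3.0 * k * dx / 2.0))
print(abs(K - k) / k)             # relative error ~7e-4 at 10 pts/wavelength
```

Optimizing C1 and C2 for a given dx and time step shifts where on this error curve the scheme operates, which is what allows the coarser grids targeted by the paper.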

Relevance: 30.00%

Publisher:

Abstract:

This work considers a nonlinear time-varying system described by a state representation, with input u and state x. A given set of functions v, which is not necessarily the original input u of the system, is the (new) input candidate. The main result provides necessary and sufficient conditions for the existence of a local classical state space representation with input v. These conditions rely on integrability tests that are based on a derived flag. As a byproduct, one obtains a sufficient condition of differential flatness of nonlinear systems. (C) 2009 Elsevier Ltd. All rights reserved.

Relevance: 30.00%

Publisher:

Abstract:

The question raised by researchers in the field of mathematical biology regarding the existence of error-correcting codes in the structure of DNA sequences is answered positively. It is shown, for the first time, that DNA sequences such as protein-coding sequences, targeting sequences and internal sequences can be identified as codewords of BCH codes over Galois fields.
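The membership test behind such a claim is simple for cyclic codes: a sequence is a codeword of a BCH code iff its polynomial is divisible by the code's generator polynomial. A sketch over GF(2) with the (7,4) BCH (Hamming) code and generator g(x) = x^3 + x + 1 (the paper works over larger Galois fields, e.g. GF(4) for the four nucleotides, but the divisibility test is the same idea):

```python
# Polynomial remainder over GF(2); bits are listed highest degree first.
def poly_mod2(bits, gen):
    r = list(bits)
    for i in range(len(r) - len(gen) + 1):
        if r[i]:                          # eliminate the leading term
            for j, gbit in enumerate(gen):
                r[i + j] ^= gbit
    return r[-(len(gen) - 1):]            # remainder, degree < deg(gen)

g = [1, 0, 1, 1]                          # g(x) = x^3 + x + 1

def is_codeword(word):
    return not any(poly_mod2(word, g))    # codeword iff remainder is zero

print(is_codeword([1, 0, 1, 1, 0, 0, 0])) # x^3 * g(x): a codeword -> True
print(is_codeword([1, 0, 1, 1, 0, 0, 1])) # one-bit error -> False
```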

Relevance: 30.00%

Publisher:

Abstract:

The generalized Gibbs sampler (GGS) is a recently developed Markov chain Monte Carlo (MCMC) technique that enables Gibbs-like sampling of state spaces that lack a convenient representation in terms of a fixed coordinate system. This paper describes a new sampler, called the tree sampler, which uses the GGS to sample from a state space consisting of phylogenetic trees. The tree sampler is useful for a wide range of phylogenetic applications, including Bayesian, maximum likelihood, and maximum parsimony methods. A fast new algorithm to search for a maximum parsimony phylogeny is presented, using the tree sampler in the context of simulated annealing. The mathematics underlying the algorithm is explained and its time complexity is analyzed. The method is tested on two large data sets consisting of 123 sequences and 500 sequences, respectively. The new algorithm is shown to compare very favorably in terms of speed and accuracy to the program DNAPARS from the PHYLIP package.
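The simulated-annealing component has the usual skeleton: propose a neighbor, accept it with the Metropolis rule at the current temperature, cool. In the sketch below the state is a toy bit vector with a toy "parsimony score"; in the paper the proposal would be a GGS move on phylogenetic trees, and all schedule parameters here are illustrative:

```python
import math, random

def anneal(cost, neighbor, state, t0=1.0, cooling=0.997, steps=4000, seed=1):
    rng = random.Random(seed)
    t = t0
    c = cost(state)
    best, best_c = state, c
    for _ in range(steps):
        cand = neighbor(state, rng)
        cc = cost(cand)
        # accept improvements always, uphill moves with prob exp(-dc/t)
        if cc <= c or rng.random() < math.exp((c - cc) / t):
            state, c = cand, cc
            if c < best_c:
                best, best_c = state, c
        t *= cooling                      # geometric cooling schedule
    return best, best_c

score = lambda s: sum(s)                  # toy cost: number of 1-bits
def flip_one(s, rng):                     # toy neighborhood: flip one bit
    i = rng.randrange(len(s))
    return s[:i] + (1 - s[i],) + s[i + 1:]

best, best_cost = anneal(score, flip_one, tuple([1] * 12))
print(best_cost)                          # the toy cost reaches its optimum, 0
```

Swapping in a parsimony score as `cost` and a GGS tree move as `neighbor` turns this generic skeleton into a maximum parsimony search of the kind the paper benchmarks against DNAPARS.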