12 results for split-step Fourier method

in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance:

100.00%

Publisher:

Abstract:

We present a new method for constructing exact distribution-free tests (and confidence intervals) for variables that can generate more than two possible outcomes. This method separates the search for an exact test from the goal of creating a non-randomized test. Randomization is used to extend any exact test relating to means of variables with finitely many outcomes to variables with outcomes belonging to a given bounded set. Tests in terms of variance and covariance are reduced to tests relating to means. Randomness is then eliminated in a separate step. This method is used to create confidence intervals for the difference between two means (or variances) and tests of stochastic inequality and correlation.
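The randomization step can be illustrated with a minimal sketch (not the paper's construction): each bounded observation in [0, 1] is replaced by a Bernoulli draw with matching mean, reducing the problem to two outcomes, where an exact binomial test applies. The function name and the one-sided null hypothesis are illustrative assumptions.

```python
import math
import random

def randomized_binomial_test(xs, p0, alpha=0.05, rng=None):
    """Exact randomized test of H0: E[X] <= p0 for observations in [0, 1].

    Randomization: each x is replaced by a Bernoulli(x) draw, whose mean
    equals E[X]; the two-outcome case then admits an exact binomial test."""
    rng = rng or random.Random()
    n = len(xs)
    b = sum(1 for x in xs if rng.random() < x)   # Bernoulli randomization
    # exact one-sided binomial p-value: P(Bin(n, p0) >= b)
    pval = sum(math.comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(b, n + 1))
    return pval < alpha, pval
```

The paper additionally eliminates the randomness in a separate step; this sketch stops at the randomized test.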

Relevance:

100.00%

Publisher:

Abstract:

It is common in econometric applications that several hypothesis tests are carried out at the same time. The problem then becomes how to decide which hypotheses to reject, accounting for the multitude of tests. In this paper, we suggest a stepwise multiple testing procedure which asymptotically controls the familywise error rate at a desired level. Compared to related single-step methods, our procedure is more powerful in the sense that it often will reject more false hypotheses. In addition, we advocate the use of studentization when it is feasible. Unlike some stepwise methods, our method implicitly captures the joint dependence structure of the test statistics, which results in increased ability to detect alternative hypotheses. We prove that our method asymptotically controls the familywise error rate under minimal assumptions. We present our methodology in the context of comparing several strategies to a common benchmark and deciding which strategies actually beat the benchmark. However, our ideas can easily be extended and/or modified to other contexts, such as making inference for the individual regression coefficients in a multiple regression framework. Some simulation studies show the improvements of our methods over previous proposals. We also provide an application to a set of real data.
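The paper's procedure steps down through resampled, optionally studentized test statistics; the flavor of step-down familywise error rate control can be seen in the classical Holm procedure on p-values, shown here as a simpler worst-case-dependence analogue, not the authors' method:

```python
def holm_stepdown(pvals, alpha=0.05):
    """Holm's step-down procedure: controls the familywise error rate
    at level alpha under arbitrary dependence of the p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        # step down: compare the smallest remaining p-value to alpha/(m - rank)
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one hypothesis survives, all larger p-values survive
    return reject
```

Unlike single-step Bonferroni, which compares every p-value to alpha/m, the denominator shrinks at each step, so the step-down procedure rejects at least as many hypotheses.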

Relevance:

40.00%

Publisher:

Abstract:

A select-divide-and-conquer variational method to approximate configuration interaction (CI) is presented. Given an orthonormal set made up of occupied orbitals (Hartree-Fock or similar) and suitable correlation orbitals (natural or localized orbitals), a large N-electron target space S is split into subspaces S0, S1, S2,..., SR. S0, of dimension d0, contains all configurations K with attributes (energy contributions, etc.) above thresholds T0 = {T0^egy, T0^etc.}; the CI coefficients in S0 always remain free to vary. S1 accommodates configurations K with attributes above T1 ≤ T0. An eigenproblem of dimension d0+d1 for S0+S1 is solved first, after which the last d1 rows and columns are contracted into a single row and column, thus freezing the last d1 CI coefficients hereinafter. The process is repeated with successive Sj (j ≥ 2) chosen so that the corresponding CI matrices fit random access memory (RAM). Davidson's eigensolver is used R times. The final energy eigenvalue (lowest or excited one) is always above the corresponding exact eigenvalue in S. Threshold values {Tj; j = 0, 1, 2,..., R} regulate accuracy; for large-dimensional S, high accuracy requires S0+S1 to be solved outside RAM. From there on, however, usually only a few Davidson iterations in RAM are needed for each step, so that Hamiltonian matrix-element evaluation becomes rate determining. One μhartree accuracy is achieved for an eigenproblem of order 24 × 10^6, involving 1.2 × 10^12 nonzero matrix elements and 8.4 × 10^9 Slater determinants.
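The freeze-and-contract idea can be sketched on a small dense symmetric matrix (using a full eigensolver instead of Davidson's method, and ignoring selection thresholds; block choices are illustrative): each solved block is collapsed to the single contracted vector carrying its converged coefficients, and by the variational principle the result bounds the exact eigenvalue from above.

```python
import numpy as np

def select_divide_conquer(H, blocks):
    """Approximate the lowest eigenvalue of symmetric H by solving a
    sequence of subproblems: coefficients of the first block S0 stay
    free throughout, while each later block, once solved, is contracted
    into a single frozen basis vector."""
    dim = H.shape[0]
    eye = np.eye(dim)
    free = list(blocks[0])
    frozen = []                      # one contracted vector per finished block
    val = None
    for blk in blocks[1:]:
        blk = list(blk)
        B = np.column_stack([eye[:, i] for i in free] + frozen
                            + [eye[:, i] for i in blk])
        w, v = np.linalg.eigh(B.T @ H @ B)   # columns of B are orthonormal
        val, c = w[0], B @ v[:, 0]
        contracted = np.zeros(dim)
        contracted[blk] = c[blk]             # freeze this block's coefficients
        norm = np.linalg.norm(contracted)
        if norm > 1e-12:
            frozen.append(contracted / norm)
    return val
```

Because every intermediate subspace lies inside the full space, the returned value is always an upper bound on the exact lowest eigenvalue, mirroring the abstract's statement.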

Relevance:

40.00%

Publisher:

Abstract:

Vehicle operations in underwater environments are often compromised by poor visibility conditions. For instance, the perception range of optical devices is heavily constrained in turbid waters, thus complicating navigation and mapping tasks in environments such as harbors, bays, or rivers. A new generation of high-definition forward-looking sonars providing acoustic imagery at high frame rates has recently emerged as a promising alternative for working under these challenging conditions. However, the characteristics of the sonar data introduce difficulties in image registration, a key step in mosaicing and motion estimation applications. In this work, we propose the use of a Fourier-based registration technique capable of handling the low resolution, noise, and artifacts associated with sonar image formation. When compared to a state-of-the-art region-based technique, our approach shows superior performance in the alignment of both consecutive and nonconsecutive views, as well as higher robustness in featureless environments. The method is used to compute pose constraints between sonar frames that, integrated inside a global alignment framework, enable the rendering of consistent acoustic mosaics with high detail and increased resolution. An extensive experimental section reports results in relevant field applications, such as ship hull inspection and harbor mapping.
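The paper's pipeline copes with sonar-specific noise and artifacts; the textbook Fourier-based core for pure translation, phase correlation, can be sketched as follows (this is the generic technique, not the authors' implementation):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation between equally sized images a
    and b via the normalized cross-power spectrum.

    Returns (dy, dx) such that np.roll(b, (dy, dx), axis=(0, 1))
    aligns b with a."""
    R = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    R /= np.abs(R) + 1e-12            # whitening leaves only the phase shift
    corr = np.fft.ifft2(R).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(p if p <= s // 2 else p - s   # wrap to signed shifts
                 for p, s in zip(peak, corr.shape))
```

Because the peak is sought in the correlation surface rather than in sparse features, the approach degrades gracefully in featureless scenes, which is consistent with the robustness reported in the abstract.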

Relevance:

30.00%

Publisher:

Abstract:

A blind speech watermarking scheme that meets hard real-time deadlines is presented and implemented. One of the key issues in these block-oriented watermarking techniques is to preserve synchronization, namely, to recover the exact position of each block in the mark extraction process. In fact, the presented scheme can be split into two distinct parts: the synchronization method and the information mark method. The former is embedded into the time domain and is fast enough to run while meeting real-time requirements. The latter contains the authentication information and is embedded into the wavelet domain. The synchronization and information mark techniques are both tunable in order to allow a configurable method. Thus, capacity, transparency and robustness can be configured depending on the needs. This makes the scheme useful for professional applications, such as telephony authentication or even sending information through radio applications.
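As an illustration of wavelet-domain information embedding (the paper's actual embedding and synchronization rules are not reproduced here), a minimal quantization-index-modulation sketch over a one-level Haar transform; the quantization step size and bit placement are illustrative assumptions:

```python
import numpy as np

def haar_dwt(x):
    # one-level Haar transform: approximation and detail coefficients
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def embed_bits(frame, bits, step=0.05):
    """Hide bits in detail coefficients by quantization index modulation:
    even multiples of `step` encode 0, odd multiples encode 1."""
    a, d = haar_dwt(np.asarray(frame, float))
    for i, bit in enumerate(bits):
        q = int(np.round(d[i] / step))
        if q % 2 != bit:
            q += 1
        d[i] = q * step
    return haar_idwt(a, d)

def extract_bits(frame, n, step=0.05):
    _, d = haar_dwt(np.asarray(frame, float))
    return [int(np.round(d[i] / step)) % 2 for i in range(n)]
```

In this style of scheme, the quantization step trades transparency against robustness, which matches the configurable capacity/transparency/robustness trade-off described in the abstract.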

Relevance:

30.00%

Publisher:

Abstract:

Realistic rendering of animation is known to be an expensive processing task when physically-based global illumination methods are used to improve illumination details. This paper presents an acceleration technique to compute animations in radiosity environments. The technique is based on an interpolation approach that exploits temporal coherence in radiosity. A fast global Monte Carlo pre-processing step is introduced into the computation of the whole animated sequence to select important frames. These are fully computed and used as a base for the interpolation of the whole sequence. The approach is completely view-independent: once the illumination is computed, it can be visualized by any animated camera. Results show significant speed-ups, indicating that the technique could be an interesting alternative to deterministic methods for computing non-interactive radiosity animations for moderately complex scenarios.
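The cheap in-between step can be sketched as plain linear blending of per-patch radiosity between the two nearest fully computed key frames (a deliberately simplistic stand-in for the paper's interpolation scheme):

```python
import numpy as np

def interpolated_frame(key_times, key_radiosity, t):
    """Per-patch radiosity at time t, linearly blended between the two
    nearest key frames; times outside the key range clamp to the ends."""
    i = np.searchsorted(key_times, t)
    if i == 0:
        return key_radiosity[0]
    if i == len(key_times):
        return key_radiosity[-1]
    t0, t1 = key_times[i - 1], key_times[i]
    w = (t - t0) / (t1 - t0)           # temporal coherence: nearby frames blend
    return (1 - w) * key_radiosity[i - 1] + w * key_radiosity[i]
```

Because radiosity is stored per patch rather than per pixel, the interpolated solution stays view-independent, as the abstract notes.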

Relevance:

30.00%

Publisher:

Abstract:

This paper studies the rate of convergence of an appropriate discretization scheme of the solution of the McKean-Vlasov equation introduced by Bossy and Talay. More specifically, we consider approximations of the distribution and of the density of the solution of the stochastic differential equation associated to the McKean-Vlasov equation. The scheme adopted here is a mixed one: Euler/weakly interacting particle system. If $n$ is the number of weakly interacting particles and $h$ is the uniform step in the time discretization, we prove that the rate of convergence of the distribution functions of the approximating sequence in the $L^1(\Omega\times \Bbb R)$ norm and in the sup norm is of the order of $\frac{1}{\sqrt n} + h$, while for the densities it is of the order $h + \frac{1}{\sqrt{nh}}$. This result is obtained by carefully employing techniques of Malliavin calculus.
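A minimal version of the Euler/weakly interacting particle scheme can be sketched for a drift that depends on the empirical measure only through its mean (the simplest interaction kernel; the paper's setting is more general):

```python
import numpy as np

def euler_particle_scheme(drift, sigma, x0, n, h, T, seed=0):
    """Simulate n weakly interacting particles with uniform Euler step h
    up to time T.  `drift(x, m)` may depend on the particle positions x
    and on the empirical mean m, which stands in for the interaction of
    each particle with the law of the solution."""
    rng = np.random.default_rng(seed)
    X = np.full(n, float(x0))
    for _ in range(int(round(T / h))):
        m = X.mean()                   # empirical measure enters via its mean
        X = X + drift(X, m) * h + sigma * np.sqrt(h) * rng.standard_normal(n)
    return X
```

For example, `drift = lambda x, m: -(x - m)` pulls every particle toward the crowd, a standard mean-reverting interaction used to exercise such schemes.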

Relevance:

30.00%

Publisher:

Abstract:

The study of the thermal behavior of complex packages such as multichip modules (MCMs) is usually carried out by measuring the so-called thermal impedance response, that is, the transient temperature after a power step. From the analysis of this signal, the thermal frequency response can be estimated and, consequently, compact thermal models may be extracted. We present a method to obtain an estimate of the time constant distribution underlying the observed transient. The method is based on an iterative deconvolution that produces an approximation to the time constant spectrum while preserving a convenient convolution form. This method is applied to the thermal response of a microstructure, as analyzed by the finite element method, as well as to the measured thermal response of a transistor array integrated circuit (IC) in an SMD package.
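The deconvolution at the heart of such methods has a textbook form: in logarithmic time z = ln t, the derivative of the heating curve equals the time-constant spectrum convolved with the fixed kernel w(z) = exp(z - e^z). A Van Cittert-style iteration (an assumption here; the paper's exact update preserving the convolution form may differ) inverts it:

```python
import numpy as np

def nid_spectrum(z, a, iters=300):
    """Recover a time-constant spectrum R(z) from the log-time
    derivative a(z) = dT/dz of a thermal step response, assuming
    a = R * w with the kernel w(z) = exp(z - exp(z))."""
    dz = z[1] - z[0]
    w = np.exp(z - np.exp(z))            # integrates to 1 over the real line

    def conv(r):
        return np.convolve(r, w, mode='same') * dz

    R = a.copy()
    for _ in range(iters):
        R = np.clip(R + (a - conv(R)), 0.0, None)   # spectrum is non-negative
    return R
```

Each iteration adds back the residual between the measured derivative and the current estimate's reconvolution, so the result keeps the convenient convolution form mentioned in the abstract.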

Relevance:

30.00%

Publisher:

Abstract:

A novel and simple procedure for concentrating adenoviruses from seawater samples is described. The technique entails the adsorption of viruses to pre-flocculated skimmed milk proteins, allowing the flocs to sediment by gravity, and dissolving the separated sediment in phosphate buffer. Concentrated virus may be detected by PCR techniques following nucleic acid extraction. The method requires no specialized equipment other than that usually available in routine public health laboratories and, due to its straightforwardness, it allows a larger number of water samples to be processed simultaneously. The usefulness of the method was demonstrated by concentrating virus in multiple seawater samples during a survey of adenoviruses in coastal waters.

Relevance:

30.00%

Publisher:

Abstract:

Background: Pharmacogenetic studies are essential in understanding the interindividual variability of drug responses. DNA sample collection for genotyping is a critical step in genetic studies. A method using dried blood samples from finger-puncture, collected on DNA-cards, has been described as an alternative to the usual venepuncture technique. The purpose of this study is to evaluate the implementation of the DNA-cards method in a multicentre clinical trial, and to assess the degree of investigators' satisfaction and the acceptance of the patients as perceived by the investigators. Methods: Blood samples were collected on DNA-cards. The quality and quantity of DNA recovered were analyzed. Investigators were questioned regarding their general interest, previous experience, safety issues, preferences, and perceived patient satisfaction. Results: 151 patients' blood samples were collected. Genotyping of GST polymorphisms was achieved in all samples (100%). 28 investigators completed the survey. Investigators perceived patient satisfaction as very good (60.7%) or good (39.3%), without reluctance to finger puncture. Investigators preferred this method, which was considered safer and better than the usual methods. All investigators would recommend using it in future genetic studies. Conclusion: Within the clinical trial setting, the DNA-cards method was very well accepted by investigators and patients (as perceived by investigators), and was preferred to conventional methods due to its ease of use and safety.

Relevance:

30.00%

Publisher:

Abstract:

This paper reports the method development for the simultaneous determination of methylmercury (MeHg+) and inorganic mercury (iHg) species in seafood samples. The study focused on the extraction and quantification of MeHg+ (the most toxic species) by liquid chromatography coupled to on-line UV irradiation and cold vapour atomic fluorescence spectroscopy (LC-UV-CV-AFS), using 4 mol/L HCl as the extractant agent. Accuracy of the method was verified by analysing three certified reference materials and different spiked samples. The values found for total Hg and MeHg+ in the CRMs did not differ significantly from the certified values at a 95% confidence level, and recoveries between 85% and 97% for MeHg+, based on spikes, were achieved. The detection limits (LODs) obtained were 0.001 mg Hg/kg for total mercury, 0.0003 mg Hg/kg for MeHg+ and 0.0004 mg Hg/kg for iHg. The quantification limits (LOQs) established were 0.003 mg Hg/kg for total mercury, 0.0010 mg Hg/kg for MeHg+ and 0.0012 mg Hg/kg for iHg. Precision for each mercury species was established, being 12% in terms of RSD in all cases. Finally, the developed method was applied to 24 seafood samples of different origins and total mercury contents. The concentrations found for total Hg, MeHg+ and iHg ranged from 0.07 to 2.33, from 0.003 to 2.23, and from 0.006 to 0.085 mg Hg/kg, respectively. The established analytical method allows results for mercury speciation to be obtained in less than one hour, including both the sample pretreatment and the measuring step.

Relevance:

30.00%

Publisher:

Abstract:

In image processing, segmentation algorithms constitute one of the main focuses of research. In this paper, new image segmentation algorithms based on a hard version of the information bottleneck method are presented. The objective of this method is to extract a compact representation of a variable, considered the input, with minimal loss of mutual information with respect to another variable, considered the output. First, we introduce a split-and-merge algorithm based on the definition of an information channel between a set of regions (input) of the image and the intensity histogram bins (output). From this channel, the maximization of the mutual information gain is used to optimize the image partitioning. Then, the merging process of the regions obtained in the previous phase is carried out by minimizing the loss of mutual information. From the inversion of the above channel, we also present a new histogram clustering algorithm based on the minimization of the mutual information loss, where now the input variable represents the histogram bins and the output is given by the set of regions obtained from the above split-and-merge algorithm. Finally, we introduce two new clustering algorithms which show how the information bottleneck method can be applied to the registration channel obtained when two multimodal images are correctly aligned. Different experiments on 2-D and 3-D images show the behavior of the proposed algorithms.
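The merging phase, which minimizes the loss of mutual information between regions and histogram bins, can be sketched as a greedy agglomeration over the region-bin joint distribution (a simplified stand-in for the paper's algorithm; the example channel below is illustrative):

```python
import numpy as np

def mutual_information(p_xy):
    """Mutual information (in bits) of a joint distribution matrix."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (px @ py)[nz])).sum())

def greedy_merge(p_xy, target_regions):
    """Repeatedly merge the pair of regions (rows) whose fusion loses
    the least mutual information with the histogram bins (columns)."""
    p = p_xy.copy()
    while p.shape[0] > target_regions:
        base = mutual_information(p)
        best = None
        for i in range(p.shape[0]):
            for j in range(i + 1, p.shape[0]):
                q = np.delete(p, j, axis=0)
                q[i] = p[i] + p[j]           # merged region: summed probabilities
                loss = base - mutual_information(q)
                if best is None or loss < best[0]:
                    best = (loss, q)
        p = best[1]
    return p
```

Two regions with identical intensity distributions merge at zero information loss, which is exactly the kind of redundancy the hard information bottleneck is meant to squeeze out.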