955 results for Random Subspace Method


Relevance: 30.00%

Abstract:

One of the key aspects of 3D-image registration is the computation of the joint intensity histogram. We propose a new approach that computes this histogram using uniformly distributed random lines to stochastically sample the overlapping volume between two 3D images. Intensity values are captured along each line at evenly spaced positions, starting from an initial random offset that differs from line to line. This method yields accurate, robust, and fast mutual-information-based registration. Interpolation effects are drastically reduced thanks to the stochastic nature of the line generation, and the alignment process is also accelerated. The results show that the proposed method outperforms the classic computation of the joint histogram.
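
As an illustration of the sampling scheme, the sketch below builds a joint histogram from uniformly distributed random lines and computes mutual information from it. It is a minimal numpy sketch under stated assumptions (both volumes already resampled onto a common grid, intensities scaled to [0, 1]); names and parameters are illustrative, not the authors' implementation.

import numpy as np

def joint_histogram_random_lines(vol_a, vol_b, n_lines=2000, step=1.0, bins=64, rng=None):
    # Assumes vol_a and vol_b share the same (overlapping) grid, intensities in [0, 1].
    rng = np.random.default_rng(rng)
    shape = np.array(vol_a.shape, dtype=float)
    hist = np.zeros((bins, bins))
    for _ in range(n_lines):
        # Random line: uniform random origin inside the volume, random direction.
        origin = rng.uniform(0, shape - 1)
        direction = rng.normal(size=3)
        direction /= np.linalg.norm(direction)
        t = rng.uniform(0, step)                  # per-line random initial offset
        while True:
            p = origin + t * direction
            if np.any(p < 0) or np.any(p > shape - 1):
                break                             # left the overlap volume
            idx = tuple(np.round(p).astype(int))  # nearest-neighbour sample
            hist[int(vol_a[idx] * (bins - 1)), int(vol_b[idx] * (bins - 1))] += 1
            t += step                             # evenly spaced positions along the line
    return hist

def mutual_information(hist):
    # Mutual information of the joint intensity histogram.
    p_ab = hist / hist.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))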

Relevance: 30.00%

Abstract:

The author studies the error and complexity of the discrete random-walk Monte Carlo technique for radiosity, using both the shooting and gathering methods. The shooting method is shown to exhibit lower complexity than the gathering one and, under some constraints, to have linear complexity; this is an improvement over a previous result that pointed to an O(n log n) complexity. Three unbiased estimators are given and compared for each method, and closed forms and bounds are obtained for their variances. The expected value of the mean square error (MSE) is also bounded. Some of the results obtained are also shown.
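
To make the shooting estimator concrete, here is a minimal numpy sketch of a discrete random-walk shooting pass for the radiosity system B = E + rho * (F B), assuming a closed environment (each row of the form-factor matrix F sums to 1) and Russian-roulette absorption; argument names are illustrative, not taken from the paper.

import numpy as np

def shooting_radiosity(E, rho, F, area, n_particles=100_000, rng=None):
    # E: patch emissions, rho: reflectivities, F: form-factor matrix, area: patch areas.
    rng = np.random.default_rng(rng)
    n = len(E)
    power = area * E                         # emitted power per patch
    total = power.sum()
    arrivals = np.zeros(n)
    for _ in range(n_particles):
        i = rng.choice(n, p=power / total)   # particles start at the emitters
        while True:
            j = rng.choice(n, p=F[i])        # next patch chosen by form factors
            arrivals[j] += 1.0
            if rng.random() > rho[j]:        # Russian-roulette absorption
                break
            i = j
    incident = arrivals * total / (n_particles * area)  # irradiance estimate per patch
    return E + rho * incident                # shooting estimate of the radiosities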

Relevance: 30.00%

Abstract:

The work presented in this paper belongs to the power-quality area and deals with voltage sags in power transmission and distribution systems. As they propagate through the power network, voltage sags can cause numerous problems for domestic and industrial loads, with significant financial cost. To assign penalties to the responsible party and to improve monitoring and mitigation strategies, sags must be located within the power network. With this objective, the paper proposes a new method for associating a sag waveform with its origin in transmission and distribution networks. The problem is solved through hybrid methods that employ multiway principal component analysis (MPCA) as a dimension-reduction tool: MPCA re-expresses the sag waveforms in a new subspace using just a few scores. Several well-known classifiers are trained on these scores and used to classify future sags. The dimension-reduction and classification capabilities of the proposed method are examined on real data gathered from three substations in Catalonia, Spain. The classification rates obtained confirm the effectiveness of the developed hybrid methods as new tools for sag classification.
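
The pipeline can be sketched with standard Python tools: unfold the three-way sag array, keep a few principal-component scores, and train a classifier on them. This is a schematic sketch only; the array layout, the number of scores, and the random-forest classifier are illustrative choices, not the paper's exact setup.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def classify_sag_origins(sags, labels, n_scores=10):
    # sags: array of shape (n_sags, n_time_samples, n_channels); labels: sag origin class.
    X = sags.reshape(len(sags), -1)                       # sag-wise unfolding of the 3-way array
    scores = PCA(n_components=n_scores).fit_transform(X)  # re-express each sag in a few scores
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(clf, scores, labels, cv=5)     # cross-validated classification rates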

Relevance: 30.00%

Abstract:

Long polymers in solution frequently adopt knotted configurations. To understand the physical properties of knotted polymers, it is important to find out whether the knots formed at thermodynamic equilibrium are spread over the whole polymer chain or rather are localized as tight knots. We present here a method to analyze the knottedness of short linear portions of simulated random chains. Using this method, we observe that knot-determining domains are usually very tight, so that, for example, the preferred size of the trefoil-determining portions of knotted polymer chains corresponds to just seven freely jointed segments.

Relevance: 30.00%

Abstract:

A select-divide-and-conquer variational method to approximate configuration interaction (CI) is presented. Given an orthonormal set made up of occupied orbitals (Hartree-Fock or similar) and suitable correlation orbitals (natural or localized orbitals), a large N-electron target space S is split into subspaces S0, S1, S2, ..., SR. S0, of dimension d0, contains all configurations K with attributes (energy contributions, etc.) above thresholds T0 = {T0^egy, T0^etc.}; the CI coefficients in S0 always remain free to vary. S1 accommodates K's with attributes above T1 ≤ T0. An eigenproblem of dimension d0 + d1 for S0 + S1 is solved first, after which the last d1 rows and columns are contracted into a single row and column, thus freezing the last d1 CI coefficients from then on. The process is repeated with successive Sj (j ≥ 2) chosen so that the corresponding CI matrices fit in random access memory (RAM). Davidson's eigensolver is used R times. The final energy eigenvalue (lowest or excited) always lies above the corresponding exact eigenvalue in S. The threshold values {Tj; j = 0, 1, 2, ..., R} regulate accuracy; for large-dimensional S, high accuracy requires S0 + S1 to be solved outside RAM. From there on, however, usually only a few Davidson iterations in RAM are needed for each step, so that Hamiltonian matrix-element evaluation becomes rate determining. One microhartree accuracy is achieved for an eigenproblem of order 24 × 10^6, involving 1.2 × 10^12 nonzero matrix elements and 8.4 × 10^9 Slater determinants.
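
The contraction step can be illustrated on a small dense matrix. The sketch below is one reading of the scheme: the S0 coefficients stay free, each processed block is frozen into a single contracted vector, and a reduced eigenproblem is solved per step. It is schematic only; the actual method works in sparse determinant spaces with Davidson's eigensolver, not with numpy.linalg.eigh.

import numpy as np

def contracted_ci_energy(H, blocks):
    # H: symmetric Hamiltonian matrix; blocks: list of index lists [S0, S1, ..., SR].
    n = H.shape[0]
    I = np.eye(n)
    free = list(blocks[0])                  # S0 coefficients remain free throughout
    contracted = []                         # one frozen vector per processed block
    energy = None
    for block in blocks[1:]:
        cols = [I[:, free]] + contracted + [I[:, list(block)]]
        B = np.column_stack(cols)           # orthonormal basis of the current subspace
        w, V = np.linalg.eigh(B.T @ H @ B)  # reduced eigenproblem
        energy, c = w[0], V[:, 0]
        v = I[:, list(block)] @ c[-len(block):]   # freeze this block's coefficients
        contracted.append(v / np.linalg.norm(v))
    return energy                           # variational: never below the exact eigenvalue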

Relevance: 30.00%

Abstract:

In this paper we propose a general technique to develop first- and second-order closed-form approximation formulas for short-time options with random strikes. Our method is based on Malliavin calculus techniques and allows us to obtain simple closed-form approximation formulas depending on the derivative operator. The numerical analysis shows that these formulas are extremely accurate and improve on previous approaches for two-asset and three-asset spread options, such as Kirk's formula or the decomposition method presented in Alòs, Eydeland and Laurence (2011).
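
For reference, the classical benchmark named above, Kirk's approximation for a two-asset spread call, can be written in a few lines; this is the textbook comparison formula, not the authors' Malliavin-based expansion, and the argument names are illustrative.

import numpy as np
from scipy.stats import norm

def kirk_spread_call(F1, F2, K, sigma1, sigma2, rho, T, r=0.0):
    # F1, F2: forward prices; K: strike; rho: correlation; T: maturity; r: risk-free rate.
    a = F2 / (F2 + K)
    sigma_k = np.sqrt(sigma1**2 - 2 * rho * sigma1 * sigma2 * a + (sigma2 * a)**2)
    d1 = (np.log(F1 / (F2 + K)) + 0.5 * sigma_k**2 * T) / (sigma_k * np.sqrt(T))
    d2 = d1 - sigma_k * np.sqrt(T)
    return np.exp(-r * T) * (F1 * norm.cdf(d1) - (F2 + K) * norm.cdf(d2))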

Relevance: 30.00%

Abstract:

Confidence in decision making is an important dimension of managerial behavior. However, what is the relation between confidence, on the one hand, and the fact of receiving or expecting to receive feedback on decisions taken, on the other hand? To explore this and related issues in the context of everyday decision making, the ESM (Experience Sampling Method) was used to sample decisions taken by undergraduates and business executives. For several days, participants received 4 or 5 SMS messages daily (on their mobile telephones) at random moments, at which point they completed brief questionnaires about their current decision-making activities. Issues considered here include differences between the types of decisions faced by the two groups, their structure, feedback (received and expected), and confidence in decisions taken as well as in the validity of feedback. No relation was found between confidence in decisions and whether participants received or expected to receive feedback on those decisions. In addition, although participants are clearly aware that feedback can provide both confirming and disconfirming evidence, their ability to specify appropriate feedback is imperfect. Finally, difficulties experienced in using the ESM are discussed, as are possibilities for further research using this methodology.
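
The signalling scheme itself is simple to reproduce; the toy sketch below draws 4 or 5 random prompt times per day within assumed waking hours (all parameter values are illustrative, not taken from the study).

import numpy as np

def esm_schedule(n_days=7, signals_per_day=(4, 5), day_start=9.0, day_end=21.0, rng=None):
    # Returns, for each day, a sorted array of random prompt times (in hours).
    rng = np.random.default_rng(rng)
    schedule = []
    for day in range(n_days):
        k = rng.integers(signals_per_day[0], signals_per_day[1] + 1)
        times = np.sort(rng.uniform(day_start, day_end, size=k))
        schedule.append((day, times.round(2)))
    return schedule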

Relevance: 30.00%

Abstract:

The experiential sampling method (ESM) was used to collect data from 74 part-time students who described and assessed the risks involved in their current activities when interrupted at random moments by text messages. The major categories of perceived risk were short-term in nature and involved loss of time or materials related to work and physical damage (e.g., from transportation). Using techniques of multilevel analysis, we demonstrate effects of gender, emotional state, and type of risk on assessments of risk. Specifically, females do not differ from males in assessing the potential severity of risks, but they see these risks as more likely to occur. Also, participants assessed risks to be lower when in more positive self-reported emotional states. We further demonstrate the potential of ESM by showing that risk assessments associated with current actions exceed those made retrospectively. We conclude by noting advantages and disadvantages of ESM for collecting data about risk perceptions.
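
A multilevel analysis of this kind can be sketched with a mixed-effects model that groups observations by participant; the column names below are illustrative placeholders, not the study's variables.

import statsmodels.formula.api as smf

def fit_risk_model(df):
    # Random intercept per participant; fixed effects for gender, emotional state, risk type.
    model = smf.mixedlm("risk_likelihood ~ gender + affect + risk_type",
                        df, groups=df["participant"])
    return model.fit()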

Relevance: 30.00%

Abstract:

Diffuse flow velocimetry (DFV) is introduced as a new, noninvasive optical technique for measuring the velocity of diffuse hydrothermal flow. The technique uses images of a motionless, random medium (e.g., rocks) obtained through the lens of a moving refraction-index anomaly (e.g., a hot upwelling). The method works in two stages. First, the changes in apparent background deformation are calculated using particle image velocimetry (PIV), with the deformation vectors determined by cross-correlation of pixel intensities across consecutive images. Second, the 2-D velocity field is calculated by cross-correlating the deformation vectors between consecutive PIV calculations. The accuracy of the method is tested with laboratory and numerical experiments of a laminar, axisymmetric plume in fluids with both constant and temperature-dependent viscosity. Results show that average RMS errors are ∼5%–7%, with the best accuracy in regions of pervasive apparent background deformation, a condition commonly encountered in regions of diffuse hydrothermal flow. The method is applied to a 25 s video sequence of diffuse flow from a small fracture captured during the Bathyluck’09 cruise to the Lucky Strike hydrothermal field (September 2009). The velocities of the ∼10°C–15°C effluent reach ∼5.5 cm/s, in strong agreement with previous measurements of diffuse flow. DFV is found to be most accurate for approximately 2-D flows in which background objects have a small spatial scale, such as sand or gravel.
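
The core operation in both stages is a windowed cross-correlation between consecutive fields (pixel intensities in the PIV stage, deformation vectors in the velocity stage). The numpy sketch below shows one such pass under simplifying assumptions (square windows, circular FFT correlation, integer-pixel peaks); it is illustrative, not the authors' implementation.

import numpy as np

def window_displacements(f0, f1, win=32):
    # Estimates one displacement vector per (win x win) window between fields f0 and f1.
    ny, nx = f0.shape[0] // win, f0.shape[1] // win
    disp = np.zeros((ny, nx, 2))
    for iy in range(ny):
        for ix in range(nx):
            a = f0[iy*win:(iy+1)*win, ix*win:(ix+1)*win].astype(float)
            b = f1[iy*win:(iy+1)*win, ix*win:(ix+1)*win].astype(float)
            a -= a.mean()
            b -= b.mean()
            corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Convert circular-correlation peak indices to signed shifts.
            dy = peak[0] if peak[0] <= win // 2 else peak[0] - win
            dx = peak[1] if peak[1] <= win // 2 else peak[1] - win
            disp[iy, ix] = (dy, dx)
    return disp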

Relevance: 30.00%

Abstract:

Gene correction at the chromosomal site of the mutation is the definitive way to truly cure a genetic disease. The oligonucleotide (ODN)-mediated gene repair technology uses an ODN perfectly complementary to the genomic sequence except for a mismatch at the mutated base. The endogenous repair machinery of the targeted cell then mediates substitution of the desired base in the gene, resulting in a completely normal sequence. In principle, this approach avoids the potential gene silencing or random integration associated with common viral gene augmentation approaches and preserves intact regulation of expression of the therapeutic protein. The eye is a particularly attractive target for gene repair because of its unique features (a small, easily accessible organ with low diffusion into the systemic circulation). Moreover, therapeutic effects on visual impairment could be obtained with modest levels of repair. This chapter describes in detail the optimized method to target active ODNs to the nuclei of photoreceptors in neonatal mice using (1) application of an electric current at the eye surface (saline transpalpebral iontophoresis) (2) combined with an intravitreous injection of ODNs, as well as the experimental methods for (3) dissection of adult neural retinas, (4) their immuno-labelling, and (5) flat-mounting for direct observation of photoreceptor survival, a relevant criterion of treatment outcome for retinal degeneration.

Relevance: 30.00%

Abstract:

In this paper we analyse, using Monte Carlo simulation, the possible consequences of incorrect assumptions about the true structure of the random-effects covariance matrix and the true correlation pattern of residuals on the performance of an estimation method for nonlinear mixed models. The procedure under study is the well-known linearization method of Lindstrom and Bates (1990), implemented in the nlme library of S-Plus and R. Its performance is studied in terms of bias, mean square error (MSE), and true coverage of the associated asymptotic confidence intervals. Leaving aside other criteria, such as the convenience of avoiding over-parameterised models, it appears worse to erroneously assume some structure than to assume no structure at all when the latter would be adequate.
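
The structure of such a simulation study can be sketched generically: repeatedly simulate data under the true model, fit it under a chosen (possibly wrong) assumption, and accumulate bias, MSE, and coverage. In the sketch below, simulate and fit are hypothetical user-supplied callables, since the actual fitting is done with nlme in R rather than in Python.

import numpy as np
from scipy.stats import norm

def simulation_study(simulate, fit, theta_true, n_rep=500, level=0.95, rng=None):
    # simulate(rng) -> data set from the true model; fit(data) -> (estimate, std. error).
    rng = np.random.default_rng(rng)
    z = norm.ppf(0.5 + level / 2)
    est, se = np.empty(n_rep), np.empty(n_rep)
    for r in range(n_rep):
        data = simulate(rng)
        est[r], se[r] = fit(data)
    bias = est.mean() - theta_true
    mse = np.mean((est - theta_true) ** 2)
    coverage = np.mean(np.abs(est - theta_true) <= z * se)  # true coverage of asymptotic CIs
    return bias, mse, coverage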

Relevance: 30.00%

Abstract:

In this study, we present a method designed to generate dynamic holograms in holographic optical tweezers. The approach combines our random mask encoding method with iterative high-efficiency algorithms. This hybrid method can be used to dynamically modify precalculated holograms, giving them new functionalities, temporarily or permanently, with a low computational cost. This allows the easy addition or removal of a single trap or the independent control of groups of traps for manipulating a variety of rigid structures in real time.
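
The idea of modifying a precalculated hologram via random mask encoding can be sketched as follows: a randomly chosen subset of SLM pixels is overwritten with the blazed-grating phase that steers light to an extra trap, leaving the rest of the hologram untouched. Argument names and the pixel fraction are illustrative assumptions, not the paper's parameters.

import numpy as np

def add_trap_random_mask(hologram, shift_x, shift_y, fraction=0.05, rng=None):
    # hologram: precalculated phase pattern (radians); shift_x/shift_y: grating frequencies
    # (cycles across the SLM) that set the new trap position in the focal plane.
    rng = np.random.default_rng(rng)
    ny, nx = hologram.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    grating = 2 * np.pi * (shift_x * xx / nx + shift_y * yy / ny)  # linear phase ramp
    mask = rng.random(hologram.shape) < fraction                   # random subset of pixels
    out = hologram.copy()
    out[mask] = grating[mask]
    return np.mod(out, 2 * np.pi)  # wrapped phase, ready to display on the SLM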

Relevance: 30.00%

Abstract:

We present here a nonbiased probabilistic method that allows us to consistently analyze knottedness of linear random walks with up to several hundred noncorrelated steps. The method consists of analyzing the spectrum of knots formed by multiple closures of the same open walk through random points on a sphere enclosing the walk. Knottedness of individual "frozen" configurations of linear chains is therefore defined by a characteristic spectrum of realizable knots. We show that in the great majority of cases this method clearly defines the dominant knot type of a walk, i.e., the strongest component of the spectrum. In such cases, direct end-to-end closure creates a knot that usually coincides with the knot type that dominates the random closure spectrum. Interestingly, in a very small proportion of linear random walks, the knot type is not clearly defined. Such walks can be considered as residing in a border zone of the configuration space of two or more knot types. We also characterize the scaling behavior of linear random knots.
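
The random-closure procedure itself is straightforward to sketch: connect both ends of the open walk to a random point on a large sphere enclosing it, classify the resulting closed curve, and repeat. In the sketch below, knot_type is a hypothetical user-supplied function (e.g., based on Alexander or HOMFLY polynomial evaluation) that returns a knot identifier for a closed polygon; it is not implemented here.

import numpy as np
from collections import Counter

def closure_spectrum(walk, knot_type, n_closures=200, radius_factor=10.0, rng=None):
    # walk: (n_steps, 3) array of vertices of an open random walk.
    rng = np.random.default_rng(rng)
    center = walk.mean(axis=0)
    radius = radius_factor * np.max(np.linalg.norm(walk - center, axis=1))
    counts = Counter()
    for _ in range(n_closures):
        v = rng.normal(size=3)
        point = center + radius * v / np.linalg.norm(v)  # random point on the enclosing sphere
        closed = np.vstack([walk, point])                # both chain ends joined through this point
        counts[knot_type(closed)] += 1                   # hypothetical knot classifier
    return counts   # the spectrum; its strongest component is the dominant knot type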

Relevance: 30.00%

Abstract:

This paper suggests a method for obtaining efficiency bounds in models containing either only infinite-dimensional parameters or both finite- and infinite-dimensional parameters (semiparametric models). The method is based on a theory of random linear functionals applied to the gradient of the log-likelihood functional and is illustrated by computing the lower bound for Cox's regression model.

Relevance: 30.00%

Abstract:

In numerical linear algebra, students encounter early on the iterative power method, which finds eigenvectors of a matrix from an arbitrary starting point through repeated normalization and multiplication by the matrix itself. In practice, more sophisticated methods are used nowadays, threatening to make the power method a historical and pedagogical footnote. However, in the context of communication over a time-division duplex (TDD) multiple-input multiple-output (MIMO) channel, the power method takes a special position. It can be viewed as an intrinsic part of the uplink and downlink communication switching, enabling estimation of the eigenmodes of the channel without extra overhead. Generalizing the method to vector subspaces, communication in the subspaces with the best receive and transmit signal-to-noise ratio (SNR) is made possible. In exploring this intrinsic subspace convergence (ISC), we show that several published and new schemes can be cast into a common framework in which all members benefit from the ISC.
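
For readers who want the footnote spelled out, here is the basic power iteration together with its subspace generalization (orthogonal iteration), the numerical analogue of tracking the k strongest eigenmodes; in the TDD MIMO setting the repeated products with the channel matrix correspond to uplink/downlink round trips. This is a generic numpy sketch, not the paper's transmission scheme.

import numpy as np

def power_method(A, n_iter=100, rng=None):
    # Repeated multiplication by A and normalization converges to the dominant eigenvector.
    rng = np.random.default_rng(rng)
    v = rng.normal(size=A.shape[0])
    for _ in range(n_iter):
        v = A @ v
        v /= np.linalg.norm(v)
    return v

def subspace_iteration(A, k, n_iter=100, rng=None):
    # Orthogonal iteration: converges to an orthonormal basis of the dominant k-dimensional subspace.
    rng = np.random.default_rng(rng)
    Q, _ = np.linalg.qr(rng.normal(size=(A.shape[0], k)))
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(A @ Q)
    return Q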