962 results for step-down method
Abstract:
The nature of the spatial representations that underlie simple visually guided actions early in life was investigated in toddlers with Williams syndrome (WS), Down syndrome (DS), and healthy chronological age- and mental age-matched controls, through the use of a "double-step" saccade paradigm. The experiment tested the hypothesis that, compared to typically developing infants and toddlers, and toddlers with DS, those with WS display a deficit in using spatial representations to guide actions. Levels of sustained attention were also measured within these groups, to establish whether differences in levels of engagement influenced performance on the double-step saccade task. The results showed that toddlers with WS were unable to combine extra-retinal information with retinal information to the same extent as the other groups, and displayed evidence of other deficits in saccade planning, suggesting a greater reliance on sub-cortical mechanisms than the other populations. Results also indicated that their exploration of the visual environment is less developed. The sustained attention task revealed shorter and fewer periods of sustained attention in toddlers with DS, but not those with WS, suggesting that WS performance on the double-step saccade task is not explained by poorer engagement. The findings are also discussed in relation to a possible attention disengagement deficit in WS toddlers. Our study highlights the importance of studying genetic disorders early in development. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
Finding the smallest eigenvalue of a given square matrix A of order n is a computationally very intensive problem. The most popular method for this problem is the Inverse Power Method, which uses an LU decomposition and forward and backward solves of the factored system at every iteration step. An alternative to this method is the Resolvent Monte Carlo method, which uses a representation of the resolvent matrix [I - qA]^(-m) as a series and then performs Monte Carlo iterations (random walks) on the elements of the matrix. This leads to great savings in computation, but the method has many restrictions and very slow convergence. In this paper we propose a method that includes a fast Monte Carlo procedure for finding the inverse matrix, a refinement procedure to improve the approximation of the inverse if necessary, and Monte Carlo power iterations to compute the smallest eigenvalue. We provide not only theoretical estimates of accuracy and convergence but also results from numerical tests performed on a number of test matrices.
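As a sketch of the baseline the abstract refers to, the following pure-Python snippet implements the classical Inverse Power Method on a small illustrative matrix. The 3×3 matrix, the fixed iteration count, and the use of plain Gaussian elimination in place of a cached LU factorisation are illustrative simplifications, not the paper's Monte Carlo scheme:

```python
# Inverse Power Method sketch: each iteration solves A y = x and normalises,
# so the iterates converge to the eigenvector of the smallest-magnitude
# eigenvalue; the Rayleigh quotient then estimates that eigenvalue.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def smallest_eigenvalue(A, iters=50):
    n = len(A)
    x = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        y = solve(A, x)                      # y ~ A^{-1} x
        norm = max(abs(v) for v in y)
        x = [v / norm for v in y]            # normalise the iterate
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = sum(Ax[i] * x[i] for i in range(n)) / sum(v * v for v in x)
    return lam                               # Rayleigh quotient estimate

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
print(round(smallest_eigenvalue(A), 4))  # → 1.2679
```

For this matrix the exact smallest eigenvalue is 3 − √3 ≈ 1.2679, which the iteration recovers.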
Abstract:
Feature tracking is a key step in the derivation of Atmospheric Motion Vectors (AMVs). Most operational derivation processes use a template matching technique, such as Euclidean distance or cross-correlation, for the tracking step. As this step is computationally very expensive, short-range forecasts generated by Numerical Weather Prediction (NWP) systems are often used to reduce the search area. Alternatives, such as optical flow methods, have been explored with the aim of improving the number and quality of the vectors generated and the computational efficiency of the process. This paper presents the research carried out to apply Stochastic Diffusion Search, a generic search technique in the Swarm Intelligence family, to feature tracking in the context of AMV derivation. The method is described, and we present initial results, with Euclidean distance as the reference.
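As a hedged illustration of the template-matching baseline mentioned above (not the Stochastic Diffusion Search method itself), the following pure-Python sketch slides a template over an image and scores each offset by the sum of squared differences, i.e. squared Euclidean distance. The tiny integer arrays stand in for real satellite imagery:

```python
# Exhaustive template matching: try every offset, keep the one with the
# smallest sum of squared differences between template and image patch.

def best_match(image, template):
    th, tw = len(template), len(template[0])
    ih, iw = len(image), len(image[0])
    best = None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if best is None or ssd < best[0]:
                best = (ssd, r, c)
    return best  # (score, row, col) of the best-matching offset

image = [[0, 0, 0, 0, 0],
         [0, 9, 8, 0, 0],
         [0, 7, 9, 0, 0],
         [0, 0, 0, 0, 0]]
template = [[9, 8],
            [7, 9]]
print(best_match(image, template))  # → (0, 1, 1): exact match at row 1, col 1
```

The quadratic cost of this exhaustive search over a large area is exactly why operational systems restrict the search window with NWP forecasts.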
Abstract:
Plants may be regenerated from stomatal cells or protoplasts of such cells. Prior to regeneration the cells or protoplasts may be genetically transformed by the introduction of hereditary material most preferably by a DNA construct which is free of genes which specify resistance to antibiotics. The regeneration step may include callus formation on a hormone-free medium. The method is particularly suitable for sugar beet.
Abstract:
Liquid clouds play a profound role in the global radiation budget but it is difficult to remotely retrieve their vertical profile. Ordinary narrow field-of-view (FOV) lidars receive a strong return from such clouds but the information is limited to the first few optical depths. Wide-angle multiple-FOV lidars can isolate radiation scattered multiple times before returning to the instrument, often penetrating much deeper into the cloud than the singly-scattered signal. These returns potentially contain information on the vertical profile of the extinction coefficient, but are challenging to interpret due to the lack of a fast radiative transfer model for simulating them. This paper describes a variational algorithm that incorporates a fast forward model based on the time-dependent two-stream approximation, and its adjoint. Application of the algorithm to simulated data from a hypothetical airborne three-FOV lidar with a maximum footprint width of 600 m suggests that this approach should be able to retrieve the extinction structure down to an optical depth of around 6, and total optical depth up to at least 35, depending on the maximum lidar FOV. The convergence behavior of Gauss-Newton and quasi-Newton optimization schemes is compared. We then present results from an application of the algorithm to observations of stratocumulus by the 8-FOV airborne “THOR” lidar. It is demonstrated how the averaging kernel can be used to diagnose the effective vertical resolution of the retrieved profile, and therefore the depth to which information on the vertical structure can be recovered. This work enables returns from spaceborne lidar and radar subject to multiple scattering to be exploited more rigorously than previously possible.
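The Gauss-Newton scheme mentioned above can be illustrated on a toy one-parameter least-squares problem. This sketch fits a hypothetical exponential-decay model; the model, data, and starting guess are stand-ins for illustration, not the paper's two-stream forward model or its adjoint:

```python
import math

# Gauss-Newton on a single parameter k for the model y ≈ exp(-k t):
# linearise the residuals, solve the normal equation, repeat.

def gauss_newton(ts, ys, k=1.0, iters=20):
    for _ in range(iters):
        r = [y - math.exp(-k * t) for t, y in zip(ts, ys)]   # residuals
        J = [t * math.exp(-k * t) for t in ts]               # dr/dk
        JtJ = sum(j * j for j in J)
        Jtr = sum(j * ri for j, ri in zip(J, r))
        k -= Jtr / JtJ                                       # Gauss-Newton step
    return k

ts = [0.5 * i for i in range(1, 9)]
ys = [math.exp(-0.5 * t) for t in ts]   # noise-free synthetic data, true k = 0.5
print(round(gauss_newton(ts, ys), 6))   # → 0.5
```

On zero-residual problems like this one, Gauss-Newton converges rapidly near the solution, which is the behaviour the paper compares against quasi-Newton schemes.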
Assessment of the Wind Gust Estimate Method in mesoscale modelling of storm events over West Germany
Abstract:
A physically based gust parameterisation is added to the atmospheric mesoscale model FOOT3DK to estimate wind gusts associated with storms over West Germany. The gust parameterisation follows the Wind Gust Estimate (WGE) method and its functionality is verified in this study. The method assumes that gusts occurring at the surface are induced by turbulent eddies in the planetary boundary layer, deflecting air parcels from higher levels down to the surface under suitable conditions. Model simulations are performed with horizontal resolutions of 20 km and 5 km. Ten historical storm events of different characteristics and intensities are chosen in order to include a wide range of typical storms affecting Central Europe. All simulated storms occurred between 1990 and 1998. The accuracy of the method is assessed objectively by validating the simulated wind gusts against data from 16 synoptic stations by means of “quality parameters”. Concerning these parameters, the temporal and spatial evolution of the simulated gusts is well reproduced. Simulated values for low altitude stations agree particularly well with the measured gusts. For orographically exposed locations, the gust speeds are partly underestimated. The absolute maximum gusts lie in most cases within the bounding interval given by the WGE method. Focussing on individual storms, the performance of the method is better for intense and large storms than for weaker ones. Particularly for weaker storms, the gusts are typically overestimated. The results for the sample of ten storms document that the method is generally applicable with the mesoscale model FOOT3DK for mid-latitude winter storms, even in areas with complex orography.
Abstract:
The hybrid Monte Carlo (HMC) method is a popular and rigorous method for sampling from a canonical ensemble. The HMC method is based on classical molecular dynamics simulations combined with a Metropolis acceptance criterion and a momentum resampling step. While the HMC method completely resamples the momentum after each Monte Carlo step, the generalized hybrid Monte Carlo (GHMC) method can be implemented with a partial momentum refreshment step. This property seems desirable for keeping some of the dynamic information throughout the sampling process, similar to stochastic Langevin and Brownian dynamics simulations. It is, however, crucial to the success of the GHMC method that the rejection rate in the molecular dynamics part is kept at a minimum; otherwise an undesirable Zitterbewegung in the Monte Carlo samples is observed. In this paper, we describe a method to achieve very low rejection rates by using a modified energy, which is preserved to high order along molecular dynamics trajectories. The modified energy is based on backward error results for symplectic time-stepping methods. The proposed generalized shadow hybrid Monte Carlo (GSHMC) method is applicable to NVT as well as NPT ensemble simulations.
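A minimal sketch of the GHMC structure described above, for a one-dimensional standard-normal target. The step size, trajectory length, and refreshment angle phi are illustrative choices, not values from the paper; phi = π/2 would recover plain HMC's full momentum resampling:

```python
import math, random

# GHMC for U(q) = q^2/2: partial momentum refreshment, leapfrog trajectory,
# Metropolis test on the total energy, momentum flip on rejection.

def leapfrog(q, p, eps, steps):
    grad = lambda x: x                        # dU/dq for U(q) = q^2 / 2
    p -= 0.5 * eps * grad(q)
    for _ in range(steps - 1):
        q += eps * p
        p -= eps * grad(q)
    q += eps * p
    p -= 0.5 * eps * grad(q)
    return q, p

def ghmc(n, eps=0.2, steps=10, phi=0.3, seed=1):
    rng = random.Random(seed)
    c, s = math.cos(phi), math.sin(phi)
    H = lambda q, p: 0.5 * q * q + 0.5 * p * p
    q, p = 0.0, rng.gauss(0.0, 1.0)
    samples, accepted = [], 0
    for _ in range(n):
        p = c * p + s * rng.gauss(0.0, 1.0)   # partial momentum refreshment
        q_new, p_new = leapfrog(q, p, eps, steps)
        if rng.random() < math.exp(H(q, p) - H(q_new, p_new)):
            q, p = q_new, p_new               # accept
            accepted += 1
        else:
            p = -p                            # reject: flip momentum
        samples.append(q)
    return samples, accepted / n

samples, rate = ghmc(20000)
mean = sum(samples) / len(samples)
var = sum(x * x for x in samples) / len(samples)
print(round(mean, 2), round(var, 2), round(rate, 2))
```

With a small phi the chain retains most of its momentum between steps, which is why a high rejection rate (and the resulting momentum flips) produces the Zitterbewegung the abstract warns about.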
Abstract:
Many applications, such as intermittent data assimilation, lead to a recursive application of Bayesian inference within a Monte Carlo context. Popular data assimilation algorithms include sequential Monte Carlo methods and ensemble Kalman filters (EnKFs). These methods differ in the way Bayesian inference is implemented. Sequential Monte Carlo methods rely on importance sampling combined with a resampling step, while EnKFs utilize a linear transformation of Monte Carlo samples based on the classic Kalman filter. While EnKFs have proven to be quite robust even for small ensemble sizes, they are not consistent since their derivation relies on a linear regression ansatz. In this paper, we propose another transform method, which does not rely on any a priori assumptions on the underlying prior and posterior distributions. The new method is based on solving an optimal transportation problem for discrete random variables. © 2013, Society for Industrial and Applied Mathematics
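For contrast with the proposed transform, the importance-sampling-plus-resampling step that the abstract attributes to sequential Monte Carlo methods can be sketched as follows. Systematic resampling is one standard scheme (the particles, weights, and seed are illustrative, not from the paper):

```python
import random

# Systematic resampling: one uniform draw generates n evenly spaced pointers
# into the cumulative weight distribution; each pointer copies a particle.

def systematic_resample(particles, weights, rng):
    n = len(particles)
    total = sum(weights)
    u = rng.random() / n
    targets = [u + k / n for k in range(n)]      # n evenly spaced pointers
    out, cumulative, i = [], 0.0, 0
    for w, p in zip(weights, particles):
        cumulative += w / total                  # normalised cumulative weight
        while i < n and targets[i] <= cumulative:
            out.append(p)                        # copy particle for this pointer
            i += 1
    while i < n:                                 # guard against float round-off
        out.append(particles[-1])
        i += 1
    return out

rng = random.Random(0)
print(systematic_resample([0, 1, 2, 3], [0.7, 0.1, 0.1, 0.1], rng))
```

After resampling, heavily weighted particles appear multiple times and lightly weighted ones may vanish; the paper's optimal transport transform replaces exactly this discrete, randomised step with a deterministic linear transformation.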
Abstract:
Sensory thresholds are often collected through ascending forced-choice methods. Group thresholds are important for comparing stimuli or populations; yet the method has two problems: an individual may guess the correct answer at any concentration step, and may detect correctly at low concentrations but become adapted or fatigued at higher concentrations. The survival analysis method deals with both issues. Individual sequences of incorrect and correct answers are adjusted, taking into account the group performance at each concentration. The technique reduces the chance probability where there are consecutive correct answers. Adjusted sequences are submitted to survival analysis to determine group thresholds. The technique was applied to an aroma threshold and a taste threshold study. It resulted in group thresholds similar to those from ASTM or logarithmic regression procedures. Significant differences in taste thresholds between younger and older adults were determined. The approach provides a more robust technique than previous estimation methods.
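The survival-analysis idea can be sketched with a simple Kaplan-Meier estimator: treat the concentration step at which each panellist first detects reliably as the event time, censor non-detectors at the highest step, and read off the step where the estimated surviving (non-detecting) fraction falls to 50%. The panel data and the 50% read-off are illustrative assumptions, not the paper's exact adjustment procedure:

```python
# Kaplan-Meier survival over concentration steps; "survival" here means
# the estimated fraction of the group that has not yet detected the stimulus.

def kaplan_meier(events):
    """events: list of (step, detected); detected=False means censored."""
    steps = sorted({s for s, det in events if det})
    surv, S = {}, 1.0
    for step in steps:
        at_risk = sum(1 for s, _ in events if s >= step)          # still in play
        d = sum(1 for s, det in events if s == step and det)      # detections
        S *= 1.0 - d / at_risk
        surv[step] = S
    return surv

def group_threshold(events):
    for step, S in sorted(kaplan_meier(events).items()):
        if S <= 0.5:
            return step
    return None   # more than half the group never detected

# Hypothetical panel: first reliable detection step, False = never detected
panel = [(2, True), (3, True), (3, True), (3, True), (4, True), (5, False)]
print(group_threshold(panel))  # → 3
```

Censoring is what lets the estimator handle panellists who never detect, which a plain average of individual thresholds cannot.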
Abstract:
A new online method to analyse water isotopes of speleothem fluid inclusions using a wavelength-scanned cavity ring-down spectroscopy (WS-CRDS) instrument is presented. This novel technique allows hydrogen and oxygen isotopes to be measured simultaneously for a released aliquot of water. To do so, we designed a new simple line that allows the online water extraction and isotope analysis of speleothem samples. The specificity of the method lies in the fact that fluid inclusion release takes place on a standard water background, which mainly improves the δD robustness. To saturate the line, a peristaltic pump continuously injects standard water into the line, which is permanently heated to 140 °C and flushed with dry nitrogen gas. This permits instantaneous and complete vaporisation of the standard water, resulting in an artificial water background with well-known δD and δ18O values. The speleothem sample is placed in a copper tube attached to the line and, after system stabilisation, is crushed using a simple hydraulic device to liberate the speleothem fluid inclusion water. The released water is carried by the nitrogen/standard water gas stream directly to a Picarro L1102-i for isotope determination. To test the accuracy and reproducibility of the line and to measure standard water during speleothem measurements, a syringe injection unit was added to the line. Peak evaluation is done similarly as in gas chromatography to obtain the δD and δ18O isotopic compositions of measured water aliquots. Precision is better than 1.5 ‰ for δD and 0.4 ‰ for δ18O for water measurements over an extended range (−210 to 0 ‰ for δD and −27 to 0 ‰ for δ18O), depending primarily on the amount of water released from speleothem fluid inclusions and secondarily on the isotopic composition of the sample.
The results show that WS-CRDS technology is suitable for speleothem fluid inclusion measurements and gives results that are comparable to the isotope ratio mass spectrometry (IRMS) technique.
Abstract:
Image registration is a fundamental step which greatly affects later processes in image mosaicking, multi-spectral image fusion, digital surface modelling, etc., where the final solution requires blending pixel information from more than one image. It is highly desirable to find a way to identify registration regions among input stereo image pairs with high accuracy, particularly in remote sensing applications in which ground control points (GCPs) are not always available, such as when selecting a landing zone on another planet. In this paper, a framework for localization in image registration is developed. It strengthens local registration accuracy in two respects: lower reprojection error and better feature point distribution. Affine scale-invariant feature transform (ASIFT) was used for acquiring feature points and correspondences on the input images. Then, a homography matrix was estimated as the transformation model by an improved random sample consensus (IM-RANSAC) algorithm. In order to identify a registration region with a better spatial distribution of feature points, the Euclidean distance between the feature points is applied (named the S criterion). Finally, the parameters of the homography matrix were optimized by the Levenberg–Marquardt (LM) algorithm with selected feature points from the chosen registration region. In the experiments, Chang’E-2 satellite remote sensing imagery was used to evaluate the performance of the proposed method. The results demonstrate that the proposed method can automatically locate a specific region with high registration accuracy between input images, achieving a lower root mean square error (RMSE) and a better distribution of feature points.
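The hypothesise-and-verify loop behind RANSAC is easier to show for a 2-D line model than for a homography; the following deliberately simplified sketch (synthetic points, fixed thresholds, plain RANSAC rather than the paper's IM-RANSAC) conveys the idea:

```python
import random

# RANSAC: repeatedly fit a model to a minimal random sample, count how many
# points agree with it within a tolerance, and keep the best-supported model.

def ransac_line(points, iters=200, tol=0.1, seed=0):
    rng = random.Random(seed)
    best_inliers, model = [], None
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)   # minimal sample: 2 points
        if x1 == x2:
            continue                                 # vertical: skip hypothesis
        a = (y2 - y1) / (x2 - x1)                    # hypothesis y = a x + b
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers, model = inliers, (a, b)
    return model, best_inliers

# 10 points on y = 2x + 1 plus two gross outliers
points = [(x, 2.0 * x + 1.0) for x in range(10)] + [(3.0, 40.0), (7.0, -5.0)]
(a, b), inliers = ransac_line(points)
print(a, b, len(inliers))  # → 2.0 1.0 10
```

In the paper the minimal sample is four correspondences and the model a homography, with LM refinement afterwards, but the structure of the loop is the same.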
Abstract:
This brief proposes a new method for the identification of fractional order transfer functions based on the time response resulting from a single step excitation. The proposed method is applied to the identification of a three-dimensional RC network, which can be tailored in terms of topology and composition to emulate real time systems governed by fractional order dynamics. The results are in excellent agreement with the actual network response, yet the identification procedure only requires a small number of coefficients to be determined, demonstrating that the fractional order modelling approach leads to very parsimonious model formulations.
Abstract:
Human respiratory syncytial virus (HRSV) is the main cause of acute lower respiratory tract infections in infants and children. Rapid diagnosis is required to permit appropriate care and treatment and to avoid unnecessary antibiotic use. Reverse transcription PCR (RT-PCR) and indirect immunofluorescence assay (IFA) methods have been considered important tools for virus detection due to their high sensitivity and specificity. In order to maximize simplicity of use and minimize the risk of sample cross-contamination inherent in two-step techniques, an RT-PCR method using only a single tube to detect HRSV in clinical samples was developed. Nasopharyngeal aspirates from 226 patients with acute respiratory illness, ranging in age from infancy to 5 years, were collected at the University Hospital of the University of Sao Paulo (HU-USP) and tested using IFA, one-step RT-PCR, and semi-nested RT-PCR. One hundred and two (45.1%) samples were positive by at least one of the three methods, and 75 (33.2%) were positive by all methods: 92 (40.7%) were positive by one-step RT-PCR, 84 (37.2%) by IFA, and 96 (42.5%) by the semi-nested RT-PCR technique. One-step RT-PCR was shown to be fast, sensitive, and specific for RSV diagnosis, without the added inconvenience and risk of false positive results associated with semi-nested PCR. The combined use of these two methods enhances HRSV detection. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
We here report the preparation of supported palladium nanoparticles (NPs) stabilized by pendant phosphine groups by reacting a palladium complex containing the ligand 2-(diphenylphosphino)benzaldehyde with an amino-functionalized silica surface. The Pd nanocatalyst is active for the Suzuki cross-coupling reaction, avoiding any addition of other sources of phosphine ligands. The Pd intermediates and Pd NPs were characterized by solid-state nuclear magnetic resonance and transmission electron microscopy techniques. The synthetic method was also applied to prepare magnetically recoverable Pd NPs, leading to a catalyst that could be reused for up to 10 recycles. In summary, we gathered the advantages of heterogeneous catalysis, magnetic separation, and the enhanced catalytic activity of palladium promoted by phosphine ligands to synthesize a new catalyst for Suzuki cross-coupling reactions. The Pd NP catalyst prepared on the phosphine-functionalized support was more active and selective than a similar Pd NP catalyst prepared on an amino-functionalized support. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
Nuclear (p,α) reactions destroying the so-called "light elements" lithium, beryllium and boron have been studied extensively in the past, mainly because of their role in understanding some astrophysical phenomena, i.e. mixing phenomena occurring in young F-G stars [1]. Such mechanisms transport the surface material down to the region close to the nuclear destruction zone, where typical temperatures of the order of 10^6 K are reached. The corresponding Gamow energy E_0 = 1.22 (Z_x^2 Z_X^2 T_6^2)^(1/3) keV [2] is about 10 keV if one considers the "boron case" and sets Z_x = 1, Z_X = 5 and T_6 = 5 in the formula. Direct measurements of the two ¹¹B(p,α₀)⁸Be and ¹⁰B(p,α)⁷Be reactions in this energy region are difficult to perform, mainly because of the combined effects of Coulomb barrier penetrability and electron screening [3]. The indirect Trojan Horse Method (THM) [4-6] allows one to extract the two-body reaction cross section of interest for astrophysics without extrapolation procedures. Because of the THM formalism, the extracted indirect data have to be normalized to the available direct data at higher energies, which implies that the method is a complementary tool for solving some still open questions in both nuclear physics and astrophysics [7-12].