980 results for Proximal Point Algorithm
Abstract:
AIM: To evaluate the effects of meal size and three segmentations on intragastric distribution of the meal and gastric motility, by scintigraphy. METHODS: Twelve healthy volunteers were each assessed twice by scintigraphy, in random order. The test meal consisted of 60 or 180 mL of yogurt labeled with 64 MBq of 99mTc-tin colloid. Anterior and posterior dynamic frames were simultaneously acquired for 18 min and all data were analyzed in MatLab. Three proximal-distal segmentations using regions of interest were adopted for both meals. RESULTS: Intragastric distribution of the meal between the proximal and distal compartments was strongly influenced by the way in which the stomach was divided, showing greater proximal retention after the 180 mL meal. An important finding was that both dominant frequencies (1 and 3 cpm) were simultaneously recorded in the proximal and distal stomach; however, the power ratio of those dominant frequencies varied with the segmentation adopted and was independent of the meal size. CONCLUSION: It was possible to simultaneously evaluate the static intragastric distribution and phasic contractility from the same recording using our scintigraphic approach. (C) 2010 Baishideng. All rights reserved.
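As a rough illustration of the kind of analysis described in this abstract, the sketch below (in Python rather than the authors' MatLab code) computes a geometric-mean image series from anterior/posterior frames, splits it with a hypothetical proximal/distal region of interest, and compares spectral power at the 1 and 3 cpm dominant frequencies; the frame counts, frame interval, and half-image ROI split are assumptions, not the study's settings.

```python
# Illustrative sketch (not the authors' MatLab code): geometric-mean correction of
# anterior/posterior scintigraphic frames, a hypothetical proximal/distal split, and
# the spectral power ratio at the 1 and 3 cpm dominant frequencies.
import numpy as np

def geometric_mean(anterior: np.ndarray, posterior: np.ndarray) -> np.ndarray:
    """Depth-correct paired frames; the posterior view is mirrored left-right first."""
    return np.sqrt(anterior * posterior[:, :, ::-1])

def power_at(counts: np.ndarray, freq_cpm: float, frame_interval_s: float) -> float:
    """Spectral power of a time-activity curve at a given frequency (cycles per minute)."""
    detrended = counts - counts.mean()
    spectrum = np.abs(np.fft.rfft(detrended)) ** 2
    freqs_cpm = np.fft.rfftfreq(counts.size, d=frame_interval_s) * 60.0
    return float(spectrum[np.argmin(np.abs(freqs_cpm - freq_cpm))])

# Synthetic 18-min acquisition: 108 frames of 10 s, 64x64 pixels.
rng = np.random.default_rng(0)
ant = rng.poisson(100, size=(108, 64, 64)).astype(float)
post = rng.poisson(100, size=(108, 64, 64)).astype(float)
gm = geometric_mean(ant, post)

# Hypothetical segmentation: upper half of the image = proximal, lower half = distal.
proximal = gm[:, :32, :].sum(axis=(1, 2))
distal = gm[:, 32:, :].sum(axis=(1, 2))
print(f"mean proximal retention = {(proximal / (proximal + distal)).mean():.2f}")

for name, curve in (("proximal", proximal), ("distal", distal)):
    ratio = power_at(curve, 3.0, 10.0) / power_at(curve, 1.0, 10.0)
    print(f"{name}: power ratio 3 cpm / 1 cpm = {ratio:.2f}")
```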
Abstract:
This paper presents a new statistical algorithm to estimate rainfall over the Amazon Basin region using the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The algorithm relies on empirical relationships derived for different raining-type systems between coincident measurements of surface rainfall rate and 85-GHz polarization-corrected brightness temperature as observed by the precipitation radar (PR) and TMI on board the TRMM satellite. The scheme includes rain/no-rain area delineation (screening) and system-type classification routines for rain retrieval. The algorithm is validated against independent measurements of the TRMM-PR and S-band dual-polarization Doppler radar (S-Pol) surface rainfall data for two different periods. Moreover, the performance of this rainfall estimation technique is evaluated against well-known methods, namely, the TRMM-2A12 [the Goddard profiling algorithm (GPROF)], the Goddard scattering algorithm (GSCAT), and the National Environmental Satellite, Data, and Information Service (NESDIS) algorithms. The proposed algorithm shows a normalized bias of approximately 23% for both PR and S-Pol ground truth datasets and a mean error of 0.244 mm h⁻¹ (PR) and −0.157 mm h⁻¹ (S-Pol). For rain volume estimates using PR as reference, a correlation coefficient of 0.939 and a normalized bias of 0.039 were found. With respect to rainfall distributions and rain area comparisons, the results showed that the formulation proposed is efficient and compatible with the physics and dynamics of the observed systems over the area of interest. The performance of the other algorithms showed that GSCAT presented low normalized bias for rain areas and rain volume [0.346 (PR) and 0.361 (S-Pol)], and GPROF showed rainfall distribution similar to that of the PR and S-Pol but with a bimodal distribution. Last, the five algorithms were evaluated during the TRMM-Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) 1999 field campaign to verify the precipitation characteristics observed during the easterly and westerly Amazon wind flow regimes. The proposed algorithm presented a cumulative rainfall distribution similar to the observations during the easterly regime, but it underestimated for the westerly period for rainfall rates above 5 mm h⁻¹. NESDIS(1) overestimated for both wind regimes but presented the best westerly representation. NESDIS(2), GSCAT, and GPROF underestimated in both regimes, but GPROF was closer to the observations during the easterly flow.
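The validation figures quoted above (normalized bias, mean error, correlation coefficient) are simple aggregate statistics. A minimal sketch of how they can be computed for co-located satellite and radar rain rates is given below; the data are entirely synthetic and only stand in for the TMI retrieval and the PR/S-Pol references.

```python
# Minimal sketch of the validation statistics quoted in the abstract (normalized bias,
# mean error, correlation coefficient) for a rain-rate estimate against a radar
# reference. The arrays are synthetic; this is not the TRMM processing code.
import numpy as np

def validation_stats(estimate: np.ndarray, reference: np.ndarray) -> dict:
    """Compare two co-located rain-rate fields (mm/h)."""
    mean_error = float(np.mean(estimate - reference))                          # mm/h
    normalized_bias = float((estimate.sum() - reference.sum()) / reference.sum())
    correlation = float(np.corrcoef(estimate, reference)[0, 1])
    return {"mean_error": mean_error,
            "normalized_bias": normalized_bias,
            "correlation": correlation}

rng = np.random.default_rng(1)
reference = rng.gamma(shape=0.8, scale=4.0, size=5000)        # reference (e.g. PR) rain rates
estimate = reference * 1.2 + rng.normal(0.0, 1.0, size=5000)   # biased, noisy retrieval
estimate = np.clip(estimate, 0.0, None)

print(validation_stats(estimate, reference))
```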
Abstract:
Context. B[e] supergiants are luminous, massive post-main sequence stars exhibiting non-spherical winds, forbidden lines, and hot dust in a disc-like structure. The physical properties of their rich and complex circumstellar environment (CSE) are not well understood, partly because these CSEs cannot be easily resolved at the large distances found for B[e] supergiants (typically ≳1 kpc). Aims. From mid-IR spectro-interferometric observations obtained with VLTI/MIDI we seek to resolve and study the CSE of the Galactic B[e] supergiant CPD−57° 2874. Methods. For a physical interpretation of the observables (visibilities and spectrum) we use our ray-tracing radiative transfer code (FRACS), which is optimised for thermal spectro-interferometric observations. Results. Thanks to the short computing time required by FRACS (<10 s per monochromatic model), best-fit parameters and uncertainties for several physical quantities of CPD−57° 2874 were obtained, such as inner dust radius, relative flux contribution of the central source and of the dusty CSE, dust temperature profile, and disc inclination. Conclusions. The analysis of VLTI/MIDI data with FRACS allowed one of the first direct determinations of physical parameters of the dusty CSE of a B[e] supergiant based on interferometric data and using a full model-fitting approach. In a larger context, the study of B[e] supergiants is important for a deeper understanding of the complex structure and evolution of hot, massive stars.
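As a much-simplified stand-in for the model fitting described here, the sketch below performs a chi-square grid search of a single geometric parameter (a uniform-disc angular diameter) against synthetic mid-IR visibilities. FRACS itself fits a full parametrised dusty-CSE model by ray tracing, so the model, baselines, and "observed" data below are only illustrative assumptions.

```python
# Toy chi-square fit of an interferometric model to visibilities (not the FRACS code):
# a uniform-disc diameter is fitted to synthetic N-band visibility amplitudes.
import numpy as np
from scipy.special import j1

MAS_TO_RAD = np.pi / (180.0 * 3600.0 * 1000.0)

def uniform_disc_visibility(baseline_m, diameter_mas, wavelength_m):
    x = np.pi * baseline_m * diameter_mas * MAS_TO_RAD / wavelength_m
    return np.abs(2.0 * j1(x) / x)

rng = np.random.default_rng(3)
baselines = np.linspace(10.0, 130.0, 12)            # projected baselines, m (hypothetical)
wavelength = 10.0e-6                                 # N band, m
v_obs = uniform_disc_visibility(baselines, 12.0, wavelength) + rng.normal(0, 0.02, 12)
sigma = np.full_like(v_obs, 0.02)

# Grid search over trial diameters, minimising chi-square.
diameters = np.linspace(2.0, 30.0, 281)
chi2 = [np.sum(((v_obs - uniform_disc_visibility(baselines, d, wavelength)) / sigma) ** 2)
        for d in diameters]
best = diameters[int(np.argmin(chi2))]
print(f"best-fit uniform-disc diameter ≈ {best:.1f} mas")
```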
Abstract:
The VISTA near-infrared survey of the Magellanic System (VMC) will provide deep YJKs photometry reaching stars at the oldest turn-off point throughout the Magellanic Clouds (MCs). As part of the preparation for the survey, we aim to assess the accuracy in the star formation history (SFH) that can be expected from VMC data, in particular for the Large Magellanic Cloud (LMC). To this aim, we first simulate VMC images containing not only the LMC stellar populations but also the foreground Milky Way (MW) stars and background galaxies. The simulations cover the whole range of density of LMC field stars. We then perform aperture photometry over these simulated images, assess the expected levels of photometric errors and incompleteness, and apply the classical technique of SFH-recovery based on the reconstruction of colour-magnitude diagrams (CMD) via the minimisation of a chi-squared-like statistic. We verify that the foreground MW stars are accurately recovered by the minimisation algorithms, whereas the background galaxies can be largely eliminated from the CMD analysis due to their particular colours and morphologies. We then evaluate the expected errors in the recovered star formation rate as a function of stellar age, SFR(t), starting from models with a known age-metallicity relation (AMR). It turns out that, for a given sky area, the random errors for ages older than ~0.4 Gyr seem to be independent of the crowding. This can be explained by a counterbalancing effect between the loss of stars from a decrease in the completeness and the gain of stars from an increase in the stellar density. For a spatial resolution of ~0.1 deg², the random errors in SFR(t) will be below 20% for this wide range of ages. On the other hand, due to the lower stellar statistics for stars younger than ~0.4 Gyr, the outer LMC regions will require larger areas to achieve the same level of accuracy in the SFR(t). If we consider the AMR as unknown, the SFH-recovery algorithm is able to accurately recover the input AMR, at the price of an increase of the random errors in the SFR(t) by a factor of about 2.5. Experiments of SFH-recovery performed for varying distance modulus and reddening indicate that these parameters can be determined with (relative) accuracies of Δ(m−M)₀ ~ 0.02 mag and ΔE(B−V) ~ 0.01 mag, for each individual field over the LMC. The propagation of these errors into the SFR(t) implies systematic errors below 30%. This level of accuracy in the SFR(t) can reveal significant imprints in the dynamical evolution of this unique and nearby stellar system, as well as possible signatures of the past interaction between the MCs and the MW.
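The chi-squared-like minimisation step can be illustrated with a toy example: the observed Hess diagram (binned CMD) is written as a non-negative combination of partial model CMDs, one per age bin, and the fitted coefficients play the role of SFR(t). The sketch below uses synthetic matrices and a plain non-negative least-squares solver, a deliberate simplification of the Poisson-based statistic typically minimised in SFH work.

```python
# Toy CMD-based SFH recovery: observed Hess diagram = non-negative combination of
# partial model Hess diagrams, one per age bin. All inputs are synthetic, not VMC data.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)

n_bins, n_ages = 400, 10                          # flattened CMD bins x age bins
partial_models = rng.random((n_bins, n_ages))     # expected counts per unit SFR, per age bin
true_sfr = rng.random(n_ages)                     # "true" star formation rate per age bin

# Observed Hess diagram: Poisson realisation of the model prediction.
observed = rng.poisson(partial_models @ true_sfr * 50.0).astype(float)

# Least-squares (chi-squared-like) fit with non-negative SFR coefficients.
recovered, residual = nnls(partial_models * 50.0, observed)

print("true SFR:     ", np.round(true_sfr, 2))
print("recovered SFR:", np.round(recovered, 2))
```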
Abstract:
We study the transport properties of HgTe-based quantum wells containing simultaneously electrons and holes in a magnetic field B. At the charge neutrality point (CNP), with nearly equal electron and hole densities, the resistance is found to increase very strongly with B while the Hall resistivity turns to zero. This behavior results in a wide plateau in the Hall conductivity σ_xy ≈ 0 and in a minimum of the diagonal conductivity σ_xx at ν = ν_p − ν_n = 0, where ν_n and ν_p are the electron and hole Landau level filling factors. We suggest that the transport at the CNP is determined by electron-hole "snake states" propagating along the ν = 0 lines. Our observations are qualitatively similar to the quantum Hall effect in graphene as well as to the transport in a random magnetic field with a zero mean value.
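For readers unfamiliar with the filling-factor notation, a back-of-the-envelope sketch is given below: ν = n h / (e B) for each carrier type, with ν = ν_p − ν_n near zero at the CNP. The densities and field used are hypothetical round numbers, not the values measured in the paper.

```python
# Worked example of Landau-level filling factors: nu = n * h / (e * B) for electrons
# and holes, and nu = nu_p - nu_n at the charge neutrality point. Illustrative numbers only.
h = 6.62607015e-34   # Planck constant, J s
e = 1.602176634e-19  # elementary charge, C

def filling_factor(density_m2: float, field_T: float) -> float:
    return density_m2 * h / (e * field_T)

n_electrons = 1.00e15   # electron density, m^-2 (hypothetical)
n_holes = 1.05e15       # hole density, m^-2 (hypothetical, nearly equal)
B = 5.0                 # magnetic field, T

nu_n = filling_factor(n_electrons, B)
nu_p = filling_factor(n_holes, B)
print(f"nu_n = {nu_n:.2f}, nu_p = {nu_p:.2f}, nu = nu_p - nu_n = {nu_p - nu_n:.2f}")
```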
Abstract:
The dynamics and mechanism of migration of a vacancy point defect in a two-dimensional (2D) colloidal crystal are studied using numerical simulations. We find that the migration of a vacancy is always realized by topology switching between its different configurations. From the temperature dependence of the topology switch frequencies, we obtain the activation energies for possible topology transitions associated with the vacancy diffusion in the 2D crystal. (C) 2011 American Institute of Physics. [doi:10.1063/1.3615287]
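The step from temperature-dependent switch frequencies to activation energies is typically an Arrhenius analysis; a minimal sketch, with made-up frequencies and assuming the form f(T) = f0 exp(−Ea / kB T), is shown below.

```python
# Minimal Arrhenius sketch: extract an activation energy from the temperature
# dependence of topology-switch frequencies. Frequencies are synthetic; in the paper
# they come from the simulations of the 2D colloidal crystal.
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

# Hypothetical switch frequencies (events per unit time) at a few temperatures.
T = np.array([280.0, 300.0, 320.0, 340.0])        # K
Ea_true = 0.25 * 1.602176634e-19                   # 0.25 eV, used to generate fake data
f = 1.0e6 * np.exp(-Ea_true / (kB * T))

# Linear fit of ln(f) against 1/T: slope = -Ea / kB.
slope, intercept = np.polyfit(1.0 / T, np.log(f), 1)
Ea_fit_eV = -slope * kB / 1.602176634e-19
print(f"fitted activation energy: {Ea_fit_eV:.3f} eV")
```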
Abstract:
Vertices are of central importance for constructing QCD bound states out of the individual constituents of the theory, i.e. quarks and gluons. In particular, the determination of three-point vertices is crucial in nonperturbative investigations of QCD. We use numerical simulations of lattice gauge theory to obtain results for the 3-point vertices in Landau-gauge SU(2) Yang-Mills theory in three and four space-time dimensions for various kinematic configurations. In all cases considered, the ghost-gluon vertex is found to be essentially tree-level-like, while the three-gluon vertex is suppressed at intermediate momenta. For the smallest physical momenta, reachable only in three dimensions, we find that some of the three-gluon-vertex tensor structures change sign.
Abstract:
Consider a discrete locally finite subset Γ of ℝ^d and the complete graph (Γ, E), with vertices Γ and edges E. We consider Gibbs measures on the set of sub-graphs with vertices Γ and edges E′ ⊂ E. The Gibbs interaction acts between open edges having a vertex in common. We study percolation properties of the Gibbs distribution of the graph ensemble. The main results concern percolation properties of the open edges in two cases: (a) when Γ is sampled from a homogeneous Poisson process; and (b) for a fixed Γ with sufficiently sparse points. (C) 2010 American Institute of Physics. [doi:10.1063/1.3514605]
Abstract:
An analytical procedure based on microwave-assisted digestion with diluted acid and a double cloud point extraction is proposed for nickel determination in plant materials by flame atomic absorption spectrometry. Extraction in micellar medium was successfully applied for sample clean-up, aiming to remove organic species containing phosphorus that caused spectral interferences through structured background attributed to the formation of PO species in the flame. Cloud point extraction of the nickel complexes formed with 1-(2-thiazolylazo)-2-naphthol was explored for pre-concentration, with an enrichment factor estimated as 30, a detection limit of 5 µg L⁻¹ (99.7% confidence level) and a linear response up to 80 µg L⁻¹. The accuracy of the procedure was evaluated by nickel determinations in reference materials and the results agreed with the certified values at the 95% confidence level.
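Two of the figures of merit quoted here, the detection limit at the 99.7% confidence level (3 s_blank / slope) and the enrichment factor (ratio of calibration slopes with and without preconcentration), can be illustrated with the short sketch below; the blank readings and slopes are hypothetical, not the authors' data.

```python
# Illustrative calculation (not the authors' data) of a detection limit and an
# enrichment factor for a cloud point extraction / FAAS method.
import numpy as np

blank_absorbances = np.array([0.0021, 0.0019, 0.0024, 0.0018, 0.0022,
                              0.0020, 0.0023, 0.0019, 0.0021, 0.0020])

slope_with_cpe = 0.0120   # absorbance per (µg/L), hypothetical, after preconcentration
slope_direct = 0.0004     # absorbance per (µg/L), hypothetical, direct aspiration

lod = 3.0 * blank_absorbances.std(ddof=1) / slope_with_cpe    # 3-sigma criterion
enrichment_factor = slope_with_cpe / slope_direct             # slope ratio

print(f"detection limit ~ {lod:.2f} µg/L")
print(f"enrichment factor ~ {enrichment_factor:.0f}")
```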
Abstract:
The aim of this study was to compare the learning process of a highly complex ballet skill following demonstrations of point-light and video models. Sixteen participants, divided into point-light and video groups (ns = 8), performed 160 trials of a pirouette, equally distributed in blocks of 20 trials, alternating periods of demonstration and practice, with a retention test a day later. Measures of head and trunk oscillation, coordination, disparity from the model, and movement time difference showed similarities between the video and point-light groups; ballet experts' evaluations indicated superiority of performance in the video over the point-light group. Results are discussed in terms of the task requirements of dissociation between head and trunk rotations, focusing on the hypothesis of sufficiency and higher relevance of information contained in biological motion models applied to the learning of complex motor skills.
Abstract:
A procedure for simultaneous separation/preconcentration of copper, zinc, cadmium, and nickel in water samples, based on cloud point extraction (CPE) as a prior step to their determination by inductively coupled plasma optical emission spectrometry (ICP-OES), has been developed. The analytes reacted with 4-(2-pyridylazo)-resorcinol (PAR) at pH 5 to form hydrophobic chelates, which were separated and preconcentrated in a surfactant-rich phase of octylphenoxypolyethoxyethanol (Triton X-114). The parameters affecting the extraction efficiency of the proposed method, such as sample pH, complexing agent concentration, buffer amount, surfactant concentration, temperature, kinetics of the complexation reaction, and incubation time, were optimized; their respective values were 5, 0.6 mmol L⁻¹, 0.3 mL, 0.15% (w/v), 50 °C, 40 min, and 10 min for 15 mL of preconcentrated solution. The method presented precision (R.S.D.) between 1.3% and 2.6% (n = 9). The concentration factors with and without dilution of the surfactant-rich phase for the analytes ranged from 9.4 to 10.1 and from 94.0 to 100.1, respectively. The limits of detection (L.O.D.) obtained for copper, zinc, cadmium, and nickel were 1.2, 1.1, 1.0, and 6.3 µg L⁻¹, respectively. The accuracy of the procedure was evaluated through recovery experiments on aqueous samples. (C) 2009 Published by Elsevier B.V.
Abstract:
An improved procedure is proposed for determination of the pesticide carbaryl in natural waters based on double cloud point extraction. The clean-up step was carried out only with Triton X-114 in alkaline medium in order to avoid the use of toxic organic solvents as well as to minimise waste generation. Cloud point preconcentration of the product of the reaction of the analyte with p-aminophenol and cetyltrimethylammonium bromide was explored to increase sensitivity and improve the detection limit. Linear response was achieved between 10 and 500 µg L⁻¹ and the apparent molar absorptivity was estimated as 4.6 × 10⁵ L mol⁻¹ cm⁻¹. The detection limit was estimated as 7 µg L⁻¹ at the 99.7% confidence level and the coefficient of variation was 3.4% (n = 8). Recoveries between 91% and 99% were estimated for carbaryl-spiked water samples. The results obtained for natural water samples were in agreement with those achieved by the batch spectrophotometric procedure at the 95% confidence level. The proposed procedure is thus a simple, fast, inexpensive and greener alternative for carbaryl determination.
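The apparent molar absorptivity quoted above can be sanity-checked with a one-line Beer-Lambert calculation. The sketch below assumes a 1 cm path length and the approximate molar mass of carbaryl (about 201 g mol⁻¹), and simply predicts the absorbance expected in the middle of the stated linear range.

```python
# Beer-Lambert worked example (an editorial sanity check, not the authors' calculation):
# A = eps * b * c, with eps the apparent molar absorptivity reported in the abstract.
MOLAR_MASS_CARBARYL = 201.2      # g/mol (approximate)
eps = 4.6e5                      # apparent molar absorptivity, L mol^-1 cm^-1
path_length = 1.0                # cm (assumed)

conc_ug_L = 100.0                                        # within the 10-500 µg/L linear range
conc_mol_L = conc_ug_L * 1e-6 / MOLAR_MASS_CARBARYL      # convert µg/L to mol/L
absorbance = eps * path_length * conc_mol_L

print(f"{conc_ug_L} µg/L of carbaryl -> A ≈ {absorbance:.2f}")
```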
Abstract:
A flow injection (FI) micelle-mediated separation/preconcentration procedure for the determination of lead and cadmium by flame atomic absorption spectrometry (FAAS) has been proposed. The analytes reacted with 1-(2-thiazolylazo)-2-naphthol (TAN) to form hydrophobic chelates, which were extracted into the micelles of 0.05% (w/v) Triton X-114 in a solution buffered at pH 8.4. In the preconcentration stage, the micellar solution was continuously injected into a flow system with four mini-columns packed with cotton, glass wool, or TNT compresses for phase separation. The analyte-containing micelles were eluted from the mini-columns by a stream of 3 mol L⁻¹ HCl solution and the analytes were determined by FAAS. Chemical and flow variables affecting the preconcentration of the analytes were studied. For 15 mL of preconcentrated solution, the enhancement factors varied between 15.1 and 20.3, and the limits of detection were approximately 4.5 and 0.75 µg L⁻¹ for lead and cadmium, respectively. For a solution containing 100 and 10 µg L⁻¹ of lead and cadmium, respectively, the R.S.D. values varied from 1.6 to 3.2% (n = 7). The accuracy of the preconcentration system was evaluated by recovery measurements on spiked water samples. The method was susceptible to matrix effects, but these interferences were minimized by adding barium ions as a masking agent to the sample solutions, and recoveries from spiked samples varied in the range of 95.1-107.3%. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
The concentration of hydrogen peroxide is an important parameter in the decoloration of azo dyes through advanced oxidation processes, particularly oxidation via UV/H2O2. Above a certain concentration, hydrogen peroxide acts as a hydroxyl radical scavenger, and the system's oxidizing power therefore decreases. The determination of the process critical point (the maximum amount of hydrogen peroxide to be added) was performed through a "thorough mapping", or discretization, of the target region, based on the maximization of an objective function (the pseudo-first-order reaction kinetics constant). The discretization of the operational region was carried out with a feedforward backpropagation neural model. The neural model obtained presented a remarkable correlation coefficient between real and predicted values of the absorbance variable, above 0.98. In the present work, the neural model had as its phenomenological basis the Acid Brown 75 dye decoloration process. The hydrogen peroxide addition critical point, represented by the mass ratio (F) between the hydrogen peroxide mass and the dye mass, was established in the interval 50 < F < 60. (C) 2007 Elsevier B.V. All rights reserved.
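A minimal sketch of the underlying idea, estimating a pseudo-first-order rate constant k from an absorbance decay for each mass ratio F and taking the F that maximises k as the critical point, is given below. The absorbance curves and the shape of k(F) are synthetic; in the paper they come from the trained neural model and the experimental data.

```python
# Toy version of the critical-point search: fit a pseudo-first-order constant k from
# ln A = ln A0 - k t for each H2O2-to-dye mass ratio F, then pick the F maximising k.
import numpy as np

def fit_rate_constant(t_min: np.ndarray, absorbance: np.ndarray) -> float:
    """Slope of -ln(A) versus t gives the pseudo-first-order constant k (1/min)."""
    slope, _ = np.polyfit(t_min, -np.log(absorbance), 1)
    return float(slope)

t = np.linspace(0.0, 60.0, 13)                    # reaction time, min
F_values = np.arange(20, 101, 10)                 # H2O2 mass / dye mass (hypothetical grid)

# Synthetic k(F): rises with F, then falls once excess H2O2 scavenges OH radicals.
k_true = 0.05 * np.exp(-((F_values - 55.0) / 25.0) ** 2)

best_F, best_k = None, -np.inf
for F, k in zip(F_values, k_true):
    A = np.exp(-k * t)                            # simulated decoloration curve
    k_fit = fit_rate_constant(t, A)
    if k_fit > best_k:
        best_F, best_k = F, k_fit

print(f"critical point near F = {best_F} (k ≈ {best_k:.3f} 1/min)")
```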
Abstract:
The power loss reduction in distribution systems (DSs) is a nonlinear and multiobjective problem. Service restoration in DSs is even harder computationally, since it additionally requires a solution in real time. Both DS problems are computationally complex: for large-scale networks, the usual problem formulation has thousands of constraint equations. The node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the solution of the problem simpler. On the other hand, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. The combination of the multiobjective EA with the NDE (MEAN) results in the proposed approach for solving DS problems in large-scale networks. Simulation results have shown that the MEAN is able to find adequate restoration plans for a real DS with 3860 buses and 632 switches in a running time of 0.68 s. Moreover, the MEAN has shown a sublinear running time as a function of the system size. Tests with networks ranging from 632 to 5166 switches indicate that the MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks while requiring relatively low running time.
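The node-depth encoding mentioned above stores each radial feeder as a depth-first list of (node, depth) pairs, so a subtree is a contiguous slice that can be pruned and re-grafted onto another feeder without re-checking radiality constraints equation by equation. The sketch below illustrates only this data-structure idea with toy feeders; the actual MEAN operators, subpopulation tables, and objective evaluation are not reproduced here.

```python
# Rough illustration of a node-depth style representation (not the full MEAN approach):
# each feeder is a list of (node, depth) pairs in depth-first order, and a subtree is
# the contiguous run of entries following its root with strictly greater depth.
from typing import List, Tuple

Tree = List[Tuple[int, int]]   # (node id, depth) in depth-first order

def subtree_slice(tree: Tree, root_index: int) -> slice:
    """Contiguous slice covering the subtree rooted at tree[root_index]."""
    root_depth = tree[root_index][1]
    end = root_index + 1
    while end < len(tree) and tree[end][1] > root_depth:
        end += 1
    return slice(root_index, end)

def transfer_subtree(source: Tree, dest: Tree, root_index: int, graft_index: int) -> Tuple[Tree, Tree]:
    """Move a subtree from one feeder to another, re-rooting it under dest[graft_index]."""
    span = subtree_slice(source, root_index)
    moved = source[span]
    depth_shift = dest[graft_index][1] + 1 - moved[0][1]
    moved = [(node, depth + depth_shift) for node, depth in moved]
    new_source = source[:span.start] + source[span.stop:]
    new_dest = dest[:graft_index + 1] + moved + dest[graft_index + 1:]
    return new_source, new_dest

# Two small feeders (node ids arbitrary); move the subtree rooted at node 3 onto node 6.
feeder_a: Tree = [(0, 0), (1, 1), (3, 2), (4, 3), (2, 1)]
feeder_b: Tree = [(5, 0), (6, 1)]
a, b = transfer_subtree(feeder_a, feeder_b, root_index=2, graft_index=1)
print(a)   # [(0, 0), (1, 1), (2, 1)]
print(b)   # [(5, 0), (6, 1), (3, 2), (4, 3)]
```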