907 results for Single-Blind Method
Abstract:
Uniformly distributed ZnO nanorods, 70-100 nm in diameter and 1-2 µm long, have been successfully grown at low temperature on GaN using an inexpensive aqueous solution method. The formation of the ZnO nanorods and the growth parameters are controlled by reactant concentration, temperature, and pH. No catalyst is required. XRD studies show that the ZnO nanorods are single crystals growing along the crystallographic c axis. Room-temperature photoluminescence measurements show high-intensity ultraviolet peaks at 388 nm, comparable to those found in high-quality ZnO films. A mechanism for nanorod growth in the aqueous solution is proposed. The dependence of the ZnO nanorods on the growth parameters was also investigated: as the growth temperature was varied from 60°C to 150°C, the morphology of the ZnO nanorods changed from a sharp tip (needle shape) to a flat tip (rod shape). Such structures are useful in laser and field-emission applications.
Abstract:
Uniformly distributed ZnO nanorods, 80-120 nm in diameter and 1-2 µm long, have been successfully grown at low temperature on GaN using an inexpensive aqueous solution method. The formation of the ZnO nanorods and the growth parameters are controlled by reactant concentration, temperature, and pH. No catalyst is required. XRD studies show that the ZnO nanorods are single crystals growing along the crystallographic c axis. Room-temperature photoluminescence measurements show high-intensity ultraviolet peaks at 388 nm, comparable to those found in high-quality ZnO films. A mechanism for nanorod growth in the aqueous solution is proposed. The dependence of the ZnO nanorods on the growth parameters was also investigated: as the growth temperature was varied from 60°C to 150°C, the morphology of the ZnO nanorods changed from a sharp tip with high aspect ratio to a flat tip with smaller aspect ratio. Such structures are useful in laser and field-emission applications.
Abstract:
A new approach for controlling the size of particles fabricated using the Electrohydrodynamic Atomization (EHDA) method is being developed. In short, the EHDA process produces solution droplets in a controlled manner, and as the solvent evaporates from the droplet surface, polymeric particles are formed. By varying the applied voltage, the droplet size can be changed, and consequently the particle size can also be controlled. By using both a nozzle electrode and a ring electrode placed axisymmetrically slightly above the nozzle electrode, we are able to produce a Single Taylor Cone Single Jet over a wide range of voltages; in contrast, with a single nozzle electrode alone, the range of permissible voltages for creating the Single Taylor Cone Single Jet is usually very small. Phase Doppler Particle Analyzer (PDPA) measurements show that the droplet size increases with the applied voltage, a trend predicted by the electrohydrodynamic theory of the Single Taylor Cone Single Jet based on a perfect dielectric fluid model. Particles fabricated at different voltages do not show much change in particle size, which may be attributed to the solvent evaporation process. Nevertheless, these preliminary results show that this method has the potential to provide fine control of particle size with a relatively simple setup and with trends predictable by existing theories.
Abstract:
This work presents detailed numerical calculations of the dielectrophoretic force in octupolar traps designed for single-cell trapping. A trap with eight planar electrodes is studied for spherical and ellipsoidal particles using an indirect implementation of the boundary element method (BEM). Multipolar approximations of orders one to three are compared with the full Maxwell stress tensor (MST) calculation of the electrical force on spherical particles. Ellipsoidal particles are also studied, but in their case only the dipolar approximation is available for comparison with the MST solution. The results show that the full MST calculation is only required in the study of non-spherical particles.
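For a homogeneous sphere, the dipolar (order-one) approximation compared above has the standard closed form F = 2π ε_m R³ Re[K(ω)] ∇|E|², with K(ω) the Clausius-Mossotti factor. A minimal numerical sketch of that textbook expression (the material parameters and field gradient below are illustrative, not taken from this work):

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def clausius_mossotti(eps_p, eps_m, sig_p, sig_m, omega):
    """Complex Clausius-Mossotti factor for a homogeneous sphere."""
    ep = eps_p * EPS0 - 1j * sig_p / omega   # complex permittivity, particle
    em = eps_m * EPS0 - 1j * sig_m / omega   # complex permittivity, medium
    return (ep - em) / (ep + 2 * em)

def dep_force(R, eps_m, re_cm, grad_E2):
    """Time-averaged dipolar DEP force magnitude on a sphere of radius R (N)."""
    return 2 * np.pi * eps_m * EPS0 * R**3 * re_cm * grad_E2

# illustrative values: a conductive cell-like particle in aqueous medium at 1 MHz
K = clausius_mossotti(eps_p=60, eps_m=78, sig_p=0.5, sig_m=0.01,
                      omega=2 * np.pi * 1e6)
F = dep_force(R=5e-6, eps_m=78, re_cm=K.real, grad_E2=1e13)  # grad|E|^2 in V^2/m^3
```

Here Re[K] > 0 (the particle is more polarizable than the medium at this frequency), so the sketch predicts positive DEP, i.e. attraction toward high-field regions; the MST calculation discussed in the abstract is what this approximation is benchmarked against.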
Abstract:
Omnidirectional cameras offer a much wider field of view than perspective cameras and alleviate problems due to occlusions. However, both types of camera lack depth perception. A practical method for obtaining depth in computer vision is to project a known structured light pattern on the scene, avoiding the problems and costs involved in stereo vision. This paper focuses on combining omnidirectional vision and structured light to provide 3D information about the scene. The resulting sensor is formed by a single catadioptric camera and an omnidirectional light projector. How this sensor can be used in robot navigation applications is also discussed.
Abstract:
A simple and promising oxide-assisted, catalyst-free method is used to prepare silicon nitride nanowires in high yield in a short time. After a brief review of the state of the art, we reveal the crucial role played by the oxygen partial pressure: when the oxygen partial pressure is slightly below the threshold of passive oxidation, a high yield is obtained while the formation of any silica layer covering the nanowires is inhibited, and the nanowire dimensions can be controlled through the synthesis temperature.
Abstract:
A select-divide-and-conquer variational method to approximate configuration interaction (CI) is presented. Given an orthonormal set made up of occupied orbitals (Hartree-Fock or similar) and suitable correlation orbitals (natural or localized orbitals), a large N-electron target space S is split into subspaces S0, S1, S2, ..., SR. S0, of dimension d0, contains all configurations K with attributes (energy contributions, etc.) above thresholds T0 (one threshold per attribute); the CI coefficients in S0 remain free to vary throughout. S1 accommodates configurations K with attributes above thresholds T1 ≤ T0. An eigenproblem of dimension d0+d1 for S0+S1 is solved first, after which the last d1 rows and columns are contracted into a single row and column, thus freezing the last d1 CI coefficients hereinafter. The process is repeated with successive Sj (j ≥ 2) chosen so that the corresponding CI matrices fit in random access memory (RAM). Davidson's eigensolver is used R times. The final energy eigenvalue (lowest or excited) is always above the corresponding exact eigenvalue in S. The threshold values {Tj; j = 0, 1, 2, ..., R} regulate accuracy; for large-dimensional S, high accuracy requires S0+S1 to be solved outside RAM. From there on, however, usually only a few Davidson iterations in RAM are needed for each step, so that Hamiltonian matrix-element evaluation becomes rate determining. One µhartree accuracy is achieved for an eigenproblem of order 24 × 10^6, involving 1.2 × 10^12 nonzero matrix elements and 8.4 × 10^9 Slater determinants.
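The contract-and-freeze cycle described in this abstract can be illustrated on a small dense symmetric matrix. The sketch below is ours, not the authors' code (a real CI program would use Davidson's sparse eigensolver rather than dense diagonalization), but it reproduces the key property that the final eigenvalue is a variational upper bound:

```python
import numpy as np

def select_divide_conquer(H, d0, chunk):
    """Variational divide-and-conquer sketch: the d0 'S0' coefficients stay
    free; each later block Sj is solved once and then contracted into a
    single frozen column whose internal coefficients never change again."""
    n = H.shape[0]
    B = np.eye(n)[:, :d0]                    # free S0 directions
    start = d0
    while start < n:
        end = min(start + chunk, n)
        Bj = np.hstack([B, np.eye(n)[:, start:end]])   # add next block Sj
        vals, vecs = np.linalg.eigh(Bj.T @ H @ Bj)
        v = vecs[:, 0]                                  # lowest eigenvector
        # contract the block's d_j coefficients into one frozen column
        frozen = np.eye(n)[:, start:end] @ v[B.shape[1]:]
        norm = np.linalg.norm(frozen)
        if norm > 1e-12:
            B = np.hstack([B, frozen[:, None] / norm])
        start = end
    return np.linalg.eigh(B.T @ H @ B)[0][0]

rng = np.random.default_rng(1)
A = rng.standard_normal((12, 12))
H = (A + A.T) / 2                   # toy 12x12 "CI matrix"
E_approx = select_divide_conquer(H, d0=4, chunk=3)
E_exact = np.linalg.eigh(H)[0][0]
```

Because the final basis B spans only a subspace of the full space, the Rayleigh-Ritz minimum over it can never fall below the exact lowest eigenvalue, which is the upper-bound property the abstract states; if a single chunk covers all remaining configurations, the result is exact.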
Abstract:
Bayesian inference has been used to determine rigorous estimates of hydroxyl radical concentrations ([OH]) and air mass dilution rates (K), averaged following air masses between linked observations of nonmethane hydrocarbons (NMHCs) spanning the North Atlantic during the Intercontinental Transport and Chemical Transformation (ITCT)-Lagrangian-2K4 experiment. The Bayesian technique obtains a refined (posterior) distribution of a parameter given data related to the parameter through a model and prior beliefs about the parameter distribution. Here, the model describes hydrocarbon loss through OH reaction and mixing with a background concentration at rate K. The Lagrangian experiment provides direct observations of hydrocarbons at two time points, removing assumptions regarding composition or sources upstream of a single observation. The estimates are sharpened by using many hydrocarbons with different reactivities and by accounting for their variability and measurement uncertainty. A novel technique is used to construct prior background distributions of many species, described by the variation of a single parameter; this exploits the high correlation of the species, related through the first principal component of many NMHC samples. The Bayesian method obtains posterior estimates of [OH] and K following each air mass. Median [OH] values are typically between 0.5 and 2.0 × 10^6 molecules cm^-3, but are elevated to between 2.5 and 3.5 × 10^6 molecules cm^-3 in low-level pollution. A comparison of estimates from absolute NMHC concentrations and from NMHC ratios assuming zero background (the "photochemical clock" method) shows similar distributions but reveals a systematic high bias in the estimates from ratios. Estimates of K are ~0.1 day^-1 but show more sensitivity to the assumed prior distribution.
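The two-point model described here (hydrocarbon loss by OH reaction plus mixing toward a background at rate K) is simple enough to sketch as a grid posterior with flat priors. All rate constants, concentrations, and times below are invented for illustration and are not the ITCT data:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical OH rate constants for four NMHCs (cm^3 molecule^-1 s^-1)
k = np.array([2.5e-13, 8.0e-13, 2.3e-12, 6.3e-12])
C0 = np.array([800.0, 400.0, 200.0, 100.0])   # upstream mixing ratios (pptv)
Cb = 0.1 * C0                                 # assumed known backgrounds
t = 2 * 86400.0                               # 2 days between observations

# "true" values used to synthesize the downstream observation
OH_true, K_true = 1.5e6, 0.1 / 86400.0        # molecules cm^-3, s^-1
C1 = Cb + (C0 - Cb) * np.exp(-(k * OH_true + K_true) * t)
sigma = 0.005 * C1                            # measurement uncertainty
C1_obs = C1 + sigma * rng.standard_normal(4)

# grid posterior over (OH, K) with flat priors
OH = np.linspace(0.0, 3.0e6, 121)
K = np.linspace(0.0, 0.3, 61) / 86400.0
logpost = np.zeros((OH.size, K.size))
for i, oh in enumerate(OH):
    for j, kk in enumerate(K):
        model = Cb + (C0 - Cb) * np.exp(-(k * oh + kk) * t)
        logpost[i, j] = -0.5 * np.sum(((C1_obs - model) / sigma) ** 2)
i_map, j_map = np.unravel_index(np.argmax(logpost), logpost.shape)
OH_map, K_map = OH[i_map], K[j_map]
```

Using several hydrocarbons with different rate constants is what separates OH from K here: each species constrains the combined decay rate k_i[OH] + K, and the spread of k_i resolves slope from intercept, mirroring the sharpening described in the abstract.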
Abstract:
We discuss and test the potential usefulness of single-column models (SCMs) for the testing of stochastic physics schemes that have been proposed for use in general circulation models (GCMs). We argue that although single-column tests cannot be definitive in exposing the full behaviour of a stochastic method in the full GCM, and although there are differences between SCM testing of deterministic and stochastic methods, SCM testing nonetheless remains a useful tool. It is necessary to consider an ensemble of SCM runs produced by the stochastic method. These can be usefully compared to deterministic ensembles describing initial condition uncertainty and also to combinations of these (with structural model changes) into poor man's ensembles. The proposed methodology is demonstrated using an SCM experiment recently developed by the GCSS community, simulating the transitions between active and suppressed periods of tropical convection.
Abstract:
We discuss and test the potential usefulness of single-column models (SCMs) for the testing of stochastic physics schemes that have been proposed for use in general circulation models (GCMs). We argue that although single column tests cannot be definitive in exposing the full behaviour of a stochastic method in the full GCM, and although there are differences between SCM testing of deterministic and stochastic methods, SCM testing remains a useful tool. It is necessary to consider an ensemble of SCM runs produced by the stochastic method. These can be usefully compared to deterministic ensembles describing initial condition uncertainty and also to combinations of these (with structural model changes) into poor man's ensembles. The proposed methodology is demonstrated using an SCM experiment recently developed by the GCSS (GEWEX Cloud System Study) community, simulating transitions between active and suppressed periods of tropical convection.
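As a toy illustration of the ensemble comparison advocated here, one can stand in for the SCM with a single relaxation equation and compare a stochastically perturbed-tendency ensemble against a deterministic initial-condition ensemble. Everything below is a schematic surrogate of our invention, not an SCM or any scheme from this work:

```python
import numpy as np

rng = np.random.default_rng(42)

def run_member(x0, noise_amp, rng, steps=200, dt=0.1, tau=5.0, forcing=1.0):
    """Toy stand-in for a single-column run: relaxation toward a forced
    equilibrium, with an optional multiplicative perturbation of the
    physics tendency (the stochastic-physics analogue)."""
    x = x0
    for _ in range(steps):
        tend = forcing - x / tau
        tend *= 1.0 + noise_amp * rng.standard_normal()
        x += dt * tend
    return x

# ensemble produced by the stochastic method (identical initial condition)
ens_sp = np.array([run_member(0.0, 0.3, rng) for _ in range(50)])
# deterministic ensemble describing initial-condition uncertainty
ens_ic = np.array([run_member(rng.normal(0.0, 0.1), 0.0, rng) for _ in range(50)])
spread_sp, spread_ic = ens_sp.std(), ens_ic.std()
```

In this damped toy system the initial-condition spread decays away while the tendency perturbations keep injecting spread along the trajectory, so the two ensembles differ in exactly the way that makes the comparison informative.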
Abstract:
Many recent inverse scattering techniques have been designed for single-frequency scattered fields in the frequency domain. In practice, however, the data are collected in the time domain. Frequency domain inverse scattering algorithms obviously apply to time-harmonic scattering, or nearly time-harmonic scattering, through application of the Fourier transform. Fourier transform techniques can also be applied to non-time-harmonic scattering from pulses. Our goal here is twofold: first, to establish conditions on the time-dependent waves that provide a correspondence between time domain and frequency domain inverse scattering via Fourier transforms, without recourse to the conventional limiting amplitude principle; second, to apply the analysis of the first part of this work to the extension of a particular scattering technique, namely the point source method, to scattering from the requisite pulses. Numerical examples illustrate the method and suggest that reconstructions from admissible pulses are superior to those obtained by straight averaging of multi-frequency data. Copyright (C) 2006 John Wiley & Sons, Ltd.
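The time-to-frequency correspondence can be demonstrated numerically: a band-limited pulse supplies usable single-frequency data at every frequency where its spectrum is bounded away from zero. The signal parameters below are arbitrary:

```python
import numpy as np

fs, N = 1000.0, 1024                 # sampling rate (Hz), number of samples
t = np.arange(N) / fs
f0 = 100.0                           # carrier frequency of the pulse (Hz)
# Gaussian-modulated pulse: nearly time-harmonic near f0, compact in time
pulse = np.exp(-((t - 0.5) ** 2) / (2 * 0.05 ** 2)) * np.cos(2 * np.pi * f0 * t)

spec = np.fft.rfft(pulse)            # time-domain data -> frequency domain
freqs = np.fft.rfftfreq(N, 1 / fs)
f_peak = freqs[np.argmax(np.abs(spec))]
# "admissible" band: frequencies where |spectrum| exceeds 10% of its peak,
# i.e. where a frequency-domain inversion would be well conditioned
band = freqs[np.abs(spec) > 0.1 * np.abs(spec).max()]
```

The spectrum is concentrated in a band around the carrier; outside that band division by the pulse spectrum is ill-conditioned, which is the practical content of the admissibility conditions the abstract refers to.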
Abstract:
There is increasing interest in combining Phases II and III of clinical development into a single trial in which one of a small number of competing experimental treatments is ultimately selected and where a valid comparison is made between this treatment and the control treatment. Such a trial usually proceeds in stages, with the least promising experimental treatments dropped as soon as possible. In this paper we present a highly flexible design that uses adaptive group sequential methodology to monitor an order statistic. By using this approach, it is possible to design a trial which can have any number of stages, begins with any number of experimental treatments, and permits any number of these to continue at any stage. The test statistic used is based upon efficient scores, so the method can be easily applied to binary, ordinal, failure time, or normally distributed outcomes. The method is illustrated with an example, and simulations are conducted to investigate its type I error rate and power under a range of scenarios.
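A toy Monte Carlo sketch (using plain normal means rather than the efficient-score statistics of the paper) of why the selection stage must be accounted for: picking the best of several arms at an interim look and then testing it naively inflates the type I error relative to a multiplicity-adjusted test.

```python
import numpy as np

rng = np.random.default_rng(7)
K, n1, n2, sims = 3, 50, 50, 20000
z_naive = 1.960          # one-sided 2.5% critical value, no adjustment
z_bonf = 2.394           # one-sided 2.5%/3 (Bonferroni over K arms)

# global null: all experimental arms and control have mean 0, unit variance
stage1 = rng.standard_normal((sims, K)) / np.sqrt(n1)   # stage-1 arm means
ctrl1 = rng.standard_normal(sims) / np.sqrt(n1)
sel = np.argmax(stage1, axis=1)                         # select best arm
stage2 = rng.standard_normal(sims) / np.sqrt(n2)        # stage-2, selected arm
ctrl2 = rng.standard_normal(sims) / np.sqrt(n2)

# combine stages and test selected arm vs control, ignoring the selection
m_sel = (n1 * stage1[np.arange(sims), sel] + n2 * stage2) / (n1 + n2)
m_ctl = (n1 * ctrl1 + n2 * ctrl2) / (n1 + n2)
z = (m_sel - m_ctl) / np.sqrt(2.0 / (n1 + n2))
rate_naive = np.mean(z > z_naive)
rate_bonf = np.mean(z > z_bonf)
```

The selected arm's stage-1 mean is a maximum of K draws and hence biased upward under the null, so `rate_naive` exceeds the nominal 2.5%; controlling this inflation exactly, for any number of stages and arms, is what the group sequential machinery in the paper provides.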
Abstract:
We have developed a new simple method for transport, storage, and analysis of genetic material from the corals Agaricia agaricites, Dendrogyra cylindrica, Eusmilia ancora, Meandrina meandrites, Montastrea annularis, Porites astreoides, Porites furcata, Porites porites, and Siderastrea siderea at room temperature. All species yielded sufficient DNA from a single FTA® card (19 µg-43 ng) for subsequent PCR amplification of both coral and zooxanthellar DNA. The D1 and D2 variable regions of the large subunit rRNA gene (LSU rDNA) were amplified from the DNA of P. furcata and S. siderea by PCR. Electrophoresis yielded two major DNA bands: an 800-base pair (bp) DNA, representing the coral ribosomal RNA (rRNA) gene, and a 600-bp DNA, representing the zooxanthellar rRNA gene. Extraction of DNA from the bands yielded between 290 µg total DNA (S. siderea coral DNA) and 9 µg total DNA (P. furcata zooxanthellar DNA). The ability to transport and store genetic material from scleractinian corals without resort to laboratory facilities in the field allows for the molecular study of a far wider range and variety of coral sites than has been studied to date. (C) 2003 Elsevier Science B.V. All rights reserved.
Abstract:
A method is described for the analysis of deuterated and undeuterated alpha-tocopherol in blood components using liquid chromatography coupled to an orthogonal acceleration time-of-flight (TOF) mass spectrometer. Optimal ionisation conditions for undeuterated (d0) and tri- and hexadeuterated (d3 or d6) alpha-tocopherol standards were found with negative ion mode electrospray ionisation. Each species produced an isotopically resolved single ion of exact mass. Calibration curves of pure standards were linear in the range tested (0-1.5 µM, 0-15 pmol injected). For quantification of d0 and d6 in blood components following a standard solvent extraction, a stable-isotope-labelled internal standard (d3-alpha-tocopherol) was employed. To counter matrix ion suppression effects, standard response curves were generated following solvent extraction procedures identical to those used for the samples. Within-day and between-day precision were determined for quantification of d0- and d6-labelled alpha-tocopherol in each blood component, and both averaged 3-10%. Accuracy was assessed by comparison with a standard high-performance liquid chromatography (HPLC) method, achieving good correlation (r^2 = 0.94), and by spiking with known concentrations of alpha-tocopherol (98% accuracy). Limits of detection and quantification were determined to be 5 and 50 fmol injected, respectively. The assay was used to measure the appearance and disappearance of deuterium-labelled alpha-tocopherol in human blood components following ingestion of deuterium-labelled (d6) RRR-alpha-tocopheryl acetate. The new LC/TOFMS method was found to be sensitive, reproducible, and robust; it required small sample volumes and was capable of high throughput when large numbers of samples were generated. Copyright (C) 2003 John Wiley & Sons, Ltd.
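The internal-standard quantification step described here reduces to a ratio calibration. A generic, noiseless sketch (all peak-area ratios and concentrations are invented, not values from this assay):

```python
import numpy as np

# calibration: known d0-analyte amounts (pmol injected), each spiked with a
# fixed amount of the d3 internal standard
c_is = 5.0                                    # pmol of d3 internal standard
c_cal = np.array([0.5, 1.0, 2.0, 5.0, 10.0])  # pmol of d0 analyte
resp = 1.07                                   # hypothetical relative response
area_ratio = resp * c_cal / c_is              # simulated measured A(d0)/A(d3)

# fit area ratio vs concentration ratio through the origin (least squares);
# running standards through the same extraction corrects matrix suppression
slope = np.sum(area_ratio * (c_cal / c_is)) / np.sum((c_cal / c_is) ** 2)

# quantify an unknown sample from its observed area ratio
unknown_ratio = 0.642
c_unknown = unknown_ratio / slope * c_is
```

Because analyte and internal standard co-elute and suffer the same ion suppression, the area ratio is insensitive to matrix effects, which is why the assay spikes d3-alpha-tocopherol into every extract.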
Abstract:
Several pixel-based people counting methods have been developed over the years. Among these, the product of scale-weighted pixel sums and a linear correlation coefficient is a popular people counting approach. However, most approaches have paid little attention to resolving the true background and instead take all foreground pixels into account. With large crowds moving at varying speeds, and with the presence of other moving objects such as vehicles, this approach is prone to problems. In this paper we present a method which concentrates on determining the true foreground, i.e. human-image pixels only. To do this we have proposed, implemented, and comparatively evaluated a human detection layer to make people counting more robust in the presence of noise and a lack of empty background sequences. We show the effect of combining human detection with a pixel-map based algorithm to i) count only human-classified pixels and ii) prevent foreground pixels belonging to humans from being absorbed into the background model. We evaluate the performance of this approach on the PETS 2009 dataset using various configurations of the proposed methods. Our evaluation demonstrates that the basic benchmark method we implemented can achieve an accuracy of up to 87% on sequence "S1.L1 13-57 View 001", and that our proposed approach can achieve up to 82% on sequence "S1.L3 14-33 View 001", where the crowd stops and the benchmark accuracy falls to 64%.
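The scale-weighted pixel-sum approach mentioned at the start can be sketched generically: weight each foreground pixel by a per-row perspective factor and fit a linear map from the weighted sum to the ground-truth count. The synthetic masks below stand in for real segmentation output; the weights and blob sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
H, W = 120, 160
# perspective weight: people near the top of the frame appear smaller,
# so their pixels are weighted more heavily (linear ramp as a stand-in)
row_w = np.linspace(2.0, 1.0, H)[:, None]

def weighted_sum(mask):
    """Scale-weighted foreground pixel sum for one binary mask."""
    return float((mask * row_w).sum())

# synthetic training data: frames containing a known number of "people",
# each person a fixed-size rectangular blob at a random position
sums, counts = [], []
for n in range(1, 21):
    mask = np.zeros((H, W))
    for _ in range(n):
        r, c = rng.integers(0, H - 10), rng.integers(0, W - 6)
        mask[r:r + 10, c:c + 6] = 1.0
    sums.append(weighted_sum(mask))
    counts.append(n)

# linear correlation: count ~ a * weighted_sum + b
a, b = np.polyfit(sums, counts, 1)
pred = a * sums[9] + b   # estimate for the 10-person frame
```

A human-classification layer, as proposed in the paper, would simply zero out non-human foreground pixels in `mask` before `weighted_sum`, so vehicles and noise never enter the regression.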