101 results for Pencil Beam Convolution Algorithm


Relevance:

20.00%

Publisher:

Abstract:

This work develops a method for solving ordinary differential equations, that is, initial-value problems, with solutions approximated by Legendre polynomials. An iterative procedure for adjusting the polynomial coefficients is developed, based on a genetic algorithm. The procedure is applied to several examples, comparing its results with the best polynomial fit whenever numerical solutions by the traditional Runge-Kutta or Adams methods are available. The resulting algorithm provides reliable solutions even when such numerical solutions are not available, that is, when the mass matrix is singular or the numerical integration becomes unstable.
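
A minimal sketch of the idea described above (not the authors' implementation): approximate the solution of an initial-value problem by a truncated Legendre series and let a simple genetic algorithm adjust the coefficients so that the ODE residual and the initial-condition mismatch are small. The test equation, GA operators, and parameter values below are illustrative choices.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Illustrative IVP: y' = -2*y, y(0) = 1 on [0, 1]; exact solution exp(-2*x).
xs = np.linspace(0.0, 1.0, 101)
N_COEF, POP, GENS = 6, 200, 400
rng = np.random.default_rng(0)

def residual(c):
    """Penalty = mean squared ODE residual on a grid + initial-condition mismatch."""
    t = 2.0 * xs - 1.0                          # map [0, 1] to Legendre's domain [-1, 1]
    y = L.legval(t, c)
    dy = 2.0 * L.legval(t, L.legder(c))         # chain rule: d/dx = 2 d/dt
    ode = dy + 2.0 * y                          # y' + 2y = 0
    ic = y[0] - 1.0                             # y(0) = 1
    return np.mean(ode**2) + ic**2

pop = rng.normal(0.0, 1.0, size=(POP, N_COEF))
for _ in range(GENS):
    fit = np.array([residual(c) for c in pop])
    order = np.argsort(fit)
    elite = pop[order[: POP // 4]]              # keep the best quarter
    # Offspring: uniform crossover between random elite parents + Gaussian mutation.
    pa = elite[rng.integers(len(elite), size=POP)]
    pb = elite[rng.integers(len(elite), size=POP)]
    mask = rng.random((POP, N_COEF)) < 0.5
    pop = np.where(mask, pa, pb) + rng.normal(0.0, 0.05, size=(POP, N_COEF))
    pop[0] = elite[0]                           # elitism

best = pop[0]
print("max abs error vs exp(-2x):",
      np.max(np.abs(L.legval(2 * xs - 1, best) - np.exp(-2 * xs))))
```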

Relevance:

20.00%

Publisher:

Abstract:

We present measurements of net charge fluctuations in Au+Au collisions at √s_NN = 19.6, 62.4, 130, and 200 GeV, Cu+Cu collisions at √s_NN = 62.4 and 200 GeV, and p+p collisions at √s = 200 GeV using the dynamical net charge fluctuation measure ν_{+-,dyn}. We observe that the dynamical fluctuations are nonzero at all energies and exhibit a modest dependence on beam energy. A weak system size dependence is also observed. We examine the collision centrality dependence of the net charge fluctuations and find that dynamical net charge fluctuations violate 1/N_ch scaling but display approximate 1/N_part scaling. We also study the azimuthal and rapidity dependence of the net charge correlation strength and observe strong dependence on the azimuthal angular range and pseudorapidity widths integrated to measure the correlation.
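
For context, the dynamical fluctuation measure ν_{+-,dyn} used above is commonly defined in terms of event-averaged positive and negative multiplicities N_+ and N_- (standard definition, restated here, not quoted from this abstract):

```latex
\nu_{+-,\mathrm{dyn}} \;=\;
\frac{\langle N_{+}(N_{+}-1)\rangle}{\langle N_{+}\rangle^{2}}
+ \frac{\langle N_{-}(N_{-}-1)\rangle}{\langle N_{-}\rangle^{2}}
- 2\,\frac{\langle N_{+}N_{-}\rangle}{\langle N_{+}\rangle\langle N_{-}\rangle}
```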

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a new statistical algorithm to estimate rainfall over the Amazon Basin region using the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The algorithm relies on empirical relationships derived for different raining-type systems between coincident measurements of surface rainfall rate and 85-GHz polarization-corrected brightness temperature as observed by the precipitation radar (PR) and TMI on board the TRMM satellite. The scheme includes rain/no-rain area delineation (screening) and system-type classification routines for rain retrieval. The algorithm is validated against independent measurements of the TRMM-PR and S-band dual-polarization Doppler radar (S-Pol) surface rainfall data for two different periods. Moreover, the performance of this rainfall estimation technique is evaluated against well-known methods, namely, the TRMM-2A12 [the Goddard profiling algorithm (GPROF)], the Goddard scattering algorithm (GSCAT), and the National Environmental Satellite, Data, and Information Service (NESDIS) algorithms. The proposed algorithm shows a normalized bias of approximately 23% for both PR and S-Pol ground truth datasets and a mean error of 0.244 mm h⁻¹ (PR) and -0.157 mm h⁻¹ (S-Pol). For rain volume estimates using PR as reference, a correlation coefficient of 0.939 and a normalized bias of 0.039 were found. With respect to rainfall distributions and rain area comparisons, the results showed that the formulation proposed is efficient and compatible with the physics and dynamics of the observed systems over the area of interest. The performance of the other algorithms showed that GSCAT presented low normalized bias for rain areas and rain volume [0.346 (PR) and 0.361 (S-Pol)], and GPROF showed rainfall distribution similar to that of the PR and S-Pol but with a bimodal distribution. Last, the five algorithms were evaluated during the TRMM-Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) 1999 field campaign to verify the precipitation characteristics observed during the easterly and westerly Amazon wind flow regimes. The proposed algorithm presented a cumulative rainfall distribution similar to the observations during the easterly regime, but it underestimated for the westerly period for rainfall rates above 5 mm h⁻¹. NESDIS(1) overestimated for both wind regimes but presented the best westerly representation. NESDIS(2), GSCAT, and GPROF underestimated in both regimes, but GPROF was closer to the observations during the easterly flow.
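
The retrieval chain described above can be pictured as a two-step pipeline: screen rain/no-rain pixels from the 85-GHz polarization-corrected temperature (PCT) and then apply a class-dependent empirical rain-rate relation. The sketch below uses one common PCT form (Spencer et al.); the thresholds, class boundaries, and regression coefficients are placeholders, not the values derived in the paper.

```python
import numpy as np

def pct85(tb_v, tb_h):
    """85-GHz polarization-corrected temperature (one common form)."""
    return 1.818 * tb_v - 0.818 * tb_h

def classify(pct):
    """Placeholder system-type classes based on how depressed the PCT is."""
    if pct > 250.0:          # warm PCT: no rain
        return "no_rain"
    if pct > 225.0:
        return "stratiform"
    return "convective"

# Placeholder empirical fits: rain rate (mm/h) as a linear function of PCT per class.
COEFFS = {"stratiform": (0.08, 250.0), "convective": (0.25, 250.0)}

def rain_rate(tb_v, tb_h):
    pct = pct85(tb_v, tb_h)
    cls = classify(pct)
    if cls == "no_rain":
        return 0.0
    a, t0 = COEFFS[cls]
    return max(0.0, a * (t0 - pct))   # colder PCT -> stronger ice scattering -> more rain

print(rain_rate(230.0, 215.0))        # example pixel, ~0.6 mm/h with these placeholders
```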

Relevance:

20.00%

Publisher:

Abstract:

Context. B[e] supergiants are luminous, massive post-main sequence stars exhibiting non-spherical winds, forbidden lines, and hot dust in a disc-like structure. The physical properties of their rich and complex circumstellar environment (CSE) are not well understood, partly because these CSEs cannot be easily resolved at the large distances found for B[e] supergiants (typically ≳ 1 kpc). Aims. From mid-IR spectro-interferometric observations obtained with VLTI/MIDI we seek to resolve and study the CSE of the Galactic B[e] supergiant CPD-57° 2874. Methods. For a physical interpretation of the observables (visibilities and spectrum) we use our ray-tracing radiative transfer code (FRACS), which is optimised for thermal spectro-interferometric observations. Results. Thanks to the short computing time required by FRACS (<10 s per monochromatic model), best-fit parameters and uncertainties for several physical quantities of CPD-57° 2874 were obtained, such as the inner dust radius, the relative flux contributions of the central source and of the dusty CSE, the dust temperature profile, and the disc inclination. Conclusions. The analysis of VLTI/MIDI data with FRACS allowed one of the first direct determinations of physical parameters of the dusty CSE of a B[e] supergiant based on interferometric data and using a full model-fitting approach. In a larger context, the study of B[e] supergiants is important for a deeper understanding of the complex structure and evolution of hot, massive stars.

Relevance:

20.00%

Publisher:

Abstract:

Objective: The aim of this study was to assess by atomic force microscopy (AFM) the effect of Er,Cr:YSGG laser application on the surface microtopography of radicular dentin. Background: Lasers have been used for various purposes in dentistry, where they are clinically effective when used in an appropriate manner. The Er,Cr:YSGG laser can be used for caries prevention when settings are below the ablation threshold. Materials and Methods: Four specimens of bovine dentin were irradiated using an Er,Cr:YSGG laser (λ = 2.78 μm), at a repetition rate of 20 Hz, with a 750-μm-diameter sapphire tip and an energy density of 2.8 J/cm² (12.5 mJ/pulse). After irradiation, surface topography was analyzed by AFM using a Si probe in tapping mode. Quantitative and qualitative information concerning the arithmetic average roughness (Ra) and power spectral density analyses was obtained from central, intermediate, and peripheral areas of laser pulses and compared with data from nonirradiated samples. Results: Dentin Ra values for the different areas were as follows: central, 261.26 (±21.65) nm; intermediate, 83.48 (±6.34) nm; peripheral, 45.8 (±13.47) nm; and nonirradiated, 35.18 (±2.9) nm. The central region of laser pulses presented higher ablation of intertubular dentin, with heights of about 340-760 nm, whereas the intermediate, peripheral, and nonirradiated regions presented no difference in height between peritubular and intertubular dentin. Conclusion: According to these results, we can assume that even when used at a low energy density, the Er,Cr:YSGG laser can significantly alter the microtopography of radicular dentin, which is an important characteristic to be considered when the laser is used for clinical applications.

Relevance:

20.00%

Publisher:

Abstract:

This Letter reports new results from the MINOS experiment based on a two-year exposure to muon neutrinos from the Fermilab NuMI beam. Our data are consistent with quantum-mechanical oscillations of neutrino flavor with mass splitting |Δm²| = (2.43 ± 0.13) × 10⁻³ eV² (68% C.L.) and mixing angle sin²(2θ) > 0.90 (90% C.L.). Our data disfavor two alternative explanations for the disappearance of neutrinos in flight: namely, neutrino decays into lighter particles and quantum decoherence of neutrinos, at the 3.7 and 5.7 standard-deviation levels, respectively.
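
The disappearance fit quoted above uses the standard two-flavor survival probability (the generic textbook form, restated here for context, not taken from this Letter), with Δm² in eV², the baseline L in km, and the neutrino energy E in GeV:

```latex
P(\nu_\mu \to \nu_\mu) \;=\; 1 - \sin^{2}(2\theta)\,
\sin^{2}\!\left(\frac{1.27\,\Delta m^{2}\,[\mathrm{eV}^{2}]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right)
```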

Relevance:

20.00%

Publisher:

Abstract:

We report on the event structure and double helicity asymmetry (A_LL) of jet production in longitudinally polarized p+p collisions at √s = 200 GeV. Photons and charged particles were measured by the PHENIX experiment at midrapidity |η| < 0.35 with the requirement of a high-momentum (> 2 GeV/c) photon in the event. Event structure, such as multiplicity, p_T density, and thrust in the PHENIX acceptance, was measured and compared with the results from the PYTHIA event generator and the GEANT detector simulation. The shape of jets and the underlying event were well reproduced at this collision energy. For the measurement of jet A_LL, photons and charged particles were clustered with a seed-cone algorithm to obtain the cluster p_T sum (p_T^reco). The effect of detector response and the underlying event on p_T^reco was evaluated with the simulation. The production rate of reconstructed jets is satisfactorily reproduced with the next-to-leading-order perturbative quantum chromodynamics jet production cross section. For 4 < p_T^reco < 12 GeV/c with an average beam polarization of ⟨P⟩ = 49%, we measured A_LL = -0.0014 ± 0.0037(stat) in the lowest p_T^reco bin (4-5 GeV/c) and -0.0181 ± 0.0282(stat) in the highest p_T^reco bin (10-12 GeV/c), with a beam polarization scale error of 9.4% and a p_T scale error of 10%. Jets in the measured p_T^reco range arise primarily from hard-scattered gluons with momentum fraction 0.02 < x < 0.3 according to PYTHIA. The measured A_LL is compared with predictions that assume various ΔG(x) distributions based on the Glück-Reya-Stratmann-Vogelsang parameterization. The present result imposes the limit -a.1 < ∫_{0.02}^{0.3} dx ΔG(x, μ² = 1 GeV²) < 0.4 at the 95% confidence level, or ∫_{0.002}^{0.3} dx ΔG(x, μ² = 1 GeV²) < 0.5 at the 99% confidence level.
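
For reference, the double helicity asymmetry is conventionally defined from cross sections for same- and opposite-helicity collisions and estimated from yields N, beam polarizations P_B and P_Y, and the relative luminosity R (a standard definition and estimator, restated here, not quoted from the abstract):

```latex
A_{LL} \;=\; \frac{\sigma_{++}-\sigma_{+-}}{\sigma_{++}+\sigma_{+-}}
\;\approx\; \frac{1}{|P_{B}P_{Y}|}\,
\frac{N_{++}-R\,N_{+-}}{N_{++}+R\,N_{+-}},
\qquad R = \frac{L_{++}}{L_{+-}}
```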

Relevance:

20.00%

Publisher:

Abstract:

Very low intensity and phase fluctuations are present in a bright light field such as a laser beam. These subtle quantum fluctuations may be used to encode quantum information. Although intensity is easily measured with common photodetectors, accessing the phase information requires interference experiments. We introduce one such technique, the rotation of the noise ellipse of light, which employs an optical cavity to achieve the conversion of phase to intensity fluctuations. We describe the quantum noise of light and how it can be manipulated by employing an optical resonance technique and compare it to similar techniques, such as Pound-Drever-Hall laser stabilization and homodyne detection. (c) 2008 American Association of Physics Teachers.

Relevance:

20.00%

Publisher:

Abstract:

In this work, pyrolysis-molecular beam mass spectrometry analysis coupled with principal components analysis and ¹³C-labeled tetramethylammonium hydroxide thermochemolysis were used to study lignin oxidation, depolymerization, and demethylation of spruce wood treated by biomimetic oxidative systems. Neat Fenton and chelator-mediated Fenton reaction (CMFR) systems as well as cellulosic enzyme treatments were used to mimic the nonenzymatic process involved in wood brown-rot biodegradation. The results suggest that compared with enzymatic processes, Fenton-based treatment more readily opens the structure of the lignocellulosic matrix, freeing cellulose fibrils from the matrix. The results demonstrate that, under the current treatment conditions, Fenton and CMFR treatment cause limited demethoxylation of lignin in the insoluble wood residue. However, analysis of a water-extractable fraction revealed considerable soluble lignin residue structures that had undergone side chain oxidation as well as demethoxylation upon CMFR treatment. This research has implications for our understanding of nonenzymatic degradation of wood and the diffusion of CMFR agents in the wood cell wall during fungal degradation processes.

Relevance:

20.00%

Publisher:

Abstract:

The power loss reduction in distribution systems (DSs) is a nonlinear and multiobjective problem. Service restoration in DSs is computationally even harder, since it additionally requires a solution in real time. Both DS problems are computationally complex: for large-scale networks, the usual problem formulation has thousands of constraint equations. The node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the problem solution simpler. On the other hand, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. The combination of the multiobjective EA with the NDE (MEAN) results in the proposed approach for solving DS problems on large-scale networks. Simulation results have shown that the MEAN is able to find adequate restoration plans for a real DS with 3860 buses and 632 switches in a running time of 0.68 s. Moreover, the MEAN has shown a sublinear running time as a function of the system size. Tests with networks ranging from 632 to 5166 switches indicate that the MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks, while requiring relatively little running time.
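
A minimal illustration of the node-depth idea (a simplified reading of the encoding, not the authors' full operators): a radial configuration is stored as a list of (node, depth) pairs in traversal order, so every such list describes a valid tree and the usual radiality constraint equations need not be written explicitly. Node names below are placeholders.

```python
# Node-depth list for a small feeder rooted at the substation "S":
# each entry is (node, depth), written in depth-first order.
nde = [("S", 0), ("A", 1), ("B", 2), ("C", 2), ("D", 1)]

def children(nde, parent_index):
    """Nodes immediately below nde[parent_index] in the encoded tree."""
    _, d = nde[parent_index]
    kids = []
    for node, depth in nde[parent_index + 1:]:
        if depth <= d:          # we have left the parent's subtree
            break
        if depth == d + 1:      # direct child
            kids.append(node)
    return kids

print(children(nde, 0))   # ['A', 'D']
print(children(nde, 1))   # ['B', 'C']
```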

Relevance:

20.00%

Publisher:

Abstract:

The main objective of this paper is to relieve power system engineers of the burden of the complex and time-consuming process of power system stabilizer (PSS) tuning. To achieve this goal, the paper proposes an automatic, computerized procedure for tuning PSSs, based on an iterative process that uses a linear matrix inequality (LMI) solver to find the PSS parameters. It is shown in the paper that PSS tuning can be written as a search problem over a non-convex feasible set. The proposed algorithm solves this feasibility problem using an iterative LMI approach and a suitable initial condition, corresponding to a PSS designed for nominal operating conditions only (which is a quite simple task, since the required phase compensation is uniquely defined). Some knowledge about PSS tuning is also incorporated into the algorithm through the specification of bounds defining the allowable PSS parameters. The application of the proposed algorithm to a benchmark test system and the nonlinear simulation of the resulting closed-loop models demonstrate the efficiency of this algorithm. (C) 2009 Elsevier Ltd. All rights reserved.
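
A highly simplified sketch in the spirit of the iterative LMI idea above (this is not the paper's formulation or benchmark system): a toy closed-loop matrix A(k) = A0 + k·A1 is affine in a single stabilizer gain k, and the loop alternates between solving a Lyapunov LMI for P with k fixed and re-optimizing k with P fixed, within prescribed gain bounds. Matrices, bounds, and tolerances are placeholders; this requires cvxpy with an SDP-capable solver.

```python
import numpy as np
import cvxpy as cp

# Toy model: closed-loop matrix affine in one stabilizer gain k (placeholder matrices).
A0 = np.array([[0.0, 1.0], [-1.0, 0.05]])      # lightly "anti-damped" oscillator
A1 = np.array([[0.0, 0.0], [0.0, -1.0]])       # the gain k adds damping
n, eps = 2, 1e-3
k_val = 0.1                                     # initial guess (nominal design)

for _ in range(20):
    # Step 1: fix k, look for a Lyapunov matrix P certifying stability.
    P = cp.Variable((n, n), symmetric=True)
    Ak = A0 + k_val * A1
    prob = cp.Problem(cp.Minimize(0),
                      [P >> eps * np.eye(n),
                       Ak.T @ P + P @ Ak << -eps * np.eye(n)])
    prob.solve()
    if prob.status != cp.OPTIMAL:
        break
    P_val = P.value
    # Step 2: fix P, move the gain to improve the certified decay margin
    # (the LMI is affine in k once P is fixed).
    k = cp.Variable()
    alpha = cp.Variable()
    Ak = A0 + k * A1
    prob = cp.Problem(cp.Maximize(alpha),
                      [Ak.T @ P_val + P_val @ Ak << -alpha * np.eye(n),
                       alpha >= eps, k >= 0.0, k <= 10.0])
    prob.solve()
    if prob.status != cp.OPTIMAL:
        break
    k_val = float(k.value)

print("tuned gain:", k_val)
```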

Relevance:

20.00%

Publisher:

Abstract:

In this article, a novel algorithm based on the chemotaxis process of Escherichia coli is developed to solve multiobjective optimization problems. The algorithm uses a fast nondominated sorting procedure, communication between the colony members, and a simple chemotactic strategy to change the bacterial positions in order to explore the search space and find several optimal solutions. The proposed algorithm is validated on 11 benchmark problems, using three different performance measures to compare its performance with the NSGA-II genetic algorithm and with the particle swarm-based algorithm NSPSO. (C) 2009 Elsevier Ltd. All rights reserved.
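
The fast nondominated sorting step mentioned above is the standard NSGA-II procedure; a compact sketch of that step alone (independent of the bacterial-chemotaxis part, and written for minimization) is:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_nondominated_sort(objs):
    """Return a list of fronts; each front is a list of indices into objs."""
    n = len(objs)
    S = [[] for _ in range(n)]      # solutions dominated by i
    cnt = [0] * n                   # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objs[i], objs[j]):
                S[i].append(j)
            elif dominates(objs[j], objs[i]):
                cnt[i] += 1
        if cnt[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                cnt[j] -= 1
                if cnt[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

print(fast_nondominated_sort([(1, 5), (2, 2), (3, 1), (4, 4)]))   # [[0, 1, 2], [3]]
```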

Relevance:

20.00%

Publisher:

Abstract:

The general flowshop scheduling problem is a production problem where a set of n jobs has to be processed with an identical flow pattern on m machines. In permutation flowshops the sequence of jobs is the same on all machines. A significant research effort has been devoted to sequencing jobs in a flowshop so as to minimize the makespan. This paper describes the application of a Constructive Genetic Algorithm (CGA) to makespan minimization in flowshop scheduling. The CGA was proposed recently as an alternative to traditional GA approaches, in particular for evaluating schemata directly. The population, initially formed only by schemata, evolves under recombination into a population of well-adapted structures (schema instantiation). The implemented CGA is based on the classic NEH heuristic and on a local search heuristic used to define the fitness functions. The parameters of the CGA are calibrated using a Design of Experiments (DOE) approach. The computational results are compared against some other successful algorithms from the literature on Taillard's well-known standard benchmark. The computational experience shows that this innovative CGA approach provides competitive results for flowshop scheduling problems. (C) 2007 Elsevier Ltd. All rights reserved.
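
For reference, the NEH heuristic that seeds the CGA works as follows (standard NEH, not the CGA itself): sort jobs by decreasing total processing time, then insert each job, one at a time, at the position of the partial sequence that yields the smallest partial makespan. A minimal sketch with an illustrative instance:

```python
def makespan(seq, p):
    """Completion time of the last job on the last machine; p[j][m] = processing time."""
    m = len(p[0])
    c = [0.0] * m
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def neh(p):
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))   # decreasing total time
    seq = [jobs[0]]
    for j in jobs[1:]:
        # Try every insertion position and keep the best partial sequence.
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq, makespan(seq, p)

# Toy 4-job, 3-machine instance (rows = jobs, columns = machines).
p = [[5, 3, 6], [2, 8, 4], [7, 2, 3], [4, 6, 5]]
print(neh(p))
```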

Relevance:

20.00%

Publisher:

Abstract:

The selection criteria for Euler-Bernoulli or Timoshenko beam theories are generally given by means of some deterministic rule involving the beam dimensions. The Euler-Bernoulli beam theory is used to model the behavior of flexure-dominated (or "long") beams. The Timoshenko theory applies to shear-dominated (or "short") beams. In the mid-length range, both theories should be equivalent, and some agreement between them would be expected. Indeed, it is shown in the paper that, for some mid-length beams, the deterministic displacement responses of the two theories agree very well. However, the article points out that the behavior of the two beam models is radically different in terms of uncertainty propagation. In the paper, some beam parameters are modeled as parameterized stochastic processes. The two formulations are implemented and solved via a Monte Carlo-Galerkin scheme. It is shown that, for an uncertain elasticity modulus, propagation of uncertainty to the displacement response is much larger for Timoshenko beams than for Euler-Bernoulli beams. On the other hand, propagation of the uncertainty in random beam height is much larger for Euler-Bernoulli beam displacements. Hence, any reliability or risk analysis becomes completely dependent on the beam theory employed. The authors believe this is not widely acknowledged by the structural safety or stochastic mechanics communities. (C) 2010 Elsevier Ltd. All rights reserved.
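
A toy Monte Carlo illustration of the random-height case described above (tip deflection of a cantilever with an end load, using the textbook Euler-Bernoulli and Timoshenko deflection formulas). This uses a single scalar random height rather than the paper's parameterized stochastic processes, and all numbers are illustrative; the Euler-Bernoulli deflection, scaling as 1/h³, is the more sensitive of the two.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
P, Lb, b, nu, kappa = 10e3, 2.0, 0.1, 0.3, 5.0 / 6.0   # load [N], span [m], width [m]
E = 210e9                                                # elasticity modulus [Pa]
h = 0.4 * rng.lognormal(0.0, 0.10, N)                    # uncertain height, ~10% c.o.v.

I = b * h**3 / 12.0
A = b * h
G = E / (2.0 * (1.0 + nu))
delta_eb = P * Lb**3 / (3.0 * E * I)                     # Euler-Bernoulli tip deflection
delta_ti = delta_eb + P * Lb / (kappa * G * A)           # add the Timoshenko shear term

cov = lambda x: np.std(x) / np.mean(x)
print("c.o.v. of tip deflection:  Euler-Bernoulli = %.3f,  Timoshenko = %.3f"
      % (cov(delta_eb), cov(delta_ti)))
```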

Relevance:

20.00%

Publisher:

Abstract:

In this paper, the Askey-Wiener scheme and the Galerkin method are used to obtain approximate solutions to stochastic beam bending on a Winkler foundation. The study addresses Euler-Bernoulli beams with uncertainty in the bending stiffness modulus and in the stiffness of the foundation. Uncertainties are represented by parameterized stochastic processes. The random behavior of the beam response is modeled using the Askey-Wiener scheme. One contribution of the paper is a sketch of the proof of existence and uniqueness of the solution to problems involving fourth-order operators applied to random fields. From the approximate Galerkin solution, the expected value and variance of the beam displacement response are derived and compared with corresponding estimates obtained via Monte Carlo simulation. Results show very fast convergence and excellent accuracy in comparison to Monte Carlo simulation. The Askey-Wiener Galerkin scheme presented herein is shown to be a theoretically solid and numerically efficient method for the solution of stochastic problems in engineering.
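
Schematically, the problem addressed above is the Euler-Bernoulli beam on a Winkler foundation with random bending stiffness EI(x,ω) and foundation stiffness k(x,ω), with the response expanded in the Askey-Wiener (generalized polynomial chaos) basis Ψ_i and the deterministic coefficients w_i(x) determined by Galerkin projection. This is a schematic restatement of the setup in the abstract, not the paper's exact notation:

```latex
\frac{d^{2}}{dx^{2}}\!\left( EI(x,\omega)\,\frac{d^{2}w}{dx^{2}} \right)
+ k(x,\omega)\, w(x,\omega) \;=\; q(x),
\qquad
w(x,\omega) \;\approx\; \sum_{i=0}^{P} w_{i}(x)\, \Psi_{i}\big(\boldsymbol{\xi}(\omega)\big)
```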