749 results for Pencil Beam Convolution Algorithm
Abstract:
We report on the event structure and double helicity asymmetry (A_LL) of jet production in longitudinally polarized p+p collisions at √s = 200 GeV. Photons and charged particles were measured by the PHENIX experiment at midrapidity |η| < 0.35 with the requirement of a high-momentum (> 2 GeV/c) photon in the event. Event structure observables, such as multiplicity, p_T density and thrust in the PHENIX acceptance, were measured and compared with the results from the PYTHIA event generator and the GEANT detector simulation. The shape of jets and the underlying event were well reproduced at this collision energy. For the measurement of jet A_LL, photons and charged particles were clustered with a seed-cone algorithm to obtain the cluster p_T sum (p_T^reco). The effect of detector response and the underlying event on p_T^reco was evaluated with the simulation. The production rate of reconstructed jets is satisfactorily reproduced by the next-to-leading-order perturbative quantum chromodynamics jet production cross section. For 4 < p_T^reco < 12 GeV/c with an average beam polarization of <P> = 49%, we measured A_LL = -0.0014 ± 0.0037(stat) in the lowest p_T^reco bin (4-5 GeV/c) and -0.0181 ± 0.0282(stat) in the highest p_T^reco bin (10-12 GeV/c), with a beam polarization scale error of 9.4% and a p_T scale error of 10%. Jets in the measured p_T^reco range arise primarily from hard-scattered gluons with momentum fraction 0.02 < x < 0.3 according to PYTHIA. The measured A_LL is compared with predictions assuming various ΔG(x) distributions based on the Glück-Reya-Stratmann-Vogelsang parameterization. The present result imposes the limit -1.1 < ∫_{0.02}^{0.3} dx ΔG(x, μ² = 1 GeV²) < 0.4 at 95% confidence level, or ∫_{0.02}^{0.3} dx ΔG(x, μ² = 1 GeV²) < 0.5 at 99% confidence level.
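As a rough illustration of the seed-cone clustering mentioned in this abstract, the sketch below greedily takes the highest-p_T particle as a seed and sums the p_T of all particles within a fixed cone in η-φ around it. The seed threshold and cone radius are illustrative placeholders, not the PHENIX analysis values, and the function name is hypothetical.

```python
import math

def seed_cone_clusters(particles, seed_pt=2.0, cone_r=0.3):
    """Greedy seed-cone clustering sketch.

    particles: list of (pt, eta, phi) tuples.
    Returns the cluster pT sums (pT^reco), highest-pT seed first.
    """
    remaining = sorted(particles, key=lambda p: p[0], reverse=True)
    clusters = []
    while remaining and remaining[0][0] >= seed_pt:
        _, seed_eta, seed_phi = remaining[0]
        in_cone, out_cone = [], []
        for pt, eta, phi in remaining:
            dphi = math.atan2(math.sin(phi - seed_phi), math.cos(phi - seed_phi))
            dr = math.hypot(eta - seed_eta, dphi)
            (in_cone if dr < cone_r else out_cone).append((pt, eta, phi))
        clusters.append(sum(pt for pt, _, _ in in_cone))
        remaining = out_cone
    return clusters
```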
Abstract:
Very low intensity and phase fluctuations are present in a bright light field such as a laser beam. These subtle quantum fluctuations may be used to encode quantum information. Although intensity is easily measured with common photodetectors, accessing the phase information requires interference experiments. We introduce one such technique, the rotation of the noise ellipse of light, which employs an optical cavity to achieve the conversion of phase to intensity fluctuations. We describe the quantum noise of light and how it can be manipulated by employing an optical resonance technique and compare it to similar techniques, such as Pound-Drever-Hall laser stabilization and homodyne detection. (c) 2008 American Association of Physics Teachers.
Abstract:
In this work, pyrolysis-molecular beam mass spectrometry analysis coupled with principal components analysis and ¹³C-labeled tetramethylammonium hydroxide thermochemolysis were used to study lignin oxidation, depolymerization, and demethylation of spruce wood treated by biomimetic oxidative systems. Neat Fenton and chelator-mediated Fenton reaction (CMFR) systems as well as cellulosic enzyme treatments were used to mimic the nonenzymatic process involved in wood brown-rot biodegradation. The results suggest that compared with enzymatic processes, Fenton-based treatment more readily opens the structure of the lignocellulosic matrix, freeing cellulose fibrils from the matrix. The results demonstrate that, under the current treatment conditions, Fenton and CMFR treatment cause limited demethoxylation of lignin in the insoluble wood residue. However, analysis of a water-extractable fraction revealed considerable soluble lignin residue structures that had undergone side chain oxidation as well as demethoxylation upon CMFR treatment. This research has implications for our understanding of nonenzymatic degradation of wood and the diffusion of CMFR agents in the wood cell wall during fungal degradation processes.
Abstract:
The power loss reduction in distribution systems (DSs) is a nonlinear and multiobjective problem. Service restoration in DSs is even harder computationally, since it additionally requires a solution in real time. Both DS problems are computationally complex. For large-scale networks, the usual problem formulation has thousands of constraint equations. The node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the problem solution simpler. On the other hand, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. The combination of the multiobjective EA with NDE (MEAN) results in the proposed approach for solving DS problems in large-scale networks. Simulation results have shown that MEAN is able to find adequate restoration plans for a real DS with 3860 buses and 632 switches in a running time of 0.68 s. Moreover, MEAN has shown a sublinear running time as a function of system size. Tests with networks ranging from 632 to 5166 switches indicate that MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks while requiring relatively low running time.
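As a rough sketch of the node-depth idea mentioned above, each tree of the network forest can be stored as the sequence of nodes visited by a depth-first traversal paired with their depths, so that radiality is carried by the data structure rather than by extra constraint equations. The graph and function below are hypothetical illustrations, not the paper's implementation (which also defines specialized operators on this encoding).

```python
def node_depth_encoding(adj, root):
    """Encode a tree as a list of (node, depth) pairs via depth-first traversal.

    adj: dict mapping node -> list of neighbors (a tree rooted at `root`).
    """
    encoding, stack, seen = [], [(root, 0)], set()
    while stack:
        node, depth = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        encoding.append((node, depth))
        for nb in reversed(adj[node]):
            if nb not in seen:
                stack.append((nb, depth + 1))
    return encoding

# Hypothetical 6-bus feeder rooted at the substation node 0.
feeder = {0: [1, 2], 1: [0, 3], 2: [0, 4, 5], 3: [1], 4: [2], 5: [2]}
print(node_depth_encoding(feeder, 0))
```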
Abstract:
The main objective of this paper is to relieve power system engineers of the burden of the complex and time-consuming process of power system stabilizer (PSS) tuning. To achieve this goal, the paper proposes an automatic process for computerized tuning of PSSs, based on an iterative process that uses a linear matrix inequality (LMI) solver to find the PSS parameters. It is shown in the paper that PSS tuning can be written as a search problem over a non-convex feasible set. The proposed algorithm solves this feasibility problem using an iterative LMI approach and a suitable initial condition, corresponding to a PSS designed for nominal operating conditions only (which is a quite simple task, since the required phase compensation is uniquely defined). Some knowledge about PSS tuning is also incorporated in the algorithm through the specification of bounds defining the allowable PSS parameters. The application of the proposed algorithm to a benchmark test system and the nonlinear simulation of the resulting closed-loop models demonstrate the efficiency of this algorithm. (C) 2009 Elsevier Ltd. All rights reserved.
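To give a flavor of the LMI machinery involved, the sketch below (assuming the cvxpy package with the SCS solver) solves the kind of feasibility subproblem such an iterative scheme might pose at each step: given the closed-loop state matrix obtained for one candidate set of PSS parameters, search for a Lyapunov certificate P > 0 with A_cl^T P + P A_cl < 0. The matrix values are made up, and this is not the paper's specific formulation.

```python
import numpy as np
import cvxpy as cp

A_cl = np.array([[0.0, 1.0],
                 [-2.0, -0.5]])   # hypothetical closed-loop state matrix

n = A_cl.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                               # P positive definite
               A_cl.T @ P + P @ A_cl << -eps * np.eye(n)]          # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)  # "optimal" means a certificate was found for this candidate
```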
Abstract:
In this article a novel algorithm based on the chemotaxis process of Escherichia coli is developed to solve multiobjective optimization problems. The algorithm uses a fast nondominated sorting procedure, communication between the colony members, and a simple chemotactic strategy to change the bacterial positions in order to explore the search space and find several optimal solutions. The proposed algorithm is validated on 11 benchmark problems, using three different performance measures to compare its performance with the NSGA-II genetic algorithm and with the particle swarm-based algorithm NSPSO. (C) 2009 Elsevier Ltd. All rights reserved.
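The fast nondominated sorting step cited above is the standard NSGA-II-style procedure; a minimal version, written from its usual textbook description rather than from the paper's code, is sketched below.

```python
def fast_nondominated_sort(objectives):
    """Return Pareto fronts (lists of indices) for a minimization problem.

    objectives: list of objective vectors, one per solution.
    """
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    n = len(objectives)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    dom_count = [0] * n                     # number of solutions dominating i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objectives[i], objectives[j]):
                dominated_by[i].append(j)
            elif dominates(objectives[j], objectives[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

print(fast_nondominated_sort([(1, 5), (2, 2), (3, 1), (4, 4)]))
```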
Abstract:
The general flowshop scheduling problem is a production problem where a set of n jobs has to be processed with an identical flow pattern on m machines. In permutation flowshops the sequence of jobs is the same on all machines. A significant research effort has been devoted to sequencing jobs in a flowshop so as to minimize the makespan. This paper describes the application of a Constructive Genetic Algorithm (CGA) to makespan minimization in flowshop scheduling. The CGA was proposed recently as an alternative to traditional GA approaches, particularly for evaluating schemata directly. The population, initially formed only by schemata, evolves under recombination into a population of well-adapted structures (schema instantiation). The CGA implemented is based on the classic NEH heuristic and on a local search heuristic used to define the fitness functions. The parameters of the CGA are calibrated using a Design of Experiments (DOE) approach. The computational results are compared against other successful algorithms from the literature on Taillard's well-known standard benchmark. The computational experience shows that this innovative CGA approach provides competitive results for flowshop scheduling problems. (C) 2007 Elsevier Ltd. All rights reserved.
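Since the CGA builds on the classic NEH heuristic, a compact sketch of NEH and of the permutation-flowshop makespan recurrence may help; the instance data are made up, and this is not the paper's implementation.

```python
def makespan(perm, p):
    """Completion time of the last job on the last machine.

    p[j][k] = processing time of job j on machine k; perm is a job order.
    """
    m = len(p[0])
    completion = [0] * m
    for j in perm:
        completion[0] += p[j][0]
        for k in range(1, m):
            completion[k] = max(completion[k], completion[k - 1]) + p[j][k]
    return completion[-1]

def neh(p):
    """NEH heuristic: insert jobs, longest total work first,
    each at the position that minimizes the partial makespan."""
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = []
    for j in order:
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq, makespan(seq, p)

# Hypothetical 4-job, 3-machine instance.
times = [[3, 4, 6], [5, 3, 2], [1, 7, 3], [4, 2, 5]]
print(neh(times))
```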
Abstract:
The selection criteria for Euler-Bernoulli or Timoshenko beam theories are generally given by means of some deterministic rule involving beam dimensions. The Euler-Bernoulli beam theory is used to model the behavior of flexure-dominated (or "long") beams. The Timoshenko theory applies to shear-dominated (or "short") beams. In the mid-length range, both theories should be equivalent, and some agreement between them would be expected. Indeed, it is shown in the paper that, for some mid-length beams, the deterministic displacement responses of the two theories agree very well. However, the article points out that the behavior of the two beam models is radically different in terms of uncertainty propagation. In the paper, some beam parameters are modeled as parameterized stochastic processes. The two formulations are implemented and solved via a Monte Carlo-Galerkin scheme. It is shown that, for uncertain elasticity modulus, propagation of uncertainty to the displacement response is much larger for Timoshenko beams than for Euler-Bernoulli beams. On the other hand, propagation of the uncertainty for random beam height is much larger for Euler beam displacements. Hence, any reliability or risk analysis becomes completely dependent on the beam theory employed. The authors believe this is not widely acknowledged by the structural safety or stochastic mechanics communities. (C) 2010 Elsevier Ltd. All rights reserved.
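A plain sampling sketch (not the paper's Monte Carlo-Galerkin scheme, which treats the parameters as stochastic processes) illustrates the mechanics of the comparison for a cantilever under an end load with a random height: the bending term scales as 1/h^3 while the Timoshenko shear correction scales only as 1/h, so the Euler-Bernoulli response picks up relatively more of the variability, in the direction reported above. All dimensions, the load and the 10% scatter are made-up values.

```python
import numpy as np

rng = np.random.default_rng(0)
L, b, P, E, nu, kappa = 0.6, 0.1, 10e3, 210e9, 0.3, 5.0 / 6.0   # made-up values
G = E / (2 * (1 + nu))

h = rng.normal(0.20, 0.02, 100_000)            # uncertain beam height, ~10% scatter
I, A = b * h**3 / 12, b * h

delta_eb = P * L**3 / (3 * E * I)              # Euler-Bernoulli: bending only
delta_ti = delta_eb + P * L / (kappa * G * A)  # Timoshenko: bending + shear term

for name, d in (("Euler-Bernoulli", delta_eb), ("Timoshenko", delta_ti)):
    print(f"{name}: mean = {d.mean():.3e} m, cov = {d.std() / d.mean():.3f}")
```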
Abstract:
In this paper, the Askey-Wiener scheme and the Galerkin method are used to obtain approximate solutions to stochastic beam bending on a Winkler foundation. The study addresses Euler-Bernoulli beams with uncertainty in the bending stiffness modulus and in the stiffness of the foundation. Uncertainties are represented by parameterized stochastic processes. The random behavior of the beam response is modeled using the Askey-Wiener scheme. One contribution of the paper is a sketch of a proof of existence and uniqueness of the solution to problems involving fourth-order operators applied to random fields. From the approximate Galerkin solution, the expected value and variance of the beam displacement response are derived and compared with corresponding estimates obtained via Monte Carlo simulation. Results show very fast convergence and excellent accuracy in comparison to Monte Carlo simulation. The Askey-Wiener Galerkin scheme presented herein is shown to be a theoretically solid and numerically efficient method for the solution of stochastic problems in engineering.
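For orientation, the deterministic counterpart of the problem treated above is the Euler-Bernoulli beam on a Winkler foundation; in the stochastic setting the bending stiffness EI and the foundation modulus k become random fields. A standard statement of the governing equation is

\frac{d^{2}}{dx^{2}}\!\left( EI(x)\,\frac{d^{2}w(x)}{dx^{2}} \right) + k(x)\,w(x) = q(x),

where w is the transverse displacement and q the distributed load.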
Abstract:
In this work, a new boundary element formulation for the analysis of plate-beam interaction is presented. This formulation uses three-nodal-value boundary elements, and each beam element is replaced by its actions on the plate, i.e., a distributed load and end-of-element forces. From the solution of the differential equation of a beam with a linearly distributed load, the plate-beam interaction tractions can be written as functions of the nodal values of the beam. With this transformation, a final system of equations in the nodal displacement values of the plate boundary and of the beam nodes is obtained, and from it all unknowns of the plate-beam system are determined. Many examples are analyzed and the results show excellent agreement with those from the analytical solution and from other numerical methods. (C) 2009 Elsevier Ltd. All rights reserved.
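The beam differential equation referred to above is, in the Euler-Bernoulli setting and with generic symbols rather than the paper's notation, the fourth-order relation between deflection and the linearly varying interaction load,

EI\,\frac{d^{4}w(x)}{dx^{4}} = q(x), \qquad q(x) = q_{1} + (q_{2}-q_{1})\,\frac{x}{\ell},

so four successive integrations give a fifth-degree polynomial for w(x) whose constants follow from the end-of-element conditions.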
Abstract:
The present research studies the behavior of reinforced concrete locking beams supported on two-pile caps with embedded sockets, used as connections in precast concrete structures. The effect produced on the pile caps by the locking beam when it is supported by the lateral socket walls was evaluated. Three-dimensional numerical analyses were developed using software based on the finite element method (FEM), considering the nonlinear physical behavior of the material. To evaluate the adopted software, a comparative analysis was made using numerical and experimental results obtained with other software. In the pile caps studied, the wall thickness, the socket interface, the strut inclination angle and the action on the beam were varied. The results show that the presence of a beam does not significantly change pile cap behavior and that the socket wall is able to effectively transfer the force from the beam to the pile caps. From the stresses in the longitudinal reinforcement bars, it was possible to obtain the tie force and the strut inclination angle before the collapse of the models. It was found that these angles show greater inclination than those used in the design, which was based on a strut-and-tie model. More results are available at http://www.set.eesc.usp.br/pdf/download/2009ME_RodrigoBarros.pdf
Abstract:
This paper presents an Adaptive Maximum Entropy (AME) approach for modeling biological species. The Maximum Entropy algorithm (MaxEnt) is one of the most widely used methods for modeling the geographical distribution of biological species. The approach presented here is an alternative to the classical algorithm: instead of using the same set of features throughout training, the AME approach tries to insert or remove a single feature at each iteration. The aim is to reach convergence faster without affecting the performance of the generated models. The preliminary experiments performed well, showing an improvement both in accuracy and in execution time. Comparisons with other algorithms are beyond the scope of this paper. Some important research directions are proposed as future work.
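A schematic of the adaptive idea described above (add or drop one feature per iteration and keep the change only if a validation score does not degrade) is sketched below; fit_and_score is a hypothetical stand-in for training and scoring the MaxEnt model, and this is not the authors' implementation.

```python
def adaptive_feature_search(all_features, fit_and_score, max_iters=50):
    """Schematic adaptive loop: try inserting or removing one feature per
    iteration and keep the change if the validation score does not drop.

    fit_and_score(features) -> float stands in for training the MaxEnt model
    with that feature set and scoring it (e.g., AUC on held-out data).
    """
    current = set(all_features[:1])
    best_score = fit_and_score(current)
    for _ in range(max_iters):
        improved = False
        for f in all_features:
            candidate = current ^ {f}      # insert if absent, remove if present
            if not candidate:
                continue
            score = fit_and_score(candidate)
            if score >= best_score:
                current, best_score, improved = candidate, score, True
                break                      # one accepted change per iteration
        if not improved:
            break
    return current, best_score
```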
Abstract:
This paper presents a free software tool that supports next-generation mobile communications through the automatic generation of models of components and electronic devices based on neural networks. This tool enables the creation, training, validation and simulation of the model directly from measurements made on the devices of interest, using an interface entirely oriented toward non-experts in neural modeling. The resulting model can be exported automatically to a traditional circuit simulator in order to test different scenarios.
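A minimal sketch of the workflow such a tool automates, assuming scikit-learn as the neural-network back end: fit a small network to measured device samples, check it, and then query it as a behavioral model. The compression-like transfer characteristic below is synthetic and merely stands in for real measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, (500, 1))                       # "measured" input level
y = np.tanh(3.0 * x[:, 0]) + rng.normal(0.0, 0.01, 500)    # "measured" output (compression)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(x, y)                                            # training step
print("in-sample R^2:", model.score(x, y))                 # crude validation stand-in
print("model output at 0.5:", model.predict([[0.5]])[0])   # query as a behavioral model
```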
Abstract:
This paper presents a new methodology to estimate unbalanced harmonic distortions in a power system, based on measurements at a limited number of given sites. The algorithm utilizes evolutionary strategies (ES), a development branch of evolutionary algorithms. The problem-solving algorithm proposed herein makes use of data from various power quality meters, which can either be synchronized by high-technology GPS devices or by using information from a fundamental frequency load flow, which makes the overall power quality monitoring system much less costly. The ES-based harmonic estimation model is applied to a 14-bus network to compare its performance with a conventional Monte Carlo approach. It is also applied to a 50-bus subtransmission network in order to compare the three-phase and single-phase approaches, as well as to assess the robustness of the proposed method. (C) 2010 Elsevier B.V. All rights reserved.
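A bare-bones (mu + lambda) evolution strategy of the kind the estimator builds on is sketched below; it is not the paper's formulation, and the mismatch function is a hypothetical stand-in for the error between simulated and measured harmonic quantities.

```python
import numpy as np

def evolution_strategy(mismatch, dim, mu=5, lam=20, sigma=0.3, generations=200, seed=0):
    """(mu + lambda) ES: Gaussian mutation, select the mu best by mismatch."""
    rng = np.random.default_rng(seed)
    parents = rng.normal(0.0, 1.0, (mu, dim))
    for _ in range(generations):
        offspring = parents[rng.integers(mu, size=lam)] + rng.normal(0, sigma, (lam, dim))
        pool = np.vstack([parents, offspring])
        pool = pool[np.argsort([mismatch(x) for x in pool])]
        parents = pool[:mu]
        sigma *= 0.99                      # simple deterministic step-size decay
    return parents[0]

# Toy usage: recover a 3-parameter vector from an idealized quadratic mismatch.
true_x = np.array([1.0, -2.0, 0.5])
best = evolution_strategy(lambda x: np.sum((x - true_x) ** 2), dim=3)
print(best)
```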
Abstract:
An improvement to a quality two-dimensional Delaunay mesh generation algorithm, which combines the mesh refinement strategies of Ruppert and Shewchuk, is proposed in this research. The developed technique uses the diametral lens criterion, introduced by L. P. Chew, with the purpose of eliminating the extremely obtuse triangles at the mesh boundary. This method splits the boundary segments and obtains an initial pre-refinement, thus reducing the number of iterations needed to generate a high-quality sequential triangulation. Moreover, it decreases the intensity of communication and synchronization between subdomains in parallel mesh refinement.
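Under one common statement of Chew's criterion, the diametral lens of a segment is the set of points subtending an angle greater than 120 degrees with its endpoints (versus 90 degrees for Ruppert's diametral circle), so the encroachment test reduces to an angle check. The sketch below assumes that formulation and uses made-up coordinates; it is not code from the paper.

```python
import math

def subtended_angle(p, a, b):
    """Angle at point p subtended by segment endpoints a and b, in degrees."""
    ux, uy = a[0] - p[0], a[1] - p[1]
    vx, vy = b[0] - p[0], b[1] - p[1]
    cosang = (ux * vx + uy * vy) / (math.hypot(ux, uy) * math.hypot(vx, vy))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

def encroaches_diametral_circle(p, a, b):
    return subtended_angle(p, a, b) > 90.0     # p inside the diametral circle

def encroaches_diametral_lens(p, a, b):
    return subtended_angle(p, a, b) > 120.0    # p inside the (smaller) diametral lens

seg_a, seg_b, vertex = (0.0, 0.0), (1.0, 0.0), (0.5, 0.4)
print(encroaches_diametral_circle(vertex, seg_a, seg_b),
      encroaches_diametral_lens(vertex, seg_a, seg_b))
```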