Abstract:
The Load-Unload Response Ratio (LURR) method is an intermediate-term earthquake prediction approach that has shown considerable promise. It involves calculating the ratio of a specified energy release measure during loading and unloading, where the loading and unloading periods are determined from the Earth-tide-induced perturbations in the Coulomb failure stress on optimally oriented faults. High LURR values are frequently observed a few months or years prior to large earthquakes. These signals may have a similar origin to the accelerating seismic moment release (AMR) observed before many large earthquakes, or may be due to critical sensitivity of the crust when a large earthquake is imminent. As a first step towards studying the underlying physical mechanism of the LURR observations, numerical studies are conducted using the particle-based lattice solid model (LSM) to determine whether the LURR observations can be reproduced. The model is initialized as a heterogeneous 2-D block made up of random-sized particles bonded by elastic-brittle links. The system is subjected to uniaxial compression from rigid driving plates on the upper and lower edges of the model. Experiments are conducted using both strain and stress control to load the plates. A sinusoidal stress perturbation is added to the gradual compressional loading to simulate loading and unloading cycles, and LURR is calculated. The results reproduce signals similar to those observed in earthquake prediction practice, with a high LURR value followed by a sudden drop prior to macroscopic failure of the sample. They suggest that LURR provides a good predictor of catastrophic failure in elastic-brittle systems and motivate further research into the underlying physical mechanisms and statistical properties of high LURR values. The results provide encouragement for earthquake prediction research and for the use of advanced simulation models to probe the physics of earthquakes.
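For illustration, a minimal sketch of the LURR calculation, assuming its common definition as the ratio of an energy-release measure (e.g. Benioff strain, energy to the power 1/2) summed over the loading and unloading half-cycles of a sinusoidal stress perturbation; the function names and the synthetic catalogue are illustrative assumptions, not the paper's LSM experiments:

```python
import numpy as np

def lurr(event_times, event_energies, period=1.0, exponent=0.5):
    """LURR as the ratio of an energy-release measure summed over
    loading half-cycles to that summed over unloading half-cycles."""
    phase = 2.0 * np.pi * np.asarray(event_times) / period
    loading = np.cos(phase) > 0.0          # d/dt sin(phase) > 0 -> loading
    measure = np.asarray(event_energies) ** exponent
    e_load = measure[loading].sum()
    e_unload = measure[~loading].sum()
    return e_load / e_unload if e_unload > 0 else np.inf

# Synthetic catalogue: random event times, log-uniform energies.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 100.0, 2000)
e = 10.0 ** rng.uniform(0.0, 4.0, 2000)
print(f"LURR = {lurr(t, e):.3f}")   # close to 1 for an uncorrelated catalogue
```

For a catalogue with no tidal correlation, LURR fluctuates around 1; persistently high values, as reported above, indicate enhanced energy release during loading.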
Abstract:
A Co sintering aid has been added to Ce0.9Gd0.1O1.95 (CGO) by treating a commercial powder with Co(NO3)2 (CoCGO). X-ray diffraction (XRD) measurements of the lattice parameter indicated that the Co was located on the CGO particle surface after calcination at 650 °C. After heat treatment at temperatures above 650 °C, the room-temperature lattice parameter of CGO was found to increase, indicating redistribution of the Gd. Compared to CGO, the lattice parameter of CGO + 2 cation% Co (2CoCGO) was lower for a given temperature (650-1100 °C). A.C. impedance revealed that the lattice conductivity of 2CoCGO was enhanced when densified at lower temperatures. Transmission electron microscopy (TEM) showed that, even after sintering for 4 h at 980 °C, most of the Co was located at grain boundaries. (C) 2002 Published by Elsevier Science B.V.
Abstract:
The ability to generate enormous random libraries of DNA probes via split-and-mix synthesis on solid supports is an important biotechnological application of colloids that has not been fully utilized to date. To discriminate between colloid-based DNA probes, each colloidal particle must be 'encoded' so that it is distinguishable from all other particles. To this end, we have used novel particle synthesis strategies to produce large numbers of optically encoded particles suitable for DNA library synthesis. Multifluorescent particles with unique and reproducible optical signatures (i.e., fluorescence and light-scattering attributes) suitable for high-throughput flow cytometry have been produced. In the spectroscopic study presented here, we investigated the optical characteristics of multifluorescent particles that were synthesized by coating silica 'core' particles with up to six different fluorescent dye shells alternated with non-fluorescent silica 'spacer' shells. It was observed that the diameter of the particles increased by up to 20% as a result of the addition of twelve concentric shells, and that there was a significant reduction in fluorescence emission intensities from inner shells as an increasing number of shells was deposited.
Abstract:
Information on the spatial distribution of particle size fractions is essential for soil use planning and management. The aim of this work was to study the spatial variability of the particle size fractions of a Typic Hapludox cultivated with conilon coffee. Soil samples were collected at depths of 0-0.20 and 0.20-0.40 m in the coffee canopy projection, totaling 109 georeferenced points. At the 0.20-0.40 m depth the clay fraction showed a significantly higher average value, while the sand fraction was higher at the 0-0.20 m depth. The silt fraction showed no significant difference between the two depths. The particle size fractions showed medium to high spatial variability. The total sand and clay contents correlate positively and negatively, respectively, with the altitude of the sampling points, indicating the influence of landscape configuration.
Abstract:
The efficiency of sources used for soil acidity correction depends on the reactivity rate (RR) and neutralization power (NP), indicated by the effective calcium carbonate (ECC). Few studies establish the relative efficiency of reactivity (RER) for silicate particle-size fractions; therefore, the RER values established for lime are used. This study aimed to evaluate the reactivity of silicate materials as affected by particle size throughout incubation periods, in comparison to lime, and to calculate the RER for silicate particle-size fractions. Six correction sources were evaluated: three slags of distinct origins and dolomitic and calcitic lime, each separated into four particle-size fractions (2, 0.84 and 0.30 mm sieves, and <0.30 mm), plus wollastonite as an additional treatment. The treatments were applied to three soils with different texture classes. The dose of neutralizing material (calcium and magnesium oxides) was applied in equal quantities, the only variation being the particle size of the material. After a 90-day incubation period, the RER was calculated for each particle-size fraction, as well as the RR and ECC of each source. For the same particle-size fraction, the different sources showed distinct solubility in neutralizing soil acidity, i.e., silicates and lime react differently. The RER values for the slags were higher than the limits established by Brazilian legislation, indicating that the method used for limes should not be applied to the slags studied here.
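As a rough illustration of how RR and ECC combine, a minimal sketch assuming the efficiency factors (0, 20, 60 and 100%) that Brazilian legislation assigns to the four sieve fractions above for lime; all input percentages are illustrative, not the study's data:

```python
def reactivity_rate(fractions):
    """RR (%) from mass percentages retained in each particle-size class.

    fractions: (>2 mm, 2-0.84 mm, 0.84-0.30 mm, <0.30 mm), summing to 100.
    Efficiency factors (0, 0.2, 0.6, 1.0) are the standard lime values
    that the abstract argues do not transfer to slags.
    """
    factors = (0.0, 0.2, 0.6, 1.0)
    return sum(p * f for p, f in zip(fractions, factors))

def effective_calcium_carbonate(np_percent, rr_percent):
    """ECC (%) = neutralization power x reactivity rate / 100."""
    return np_percent * rr_percent / 100.0

rr = reactivity_rate((5.0, 15.0, 30.0, 50.0))   # -> 71.0 %
ecc = effective_calcium_carbonate(95.0, rr)     # -> 67.45 %
print(rr, ecc)
```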
Abstract:
The idea of grand unification in a minimal supersymmetric SU(5) x SU(5) framework is revisited. It is shown that the unification of the gauge couplings into a unique coupling constant can be achieved at a high-energy scale compatible with proton decay constraints. This requires the addition of minimal particle content at intermediate energy scales. In particular, the introduction of the SU(2)_L triplets belonging to the (15, 1) + (15-bar, 1) representations, as well as of the scalar triplet Sigma_3 and octet Sigma_8 in the (24, 1) representation, turns out to be crucial for unification. The masses of these intermediate particles can vary over a wide range, and even lie in the TeV region. In contrast, the exotic vector-like fermions must be heavy enough and have masses above 10^10 GeV. We also show that, if the SU(5) x SU(5) theory is embedded into a heterotic string scenario, it is not possible to achieve gauge coupling unification with gravity at the perturbative string scale.
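Gauge coupling unification of this kind is usually checked by running the inverse couplings with the one-loop renormalization group equations, alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - (b_i / 2 pi) ln(mu / M_Z). Below is a minimal sketch using the standard MSSM coefficients; the paper's intermediate-scale states would shift the b_i between thresholds, and the input values are approximate:

```python
import numpy as np

# One-loop running of the inverse gauge couplings:
#   alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - (b_i / 2*pi) * ln(mu / M_Z)
# b_i are the standard MSSM coefficients (33/5, 1, -3) in SU(5)
# normalization; inputs at M_Z are approximate, for illustration only.

MZ = 91.19                                    # GeV
alpha_inv_mz = np.array([59.0, 29.6, 8.5])    # U(1)_Y (GUT norm.), SU(2)_L, SU(3)_c
b = np.array([33.0 / 5.0, 1.0, -3.0])

def alpha_inv(mu):
    return alpha_inv_mz - b / (2.0 * np.pi) * np.log(mu / MZ)

for mu in (1e3, 1e10, 2e16):
    a1, a2, a3 = alpha_inv(mu)
    print(f"mu = {mu:.1e} GeV: alpha^-1 = {a1:.1f}, {a2:.1f}, {a3:.1f}")
# near mu ~ 2e16 GeV the three values approach each other (~24), the
# classic MSSM unification picture the paper builds on.
```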
Abstract:
We consider a simple model consisting of particles with four bonding sites ("patches"), two of type A and two of type B, on the square lattice, and investigate its global phase behavior by simulations and theory. We set the interaction between B patches to zero and calculate the phase diagram as the ratio between the AB and the AA interactions, epsilon_AB*, varies. In line with previous work on three-dimensional off-lattice models, we show that the liquid-vapor phase diagram exhibits a re-entrant or "pinched" shape for the same range of epsilon_AB*, suggesting that the ratio of the energy scales - and the corresponding empty-fluid regime - is independent of the dimensionality of the system and of the lattice structure. In addition, the model exhibits an order-disorder transition that is ferromagnetic in the re-entrant regime. The use of low-dimensional lattice models allows the simulation of sufficiently large systems to establish the nature of the liquid-vapor critical points and to describe the structure of the liquid phase in the empty-fluid regime, where the size of the "voids" increases as the temperature decreases. We have found that the liquid-vapor critical point is in the 2D Ising universality class, with a scaling region that decreases rapidly as the temperature decreases. The results of simulations and theoretical analysis suggest that, for patchy particle models with a re-entrant, empty-fluid regime, the line of order-disorder transitions intersects the condensation line at a multi-critical point at zero temperature and density. (C) 2011 American Institute of Physics. [doi: 10.1063/1.3657406]
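A minimal sketch of the energy evaluation for such a lattice patchy model, assuming each occupied site exposes its two A and two B patches along the four lattice directions and that only facing patches of nearest neighbours bond; the patch layout and configuration encoding are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

# Each site holds patches facing (+x, -x, +y, -y); with the two A patches
# opposite each other there are two orientations. An AA contact gains
# -eps_aa, an AB contact -eps_ab, and BB contacts nothing.
PATCHES = {
    0: ("A", "A", "B", "B"),   # A patches along the x axis
    1: ("B", "B", "A", "A"),   # A patches along the y axis
}

def bond(p, q, eps_aa, eps_ab):
    pair = {p, q}
    return -eps_aa if pair == {"A"} else (-eps_ab if pair == {"A", "B"} else 0.0)

def energy(occ, ori, eps_aa=1.0, eps_ab=0.5):
    """Total bond energy; occ: bool occupancy grid, ori: orientations (0/1)."""
    L = occ.shape[0]
    e = 0.0
    for x in range(L):
        for y in range(L):
            if not occ[x, y]:
                continue
            xr, yt = (x + 1) % L, (y + 1) % L      # periodic boundaries
            if occ[xr, y]:   # my +x patch meets neighbour's -x patch
                e += bond(PATCHES[ori[x, y]][0], PATCHES[ori[xr, y]][1], eps_aa, eps_ab)
            if occ[x, yt]:   # my +y patch meets neighbour's -y patch
                e += bond(PATCHES[ori[x, y]][2], PATCHES[ori[x, yt]][3], eps_aa, eps_ab)
    return e

rng = np.random.default_rng(2)
occ = rng.random((8, 8)) < 0.5
ori = rng.integers(0, 2, (8, 8))
print(energy(occ, ori))
```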
Abstract:
The Tevatron has measured a discrepancy, relative to the standard model prediction, in the forward-backward asymmetry in top quark pair production. This asymmetry grows with the rapidity difference of the two top quarks. It also increases with the invariant mass of the t t-bar pair, reaching, for high invariant masses, 3.4 standard deviations above the next-to-leading order prediction for the charge asymmetry of QCD. However, perfect agreement between experiment and the standard model was found in both the total and the differential cross section of top quark pair production. As this result could be a sign of new physics, we have parametrized this new physics in terms of a complete set of dimension-six operators involving the top quark. We have then used a Markov chain Monte Carlo approach to find the set of parameters that best fits the data, using all available data on top quark pair production at the Tevatron. We have found that only a very small number of operators fit the data better than the standard model.
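The asymmetry in question is commonly defined from the rapidity difference dy = y_t - y_tbar of the top pair as A_FB = (N(dy > 0) - N(dy < 0)) / (N(dy > 0) + N(dy < 0)). A minimal sketch of this estimator on a toy sample (illustrative, not Tevatron data):

```python
import numpy as np

def afb(dy):
    """Forward-backward asymmetry from rapidity differences dy = y_t - y_tbar."""
    nf = np.count_nonzero(dy > 0)   # forward events
    nb = np.count_nonzero(dy < 0)   # backward events
    return (nf - nb) / (nf + nb)

rng = np.random.default_rng(1)
dy = rng.normal(loc=0.05, scale=1.0, size=100_000)  # small built-in asymmetry
print(f"A_FB = {afb(dy):.3f}")
```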
Abstract:
The LHC has found hints of a Higgs particle at 125 GeV. We investigate the possibility that such a particle is a mixture of scalar and pseudoscalar states. For definiteness, we concentrate on a two-Higgs-doublet model with explicit CP violation and soft Z_2 violation. Including all Higgs production mechanisms, we determine the current constraints obtained by comparing h -> γγ with h -> VV*, and comment on the information which can be gained by measurements of h -> b b-bar. We find the bound |s_2| ≲ 0.83 at one sigma, where |s_2| = 0 (|s_2| = 1) corresponds to a pure scalar (pure pseudoscalar) state.
Abstract:
We investigate whether the liquid-vapour phase transition of strongly dipolar fluids can be understood using a model of patchy colloids. These consist of hard spherical particles with three short-ranged attractive sites (patches) on their surfaces. Two of the patches are of type A and one is of type B. Patches A on a particle may bond either to a patch A or to a patch B on another particle. Formation of an AA (AB) bond lowers the energy by epsilon_AA (epsilon_AB). In the limit [image omitted], this patchy model exhibits condensation driven by AB bonds (Y-junctions). Y-junctions are also present in low-density, strongly dipolar fluids, and have been conjectured to play a key role in determining their critical behaviour. We map the dipolar Yukawa hard-sphere (DYHS) fluid onto this 2A + 1B patchy model by requiring that the latter reproduce the correct DYHS critical point as a function of the isotropic interaction strength epsilon_Y. This is achieved for sensible values of epsilon_AB and the bond volumes. Results for the internal energy and the particle coordination number are in qualitative agreement with simulations of DYHSs. Finally, by taking the limit epsilon_Y -> 0, we arrive at a new estimate for the critical point of the dipolar hard-sphere fluid, which agrees with extrapolations from simulation.
Abstract:
This paper addresses the problem of energy resources management using modern metaheuristic approaches, namely Particle Swarm Optimization (PSO), New Particle Swarm Optimization (NPSO) and Evolutionary Particle Swarm Optimization (EPSO). The problem addressed in this paper is intended for aggregators operating in a smart grid context, dealing with Distributed Generation (DG) and gridable vehicles intelligently managed on a multi-period basis according to their users' profiles and requirements. The aggregator can also purchase additional energy from external suppliers. The paper includes a case study considering a 30 kV distribution network with one substation, 180 buses and 90 load points. The distribution network in the case study considers intense penetration of DG, including 116 units from several technologies, and one external supplier. A scenario of 6000 EVs for the given network is simulated over 24 periods, corresponding to one day. The results of the application of the PSO approaches to this case study are discussed in detail in the paper.
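All the swarm variants compared above build on the canonical PSO update, v <- w v + c1 r1 (pbest - x) + c2 r2 (gbest - x), followed by x <- x + v. A minimal sketch on a toy quadratic cost; the specific NPSO/EPSO modifications are not reproduced, and all parameters are illustrative:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    """Canonical particle swarm minimization of f over [lo, hi]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()                                   # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()               # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best_x, best_f = pso(lambda z: np.sum(z ** 2), dim=5)  # toy quadratic cost
print(best_x, best_f)
```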
Abstract:
This paper proposes a particle swarm optimization (PSO) approach to support electricity producers in multiperiod optimal contract allocation. The producer's risk preference is stated by a utility function (U) expressing the tradeoff between the expectation and the variance of the return. Variance estimation and expected return are based on a forecasted scenario interval determined by a price range forecasting model developed by the authors. A certain confidence level is associated with each forecasted scenario interval. The proposed model makes use of contracts with physical (spot and forward) and financial (options) settlement. PSO performance was evaluated by comparing it with a genetic algorithm-based approach. This model can be used by producers in deregulated electricity markets, but it can easily be adapted to load-serving entities and retailers. Moreover, it can easily be adapted to the use of other types of contracts.
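A common concrete form for such a utility, assumed here purely for illustration (the paper's exact U may differ), is the mean-variance tradeoff U(R) = E[R] - lambda * Var[R], with lambda the producer's risk-aversion weight:

```python
import numpy as np

def utility(returns, risk_aversion):
    """Mean-variance utility: expected return penalized by variance."""
    r = np.asarray(returns, dtype=float)
    return r.mean() - risk_aversion * r.var()

# Illustrative scenario samples, not forecasts from the paper's model.
spot = [120.0, 80.0, 150.0, 60.0]      # volatile spot-market returns
forward = [95.0, 100.0, 105.0, 100.0]  # stable forward-contract returns

for lam in (0.0, 0.05):
    print(lam, utility(spot, lam), utility(forward, lam))
# a risk-neutral producer (lambda = 0) prefers spot; a risk-averse one forward
```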
Abstract:
Distributed Energy Resources (DER) scheduling in smart grids presents a new challenge to system operators. The increase of new resources, such as storage systems and demand response programs, results in additional computational effort for optimization problems. On the other hand, since natural resources such as wind and sun can only be forecast accurately a short time in advance, short-term scheduling is especially relevant, requiring very good performance on large-dimension problems. Traditional techniques such as Mixed-Integer Non-Linear Programming (MINLP) do not cope well with large-scale problems. This type of problem can be appropriately addressed by metaheuristic approaches. This paper proposes a new methodology, called Signaled Particle Swarm Optimization (SiPSO), to address the energy resources management problem in the scope of smart grids with intensive use of DER. The proposed methodology's performance is illustrated by a case study with 99 distributed generators, 208 loads, and 27 storage units. The results are compared with those obtained with other methodologies, namely MINLP, Genetic Algorithm, original Particle Swarm Optimization (PSO), Evolutionary PSO, and New PSO. SiPSO's performance is superior to that of the other tested PSO variants, demonstrating its adequacy for solving large-dimension problems which require a decision in a short period of time.
Abstract:
Short-term risk management is highly dependent on previously established long-term contractual decisions, on the agent's risk aversion factor, and on short-term price forecast accuracy. To address this problem, this paper provides a different approach to short-term risk management in electricity markets. Based on long-term contractual decisions and making use of a price range forecast method developed by the authors, the short-term risk management tool presented here aims to find the optimal spot market strategies that a producer should adopt on a specific day, as a function of the producer's risk aversion factor, with the objective of maximizing profits while hedging against market price volatility. Due to the complexity of the optimization problem, the authors make use of Particle Swarm Optimization (PSO) to find the optimal solution. Results from realistic data, namely from the OMEL electricity market, are presented and discussed in detail.
Abstract:
The concept of demand response is of growing importance in the context of future power systems. Demand response can be seen as a resource like distributed generation, storage, electric vehicles, etc. All these resources require an infrastructure able to give players the means to operate and use them in an efficient way. This infrastructure implements in practice the smart grid concept, and should accommodate a large number of diverse types of players in the context of a competitive business environment. In this paper, demand response is optimally scheduled jointly with other resources, such as distributed generation units and the energy provided by the electricity market, minimizing the operation costs from the point of view of a virtual power player who manages these resources and supplies the aggregated consumers. The optimal schedule is obtained using two approaches based on particle swarm optimization (with and without mutation), which are compared with a deterministic approach used as a reference methodology. A case study with two scenarios, implemented in DemSi, a demand response simulator developed by the authors, evidences the advantages of the proposed particle swarm approaches.