67 results for Constraints of monotonicity
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
We discuss the dynamics of the Universe within the framework of the massive graviton cold dark matter scenario (MGCDM), in which gravitons are treated geometrically as massive particles. In this modified gravity theory, the main effect of the gravitons is to alter the density evolution of the cold dark matter component in such a way that the Universe evolves to an accelerating expanding regime, as presently observed. Tight constraints on the main cosmological parameters of the MGCDM model are derived by performing a joint likelihood analysis involving the recent type Ia supernovae data, the cosmic microwave background shift parameter, and the baryonic acoustic oscillations as traced by the Sloan Digital Sky Survey red luminous galaxies. The linear evolution of small density fluctuations is also analyzed in detail. It is found that the growth factor of the MGCDM model differs slightly (~1-4%) from the one provided by the conventional flat Lambda CDM cosmology. The growth rates of clustering predicted by the MGCDM and Lambda CDM models are confronted with observations and the corresponding best-fit values of the growth index (gamma) are determined. Using the expectations of realistic future X-ray and Sunyaev-Zeldovich cluster surveys, we derive the dark matter halo mass function and the corresponding redshift distribution of cluster-size halos for the MGCDM model. Finally, we also show that the Hubble flow differences between the MGCDM and Lambda CDM models lead to a halo redshift distribution departing significantly from those predicted by other dark energy models. These results suggest that the MGCDM model can be observationally distinguished from Lambda CDM and also from a large number of dark energy models recently proposed in the literature.
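For reference, the growth rate and growth index gamma discussed above are conventionally defined through the standard parametrization (quoted here only as background, not taken from the paper itself):

f(z) \equiv \frac{d\ln\delta}{d\ln a} \simeq \Omega_m(z)^{\gamma},

where \delta is the linear matter density contrast and a the scale factor; flat Lambda CDM corresponds to \gamma \approx 0.55, so a best-fit gamma departing from this value is one way the MGCDM model could be observationally distinguished.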
Abstract:
This paper proposes an optimal-sensitivity approach applied in the tertiary loop of automatic generation control. The approach is based on the non-linear perturbation theorem. From an optimal operating point obtained by an optimal power flow, a new optimal operating point is determined directly after a perturbation, i.e., without the need for an iterative process. This new optimal operating point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the automatic voltage regulators (AVR) of the generators are determined by the optimal sensitivity technique, considering the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of automatic generation control, referred to as the power sensitivity mode. Test results are presented to show the good performance of this approach.
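As background for the optimal-sensitivity technique referred to above, the perturbation-of-optimum idea can be summarized as follows (a generic first-order statement, assuming the set of active constraints does not change; not the paper's exact formulation). If x*(p) solves the optimal power flow for load parameters p, differentiating the Karush-Kuhn-Tucker conditions gives the sensitivity of the optimum,

\Delta x^{*} \approx -\left[\nabla^{2}_{xx} L\right]^{-1} \nabla^{2}_{xp} L \,\Delta p,

where L is the Lagrangian; after a small load perturbation \Delta p the new optimal operating point (and hence the participation factors and AVR voltage set points) follows directly from the previous one, without a new iterative solution.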
Abstract:
In this work, a stable MPC that maximizes the domain of attraction of the closed-loop system is proposed. The proposed approach is suitable for real applications in the sense that it accounts for output tracking, is offset-free if the output target is reachable, and minimizes the offset if some of the constraints are active at steady state. The new approach is based on the definition of a Minkowski functional related to the input and terminal constraints of the stable infinite-horizon MPC. It is also shown that the domain of attraction is defined by the system model and the constraints and does not depend on the controller tuning parameters. The proposed controller is illustrated with small-order examples from the control literature.
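The Minkowski (gauge) functional mentioned above has the standard definition: for a convex set C containing the origin,

\psi_{C}(x) = \inf\{\lambda \ge 0 : x \in \lambda C\},

so that \psi_C(x) \le 1 exactly when x \in C. Applied to the input and terminal constraint sets of the infinite-horizon MPC, it provides a scalar measure of how close a state is to violating those constraints, which is a natural ingredient for characterizing the domain of attraction (the definition is standard; its specific use here is only as described in the abstract).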
Abstract:
In the context of the normalized variable formulation (NVF) of Leonard and the total variation diminishing (TVD) constraints of Harten, this paper presents an extension of previous work by the authors for solving unsteady incompressible flow problems. The main contributions of the paper are threefold. First, it presents the results of the development and implementation of a bounded high-order upwind adaptive QUICKEST scheme in the robust 3D code Freeflow, for the numerical solution of the full incompressible Navier-Stokes equations. Second, it reports numerical simulation results for the 1D shock tube problem, a 2D impinging jet and 2D/3D broken-dam flows, and compares these results with existing analytical and experimental data. Third, it presents the application of the numerical method to 3D free surface flow problems.
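For context, Leonard's normalized variable and the boundedness condition it must satisfy can be written as follows (textbook definitions, not specific to this paper):

\hat{\phi} = \frac{\phi - \phi_{U}}{\phi_{D} - \phi_{U}},

where U and D denote the upwind and downwind nodes around the face being interpolated. A bounded high-order scheme such as an adaptive QUICKEST variant must keep the face value \hat{\phi}_f inside the monotonic region, e.g. \hat{\phi}_C \le \hat{\phi}_f \le 1 for 0 \le \hat{\phi}_C \le 1 and \hat{\phi}_f = \hat{\phi}_C otherwise, which is the NVF counterpart of Harten's TVD constraints.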
Abstract:
How information transmission between individuals is shaped by natural selection is a key question for understanding the evolution of acoustic communication systems. Environmental acoustics predicts that signal structure will differ depending on general features of the habitat. Social features, such as individual spacing and mating behavior, may also be important for the design of communication. Here we present the first experimental study investigating how a tropical rainforest bird, the white-browed warbler Basileuterus leucoblepharus, extracts various kinds of information from a received song: species-specific identity, individual identity and location of the sender. Species-specific information is encoded in a propagation-resistant acoustic feature and is thus a public signal helping males to reach a wide audience. Conversely, individual identity is supported by song features susceptible to propagation: this private signal is reserved for neighbors. Finally, receivers can locate singers by using propagation-induced song modifications. Thus, this communication system is well matched to the acoustic constraints of the rainforest and to the ecological requirements of the species. Our results emphasize that, in a constraining acoustic environment, the efficiency of a sound communication system results from a coding/decoding process particularly well tuned to the acoustic properties of this environment.
Abstract:
This paper presents a new approach to the transmission loss allocation problem in a deregulated system. The approach belongs to the set of incremental methods and treats all the constraints of the network, i.e. control, state and functional constraints. The approach is based on the perturbation-of-optimum theorem. From a given optimal operating point obtained by the optimal power flow, the loads are perturbed and a new optimal operating point that satisfies the constraints is determined by sensitivity analysis. This solution is used to obtain the loss allocation coefficients for the generators and loads of the network. Numerical results compare the proposed approach with other well-known methods on the IEEE 14-bus transmission network. Another test emphasizes the importance of considering the operational constraints of the network. Finally, the approach is applied to an actual Brazilian equivalent network composed of 787 buses and is compared with the technique currently used by the Brazilian Control Center.
Abstract:
This paper addresses the investment decisions of 373 large Brazilian firms from 1997 to 2004 in the presence of financial constraints, using panel data. A Bayesian econometric model with ridge regression was used to address multicollinearity problems among the variables in the model. Prior distributions are assumed for the parameters, classifying the model into random or fixed effects. We used a Bayesian approach to estimate the parameters, considering normal and Student-t distributions for the errors, and assumed that the initial values of the lagged dependent variable are not fixed but generated by a random process. The recursive predictive density criterion was used for model comparisons. Twenty models were tested and the results indicated that multicollinearity does influence the values of the estimated parameters. Controlling for capital intensity, financial constraints are found to be more important for capital-intensive firms, probably due to their lower profitability indexes, higher fixed costs and higher degree of property diversification.
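As an illustration only (the paper's exact specification is not reproduced here), a ridge-regularized investment equation of the type described would take the form

\frac{I_{it}}{K_{it}} = \beta' x_{it} + \alpha_i + \varepsilon_{it}, \qquad \varepsilon_{it} \sim N(0,\sigma^2) \text{ or Student-}t, \qquad \beta \sim N(0,\tau^2 I),

where x_{it} collects cash flow and the other regressors, \alpha_i is a random or fixed firm effect, and the common-variance normal prior on \beta implements the ridge shrinkage that mitigates multicollinearity.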
Abstract:
Dating granulites has always been of great interest because they represent one of the most extreme settings of an orogen. Owing to the resilience of zircon, even in such severe environments, the link between P-T conditions and geological time is possible. However, a challenge for geochronologists is to define whether the growth of new zircon is related to pre- or post-P-T-peak conditions and which processes might affect the (re)crystallization. In this context, the Anapolis-Itaucu Complex, a high-grade complex in central Brazil with ultrahigh-temperature (UHT) granulites, may provide valuable information on this topic. The Anapolis-Itaucu Complex (AIC) includes ortho- and paragranulites, locally presenting UHT mineral assemblages, with igneous zircon ages varying between 760 and 650 Ma and metamorphic overgrowths dated at around 650-640 Ma. Also common in the Anapolis-Itaucu Complex are layered mafic-ultramafic complexes metamorphosed under high-grade conditions. This article presents the first geological and geochronological constraints on three of these layered complexes within the AIC, the Damolandia, Taquaral and Goianira-Trindade complexes. U-Pb (LA-MC-ICPMS, SHRIMP and ID-TIMS) zircon analyses reveal a spread of concordant ages spanning an interval of ~80 Ma, with an "upper" intercept age of ~670 Ma. Under cathodoluminescence imaging, these crystals show partially preserved primary sector zoning, as well as internal textures typical of alteration during high-grade metamorphism, such as inward-moving boundaries. Zircon grains reveal homogeneous initial 176Hf/177Hf values in distinct crystal-scale domains in all samples. Moreover, Hf isotopic ratios show correlation neither with U-Pb ages nor with Th/U ratios, suggesting that the zircon grains crystallized during a single growth event. It is suggested, therefore, that the observed spread of concordant U-Pb ages may be related to a memory effect due to a coupled dissolution-reprecipitation process during high-grade metamorphism. Understanding the emplacement and metamorphism of this voluminous mafic magmatism is thus crucial, as it may represent an additional heat source for the development of the ultrahigh-temperature parageneses recorded in the paragranulites.
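For background, the U-Pb ages quoted above rest on the standard decay relations (textbook equations, not specific to this study):

\frac{^{206}\mathrm{Pb}^{*}}{^{238}\mathrm{U}} = e^{\lambda_{238} t} - 1, \qquad \frac{^{207}\mathrm{Pb}^{*}}{^{235}\mathrm{U}} = e^{\lambda_{235} t} - 1.

An analysis is concordant when both chronometers give the same t; partial dissolution-reprecipitation of zircon can reset these ratios to different degrees, which is why a spread of concordant ages along concordia can record a disturbed metamorphic history rather than a single crystallization age.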
Abstract:
We present a re-analysis of the Geneva-Copenhagen survey, which benefits from the infrared flux method to improve the accuracy of the derived stellar effective temperatures and uses the latter to build a consistent and improved metallicity scale. Metallicities are calibrated on high-resolution spectroscopy and checked against four open clusters and a moving group, showing excellent consistency. The new temperature and metallicity scales provide a better match to theoretical isochrones, which are used for a Bayesian analysis of stellar ages. With respect to previous analyses, our stars are on average 100 K hotter and 0.1 dex more metal rich, which shifts the peak of the metallicity distribution function to around the solar value. From Strömgren photometry we are able to derive, for the first time, a proxy for [alpha/Fe] abundances, which enables us to perform a tentative dissection of the chemical thin and thick disc. We find evidence for the latter being composed of an old, mildly but systematically alpha-enhanced population that extends to supersolar metallicities, in agreement with spectroscopic studies. Our revision offers the largest existing kinematically unbiased sample of the solar neighbourhood containing full information on kinematics, metallicities and ages, and thus provides better constraints on the physical processes relevant to the build-up of the Milky Way disc, enabling a better understanding of the Sun in a Galactic context.
Abstract:
For Au + Au collisions at 200 GeV, we measure neutral pion production with good statistics for transverse momentum, p_T, up to 20 GeV/c. A fivefold suppression is found, which is essentially constant for 5 < p_T < 20 GeV/c. Experimental uncertainties are small enough to constrain any model-dependent parametrization for the transport coefficient of the medium, e.g., q̂ in the parton quenching model. The spectral shape is similar for all collision classes, and the suppression does not saturate in Au + Au collisions.
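The suppression is conventionally quantified by the nuclear modification factor (standard definition, quoted for reference):

R_{AA}(p_T) = \frac{dN_{AA}/dp_T}{\langle N_{\mathrm{coll}}\rangle\, dN_{pp}/dp_T},

where \langle N_{\mathrm{coll}}\rangle is the average number of binary nucleon-nucleon collisions for the centrality class; a fivefold suppression corresponds to R_AA ≈ 0.2, roughly constant over 5 < p_T < 20 GeV/c in these data.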
Abstract:
The PHENIX experiment has measured the suppression of semi-inclusive single high-transverse-momentum π0's in Au+Au collisions at √s_NN = 200 GeV. The present understanding of this suppression is in terms of energy loss of the parent (fragmenting) parton in a dense color-charge medium. We have performed a quantitative comparison between various parton energy-loss models and our experimental data. The statistical point-to-point uncorrelated as well as correlated systematic uncertainties are taken into account in the comparison. We detail this methodology and the resulting constraint on the model parameters, such as the initial color-charge density dN_g/dy, the medium transport coefficient q̂, or the initial energy-loss parameter ε_0. We find that high-transverse-momentum π0 suppression in Au+Au collisions has sufficient precision to constrain these model-dependent parameters at the ±20-25% (one standard deviation) level. These constraints include only the experimental uncertainties, and further studies are needed to compute the corresponding theoretical uncertainties.
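A common way to fold point-to-point uncorrelated and correlated systematic uncertainties into such a model-data comparison (a generic sketch; the paper's detailed procedure may differ) is a modified least-squares statistic with a nuisance parameter \epsilon_b for each correlated error source:

\chi^{2}(\epsilon_b) = \sum_i \frac{\left(y_i + \epsilon_b\,\sigma_{b,i} - \mu_i\right)^{2}}{\tilde{\sigma}_i^{2}} + \epsilon_b^{2},

where y_i are the measured points, \mu_i the model prediction, \tilde{\sigma}_i the uncorrelated uncertainties and \sigma_{b,i} the correlated systematic offsets; minimizing over \epsilon_b yields the quoted constraints on parameters such as dN_g/dy, q̂ or ε_0.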
Abstract:
We present rigorous upper and lower bounds for the momentum-space ghost propagator G(p) of Yang-Mills theories in terms of the smallest nonzero eigenvalue (and of the corresponding eigenvector) of the Faddeev-Popov matrix. We apply our analysis to data from simulations of SU(2) lattice gauge theory in Landau gauge, using the largest lattice sizes to date. Our results suggest that, in three and in four space-time dimensions, the Landau gauge ghost propagator is not enhanced as compared to its tree-level behavior. This is also seen in plots and fits of the ghost dressing function. In the two-dimensional case, on the other hand, we find that G(p) diverges as p^(-2-2κ) with κ ≈ 0.15, in agreement with A. Maas, Phys. Rev. D 75, 116004 (2007). We note that our discussion is general, although we make an application only to pure gauge theory in Landau gauge. Our simulations have been performed on the IBM supercomputer at the University of Sao Paulo.
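For reference, the ghost dressing function mentioned above is σ(p) = p² G(p) (standard definition). "Not enhanced" means that σ(p) remains finite as p → 0, i.e. G(p) keeps its tree-level 1/p² behavior, whereas the two-dimensional result G(p) ~ p^(-2-2κ) with κ ≈ 0.15 corresponds to a dressing function diverging like p^(-2κ) in the infrared.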
Abstract:
We present rigorous upper and lower bounds for the zero-momentum gluon propagator D(0) of Yang-Mills theories in terms of the average value of the gluon field. This allows us to perform a controlled extrapolation of lattice data to infinite volume, showing that the infrared limit of the Landau-gauge gluon propagator in SU(2) gauge theory is finite and nonzero in three and in four space-time dimensions. In the two-dimensional case, we find D(0)=0, in agreement with Maas. We suggest an explanation for these results. We note that our discussion is general, although we apply our analysis only to pure gauge theory in the Landau gauge. Simulations have been performed on the IBM supercomputer at the University of Sao Paulo.
Abstract:
This paper presents an approach for allocating active transmission losses among the agents of the system. The approach uses the primal and dual variable information from the Optimal Power Flow in the loss allocation strategy. The allocation coefficients are determined via Lagrange multipliers. The paper emphasizes the need to consider the operational constraints and parameters of the system in the problem solution. An example for a 3-bus system is presented in detail, as well as a comparative test with the main allocation methods. Case studies on the IEEE 14-bus system are carried out to verify the influence of the constraints and parameters of the system on the loss allocation.
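As a schematic illustration only (not the paper's exact expressions), incremental loss-allocation coefficients of this type are obtained from the optimal power flow solution as

k_i = \left.\frac{\partial P_{\mathrm{loss}}}{\partial P_i}\right|_{\text{optimum}},

with the loss share of agent i proportional to k_i P_i after normalization so that the shares sum to the total losses. The partial derivatives are available from the Lagrange multipliers (dual variables) of the power-flow constraints, which is why the operational constraints and system parameters enter the allocation directly.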
Abstract:
Pipeline systems play a key role in the petroleum business. These operational systems provide the connection between ports and/or oil fields and refineries (upstream), as well as between these and consumer markets (downstream). The purpose of this work is to propose a novel MINLP formulation, based on a continuous-time representation, for the scheduling of multiproduct pipeline systems that must supply multiple consumer markets. The formulation also considers that the pipeline operates intermittently and that the pumping costs depend on the booster stations' yield rates, which in turn may generate different flow rates. The proposed continuous-time representation is compared with a previously developed discrete-time representation [Rejowski, R., Jr., & Pinto, J. M. (2004). Efficient MILP formulations and valid cuts for multiproduct pipeline scheduling. Computers and Chemical Engineering, 28, 1511] in terms of solution quality and computational performance. The influence of the number of time intervals that represent the transfer operation is studied and several configurations for the booster stations are tested. Finally, the proposed formulation is applied to a larger case, in which several booster configurations with different numbers of stages are tested.
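A minimal Pyomo sketch of one ingredient of such a formulation, with entirely hypothetical data (three pumping runs, two booster yield rates and an aggregate demand are assumptions, not taken from the paper): selecting a yield rate for each run makes the delivered volume and the pumping cost bilinear in a binary choice and a continuous duration, which is what pushes the model from MILP to MINLP.

# Sketch: yield-rate selection for intermittent pumping runs (hypothetical data).
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, Binary, minimize)

m = ConcreteModel()
RUNS = [0, 1, 2]                 # hypothetical pumping runs
RATES = [0, 1]                   # two available booster yield rates
flow = {0: 400.0, 1: 550.0}      # m3/h delivered at each yield rate (assumed)
cost = {0: 1.0, 1: 1.6}          # relative pumping cost per hour (assumed)
demand = 9000.0                  # total volume to deliver, in m3 (assumed)

m.dur = Var(RUNS, domain=NonNegativeReals, bounds=(0, 24))  # run duration [h]
m.use = Var(RUNS, RATES, domain=Binary)                     # yield-rate choice

# Each pumping run operates at exactly one yield rate.
m.one_rate = Constraint(RUNS, rule=lambda m, r: sum(m.use[r, k] for k in RATES) == 1)

# Delivered volume (bilinear: binary choice times continuous duration) meets demand.
m.meet_demand = Constraint(
    expr=sum(flow[k] * m.use[r, k] * m.dur[r] for r in RUNS for k in RATES) >= demand)

# Minimize total pumping cost (also bilinear), hence an MINLP.
m.obj = Objective(
    expr=sum(cost[k] * m.use[r, k] * m.dur[r] for r in RUNS for k in RATES),
    sense=minimize)

# Solving requires an MINLP-capable solver, e.g. SolverFactory('couenne').solve(m).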