23 results for box constraints
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
Global optimization seeks a minimum or maximum of a multimodal function over a discrete or continuous domain. In this paper, we propose a hybrid heuristic, based on the CGRASP and GENCAN methods, for finding approximate solutions to continuous global optimization problems subject to box constraints. Experimental results illustrate the relative effectiveness of CGRASP-GENCAN on a set of benchmark multimodal test functions.
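As a rough illustration of this kind of hybrid, the sketch below pairs randomized construction inside the box with a box-constrained local refinement, which is the general pattern CGRASP-GENCAN follows. It is only a minimal stand-in: the construction phase is plain uniform sampling rather than CGRASP's grid-based greedy randomized construction, and SciPy's L-BFGS-B replaces GENCAN as the box-constrained local solver.

import numpy as np
from scipy.optimize import minimize

def hybrid_box_minimize(f, lower, upper, n_starts=50, seed=0):
    """Multistart construction plus local refinement for min f(x) s.t. lower <= x <= upper."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    bounds = list(zip(lower, upper))
    best_x, best_f = None, np.inf
    for _ in range(n_starts):
        x0 = rng.uniform(lower, upper)                            # construction phase (stand-in for CGRASP)
        res = minimize(f, x0, method="L-BFGS-B", bounds=bounds)   # local phase (stand-in for GENCAN)
        if res.fun < best_f:
            best_x, best_f = res.x, res.fun
    return best_x, best_f

# Example: the multimodal Rastrigin function on the box [-5.12, 5.12]^2.
rastrigin = lambda x: 10 * len(x) + np.sum(np.asarray(x)**2 - 10 * np.cos(2 * np.pi * np.asarray(x)))
print(hybrid_box_minimize(rastrigin, [-5.12, -5.12], [5.12, 5.12]))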
Abstract:
Augmented Lagrangian methods for large-scale optimization usually require efficient algorithms for minimization with box constraints. On the other hand, active-set box-constrained methods employ unconstrained optimization algorithms for minimization inside the faces of the box. Several approaches may be employed for computing internal search directions in the large-scale case. In this paper, a minimal-memory quasi-Newton approach with secant preconditioners is proposed, taking into account the structure of Augmented Lagrangians that come from the popular Powell-Hestenes-Rockafellar scheme. A combined algorithm, which uses the quasi-Newton formula or a truncated-Newton procedure depending on the presence of active constraints in the penalty-Lagrangian function, is also suggested. Numerical experiments using the CUTE collection are presented.
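For reference, one standard way to write the Powell-Hestenes-Rockafellar (PHR) Augmented Lagrangian mentioned above, for a problem min f(x) subject to h(x) = 0, g(x) <= 0 and box constraints l <= x <= u, is

\[
L_{\rho}(x,\lambda,\mu) = f(x)
+ \frac{\rho}{2}\sum_{i}\Big(h_i(x)+\frac{\lambda_i}{\rho}\Big)^{2}
+ \frac{\rho}{2}\sum_{j}\Big(\max\Big\{0,\; g_j(x)+\frac{\mu_j}{\rho}\Big\}\Big)^{2},
\]

which is minimized approximately with respect to x inside the box at each outer iteration, after which the multipliers λ, μ and the penalty parameter ρ are updated. (This is the textbook form of the PHR scheme, not necessarily the exact notation of the paper.)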
Abstract:
The transition redshift (deceleration/acceleration) is discussed by expanding the deceleration parameter to first order around its present value. A detailed study is carried out by considering two different parametrizations, q(z) = q_0 + q_1 z and q(z) = q_0 + q_1 z(1 + z)^-1, and the associated free parameters (q_0, q_1) are constrained by three different supernovae (SNe) samples. A previous analysis by Riess et al. using the first expansion is slightly improved and confirmed in light of their recent data (Gold07 sample). However, by fitting the model with the Supernova Legacy Survey (SNLS) type Ia sample, we find that the best fit for the transition redshift is z_t = 0.61, instead of z_t = 0.46 as derived by the High-z Supernovae Search (HZSNS) team. This result, based on the SNLS sample, is also in good agreement with the sample of Davis et al., z_t = 0.60^{+0.28}_{-0.11} (1σ). Such results are in line with some independent analyses and accommodate more easily the concordance flat model (ΛCDM). For both parametrizations, the three SNe Ia samples considered favour recent acceleration and past deceleration with a high degree of statistical confidence. All the kinematic results presented here depend neither on the validity of general relativity nor on the matter-energy content of the Universe.
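The transition redshift follows from either parametrization by setting q(z_t) = 0 (a one-line derivation, not taken from the paper itself):

\[
q(z) = q_0 + q_1 z \;\Rightarrow\; z_t = -\frac{q_0}{q_1},
\qquad
q(z) = q_0 + q_1\,\frac{z}{1+z} \;\Rightarrow\; z_t = -\frac{q_0}{q_0+q_1},
\]

which yields a positive z_t whenever q_0 < 0 (acceleration today) together with q_1 > 0 in the first form or q_0 + q_1 > 0 in the second.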
Abstract:
A new inflationary scenario, whose exponential potential V(Φ) has a quadratic dependence on the field Φ in addition to the standard linear term, is confronted with the five-year observations of the Wilkinson Microwave Anisotropy Probe and the Sloan Digital Sky Survey data. The number of e-folds (N), the tensor-to-scalar ratio (r), the scalar spectral index of the primordial power spectrum (n_s) and its running (dn_s/d ln k) depend on the dimensionless parameter α multiplying the quadratic term in the potential. In the limit α → 0, all the results of the exponential potential are fully recovered. For values of α ≠ 0, we find that the model predictions are in good agreement with the current observations of the Cosmic Microwave Background (CMB) anisotropies and Large-Scale Structure (LSS) in the Universe.
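For context, in single-field slow-roll inflation the observables quoted above are tied to the potential through the standard slow-roll parameters; the relations below are the usual textbook ones, written for a generic potential V(Φ) rather than the specific potential of this paper:

\[
\epsilon = \frac{M_{\rm Pl}^{2}}{2}\Big(\frac{V'}{V}\Big)^{2},\qquad
\eta = M_{\rm Pl}^{2}\,\frac{V''}{V},\qquad
n_s \simeq 1 - 6\epsilon + 2\eta,\qquad
r \simeq 16\,\epsilon,\qquad
N \simeq \frac{1}{M_{\rm Pl}^{2}}\int_{\Phi_{\rm end}}^{\Phi_{*}}\frac{V}{V'}\,d\Phi,
\]

with M_Pl the reduced Planck mass; the dependence of (N, r, n_s, dn_s/d ln k) on α enters through V' and V''.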
Abstract:
We present mid-infrared (mid-IR) spectra of the Compton-thick Seyfert 2 galaxy NGC 3281, obtained with the Thermal-Region Camera Spectrograph at the Gemini-South telescope. The spectra present a very deep silicate absorption at 9.7 μm, and [S IV] 10.5 μm and [Ne II] 12.7 μm ionic lines, but no evidence of polycyclic aromatic hydrocarbon emission. We find that the nuclear optical extinction is in the range 24 mag ≤ A_V ≤ 83 mag. A temperature T = 300 K was found for the blackbody dust continuum component of the unresolved 65 pc nucleus and the region at 130 pc SE, while the region at 130 pc NW reveals a colder temperature (200 K). We describe the nuclear spectrum of NGC 3281 using a clumpy torus model, which suggests that the nucleus of this galaxy hosts a dusty toroidal structure. According to this model, the ratio between the inner and outer radius of the torus in NGC 3281 is R_0/R_d = 20, with 14 clouds in the equatorial radius with optical depth τ_V = 40 mag. We would be looking in the direction of the torus equatorial radius (i = 60°), which has an outer radius of R_0 ≈ 11 pc. The column density is N_H ≈ 1.2 × 10^24 cm^-2, and the iron Kα equivalent width (≈ 0.5-1.2 keV) is used to check the torus geometry. Our findings indicate that the X-ray absorbing column density, which classifies NGC 3281 as a Compton-thick source, may also be responsible for the absorption at 9.7 μm, providing strong evidence that the silicate dust responsible for this absorption can be located in the active galactic nucleus torus.
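The link drawn above between the X-ray column and the silicate absorption rests on comparing N_H with the dust extinction; under the commonly adopted Galactic gas-to-dust ratio (Predehl & Schmitt 1995),

\[
N_{\rm H} \approx 1.8 \times 10^{21}\,A_V\ {\rm cm^{-2}\,mag^{-1}},
\]

the quoted N_H ≈ 1.2 × 10^24 cm^-2 would correspond to several hundred magnitudes of visual extinction if all the absorbing gas carried Galactic-type dust. AGN environments can deviate substantially from this ratio, so this is only an order-of-magnitude cross-check, not a result of the paper.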
Abstract:
Oscillating biochemical reactions are common in cell dynamics and may be closely related to the emergence of life itself. In this work, we study the dynamical features of some classical chemical and biochemical oscillators in which the effect of cell volume changes is explicitly considered. This analysis enables us to find some general conditions on the cell membrane that preserve such oscillatory patterns, of possible relevance to the hypothetical primitive cells in which these structures first appeared.
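To make the volume effect concrete, here is a minimal sketch of how an explicit volume change enters such models: in a cell of volume V(t), every intracellular concentration picks up a dilution term -(1/V)(dV/dt)·c on top of its reaction kinetics. The example uses the classical Brusselator with exponential volume growth purely as an illustration; the specific oscillators and membrane assumptions of the paper may differ.

import numpy as np
from scipy.integrate import solve_ivp

# Brusselator kinetics plus a dilution term from cell growth, with (1/V)dV/dt = mu.
a, b, mu = 1.0, 3.0, 0.05      # illustrative parameter values, not taken from the paper

def rhs(t, state):
    x, y = state
    dx = a - (b + 1.0) * x + x**2 * y - mu * x   # reaction terms minus dilution
    dy = b * x - x**2 * y - mu * y
    return [dx, dy]

sol = solve_ivp(rhs, (0.0, 100.0), [1.0, 1.0], max_step=0.1)
# The dilution rate mu shifts the onset (and eventually the survival) of the oscillations
# relative to the classical constant-volume Brusselator.
print(sol.y[:, -1])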
Abstract:
Estimates of effective elastic thickness (T_e) for the western portion of the South American Plate obtained, independently, from forward flexural modelling and from coherence analysis suggest different thermomechanical properties for the same continental lithosphere. We present a review of these T_e estimates and carry out a critical reappraisal using a common methodology: a 3-D finite element method to solve a differential equation for the bending of a thin elastic plate. The finite element flexural model incorporates lateral variations of T_e and uses the Andes topography as the load. Three T_e maps for the entire Andes were analysed: Stewart & Watts (1997), Tassara et al. (2007) and Perez-Gussinye et al. (2007). The predicted flexural deformation obtained for each T_e map was compared with the depth to the base of the foreland basin sequence. Likewise, the gravity effect of the flexurally induced crust-mantle deformation was compared with the observed Bouguer gravity. The T_e estimates obtained from forward flexural modelling by Stewart & Watts (1997) better predict the geological and gravity data for most of the Andean system, particularly in the Central Andes, where T_e ranges from greater than 70 km in the sub-Andes to less than 15 km under the Andes Cordillera. The misfit between the calculated and observed foreland basin subsidence and gravity anomaly for the Marañón basin in Peru and the Bermejo basin in Argentina, regardless of the assumed T_e map, may be due to a dynamic topography component associated with the shallow subduction of the Nazca Plate beneath the Andes at these latitudes.
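For readers outside flexural modelling, the differential equation referred to above is, in its constant-rigidity form, the classical thin elastic plate flexure equation (the finite element model of the paper generalizes it to laterally varying T_e):

\[
D\,\nabla^{4}w(x,y) + (\rho_m-\rho_{\rm infill})\,g\,w(x,y) = q(x,y),
\qquad
D = \frac{E\,T_e^{3}}{12\,(1-\nu^{2})},
\]

where w is the plate deflection, q the topographic load, D the flexural rigidity, E Young's modulus, ν Poisson's ratio, and ρ_m and ρ_infill the densities of the mantle and of the material filling the flexural depression.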
Abstract:
The kinematic expansion history of the universe is investigated using the 307 type Ia supernovae from the Union Compilation set. Three simple parametrizations of the deceleration parameter (constant, linear and abrupt transition) and two different models that are explicitly parametrized by the cosmic jerk parameter (constant and variable) are considered. Likelihood and Bayesian analyses are employed to find the best-fit parameters and to compare the models among themselves and with the flat ΛCDM model. Analytical expressions and estimates for the present-day deceleration and cosmic jerk parameters (q_0 and j_0) and for the transition redshift (z_t) between a past phase of cosmic deceleration and the current phase of acceleration are given. All models characterize an accelerated expansion of the universe today and largely indicate that it was decelerating in the past, with a transition redshift around 0.5. The cosmic jerk is not strongly constrained by the present supernovae data. For the most realistic kinematic models, the 1σ confidence limits imply the following ranges of values: q_0 ∈ [-0.96, -0.46], j_0 ∈ [-3.2, -0.3] and z_t ∈ [0.36, 0.84], which are compatible with the ΛCDM predictions q_0 = -0.57 ± 0.04, j_0 = -1 and z_t = 0.71 ± 0.08. We find that even very simple kinematic models describe the data as well as the concordance ΛCDM model, and that the current observations are not powerful enough to discriminate among all of them.
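Purely kinematic fits of this kind need only the definition of the deceleration parameter and the resulting distance-redshift relation (sign conventions for the jerk vary between papers). For a spatially flat universe,

\[
q(z) \equiv -\frac{\ddot a\,a}{\dot a^{2}},\qquad
H(z) = H_0\,\exp\!\left[\int_0^{z}\frac{1+q(z')}{1+z'}\,dz'\right],\qquad
d_L(z) = (1+z)\,c\int_0^{z}\frac{dz'}{H(z')},
\]

so any parametrization q(z; q_0, q_1, ...) yields a luminosity distance, and hence a distance modulus μ(z) = 5 log10(d_L/10 pc), that can be confronted with the supernova data without assuming general relativity or a specific matter-energy content.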
Abstract:
The viability of two different classes of Λ(t)CDM cosmologies is tested using APM 08279+5255, an old quasar at redshift z = 3.91. In the first class of models, the cosmological term scales as Λ(t) ~ R^-n. The particular case n = 0 describes the standard ΛCDM model, whereas n = 2 corresponds to the Chen and Wu model. For an estimated age of 2 Gyr, it is found that the power index has a lower limit n > 0.21, whereas for 3 Gyr the limit is n > 0.6. Since n cannot be as large as ~0.81, the ΛCDM and the Chen and Wu models are also ruled out by this analysis. The second class of models is the one recently proposed by Wang and Meng, which describes several Λ(t)CDM cosmologies discussed in the literature. By assuming that the true age is 2 Gyr, it is found that the ε parameter satisfies the lower bound ε > 0.11, while for 3 Gyr a lower limit of ε > 0.52 is obtained. Such limits are slightly modified when the baryonic component is included.
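The logic of this old-quasar test is that, for any assumed expansion history H(z), the age of the universe at the quasar's redshift must exceed the estimated age of the object:

\[
t(z) = \int_z^{\infty}\frac{dz'}{(1+z')\,H(z')},
\qquad
t(z=3.91) \;\geq\; t_{\rm quasar}\ (\simeq 2\text{-}3\ {\rm Gyr}).
\]

Each choice of the Λ(t) decay law changes H(z) and therefore the predicted t(3.91), which is what produces the quoted bounds on n and ε.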
Abstract:
Evolutionary change in New World monkey (NWM) skulls occurred primarily along the line of least resistance defined by size (including allometric) variation (g_max). Although the direction of evolution was aligned with this axis, it was not clear whether this macroevolutionary pattern results from the conservation of within-population genetic covariance patterns (long-term constraint), from long-term selection along a size dimension, or whether both constraints and selection were inextricably involved. Furthermore, G-matrix stability can itself be a consequence of selection, which implies that both the constraints embodied in g_max and the evolutionary changes observed in the trait averages would be influenced by selection. Here, we describe a combination of approaches that allows one to test whether any particular instance of size evolution is a correlated by-product of constraints (g_max) or is due to direct selection on size, and we apply it to NWM lineages as a case study. The approach is based on comparing the direction and amount of evolutionary change produced by two different simulated sets of net-selection gradients (β): a size set (isometric and allometric size) and a nonsize set. Using this approach it is possible to distinguish between the two hypotheses (indirect size evolution due to constraints or direct selection on size), because although both may produce an evolutionary response aligned with g_max, the amount of change produced by random selection operating through the variance/covariance patterns (constraints hypothesis) will be much smaller than that produced by selection on size (selection hypothesis). Furthermore, the alignment of simulated evolutionary changes with g_max when selection is not on size is not as tight as when selection is actually on size, allowing a statistical test of whether a particular observed case of evolution along the line of least resistance is the result of selection along it or not. Also, with matrix diagonalization (principal components [PC]) it is possible to calculate directly the net-selection gradient on size alone (first PC [PC1]) by dividing the amount of phenotypic difference between any two populations by the amount of variation in PC1, which allows one to benchmark whether selection was on size or not.
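A bare-bones numerical version of the comparison described above might look like the sketch below: responses to selection are simulated with the multivariate breeder's equation Δz̄ = Gβ, once for a gradient aligned with size (an isometric vector) and many times for random non-size gradients, and the alignment of each response with g_max (the leading eigenvector of G) is recorded. The G matrix and the alignment statistic are illustrative placeholders, not the data or the exact test statistic of the paper.

import numpy as np

rng = np.random.default_rng(1)

# Toy genetic covariance matrix G (positive semi-definite placeholder, not real data).
p = 6                                   # number of traits
A = rng.normal(size=(p, p))
G = A @ A.T / p

# g_max: leading eigenvector of G, the line of least resistance.
_, eigvec = np.linalg.eigh(G)
g_max = eigvec[:, -1]

def alignment(u, v):
    """Absolute cosine of the angle between a response vector and g_max."""
    return abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Response to direct selection on size (isometric gradient: equal loading on all traits).
beta_size = np.ones(p) / np.sqrt(p)
align_size = alignment(G @ beta_size, g_max)

# Distribution of alignments under random (non-size) selection gradients.
n_sim = 2000
align_random = np.empty(n_sim)
for k in range(n_sim):
    beta = rng.normal(size=p)
    beta /= np.linalg.norm(beta)
    align_random[k] = alignment(G @ beta, g_max)

print("alignment with g_max under selection on size:", round(align_size, 3))
print("fraction of random gradients aligned at least as tightly:",
      np.mean(align_random >= align_size))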
Abstract:
A new species of cubozoan jellyfish has been discovered in shallow waters off Bonaire, Netherlands (Dutch Caribbean). Thus far, approximately 50 sightings of the species, known commonly as the Bonaire banded box jelly, have been recorded, and three specimens have been collected. Three physical encounters between humans and the species have been reported, and the available evidence suggests that this medusa inflicts a serious sting. To increase awareness of the scientific disciplines of systematics and taxonomy, the public has been involved in naming this new species. The Bonaire banded box jelly, Tamoya ohboya n. sp., can be distinguished from its close relatives T. haplonema from Brazil and T. sp. from the southeastern United States by differences in tentacle coloration, cnidome, and mitochondrial gene sequences. Tamoya ohboya n. sp. possesses striking dark brown to reddish-orange banded tentacles, nematocyst warts that densely cover the animal, and a deep stomach. We provide a detailed comparison of nematocyst data from Tamoya ohboya n. sp., T. haplonema from Brazil, and T. sp. from the Gulf of Mexico.
Abstract:
Increasing efforts are being made to integrate different levels of detail in models of the cardiovascular system. For instance, one-dimensional representations are employed to model the systemic circulation. In this context, effective and black-box-type decomposition strategies for one-dimensional networks are needed in order to: (i) employ domain decomposition strategies for large systemic models (1D-1D coupling) and (ii) provide the conceptual basis for dimensionally heterogeneous representations (1D-3D coupling, among other possibilities). The strategy proposed in this article works for both scenarios, though the several applications shown to illustrate its performance focus on the 1D-1D coupling case. A one-dimensional network is decomposed in such a way that each coupling point connects exactly two of the sub-networks. At each of the M connection points two unknowns are defined: the flow rate and the pressure. These 2M unknowns are determined by 2M equations, since each sub-network provides one (non-linear) equation per coupling point. It is shown how to build the 2M × 2M non-linear system with an arbitrary and independent choice of boundary conditions for each of the sub-networks. The idea is then to solve this non-linear system to convergence, which guarantees strong coupling of the complete network. In other words, if the non-linear solver converges at each time step, the solution coincides with what would be obtained by monolithically modelling the whole network. The decomposition thus imposes no stability restriction on the choice of the time step size. Effective iterative strategies for the non-linear system that preserve the black-box character of the decomposition are then explored. Several variants of matrix-free Broyden's and Newton-GMRES algorithms are assessed as numerical solvers by comparing their performance on sub-critical wave propagation problems that range from academic test cases to realistic cardiovascular applications. A specific variant of Broyden's algorithm is identified and recommended on the basis of its computational cost and reliability.
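To give a flavour of the black-box coupling idea, the sketch below sets up the interface residual for a single coupling point (M = 1, unknowns Q and P) and hands it to a matrix-free non-linear solver. The two "sub-networks" are replaced by simple lumped surrogate relations purely for illustration; in the actual method each residual entry would come from running a 1D sub-network solve with (Q, P) imposed as boundary data. SciPy's Newton-Krylov and Broyden solvers stand in for the paper's specific matrix-free variants.

import numpy as np
from scipy.optimize import root

# Hypothetical lumped surrogates for the two sub-networks (placeholders for 1D solves).
P_in, P_out = 100.0, 10.0        # boundary pressures imposed far from the interface
R1, K1, R2 = 2.0, 0.05, 3.0      # illustrative resistance / loss coefficients

def subnetwork_A(Q):
    """Pressure that sub-network A returns at the interface for a given flow Q."""
    return P_in - R1 * Q - K1 * Q * abs(Q)

def subnetwork_B(P):
    """Flow that sub-network B returns at the interface for a given pressure P."""
    return (P - P_out) / R2

def interface_residual(u):
    Q, P = u
    # Each sub-network contributes one equation at the coupling point.
    return [P - subnetwork_A(Q),     # A: pressure consistency
            Q - subnetwork_B(P)]     # B: flow consistency

u0 = np.array([1.0, 50.0])                             # initial guess for (Q, P)
sol = root(interface_residual, u0, method="krylov")    # Newton-GMRES type, matrix-free
# Alternatively: root(interface_residual, u0, method="broyden1")
Q, P = sol.x
print("converged:", sol.success, " Q =", Q, " P =", P)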
Abstract:
In this article we propose a 0-1 optimization model to determine a crop rotation schedule for each plot in a cropping area. The rotations have the same duration in all the plots, and the crops are selected to maximize plot occupation. The crops may have different production times and planting dates. The problem includes planting constraints for adjacent plots and also for sequences of crops within the rotations. Moreover, crops cultivated for green manuring and fallow periods are scheduled into each plot. As the model has, in general, a large number of constraints and variables, we propose a heuristic based on column generation. To evaluate the performance of the model and the method, computational experiments using real-world data were performed. The solutions obtained indicate that the method generates good results.
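The column-generation idea behind such heuristics can be sketched as follows: a restricted master problem selects one rotation plan per plot from a pool of candidate columns, and a pricing step adds new plans whose reduced cost indicates they could improve the objective. In the sketch the pricing step simply scans a small hand-made pool of hypothetical plans; in the actual method it would solve a scheduling subproblem respecting planting dates, crop sequences and adjacency constraints.

import numpy as np
from scipy.optimize import linprog

# Hypothetical candidate rotation plans per plot and their occupation values (not real data).
candidate_pool = {
    "P1": [4.0, 6.5, 7.0],
    "P2": [5.0, 5.5, 8.0],
}
plots = list(candidate_pool)

# Start the restricted master with one plan per plot.
columns = [(p, candidate_pool[p][0]) for p in plots]

def solve_restricted_master(columns):
    """LP relaxation: pick a convex combination of plans, one per plot,
    maximizing total occupation (written as a minimization for linprog)."""
    c = [-occ for (_, occ) in columns]
    A_eq = [[1.0 if plot == p else 0.0 for (plot, _) in columns] for p in plots]
    b_eq = [1.0] * len(plots)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
    duals = dict(zip(plots, res.eqlin.marginals))
    return res, duals

while True:
    res, duals = solve_restricted_master(columns)
    new_columns = []
    for p in plots:
        for occ in candidate_pool[p]:
            reduced_cost = -occ - duals[p]          # reduced cost in the minimization form
            if reduced_cost < -1e-9 and (p, occ) not in columns:
                new_columns.append((p, occ))
                break                               # add at most one new plan per plot per round
    if not new_columns:
        break                                       # no improving column: LP optimum reached
    columns.extend(new_columns)

chosen = [(plot, occ) for (plot, occ), x in zip(columns, res.x) if x > 0.5]
print("selected plans:", chosen, " total occupation:", -res.fun)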
Abstract:
This article describes and compares three heuristics for a variant of the Steiner tree problem with revenues, which includes budget and hop constraints. First, a greedy method that obtains good approximations in short computational times is proposed. This initial solution is then improved by means of a destroy-and-repair method or a tabu search algorithm. Computational results compare the three methods in terms of accuracy and speed.
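As a reminder of how the tabu search component of such improvement phases is typically organized, here is a generic skeleton over binary inclusion decisions. The evaluation and feasibility functions are toy placeholders; for the Steiner variant above they would score the revenue collected by the tree and enforce the budget and hop limits.

def tabu_search(initial, evaluate, is_feasible, n_iter=200, tenure=7):
    """Generic best-improvement tabu search over a binary vector (maximization)."""
    current = list(initial)
    best, best_val = list(current), evaluate(current)
    tabu = {}                                # index of a flipped decision -> iteration until which it is tabu
    for it in range(n_iter):
        best_move, best_move_val = None, float("-inf")
        for i in range(len(current)):
            neighbor = list(current)
            neighbor[i] = 1 - neighbor[i]    # flip one inclusion decision
            if not is_feasible(neighbor):
                continue
            val = evaluate(neighbor)
            # Aspiration: a tabu move is allowed if it beats the best solution found so far.
            if (tabu.get(i, -1) < it or val > best_val) and val > best_move_val:
                best_move, best_move_val = i, val
        if best_move is None:
            break
        current[best_move] = 1 - current[best_move]
        tabu[best_move] = it + tenure
        if best_move_val > best_val:
            best, best_val = list(current), best_move_val
    return best, best_val

# Toy stand-in problem: pick items with revenues r and costs c under a budget.
r = [6, 5, 8, 3, 7]; c = [3, 2, 5, 1, 4]; budget = 8
evaluate = lambda x: sum(ri * xi for ri, xi in zip(r, x))
is_feasible = lambda x: sum(ci * xi for ci, xi in zip(c, x)) <= budget
print(tabu_search([0, 0, 0, 0, 0], evaluate, is_feasible))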
Abstract:
We examine different phenomenological interaction models for dark energy and dark matter by performing a statistical joint analysis with observational data from the 182 Gold type Ia supernova sample, the shift parameter of the Cosmic Microwave Background given by the three-year Wilkinson Microwave Anisotropy Probe observations, the baryon acoustic oscillation measurement from the Sloan Digital Sky Survey, and age estimates of 35 galaxies. Including the time-dependent observable adds sensitivity to the measurement and gives complementary results for the fitting. The compatibility among the three different data sets seems to imply that the coupling between dark energy and dark matter is a small positive value, which satisfies the requirement to solve the coincidence problem and the second law of thermodynamics, and is compatible with previous estimates.
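In such phenomenological models the coupling usually enters as an energy-exchange term Q in the background continuity equations; one representative parametrization (given here only as an example, not necessarily the one adopted in the paper) is

\[
\dot\rho_{\rm dm} + 3H\rho_{\rm dm} = Q,\qquad
\dot\rho_{\rm de} + 3H(1+w)\rho_{\rm de} = -Q,\qquad
Q = 3\,\delta\,H\,(\rho_{\rm dm}+\rho_{\rm de}),
\]

where, in this convention, δ > 0 corresponds to energy being transferred from dark energy to dark matter, and the joint data analysis constrains the size and sign of the coupling.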