19 results for Credit Constraints

in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)


Relevance: 20.00%

Abstract:

The transition redshift (deceleration/acceleration) is discussed by expanding the deceleration parameter to first order around its present value. A detailed study is carried out by considering two different parametrizations, q = q_0 + q_1 z and q = q_0 + q_1 z(1 + z)^(-1), and the associated free parameters (q_0, q_1) are constrained by three different supernova (SNe) samples. A previous analysis by Riess et al. using the first expansion is slightly improved and confirmed in light of their recent data (Gold07 sample). However, by fitting the model with the Supernova Legacy Survey (SNLS) type Ia sample, we find that the best fit for the transition redshift is z_t = 0.61, instead of z_t = 0.46 as derived by the High-z Supernovae Search (HZSNS) team. This result based on the SNLS sample is also in good agreement with the sample of Davis et al., z_t = 0.60 (+0.28, -0.11) (1σ). Such results are in line with some independent analyses and accommodate more easily the concordance flat model (ΛCDM). For both parametrizations, the three SNe Ia samples considered favour recent acceleration and past deceleration with a high degree of statistical confidence. All the kinematic results presented here depend neither on the validity of general relativity nor on the matter-energy content of the Universe.
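
Setting q(z_t) = 0 gives the transition redshift in closed form for both parametrizations: z_t = -q_0/q_1 for the linear expansion and z_t = -q_0/(q_0 + q_1) for the second one. A minimal sketch in Python; the numerical values below are illustrative placeholders, not the fitted values of the paper.

    # Transition redshift z_t, defined by q(z_t) = 0, for the two expansions
    # of the deceleration parameter. Input values are illustrative only.

    def zt_linear(q0, q1):
        # q(z) = q0 + q1*z  =>  z_t = -q0/q1
        return -q0 / q1

    def zt_rational(q0, q1):
        # q(z) = q0 + q1*z/(1+z)  =>  z_t = -q0/(q0 + q1)
        return -q0 / (q0 + q1)

    print(zt_linear(-0.73, 1.2))   # ~0.61
    print(zt_rational(-0.6, 1.6))  # ~0.60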

Relevance: 20.00%

Abstract:

A new inflationary scenario, whose exponential potential V(Φ) has a quadratic dependence on the field Φ in addition to the standard linear term, is confronted with the five-year observations of the Wilkinson Microwave Anisotropy Probe and the Sloan Digital Sky Survey data. The number of e-folds (N), the ratio of tensor-to-scalar perturbations (r), the scalar spectral index of the primordial power spectrum (n_s) and its running (dn_s/d ln k) depend on the dimensionless parameter α multiplying the quadratic term in the potential. In the limit α → 0 all the results of the exponential potential are fully recovered. For values of α ≠ 0, we find that the model predictions are in good agreement with the current observations of the Cosmic Microwave Background (CMB) anisotropies and Large-Scale Structure (LSS) in the Universe.
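
The quoted observables follow from the slow-roll approximation. A sketch, assuming the potential takes the form V(Φ) = V_0 exp[-λ(Φ + αΦ²)] in reduced Planck units (the abstract does not spell out the exact form), computing n_s and r from the standard slow-roll formulas; in the α = 0 limit the pure exponential results n_s = 1 - λ² and r = 8λ² are recovered.

    # Slow-roll predictions for an exponential potential with a quadratic term.
    # The form V = exp(-lam*(phi + a*phi**2)) is an assumed stand-in; the
    # symbol 'a' plays the role of alpha, and V0 cancels in the ratios below.
    import sympy as sp

    phi, lam, a = sp.symbols('phi lam a', positive=True)
    V = sp.exp(-lam*(phi + a*phi**2))

    eps = sp.simplify((sp.diff(V, phi) / V)**2 / 2)   # epsilon_V = (V'/V)^2 / 2
    eta = sp.simplify(sp.diff(V, phi, 2) / V)         # eta_V     = V''/V

    ns = 1 - 6*eps + 2*eta     # scalar spectral index
    r = 16*eps                 # tensor-to-scalar ratio

    f = sp.lambdify((phi, lam, a), (ns, r))
    print(f(1.0, 0.1, 0.0))    # pure exponential limit: (0.99, 0.08)
    print(f(1.0, 0.1, 0.05))   # small quadratic correction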

Relevance: 20.00%

Abstract:

We present mid-infrared (mid-IR) spectra of the Compton-thick Seyfert 2 galaxy NGC 3281, obtained with the Thermal-Region Camera Spectrograph at the Gemini-South telescope. The spectra present a very deep silicate absorption at 9.7 μm, and [S IV] 10.5 μm and [Ne II] 12.7 μm ionic lines, but no evidence of polycyclic aromatic hydrocarbon emission. We find that the nuclear optical extinction is in the range 24 mag ≤ A_V ≤ 83 mag. A temperature T = 300 K was found for the blackbody dust continuum component of the unresolved 65 pc nucleus and the region at 130 pc SE, while the region at 130 pc NW reveals a colder temperature (200 K). We describe the nuclear spectrum of NGC 3281 using a clumpy torus model, which suggests that the nucleus of this galaxy hosts a dusty toroidal structure. According to this model, the ratio between the outer and inner radii of the torus in NGC 3281 is R_o/R_d = 20, with 14 clouds along the equatorial radius, each with optical depth τ_V = 40. We would be looking close to the direction of the torus equatorial plane (i = 60°), and the torus outer radius is R_o ≈ 11 pc. The column density is N_H ≈ 1.2 × 10^24 cm^-2, and the iron Kα equivalent width (≈ 0.5-1.2 keV) is used to check the torus geometry. Our findings indicate that the X-ray absorbing column density, which classifies NGC 3281 as a Compton-thick source, may also be responsible for the absorption at 9.7 μm, providing strong evidence that the silicate dust responsible for this absorption can be located in the active galactic nucleus torus.
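
The dust temperatures quoted above come from fitting a blackbody continuum component. A minimal sketch of such a fit with scipy; the flux points are invented placeholders in arbitrary units, not the NGC 3281 measurements.

    # Fit a single-temperature blackbody (Planck function) to a mid-IR
    # continuum, as done for the nuclear dust component. Fluxes are synthetic.
    import numpy as np
    from scipy.optimize import curve_fit

    h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

    def planck(lam_um, T, scale):
        lam = lam_um * 1e-6                   # micrometres -> metres
        return scale * 2*h*c**2 / lam**5 / (np.exp(h*c / (lam*k*T)) - 1.0)

    rng = np.random.default_rng(0)
    lam = np.array([8.0, 9.0, 10.5, 11.5, 12.5])   # wavelengths [um]
    flux = planck(lam, 300.0, 1.0) * (1 + 0.05*rng.standard_normal(lam.size))

    (T_fit, _), _ = curve_fit(planck, lam, flux, p0=(250.0, 1.0))
    print(f"best-fit dust temperature: {T_fit:.0f} K")   # ~300 K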

Relevance: 20.00%

Abstract:

Oscillating biochemical reactions are common in cell dynamics and may be closely related to the emergence of life itself. In this work, we study the dynamical features of some classical chemical and biochemical oscillators in which the effect of cell volume changes is explicitly considered. This analysis enables us to find general conditions on the cell membrane required to preserve such oscillatory patterns, which may be relevant to hypothetical primitive cells in which these structures first appeared.
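
As a concrete illustration of the setup, one can add a dilution term proportional to the relative volume growth rate to a classical oscillator. The Brusselator below is an assumed stand-in, since the abstract does not name the specific oscillators studied; for small μ the limit cycle survives, while fast enough dilution damps the oscillations.

    # Brusselator oscillator with a dilution term -mu*y_i modelling the effect
    # of cell volume growth. Parameter values are illustrative only.
    from scipy.integrate import solve_ivp

    A, B, mu = 1.0, 3.0, 0.05    # Brusselator parameters; mu = dilution rate

    def rhs(t, y):
        x, z = y
        dx = A - (B + 1)*x + x**2 * z - mu*x   # reaction terms minus dilution
        dz = B*x - x**2 * z - mu*z
        return [dx, dz]

    sol = solve_ivp(rhs, (0.0, 100.0), [1.0, 1.0], max_step=0.1)
    # For small mu the oscillation persists; increasing mu (fast volume
    # growth) eventually damps the system to a fixed point.
    print(sol.y[0].min(), sol.y[0].max())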

Relevance: 20.00%

Abstract:

Estimates of effective elastic thickness (T_e) for the western portion of the South American Plate, obtained independently with forward flexural modelling and coherence analysis, suggest different thermomechanical properties for the same continental lithosphere. We present a review of these T_e estimates and carry out a critical reappraisal using a common methodology: a 3-D finite element method that solves the differential equation for the bending of a thin elastic plate. The finite element flexural model incorporates lateral variations of T_e and the Andes topography as the load. Three T_e maps for the entire Andes were analysed: Stewart & Watts (1997), Tassara et al. (2007) and Perez-Gussinye et al. (2007). The flexural deformation predicted for each T_e map was compared with the depth to the base of the foreland basin sequence. Likewise, the gravity effect of the flexurally induced crust-mantle deformation was compared with the observed Bouguer gravity. The forward flexural modelling estimates of Stewart & Watts (1997) better predict the geological and gravity data for most of the Andean system, particularly in the Central Andes, where T_e ranges from more than 70 km in the sub-Andes to less than 15 km under the Andes Cordillera. The misfit between the calculated and observed foreland basin subsidence and the gravity anomaly for the Maranon basin in Peru and the Bermejo basin in Argentina, regardless of the assumed T_e map, may be due to a dynamic topography component associated with the shallow subduction of the Nazca Plate beneath the Andes at these latitudes.
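
The underlying equation is the thin elastic plate flexure equation, D w'''' + Δρ g w = q(x), with flexural rigidity D = E T_e³ / [12(1 - ν²)]. A 1-D constant-T_e finite-difference sketch of it follows; the paper itself solves the 3-D problem with finite elements and laterally varying T_e, and all numbers below are generic illustrative values.

    # 1-D thin elastic plate flexure under a topographic load, solved with a
    # 5-point biharmonic stencil. Constant Te; illustrative parameters only.
    import numpy as np

    E, nu, g = 1.0e11, 0.25, 9.81          # Pa, -, m/s^2
    Te = 30e3                              # effective elastic thickness [m]
    drho = 600.0                           # mantle minus infill density [kg/m^3]
    D = E * Te**3 / (12 * (1 - nu**2))     # flexural rigidity

    n, dx = 400, 5e3
    x = np.arange(n) * dx
    q = np.where(np.abs(x - x.mean()) < 50e3, 2800*9.81*3000.0, 0.0)  # load

    M = np.zeros((n, n))
    for i in range(2, n - 2):
        M[i, i-2:i+3] = D * np.array([1, -4, 6, -4, 1]) / dx**4
        M[i, i] += drho * g                # buoyant restoring term
    M[:2, :2] = np.eye(2)                  # pin the far edges (w = 0)
    M[-2:, -2:] = np.eye(2)

    w = np.linalg.solve(M, q)
    print(f"maximum deflection: {w.max():.0f} m")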

Relevance: 20.00%

Abstract:

The kinematic expansion history of the universe is investigated using the 307 type Ia supernovae from the Union Compilation set. Three simple parametrizations of the deceleration parameter (constant, linear and abrupt transition) and two different models explicitly parametrized by the cosmic jerk parameter (constant and variable) are considered. Likelihood and Bayesian analyses are employed to find the best-fit parameters and to compare the models among themselves and with the flat ΛCDM model. Analytical expressions and estimates for the deceleration and cosmic jerk parameters today (q_0 and j_0) and for the transition redshift (z_t) between a past phase of cosmic deceleration and the current phase of acceleration are given. All models characterize an accelerated expansion for the universe today and largely indicate that it was decelerating in the past, with a transition redshift around 0.5. The cosmic jerk is not strongly constrained by the present supernova data. For the most realistic kinematic models, the 1σ confidence limits imply the following ranges of values: q_0 ∈ [-0.96, -0.46], j_0 ∈ [-3.2, -0.3] and z_t ∈ [0.36, 0.84], which are compatible with the ΛCDM predictions, q_0 = -0.57 ± 0.04, j_0 = -1 and z_t = 0.71 ± 0.08. We find that even very simple kinematic models describe the data as well as the concordance ΛCDM model, and that the current observations are not powerful enough to discriminate among all of them.
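
In purely kinematic models the distance-redshift relation follows directly from q(z): H(z) = H_0 exp[∫ (1 + q(u))/(1 + u) du] and d_L = (1 + z) c ∫ du/H(u), both integrals running from 0 to z. A sketch for the linear parametrization; the values of q_0 and q_1 below are illustrative, not the paper's fits.

    # Kinematic luminosity distance and distance modulus for q(z) = q0 + q1*z.
    import numpy as np
    from scipy.integrate import quad

    c, H0 = 299792.458, 70.0                 # km/s and km/s/Mpc

    def H(z, q0, q1):
        # H(z) = H0 * exp( integral of (1 + q(u))/(1 + u) from 0 to z )
        I, _ = quad(lambda u: (1 + q0 + q1*u) / (1 + u), 0.0, z)
        return H0 * np.exp(I)

    def d_L(z, q0, q1):                      # luminosity distance [Mpc]
        I, _ = quad(lambda u: 1.0 / H(u, q0, q1), 0.0, z)
        return (1 + z) * c * I

    def mu(z, q0, q1):                       # distance modulus fitted to SNe
        return 5*np.log10(d_L(z, q0, q1)) + 25

    print(mu(0.5, -0.7, 1.0))                # illustrative q0, q1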

Relevance: 20.00%

Abstract:

The viability of two different classes of Λ(t)CDM cosmologies is tested using APM 08279+5255, an old quasar at redshift z = 3.91. In the first class of models, the cosmological term scales as Λ(t) ∝ R^-n. The particular case n = 0 describes the standard ΛCDM model, whereas n = 2 corresponds to the Chen and Wu model. For an estimated quasar age of 2 Gyr, the power index has a lower limit n > 0.21, whereas for 3 Gyr the limit is n > 0.6. Since n cannot be as large as ~0.81, the ΛCDM and Chen and Wu models are also ruled out by this analysis. The second class of models is the one recently proposed by Wang and Meng, which encompasses several Λ(t)CDM cosmologies discussed in the literature. Assuming a true age of 2 Gyr, the ε parameter satisfies the lower bound ε > 0.11, while for 3 Gyr a lower limit of ε > 0.52 is obtained. Such limits are slightly modified when the baryonic component is included.
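
The test is a lookback-age comparison: the universe at z = 3.91 must be older than the quasar. A sketch computing that age in flat ΛCDM (the Λ(t) models modify H(z), which is not reproduced here); with standard parameters the age is about 1.6 Gyr, which is why a 2 Gyr quasar is constraining.

    # Age of the universe at z = 3.91 in flat LCDM:
    #   t(z) = integral over z' from z to infinity of 1 / ((1+z') H(z')).
    import numpy as np
    from scipy.integrate import quad

    H0, Om = 70.0, 0.3                       # km/s/Mpc, matter density
    H0_gyr = H0 * 1.0227e-3                  # H0 converted to 1/Gyr

    def age_at(z):
        E = lambda zz: np.sqrt(Om*(1 + zz)**3 + 1 - Om)
        t, _ = quad(lambda zz: 1.0 / ((1 + zz) * E(zz)), z, np.inf)
        return t / H0_gyr

    print(f"t(z=3.91) = {age_at(3.91):.2f} Gyr")   # ~1.6 Gyr < 2 Gyr quasar age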

Relevance: 20.00%

Abstract:

Evolutionary change in New World monkey (NWM) skulls occurred primarily along the line of least resistance defined by size (including allometric) variation (g_max). Although the direction of evolution was aligned with this axis, it was not clear whether this macroevolutionary pattern results from the conservation of within-population genetic covariance patterns (long-term constraint), from long-term selection along a size dimension, or whether both constraints and selection were inextricably involved. Furthermore, G-matrix stability can itself be a consequence of selection, which implies that both the constraints embodied in g_max and the evolutionary changes observed in the trait averages would be influenced by selection. Here, we describe a combination of approaches that allows one to test whether any particular instance of size evolution is a correlated by-product of constraints (g_max) or is due to direct selection on size, and we apply it to NWM lineages as a case study. The approach is based on comparing the direction and amount of evolutionary change produced by two different simulated sets of net-selection gradients (β): a size set (isometric and allometric size) and a nonsize set. Using this approach it is possible to distinguish between the two hypotheses (indirect size evolution due to constraints or direct selection on size) because, although both may produce an evolutionary response aligned with g_max, the amount of change produced by random selection operating through the variance/covariance patterns (constraints hypothesis) will be much smaller than that produced by selection on size (selection hypothesis). Furthermore, the alignment of simulated evolutionary changes with g_max when selection is not on size is not as tight as when selection is actually on size, allowing a statistical test of whether a particular observed case of evolution along the line of least resistance is the result of selection along it or not. Also, with matrix diagonalization (principal components [PC]) it is possible to calculate directly the net-selection gradient on size alone (the first PC [PC1]) by dividing the amount of phenotypic difference between any two populations by the amount of variation in PC1, which allows one to benchmark whether selection was on size or not.
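
The simulations rest on the multivariate breeder's equation: the response to selection is Δz = Gβ. A sketch with a toy size-dominated G matrix (not real NWM data) comparing the alignment with g_max of responses to random β versus an isometric size β.

    # Responses to selection via dz = G @ beta: random betas produce responses
    # aligned with g_max but less tightly and with less change than a size beta.
    import numpy as np
    rng = np.random.default_rng(0)

    p = 10
    G = 0.6*np.ones((p, p)) + 0.4*np.eye(p)     # toy size-dominated covariance
    gmax = np.linalg.eigh(G)[1][:, -1]          # leading eigenvector (g_max)

    def align(beta):
        dz = G @ beta                            # predicted response
        return abs(dz @ gmax) / np.linalg.norm(dz)

    size_beta = np.ones(p) / np.sqrt(p)          # isometric size selection
    rand = [align(rng.standard_normal(p)) for _ in range(1000)]

    print(f"alignment, selection on size : {align(size_beta):.3f}")
    print(f"alignment, random selection  : {np.mean(rand):.3f} +/- {np.std(rand):.3f}")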

Relevance: 20.00%

Abstract:

In this article we propose a 0-1 optimization model to determine a crop rotation schedule for each plot in a cropping area. The rotations have the same duration in all the plots, and the crops are selected to maximize plot occupation. The crops may have different production times and planting dates. The problem includes planting constraints for adjacent plots and for sequences of crops in the rotations. Moreover, green manure crops and fallow periods are scheduled into each plot. As the model has, in general, a great number of constraints and variables, we propose a heuristic based on column generation. To evaluate the performance of the model and the method, computational experiments using real-world data were performed. The solutions obtained indicate that the method generates good results.
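
A toy instance of such a 0-1 model, written with the PuLP library: binary variables x[c, j, t] select the start period of each crop on each plot, the objective maximizes occupation, and adjacency constraints forbid the same crop on neighbouring plots at the same time. All data are invented; planting dates, green manure, fallow periods and the column-generation method are omitted.

    import pulp

    T = 6                                     # rotation length (periods)
    crops = {"A": 2, "B": 3}                  # crop -> production time
    plots, adjacent = [0, 1], [(0, 1)]

    # x[c, j, t] = 1 if crop c is planted on plot j starting at period t
    x = {(c, j, t): pulp.LpVariable(f"x_{c}_{j}_{t}", cat="Binary")
         for c, d in crops.items() for j in plots for t in range(T - d + 1)}

    def occupies(c, j, t):                    # crop c occupies plot j at t?
        return pulp.lpSum(x[c, j, s] for s in range(t - crops[c] + 1, t + 1)
                          if (c, j, s) in x)

    prob = pulp.LpProblem("rotation", pulp.LpMaximize)
    prob += pulp.lpSum(crops[c] * v for (c, j, t), v in x.items())  # occupation

    for j in plots:
        for t in range(T):                    # one crop per plot and period
            prob += pulp.lpSum(occupies(c, j, t) for c in crops) <= 1
    for j, k in adjacent:
        for c in crops:
            for t in range(T):                # adjacency restriction
                prob += occupies(c, j, t) + occupies(c, k, t) <= 1

    prob.solve()
    print(pulp.LpStatus[prob.status], pulp.value(prob.objective))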

Relevance: 20.00%

Abstract:

Credit scoring modelling comprises one of the leading formal tools for supporting the granting of credit. Its core objective is the generation of a score by means of which potential clients can be ranked by their probability of default. A critical factor is whether a credit scoring model is accurate enough to classify clients correctly as good or bad payers. In this context the concept of bootstrap aggregating (bagging) arises. The basic idea is to generate multiple classifiers by obtaining the predicted values from models fitted to several replicated datasets, and then to combine them into a single predictive classification in order to improve the classification accuracy. In this paper we propose a new bagging-type variant procedure, which we call poly-bagging, consisting of combining predictors over a succession of resamplings. The study is developed in the context of credit scoring modelling. The proposed poly-bagging procedure was applied to several artificial datasets and to a real credit-granting dataset, up to three successions of resamplings. We observed better classification accuracy for the two-bagged and three-bagged models in all considered setups. These results strongly indicate that the poly-bagging approach may improve the modelling performance measures, while keeping a flexible and straightforward bagging-type structure that is easy to implement.
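
A hedged sketch of the idea: standard bagging averages predicted default probabilities over bootstrap fits, and a second succession of resampling re-bags the bagged predictor. The data are synthetic and the aggregation scheme is one plausible reading of the procedure, not the authors' exact algorithm.

    # Bagging and a second succession of resampling ("two-bagged") for a
    # scoring classifier, on synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n, p = 600, 5
    X = rng.standard_normal((n, p))
    y = (X @ rng.standard_normal(p) + 0.5*rng.standard_normal(n) > 0).astype(int)
    X_tr, y_tr, X_te, y_te = X[:500], y[:500], X[500:], y[500:]

    def bag(X_tr, y_tr, X_te, B=25):
        """One bagging level: average predicted probabilities over B bootstraps."""
        probs = []
        for _ in range(B):
            i = rng.integers(0, len(y_tr), len(y_tr))    # bootstrap resample
            m = LogisticRegression().fit(X_tr[i], y_tr[i])
            probs.append(m.predict_proba(X_te)[:, 1])
        return np.mean(probs, axis=0)

    def two_bagged(X_tr, y_tr, X_te, B2=10):
        """Second succession: bag the bagged predictor over fresh resamples."""
        idx = [rng.integers(0, len(y_tr), len(y_tr)) for _ in range(B2)]
        return np.mean([bag(X_tr[i], y_tr[i], X_te) for i in idx], axis=0)

    for name, s in [("bagged", bag(X_tr, y_tr, X_te)),
                    ("two-bagged", two_bagged(X_tr, y_tr, X_te))]:
        print(name, np.mean((s > 0.5) == y_te))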

Relevance: 20.00%

Abstract:

This article describes and compares three heuristics for a variant of the Steiner tree problem with revenues, which includes budget and hop constraints. First, a greedy method that obtains good approximations in short computational times is proposed. This initial solution is then improved by means of a destroy-and-repair method or a tabu search algorithm. Computational results compare the three methods in terms of accuracy and speed.
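
A minimal greedy sketch in the spirit of the first heuristic: repeatedly connect the terminal with the best revenue-to-cost ratio while the budget allows it, with the hop limit approximated here by path length. Graph, revenues and limits are invented toy data, and path costs are not de-duplicated against edges already in the tree.

    import networkx as nx

    G = nx.Graph()
    G.add_weighted_edges_from([(0, 1, 2), (1, 2, 2), (0, 3, 4), (3, 4, 1), (1, 4, 3)])
    revenue = {2: 6.0, 3: 1.0, 4: 5.0}        # revenue-bearing terminals
    root, budget, max_hops = 0, 7, 3

    def pw(p):                                # total weight of a node path
        return sum(G[u][v]["weight"] for u, v in zip(p, p[1:]))

    tree, cost = {root}, 0.0
    while True:
        best = None
        for t in set(revenue) - tree:
            # cheapest connection from the current tree to terminal t
            path = min((nx.shortest_path(G, s, t, weight="weight")
                        for s in tree), key=pw)
            c = pw(path)
            if cost + c <= budget and len(path) - 1 <= max_hops:
                ratio = revenue[t] / c
                if best is None or ratio > best[0]:
                    best = (ratio, path, c)
        if best is None:
            break                             # no affordable terminal left
        tree |= set(best[1])
        cost += best[2]
    print(sorted(tree), cost)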

Relevance: 20.00%

Abstract:

We examine different phenomenological interaction models for dark energy and dark matter by performing a statistical joint analysis with observational data arising from the 182 Gold type Ia supernova samples, the shift parameter of the Cosmic Microwave Background given by the three-year Wilkinson Microwave Anisotropy Probe observations, the baryon acoustic oscillation measurement from the Sloan Digital Sky Survey and age estimates of 35 galaxies. By including this time-dependent observable, we add sensitivity to the measurement and obtain complementary results for the fitting. The compatibility among the three different data sets seems to imply that the coupling between dark energy and dark matter is a small positive value, which satisfies the requirement to solve the coincidence problem and the second law of thermodynamics, and is compatible with previous estimates.
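
Schematically, such a joint analysis minimizes a summed chi-square over the probes. The sketch below does this for plain flat ΛCDM, using the standard CMB shift parameter and BAO A-parameter formulas with approximate WMAP3/SDSS central values; the interacting models of the paper would add the coupling as a further fit parameter.

    # Joint chi-square over the CMB shift parameter R and the BAO A parameter,
    # minimized over the matter density in flat LCDM. Data values approximate.
    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import minimize_scalar

    def E(z, Om):
        return np.sqrt(Om*(1 + z)**3 + 1 - Om)

    def shift_R(Om, z_ls=1089.0):
        I, _ = quad(lambda z: 1.0/E(z, Om), 0, z_ls)
        return np.sqrt(Om) * I

    def bao_A(Om, z=0.35):
        I, _ = quad(lambda zz: 1.0/E(zz, Om), 0, z)
        return np.sqrt(Om) * E(z, Om)**(-1/3) * (I/z)**(2/3)

    def chi2(Om):       # chi2_CMB + chi2_BAO; a SN term would be added likewise
        return (((shift_R(Om) - 1.70)/0.03)**2
                + ((bao_A(Om) - 0.469)/0.017)**2)

    print(minimize_scalar(chi2, bounds=(0.1, 0.5), method="bounded").x)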

Relevance: 20.00%

Abstract:

We discuss an interacting tachyonic dark energy model in the context of the holographic principle. The potential of the holographic tachyon field in interaction with dark matter is constructed. The model results are compared with the CMB shift parameter, baryon acoustic oscillations, lookback time data and the Constitution supernovae sample. The coupling constant of the model is compatible with zero, but dark energy is not given by a cosmological constant.

Relevance: 20.00%

Abstract:

We perform an analysis of the electroweak precision observables in the Lee-Wick Standard Model. The most stringent restrictions come from the S and T parameters, which receive important tree-level and one-loop contributions. In general the model predicts a large positive S and a negative T. To reproduce the electroweak data, if all the Lee-Wick masses are of the same order, the Lee-Wick scale must be of order 5 TeV. We show that it is possible to find some regions of the parameter space with a fermionic state as light as 2.4-3.5 TeV, at the price of raising all the other masses above 5-8 TeV. To obtain a light Higgs with such heavy resonances, a fine-tuning of at least a few per cent is needed. We also propose a simple extension of the model including a fourth generation of Standard Model fermions with their Lee-Wick partners. We show that in this case it is possible to pass the electroweak constraints with Lee-Wick fermionic masses of order 0.4-1.5 TeV and Lee-Wick gauge masses of order 3 TeV.

Relevance: 20.00%

Abstract:

In the present work, the effects of spatial constraints on the efficiency of task execution in systems underlain by geographical complex networks are investigated, where the probability of connection decreases with the distance between nodes. The investigation considers several configurations of the parameters defining the network connectivity, and the Barabási-Albert network model is also considered for comparison. The results show that the effect of connectivity is significant only for shorter tasks, that the locality of connections implied by the spatial constraints reduces efficiency, and that the addition of edges can improve the efficiency of the execution, although the improvement is small as the locality of the connections increases.
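
A sketch of the network construction: nodes placed in the unit square are linked with probability decaying exponentially with their distance, and a Barabási-Albert graph of similar size serves as the non-geographical reference. Average shortest-path length is used below as a crude proxy for connection locality; the task-execution efficiency measure of the paper is more elaborate.

    # Geographical random graph (link probability falls with distance) versus
    # a Barabasi-Albert graph. Parameters are illustrative only.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(2)
    n, alpha = 200, 5.0                  # alpha sets how fast P(link) decays
    pos = rng.random((n, 2))             # nodes scattered in the unit square

    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(pos[i] - pos[j])
            if rng.random() < 0.2 * np.exp(-alpha * d):
                G.add_edge(i, j)

    ba = nx.barabasi_albert_graph(n, 3, seed=2)
    for H, name in [(G, "geographical"), (ba, "Barabasi-Albert")]:
        giant = H.subgraph(max(nx.connected_components(H), key=len))
        print(name, round(nx.average_shortest_path_length(giant), 2))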