72 results for variable parameters
Abstract:
Research project based on a stay at the Institut National de la Recherche Agronomique, France, between 2007 and 2009. Saccharomyces cerevisiae is the yeast that has been used for millennia in winemaking. Even so, little is known about the selection pressures that have shaped the genome of wine yeasts. The genome of a commercial wine strain, EC1118, was sequenced, yielding 31 supercontigs that cover 97% of the genome of the reference strain, S288c. The genome of the wine strain was found to differ essentially in possessing 3 unique regions containing 34 genes involved in key functions for the fermentation process. In addition, phylogeny and synteny (gene order) studies were carried out, showing that one of these three regions is close to a species related to the genus Saccharomyces, while the other two regions have a non-Saccharomyces origin. Zygosaccharomyces bailii, a contaminant species of wine fermentations, was identified by PCR and sequencing as the donor species of one of the two regions. Natural hybridizations between strains of different species within the Saccharomyces sensu stricto group had already been described. This work is the first to present hybridizations between Saccharomyces and non-Saccharomyces species (Z. bailii, in this case). It is also noted that the new regions are frequently and differentially present among the clades of S. cerevisiae, being found almost exclusively in the group of wine strains, which suggests a recent acquisition through gene transfer. Overall, the data show that the genome of wine strains undergoes constant remodelling through the acquisition of exogenous genes. The results suggest that these processes are favoured by ecological proximity and are involved in the molecular adaptation of wine strains to conditions of high sugar concentration, low nitrogen and high ethanol concentrations.
Abstract:
Low concentrations of elements in geochemical analyses have the peculiarity of being compositional data and, for a given level of significance, are likely to be beyond the capabilities of laboratories to distinguish between minute concentrations and complete absence, thus preventing laboratories from reporting extremely low concentrations of the analyte. Instead, what is reported is the detection limit, which is the minimum concentration that conclusively differentiates between presence and absence of the element. A spatially distributed exhaustive sample is employed in this study to generate unbiased sub-samples, which are further censored to observe the effect that different detection limits and sample sizes have on the inference of population distributions starting from geochemical analyses having specimens below detection limit (nondetects). The isometric logratio transformation is used to convert the compositional data in the simplex to samples in real space, thus allowing the practitioner to properly borrow from the large source of statistical techniques valid only in real space. The bootstrap method is used to numerically investigate the reliability of inferring several distributional parameters employing different forms of imputation for the censored data. The case study illustrates that, in general, best results are obtained when imputations are made using the distribution best fitting the readings above detection limit, and exposes the problems of other more widely used practices. When the sample is spatially correlated, it is necessary to combine the bootstrap with stochastic simulation.
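For readers unfamiliar with the isometric logratio (ilr) transformation mentioned above, a minimal sketch follows. The Helmert-type basis used here is one standard choice of orthonormal basis and is an assumption, not necessarily the one used in the study.

```python
import numpy as np

def ilr(x):
    """Isometric logratio transform of a composition x (all parts > 0).

    Maps a D-part composition from the simplex to R^(D-1) using a
    Helmert-type orthonormal basis, so statistical techniques valid
    only in real space can be applied.
    """
    x = np.asarray(x, dtype=float)
    D = x.size
    clr = np.log(x) - np.log(x).mean()   # centred logratio
    # Orthonormal basis of the hyperplane orthogonal to (1, ..., 1).
    H = np.zeros((D - 1, D))
    for i in range(1, D):
        H[i - 1, :i] = 1.0 / i
        H[i - 1, i] = -1.0
        H[i - 1] *= np.sqrt(i / (i + 1.0))
    return H @ clr

# Example: a 3-part composition (parts sum to 1) maps to a point in R^2.
print(ilr([0.1, 0.3, 0.6]))
```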
Abstract:
The research aims to identify the variables that explain and predict victimization by means of logistic regression analyses. The study was carried out on a sample of 39,517 records from 17 industrialized countries (including Catalonia), belonging to 17 victimization surveys from 1999, all conducted with the same methodological parameters. The dependent variables (or types of victimization) studied are: theft of/from the car, burglary or attempted burglary of the home, minor offences, property offences, violent offences, sexual assaults and contact offences. The independent variables are: country, night-time going-out habits, age, number of inhabitants of the city or municipality, occupation, years of education, income, marital status and sex. Some conclusions are: (1) the variables country and age are those that most strongly explain victimization; (2) regarding sexual assaults, the variable that best explains victimization is marital status, followed by age and country; (3) the variable country is present in each and every one of the equations obtained from the logistic regressions, which means that in all cases it explains victimization and, moreover, has the capacity to predict it; (4) marital status and number of inhabitants are present in all the logistic regression equations except the one referring to offences against cars; (5) age is present in 6 of the 8 logistic regression equations; it does not appear for offences against homes or for violent offences, so it is not useful for predicting victimization of those types; (6) as for country, living in Catalonia is a protective factor against crime, except for offences against vehicles.
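A minimal sketch of the kind of logistic-regression fit described above; the file name, column names and the `burglary` outcome are hypothetical stand-ins for the survey's actual variables.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical file and column names; the surveys' actual coding is not given here.
df = pd.read_csv("victimization.csv")
X = pd.get_dummies(df[["country", "age", "marital_status", "city_size",
                       "occupation", "years_of_education", "income",
                       "night_outings", "sex"]], drop_first=True)
y = df["burglary"]  # 1 = victimized, 0 = not

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Coefficients show how each predictor shifts the log-odds of victimization.
print(dict(zip(X.columns, model.coef_[0])))
print("held-out accuracy:", model.score(X_test, y_test))
```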
Abstract:
Business process designers take into account the resources that the processes will need but, due to the variable cost of certain parameters (such as energy) or other circumstances, this scheduling must be done at business process enactment time. In this report we formalize the energy-aware resource cost, including time- and usage-dependent rates. We also present a constraint programming approach and an auction-based approach to solve the problem, together with a comparison of the two approaches and of the proposed algorithms for solving them.
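A minimal constraint-programming sketch of scheduling under a time-dependent energy rate, written with Google OR-Tools CP-SAT as an assumed solver (the report's own formulation and tooling are not given here); the rate table and task length are invented.

```python
from ortools.sat.python import cp_model

# Hypothetical data: hourly energy rates over an 8-slot horizon and a task
# that runs for 3 consecutive slots; we choose its start to minimize cost.
rates = [5, 5, 3, 2, 2, 4, 6, 7]   # cost per occupied slot
duration = 3

model = cp_model.CpModel()
start = model.NewIntVar(0, len(rates) - duration, "start")

# Cost of each slot the task occupies, picked out of the rate table.
slot_costs = []
for k in range(duration):
    idx = model.NewIntVar(0, len(rates) - 1, f"idx_{k}")
    cost = model.NewIntVar(min(rates), max(rates), f"cost_{k}")
    model.Add(idx == start + k)
    model.AddElement(idx, rates, cost)
    slot_costs.append(cost)

model.Minimize(sum(slot_costs))
solver = cp_model.CpSolver()
solver.Solve(model)
print("start slot:", solver.Value(start),
      "total cost:", sum(solver.Value(c) for c in slot_costs))
```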
Abstract:
Agents use their knowledge of the history of the economy in order to choose the optimal action to take at any given moment of time, but each individual observes history with some noise. This paper shows that the amount of information available on the past evolution of the economy is an endogenous variable, and that this leads to overconcentration of the investment, which can be interpreted as underinvestment in research. It presents a model in which agents have to invest at each period in one of $K$ sectors, each of them paying an exogenous return that follows a well-defined stochastic path. At any moment of time each agent receives an unbiased noisy signal on the payoff of each sector. The signals differ across agents, but all of them have the same variance, which depends on the aggregate investment in that particular sector (so that if almost everybody invests in it the perceptions of everybody will be very accurate, but if almost nobody does the perceptions of everybody will be very noisy). The degree of heterogeneity across agents is then an endogenous variable, evolving over time, determining, and being determined by, the amount of information disclosed. As long as both the level of social interaction and the underlying precision of the observations are relatively large, agents behave in a very precise way. This behavior is unchanged over a huge range of informational parameters, and it is characterized by an excessive concentration of the investment in a few sectors. Additionally, the model shows that generalized improvements in the quality of the information that each agent gets may lead to a worse outcome for all the agents, due to the overconcentration of the investment that this produces.
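A rough simulation sketch of the feedback loop the abstract describes, under invented functional forms and parameter values: signal noise shrinks with aggregate investment in a sector, so popular sectors are observed precisely and attract still more investment.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, T = 5, 200, 50                          # sectors, agents, periods (invented)
payoffs = rng.normal(1.0, 0.2, size=(T, K))   # exogenous stochastic returns
shares = np.full(K, 1.0 / K)                  # last period's investment shares

for t in range(T):
    # Noise shrinks with aggregate investment: heavily chosen sectors are
    # observed precisely, deserted ones noisily (invented functional form).
    sigma = 0.5 / np.sqrt(shares * N + 1)
    signals = payoffs[t] + rng.normal(0.0, sigma, size=(N, K))
    choices = signals.argmax(axis=1)          # each agent picks the best-looking sector
    shares = np.bincount(choices, minlength=K) / N

print("final investment shares per sector:", shares.round(2))
```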
Abstract:
We consider a dynamic multifactor model of investment with financing imperfections, adjustment costs, and fixed and variable capital. We use the model to derive a test of financing constraints based on a reduced-form variable capital equation. Simulation results show that this test correctly identifies financially constrained firms even when the estimation of firms' investment opportunities is very noisy. In addition, the test is well specified in the presence of both concave and convex adjustment costs of fixed capital. We confirm the validity of this test empirically on a sample of small Italian manufacturing companies.
Abstract:
Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is at face value more appropriate. Model-based estimators are justified by the assumption of random (interchangeable) area effects; in practice, however, areas are not interchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labor force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: a) those that use weights that involve area specific estimates of bias and variance; and, b) those that use weights that involve a common variance and a common squared bias estimate for all the areas. We assess their precision and discuss alternatives to optimizing composite estimation in applications.
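A minimal sketch of a composite estimator of type (a), assuming the direct estimator is unbiased and the two estimators' errors are uncorrelated (a common simplification, not necessarily the paper's exact weighting); the numbers are toy values, not from the NISS register.

```python
def composite_estimate(direct, var_direct, indirect, mse_indirect):
    """Composite small-area estimate: a weighted mean of a direct and an
    indirect estimator, with the weight chosen to minimize the approximate
    MSE of the combination under the assumptions stated above."""
    phi = mse_indirect / (var_direct + mse_indirect)  # weight on the direct estimator
    return phi * direct + (1.0 - phi) * indirect

# Toy numbers: a noisy direct survey estimate and a stable but biased
# synthetic (indirect) estimate for one small area.
print(composite_estimate(direct=12.4, var_direct=4.0,
                         indirect=10.1, mse_indirect=1.0))
```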
Abstract:
Many dynamic revenue management models divide the sale period into a finite number of periods T and assume, invoking a fine-enough grid of time, that each period sees at most one booking request. These Poisson-type assumptions restrict the variability of the demand in the model, but researchers and practitioners have been willing to overlook this for the benefit of tractability of the models. In this paper, we criticize this model from another angle. Estimating the discrete finite-period model poses problems of indeterminacy and non-robustness: arbitrarily fixing T leads to arbitrary control values, while estimating T from data adds an additional layer of indeterminacy. To counter this, we first propose an alternate finite-population model that avoids the problem of fixing T and allows a wider range of demand distributions, while retaining the useful marginal-value properties of the finite-period model. The finite-population model still requires jointly estimating market size and the parameters of the customer purchase model without observing no-purchases. Estimation of market size when no-purchases are unobservable has rarely been attempted in the marketing or revenue management literature. Indeed, we point out that it is akin to the classical statistical problem of estimating the parameters of a binomial distribution with unknown population size and success probability, and hence likely to be challenging. However, when the purchase probabilities are given by a functional form such as a multinomial-logit model, we propose an estimation heuristic that exploits the specification of the functional form, the variety of the offer sets in a typical RM setting, and qualitative knowledge of arrival rates. Finally, we perform simulations to show that the estimator is very promising in obtaining unbiased estimates of population size and the model parameters.
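To illustrate why the binomial problem with unknown population size is delicate, here is a minimal profile-likelihood sketch; the counts and the search cap are invented, and this generic approach is not the estimation heuristic proposed in the paper. The unknown-N binomial MLE is notoriously unstable, which is exactly the difficulty the abstract points to.

```python
import numpy as np
from scipy.stats import binom

# Invented purchase counts over comparable selling seasons; each count is
# modeled as Binomial(N, p) with both N and p unknown.
counts = np.array([18, 23, 20, 25, 17, 21])

best = None
for N in range(counts.max(), 500):      # profile the likelihood over candidate N
    p_hat = counts.mean() / N           # MLE of p for this fixed N
    ll = binom.logpmf(counts, N, p_hat).sum()
    if best is None or ll > best[2]:
        best = (N, p_hat, ll)

N_hat, p_hat, _ = best
print(f"N_hat={N_hat}, p_hat={p_hat:.3f}")
```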
Abstract:
We construct a weighted Euclidean distance that approximates any distance or dissimilarity measure between individuals that is based on a rectangular cases-by-variables data matrix. In contrast to regular multidimensional scaling methods for dissimilarity data, the method leads to biplots of individuals and variables while preserving all the good properties of dimension-reduction methods that are based on the singular-value decomposition. The main benefits are the decomposition of variance into components along principal axes, which provide the numerical diagnostics known as contributions, and the estimation of nonnegative weights for each variable. The idea is inspired by the distance functions used in correspondence analysis and in principal component analysis of standardized data, where the normalizations inherent in the distances can be considered as differential weighting of the variables. In weighted Euclidean biplots we allow these weights to be unknown parameters, which are estimated from the data to maximize the fit to the chosen distances or dissimilarities. These weights are estimated using a majorization algorithm. Once this extra weight-estimation step is accomplished, the procedure follows the classical path in decomposing the matrix and displaying its rows and columns in biplots.
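A minimal sketch of the weight-estimation idea, using invented toy data and substituting a generic numerical optimizer for the paper's majorization algorithm; the SVD of the weighted matrix then yields the biplot coordinates.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist

# Toy cases-by-variables matrix and target dissimilarities (invented).
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 4))
target = pdist(X) + rng.normal(0, 0.05, size=X.shape[0] * (X.shape[0] - 1) // 2)

def stress(log_w):
    """Squared error between weighted Euclidean distances and the targets."""
    w = np.exp(log_w)                     # parametrization keeps weights nonnegative
    d = pdist(X * np.sqrt(w))             # weighted Euclidean distances
    return np.sum((d - target) ** 2)

w = np.exp(minimize(stress, np.zeros(X.shape[1])).x)

# Biplot coordinates from the SVD of the weighted matrix.
U, s, Vt = np.linalg.svd(X * np.sqrt(w), full_matrices=False)
rows = U[:, :2] * s[:2]   # coordinates of the individuals (rows)
cols = Vt[:2].T           # axes for the variables (columns)
print("estimated weights:", w.round(3))
```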
Abstract:
The absolute K magnitudes and kinematic parameters of about 350 oxygen-rich Long-Period Variable stars are calibrated, by means of an up-to-date maximum-likelihood method, using HIPPARCOS parallaxes and proper motions together with radial velocities and, as additional data, periods and V-K colour indices. Four groups, differing by their kinematics and mean magnitudes, are found. For each of them, we also obtain the distributions of magnitude, period and de-reddened colour of the base population, as well as de-biased period-luminosity-colour relations and their two-dimensional projections. The SRa semiregulars do not seem to constitute a separate class of LPVs. The SRb appear to belong to two populations of different ages. In a PL diagram, they constitute two evolutionary sequences towards the Mira stage. The Miras of the disk appear to pulsate on a lower-order mode. The slopes of their de-biased PL and PC relations are found to be very different from the ones of the Oxygen Miras of the LMC. This suggests that a significant number of so-called Miras of the LMC are misclassified. This also suggests that the Miras of the LMC do not constitute a homogeneous group, but include a significant proportion of metal-deficient stars, suggesting a relatively smooth star formation history. As a consequence, one may not trivially transpose the LMC period-luminosity relation from one galaxy to the other.
Abstract:
uvby H-beta photometry has been obtained for a sample of 93 selected main sequence A stars. The purpose was to determine accurate effective temperatures, surface gravities, and absolute magnitudes for an individual determination of ages and parallaxes, which have to be included in a more extensive work analyzing the kinematic properties of A V stars. Several calibrations and methods to determine the above mentioned parameters have been reviewed, allowing the design of a new algorithm for their determination. The results obtained using this procedure were tested in a previous paper using uvby H-beta data from the Hauck and Mermilliod catalogue, and comparing the resulting temperatures, surface gravities and absolute magnitudes with empirical determinations of these parameters.
Abstract:
We present I-band deep CCD exposures of the fields of galactic plane radio variables. An optical counterpart, based on positional coincidence, has been found for 15 of the 27 observed program objects. The Johnson I magnitude of the sources identified is in the range 18-21.
Abstract:
Monitoring thunderstorm activity is an essential part of operational weather surveillance given their potential hazards, including lightning, hail, heavy rainfall, strong winds or even tornadoes. This study has two main objectives: firstly, the description of a methodology, based on radar and total lightning data, to characterise thunderstorms in real-time; secondly, the application of this methodology to 66 thunderstorms that affected Catalonia (NE Spain) in the summer of 2006. An object-oriented tracking procedure is employed, where different observation data types generate four different types of objects (radar 1-km CAPPI reflectivity composites, radar reflectivity volumetric data, cloud-to-ground lightning data and intra-cloud lightning data). In the framework proposed, these objects are the building blocks of a higher-level object, the thunderstorm. The methodology is demonstrated with a dataset of thunderstorms whose main characteristics, along the complete life cycle of the convective structures (development, maturity and dissipation), are described statistically. The development and dissipation stages present similar durations in most cases examined. In contrast, the duration of the maturity phase is much more variable and related to the thunderstorm intensity, defined here in terms of lightning flash rate. Most of the activity of IC and CG flashes is registered in the maturity stage. In the development stage few CG flashes are observed (2% to 5%), while in the dissipation phase it is possible to observe a few more CG flashes (10% to 15%). Additionally, a selection of thunderstorms is used to examine general life cycle patterns, obtained from the analysis of normalized (with respect to thunderstorm total duration and maximum value of the variables considered) thunderstorm parameters. Among other findings, the study indicates that the normalized duration of the three stages of the thunderstorm life cycle is similar in most thunderstorms, with the longest duration corresponding to the maturity stage (approximately 80% of the total time).
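A minimal sketch of the object-oriented framework the abstract describes, with hypothetical class names and attributes: four observation-derived object types act as building blocks of a higher-level Thunderstorm object.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RadarCappiObject:      # from 1-km CAPPI reflectivity composites
    max_dbz: float

@dataclass
class RadarVolumeObject:     # from volumetric reflectivity data
    echo_top_km: float

@dataclass
class CgFlashGroup:          # cloud-to-ground lightning data
    flash_count: int

@dataclass
class IcFlashGroup:          # intra-cloud lightning data
    flash_count: int

@dataclass
class Thunderstorm:
    """Higher-level object built from the four observation object types."""
    cappi: List[RadarCappiObject] = field(default_factory=list)
    volumes: List[RadarVolumeObject] = field(default_factory=list)
    cg: List[CgFlashGroup] = field(default_factory=list)
    ic: List[IcFlashGroup] = field(default_factory=list)

    def flash_rate(self, minutes: float) -> float:
        """Total lightning flash rate, the intensity measure used in the study."""
        total = (sum(g.flash_count for g in self.cg)
                 + sum(g.flash_count for g in self.ic))
        return total / minutes
```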
Abstract:
A considerable fraction of the γ-ray sources discovered with the Energetic Gamma-Ray Experiment Telescope (EGRET) remain unidentified. The EGRET sources that have been properly identified are either pulsars or variable sources at both radio and gamma-ray wavelengths. Most of the variable sources are strong radio blazars. However, some low galactic-latitude EGRET sources, with highly variable γ-ray emission, lack any evident counterpart according to the radio data available until now. Aims. The primary goal of this paper is to identify and characterise the potential radio counterparts of four highly variable γ-ray sources in the galactic plane through mapping the radio surroundings of the EGRET confidence contours and determining the variable radio sources in the field whenever possible. Methods. We have carried out a radio exploration of the fields of the selected EGRET sources using the Giant Metrewave Radio Telescope (GMRT) interferometer at 21 cm wavelength, with pointings being separated by months. Results. We detected a total of 151 radio sources. Among them, we identified a few radio sources whose flux density has apparently changed on timescales of months. Despite the limitations of our search, their possible variability makes these objects a top-priority target for multiwavelength studies of the potential counterparts of highly variable, unidentified gamma-ray sources.