956 results for Multinomial logit models with random coefficients (RCL)
Abstract:
The choice network revenue management model incorporates customer purchase behavior as a function of the offered products, and is the appropriate model for airline and hotel network revenue management, dynamic sales of bundles, and dynamic assortment optimization. The optimization problem is a stochastic dynamic program and is intractable. A certainty-equivalence relaxation of the dynamic program, called the choice deterministic linear program (CDLP), is usually used to generate dynamic controls. Recently, a compact linear programming formulation of this linear program was given for the multi-segment multinomial-logit (MNL) model of customer choice with non-overlapping consideration sets. Our objective is to obtain a tighter bound than this formulation while retaining the appealing properties of a compact linear programming representation. To this end, it is natural to consider the affine relaxation of the dynamic program. We first show that the affine relaxation is NP-complete even for a single-segment MNL model. Nevertheless, by analyzing the affine relaxation we derive a new compact linear program that approximates the dynamic programming value function better than CDLP, provably between the CDLP value and the affine relaxation, and often coming close to the latter in our numerical experiments. When the segment consideration sets overlap, we show that some strong equalities called product cuts developed for the CDLP remain valid for our new formulation. Finally, we perform extensive numerical comparisons on the various bounds to evaluate their performance.
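For reference, the customer choice model named here is the standard multinomial logit: when a set S of products is offered, a customer of a given segment purchases product j in S with probability

\[ P_j(S) = \frac{v_j}{v_0 + \sum_{k \in S} v_k}, \]

where v_j > 0 is the preference weight of product j and v_0 the weight of the no-purchase alternative. This is a generic statement of the MNL model underlying the CDLP, not a formula quoted from the paper.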
Abstract:
This paper fills a gap in the existing literature on least squares learning in linear rational expectations models by studying a setup in which agents learn by fitting ARMA models to a subset of the state variables. This is a natural specification in models with private information because, in the presence of hidden state variables, agents have an incentive to condition forecasts on the infinite past record of observables. We study a particular setting in which it suffices for agents to fit a first-order ARMA process, which preserves the tractability of a finite-dimensional parameterization while permitting conditioning on the infinite past record. We describe how previous results (Marcet and Sargent [1989a, 1989b]) can be adapted to handle the convergence of estimators of an ARMA process in our self-referential environment. We also study "rates" of convergence analytically and via computer simulation.
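As a loose illustration of the kind of recursive ARMA fitting involved (a sketch, not the authors' algorithm; the simulated series and parameter values are made up), one can re-estimate an ARMA(1,1) model on an expanding window as data accumulate:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)

# Simulate an ARMA(1,1) series: y_t = 0.7*y_{t-1} + e_t + 0.3*e_{t-1}
n, phi, theta = 500, 0.7, 0.3
e = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + e[t] + theta * e[t - 1]

# Expanding-window estimation: beliefs are updated as the sample grows
for t in range(100, n + 1, 100):
    res = ARIMA(y[:t], order=(1, 0, 1), trend="n").fit()
    print(t, res.params.round(3))  # AR(1) coef., MA(1) coef., innovation variance
```

In a self-referential environment the data-generating process itself would depend on the fitted coefficients at each step, which is what makes the convergence question nontrivial.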
Abstract:
BACKGROUND: Replicative phenotypic HIV resistance testing (rPRT) uses recombinant infectious virus to measure viral replication in the presence of antiretroviral drugs. Due to its high sensitivity in detecting viral minorities and its power to dissect complex viral resistance patterns and mixed virus populations, rPRT might help to improve HIV resistance diagnostics, particularly for patients with multiple drug failures. The aim was to investigate whether the addition of rPRT to genotypic resistance testing (GRT), compared to GRT alone, is beneficial for obtaining a virological response in heavily pre-treated HIV-infected patients. METHODS: Patients with resistance tests between 2002 and 2006 were followed within the Swiss HIV Cohort Study (SHCS). We assessed patients' virological success after their antiretroviral therapy was switched following resistance testing. Multilevel logistic regression models with SHCS centre as a random effect were used to investigate the association between the type of resistance test and virological response (HIV-1 RNA <50 copies/mL or ≥1.5 log reduction). RESULTS: Of 1158 individuals with resistance tests, 221 with GRT+rPRT and 937 with GRT were eligible for analysis. Overall virological response rates were 85.1% for GRT+rPRT and 81.4% for GRT. In the subgroup of patients with >2 previous failures, the odds ratio (OR) for virological response of GRT+rPRT compared to GRT was 1.45 (95% CI 1.00-2.09). Multivariate analyses indicate a significant improvement with GRT+rPRT compared to GRT alone (OR 1.68, 95% CI 1.31-2.15). CONCLUSIONS: In heavily pre-treated patients, rPRT-based resistance information adds benefit, contributing to a higher rate of treatment success.
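The multilevel model referred to here has the generic random-intercept logistic form (a standard statement with illustrative notation, not the paper's own):

\[ \operatorname{logit}\, \Pr(y_{ij} = 1) = \beta_0 + \mathbf{x}_{ij}'\boldsymbol{\beta} + u_j, \qquad u_j \sim N(0, \sigma_u^2), \]

where y_ij indicates virological response for patient i, x_ij includes the type of resistance test among other covariates, and u_j is the random effect of SHCS centre j.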
Abstract:
This paper analyzes the nature of health care provider choice in the case of patient-initiated contacts, with special reference to a National Health Service setting, where monetary prices are zero and general practitioners act as gatekeepers to publicly financed specialized care. We focus our attention on the factors that may explain the continuously increasing use of hospital emergency visits as opposed to other provider alternatives. An extended version of a discrete choice model of demand for patient-initiated contacts is presented, allowing for individual and town residence size differences in perceived quality (preferences) between alternative providers and including travel and waiting time as non-monetary costs. Results of a nested multinomial logit model of provider choice are presented. Individual choice between alternatives considers, in a repeated nested structure, self-care, primary care, hospital and clinic emergency services. Welfare implications and income effects are analyzed by computing compensating variations, and by simulating the effects of user fees by levels of income. Results indicate that the compensating variation per visit is higher than the direct marginal cost of emergency visits, and consequently, emergency visits do not appear to be an inefficient alternative even for non-urgent conditions.
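In generic form (not quoted from the paper), a nested multinomial logit assigns to alternative j in nest B_m the probability

\[ P(j) = \frac{e^{V_j/\lambda_m}}{\sum_{k \in B_m} e^{V_k/\lambda_m}} \cdot \frac{e^{\lambda_m I_m}}{\sum_{l} e^{\lambda_l I_l}}, \qquad I_m = \ln \sum_{k \in B_m} e^{V_k/\lambda_m}, \]

where V_j is the systematic utility of alternative j (here including travel and waiting time as non-monetary costs) and the dissimilarity parameter lambda_m governs the correlation of unobserved utility within nest B_m.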
Abstract:
The context in which the university admissions exams are performed is presented, and the main concerns about these exams are outlined and discussed from a statistical point of view. The paper offers an illustration of the use of random coefficient models in the study of educational data. The association between two individual scores (one internal and the other external to the school) and the effect of the school on the external exam are analyzed by a regression model with random intercept and fixed slope. A variance component model for the analysis of the grading process is also presented. The paper ends with an outline of the main findings and the presentation of some specific proposals to improve and control the equity of the system. Some pedagogic reflections are also included.
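A random-intercept, fixed-slope model of the kind described has the generic form (illustrative notation):

\[ y_{ij} = \beta_0 + u_j + \beta_1 x_{ij} + \varepsilon_{ij}, \qquad u_j \sim N(0, \sigma_u^2), \quad \varepsilon_{ij} \sim N(0, \sigma^2), \]

where y_ij would be the external score of student i in school j, x_ij the internal score, and u_j the school-level random effect whose variance component measures the school's contribution to the external result.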
Abstract:
Since ethical concerns are calling for more attention within Operational Research, we present three approaches to combining Operational Research models with ethics. Our intention is to clarify the trade-offs faced by the OR community, in particular the tension between the scientific legitimacy of OR models (ethics outside OR models) and the integration of ethics within models (ethics within OR models). Presenting and discussing an approach that combines OR models with the process of OR (ethics beyond OR models), we suggest rigorous ways to express the relation between ethics and OR models. As our work is exploratory, we try to avoid a dogmatic attitude and call for further research. We argue that there are interesting avenues for research at the theoretical, methodological and applied levels, and that the OR community can contribute to an innovative, constructive and responsible social dialogue about its ethics.
Abstract:
The paper proposes a numerical solution method for general equilibrium models with a continuum of heterogeneous agents, which combines elements of projection and of perturbation methods. The basic idea is to solve first for the stationary solution of the model, without aggregate shocks but with fully specified idiosyncratic shocks. Afterwards one computes a first-order perturbation of the solution in the aggregate shocks. This approach makes it possible to include a high-dimensional representation of the cross-sectional distribution in the state vector. The method is applied to a model of household saving with uninsurable income risk and liquidity constraints. The model includes not only productivity shocks, but also shocks to redistributive taxation, which cause substantial short-run variation in the cross-sectional distribution of wealth. If those shocks are operative, it is shown that a solution method based on very few statistics of the distribution is not suitable, while the proposed method can solve the model with high accuracy, at least for the case of small aggregate shocks. Techniques are discussed to reduce the dimension of the state space such that higher-order perturbations are feasible. Matlab programs to solve the model can be downloaded.
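Schematically (an illustration of generic first-order perturbation, not the paper's notation), if the discretized equilibrium conditions are collected as E_t F(X_t, X_{t+1}, z_t) = 0 with aggregate shock z_t, one first solves F(X̄, X̄, 0) = 0 for the stationary point and then linearizes around it, yielding a first-order solution of the form

\[ X_t = \bar X + A\,(X_{t-1} - \bar X) + B\, z_t, \]

where the state vector X_t stacks the individual policy coefficients together with the (high-dimensional) discretized cross-sectional distribution, which is what distinguishes this approach from methods that track only a few moments of the distribution.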
Abstract:
OBJECTIVES: To examine trends in the prevalence of congenital heart defects (CHDs) in Europe and to compare these trends with the recent decrease in the prevalence of CHDs in Canada (Quebec) that was attributed to the policy of mandatory folic acid fortification. STUDY DESIGN: We used data for the period 1990-2007 for 47 508 cases of CHD not associated with a chromosomal anomaly from 29 population-based European Surveillance of Congenital Anomalies registries in 16 countries covering 7.3 million births. We estimated trends for all CHDs combined and separately for 3 severity groups using random-effects Poisson regression models with splines. RESULTS: We found that the total prevalence of CHDs increased during the 1990s and the early 2000s until 2004 and decreased thereafter. We found essentially no trend in total prevalence of the most severe group (group I), whereas the prevalence of severity group II increased until about 2000 and decreased thereafter. Trends for severity group III (the most prevalent group) paralleled those for all CHDs combined. CONCLUSIONS: The prevalence of CHDs decreased in recent years in Europe in the absence of a policy for mandatory folic acid fortification. One possible explanation for this decrease may be an as-yet-undocumented increase in folic acid intake of women in Europe following recommendations for folic acid supplementation and/or voluntary fortification. However, alternative hypotheses, including reductions in risk factors of CHDs (eg, maternal smoking) and improved management of maternal chronic health conditions (eg, diabetes), must also be considered for explaining the observed decrease in the prevalence of CHDs in Europe or elsewhere.
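The trend model described has the generic random-effects Poisson form (illustrative notation; the paper's spline specification is not reproduced here):

\[ \log E[Y_{rt}] = \log n_{rt} + f(t) + u_r, \qquad u_r \sim N(0, \tau^2), \]

where Y_rt is the number of CHD cases in registry r and year t, n_rt the corresponding number of births entering as an offset, f a spline in calendar time capturing the nonlinear trend, and u_r a registry-level random effect.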
Abstract:
Site-specific regression coefficient values are essential for erosion prediction with empirical models. With the objective of investigating the surface-soil consolidation factor, Cf, linked to the RUSLE's prior-land-use subfactor, PLU, an erosion experiment using simulated rainfall on a 0.075 m m-1 slope, sandy loam Paleudult soil, was conducted at the Agriculture Experimental Station of the Federal University of Rio Grande do Sul (EEA/UFRGS), in Eldorado do Sul, State of Rio Grande do Sul, Brazil. First, a row-cropped area was excluded from cultivation (March 1995), the existing crop residue was removed from the field, and the soil was kept clean-tilled for the rest of the year (to obtain a degraded soil condition for the intended purpose of this research). The soil was then conventionally tilled for the last time (except for a standard plot which was kept continuously clean-tilled for comparison purposes) in January 1996, and the following treatments were established and evaluated for soil reconsolidation and soil erosion until May 1998, on duplicated 3.5 x 11.0 m erosion plots: (a) fresh-tilled soil, continuously in clean-tilled fallow (unit plot); (b) reconsolidating soil without cultivation; and (c) reconsolidating soil with cultivation (a crop sequence of three corn and two black oat cycles, continuously in no-till, removing the crop residues after each harvest for rainfall application and redistributing them on the site afterwards). Simulated rainfall was applied with a Swanson-type, rotating-boom rainfall simulator, at 63.5 mm h-1 intensity and 90 min duration, six times during the two-and-a-half-year experimental period (at the beginning of the study and after each crop harvest, with the soil in the unit plot being retilled before each rainfall test). The soil-surface-consolidation factor, Cf, was calculated by dividing soil loss values from the reconsolidating soil treatments by the average value from the fresh-tilled soil treatment (unit plot). Non-linear regression was used to fit the model Cf = e^(b·t) to the calculated Cf data, where t is time in days since last tillage. Values for b were -0.0020 for the reconsolidating soil without cultivation and -0.0031 for the one with cultivation, yielding Cf values equal to 0.16 and 0.06, respectively, after two and a half years of tillage discontinuation, compared to 1.0 for fresh-tilled soil. These estimated Cf values correspond, respectively, to soil loss reductions of 84 and 94% relative to soil loss from the fresh-tilled soil, showing that the soil surface reconsolidated more intensely with cultivation than without it. Two distinct treatment-inherent soil surface conditions probably influenced the rapid decay rate of Cf values in this study, but, as a matter of fact, they were part of the real environmental field conditions. The Cf-factor curves presented in this paper are therefore useful for predicting erosion with RUSLE, but their application is restricted to situations where both soil type and the particular soil surface condition are similar to the ones investigated in this study.
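A minimal sketch of this kind of exponential-decay fit (the (t, Cf) pairs below are hypothetical, chosen only to be consistent with the reported b; scipy's curve_fit stands in for whatever regression software the study used):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical measurements: days since last tillage vs. consolidation factor Cf
t = np.array([0, 150, 330, 510, 700, 912], dtype=float)
cf = np.array([1.00, 0.72, 0.52, 0.35, 0.25, 0.16])

def model(t, b):
    # Cf = e^(b*t), the exponential-decay form fitted in the paper
    return np.exp(b * t)

(b_hat,), pcov = curve_fit(model, t, cf, p0=[-0.001])
print(f"b = {b_hat:.4f}")  # the paper reports b = -0.0020 without cultivation
```

As a consistency check, e^(-0.0020 x 912) is about 0.16 and e^(-0.0031 x 912) about 0.06, matching the Cf values quoted after two and a half years.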
Abstract:
The integrity of central and peripheral nervous system myelin is affected in numerous lipid metabolism disorders. This vulnerability was so far mostly attributed to the extraordinarily high level of lipid synthesis that is required for the formation of myelin, and to the relative autonomy in lipid synthesis of myelinating glial cells because of blood barriers shielding the nervous system from circulating lipids. Recent insights from analysis of inherited lipid disorders, especially those with prevailing lipid depletion and from mouse models with glia-specific disruption of lipid metabolism, shed new light on this issue. The particular lipid composition of myelin, the transport of lipid-associated myelin proteins, and the necessity for timely assembly of the myelin sheath all contribute to the observed vulnerability of myelin to perturbed lipid metabolism. Furthermore, the uptake of external lipids may also play a role in the formation of myelin membranes. In addition to an improved understanding of basic myelin biology, these data provide a foundation for future therapeutic interventions aiming at preserving glial cell integrity in metabolic disorders.
Abstract:
Understanding how communities of living organisms assemble has been a central question in ecology since the early days of the discipline. Disentangling the different processes involved in community assembly is not only interesting in itself but also crucial for an understanding of how communities will behave under future environmental scenarios. The traditional concept of assembly rules reflects the notion that species do not co-occur randomly but are restricted in their co-occurrence by interspecific competition. This concept can be redefined in a more general framework where the co-occurrence of species is a product of chance, historical patterns of speciation and migration, dispersal, abiotic environmental factors, and biotic interactions, with none of these processes being mutually exclusive. Here we present a survey and meta-analyses of 59 papers that compare observed patterns in plant communities with null models simulating random patterns of species assembly. According to the type of data under study and the different methods that are applied to detect community assembly, we distinguish four main types of approach in the published literature: species co-occurrence, niche limitation, guild proportionality and limiting similarity. Results from our meta-analyses suggest that non-random co-occurrence of plant species is not a widespread phenomenon. However, whether this finding reflects the individualistic nature of plant communities or is caused by methodological shortcomings associated with the studies considered cannot be discerned from the available metadata. We advocate that more thorough surveys be conducted using a set of standardized methods to test for the existence of assembly rules in data sets spanning larger biological and geographical scales than have been considered until now. We underpin this general advice with guidelines that should be considered in future assembly rules research. This will enable us to draw more accurate and general conclusions about the non-random aspect of assembly in plant communities.
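As one concrete example of the null-model approach surveyed here (a simplified sketch, not the procedure of any particular study: it uses a C-score statistic with a row-wise randomization that preserves species frequencies but not site richness):

```python
import numpy as np

def c_score(m):
    """Mean checkerboard (C-) score over all species pairs.
    m: binary species-by-site presence-absence matrix (ints)."""
    r = m.sum(axis=1)              # occurrences per species
    shared = m @ m.T               # sites shared by each species pair
    i, j = np.triu_indices(m.shape[0], k=1)
    return np.mean((r[i] - shared[i, j]) * (r[j] - shared[i, j]))

def cooccurrence_test(m, n_iter=999, seed=0):
    """One-sided test: do species co-occur less often than chance predicts?"""
    rng = np.random.default_rng(seed)
    obs = c_score(m)
    # Null model: shuffle each species' occurrences across sites independently
    null = np.array([c_score(rng.permuted(m, axis=1)) for _ in range(n_iter)])
    p = (1 + np.sum(null >= obs)) / (n_iter + 1)
    return obs, p
```

Calling cooccurrence_test on an observed matrix returns the observed C-score and a randomization p-value; the choice of null model (what the shuffling holds fixed) is exactly the methodological issue the meta-analysis discusses.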
Abstract:
1. Identifying those areas suitable for recolonization by threatened species is essential to support efficient conservation policies. Habitat suitability models (HSM) predict species' potential distributions, but the quality of their predictions should be carefully assessed when the species-environment equilibrium assumption is violated. 2. We studied the Eurasian otter Lutra lutra, whose numbers are recovering in southern Italy. To produce widely applicable results, we chose standard HSM procedures and assessed the models' capacity to predict the suitability of a recolonization area. We used two fieldwork datasets: presence-only data, used in the Ecological Niche Factor Analyses (ENFA), and presence-absence data, used in a Generalized Linear Model (GLM). In addition to cross-validation, we independently evaluated the models with data from a recolonization event, providing presences on a previously unoccupied river. 3. Three of the models successfully predicted the suitability of the recolonization area, but the GLM built with data before the recolonization disagreed with these predictions, missing the recolonized river's suitability and badly describing the otter's niche. Our results highlighted three points of relevance to modelling practices: (1) absences may prevent the models from correctly identifying areas suitable for a species' spread; (2) the selection of variables may lead to randomness in the predictions; and (3) the Area Under the Curve (AUC), a commonly used validation index, was not well suited to the evaluation of model quality, whereas the Boyce Index (CBI), based on presence data only, better highlighted the models' fit to the recolonization observations. 4. For species with unstable spatial distributions, presence-only models may work better than presence-absence methods in making reliable predictions of suitable areas for expansion. An iterative modelling process, using new occurrences from each step of the species' spread, may also help in progressively reducing errors. 5. Synthesis and applications. Conservation plans depend on reliable models of the species' suitable habitats. In non-equilibrium situations, such as is the case for threatened or invasive species, models could be affected negatively by the inclusion of absence data when predicting the areas of potential expansion. Presence-only methods will here provide a better basis for productive conservation management practices.
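Since the Boyce Index is less familiar than AUC, here is a minimal sketch of its continuous version (window width, number of windows, and the input names fit_presence/fit_background are illustrative choices, not the paper's settings):

```python
import numpy as np
from scipy.stats import spearmanr

def boyce_index(fit_presence, fit_background, n_windows=10, window=0.1):
    """Continuous Boyce index: Spearman rank correlation between habitat
    suitability and the predicted-to-expected (P/E) ratio of presences.
    fit_presence: model suitability scores at presence points.
    fit_background: suitability scores over the whole study area."""
    lo, hi = fit_background.min(), fit_background.max()
    width = window * (hi - lo)
    mids = np.linspace(lo + width / 2, hi - width / 2, n_windows)
    pe = []
    for m in mids:
        a, b = m - width / 2, m + width / 2
        p = np.mean((fit_presence >= a) & (fit_presence <= b))    # predicted freq.
        e = np.mean((fit_background >= a) & (fit_background <= b))  # expected freq.
        pe.append(p / e if e > 0 else np.nan)
    pe = np.asarray(pe)
    ok = ~np.isnan(pe)
    return spearmanr(mids[ok], pe[ok]).correlation
```

A value near +1 means presences concentrate in the cells the model rates most suitable; note that only presence data enter the numerator, which is why the index remains usable when absences are unreliable.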
Abstract:
We have systematically analyzed six different lattice models with quenched disorder and no thermal fluctuations exhibiting a field-driven first-order phase transition. We have studied the nonequilibrium transition, which appears when varying the amount of disorder, characterized by the change from a discontinuous hysteresis cycle (with one or more large avalanches) to a smooth one (with only tiny avalanches). We have computed critical exponents using finite-size scaling techniques and shown that they are consistent with universal values depending only on the space dimensionality d.
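The finite-size scaling analyses mentioned typically rest on an ansatz of the generic form (illustrative, not the paper's specific scaling variables):

\[ O(u, L) = L^{-\theta/\nu}\, \tilde O\!\left(u\, L^{1/\nu}\right), \qquad u = \frac{R - R_c}{R_c}, \]

where O is an observable measured on a lattice of linear size L and u is the reduced distance to the critical disorder R_c; data for different L collapse onto the single scaling function when the exponents are chosen correctly, which is how universal values are extracted.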
Abstract:
We study charmed baryon resonances that are generated dynamically within a unitary meson-baryon coupled-channel model that treats the heavy pseudoscalar and vector mesons on an equal footing, as required by heavy-quark symmetry. It is an extension of recent SU(4) models with t-channel vector-meson exchanges to an SU(8) spin-flavor scheme, but differs considerably from the SU(4) approach in how the strong breaking of the flavor symmetry is implemented. Some of our dynamically generated states can be readily assigned to recently observed baryon resonances, while others do not have a straightforward identification and require the compilation of more data as well as an extension of the model to d-wave meson-baryon interactions and p-wave couplings in the neglected s- and u-channel diagrams. Among several novelties, we find that the Λc(2595), which emerged as a ND quasibound state within the SU(4) approaches, becomes predominantly a ND* quasibound state in the present SU(8) scheme.
Abstract:
We study whether the neutron skin thickness Δr_np of 208Pb originates from the bulk or from the surface of the nucleon density distributions, according to the mean-field models of nuclear structure, and find that it depends on the stiffness of the nuclear symmetry energy. The bulk contribution to Δr_np arises from an extended sharp radius of neutrons, whereas the surface contribution arises from different widths of the neutron and proton surfaces. Nuclear models where the symmetry energy is stiff, as typical of relativistic models, predict a bulk contribution in Δr_np of 208Pb about twice as large as the surface contribution. In contrast, models with a soft symmetry energy like common nonrelativistic models predict that Δr_np of 208Pb is divided similarly into bulk and surface parts. Indeed, if the symmetry energy is supersoft, the surface contribution becomes dominant. We note that the linear correlation of Δr_np of 208Pb with the density derivative of the nuclear symmetry energy arises from the bulk part of Δr_np. We also note that most models predict a mixed-type (between halo and skin) neutron distribution for 208Pb. Although the halo-type limit is actually found in the models with a supersoft symmetry energy, the skin-type limit is not supported by any mean-field model. Finally, we compute parity-violating electron scattering in the conditions of the 208Pb parity radius experiment (PREX) and obtain a pocket formula for the parity-violating asymmetry in terms of the parameters that characterize the shape of the 208Pb nucleon densities.
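For reference, the neutron skin thickness discussed here is defined in the usual way as the difference of root-mean-square radii,

\[ \Delta r_{np} = \langle r^2 \rangle_n^{1/2} - \langle r^2 \rangle_p^{1/2}, \]

computed from the neutron and proton density distributions of the nucleus; the bulk-versus-surface question concerns how this difference is built up from the sharp radii and surface widths of the two distributions.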