930 results for Discrete Choice Model


Relevance:

30.00%

Publisher:

Abstract:

Support Vector Machines Regression (SVMR) is a regression technique recently introduced by V. Vapnik and his collaborators (Vapnik, 1995; Vapnik, Golowich and Smola, 1996). In SVMR the goodness of fit is measured not by the usual quadratic loss function (the mean square error), but by a different loss function called Vapnik's ε-insensitive loss function, which is similar to the "robust" loss functions introduced by Huber (Huber, 1981). The quadratic loss function is well justified under the assumption of Gaussian additive noise. However, the noise model underlying the choice of Vapnik's loss function is less clear. In this paper the use of Vapnik's loss function is shown to be equivalent to a model of additive Gaussian noise in which the variance and mean of the Gaussian are random variables; the probability distributions of the variance and mean are stated explicitly. While this work is presented in the framework of SVMR, it can be extended to justify non-quadratic loss functions in any Maximum Likelihood or Maximum A Posteriori approach. It applies not only to Vapnik's loss function, but to a much broader class of loss functions.
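A minimal sketch contrasting the two loss functions discussed above (the tube width ε = 0.1 is an illustrative choice):

```python
def quadratic_loss(residual):
    """The usual quadratic loss (mean square error contribution)."""
    return residual ** 2

def eps_insensitive_loss(residual, eps=0.1):
    """Vapnik's epsilon-insensitive loss: deviations inside the
    [-eps, eps] tube cost nothing; beyond it the cost grows linearly,
    similar in spirit to Huber's robust losses."""
    return max(0.0, abs(residual) - eps)
```

Small residuals are ignored entirely by the ε-insensitive loss, while the quadratic loss penalises every deviation.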

Relevance:

30.00%

Publisher:

Abstract:

This paper examines a dataset that is well modeled by the Poisson-Log Normal process, and by this process mixed with Log Normal data, both of which are turned into compositions. This generates compositional data that have zeros without any need for conditional models or the assumption that there are missing or censored data requiring adjustment. It also enables us to model dependence on covariates and within the composition.
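A toy simulation, assuming nothing beyond the abstract, showing how Poisson-Log Normal counts closed to a composition produce genuine zeros (all parameter values are illustrative):

```python
import math
import random

def poisson_draw(lam, rng=random):
    """Knuth's multiplication algorithm for a Poisson(lam) draw
    (adequate for the modest rates used here)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def poisson_lognormal_composition(n_parts, mu=1.0, sigma=1.0, rng=random):
    """Draw one Poisson-Log Normal count vector and close it to a
    composition.  Because a Poisson draw can be exactly zero, the
    composition can contain genuine zeros with no censoring adjustment."""
    rates = [rng.lognormvariate(mu, sigma) for _ in range(n_parts)]
    counts = [poisson_draw(r, rng) for r in rates]
    total = sum(counts)
    return [c / total for c in counts] if total else [0.0] * n_parts
```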

Relevance:

30.00%

Publisher:

Abstract:

In the midst of health care reform, Colombia has succeeded in increasing health insurance coverage and the quality of health care. In spite of this, efficiency continues to be a matter of concern, and small-area variations in health care are one of the plausible causes of such inefficiencies. In order to understand this issue, we use individual data on all births from a Contributory-Regimen insurer in Colombia. We estimate two different specifications of a multilevel logistic regression model. Our results reveal that hospitals account for 20% of the variation in the probability of performing cesarean sections. Geographic area explains only one third of the variance attributable to the hospital. Furthermore, some variables from both the demand and supply sides are also found to be relevant to the probability of undergoing a cesarean section. This paper contributes to previous research by using a hierarchical model and by defining hospitals as clusters. Moreover, we also include clinical and supply-induced demand variables.
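In a random-intercept multilevel logistic model, the share of variation attributable to the cluster level is usually reported as a variance partition coefficient. A minimal sketch (the hospital-level variance below is chosen to reproduce the 20% figure and is not taken from the paper):

```python
import math

def variance_partition(cluster_var):
    """VPC/ICC for a random-intercept logistic model: the level-1
    residual on the latent scale has the logistic variance pi^2 / 3."""
    return cluster_var / (cluster_var + math.pi ** 2 / 3)

# A hospital-level variance of about 0.822 implies that hospitals
# account for roughly 20% of the latent-scale variation.
hospital_share = variance_partition(0.822)
```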

Relevance:

30.00%

Publisher:

Abstract:

Describes a method to code a decimated model of an isosurface on an octree representation while maintaining volume data if needed. The proposed technique is based on grouping the marching cubes (MC) patterns into five configurations according to the topology and the number of planes of the surface contained in a cell. Moreover, the discrete number of planes on which the surface lies is fixed. Starting from a complete volume octree, with the isosurface codified at terminal nodes according to the new configurations, a bottom-up strategy is taken for merging cells. Such a strategy allows one to implicitly represent coplanar faces in the upper octree levels without introducing any error. At the end of this merging process, when required, a reconstruction strategy is applied to generate the surface contained in the octree's intersected leaves. Some examples with medical data demonstrate that a reduction of up to 50% in the number of polygons can be achieved.
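The bottom-up merging step can be illustrated with a small sketch, assuming a nested-tuple octree in which each terminal node stores the id of the plane crossing the cell (the representation is hypothetical, not the paper's encoding):

```python
def merge_coplanar(node):
    """Bottom-up merge on a nested-tuple octree.  A leaf holds the id
    of the plane the isosurface lies on in that cell (or None for an
    empty cell); an internal node is a tuple of 8 children.  Eight
    children that all reduce to the same leaf are collapsed into one,
    which represents coplanar faces at upper levels without error."""
    if not isinstance(node, tuple):
        return node                       # already a leaf
    children = tuple(merge_coplanar(c) for c in node)
    first = children[0]
    if not isinstance(first, tuple) and all(c == first for c in children):
        return first                      # all eight coplanar: collapse
    return children
```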

Relevance:

30.00%

Publisher:

Abstract:

The thesis explores computationally reliable and efficient approaches to contractive MPC for discrete-time systems. Two types of contractive MPC have been studied: MPC with an obligatory contractive constraint and MPC with a contractive sequence of controllable sets. Techniques based on convex optimization and on interval analysis are applied to deal with linear and nonlinear contractive MPC, respectively. Classical interval analysis is extended to zonotopes, in the geometric sense, to design a terminal control invariant set for dual-mode MPC. It is also extended to modal intervals in order to take modality into account when computing robust controllable sets with a clear semantic interpretation. The tools of convex optimization and interval analysis have been combined to improve the efficiency of contractive MPC for several classes of constrained, uncertain, nonlinear discrete-time systems. Finally, the two types of contractive MPC addressed have been applied to control a Micro Robot World Cup Soccer Tournament (MiroSot) robot and a Continuous Stirred Tank Reactor (CSTR), respectively.
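Classical interval arithmetic, the starting point the thesis extends to zonotopes and modal intervals, can be sketched as follows; the scalar uncertain system and the contraction it exhibits are illustrative, not taken from the thesis:

```python
class Interval:
    """Minimal classical interval arithmetic: guaranteed enclosures
    under addition and multiplication."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))
    def contains(self, x):
        return self.lo <= x <= self.hi

# One step of an uncertain scalar system x+ = a*x + u, a in [0.4, 0.6]:
a = Interval(0.4, 0.6)
x = Interval(-1.0, 1.0)
u = Interval(-0.1, 0.1)
x_next = a * x + u   # enclosure of all successors; contractive: inside x
```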

Relevance:

30.00%

Publisher:

Abstract:

An operational dust forecasting model is developed by including the Met Office Hadley Centre climate model dust parameterization scheme within a Met Office regional numerical weather prediction (NWP) model. The model includes parameterizations for dust uplift, dust transport, and dust deposition in six discrete size bins and provides diagnostics such as the aerosol optical depth. The results are compared against surface and satellite remote sensing measurements and against in situ measurements from the Facility for Airborne Atmospheric Measurements for a case study when a strong dust event was forecast. Comparisons are also performed against satellite and surface instrumentation for the entire month of August. The case study shows that this Saharan dust NWP model can provide very good guidance on dust events, as much as 42 h ahead. The analysis of monthly data suggests that the mean and variability in the dust model are also well represented.
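Size-resolved deposition schemes of this kind typically assign each bin a gravitational settling speed. A hedged sketch using the standard Stokes-law formula (the bin radii and physical constants are illustrative, not the model's actual values):

```python
def stokes_settling_velocity(radius_m, particle_density=2650.0,
                             air_density=1.2, viscosity=1.8e-5, g=9.81):
    """Stokes-law gravitational settling speed (m/s) for a spherical
    particle; size-resolved dry-deposition schemes apply a speed like
    this per size bin.  Density and viscosity are typical values."""
    return (2.0 * (particle_density - air_density) * g * radius_m ** 2
            / (9.0 * viscosity))

# Six illustrative bin radii (m); NOT the model's actual bin edges.
bin_radii = [0.1e-6, 0.3e-6, 1e-6, 3e-6, 10e-6, 30e-6]
settling_speeds = [stokes_settling_velocity(r) for r in bin_radii]
```

Larger bins settle faster, which is why coarse dust is removed close to the source while fine dust is transported long distances.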

Relevance:

30.00%

Publisher:

Abstract:

Evaluating agents in decision-making applications requires assessing their skill and predicting their behaviour. Both are well developed in Poker-like situations, but less so in more complex game and model domains. This paper addresses both tasks by using Bayesian inference in a benchmark space of reference agents. The concepts are explained and demonstrated using the game of chess but the model applies generically to any domain with quantifiable options and fallible choice. Demonstration applications address questions frequently asked by the chess community regarding the stability of the rating scale, the comparison of players of different eras and/or leagues, and controversial incidents possibly involving fraud. The last include alleged under-performance, fabrication of tournament results, and clandestine use of computer advice during competition. Beyond the model world of games, the aim is to improve fallible human performance in complex, high-value tasks.
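The core inference step can be sketched generically: given each reference agent's move-choice probabilities, Bayes' rule yields a posterior over the benchmark space (toy agents and moves below; the paper's reference agents are chess engines at different skill settings):

```python
def posterior_over_agents(move_probs, observed_moves, prior=None):
    """Bayesian update over a benchmark space of reference agents.
    move_probs[a][m] is the probability that reference agent a plays
    move m; the posterior over agents follows from Bayes' rule and
    serves both skill assessment and behaviour prediction."""
    agents = list(move_probs)
    if prior is None:
        prior = {a: 1.0 / len(agents) for a in agents}
    weights = {}
    for a in agents:
        like = 1.0
        for m in observed_moves:
            like *= move_probs[a].get(m, 0.0)
        weights[a] = prior[a] * like
    total = sum(weights.values())
    return {a: w / total for a, w in weights.items()}
```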

Relevance:

30.00%

Publisher:

Abstract:

A one-dimensional water column model using the Mellor and Yamada level 2.5 parameterization of vertical turbulent fluxes is presented. The model equations are discretized with a mixed finite element scheme. Details of the finite element discrete equations are given, and adaptive mesh refinement strategies are presented. The refinement criterion is an "a posteriori" error estimator based on stratification, shear and distance to the surface. The model's performance is assessed by studying the stress-driven penetration of a turbulent layer into a stratified fluid. This example illustrates the ability of the presented model to follow some internal structures of the flow and paves the way for truly generalized vertical coordinates. (c) 2005 Elsevier Ltd. All rights reserved.
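One sweep of indicator-driven refinement on a 1-D mesh can be sketched as follows (the indicator here is generic; the paper's combines stratification, shear and distance to the surface):

```python
def refine(mesh, indicator, threshold):
    """One adaptive-refinement sweep on a 1-D mesh of (left, right)
    cells: any cell whose a posteriori error indicator exceeds the
    threshold is bisected; the rest are kept unchanged."""
    new_mesh = []
    for left, right in mesh:
        if indicator(left, right) > threshold:
            mid = 0.5 * (left + right)
            new_mesh.extend([(left, mid), (mid, right)])
        else:
            new_mesh.append((left, right))
    return new_mesh
```

Repeating the sweep until no cell exceeds the threshold concentrates resolution where the estimated error is largest.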

Relevance:

30.00%

Publisher:

Abstract:

A physically motivated statistical model is used to diagnose variability and trends in wintertime (October - March) Global Precipitation Climatology Project (GPCP) pentad (5-day mean) precipitation. Quasi-geostrophic theory suggests that extratropical precipitation amounts should depend multiplicatively on the pressure gradient, saturation specific humidity, and the meridional temperature gradient. This physical insight has been used to guide the development of a suitable statistical model for precipitation using a mixture of generalized linear models: a logistic model for the binary occurrence of precipitation and a Gamma distribution model for the wet day precipitation amount. The statistical model allows for the investigation of the role of each factor in determining variations and long-term trends. Saturation specific humidity q(s) has a generally negative effect on global precipitation occurrence and on tropical wet pentad precipitation amounts, but has a positive relationship with pentad precipitation amounts at mid- and high latitudes. The North Atlantic Oscillation, a proxy for the meridional temperature gradient, is also found to have a statistically significant positive effect on precipitation over much of the Atlantic region. Residual time trends in wet pentad precipitation are extremely sensitive to the choice of the wet pentad threshold because of increasing trends in low-amplitude precipitation pentads; too low a choice of threshold can lead to a spurious decreasing trend in wet pentad precipitation amounts. However, for not too small thresholds, it is found that the meridional temperature gradient is an important factor for explaining part of the long-term trend in Atlantic precipitation.
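The occurrence/amount mixture can be sketched as a two-stage simulator: a logistic model decides whether a pentad is wet, and a Gamma model generates the wet amount (all coefficients below are illustrative, not fitted values from the paper):

```python
import math
import random

def simulate_pentad_precip(x, b_occ=(-0.5, 1.0), b_amt=(0.0, 0.8),
                           shape=0.7, rng=random):
    """Two-stage draw for one pentad: a logistic model for the binary
    occurrence of precipitation, then a Gamma draw (log link for the
    mean) for the wet amount.  x stands for one covariate such as the
    pressure gradient or saturation specific humidity."""
    p_wet = 1.0 / (1.0 + math.exp(-(b_occ[0] + b_occ[1] * x)))
    if rng.random() >= p_wet:
        return 0.0                                # dry pentad
    mean = math.exp(b_amt[0] + b_amt[1] * x)      # Gamma mean via log link
    return rng.gammavariate(shape, mean / shape)  # scale chosen to keep mean
```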

Relevance:

30.00%

Publisher:

Abstract:

Model catalysts of Pd nanoparticles and films on TiO2(110) were fabricated by metal vapour deposition (MVD). Molecular beam measurements show that the particles are active for CO adsorption, with a global sticking probability of 0.25, but that they are deactivated by annealing above 600 K, an effect indicative of SMSI. The Pd nanoparticles are single crystals oriented with their (111) plane parallel to the surface plane of the titania. Analysis of the surface by atomic resolution STM shows that new structures have formed at the surface of the Pd nanoparticles and films after annealing above 800 K. There are only two structures: a zigzag arrangement and a much more complex "pinwheel" structure. The former has a unit cell containing 7 atoms, and the latter a bigger unit cell containing 25 atoms. These new structures are due to an overlayer of titania that has appeared on the surface of the Pd nanoparticles after annealing, and it is proposed that the surface layer that causes the SMSI effect is a mixed alloy of Pd and Ti with only two discrete ratios of atoms: Pd/Ti of 1:1 (pinwheel) and 1:2 (zigzag). We propose that it is these structures that cause the SMSI effect. (c) 2005 Elsevier Inc. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

One of the primary goals of the Center for Integrated Space Weather Modeling (CISM) effort is to assess and improve prediction of the solar wind conditions in near-Earth space, arising from both quasi-steady and transient structures. We compare 8 years of L1 in situ observations to predictions of the solar wind speed made by the Wang-Sheeley-Arge (WSA) empirical model. The mean-square error (MSE) between the observations and model predictions is used to reach a number of useful conclusions: there is no systematic lag in the WSA predictions, the MSE is found to be highest at solar minimum and lowest during the rise to solar maximum, and the optimal lead time for 1 AU solar wind speed predictions is found to be 3 days. However, MSE is shown to frequently be an inadequate "figure of merit" for assessing solar wind speed predictions. A complementary, event-based analysis technique is developed in which high-speed enhancements (HSEs) are systematically selected and associated from observed and model time series. The WSA model is validated using comparisons of the numbers of hit, missed, and false HSEs, along with the timing and speed magnitude errors between the forecasted and observed events. Morphological differences between the different HSE populations are investigated to aid interpretation of the results and improvements to the model. Finally, by defining discrete events in the time series, model predictions from above and below the ecliptic plane can be used to estimate an uncertainty in the predicted HSE arrival times.
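The two verification approaches described above, point-wise MSE and event-based hit/miss/false-alarm counting, can be sketched as follows (the matching-window width is an illustrative choice, not the paper's criterion):

```python
def mse(observed, predicted):
    """Mean-square error between paired time series."""
    return sum((o - p) ** 2 for o, p in zip(observed, predicted)) / len(observed)

def match_events(obs_times, model_times, window=1.5):
    """Event-based verification: associate each observed high-speed
    enhancement with the nearest unmatched forecast event inside the
    window (days), counting hits, misses and false alarms."""
    unmatched = list(model_times)
    hits = misses = 0
    for t in obs_times:
        near = [m for m in unmatched if abs(m - t) <= window]
        if near:
            unmatched.remove(min(near, key=lambda m: abs(m - t)))
            hits += 1
        else:
            misses += 1
    return hits, misses, len(unmatched)   # leftovers are false alarms
```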

Relevance:

30.00%

Publisher:

Abstract:

Urban land surface schemes have been developed to model the distinct features of the urban surface and the associated energy exchange processes. These models have been developed for a range of purposes and make different assumptions related to the inclusion and representation of the relevant processes. Here, the first results of Phase 2 of an international comparison project to evaluate 32 urban land surface schemes are presented. This is the first large-scale systematic evaluation of these models. In four stages, participants were given increasingly detailed information about an urban site for which urban fluxes were directly observed. At each stage, each group returned their models' calculated surface energy balance fluxes. Wide variations are evident in the performance of the models for individual fluxes. No individual model performs best for all fluxes. Providing additional information about the surface generally results in better performance. However, there is clear evidence that a poor choice of parameter values can cause a large drop in performance for models that otherwise perform well. As many models do not perform well across all fluxes, there is a need for caution in their application, and users should be aware of the implications for applications and decision making.

Relevance:

30.00%

Publisher:

Abstract:

A new approach is presented that simultaneously deals with misreporting and "Don't Know" (DK) responses within a dichotomous-choice contingent valuation framework. Utilising a modification of the standard Bayesian probit framework, a Gibbs with Metropolis-Hastings algorithm is used to estimate the posterior densities of the parameters of interest. Several model specifications are applied to two contingent valuation datasets: one on wolf management plans, and one on the US Fee Demonstration Program. We find that DKs are more likely to come from people who would be predicted to have positive utility for the bid; therefore, a DK is more likely to be a YES than a NO. We also find evidence of misreporting, primarily in favour of the NO option. The inclusion of DK responses has an unpredictable impact on willingness-to-pay estimates, since it affects the results for the two datasets differently. Copyright (C) 2009 John Wiley & Sons, Ltd.
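The Bayesian probit machinery underneath such models is usually estimated by data augmentation. A minimal sketch of the standard Albert-Chib Gibbs sampler for a one-covariate probit, without the paper's misreporting/DK layers, which sit on top of this core:

```python
import math
import random

def sample_trunc_normal(mean, positive, rng=random):
    """Rejection draw from N(mean, 1) truncated to be positive (for
    y = 1) or negative (for y = 0); fine away from extreme means."""
    while True:
        z = rng.gauss(mean, 1.0)
        if (z > 0) == positive:
            return z

def probit_gibbs(x, y, n_iter=500, rng=random):
    """Albert-Chib data augmentation for a one-covariate probit model:
    latent z ~ N(x*beta, 1) truncated by the observed y, then
    beta | z is conjugate normal under a flat prior."""
    beta, draws = 0.0, []
    sxx = sum(xi * xi for xi in x)
    for _ in range(n_iter):
        z = [sample_trunc_normal(xi * beta, yi == 1, rng)
             for xi, yi in zip(x, y)]
        mean = sum(xi * zi for xi, zi in zip(x, z)) / sxx
        beta = rng.gauss(mean, 1.0 / math.sqrt(sxx))
        draws.append(beta)
    return draws
```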

Relevance:

30.00%

Publisher:

Abstract:

In this paper, the mixed logit (ML) model, estimated using Bayesian methods, was employed to examine willingness-to-pay (WTP) to consume bread produced with reduced levels of pesticides, so as to ameliorate environmental quality, using data generated by a choice experiment. Model comparison used the marginal likelihood, which is preferable for Bayesian model comparison and testing. Models containing constant and random parameters for a number of distributions were considered, along with models in 'preference space' and 'WTP space' as well as those allowing for misreporting. We found strong support for the ML estimated in WTP space; little support for fixing the price coefficient, a common practice advocated and adopted in the environmental economics literature; and weak evidence of misreporting.
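The difference between the two parameterisations is easiest to see in code: in preference space, WTP is a derived ratio of coefficient draws, whereas in WTP space that ratio is itself the sampled parameter. A minimal sketch of the derived ratio:

```python
def wtp_draws(beta_attr_draws, beta_price_draws):
    """Posterior WTP implied by preference-space draws: the ratio of
    the attribute coefficient to the (negative) price coefficient.
    Estimating directly in 'WTP space' reparameterises the model so
    that this ratio is itself the sampled parameter, avoiding the
    heavy-tailed distribution a random denominator can induce."""
    return [-ba / bp for ba, bp in zip(beta_attr_draws, beta_price_draws)]
```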

Relevance:

30.00%

Publisher:

Abstract:

Using mixed logit models to analyse choice data is common but requires ex ante specification of the functional forms of the preference distributions. We make the case for greater use of bounded functional forms and propose the use of the marginal likelihood, calculated using Bayesian techniques, as a single measure of model performance across non-nested mixed logit specifications. Using this measure leads to very different rankings of model specifications compared to alternative rule-of-thumb measures. The approach is illustrated using data from a choice experiment regarding GM food types, which provides insights regarding the recent WTO dispute between the EU and the US, Canada and Argentina, and whether labelling and trade regimes should be based on the production process or product composition.
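The marginal likelihood integrates the likelihood over the prior, p(y) = ∫ p(y|θ)p(θ)dθ, which is what makes it comparable across non-nested specifications. A crude Monte Carlo sketch for a toy one-coefficient logit (the paper's computation is MCMC-based; simple prior averaging like this is viable only for tiny models):

```python
import math
import random

def log_likelihood(beta, x, y):
    """Binary logit log-likelihood with a single coefficient."""
    ll = 0.0
    for xi, yi in zip(x, y):
        p = 1.0 / (1.0 + math.exp(-beta * xi))
        ll += math.log(p) if yi else math.log(1.0 - p)
    return ll

def marginal_likelihood(x, y, n_draws=5000, prior_sd=2.0, rng=random):
    """Crude Monte Carlo estimate of log p(y) = log E_prior[p(y|beta)],
    averaging the likelihood over draws from a N(0, prior_sd^2) prior.
    A log-sum-exp keeps the average numerically stable."""
    lls = [log_likelihood(rng.gauss(0.0, prior_sd), x, y)
           for _ in range(n_draws)]
    m = max(lls)
    return m + math.log(sum(math.exp(ll - m) for ll in lls) / n_draws)
```

Because the prior is integrated out rather than maximised over, an over-flexible specification is automatically penalised, which is why rankings can differ sharply from rule-of-thumb fit measures.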