116 results for Intractable Likelihood


Relevance:

10.00%

Publisher:

Abstract:

We set up a dynamic model of firm investment in which liquidity constraints enter explicitly into the firm's maximization problem. The optimal policy rules are incorporated into a maximum likelihood procedure which estimates the structural parameters of the model. Investment is positively related to the firm's internal financial position when the firm is relatively poor. This relationship disappears for wealthy firms, which can reach their desired level of investment. Borrowing is an increasing function of financial position for poor firms. This relationship is reversed as a firm's financial position improves, and large firms hold little debt. Liquidity-constrained firms may have unused credit lines and the capacity to invest further if they desire. However, the fear that liquidity constraints will become binding in the future induces them to invest only when internal resources increase. We estimate the structural parameters of the model and use them to quantify the importance of liquidity constraints for firms' investment. We find that liquidity constraints matter significantly for the investment decisions of firms. If firms can finance investment by issuing fresh equity, rather than with internal funds or debt, the average capital stock is almost 35% higher over a period of 20 years. Transitory shocks to internal funds have a sustained effect on the capital stock. This effect lasts for several periods and is more persistent for small firms than for large firms. A 10% negative shock to firm fundamentals reduces the capital stock of firms which face liquidity constraints by almost 8% over a period, as opposed to only 3.5% for firms which do not face these constraints.
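A minimal sketch of the model's first ingredient may help fix ideas: value-function iteration for a stylized firm that can only finance investment out of internal funds, so a liquidity constraint binds whenever desired capital exceeds current resources. The functional forms, grids, and parameter values below are illustrative assumptions, not the paper's specification.

```python
# Value-function iteration for a risk-neutral firm that maximizes
# discounted dividends and can only invest out of internal funds w.
# All parameters and functional forms are hypothetical.
import numpy as np

alpha, beta, delta = 0.6, 0.95, 0.10        # hypothetical curvature, discount, depreciation
w_grid = np.linspace(0.1, 10.0, 300)        # internal financial position
k_grid = w_grid.copy()                      # candidate next-period capital

V = np.zeros_like(w_grid)
for _ in range(1000):                       # iterate the Bellman operator
    # next-period resources: output plus undepreciated capital
    w_next = k_grid ** alpha + (1 - delta) * k_grid
    V_next = np.interp(w_next, w_grid, V)   # (np.interp clamps beyond the grid)
    # dividends today: internal funds minus investment, must be nonnegative,
    # which is the liquidity constraint k' <= w
    div = w_grid[:, None] - k_grid[None, :]
    val = np.where(div >= 0, div + beta * V_next[None, :], -np.inf)
    V_new = val.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy_k = k_grid[val.argmax(axis=1)]       # optimal capital, increasing in w
```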

Relevance:

10.00%

Publisher:

Abstract:

The Network Revenue Management problem can be formulated as a stochastic dynamic program (DP, with "optimal" solution value V*) whose exact solution is computationally intractable. Consequently, a number of heuristics have been proposed in the literature, the most popular of which are the deterministic linear programming (DLP) model and a simulation-based method, the randomized linear programming (RLP) model. Both methods give upper bounds on the optimal solution value (the DLP and PHLP bounds, respectively). These bounds are used to provide control values that can be used in practice to make accept/deny decisions for booking requests. Recently Adelman [1] and Topaloglu [18] have proposed alternate upper bounds, the affine relaxation (AR) bound and the Lagrangian relaxation (LR) bound respectively, and showed that their bounds are tighter than the DLP bound. Tight bounds are of great interest, as it appears from empirical studies and practical experience that models that give tighter bounds also lead to better controls (better in the sense that they lead to more revenue). In this paper we give tightened versions of three bounds, calling them sAR (strong Affine Relaxation), sLR (strong Lagrangian Relaxation) and sPHLP (strong Perfect Hindsight LP), and show relations between them. Specifically, we show that the sPHLP bound is tighter than the sLR bound, and the sAR bound is tighter than the LR bound. The techniques for deriving the sLR and sPHLP bounds can potentially be applied to other instances of weakly coupled dynamic programming.
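For concreteness, the DLP heuristic mentioned above replaces stochastic demand with its expectation and solves a linear program whose optimal value upper-bounds the DP optimum. A minimal sketch on an invented two-leg, three-product network (all fares, capacities, and demands are toy assumptions):

```python
# Deterministic LP (DLP) bound for network revenue management:
# allocate expected demand to products subject to leg capacities.
import numpy as np
from scipy.optimize import linprog

fares  = np.array([100.0, 150.0, 220.0])   # fare per product (toy numbers)
A      = np.array([[1, 0, 1],              # leg-product incidence matrix;
                   [0, 1, 1]])             # product 3 uses both legs
cap    = np.array([50.0, 40.0])            # leg capacities
mean_d = np.array([60.0, 35.0, 20.0])      # expected demand per product

# linprog minimizes, so negate fares; allocations bounded by expected demand
res = linprog(-fares, A_ub=A, b_ub=cap,
              bounds=list(zip(np.zeros(3), mean_d)), method="highs")
dlp_bound = -res.fun                       # upper bound on optimal revenue V*
print(dlp_bound)                           # 10400.0 for these toy data
```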

Relevance:

10.00%

Publisher:

Abstract:

The network choice revenue management problem models customers as choosing from an offer set, and the firm decides the best subset to offer at any given moment to maximize expected revenue. The resulting dynamic program for the firm is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. However, under the choice-set paradigm, when the segment consideration sets overlap the CDLP is difficult to solve. Column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper, starting with a concave program formulation based on segment-level consideration sets called SDCP, we add a class of constraints called product constraints that project onto subsets of intersections. In addition we propose a natural direct tightening of SDCP called ?SDCP, and compare the performance of both methods on the benchmark data sets in the literature. Both the product constraints and the ?SDCP method are very simple and easy to implement, and are applicable to the case of overlapping segment consideration sets. In our computational testing on the benchmark data sets, SDCP with product constraints achieves the CDLP value at a fraction of the CPU time taken by column generation, and we believe it is a very promising approach for quickly approximating CDLP when segment consideration sets overlap and the consideration sets themselves are relatively small.

Relevance:

10.00%

Publisher:

Abstract:

Several estimators of the expectation, median and mode of the lognormal distribution are derived. They aim to be approximately unbiased, efficient, or have a minimax property in the class of estimators we introduce. The small-sample properties of these estimators are assessed by simulations and, when possible, analytically. Some of these estimators of the expectation are far more efficient than the maximum likelihood or the minimum-variance unbiased estimator, even for substantial sample sizes.
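As a quick illustration of why the choice of estimator matters here, the sketch below compares two textbook estimators of the lognormal expectation exp(μ + σ²/2) by Monte Carlo: the raw sample mean and the plug-in maximum-likelihood estimator. The simulation settings are arbitrary, and the paper's proposed estimators are not reproduced.

```python
# Monte Carlo comparison of two standard estimators of E[X] = exp(mu + sigma^2/2)
# for lognormal data; settings are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 0.0, 1.5, 30, 20_000
true_mean = np.exp(mu + sigma**2 / 2)

x = rng.lognormal(mu, sigma, size=(reps, n))
sample_mean = x.mean(axis=1)                   # unbiased but high-variance
logs = np.log(x)
m, s2 = logs.mean(axis=1), logs.var(axis=1)    # MLEs of mu and sigma^2
mle = np.exp(m + s2 / 2)                       # plug-in MLE of the expectation

for name, est in [("sample mean", sample_mean), ("plug-in MLE", mle)]:
    print(f"{name:12s} bias={est.mean() - true_mean:+.3f} "
          f"MSE={np.mean((est - true_mean) ** 2):.3f}")
```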

Relevance:

10.00%

Publisher:

Abstract:

We study the statistical properties of three estimation methods for a model of learning that is often fitted to experimental data: quadratic deviation measures without unobserved heterogeneity, and maximum likelihood with and without unobserved heterogeneity. After discussing identification issues, we show that the estimators are consistent and provide their asymptotic distribution. Using Monte Carlo simulations, we show that ignoring unobserved heterogeneity can lead to seriously biased estimates in samples of the length typical of actual experiments. Better small-sample properties are obtained if unobserved heterogeneity is introduced: rather than estimating the parameters for each individual, the individual parameters are treated as random variables, and the distribution of those random variables is estimated.
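The following sketch illustrates the random-parameters idea in its simplest form: individual parameters are drawn from a normal distribution whose mean and dispersion are estimated by simulated maximum likelihood. The Bernoulli "learning" model below is a placeholder, not the model studied in the paper.

```python
# Simulated maximum likelihood with unobserved heterogeneity: individual
# parameters theta_i ~ N(mu, tau^2) are integrated out by simulation.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)
n_subj, n_trials, draws = 50, 40, 200
theta_true = rng.normal(0.5, 1.0, n_subj)               # latent individual parameters
y = rng.binomial(1, expit(theta_true)[:, None], (n_subj, n_trials))

eps = rng.standard_normal(draws)                        # common simulation draws

def neg_loglik(params):
    mu, log_tau = params
    theta = mu + np.exp(log_tau) * eps                  # heterogeneity draws
    p = expit(theta)[None, :]                           # (1, draws)
    k = y.sum(axis=1, keepdims=True)                    # successes per subject
    # per-subject likelihood averaged over draws
    # (binomial coefficient omitted: constant in the parameters)
    lik = (p**k * (1 - p)**(n_trials - k)).mean(axis=1)
    return -np.log(lik + 1e-300).sum()

fit = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
print(fit.x)   # estimates of mu and log(tau)
```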

Relevance:

10.00%

Publisher:

Abstract:

Youth is one of the phases in the life cycle when some of the most decisive life transitions take place. Entering the labour market or leaving the parental home are events with important consequences for the economic well-being of young adults. In this paper, the interrelationship between employment, residential emancipation and poverty dynamics is studied for eight European countries by means of an econometric model with feedback effects. Results show that genuine state dependence in youth poverty is positive and highly significant. The evidence points to a strong causal effect between poverty and leaving home in the Scandinavian countries; there, however, spells of economic hardship do not last long. In Southern Europe, instead, young people tend to leave their parental home much later in order to avoid falling into a poverty state that is more persistent. Past poverty has negative consequences on the likelihood of employment.

Relevance:

10.00%

Publisher:

Abstract:

We study the complexity of rationalizing choice behavior. We do so by analyzing two polar cases, and a number of intermediate ones. In our most structured case, where choice behavior is defined on universal choice domains and satisfies the "weak axiom of revealed preference," finding the complete preorder rationalizing choice behavior is a simple matter. In the polar case, where no restriction whatsoever is imposed, either on choice behavior or on the choice domain, finding the complete preorders that rationalize behavior turns out to be intractable. We show that the task of finding the rationalizing complete preorders is equivalent to a graph problem. This allows existing algorithms from the graph theory literature to be brought to bear on the rationalization of choice.
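A minimal sketch of the graph view: draw an edge x → y whenever x is chosen from a menu containing y, collapse strongly connected components into indifference classes, and read a complete preorder off a topological order of the condensation. The choice data are invented, and this illustrates the reduction rather than the paper's complexity results.

```python
# Revealed-preference digraph and a complete preorder from its condensation.
import networkx as nx

choices = {                          # toy data: menu -> chosen element(s)
    frozenset("ab"): {"a"},
    frozenset("bc"): {"b"},
    frozenset("ac"): {"a"},
}

G = nx.DiGraph()
for menu, chosen in choices.items():
    G.add_nodes_from(menu)
    for x in chosen:
        for y in menu - chosen:
            G.add_edge(x, y)         # x revealed preferred to y

C = nx.condensation(G)               # SCCs become indifference classes
order = list(nx.topological_sort(C)) # consistent ranking, best first
ranking = [sorted(C.nodes[s]["members"]) for s in order]
print(ranking)                       # [['a'], ['b'], ['c']] for these data
```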

Relevance:

10.00%

Publisher:

Abstract:

Foreign language skills represent a form of human capital that can be rewarded in the labor market. Drawing on data from the Adult Education Survey of 2007, this is the first study estimating returns to foreign language skills in Turkey. We contribute to the literature on the economic value of language knowledge, with a special focus on a country characterized by fast economic and social development. Although English is the most widely spoken foreign language in Turkey, we initially consider the economic value of several foreign languages among employed males aged 25 to 65. We find positive and significant returns to proficiency in English and Russian, which increase with the level of competence. Knowledge of French and German also appears to be positively rewarded in the Turkish labor market, although their economic value seems mostly linked to an increased likelihood of holding specific occupations rather than to increased earnings within occupations. Focusing on English, we also explore the heterogeneity in returns to different levels of proficiency by frequency of English use at work, birth cohort, education, occupation and rural/urban location. The results are also robust to the endogenous specification of English language skills.
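A minimal sketch of the kind of wage regression underlying such estimates: log earnings regressed on a language-skill indicator plus controls, on simulated data. The survey variables and the endogeneity correction used in the study are not reproduced here.

```python
# Mincer-style wage regression with a language-skill dummy, on made-up data.
import numpy as np

rng = np.random.default_rng(4)
n = 2000
educ    = rng.integers(5, 18, n).astype(float)   # years of schooling
exper   = rng.uniform(0, 40, n)                  # labor-market experience
english = rng.binomial(1, 0.3, n).astype(float)  # proficiency indicator
log_wage = 1.0 + 0.07*educ + 0.03*exper + 0.15*english + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), educ, exper, english])
beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
print(f"estimated return to English: {beta[3]:.3f}")  # close to the true 0.15
```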

Relevance:

10.00%

Publisher:

Abstract:

The development and testing of an iterative reconstruction algorithm for emission tomography based on Bayesian statistical concepts are described. The algorithm uses the entropy of the generated image as a prior distribution, can be accelerated by the choice of an exponent, and converges uniformly to feasible images by the choice of one adjustable parameter. A feasible image is defined as one that is consistent with the initial data (i.e. an image that, if it were truly the source of radiation in a patient, could have generated the initial data by the Poisson process that governs radioactive disintegration). The fundamental ideas of Bayesian reconstruction are discussed, along with the use of an entropy prior with an adjustable contrast parameter, the use of likelihood with data increment parameters as the conditional probability, and the development of the new fast maximum a posteriori with entropy (FMAPE) algorithm by the successive substitution method. It is shown that in the maximum likelihood estimator (MLE) and FMAPE algorithms, the only correct choice of initial image for the iterative procedure in the absence of a priori knowledge about the image configuration is a uniform field.
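The MLE reconstruction referred to above is typically computed with the multiplicative MLEM update; a minimal sketch, started from a uniform image as the abstract prescribes, is below. The tiny random system matrix is a stand-in for a real projector, and the entropy-prior (FMAPE) term is not shown.

```python
# MLEM (maximum-likelihood expectation-maximization) for Poisson emission data.
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((40, 25))            # hypothetical system matrix: 40 bins x 25 pixels
x_true = rng.random(25)             # hypothetical activity image
y = rng.poisson(A @ x_true)         # Poisson counts from radioactive decay

x = np.ones(25)                     # uniform initial image, per the abstract
sens = A.sum(axis=0)                # per-pixel sensitivity (column sums)
for _ in range(100):
    ratio = y / np.maximum(A @ x, 1e-12)   # measured / predicted counts
    x *= (A.T @ ratio) / sens              # multiplicative MLEM update
```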

Relevance:

10.00%

Publisher:

Abstract:

A new statistical parallax method using the maximum likelihood principle is presented, allowing the simultaneous determination of a luminosity calibration, the kinematic characteristics and the spatial distribution of a given sample. The method has been developed for the exploitation of the Hipparcos data and presents several improvements over previous ones: the effects of sample selection, observational errors, galactic rotation and interstellar absorption are taken into account as an intrinsic part of the formulation (as opposed to external corrections). Furthermore, the method is able to identify and characterize physically distinct groups in inhomogeneous samples, thus avoiding biases due to unidentified components. Moreover, the implementation used by the authors relies extensively on numerical methods, avoiding the need to simplify the equations and thus the bias such simplifications could introduce. Several examples of application using simulated samples are presented, to be followed by applications to real samples in forthcoming articles.

Relevance:

10.00%

Publisher:

Abstract:

The absolute K magnitudes and kinematic parameters of about 350 oxygen-rich Long-Period Variable (LPV) stars are calibrated, by means of an up-to-date maximum-likelihood method, using Hipparcos parallaxes and proper motions together with radial velocities and, as additional data, periods and V-K colour indices. Four groups, differing in their kinematics and mean magnitudes, are found. For each of them, we also obtain the distributions of magnitude, period and de-reddened colour of the base population, as well as de-biased period-luminosity-colour relations and their two-dimensional projections. The SRa semiregulars do not seem to constitute a separate class of LPVs. The SRb appear to belong to two populations of different ages; in a PL diagram, they constitute two evolutionary sequences towards the Mira stage. The Miras of the disk appear to pulsate in a lower-order mode. The slopes of their de-biased PL and PC relations are found to be very different from those of the oxygen Miras of the LMC. This suggests that a significant number of so-called Miras of the LMC are misclassified, and that the Miras of the LMC do not constitute a homogeneous group but include a significant proportion of metal-deficient stars, pointing to a relatively smooth star formation history. As a consequence, one may not trivially transpose the period-luminosity relation from one galaxy to the other.

Relevance:

10.00%

Publisher:

Abstract:

In this paper we present a Bayesian image reconstruction algorithm with an entropy prior (FMAPE) that uses a space-variant hyperparameter. The spatial variation of the hyperparameter allows different degrees of resolution in areas of different statistical characteristics, thus avoiding the large residuals produced by algorithms that use a constant hyperparameter. In the first implementation of the algorithm, we begin by segmenting a maximum likelihood estimator (MLE) reconstruction. The segmentation method is based on a wavelet decomposition and a self-organizing neural network. The result is a predetermined number of extended regions plus a small region for each star or bright object. To assign a different value of the hyperparameter to each extended region and star, we use either feasibility tests or cross-validation methods. Once the set of hyperparameters is obtained, we carry out the final Bayesian reconstruction, leading to a result with decreased bias and excellent visual characteristics. The method has been applied to data from the non-refurbished Hubble Space Telescope, and can also be applied to ground-based images.

Relevance:

10.00%

Publisher:

Abstract:

The COMPTEL unidentified source GRO J1411-64 was observed by INTEGRAL, and its central part also by XMM-Newton. The data analysis shows no hint of new detections at hard X-rays. The flux upper limits presented here constrain the energy spectrum of whatever was producing GRO J1411-64, imposing, in the framework of earlier COMPTEL observations, the existence of a peak in power output located somewhere between 300 and 700 keV for the so-called low state. The Circinus Galaxy is the only source detected within the 4σ location error of GRO J1411-64, but it can be safely excluded as a possible counterpart: the extrapolation of its energy spectrum lies well below that of GRO J1411-64 at MeV energies. 22 significant sources (likelihood > 10) were extracted and analyzed from the XMM-Newton data. Only one of these sources, XMMU J141255.6-635932, is spectrally compatible with GRO J1411-64, although the fact that the soft X-ray observations do not cover the full extent of the COMPTEL source position uncertainty makes an association hard to quantify and thus risky. The unique peak of the power output at high energies (hard X-rays and gamma-rays) resembles that found in the SEDs of blazars or microquasars. However, an analysis using a microquasar model, consisting of a magnetized conical jet filled with relativistic electrons which radiate through synchrotron emission and inverse Compton scattering with star, disk, corona and synchrotron photons, shows that it is hard to comply with all observational constraints. This and the non-detection at hard X-rays place an a posteriori question mark over the physical reality of this source, which is discussed in some detail.

Relevance:

10.00%

Publisher:

Abstract:

Using the REFLEX/HEGESCO survey, this article explores the probability of mismatch between education and work in Central and Eastern Europe. We classify the countries into two groups according to the transparency of educational credentials in the labor market. Poland, the Czech Republic and Slovenia form the more transparent group, while Hungary, Lithuania and Estonia form the more opaque one. We analyze three types of mismatch: vertical (under- and over-education), horizontal (field-of-study mismatch) and skills mismatch. The analysis focuses on the effect of individuals' fields of study and competences on labor market mismatch in these countries. The results show important differences between the two groups of countries studied.

Relevance:

10.00%

Publisher:

Abstract:

In this paper we revisit level models of interest rates in Chile. In addition to the traditional level models of Chan, Karolyi, Longstaff and Sanders (1992) for the US, and Parisi (1998) for Chile, estimated by maximum likelihood, we allow the conditional volatility to incorporate unexpected information shocks (the GARCH model) and also to be a function of the level of the interest rate (the TVP-LEVEL model), as in Brenner, Harjes and Kroner (1996). For this we use market yields on recognition bonds instead of average monthly PDBC auction yields, and we extend the size and frequency of the sample to four weekly yield series with different terms to maturity: 1, 5, 10 and 15 years. The main results of the study can be summarized as follows: the volatility of unexpected rate changes depends positively on the level of rates, especially in the TVP-LEVEL model. We find evidence of mean reversion, in that increments in interest rates were not independent, contrary to the findings of Brenner et al. for the US. The level models are not able to fit volatility adequately compared with a GARCH(1,1) model, and finally, the TVP-LEVEL model does not improve on the results of the GARCH(1,1) model.
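A minimal sketch of maximum-likelihood estimation of the discretized level model Δr_t = a + b·r_{t-1} + ε_t with Var(ε_t) = σ²·r_{t-1}^{2γ}, on simulated data; the GARCH(1,1) and TVP-LEVEL variants add time-varying components to this conditional variance. All numbers are illustrative, not the Chilean series used in the study.

```python
# Maximum likelihood for a discretized CKLS-style "level" model of rates.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
a, b, sigma, gamma = 0.02, -0.05, 0.05, 1.0      # hypothetical "true" values
r = np.empty(1000); r[0] = 0.5
for t in range(1, 1000):                         # simulate the level process
    r[t] = r[t-1] + a + b*r[t-1] + sigma * r[t-1]**gamma * rng.standard_normal()

dr, lag = np.diff(r), r[:-1]

def neg_loglik(p):
    a_, b_, log_s, g_ = p
    var = np.exp(2*log_s) * lag**(2*g_)          # level-dependent variance
    resid = dr - a_ - b_*lag
    return 0.5 * np.sum(np.log(2*np.pi*var) + resid**2 / var)

fit = minimize(neg_loglik, x0=[0.0, 0.0, np.log(0.1), 0.5], method="Nelder-Mead")
print(fit.x)                                     # estimates of a, b, log(sigma), gamma
```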