72 results for "Model for bringing into play"


Relevance:

30.00%

Abstract:

The general objective of the study was to empirically test a reciprocal model of job satisfaction and life satisfaction while controlling for some sociodemographic variables. A total of 827 employees working in 34 car dealerships in Northern Quebec were surveyed (56% response rate). The multiple-item questionnaires were analysed using correlation analysis, chi-square tests, and ANOVAs. Results show interesting patterns in the relationship between job and life satisfaction: 49.2% of individuals exhibit a spillover relationship, 43.5% compensation, and 7.3% segmentation. The results, nonetheless, are far richer, and the model becomes much more refined when sociodemographic indicators are taken into account. Overall, sociodemographic variables show some effects on each satisfaction individually, but also on the nature of the interrelation between life and work satisfaction.
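As a rough illustration of the kind of test reported above (a minimal sketch; the grouping variable, counts, and library choice are mine, not the authors'), a chi-square test of independence between a sociodemographic category and the relationship type could look like this in Python:

# Minimal sketch: does relationship type (spillover / compensation /
# segmentation) vary with a sociodemographic group?
# All counts below are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [210, 180, 30],   # group A: spillover, compensation, segmentation
    [197, 180, 30],   # group B
])
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")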

Relevance:

30.00%

Abstract:

We present a model of shadow banking in which financial intermediaries originate and trade loans, assemble these loans into diversified portfolios, and then finance these portfolios externally with riskless debt. In this model: i) outside investor wealth drives the demand for riskless debt and indirectly for securitization, ii) intermediary assets and leverage move together as in Adrian and Shin (2010), and iii) intermediaries increase their exposure to systematic risk as they reduce their idiosyncratic risk through diversification, as in Acharya, Schnabl, and Suarez (2010). Under rational expectations, the shadow banking system is stable and improves welfare. When investors and intermediaries neglect tail risks, however, the expansion of risky lending and the concentration of risks in the intermediaries create financial fragility and fluctuations in liquidity over time.

Relevance:

30.00%

Abstract:

Using a suitable Hull and White type formula, we develop a methodology to obtain a second-order approximation to the implied volatility for very short maturities. Using this approximation, we accurately calibrate the full set of parameters of the Heston model. One of the reasons our calibration for short maturities is so accurate is that we also take into account the term structure for large maturities. We may say that calibration is not "memoryless", in the sense that the option's behaviour far away from maturity does influence calibration when the option gets close to expiration. Our results provide a way to perform a quick calibration of a closed-form approximation to vanilla options that can then be used to price exotic derivatives. The methodology is simple, accurate, fast, and requires minimal computational cost.
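As context for the calibration step, here is a generic least-squares calibration of the five Heston parameters, a sketch under my own assumptions: it uses a numerical Heston (1993) pricer and synthetic quotes, whereas the paper's point is to replace the pricer with its closed-form short-maturity approximation.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import least_squares

def heston_call(S0, K, T, r, v0, kappa, theta, sigma, rho):
    # European call under Heston (1993), P1/P2 probability formulation.
    # (The "little Heston trap" variant is preferred for long maturities.)
    x, a = np.log(S0), kappa * theta

    def P(j):
        u_j = 0.5 if j == 1 else -0.5
        b_j = kappa - rho * sigma if j == 1 else kappa

        def integrand(u):
            iu = 1j * u
            d = np.sqrt((rho * sigma * iu - b_j) ** 2
                        - sigma ** 2 * (2 * u_j * iu - u ** 2))
            g = (b_j - rho * sigma * iu + d) / (b_j - rho * sigma * iu - d)
            C = (r * iu * T + a / sigma ** 2
                 * ((b_j - rho * sigma * iu + d) * T
                    - 2 * np.log((1 - g * np.exp(d * T)) / (1 - g))))
            D = ((b_j - rho * sigma * iu + d) / sigma ** 2
                 * (1 - np.exp(d * T)) / (1 - g * np.exp(d * T)))
            return (np.exp(-iu * np.log(K) + C + D * v0 + iu * x) / iu).real

        return 0.5 + quad(integrand, 1e-8, 200.0, limit=200)[0] / np.pi

    return S0 * P(1) - K * np.exp(-r * T) * P(2)

# Synthetic "market" quotes (strike, maturity, price); placeholders only.
S0, r = 100.0, 0.01
quotes = [(90.0, 0.1, 10.6), (100.0, 0.1, 1.9), (110.0, 0.1, 0.1),
          (90.0, 0.5, 12.0), (100.0, 0.5, 4.5), (110.0, 0.5, 1.1)]

def residuals(p):
    return [heston_call(S0, K, T, r, *p) - mkt for K, T, mkt in quotes]

fit = least_squares(residuals, x0=[0.04, 1.5, 0.04, 0.5, -0.7],
                    bounds=([1e-4, 1e-2, 1e-4, 1e-2, -0.99],
                            [1.0, 10.0, 1.0, 2.0, 0.99]))
print(dict(zip(["v0", "kappa", "theta", "sigma", "rho"], fit.x.round(4))))

A closed-form approximation in place of heston_call would make each objective evaluation essentially free, which is what enables the quick calibration the abstract describes.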

Relevance:

30.00%

Abstract:

We use a dynamic monopolistic competition model to show that an economy that inherits a small range of specialized inputs can be trapped into a lower stage of development. The limited availability of specialized inputs forces the final goods producers to use a labor-intensive technology, which in turn implies a small inducement to introduce new intermediate inputs. The start-up costs, which make the intermediate inputs producers subject to dynamic increasing returns, and the pecuniary externalities that result from factor substitution in the final goods sector play essential roles in the model.

Relevance:

30.00%

Abstract:

The aim of this paper is to analyse empirically entry decisions by generic firms into markets with tough regulation. Generic drugs might be a key driver of competition and cost containment in pharmaceutical markets. The dynamics of patent and pricing reforms across drug markets in Spain are useful for identifying the impact of regulations on generic entry. Estimates from a count data model using a panel of 86 active ingredients during the 1999-2005 period show that the drivers of generic entry in markets with price regulation are similar to those in less regulated markets: generic firms' entries are positively affected by market size and the time trend, and negatively affected by the number of incumbent laboratories and the number of substitute active ingredients. We also find that, contrary to what policy makers expected, the system of reference pricing considerably restrains generic entry. Short-run brand-name drug price reductions are obtained by governments at the cost of the long-run benefits of fostering generic entry and post-patent competition in these markets.
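To make the econometric setup concrete, here is a minimal sketch of a Poisson count-data regression of the kind the abstract describes; the regressor names mirror the abstract, but the synthetic data and coefficients are invented, not the paper's dataset or estimates.

# Hedged sketch: Poisson regression of generic-entry counts on
# market characteristics, using synthetic panel data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 86 * 7  # 86 active ingredients x 7 years (1999-2005)
df = pd.DataFrame({
    "log_market_size": rng.normal(10, 1, n),
    "trend": np.tile(np.arange(7), 86),
    "incumbents": rng.poisson(3, n),
    "substitutes": rng.poisson(2, n),
    "reference_pricing": rng.integers(0, 2, n),
})
# Synthetic entry counts with signs matching the reported findings.
lam = np.exp(-8 + 0.8 * df.log_market_size + 0.1 * df.trend
             - 0.15 * df.incumbents - 0.1 * df.substitutes
             - 0.3 * df.reference_pricing)
df["entries"] = rng.poisson(lam)

X = sm.add_constant(df.drop(columns="entries"))
print(sm.GLM(df.entries, X, family=sm.families.Poisson()).fit().summary())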

Relevance:

30.00%

Abstract:

Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity-penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite-sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near-optimal rates of convergence, when each model class has an infinite VC or pseudo-dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
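A structure-only sketch of the two-stage scheme follows; the polynomial model classes, the bootstrap stand-in for an "empirical cover", and the penalty form are my illustrative choices, not the paper's covering construction or bounds.

import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, 200)
Y = np.sin(3 * X) + rng.normal(0, 0.3, 200)

half = len(X) // 2
X1, Y1, X2, Y2 = X[:half], Y[:half], X[half:], Y[half:]

def risk(coef, x, y):                 # empirical squared-error risk
    return np.mean((np.polyval(coef, x) - y) ** 2)

best = None
for deg in range(8):                  # model classes F_1, ..., F_8
    # Stage 1: crude stand-in for an empirical cover of the class,
    # built from resamples of the first half of the data.
    cover = []
    for _ in range(20):
        idx = rng.integers(0, half, half)
        cover.append(np.polyfit(X1[idx], Y1[idx], deg))
    # Stage 2: candidate = cover element with least risk on part two.
    cand = min(cover, key=lambda c: risk(c, X2, Y2))
    # Data-based complexity penalty grows with class and cover size.
    penalty = np.sqrt((deg + 1) * np.log(len(cover)) / half)
    score = risk(cand, X2, Y2) + penalty
    if best is None or score < best[0]:
        best = (score, deg)
print(f"selected model class: degree {best[1]}")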

Relevance:

30.00%

Abstract:

This paper presents a general equilibrium model of money demand where the velocity of money changes in response to endogenous fluctuations in the interest rate. The parameter space can be divided into two subsets: one where velocity is constant and equal to one, as in cash-in-advance models, and another where velocity fluctuates, as in Baumol (1952). Despite its simplicity in terms of parameters to calibrate, the model performs surprisingly well. In particular, it approximates the variability of money velocity observed in the U.S. over the post-war period. The model is then used to analyze the welfare costs of inflation under uncertainty. This application quantifies the errors that arise from computing the costs of inflation with deterministic models. It turns out that the size of this difference is small, at least for the levels of uncertainty estimated for the U.S. economy.
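For reference, the Baumol (1952) mechanism the abstract alludes to is the standard inventory result (textbook material, in generic notation, not reproduced from the paper): choosing the withdrawal size $C$ to trade off trips to the bank against forgone interest,

$$\min_{C}\; b\,\frac{Y}{C} + i\,\frac{C}{2} \;\Longrightarrow\; C^{*}=\sqrt{\frac{2bY}{i}}, \qquad V \equiv \frac{Y}{C^{*}/2}=\sqrt{\frac{2iY}{b}},$$

so velocity $V$ rises with the nominal interest rate $i$, in contrast with the constant-velocity cash-in-advance regime.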

Relevance:

30.00%

Abstract:

Many dynamic revenue management models divide the sale period into a finite number of periods T and assume, invoking a fine-enough grid of time, that each period sees at most one booking request. These Poisson-type assumptions restrict the variability of the demand in the model, but researchers and practitioners have been willing to overlook this for the sake of tractability. In this paper, we criticize this model from another angle. Estimating the discrete finite-period model poses problems of indeterminacy and non-robustness: arbitrarily fixing T leads to arbitrary control values, while estimating T from data adds a further layer of indeterminacy. To counter this, we first propose an alternative finite-population model that avoids fixing T and allows a wider range of demand distributions, while retaining the useful marginal-value properties of the finite-period model. The finite-population model still requires jointly estimating the market size and the parameters of the customer purchase model without observing no-purchases. Estimation of market size when no-purchases are unobservable has rarely been attempted in the marketing or revenue management literature. Indeed, we point out that it is akin to the classical statistical problem of estimating the parameters of a binomial distribution with unknown population size and success probability, and hence likely to be challenging. However, when the purchase probabilities are given by a functional form such as a multinomial-logit model, we propose an estimation heuristic that exploits the specification of the functional form, the variety of the offer sets in a typical RM setting, and qualitative knowledge of arrival rates. Finally, we perform simulations to show that the estimator is very promising for obtaining unbiased estimates of the population size and the model parameters.
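The classical problem invoked here can be made concrete with a minimal sketch (synthetic data; the profile-likelihood grid search is an illustrative choice, not the paper's heuristic, which additionally exploits the multinomial-logit structure and varying offer sets):

# Hedged sketch: maximum likelihood for Binomial(N, p) with BOTH the
# population size N and success probability p unknown, from repeated
# counts of observed purchases (no-purchases are never observed).
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(2)
true_N, true_p = 500, 0.04
sales = rng.binomial(true_N, true_p, size=60)  # daily purchase counts

def profile_loglik(N):
    p_hat = sales.mean() / N                   # MLE of p for a given N
    return binom.logpmf(sales, N, p_hat).sum()

candidates = np.arange(sales.max(), 5000)
N_hat = candidates[np.argmax([profile_loglik(N) for N in candidates])]
print(f"N_hat = {N_hat}, p_hat = {sales.mean() / N_hat:.4f}")
# The profile likelihood is notoriously flat in N, which is exactly
# the kind of non-robustness the abstract warns about.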

Relevance:

30.00%

Abstract:

Most facility location decision models ignore the fact that, for a facility to survive, it needs a minimum demand level to cover its costs. In this paper we present a decision model for a firm that wishes to enter a spatial market where several competitors are already located. This market is such that each outlet faces a demand threshold that must be achieved in order to survive. The firm wishes to know where to locate its outlets so as to maximize its market share, taking the threshold level into account. It may happen that, due to this new entrance, some competitors will not be able to meet the threshold and will therefore disappear. A formulation is presented, together with a heuristic solution method and computational experience.
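One minimal formulation in this spirit (a generic sketch under my own notation, not the paper's exact model): let $x_j\in\{0,1\}$ open candidate site $j$, let $a_{ij}=1$ if a new outlet at $j$ would win customer $i$ from the incumbents, let $w_i$ be demand weights, and let $y_{ij}$ be the fraction of customer $i$'s demand served from $j$:

$$\max \sum_i \sum_j w_i\, y_{ij} \quad \text{s.t.}\quad y_{ij}\le a_{ij}\,x_j,\;\; \sum_j y_{ij}\le 1,\;\; \sum_i w_i\, y_{ij}\ge T\,x_j,\;\; x_j\in\{0,1\},\; y_{ij}\ge 0,$$

where the last family of constraints enforces the survival threshold $T$ at every opened outlet.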

Relevance:

30.00%

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical {\sc vc} dimension, empirical {\sc vc} entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
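The label-flipping equivalence can be seen in a minimal sketch; the classifier family (decision stumps) and the synthetic data are my illustrative choices.

# Hedged sketch: maximal discrepancy between the two halves of the
# sample, computed by empirical risk minimization on a copy of the
# data with the second-half labels flipped.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=200)
y = (X + rng.normal(0, 0.8, 200) > 0).astype(int)

def erm_error(X, y):
    # Smallest empirical 0-1 error over threshold classifiers
    # 1{x > t} and their complements.
    best = 1.0
    for t in np.unique(X):
        for flip in (0, 1):
            pred = (X > t).astype(int) ^ flip
            best = min(best, np.mean(pred != y))
    return best

y_flipped = y.copy()
y_flipped[100:] ^= 1
# min ERM risk on the flipped sample <-> max discrepancy between halves
discrepancy = 1.0 - 2.0 * erm_error(X, y_flipped)
print(f"maximal discrepancy penalty = {discrepancy:.3f}")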

Relevance:

30.00%

Abstract:

A polarizable quantum mechanics and molecular mechanics model has been extended to account for the difference between the macroscopic electric field and the actual electric field felt by the solute molecule. This enables the calculation of effective microscopic properties which can be related to macroscopic susceptibilities directly comparable with experimental results. By separating the discrete local field into two distinct contributions, we define two different microscopic properties, the so-called solute and effective properties. The solute properties account for the pure solvent effects, i.e., effects present even when the macroscopic electric field is zero, and the effective properties account for both the pure solvent effects and the effect of the dipoles induced in the solvent by the macroscopic electric field. We present results for the linear and nonlinear polarizabilities of water and acetonitrile, both in the gas phase and in the liquid phase. For all the properties we find that the pure solvent effect increases the properties, whereas the induced electric field decreases them. Furthermore, we present results for the refractive index, third-harmonic generation (THG), and electric-field-induced second-harmonic generation (EFISH) for liquid water and acetonitrile. In general, we find good agreement between the calculated and experimental results for the refractive index and the THG susceptibility. For the EFISH susceptibility, however, the difference between experiment and theory is larger, since the orientational effect arising from the static electric field is not accurately described.
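As orientation, the standard continuum (Lorentz) relations that the discrete local-field partition above refines are, up to unit conventions: a molecule feels $E_{\mathrm{loc}}=\frac{\varepsilon+2}{3}\,E$ rather than the macroscopic field $E$, and macroscopic susceptibilities follow from molecular (hyper)polarizabilities through local-field factors, e.g. for THG

$$\chi^{(3)}(-3\omega;\omega,\omega,\omega)\;\propto\; N\, f^{3\omega}\,(f^{\omega})^{3}\,\gamma(-3\omega;\omega,\omega,\omega), \qquad f^{\omega}=\frac{\varepsilon(\omega)+2}{3},$$

with $N$ the number density and $\gamma$ the second hyperpolarizability. These textbook relations are quoted here for context only, not taken from the paper's discrete formulation.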

Relevance:

30.00%

Abstract:

The number of private gardens has increased in recent years, creating a more pleasant urban model, but not without environmental impact, including the increased energy consumption that is the focus of this study. The estimation of costs and energy consumption for the generic typology of private urban gardens is based on two simplifying assumptions: square geometry, with surface areas from 25 to 500 m², and a hydraulic design with a single pipe. In total, eight sprinkler models were considered, along with their possible working pressures, and 31 pumping units grouped into 5 series that adequately cover the range of required flow rates and pressures, resulting in 495 hydraulic designs, repeated for two climatically different locations in the Spanish Mediterranean area (Girona and Elche). Mean total irrigation costs for the locality with lower water needs (Girona) and the one with greater needs (Elche) were €2,974 ha⁻¹ yr⁻¹ and €3,383 ha⁻¹ yr⁻¹, respectively. Energy costs accounted for 11.4% of the total cost for the first location and 23.0% for the second. Although a suitable choice of the hydraulic elements of the setup is essential and may provide average energy savings of 77%, the potential energy savings do not constitute a significant incentive in irrigation system design, because the energy cost is low in relation to the installation cost. The low efficiency of the pumping units used in this type of garden is the biggest obstacle and constraint to achieving a high-quality energy solution.

Relevance:

30.00%

Abstract:

The possible association between the microquasar LS 5039 and the EGRET source 3EG J1824-1514 suggests that microquasars could also be sources of high-energy gamma-rays. In this paper, we explore, with a detailed numerical model, whether this system can produce the emission detected by EGRET (>100 MeV) through inverse Compton (IC) scattering. Our numerical approach considers a population of relativistic electrons entrained in a cylindrical inhomogeneous jet, interacting with both the radiation and the magnetic fields, and takes into account the Thomson and Klein-Nishina regimes of interaction. The computed spectrum reproduces the observed spectral characteristics at very high energies.
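For orientation, the two scattering regimes mentioned above obey the standard relations (textbook results, not specific to this paper): a seed photon of energy $\epsilon$ scattered by an electron of Lorentz factor $\gamma$ emerges with

$$E_\gamma \sim \gamma^{2}\epsilon \quad (\gamma\epsilon \ll m_e c^{2},\ \text{Thomson}), \qquad E_\gamma \approx \gamma m_e c^{2} \quad (\gamma\epsilon \gtrsim m_e c^{2},\ \text{Klein-Nishina}),$$

with the scattering cross-section strongly suppressed in the Klein-Nishina regime.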

Relevance:

30.00%

Abstract:

Context. It has been proposed that the origin of the very high-energy photons emitted from high-mass X-ray binaries with jet-like features, so-called microquasars (MQs), is related to hadronic interactions between relativistic protons in the jet and cold protons of the stellar wind. Leptonic secondary emission should be calculated in a complete hadronic model that includes the effects of pairs from charged pion decays inside the jets and the emission from pairs generated by gamma-ray absorption in the photosphere of the system. Aims. We aim at predicting the broadband spectrum from a general hadronic microquasar model, taking into account the emission from secondaries created by charged pion decay inside the jet. Methods. The particle energy distribution for secondary leptons injected along the jets is consistently derived taking the energy losses into account. The spectral energy distribution resulting from these leptons is calculated for different assumed values of the magnetic field inside the jets. We also compute the spectrum of the gamma-rays produced by neutral pion decay and processed by electromagnetic cascades in the stellar photon field. Results. We show that the secondary emission can dominate the spectral energy distribution at low energies (~1 MeV). At high energies, the production spectrum can be significantly distorted by the effect of electromagnetic cascades. These effects are phase-dependent, and some variability modulated by the orbital period is predicted.
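For orientation, the hadronic channel referred to above proceeds through the standard reaction chains (summarized from textbook particle physics, not quoted from the paper):

$$p+p \to p+p+\pi^{0}/\pi^{\pm}, \qquad \pi^{0}\to \gamma\gamma, \qquad \pi^{+}\to \mu^{+}\nu_{\mu}\to e^{+}\nu_{e}\bar{\nu}_{\mu}\nu_{\mu}, \qquad \pi^{-}\to \mu^{-}\bar{\nu}_{\mu}\to e^{-}\bar{\nu}_{e}\nu_{\mu}\bar{\nu}_{\mu},$$

so neutral pions feed the gamma-ray spectrum while charged pions inject the secondary electron-positron pairs whose emission is computed here.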

Relevance:

30.00%

Abstract:

Tomato (Solanum lycopersicum) is a major crop plant and a model system for fruit development. Solanum is one of the largest angiosperm genera and includes annual and perennial plants from diverse habitats. Here we present a high-quality genome sequence of domesticated tomato, a draft sequence of its closest wild relative, Solanum pimpinellifolium, and compare them to each other and to the potato genome (Solanum tuberosum). The two tomato genomes show only 0.6% nucleotide divergence and signs of recent admixture, but show more than 8% divergence from potato, with nine large and several smaller inversions. In contrast to Arabidopsis, but similar to soybean, tomato and potato small RNAs map predominantly to gene-rich chromosomal regions, including gene promoters. The Solanum lineage has experienced two consecutive genome triplications: one that is ancient and shared with rosids, and a more recent one. These triplications set the stage for the neofunctionalization of genes controlling fruit characteristics, such as colour and fleshiness.