9 results for Random parameter Logit Model
at Cochin University of Science
Abstract:
This thesis is devoted to the study of some stochastic models in inventories. An inventory system is a facility at which items of materials are stocked. In order to promote the smooth and efficient running of business, and to provide adequate service to the customers, an inventory of materials is essential for any enterprise. When uncertainty is present, inventories are used as a protection against the risk of stock-out. It is advantageous to procure an item before it is needed, at a lower marginal cost; again, by bulk purchasing, the advantage of price discounts can be availed. All these contribute to the formation of inventory. Maintaining inventories is a major expenditure for any organization. For each inventory, the fundamental questions are how much new stock should be ordered and when the orders should be placed. The present study considers several models for single- and two-commodity stochastic inventory problems. The thesis discusses two models. Model I examines the case in which the times elapsed between two consecutive demand points are independent and identically distributed with common distribution function F(·) and finite mean, and in which the demand magnitude depends only on the time elapsed since the previous demand epoch; the time between disasters has an exponential distribution. In Model II, the inter-arrival times of disasters have a general distribution F(·) with finite mean, and the quantity destroyed depends on the time elapsed between disasters; demands form a compound Poisson process with inter-arrival times of demands having finite mean. The thesis also deals with a linearly correlated bulk-demand two-commodity inventory problem, where each arrival demands a random number of items of each commodity C1 and C2, the maximum quantities demanded being a (< S1) and b(
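The demand stream described above can be sketched in a small simulation. This is only an illustrative sketch: the (s, S) replenishment rule, the Uniform{1,…,5} demand sizes, the zero lead time, and all numerical parameters are assumptions for demonstration, not the policies analysed in the thesis.

```python
import random

def simulate_sS(s=20, S=100, rate=1.0, horizon=500.0, seed=7):
    """Simulate an (s, S) inventory under compound Poisson demand.

    Demand epochs arrive with Exp(rate) inter-arrival times; each
    arrival demands a random quantity (Uniform{1,...,5} here).  When
    stock falls to s or below, it is instantly replenished to S
    (zero lead time, for simplicity of illustration).
    """
    rng = random.Random(seed)
    t, stock, reorders = 0.0, S, 0
    while True:
        t += rng.expovariate(rate)        # next demand epoch
        if t > horizon:
            break
        stock = max(stock - rng.randint(1, 5), 0)   # bulk demand
        if stock <= s:                    # reorder point reached
            stock = S
            reorders += 1
    return stock, reorders
```

By construction the stock level always sits in (s, S] between demand epochs, which the simulation makes easy to check empirically.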
Abstract:
In this article it is proved that the stationary Markov sequences generated by minification models are ergodic and uniformly mixing. These results are used to establish the optimal properties of estimators for the parameters in the model. The problem of estimating the parameters in the exponential minification model is discussed in detail.
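As a concrete illustration of the exponential minification model discussed above, the Tavares-type recursion X_n = K·min(X_{n-1}, e_n) with K > 1 can be simulated as follows; the parameter values and the choice of this particular recursion are assumptions for illustration, not a reproduction of the article's estimators.

```python
import random

def exponential_minification(n, K=2.0, seed=42):
    """Simulate the exponential minification process
        X_n = K * min(X_{n-1}, e_n),   K > 1,
    with i.i.d. innovations e_n ~ Exp(rate K - 1).  This choice makes
    the stationary marginal distribution of X_n exactly Exp(1):
        P(X_n > x) = P(X_{n-1} > x/K) * P(e_n > x/K) = e^{-x}.
    """
    rng = random.Random(seed)
    x = rng.expovariate(1.0)          # start in the stationary regime
    path = []
    for _ in range(n):
        e = rng.expovariate(K - 1.0)  # innovation with rate K - 1
        x = K * min(x, e)
        path.append(x)
    return path
```

Since the stationary marginal is Exp(1), the long-run sample mean of the (uniformly mixing) chain should be close to 1, which is the kind of ergodic behaviour the article's results guarantee.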
Abstract:
This thesis is entitled "Bayesian Inference in Exponential and Pareto Populations in the Presence of Outliers". The main theme of the thesis is various estimation problems using the Bayesian approach, falling under the general category of accommodation procedures for analysing Pareto data containing outliers. Chapter II treats the problem of estimation of parameters in the classical Pareto distribution specified by its density function. Chapter IV discusses the estimation of (1.19) when the sample contains a known number of outliers under three different data-generating mechanisms, viz. the exchangeable model. Chapter V considers the prediction of a future observation based on a random sample that contains one contaminant. Chapter VI is devoted to the study of estimation problems concerning the exponential parameters under a k-outlier model.
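To illustrate Bayesian estimation for Pareto data in the outlier-free baseline case: with known scale σ, a Gamma prior on the shape α is conjugate. This is a minimal sketch only; the density (1.19), the outlier models, and the prior parameters used in the thesis are not reproduced here, and the numbers below are assumptions.

```python
import math

def pareto_shape_posterior(data, sigma=1.0, a=2.0, b=1.0):
    """Conjugate Bayesian update for the Pareto shape parameter alpha.

    Likelihood: f(x | alpha) = alpha * sigma**alpha / x**(alpha + 1), x > sigma.
    With a Gamma(a, b) prior on alpha (shape a, rate b), the posterior is
        alpha | data ~ Gamma(a + n, b + sum(log(x_i / sigma))).
    Returns the posterior shape, rate, and posterior mean of alpha.
    """
    n = len(data)
    t = sum(math.log(x / sigma) for x in data)   # sufficient statistic
    shape, rate = a + n, b + t
    return shape, rate, shape / rate
```

For example, with data [1.5, 2.0, 3.0] and the default prior, the posterior is Gamma(5, 1 + log 9), giving a posterior mean of about 1.56 for α.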
Abstract:
Nature is full of phenomena which we call "chaotic", the weather being a prime example. What we mean by this is that we cannot predict it to any significant accuracy, either because the system is inherently complex, or because some of the governing factors are not deterministic. However, during recent years it has become clear that random behaviour can occur even in very simple systems with very few degrees of freedom, without any need for complexity or indeterminacy. The discovery that chaos can be generated even by systems having completely deterministic rules - often models of natural phenomena - has stimulated a lot of research interest recently. Not that this chaos has no underlying order, but it is of a subtle kind, one that has taken a great deal of ingenuity to unravel. In the present thesis, the author introduces a new nonlinear model, a 'modulated' logistic map, and analyses it from the viewpoint of 'deterministic chaos'.
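The thesis's 'modulated' logistic map is not specified in this abstract. As a minimal sketch of deterministic chaos, the ordinary logistic map x_{n+1} = λ·x_n·(1 − x_n) at λ = 4 already exhibits the key property: sensitive dependence on initial conditions, where two orbits starting 10⁻⁸ apart diverge to order-one separation within a few dozen iterations.

```python
def logistic_map(x0, lam=4.0, n=50):
    """Iterate the logistic map x_{n+1} = lam * x_n * (1 - x_n).

    For lam = 4 and x0 in [0, 1] the orbit stays in [0, 1] but is
    chaotic: nearby initial conditions separate exponentially fast.
    Returns the orbit [x_0, x_1, ..., x_n].
    """
    xs = [x0]
    for _ in range(n):
        xs.append(lam * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two orbits with nearly identical starting points:
orbit_a = logistic_map(0.3)
orbit_b = logistic_map(0.3 + 1e-8)
```

Despite the completely deterministic rule, the difference between the two orbits grows until they are effectively uncorrelated - the "subtle order" behind apparent randomness that the thesis investigates.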
Abstract:
Nanocrystalline Fe–Ni thin films were prepared by partial crystallization of vapour deposited amorphous precursors. The microstructure was controlled by annealing the films at different temperatures. X-ray diffraction, transmission electron microscopy and energy dispersive x-ray spectroscopy investigations showed that the nanocrystalline phase was that of Fe–Ni. Grain growth was observed with an increase in the annealing temperature. X-ray photoelectron spectroscopy observations showed the presence of a native oxide layer on the surface of the films. Scanning tunnelling microscopy investigations support the biphasic nature of the nanocrystalline microstructure, which consists of a crystalline phase along with an amorphous phase. Magnetic studies using a vibrating sample magnetometer show that coercivity has a strong dependence on grain size. This is attributed to the random magnetic anisotropy characteristic of the system. The observed coercivity dependence on the grain size is explained using a modified random anisotropy model.
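The abstract does not reproduce the modified model used. For reference, the standard random anisotropy result (in Herzer's formulation) for grain sizes D below the ferromagnetic exchange length gives the well-known sixth-power dependence of coercivity on grain size:

\[
H_c \;\approx\; p_c\,\frac{K_1^4\,D^6}{J_s\,A^3},
\qquad D < L_{\mathrm{ex}} = \sqrt{A/K_1},
\]

where \(K_1\) is the magnetocrystalline anisotropy constant, \(A\) the exchange stiffness, \(J_s\) the saturation polarization, and \(p_c\) a dimensionless prefactor of order unity. The modified model of the thesis presumably adjusts this picture for the biphasic (crystalline plus amorphous) microstructure reported above.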
Abstract:
Magnetic properties of nano-crystalline soft magnetic alloys have usually been correlated to structural evolution with heat treatment. However, literature reports pertaining to the study of nano-crystalline thin films are less abundant. Thin films of Fe40Ni38B18Mo4 were deposited on glass substrates under a high vacuum of ≈ 10−6 Torr by employing resistive heating. They were annealed at various temperatures ranging from 373 to 773 K, based on differential scanning calorimetric studies carried out on the ribbons. The magnetic characteristics were investigated using vibrating sample magnetometry. Morphological characterization was carried out using atomic force microscopy (AFM), and magnetic force microscopy (MFM) imaging was used to study the domain characteristics. The variation of magnetic properties with thermal annealing was also investigated. From the AFM and MFM images it can be inferred that the crystallization temperature of the as-prepared films is lower than that of their bulk counterparts. There is also a progressive evolution of coercivity up to 573 K, which is an indication of the lowering of the nano-crystallization temperature in thin films. The variation of coercivity with the structural evolution of the thin films on annealing is discussed, and a plausible explanation is provided using the modified random anisotropy model.
Abstract:
The problem of using information available from one variable X to make inference about another Y is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = µ(X) + ɛ. Here µ(x) is the mean response at the predictor variable value X = x, and ɛ = Y - µ(X) is the error. In classical regression analysis both (X, Y) are observable, and one then proceeds to make inference about the mean response function µ(·). In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant but the actual amount X absorbed by the plant is unobservable. As another example, from Wang [5], an epidemiologist studies the severity of a lung disease, Y, among the residents of a city in relation to the amount of certain air pollutants. The amount of the air pollutants Z can be measured at certain observation stations in the city, but the actual exposure of the residents to the pollutants, X, is unobservable and may vary randomly from the Z-values. In both cases X = Z + error. This is the so-called Berkson measurement error model. In the more classical measurement error model one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition X on a disease: measuring nutrition intake precisely within 24 hours is almost impossible. There are many similar examples in agricultural or medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others.
In this talk we shall address the question of fitting a parametric model to the regression function µ(X) in the Berkson measurement error model: Y = µ(X) + ɛ, X = Z + η, where η and ɛ are random errors with E(ɛ) = 0, X and η are d-dimensional, and Z is the observable d-dimensional r.v.
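The Berkson structure above is easy to simulate, and doing so illustrates a classical fact: when µ is linear, regressing the observed Y on the observed Z still recovers µ, because E[Y | Z] = µ(Z) when E(η) = 0 and µ is linear. The choice µ(x) = 2x + 1 and all error variances below are assumptions for illustration, not from the talk.

```python
import random

def berkson_demo(n=20000, seed=1):
    """Simulate the Berkson model (d = 1):
        Z observed,  X = Z + eta unobserved,  Y = mu(X) + eps,
    with mu(x) = 2x + 1.  Since E[Y | Z] = 2Z + 1, ordinary least
    squares of Y on Z recovers the true slope and intercept.
    """
    rng = random.Random(seed)
    zs, ys = [], []
    for _ in range(n):
        z = rng.uniform(0, 10)            # observed proxy
        x = z + rng.gauss(0, 1)           # Berkson error: X = Z + eta
        y = 2.0 * x + 1.0 + rng.gauss(0, 0.5)
        zs.append(z)
        ys.append(y)
    # ordinary least squares of Y on Z
    mz, my = sum(zs) / n, sum(ys) / n
    sxy = sum((z - mz) * (y - my) for z, y in zip(zs, ys))
    sxx = sum((z - mz) ** 2 for z in zs)
    slope = sxy / sxx
    intercept = my - slope * mz
    return slope, intercept
```

Note the contrast with the classical model W = X + error, where regressing Y on W attenuates the slope; in the Berkson setup the linear fit stays unbiased, and the interesting statistical problems (such as the model-fitting question of the talk) arise for nonlinear µ.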
Abstract:
In many situations probability models are more realistic than deterministic models, and several phenomena occurring in physics are studied as random phenomena changing with time and space. Stochastic processes originated from the needs of physicists. Let X(t) be a random variable, where t is a parameter assuming values from a set T. Then the collection of random variables {X(t), t ∈ T} is called a stochastic process. We denote the state of the process at time t by X(t); the collection of all possible values X(t) can assume is called the state space.
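The definition above can be made concrete with the simplest non-trivial example: the symmetric random walk, a stochastic process {X(t), t = 0, 1, 2, …} with state space the integers. The step count and seed below are arbitrary choices for illustration.

```python
import random

def random_walk(steps=1000, seed=3):
    """Simple symmetric random walk: X(0) = 0 and
        X(t + 1) = X(t) + 1 or X(t) - 1, each with probability 1/2.
    Here the parameter set T is {0, 1, ..., steps} and the state
    space is the set of integers.  Returns the sample path.
    """
    rng = random.Random(seed)
    path = [0]
    for _ in range(steps):
        path.append(path[-1] + rng.choice((-1, 1)))
    return path
```

Each call produces one realization (sample path) of the process; the randomness sits in the increments, while t simply indexes time.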