963 results for continuous-time models
Abstract:
This paper presents gamma stochastic volatility models and investigates their distributional and time series properties. The parameter estimators obtained by the method of moments are shown analytically to be consistent and asymptotically normal. The simulation results indicate that the estimators behave well. The in-sample analysis shows that return models with gamma autoregressive stochastic volatility processes capture the leptokurtic nature of return distributions and the slowly decaying autocorrelation functions of squared stock index returns for the USA and UK. In comparison with GARCH and EGARCH models, the gamma autoregressive model picks up the persistence in volatility for the US and UK index returns but not for the Canadian and Japanese index returns. The out-of-sample analysis indicates that the gamma autoregressive model has superior volatility forecasting performance compared to GARCH and EGARCH models.
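To make the mechanics concrete, the following is a minimal simulation sketch, not the paper's exact specification: it builds a gamma-marginal AR(1) volatility process via a beta-gamma thinning recursion (one common construction) and checks the two stylized facts mentioned above, leptokurtic returns and a slowly decaying autocorrelation of squared returns. All parameter values and the thinning construction are illustrative assumptions.

```python
# Hedged sketch of a gamma autoregressive stochastic volatility simulation.
# The beta-gamma thinning recursion h_t = B_t * h_{t-1} + G_t keeps a
# Gamma(shape, rate) stationary marginal with lag-1 autocorrelation rho.
# Parameter values are illustrative, not those of the paper.
import numpy as np

rng = np.random.default_rng(0)

def simulate_gamma_sv(n, shape=2.0, rate=4.0, rho=0.95):
    """Return r_t = sqrt(h_t) * eps_t with a gamma AR(1) volatility h_t."""
    h = np.empty(n)
    h[0] = rng.gamma(shape, 1.0 / rate)
    for t in range(1, n):
        b = rng.beta(shape * rho, shape * (1.0 - rho))   # thinning weight, E[b] = rho
        g = rng.gamma(shape * (1.0 - rho), 1.0 / rate)   # gamma innovation
        h[t] = b * h[t - 1] + g
    return np.sqrt(h) * rng.standard_normal(n)

def acf(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

r = simulate_gamma_sv(50_000)
excess_kurtosis = np.mean((r - r.mean()) ** 4) / r.var() ** 2 - 3.0
print("excess kurtosis of returns:", round(excess_kurtosis, 2))
print("ACF of squared returns at lags 1, 10, 50:",
      [round(acf(r ** 2, k), 3) for k in (1, 10, 50)])
```

With rho close to one the squared-return autocorrelation decays slowly, mimicking the volatility persistence the abstract attributes to the US and UK indices.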
Abstract:
Multivariate lifetime data arise in various forms, including recurrent event data, when individuals are followed to observe the sequence of occurrences of a certain type of event, and correlated lifetimes, when an individual is followed for the occurrence of two or more types of events or when distinct individuals have dependent event times. In most studies there are covariates such as treatments, group indicators, individual characteristics, or environmental conditions whose relationship to lifetime is of interest. This leads to a consideration of regression models. The well known Cox proportional hazards model and its variations, using the marginal hazard functions employed for the analysis of multivariate survival data in the literature, are not sufficient to explain the complete dependence structure of a pair of lifetimes on the covariate vector. Motivated by this, in Chapter 2 we introduced a bivariate proportional hazards model using the vector hazard function of Johnson and Kotz (1975), in which the covariates under study have different effects on the two components of the vector hazard function. The proposed model is useful in real-life situations for studying the dependence structure of a pair of lifetimes on the covariate vector. The well known partial likelihood approach is used for the estimation of the parameter vectors. We then introduced a bivariate proportional hazards model for gap times of recurrent events in Chapter 3. The model incorporates both marginal and joint dependence of the distribution of gap times on the covariate vector. In many fields of application, the mean residual life function is considered a more informative concept than the hazard function. Motivated by this, in Chapter 4 we considered a new semi-parametric model, the bivariate proportional mean residual life model, to assess the relationship between mean residual life and covariates for gap times of recurrent events. The counting process approach is used for the inference procedures for the gap times of recurrent events. In many survival studies, the distribution of lifetime may depend on the distribution of censoring time. In Chapter 5, we introduced a proportional hazards model for duration times and developed inference procedures under dependent (informative) censoring. In Chapter 6, we introduced a bivariate proportional hazards model for competing risks data under right censoring. The asymptotic properties of the estimators of the parameters of the models developed in the previous chapters were studied. The proposed models were applied to various real-life situations.
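As a hedged sketch of the kind of model described (a plausible reading of the abstract, not necessarily the thesis's exact specification), the Johnson and Kotz (1975) vector hazard has components obtained from the joint survival function, and a bivariate proportional hazards regression lets the covariate vector Z act differently on each component:

```latex
% Vector hazard of Johnson and Kotz (1975) and a proportional-hazards form
% in which the covariate vector Z acts differently on each component
% (a plausible reading of the abstract, not the thesis's exact model).
h_k(t_1, t_2) = -\frac{\partial}{\partial t_k} \log S(t_1, t_2), \qquad k = 1, 2,
\qquad
h_k(t_1, t_2 \mid Z) = h_{0k}(t_1, t_2)\, \exp\!\bigl(\beta_k^{\top} Z\bigr).
```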
Abstract:
The thesis covers various aspects of modelling and analysis of finite-mean time series with symmetric stable distributed innovations. Time series analysis based on Box-Jenkins methods is the most popular approach, where the models are linear and the errors are Gaussian. We highlight the limitations of classical time series analysis tools, explore some generalized tools, and organize the approach in parallel with the classical setup. In the present thesis we mainly study the estimation and prediction of signal-plus-noise models, where the signal and noise are assumed to follow models with symmetric stable innovations. We start the thesis with some motivating examples and application areas of alpha-stable time series models. Classical time series analysis and the corresponding theories based on finite-variance models are discussed extensively in the second chapter, where we also survey the existing theories and methods for infinite-variance models. In the third chapter we present a linear filtering method for computing the filter weights assigned to the observations for estimating an unobserved signal in a general noisy environment. Here we consider both the signal and the noise as stationary processes with infinite-variance innovations. We derive semi-infinite, doubly infinite and asymmetric signal extraction filters based on a minimum dispersion criterion. Finite-length filters based on Kalman-Levy filters are developed and the pattern of the filter weights is identified. Simulation studies show that the proposed methods are effective in signal extraction for processes with infinite variance. Parameter estimation for autoregressive signals observed in a symmetric stable noise environment is discussed in the fourth chapter. Here we use higher-order Yule-Walker type estimation based on the auto-covariation function and illustrate the methods by simulation and by application to sea surface temperature data. We increase the number of Yule-Walker equations and propose an ordinary least squares estimator of the autoregressive parameters. The singularity problem of the auto-covariation matrix is addressed, and a modified version of the generalized Yule-Walker method is derived using singular value decomposition. In the fifth chapter of the thesis we introduce the partial covariation function as a tool for stable time series analysis where the covariance or partial covariance is ill-defined. Asymptotic results for the partial auto-covariation are studied, and its application to model identification of stable autoregressive models is discussed. We generalize the Durbin-Levinson algorithm to include infinite-variance models in terms of the partial auto-covariation function and introduce a new information criterion for consistent order estimation of stable autoregressive models. In chapter six we explore the application of the techniques discussed in the previous chapter to signal processing. Frequency estimation for sinusoidal signals observed in a symmetric stable noise environment is discussed in this context. Here we introduce a parametric spectrum analysis and a frequency estimate using the power transfer function. An estimate of the power transfer function is obtained using the modified generalized Yule-Walker approach. Another important problem in statistical signal processing is to identify the number of sinusoidal components in an observed signal; we use a modified version of the proposed information criterion for this purpose.
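As an illustrative sketch of one point made above, least-squares estimation of autoregressive parameters remains usable when the innovations are symmetric alpha-stable with infinite variance. The snippet below is a hedged toy example: it simulates a stable AR(1) and recovers its coefficient by ordinary least squares, omitting the observation-noise, filtering and covariation machinery of the thesis; parameter values are assumptions.

```python
# Hedged sketch: AR(1) with symmetric alpha-stable innovations, coefficient
# recovered by ordinary least squares (consistent even with infinite variance).
# This ignores the observation-noise/filtering setting of the thesis.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)
alpha, phi, n = 1.5, 0.6, 20_000       # assumed stability index and AR coefficient

eps = levy_stable.rvs(alpha, beta=0.0, size=n, random_state=rng)  # SaS innovations
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

# OLS regression of x_t on x_{t-1}
phi_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
print("true phi:", phi, " OLS estimate:", round(phi_hat, 3))
```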
Abstract:
Data mining is one of the most active research areas today, with a wide variety of applications in everyday life. It is concerned with finding interesting hidden patterns in large historical databases. As an example, from a sales database one can discover a pattern such as "people who buy magazines tend to buy newspapers also" using data mining. From a sales point of view, the advantage is that such items can be placed together in the shop to increase sales. In this research work, data mining is applied to the domain of placement chance prediction, since making a wise career decision is crucial. In India, technical manpower analysis is carried out by an organization named the National Technical Manpower Information System (NTMIS), established in 1983-84 by India's Ministry of Education & Culture. The NTMIS comprises a lead centre in the IAMR, New Delhi, and 21 nodal centres located in different parts of the country. The Kerala State Nodal Centre is located at Cochin University of Science and Technology. The nodal centre collects placement information by sending postal questionnaires to graduated students on a regular basis. From this raw data available in the nodal centre, a history database was prepared. Each record in this database includes entrance rank range, reservation, sector, sex, and a particular engineering branch. For each such combination of attributes from the history database of student records, the corresponding placement chance is computed and stored in the history database. From this data, various popular data mining models are built and tested. These models can be used to predict the most suitable branch for a new student with one of the above combinations of criteria. A detailed performance comparison of the various data mining models is also carried out. This research work proposes to use a combination of data mining models, namely a hybrid stacking ensemble, for better predictions. Strategies to predict the overall absorption rate for various branches, as well as the time it takes for all the students of a particular branch to get placed, are also proposed. Finally, this research work puts forward a new data mining algorithm, namely C4.5*stat, for numeric data sets, which has been shown to achieve competitive accuracy on the standard UCI benchmark data sets. It also proposes an optimization strategy called parameter tuning to improve the standard C4.5 algorithm. In summary, this research work covers all four dimensions of a typical data mining research effort: application to a domain, development of classifier models, optimization, and ensemble methods.
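A hedged sketch of the kind of hybrid stacking ensemble described follows; the actual base learners, meta-learner, feature set and the C4.5*stat algorithm of the thesis are not available in scikit-learn, so standard stand-ins and synthetic data are used purely for illustration.

```python
# Hedged sketch of a stacking ensemble for a classification task such as
# placement-chance prediction. The feature set, base learners and meta-learner
# are illustrative stand-ins, not the thesis's exact configuration.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier   # C4.5-style learner stand-in

X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           random_state=0)        # placeholder for student records
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stack = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(max_depth=6)),
                ("nb", GaussianNB()),
                ("rf", RandomForestClassifier(n_estimators=100, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_tr, y_tr)
print("hold-out accuracy:", round(stack.score(X_te, y_te), 3))
```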
Abstract:
This thesis is entitled Analysis of Some Stochastic Models in Inventories and Queues. It is devoted to the study of some stochastic models in Inventories and Queues which are physically realizable, though complex, and contains a detailed analysis of the basic stochastic processes underlying these models. In this thesis, (s,S) inventory systems with non-identically distributed interarrival demand times and random lead times, state-dependent demands, varying ordering levels, and perishable commodities with exponential life times have been studied. The queueing system of the type Ek/Ga,b/1 with server vacations, service systems with single and batch services, queueing systems with phase type arrival and service processes, and the finite capacity M/G/1 queue in which the server goes on vacation after serving a random number of customers are also analysed. The analogy between queueing systems and inventory systems could be exploited in solving certain models. In vacation models, one important result is the stochastic decomposition property of the system size or waiting time; one can think of extending this to the transient case. In inventory theory, the present study can be extended to multi-item, multi-echelon problems. The study of the perishable inventory problem when the commodities have a general life time distribution would also be quite interesting.
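For readers unfamiliar with the notation, the following is a minimal sketch of an (s,S) policy with random demands and zero lead time; all distributions and parameter values are illustrative assumptions, and the thesis analyses far more general variants (dependent demands, random lead times, perishability).

```python
# Minimal hedged sketch of an (s, S) inventory policy with zero lead time:
# whenever the level drops to s or below, order up to S. Demands are i.i.d.
# Poisson here purely for illustration; the thesis treats dependent demands,
# random lead times, perishable items, etc.
import numpy as np

rng = np.random.default_rng(2)
s, S, horizon, demand_mean = 5, 20, 10_000, 3.0

level = S
levels = []
for _ in range(horizon):
    level -= min(level, rng.poisson(demand_mean))  # demand; excess demand is lost
    if level <= s:                                 # reorder point reached
        level = S                                  # instantaneous replenishment
    levels.append(level)

print("long-run average inventory level:", round(np.mean(levels), 2))
```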
Abstract:
This study is concerned with Autoregressive Moving Average (ARMA) models of time series. ARMA models form a subclass of the class of general linear models which represent stationary time series, a phenomenon encountered most often in practice by engineers, scientists and economists. It is always desirable to employ models which use parameters parsimoniously, and parsimony is achieved by ARMA models because they have only a finite number of parameters. Even though the discussion is primarily concerned with stationary time series, we later take up the case of homogeneous non-stationary time series which can be transformed to stationary time series. Time series models, built from present and past data, are used for forecasting future values. Both the physical and the social sciences benefit from forecasting models. The role of forecasting cuts across all fields of management (finance, marketing, production, and business economics) as well as signal processing, communication engineering, chemical processes, electronics, and so on. This wide applicability of time series is the motivation for this study.
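A brief sketch of the model-fitting and forecasting workflow the abstract alludes to, using statsmodels; the data, model order and parameter values are illustrative assumptions, not tied to this study.

```python
# Hedged sketch: fit an ARMA(p, q) model to a simulated series and forecast.
# Data, orders and parameters are illustrative assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import ArmaProcess

rng = np.random.default_rng(3)
ar, ma = np.array([1.0, -0.6]), np.array([1.0, 0.4])   # ARMA(1,1): phi=0.6, theta=0.4
y = ArmaProcess(ar, ma).generate_sample(nsample=500, distrvs=rng.standard_normal)

model = ARIMA(y, order=(1, 0, 1)).fit()                # d = 0: stationary series
print(model.params)                                    # estimated phi, theta, variance
print(model.forecast(steps=5))                         # forecasts of future values
```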
Abstract:
So far, in the bivariate setup, the analysis of lifetime (failure time) data with multiple causes of failure has been done by treating each cause of failure separately, with failures from other causes considered as independent censoring. This approach is unrealistic in many situations. For example, in the analysis of mortality data on married couples, one would be interested in comparing the hazards for the same cause of death as well as in checking whether death due to one cause is more important for the partner's risk of death from other causes. In reliability analysis, one often has systems with more than one component, and many systems, subsystems and components have more than one cause of failure. Design of high-reliability systems generally requires that the individual system components have extremely high reliability even after long periods of time. Knowledge of the failure behaviour of a component can lead to savings in its cost of production and maintenance and, in some cases, to the preservation of human life. For the purpose of improving reliability, it is necessary to identify the cause of failure down to the component level. By treating each cause of failure separately, with failures from other causes considered as independent censoring, the analysis of lifetime data would be incomplete. Motivated by this, we introduce a new approach for the analysis of bivariate competing risk data using the bivariate vector hazard rate of Johnson and Kotz (1975).
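One plausible way to write the quantity underlying this approach (a hedged reading, not necessarily the paper's notation) is to combine the Johnson and Kotz (1975) vector hazard with cause-specific hazards, so that for component k and failure cause j:

```latex
% Cause-specific component of the bivariate vector hazard (hedged notation):
% rate at which component k fails from cause j at t_k, given that both
% components have survived to (t_1, t_2); summing over causes recovers the
% corresponding Johnson-Kotz vector hazard component.
h_{k,j}(t_1, t_2) = \lim_{\Delta \to 0}
\frac{P\bigl(t_k \le T_k < t_k + \Delta,\; J_k = j \mid T_1 \ge t_1,\, T_2 \ge t_2\bigr)}{\Delta},
\qquad
\sum_{j} h_{k,j}(t_1, t_2) = h_k(t_1, t_2).
```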
Abstract:
In classical field theory, the ordinary potential V is the energy density of the state in which the field assumes the value φ. In quantum field theory, the effective potential is the expectation value of the energy density in the state for which the expectation value of the field is φ0. As a result, if V has several local minima, only the absolute minimum corresponds to the true ground state of the theory. Perturbation theory remains to this day the main analytical tool in the study of Quantum Field Theory. However, since perturbation theory is unable to uncover the whole rich structure of Quantum Field Theory, it is desirable to have a method which, on the one hand, goes beyond both perturbation theory and the classical approximation where these fail, and at the same time is sufficiently simple that analytical calculations can be performed in its framework. During the last decade, a nonperturbative variational method called the Gaussian effective potential has been discussed widely, together with several applications. This concept was described as a means of formalizing our intuitive understanding of zero-point fluctuation effects in quantum mechanics in a way that carries over directly to field theory.
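For orientation, the Gaussian effective potential can be stated compactly in a standard textbook form (the thesis's conventions may differ): with |φ0, Ω⟩ a normalized Gaussian trial state whose mean field is φ0 and whose width is set by a variational mass parameter Ω,

```latex
% Gaussian effective potential (standard form; conventions may differ from the
% thesis): minimize the expected Hamiltonian density over Gaussian trial states
% with mean field phi_0 and variational mass parameter Omega.
V_G(\varphi_0) = \min_{\Omega}\,
\langle \varphi_0, \Omega \,|\, \mathcal{H}(x) \,|\, \varphi_0, \Omega \rangle,
\qquad
\langle \varphi_0, \Omega \,|\, \hat{\varphi}(x) \,|\, \varphi_0, \Omega \rangle = \varphi_0,
```

where H(x) is the Hamiltonian density.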
Abstract:
This thesis analyses certain problems in Inventories and Queues. There are many situations in real life where we encounter models as described in this thesis. It analyses in depth various models which can be applied to production, storage, telephone traffic, road traffic, economics, business administration, serving of customers, operation of particle counters and others. The models described here are not complete representations of the true situation in all its complexity, but simplified versions amenable to analysis. While discussing the models, we show how a dependence structure can be suitably introduced in some problems of Inventories and Queues. Continuous review, single commodity inventory systems with a Markov dependence structure introduced in the demand quantities, replenishment quantities and reordering levels are considered separately. Lead time is assumed to be zero in these models. An inventory model involving random lead time is also considered (Chapter 4). Further, finite capacity single server queueing systems with single/bulk arrivals and single/bulk services are also discussed. In some models the server is assumed to go on vacation (Chapters 7 and 8). In Chapters 5 and 6 a form of dependence is introduced in the service pattern of some queueing models.
Abstract:
In this thesis we attempt a probabilistic analysis of some physically realizable, though complex, storage and queueing models. It is essentially a mathematical study of the stochastic processes underlying these models. Our aim is to have an improved understanding of the behaviour of such models, which may widen their applicability. Different inventory systems with random lead times, vacations to the server, bulk demands, varying ordering levels, etc. are considered. We also study some finite and infinite capacity queueing systems with bulk service and vacations to the server and obtain the transient solution in certain cases. Each chapter in the thesis is provided with its own introduction and some important references.
Abstract:
The objective of the study of "Queueing models with vacations and working vacations" was twofold: to minimize the server idle time and to improve the efficiency of the service system. Keeping this in mind, we considered queueing models in different settings in this thesis. Chapter 1 introduced the concepts and techniques used in the thesis and also provided a summary of the work done. In Chapter 2 we considered an M/M/2 queueing model, where one of the two heterogeneous servers takes multiple vacations. We studied the performance of the system with the help of a busy period analysis and computation of the mean waiting time of a customer in the stationary regime. A conditional stochastic decomposition of the queue length was derived. To improve the efficiency of this system we came up with a modified model in Chapter 3, in which the vacationing server attends to customers during vacation at a slower service rate. Chapter 4 analyzed a working vacation queueing model in a more general setting. The introduction of the N-policy makes this MAP/PH/1 model different from all working vacation models available in the literature. A detailed analysis of the performance of the model was provided with the help of measures such as the mean waiting time of a customer who gets service in normal mode and in vacation mode.
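For context on the decomposition results mentioned, the classical single-server analogue is the M/G/1 multiple-vacation decomposition (the thesis derives conditional analogues for its two-server and MAP/PH/1 models, which are not reproduced here): the stationary waiting time splits into the vacation-free waiting time plus the stationary residual (equilibrium) vacation time.

```latex
% Classical M/G/1 multiple-vacation decomposition (single-server analogue of
% the conditional decompositions discussed in the thesis).
E[W_q] \;=\;
\underbrace{\frac{\lambda\, E[S^2]}{2\,(1-\rho)}}_{\text{M/G/1 without vacations}}
\;+\;
\underbrace{\frac{E[V^2]}{2\, E[V]}}_{\text{residual vacation time}},
\qquad \rho = \lambda\, E[S] < 1 .
```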
Abstract:
In this thesis we have presented several inventory models of utility. Of these, inventory with retrial of unsatisfied demands and inventory with postponed work are quite recently introduced concepts, the latter being introduced here for the first time. Inventory with service time is relatively new, with only a handful of research works reported. The difficulty encountered in inventory with service, unlike the queueing process, is that even the simplest case needs a two-dimensional process for its description. Only in certain specific cases can we introduce generating functions to solve for the system state distribution. However, numerical procedures can be developed for solving these problems.
Abstract:
L-Glutamine amidohydrolase (L-glutaminase, EC 3.5.1.2) is a therapeutically and industrially important enzyme. Because it is a potent antileukemic agent and a flavor-enhancing agent used in the food industry, many researchers have focused their attention on L-glutaminase. In this article, we report the continuous production of extracellular L-glutaminase by the marine fungus Beauveria bassiana BTMF S-10 in a packed-bed reactor. Parameters influencing bead production and performance under batch mode were optimized in the following order: support (Na-alginate) concentration, concentration of CaCl2 for bead preparation, curing time of beads, spore inoculum concentration, activation time, initial pH of the enzyme production medium, temperature of incubation, and retention time. Parameters optimized under batch mode for L-glutaminase production were incorporated into the continuous production studies. Beads with 12 × 10⁸ spores/g of beads were activated in a solution of 1% glutamine in seawater for 15 h, and the activated beads were packed into a packed-bed reactor. Enzyme production medium (pH 9.0) was pumped through the bed, and the effluent was collected from the top of the column. The effects of the flow rate of the medium, substrate concentration, aeration, and bed height on continuous production of L-glutaminase were studied. Production was monitored for 5 h in each case, and the volumetric productivity was calculated. Under the optimized conditions for continuous production, the reactor gave a volumetric productivity of 4.048 U/(mL·h), which indicates that continuous production of the enzyme by Ca-alginate-immobilized spores is well suited for B. bassiana and results in a higher yield of enzyme within a shorter time. The results indicate the scope of utilizing immobilized B. bassiana for continuous commercial production of L-glutaminase.
Abstract:
In this paper, we define partial moments for a univariate continuous random variable. A recurrence relationship for the Pearson curve using the partial moments is established. Interrelationships between the partial moments and other reliability measures, such as the failure rate and the mean residual life function, are proved. We also prove some characterization theorems using the partial moments in the context of length-biased models and equilibrium distributions.
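Under one standard definition (the paper's notation may differ), the r-th partial moment about a point t and its link to the mean residual life are:

```latex
% One standard definition of partial moments and their link to the mean
% residual life m(t) and the survival function \bar F(t); the paper's
% notation may differ.
p_r(t) = E\bigl[(X - t)_+^{\,r}\bigr] = \int_t^{\infty} (x - t)^r \, dF(x),
\qquad
p_1(t) = \int_t^{\infty} \bar F(x)\, dx = \bar F(t)\, m(t),
\quad \text{where } m(t) = E[X - t \mid X > t].
```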
Abstract:
In this paper the class of continuous bivariate distributions that has a form-invariant weighted distribution with weight function w(x1, x2) = x1^a1 x2^a2 is identified. It is shown that the class includes some well known bivariate models. Bayesian inference on the parameters of the class is considered, and it is shown that there exist natural conjugate priors for the parameters.
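For reference, the weighted distribution associated with a density f and weight w is given by the standard definition below; form-invariance means that f_w belongs to the same parametric family as f.

```latex
% Standard definition of the weighted density with weight
% w(x_1, x_2) = x_1^{a_1} x_2^{a_2}; form-invariance means f_w stays in the
% same family as f.
f_w(x_1, x_2) = \frac{w(x_1, x_2)\, f(x_1, x_2)}{E\bigl[w(X_1, X_2)\bigr]},
\qquad
w(x_1, x_2) = x_1^{a_1} x_2^{a_2}.
```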