886 results for Multiple state models
Abstract:
This study analyses some queueing models related to the N-policy: the server is turned on when the queue size reaches a prescribed number N and turned off when the system becomes empty, and the optimal value of N is sought. The operating policy is the usual N-policy, but with random N; Model 2 considers a similar system. The study also analyses a tandem queue with two servers, in which the first server is assumed to be a specialized one. In a queueing system under N-policy, the server remains on vacation after becoming idle until N units accumulate for the first time. A modified version of the N-policy for an M/M/1 queueing system is considered as well. The novel feature of this model is that a busy service unit prevents the access of new customers to servers further down the line. Finally, the study deals with a queueing model consisting of two servers connected in series with a finite intermediate waiting room of capacity k, again assuming that server I is a specialized server. For this model, the steady-state probability vector and the stability condition are obtained using the matrix-geometric method.
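As a small illustration of the N-policy trade-off discussed above, the following sketch (not taken from the study; the function name and interface are illustrative) computes the mean number in system for a classical M/M/1 queue under N-policy, using the well-known decomposition in which the N-policy adds (N-1)/2 customers on average to the ordinary M/M/1 figure.

```python
# Sketch, assuming the standard decomposition result for M/M/1 under
# N-policy: L = rho/(1 - rho) + (N - 1)/2. Illustrative, not from the thesis.
def mean_system_size_n_policy(lam, mu, N):
    """lam: arrival rate, mu: service rate, N: turn-on threshold."""
    rho = lam / mu
    if rho >= 1:
        raise ValueError("queue is unstable for rho >= 1")
    # ordinary M/M/1 mean system size plus the N-policy penalty
    return rho / (1.0 - rho) + (N - 1) / 2.0
```

Raising N lowers switching frequency at the cost of a longer mean queue, which is the tension the optimal-N analysis resolves.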
Abstract:
This thesis is entitled "Modelling and Analysis of Recurrent Event Data with Multiple Causes". Survival data is a term used to describe data that measure the time to occurrence of an event. In survival studies, the time to occurrence of an event is generally referred to as the lifetime. Recurrent event data are commonly encountered in longitudinal studies when individuals are followed to observe the repeated occurrences of certain events. In many practical situations, individuals under study are exposed to failure due to more than one cause, and the eventual failure can be attributed to exactly one of these causes. The proposed model is useful in real-life situations for studying the effect of covariates on recurrences of certain events due to different causes. In Chapter 3, an additive hazards model for gap time distributions of recurrent event data with multiple causes is introduced, and parameter estimation and asymptotic properties are discussed. In Chapter 4, a shared frailty model for the analysis of bivariate competing risks data is presented, and estimation procedures for the shared gamma frailty model, with and without covariates, using the EM algorithm are discussed. In Chapter 6, two nonparametric estimators for the bivariate survivor function of paired recurrent event data are developed. The asymptotic properties of the estimators are studied, the proposed estimators are applied to a real-life data set, and simulation studies are carried out to assess their efficiency.
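The survivor-function estimation the abstract refers to builds on the classical product-limit idea. A minimal sketch of the standard (univariate) Kaplan-Meier estimator, not the thesis's own bivariate estimators, with illustrative data and naming:

```python
# Minimal product-limit (Kaplan-Meier) survivor-function sketch.
# This is the classical estimator, not the thesis's proposed methods.
def kaplan_meier(times, events):
    """times: observed times; events: 1 = failure, 0 = censored.
    Returns a list of (t, S(t)) at each distinct failure time."""
    data = sorted(zip(times, events))
    n = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < n:
        t = data[i][0]
        at_risk = n - i                                  # still under observation at t
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        i += sum(1 for tt, e in data if tt == t)         # advance past all ties at t
        if deaths:
            surv *= 1.0 - deaths / at_risk               # product-limit step
            curve.append((t, surv))
    return curve
```

Censored observations (events = 0) shrink the risk set without contributing a factor, which is exactly how the estimator accommodates incomplete follow-up.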
Abstract:
In this paper we study the evolution of the kinetic features of the martensitic transition in a Cu-Al-Mn single crystal under thermal cycling. The use of several experimental techniques including optical microscopy, calorimetry, and acoustic emission, has enabled us to perform an analysis at multiple scales. In particular, we have focused on the analysis of avalanche events (associated with the nucleation and growth of martensitic domains), which occur during the transition. There are significant differences between the kinetics at large and small length scales. On the one hand, at small length scales, small avalanche events tend to sum to give new larger events in subsequent loops. On the other hand, at large length scales the large domains tend to split into smaller ones on thermal cycling. We suggest that such different behavior is the necessary ingredient that leads the system to the final critical state corresponding to a power-law distribution of avalanches.
Abstract:
Data mining is one of the hottest research areas nowadays, as it has a wide variety of applications in everyday life. It is all about finding interesting hidden patterns in a huge historical database. As an example, from a sales database one can use data mining to find an interesting pattern such as "people who buy magazines tend to buy newspapers also". From the sales point of view, the advantage is that one can place these items together in the shop to increase sales. In this research work, data mining is applied to a domain called placement chance prediction, since taking a wise career decision is crucial for anybody. In India, technical manpower analysis is carried out by an organization named the National Technical Manpower Information System (NTMIS), established in 1983-84 by India's Ministry of Education & Culture. The NTMIS comprises a lead centre in the IAMR, New Delhi, and 21 nodal centres located in different parts of the country. The Kerala State Nodal Centre is located at Cochin University of Science and Technology. The nodal centre collects placement information by sending postal questionnaires to former students on a regular basis. From this raw data available in the nodal centre, a history database was prepared. Each record in this database includes entrance rank range, reservation, sector, sex, and a particular engineering branch. For each such combination of attributes from the history database of student records, the corresponding placement chance is computed and stored in the history database. From this data, various popular data mining models are built and tested. These models can be used to predict the most suitable branch for a new student with one of the above combinations of criteria.
A detailed performance comparison of the various data mining models is also carried out. This research work proposes to use a combination of data mining models, namely a hybrid stacking ensemble, for better predictions. A strategy to predict the overall absorption rate for various branches, as well as the time it takes for all the students of a particular branch to get placed, is also proposed. Finally, this research work puts forward a new data mining algorithm, namely C4.5*stat, for numeric data sets, which has been shown to have competitive accuracy on the standard UCI benchmark data sets. It also proposes an optimization strategy called parameter tuning to improve the standard C4.5 algorithm. In summary, this research work passes through all four dimensions of a typical data mining research work: application to a domain, development of classifier models, optimization, and ensemble methods.
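The stacking idea above can be sketched in miniature: base learners make predictions, and a meta-learner is trained on those predictions rather than on the raw features. Everything below (the stumps, the toy "rank/sector" data, the helper names) is illustrative and not the thesis's actual models.

```python
from collections import Counter, defaultdict

# Toy stacking ensemble sketch: two decision stumps feed a meta-learner
# that maps each tuple of base predictions to its majority training label.
def stump(idx, thr):
    """Decision stump: predicts 1 if feature idx >= thr, else 0."""
    return lambda x: 1 if x[idx] >= thr else 0

def fit_stacking(bases, X, y):
    """Train the meta-level on base-learner outputs."""
    votes = defaultdict(Counter)
    for x, label in zip(X, y):
        votes[tuple(b(x) for b in bases)][label] += 1
    meta = {k: c.most_common(1)[0][0] for k, c in votes.items()}
    def predict(x):
        # unseen base-prediction patterns fall back to class 0 (arbitrary choice)
        return meta.get(tuple(b(x) for b in bases), 0)
    return predict

# Toy records: (entrance-rank-like score, sector flag) -> placed (1) or not (0)
X = [(10, 0), (80, 1), (15, 1), (90, 0), (20, 1), (85, 1)]
y = [1, 0, 1, 0, 1, 0]
bases = [stump(0, 50), stump(1, 1)]
model = fit_stacking(bases, X, y)
```

A "hybrid" stacking ensemble in the thesis's sense would mix heterogeneous base classifiers; the stump pair here just keeps the mechanics visible.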
Abstract:
This thesis, entitled Analysis of Some Stochastic Models in Inventories and Queues, is devoted to the study of some stochastic models in inventories and queues which are physically realizable, though complex. It contains a detailed analysis of the basic stochastic processes underlying these models. In this thesis, (s,S) inventory systems with non-identically distributed interarrival demand times and random lead times, state-dependent demands, varying ordering levels, and perishable commodities with exponential life times have been studied. The queueing systems analysed include the Ek/Ga,b/1 queue with server vacations, service systems with single and batch services, queueing systems with phase-type arrival and service processes, and the finite-capacity M/G/1 queue in which the server goes on vacation after serving a random number of customers. The analogy between queueing systems and inventory systems could be exploited in solving certain models. In vacation models, one important result is the stochastic decomposition property of the system size or waiting time; one can think of extending this to the transient case. In inventory theory, one can extend the present study to the case of multi-item, multi-echelon problems. The study of the perishable inventory problem when the commodities have a general life time distribution would also be quite interesting.
Abstract:
The Comment affirms that no phase transition occurs in spin-glass systems with an applied magnetic field. However, only according to the droplet model is this result expected. Other models do not predict this result and, consequently, it is under current discussion. In addition, we show how the experimental results obtained in our system correspond to a cluster glass rather than to a true spin glass.
Abstract:
The objective of this thesis is to study the time-dependent behaviour of some complex queueing and inventory models. It contains a detailed analysis of the basic stochastic processes underlying these models. In the theory of queues, the analysis of time-dependent behaviour is an area very little developed compared to steady-state theory. Time dependence certainly seems worth studying from an application point of view, but unfortunately the analytic difficulties are considerable. Closed-form solutions are complicated even for such simple models as M/M/1. Outside M/M/1, time-dependent solutions have been found only in special cases and most often involve double transforms, which provide very little insight into the behaviour of the queueing systems themselves. In inventory theory, too, there are few results giving the time-dependent solution of the system size probabilities. Our emphasis is on explicit results free from all types of transforms, and the method used may be of special interest to a wide variety of problems having regenerative structure.
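One transform-free way to look at transient behaviour, complementary to the explicit results the thesis aims for, is direct numerical integration of the Kolmogorov forward equations. The sketch below does this for an M/M/1 queue truncated at K customers; the truncation level and step size are illustrative choices, not anything prescribed by the thesis.

```python
# Euler integration of the Kolmogorov forward equations for an M/M/1
# queue truncated at K states. Illustrative sketch only.
def transient_mm1(lam, mu, K, t, steps, start=0):
    """Returns the state distribution p(t) starting from state `start`."""
    p = [0.0] * (K + 1)
    p[start] = 1.0
    dt = t / steps
    for _ in range(steps):
        dp = [0.0] * (K + 1)
        for n in range(K + 1):
            out = (lam if n < K else 0.0) + (mu if n > 0 else 0.0)
            dp[n] -= out * p[n]            # flow out of state n
            if n > 0:
                dp[n] += lam * p[n - 1]    # arrival brings state n-1 to n
            if n < K:
                dp[n] += mu * p[n + 1]     # service brings state n+1 to n
        p = [pi + dt * di for pi, di in zip(p, dp)]
    return p

# rho = 0.5; by t = 50 the distribution is essentially stationary
p = transient_mm1(lam=1.0, mu=2.0, K=20, t=50.0, steps=20000)
```

For short horizons the same call exposes exactly the transient probabilities that double-transform solutions obscure.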
Abstract:
So far, in the bivariate set-up, the analysis of lifetime (failure time) data with multiple causes of failure has been done by treating each cause of failure separately, with failures from other causes considered as independent censoring. This approach is unrealistic in many situations. For example, in the analysis of mortality data on married couples, one would be interested in comparing the hazards for the same cause of death as well as in checking whether death due to one cause is more important for the partner's risk of death from other causes. In reliability analysis, one often has systems with more than one component, and many systems, subsystems and components have more than one cause of failure. Design of high-reliability systems generally requires that the individual system components have extremely high reliability even after long periods of time. Knowledge of the failure behaviour of a component can lead to savings in its cost of production and maintenance and, in some cases, to the preservation of human life. For the purpose of improving reliability, it is necessary to identify the cause of failure down to the component level. By treating each cause of failure separately with failures from other causes considered as independent censoring, the analysis of lifetime data would be incomplete. Motivated by this, we introduce a new approach for the analysis of bivariate competing risk data using the bivariate vector hazard rate of Johnson and Kotz (1975).
Abstract:
In classical field theory, the ordinary potential V is an energy density for the state in which the field assumes the value φ. In quantum field theory, the effective potential is the expectation value of the energy density in the state for which the expectation value of the field is φ0. As a result, if V has several local minima, only the absolute minimum corresponds to the true ground state of the theory. Perturbation theory remains to this day the main analytical tool in the study of quantum field theory. However, since perturbation theory is unable to uncover the whole rich structure of quantum field theory, it is desirable to have a method which, on the one hand, goes beyond both perturbation theory and the classical approximation at the points where these fail, and which, at the same time, is sufficiently simple that analytical calculations can be performed in its framework. During the last decade a nonperturbative variational method, the Gaussian effective potential, has been discussed widely together with several applications. This concept was described as a means of formalizing our intuitive understanding of zero-point fluctuation effects in quantum mechanics in a way that carries over directly to field theory.
Abstract:
In this thesis we have presented several inventory models of utility. Of these, inventory with retrial of unsatisfied demands and inventory with postponed work are quite recently introduced concepts, the latter being introduced here for the first time. Inventory with service time is relatively new, with only a handful of research works reported. The difficulty encountered in inventory with service, unlike the queueing process, is that even the simplest case needs a 2-dimensional process for its description. Only in certain specific cases can we introduce generating functions to solve for the system state distribution. However, numerical procedures can be developed for solving these problems.
Abstract:
The research of this thesis dissertation covers developments and applications of short- and long-term climate predictions. The short-term prediction emphasizes monthly and seasonal climate, i.e. forecasting from the next month over a season up to a year or so ahead. The long-term predictions pertain to the analysis of inter-annual and decadal climate variations over the whole 21st century. These two climate prediction methods are validated and applied in the study area, namely the Khlong Yai (KY) water basin located on the eastern seaboard of Thailand, which is a major industrial zone of the country and which has been suffering from severe drought and water shortage in recent years. Since water resources are essential for further industrial development in this region, a thorough analysis of the potential climate change, with its subsequent impact on the water supply in the area, is at the heart of this thesis research. The short-term forecast of the next-season climate, such as temperatures and rainfall, offers a potential general guideline for water management and reservoir operation. To that end, statistical models based on autoregressive techniques, i.e. AR, ARIMA and ARIMAex (the latter including additional external regressors), as well as multiple linear regression (MLR) models, are developed and applied in the study region. Teleconnections between ocean states and the local climate are investigated, used as extra external predictors in the ARIMAex and MLR models, and shown to enhance the accuracy of the short-term predictions significantly. However, as the teleconnective relationships between ocean state and local climate provide only a one- to four-month lead time, the ocean state indices can support only a one-season-ahead forecast. Hence, GCM climate predictors are also suggested as an additional predictor set for a more reliable and somewhat longer short-term forecast.
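The "ARIMAex-style" idea of augmenting an autoregressive model with an external (teleconnection-like) regressor can be sketched as an ordinary-least-squares fit of y[t] = b0 + b1*y[t-1] + b2*x[t]. All data and helper names below are synthetic and illustrative, not the thesis's models or data.

```python
import math

# OLS via the normal equations for one AR lag plus one external regressor.
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    A = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(c + 1, 3):
            f = A[r][c] / A[c][c]
            for k in range(c, 4):
                A[r][k] -= f * A[c][k]
    x = [0.0] * 3
    for c in (2, 1, 0):                            # back-substitution
        x[c] = (A[c][3] - sum(A[c][k] * x[k] for k in range(c + 1, 3))) / A[c][c]
    return x

def fit_ar1_exog(y, x):
    """Least-squares fit of y[t] = b0 + b1*y[t-1] + b2*x[t]."""
    rows = [[1.0, y[t - 1], x[t]] for t in range(1, len(y))]
    tgt = y[1:]
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * ti for r, ti in zip(rows, tgt)) for i in range(3)]
    return solve3(XtX, Xty)

# Noise-free synthetic series generated with known coefficients (0.5, 0.6, 0.3)
x = [math.sin(0.7 * t) for t in range(40)]
y = [0.0]
for t in range(1, 40):
    y.append(0.5 + 0.6 * y[t - 1] + 0.3 * x[t])
b0, b1, b2 = fit_ar1_exog(y, x)
```

Because the synthetic series is noise-free, the fit recovers the generating coefficients exactly; with real climate data the same regression yields the least-squares estimates instead.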
For the preparation of "pre-warning" information on possible future climate change with potential adverse hydrological impacts in the study region, the long-term climate prediction methodology is applied. The latter is based on the downscaling of climate predictions from several single- and multi-domain GCMs, using the two well-known downscaling methods SDSM and LARS-WG and a newly developed MLR downscaling technique that allows the incorporation of a multitude of monthly or daily climate predictors from one or several (multi-domain) parent GCMs. The numerous downscaling experiments indicate that the MLR method is more accurate than SDSM and LARS-WG in predicting the recent past 20th-century (1971-2000) long-term monthly climate in the region. The MLR model is consequently employed to downscale 21st-century GCM climate predictions under SRES scenarios A1B, A2 and B1. However, since the hydrological watershed model requires daily-scale climate input data, a new stochastic daily climate generator is developed to rescale monthly observed or predicted climate series to daily series, while adhering to the statistical and geospatial distributional attributes of observed (past) daily climate series in the calibration phase. Employing this daily climate generator, 30 realizations of future daily climate series from downscaled monthly GCM climate predictor sets are produced and used as input to the SWAT distributed watershed model, to simulate future streamflow and other hydrological water budget components in the study region in a multi-realization manner. In addition to a general examination of the future changes of the hydrological regime in the KY basin, potential future changes of the water budgets of the three main reservoirs in the basin are analysed, as these are a major source of water supply in the study region.
The results of the long-term 21st-century downscaled climate predictions provide evidence that, compared with the 20th-century reference period, the future climate in the study area will be more extreme, particularly for SRES A1B. Thus, the temperatures will be higher and exhibit larger fluctuations. Although the future intensity of the rainfall is nearly constant, its spatial distribution across the region is partially changing. There is further evidence that sequential rainfall occurrence will decrease, so that short periods of high intensity will be followed by longer dry spells. This change in the sequential rainfall pattern will also lead to seasonal reductions of the streamflow and seasonal decreases of the water storage in the reservoirs. In any case, these predicted future climate changes with their hydrological impacts should encourage water planners and policy makers to develop adaptation strategies to properly handle the future water supply in this area, following the guidelines suggested in this study.
Abstract:
We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation-Maximization (EM) procedure for maximum likelihood estimation. Analogous to the standard Baum-Welch update rules, the M-step of our algorithm is exact and can be solved analytically. However, due to the combinatorial nature of the hidden state representation, the exact E-step is intractable. A simple and tractable mean field approximation is derived. Empirical results on a set of problems suggest that both the mean field approximation and Gibbs sampling are viable alternatives to the computationally expensive exact algorithm.
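For context, the exact E-step that becomes intractable under a distributed state representation rests on the standard forward recursion of an ordinary HMM. The sketch below shows that recursion for a tiny two-state HMM and checks it against brute-force path enumeration; the parameter values are illustrative, not from the paper.

```python
from itertools import product

# Forward recursion for the observation likelihood of a small HMM.
def forward_likelihood(pi, A, B, obs):
    """pi: initial distribution; A: transition matrix (rows sum to 1);
    B: emission matrix (state x symbol); obs: list of symbol indices."""
    S = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(S)]
    for o in obs[1:]:
        alpha = [sum(alpha[j] * A[j][i] for j in range(S)) * B[i][o]
                 for i in range(S)]
    return sum(alpha)

def brute_force_likelihood(pi, A, B, obs):
    """Sum over all state paths -- exponential cost, for checking only."""
    S, T = len(pi), len(obs)
    total = 0.0
    for path in product(range(S), repeat=T):
        p = pi[path[0]] * B[path[0]][obs[0]]
        for t in range(1, T):
            p *= A[path[t - 1]][path[t]] * B[path[t]][obs[t]]
        total += p
    return total

pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
obs = [0, 1, 1, 0]
lik = forward_likelihood(pi, A, B, obs)
```

The forward recursion costs O(T*S^2); in a distributed representation the effective state space S grows combinatorially, which is exactly why the paper resorts to a mean field approximation.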
Abstract:
Interviews with more than 40 leaders in the Boston area health care industry have identified a range of broadly felt critical problems. This document synthesizes these problems and places them in the context of work and family issues implicit in the organization of health care workplaces. It concludes with questions about possible ways to address such issues. The defining circumstance for the health care industry, nationally as well as regionally, at present is an extraordinary reorganization, not yet fully negotiated, in the provision and financing of health care. Hoped-for controls on the increased costs of medical care, specifically the widespread replacement of indemnity insurance by market-based managed care and business models of operation, have fallen far short of their promise. Pressures to limit expenditures have produced dispiriting conditions for the entire healthcare workforce, from technicians and aides to nurses and physicians. Under such strains, relations between managers and workers providing care are uneasy, ranging from determined efforts to maintain respectful cooperation to adversarial negotiation. Taken together, the interviews identify five key issues affecting a broad cross-section of occupational groups, albeit in different ways:
- Staffing shortages of various kinds throughout the health care workforce create problems for managers and workers and also for the quality of patient care.
- Long work hours and inflexible schedules place pressure on virtually every part of the healthcare workforce, including physicians.
- Degraded and unsupportive working conditions, often the result of workplace "deskilling" and "speed up," undercut previous modes of clinical practice.
- Lack of opportunities for training and advancement exacerbates workforce problems in an industry where occupational categories and terms of work are in a constant state of flux.
- Professional and employee voices are insufficiently heard in conditions of rapid institutional reorganization and consolidation.
Interviewees describe multiple impacts of these issues: on the operation of health care workplaces, on the well-being of the health care workforce, and on the quality of patient care. Also apparent in the interviews, but not clearly named and defined, is the impact of these issues on the ability of workers to attend well to the needs of their families, and the reciprocal impact of workers' family tensions on workplace performance. In other words, the same things that affect patient care also affect families, and vice versa. Some workers describe feeling both guilty about raising their own family issues when their patients' needs are at stake, and resentful about the exploitation of these feelings by administrators making workplace policy. The different institutions making up the health care system have responded to their most pressing issues with a variety of specific stratagems, but few address the complexities connecting relations between work and family. The MIT Workplace Center proposes a collaborative exploration of next steps to probe these complications and to identify possible locations within the health care system for workplace experimentation with outcomes benefiting all parties.
Abstract:
Co-training is a semi-supervised learning method designed to take advantage of the redundancy present when the object to be identified has multiple descriptions. Co-training is known to work well when the multiple descriptions are conditionally independent given the class of the object. The presence of multiple descriptions of objects in the form of text, images, audio and video in multimedia applications appears to provide redundancy in a form that may be suitable for co-training. In this paper, we investigate the suitability of utilizing text and image data from the Web for co-training. We perform measurements to find indications of conditional independence in the texts and images obtained from the Web. Our measurements suggest that conditional independence is likely to be present in the data. Our experiments, within a relevance feedback framework to test whether a method that exploits the conditional independence outperforms methods that do not, also indicate that better performance can indeed be obtained by designing algorithms that exploit this form of redundancy when it is present.
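The co-training loop described above can be sketched with two toy 1-D "views" (say, a text score and an image score) and a nearest-centroid learner per view: each round, the most confident single-view prediction labels one pool item for the shared labeled set. All data and names are illustrative, not the paper's setup.

```python
# Toy co-training sketch with two views and nearest-centroid learners.
def centroid_predict(labeled, v):
    """labeled: list of (value, label); returns (label, confidence margin)."""
    c = {}
    for lab in (0, 1):
        vals = [x for x, l in labeled if l == lab]
        c[lab] = sum(vals) / len(vals)
    d0, d1 = abs(v - c[0]), abs(v - c[1])
    return (0, d1 - d0) if d0 < d1 else (1, d0 - d1)

def cotrain(labeled, pool, rounds=4):
    """labeled: list of ((v1, v2), label); pool: unlabeled (v1, v2) items."""
    labeled, pool = list(labeled), list(pool)
    for _ in range(rounds):
        if not pool:
            break
        best = None                     # (margin, pool index, label)
        for i, item in enumerate(pool):
            for view in (0, 1):
                data = [(x[view], l) for x, l in labeled]
                lab, margin = centroid_predict(data, item[view])
                if best is None or margin > best[0]:
                    best = (margin, i, lab)
        _, i, lab = best
        labeled.append((pool.pop(i), lab))   # confident view labels the item
    def predict(item):
        votes = [centroid_predict([(x[v], l) for x, l in labeled], item[v])[0]
                 for v in (0, 1)]
        return max(set(votes), key=votes.count)
    return predict

seed = [((0.1, 0.2), 0), ((0.9, 0.8), 1)]
pool = [(0.15, 0.25), (0.85, 0.9), (0.2, 0.1), (0.8, 0.75)]
model = cotrain(seed, pool)
```

The benefit of conditional independence shows up in the labeling step: a view that is confident for reasons of its own supplies an effectively independent training label for the other view.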
Abstract:
We address the problem of jointly determining shipment planning and scheduling decisions in the presence of multiple shipment modes. We consider a long-lead-time, less expensive sea shipment mode and a short-lead-time but more expensive air shipment mode. Existing research on multiple shipment modes largely addresses the short-term scheduling decisions only. Motivated by an industrial problem in which planning decisions are made independently of the scheduling decisions, we investigate the benefits of integrating the two sets of decisions. We develop a sequence of mathematical models to address the planning and scheduling decisions. Preliminary computational results indicate improved performance of the integrated approach over some of the existing policies used in real-life situations.
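The sea-versus-air trade-off at the core of the problem can be illustrated with a deliberately simple greedy rule: for each order, pick the cheapest mode whose lead time still meets the due date. This is only a baseline heuristic of the kind the integrated models would improve upon; the data and function names are illustrative, not the paper's formulation.

```python
# Greedy mode-selection baseline for the sea/air trade-off.
def plan_modes(orders, lead, cost, now=0):
    """orders: list of (order_id, due_date); lead/cost: per-mode dicts."""
    modes = sorted(cost, key=cost.get)          # try cheapest mode first
    plan = {}
    for oid, due in orders:
        for m in modes:
            if now + lead[m] <= due:            # mode meets the due date
                plan[oid] = m
                break
        else:
            plan[oid] = None                    # no mode is fast enough
    return plan

lead = {"sea": 30, "air": 5}
cost = {"sea": 1.0, "air": 4.0}
plan = plan_modes([("A", 40), ("B", 10), ("C", 3)], lead, cost)
```

A greedy rule like this ignores capacity and batching interactions across orders, which is precisely where an integrated planning-and-scheduling model can outperform it.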