13 results for independent random variables with a common density
at Cochin University of Science
Abstract:
The present study emphasizes characterizing continuous probability distributions and their weighted versions in the univariate setup. A possible direction for further work is to study the properties of weighted distributions for truncated random variables in the discrete setup. The problem of extending the measures to higher dimensions, as well as their weighted versions, is yet to be examined. As the present study focused on length-biased models, the properties of weighted models under various other weight functions, and their functional relationships, also remain to be examined.
Abstract:
In this article, we study reliability measures such as the geometric vitality function and the conditional Shannon measures of uncertainty proposed by Ebrahimi (1996) and Sankaran and Gupta (1999), respectively, for doubly (interval) truncated random variables. In survival analysis and reliability engineering, these measures play a significant role in studying the various characteristics of a system/component when it fails between two time points. The interrelationships among these uncertainty measures for various distributions are derived, and characterization theorems arising out of them are proved.
Abstract:
In this paper, we study the relationship between the failure rate and the mean residual life of doubly truncated random variables. Accordingly, we develop characterizations for the exponential, Pareto II and beta distributions. Further, we generalize the identities for the Pearson and the exponential family of distributions given respectively in Nair and Sankaran (1991) and Consul (1995). Applications of these measures in the context of length-biased models are also explored.
Abstract:
In many situations probability models are more realistic than deterministic models. Several phenomena occurring in physics are studied as random phenomena changing with time and space. Stochastic processes originated from the needs of physicists. Let X(t) be a random variable, where t is a parameter assuming values from a set T. The collection of random variables {X(t), t ∈ T} is called a stochastic process. We denote the state of the process at time t by X(t), and the collection of all possible values that X(t) can assume is called the state space.
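As a minimal concrete illustration of this definition (a hypothetical sketch, not part of the thesis abstract), a simple symmetric random walk is a stochastic process with parameter set T = {0, 1, 2, ...} and the integers as its state space:

```python
import random

def random_walk(n_steps, seed=0):
    """Simulate a simple symmetric random walk: the collection
    {X(t), t in T}, with T = {0, 1, ..., n_steps}, is a stochastic
    process whose state space is the set of integers."""
    rng = random.Random(seed)
    x = 0
    path = [x]
    for _ in range(n_steps):
        x += rng.choice([-1, 1])  # one +/-1 step with equal probability
        path.append(x)
    return path

path = random_walk(10)  # one realization (sample path) of the process
```

Each call with a different seed produces a different sample path, while the process itself is the whole family of these random variables indexed by t.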
Abstract:
The results of an investigation on the limits of the random errors contained in the basic data of Physical Oceanography and their propagation through the computational procedures are presented in this thesis. It also suggests a method which increases the reliability of the derived results. The thesis is presented in eight chapters, including the introductory chapter. Chapter 2 discusses the general theory of errors that is relevant in the context of the propagation of errors in Physical Oceanographic computations. The error components contained in the independent oceanographic variables, namely temperature, salinity and depth, are delineated and quantified in Chapter 3. Chapter 4 discusses and derives the magnitude of the errors in the computation of the dependent oceanographic variables (density in situ, σt, specific volume and specific volume anomaly) due to the propagation of the errors contained in the independent oceanographic variables. The errors propagated into the computed values of the derived quantities, namely dynamic depth and relative currents, are estimated and presented in Chapter 5. Chapter 6 reviews the existing methods for the identification of the level of no motion and suggests a method for the identification of a reliable zero reference level. Chapter 7 discusses the available methods for the extension of the zero reference level into shallow regions of the oceans and suggests a new, more reliable method. A procedure of graphical smoothing of dynamic topographies between the error limits to provide more reliable results is also suggested in this chapter. Chapter 8 deals with the computation of the geostrophic current from these smoothed values of dynamic heights, with reference to the selected zero reference level. The summary and conclusions are also presented in this chapter.
Abstract:
Occupational stress is becoming a major issue on both the corporate and the social agenda. In industrialized countries, there have been quite dramatic changes in conditions at work during the last decade, caused by economic, social and technical development. As a consequence, people at work today are exposed to high quantitative and qualitative demands as well as hard competition driven by the global economy. A recent report says that ailments due to work-related stress are likely to cost India's exchequer around 72,000 crores between 2009 and 2015. Though India is a fast-developing country, it is yet to create facilities to mitigate the adverse effects of work stress; moreover, only little effort has been made to assess work-related stress. In the absence of well-defined standards to assess work-related stress in India, an attempt is made in this direction to develop factors for the evaluation of work stress. Accordingly, with the help of the existing literature and in consultation with safety experts, seven factors for the evaluation of work stress were developed. An instrument (questionnaire) was developed using these seven factors for the evaluation of work stress. The validity and unidimensionality of the questionnaire were ensured by confirmatory factor analysis, and its reliability was ensured before administration. While analyzing the relationships between the variables, it was noted that no relationship exists between them, and hence the above factors are treated as independent factors/variables for the purposes of the research. Initially, five profit-making manufacturing industries under the public sector in the state of Kerala were selected for the study, and the influence of the factors responsible for work stress was analyzed in these industries.
These industries were classified into two types, namely chemical and heavy engineering, based on the product manufactured and the work environment, and the analysis was carried out further for these two categories. The variation of work stress with the age, designation and experience of the employees was analyzed by means of one-way ANOVA. Further, three different types of modelling of work stress, namely factor modelling, structural equation modelling and multinomial logistic regression modelling, were carried out to analyze the association of the factors responsible for work stress. All these models were found to be equally good in predicting work stress. The present study indicates that work stress exists among the employees in public sector industries in Kerala. Employees belonging to the 40-45 yrs age group and the 15-20 yrs experience group had relatively higher work demand, low job control, and low support at work. Low job control was noted at lower designation levels, particularly at the worker level, in these industries. Hence the instrument developed using the seven factors, namely demand, control, manager support, peer support, relationship, role and change, can be effectively used for the evaluation of work stress in industries.
Abstract:
The present work is intended to discuss various properties and reliability aspects of higher order equilibrium distributions in the continuous, discrete and multivariate cases, which contribute to the study of equilibrium distributions. At first, we study and consolidate the existing literature on equilibrium distributions. For this we need some basic concepts in reliability, which are discussed in the second chapter. In Chapter 3, some identities connecting the failure rate functions and the moments of residual life of univariate, non-negative continuous equilibrium distributions of higher order and those of the baseline distribution are derived. These identities are then used to characterize the generalized Pareto model, mixtures of exponentials and the gamma distribution. An approach using characteristic functions is also discussed, with illustrations. Moreover, characterizations of ageing classes using stochastic orders are discussed. Part of the results of this chapter has been reported in Nair and Preeth (2009). Various properties of equilibrium distributions of non-negative discrete univariate random variables are discussed in Chapter 4. Then some characterizations of the geometric, Waring and negative hypergeometric distributions are presented. Moreover, the ageing properties of the original distribution and the nth order equilibrium distributions are compared. Part of the results of this chapter has been reported in Nair, Sankaran and Preeth (2012). Chapter 5 is a continuation of Chapter 4. Here, several conditions, in terms of stochastic orders connecting the baseline and its equilibrium distributions, are derived. These conditions can be used to redefine certain ageing notions. Then the equilibrium distributions of two random variables are compared in terms of various stochastic orders that have implications in reliability applications. In Chapter 6, we make two approaches to define multivariate equilibrium distributions of order n.
Then various properties, including characterizations of higher order equilibrium distributions, are presented. Part of the results of this chapter has been reported in Nair and Preeth (2008). The thesis is concluded in Chapter 7. A discussion of further studies on equilibrium distributions is also made in this chapter.
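For reference, the construction referred to throughout is usually defined as follows (a standard textbook definition, assumed here rather than quoted from the thesis): if X is a non-negative random variable with density f, survival function F̄ and finite mean µ, the first-order equilibrium density, and its nth-order iterate, are

```latex
f_1(x) = \frac{\bar F(x)}{\mu}, \qquad
f_n(x) = \frac{\bar F_{n-1}(x)}{\mu_{n-1}}, \quad n \ge 1, \quad f_0 = f,
```

where F̄_{n-1} and µ_{n-1} are the survival function and mean of the (n-1)th-order equilibrium distribution.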
Abstract:
Nonlinear dynamics has emerged as a prominent area of research in the past few decades. Turbulence, pattern formation and multistability are some of the important areas of research in nonlinear dynamics apart from the study of chaos. Chaos refers to the complex evolution of a deterministic system which is highly sensitive to initial conditions. The study of chaos theory started in the modern sense with the investigations of Edward Lorenz in the mid 1960s. Later developments in the subject provided a systematic development of chaos theory as a science of deterministic but complex and unpredictable dynamical systems. This thesis deals with the effect of random fluctuations, with their associated characteristic timescales, on chaos and synchronization. Here we introduce the concept of noise, and two familiar types of noise are discussed. The classification and representation of white and colored noise are introduced. Based on this we introduce the concept of randomness that we deal with as a variant of the familiar concept of noise. The dynamical systems introduced are the Rossler system, directly modulated semiconductor lasers and the harmonic oscillator. The directly modulated semiconductor laser being a less familiar dynamical system, we have included a detailed introduction to its relevance in chaotic-encryption-based cryptography in communication. We show that the effect of a fluctuating parameter mismatch on synchronization is to destroy the synchronization. Further, we show that the relation between synchronization error and timescales can be found empirically, but there are also cases where this is not possible. Studies show that under variation of the parameters the system becomes chaotic, following what appears to be the period-doubling route to chaos.
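The sensitivity to initial conditions mentioned above can be made concrete with the Rossler system. The following is a minimal numerical sketch, not code from the thesis; the standard parameter values a = b = 0.2, c = 5.7 and the step size are assumed for illustration:

```python
def rossler_step(state, dt, a=0.2, b=0.2, c=5.7):
    """One fourth-order Runge-Kutta step for the Rossler system
    x' = -y - z,  y' = x + a*y,  z' = b + z*(x - c)."""
    def f(s):
        x, y, z = s
        return (-y - z, x + a * y, b + z * (x - c))

    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (p + 2 * q + 2 * r + w)
                 for s, p, q, r, w in zip(state, k1, k2, k3, k4))

def trajectory(state, n_steps, dt=0.01):
    """Integrate for n_steps steps, returning the list of visited states."""
    out = [state]
    for _ in range(n_steps):
        state = rossler_step(state, dt)
        out.append(state)
    return out

# Two trajectories from nearby initial conditions; their divergence over
# time is the sensitivity to initial conditions that defines chaos.
t1 = trajectory((1.0, 1.0, 1.0), 2000)
t2 = trajectory((1.0, 1.0, 1.0 + 1e-6), 2000)
```

Plotting the separation between t1 and t2 against time would show the (on average) exponential growth characteristic of a positive Lyapunov exponent.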
Abstract:
A bivariate semi-Pareto distribution is introduced and characterized using geometric minimization. Autoregressive minification models for bivariate random vectors with bivariate semi-Pareto and bivariate Pareto distributions are also discussed. Multivariate generalizations of the distributions and the processes are briefly indicated.
Abstract:
It has been shown recently that systems driven with random pulses show the signature of chaos, even without nonlinear dynamics. This shows that the relation between randomness and chaos is much closer than was understood earlier. The effect of random perturbations on synchronization can also differ. In some cases, identical random perturbations acting on two different chaotic systems induce synchronization. Most commonly, however, the effect of random fluctuations on the synchronization of chaotic systems is to destroy synchronization. This thesis deals with the effect of random fluctuations, with their associated characteristic timescales, on chaos and synchronization. The author tries to unearth yet another manifestation of randomness in chaos and synchronization. This thesis is organized into six chapters.
Abstract:
In this article we introduce some structural relationships between weighted and original variables in the context of the maintainability function and the reversed repair rate. Furthermore, we prove some characterization theorems for specific models such as the power, exponential, Pareto II, beta, and Pearson system of distributions using the relationships between the original and weighted random variables.
Abstract:
The cumulative effects of global change, including climate change, increased population density, domestic waste disposal, and effluent discharges from industrial processes, agriculture and aquaculture, will likely continue and will intensify the process of eutrophication in estuarine environments. Eutrophication is one of the leading causes of degraded water quality, water-column hypoxia/anoxia, harmful algal blooms (HABs) and loss of habitat and species diversity in the estuarine environment. The present study attempts to characterize the trophic condition of a coastal estuary using a simple tool, the trophic index (TRIX), based on a linear combination of the logarithms of four state variables, with the supplementary Efficiency Coefficient (Eff. Coeff.) as a discriminating tool. Numerically, the index TRIX is scaled from 0 to 10, covering a wide range of trophic conditions from oligotrophic to eutrophic. The study area, the Kodungallur-Azhikode Estuary (KAE), was comparatively shallow, with an average depth of 3.6±0.2 m. Dissolved oxygen in the water column ranged from 4.7±1.3 mg L−1 at Station I to 5.9±1.4 mg L−1 at Station IV. The average nitrate-nitrogen (NO3-N) of KAE water was 470 mg m−3; average values ranged from 364.4 mg m−3 at Station II to 626.6 mg m−3 at Station VII. The mean ammonium-nitrogen (NH4+-N) varied from 54.1 mg m−3 at Station VII to 101 mg m−3 at Station III. The average Chl-a for the seven stations of the KAE was 6.42±3.91 mg m−3. Comparisons over different spatial and temporal scales in the KAE showed that the estuary experiences high productivity under the influence of a high degree of eutrophication; an annual average TRIX of 6.91 was noticed in the KAE, with the seasonal highest observed during the pre-monsoon period (7.15) and the lowest during the post-monsoon period (6.51).
On the spatial scale, Station V showed a high value (7.37), with comparatively low values at Station VI (6.93) and Station VII (6.96), indicating that eutrophication was predominant in the land-cover area with comparatively high water residence time. Eff. Coeff. values in the KAE ranged from −2.74 during the monsoon period to −1.98 in the pre-monsoon period. The present study revealed that the trophic state of the estuary is under severe stress, and the restriction of autochthonous and allochthonous nutrient loading should be the keystone in mitigating the eutrophication process.
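The TRIX computation described above can be sketched as follows. This is a hypothetical illustration using the widely cited formulation of Vollenweider et al. (1998), with scaling constants 1.5 and 1.2; the exact variant and coefficients used in this study are not stated in the abstract, and the input values below are illustrative (only chl_a is taken from the abstract's KAE average), not the study's data.

```python
import math

def trix(chl_a, ado, din, tp, k=1.5, m=1.2):
    """Trophic index TRIX: a linear combination of the log10 of four
    state variables, scaled to (roughly) the 0-10 range.

    chl_a : chlorophyll-a concentration (mg m^-3)
    ado   : absolute % deviation of dissolved-oxygen saturation from 100%
    din   : dissolved inorganic nitrogen (mg m^-3)
    tp    : total phosphorus (mg m^-3)
    k, m  : scaling constants of the commonly cited formulation; the
            study may use a different variant.
    """
    return (math.log10(chl_a * ado * din * tp) + k) / m

# Illustrative inputs only: chl_a from the abstract, the rest made up.
value = trix(chl_a=6.42, ado=30.0, din=524.1, tp=30.0)
```

Higher nutrient, chlorophyll, or oxygen-deviation values push the index toward the eutrophic end of the 0-10 scale.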
Abstract:
The problem of using information available from one variable X to make inference about another, Y, is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = µ(X) + ɛ. Here µ(x) is the mean response at the predictor variable value X = x, and ɛ = Y − µ(X) is the error. In classical regression analysis both X and Y are observable, and one then proceeds to make inference about the mean response function µ(X). In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant but the actual amount absorbed by the plant, X, is unobservable. As another example, from Wang [5], an epidemiologist studies the severity of a lung disease, Y, among the residents of a city in relation to the amount of certain air pollutants. The amount of the air pollutants, Z, can be measured at certain observation stations in the city, but the actual exposure of the residents to the pollutants, X, is unobservable and may vary randomly from the Z-values. In both cases X = Z + error. This is the so-called Berkson measurement error model. In the more classical measurement error model one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition, X, on a disease: measuring nutrition intake precisely within 24 hours is almost impossible. There are many similar examples in agricultural and medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others.
In this talk we shall address the question of fitting a parametric model to the regression function µ(X) in the Berkson measurement error model: Y = µ(X) + ɛ, X = Z + η, where η and ɛ are random errors with E(ɛ) = 0, X and η are d-dimensional, and Z is the observable d-dimensional r.v.
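To make the setup concrete, the following sketch (hypothetical, one-dimensional, with a linear mean response chosen purely for illustration) simulates the Berkson model and regresses Y on the observable Z. For a linear µ, this naive fit is known to remain approximately unbiased under Berkson error, in contrast to the classical W = X + error model, because E[X | Z] = Z here.

```python
import random

def simulate_berkson(n, a=2.0, b=0.5, sd_eta=0.3, sd_eps=0.2, seed=42):
    """Simulate the Berkson model Y = mu(X) + eps, X = Z + eta,
    with an illustrative linear mean response mu(x) = a + b*x."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        z = rng.uniform(0.0, 10.0)      # observable nominal level Z
        x = z + rng.gauss(0.0, sd_eta)  # true but unobservable X
        y = a + b * x + rng.gauss(0.0, sd_eps)
        data.append((z, y))
    return data

def ols(data):
    """Least-squares fit of Y on the observable Z: (intercept, slope)."""
    n = len(data)
    mz = sum(z for z, _ in data) / n
    my = sum(y for _, y in data) / n
    szz = sum((z - mz) ** 2 for z, _ in data)
    szy = sum((z - mz) * (y - my) for z, y in data)
    slope = szy / szz
    return my - slope * mz, slope

# Regressing Y on Z recovers the illustrative linear mu approximately,
# since the Berkson error eta averages out conditionally on Z.
a_hat, b_hat = ols(simulate_berkson(5000))
```

For nonlinear parametric µ, by contrast, naive fitting is generally biased, which is what motivates the question addressed in the talk.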