993 results for 230201 Probability Theory
This thesis describes the Generative Topographic Mapping (GTM), a non-linear latent variable model intended for modelling continuous, intrinsically low-dimensional probability distributions embedded in high-dimensional spaces. It can be seen as a non-linear form of principal component analysis or factor analysis. It also provides a principled alternative to the self-organizing map, a widely established neural network model for unsupervised learning, resolving many of its associated theoretical problems. An important potential application of the GTM is the visualization of high-dimensional data. Since the GTM is non-linear, the relationship between data and its visual representation may be far from trivial, but a better understanding of this relationship can be gained by computing the so-called magnification factor. In essence, the magnification factor relates the distances between data points, as they appear when visualized, to the actual distances between those data points. There are two principal limitations of the basic GTM model. The computational effort required grows exponentially with the intrinsic dimensionality of the density model; however, if the intended application is visualization, this will typically not be a problem. The other limitation is the inherent structure of the GTM, which makes it most suitable for modelling moderately curved probability distributions of approximately rectangular shape. When the target distribution is very different from this, the aim of maintaining an 'interpretable' structure suitable for visualizing data may conflict with the aim of providing a good density model. The fact that the GTM is a probabilistic model means that results from probability theory and statistics can be used to address problems such as model complexity. Furthermore, this framework provides solid ground for extending the GTM to wider contexts than that of this thesis.
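To make the magnification factor concrete, here is a minimal numerical sketch, assuming a toy RBF-based mapping from a 2-D latent space into a 3-D data space; the mapping, grid and all parameters are invented for illustration and are not the thesis's model:

    import numpy as np

    # Toy non-linear mapping y(x) = W @ phi(x) from a 2-D latent space
    # into 3-D data space, built from radial basis functions in the
    # spirit of a GTM-style model.
    rng = np.random.default_rng(0)
    centres = rng.uniform(-1, 1, size=(9, 2))   # RBF centres in latent space
    W = rng.normal(size=(3, 9))                 # mapping weights
    sigma = 0.5

    def phi(x):
        """RBF activations for a latent point x of shape (2,)."""
        d2 = np.sum((centres - x) ** 2, axis=1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def y(x):
        """Map a latent point into data space."""
        return W @ phi(x)

    def magnification_factor(x, eps=1e-5):
        """sqrt(det(J^T J)) of the mapping at x, via finite differences.

        This is the local factor by which the mapping scales the area
        element of the latent (visualization) space on the embedded
        manifold in data space.
        """
        J = np.column_stack([
            (y(x + eps * e) - y(x - eps * e)) / (2 * eps)
            for e in np.eye(2)
        ])
        return np.sqrt(np.linalg.det(J.T @ J))

    print(magnification_factor(np.array([0.2, -0.3])))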
Abstract:
This thesis provides an interoperable language for quantifying uncertainty using probability theory. A general introduction to interoperability and uncertainty is given, with particular emphasis on the geospatial domain. Existing interoperable standards used within the geospatial sciences are reviewed, including Geography Markup Language (GML), Observations and Measurements (O&M) and the Web Processing Service (WPS) specifications. The importance of uncertainty in geospatial data is identified, and probability theory is examined as a mechanism for quantifying these uncertainties. The Uncertainty Markup Language (UncertML) is presented as a solution to the lack of an interoperable standard for quantifying uncertainty. UncertML is capable of describing uncertainty using statistics, probability distributions or a series of realisations. The capabilities of UncertML are demonstrated through a series of XML examples. The thesis then provides a series of example use cases where UncertML is integrated with existing standards in a variety of applications. The Sensor Observation Service, a service for querying and retrieving sensor-observed data, is extended to provide a standardised method for quantifying the inherent uncertainties in sensor observations. The INTAMAP project demonstrates how UncertML can be used to aid uncertainty propagation using a WPS by allowing UncertML as input and output data. The flexibility of UncertML is demonstrated with an extension to the GML geometry schemas to allow positional uncertainty to be quantified. Further applications and developments of UncertML are discussed.
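As a rough illustration of what such an encoding looks like, the sketch below emits an UncertML-flavoured XML fragment for a Gaussian. The namespace URI and element names are assumptions loosely modelled on UncertML 2.0; the normative schema should be consulted for the real vocabulary:

    import xml.etree.ElementTree as ET

    # Illustrative only: the namespace and element names below mimic the
    # flavour of UncertML (a Gaussian described by mean and variance);
    # they are assumptions, not the normative schema.
    NS = "http://www.uncertml.org/2.0"   # assumed namespace URI
    ET.register_namespace("un", NS)

    dist = ET.Element(f"{{{NS}}}NormalDistribution")
    ET.SubElement(dist, f"{{{NS}}}mean").text = "21.3"
    ET.SubElement(dist, f"{{{NS}}}variance").text = "4.0"

    print(ET.tostring(dist, encoding="unicode"))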
Abstract:
This thesis explores the process of developing a principled approach for translating a model of mental-health risk expertise into a probabilistic graphical structure. Probabilistic graphical structures combine graph theory and probability theory, and offer numerous advantages for representing domains involving uncertainty, such as the mental-health domain. This thesis builds on the advantages that probabilistic graphical structures offer in representing such domains. The Galatean Risk Screening Tool (GRiST) is a psychological model for mental-health risk assessment based on fuzzy sets. The knowledge encapsulated in the psychological model was used to develop the structure of the probability graph by exploiting the semantics of the clinical expertise. The thesis describes how a chain graph can be developed from the psychological model to provide a probabilistic evaluation of risk that complements the one generated by GRiST's clinical expertise: the GRiST knowledge structure is decomposed into component parts, which are in turn mapped into equivalent probabilistic graphical structures, such as Bayesian belief nets and Markov random fields, to produce a composite chain graph that provides a probabilistic classification of risk to complement the expert clinical judgements.
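As a toy illustration of the kind of probabilistic evaluation such a graph supports, here is a deliberately small Bayesian-network-style calculation; the structure and all probabilities are invented for illustration and are not taken from GRiST:

    # P(risk) marginalised over two parent "concept" nodes A and B:
    # P(R=1) = sum_{a,b} P(R=1 | a, b) P(a) P(b)
    p_a = 0.30                      # P(A = present), invented
    p_b = 0.10                      # P(B = present), invented
    p_r_given = {                   # P(R = high | A, B), invented CPT
        (0, 0): 0.05,
        (0, 1): 0.40,
        (1, 0): 0.35,
        (1, 1): 0.85,
    }

    p_r = sum(
        p_r_given[(a, b)]
        * (p_a if a else 1 - p_a)
        * (p_b if b else 1 - p_b)
        for a in (0, 1) for b in (0, 1)
    )
    print(f"P(risk high) = {p_r:.4f}")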
Abstract:
A cross-country pipeline construction project is exposed to an uncertain environment owing to its enormous size (physical scale, manpower requirement and financial value), the complexity of its design technology, and the involvement of external factors. These uncertainties can lead to several changes in project scope during project execution. Unless the changes are properly controlled, the time, cost and quality goals of the project may never be achieved. A methodology is proposed for project control through risk analysis, contingency allocation and hierarchical planning models. Risk analysis is carried out through the analytic hierarchy process (AHP) because of the subjective nature of risks in construction projects. The results of the risk analysis are used to determine a logical contingency for project control with the application of probability theory. Ultimate project control is carried out by a hierarchical planning model, which enables decision makers to take vital decisions in the changing environment of the construction period. Goal programming (GP), a multiple-criteria decision-making technique, is proposed for model formulation because of its flexibility and priority-based structure. The project is planned hierarchically at three levels: project, work package and activity. GP is applied separately at each level, with the decision variables of each model being different planning parameters of the project. In this study, the models are formulated from the owner's perspective, and their effectiveness in project control is demonstrated.
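To make the AHP step concrete, here is a minimal sketch of the standard priority-vector computation; the pairwise comparison matrix below is illustrative, not taken from the paper:

    import numpy as np

    # AHP in brief: risks are compared pairwise on Saaty's 1-9 scale and
    # the priority vector is the principal eigenvector of the comparison
    # matrix. The 3x3 matrix below is an invented example.
    A = np.array([
        [1.0, 3.0, 5.0],
        [1/3, 1.0, 2.0],
        [1/5, 1/2, 1.0],
    ])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                       # priority weights for each risk

    # Consistency index / ratio (random index RI = 0.58 for n = 3)
    n = A.shape[0]
    CI = (eigvals.real[k] - n) / (n - 1)
    CR = CI / 0.58
    print("weights:", np.round(w, 3), " CR:", round(CR, 3))

By the usual rule of thumb, a consistency ratio CR below 0.1 indicates that the pairwise judgements are acceptably consistent.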
Abstract:
Projects that are exposed to uncertain environments can be effectively controlled by applying risk analysis during the planning stage. The Analytic Hierarchy Process, a multi-attribute decision-making technique, can be used to analyse and assess project risks that are objective or subjective in nature. Among other advantages, the process logically integrates the various elements of the planning process. The results from risk analysis and activity analysis are then used to develop a logical contingency allowance for the project through the application of probability theory. The contingency allowance is created in two parts: (a) a technical contingency and (b) a management contingency. This provides a basis for decision making in a changing project environment. Effective control of the project is made possible by limiting changes to within the monetary contingency allowance for the work package concerned, and by utilizing the contingency through proper appropriation. The whole methodology is applied to a pipeline-laying project in India, and its effectiveness in project control is demonstrated.
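As a toy illustration of a probability-based contingency of this kind, here is a sketch under invented figures; the expected-value rule, the flat management reserve and all numbers are assumptions, not the paper's method:

    # Expected-value contingency sketch: each work package carries risk
    # events with a probability and a cost impact; the technical
    # contingency is the sum of expected impacts, and a flat reserve
    # stands in for the management contingency. All figures invented.
    work_packages = {
        "trenching":  [(0.20, 50_000), (0.05, 120_000)],   # (prob, impact)
        "welding":    [(0.10, 80_000)],
        "river_xing": [(0.15, 200_000), (0.10, 60_000)],
    }

    technical = sum(p * c for events in work_packages.values()
                    for p, c in events)
    management = 0.05 * sum(c for events in work_packages.values()
                            for _, c in events)
    print(f"technical contingency:  {technical:,.0f}")
    print(f"management contingency: {management:,.0f}")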
Abstract:
This paper is devoted to a theory that describes physical phenomena under non-constant statistical conditions. The theory is a new direction in probability theory and mathematical statistics that opens new possibilities for representing the physical world by hyper-random models. These models take into account changes in an object's properties as well as the uncertainty of the statistical conditions.
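A minimal sketch of the hyper-random idea, assuming the common formulation in which a hyper-random variable is a family of conditional distributions characterized by upper and lower envelopes of their distribution functions; the Gaussian conditions below are invented:

    import numpy as np
    from math import erf, sqrt

    # A hyper-random variable viewed as a family of ordinary random
    # variables indexed by unknown, changing statistical conditions g.
    # It is described by envelopes of the conditional distribution
    # functions rather than by a single F(x).
    def norm_cdf(x, mu, s):
        return 0.5 * (1 + erf((x - mu) / (s * sqrt(2))))

    conditions = [(0.0, 1.0), (0.5, 1.5), (-0.3, 0.8)]  # (mean, std) per condition
    xs = np.linspace(-4, 4, 9)

    F_upper = [max(norm_cdf(x, m, s) for m, s in conditions) for x in xs]
    F_lower = [min(norm_cdf(x, m, s) for m, s in conditions) for x in xs]
    for x, lo, hi in zip(xs, F_lower, F_upper):
        print(f"x={x:+.1f}  lower={lo:.3f}  upper={hi:.3f}")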
Abstract:
A range of physical and engineering systems exhibit irregular complex dynamics featuring an alternation of quiet and burst time intervals, called intermittency. The type of intermittency most widely studied in laser science is on-off intermittency [1]. On-off intermittency can be understood as the conversion of noise, in a system close to an instability threshold, into effective time-dependent fluctuations that result in the alternation of stable and unstable periods. On-off intermittency has recently been demonstrated in semiconductor, Erbium-doped and Raman lasers [2-5]. The recently demonstrated random distributed feedback (random DFB) fiber laser exhibits irregular dynamics near the generation threshold [6,7]. Here we show intermittency in a cascaded random DFB fiber laser. We study intensity fluctuations in a random DFB fiber laser based on nitrogen-doped fiber. Under appropriate pumping, the laser generates first and second Stokes components at 1120 nm and 1180 nm, respectively. We study the intermittency in the radiation of the second Stokes wave. A typical time trace near the generation threshold of the second Stokes wave (Pth) is shown in Fig. 1a. From a number of sufficiently long time traces we calculate the statistical distribution of the intervals between major spikes in the time dynamics, Fig. 1b. To eliminate the contribution of high-frequency components of the spikes, we use a low-pass filter along with a reference value of the output power. The experimental data are fitted by a power law.
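The interval analysis sketched above can be illustrated in a few lines; the synthetic heavy-tailed trace, threshold and binning below are stand-ins for the measured data, purely for illustration:

    import numpy as np

    # Detect spikes above a threshold in an intensity time trace,
    # collect the intervals between them, and fit the interval
    # distribution with a power law on log-log axes.
    rng = np.random.default_rng(1)
    trace = rng.pareto(1.5, size=200_000)   # heavy-tailed stand-in signal
    threshold = 10.0

    spike_idx = np.flatnonzero(trace > threshold)
    intervals = np.diff(spike_idx)

    # Histogram on logarithmic bins, then a linear fit in log-log space.
    bins = np.logspace(0, np.log10(intervals.max()), 20)
    hist, edges = np.histogram(intervals, bins=bins, density=True)
    mids = np.sqrt(edges[:-1] * edges[1:])
    mask = hist > 0
    slope, intercept = np.polyfit(np.log(mids[mask]), np.log(hist[mask]), 1)
    print(f"power-law exponent ~ {slope:.2f}")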
Abstract:
2000 Mathematics Subject Classification: Primary 60J45, 60J50, 35Cxx; Secondary 31Cxx.
Abstract:
The existence of an inverse limit of an inverse system of (probability) measure spaces has been investigated since the birth of modern probability theory. Results by Kolmogorov [10], Bochner [2], Choksi [5], Metivier [14] and Bourbaki [3], among others, have paved the way to a deep understanding of the problem under consideration. All the above results, however, call for some topological concepts, or at least for concepts closely related to topological ones. In this paper we investigate purely measurable inverse systems of (probability) measure spaces and give a sufficient condition for the existence of a unique inverse limit. An example of such a purely measurable inverse system of (probability) measure spaces is also given.
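For orientation, the classical definition being generalized can be stated compactly; this follows the standard formulation associated with the works cited above, not the purely measurable condition of the paper itself:

    % Classical setup: an inverse system of probability spaces over a
    % directed index set I.
    Let $(\Omega_i,\mathcal{A}_i,P_i)_{i\in I}$ be probability spaces and,
    for $i \preceq j$, let $\pi_{ij}\colon \Omega_j \to \Omega_i$ be
    measurable maps satisfying
    \[
      \pi_{ik} = \pi_{ij}\circ\pi_{jk} \quad (i \preceq j \preceq k),
      \qquad
      P_i = P_j \circ \pi_{ij}^{-1}.
    \]
    An inverse limit is a probability space $(\Omega,\mathcal{A},P)$ with
    measurable projections $\pi_i\colon \Omega \to \Omega_i$ such that
    $\pi_i = \pi_{ij}\circ\pi_j$ and $P_i = P\circ\pi_i^{-1}$ for all
    $i \preceq j$; the results cited above give topological conditions
    under which such a $P$ exists and is unique.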
Abstract:
Recent advances in electronic and computer technologies have led to the widespread deployment of wireless sensor networks (WSNs). WSNs have a wide range of applications, including military sensing and tracking, environment monitoring and smart environments. Many WSNs carry out mission-critical tasks, such as military applications, so security issues in WSNs remain at the forefront of research. Compared with other wireless networks, such as ad hoc and cellular networks, security in WSNs is more complicated because of the constrained capabilities of sensor nodes and the properties of the deployment, such as its large scale and hostile environment. Security issues mainly come from attacks. In general, attacks in WSNs can be classified as external attacks and internal attacks. In an external attack, the attacking node is not an authorized participant of the sensor network; cryptography and other security methods can prevent some external attacks. However, node compromise, the major and unique problem that leads to internal attacks, undermines all such prevention efforts. Knowing the probability of node compromise helps a system to detect and defend against it. Although there are some approaches that can be used to detect and defend against node compromise, few of them can estimate the probability of node compromise. Hence, we develop basic uniform, basic gradient, intelligent uniform and intelligent gradient models for node compromise distribution, using probability theory, in order to adapt to different application environments. These models allow systems to estimate the probability of node compromise. Applying these models in system security designs can improve system security and decrease overheads in nearly every security area. Moreover, based on these models, we design a novel secure routing algorithm to defend against the routing security issue posed by nodes that have already been compromised but have not yet been detected by the node-compromise detection mechanism. The routing paths in our algorithm detour around nodes that have been detected as compromised or that have higher probabilities of being compromised. Simulation results show that our algorithm effectively protects routing paths from node compromise, whether detected or not.
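A minimal sketch of routing that detours around likely-compromised nodes, assuming each node carries a weight of -log(1 - p) so that Dijkstra's shortest path maximizes the probability of avoiding compromise; the topology and probabilities are invented, and this is not the paper's algorithm:

    import heapq
    from math import log, exp

    # Each node gets a compromise probability p; every node on a path
    # contributes -log(1 - p), so minimizing the path weight maximizes
    # prod(1 - p), i.e. the chance that no node on the path is
    # compromised. Topology and probabilities are invented.
    p_comp = {"A": 0.01, "B": 0.30, "C": 0.05, "D": 0.02, "E": 0.40}
    edges = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"],
             "D": [], "E": ["D"]}

    def safest_path(src, dst):
        w = {v: -log(1.0 - p) for v, p in p_comp.items()}
        best = {src: w[src]}
        heap = [(w[src], src, [src])]
        while heap:
            d, v, path = heapq.heappop(heap)
            if v == dst:
                return path, 1.0 - exp(-d)  # P(some node on path compromised)
            for u in edges[v]:
                nd = d + w[u]
                if nd < best.get(u, float("inf")):
                    best[u] = nd
                    heapq.heappush(heap, (nd, u, path + [u]))
        return None, None

    path, p_any = safest_path("A", "D")
    print(path, f"P(path touches a compromised node) ~ {p_any:.3f}")

The additive log weights keep the objective compatible with standard shortest-path machinery while still expressing a multiplicative probability of path safety.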