408 results for Probability distribution
Abstract:
Consider the concept combination ‘pet human’. In word association experiments, human subjects produce the associate ‘slave’ in relation to this combination. The striking aspect of this associate is that it is not produced as an associate of ‘pet’, or ‘human’ in isolation. In other words, the associate ‘slave’ seems to be emergent. Such emergent associations sometimes have a creative character and cognitive science is largely silent about how we produce them. Departing from a dimensional model of human conceptual space, this article will explore concept combinations, and will argue that emergent associations are a result of abductive reasoning within conceptual space, that is, below the symbolic level of cognition. A tensor-based approach is used to model concept combinations allowing such combinations to be formalized as interacting quantum systems. Free association norm data is used to motivate the underlying basis of the conceptual space. It is shown by analogy how some concept combinations may behave like quantum-entangled (non-separable) particles. Two methods of analysis are presented for empirically validating the presence of non-separable concept combinations in human cognition. One method is based on quantum theory and the other is based on comparing a joint (true theoretic) probability distribution with another distribution based on a separability assumption, using a chi-square goodness-of-fit test. Although these methods were inconclusive in relation to an empirical study of bi-ambiguous concept combinations, avenues for further refinement of these methods are identified.
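The second of these methods can be illustrated with a minimal sketch: an observed contingency table of interpretation counts for a concept combination is compared against the counts expected under a separability (product-of-marginals) assumption via a chi-square goodness-of-fit test. The 2x2 table and its counts below are purely illustrative and are not data from the study.

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical counts of how often each sense of each constituent word was
# chosen when subjects interpreted the combination (rows: senses of word A,
# columns: senses of word B). The numbers are purely illustrative.
observed = np.array([[35.0, 10.0],
                     [12.0, 43.0]])
n = observed.sum()

# Expected counts under the separability assumption: the joint distribution
# factorises into the product of its marginals.
row_marginal = observed.sum(axis=1) / n
col_marginal = observed.sum(axis=0) / n
expected = n * np.outer(row_marginal, col_marginal)

# Chi-square goodness-of-fit statistic against the separable model.
stat = ((observed - expected) ** 2 / expected).sum()
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)
p_value = chi2.sf(stat, dof)

print(f"chi2 = {stat:.2f}, dof = {dof}, p = {p_value:.4f}")
# A small p-value suggests the joint distribution is not well described by
# the product of its marginals, i.e. the combination may be non-separable.
```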
Abstract:
The question of the conditions under which conceptual representation is compositional remains debated within cognitive science. This paper develops a mathematical apparatus for a probabilistic representation of concepts, drawing upon methods from quantum theory, and proposes a formal test that can determine whether a specific conceptual combination is compositional or not. This test examines a joint probability distribution modeling the combination, asking whether or not it is factorizable. Empirical studies indicate that some combinations should be considered non-compositional.
Abstract:
Quality-oriented management systems and methods have become the dominant business and governance paradigm. From this perspective, satisfying customers’ expectations by supplying reliable, good-quality products and services is the key factor for an organization and even a government. During recent decades, Statistical Quality Control (SQC) methods have been developed as the technical core of quality management and the continuous improvement philosophy, and are now applied widely to improve the quality of products and services in industrial and business sectors. Recently, SQC tools, in particular quality control charts, have been used in healthcare surveillance. In some cases, these tools have been modified and developed to better suit the characteristics and needs of the health sector. Some of the work in the healthcare area appears to have evolved independently of the development of industrial statistical process control methods. Therefore, analysing and comparing the paradigms and characteristics of quality control charts and techniques across the different sectors presents opportunities for transferring knowledge and for future development in each sector. Meanwhile, the capabilities of the Bayesian approach, particularly Bayesian hierarchical models and computational techniques in which all uncertainty is expressed as a structure of probability, facilitate decision making and cost-effectiveness analyses. Therefore, this research investigates the use of the quality improvement cycle in a health setting using clinical data from a hospital. The need for clinical data for monitoring purposes is investigated in two respects. A framework and appropriate tools from the industrial context are proposed and applied to evaluate and improve data quality in the available datasets and data flow; then a data capturing algorithm using Bayesian decision-making methods is developed to determine an economical sample size for statistical analyses within the quality improvement cycle. Having ensured clinical data quality, some characteristics of control charts in the health context, including the necessity of monitoring attribute data and correlated quality characteristics, are considered. To this end, multivariate control charts from an industrial context are adapted to monitor the radiation delivered to patients undergoing diagnostic coronary angiograms, and various risk-adjusted control charts are constructed and investigated for monitoring binary outcomes of clinical interventions as well as post-intervention survival time. Meanwhile, the adoption of a Bayesian approach is proposed as a new framework for estimating the change point following a control chart’s signal. This estimate aims to facilitate root-cause analysis in the quality improvement cycle, since it narrows the search for the potential causes of detected changes to a tighter time frame prior to the signal. This approach yields highly informative estimates of change point parameters, since the results are obtained as probability distributions. Using Bayesian hierarchical models and Markov chain Monte Carlo computational methods, Bayesian estimators of the time and the magnitude of various change scenarios, including step changes, linear trends and multiple changes in a Poisson process, are developed and investigated.
The benefits of change point investigation are revisited and promoted in monitoring hospital outcomes, where the developed Bayesian estimator reports the true time of the shifts, compared with a priori known causes, detected by control charts monitoring the rate of excess usage of blood products and major adverse events during and after cardiac surgery in a local hospital. The development of the Bayesian change point estimators is then continued for healthcare surveillance of processes in which pre-intervention characteristics of patients affect the outcomes. In this setting, the Bayesian estimator is first extended to capture the patient mix (covariates) through the risk models underlying risk-adjusted control charts. Variants of the estimator are developed to estimate the true time of step changes and linear trends in the odds ratio of intensive care unit outcomes in a local hospital. Secondly, the Bayesian estimator is extended to identify the time of a shift in mean survival time after a clinical intervention monitored by risk-adjusted survival time control charts. In this context, the survival time after a clinical intervention is also affected by the patient mix, and the survival function is constructed using a survival prediction model. The simulation studies undertaken in each research component, and the results obtained, strongly recommend the developed Bayesian estimators as an alternative for change point estimation within the quality improvement cycle in healthcare surveillance as well as in industrial and business contexts. The superiority of the proposed Bayesian framework and estimators is enhanced when the probability quantification, flexibility and generalizability of the developed model are also considered. The advantages of the Bayesian approach seen in the general context of quality control may also extend to the industrial and business domains where quality monitoring was initially developed.
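The step-change scenario in a Poisson process can be illustrated with a small sketch. The code below is not the estimator developed in the thesis; it is a minimal conjugate Gamma-Poisson model in which the posterior over the change point is computed exactly on a grid rather than by Markov chain Monte Carlo, with illustrative prior parameters and simulated counts.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(1)

# Simulated daily event counts with a step change in the Poisson rate at t = 60.
true_tau = 60
y = np.concatenate([rng.poisson(4.0, true_tau), rng.poisson(7.0, 100 - true_tau)])
n = len(y)

# Gamma(a, b) priors on the rates before and after the change (illustrative values).
a, b = 1.0, 0.2

def log_marginal(counts):
    """Log marginal likelihood of a Poisson segment with a Gamma(a, b) prior on its
    rate (the y! terms are constant across change points and are dropped)."""
    s, m = counts.sum(), len(counts)
    return a * np.log(b) - gammaln(a) + gammaln(a + s) - (a + s) * np.log(b + m)

# Posterior over the change point tau, with a uniform prior on 1..n-1.
log_post = np.array([log_marginal(y[:tau]) + log_marginal(y[tau:]) for tau in range(1, n)])
post = np.exp(log_post - log_post.max())
post /= post.sum()

print("posterior mode of tau:", np.argmax(post) + 1)
```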
Abstract:
The available wind power is stochastic and requires appropriate tools in the OPF model for economic and reliable power system operation. This paper presents an OPF formulation that accounts for the intermittency of wind power. A Weibull distribution is adopted to model the stochastic wind speed and the resulting power distribution. The reserve requirement is evaluated based on the wind distribution and the risk of under- or over-estimation of the wind power. In addition, the Wind Energy Conversion System (WECS) is represented by Doubly Fed Induction Generator (DFIG) based wind farms. The reactive power capability of DFIG-based wind farms is also analyzed. The study is performed on the IEEE 30-bus system with the wind farm located at different buses and with different wind profiles. The reactive power capacity that must be installed in the wind farm to maintain a satisfactory voltage profile under various wind flow scenarios is also determined.
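As a rough illustration of the Weibull-based treatment of wind power, the sketch below samples wind speeds from a Weibull distribution and maps them through a simplified turbine power curve to obtain a power distribution and under-/over-estimation probabilities. The shape and scale parameters, the cut-in, rated and cut-out speeds, and the scheduled output are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Weibull parameters for hourly wind speed (m/s).
shape, scale = 2.0, 8.0
speeds = scale * rng.weibull(shape, size=100_000)

# Simplified turbine power curve (assumed cut-in 3 m/s, rated 12 m/s, cut-out 25 m/s,
# rated power 2 MW, cubic ramp between cut-in and rated speed).
cut_in, rated, cut_out, p_rated = 3.0, 12.0, 25.0, 2.0

def power(v):
    p = np.zeros_like(v)
    ramp = (v >= cut_in) & (v < rated)
    p[ramp] = p_rated * (v[ramp] ** 3 - cut_in ** 3) / (rated ** 3 - cut_in ** 3)
    p[(v >= rated) & (v <= cut_out)] = p_rated
    return p

p = power(speeds)
print(f"expected output: {p.mean():.2f} MW")
# Probabilities of the scheduled output (assumed 1 MW) under- or over-estimating
# the actually available wind power.
print(f"P(under-estimation): {np.mean(p > 1.0):.2f}")
print(f"P(over-estimation):  {np.mean(p < 1.0):.2f}")
```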
Abstract:
Context: Anti-Müllerian hormone (AMH) concentration reflects ovarian aging and is argued to be a useful predictor of age at menopause (AMP). It is hypothesized that AMH falling below a critical threshold corresponds to follicle depletion, which results in menopause. With this threshold, theoretical predictions of AMP can be made. Comparisons of such predictions with observed AMP from population studies support the role for AMH as a forecaster of menopause. Objective: The objective of the study was to investigate whether previous relationships between AMH and AMP are valid using a much larger data set. Setting: AMH was measured in 27 563 women attending fertility clinics. Study Design: From these data a model of age-related AMH change was constructed using a robust regression analysis. Data on AMP from subfertile women were obtained from the population-based Prospect-European Prospective Investigation into Cancer and Nutrition (Prospect-EPIC) cohort (n = 2249). By constructing a probability distribution of age at which AMH falls below a critical threshold and fitting this to Prospect-EPIC menopausal age data using maximum likelihood, such a threshold was estimated. Main Outcome: The main outcome was conformity between observed and predicted AMP. Results: To get a distribution of AMH-predicted AMP that fit the Prospect-EPIC data, we found the critical AMH threshold should vary among women in such a way that women with low age-specific AMH would have lower thresholds, whereas women with high age-specific AMH would have higher thresholds (mean 0.075 ng/mL; interquartile range 0.038–0.15 ng/mL). Such a varying AMH threshold for menopause is a novel and biologically plausible finding. AMH became undetectable (<0.2 ng/mL) approximately 5 years before the occurrence of menopause, in line with a previous report. Conclusions: The conformity of the observed and predicted distributions of AMP supports the hypothesis that declining population averages of AMH are associated with menopause, making AMH an excellent candidate biomarker for AMP prediction. Further research will help establish the accuracy of AMH levels to predict AMP within individuals.
Abstract:
A long query provides more useful hints for searching relevant documents, but it is also likely to introduce noise that hurts retrieval performance. To mitigate this adverse effect, it is important to reduce noisy terms and to introduce and boost additional relevant terms. This paper presents a comprehensive framework, called the Aspect Hidden Markov Model (AHMM), which integrates query reduction and expansion for retrieval with long queries. It optimizes the probability distribution of query terms by utilizing intra-query term dependencies as well as the relationships between query terms and words observed in relevance feedback documents. Empirical evaluation on three large-scale TREC collections demonstrates that our approach, which is automatic, achieves salient improvements over various strong baselines, and also reaches performance comparable to a state-of-the-art method based on the user’s interactive query term reduction and expansion.
Abstract:
In this paper we present a truncated differential analysis of reduced-round LBlock by computing the differential distribution of every nibble of the state. The LLR statistical test is used as a tool to apply the distinguishing and key-recovery attacks. To build the distinguisher, all possible differences are traced through the cipher and the truncated differential probability distribution is determined for every output nibble. We concatenate additional rounds to the beginning and end of the truncated differential distinguisher to apply the key-recovery attack. By exploiting properties of the key schedule, we obtain a large overlap of key bits used in the beginning and final rounds. This allows us to significantly increase the differential probabilities and hence reduce the attack complexity. We validate the analysis by implementing the attack on LBlock reduced to 12 rounds. Finally, we apply single-key and related-key attacks on 18- and 21-round LBlock, respectively.
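The LLR distinguisher underlying such attacks can be sketched as follows: given the truncated differential probability distribution of an output nibble under the cipher hypothesis and the uniform distribution expected from a random permutation, the log-likelihood ratio of the observed differences indicates which hypothesis fits better. The biased distribution and sample size below are invented for illustration and are unrelated to LBlock's actual differential probabilities.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative probability distribution of a 4-bit output difference under the
# cipher hypothesis (slightly biased), versus uniform for a random permutation.
p_cipher = np.full(16, 1 / 16)
p_cipher[0] += 0.02
p_cipher[5] -= 0.02
p_random = np.full(16, 1 / 16)

# Observed output-nibble differences (here simulated from the cipher distribution).
samples = rng.choice(16, size=50_000, p=p_cipher)

# Log-likelihood ratio statistic: positive values favour the cipher hypothesis.
counts = np.bincount(samples, minlength=16)
llr = np.sum(counts * (np.log(p_cipher) - np.log(p_random)))

print("LLR =", round(llr, 2), "-> cipher" if llr > 0 else "-> random")
```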
Abstract:
Standard Monte Carlo (sMC) simulation models have been widely used in AEC industry research to address system uncertainties. Although the benefits of probabilistic simulation analyses over deterministic methods are well documented, the sMC simulation technique is quite sensitive to the probability distributions of the input variables. This phenomenon becomes highly pronounced when the region of interest within the joint probability distribution (a function of the input variables) is small. In such cases, the standard Monte Carlo approach is often impractical from a computational standpoint. In this paper, a comparative analysis of standard Monte Carlo simulation to Markov Chain Monte Carlo with subset simulation (MCMC/ss) is presented. The MCMC/ss technique constitutes a more complex simulation method (relative to sMC), wherein a structured sampling algorithm is employed in place of completely randomized sampling. Consequently, gains in computational efficiency can be made. The two simulation methods are compared via theoretical case studies.
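The computational difficulty that motivates moving beyond standard Monte Carlo can be shown with a short sMC sketch for a small failure probability; the limit-state function and input distributions are invented for illustration and are not the paper's case studies, and the subset-simulation counterpart is omitted for brevity. The point is that the estimator's coefficient of variation grows rapidly as the failure region shrinks.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative limit-state function: failure when total cost exceeds a threshold.
def failure(x):
    # x columns: two lognormal cost components (assumed input distributions).
    return x.sum(axis=1) > 60.0

n = 1_000_000
x = rng.lognormal(mean=[2.5, 2.8], sigma=[0.3, 0.4], size=(n, 2))
hits = failure(x)
p_f = hits.mean()

# Coefficient of variation of the sMC estimator grows as p_f shrinks,
# which is why rare failure regions demand very large sample sizes.
cov = np.sqrt((1 - p_f) / (n * p_f)) if p_f > 0 else np.inf

print(f"estimated P(failure) = {p_f:.2e}, estimator c.o.v. = {cov:.2f}")
```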
Abstract:
We consider the problem of combining opinions from different experts in an explicitly model-based way to construct a valid subjective prior in a Bayesian statistical approach. We propose a generic approach by considering a hierarchical model accounting for various sources of variation as well as accounting for potential dependence between experts. We apply this approach to two problems. The first problem deals with a food risk assessment problem involving modelling dose-response for Listeria monocytogenes contamination of mice. Two hierarchical levels of variation are considered (between and within experts) with a complex mathematical situation due to the use of an indirect probit regression. The second concerns the time taken by PhD students to submit their thesis in a particular school. It illustrates a complex situation where three hierarchical levels of variation are modelled but with a simpler underlying probability distribution (log-Normal).
Abstract:
A probabilistic method is proposed to evaluate the voltage quality of grid-connected photovoltaic (PV) power systems. The random behavior of solar irradiation is described in statistical terms and the resulting probability distribution of voltage fluctuation is then derived. The reactive power capabilities of the PV generators are then analyzed and their operation under constant power factor mode is examined. It is shown that, by fully utilizing the reactive power capability of the PV generators, network voltage quality can be greatly enhanced.
Abstract:
Conceptual combination performs a fundamental role in creating the broad range of compound phrases utilised in everyday language. While the systematicity and productivity of language provide a strong argument in favour of assuming compositionality, this very assumption is still regularly questioned in both cognitive science and philosophy. This article provides a novel probabilistic framework for assessing whether the semantics of conceptual combinations are compositional, and so can be considered as a function of the semantics of the constituent concepts, or not. Rather than adjudicating between different grades of compositionality, the framework presented here contributes formal methods for determining a clear dividing line between compositional and non-compositional semantics. Compositionality is equated with a joint probability distribution modelling how the constituent concepts in the combination are interpreted. Marginal selectivity is emphasised as a pivotal probabilistic constraint for the application of the Bell/CH and CHSH systems of inequalities (referred to collectively as Bell-type). Non-compositionality is then equated with either a failure of marginal selectivity, or, in the presence of marginal selectivity, with a violation of Bell-type inequalities. In both non-compositional scenarios, the conceptual combination cannot be modelled using a joint probability distribution with variables corresponding to the interpretation of the individual concepts. The framework is demonstrated by applying it to an empirical scenario of twenty-four non-lexicalised conceptual combinations.
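A sketch of the CHSH portion of such a test is given below, assuming the experiment yields four pairwise joint distributions over binary (+1/-1) interpretations, one for each pairing of the unprimed and primed contexts (written A, A2, B, B2 here). The probabilities are illustrative only; the example happens to violate the |S| <= 2 bound while respecting marginal selectivity.

```python
import numpy as np

# Hypothetical joint distributions P(a, b) over binary interpretations (+1/-1)
# for the four context pairs (A,B), (A,B'), (A',B), (A',B'). Rows/columns are
# ordered (+1, -1); the numbers are illustrative only.
joints = {
    ("A", "B"):   np.array([[0.45, 0.05], [0.05, 0.45]]),
    ("A", "B2"):  np.array([[0.40, 0.10], [0.10, 0.40]]),
    ("A2", "B"):  np.array([[0.40, 0.10], [0.10, 0.40]]),
    ("A2", "B2"): np.array([[0.10, 0.40], [0.40, 0.10]]),
}

def correlation(p):
    """E[a*b] for a 2x2 joint distribution over outcomes (+1, -1)."""
    signs = np.array([[1, -1], [-1, 1]])
    return float((signs * p).sum())

s = (correlation(joints[("A", "B")]) + correlation(joints[("A", "B2")])
     + correlation(joints[("A2", "B")]) - correlation(joints[("A2", "B2")]))

print(f"CHSH value S = {s:.2f}")
print("violates |S| <= 2 (non-compositional signature)" if abs(s) > 2
      else "within the CHSH bound")
```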
Abstract:
State-of-the-art image-set matching techniques typically implicitly model each image-set with a Gaussian distribution. Here, we propose to go beyond these representations and model image-sets as probability distribution functions (PDFs) using kernel density estimators. To compare and match image-sets, we exploit Csiszár f-divergences, which bear strong connections to the geodesic distance defined on the space of PDFs, i.e., the statistical manifold. Furthermore, we introduce valid positive definite kernels on the statistical manifold, which let us make use of more powerful classification schemes to match image-sets. Finally, we introduce a supervised dimensionality reduction technique that learns a latent space where f-divergences reflect the class labels of the data. Our experiments on diverse problems, such as video-based face recognition and dynamic texture classification, evidence the benefits of our approach over the state-of-the-art image-set matching methods.
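A rough sketch of this PDF-based representation is given below: each image set is modelled with a Gaussian kernel density estimator, and a Monte Carlo estimate of the Kullback-Leibler divergence (one member of the Csiszár f-divergence family) serves as the set-to-set distance. The feature vectors are random stand-ins for image descriptors; the paper's kernels on the statistical manifold and its supervised dimensionality reduction are not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

# Stand-in feature vectors for two image sets (e.g. per-frame descriptors).
set_a = rng.normal(loc=0.0, scale=1.0, size=(200, 5))
set_b = rng.normal(loc=0.8, scale=1.2, size=(180, 5))

# Model each image set as a PDF via kernel density estimation
# (gaussian_kde expects data of shape (dims, n_samples)).
kde_a = gaussian_kde(set_a.T)
kde_b = gaussian_kde(set_b.T)

def kl_divergence(p, q, samples):
    """Monte Carlo estimate of KL(p || q) using samples drawn from p."""
    return float(np.mean(np.log(p(samples) + 1e-12) - np.log(q(samples) + 1e-12)))

samples_a = kde_a.resample(2000)   # shape (dims, n_samples)
dist = kl_divergence(kde_a, kde_b, samples_a)
print(f"KL(set_a || set_b) = {dist:.3f}")
```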
Abstract:
In this paper, the placement of sectionalizers, as well as a cross-connection, is optimally determined so that an objective function is minimized. The objective function consists of two main parts: the switch cost and the reliability cost. The switch cost is composed of the cost of the sectionalizers and the cross-connection, and the reliability cost is assumed to be proportional to a reliability index, SAIDI. To model the allocation of sectionalizers and the cross-connection realistically, the cost related to each element is treated as discrete. Because of the binary variables representing the presence of sectionalizers, the problem is highly discrete; therefore the risk of converging to a local minimum is high and a heuristic-based optimization method is needed. A Discrete Particle Swarm Optimization (DPSO) algorithm is employed in this paper to deal with this discrete problem. Finally, a test distribution system is used to validate the proposed method.
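A compact sketch of a binary-coded particle swarm of the kind used for such placement problems is given below. The cost function is a toy stand-in (switch cost plus a mock reliability penalty), not the SAIDI-based objective of the paper, and the swarm parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

n_particles, n_positions, n_iters = 20, 12, 100   # candidate sectionalizer locations
switch_cost, penalty_weight = 1.0, 5.0

def cost(x):
    """Toy objective: switch cost plus a mock reliability penalty that drops
    as more (well spread) switches are installed."""
    installed = x.sum()
    spread = np.abs(np.diff(np.flatnonzero(x))).sum() if installed > 1 else 0
    return switch_cost * installed + penalty_weight / (1 + installed + 0.1 * spread)

# Binary PSO: positions are 0/1 vectors, velocities pass through a sigmoid.
pos = rng.integers(0, 2, size=(n_particles, n_positions))
vel = rng.normal(0, 1, size=(n_particles, n_positions))
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (rng.random(pos.shape) < 1 / (1 + np.exp(-vel))).astype(int)
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best placement:", gbest, "cost:", round(pbest_cost.min(), 3))
```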