967 results for Bayesian hypothesis testing


Relevance:

80.00%

Publisher:

Abstract:

We consider an LTE network where a secondary user acts as a relay, transmitting data to the primary user using a decode-and-forward mechanism, transparent to the base station (eNodeB). Clearly, the relay can decode symbols more reliably if the employed precoder matrix indicators (PMIs) are known. However, in the closed-loop spatial multiplexing (CLSM) transmit mode, this information is not always embedded in the downlink signal, leading to a need for effective methods to determine the PMI. In this thesis, we consider 2x2 and 4x4 MIMO downlink channels corresponding to CLSM and formulate two techniques to estimate the PMI at the relay using a hypothesis testing framework. We evaluate their performance via simulations for various ITU channel models over a range of SNRs and for different channel quality indicators (CQIs). We compare them to the case where the true PMI is known at the relay and show that the performance of the proposed schemes is within 2 dB at 10% block error rate (BLER) in almost all scenarios. Furthermore, the techniques add minimal computational overhead to the existing receiver structure. Finally, we also identify scenarios in which using the proposed precoder detection algorithms in conjunction with the cooperative decode-and-forward relaying mechanism benefits the primary user equipment (PUE) and improves its BLER performance. We therefore conclude that the proposed algorithms, as well as the cooperative relaying mechanism at the CMR, can be gainfully employed in a variety of real-life scenarios in LTE networks.
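
As an illustration only, a minimal sketch of how a hypothesis test over the PMI codebook could look at the relay: for each candidate precoder, the receiver reconstructs the signal it would have observed under that hypothesis and picks the candidate with the smallest residual. The function name, codebook representation, and channel/symbol estimates below are assumptions for illustration, not the thesis's actual detection algorithms.

```python
import numpy as np

def detect_pmi(y, H, codebook, x_hat):
    """Hypothesis test over candidate precoders: pick the PMI whose precoder
    best explains the received signal.

    y        : (n_rx, T) received baseband samples
    H        : (n_rx, n_tx) estimated MIMO channel
    codebook : list of (n_tx, n_layers) candidate precoder matrices (one per PMI)
    x_hat    : (n_layers, T) detected layer symbols
    """
    residuals = []
    for W in codebook:
        y_hyp = H @ W @ x_hat                       # signal expected under this PMI hypothesis
        residuals.append(np.linalg.norm(y - y_hyp) ** 2)
    return int(np.argmin(residuals))                # index of the most likely PMI
```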

Relevance:

80.00%

Publisher:

Abstract:

Background and Purpose - Stroke is of global importance and causes an increasing amount of human suffering and economic burden, but its management is far from optimal. The unsuccessful outcome of several research programs highlights the need for reliable data on which to plan future clinical trials. The Virtual International Stroke Trials Archive aims to aid the planning of clinical trials by collating and providing access to a rich resource of patient data for exploratory analyses. Methods - Data were contributed by the principal investigators of numerous trials from the past 16 years. These data have been centrally collated and are available for anonymized analysis and hypothesis testing. Results - Currently, the Virtual International Stroke Trials Archive contains 21 trials. There are data on 15 000 patients with both ischemic and hemorrhagic stroke. Ages range between 18 and 103 years, with a mean age of 69 ± 12 years. Outcome measures include the Barthel Index, Scandinavian Stroke Scale, National Institutes of Health Stroke Scale, Orgogozo Scale, and modified Rankin Scale. Medical history and onset-to-treatment time are readily available, and computed tomography lesion data are available for selected trials. Conclusions - This resource has the potential to influence clinical trial design and implementation through data analyses that inform planning. (Stroke. 2007;38:1905-1910.)

Relevance:

80.00%

Publisher:

Abstract:

Sequences of timestamped events are currently being generated across nearly every domain of data analytics, from e-commerce web logging to electronic health records used by doctors and medical researchers. Every day, this data type is reviewed by humans who apply statistical tests, hoping to learn everything they can about how these processes work, why they break, and how they can be improved upon. To further uncover how these processes work the way they do, researchers often compare two groups, or cohorts, of event sequences to find the differences and similarities between outcomes and processes. With temporal event sequence data, this task is complex because of the variety of ways single events and sequences of events can differ between the two cohorts of records: the structure of the event sequences (e.g., event order, co-occurring events, or frequencies of events), the attributes of the events and records (e.g., gender of a patient), or metrics about the timestamps themselves (e.g., duration of an event). Running statistical tests to cover all these cases and determining which results are significant becomes cumbersome. Current visual analytics tools for comparing groups of event sequences emphasize a purely statistical or purely visual approach for comparison. Visual analytics tools leverage humans' ability to easily see patterns and anomalies that they were not expecting, but are limited by uncertainty in their findings. Statistical tools emphasize finding significant differences in the data, but often require researchers to have a concrete question in mind and do not facilitate more general exploration of the data. Combining visual analytics tools with statistical methods leverages the benefits of both approaches for quicker and easier insight discovery. Integrating statistics into a visualization tool presents many challenges on the frontend (e.g., displaying the results of many different metrics concisely) and in the backend (e.g., scalability challenges with running various metrics on multi-dimensional data at once). I begin by exploring the problem of comparing cohorts of event sequences and understanding the questions that analysts commonly ask in this task. From there, I demonstrate that combining automated statistics with an interactive user interface amplifies the benefits of both types of tools, thereby enabling analysts to conduct quicker and easier data exploration, hypothesis generation, and insight discovery. The direct contributions of this dissertation are: (1) a taxonomy of metrics for comparing cohorts of temporal event sequences, (2) a statistical framework for exploratory data analysis with a method I refer to as high-volume hypothesis testing (HVHT), (3) a family of visualizations and guidelines for interaction techniques that are useful for understanding and parsing the results, and (4) a user study, five long-term case studies, and five short-term case studies that demonstrate the utility and impact of these methods in various domains: four in the medical domain, one in web log analysis, two in education, and one each in social networks, sports analytics, and security. My dissertation contributes an understanding of how cohorts of temporal event sequences are commonly compared and the difficulties associated with applying and parsing the results of these metrics. It also contributes a set of visualizations, algorithms, and design guidelines for balancing automated statistics with user-driven analysis to guide users to significant, distinguishing features between cohorts.
This work opens avenues for future research in comparing two or more groups of temporal event sequences, opening traditional machine learning and data mining techniques to user interaction, and extending the principles found in this dissertation to data types beyond temporal event sequences.
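
The high-volume idea can be sketched, under stated assumptions, as running one test per metric shared by the two cohorts and then correcting the resulting p-values for multiple comparisons. The choice of the Mann-Whitney U test and the Benjamini-Hochberg procedure here is illustrative; the dissertation's actual HVHT framework and metric taxonomy are richer than this.

```python
import numpy as np
from scipy import stats

def high_volume_tests(cohort_a, cohort_b, alpha=0.05):
    """Run one test per metric shared by two cohorts and flag significant ones.

    cohort_a, cohort_b : dict mapping metric name -> 1-D array of per-record values.
    Returns a list of (metric, raw p-value, significant after Benjamini-Hochberg).
    """
    metrics = sorted(set(cohort_a) & set(cohort_b))
    pvals = np.array([
        stats.mannwhitneyu(cohort_a[m], cohort_b[m], alternative="two-sided").pvalue
        for m in metrics
    ])
    # Benjamini-Hochberg step-up procedure to control the false discovery rate.
    order = np.argsort(pvals)
    n = len(pvals)
    thresholds = alpha * np.arange(1, n + 1) / n
    passed = pvals[order] <= thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    significant = np.zeros(n, dtype=bool)
    significant[order[:k]] = True
    return list(zip(metrics, pvals, significant))
```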

Relevance:

80.00%

Publisher:

Abstract:

The Scharff technique is used for eliciting information from human sources. At the very core of the technique is the "illusion of knowing it all" tactic, which aims to inflate a source's perception of how much knowledge an interviewer holds about the event to be discussed. For the current study, we mapped the effects of two different ways of introducing this particular tactic: a traditional implementation, where the interviewer explicitly states that s/he already knows most of the important information (the traditional condition), and a new implementation, where the interviewer simply starts to present the information that s/he holds (the just start condition). The two versions were compared in two separate experiments. In Experiment 1 (N = 60), we measured the participants' perceptions of the interviewer's knowledge, and in Experiment 2 (N = 60), the participants' perceptions of the interviewer's knowledge gaps. We found that participants in the just start condition (a) believed the interviewer had more knowledge (Experiment 1), and (b) searched less actively for gaps in the interviewer's knowledge (Experiment 2), compared to the traditional condition. We discuss the current findings, and how sources test and perceive the knowledge their interviewer possesses, within a framework of social hypothesis testing.

Relevance:

80.00%

Publisher:

Abstract:

Objective: To compare the efficacy and safety of 4 mg of ondansetron vs. 4 mg of nalbuphine for the treatment of neuraxial morphine-induced pruritus, in patients at the "Dr. José Eleuterio González" University Hospital from September 2012 to August 2013. Material and methods: A controlled, prospective, randomized study of 28 patients (14 per group) receiving neuraxial morphine analgesia was conducted; it was registered and approved by the Ethics Committee of the institution, and patients agreed to participate in the study under informed consent. The results were segmented and contrasted (according to drug) by hypothesis testing; the association was determined by the chi-squared (χ²) test with a 95% confidence interval (CI). Results: Pruritus was effectively resolved in both groups, and no significant difference was found in the remaining variables. An increase in the visual analogue scale (VAS) score was observed at 6 and 12 hours in the ondansetron group, which was statistically significant (p ≤ 0.05); however, both groups had a VAS score of less than 3. Conclusions: When comparing the efficacy and safety of ondansetron 4 mg vs. nalbuphine 4 mg for the treatment of neuraxial morphine-induced pruritus, the only significant difference found was the mean VAS score at 6 and 12 hours, favoring the ondansetron group. However, both groups scored less than 3 on the VAS. We therefore consider both treatments effective and safe for the treatment of pruritus caused by neuraxial morphine.
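
For readers unfamiliar with the χ² association test mentioned above, a minimal sketch follows; the 2x2 counts are invented for illustration and are not the study's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = drug (ondansetron, nalbuphine),
# columns = pruritus resolved (yes, no). Counts are illustrative only.
table = [[12, 2],
         [11, 3]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # p > 0.05 -> no significant association
```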

Relevance:

80.00%

Publisher:

Abstract:

Given the current paradigm in which public entities are constantly subject to measures for the rationalization of resources, the Military Public University Higher Education Establishments are no exception, and investing in efficient and effective management becomes ever more pressing. In this context, cost accounting plays an increasingly dominant role in the analysis and control of costs by activity. This Applied Research Project addresses the theme "A Formação de Oficiais de Administração: Oportunidades, Especificidades e Contingências na senda de uma Carreira Profissional" (The Training of Administration Officers: Opportunities, Specificities and Contingencies on the Path of a Professional Career). The general objective of this work is therefore to calculate the training cost of the Administration students of the three branches of the Armed Forces and, on that basis, to identify the most economically advantageous model. For the cost calculation, among the numerous costing systems available, we relied on the Homogeneous Sections (Cost Centres) method. The work is structured in two parts, the first theoretical and the second practical. The methodology adopted followed the research method of the social sciences: starting from a central research question, which gives rise to derived questions, answers are sought through the formulation, exploration, and testing of hypotheses. According to the results of this study, the training model used at the Academia Militar is the most economically advantageous. Given the evident scientific affinities between the courses, a reconfiguration of their scientific structure, durations, and training profiles would therefore be pertinent. A reorganization that eliminates redundancies and promotes the sharing of resources will thus enable efficiency gains in management and, consequently, cost reductions.
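
A minimal sketch of the Homogeneous Sections (Cost Centres) calculation referred to above, with hypothetical centre names, costs, and allocation bases: each centre's total cost is turned into a unit cost per unit of activity and then charged to the training programme in proportion to the activity it consumes.

```python
# Illustrative cost-centre allocation (Homogeneous Sections method).
# Centre names, costs, and activity volumes are hypothetical.
cost_centres = {
    # name: (total cost in EUR, total activity volume, activity consumed by the course)
    "teaching":       (900_000, 12_000, 1_400),   # hours taught
    "accommodation":  (300_000, 60_000, 6_500),   # bed-days
    "administration": (150_000,    500,    35),   # students supported
}

course_cost = 0.0
for name, (total, volume, consumed) in cost_centres.items():
    unit_cost = total / volume            # cost per unit of activity in the centre
    course_cost += unit_cost * consumed   # share charged to the training programme

print(f"Estimated training cost: {course_cost:,.2f} EUR")
```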

Relevance:

80.00%

Publisher:

Abstract:

Students may need explicit training in informal statistical reasoning in order to design experiments or use formal statistical tests effectively. By using scientific scandals and media misinterpretation, we can explore the need for good experimental design in an informal way. This article describes the use of a paper reviewing the measles-mumps-rubella (MMR) vaccine and autism controversy in the UK to illustrate a number of threshold concepts underlying good study design and interpretation of scientific evidence. These include the necessity of sufficient sample size, representative and random sampling, appropriate controls, and inferring causation.

Relevance:

80.00%

Publisher:

Abstract:

Regular physical activity plays a fundamental role in the prevention and control of musculoskeletal disorders within the occupational activity of physical education teachers. Objective: The purpose of the study was to determine the relationship between physical activity levels and the prevalence of musculoskeletal disorders in physical education teachers from 42 public educational institutions in Bogotá, Colombia. Methods: This was a cross-sectional study of 262 physical education teachers from 42 public educational institutions in Bogotá, Colombia. The Nordic Musculoskeletal Questionnaire and the short-form IPAQ questionnaire (to identify physical activity levels) were self-administered. Measures of central tendency and dispersion were obtained for quantitative variables, and relative frequencies for qualitative variables. Lifetime prevalence and the percentage of job relocation were calculated for teachers who had suffered different types of pain. To estimate the relationship between pain and the teachers' sociodemographic variables, a simple binary logistic regression model was used. Analyses were performed in SPSS version 20; a p-value < 0.05 was considered significant for hypothesis testing, with a 95% confidence level for parameter estimation. Results: The response rate was 83.9%; 262 records were considered valid, 22.5% were female, and most physical education teachers were between 25 and 35 years old (43.9%). Regarding musculoskeletal disorders, 16.9% of the teachers reported having ever experienced discomfort in the neck, 17.2% in the shoulder, 27.9% in the back, 7.93% in the arm, and 8.4% in the hand. Teachers with higher levels of physical activity reported a lower prevalence of musculoskeletal disorders (16.9% for the neck and 27.7% for the dorsal/lumbar region) than subjects with low levels of physical activity. The presence of disorders was associated with years of experience (OR 3.39, 95% CI 1.41-7.65), female gender (OR 4.94, 95% CI 1.94-12.59), age (OR 5.06, 95% CI 1.25-20.59), and teaching more than 400 students during the working day (OR 4.50, 95% CI 1.74-11.62). Conclusions: In physical education teachers, no statistically significant relationship was found between physical activity levels and self-reported musculoskeletal disorders.
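
As a hedged illustration of the simple binary logistic regression used to obtain odds ratios like those above; the data frame, column names, and values are hypothetical and randomly generated, not the study's records.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: one row per teacher, binary pain outcome and a single
# predictor (one covariate per model, as in a simple/univariate approach).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "pain":   rng.binomial(1, 0.30, 262),
    "female": rng.binomial(1, 0.225, 262),
})

X = sm.add_constant(df[["female"]])
model = sm.Logit(df["pain"], X).fit(disp=0)
odds_ratios = np.exp(model.params)      # OR per predictor
conf_int = np.exp(model.conf_int())     # 95% CI for the OR
print(odds_ratios, conf_int, sep="\n")
```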

Relevance:

80.00%

Publisher:

Abstract:

This thesis project studies the agent identity privacy problem in the scalar linear quadratic Gaussian (LQG) control system. To address it, privacy models and privacy measures have to be established first. The problem has two distinguishing characteristics: the quantity to be protected is a binary hypothesis (Agent A or Agent B), and any inference about it depends on a trajectory of correlated data rather than a single observation. I propose privacy models and corresponding privacy measures that take these two characteristics into account. An eavesdropper is assumed to perform a hypothesis test on the agent identity based on the intercepted environment state sequence. The privacy risk is measured by the Kullback-Leibler divergence between the probability distributions of the state sequences under the two hypotheses. By taking into account both the accumulated control reward and the privacy risk, an optimization problem over the policy of Agent B is formulated. The optimal deterministic privacy-preserving LQG policy of Agent B is a linear mapping, and a sufficient condition is given to guarantee that this policy is time-invariant in the asymptotic regime. Injecting an independent Gaussian random variable into the policy cannot improve the performance of Agent B. Numerical experiments confirm the theoretical results and illustrate the reward-privacy trade-off. Based on the privacy model and the LQG control model, I formulate the mathematical problems for the agent identity privacy problem in LQG, addressing the two design objectives: maximizing the control reward and minimizing the privacy risk. I provide a theoretical analysis of the LQG control policy in the agent identity privacy problem and of the trade-off between the control reward and the privacy risk. Finally, the theoretical results are validated by numerical experiments, from which several observations and insights are drawn and discussed in the last chapter.
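
A small sketch of the privacy measure described above, assuming the intercepted state trajectory is stacked into one vector that is multivariate Gaussian under each hypothesis; the means and covariances are placeholders rather than those induced by the thesis's LQG dynamics.

```python
import numpy as np

def gaussian_kl(mu0, S0, mu1, S1):
    """KL divergence D( N(mu0, S0) || N(mu1, S1) ) between two Gaussians.

    With the state trajectory stacked into one vector, this plays the role of
    the privacy risk: the divergence between the trajectory distributions
    under hypothesis A and hypothesis B.
    """
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    diff = np.asarray(mu1) - np.asarray(mu0)
    return 0.5 * (
        np.trace(S1_inv @ S0)
        + diff @ S1_inv @ diff
        - k
        + np.log(np.linalg.det(S1) / np.linalg.det(S0))
    )
```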

Relevance:

50.00%

Publisher:

Abstract:

In this work we aim to propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies that avoid the bias carried by aggregated analyses. Starting from the collection of disease counts and the calculation of expected disease counts by means of reference population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the low population underlying the area or the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classical and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method that focuses on multiple-testing control, without however abandoning the preliminary-study perspective that an analysis of SMR indicators is expected to keep. We implement control of the FDR, a quantity widely used to address multiple-comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed to be independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a hierarchical, fully Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping models, referring in particular to the Besag, York and Mollié (1991) model, often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating a single test (i.e. a test in a single area) by means of all the observations in the map under study, rather than just the single observation. This improves the power of the test in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model aims to estimate the FDR by means of the MCMC-estimated posterior probabilities p̂_i of the null hypothesis (absence of risk) in each area. An estimate of the expected FDR conditional on the data (denoted FDR-hat) can be calculated for any set of areas declared high-risk (where the null hypothesis is rejected) by averaging the corresponding p̂_i values. FDR-hat can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that FDR-hat does not exceed a prefixed value; we call these FDR-hat based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation of the FDR produces a loss of specificity. Moreover, our model has the interesting feature of still being able to provide an estimate of the relative risk values, as in the Besag, York and Mollié (1991) model. A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the area sizes, the number of areas where the null hypothesis is true, and the risk level in the latter areas. In summarizing the simulation results we always consider FDR estimation in sets constituted by all areas whose p̂_i is lower than a threshold t. We show graphs of FDR-hat and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between FDR-hat and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against FDR-hat we can check the sensitivity and specificity of the corresponding FDR-hat based decision rules. To investigate the over-smoothing of the relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained from both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aim. In such scenarios we have good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of FDR-hat based decision rules is generally low, but their specificity is high; in such scenarios the use of a selection rule based on FDR-hat = 0.05 or FDR-hat = 0.10 can be suggested. In cases where the number of true alternative hypotheses (the number of true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and decision rules based on FDR-hat = 0.15 gain power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of an FDR-hat = 0.05 based decision rule. In such scenarios, decision rules based on FDR-hat = 0.05 or, even worse, FDR-hat = 0.10 cannot be suggested, because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both relative risk estimation and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
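
A compact sketch of the FDR-hat based selection rule described above, assuming the MCMC output already provides, for each area, a posterior probability p̂_i of the null hypothesis (absence of risk): areas are ranked by p̂_i and the largest rejection set whose average p̂_i stays at or below the chosen threshold is flagged as high-risk. The function and variable names are illustrative.

```python
import numpy as np

def fdr_based_selection(p_null, threshold=0.05):
    """Select high-risk areas so that the estimated FDR stays <= threshold.

    p_null : array of posterior probabilities of the null hypothesis
             (absence of risk) per area, e.g. estimated from MCMC output.
    Returns the indices of the selected (rejected) areas and the estimated
    FDR, i.e. the mean posterior null probability over the selected set.
    """
    p_null = np.asarray(p_null, dtype=float)
    order = np.argsort(p_null)                               # strongest rejections first
    running_fdr = np.cumsum(p_null[order]) / np.arange(1, p_null.size + 1)
    keep = np.nonzero(running_fdr <= threshold)[0]
    if keep.size == 0:
        return np.array([], dtype=int), None                 # no area can be declared high-risk
    k = keep.max() + 1                                        # largest set meeting the constraint
    return order[:k], running_fdr[k - 1]
```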

Relevance:

40.00%

Publisher:

Abstract:

Fuzzy Bayesian tests were performed to evaluate whether the mother's seroprevalence and the children's seroconversion to measles vaccine could be considered "high" or "low". The results of the tests were aggregated into a fuzzy rule-based model structure, which would allow an expert to influence the model results. The linguistic model was developed considering four input variables. As the model output, we obtain the recommended age-specific vaccine coverage. The inputs of the fuzzy rules are fuzzy sets and the outputs are constant functions, yielding the simplest (zero-order) Takagi-Sugeno-Kang model. This fuzzy approach is compared to a classical one, in which the classical Bayes test was performed. Although the fuzzy and classical performances were similar, the fuzzy approach was more detailed and revealed important differences. In addition to taking into account subjective information in the form of fuzzy hypotheses, it can be intuitively grasped by the decision maker. Finally, we show that the Bayesian test of fuzzy hypotheses is an interesting approach from the theoretical point of view, in the sense that it combines two complementary areas of investigation, normally seen as competitive.
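
A minimal zero-order Takagi-Sugeno-Kang sketch of the kind of rule base described above, using only two of the four input variables and invented membership functions and rule constants; it is not the paper's fitted model.

```python
import numpy as np

def tsk_recommended_coverage(mother_seroprev, child_seroconv):
    """Zero-order TSK: fuzzy antecedents, constant consequents, weighted average.

    Both inputs are assumed to lie in [0, 1]; the output is an illustrative
    recommended vaccine coverage.
    """
    low  = lambda x: 1.0 - x    # membership degree of "low"
    high = lambda x: x          # membership degree of "high"

    # Rules: (firing strength, constant consequent = recommended coverage).
    rules = [
        (min(low(mother_seroprev),  low(child_seroconv)),  0.99),
        (min(low(mother_seroprev),  high(child_seroconv)), 0.90),
        (min(high(mother_seroprev), low(child_seroconv)),  0.95),
        (min(high(mother_seroprev), high(child_seroconv)), 0.80),
    ]
    w = np.array([r[0] for r in rules])
    z = np.array([r[1] for r in rules])
    return float(np.dot(w, z) / w.sum())   # weighted average of rule outputs

print(tsk_recommended_coverage(0.4, 0.7))  # example call with illustrative inputs
```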

Relevance:

40.00%

Publisher:

Abstract:

Recent developments in evolutionary physiology have seen many of the long-held assumptions within comparative physiology receive rigorous experimental analysis. Studies of the adaptive significance of physiological acclimation exemplify this new evolutionary approach. The beneficial acclimation hypothesis (BAH) was proposed to describe the assumption that all acclimation changes enhance the physiological performance or fitness of an individual organism. To the surprise of most physiologists, all empirical examinations of the BAH have rejected its generality. However, we suggest that these examinations are neither direct nor complete tests of the functional benefit of acclimation. We consider them to be elegant analyses of the adaptive significance of developmental plasticity, a type of phenotypic plasticity that is very different from the traditional concept of acclimation that is used by comparative physiologists.