988 results for clustered binary data
Abstract:
We compare Bayesian methodology utilizing the freeware BUGS (Bayesian Inference Using Gibbs Sampling) with the traditional structural equation modelling approach based on another freeware package, Mx. Dichotomous and ordinal (three-category) twin data were simulated according to different additive genetic and common environment models for phenotypic variation. Practical issues are discussed in using Gibbs sampling as implemented by BUGS to fit subject-specific Bayesian generalized linear models, where the components of variation may be estimated directly. The simulation study (based on 2000 twin pairs) indicated that there is a consistent advantage in using the Bayesian method to detect a correct model under certain specifications of additive genetic and common environmental effects. For binary data, both methods had difficulty detecting the correct model when the additive genetic effect was low (between 10 and 20%) or moderate (between 20 and 40%). Furthermore, neither method could adequately detect a correct model that included a modest common environmental effect (20%), even when the additive genetic effect was large (50%). Power was significantly improved with ordinal data for most scenarios, except for the case of low heritability under a true ACE model. We illustrate and compare both methods using data from 1239 twin pairs over the age of 50 years who were registered with the Australian National Health and Medical Research Council Twin Registry (ATR) and presented symptoms associated with osteoarthritis occurring in joints of the hand.
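As a rough illustration of the simulation design sketched in this abstract, the following Python snippet generates dichotomous twin data under an ACE liability-threshold model; the function name, the fixed prevalence and the variance shares are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_twin_pairs(n_pairs, a2, c2, mz, prevalence=0.3):
    """Simulate binary twin data under an ACE liability-threshold model.

    a2: additive genetic variance share, c2: common-environment share,
    e2 = 1 - a2 - c2 is the unique-environment share.  MZ twins share all
    additive genetic effects, DZ twins share half of them on average.
    """
    e2 = 1.0 - a2 - c2
    shared_a = rng.normal(size=n_pairs)        # shared part of the additive genetic effect
    own_a = rng.normal(size=(n_pairs, 2))      # twin-specific genetic part (DZ pairs only)
    c = rng.normal(size=n_pairs)               # common environment, fully shared
    e = rng.normal(size=(n_pairs, 2))          # unique environment
    if mz:
        a = np.stack([shared_a, shared_a], axis=1)
    else:
        a = np.sqrt(0.5) * shared_a[:, None] + np.sqrt(0.5) * own_a
    liability = np.sqrt(a2) * a + np.sqrt(c2) * c[:, None] + np.sqrt(e2) * e
    threshold = np.quantile(liability, 1.0 - prevalence)   # illustrative fixed prevalence
    return (liability > threshold).astype(int)

# e.g. 2000 pairs with a 50% additive genetic and a 20% common environmental effect
mz_data = simulate_twin_pairs(2000, a2=0.5, c2=0.2, mz=True)
dz_data = simulate_twin_pairs(2000, a2=0.5, c2=0.2, mz=False)
print(mz_data.mean(axis=0), dz_data.mean(axis=0))
```

Because MZ pairs share the whole additive genetic component while DZ pairs share only half of it on average, the difference between MZ and DZ concordance is what allows the A and C variance components to be separated.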
Abstract:
It is well known that dichotomizing continuous data decreases statistical power when the goal is to test for a statistical association between two variables. Modern researchers, however, focus not only on statistical significance but also on an estimate of the "effect size" (i.e., the strength of association between the variables) to judge whether a significant association is also clinically relevant. In this article, we are interested in the consequences of dichotomizing continuous data for the value of an effect size in some classical settings. It turns out that the conclusions differ depending on whether a correlation or an odds ratio is used to summarize the strength of association between the variables: whereas the value of a correlation is typically decreased by a factor of pi/2 after each dichotomization, the value of an odds ratio is at the same time raised to the power 2. From a descriptive statistical point of view, it is thus not clear whether dichotomizing continuous data leads to a decrease or to an increase in the effect size, as illustrated using a data set on the relationship between motor and intellectual functions in children and adolescents.
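The interplay the abstract describes is easy to reproduce numerically. The sketch below (illustrative sample size, latent correlation and median cut points, not the article's data set) dichotomizes a bivariate normal sample and reports the correlation before and after the splits, together with the odds ratio of the resulting 2x2 table.

```python
import numpy as np

rng = np.random.default_rng(1)

# Bivariate normal sample with a known latent correlation
rho = 0.5
n = 100_000
cov = [[1.0, rho], [rho, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# Median splits turn both variables into binary indicators
bx, by = (x > np.median(x)).astype(int), (y > np.median(y)).astype(int)

r_cont = np.corrcoef(x, y)[0, 1]    # correlation before dichotomization
r_bin = np.corrcoef(bx, by)[0, 1]   # phi coefficient after both splits

# 2x2 table and odds ratio after dichotomization
n11 = np.sum((bx == 1) & (by == 1))
n10 = np.sum((bx == 1) & (by == 0))
n01 = np.sum((bx == 0) & (by == 1))
n00 = np.sum((bx == 0) & (by == 0))
odds_ratio = (n11 * n00) / (n10 * n01)

print(f"correlation: {r_cont:.3f} -> {r_bin:.3f}")
print(f"odds ratio after double median split: {odds_ratio:.2f}")
```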
Abstract:
The research in this Master's thesis project concerns Big Data transfer over parallel data links. My main objective was to assist the Saint-Petersburg National Research University ITMO research team in accomplishing this project and in applying Green IT methods to the data transfer system. The goal of the team is to transfer Big Data over parallel data links using an SDN OpenFlow approach. My task as a team member was to compare existing data transfer applications in order to determine which one achieves the highest transfer speed under which conditions, and to explain the reasons. In the context of this thesis, five utilities were compared: Fast Data Transfer (FDT), BBCP, BBFTP, GridFTP, and FTS3. A number of scripts were developed that generate random binary data (incompressible, so that the comparison between utilities is fair), execute the utilities with specified parameters, create log files with results and system parameters, and plot graphs to compare the results. Transferring such enormous amounts of data can take a long time, hence the need to reduce energy consumption and make the transfers greener. As part of the Green IT approach, our team used a cloud computing infrastructure, OpenStack; it is more efficient to allocate a specific amount of hardware resources to test different scenarios than to use all the resources of our testbed. Testing our implementation on the OpenStack infrastructure ensures that the virtual channel carries no other traffic, so the highest possible throughput can be achieved. With the final results, we are in a position to identify which utilities produce the fastest data transfers in different scenarios with specific TCP parameters, so that they can be used on real network data links.
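A minimal sketch of the first of those scripts, generating incompressible random test files, might look as follows in Python; the path and file size are placeholders.

```python
import os

def make_random_file(path, size_bytes, chunk=1 << 20):
    """Write `size_bytes` of random data to `path`.

    Random bytes are effectively incompressible, so utilities that compress
    on the fly gain no unfair advantage in the transfer benchmark.
    """
    remaining = size_bytes
    with open(path, "wb") as f:
        while remaining > 0:
            n = min(chunk, remaining)
            f.write(os.urandom(n))   # random payload, written in 1 MiB chunks
            remaining -= n

# e.g. a 100 MiB test file (illustrative size and path)
make_random_file("/tmp/testfile_100M.bin", 100 * (1 << 20))
```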
Abstract:
A role for sequential test procedures is emerging in genetic and epidemiological studies using banked biological resources. This stems from the methodology's potential for improved use of information relative to comparable fixed sample designs. Studies in which cost, time and ethics feature prominently are particularly suited to a sequential approach. In this paper sequential procedures for matched case–control studies with binary data will be investigated and assessed. Design issues such as sample size evaluation and error rates are identified and addressed. The methodology is illustrated and evaluated using both real and simulated data sets.
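As a hedged illustration of a sequential procedure for matched binary data (not necessarily the specific design studied in the paper), the sketch below runs a Wald sequential probability ratio test on the discordant pairs of a matched case–control study; the error rates, the alternative p1 and the example sequence are arbitrary.

```python
import math

def sprt_matched_pairs(discordant_outcomes, p0=0.5, p1=0.7, alpha=0.05, beta=0.2):
    """Wald SPRT on the sequence of discordant matched pairs.

    Each element of `discordant_outcomes` is 1 if the case (rather than the
    control) is exposed in that discordant pair, 0 otherwise.  Under the null
    of no association the conditional probability of a '1' is p0 = 0.5; p1 is
    the alternative the design is powered against.
    """
    upper = math.log((1 - beta) / alpha)   # cross upward -> reject H0
    lower = math.log(beta / (1 - alpha))   # cross downward -> accept H0
    llr = 0.0
    for i, y in enumerate(discordant_outcomes, start=1):
        llr += y * math.log(p1 / p0) + (1 - y) * math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "reject H0", i
        if llr <= lower:
            return "accept H0", i
    return "continue sampling", len(discordant_outcomes)

print(sprt_matched_pairs([1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1]))
```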
Abstract:
This work develops a new methodology for discriminating between models for interval-censored data, based on bootstrap residual simulation and on the deviance difference of one model relative to another, following Hinde (1992). This sort of data generally produces a large number of tied observations, in which case survival time can be regarded as discrete. The Cox proportional hazards model for grouped data (Prentice & Gloeckler, 1978) and the logistic model (Lawless, 1982) can then be fitted as generalized linear models. Whitehead (1989) treated censoring as an indicator variable with a binomial distribution and fitted the Cox proportional hazards model using the complementary log-log link function; likewise, a logistic model can be fitted using the logit link. The proposed methodology is an alternative to the score tests developed by Colosimo et al. (2000), in which these models are obtained for discrete binary data as particular cases of the asymmetric Aranda-Ordaz family; those tests are therefore built on the link functions used to generate the fit. The example that motivated this study was a dataset from an experiment on a flax cultivar planted on four substrata susceptible to the pathogen Fusarium oxysporum. The response variable, the time until blighting, was observed at intervals over 52 days. The results were compared with the model fits and the AIC values.
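A minimal sketch of the two grouped-data fits mentioned above, using statsmodels binomial GLMs with complementary log-log and logit links and comparing their deviances, is given below; the person-interval data frame is simulated for illustration, and the bootstrap residual step of the proposed methodology is not shown.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Illustrative person-interval data: one row per subject per interval at risk,
# `event` = 1 if blighting occurred in that interval (grouped survival layout).
rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "interval": rng.integers(1, 6, size=n),     # discrete time interval
    "substratum": rng.integers(1, 5, size=n),   # experimental factor
})
df["event"] = rng.binomial(1, 0.15 + 0.05 * df["substratum"])

# Grouped Cox model: binomial GLM with complementary log-log link
cox_grouped = smf.glm(
    "event ~ C(interval) + C(substratum)", data=df,
    family=sm.families.Binomial(link=sm.families.links.CLogLog()),
).fit()

# Discrete logistic model: same linear predictor with a logit link
logistic = smf.glm(
    "event ~ C(interval) + C(substratum)", data=df,
    family=sm.families.Binomial(link=sm.families.links.Logit()),
).fit()

# Deviance (and AIC) comparison between the two link functions
print(cox_grouped.deviance, logistic.deviance)
print(cox_grouped.aic, logistic.aic)
```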
Abstract:
An interim analysis is usually applied in late phase II or phase III trials to look for convincing evidence of a significant treatment difference, which may lead to terminating the trial earlier than originally planned. This can save patient resources and shorten drug development and approval time; ethics and economics are further reasons to stop a trial early. In clinical trials of eyes, ears, knees, arms, kidneys, lungs, and other clustered treatments, data may include distribution-free random variables with matched and unmatched subjects in one study. It is important to properly include both types of subjects in the interim and final analyses so that maximum efficiency of statistical and clinical inference can be obtained at different stages of the trial. So far, no publication has applied a statistical method for distribution-free data with matched and unmatched subjects in the interim analysis of clinical trials. In this simulation study, the hybrid statistic was used to estimate the empirical powers and empirical type I errors among simulated datasets with different sample sizes, effect sizes, correlation coefficients for matched pairs, and data distributions, in the interim and final analyses with four different group sequential methods. Empirical powers and empirical type I errors were also compared to those estimated by the meta-analysis t-test on the same simulated datasets. Results from this simulation study show that, compared to the meta-analysis t-test commonly used for data with normally distributed observations, the hybrid statistic has greater power for data observed from normally, log-normally, and multinomially distributed random variables with matched and unmatched subjects and with outliers. Power rose with increasing sample size, effect size, and correlation coefficient for the matched pairs. In addition, lower type I errors were observed with the hybrid statistic, which indicates that this test is also conservative for data with outliers in the interim analysis of clinical trials.
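The hybrid statistic itself is specific to the paper, but the general mechanism of estimating empirical power and empirical type I error by simulation can be sketched as follows; here an ordinary two-sample t-test stands in for the test statistic, and the sample sizes, effect sizes and distributions are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def empirical_power(test, n_sims=2000, n=50, effect=0.5, alpha=0.05):
    """Monte Carlo estimate of power (or of type I error when effect=0).

    `test` takes two samples and returns a p-value; here an ordinary
    two-sample t-test stands in for the study's test statistic.
    """
    rejections = 0
    for _ in range(n_sims):
        x = rng.normal(0.0, 1.0, size=n)
        y = rng.normal(effect, 1.0, size=n)
        if test(x, y) < alpha:
            rejections += 1
    return rejections / n_sims

t_test = lambda x, y: stats.ttest_ind(x, y).pvalue
print("type I error:", empirical_power(t_test, effect=0.0))
print("power:       ", empirical_power(t_test, effect=0.5))
```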
Abstract:
The Semantic Binary Data Model (SBM) is a viable alternative to the now-dominant relational data model. SBM would be especially advantageous for applications dealing with complex interrelated networks of objects, provided that a robust and efficient implementation can be achieved. This dissertation presents an implementation design method for SBM, algorithms, and their analytical and empirical evaluation. Our method allows building a robust and flexible database engine with a wider applicability range and improved performance. Extensions to SBM are introduced, and an implementation of these extensions is proposed that allows the database engine to efficiently support applications with a predefined set of queries. A new Record data structure is proposed. Trade-offs of employing Fact, Record, and Bitmap data structures for storing information in a semantic database are analyzed. A clustering ID distribution algorithm and an efficient algorithm for object ID encoding are proposed. Mapping to an XML data model is analyzed, and a new XML-based XSDL language facilitating interoperability of the system is defined. Solutions to issues associated with making the database engine multi-platform are presented. An improvement to the atomic update algorithm suitable for certain scenarios of database recovery is proposed. Specific guidelines are devised for implementing a robust and well-performing database engine based on the extended Semantic Data Model.
Abstract:
This collection contains measurements of abundance and diversity of different groups of aboveground invertebrates sampled on the plots of the different sub-experiments at the field site of a large grassland biodiversity experiment (the Jena Experiment; see further details below). In the main experiment, 82 grassland plots of 20 x 20 m were established from a pool of 60 species belonging to four functional groups (grasses, legumes, tall and small herbs). In May 2002, varying numbers of plant species from this species pool were sown into the plots to create a gradient of plant species richness (1, 2, 4, 8, 16 and 60 species) and functional richness (1, 2, 3, 4 functional groups). Plots were maintained by bi-annual weeding and mowing. The following series of datasets are contained in this collection: 1. Measurements of ant abundance (number of individuals attracted to baits) and ant occurrence (binary data) in the Main Experiment in 2006 and 2013. Ants were sampled using two types of baited traps receiving ~10 g of tuna or ~10 g of honey/sucrose. After 30 min, the occurrence (presence = 1 / absence = 0) and abundance (number) of ants at the two types of baits were recorded and pooled per plot.
Abstract:
The viscosity of ionic liquids (ILs) has been modeled as a function of temperature and at atmospheric pressure using a new method based on the UNIFAC–VISCO method. This model extends the calculations previously reported by our group (see Zhao et al. J. Chem. Eng. Data 2016, 61, 2160–2169), which used 154 experimental viscosity data points of 25 ionic liquids for regression of a set of binary interaction parameters and ion Vogel–Fulcher–Tammann (VFT) parameters. Discrepancies in the experimental data of the same IL affect the quality of the correlation and thus the development of the predictive method. In this work, mathematical gnostics was used to analyze the experimental data from different sources and recommend one set of reliable data for each IL. These recommended data (819 data points in total) for 70 ILs were correlated using this model to obtain an extended set of binary interaction parameters and ion VFT parameters, with a regression accuracy of 1.4%. In addition, 966 experimental viscosity data points for 11 binary mixtures of ILs were collected from the literature to establish this model. All the binary data consist of 128 training data points used for the optimization of binary interaction parameters and 838 test data points used for comparison with the evaluated values. The relative average absolute deviation (RAAD) for training and test is 2.9% and 3.9%, respectively.
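Two of the quantitative ingredients mentioned above are easy to state in code: the Vogel–Fulcher–Tammann temperature dependence and the relative average absolute deviation used to score the correlation. The sketch below uses made-up parameter values and does not include the UNIFAC–VISCO mixture terms or binary interaction parameters.

```python
import numpy as np

def vft_viscosity(T, eta0, B, T0):
    """Vogel-Fulcher-Tammann form: eta = eta0 * exp(B / (T - T0))."""
    return eta0 * np.exp(B / (T - T0))

def raad(calculated, experimental):
    """Relative average absolute deviation, in percent."""
    calculated, experimental = np.asarray(calculated), np.asarray(experimental)
    return 100.0 * np.mean(np.abs(calculated - experimental) / experimental)

# Illustrative check with made-up VFT parameters and noisy "experimental" points
T = np.array([298.15, 308.15, 318.15, 328.15])
eta_calc = vft_viscosity(T, eta0=0.18, B=800.0, T0=165.0)
eta_exp = eta_calc * np.array([1.02, 0.97, 1.01, 1.03])
print(f"RAAD = {raad(eta_calc, eta_exp):.1f}%")
```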
Abstract:
A secure communication system based on the error-feedback synchronization of the electronic model of the particle-in-a-box system is proposed. This circuit allows a robust and simple electronic emulation of the mechanical behavior of the collisions of a particle inside a box, exhibiting rich chaotic behavior. The nonlinearity required to emulate the box walls is implemented in a simple way when compared with other analog electronic chaotic circuits. A master/slave synchronization of two circuits exhibiting rich chaotic behavior demonstrates the potential of this system for secure communication. In this system, the binary data stream modulates the bifurcation parameter of the particle-in-a-box electronic circuit in the transmitter. In the receiver circuit, this parameter is estimated using Pecora-Carroll synchronization and error-feedback synchronization. The performance of the demodulation process is verified through the eye pattern technique applied to the recovered bit stream. During the demodulation process, error-feedback synchronization showed better performance than Pecora-Carroll synchronization. The application of the particle-in-a-box electronic circuit in a secure communication system is demonstrated.
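The particle-in-a-box circuit itself is specific to the paper, but the error-feedback (master/slave) synchronization scheme it relies on can be illustrated with any chaotic system. The sketch below uses the Lorenz system as a stand-in and couples the slave to the transmitted x-component only; the feedback gain, step size and initial states are arbitrary.

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, steps, gain = 0.001, 20000, 20.0
master = np.array([1.0, 1.0, 1.0])
slave = np.array([8.0, -3.0, 15.0])    # deliberately mismatched initial state
errors = []

for _ in range(steps):
    # Euler step for the master (transmitter)
    master = master + dt * lorenz(master)
    # The slave (receiver) sees only the transmitted x-component and corrects
    # itself with an error-feedback term on that coordinate.
    correction = np.array([gain * (master[0] - slave[0]), 0.0, 0.0])
    slave = slave + dt * (lorenz(slave) + correction)
    errors.append(np.abs(master - slave).sum())

print(f"initial error {errors[0]:.2f}, final error {errors[-1]:.2e}")
```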
Abstract:
In recent years, the teaching and learning process has undergone significant changes thanks to the emergence of the Internet. New tools to support teaching have appeared, among which remote laboratories stand out. Nowadays, many educational institutions offer remote laboratories in their courses, allowing teachers and students to carry out real experiments over the Internet. These are implemented with different architectures and infrastructures, supported by several remotely accessible laboratory modules (e.g., measurement instruments). However, their adoption in education is still limited, due to: i) the lack of resources and technical skills of educational institutions to develop them, ii) the difficulty of sharing laboratory modules across different infrastructures, and iii) the limited ability to reconfigure the infrastructures with those modules. To overcome these limitations, a prototype was conceived and developed within a doctoral project [1], with an architecture based on the IEEE 1451.0 standard and on FPGA technology. Besides guaranteeing standardized development of, and access to, a remote laboratory, this prototype also promotes the sharing of laboratory modules across different infrastructures. That work exploited the reconfiguration capability of FPGAs to embed several modules in the laboratory infrastructure, all described in files using hardware description languages structured according to the IEEE 1451.0 standard. Defining these modules requires creating binary data structures (Transducer Electronic Data Sheets, TEDSs), as well as other files that enable their interconnection with the laboratory infrastructure. Creating these files is, however, quite complex, since it requires several calculations and conversions. Given that complexity, this dissertation describes the development of a Web application for reading and writing the TEDSs. In addition to a study of remote laboratories, the IEEE 1451.0 standard is described, with particular attention to its architecture and to the structure of the different TEDSs. To put the developed application into context, a brief presentation is also given of a prototype of a reconfigurable remote laboratory whose reconfiguration is supported by this application. Finally, the verification of the Web application is described, in order to draw conclusions about its contribution to simplifying that reconfiguration.
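IEEE 1451.0 TEDSs are built from type–length–value (TLV) fields, which is part of what makes writing them by hand error-prone. A minimal, much-simplified sketch of TLV packing and unpacking is shown below; the field type codes and values are invented for illustration, and the real TEDS length prefix and checksum are omitted.

```python
import struct

def pack_tlv(fields):
    """Pack (type, value-bytes) pairs into a TLV byte string.

    One-byte type, one-byte length, then the value -- the general layout of
    IEEE 1451.0 TEDS fields (the real TEDS also carries an overall length
    prefix and a checksum, which this sketch omits).
    """
    out = bytearray()
    for ftype, value in fields:
        out += struct.pack("BB", ftype, len(value)) + value
    return bytes(out)

def unpack_tlv(blob):
    """Inverse of pack_tlv: yield (type, value-bytes) pairs."""
    i = 0
    while i < len(blob):
        ftype, length = struct.unpack_from("BB", blob, i)
        yield ftype, blob[i + 2:i + 2 + length]
        i += 2 + length

# Illustrative fields: an identifier and a big-endian float parameter
blob = pack_tlv([(4, b"\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a"),
                 (10, struct.pack(">f", 3.3))])
print(list(unpack_tlv(blob)))
```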
Abstract:
The objective of this work was to characterize morphologically and molecularly the genetic diversity of cassava accessions, collected from different regions in Brazil. A descriptive analysis was made for 12 morphological traits in 419 accessions. Data was transformed into binary data for cluster analysis and analysis of molecular variance. A higher proportion of white or cream (71%) root cortex color was found, while flesh colors were predominantly white (49%) and cream (42%). Four accession groups were classified by the cluster analysis, but they were not grouped according to their origin, which indicates that diversity is not structured in space. The variation was greater within regions (95.6%). Sixty genotypes were also evaluated using 14 polymorphic microsatellite markers. Molecular results corroborated the morphological ones, showing the same random distribution of genotypes, with no grouping according to origin. Diversity indices were high for each region, and a greater diversity was found within regions, with: a mean number of alleles per locus of 3.530; observed and expected heterozygosity of 0.499 and 0.642, respectively; and Shannon index of 1.03. The absence of spatial structure among cassava genotypes according to their origins shows the anthropic influence in the distribution and movement of germplasm, both within and among regions.
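A common way to perform the kind of cluster analysis described above on binarized morphological descriptors is hierarchical clustering on Jaccard distances; the sketch below illustrates that route with random stand-in data (the actual descriptor coding, distance measure and clustering method of the study may differ).

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(4)

# Illustrative stand-in for the transformed data: 419 accessions scored as
# presence/absence (0/1) over the states of the morphological descriptors.
binary_matrix = rng.integers(0, 2, size=(419, 30)).astype(bool)

# Jaccard distance is a common choice for binary descriptor data
distances = pdist(binary_matrix, metric="jaccard")
tree = linkage(distances, method="average")           # UPGMA clustering
groups = fcluster(tree, t=4, criterion="maxclust")    # cut into four groups

print(np.bincount(groups)[1:])   # number of accessions per group
```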
Approximation of the posterior distribution of a hierarchical Gamma-Poisson mixed-effects model
Abstract:
The method we present for modelling count (Poisson) data is based on the procedure called Poisson Regression Interactive Multilevel Modeling (PRIMM) developed by Christiansen and Morris (1997). In the PRIMM method, the Poisson regression contains only fixed effects, whereas our model also includes random effects. As in Christiansen and Morris (1997), the model studied performs inference based on analytical approximations of the posterior distributions of the parameters, thereby avoiding computational methods such as Markov chain Monte Carlo (MCMC). The approximations rely on Laplace's method and on the asymptotic theory underlying the normal approximation of posterior distributions. The Poisson regression parameters are estimated by maximizing their posterior density via the Newton-Raphson algorithm. This study also derives the first two posterior moments of the Poisson parameters, each of whose posterior distributions is approximately a gamma distribution. Applications to two example data sets confirmed that this model can, to some extent, be regarded as a generalization of the PRIMM method. Indeed, the model applies both to non-stratified and to stratified Poisson data, and in the latter case it includes not only fixed effects but also random effects associated with the strata. Finally, the model is applied to data on several types of adverse events observed among participants of a clinical trial of a quadrivalent vaccine against measles, mumps, rubella and varicella. The Poisson regression includes the fixed effect corresponding to the treatment/control variable, as well as random effects associated with the biological systems of the human body to which the adverse events under consideration are attributed.
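A stripped-down version of the Newton-Raphson step described above, finding the posterior mode of Poisson regression coefficients under a vague normal prior (from which a Laplace/normal approximation follows), can be written as below; the data, prior precision and model (fixed effects only, no random effects) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative Poisson-regression data: intercept + one treatment indicator
n = 500
X = np.column_stack([np.ones(n), rng.integers(0, 2, size=n)])
beta_true = np.array([0.5, -0.3])
y = rng.poisson(np.exp(X @ beta_true))

prior_prec = 1.0 / 100.0   # weak N(0, 100) prior on each coefficient

# Newton-Raphson on the log-posterior of the Poisson regression coefficients
beta = np.zeros(X.shape[1])
for _ in range(25):
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu) - prior_prec * beta                    # score of the log-posterior
    hess = -(X * mu[:, None]).T @ X - prior_prec * np.eye(len(beta))
    step = np.linalg.solve(hess, grad)
    beta = beta - step
    if np.max(np.abs(step)) < 1e-10:
        break

# The Laplace/normal approximation uses the mode and the negative inverse Hessian
posterior_cov = np.linalg.inv(-hess)
print(beta, np.sqrt(np.diag(posterior_cov)))
```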
Abstract:
A statistical technique for fault analysis in industrial printing is reported. The method specifically deals with binary data, for which the results of the production process fall into two categories, rejected or accepted. The method is referred to as logistic regression, and is capable of predicting future fault occurrences by the analysis of current measurements from machine parts sensors. Individual analysis of each type of fault can determine which parts of the plant have a significant influence on the occurrence of such faults; it is also possible to infer which measurable process parameters have no significant influence on the generation of these faults. Information derived from the analysis can be helpful in the operator's interpretation of the current state of the plant. Appropriate actions may then be taken to prevent potential faults from occurring. The algorithm is being implemented as part of an applied self-learning expert system.
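A minimal sketch of this kind of model, a logistic regression relating machine-part sensor readings to the accept/reject outcome, is shown below with simulated stand-in data; the sensor signals, coefficients and sample size are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)

# Illustrative stand-in for sensor readings from machine parts (columns) and
# the accept/reject outcome of each printed item (1 = rejected/faulty).
X = rng.normal(size=(2000, 6))
fault_prob = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] - 0.8 * X[:, 3] - 2.0)))
y = rng.binomial(1, fault_prob)

model = LogisticRegression().fit(X, y)

# Coefficients indicate which sensor signals influence the fault rate;
# near-zero coefficients flag parameters with little influence on faults.
print(model.coef_.round(2))

# Predicted fault probability for the latest sensor readings
print(model.predict_proba(X[:1])[:, 1])
```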