958 results for multivariate binary data
Abstract:
Background: Banana cultivars are mostly derived from hybridization between wild diploid subspecies of Musa acuminata (A genome) and M. balbisiana (B genome), and they exhibit various levels of ploidy and genomic constitution. The Embrapa ex situ Musa collection contains over 220 accessions, of which only a few have been genetically characterized. Knowledge of the genetic relationships and diversity between modern cultivars and wild relatives would assist in conservation and breeding strategies. Our objectives were to determine the genomic constitution of all accessions based on polymorphisms in the Internal Transcribed Spacer (ITS) regions, to establish their ploidy by flow cytometry, and to investigate the population structure of the collection using Simple Sequence Repeat (SSR) loci as co-dominant markers with the Structure software, an analysis not previously performed in Musa. Results: Of the 221 accessions analyzed by flow cytometry, the correct ploidy was confirmed or established for 212 (95.9%), whereas digestion of the ITS region confirmed the genomic constitution of 209 (94.6%). Neighbor-joining clustering analysis derived from SSR binary data allowed the detection of two major groups, essentially distinguished by the presence or absence of the B genome, while subgroups were formed according to genomic composition and commercial classification. The co-dominant nature of SSRs was exploited to analyze the structure of the population with a Bayesian approach, detecting 21 subpopulations. Most of the subpopulations were in agreement with the clustering analysis. Conclusions: The data generated by flow cytometry, ITS, and SSR supported the hypothesis that homeologous recombination occurs between the A and B genomes, leading to discrepancies in the number of sets or portions from each parental genome. These phenomena have been largely disregarded in the evolution of banana, as the "single-step domestication" hypothesis had long predominated.
These findings will have an impact on future breeding approaches. Structure analysis enabled the efficient detection of the ancestry of tetraploid hybrids recently developed by breeding programs, and of some triploids. However, for the main commercial subgroups, Structure appeared less efficient at detecting ancestry in the diploid groups, possibly due to sampling restrictions. The ability to infer membership coefficients among accessions, and thus to correct for the effects of genetic structure, opens possibilities for use in marker-assisted selection by association mapping.
Abstract:
The aim of this work is to carry out an applicative, comparative, and exhaustive study of several entropy-based indicators of independence and correlation. We considered indicators backed by a wide and consolidated literature, such as mutual information, joint entropy, and relative entropy (the Kullback-Leibler distance), as well as indicators introduced more recently, such as the Granger, Maasoumi, and Racine entropy (also called Sρ), or used in more restricted domains, such as the Pincus approximate entropy (ApEn). We studied the behaviour of these indicators by applying them to binary series designed to simulate a wide range of situations, in order to characterize the indicators' limits and capabilities and to identify, case by case, the most useful and trustworthy ones. Our aim was not only to study whether the indicators could discriminate between dependence and independence (for mutual information and the Granger, Maasoumi, and Racine entropy this had already been demonstrated in the literature), but also to verify whether and how they could provide information about the structure, complexity, and disorder of the series to which they were applied. Special attention was paid to the Pincus approximate entropy, which its author claims can quantify the randomness, regularity, and complexity of a series. By means of focused and extensive experiments, we furthermore tried to clarify the meaning of ApEn applied to a pair of different series, in which situation the indicator is known in the literature as cross-ApEn. The meaning of cross-ApEn and the interpretation of its results are often neither simple nor univocal, and the matter is scarcely examined in the literature, so users can easily be led to misleading conclusions, especially when the indicator is employed, as unfortunately often happens, in an uncritical manner.
To address some gaps and limits of cross-ApEn that emerged clearly during the experimentation, we developed, and applied to the cases already considered, a further indicator that we call the "correspondence index". The correspondence index integrates directly into the cross-ApEn computational algorithm and is able to provide, at least for binary data, accurate information about the intensity and direction of any correlation, even a nonlinear one, between two series, while also detecting any condition of independence between them.
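A minimal sketch (not from the paper) of one of the consolidated indicators discussed above, mutual information, applied to binary series; the plug-in estimator below illustrates the kind of dependence/independence discrimination the study examines. The series are synthetic stand-ins.

```python
import numpy as np

# Plug-in estimate of mutual information (in bits) between two binary
# series: sum over the 2x2 joint cells of p(a,b) * log2(p(a,b)/(p(a)p(b))).
def mutual_information(x, y):
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log2(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10_000)
y_indep = rng.integers(0, 2, 10_000)   # independent of x
y_copy = x.copy()                      # perfectly dependent on x

print(mutual_information(x, y_indep))  # near 0 for independent series
print(mutual_information(x, y_copy))   # near 1 bit for identical series
```

Note that mutual information is symmetric, so unlike the correspondence index described above it cannot by itself indicate the direction of an association.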
Abstract:
In epidemiological work, outcomes are frequently non-normal, sample sizes may be large, and effects are often small. To relate health outcomes to geographic risk factors, fast and powerful methods for fitting spatial models, particularly for non-normal data, are required. We focus on binary outcomes, with the risk surface a smooth function of space. We compare penalized likelihood models, including the penalized quasi-likelihood (PQL) approach, and Bayesian models based on fit, speed, and ease of implementation. A Bayesian model using a spectral basis representation of the spatial surface provides the best tradeoff of sensitivity and specificity in simulations, detecting real spatial features while limiting overfitting and being more efficient computationally than other Bayesian approaches. One of the contributions of this work is further development of this underused representation. The spectral basis model outperforms the penalized likelihood methods, which are prone to overfitting, but is slower to fit and not as easily implemented. Conclusions based on a real dataset of cancer cases in Taiwan are similar albeit less conclusive with respect to comparing the approaches. The success of the spectral basis with binary data and similar results with count data suggest that it may be generally useful in spatial models and more complicated hierarchical models.
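A toy sketch of the spectral-basis idea (illustrative choices throughout, not the authors' implementation): the smooth spatial risk surface is expanded in a low-frequency cosine basis, and the binary outcomes are fit by ridge-penalized logistic regression via plain gradient ascent.

```python
import numpy as np

def cosine_basis(coords, k=3):
    """Tensor-product cosine basis (k*k columns) at n x 2 coords in [0,1]^2."""
    cols = [np.cos(np.pi * i * coords[:, 0]) * np.cos(np.pi * j * coords[:, 1])
            for i in range(k) for j in range(k)]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
coords = rng.uniform(size=(2000, 2))
# Smooth "true" risk surface, chosen to lie in the span of the basis.
true_logit = 1.5 * np.cos(np.pi * coords[:, 0]) - np.cos(np.pi * coords[:, 1])
y = rng.uniform(size=2000) < 1 / (1 + np.exp(-true_logit))

B = cosine_basis(coords)
beta = np.zeros(B.shape[1])
for _ in range(2000):                   # gradient ascent on the ridge-
    p = 1 / (1 + np.exp(-B @ beta))     # penalized Bernoulli log-likelihood
    beta += 0.5 * ((B.T @ (y - p)) / len(y) - 1e-3 * beta)

p_hat = 1 / (1 + np.exp(-B @ beta))
accuracy = np.mean((p_hat > 0.5) == y)
print(accuracy)                         # well above the 0.5 chance level
```

Because the basis is fixed and low-dimensional, fitting stays cheap as the number of locations grows; the paper's Bayesian version instead places priors on the basis coefficients, which is what limits overfitting.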
Abstract:
We introduce a diagnostic test for the mixing distribution in a generalised linear mixed model. The test is based on the difference between the marginal maximum likelihood and conditional maximum likelihood estimates of a subset of the fixed effects in the model. We derive the asymptotic variance of this difference, and propose a test statistic that has a limiting chi-square distribution under the null hypothesis that the mixing distribution is correctly specified. For the important special case of the logistic regression model with random intercepts, we evaluate via simulation the power of the test in finite samples under several alternative distributional forms for the mixing distribution. We illustrate the method by applying it to data from a clinical trial investigating the effects of hormonal contraceptives in women.
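The statistic has the familiar Hausman form. As a sketch only (the paper derives the exact asymptotic variance of the difference; the classical form below assumes the marginal MLE is efficient under the null), for the selected subset of fixed effects of dimension q:

```latex
T_n \;=\; \bigl(\hat\beta_{\mathrm{marg}} - \hat\beta_{\mathrm{cond}}\bigr)^{\!\top}
          \widehat{V}^{-1}
          \bigl(\hat\beta_{\mathrm{marg}} - \hat\beta_{\mathrm{cond}}\bigr)
\;\xrightarrow{\;d\;}\; \chi^2_q \quad \text{under } H_0,
\qquad
\widehat{V} \;=\; \widehat{\operatorname{Var}}\bigl(\hat\beta_{\mathrm{cond}}\bigr)
               \;-\; \widehat{\operatorname{Var}}\bigl(\hat\beta_{\mathrm{marg}}\bigr).
```

The intuition: under a correctly specified mixing distribution both estimators are consistent, so their difference is asymptotically centered at zero; under misspecification the marginal estimator drifts while the conditional estimator, which conditions the random effects away, does not, so a large T_n signals misspecification.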
Abstract:
Corporate Social Responsibility (CSR) addresses the responsibility of companies for their impacts on society. The concept of strategic CSR is becoming increasingly mainstream in the forest industry, but there is little consensus on the definition and implementation of CSR. The objective of this research is to build knowledge of the characteristics of CSR and to provide insights into the emerging trend of increasing the credibility and legitimacy of CSR through standardization. The study explores how the sustainability managers of European and North American forest companies perceive CSR and the recently released ISO 26000 guidance standard on social responsibility. The conclusions were drawn from an analysis of two data sets: multivariate survey data based on a subset of 30 European and 13 North American responses, and data obtained through in-depth interviews with 10 sustainability managers who volunteered for an hour-long phone discussion about social responsibility practices at their companies. The analysis concluded that there are no major differences in the characteristics of cross-Atlantic CSR. Hence, the results were consistent with previous research suggesting that CSR is a case- and company-specific concept. Regarding the components of CSR, environmental issues and organizational governance were key priorities in both regions. Consumer issues, human rights, and financial issues were among the least addressed categories. The study reveals that perceptions of the ISO 26000 guidance standard vary, both positive and negative. Moreover, sustainability managers of European and North American forest companies are still uncertain about the applicability of the ISO 26000 guidance standard to the forest industry. This study is among the first to provide a preliminary review of the practical implications of the ISO 26000 standard in the forest sector.
The results may be utilized by sustainability managers interested in the best practices on CSR, and also by a variety of forest industrial stakeholders interested in the practical outcomes of the long-lasting CSR debate.
Abstract:
Sensor networks have been an active research area in the past decade due to the variety of their applications. Many studies have addressed the problems underlying the middleware services of sensor networks, such as self-deployment, self-localization, and synchronization. With these middleware services in place, sensor networks have grown into a mature technology used as a detection and surveillance paradigm for many real-world applications. Individual sensors are small, so they can be deployed in areas with limited space and make unobstructed measurements in locations that traditional centralized systems would have trouble reaching. However, sensor networks face a few physical limitations that can prevent sensors from performing at their maximum potential: individual sensors have a limited power supply, the wireless band can become cluttered when multiple sensors try to transmit at the same time, and individual sensors have a limited communication range, so the network may not have a 1-hop communication topology and routing can be a problem in many cases. Carefully designed algorithms can alleviate these physical limitations and allow sensor networks to be utilized to their full potential. Graphical models are an intuitive choice for designing sensor network algorithms. This thesis focuses on a classic application of sensor networks: the detection and tracking of targets. It develops feasible inference techniques for sensor networks using statistical graphical-model inference, binary sensor detection, event isolation, and dynamic clustering. The main strategy is to use only binary data for rough global inferences and then dynamically form small-scale clusters around the target for detailed computations. This framework is then extended to network topology manipulation, so that it can be applied to tracking under different network topology settings.
Finally, the system was tested in both simulated and real-world environments. The simulations were performed on various network topologies, from regularly distributed to randomly distributed networks. The results show that the algorithm performs well in randomly distributed networks and hence requires minimal deployment effort. The experiments were carried out in both corridor and open-space settings. An in-home fall-detection system was simulated under real-world settings: it was set up with 30 Bumblebee radars and 30 ultrasonic sensors driven by TI EZ430-RF2500 boards, scanning a typical 800 sq ft apartment. The Bumblebee radars were calibrated to detect a falling human body, and the two-tier tracking algorithm was used on the ultrasonic sensors to track the location of elderly residents.
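A toy sketch of the two-tier strategy described above (the numbers, geometry, and the refinement step are all illustrative, not the thesis's algorithm): binary detections alone yield a rough global position estimate, and a small dynamic cluster around that estimate is then used for the detailed local computation.

```python
import numpy as np

rng = np.random.default_rng(3)
sensors = rng.uniform(0, 100, size=(200, 2))   # randomly deployed network
target = np.array([40.0, 55.0])

# Tier 1: rough global inference from binary data only -- each sensor
# reports a single detection bit, and their centroid localizes the target.
fires = np.linalg.norm(sensors - target, axis=1) < 20   # 1-bit detections
rough = sensors[fires].mean(axis=0)

# Tier 2: dynamically cluster the sensors near the rough estimate for the
# detailed computation (here just a tighter centroid, as a stand-in).
cluster = sensors[np.linalg.norm(sensors - rough, axis=1) < 10]
refined = cluster.mean(axis=0) if len(cluster) else rough

print(rough, refined)
```

The appeal of the two-tier split is that the expensive computation only ever involves the handful of sensors in the cluster, which conserves both power and bandwidth relative to involving the whole network.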
Abstract:
In numerous intervention studies and education field trials, random assignment to treatment occurs in clusters rather than at the level of observation. This departure from random assignment of individual units may be due to logistics, political feasibility, or ecological validity. Data within the same cluster or grouping are often correlated. Applying traditional regression techniques, which assume independence between observations, to clustered data produces consistent parameter estimates; however, such estimators are often inefficient compared to methods that incorporate the clustered nature of the data into the estimation procedure (Neuhaus 1993). Multilevel models, also known as random-effects or random-components models, can account for the clustering of data by estimating higher-level (group) as well as lower-level (individual) variation. Designing a study in which the unit of observation is nested within higher-level groupings requires determining sample sizes at each level. This study investigates the effect of the design and of various sampling strategies for a 3-level repeated-measures design on the parameter estimates when the outcome variable of interest follows a Poisson distribution. Results of the study suggest that second-order PQL estimation produces the least biased estimates in the 3-level multilevel Poisson model, followed by first-order PQL and then second- and first-order MQL. The MQL estimates of both fixed and random parameters are generally satisfactory when the level-2 and level-3 variation is less than 0.10. However, as the higher-level error variance increases, the MQL estimates become increasingly biased. If the PQL estimation algorithm fails to converge and the higher-level error variance is large, the estimates may be significantly biased; in this case, bias-correction techniques such as bootstrapping should be considered as an alternative procedure.
For larger sample sizes, structures with 20 or more units sampled at the levels with normally distributed random errors produced more stable estimates with less sampling variance than structures with an increased number of level-1 units. For small sample sizes, sampling fewer units at the level with Poisson variation produces less sampling variation; however, this criterion is no longer important when sample sizes are large. Neuhaus J (1993). "Estimation Efficiency and Tests of Covariate Effects with Clustered Binary Data." Biometrics, 49, 989–996.
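A minimal simulation sketch (illustrative parameters, not the dissertation's design) of why the higher-level error variance matters: cluster-level random intercepts inflate the variability of cluster means far beyond what an independence Poisson model implies, which is exactly the structure the multilevel estimators (PQL/MQL) are built to capture.

```python
import numpy as np

rng = np.random.default_rng(42)
n_clusters, n_per = 200, 30
sigma2_u = 0.25                          # higher-level variance (log scale)
u = rng.normal(0.0, np.sqrt(sigma2_u), n_clusters)   # cluster effects
mu = np.exp(1.0 + u)                     # cluster-specific Poisson rates
y = rng.poisson(np.repeat(mu, n_per))    # level-1 counts, cluster-blocked

cluster_means = y.reshape(n_clusters, n_per).mean(axis=1)
var_between = cluster_means.var(ddof=1)
var_if_independent = y.mean() / n_per    # what an iid Poisson model predicts

# The observed between-cluster variance greatly exceeds the iid prediction.
print(var_between, var_if_independent)
```

Ignoring that excess variance is what makes the naive standard errors too small, and raising sigma2_u in this simulation is the regime where the abstract reports MQL estimates becoming increasingly biased.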
Abstract:
BACKGROUND This study evaluated whether risk factors for sternal wound infections vary with the type of surgical procedure in cardiac operations. METHODS This was a university hospital surveillance study of 3,249 consecutive patients (28% women) from 2006 to 2010 (median age, 69 years [interquartile range, 60 to 76]; median additive European System for Cardiac Operative Risk Evaluation score, 5 [interquartile range, 3 to 8]) after (1) isolated coronary artery bypass grafting (CABG), (2) isolated valve repair or replacement, or (3) combined valve procedures and CABG. All other operations were excluded. Univariate and multivariate binary logistic regression were conducted to identify independent predictors for development of sternal wound infections. RESULTS We detected 122 sternal wound infections (3.8%) in 3,249 patients: 74 of 1,857 patients (4.0%) after CABG, 19 of 799 (2.4%) after valve operations, and 29 of 593 (4.9%) after combined procedures. In CABG patients, bilateral internal thoracic artery harvest, procedural duration exceeding 300 minutes, diabetes, obesity, chronic obstructive pulmonary disease, and female sex (model 1) were independent predictors for sternal wound infection. A second model (model 2), using the European System for Cardiac Operative Risk Evaluation, revealed bilateral internal thoracic artery harvest, diabetes, obesity, and the second and third quartiles of the European System for Cardiac Operative Risk Evaluation were independent predictors. In valve patients, model 1 showed only revision for bleeding as an independent predictor for sternal infection, and model 2 yielded both revision for bleeding and diabetes. For combined valve and CABG operations, both regression models demonstrated revision for bleeding and duration of operation exceeding 300 minutes were independent predictors for sternal infection. CONCLUSIONS Risk factors for sternal wound infections after cardiac operations vary with the type of surgical procedure. 
In patients undergoing valve operations or combined operations, procedure-related risk factors (revision for bleeding, duration of operation) independently predict infection. In patients undergoing CABG, not only procedure-related risk factors but also bilateral internal thoracic artery harvest and patient characteristics (diabetes, chronic obstructive pulmonary disease, obesity, female sex) are predictive of sternal wound infection. Preventive interventions may be justified according to the type of operation.
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU" lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is in defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. 
The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data are represented by the standard one-value-per-variable paradigm and are widely employed in a host of clinical models and tools; they are often represented by a number in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to the time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable and constitute the measured observations that are typically available to end users when they review time series data; they are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood of producing a representation of the time series data elements that is able to distinguish between two or more classes of outcomes. The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU," provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time-series-based models are infeasible due to the relatively large number of data elements and the complexity of the preprocessing that must occur before data can be presented to the model.
Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled: "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit" presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% by including the trend analysis. 
In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy compared with the baseline multivariate model, but diminished classification accuracy compared with adding just the trend analysis features (i.e., without the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve performance beyond what was achieved by excluding the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
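As a sketch of the kind of time-series analysis result used above as a latent candidate feature (the manuscripts leave the exact operations to the modeler; this least-squares slope feature and the example series are illustrative):

```python
import numpy as np

def trend_feature(series, window):
    """Least-squares slope of the last `window` samples of `series`.

    A single number summarizing direction and rate of change over the
    look-back window: a derived, latent feature rather than a raw value.
    """
    seg = np.asarray(series[-window:], dtype=float)
    t = np.arange(len(seg))
    return np.polyfit(t, seg, 1)[0]      # slope coefficient

# Hypothetical heart-rate samples showing a deteriorating trend.
hr = [120, 121, 119, 120, 118, 115, 111, 106, 100, 93]
print(trend_feature(hr, 5))              # strongly negative slope
```

A snapshot model sees only the latest value (93), which could be normal in isolation; the slope over the window is what captures the deterioration the abstracts identify as the precursor to arrest.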
Abstract:
In human systemic lupus erythematosus (SLE), diverse autoantibodies accumulate over years before disease manifestation. Unaffected relatives of SLE patients frequently share a sustained production of autoantibodies of indistinguishable specificity, usually without ever acquiring the disease. We studied the relations between IgG autoantibody profiles and peripheral blood activated regulatory T-cells (aTregs), represented by CD4(+)CD25(bright) T-cells that were regularly 70-90% Foxp3(+). We found consistent positive correlations of broad-range as well as specific SLE-associated IgG with aTreg frequencies within unaffected relatives, but not patients or unrelated controls. Our interpretation: unaffected relatives with shared genetic factors compensated for pathogenic effects through aTregs engaged in parallel with the individual autoantibody production. To study this further, we applied a novel analytic approach named coreferentiality, which tests the indirect relatedness of parameters with respect to multivariate phenotype data. The results show that, independently of their direct correlation, aTreg frequencies and specific SLE-associated IgG were likely functionally related in unaffected relatives: they significantly paralleled each other in their relations to broad-range immunoblot autoantibody profiles. In unaffected relatives, we also found coreferential effects of genetic variation in the loci encoding IL-2 and CD25. A model of CD25 functional genetic effects constructed by coreferentiality maximization suggests that the IL-2-CD25 interaction, likely stimulating aTregs in unaffected relatives, had the opposite effect in SLE patients, presumably triggering primarily T-effector cells in this group. Coreferentiality modeling as performed here could also be useful in other contexts, particularly for exploring combined functional genetic effects.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Standard factorial designs sometimes may be inadequate for experiments that aim to estimate a generalized linear model, for example, for describing a binary response in terms of several variables. A method is proposed for finding exact designs for such experiments that uses a criterion allowing for uncertainty in the link function, the linear predictor, or the model parameters, together with a design search. Designs are assessed and compared by simulation of the distribution of efficiencies relative to locally optimal designs over a space of possible models. Exact designs are investigated for two applications, and their advantages over factorial and central composite designs are demonstrated.
Abstract:
Pharmacodynamics (PD) is the study of the biochemical and physiological effects of drugs. This paper considers the construction of optimal designs for dose-ranging trials with multiple periods, where the outcome of the trial (the effect of the drug) is a binary response: the success or failure of a drug to bring about a particular change in the subject after a given amount of time. The carryover effect of each dose from one period to the next is assumed to be proportional to the direct effect. It is shown for a logistic regression model that the efficiency of the optimal parallel (single-period) or crossover (two-period) design is substantially greater than that of a balanced design. The optimal designs are also shown to be robust to misspecification of the parameter values. Finally, the parallel and crossover designs are combined to provide the experimenter with greater flexibility.
Abstract:
A visualization plot of a data set of molecular data is a useful tool for gaining insight into a set of molecules. In chemoinformatics, most visualization plots are of molecular descriptors, and the statistical model most often used to produce a visualization is principal component analysis (PCA). This paper takes PCA, together with four other statistical models (NeuroScale, GTM, LTM, and LTM-LIN), and evaluates their ability to produce clustering in visualizations not of molecular descriptors but of molecular fingerprints. Two different tasks are addressed: understanding structural information (particularly combinatorial libraries) and relating structure to activity. The quality of the visualizations is compared both subjectively (by visual inspection) and objectively (with global distance comparisons and local k-nearest-neighbor predictors). On the data sets used to evaluate clustering by structure, LTM is found to perform significantly better than the other models. In particular, the clusters in LTM visualization space are consistent with the relationships between the core scaffolds that define the combinatorial sublibraries. On the data sets used to evaluate clustering by activity, LTM again gives the best performance but by a smaller margin. The results of this paper demonstrate the value of using both a nonlinear projection map and a Bernoulli noise model for modeling binary data.
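A minimal sketch of the baseline technique in the comparison above, PCA via SVD, applied to binary fingerprints; the fingerprints and the two planted "scaffolds" are synthetic stand-ins, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
proto_a = rng.integers(0, 2, 128)             # scaffold A prototype bits
proto_b = rng.integers(0, 2, 128)             # scaffold B prototype bits
flip = lambda p: np.abs(p - (rng.uniform(size=128) < 0.05))  # 5% bit noise
X = np.array([flip(proto_a) for _ in range(50)] +
             [flip(proto_b) for _ in range(50)], dtype=float)

Xc = X - X.mean(axis=0)                       # center before PCA
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
coords = Xc @ Vt[:2].T                        # 2-D visualization coordinates

# The two scaffolds separate along the first principal component.
print(coords[:50, 0].mean(), coords[50:, 0].mean())
```

PCA's implicit Gaussian noise assumption is mismatched to 0/1 bits, which is one motivation for the Bernoulli-noise, nonlinear models (such as LTM) that the paper finds superior.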
Abstract:
Visual mental imagery is a complex process that may be influenced by the content of mental images. Neuropsychological evidence from patients with hemineglect suggests that in the imagery domain environments and objects may be represented separately and may be selectively affected by brain lesions. In the present study, we used functional magnetic resonance imaging (fMRI) to assess the possibility of neural segregation among mental images depicting parts of an object, of an environment (imagined from a first-person perspective), and of a geographical map, using both a mass univariate and a multivariate approach. Data show that different brain areas are involved in different types of mental images. Imagining an environment relies mainly on regions known to be involved in navigational skills, such as the retrosplenial complex and parahippocampal gyrus, whereas imagining a geographical map mainly requires activation of the left angular gyrus, known to be involved in the representation of categorical relations. Imagining a familiar object mainly requires activation of parietal areas involved in visual space analysis in both the imagery and the perceptual domain. We also found that the pattern of activity in most of these areas specifically codes for the spatial arrangement of the parts of the mental image. Our results clearly demonstrate a functional neural segregation for different contents of mental images and suggest that visuospatial information is coded by different patterns of activity in brain areas involved in visual mental imagery. Hum Brain Mapp 36:945-958, 2015.