239 results for NUCLEAR DATA COLLECTIONS
Abstract:
Qualitative data analysis (QDA) is often a time-consuming and laborious process usually involving the management of large quantities of textual data. Recently developed computer programs offer great advances in the efficiency of the processes of QDA. In this paper we report on an innovative use of a combination of extant computer software technologies to further enhance and simplify QDA. Used in appropriate circumstances, we believe that this innovation greatly enhances the speed with which theoretical and descriptive ideas can be abstracted from rich, complex, and chaotic qualitative data. © 2001 Human Sciences Press, Inc.
Abstract:
Much progress has been made on inferring population history from molecular data. However, complex demographic scenarios have been considered rarely or have proved intractable. The serial introduction of the South-Central American cane toad Bufo marinus in various Caribbean and Pacific islands involves four major phases: a possible genetic admixture during the first introduction, a bottleneck associated with founding, a transitory population boom, and finally, a demographic stabilization. A large amount of historical and demographic information is available for those introductions and can be combined profitably with molecular data. We used a Bayesian approach to combine this information with microsatellite (10 loci) and enzyme (22 loci) data and used a rejection algorithm to simultaneously estimate the demographic parameters describing the four major phases of the introduction history. The general historical trends supported by microsatellites and enzymes were similar. However, there was stronger support for a larger bottleneck at introductions for microsatellites than enzymes, and for a more balanced genetic admixture for enzymes than for microsatellites. Very little information was obtained from either marker about the transitory population boom observed after each introduction. Possible explanations for differences in resolution of demographic events and discrepancies between results obtained with microsatellites and enzymes were explored. Limits of our model and method for the analysis of nonequilibrium populations were discussed.
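The rejection algorithm described above is a form of approximate Bayesian computation. The following minimal sketch illustrates the rejection-sampling idea only: parameters are drawn from priors, a toy simulator produces a summary statistic (here an invented heterozygosity measure for a founding bottleneck followed by growth), and draws whose simulated statistic lies close to the observed value are retained. It is not the authors' simulator, and every name, prior, and value below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_heterozygosity(founders, growth_rate):
    """Toy stand-in for a genetic simulator: summary statistic for a founding
    bottleneck of size `founders` followed by growth at `growth_rate`."""
    h0 = 0.7                                   # source-population heterozygosity (illustrative)
    loss = 1.0 - 1.0 / (2.0 * founders)        # one-generation bottleneck loss
    recovery = 1.0 - np.exp(-growth_rate)      # crude effect of the post-introduction boom
    return h0 * loss * (0.9 + 0.1 * recovery) + rng.normal(0.0, 0.01)

observed_h = 0.58      # pretend this was computed from the marker data
tolerance = 0.02
accepted = []

for _ in range(20_000):
    # Draw candidate parameters from the priors (uniform here for simplicity).
    founders = rng.integers(2, 200)
    growth_rate = rng.uniform(0.0, 2.0)
    if abs(simulate_heterozygosity(founders, growth_rate) - observed_h) < tolerance:
        accepted.append((founders, growth_rate))

accepted = np.array(accepted)
print("posterior mean founders:", accepted[:, 0].mean())
print("posterior mean growth rate:", accepted[:, 1].mean())
```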
Abstract:
Objectives: This study examines human scalp electroencephalographic (EEG) data for evidence of non-linear interdependence between posterior channels. The spectral and phase properties of those epochs of EEG exhibiting non-linear interdependence are studied. Methods: Scalp EEG data were collected from 40 healthy subjects. A technique for the detection of non-linear interdependence was applied to 2.048 s segments of posterior bipolar electrode data. Amplitude-adjusted phase-randomized surrogate data were used to statistically determine which EEG epochs exhibited non-linear interdependence. Results: Statistically significant evidence of non-linear interactions was evident in 2.9% (eyes open) to 4.8% (eyes closed) of the epochs. In the eyes-open recordings, these epochs exhibited a peak in the spectral and cross-spectral density functions at about 10 Hz. Two types of EEG epochs are evident in the eyes-closed recordings: one type exhibits a peak in the spectral density and cross-spectrum at 8 Hz, while the other has increased spectral and cross-spectral power across faster frequencies. Epochs identified as exhibiting non-linear interdependence display a tendency towards phase interdependencies across and between a broad range of frequencies. Conclusions: Non-linear interdependence is detectable in a small number of multichannel EEG epochs and makes a contribution to the alpha rhythm. Non-linear interdependence produces spatially distributed activity that exhibits phase synchronization between oscillations present at different frequencies. The possible physiological significance of these findings is discussed with reference to the dynamical properties of neural systems and the role of synchronous activity in the neocortex. (C) 2002 Elsevier Science Ireland Ltd. All rights reserved.
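The surrogate-data procedure mentioned above can be sketched as follows: an amplitude-adjusted, phase-randomized (AAFT) surrogate preserves the amplitude distribution and approximate spectrum of a channel while destroying non-linear structure, and a coupling statistic computed on the original epoch is compared with its distribution over surrogates. The sketch below uses a deliberately simple placeholder statistic, not the state-space interdependence measure of the study, and the sampling rate and signals are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def aaft_surrogate(x):
    """Amplitude-adjusted, phase-randomised surrogate of a 1-D signal."""
    n = len(x)
    gaussian = np.sort(rng.normal(size=n))
    ranks = np.argsort(np.argsort(x))
    g = gaussian[ranks]                        # rescale x onto a Gaussian amplitude distribution
    spec = np.fft.rfft(g)
    phases = rng.uniform(0, 2 * np.pi, len(spec))
    phases[0] = 0.0
    g_rand = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n)
    sorted_x = np.sort(x)
    return sorted_x[np.argsort(np.argsort(g_rand))]   # restore the original amplitudes

def coupling_stat(x, y):
    """Placeholder interdependence statistic (correlation of squared, mean-removed
    signals); the paper uses a state-space non-linear interdependence index."""
    xc, yc = x - x.mean(), y - y.mean()
    return abs(np.corrcoef(xc ** 2, yc ** 2)[0, 1])

# Two toy 'EEG channels' with a shared non-linear component.
t = np.arange(512) / 250.0                     # 2.048 s epoch at an assumed 250 Hz
shared = np.sin(2 * np.pi * 10 * t) ** 2
x = shared + 0.5 * rng.normal(size=t.size)
y = shared + 0.5 * rng.normal(size=t.size)

observed = coupling_stat(x, y)
null = [coupling_stat(aaft_surrogate(x), aaft_surrogate(y)) for _ in range(199)]
p_value = (1 + sum(s >= observed for s in null)) / (1 + len(null))
print(f"observed statistic {observed:.3f}, surrogate p-value {p_value:.3f}")
```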
Abstract:
Differentiated dendritic cells (DC) have been identified by the presence of nuclear RelB (nRelB) and HLA-DR, and the absence of CD20 or high levels of CD68, in lymph nodes and active rheumatoid arthritis synovial tissue. The current studies aimed to identify conditions in which nRelB is expressed in human tissues, by single and double immunohistochemistry of formalin-fixed peripheral and lymphoid tissue. Normal peripheral tissue did not contain nRelB(+) cells. nRelB(+) DC were located only in T- or B-cell areas of lymphoid tissue associated with normal organs or peripheral tissues, including tonsil, colon, spleen and thymus, or in association with T cells in inflamed peripheral tissue. Inflamed sites included skin delayed-type hypersensitivity reaction, and a wide range of tissues affected by autoimmune disease. Nuclear RelB(+) HLA-DR(-) follicular DC were located in B-cell follicles in lymphoid organs and in lymphoid-like follicles of some tissues affected by autoimmune disease. Lymphoid tissue T-cell areas also contained nRelB(-) HLA-DR(+) cells, some of which expressed CD123 and/or CD68. Nuclear RelB(+) cells are found in normal lymphoid organs and in peripheral tissue in the context of inflammation, but not under normal resting conditions.
Abstract:
In many occupational safety interventions, the objective is to reduce the injury incidence as well as the mean claims cost once injury has occurred. The claims cost data within a period typically contain a large proportion of zero observations (no claim). The distribution thus comprises a point mass at 0 mixed with a non-degenerate parametric component. Essentially, the likelihood function can be factorized into two orthogonal components. These two components relate respectively to the effect of covariates on the incidence of claims and the magnitude of claims, given that claims are made. Furthermore, the longitudinal nature of the intervention inherently imposes some correlation among the observations. This paper introduces a zero-augmented gamma random effects model for analysing longitudinal data with many zeros. Adopting the generalized linear mixed model (GLMM) approach reduces the original problem to the fitting of two independent GLMMs. The method is applied to evaluate the effectiveness of a workplace risk assessment teams program, trialled within the cleaning services of a Western Australian public hospital.
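Because the likelihood factorizes into an incidence component and a claim-size component, the two parts can be fitted separately. The sketch below shows only that factorization, with plain fixed-effects models (a logistic model for whether a claim occurs and a gamma regression with log link for positive costs) on simulated data; the random effects that handle the longitudinal correlation in the paper's GLMM approach are omitted, and all covariates and coefficients are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, GammaRegressor

rng = np.random.default_rng(2)

# Toy longitudinal claims data: one row per worker-period.
n = 2000
intervention = rng.integers(0, 2, n)              # 0 = control period, 1 = post-intervention
hours = rng.uniform(20, 40, n)                    # exposure covariate (illustrative)
X = np.column_stack([intervention, hours])

# Simulate the two processes that the zero-augmented model factorises into.
p_claim = 1 / (1 + np.exp(-(-1.0 - 0.5 * intervention + 0.02 * hours)))
has_claim = rng.random(n) < p_claim
cost = np.where(has_claim,
                rng.gamma(shape=2.0, scale=np.exp(5.0 - 0.3 * intervention) / 2.0),
                0.0)

# Component 1: incidence of claims (the logistic part).
incidence = LogisticRegression().fit(X, has_claim)

# Component 2: claim size fitted only to the positive costs (the gamma part, log link).
severity = GammaRegressor().fit(X[has_claim], cost[has_claim])

print("incidence coefficients:", incidence.coef_)
print("severity coefficients:", severity.coef_)
```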
Abstract:
Binning and truncation of data are common in data analysis and machine learning. This paper addresses the problem of fitting mixture densities to multivariate binned and truncated data. The EM approach proposed by McLachlan and Jones (Biometrics, 44(2): 571-578, 1988) for the univariate case is generalized to multivariate measurements. The multivariate solution requires the evaluation of multidimensional integrals over each bin at each iteration of the EM procedure. Naive implementation of the procedure can lead to computationally inefficient results. To reduce the computational cost a number of straightforward numerical techniques are proposed. Results on simulated data indicate that the proposed methods can achieve significant computational gains with no loss in the accuracy of the final parameter estimates. Furthermore, experimental results suggest that with a sufficient number of bins and data points it is possible to estimate the true underlying density almost as well as if the data were not binned. The paper concludes with a brief description of an application of this approach to diagnosis of iron deficiency anemia, in the context of binned and truncated bivariate measurements of volume and hemoglobin concentration from an individual's red blood cells.
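The distinctive feature of EM for binned data is that the E-step works with component probabilities integrated over each bin rather than point densities. The univariate sketch below shows that idea for a two-component Gaussian mixture; it is a simplification in two respects that the paper does not make: it is one-dimensional rather than multivariate, and the M-step uses bin midpoints as a crude stand-in for the exact within-bin conditional moments.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Binned data: counts of observations falling into each interval.
edges = np.linspace(-4, 8, 25)
sample = np.concatenate([rng.normal(0, 1, 600), rng.normal(4, 1.5, 400)])
counts, _ = np.histogram(sample, edges)

# Initial guesses for a two-component Gaussian mixture.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 5.0])
sd = np.array([1.0, 1.0])

for _ in range(200):
    # E-step: component probabilities integrated over each bin (not point densities).
    bin_prob = np.stack([norm.cdf(edges[1:], m, s) - norm.cdf(edges[:-1], m, s)
                         for m, s in zip(mu, sd)])            # shape (2, n_bins)
    joint = w[:, None] * bin_prob
    resp = joint / joint.sum(axis=0, keepdims=True)           # responsibilities per bin

    # M-step: weighted by bin counts, using midpoints as an approximation to the
    # within-bin conditional means (the paper evaluates the bin integrals exactly).
    mid = 0.5 * (edges[:-1] + edges[1:])
    nk = (resp * counts).sum(axis=1)
    w = nk / counts.sum()
    mu = (resp * counts * mid).sum(axis=1) / nk
    sd = np.sqrt((resp * counts * (mid - mu[:, None]) ** 2).sum(axis=1) / nk)

print("weights", w, "means", mu, "sds", sd)
```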
Abstract:
Motivation: This paper introduces the software EMMIX-GENE that has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular, of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples, by fitting mixtures of t distributions and ranking the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic, used in conjunction with a threshold on the size of a cluster, allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, so mixtures of factor analyzers are used to effectively reduce the dimension of the feature space of genes. Results: The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes can be selected that reveal interesting clusterings of the tissues, consistent either with the external classification of the tissues or with background biological knowledge of these data sets.
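The gene-selection step amounts to fitting one- and two-component mixtures to each gene's expression across the tissues and ranking genes by the resulting likelihood ratio statistic. The sketch below illustrates that ranking on simulated data, using normal mixtures as a stand-in for the t mixtures fitted by EMMIX-GENE and omitting the companion threshold on cluster size; the threshold value and data are invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)

# Toy expression matrix: rows = genes, columns = tissue samples.
n_genes, n_tissues = 200, 40
X = rng.normal(size=(n_genes, n_tissues))
X[:20, :20] += 2.5            # 20 'relevant' genes separate the first 20 tissues

def minus_two_log_lr(values):
    """-2 log likelihood ratio for one vs. two components fitted to a single gene's
    expression across tissues (normal mixtures here; EMMIX-GENE uses t mixtures)."""
    v = values.reshape(-1, 1)
    ll1 = GaussianMixture(1, random_state=0).fit(v).score(v) * len(v)
    ll2 = GaussianMixture(2, n_init=5, random_state=0).fit(v).score(v) * len(v)
    return -2 * (ll1 - ll2)

lr = np.array([minus_two_log_lr(X[g]) for g in range(n_genes)])
threshold = 8.0               # illustrative threshold on the likelihood ratio statistic
selected = np.flatnonzero(lr > threshold)
print(f"{selected.size} genes retained for clustering the tissues")
```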
Abstract:
Genetic research on risk of alcohol, tobacco or drug dependence must make allowance for the partial overlap of risk factors for initiation of use and risk factors for dependence or other outcomes in users. Except in the extreme cases where genetic and environmental risk factors for initiation and dependence overlap completely or are uncorrelated, there is no consensus about how best to estimate the magnitude of genetic or environmental correlations between Initiation and Dependence in twin and family data. We explore by computer simulation the biases to estimates of genetic and environmental parameters caused by model misspecification when Initiation can only be defined as a binary variable. For plausible simulated parameter values, the two-stage genetic models that we consider yield estimates of genetic and environmental variances for Dependence that, although biased, are not very discrepant from the true values. However, estimates of genetic (or environmental) correlations between Initiation and Dependence may be seriously biased, and may differ markedly under different two-stage models. Such estimates may have little credibility unless external data favor selection of one particular model. These problems can be avoided if Initiation can be assessed as a multiple-category variable (e.g. never versus early-onset versus later-onset user), with at least two categories measurable in users at risk for dependence. Under these conditions, and under certain distributional assumptions, recovery of simulated genetic and environmental correlations becomes possible. Illustrative application of the model to Australian twin data on smoking confirmed substantial heritability of smoking persistence (42%) with minimal overlap with genetic influences on initiation.
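The kind of data-generating process explored by simulation here can be sketched with a simple liability-threshold model: each twin has correlated genetic liabilities for Initiation and Dependence, Initiation is observed as a binary variable, and Dependence is observed only in initiators. The sketch below simulates monozygotic pairs only, with illustrative heritabilities, thresholds, and genetic correlation; it is a toy data generator, not the paper's two-stage models or their estimation.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_mz_pairs(n_pairs, h2_init=0.5, h2_dep=0.5, rg=0.6):
    """Simulate MZ twin liabilities for Initiation and Dependence under a simple
    bivariate additive-genetic model with genetic correlation `rg`."""
    # Additive-genetic deviations shared by the pair (MZ twins share all genes).
    g = rng.multivariate_normal([0, 0], [[1, rg], [rg, 1]], n_pairs)
    twins = []
    for _ in range(2):                                   # two twins per pair
        e = rng.normal(size=(n_pairs, 2))                # unique environment, uncorrelated
        li = np.sqrt(h2_init) * g[:, 0] + np.sqrt(1 - h2_init) * e[:, 0]
        ld = np.sqrt(h2_dep) * g[:, 1] + np.sqrt(1 - h2_dep) * e[:, 1]
        twins.append((li, ld))
    return twins

twin1, twin2 = simulate_mz_pairs(50_000)
init1, init2 = twin1[0] > 0.0, twin2[0] > 0.0            # binary Initiation (threshold 0)
dep1 = np.where(init1, twin1[1] > 0.5, np.nan)           # Dependence observed only in users
dep2 = np.where(init2, twin2[1] > 0.5, np.nan)

both_users = init1 & init2
print("proportion of MZ pairs where both twins initiated:", both_users.mean())
print("proportion of user pairs concordant for dependence status:",
      np.mean(dep1[both_users] == dep2[both_users]))
```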
Abstract:
Life history has been implicated as a determinant of variation in rate of molecular evolution amongst vertebrate species because of a negative correlation between body size and substitution rate for many molecular data sets. Both the generality and the cause of the negative body-size trend have been debated, and the validity of key studies has been questioned (particularly concerning the failure to account for phylogenetic bias). In this study, a comparative method has been used to test for an association between a range of life-history variables (such as body size, age at maturity, and clutch size) and DNA substitution rate for three genes (NADH4, cytochrome b, and c-mos). A negative relationship between body size and rate of molecular evolution was found for phylogenetically independent pairs of reptile species spanning turtles, lizards, snakes, crocodiles, and tuatara. Although this study was limited by the number of comparisons for which both sequence and life-history data were available, the results suggest that a negative body-size trend in rate of molecular evolution may be a general feature of reptile molecular evolution, consistent with similar studies of mammals and birds. This observation has important implications for uncovering the mechanisms of molecular evolution and warns against assuming that related lineages will share the same substitution rate (a local molecular clock) in order to date evolutionary divergences from DNA sequences.
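A phylogenetically independent pairs analysis of this kind reduces to asking whether, within each pair, the larger-bodied species consistently has the lower substitution rate. The sketch below shows one simple way to test that with a sign-style test on the products of the within-pair differences; the pair values are invented for illustration and are not the reptile data of the study.

```python
import numpy as np
from scipy.stats import wilcoxon

# Toy phylogenetically independent pairs: within each pair, the difference in
# log body size and the difference in estimated substitution rate.
log_size_diff = np.array([ 1.2, -0.8,  2.1,  0.5, -1.5,  0.9,  1.8, -0.3])
rate_diff     = np.array([-0.4,  0.3, -0.9, -0.1,  0.6, -0.2, -0.7,  0.1])

# A negative body-size trend predicts that the larger member of each pair has the
# lower substitution rate, i.e. the products below should be mostly negative.
products = log_size_diff * rate_diff
stat, p = wilcoxon(products)                 # signed-rank test against a median of zero
print(f"{np.sum(products < 0)}/{len(products)} pairs in the predicted direction, p = {p:.3f}")
```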
Abstract:
Nuclear receptors are a superfamily of metazoan transcription factors that have been shown to be involved in a wide range of developmental and physiological processes. A PCR-based survey of genomic DNA and developmental cDNAs from the ascidian Herdmania identifies eight members of this multigene family. Sequence comparisons and phylogenetic analyses reveal that these ascidian nuclear receptors are representative of five of the six previously defined nuclear receptor subfamilies and are apparent homologues of retinoic acid [NR1B], retinoid X [NR2B], peroxisome proliferator-activated [NR1C], estrogen related [NR3B], neuron-derived orphan (NOR) [NR4A3], nuclear orphan [NR4A], TR2 orphan [NR2C1] and COUP orphan [NR2F3] receptors. Phylogenetic analyses that include the ascidian genes produce topologically distinct trees that suggest a redefinition of some nuclear receptor subfamilies. These trees also suggest that extensive gene duplication occurred after the vertebrates split from invertebrate chordates. These ascidian nuclear receptor genes are expressed differentially during embryogenesis and metamorphosis.
Abstract:
Paget disease of bone (PDB) is characterized by increased osteoclast activity and localized abnormal bone remodeling. PDB has a significant genetic component, with evidence of linkage to chromosomes 6p21.3 (PDB1) and 18q21-22 (PDB2) in some pedigrees. There is evidence of genetic heterogeneity, with other pedigrees showing negative linkage to these regions. TNFRSF11A, a gene that is essential for osteoclast formation and that encodes receptor activator of nuclear factor-kappa B (RANK), has been mapped to the PDB2 region. TNFRSF11A mutations that segregate in pedigrees with either familial expansile osteolysis or familial PDB have been identified; however, linkage studies and mutation screening have excluded the involvement of RANK in the majority of patients with PDB. We have excluded linkage, both to PDB1 and to PDB2, in a large multigenerational pedigree with multiple family members affected by PDB. We have conducted a genomewide scan of this pedigree, followed by fine mapping and multipoint analysis in regions of interest. The peak two-point LOD scores from the genomewide scan were 2.75, at D7S507, and 1.76, at D18S70. Multipoint and haplotype analysis of markers flanking D7S507 did not support linkage to this region. Haplotype analysis of markers flanking D18S70 demonstrated a haplotype segregating with PDB in a large subpedigree. This subpedigree had a significantly lower age at diagnosis than the rest of the pedigree (51.2 +/- 8.5 vs. 64.2 +/- 9.7 years; P = .0012). Linkage analysis of this subpedigree demonstrated a peak two-point LOD score of 4.23, at marker D18S1390 (theta = 0), and a peak multipoint LOD score of 4.71, at marker D18S70. Our data are consistent with genetic heterogeneity within the pedigree and indicate that 18q23 harbors a novel susceptibility gene for PDB.
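For orientation, a two-point LOD score compares the pedigree likelihood at a candidate recombination fraction with the likelihood under free recombination (theta = 0.5). The sketch below uses the textbook phase-known, fully informative formula with invented meiosis counts; the LOD scores reported in the study come from full pedigree likelihoods computed with linkage software, not from this formula.

```python
import numpy as np

def two_point_lod(n_meioses, n_recombinants, theta):
    """Two-point LOD score for phase-known, fully informative meioses: log10 of the
    likelihood at recombination fraction `theta` over the likelihood at theta = 0.5."""
    k, n = n_recombinants, n_meioses
    l_theta = theta ** k * (1 - theta) ** (n - k) if (theta > 0 or k == 0) else 0.0
    l_null = 0.5 ** n
    return np.log10(l_theta / l_null)

# Illustrative numbers only: 14 informative meioses, no recombinants.
for t in [0.0, 0.05, 0.1, 0.2]:
    print(f"theta = {t:.2f}: LOD = {two_point_lod(14, 0, t):.2f}")
```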
Abstract:
Read-only-memory-based (ROM-based) quantum computation (QC) is an alternative to oracle-based QC. It has the advantages of being less magical, and being more suited to implementing space-efficient computation (i.e., computation using the minimum number of writable qubits). Here we consider a number of small (one- and two-qubit) quantum algorithms illustrating different aspects of ROM-based QC. They are: (a) a one-qubit algorithm to solve the Deutsch problem; (b) a one-qubit binary multiplication algorithm; (c) a two-qubit controlled binary multiplication algorithm; and (d) a two-qubit ROM-based version of the Deutsch-Jozsa algorithm. For each algorithm we present experimental verification using nuclear magnetic resonance ensemble QC. The average fidelities for the implementation were in the ranges 0.9-0.97 for the one-qubit algorithms, and 0.84-0.94 for the two-qubit algorithms. We conclude with a discussion of future prospects for ROM-based quantum computation. We propose a four-qubit algorithm, using Grover's iterate, for solving a miniature real-world problem relating to the lengths of paths in a network.
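The one-qubit Deutsch problem in item (a) can be simulated classically with 2x2 matrices, which makes it a convenient way to see why a single oracle query suffices. The sketch below uses the standard phase-oracle (phase-kickback) formulation rather than the paper's ROM-based encoding of f, so it illustrates the problem being solved, not the ROM-based construction itself.

```python
import numpy as np

# Single-qubit Deutsch algorithm with a phase oracle.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def deutsch(f):
    """Decide whether f: {0,1} -> {0,1} is constant or balanced with one oracle
    call, simulated by plain matrix algebra."""
    oracle = np.diag([(-1) ** f(0), (-1) ** f(1)])       # phase-kickback form of U_f
    state = np.array([1.0, 0.0])                          # |0>
    state = H @ oracle @ H @ state
    prob_one = abs(state[1]) ** 2
    return "balanced" if prob_one > 0.5 else "constant"

for name, f in [("f(x)=0", lambda x: 0), ("f(x)=1", lambda x: 1),
                ("f(x)=x", lambda x: x), ("f(x)=1-x", lambda x: 1 - x)]:
    print(name, "->", deutsch(f))
```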