131 results for Lossless data hiding


Relevance:

20.00%

Publisher:

Abstract:

The cost and risk associated with mineral exploration in Australia increase significantly as companies move into deeper regolith-covered terrain. The ability to map the bedrock and the depth of weathering within an area has the potential to decrease this risk and increase the effectiveness of exploration programs. This paper is the second in a trilogy concerning the Grant's Patch area of the Eastern Goldfields. The recent development of the VPmg potential field inversion program, in conjunction with the acquisition of high-resolution gravity data over an area with extensive drilling, provided an opportunity to evaluate three-dimensional gravity inversion as a bedrock and regolith mapping tool. An apparent density model of the study area was constructed, with the ground represented as adjoining 200 m by 200 m vertical rectangular prisms. During inversion VPmg incrementally adjusted the density of each prism until the free-air gravity response of the model replicated the observed data. For the Grant's Patch study area, this image of the apparent density values proved easier to interpret than the Bouguer gravity image. A regolith layer was introduced into the model and realistic fresh-rock densities assigned to each basement prism according to its interpreted lithology. With the basement and regolith densities fixed, the VPmg inversion algorithm adjusted the depth to fresh basement until the misfit between the calculated and observed gravity response was minimised. The resulting geometry of the bedrock/regolith contact largely replicated the base of weathering indicated by drilling, with predicted depth-of-weathering values from gravity inversion typically within 15% of those logged during RAB and RC drilling.
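The incremental density-adjustment loop described above can be sketched in miniature. The flat-slab forward model, step size, and toy densities below are illustrative stand-ins, not VPmg's actual prism response or implementation:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def slab_gravity(density, thickness):
    """Toy forward model: gravity response of an infinite slab, g = 2*pi*G*rho*t."""
    return 2 * np.pi * G * density * thickness

def invert_density(observed, thickness, n_iter=200, step=0.5):
    """Nudge each prism's density until the modelled response fits the data."""
    density = np.zeros_like(observed)
    for _ in range(n_iter):
        misfit = observed - slab_gravity(density, thickness)
        # Incremental adjustment proportional to the local misfit.
        density += step * misfit / (2 * np.pi * G * thickness)
    return density

true_density = np.array([2670.0, 2900.0, 2500.0])  # kg/m^3, invented values
obs = slab_gravity(true_density, 200.0)            # 200 m thick prisms
est = invert_density(obs, 200.0)
```

Because the toy forward model is linear, the loop converges geometrically to the densities that reproduce the observed response; the real prism-based problem is non-linear but follows the same fit-and-adjust pattern.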

Relevance:

20.00%

Publisher:

Abstract:

Performance indicators in the public sector have often been criticised for being inadequate and not conducive to analysing efficiency. The main objective of this study is to use data envelopment analysis (DEA) to examine the relative efficiency of Australian universities. Three performance models are developed, namely, overall performance, performance on delivery of educational services, and performance on fee-paying enrolments. The findings based on 1995 data show that the university sector was performing well on technical and scale efficiency but there was room for improving performance on fee-paying enrolments. There were also small slacks in input utilisation. More universities were operating at decreasing returns to scale, indicating a potential to downsize. DEA helps in identifying the reference sets for inefficient institutions and objectively determines productivity improvements. As such, it can be a valuable benchmarking tool for educational administrators and assist in more efficient allocation of scarce resources. In the absence of market mechanisms to price educational outputs, which renders traditional production or cost functions inappropriate, universities are particularly obliged to seek alternative efficiency analysis methods such as DEA.
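A DEA efficiency score for one decision-making unit is the solution of a small linear program. The sketch below is a minimal input-oriented, constant-returns (CCR) formulation on invented data, not the study's 1995 university figures or its three models:

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o.

    X: (n_units, n_inputs) input matrix; Y: (n_units, n_outputs) output matrix.
    Minimise theta s.t. X^T @ lam <= theta * x_o,  Y^T @ lam >= y_o,  lam >= 0.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]               # decision vector: [theta, lam_1..lam_n]
    A_in = np.c_[-X[o], X.T]                  # input constraints
    A_out = np.c_[np.zeros(s), -Y.T]          # output constraints (flipped to <=)
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

# Two toy units, one input, one output: unit 1 uses twice the input for the
# same output, so it should score 0.5 against the efficient unit 0.
X = np.array([[2.0], [4.0]])
Y = np.array([[2.0], [2.0]])
```

Inefficient units get a score below 1, and the non-zero `lam` weights identify the reference set of peer units the abstract mentions.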

Relevance:

20.00%

Publisher:

Abstract:

The 16S rRNA gene (16S rDNA) is currently the most widely used gene for estimating the evolutionary history of prokaryotes. To date, there are more than 30 000 16S rDNA sequences available from the core databases, GenBank, EMBL and DDBJ. This great number may cause a dilemma when composing datasets for phylogenetic analysis, since the choice and number of reference organisms are known to affect the resulting tree topology. A group of sequences appearing monophyletic in one dataset may not be so in another. This can be especially problematic when establishing the relationships of distantly related sequences at the division (phylum) level. In this study, a multiple-outgroup approach to resolving division-level phylogenetic relationships is suggested using 16S rDNA data. The approach is illustrated by two case studies concerning the monophyly of two recently proposed bacterial divisions, OP9 and OP10.
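The core idea, checking whether a candidate group holds together as the set of reference taxa varies, can be caricatured with a crude distance-based proxy for monophyly. The matrix and criterion below are invented for illustration; the paper's analysis is tree-based, not this shortcut:

```python
import numpy as np

# Invented pairwise distances for five taxa (0-4).
D = np.array([
    [0.0, 0.4, 0.3, 0.9, 0.9],
    [0.4, 0.0, 0.5, 0.9, 0.9],
    [0.3, 0.5, 0.0, 0.9, 0.9],
    [0.9, 0.9, 0.9, 0.0, 0.2],
    [0.9, 0.9, 0.9, 0.2, 0.0],
])

def stays_monophyletic(D, group, others):
    """Proxy test: the group's largest internal distance must be smaller
    than its smallest distance to any taxon retained alongside it."""
    inside = D[np.ix_(group, group)]
    outside = D[np.ix_(group, others)]
    return bool(inside.max() < outside.min())

# Against distant outgroups only, taxa 0 and 1 appear monophyletic...
a = stays_monophyletic(D, [0, 1], [3, 4])
# ...but retaining taxon 2 in the dataset breaks the grouping.
b = stays_monophyletic(D, [0, 1], [2, 3, 4])
```

The flip between the two calls mirrors the abstract's point: whether a group appears monophyletic can depend entirely on which other sequences are in the dataset.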

Relevance:

20.00%

Publisher:

Abstract:

Qualitative data analysis (QDA) is often a time-consuming and laborious process usually involving the management of large quantities of textual data. Recently developed computer programs offer great advances in the efficiency of the processes of QDA. In this paper we report on an innovative use of a combination of extant computer software technologies to further enhance and simplify QDA. Used in appropriate circumstances, we believe that this innovation greatly enhances the speed with which theoretical and descriptive ideas can be abstracted from rich, complex, and chaotic qualitative data. © 2001 Human Sciences Press, Inc.

Relevance:

20.00%

Publisher:

Abstract:

Much progress has been made on inferring population history from molecular data. However, complex demographic scenarios have rarely been considered or have proved intractable. The serial introduction of the South-Central American cane toad Bufo marinus into various Caribbean and Pacific islands involves four major phases: a possible genetic admixture during the first introduction, a bottleneck associated with founding, a transitory population boom, and finally, a demographic stabilization. A large amount of historical and demographic information is available for those introductions and can be combined profitably with molecular data. We used a Bayesian approach to combine this information with microsatellite (10 loci) and enzyme (22 loci) data and used a rejection algorithm to simultaneously estimate the demographic parameters describing the four major phases of the introduction history. The general historical trends supported by microsatellites and enzymes were similar. However, there was stronger support for a larger bottleneck at introduction for microsatellites than for enzymes, and for a more balanced genetic admixture for enzymes than for microsatellites. Very little information was obtained from either marker about the transitory population boom observed after each introduction. Possible explanations for differences in resolution of demographic events and discrepancies between results obtained with microsatellites and enzymes are explored. Limits of our model and method for the analysis of nonequilibrium populations are discussed.
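The rejection algorithm described belongs to the approximate Bayesian computation family: draw parameters from a prior, simulate data, and keep only draws whose summary statistic lands near the observed one. The one-parameter toy below, with an invented heterozygosity model, is a sketch of the mechanics only, not the paper's multi-phase model:

```python
import random

random.seed(1)

def simulate_heterozygosity(founder_number):
    """Invented toy model: smaller founding populations erode diversity."""
    h = founder_number / (founder_number + 50.0)
    return h + random.gauss(0.0, 0.02)   # stochastic simulation noise

observed_h = 0.5                          # invented "observed" statistic
tolerance = 0.01

accepted = []
for _ in range(20000):
    n_founders = random.uniform(1, 500)   # flat prior on founder number
    if abs(simulate_heterozygosity(n_founders) - observed_h) < tolerance:
        accepted.append(n_founders)       # rejection step: keep close draws

posterior_mean = sum(accepted) / len(accepted)
```

The accepted draws approximate the posterior distribution of the parameter; the real analysis does the same with many parameters (admixture, bottleneck, boom, stabilization) and multi-locus summary statistics.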

Relevance:

20.00%

Publisher:

Abstract:

Objectives: This study examines human scalp electroencephalographic (EEG) data for evidence of non-linear interdependence between posterior channels. The spectral and phase properties of those epochs of EEG exhibiting non-linear interdependence are studied. Methods: Scalp EEG data was collected from 40 healthy subjects. A technique for the detection of non-linear interdependence was applied to 2.048 s segments of posterior bipolar electrode data. Amplitude-adjusted phase-randomized surrogate data was used to statistically determine which EEG epochs exhibited non-linear interdependence. Results: Statistically significant evidence of non-linear interactions was evident in 2.9% (eyes open) to 4.8% (eyes closed) of the epochs. In the eyes-open recordings, these epochs exhibited a peak in the spectral and cross-spectral density functions at about 10 Hz. Two types of EEG epochs are evident in the eyes-closed recordings; one type exhibits a peak in the spectral density and cross-spectrum at 8 Hz, while the other has increased spectral and cross-spectral power across faster frequencies. Epochs identified as exhibiting non-linear interdependence display a tendency towards phase interdependencies across and between a broad range of frequencies. Conclusions: Non-linear interdependence is detectable in a small number of multichannel EEG epochs, and makes a contribution to the alpha rhythm. Non-linear interdependence produces spatially distributed activity that exhibits phase synchronization between oscillations present at different frequencies. The possible physiological significance of these findings is discussed with reference to the dynamical properties of neural systems and the role of synchronous activity in the neocortex. (C) 2002 Elsevier Science Ireland Ltd. All rights reserved.
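The surrogate-data logic can be illustrated with plain phase randomisation, which is simpler than the amplitude-adjusted variant used in the study, and with skewness as a stand-in non-linear discriminating statistic. Signal and thresholds are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def phase_randomised(x, rng):
    """Surrogate with the same amplitude spectrum as x but random phases:
    it preserves the linear (spectral) structure and destroys the rest."""
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, spec.size)
    phases[0] = 0.0                          # keep the mean intact
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=x.size)

def skewness(x):
    """A simple statistic that is ~0 for linear Gaussian processes."""
    d = x - x.mean()
    return np.mean(d**3) / np.mean(d**2) ** 1.5

# Strongly non-Gaussian toy "signal" (2048 samples, as in the 2.048 s epochs).
signal = np.exp(rng.standard_normal(2048))
stat = abs(skewness(signal))
surrogate_stats = [abs(skewness(phase_randomised(signal, rng)))
                   for _ in range(99)]
# Rank-based p-value: fraction of surrogates at least as extreme as the data.
p_value = (1 + sum(s >= stat for s in surrogate_stats)) / 100
```

An epoch is flagged (here, a small p-value) only when its statistic falls outside the surrogate distribution, which is how a small percentage of epochs ends up classified as showing non-linear structure.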

Relevance:

20.00%

Publisher:

Abstract:

In many occupational safety interventions, the objective is to reduce the injury incidence as well as the mean claims cost once injury has occurred. The claims cost data within a period typically contain a large proportion of zero observations (no claim). The distribution thus comprises a point mass at 0 mixed with a non-degenerate parametric component. Essentially, the likelihood function can be factorized into two orthogonal components. These two components relate respectively to the effect of covariates on the incidence of claims and the magnitude of claims, given that claims are made. Furthermore, the longitudinal nature of the intervention inherently imposes some correlation among the observations. This paper introduces a zero-augmented gamma random effects model for analysing longitudinal data with many zeros. Adopting the generalized linear mixed model (GLMM) approach reduces the original problem to the fitting of two independent GLMMs. The method is applied to evaluate the effectiveness of a workplace risk assessment teams program, trialled within the cleaning services of a Western Australian public hospital.
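The factorisation into an incidence component and a severity component can be shown without the random effects. The simulated claim rate and gamma parameters below are illustrative, not the hospital data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated period costs: 70% zeros (no claim), gamma-distributed otherwise.
n = 5000
occurs = rng.random(n) < 0.3
costs = np.where(occurs, rng.gamma(2.0, 500.0, n), 0.0)

# Component 1 -- incidence: the Bernoulli MLE is the proportion of claims.
p_hat = np.mean(costs > 0)

# Component 2 -- severity, fitted only on the positive costs.
# Method-of-moments gamma fit: shape = mean^2/var, scale = var/mean.
pos = costs[costs > 0]
shape_hat = pos.mean() ** 2 / pos.var()
scale_hat = pos.var() / pos.mean()
```

Because the two likelihood components are orthogonal, each part can be fitted separately; the paper's GLMM version does the same with covariates and a random effect for the longitudinal correlation.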

Relevance:

20.00%

Publisher:

Abstract:

Binning and truncation of data are common in data analysis and machine learning. This paper addresses the problem of fitting mixture densities to multivariate binned and truncated data. The EM approach proposed by McLachlan and Jones (Biometrics, 44: 2, 571-578, 1988) for the univariate case is generalized to multivariate measurements. The multivariate solution requires the evaluation of multidimensional integrals over each bin at each iteration of the EM procedure. Naive implementation of the procedure can lead to computationally inefficient results. To reduce the computational cost a number of straightforward numerical techniques are proposed. Results on simulated data indicate that the proposed methods can achieve significant computational gains with no loss in the accuracy of the final parameter estimates. Furthermore, experimental results suggest that with a sufficient number of bins and data points it is possible to estimate the true underlying density almost as well as if the data were not binned. The paper concludes with a brief description of an application of this approach to diagnosis of iron deficiency anemia, in the context of binned and truncated bivariate measurements of volume and hemoglobin concentration from an individual's red blood cells.
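The bin-integral likelihood at the heart of the approach can be sketched for a single univariate normal, maximised directly here rather than via EM: each bin's probability is the integral of the density over the bin, i.e. a difference of CDF values. Data and bin edges are invented:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
edges = np.linspace(-4, 4, 17)                         # 16 bins
counts, _ = np.histogram(rng.normal(0.5, 1.2, 10000), bins=edges)

def neg_log_lik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                          # keep sigma positive
    # Bin probability = integral of the density over the bin = CDF difference.
    p = norm.cdf(edges[1:], mu, sigma) - norm.cdf(edges[:-1], mu, sigma)
    return -np.sum(counts * np.log(p + 1e-300))

res = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

In the multivariate mixture case of the paper, the one-dimensional CDF difference becomes a multidimensional integral over each bin, evaluated at every EM iteration, which is exactly where the computational cost the authors address comes from.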

Relevance:

20.00%

Publisher:

Abstract:

Motivation: This paper introduces the software EMMIX-GENE that has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular, of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples by fitting mixtures of t distributions to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic, used in conjunction with a threshold on the size of a cluster, allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, and so the use of mixtures of factor analyzers is exploited to effectively reduce the dimension of the feature space of genes. Results: The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes can be selected that reveal interesting clusterings of the tissues, consistent either with the external classification of the tissues or with background and biological knowledge of these sets.
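The gene-ranking step can be imitated in miniature: fit one component versus a two-component mixture to each gene's expression values across tissues and rank genes by the likelihood ratio. The sketch below uses normal components with a shared variance instead of EMMIX-GENE's t mixtures, on invented data:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_lik_1(x):
    """Log-likelihood of a single normal (MLE mean and variance)."""
    mu, sd = x.mean(), x.std()
    return np.sum(-0.5 * np.log(2 * np.pi * sd**2) - (x - mu) ** 2 / (2 * sd**2))

def log_lik_2(x, n_iter=100):
    """EM for a two-component univariate normal mixture with shared variance."""
    w, mu1, mu2, sd = 0.5, x.min(), x.max(), x.std()
    for _ in range(n_iter):
        d1 = w * np.exp(-((x - mu1) ** 2) / (2 * sd**2))
        d2 = (1 - w) * np.exp(-((x - mu2) ** 2) / (2 * sd**2))
        r = d1 / (d1 + d2)                               # E-step: responsibilities
        w = r.mean()                                     # M-step updates
        mu1 = np.sum(r * x) / r.sum()
        mu2 = np.sum((1 - r) * x) / (1 - r).sum()
        sd = np.sqrt(np.sum(r * (x - mu1) ** 2 + (1 - r) * (x - mu2) ** 2) / x.size)
    dens = (w * np.exp(-((x - mu1) ** 2) / (2 * sd**2))
            + (1 - w) * np.exp(-((x - mu2) ** 2) / (2 * sd**2)))
    return np.sum(np.log(dens / np.sqrt(2 * np.pi * sd**2)))

# A gene that splits the tissues into two groups versus one that does not.
bimodal = np.r_[rng.normal(-2, 1, 50), rng.normal(2, 1, 50)]
unimodal = rng.normal(0, 1, 100)
lr_bimodal = 2 * (log_lik_2(bimodal) - log_lik_1(bimodal))
lr_unimodal = 2 * (log_lik_2(unimodal) - log_lik_1(unimodal))
```

Genes whose likelihood ratio clears a threshold (here, the bimodal one) are retained as candidates for clustering the tissues; uninformative genes yield ratios near zero.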