14 results for Geographic and dimension errors
in University of Queensland eSpace - Australia
Abstract:
Children aged between 3 and 7 years were taught simple and dimension-abstracted oddity discrimination using learning-set training techniques, in which isomorphic problems with varying content were presented with verbal explanation and feedback. Following the training phase, simple oddity (SO), dimension-abstracted oddity with one or two irrelevant dimensions, and non-oddity (NO) tasks were presented (without feedback) to determine the basis of solution. Although dimension-abstracted oddity requires discrimination based on a stimulus that is different from the others, which are all the same as each other on the relevant dimension, this was not the major strategy. The data were more consistent with use of a "simple oddity" strategy by 3- to 4-year-olds, and a "most different" strategy by 6- to 7-year-olds. These strategies are interpreted as reducing task complexity. (C) 2002 Elsevier Science Inc. All rights reserved.
Abstract:
We conducted a demographic and genetic study to investigate the effects of fragmentation due to the establishment of an exotic softwood plantation on populations of a small marsupial carnivore, the agile antechinus (Antechinus agilis), and the factors influencing the persistence of those populations in the fragmented habitat. The first aspect of the study was a descriptive analysis of patch occupancy and population size, in which we found a patch occupancy rate of 70% among 23 sites in the fragmented habitat compared to 100% among 48 sites with the same habitat characteristics in unfragmented habitat. Mark-recapture analyses yielded most-likely population size estimates of between 3 and 85 among the 16 occupied patches in the fragmented habitat. Hierarchical partitioning and model selection were used to identify geographic and habitat-related characteristics that influence patch occupancy and population size. Patch occupancy was primarily influenced by geographic isolation and habitat quality (vegetation basal area). The variance in population size among occupied sites was influenced primarily by forest type (dominant Eucalyptus species) and, to a lesser extent, by patch area and topographic context (gully sites had larger populations). A comparison of the sex ratios between the samples from the two habitat contexts revealed a significant deficiency of males in the fragmented habitat. We hypothesise that this is due to male-biased dispersal in an environment with increased dispersal-associated mortality. The population size and sex ratio data were incorporated into a simulation study to estimate the proportion of genetic diversity that would have been lost over the known timescale since fragmentation if the patch populations had been totally isolated. The observed difference in genetic diversity (gene diversity and allelic richness at microsatellite and mitochondrial markers) between 16 fragmented and 12 unfragmented sites was extremely low and inconsistent with the isolation of the patch populations. Our results show that although the remnant habitat patches comprise approximately 2% of the study area, they can support non-isolated populations. However, the distribution of agile antechinus populations in the fragmented system is dependent on habitat quality and patch connectivity. (C) 2004 Elsevier Ltd. All rights reserved.
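A note on the simulation logic: the expected decay of gene diversity in a fully isolated population is a standard population-genetics result (textbook background, not a formula quoted from the paper). Under pure drift in a closed population of effective size N_e, the expected gene diversity after t non-overlapping generations is

```latex
H_t = H_0\left(1 - \frac{1}{2N_e}\right)^{t},
```

where H_0 is the initial gene diversity. Observed diversity in the patches well above this expectation is the kind of evidence that argues against total isolation.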
Abstract:
In just over a decade, the use of molecular approaches for the recognition of parasites has become commonplace. For trematodes, the internal transcribed spacer region of ribosomal DNA (ITS rDNA) has become the default region of choice. Here, we review the findings of 63 studies that report ITS rDNA sequence data for about 155 digenean species from 19 families, and then review the levels of variation that have been reported and how the variation has been interpreted. Overall, complete ITS sequences (or ITS1 or ITS2 regions alone) usually distinguish trematode species clearly, including combinations for which morphology gives ambiguous results. Closely related species may have few base differences and in at least one convincing case the ITS2 sequences of two good species are identical. In some cases, the ITS1 region gives greater resolution than the ITS2 because of the presence of variable repeat units that are generally lacking in the ITS2. Intraspecific variation is usually low and frequently apparently absent. Information on geographical variation of digeneans is limited but at least some of the reported variation probably reflects the presence of multiple species. Despite the accepted dogma that concerted evolution makes the individual representative of the entire species, a significant number of studies have reported at least some intraspecific variation. The significance of such variation is difficult to assess a posteriori, but it seems likely that identification and sequencing errors account for some of it and failure to recognise separate species may also be significant. Some reported variation clearly requires further analysis. The use of a yardstick to determine when separate species should be recognised is flawed. Instead, we argue that consistent genetic differences that are associated with consistent morphological or biological traits should be considered the marker for separate species. We propose a generalised approach to the use of rDNA to distinguish trematode species.
Abstract:
The notorious "dimensionality curse" is a well-known phenomenon for any multi-dimensional index attempting to scale up to high dimensions. One well-known approach to overcoming performance degradation as dimensionality increases is to reduce the dimensionality of the original dataset before constructing the index. However, identifying the correlation among the dimensions and effectively reducing them are challenging tasks. In this paper, we present an adaptive Multi-level Mahalanobis-based Dimensionality Reduction (MMDR) technique for high-dimensional indexing. Our MMDR technique has four notable features compared to existing methods. First, it discovers elliptical clusters for more effective dimensionality reduction by using only the low-dimensional subspaces. Second, data points in the different axis systems are indexed using a single B+-tree. Third, our technique is highly scalable in terms of data size and dimension. Finally, it is also dynamic and adaptive to insertions. An extensive performance study was conducted using both real and synthetic datasets, and the results show that our technique not only achieves higher precision, but also enables queries to be processed efficiently. Copyright Springer-Verlag 2005
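As a rough illustration of the core idea, projecting each elliptical cluster onto the principal axes of its own covariance, here is a minimal numpy sketch. It is not the authors' MMDR algorithm (which additionally indexes the reduced points from all axis systems in a single B+-tree and adapts to insertions); the data and dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))   # synthetic high-dimensional points (one cluster)

def reduce_cluster(points, target_dim):
    """Project one elliptical cluster onto the principal axes of its covariance."""
    mean = points.mean(axis=0)
    centred = points - mean
    cov = np.cov(centred, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    axes = eigvecs[:, ::-1][:, :target_dim]   # top principal axes of the ellipse
    return centred @ axes, mean, axes         # points in the cluster's own axis system

low_dim, mean, axes = reduce_cluster(X, target_dim=4)
print(low_dim.shape)   # (1000, 4)
```

In the full technique each cluster would get its own mean and axes, and the reduced points would then be mapped into a single index structure.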
Abstract:
The published requirements for accurate measurement of heat transfer at the interface between two bodies have been reviewed. A strategy for reliable measurement has been established, based on the depth of the temperature sensors in the medium, on the inverse method parameters and on the time response of the sensors. Sources of both deterministic and stochastic errors have been investigated and a method to evaluate them has been proposed, with the help of a normalisation technique. The key normalisation variables are the duration of the heat input and the maximum heat flux density. An example application of this technique in the field of high-pressure die casting is demonstrated. The normalisation study, coupled with previous determination of the heat input duration, makes it possible to determine the optimum location for the sensors, along with an acceptable sampling rate and the thermocouples' critical response time (as well as any filter characteristics). Results from the gauge are used to assess the suitability of the initial design choices. In particular, the unavoidable response time of the thermocouples is estimated by comparison with the normalised simulation. (c) 2006 Elsevier Ltd. All rights reserved.
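For orientation, the two normalisation variables named in the abstract suggest dimensionless forms along these lines (a plausible reading of the abstract, not the paper's exact notation):

```latex
t^{*} = \frac{t}{t_{q}}, \qquad q^{*} = \frac{q(t)}{q_{\max}},
```

where t_q is the duration of the heat input and q_max is the maximum heat flux density. Scaling time and flux this way lets different casting cycles be compared on a common footing when placing sensors and choosing sampling rates.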
Abstract:
Data on the occurrence of species are widely used to inform the design of reserve networks. These data contain commission errors (when a species is mistakenly thought to be present) and omission errors (when a species is mistakenly thought to be absent), and the rates of the two types of error are inversely related. Point locality data can minimize commission errors, but those obtained from museum collections are generally sparse, suffer from substantial spatial bias and contain large omission errors. Geographic ranges generate large commission errors because they assume homogeneous species distributions. Predicted distribution data make explicit inferences on species occurrence, and their commission and omission errors depend on model structure, on the omission of variables that determine species distribution and on data resolution. Omission errors lead to identifying networks of areas for conservation action that are smaller than required and centred on known species occurrences, thus affecting the comprehensiveness, representativeness and efficiency of selected areas. Commission errors lead to selecting areas not relevant to conservation, thus affecting the representativeness and adequacy of reserve networks. Conservation plans should include an estimation of commission and omission errors in underlying species data and explicitly use this information to influence conservation planning outcomes.
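To make the two error types concrete, here is a small hypothetical calculation of commission and omission rates for one species across eight survey sites, following the definitions in the abstract (the presence/absence values are invented for illustration):

```python
true_presence = [1, 0, 1, 1, 0, 0, 1, 0]   # ground-truth occurrence at 8 sites
predicted     = [1, 1, 0, 1, 0, 1, 1, 0]   # occurrence according to the data source

# Commission error: species mistakenly thought present (predicted 1, truth 0).
commission = sum(p == 1 and t == 0 for p, t in zip(predicted, true_presence))
# Omission error: species mistakenly thought absent (predicted 0, truth 1).
omission = sum(p == 0 and t == 1 for p, t in zip(predicted, true_presence))

commission_rate = commission / true_presence.count(0)   # 2/4 = 0.5
omission_rate = omission / true_presence.count(1)       # 1/4 = 0.25
print(commission_rate, omission_rate)
```

Tightening the criterion for recording a presence pushes the commission rate down and the omission rate up, which is the inverse relationship the abstract describes.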
Abstract:
This paper derives load profiles of electricity customers using the knowledge discovery in databases (KDD) procedure, a data mining approach, to determine load profiles for different types of customers. Current load profiling methods are compared by analysing and evaluating the associated classification techniques. The objective of the study is to determine the best load profiling methods and data mining techniques for classifying, detecting and predicting non-technical losses in the distribution sector due to faulty metering and billing errors, as well as for gathering knowledge on customer behaviour and preferences so as to gain a competitive advantage in the deregulated market. This paper focuses mainly on the comparative analysis of the selected classification techniques; a forthcoming paper will address the detection and prediction methods.
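As a sketch of the kind of load-profiling step being compared, the snippet below clusters synthetic daily load curves with k-means so that each customer class gets a representative profile. K-means is an illustrative choice only; the paper's own set of classification techniques is not reproduced here, and the data are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic half-hourly load curves (48 readings per day) for 200 customers.
base = np.sin(np.linspace(0, 2 * np.pi, 48))
curves = base + rng.normal(scale=0.3, size=(200, 48)) + rng.integers(0, 3, size=(200, 1))

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(curves)
profiles = kmeans.cluster_centers_   # one representative load profile per class
labels = kmeans.labels_              # customer-to-class assignment
print(profiles.shape)                # (3, 48)
```

Customers whose metered consumption departs persistently from their class profile are then candidates for non-technical-loss investigation.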
Abstract:
Young people from refugee backgrounds face enormous challenges in the settlement process within Australia. They must locate themselves within a new social, cultural, geographic and adult space, yet also try to find security within the spaces of their own families and ethnic communities. Traumas of the past can mix with painful experiences of the present. The stressors in the lives of these young people can be both complex and diverse. This paper explores the nature of these stressors among young people from refugee backgrounds living in Australia. It is based on in-depth interviews with 76 young people from refugee backgrounds now living in Brisbane, Adelaide and Perth. A qualitative analysis of the impact of these stressors, as well as the coping strategies employed, is discussed. It is argued that trauma exists
Abstract:
A quantum circuit implementing 5-qubit quantum-error correction on a linear-nearest-neighbor architecture is described. The canonical decomposition is used to construct fast and simple gates that incorporate the necessary swap operations, allowing the circuit to achieve the same depth as the current least-depth circuit. Simulations of the circuit's performance when subjected to discrete and continuous errors are presented. The relationship between the error rate of a physical qubit and that of a logical qubit is investigated, with emphasis on determining the concatenated error-correction threshold.
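The physical-to-logical error-rate relationship mentioned here is usually summarised by the standard concatenation-threshold heuristic (textbook background, not a result claimed by the paper). For a code that corrects any single error, a logical fault requires at least two physical faults per block, so

```latex
p_L \approx c\,p^{2}, \qquad p_{\mathrm{th}} \equiv \frac{1}{c}, \qquad
p_L^{(k)} \approx p_{\mathrm{th}}\left(\frac{p}{p_{\mathrm{th}}}\right)^{2^{k}},
```

meaning that k levels of concatenation suppress the logical error rate doubly exponentially whenever the physical rate p lies below the threshold p_th. Restricting gates to linear-nearest-neighbor interactions tends to increase c, and hence lower the achievable threshold, which is why the extra swap depth matters.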
Abstract:
This research work analyses techniques for implementing a cell-centred finite-volume time-domain (ccFV-TD) computational methodology for the purpose of studying microwave heating. Various state-of-the-art spatial and temporal discretisation methods employed to solve Maxwell's equations on multidimensional structured grid networks are investigated, and the dispersive and dissipative errors inherent in those techniques examined. Both staggered and unstaggered grid approaches are considered. Upwind schemes using a Riemann solver and intensity vector splitting are studied and evaluated. Staggered and unstaggered Leapfrog and Runge-Kutta time integration methods are analysed in terms of phase and amplitude error to identify which method is the most accurate and efficient for simulating microwave heating processes. The implementation and migration of typical electromagnetic boundary conditions from staggered-in-space to cell-centred approaches is also discussed. In particular, an existing perfectly matched layer absorbing boundary methodology is adapted to formulate a new cell-centred boundary implementation for the ccFV-TD solvers. Finally, for microwave heating purposes, a comparison of analytical and numerical results for standard case studies in rectangular waveguides allows the accuracy of the developed methods to be assessed. © 2004 Elsevier Inc. All rights reserved.
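For context on the phase errors analysed, the classic benchmark is the one-dimensional leapfrog (Yee-type) numerical dispersion relation (standard background, not a result of this paper):

```latex
\frac{1}{(c\,\Delta t)^{2}}\sin^{2}\!\left(\frac{\omega\,\Delta t}{2}\right)
= \frac{1}{(\Delta x)^{2}}\sin^{2}\!\left(\frac{k\,\Delta x}{2}\right),
```

whose solutions give a numerical phase velocity that deviates from c except in special cases (e.g. c\,\Delta t = \Delta x in one dimension), producing phase (dispersive) error; upwind schemes additionally damp the wave amplitude, which is the dissipative error referred to above.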
Abstract:
Two stochastic production frontier models are formulated within the generalized production function framework popularized by Zellner and Revankar (Rev. Econ. Stud. 36 (1969) 241) and Zellner and Ryu (J. Appl. Econometrics 13 (1998) 101). This framework is convenient for parsimonious modeling of a production function with returns to scale specified as a function of output. Two alternatives for introducing the stochastic inefficiency term and the stochastic error are considered. In the first, the errors are added to an equation of the form h(log y, theta) = log f(x, beta), where y denotes output, x is a vector of inputs and (theta, beta) are parameters. In the second, the equation h(log y, theta) = log f(x, beta) is solved for log y to yield a solution of the form log y = g[theta, log f(x, beta)] and the errors are added to this equation. The latter alternative is novel, but it is needed to preserve the usual definition of firm efficiency. The two alternative stochastic assumptions are considered in conjunction with two returns to scale functions, making a total of four models that are considered. A Bayesian framework for estimating all four models is described. The techniques are applied to USDA state-level data on agricultural output and four inputs. Posterior distributions for all parameters, for firm efficiencies and for the efficiency rankings of firms are obtained. The sensitivity of the results to the returns to scale specification and to the stochastic specification is examined. (c) 2004 Elsevier B.V. All rights reserved.
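In symbols, the two stochastic specifications described above can be written as follows, with v the symmetric noise term and u >= 0 the inefficiency term (the composite error v - u is the standard stochastic-frontier form; the final line shows the Zellner-Revankar choice of h as one example, neither being notation quoted from the paper):

```latex
\text{(1)}\quad h(\log y,\,\theta) = \log f(x,\,\beta) + v - u,
\qquad v \sim N(0,\,\sigma_v^{2}),\ \ u \ge 0,
\\[4pt]
\text{(2)}\quad \log y = g\!\left[\theta,\,\log f(x,\,\beta)\right] + v - u,
\\[4pt]
h(\log y,\,\theta) = \log y + \theta\,y \quad \text{(Zellner--Revankar form)}.
```

In specification (2) the composite error enters log y directly, so exp(-u) retains its usual interpretation as firm efficiency, which is the point the abstract makes about preserving the standard definition.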
Abstract:
The consensus from published studies is that plasma lipids are each influenced by genetic factors, and that this contributes to genetic variation in risk of cardiovascular disease. Heritability estimates for lipids and lipoproteins are in the range 0.48 to 0.87, when measured once per study participant. However, this ignores the confounding effects of biological variation, measurement error and ageing, and a truer assessment of genetic effects on cardiovascular risk may be obtained from analysis of longitudinal twin or family data. We have analyzed information on plasma high-density lipoprotein (HDL) and low-density lipoprotein (LDL) cholesterol, and triglycerides, from 415 adult twins who provided blood on two to five occasions over 10 to 17 years. Multivariate modeling of genetic and environmental contributions to variation within and across occasions was used to assess the extent to which genetic and environmental factors have long-term effects on plasma lipids. Results indicated that more than one genetic factor influenced HDL and LDL components of cholesterol, and triglycerides over time in all studies. Nonshared environmental factors did not have significant long-term effects except for HDL. We conclude that when heritability of lipid risk factors is estimated on only one occasion, the existence of biological variation and measurement errors leads to underestimation of the importance of genetic factors as a cause of variation in long-term risk within the population. In addition our data suggest that different genes may affect the risk profile at different ages.
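The underestimation argument follows directly from the standard twin-model variance decomposition (textbook quantities, not the paper's notation): occasion-specific noise inflates the denominator of a single-occasion heritability estimate.

```latex
V_P = V_A + V_C + V_E, \qquad h^{2} = \frac{V_A}{V_P}, \qquad
\hat{h}^{2}_{\text{single}} = \frac{V_A}{V_P + V_{\text{error}}} \le h^{2},
```

where V_A, V_C and V_E are the additive genetic, shared environmental and nonshared environmental variance components, and V_error is the occasion-specific variance (measurement error plus short-term biological fluctuation) that repeated measures allow the model to separate out.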
Abstract:
A location-based search engine must be able to find and assign proper locations to Web resources. Host, content and metadata location information are not sufficient to describe the location of resources, as they are ambiguous or unavailable for many documents. We introduce target location as the location of users of Web resources. Target location is content-independent and can be applied to all types of Web resources. A novel method is introduced which uses log files and IP addresses to track the visitors of websites. The experiments show that target location can be calculated for almost all documents on the Web at country level and for the majority of them at state and city levels. It can be assigned to Web resources as a new definition and dimension of location. It can be used separately or with other relevant locations to define the geography of Web resources. This compensates for insufficient geographical information on Web resources and would facilitate the design and development of location-based search engines.
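A minimal sketch of the idea: compute a resource's target location as the modal country of its visitors, derived from access logs. Here geolocate_ip() is a hypothetical stand-in for a real IP-to-location database, and the whole function is a simplification for illustration, not the paper's algorithm.

```python
from collections import Counter

def geolocate_ip(ip):
    """Hypothetical stand-in for an IP-to-country lookup database."""
    demo_db = {"203.0.113.": "AU", "198.51.100.": "US", "192.0.2.": "DE"}
    for prefix, country in demo_db.items():
        if ip.startswith(prefix):
            return country
    return None   # unresolvable address

def target_location(visitor_ips):
    """Modal visitor country, taken as the resource's target location."""
    counts = Counter(c for c in map(geolocate_ip, visitor_ips) if c is not None)
    return counts.most_common(1)[0][0] if counts else None

print(target_location(["203.0.113.7", "203.0.113.9", "198.51.100.4"]))   # AU
```

Finer-grained state- or city-level assignment follows the same aggregation pattern, limited only by the resolution of the geolocation database.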
Abstract:
Although managers consider accurate, timely, and relevant information as critical to the quality of their decisions, evidence of large variations in data quality abounds. Over a period of twelve months, the action research project reported herein attempted to investigate and track data quality initiatives undertaken by the participating organisation. The investigation focused on two types of errors: transaction input errors and processing errors. Whenever the action research initiative identified non-trivial errors, the participating organisation introduced actions to correct the errors and prevent similar errors in the future. Data quality metrics were taken quarterly to measure improvements resulting from the activities undertaken during the action research project. The action research project results indicated that for a mission-critical database to ensure and maintain data quality, commitment to continuous data quality improvement is necessary. Also, communication among all stakeholders is required to ensure common understanding of data quality improvement goals. The action research project found that to further substantially improve data quality, structural changes within the organisation and to the information systems are sometimes necessary. The major goal of the action research study is to increase the level of data quality awareness within all organisations and to motivate them to examine the importance of achieving and maintaining high-quality data.