33 results for quantization artifacts
in CentAUR: Central Archive, University of Reading - UK
Abstract:
We investigate the performance of phylogenetic mixture models in reducing a well-known and pervasive artifact of phylogenetic inference known as the node-density effect, comparing them to partitioned analyses of the same data. The node-density effect refers to the tendency for the amount of evolutionary change in longer branches of phylogenies to be underestimated compared to that in regions of the tree where there are more nodes and thus branches are typically shorter. Mixture models allow more than one model of sequence evolution to describe the sites in an alignment without prior knowledge of the evolutionary processes that characterize the data or how they correspond to different sites. If multiple evolutionary patterns are common in sequence evolution, mixture models may be capable of reducing node-density effects by characterizing the evolutionary processes more accurately. In gene-sequence alignments simulated to have heterogeneous patterns of evolution, we find that mixture models can reduce node-density effects to negligible levels or remove them altogether, performing as well as partitioned analyses based on the known simulated patterns. The mixture models achieve this without knowledge of the patterns that generated the data and even in some cases without specifying the full or true model of sequence evolution known to underlie the data. The latter result is especially important in real applications, as the true model of evolution is seldom known. We find the same patterns of results for two real data sets with evidence of complex patterns of sequence evolution: mixture models substantially reduced node-density effects and returned better likelihoods compared to partitioning models specifically fitted to these data. 
We suggest that the presence of more than one pattern of evolution in the data is a common source of error in phylogenetic inference and that mixture models can often detect these patterns even without prior knowledge of their presence in the data. Routine use of mixture models alongside other approaches to phylogenetic inference may often reveal hidden or unexpected patterns of sequence evolution and can improve phylogenetic inference.
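The central device described above can be written down compactly: under a mixture model, each alignment site's likelihood is a weighted sum of its likelihoods under the component models, with no prior assignment of sites to components. A minimal sketch of that generic mixture likelihood (not the authors' specific phylogenetic implementation; the per-site, per-component likelihoods are assumed to have been computed elsewhere):

```python
import math

def mixture_log_likelihood(site_likes, weights):
    # Mixture model: each site's likelihood is a weighted sum of its
    # likelihood under each component model; the total log-likelihood
    # sums the log of that mixture over all sites.
    # site_likes[i][k] = P(site i | component model k)
    # weights[k]       = mixture weight of component k (sums to 1)
    return sum(
        math.log(sum(w * lk for w, lk in zip(weights, row)))
        for row in site_likes
    )
```

With a single component of weight 1, this reduces to the ordinary independent-sites log-likelihood.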
Abstract:
How does the manipulation of visual representations play a role in the practices of generating, evolving and exchanging knowledge? The role of visual representation in mediating knowledge work is explored in a study of design work of an architectural practice, Edward Cullinan Architects. The intensity of interactions with visual representations in the everyday activities on design projects is immediately striking. Through a discussion of observed design episodes, two ways are articulated in which visual representations act as 'artefacts of knowing'. As communication media they are symbolic representations, rich in meaning, through which ideas are articulated, developed and exchanged. Furthermore, as tangible artefacts they constitute material entities with which to interact and thereby develop knowledge. The communicative and interactive properties of visual representations constitute them as central elements of knowledge work. The paper explores emblematic knowledge practices supported by visual representation and concludes by pinpointing avenues for further research.
Abstract:
A Fractal Quantizer is proposed that replaces the expensive division operation for the computation of scalar quantization by more modest and available multiplication, addition and shift operations. Although the proposed method is iterative in nature, simulations prove a virtually undetectable distortion to the naked eye for JPEG-compressed images using a single iteration. The method requires a change to the usual tables used in JPEG algorithms, but of similar size. For practical purposes, performing quantization is reduced to a multiplication plus addition operation easily programmed in low-end embedded processors and suitable for efficient and very high-speed implementation in ASIC or FPGA hardware. An FPGA hardware implementation shows up to x15 area-time savings compared to standard solutions for devices with dedicated multipliers. The method can also be immediately extended to perform adaptive quantization.
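The abstract does not reproduce the paper's exact table-based scheme, but the general idea of replacing division by an iterated multiply-and-add can be sketched with Newton-Raphson reciprocal refinement, which converges to 1/Q using only multiplications and subtractions. The seed value `y0` is an assumption here; in a real implementation it would come from a small lookup table or a shift:

```python
def reciprocal_newton(q, y0, iters=1):
    # Newton-Raphson refinement of a reciprocal estimate y ~ 1/q
    # using only multiplication and subtraction: y <- y * (2 - q*y).
    # Each iteration roughly doubles the number of correct digits.
    y = y0
    for _ in range(iters):
        y = y * (2.0 - q * y)
    return y

def quantize(x, q, y0, iters=1):
    # Scalar quantization round(x/q) computed division-free,
    # as a multiplication by the refined reciprocal.
    return round(x * reciprocal_newton(q, y0, iters))
```

As the abstract notes for JPEG, a single iteration from a reasonable seed is already close enough that the quantized outputs match the exact division in typical cases.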
Abstract:
The current study aims to assess the applicability of direct or indirect normalization for the analysis of fractional anisotropy (FA) maps in the context of diffusion-weighted images (DWIs) contaminated by ghosting artifacts. We found that FA maps acquired by direct normalization showed generally higher anisotropy than indirect normalization, and the disparities were aggravated by the presence of ghosting artifacts in DWIs. The voxel-wise statistical comparisons demonstrated that indirect normalization reduced the influence of artifacts and enhanced the sensitivity of detecting anisotropy differences between groups. This suggested that images contaminated with ghosting artifacts can be sensibly analyzed using indirect normalization.
Abstract:
Using a focused ion beam (FIB) instrument, electron-transparent samples (termed foils) have been cut from the naturally weathered surfaces of perthitic alkali feldspars recovered from soils overlying the Shap granite, northwest England. Characterization of these foils by transmission electron microscopy (TEM) has enabled determination of the crystallinity and chemical composition of near-surface regions of the feldspar and an assessment of the influence of intragranular microtextures on the microtopography of grain surfaces and development of etch pits. Damage accompanying implantation of the 30 kV Ga+ ions used for imaging and deposition of protective platinum prior to ion milling creates amorphous layers beneath outer grain surfaces, but can be overcome by coating grains with > 85 nm of gold before FIB work. The sidewalls of the foil and feldspar surrounding original voids are also partially amorphized during later stages of ion milling. No evidence was found for the presence of amorphous or crystalline weathering products or amorphous "leached layers" immediately beneath outer grain surfaces. The absence of a leached layer indicates that chemical weathering of feldspar in the Shap soils is stoichiometric, or if non-stoichiometric, either the layer is too thin to resolve by the TEM techniques used (i.e., ≤ ~2.5 nm) or an insufficient proportion of ions have been leached from near-surface regions so that feldspar crystallinity is maintained. No evidence was found for any difference in the mechanisms of weathering where a microbial filament rests on the feldspar surface. Sub-micrometer-sized steps on the grain surface have formed where subgrains and exsolution lamellae have influenced the propagation of fractures during physical weathering, whereas finer scale corrugations form due to compositional or strain-related differences in dissolution rates of albite platelets and enclosing tweed orthoclase.
With progressive weathering, etch pits that initiated at the grain surface extend into grain interiors as etch tubes by exploiting preexisting networks of nanopores that formed during the igneous history of the grain. The combination of FIB and TEM techniques is an especially powerful way of exploring mechanisms of weathering within the "internal zone" beneath outer grain surfaces, but results must be interpreted with caution owing to the ease with which artifacts can be created by the high-energy ion and electron beams used in the preparation and characterization of the foils.
Abstract:
Although accuracy of digital elevation models (DEMs) can be quantified and measured in different ways, each is influenced by three main factors: terrain character, sampling strategy and interpolation method. These parameters, and their interaction, are discussed. The generation of DEMs from digitised contours is emphasised because this is the major source of DEMs, particularly within member countries of OEEPE. Such DEMs often exhibit unwelcome artifacts, depending on the interpolation method employed. The origin and magnitude of these effects and how they can be reduced to improve the accuracy of the DEMs are also discussed.
Abstract:
The principles of operation of an experimental prototype instrument known as J-SCAN are described, along with the derivation of formulae for the rapid calculation of normalized impedances; the structure of the instrument; relevant probe design parameters; digital quantization errors; and approaches for the optimization of single-frequency operation. An eddy current probe is used as the inductance element of a passive tuned circuit which is repeatedly excited with short impulses. Each impulse excites an oscillation which is subject to decay dependent upon the values of the tuned-circuit components: resistance, inductance and capacitance. Changing conditions under the probe that affect the resistance and inductance of this circuit will thus be detected through changes in the transient response. These changes in transient response, oscillation frequency and rate of decay, are digitized, and normalized values for probe resistance and inductance changes are then calculated immediately in a microprocessor. This approach, coupled with a minimum of analogue processing and a maximum of digital processing, has advantages compared with conventional approaches to eddy current instruments: in particular, the absence of an out-of-balance condition, and the flexibility and stability of digital data processing.
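The instrument's actual normalization formulae are not given in the abstract, but assuming an ideal series RLC circuit, the measured decay rate and damped oscillation frequency determine R and L once C is known, via alpha = R/(2L) and omega_d^2 = 1/(LC) - alpha^2. A minimal sketch of that inversion:

```python
import math

def rl_from_transient(omega_d, alpha, C):
    # Series RLC transient: decay rate alpha = R/(2L) and damped
    # angular frequency omega_d = sqrt(1/(L*C) - alpha^2).
    # With the capacitance C known, invert for L and then R.
    omega0_sq = omega_d**2 + alpha**2   # equals 1/(L*C)
    L = 1.0 / (C * omega0_sq)
    R = 2.0 * L * alpha
    return R, L
```

This is the sense in which changes in oscillation frequency and rate of decay translate directly into probe resistance and inductance changes.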
Abstract:
In the past decade, the amount of data in the biological field has become larger and larger, bio-techniques for the analysis of biological data have been developed, and new tools have been introduced. Several computational methods are based on unsupervised neural network algorithms that are widely used for multiple purposes, including clustering and visualization, e.g. the Self-Organizing Map (SOM). Unfortunately, even though this method is unsupervised, its performance in terms of quality of results and learning speed is strongly dependent on the initialization of the neuron weights. In this paper we present a new initialization technique based on a totally connected undirected graph that captures relations among some interesting features of the input data. Results of experimental tests, in which the proposed algorithm is compared to the original initialization techniques, show that our technique ensures faster learning and better performance in terms of quantization error.
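For reference, the quantization error used to compare SOM initializations is conventionally the mean distance from each input sample to its best-matching unit (BMU). A minimal sketch of that standard metric (not the paper's graph-based initialization itself):

```python
import numpy as np

def quantization_error(data, weights):
    # Mean Euclidean distance from each sample to its best-matching
    # unit (closest codebook vector) in the SOM.
    # data:    (n_samples, d) input vectors
    # weights: (n_units, d) neuron weight vectors
    dists = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    return dists.min(axis=1).mean()
```

A better initialization lowers this value for a fixed training budget, which is the sense in which the experiments measure "better performance in terms of quantization error".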
Abstract:
Techniques for obtaining quantitative values of the temperatures and concentrations of remote hot gaseous effluents from their measured passive emission spectra have been examined in laboratory experiments. The high sensitivity of the spectrometer in the vicinity of the 2397 cm⁻¹ band head region of CO2 has allowed the gas temperature to be calculated from the relative intensity of the observed rotational lines. The spatial distribution of the CO2 in a methane flame has been reconstructed tomographically using a matrix inversion technique. The spectrometer has been calibrated against a black body source at different temperatures and a self absorption correction has been applied to the data avoiding the need to measure the transmission directly. Reconstruction artifacts have been reduced by applying a smoothing routine to the inversion matrix.
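The abstract does not give the exact inversion matrix or smoothing routine, but a generic sketch of a smoothed matrix inversion is Tikhonov regularization with a first-difference penalty standing in for the smoothing step (the penalty operator and the weight `lam` are assumptions, not the paper's method):

```python
import numpy as np

def smoothed_inversion(A, b, lam=0.1):
    # Regularized reconstruction: minimize ||A x - b||^2 + lam ||L x||^2,
    # where L penalizes differences between neighbouring cells of the
    # reconstructed profile, damping oscillatory reconstruction artifacts.
    n = A.shape[1]
    L = np.diff(np.eye(n), axis=0)            # first-difference operator
    lhs = A.T @ A + lam * (L.T @ L)
    return np.linalg.solve(lhs, A.T @ b)
```

With `lam=0` this reduces to the ordinary least-squares inversion; increasing `lam` trades fidelity for smoothness in the reconstructed CO2 distribution.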
Abstract:
OBJECTIVES: This contribution provides a unifying concept for meta-analysis that integrates the handling of unobserved heterogeneity, study covariates, publication bias and study quality. It is important to consider these issues simultaneously to avoid the occurrence of artifacts, and a method for doing so is suggested here. METHODS: The approach is based upon the meta-likelihood in combination with a general linear nonparametric mixed model, which lays the ground for all inferential conclusions suggested here. RESULTS: The concept is illustrated with a meta-analysis investigating the relationship between hormone replacement therapy and breast cancer. The phenomenon of interest has been investigated in many studies over a considerable time, and different results have been reported. In 1992 a meta-analysis by Sillero-Arenas et al. concluded a small but significant overall effect of 1.06 on the relative risk scale. Using the meta-likelihood approach it is demonstrated here that this meta-analysis is affected by considerable unobserved heterogeneity. Furthermore, it is shown that new methods are available to model this heterogeneity successfully. It is further argued that available study covariates should be included to explain this heterogeneity in the meta-analysis at hand. CONCLUSIONS: The topic of HRT and breast cancer has again very recently become an issue of public debate, when results of a large trial investigating the health effects of hormone replacement therapy were published indicating an increased risk for breast cancer (risk ratio of 1.26). Using an adequate regression model in the previously published meta-analysis, an adjusted effect estimate of 1.14 can be given, which is considerably higher than the one published in the meta-analysis of Sillero-Arenas et al. In summary, it is hoped that the method suggested here contributes further to good meta-analytic practice in public health and clinical disciplines.
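As a point of comparison only (the paper uses a nonparametric mixed-model meta-likelihood, not this estimator), the classic DerSimonian-Laird calculation illustrates how an unobserved-heterogeneity variance tau^2 enters a pooled estimate:

```python
def dersimonian_laird(effects, variances):
    # Classic random-effects pooling: estimate between-study
    # heterogeneity tau^2 from Cochran's Q, then re-weight each
    # study by 1 / (within-study variance + tau^2).
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    Q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - df) / c)
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2
```

When the studies are homogeneous, tau^2 collapses to zero and the pooled estimate equals the fixed-effect one; substantial heterogeneity, as found in the HRT data, shifts weight toward smaller studies and widens the uncertainty.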
Abstract:
This paper describes the design and manufacture of the filters and antireflection coatings used in the HIRDLS instrument. The multilayer design of the filters and coatings, the choice of layer materials, and the deposition techniques adopted to ensure adequate layer thickness control are discussed. The spectral assessment of the filters and coatings is carried out using a FTIR spectrometer; some measurement results are presented together with discussion of measurement accuracy and the identification and avoidance of measurement artifacts. The post-deposition processing of the filters by sawing to size, writing of an identification code onto the coatings, and the environmental testing of the finished filters are also described.
Abstract:
In this paper we discuss current work concerning appearance-based and CAD-based vision: two opposing vision strategies. CAD-based vision is geometry based, reliant on having complete object-centred models. Appearance-based vision builds view-dependent models from training images. Existing CAD-based vision systems that work with intensity images have all used one- and zero-dimensional features, for example lines, arcs, points and corners. We describe a system we have developed for combining these two strategies. Geometric models are extracted from a commercial CAD library of industry-standard parts. Surface appearance characteristics are then learnt automatically by observing actual object instances. This information is combined with geometric information and is used in hypothesis evaluation. This augmented description improves the system's robustness to texture, specularities and other artifacts which are hard to model with geometry alone, whilst maintaining the advantages of a geometric description.