69 results for Resolution algorithm
in University of Queensland eSpace - Australia
Abstract:
The cost and risk associated with mineral exploration in Australia increase significantly as companies move into deeper regolith-covered terrain. The ability to map the bedrock and the depth of weathering within an area has the potential to decrease this risk and increase the effectiveness of exploration programs. This paper is the second in a trilogy concerning the Grant's Patch area of the Eastern Goldfields. The recent development of the VPmg potential field inversion program, in conjunction with the acquisition of high-resolution gravity data over an area with extensive drilling, provided an opportunity to evaluate three-dimensional gravity inversion as a bedrock and regolith mapping tool. An apparent density model of the study area was constructed, with the ground represented as adjoining 200 m by 200 m vertical rectangular prisms. During inversion, VPmg incrementally adjusted the density of each prism until the free-air gravity response of the model replicated the observed data. For the Grant's Patch study area, this image of the apparent density values proved easier to interpret than the Bouguer gravity image. A regolith layer was introduced into the model and realistic fresh-rock densities assigned to each basement prism according to its interpreted lithology. With the basement and regolith densities fixed, the VPmg inversion algorithm adjusted the depth to fresh basement until the misfit between the calculated and observed gravity response was minimised. The resulting geometry of the bedrock/regolith contact largely replicated the base of weathering indicated by drilling, with predicted depth-of-weathering values from gravity inversion typically within 15% of those logged during RAB and RC drilling.
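A minimal sketch of the apparent-density inversion idea described above, not the VPmg program itself: each vertical prism's density is adjusted iteratively until the model's gravity response matches the observed value. The infinite-slab forward model, the prism thickness, and the starting density are illustrative assumptions.

```python
# Minimal sketch (not VPmg): adjust the density of each vertical prism until its
# gravity response matches the observed value. The forward model is the
# infinite-slab (Bouguer) approximation g = 2*pi*G*rho*t, which ignores the
# lateral interaction between neighbouring prisms that a real 3D inversion handles.
import numpy as np

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
PRISM_THICKNESS = 50.0   # assumed prism thickness in metres (illustrative)

def forward_gravity(density):
    """Gravity response (m/s^2) of each prism under the slab approximation."""
    return 2.0 * np.pi * G * density * PRISM_THICKNESS

def invert_density(g_obs, n_iter=100, step=0.5):
    """Incrementally adjust prism densities until the modelled response fits g_obs."""
    density = np.full_like(g_obs, 2670.0)        # start from a typical crustal density
    for _ in range(n_iter):
        misfit = g_obs - forward_gravity(density)
        density += step * misfit / (2.0 * np.pi * G * PRISM_THICKNESS)
    return density

# Example: recover the densities that explain a synthetic observed anomaly.
true_density = np.array([2200.0, 2670.0, 2900.0])
print(invert_density(forward_gravity(true_density)))   # ~ [2200, 2670, 2900]
```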
Abstract:
Nine individuals with complex language deficits following left-hemisphere cortical lesions and a matched control group (n = 9) performed speeded lexical decisions on the third word of auditory word triplets containing a lexical ambiguity. The critical conditions were concordant (e.g., coin–bank–money), discordant (e.g., river–bank–money), neutral (e.g., day–bank–money), and unrelated (e.g., river–day–money). Triplets were presented with an interstimulus interval (ISI) of 100 and 1250 ms. Overall, the left-hemisphere-damaged subjects appeared able to exhaustively access meanings for lexical ambiguities rapidly, but, unlike control subjects, were unable to reduce the level of activation for contextually inappropriate meanings at both short and long ISIs. These findings are consistent with a disruption of the proposed role of the left hemisphere in selecting and suppressing meanings via contextual integration and a sparing of the right-hemisphere mechanisms responsible for maintaining alternative meanings.
Abstract:
Spectral peak resolution was investigated in normal hearing (NH), hearing impaired (HI), and cochlear implant (CI) listeners. The task involved discriminating between two rippled noise stimuli in which the frequency positions of the log-spaced peaks and valleys were interchanged. The ripple spacing was varied adaptively from 0.13 to 11.31 ripples/octave, and the minimum ripple spacing at which a reversal in peak and trough positions could be detected was determined as the spectral peak resolution threshold for each listener. Spectral peak resolution was best, on average, in NH listeners, poorest in CI listeners, and intermediate for HI listeners. There was a significant relationship between spectral peak resolution and both vowel and consonant recognition in quiet across the three listener groups. The results indicate that the degree of spectral peak resolution required for accurate vowel and consonant recognition in quiet backgrounds is around 4 ripples/octave, and that spectral peak resolution poorer than around 1–2 ripples/octave may result in highly degraded speech recognition. These results suggest that efforts to improve spectral peak resolution for HI and CI users may lead to improved speech recognition.
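A minimal sketch of how the two stimuli in this task can be constructed: a spectral envelope with peaks log-spaced in frequency, and its "inverted" counterpart in which peaks and valleys are interchanged. The ripple depth, frequency band, and sampling below are illustrative assumptions, not the study's exact stimulus parameters.

```python
# Minimal sketch (assumed parameters): magnitude envelopes of a rippled noise whose
# peaks are log-spaced in frequency, plus the version with peaks and valleys
# interchanged. Discriminating the two at ever finer ripple spacing is the
# spectral peak resolution task described above.
import numpy as np

def ripple_envelope(freqs, ripples_per_octave, depth_db=30.0, inverted=False):
    """Spectral envelope (linear gain) with sinusoidal ripples on a log-frequency axis."""
    phase = np.pi if inverted else 0.0
    ripple = np.sin(2.0 * np.pi * ripples_per_octave * np.log2(freqs) + phase)
    return 10.0 ** ((depth_db / 2.0) * ripple / 20.0)

freqs = np.linspace(100.0, 5000.0, 1024)                     # analysis band in Hz
standard = ripple_envelope(freqs, ripples_per_octave=2.0)
inverted = ripple_envelope(freqs, ripples_per_octave=2.0, inverted=True)
# Multiplying a flat noise spectrum by these envelopes gives the two stimuli to be
# discriminated; the threshold is the finest spacing at which they remain distinct.
```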
Abstract:
The differences in spectral shape resolution abilities among cochlear implant ~CI! listeners, and between CI and normal-hearing ~NH! listeners, when listening with the same number of channels ~12!, was investigated. In addition, the effect of the number of channels on spectral shape resolution was examined. The stimuli were rippled noise signals with various ripple frequency-spacings. An adaptive 4IFC procedure was used to determine the threshold for resolvable ripple spacing, which was the spacing at which an interchange in peak and valley positions could be discriminated. The results showed poorer spectral shape resolution in CI compared to NH listeners ~average thresholds of approximately 3000 and 400 Hz, respectively!, and wide variability among CI listeners ~range of approximately 800 to 8000 Hz!. There was a significant relationship between spectral shape resolution and vowel recognition. The spectral shape resolution thresholds of NH listeners increased as the number of channels increased from 1 to 16, while the CI listeners showed a performance plateau at 4–6 channels, which is consistent with previous results using speech recognition measures. These results indicate that this test may provide a measure of CI performance which is time efficient and non-linguistic, and therefore, if verified, may provide a useful contribution to the prediction of speech perception in adults and children who use CIs.
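The adaptive threshold-tracking step can be pictured with a simple staircase. The sketch below assumes a 2-down/1-up rule and a toy listener model rather than the paper's exact 4IFC settings; the starting spacing, step factor, and psychometric function are illustrative assumptions.

```python
# Minimal sketch: a 2-down/1-up adaptive track that narrows the ripple spacing after
# two correct responses and widens it after an error, converging on the resolvable
# ripple-spacing threshold. `subject_responds` is a stand-in for the listener.
import numpy as np

rng = np.random.default_rng(1)

def subject_responds(ripple_spacing_hz, true_threshold_hz=3000.0):
    """Toy listener: usually correct when the ripple spacing is coarser than its threshold."""
    p_correct = 0.95 if ripple_spacing_hz > true_threshold_hz else 0.30
    return rng.random() < p_correct

def staircase_threshold(start_hz=8000.0, step_factor=1.25, n_reversals=12):
    """Two correct answers shrink the spacing (harder); one error widens it (easier)."""
    spacing, correct_in_row, direction = start_hz, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if subject_responds(spacing):
            correct_in_row += 1
            if correct_in_row == 2:
                correct_in_row = 0
                if direction == +1:
                    reversals.append(spacing)   # track changed direction: record a reversal
                direction = -1
                spacing /= step_factor
        else:
            correct_in_row = 0
            if direction == -1:
                reversals.append(spacing)
            direction = +1
            spacing *= step_factor
    return np.mean(reversals[-8:])              # average the last reversal points

print(staircase_threshold())                    # lands near the toy listener's 3000 Hz
```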
Abstract:
We present a fast method for finding optimal parameters for a low-resolution (threading) force field intended to distinguish correct from incorrect folds for a given protein sequence. In contrast to other methods, the parameterization uses information from more than 10^7 misfolded structures as well as a set of native sequence-structure pairs. In addition to testing the resulting force field's performance on the protein sequence threading problem, results are shown that characterize the number of parameters necessary for effective structure recognition.
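The parameter search can be pictured as fitting weights that score every native structure better (lower in energy) than its misfolded decoys. The sketch below uses a simple perceptron-style margin update on synthetic feature vectors; the feature encoding, data sizes, and update rule are illustrative assumptions, not the paper's method.

```python
# Minimal sketch (perceptron-style updates, not the paper's method): learn linear
# force-field weights w so that each native structure scores lower than its decoys.
import numpy as np

rng = np.random.default_rng(2)
n_features = 10                                   # e.g. counts of residue-pair contact types
natives = rng.normal(0.0, 1.0, size=(20, n_features))                        # native features
decoys = natives[:, None, :] + rng.normal(1.0, 1.0, size=(20, 500, n_features))  # misfolds

w = np.zeros(n_features)                          # force-field parameters (lower score = better)
for _ in range(50):
    for i in range(len(natives)):
        native_score = w @ natives[i]
        decoy_scores = decoys[i] @ w
        worst = np.argmin(decoy_scores)           # decoy currently scoring best (lowest energy)
        if decoy_scores[worst] <= native_score + 1.0:   # native not better by the margin
            w += decoys[i, worst] - natives[i]    # widen the gap between decoy and native scores

violations = sum((decoys[i] @ w).min() <= w @ natives[i] for i in range(len(natives)))
print("natives not separated from all their decoys:", violations)
```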
Abstract:
Recently, Adams and Bischof (1994) proposed a novel region growing algorithm for segmenting intensity images. The inputs to the algorithm are the intensity image and a set of seeds - individual points or connected components - that identify the individual regions to be segmented. The algorithm grows these seed regions until all of the image pixels have been assimilated. Unfortunately, the algorithm is inherently dependent on the order of pixel processing. This means, for example, that raster order processing and anti-raster order processing do not, in general, lead to the same tessellation. In this paper we propose an improved seeded region growing algorithm that retains the advantages of the Adams and Bischof algorithm - fast execution, robust segmentation, and no tuning parameters - but is pixel order independent. (C) 1997 Elsevier Science B.V.
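A minimal sketch of seeded region growing in the spirit of Adams and Bischof (1994): boundary pixels are held in a priority queue ordered by their grey-level distance to a region's mean and absorbed one at a time. Ties are broken by insertion order here, which is exactly the pixel-order dependence the improved algorithm of this paper removes. The image and seeds are illustrative.

```python
# Minimal sketch of basic seeded region growing (priority-queue version).
import heapq
import numpy as np

def seeded_region_growing(image, seeds):
    """image: 2D float array; seeds: dict {label: [(row, col), ...]}; returns a label map."""
    labels = np.zeros(image.shape, dtype=int)    # 0 = unassigned
    sums, counts = {}, {}
    heap, counter = [], 0

    def push_neighbours(r, c, label):
        nonlocal counter
        mean = sums[label] / counts[label]
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < image.shape[0] and 0 <= cc < image.shape[1] and labels[rr, cc] == 0:
                # priority = distance to the region mean; counter breaks ties by insertion order
                heapq.heappush(heap, (abs(image[rr, cc] - mean), counter, rr, cc, label))
                counter += 1

    for label, points in seeds.items():
        sums[label], counts[label] = 0.0, 0
        for r, c in points:
            labels[r, c] = label
            sums[label] += image[r, c]
            counts[label] += 1
    for label, points in seeds.items():
        for r, c in points:
            push_neighbours(r, c, label)

    while heap:
        _, _, r, c, label = heapq.heappop(heap)
        if labels[r, c] != 0:
            continue                              # already assimilated by another region
        labels[r, c] = label
        sums[label] += image[r, c]
        counts[label] += 1
        push_neighbours(r, c, label)
    return labels

img = np.array([[1., 1., 9.], [1., 5., 9.], [1., 9., 9.]])
print(seeded_region_growing(img, {1: [(0, 0)], 2: [(2, 2)]}))
```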
Abstract:
Bulk density of undisturbed soil samples can be measured using computed tomography (CT) techniques with a spatial resolution of about 1 mm. However, this technique may not be readily accessible. On the other hand, x-ray radiographs have only been considered as qualitative images to describe morphological features. A calibration procedure was set up to generate two-dimensional, high-resolution bulk density images from x-ray radiographs made with a conventional x-ray diffraction apparatus. Test bricks were made to assess the accuracy of the method. Slices of impregnated soil samples were made using hardsetting seedbeds that had been gamma scanned at 5-mm depth increments in a previous study. The calibration procedure involved three stages: (i) calibration of the image grey levels in terms of glass thickness using a staircase made from glass cover slips, (ii) measurement of the ratio between the soil and resin mass attenuation coefficients and the glass mass attenuation coefficient, using compacted bricks of known thickness and bulk density, and (iii) image correction accounting for the heterogeneity of the irradiation field. The procedure was simple and rapid, and the equipment was easily accessible. The accuracy of the bulk density determination was good (mean relative error 0.015). The bulk density images showed good spatial resolution, so that many structural details could be observed. The depth functions were consistent with both the global shrinkage and the gamma probe data previously obtained. The suggested method could easily be applied to the new fuzzy-set approach to soil structure, which requires the generation of bulk density images. It would also be an invaluable tool for studies requiring high-resolution bulk density measurement, such as studies on soil surface crusts.
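A minimal sketch of how the calibration stages chain together: grey levels are converted to an equivalent glass thickness, and that thickness is converted to soil bulk density through the ratio of mass attenuation coefficients and the known slice thickness. The functional form of the grey-level calibration and all numeric constants are illustrative assumptions, not the paper's measured values.

```python
# Minimal sketch (illustrative constants) of the three-stage calibration idea:
# (i) grey level -> equivalent glass thickness from the cover-slip staircase,
# (ii) equal attenuation: mu_soil*rho_soil*t_slice = mu_glass*rho_glass*t_glass,
# (iii) divide out the known slice thickness to obtain bulk density per pixel.
import numpy as np

def grey_to_glass_thickness(grey, a, b):
    """Stage (i): staircase calibration fitted here as thickness = a*ln(grey) + b (assumed form)."""
    return a * np.log(grey) + b

def glass_thickness_to_bulk_density(t_glass, mu_ratio, rho_glass, slice_thickness):
    """Stages (ii)-(iii): mu_ratio = mu_soil/mu_glass, measured on bricks of known density."""
    return (rho_glass * t_glass) / (mu_ratio * slice_thickness)

grey = np.array([[120.0, 150.0], [180.0, 90.0]])         # radiograph grey levels (illustrative)
t_glass = grey_to_glass_thickness(grey, a=0.8, b=-2.0)   # mm of equivalent glass
density = glass_thickness_to_bulk_density(t_glass, mu_ratio=1.1,
                                           rho_glass=2.5, slice_thickness=5.0)
print(density)                                            # bulk density image (illustrative numbers)
```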
Abstract:
Motivation: Prediction methods for identifying binding peptides could minimize the number of peptides required to be synthesized and assayed, and thereby facilitate the identification of potential T-cell epitopes. We developed a bioinformatic method for the prediction of peptide binding to MHC class II molecules. Results: Experimental binding data and expert knowledge of anchor positions and binding motifs were combined with an evolutionary algorithm (EA) and an artificial neural network (ANN): binding data extraction --> peptide alignment --> ANN training and classification. This method, termed PERUN, was implemented for the prediction of peptides that bind to HLA-DR4(B1*0401). The respective positive predictive values of PERUN predictions of high-, moderate-, low- and zero-affinity binders were assessed as 0.8, 0.7, 0.5 and 0.8 by cross-validation, and 1.0, 0.8, 0.3 and 0.7 by experimental binding. This illustrates the synergy between experimentation and computer modeling, and its application to the identification of potential immunotherapeutic peptides.
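A minimal sketch of the scoring statistic quoted above: the positive predictive value of each predicted affinity class, computed here as the fraction of peptides assigned to that class whose measured class agrees. That interpretation, and the example data, are assumptions for illustration, not PERUN's outputs.

```python
# Minimal sketch: per-class positive predictive value (PPV) for affinity predictions.
def ppv_per_class(predicted, measured, classes=("high", "moderate", "low", "zero")):
    """PPV for a class = correct predictions of that class / all predictions of that class."""
    ppv = {}
    for cls in classes:
        hits = [m == cls for p, m in zip(predicted, measured) if p == cls]
        ppv[cls] = sum(hits) / len(hits) if hits else float("nan")
    return ppv

predicted = ["high", "high", "moderate", "zero", "low", "high", "zero"]
measured  = ["high", "moderate", "moderate", "zero", "zero", "high", "zero"]
print(ppv_per_class(predicted, measured))
# {'high': 0.666..., 'moderate': 1.0, 'low': 0.0, 'zero': 1.0}
```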
Abstract:
Conotoxins are valuable probes of receptors and ion channels because of their small size and highly selective activity. alpha-Conotoxin EpI, a 16-residue peptide from the mollusk-hunting Conus episcopatus, has the amino acid sequence GCCSDPRCNMNNPDY(SO3H)C-NH2 and appears to be an extremely potent and selective inhibitor of the alpha 3 beta 2 and alpha 3 beta 4 neuronal subtypes of the nicotinic acetylcholine receptor (nAChR). The desulfated form of EpI ([Tyr(15)]EpI) has a potency and selectivity for the nAChR similar to those of EpI. Here we describe the crystal structure of [Tyr(15)]EpI solved at a resolution of 1.1 Angstrom using SnB. The asymmetric unit has a total of 284 non-hydrogen atoms, making this one of the largest structures solved de novo by direct methods. The [Tyr(15)]EpI structure brings to six the number of alpha-conotoxin structures that have been determined to date. Four of these, [Tyr(15)]EpI, PnIA, PnIB, and MII, have an alpha 4/7 cysteine framework and are selective for the neuronal subtype of the nAChR. The structure of [Tyr(15)]EpI has the same backbone fold as the other alpha 4/7-conotoxin structures, supporting the notion that this conotoxin cysteine framework and spacing give rise to a conserved fold. The surface charge distribution of [Tyr(15)]EpI is similar to that of PnIA and PnIB but is likely to be different from that of MII, suggesting that [Tyr(15)]EpI and MII may have different binding modes for the same receptor subtype.
Abstract:
The use of computational fluid dynamics simulations for calibrating a flush air data system is described. In particular, the flush air data system of the HYFLEX hypersonic vehicle is used as a case study. The HYFLEX air data system consists of nine pressure ports located flush with the vehicle nose surface, connected to onboard pressure transducers. After appropriate processing, surface pressure measurements can be converted into useful air data parameters. The processing algorithm requires an accurate pressure model, which relates air data parameters to the measured pressures. In the past, such pressure models have been calibrated using combinations of flight data, ground-based experimental results, and numerical simulation. We perform a calibration of the HYFLEX flush air data system using computational fluid dynamics simulations exclusively. The simulations are used to build an empirical pressure model that accurately describes the HYFLEX nose pressure distribution over a range of flight conditions. We believe that computational fluid dynamics provides a quick and inexpensive way to calibrate the air data system and is applicable to a broad range of flight conditions. When tested with HYFLEX flight data, the calibrated system is found to work well. It predicts vehicle angle of attack and angle of sideslip to accuracy levels that generally satisfy flight control requirements. Dynamic pressure is predicted to within the resolution of the onboard inertial measurement unit. We find that wind-tunnel experiments and flight data are not necessary to accurately calibrate the HYFLEX flush air data system for hypersonic flight.
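A minimal sketch of the calibrate-then-invert idea, not the HYFLEX pressure model: an assumed modified-Newtonian pressure form is first fitted to a "CFD" pressure distribution at a known incidence, then used to recover angle of attack from measured port pressures. The port angles, dynamic pressure, and single-incidence treatment are illustrative assumptions.

```python
# Minimal sketch (assumed Newtonian form p_i = p_inf + q*Cp_max*cos^2(delta_i - alpha)):
# calibrate Cp_max against a CFD-like pressure distribution, then invert measured port
# pressures for angle of attack by least squares.
import numpy as np
from scipy.optimize import least_squares

port_angles = np.radians([-30.0, -15.0, 0.0, 15.0, 30.0])    # assumed port locations on the nose

def model_pressures(alpha, cp_max, q=1.0e4, p_inf=100.0):
    """Surface pressure at each port for incidence alpha, in the assumed Newtonian form."""
    return p_inf + q * cp_max * np.cos(port_angles - alpha) ** 2

# Calibration step: fit cp_max so the model reproduces a CFD solution at a known incidence.
alpha_cfd = np.radians(5.0)
p_cfd = model_pressures(alpha_cfd, cp_max=1.8)                # stand-in for a CFD result
fit = least_squares(lambda c: model_pressures(alpha_cfd, c[0]) - p_cfd, x0=[1.5])
cp_max = fit.x[0]

# "Flight" step: recover angle of attack from measured port pressures with the calibrated model.
p_measured = model_pressures(np.radians(8.0), cp_max)         # stand-in for flight measurements
sol = least_squares(lambda a: model_pressures(a[0], cp_max) - p_measured, x0=[0.0])
print(np.degrees(sol.x[0]))                                    # ~8 degrees angle of attack
```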
Abstract:
To translate and transfer solution data between two totally different meshes (i.e. mesh 1 and mesh 2), a consistent point-searching algorithm for solution interpolation in unstructured meshes consisting of 4-node bilinear quadrilateral elements is presented in this paper. The proposed algorithm has the following significant advantages: (1) The use of a point-searching strategy allows a point in one mesh to be accurately related to an element (containing this point) in another mesh. Thus, to translate/transfer the solution of any particular point from mesh 2 to mesh 1, only one element in mesh 2 needs to be inversely mapped. This certainly minimizes the number of elements to which the inverse mapping is applied. In this regard, the present algorithm is very effective and efficient. (2) Analytical solutions to the local coordinates of any point in a four-node quadrilateral element, which are derived in a rigorous mathematical manner in the context of this paper, make it possible to carry out an inverse mapping process very effectively and efficiently. (3) The use of consistent interpolation enables the interpolated solution to be compatible with an original solution and, therefore, guarantees an interpolated solution of extremely high accuracy. After the mathematical formulations of the algorithm are presented, the algorithm is tested and validated through a challenging problem. The related results from the test problem have demonstrated the generality, accuracy, effectiveness, efficiency and robustness of the proposed consistent point-searching algorithm. Copyright (C) 1999 John Wiley & Sons, Ltd.
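A minimal sketch of the inverse-mapping and consistent-interpolation step: given a point from mesh 1 and the 4-node quadrilateral element of mesh 2 that contains it, recover the element's local coordinates and interpolate the nodal solution with the bilinear shape functions. The paper derives closed-form expressions for the local coordinates; this sketch substitutes a short Newton iteration, which gives the same result for well-shaped elements. The element geometry and nodal values are illustrative.

```python
# Minimal sketch: inverse map a physical point into a 4-node bilinear quadrilateral,
# then interpolate the nodal solution at the recovered local coordinates.
import numpy as np

def shape_functions(xi, eta):
    """Bilinear shape functions on the reference square [-1, 1] x [-1, 1]."""
    return 0.25 * np.array([(1 - xi) * (1 - eta), (1 + xi) * (1 - eta),
                            (1 + xi) * (1 + eta), (1 - xi) * (1 + eta)])

def inverse_map(nodes, point, tol=1e-12, max_iter=25):
    """Local coordinates (xi, eta) of `point` in the element with corner coordinates `nodes` (4x2)."""
    xi = eta = 0.0
    for _ in range(max_iter):
        N = shape_functions(xi, eta)
        dN_dxi = 0.25 * np.array([-(1 - eta), (1 - eta), (1 + eta), -(1 + eta)])
        dN_deta = 0.25 * np.array([-(1 - xi), -(1 + xi), (1 + xi), (1 - xi)])
        residual = N @ nodes - point                                # mapped position minus target
        jac = np.column_stack((dN_dxi @ nodes, dN_deta @ nodes))    # 2x2 Jacobian d(x,y)/d(xi,eta)
        dxi, deta = np.linalg.solve(jac, -residual)
        xi, eta = xi + dxi, eta + deta
        if abs(dxi) + abs(deta) < tol:
            break
    return xi, eta

nodes = np.array([[0.0, 0.0], [2.0, 0.2], [2.2, 1.8], [0.1, 2.0]])   # element corners in mesh 2
values = np.array([1.0, 2.0, 4.0, 3.0])                               # nodal solution on mesh 2
xi, eta = inverse_map(nodes, np.array([1.0, 1.0]))                    # a node of mesh 1
print(shape_functions(xi, eta) @ values)                              # interpolated value at (1, 1)
```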
Abstract:
OBJECTIVE: To evaluate a diagnostic algorithm for pulmonary tuberculosis based on smear microscopy and objective response to a trial of antibiotics. SETTING: Adult medical wards, Hlabisa Hospital, South Africa, 1996-1997. METHODS: Adults with chronic chest symptoms and abnormal chest X-ray had sputum examined for Ziehl-Neelsen stained acid-fast bacilli by light microscopy. Those with negative smears were treated with amoxycillin for 5 days and assessed. Those who had not improved were treated with erythromycin for 5 days and reassessed. Response was compared with mycobacterial culture. RESULTS: Of 280 suspects who completed the diagnostic pathway, 160 (57%) had a positive smear, 46 (17%) responded to amoxycillin, 34 (12%) responded to erythromycin and 40 (14%) were treated as smear-negative tuberculosis. The sensitivity (89%) and specificity (84%) of the full algorithm for culture-positive tuberculosis were high. However, 11 patients (positive predictive value [PPV] 95%) were incorrectly diagnosed with tuberculosis, and 24 cases of tuberculosis (negative predictive value [NPV] 70%) were not identified. NPV improved to 75% when anaemia was included as a predictor. Algorithm performance was independent of human immunodeficiency virus status. CONCLUSION: A sputum smear microscopy plus trial-of-antibiotics algorithm, applied to a selected group of tuberculosis suspects, may increase diagnostic accuracy in district hospitals in developing countries.
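A minimal sketch of the diagnostic pathway described above, written as a single decision function. The inputs are per-patient observations and the return value is the working diagnosis at the end of the pathway; the function names and strings are illustrative.

```python
# Minimal sketch: smear -> amoxycillin trial -> erythromycin trial -> smear-negative TB.
def tb_pathway(smear_positive, improved_on_amoxycillin=None, improved_on_erythromycin=None):
    """Working diagnosis after following the pathway in order."""
    if smear_positive:
        return "smear-positive tuberculosis"
    if improved_on_amoxycillin:
        return "not tuberculosis (responded to amoxycillin)"
    if improved_on_erythromycin:
        return "not tuberculosis (responded to erythromycin)"
    return "smear-negative tuberculosis"

print(tb_pathway(smear_positive=False, improved_on_amoxycillin=False,
                 improved_on_erythromycin=True))
```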
Abstract:
In this paper, the minimum-order stable recursive filter design problem is proposed and investigated. This problem plays an important role in pipelined implementations in signal processing. Here, the existence of a high-order stable recursive filter is proved theoretically, and an upper bound for the highest order of stable filters is given. Then the minimum-order stable linear predictor is obtained by solving an optimization problem. In this paper, the popular genetic algorithm approach is adopted since it is a heuristic probabilistic optimization technique that has been widely used in engineering design. Finally, an illustrative example is used to show the effectiveness of the proposed algorithm.
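A minimal sketch of the genetic-algorithm ingredient only (selection and Gaussian mutation; crossover and the minimum-order search are omitted): candidate predictor coefficients of a fixed order are evolved, any candidate whose poles leave the unit circle is rejected as unstable, and the rest are scored by one-step prediction error on a toy signal. The signal, order, and GA settings are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch: evolve stable linear-predictor coefficients with a tiny GA.
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(0.3 * np.arange(200)) + 0.05 * rng.standard_normal(200)
ORDER = 4                                        # candidate predictor order (fixed here)

def is_stable(a):
    """All poles of the recursive filter 1 / (1 - a1 z^-1 - ... - ap z^-p) inside the unit circle."""
    return bool(np.all(np.abs(np.roots(np.concatenate(([1.0], -a)))) < 1.0))

def fitness(a):
    """Mean squared one-step prediction error; unstable candidates are rejected outright."""
    if not is_stable(a):
        return np.inf
    pred = np.array([a @ signal[t - ORDER:t][::-1] for t in range(ORDER, len(signal))])
    return float(np.mean((signal[ORDER:] - pred) ** 2))

population = rng.uniform(-0.5, 0.5, size=(40, ORDER))
for _ in range(60):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[:20]]                  # keep the best half
    children = parents[rng.integers(0, 20, size=20)].copy()
    children += 0.05 * rng.standard_normal(children.shape)         # Gaussian mutation
    population = np.vstack((parents, children))

best = min(population, key=fitness)
print(best, fitness(best))
```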
Abstract:
An equivalent algorithm is proposed to simulate thermal effects of the magma intrusion in geological systems, which are composed of porous rocks. Based on the physical and mathematical equivalence, the original magma solidification problem with a moving boundary between the rock and intruded magma is transformed into a new problem without the moving boundary but with a physically equivalent heat source. From the analysis of an ideal solidification model, the physically equivalent heat source has been determined in this paper. The major advantage in using the proposed equivalent algorithm is that the fixed finite element mesh with a variable integration time step can be employed to simulate the thermal effect of the intruded magma solidification using the conventional finite element method. The related numerical results have demonstrated the correctness and usefulness of the proposed equivalent algorithm for simulating the thermal effect of the intruded magma solidification in geological systems. (C) 2003 Elsevier B.V. All rights reserved.
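A minimal sketch of the fixed-mesh idea: instead of tracking the moving solidification front, the latent heat of the cooling magma is folded into an equivalent source, written here as an apparent heat capacity over the solidification interval, so the temperature can be marched forward on a fixed grid. All material properties, the 1D geometry, and the explicit finite-difference (rather than finite-element) discretisation are illustrative assumptions.

```python
# Minimal sketch: 1D cooling of an intrusion with latent heat as an equivalent source.
import numpy as np

L_latent = 4.0e5                          # latent heat of crystallisation, J/kg (illustrative)
c, rho, k = 1000.0, 2700.0, 2.5           # specific heat, density, thermal conductivity
T_sol, T_liq = 900.0, 1000.0              # solidus and liquidus temperatures, deg C

def effective_capacity(T):
    """Apparent heat capacity: latent heat released linearly between liquidus and solidus."""
    extra = np.where((T > T_sol) & (T < T_liq), L_latent / (T_liq - T_sol), 0.0)
    return c + extra

n, dx, dt = 200, 1.0, 1.0e5               # fixed grid spacing (m) and time step (s)
T = np.full(n, 300.0)                     # host rock temperature
T[90:110] = 1200.0                        # intruded magma
for _ in range(2000):
    lap = (np.roll(T, -1) - 2.0 * T + np.roll(T, 1)) / dx**2
    lap[0] = lap[-1] = 0.0                # hold the ends at their initial temperature
    T = T + dt * k * lap / (rho * effective_capacity(T))
print(T[80:120].round(1))                 # cooled intrusion and heated host rock
```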