88 results for Fecal steroids extraction


Relevance:

20.00%

Publisher:

Abstract:

In this study, the host-sensitivity and -specificity of the JCV and BKV polyomaviruses were evaluated by testing wastewater/fecal samples from nine host groups in Southeast Queensland, Australia. JCV and BKV polyomaviruses were detected in 48 human wastewater samples collected from primary and secondary effluent, suggesting the high sensitivity of these viruses in human wastewater. Of the 81 animal wastewater/fecal samples tested, 80 were PCR negative for these markers; only one pig wastewater sample was positive. Nonetheless, the overall host-specificity of these viruses for differentiating between human and animal wastewater/fecal samples was 0.99. To our knowledge, this is the first study in Australia to report the high specificity of the JCV and BKV polyomaviruses. To evaluate the field application of these viruses for detecting human fecal pollution, 20 environmental samples were collected from a coastal river. Of the 20 samples tested, 15% and 70% exceeded the regulatory guidelines for E. coli and enterococci levels in marine waters, respectively. In all, 5 (25%) samples were PCR positive for JCV and BKV, indicating the presence of human fecal pollution in the studied river. The results suggest that JCV and BKV detection using PCR could be a useful tool for identifying human-sourced fecal pollution in coastal waters.
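
The reported host-specificity of 0.99 follows directly from the counts given in the abstract (80 of 81 animal samples PCR negative). A minimal sketch of that calculation; the function and variable names are illustrative, not from the study:

```python
# Minimal sketch of the host-specificity figure implied by the counts above:
# 80 of 81 animal wastewater/fecal samples were PCR negative (one pig sample
# positive). Names are illustrative, not from the study.

def host_specificity(negatives_in_animal_samples: int, total_animal_samples: int) -> float:
    """Fraction of non-target (animal) samples that test negative for the marker."""
    return negatives_in_animal_samples / total_animal_samples

print(round(host_specificity(80, 81), 2))  # 0.99, the reported host-specificity
```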

Relevance:

20.00%

Publisher:

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square error criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structure-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
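
The abstract models subband wavelet coefficients with a generalized Gaussian distribution (GGD) whose shape parameter must be estimated. As a hedged illustration, the sketch below uses a standard moment-matching estimator rather than the thesis's least-squares formulation; all names and the test data are illustrative:

```python
# Sketch of generalized Gaussian shape-parameter estimation for wavelet
# subband coefficients by moment matching (an alternative to the thesis's
# least-squares formulation; the approach and names here are illustrative).
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def estimate_ggd_shape(coeffs: np.ndarray) -> float:
    """Estimate the GGD shape parameter beta from subband coefficients."""
    m1 = np.mean(np.abs(coeffs))   # first absolute moment E|x|
    m2 = np.mean(coeffs ** 2)      # second moment E[x^2]
    r = m1 ** 2 / m2               # ratio determined solely by beta for a GGD

    def ratio_gap(beta):
        return gamma(2.0 / beta) ** 2 / (gamma(1.0 / beta) * gamma(3.0 / beta)) - r

    return brentq(ratio_gap, 0.1, 10.0)   # solve for beta in a plausible range

# Example: Laplacian-like data should yield a shape parameter close to 1.
rng = np.random.default_rng(0)
print(estimate_ggd_shape(rng.laplace(size=100_000)))
```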

Relevance:

20.00%

Publisher:

Abstract:

Artificial neural network (ANN) learning methods provide a robust and non-linear approach to approximating the target function for many classification, regression and clustering problems. ANNs have demonstrated good predictive performance in a wide variety of practical problems. However, there are strong arguments as to why ANNs are not sufficient for the general representation of knowledge: the poor comprehensibility of the learned ANN and the inability to represent explanation structures. The overall objective of this thesis is to address these issues by: (1) explanation of the decision process in ANNs in the form of symbolic rules (predicate rules with variables); and (2) provision of explanatory capability by mapping the general conceptual knowledge learned by the neural networks into a knowledge base to be used in a rule-based reasoning system. A multi-stage methodology, GYAN, is developed and evaluated for the task of extracting knowledge from trained ANNs. The extracted knowledge is represented in the form of restricted first-order logic rules and subsequently allows user interaction by interfacing with a knowledge-based reasoner. The performance of GYAN is demonstrated using a number of real-world and artificial data sets. The empirical results demonstrate that: (1) an equivalent symbolic interpretation is derived describing the overall behaviour of the ANN with high accuracy and fidelity; and (2) a concise explanation is given (in terms of the rules, facts and predicates activated in a reasoning episode) as to why a particular instance is classified into a certain category.
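
As a rough illustration of the general idea of reading symbolic rules off a trained network (a generic pedagogical toy example, not the multi-stage GYAN methodology itself), the sketch below queries a small trained classifier over a discretized input space and emits propositional rules; the data set and attribute names are invented:

```python
# Toy sketch of pedagogical rule extraction: query a trained network over
# discretized inputs and read off simple propositional rules. This is a
# generic illustration, not the GYAN methodology described above.
from itertools import product
from sklearn.neural_network import MLPClassifier

# Invented toy data: class 1 iff both binary attributes are true (logical AND).
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]
net = MLPClassifier(hidden_layer_sizes=(4,), solver="lbfgs",
                    max_iter=5000, random_state=0).fit(X, y)

# Enumerate the discretized input space and emit one rule per positive prediction.
for a, b in product([0, 1], repeat=2):
    if net.predict([[a, b]])[0] == 1:
        print(f"IF attr1 = {a} AND attr2 = {b} THEN class = positive")
```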

Relevance:

20.00%

Publisher:

Abstract:

In this study, the host-specificity and -sensitivity of human- and bovine-specific adenoviruses (HS-AVs and BS-AVs) were evaluated by testing wastewater/fecal samples from various animal species in Southeast Queensland, Australia. The overall specificity and sensitivity of the HS-AVs marker were 1.0 and 0.78, respectively; the corresponding figures for the BS-AVs marker were 1.0 and 0.73. Twenty environmental water samples were collected during wet conditions and 20 during dry conditions from the Maroochy Coastal River and tested for the presence of fecal indicator bacteria (FIB), host-specific viral markers, and zoonotic bacterial and protozoan pathogens using PCR/qPCR. The concentrations of FIB in water samples collected after wet conditions were generally higher than in those collected during dry conditions. HS-AVs were detected in 20% of water samples collected during wet conditions, whereas BS-AVs were detected in both wet (10%) and dry (10%) conditions. The C. jejuni mapA and Salmonella invA genes were each detected in 10% of samples collected during dry conditions. The concentrations of Salmonella invA ranged from 3.5 × 10² to 4.3 × 10² genomic copies per 500 ml of water. The G. lamblia β-giardin gene was detected in only one sample (5%), collected during dry conditions. Weak or significant correlations were observed between FIB and the viral markers and zoonotic pathogens. However, during dry conditions, no significant correlations were observed between FIB concentrations and the viral markers or zoonotic pathogens. The prevalence of HS-AVs and BS-AVs in samples collected from the study river suggests that water quality is affected by human as well as bovine fecal pollution. The results suggest that HS-AVs and BS-AVs detection using PCR could be a useful tool for identifying human- and bovine-sourced fecal pollution in coastal waters.
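
One analysis step described above is checking correlations between fecal indicator bacteria concentrations and the viral markers or pathogens. A minimal sketch of such a check using rank correlation; the numbers below are placeholders, not data from the study:

```python
# Sketch of a rank-correlation check between fecal indicator bacteria (FIB)
# counts and a viral-marker signal, in the spirit of the analysis above.
# All values are illustrative placeholders.
from scipy.stats import spearmanr

fib_counts = [120, 950, 310, 40, 2200, 15, 680, 90]   # e.g. enterococci per 100 ml
marker_signal = [0, 1, 0, 0, 1, 0, 1, 0]               # PCR detection of HS-AVs (0/1)

rho, p_value = spearmanr(fib_counts, marker_signal)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")   # significant only if p < 0.05
```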

Relevance:

20.00%

Publisher:

Abstract:

Background and Aim: To investigate participation in a second round of colorectal cancer screening using a fecal occult blood test (FOBT) in an Australian rural community, and to assess the demographic characteristics and individual perspectives associated with repeat screening.

Methods: Potential participants from round 1 (50–74 years of age) were sent an intervention package and asked to return a completed FOBT (n = 3406). Doctors of participants testing positive referred them to colonoscopy as appropriate. Following screening, 119 participants completed qualitative telephone interviews. Multivariable logistic regression models evaluated the association between round-2 participation and other variables.

Results: Round-2 participation was 34.7%; the strongest predictor was participation in round 1. Repeat participants were more likely to be female; inconsistent screeners were more likely to be younger (aged 50–59 years). The proportion of positive FOBTs was 12.7%, colonoscopy compliance was 98.6%, and the positive predictive value for cancer or adenoma of advanced pathology was 23.9%. Reasons for participation included testing as a precautionary measure or having a family history of, or friends with, colorectal cancer; reasons for non-participation included apathy or doctors’ advice against screening.

Conclusion: Participation was relatively low and consistent across rounds. Unless suitable strategies are identified to overcome behavioral trends and/or to screen out ineligible participants, little change in overall participation rates can be expected across rounds.
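
A minimal sketch of the kind of multivariable logistic regression used to relate round-2 participation to predictors such as round-1 participation, sex and age group; the data and variable names below are hypothetical placeholders, not the study's dataset:

```python
# Sketch of a multivariable logistic regression of round-2 screening
# participation on demographic predictors. Data and names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: participated in round 1, female, aged 50-59 (binary indicators).
X = np.array([
    [1, 1, 0], [0, 0, 1], [1, 0, 0], [0, 1, 1],
    [1, 1, 1], [1, 0, 0], [0, 1, 0], [0, 0, 1],
])
y = np.array([1, 0, 1, 0, 1, 1, 0, 0])   # participated in round 2

model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])     # OR > 1 indicates higher odds of repeat screening
print(dict(zip(["round1", "female", "age_50_59"], odds_ratios.round(2))))
```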

Relevance:

20.00%

Publisher:

Abstract:

Automated analysis of the sentiments expressed in online consumer feedback can facilitate both organizations’ business strategy development and individual consumers’ comparison shopping. Nevertheless, existing opinion mining methods either adopt a context-free sentiment classification approach or rely on a large number of manually annotated training examples to perform context-sensitive sentiment classification. Guided by the design science research methodology, we illustrate the design, development, and evaluation of a novel fuzzy domain ontology based context-sensitive opinion mining system. Our novel ontology extraction mechanism, underpinned by a variant of Kullback-Leibler divergence, can automatically acquire contextual sentiment knowledge across various product domains to improve the sentiment analysis process. Evaluated on a benchmark dataset and real consumer reviews collected from Amazon.com, our system shows remarkable performance improvement over the context-free baseline.
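
The abstract credits a variant of Kullback-Leibler divergence for acquiring contextual sentiment knowledge. The sketch below shows a generic KL-style term score that ranks words by how differently they are distributed across positive and negative reviews; it is an illustration only, not the paper's exact variant or its fuzzy-ontology construction step:

```python
# Generic sketch: rank terms as domain-specific sentiment indicators by a
# pointwise Kullback-Leibler contribution between class-conditional term
# distributions. Illustrative only; not the paper's variant.
import math
from collections import Counter

def kl_term_scores(positive_docs, negative_docs):
    """Score terms by how much their distribution differs between classes."""
    pos, neg = Counter(), Counter()
    for doc in positive_docs:
        pos.update(doc.lower().split())
    for doc in negative_docs:
        neg.update(doc.lower().split())
    vocab = set(pos) | set(neg)
    pos_total, neg_total = sum(pos.values()), sum(neg.values())
    scores = {}
    for term in vocab:
        p = (pos[term] + 1) / (pos_total + len(vocab))   # Laplace smoothing
        q = (neg[term] + 1) / (neg_total + len(vocab))
        scores[term] = p * math.log(p / q)               # pointwise KL contribution
    return scores

scores = kl_term_scores(["battery life is long", "long battery life"],
                        ["battery died fast", "screen is dim"])
print(sorted(scores, key=scores.get, reverse=True)[:3])   # top positive-leaning terms
```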

Relevance:

20.00%

Publisher:

Abstract:

The use of appropriate features to characterise an output class or object is critical for all classification problems. In order to find optimal feature descriptors for vegetation species classification in a power line corridor monitoring application, this article evaluates the capability of several spectral and texture features. A new spectral–texture feature descriptor is proposed by incorporating spectral vegetation indices into statistical moment features. The proposed method is evaluated against several classic texture feature descriptors. An object-based classification method is used, with a support vector machine employed as the benchmark classifier. Individual tree crowns are first detected and segmented from aerial images, and different feature vectors are extracted to represent each tree crown. The experimental results show that the proposed spectral moment features outperform, or at least compare with, the state-of-the-art texture descriptors in terms of classification accuracy. A comprehensive quantitative evaluation using receiver operating characteristic space analysis further demonstrates the strength of the proposed feature descriptors.
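
A hedged sketch of a spectral-moment descriptor in the spirit of the proposal: compute a vegetation index per pixel within a segmented tree crown and summarize it with statistical moments. The band names, index choice (NDVI) and constants are assumptions for illustration:

```python
# Sketch of a spectral-moment feature vector for a segmented tree crown:
# per-pixel NDVI summarized by mean, std, skewness and kurtosis. Band names
# and the choice of index are illustrative assumptions.
import numpy as np

def spectral_moment_features(nir: np.ndarray, red: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Return mean, std, skewness and kurtosis of NDVI inside the crown mask."""
    ndvi = (nir - red) / (nir + red + 1e-6)   # per-pixel vegetation index
    values = ndvi[mask]                        # keep only crown pixels
    mu, sigma = values.mean(), values.std()
    z = (values - mu) / (sigma + 1e-6)
    return np.array([mu, sigma, (z ** 3).mean(), (z ** 4).mean()])

# Feature vectors like this would then be fed to a classifier such as an SVM.
rng = np.random.default_rng(1)
nir, red = rng.random((64, 64)), rng.random((64, 64))
print(spectral_moment_features(nir, red, nir > 0.2))
```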

Relevance:

20.00%

Publisher:

Abstract:

Robust, affine-covariant feature extractors provide a means to extract correspondences between images captured by widely separated cameras. Advances in wide-baseline correspondence extraction require looking beyond the robust feature extraction and matching approach. This study examines new techniques for extracting correspondences that take advantage of the information contained in affine feature matches. Methods for improving the accuracy of a set of putative matches, eliminating incorrect matches, and extracting large numbers of additional correspondences are explored. It is assumed that knowledge of the camera geometry is not available and not immediately recoverable. The new techniques are evaluated by means of an epipolar geometry estimation task. It is shown that these methods enable the computation of camera geometry in many cases where existing feature extractors cannot produce sufficient numbers of accurate correspondences.
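
The evaluation task described above rests on a standard pipeline: robust feature matching followed by fundamental-matrix estimation. The sketch below shows that baseline with OpenCV (not the paper's new correspondence-growing techniques); the image file names are placeholders:

```python
# Baseline wide-baseline pipeline: robust feature matching followed by
# fundamental-matrix (epipolar geometry) estimation with RANSAC. Standard
# OpenCV recipe; file names are placeholders.
import cv2
import numpy as np

img1 = cv2.imread("view_left.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_right.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio-test matching to obtain putative correspondences.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Robustly estimate the fundamental matrix; the inlier mask flags consistent matches.
F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
print(f"{int(inliers.sum())} inlier correspondences of {len(good)} putative matches")
```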

Relevance:

20.00%

Publisher:

Abstract:

Most web service discovery systems use keyword-based search algorithms and, although partially successful, sometimes fail to satisfy users’ information needs. This has given rise to several semantics-based approaches that seek to go beyond simple attribute matching and capture the semantics of services. However, the results reported in the literature vary and in many cases are worse than those obtained by keyword-based systems. We believe the accuracy of the mechanisms used to extract tokens from the non-natural-language sections of WSDL files directly affects the performance of these techniques, because some of them can be more sensitive to noise. In this paper, three existing tokenization algorithms are evaluated and a new algorithm that outperforms all the algorithms found in the literature is introduced.
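
A simple example of the kind of tokenization being compared: splitting identifiers from the non-natural-language parts of WSDL files (operation and message names) into word tokens. This is a generic camel-case splitter for illustration, not the new algorithm introduced in the paper:

```python
# Generic tokenizer for WSDL-style identifiers such as "GetWeatherByZipCode".
# Splits camelCase, PascalCase, digits and underscores into lowercase tokens.
import re

def tokenize_identifier(name: str) -> list[str]:
    """Split an identifier into word tokens."""
    parts = re.split(r"[_\-\s]+", name)
    tokens = []
    for part in parts:
        tokens += re.findall(r"[A-Z]+(?![a-z])|[A-Z][a-z]+|[a-z]+|\d+", part)
    return [t.lower() for t in tokens if t]

print(tokenize_identifier("GetWeatherByZipCode"))   # ['get', 'weather', 'by', 'zip', 'code']
print(tokenize_identifier("parseXMLResponse_v2"))   # ['parse', 'xml', 'response', 'v', '2']
```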