211 results for Binary Asteroids
Abstract:
A total of 214 rainwater samples from 82 tanks were collected in urban Southeast Queensland (SEQ), Australia, and analysed for zoonotic bacterial and protozoan pathogens using real-time binary PCR and quantitative PCR (qPCR). Quantitative Microbial Risk Assessment (QMRA) analysis was used to quantify the risk of infection associated with exposure to potential pathogens from potable and non-potable uses of roof-harvested rainwater. Of the 214 samples tested, 10.7%, 9.8%, 5.6%, and 0.4% were positive for the Salmonella invA, Giardia lamblia β-giardin, Legionella pneumophila mip, and Campylobacter jejuni mapA genes, respectively. Cryptosporidium parvum could not be detected. The estimated numbers of viable Salmonella spp., G. lamblia, and L. pneumophila ranged from 1.6 × 10¹ to 9.5 × 10¹ cells, 1.4 × 10⁻¹ to 9.0 × 10⁻¹ cysts, and 1.5 × 10¹ to 4.3 × 10¹ cells per 1000 ml of water, respectively. Six risk scenarios were considered for exposure to Salmonella spp., G. lamblia, and L. pneumophila. For Salmonella spp. and G. lamblia, these scenarios were: (1) liquid ingestion from drinking rainwater daily; (2) accidental liquid ingestion during garden hosing twice a week; (3) aerosol ingestion during daily showering; and (4) aerosol ingestion during hosing twice a week. For L. pneumophila, the scenarios were: (5) aerosol inhalation during daily showering, and (6) aerosol inhalation during hosing twice a week. The risk of infection from Salmonella spp., G. lamblia, and L. pneumophila associated with the use of rainwater for showering and garden hosing was calculated to be well below the threshold value of one extra infection per 10,000 persons per year in urban SEQ. However, the risk of infection from ingesting Salmonella spp. and G. lamblia by drinking exceeded this threshold, indicating that if undisinfected rainwater were ingested by drinking, the gastrointestinal diseases salmonellosis and giardiasis would be expected to range from 5.0 × 10⁰ to 2.8 × 10¹ and 1.0 × 10¹ to 6.4 × 10¹ cases per 10,000 persons per year, respectively. Since this health risk appears higher than that expected from the reported incidence of gastroenteritis, the assumptions used to estimate these infection risks are critically examined. Nonetheless, it would seem prudent to disinfect rainwater for potable use.
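The per-scenario infection risks described above follow the standard QMRA dose-response pattern; a minimal sketch is given below. The exponential model, the dose per event, and the infectivity parameter r are illustrative assumptions, not the study's actual inputs or pathogen parameters.

```python
import math

def exponential_dose_response(dose, r):
    """P(infection) for a single exposure event under the exponential model."""
    return 1.0 - math.exp(-r * dose)

def annual_risk(p_event, events_per_year):
    """Annual probability of at least one infection from independent events."""
    return 1.0 - (1.0 - p_event) ** events_per_year

# Hypothetical scenario: daily ingestion of water containing 0.05 organisms
# per serving, with an assumed infectivity parameter r = 0.005 (illustrative).
p_daily = exponential_dose_response(dose=0.05, r=0.005)
risk_per_year = annual_risk(p_daily, events_per_year=365)
```

Comparing `risk_per_year` against the 1-in-10,000 benchmark is then a direct threshold check.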
Abstract:
The use of appropriate features to characterize an output class or object is critical for all classification problems. This paper evaluates the capability of several spectral and texture features for object-based vegetation classification at the species level using airborne high-resolution multispectral imagery. Image-objects, the basic classification units, were generated through image segmentation. Statistical moments extracted from the original spectral bands and from vegetation index images are used as feature descriptors for image-objects (i.e. tree crowns). Several state-of-the-art texture descriptors, such as the Gray-Level Co-Occurrence Matrix (GLCM), Local Binary Patterns (LBP), and its extensions, are also extracted for comparison. A Support Vector Machine (SVM) is employed for classification in the object-feature space. The experimental results showed that incorporating spectral vegetation indices improved classification accuracy over using the original spectral bands alone, and that moments of the Ratio Vegetation Index obtained the highest average classification accuracy in our experiment. The experiments also indicate that the spectral moment features outperform, or are at least comparable to, the state-of-the-art texture descriptors in terms of classification accuracy.
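The spectral-moment descriptor can be sketched as follows: compute the Ratio Vegetation Index (RVI = NIR / Red) per pixel, then take statistical moments over the pixels of one segmented object. The pixel values, the epsilon guard, and the choice of three moments are illustrative assumptions.

```python
import math

def ratio_vegetation_index(nir, red, eps=1e-9):
    """Per-pixel Ratio Vegetation Index: RVI = NIR / Red."""
    return [n / (r + eps) for n, r in zip(nir, red)]

def statistical_moments(values):
    """Mean, standard deviation, and skewness of an object's pixel values."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = math.sqrt(var)
    skew = (sum((v - mean) ** 3 for v in values) / n) / (std ** 3) if std else 0.0
    return mean, std, skew

# Hypothetical NIR/Red reflectances for the pixels of one tree-crown object.
nir = [0.62, 0.58, 0.71, 0.66, 0.60]
red = [0.08, 0.10, 0.07, 0.09, 0.11]
rvi = ratio_vegetation_index(nir, red)
feature_vector = statistical_moments(rvi)
```

One such vector per image-object would then feed the SVM in the object-feature space.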
Abstract:
A good object representation, or object descriptor, is one of the key issues in object-based image analysis. To effectively fuse color and texture into a unified descriptor at the object level, this paper presents a novel method for feature fusion. Color histograms and uniform local binary patterns are extracted from arbitrary-shaped image-objects, and kernel principal component analysis (kernel PCA) is employed to find nonlinear relationships among the extracted color and texture features. The maximum likelihood approach is used to estimate the intrinsic dimensionality, which is then used as a criterion for automatic selection of the optimal feature set from the fused features. The proposed method is evaluated using SVM as the benchmark classifier and is applied to object-based vegetation species classification using high-spatial-resolution aerial imagery. Experimental results demonstrate that great improvement can be achieved by using the proposed feature fusion method.
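The fusion step can be sketched as below: concatenate each object's color histogram and uniform-LBP histogram, then apply kernel PCA to the stacked feature matrix. This is a minimal sketch assuming an RBF kernel and a fixed number of retained components (the paper instead selects the dimensionality via maximum-likelihood intrinsic-dimensionality estimation); the histogram sizes and kernel width are hypothetical.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Pairwise RBF kernel matrix for the rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components, gamma=1.0):
    """Project training samples onto the top components in RBF feature space."""
    K = rbf_kernel(X, gamma)
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # center the kernel matrix
    vals, vecs = np.linalg.eigh(Kc)               # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]   # pick the largest ones
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas

rng = np.random.default_rng(0)
color_hist = rng.random((20, 16))   # e.g. a 16-bin color histogram per object
lbp_hist = rng.random((20, 10))     # e.g. a 10-bin uniform-LBP histogram
fused = np.hstack([color_hist, lbp_hist])
embedding = kernel_pca(fused, n_components=5)
```

The resulting low-dimensional embedding is what would be handed to the SVM classifier.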
Abstract:
In automatic facial expression detection, very accurate registration is desired, which can be achieved via a deformable model approach in which a dense mesh of 60-70 points on the face is used, such as an active appearance model (AAM). However, for applications where manually labeling frames is prohibitive, AAMs do not work well, as they do not generalize to unseen subjects. As such, a coarser approach is taken for person-independent facial expression detection, where just a couple of key features (such as the face and eyes) are tracked using a Viola-Jones type approach. The tracked image is normally post-processed to encode shift and illumination invariance using a linear bank of filters. Recently, it was shown that this preprocessing step is of no benefit once close to ideal registration has been obtained. In this paper, we present a system based on the Constrained Local Model (CLM), a generic (person-independent) face alignment algorithm that achieves high accuracy. We compare these results against LBP feature extraction on the CK+ and GEMEP datasets.
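The LBP baseline referred to above can be sketched in its most basic form: an 8-neighbour, radius-1 local binary pattern over a grayscale patch, histogrammed into 256 bins. This is a generic illustration in pure Python, not the exact descriptor configuration used in the paper.

```python
def lbp_code(img, y, x):
    """8-neighbour LBP code for pixel (y, x): threshold neighbours at the centre."""
    centre = img[y][x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist

# A tiny 3x3 grayscale patch with a single interior pixel.
patch = [[10, 20, 30],
         [40, 50, 60],
         [70, 80, 90]]
h = lbp_histogram(patch)
```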
Abstract:
Background: It remains unclear whether it is possible to develop a spatiotemporal epidemic prediction model for cryptosporidiosis. This paper examined the impact of socioeconomic and weather factors on cryptosporidiosis and explored the possibility of developing such a model using socioeconomic and weather data in Queensland, Australia.
Methods: Data on weather variables, notified cryptosporidiosis cases, and socioeconomic factors in Queensland were supplied by the Australian Bureau of Meteorology, Queensland Department of Health, and Australian Bureau of Statistics, respectively. Three-stage spatiotemporal classification and regression tree (CART) models were developed to examine the association between socioeconomic and weather factors and the monthly incidence of cryptosporidiosis in Queensland, Australia. The spatiotemporal CART model was used for predicting the outbreak of cryptosporidiosis in Queensland.
Results: The results of the classification tree model (with incidence rates defined as binary presence/absence) showed that there was an 87% chance of an occurrence of cryptosporidiosis in a local government area (LGA) if the socio-economic index for the area (SEIFA) exceeded 1021. The results of the regression tree model (based on non-zero incidence rates) showed that when SEIFA was between 892 and 945 and temperature exceeded 32°C, the relative risk (RR) of cryptosporidiosis was 3.9 (mean morbidity: 390.6/100,000; standard deviation (SD): 310.5) compared with the monthly average incidence of cryptosporidiosis, and that when SEIFA was less than 892 the RR was 4.3 (mean morbidity: 426.8/100,000; SD: 319.2). A prediction map for cryptosporidiosis outbreaks was made according to the outputs of the spatiotemporal CART models.
Conclusions: The results of this study suggest that spatiotemporal CART models based on socioeconomic and weather variables can be used for predicting the outbreak of cryptosporidiosis in Queensland, Australia.
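The reported splits read naturally as a decision tree. The sketch below encodes only the thresholds quoted in the Results (SEIFA and temperature) as nested rules; it is a paraphrase of the published figures for illustration, not the fitted CART model itself.

```python
def cryptosporidiosis_rule(seifa, temp_c):
    """Illustrative re-statement of the reported CART splits (not the fitted model)."""
    if seifa > 1021:
        return "presence likely: 87% chance of occurrence in the LGA"
    if seifa < 892:
        return "relative risk 4.3 vs monthly average incidence"
    if seifa <= 945 and temp_c > 32:
        return "relative risk 3.9 vs monthly average incidence"
    return "no elevated risk reported for this combination"

summary = cryptosporidiosis_rule(seifa=900, temp_c=34)
```

A prediction map is then just this rule evaluated over every LGA's SEIFA and temperature values.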
Abstract:
Rural two-lane highways in the southeastern United States are frequently associated with a disproportionate number of serious and fatal crashes and as such remain a focus of considerable safety research. The Georgia Department of Transportation spearheaded a regional fatal crash analysis to identify the safety performance of two-lane rural highways and to offer guidance for identifying suitable countermeasures with which to mitigate fatal crashes. The fatal crash data used in this study were compiled from Alabama, Georgia, Mississippi, and South Carolina. The database, developed for an earlier study, included 557 randomly selected fatal crashes from 1997, 1998, or both (this varied by state). Each participating state identified the candidate crashes and performed physical or video site visits to construct crash databases with enhanced site-specific information. Motivated by the hypothesis that single- and multiple-vehicle crashes arise from fundamentally different circumstances, the research team applied binary logit models to predict the probability that a fatal crash is a single-vehicle run-off-road fatal crash, given roadway design characteristics, roadside environment features, and traffic conditions proximal to the crash site. A wide variety of factors appears to influence or be associated with single-vehicle fatal crashes. In a model transferability assessment, the authors determined that lane width, horizontal curvature, and ambient lighting are the only three significant variables that are consistent for single-vehicle run-off-road crashes across all study locations.
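A binary logit of this kind can be sketched as follows; the coefficient values and covariates are hypothetical placeholders chosen to mirror the three transferable variables named above, not the estimates from the study.

```python
import math

def logit_probability(intercept, coefs, x):
    """P(crash is single-vehicle run-off-road | covariates) under a binary logit."""
    utility = intercept + sum(b * v for b, v in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-utility))

# Hypothetical coefficients for lane width (m), horizontal curvature
# (degrees), and a dark/unlit indicator -- illustrative only.
beta0 = -0.4
betas = [-0.25, 0.08, 0.6]
site = [3.4, 5.0, 1.0]  # one crash site's covariates
p_svror = logit_probability(beta0, betas, site)
```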
Abstract:
The queer studies field works to deconstruct dominant western discourses which cast gay men as hedonistic partygoers. Concurrently, it examines the real social ramifications for some gay men for whom partying, illegal drugs and casual sex are an everyday reality. Another reality of gay male culture is HIV/AIDS and the legally prescribed medicines which accompany these conditions. Pleasure Consuming Medicine: The Queer Politics of Drugs explores these realities and the discourses surrounding them. Exploring the embodiments of illegal and prescription drug users, this book problematises the binary between prescription medicine use, where drug use is configured as a matter of consumer choice, and 'illicit' drug use, which is heavily policed and condemned. Returning to the gay community, it reviews community approaches to safe sex and drug use, and individual practices, to demonstrate alternative approaches to condemning drug usage.
Abstract:
Transit Oriented Developments (TODs) are often designed to promote the use of sustainable modes of transport and reduce car usage. This paper investigates the effect of personal and transit characteristics on the travel choices of TOD users. Binary logistic regression models were developed to determine the probability of choosing sustainable modes of transport, including walking, cycling and public transport. Kelvin Grove Urban Village (KGUV), located in Brisbane, Australia, was chosen as the case-study TOD. The modal splits showed that 47% of employees, 84% of students, 71% of shoppers and 56% of residents used sustainable modes of transport.
Abstract:
This article presents the results of a study on the association between measured air pollutants and the respiratory health of resident women and children in Lao PDR, one of the least developed countries in Southeast Asia. The study, commissioned by the World Health Organisation, included PM10, CO and NO2 measurements made inside 181 dwellings in nine districts within two provinces in Lao PDR over a 5-month period (12/05-04/06), and respiratory health information (via questionnaires and peak expiratory flow rate (PEFR) measurements) for all residents of the same dwellings. Adjusted odds ratios were calculated separately for each health outcome using binary logistic regression. There was a strong and consistent positive association between NO2 and CO concentrations and almost all questionnaire-based health outcomes for both women and children. Women in dwellings with higher measured NO2 had more than triple the odds of almost all of the health outcomes, and higher concentrations of NO2 and CO were significantly associated with lower PEFR. This study supports a growing literature confirming the role of indoor air pollution in the burden of respiratory disease in developing countries. The results will directly support changes in health and housing policy in Lao PDR.
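The adjusted odds ratios come from the logistic model's coefficients; the unadjusted version can be sketched from a 2×2 exposure-outcome table, as below. The counts are invented purely for illustration.

```python
import math

def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio from a 2x2 table:
    a/b = exposed cases/controls, c/d = unexposed cases/controls."""
    return (a * d) / (b * c)

def ci95_log(or_value, a, b, c, d):
    """Approximate 95% CI via the standard error of the log odds ratio."""
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    log_or = math.log(or_value)
    return math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se)

# Hypothetical counts: respiratory symptoms by high vs low indoor NO2.
a, b, c, d = 30, 60, 12, 79
or_no2 = odds_ratio(a, b, c, d)
low, high = ci95_log(or_no2, a, b, c, d)
```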
Abstract:
Signal Processing (SP) is a subject of central importance in engineering and the applied sciences. Signals are information-bearing functions, and SP deals with the analysis and processing of signals (by dedicated systems) to extract or modify information. Signal processing is necessary because signals normally contain information that is not readily usable or understandable, or which might be disturbed by unwanted sources such as noise. Although many signals are non-electrical, it is common to convert them into electrical signals for processing. Most natural signals (such as acoustic and biomedical signals) are continuous functions of time, and these are referred to as analog signals. Prior to the onset of digital computers, Analog Signal Processing (ASP) and analog systems were the only tools for dealing with analog signals. Although ASP and analog systems are still widely used, Digital Signal Processing (DSP) and digital systems are attracting more attention, due in large part to the significant advantages of digital systems over their analog counterparts. These advantages include superiority in performance, speed, reliability, efficiency of storage, size and cost. In addition, DSP can solve problems that cannot be solved using ASP, such as the spectral analysis of multicomponent signals, adaptive filtering, and operations at very low frequencies. Following the developments in engineering which occurred in the 1980s and 1990s, DSP became one of the world's fastest growing industries. Since that time DSP has not only impacted traditional areas of electrical engineering, but has had far-reaching effects on other domains that deal with information, such as economics, meteorology, seismology, bioengineering, oceanology, communications, astronomy, radar engineering, control engineering and various other applications. This book is based on the lecture notes of Associate Professor Zahir M. Hussain at RMIT University (Melbourne, 2001-2009), the research of Dr. Amin Z. Sadik (at QUT and RMIT, 2005-2008), and the notes of Professor Peter O'Shea at Queensland University of Technology. Part I of the book addresses the representation of analog and digital signals and systems in the time domain and in the frequency domain. The core topics covered are convolution, transforms (Fourier, Laplace, Z, Discrete-Time Fourier, and Discrete Fourier), filters, and random signal analysis. There is also a treatment of some important applications of DSP, including signal detection in noise, radar range estimation, banking and financial applications, and audio effects production. The design and implementation of digital systems (such as integrators, differentiators, resonators and oscillators) are also considered, along with the design of conventional digital filters. Part I is suitable for an elementary course in DSP. Part II, which is suitable for an advanced signal processing course, considers selected signal processing systems and techniques. Core topics covered are the Hilbert transformer, binary signal transmission, phase-locked loops, sigma-delta modulation, noise shaping, quantization, adaptive filters, and non-stationary signal analysis. Part III presents some selected advanced DSP topics. We hope that this book will contribute to the advancement of engineering education and that it will serve as a general reference book on digital signal processing.
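Of the core topics listed, discrete convolution is the most compact to illustrate. A direct implementation of y[n] = Σ_k x[k] h[n−k] for finite sequences might look like this (purely illustrative, not taken from the book):

```python
def convolve(x, h):
    """Direct-form linear convolution of two finite sequences."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

# A 3-point moving-average filter applied to a short signal.
signal = [1.0, 2.0, 3.0, 4.0]
kernel = [1 / 3, 1 / 3, 1 / 3]
smoothed = convolve(signal, kernel)
```

The same operation underlies the FIR filters and system responses treated in Part I.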
Abstract:
This important work describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of the relevance of the Vapnik-Chervonenkis (VC) dimension and of estimates of the dimension for several neural network models. In addition, Anthony and Bartlett develop a model of classification by real-output networks and demonstrate the usefulness of classification with a "large margin." The authors explain the role of scale-sensitive versions of the VC dimension in large margin classification, and in real prediction. Key chapters also discuss the computational complexity of neural network learning, describing a variety of hardness results, and outlining two efficient, constructive learning algorithms. The book is self-contained and accessible to researchers and graduate students in computer science, engineering, and mathematics.
Abstract:
Binary classification methods can be generalized in many ways to handle multiple classes. It turns out that not all generalizations preserve the nice property of Bayes consistency. We provide a necessary and sufficient condition for consistency which applies to a large class of multiclass classification methods. The approach is illustrated by applying it to some multiclass methods proposed in the literature.
Abstract:
We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of the concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d = VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this paper is a density bound of n·(n−1 choose ≤ d−1)/(n choose ≤ d) < d, where (m choose ≤ k) denotes the number of subsets of an m-element set of size at most k. This positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic-topological property of maximum classes of VC-dimension d: they are d-contractible simplicial complexes, extending the well-known characterization that d = 1 maximum classes are trees. We negatively resolve a minimum-degree conjecture of Kuzmin and Warmuth (the second part of a conjectured proof of correctness for Peeling): that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to an O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.
Abstract:
We consider the problem of binary classification where the classifier can, for a particular cost, choose not to classify an observation. Just as in the conventional classification problem, minimization of the sample average of the cost is a difficult optimization problem. As an alternative, we propose the optimization of a certain convex loss function φ, analogous to the hinge loss used in support vector machines (SVMs). Its convexity ensures that the sample average of this surrogate loss can be efficiently minimized. We study its statistical properties. We show that minimizing the expected surrogate loss—the φ-risk—also minimizes the risk. We also study the rate at which the φ-risk approaches its minimum value. We show that fast rates are possible when the conditional probability P(Y=1|X) is unlikely to be close to certain critical values.
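One convex surrogate with this flavour is a generalized hinge loss whose negative branch is steepened according to the rejection cost d, paired with a rule that abstains when the score is close to the decision boundary. The piecewise form, the slope (1 − d)/d, the threshold, and d = 0.2 below are a plausible sketch under these assumptions, not the paper's exact definitions.

```python
def generalized_hinge(z, d):
    """Convex surrogate for classification with a reject option (rejection cost d).

    The negative branch has slope -(1 - d) / d, so confident mistakes are
    penalised more heavily than the cost of simply abstaining.
    """
    a = (1.0 - d) / d
    if z < 0:
        return 1.0 - a * z
    if z < 1:
        return 1.0 - z
    return 0.0

def predict_with_reject(score, threshold=0.5):
    """Abstain (return 0) when |score| is within the rejection band."""
    if abs(score) <= threshold:
        return 0
    return 1 if score > 0 else -1

d = 0.2  # hypothetical rejection cost
losses = [generalized_hinge(z, d) for z in (-1.0, 0.0, 0.5, 2.0)]
decision = predict_with_reject(0.3)
```

The piecewise slopes (−4, −1, 0 for d = 0.2) are non-decreasing in z, so the surrogate is convex and its sample average can be minimized efficiently, which is the property the abstract emphasises.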