884 results for Feature vectors
Abstract:
We describe remarkable success in controlling the dengue vectors Aedes aegypti (L.) and Aedes albopictus (Skuse) in 6 communes with 11,675 households and 49,647 people in the northern provinces of Haiphong, Hung Yen, and Nam Dinh in Vietnam. The communes were selected for high-frequency use of large outdoor concrete tanks and wells. These were found to be the source of 49.6-98.4% of Ae. aegypti larvae and were amenable to treatment with local Mesocyclops, mainly M. woutersi Van der Velde, M. aspericornis (Daday) and M. thermocyclopoides Harada. Knowledge, attitude, and practice surveys were performed to determine whether the communities viewed dengue and dengue hemorrhagic fever as a serious health threat; to determine their knowledge of the etiology, attitudes, and practices regarding control methods including Mesocyclops; and to determine their receptivity to various information methods. On the basis of the knowledge, attitude, and practice data, the community-based dengue control program comprised a system of local leaders, health volunteers, teachers, and schoolchildren, supported by health professionals. Recycling of discards for economic gain was enhanced, where appropriate, and this, plus 37 clean-up campaigns, removed small containers unsuitable for Mesocyclops treatment. A previously successful eradication at Phan Boi village (Hung Yen province) was extended to 7 other villages forming Di Su commune (1,750 households) in the current study. Complete control was also achieved in Nghia Hiep (Hung Yen province) and in Xuan Phong (Nam Dinh province); control efficacy was greater than or equal to 99.7% in the other 3 communes (Lac Vien in Haiphong, Nghia Dong, and Xuan Kien in Nam Dinh). Although tanks and wells were the key container types for Ae. aegypti productivity, discarded materials were the source of 51% of the standing crop of Ae. albopictus. Aedes albopictus larvae were eliminated from the 3 Nam Dinh communes, and 86-98% control was achieved in the other 3 communes. Variable dengue attack rates made the clinical and serological comparison of control and untreated communes problematic, but these data indicate that clinical surveillance by itself is inadequate to monitor dengue transmission.
Abstract:
We have previously demonstrated the ability of the vaccine vectors based on replicon RNA of the Australian flavivirus Kunjin (KUN) to induce protective antiviral and anticancer CD8(+) T-cell responses using murine polyepitope as a model immunogen (I. Anraku, T. J. Harvey, R. Linedale, J. Gardner, D. Harrich, A. Suhrbier, and A. A. Khromykh, J. Virol. 76:3791-3799, 2002). Here we showed that immunization of BALB/c mice with KUN replicons encoding HIV-1 Gag antigen resulted in induction of both Gag-specific antibody and protective Gag-specific CD8(+) T-cell responses. Two immunizations with KUNgag replicons in the form of virus-like particles (VLPs) induced anti-Gag antibodies with titers of greater than or equal to 1:10,000. Immunization with KUNgag replicons delivered as plasmid DNA, naked RNA, or VLPs induced potent Gag-specific CD8(+) T-cell responses, with one immunization of KUNgag VLPs inducing 4.5-fold more CD8(+) T cells than the number induced after immunization with recombinant vaccinia virus carrying the gag gene (rVVgag). Two immunizations with KUNgag VLPs also provided significant protection against challenge with rVVgag. Importantly, KUN replicon VLP vaccinations induced long-lasting immune responses with CD8(+) T cells able to secrete gamma interferon and to mediate protection 6 to 10 months after immunization. These results illustrate the potential value of the KUN replicon vectors for human immunodeficiency virus vaccine design.
Abstract:
Since the pioneering work of Charles Nicolle in 1909 [see Gross (1996) Proc Natl Acad Sci USA 93:10539-10540] most medical officers and scientists have assumed that body lice are the sole vectors of Rickettsia prowazekii, the aetiological agent of louse-borne epidemic typhus (LBET). Here we review the evidence for the axiom that head lice are not involved in epidemics of LBET. Laboratory experiments demonstrate the ability of head lice to transmit R. prowazekii, but evidence for this in the field has not been reported. However, the assumption that head lice do not transmit R. prowazekii has meant that head lice have not been examined for R. prowazekii during epidemics of LBET. The strong association between obvious (high) infestations of body lice and LBET has contributed to this perception, but this association does not preclude head lice as vectors of R. prowazekii. Indeed, where the prevalence and intensity of body louse infestations may be high (e.g. during epidemics of LBET), the prevalence and intensity of head louse infestations is generally high as well. This review of the epidemiology of head louse and body louse infestations, and of LBET, indicates that head lice are potential vectors of R. prowazekii in the field. Simple observations in the field would reveal whether or not head lice are natural vectors of this major human pathogen.
Abstract:
Cryptographic software development is a challenging field: high performance must be achieved, while ensuring correctness and compliance with low-level security policies. CAO is a domain-specific language designed to assist development of cryptographic software. An important feature of this language is the design of a novel type system introducing native types such as predefined sized vectors, matrices and bit strings, residue classes modulo an integer, finite fields and finite field extensions, allowing for extensive static validation of source code. We present the formalisation, validation and implementation of this type system.
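CAO's concrete syntax is not reproduced in the abstract; as a rough illustration of what size-indexed native types enable, here is a minimal Python sketch (all names are hypothetical, not CAO's) of types indexed by a length or a modulus, where an operation only type-checks when the indices agree, analogous to what such a type system verifies statically:

from dataclasses import dataclass

@dataclass(frozen=True)
class VecType:
    """A vector type indexed by its static length, like vec[n]."""
    length: int

@dataclass(frozen=True)
class ModType:
    """A residue-class type Z/nZ, indexed by its modulus."""
    modulus: int

def check_binop(a, b):
    """Component-wise operations are only well-typed when both
    operands carry the same indexed type (same length/modulus)."""
    if a != b:
        raise TypeError(f"type mismatch: {a} vs {b}")
    return a

check_binop(VecType(4), VecType(4))    # well-typed: vec[4] + vec[4]
# check_binop(VecType(4), VecType(5))  # rejected: vec[4] + vec[5]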
Abstract:
In recent years, it has become increasingly clear that neurodegenerative diseases involve protein aggregation, a process often used as a readout of disease progression and to develop therapeutic strategies. This work presents an image processing tool to automatically segment, classify, and quantify these aggregates and the whole 3D body of the nematode Caenorhabditis elegans. A total of 150 data set images, containing different slices, were captured with a confocal microscope from animals of distinct genetic conditions. Because of the animals' transparency, most of the slice pixels appeared dark, hampering direct reconstruction of the body volume. Therefore, for each data set, all slices were stacked into one single 2D image in order to determine a volume approximation. The gradient of this image was input to an anisotropic diffusion algorithm that uses Tukey's biweight as the edge-stopping function. The median of the resulting image histogram was used to dynamically determine a thresholding level, which allows the determination of a smoothed exterior contour of the worm and, by thinning its skeleton, the medial axis of the worm body. Based on this exterior contour diameter and the medial axis of the animal, random 3D points were then calculated to produce a volume mesh approximation. The protein aggregations were subsequently segmented based on an iso-value and blended with the resulting volume mesh. The results obtained were consistent with qualitative observations in the literature, allowing non-biased, reliable, and high-throughput quantification of protein aggregates. This may lead to a significant improvement in treatment planning and prevention interventions for neurodegenerative diseases.
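For concreteness, a minimal NumPy sketch of the smoothing step the abstract describes: Perona-Malik-style diffusion with Tukey's biweight as the edge-stopping function. The parameter values and the periodic border handling are illustrative choices, not the paper's:

import numpy as np

def tukey_g(grad, sigma):
    """Tukey's biweight edge-stopping function: zero beyond |grad| > sigma,
    so diffusion halts across strong edges and smooths flat regions."""
    g = (1.0 - (grad / sigma) ** 2) ** 2
    g[np.abs(grad) > sigma] = 0.0
    return g

def diffuse(img, sigma=0.1, lam=0.2, iters=20):
    """Anisotropic diffusion sketch; assumes intensities roughly in [0, 1].
    Uses a periodic border (np.roll) for brevity."""
    img = img.astype(float).copy()
    for _ in range(iters):
        # Differences toward the four neighbours.
        n = np.roll(img, -1, 0) - img
        s = np.roll(img, 1, 0) - img
        e = np.roll(img, -1, 1) - img
        w = np.roll(img, 1, 1) - img
        img += lam * sum(tukey_g(d, sigma) * d for d in (n, s, e, w))
    return img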
Abstract:
In music genre classification, most approaches rely on statistical characteristics of low-level features computed on short audio frames. In these methods, it is implicitly considered that frames carry equally relevant information loads and that either individual frames, or distributions thereof, somehow capture the specificities of each genre. In this paper we study the representation space defined by short-term audio features with respect to class boundaries, and compare different processing techniques to partition this space. These partitions are evaluated in terms of accuracy on two genre classification tasks, with several types of classifiers. Experiments show that a randomized and unsupervised partition of the space, used in conjunction with a Markov Model classifier, leads to accuracies comparable to the state of the art. We also show that unsupervised partitions of the space tend to create fewer hubs.
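A sketch of that pipeline in Python, using a k-means codebook as a stand-in for the randomized partition studied in the paper (scikit-learn is assumed; the codebook size is illustrative):

import numpy as np
from sklearn.cluster import KMeans

def train_genre_models(frames_by_genre, k=64, seed=0):
    """Quantize short-term feature frames with an unsupervised codebook,
    then fit one first-order Markov (bigram) model per genre.
    frames_by_genre: dict mapping genre -> list of (n_frames, n_dims) arrays."""
    all_frames = np.vstack([t for ts in frames_by_genre.values() for t in ts])
    codebook = KMeans(n_clusters=k, random_state=seed, n_init=10).fit(all_frames)
    models = {}
    for genre, tracks in frames_by_genre.items():
        counts = np.ones((k, k))           # Laplace smoothing
        for track in tracks:
            sym = codebook.predict(track)  # frame sequence -> symbol sequence
            np.add.at(counts, (sym[:-1], sym[1:]), 1)
        models[genre] = np.log(counts / counts.sum(axis=1, keepdims=True))
    return codebook, models

def classify(track, codebook, models):
    """Pick the genre whose Markov model assigns the symbol sequence
    the highest log-likelihood."""
    sym = codebook.predict(track)
    return max(models, key=lambda g: models[g][sym[:-1], sym[1:]].sum())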
Abstract:
PURPOSE: Fatty liver disease (FLD) is an increasingly prevalent disease that can be reversed if detected early. Ultrasound is the safest and most ubiquitous method for identifying FLD. Since expert sonographers are required to accurately interpret liver ultrasound images, their scarcity results in interobserver variability. For more objective interpretation, high accuracy, and quick second opinions, computer aided diagnostic (CAD) techniques may be exploited. The purpose of this work is to develop one such CAD technique for accurate classification of normal livers and abnormal livers affected by FLD. METHODS: In this paper, the authors present a CAD technique (called Symtosis) that uses a novel combination of significant features based on the texture, wavelet transform, and higher order spectra of the liver ultrasound images in various supervised learning-based classifiers in order to determine parameters that classify normal and FLD-affected abnormal livers. RESULTS: On evaluating the proposed technique on a database of 58 abnormal and 42 normal liver ultrasound images, the authors were able to achieve a high classification accuracy of 93.3% using the decision tree classifier. CONCLUSIONS: This high accuracy, together with the completely automated classification procedure, makes the authors' proposed technique highly suitable for clinical deployment and usage.
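Once the descriptors are extracted, the classification stage reduces to standard supervised learning; a minimal scikit-learn sketch follows. The feature matrix here is a random placeholder, since the paper's exact texture/wavelet/higher-order-spectra descriptors are not reproduced:

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# X: one row of image descriptors per ultrasound image (placeholders here);
# y: 0 = normal liver, 1 = FLD-affected.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))
y = rng.integers(0, 2, size=100)

clf = DecisionTreeClassifier(max_depth=4, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # mean cross-validated accuracy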
Abstract:
In the last decade, local image features have been widely used in robot visual localization. To assess image similarity, a strategy exploiting these features compares raw descriptors extracted from the current image to those in the models of places. This paper addresses the ensuing step in this process, where a combining function must be used to aggregate results and assign each place a score. Casting the problem in the multiple classifier systems framework, we compare several candidate combiners with respect to their performance in the visual localization task. A deeper insight into the potential of the sum and product combiners is provided by testing two extensions of these algebraic rules: threshold and weighted modifications. In addition, a voting method, previously used in robot visual localization, is assessed. All combiners are tested on a visual localization task, carried out on a public dataset. It is experimentally demonstrated that the sum rule extensions globally achieve the best performance. The voting method, whilst competitive with the algebraic rules in their standard form, is shown to be outperformed by both their modified versions.
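A compact Python sketch of the combiners being compared. The weighting and the log-floor are illustrative stand-ins; the paper's exact threshold and weighted modifications are not reproduced:

import numpy as np

def sum_rule(S, w=None):
    """Weighted sum across classifiers; S has shape (n_classifiers, n_places)."""
    w = np.ones(len(S)) if w is None else np.asarray(w)
    return w @ S

def product_rule(S, floor=1e-12):
    """Product rule via a sum of logs; flooring keeps a single
    near-zero score from vetoing a place outright."""
    return np.log(np.clip(S, floor, None)).sum(axis=0)

def majority_vote(S):
    """Each classifier votes for its top-scoring place."""
    return np.bincount(S.argmax(axis=1), minlength=S.shape[1])

S = np.array([[0.7, 0.2, 0.1],
              [0.5, 0.4, 0.1]])
for rule in (sum_rule, product_rule, majority_vote):
    print(rule.__name__, rule(S).argmax())  # index of the winning place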
Abstract:
Research on the problem of feature selection for clustering continues to develop. This is a challenging task, mainly due to the absence of class labels to guide the search for relevant features. Categorical feature selection for clustering has rarely been addressed in the literature, with most of the proposed approaches having focused on numerical data. In this work, we propose an approach to simultaneously cluster categorical data and select a subset of relevant features. Our approach is based on a modification of a finite mixture model (of multinomial distributions), where a set of latent variables indicate the relevance of each feature. To estimate the model parameters, we implement a variant of the expectation-maximization algorithm that simultaneously selects the subset of relevant features, using a minimum message length criterion. The proposed approach compares favourably with two baseline methods: a filter based on an entropy measure and a wrapper based on mutual information. The results obtained on synthetic data illustrate the ability of the proposed expectation-maximization method to recover the ground truth. An application to real data, relating to official statistics, shows its usefulness.
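A plausible reconstruction of the model, following the feature-saliency formulation of Law, Figueiredo, and Jain (2004) that this line of work builds on (the notation is ours, not the paper's): each feature d has a saliency \rho_d, relevant features follow component-specific multinomials, and irrelevant ones a common multinomial q:

p(x) = \sum_{k=1}^{K} \alpha_k \prod_{d=1}^{D} \left[ \rho_d \, p(x_d \mid \theta_{kd}) + (1 - \rho_d) \, q(x_d) \right]

EM then maximizes this likelihood penalized by a minimum message length term, which drives the saliencies of uninformative features toward zero so they can be pruned.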
Abstract:
In research on Silent Speech Interfaces (SSI), different sources of information (modalities) have been combined, aiming at obtaining better performance than the individual modalities. However, when combining these modalities, the dimensionality of the feature space rapidly increases, yielding the well-known "curse of dimensionality". As a consequence, in order to extract useful information from this data, one has to resort to feature selection (FS) techniques to lower the dimensionality of the learning space. In this paper, we assess the impact of FS techniques for silent speech data, in a dataset with 4 non-invasive and promising modalities, namely: video, depth, ultrasonic Doppler sensing, and surface electromyography. We consider two supervised (mutual information and Fisher's ratio) and two unsupervised (mean-median and arithmetic-mean geometric-mean) FS filters. The evaluation was made by assessing the classification accuracy (word recognition error) of three well-known classifiers (k-nearest neighbors, support vector machines, and dynamic time warping). The key results of this study show that both unsupervised and supervised FS techniques improve the classification accuracy on both individual and combined modalities. For instance, on the video component, we attain relative performance gains of 36.2% in error rates. FS is also useful as pre-processing for feature fusion.
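NumPy sketches of three of the four filters (mutual information is omitted for brevity). The mean-median and arithmetic-mean/geometric-mean definitions follow formulations commonly used in this line of work and are an assumption here, not quoted from the paper:

import numpy as np

def mean_median(X):
    """Unsupervised MM relevance per feature (columns of X):
    |mean - median|, a cheap dispersion proxy."""
    return np.abs(X.mean(axis=0) - np.median(X, axis=0))

def am_gm(X):
    """Unsupervised AMGM relevance: arithmetic over geometric mean of
    exp(feature); equals 1 for a constant feature, grows with dispersion.
    Assumes roughly standardized features to avoid overflow."""
    return np.exp(X).mean(axis=0) / np.exp(X.mean(axis=0))

def fisher_ratio(X, y):
    """Supervised Fisher's ratio for a two-class problem."""
    a, b = X[y == 0], X[y == 1]
    return (a.mean(0) - b.mean(0)) ** 2 / (a.var(0) + b.var(0) + 1e-12)

def top_m(X, scores, m):
    """Keep the m highest-scoring features."""
    return X[:, np.argsort(scores)[::-1][:m]]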
Abstract:
Discrete data representations are necessary, or at least convenient, in many machine learning problems. While feature selection (FS) techniques aim at finding relevant subsets of features, the goal of feature discretization (FD) is to find concise (quantized) data representations, adequate for the learning task at hand. In this paper, we propose two incremental methods for FD. The first method belongs to the filter family, in which the quality of the discretization is assessed by a (supervised or unsupervised) relevance criterion. The second method is a wrapper, where discretized features are assessed using a classifier. Both methods can be coupled with any static (unsupervised or supervised) discretization procedure and can be used to perform FS as pre-processing or post-processing stages. The proposed methods attain efficient representations suitable for binary and multi-class problems with different types of data, being competitive with existing methods. Moreover, using well-known FS methods with the features discretized by our techniques leads to better accuracy than with the features discretized by other methods or with the original features.
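A hedged sketch of the filter variant. The uniform quantizer and the greedy bit-allocation policy are a plausible reconstruction, not the paper's exact procedure: features are refined one bit at a time, and refinement stops when the relevance criterion no longer improves:

import numpy as np

def quantize(col, bits):
    """Uniform scalar quantization of one feature into 2**bits levels."""
    lo, hi = col.min(), col.max()
    return np.floor((col - lo) / (hi - lo + 1e-12) * 2 ** bits).astype(int)

def incremental_fd(X, relevance, max_bits=5):
    """Greedy incremental discretization: repeatedly grant one more bit
    to the feature whose refined version gains the most relevance."""
    n_feat = X.shape[1]
    bits = np.ones(n_feat, dtype=int)
    while True:
        gains = np.full(n_feat, -np.inf)
        for j in range(n_feat):
            if bits[j] < max_bits:
                cur = relevance(quantize(X[:, j], bits[j]))
                nxt = relevance(quantize(X[:, j], bits[j] + 1))
                gains[j] = nxt - cur
        j = int(np.argmax(gains))
        if gains[j] <= 0:
            break
        bits[j] += 1
    return np.column_stack([quantize(X[:, j], bits[j]) for j in range(n_feat)])

# Usage with an unsupervised relevance criterion, e.g. quantized variance:
# Q = incremental_fd(X, lambda q: q.var())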
Abstract:
Many learning problems require handling high dimensional datasets with a relatively small number of instances. Learning algorithms are thus confronted with the curse of dimensionality, and need to address it in order to be effective. Examples of these types of data include the bag-of-words representation in text classification problems and gene expression data for tumor detection/classification. Usually, among the high number of features characterizing the instances, many may be irrelevant (or even detrimental) for the learning tasks. It is thus clear that there is a need for adequate techniques for feature representation, reduction, and selection, to improve classification accuracy and reduce memory requirements. In this paper, we propose combined unsupervised feature discretization and feature selection techniques, suitable for medium and high-dimensional datasets. The experimental results on several standard datasets, with both sparse and dense features, show the efficiency of the proposed techniques as well as improvements over previous related techniques.
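Chaining the two stages is then a short pipeline; a minimal unsupervised sketch (uniform quantization plus a |mean - median| relevance ranking, both stand-ins for the paper's specific choices):

import numpy as np

def discretize_then_select(X, bits=3, m=10):
    """Quantize each feature to 2**bits uniform levels, rank the
    discretized features by |mean - median|, keep the top m."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    Q = np.floor((X - lo) / (hi - lo + 1e-12) * 2 ** bits).astype(int)
    scores = np.abs(Q.mean(axis=0) - np.median(Q, axis=0))
    keep = np.argsort(scores)[::-1][:m]
    return Q[:, keep], keep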
Abstract:
Feature selection is a central problem in machine learning and pattern recognition. On large datasets (in terms of dimension and/or number of instances), using search-based or wrapper techniques can be computationally prohibitive. Moreover, many filter methods based on relevance/redundancy assessment also take a prohibitively long time on high-dimensional datasets. In this paper, we propose efficient unsupervised and supervised feature selection/ranking filters for high-dimensional datasets. These methods use low-complexity relevance and redundancy criteria, applicable to supervised, semi-supervised, and unsupervised learning, being able to act as pre-processors for computationally intensive methods to focus their attention on smaller subsets of promising features. The experimental results, with up to 10^5 features, show the time efficiency of our methods, with lower generalization error than state-of-the-art techniques, while being dramatically simpler and faster.
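A sketch of the relevance-then-redundancy idea in Python. The similarity measure (absolute correlation) and the threshold are illustrative assumptions, not the paper's exact criteria:

import numpy as np

def rank_and_prune(X, scores, max_sim=0.9):
    """Visit features by decreasing relevance; keep a feature only if its
    absolute correlation with every already-kept feature is below max_sim.
    Cost grows only with the number of kept features per candidate."""
    kept = []
    for j in np.argsort(scores)[::-1]:
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < max_sim for k in kept):
            kept.append(int(j))
    return kept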
Abstract:
A laboratory study was conducted to test the toxicity of synthetic insecticides added to defibrinated sheep blood kept at room temperature and offered as food to the following triatomine species: Triatoma infestans, Panstrongylus megistus, Triatoma vitticeps, Triatoma pseudomaculata, Triatoma brasiliensis and Rhodnius prolixus. The insecticides used, at a concentration of 1 g/l, were: HCH, DDT, Malathion and Trichlorfon, and the lethality observed at the end of a 7-day period varied according to the active principle of each. HCH was the most effective by the oral route, killing 100% of the insects, except P. megistus (95.7%) and T. pseudomaculata (94.1%). Trichlorfon killed the insects at rates ranging from 71.8% (T. vitticeps) to 98% (R. prolixus). Malathion was slightly less efficient, killing the insects at rates from 56.8% (T. vitticeps) to 97% (T. brasiliensis). DDT was the least effective, with a killing rate of 10% (T. vitticeps) to 75% (T. brasiliensis). Since the tests were performed at room temperature, we suggest that baits of this type should be tried for the control of triatomines in the field.