959 results for leprosy detection rate


Relevance: 100.00%

Abstract:

Even though crashes between trains and road users are rare events at railway level crossings, they are one of the major safety concerns for the Australian railway industry. Near-miss events at level crossings occur more frequently, and can provide more information about factors leading to level crossing incidents. In this paper we introduce a video analytics approach for automatically detecting and localizing vehicles in footage from train-mounted cameras, with the goal of detecting near-miss events. To detect and localize vehicles at level crossings, we extract patches from an image and classify each patch. We developed a region-proposal algorithm for generating patches, and we use a Convolutional Neural Network (CNN) to classify each patch. To localize vehicles in images, we combine the patches that are classified as vehicles according to their CNN scores and positions. We compared our system with the Deformable Part Models (DPM) and Regions with CNN features (R-CNN) object detectors. Experimental results on a railway dataset show that the recall rate of our proposed system is 29% higher than what can be achieved with the DPM or R-CNN detectors.
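The localization step above, combining patches classified as vehicles by their scores and positions, can be sketched as a greedy overlap merge. This is a minimal sketch under stated assumptions: the score threshold, the intersection-over-union overlap measure, and all values below are illustrative, not details from the paper.

```python
# Hedged sketch: merging CNN-classified patches into vehicle localizations.
# Thresholds and the IoU overlap criterion are illustrative assumptions.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def merge_patches(patches, score_thr=0.5, iou_thr=0.3):
    """Keep patches scored as 'vehicle', then greedily merge overlapping
    patches, retaining the highest-scoring one of each overlapping group."""
    kept = []
    candidates = sorted((p for p in patches if p[1] >= score_thr),
                        key=lambda p: -p[1])
    for box, score in candidates:
        if all(iou(box, k[0]) < iou_thr for k in kept):
            kept.append((box, score))
    return kept
```

For example, two heavily overlapping high-scoring patches collapse into one detection, while a distant patch survives as a separate vehicle.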

Relevance: 100.00%

Abstract:

Familial hypercholesterolemia (FH) is a common single-gene disorder which predisposes to coronary artery disease. In a previous study, we showed that around 20% of patients with definite FH had no identifiable gene defect after screening the entire coding region of the low density lipoprotein receptor (LDLR) gene and testing for the common Apolipoprotein B (ApoB) R3500Q mutation. In this study, we have extended the screen to additional families and have included the non-coding intronic splice regions of the gene. In families with definite FH (tendon xanthoma present, n = 68), the improved genetic screening protocol increased the detection rate of mutations to 87%. This high detection rate greatly enhances the potential value of this test as part of a clinical screening program for FH. In contrast, the use of a limited screen in patients with possible FH (n = 130) resulted in a detection rate of 26%, but this is still of significant benefit in the diagnosis of this genetic condition. We have also shown that 14% of LDLR defects are due to splice-site mutations and that the most frequent splice mutation in our series (c.1845+11C>G) is expressed at the RNA level. In addition, DNA samples from the patients in whom no LDLR or ApoB gene mutations were found were sequenced for the NARC-1 gene. No mutations were identified, which suggests that the role of NARC-1 in causing FH is minor. In a small proportion of families (

Relevance: 100.00%

Abstract:

N-gram analysis is an approach that investigates the structure of a program using bytes, characters or text strings. This research uses dynamic analysis to investigate malware detection with a classification approach based on N-gram analysis. A key issue with dynamic analysis is the length of time a program has to be run to ensure a correct classification. The motivation for this research is to find the optimum subset of operational codes (opcodes) that make the best indicators of malware, and to determine how long a program has to be monitored to ensure an accurate support vector machine (SVM) classification of benign and malicious software. The experiments within this study represent programs as opcode density histograms obtained through dynamic analysis over different program run periods. An SVM is used as the program classifier to determine the ability of different program run lengths to correctly determine the presence of malicious software. The findings show that malware can be detected with different program run lengths using a small number of opcodes.
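The opcode-density-histogram representation described above can be sketched in a few lines; the opcode vocabulary and the trace below are invented for illustration, and the SVM classifier itself is omitted.

```python
# Hedged sketch of an opcode density histogram from a dynamic trace.
# Vocabulary and trace are illustrative assumptions, not from the study.
from collections import Counter

def opcode_density_histogram(trace, vocabulary):
    """Fraction of the executed opcode trace occupied by each opcode of
    interest; opcodes outside the vocabulary are simply ignored."""
    counts = Counter(trace)
    total = len(trace)
    return {op: counts.get(op, 0) / total for op in vocabulary}
```

Histograms computed for different run lengths of the same program can then be fed to the classifier to study how run length affects accuracy.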

Relevance: 100.00%

Abstract:

An extensive set of machine learning and pattern classification techniques trained and tested on the KDD dataset failed to detect most of the user-to-root attacks. This paper aims to provide an approach for mitigating the negative aspects of the mentioned dataset, which led to low detection rates. A genetic algorithm is employed to derive rules for detecting various types of attacks. Rules are formed from the features of the dataset identified as the most important ones for each attack type. In this way we introduce a high level of generality, and thus achieve high detection rates while also greatly reducing the system training time. We then re-check the decisions of the user-to-root rules against the rules that detect other types of attacks, thereby decreasing the false-positive rate. The model was verified on KDD 99, demonstrating higher detection rates than those reported in the state of the art while maintaining a low false-positive rate.
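A minimal sketch of rule matching with the re-check step described above. The interval-rule encoding, the KDD feature names, and the bounds are illustrative assumptions; the paper evolves such rules with a genetic algorithm, which is omitted here.

```python
# Hedged sketch: rules as (feature, low, high) intervals over selected
# features; the re-check against other attack classes lowers false positives.

def matches(rule, record):
    """A rule fires when every feature interval it names is satisfied."""
    return all(low <= record[f] <= high for f, low, high in rule)

def classify(record, u2r_rules, other_rules):
    """Flag a record as user-to-root only if a U2R rule fires and no rule
    for another attack class also fires (the re-check step)."""
    if any(matches(r, record) for r in u2r_rules):
        if not any(matches(r, record) for r in other_rules):
            return "u2r"
        return "other"
    return "normal"
```

The feature names (`num_root`, `count`) used in the test are hypothetical stand-ins for the most important features per attack type.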

Relevance: 100.00%

Abstract:

Background and aim: The usefulness of high-definition colonoscopy plus i-scan (HD+i-SCAN) for average-risk colorectal cancer screening has not been fully assessed. The adenoma detection rate and other measurements, such as the number of adenomas per colonoscopy and the flat adenoma detection rate, have been recognized as markers of colonoscopy quality. The aim of the present study was to compare the diagnostic performance of HD+i-SCAN with that of a standard-resolution white-light colonoscope. Methods: This is a retrospective analysis of a prospectively collected screening colonoscopy database. A comparative analysis of the diagnostic yield of HD+i-SCAN versus standard-resolution colonoscopy for average-risk colorectal screening was conducted. Results: During the period of study, 155/163 (95.1%) patients met the inclusion criteria. The mean age was 56.9 years. Sixty of 155 (39%) colonoscopies were performed using HD+i-SCAN. Adenoma detection rates during the withdrawal of the standard-resolution versus HD+i-SCAN colonoscopies were 29.5% and 30% (p = n.s.). Adenomas per colonoscopy for standard-resolution versus HD+i-SCAN colonoscopies were 0.46 (SD = 0.9) and 0.72 (SD = 1.3) (p = n.s.). A greater number of flat adenomas were detected in the HD+i-SCAN group (6/60 vs. 2/95) (p < .05). Likewise, serrated adenomas/polyps per colonoscopy were also higher in the HD+i-SCAN group. Conclusions: HD+i-SCAN colonoscopy increases the flat adenoma detection rate and serrated adenomas/polyps per colonoscopy compared to standard colonoscopy in an average-risk screening population. HD+i-SCAN is a simple, available procedure that can be helpful, even for experienced providers. The performance of HD+i-SCAN and the substantial prevalence of flat lesions in our average-risk screening cohort support its usefulness in improving the efficacy of screening colonoscopies.
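The quality metrics compared above reduce to simple ratios; a minimal sketch using the flat-adenoma counts reported in the abstract (6/60 vs. 2/95):

```python
# Hedged sketch of the two colonoscopy quality metrics named above;
# function names are illustrative, counts are those quoted in the abstract.

def adenoma_detection_rate(colonoscopies_with_adenoma, total_colonoscopies):
    """Fraction of colonoscopies in which at least one adenoma was found."""
    return colonoscopies_with_adenoma / total_colonoscopies

def adenomas_per_colonoscopy(total_adenomas, total_colonoscopies):
    """Mean number of adenomas found per colonoscopy."""
    return total_adenomas / total_colonoscopies

# Flat adenoma detection, HD+i-SCAN (6/60) vs. standard resolution (2/95):
flat_hd = adenoma_detection_rate(6, 60)    # 0.10
flat_std = adenoma_detection_rate(2, 95)   # ~0.021
```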

Relevance: 100.00%

Abstract:

Serological tests for leprosy diagnosis using phenolic glycolipid-1 (PGL-1), regarded as an antigen specific to M. leprae, have opened up possibilities for studying the epidemiological behaviour of this disease. Questions such as the latency period of the disease, subclinical infection, and the importance of household contacts in controlling the endemic could be better analysed with this tool. This study aimed to verify whether there is an association between serological status and the occurrence of leprosy. A total of 6,520 people aged 5 years or older, belonging to a population of 7,416 inhabitants of the urban area of a municipality in São Paulo state characterized by high leprosy endemicity, were followed for 4 years after taking an anti-PGL-1 serological test at the start of follow-up. A group of 590 seropositive individuals (9.0%) was identified. During the period, 82 new cases of leprosy were diagnosed: 26 in the seropositive group (441 new cases/10,000 individuals) and 48 in the seronegative group (81/10,000). Among those who did not undergo serology, 8 new cases appeared (89/10,000). The analysis controlled for contact status, given that the age- and sex-standardized seropositivity rate was 9.61% among contacts and 7.65% among non-contacts. Taking seronegative non-contacts as the unexposed group, relative risks of developing the disease during the period were calculated from age-standardized detection rates, with the following results: seropositive household contacts had a rate of 1,704/10,000, 27 times higher than that of the unexposed group (63/10,000); seropositive non-contacts and seronegative contacts had rates of 274 and 198/10,000 respectively, both higher than that of the unexposed group and equal to each other. Seropositivity was associated with an 8.6-fold increase in the risk of leprosy among contacts and a 4.4-fold increase among non-contacts.
In the epidemiological situation studied, characterized by high leprosy endemicity, 50% of the new cases arose among seronegative non-contacts, that is, without a known source of infection. In practice, therefore, the anti-PGL-1 test used proves to be of limited applicability. The behaviour of anti-PGL-1 serology in areas of medium and low endemicity still needs to be studied before more substantiated conclusions can be drawn about its usefulness in controlling the endemic. Further serological research, and other research to improve early diagnosis of subclinical infection, including the detection of paucibacillary forms, is recommended in order to broaden the possibilities of influencing endemic control.
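The relative-risk arithmetic in this abstract can be reproduced directly; the rates below are the age-standardized detection rates per 10,000 quoted in the text.

```python
# Reproducing the abstract's relative-risk arithmetic; rates per 10,000
# person-years are those stated in the text.

def relative_risk(rate_exposed, rate_unexposed):
    """Ratio of the detection rate in the exposed group to that in the
    unexposed (seronegative non-contact) group."""
    return rate_exposed / rate_unexposed

# Seropositive household contacts (1,704/10,000) vs. unexposed (63/10,000):
rr_seropositive_contacts = relative_risk(1704, 63)  # ~27, as reported
```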

Relevance: 100.00%

Abstract:

Objective: The Brazilian National Hansen's Disease Control Program recently identified clusters with high disease transmission. Herein, we present different spatial analytical approaches to define highly vulnerable areas in one of these clusters. Method: The study area included 373 municipalities in the four Brazilian states of Maranhão, Pará, Tocantins and Piauí. Spatial analysis was based on municipalities as the observation unit, considering the following disease indicators: (i) rate of new cases per 100,000 population, (ii) rate of cases in those < 15 years per 100,000 population, (iii) new cases with grade-2 disability per 100,000 population and (iv) proportion of new cases with grade-2 disability. We performed descriptive spatial analysis, local empirical Bayesian analysis and the spatial scan statistic. Results: A total of 254 (68.0%) municipalities were classified as hyperendemic (mean annual detection rates > 40 cases per 100,000 inhabitants). There was a concentration of municipalities with higher detection rates in Pará and in the center of Maranhão. The spatial scan statistic identified 23 likely clusters of new leprosy case detection, most of them located in these two states. These clusters included only 32% of the total population but 55.4% of new leprosy cases. We also identified 16 significant clusters for the detection rate in those < 15 years and 11 likely clusters of new cases with grade-2 disability. Several clusters of new cases with grade-2 disability per population overlap with those of new case detection and of detection in children < 15 years of age. The proportion of new cases with grade-2 disability did not reveal any significant clusters. Conclusions: Several municipality clusters of high leprosy transmission and late diagnosis were identified in an endemic area using different statistical approaches.
The spatial scan statistic is adequate to validate and confirm high-risk leprosy areas for transmission and late diagnosis identified using descriptive spatial analysis and the local empirical Bayesian method. National and state leprosy control programs urgently need to intensify control actions in these highly vulnerable municipalities.
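A minimal sketch of empirical Bayesian rate smoothing, one of the approaches named above. This is the standard method-of-moments global smoother (the study uses a local variant over neighbouring municipalities), and the counts below are illustrative, not the study's data.

```python
# Hedged sketch: global empirical Bayesian smoothing of per-area rates.
# Crude rates from small populations are shrunk toward the overall mean.

def eb_smooth(cases, populations):
    """Shrink each area's crude rate toward the overall mean rate, more
    strongly where the population (and hence the evidence) is small."""
    rates = [c / p for c, p in zip(cases, populations)]
    mean = sum(cases) / sum(populations)            # overall rate
    n = len(rates)
    var = sum((r - mean) ** 2 for r in rates) / n   # variance of crude rates
    avg_pop = sum(populations) / n
    prior_var = max(var - mean / avg_pop, 0.0)      # method-of-moments prior
    out = []
    for r, p in zip(rates, populations):
        shrink = prior_var / (prior_var + mean / p)
        out.append(mean + shrink * (r - mean))
    return out
```

A municipality with few inhabitants and one case gets a rate pulled strongly toward the regional mean, while a large municipality keeps a rate close to its crude value.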

Relevance: 90.00%

Abstract:

Improving the performance of an incident detection system is essential to minimizing the effects of incidents. This paper puts forward a new incident detection method based on an in-car terminal consisting of a GPS module, a GSM module and a control module, together with optional parts such as airbag sensors and a mobile phone positioning system (MPPS) module. When a driver or vehicle discovers a freeway incident and initiates an alarm report, the incident location, obtained by GPS, MPPS or both, is automatically sent to a transport management center (TMC); the TMC then confirms the accident via closed-circuit television (CCTV) or other means. For this method, detection rate (DR), time to detect (TTD) and false alarm rate (FAR) are the most important performance targets. Finally, some feasible measures, such as a management mode, an education mode and suitable accident-confirmation approaches, are put forward to improve these targets.
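The three performance targets named above (DR, TTD, FAR) can be sketched as simple ratios and averages; the event-log representation below is an assumption for illustration.

```python
# Hedged sketch of the three incident-detection performance targets;
# timestamps and counts are illustrative, not from the paper.

def detection_rate(detected_incidents, total_incidents):
    """DR: fraction of actual incidents that were detected."""
    return detected_incidents / total_incidents

def false_alarm_rate(false_alarms, total_alarms):
    """FAR: fraction of raised alarms that were not real incidents."""
    return false_alarms / total_alarms

def mean_time_to_detect(detect_times, occur_times):
    """TTD: mean delay between incident occurrence and its detection."""
    deltas = [d - o for d, o in zip(detect_times, occur_times)]
    return sum(deltas) / len(deltas)
```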

Relevance: 90.00%

Abstract:

Pedestrians' use of MP3 players or mobile phones can pose the risk of being hit by motor vehicles. We present an approach for detecting a crash risk level using the computing power and the microphone of mobile devices, which can be used to alert the user in advance of an approaching vehicle so as to avoid a crash. A single feature-extractor classifier is not usually able to deal with the diversity of risky acoustic scenarios. In this paper, we address the problem of detecting vehicles approaching a pedestrian with a novel, simple, non-resource-intensive acoustic method. The method uses a set of existing statistical tools to mine signal features. Audio features are adaptively thresholded for relevance and classified with a three-component heuristic. The resulting Acoustic Hazard Detection (AHD) system has a very low false-positive detection rate. The results of this study could help mobile device manufacturers embed the presented features into future portable devices and contribute to road safety.
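A minimal sketch of adaptively thresholding an acoustic feature stream against a running background estimate, in the spirit of the method described above. The window size, the `k` factor, and the feature values are illustrative assumptions, not the paper's actual heuristic.

```python
# Hedged sketch: flag frames whose acoustic feature value exceeds the
# recent background mean plus k standard deviations. Parameters invented.
from collections import deque
from statistics import mean, pstdev

def adaptive_alerts(feature_values, window=8, k=3.0):
    """Return indices of frames flagged as potential approaching vehicles."""
    background = deque(maxlen=window)
    alerts = []
    for i, v in enumerate(feature_values):
        if len(background) == window:
            thr = mean(background) + k * pstdev(background)
            if v > thr:
                alerts.append(i)
                continue  # keep the event out of the background estimate
        background.append(v)
    return alerts
```

The threshold adapts to ambient noise: a loud street raises the background mean, so only values unusual relative to the recent past are flagged.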

Relevance: 90.00%

Abstract:

This paper presents an alternative approach to image segmentation that uses the spatial distribution of edge pixels rather than pixel intensities. The segmentation is achieved by a multi-layered approach and is intended to find suitable landing areas for an aircraft emergency landing. We combine standard techniques (edge detectors) with newly developed algorithms (line expansion and a geometry test) to design an original segmentation algorithm. Our approach removes the dependency on environmental factors that traditionally influence lighting conditions, which in turn have a negative impact on pixel-based segmentation techniques. We present test outcomes on realistic visual data collected from an aircraft, reporting preliminary feedback on the performance of the detection. We demonstrate a consistent detection rate of over 97%.
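A minimal sketch of segmenting by the spatial distribution of edge pixels rather than intensities: grid cells of a binary edge map with low edge density become candidate landing regions. The cell size and density threshold are illustrative assumptions, and the paper's line-expansion and geometry-test stages are omitted.

```python
# Hedged sketch: candidate landing areas as low-edge-density grid cells
# of a binary edge map. Cell size and threshold are invented parameters.

def low_edge_cells(edge_map, cell=4, max_density=0.1):
    """edge_map: 2D list of 0/1 edge pixels. Returns (row, col) indices of
    cells whose fraction of edge pixels is at most max_density."""
    rows, cols = len(edge_map), len(edge_map[0])
    out = []
    for r0 in range(0, rows - cell + 1, cell):
        for c0 in range(0, cols - cell + 1, cell):
            edges = sum(edge_map[r][c]
                        for r in range(r0, r0 + cell)
                        for c in range(c0, c0 + cell))
            if edges / (cell * cell) <= max_density:
                out.append((r0 // cell, c0 // cell))
    return out
```

Smooth terrain (few edges) yields candidate cells regardless of absolute brightness, which is the point of working with edge distribution instead of intensity.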

Relevance: 90.00%

Abstract:

The presence of insect pests in grain storages throughout the supply chain is a significant problem for farmers, grain handlers, and distributors world-wide. Insect monitoring and sampling programmes are used in the stored grains industry for the detection and estimation of pest populations. At the low pest densities dictated by economic and commercial requirements, the accuracy of both detection and abundance estimates can be influenced by variations in the spatial structure of pest populations over short distances. Geostatistical analysis of Rhyzopertha dominica populations in 2 and 3 dimensions showed that insect numbers were positively correlated over short (<0.5 cm) distances, and negatively correlated over longer (>10 cm) distances. At 35 °C, insects were located significantly further from the grain surface than at 25 and 30 °C. Dispersion metrics showed statistically significant aggregation in all cases. The observed heterogeneous spatial distribution of R. dominica may also be influenced by factors such as the site of initial infestation and disturbance during handling. To account for these additional factors, I significantly extended a simulation model that incorporates both pest growth and movement through a typical stored-grain supply chain. By incorporating the effects of abundance, initial infestation site, grain handling, and treatment on pest spatial distribution, I developed a supply chain model incorporating estimates of pest spatial distribution. This was used to examine several scenarios representative of grain movement through a supply chain, and to determine the influence of infestation location and grain disturbance on the sampling intensity required to detect pest infestations at various infestation rates. This study has investigated the effects of temperature, infestation point, and grain handling on the spatial distribution and detection of R. dominica.
The proportion of grain infested was found to be dependent upon abundance, initial pest location, and grain handling. Simulation modelling indicated that accounting for these factors when developing sampling strategies for stored grain has the potential to significantly reduce sampling costs while simultaneously improving the detection rate, resulting in reduced storage and pest management costs while improving grain quality.
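The sampling-intensity question above can be illustrated with the standard detection-probability formula for random sampling. The binomial simplification (independent sampling, uniform infestation) and the numbers below are assumptions for illustration, not the thesis's spatially explicit model.

```python
# Hedged sketch: probability of detecting an infestation when a fraction p
# of grain units is infested and n units are sampled independently.
import math

def detection_probability(p, n):
    """Probability that at least one of n sampled units is infested."""
    return 1 - (1 - p) ** n

def samples_needed(p, target=0.95):
    """Smallest n giving detection probability >= target."""
    return math.ceil(math.log(1 - target) / math.log(1 - p))
```

Under this simplification, a 1% infestation rate requires roughly 300 sampled units for 95% detection confidence; spatial aggregation of pests, as studied above, changes this picture and is why the thesis models spatial distribution explicitly.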

Relevance: 90.00%

Abstract:

Various intrusion detection systems (IDSs) reported in the literature have shown distinct preferences for detecting a certain class of attack with improved accuracy, while performing moderately on the other classes. In view of the enormous computing power available in present-day processors, deploying multiple IDSs in the same network to obtain best-of-breed solutions has been attempted earlier. The paper presented here addresses the problem of optimizing the performance of IDSs using sensor fusion with multiple sensors. The trade-off between the detection rate and false alarms with multiple sensors is highlighted. It is illustrated that the performance of the detector is better when the fusion threshold is determined according to the Chebyshev inequality. In the proposed data-dependent decision (DD) fusion method, the performance optimization of individual IDSs is first addressed. A neural network supervised learner has been designed to determine the weights of individual IDSs depending on their reliability in detecting a certain attack. The final stage of this DD fusion architecture is a sensor fusion unit which performs the weighted aggregation in order to make an appropriate decision. This paper theoretically models the fusion of IDSs for the purpose of demonstrating the improvement in performance, supplemented with empirical evaluation.
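The weighted-aggregation fusion stage described above can be sketched as follows; the weights stand in for the neural-network learner's outputs and the threshold for the Chebyshev-derived one, both of which are illustrative here.

```python
# Hedged sketch of the sensor-fusion unit: a weighted aggregation of
# per-IDS scores compared against a fusion threshold. All values invented.

def fuse(sensor_scores, weights, threshold):
    """Raise an alarm when the weighted mean of the per-IDS scores
    (weights reflecting each IDS's reliability) reaches the threshold."""
    weighted = sum(w * x for w, x in zip(weights, sensor_scores))
    return (weighted / sum(weights)) >= threshold
```

An IDS known to be reliable for the attack class at hand contributes more to the fused decision, which is the role the supervised learner plays in the DD architecture.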

Relevance: 90.00%

Abstract:

Generating discriminative input features is a key requirement for achieving highly accurate classifiers. The process of generating features from raw data is known as feature engineering and it can take significant manual effort. In this paper we propose automated feature engineering to derive a suite of additional features from a given set of basic features, with the aim both of improving classifier accuracy through discriminative features and of assisting data scientists through automation. Our implementation is specific to HTTP computer network traffic. To measure the effectiveness of our proposal, we compare the performance of a supervised machine learning classifier built with automated feature engineering against one using human-guided features. The classifier addresses a problem in computer network security, namely the detection of HTTP tunnels. We use Bro to process network traffic into base features and then apply automated feature engineering to calculate a larger set of derived features. The derived features are calculated without favour to any base feature and include entropy, length and N-grams for all string features, and counts and averages over time for all numeric features. Feature selection is then used to find the most relevant subset of these features. Testing showed that both classifiers achieved a detection rate above 99.93% at a false-positive rate below 0.01%. For our datasets, we conclude that automated feature engineering can increase classifier development speed and reduce technical difficulty by removing manual feature engineering, while maintaining classification accuracy.
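The derived features named above for string-valued base features (entropy, length, character N-grams) can be sketched in a few lines; the helper names are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch: deriving entropy, length, and character N-gram features
# from a string-valued base feature such as an HTTP URI. Names invented.
import math
from collections import Counter

def shannon_entropy(s):
    """Shannon entropy of the character distribution, in bits."""
    if not s:
        return 0.0
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def char_ngrams(s, n=2):
    """Counts of overlapping character n-grams."""
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def derive_features(s):
    """Bundle the derived features for one string-valued base feature."""
    return {"length": len(s),
            "entropy": shannon_entropy(s),
            "bigrams": char_ngrams(s, 2)}
```

High-entropy, unusually long URI strings are a classic signal of tunnelled or encoded payloads, which is why such derived features can be discriminative for HTTP tunnel detection.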