871 results for Data detection


Relevance: 40.00%

Abstract:

Distribution of socio-economic features in urban space is an important source of information for land and transportation planning. The metropolization phenomenon has changed the spatial distribution of professions and has given rise to distinct spatial patterns that the urban planner must know in order to plan a sustainable city. Such distributions can be discovered by statistical and learning algorithms. In this paper, an unsupervised classification method and a cluster detection method are discussed and applied to analyze the socio-economic structure of Switzerland. The unsupervised classification method, based on Ward's classification and self-organizing maps, is used to classify the municipalities of the country and reduces high-dimensional input information to an interpretable socio-economic landscape. The cluster detection method, the spatial scan statistic, is used in a more targeted manner to detect hot spots of certain types of service activities; it is applied to distribution services in the agglomeration of Lausanne. Results show the emergence of new centralities and can be analyzed in both transportation and social terms.
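The spatial scan statistic mentioned above is commonly computed, following Kulldorff, as a likelihood ratio maximized over circular candidate zones. A minimal Poisson-model sketch follows; the data, function names, and single-centre circular-zone strategy are illustrative assumptions, not details from the paper:

```python
import numpy as np

def poisson_scan_llr(c, C, n, N):
    """Kulldorff-style log-likelihood ratio for one candidate zone under a
    Poisson model: c observed cases inside, C total cases, n baseline
    (expected cases) inside, N total baseline. Returns 0 for non-hot-spots."""
    if n <= 0 or N - n <= 0 or c / n <= (C - c) / (N - n):
        return 0.0
    xlogy = lambda x, y: x * np.log(y) if x > 0 else 0.0
    return xlogy(c, c / n) + xlogy(C - c, (C - c) / (N - n)) - xlogy(C, C / N)

def best_circular_zone(xy, cases, baseline, center_idx, max_frac=0.5):
    """Grow circles around one location by nearest neighbours and return
    the zone (as an index tuple) with the highest log-likelihood ratio."""
    C, N = float(cases.sum()), float(baseline.sum())
    order = np.argsort(((xy - xy[center_idx]) ** 2).sum(axis=1))
    best_llr, best_zone, c, n = 0.0, (), 0.0, 0.0
    for k, j in enumerate(order, start=1):
        c += cases[j]
        n += baseline[j]
        if n >= max_frac * N:
            break  # zones covering more than half the baseline are not scanned
        llr = poisson_scan_llr(c, C, n, N)
        if llr > best_llr:
            best_llr, best_zone = llr, tuple(int(i) for i in order[:k])
    return best_llr, best_zone
```

A full scan would repeat this over every centre and assess significance by Monte Carlo replication of the case counts.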

Relevance: 40.00%

Abstract:

Objectives. To study the utility of the Mini-Cog test for detection of patients with cognitive impairment (CI) in primary care (PC). Methods. We pooled data from two phase III studies conducted in Spain. Patients with complaints or suspicion of CI were consecutively recruited by PC physicians. The cognitive diagnosis was performed by an expert neurologist, after formal neuropsychological evaluation. The Mini-Cog score was calculated post hoc, and its diagnostic utility was evaluated and compared with the utility of the Mini-Mental State (MMS), the Clock Drawing Test (CDT), and the sum of the MMS and the CDT (MMS + CDT) using the area under the receiver operating characteristic curve (AUC). The best cut points were obtained on the basis of diagnostic accuracy (DA) and kappa index. Results. A total sample of 307 subjects (176 CI) was analyzed. The Mini-Cog displayed an AUC (±SE) of 0.78 ± 0.02, which was significantly inferior to the AUC of the CDT (0.84 ± 0.02), the MMS (0.84 ± 0.02), and the MMS + CDT (0.86 ± 0.02). The best cut point of the Mini-Cog was 1/2 (sensitivity 0.60, specificity 0.90, DA 0.73, and kappa index 0.48 ± 0.05). Conclusions. The utility of the Mini-Cog for detection of CI in PC was very modest, clearly inferior to the MMS or the CDT. These results do not permit recommendation of the Mini-Cog in PC.
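The evaluation described (AUC, then a cut point chosen for diagnostic accuracy) can be sketched as below. The synthetic scores are illustrative and assume, as with the Mini-Cog, that lower scores indicate impairment:

```python
import numpy as np

def auc_mann_whitney(scores, labels):
    """AUC as P(score_case > score_control), counting ties as 1/2
    (Mann-Whitney formulation); labels: 1 = case, 0 = control."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    gt = (pos[:, None] > neg[None, :]).mean()
    eq = (pos[:, None] == neg[None, :]).mean()
    return gt + 0.5 * eq

def best_cutpoint(scores, labels):
    """Choose the cut maximizing diagnostic accuracy, classifying a
    subject as a case when score <= cut (low score = worse)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    best = None
    for cut in np.unique(scores):
        pred = (scores <= cut).astype(int)
        acc = (pred == labels).mean()
        sens = pred[labels == 1].mean()
        spec = 1.0 - pred[labels == 0].mean()
        if best is None or acc > best[1]:
            best = (float(cut), acc, sens, spec)
    return best  # (cut, accuracy, sensitivity, specificity)
```

The paper's kappa-based selection would add an agreement criterion on top of raw accuracy; this sketch uses accuracy alone.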

Relevance: 40.00%

Abstract:

The 2009-2010 Data Fusion Contest organized by the Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society focused on the detection of flooded areas using multi-temporal and multi-modal images. Both high spatial resolution optical and synthetic aperture radar data were provided. The goal was not only to identify the best algorithms (in terms of accuracy), but also to investigate the further improvement achievable through decision fusion. This paper presents the four awarded algorithms and the conclusions of the contest, investigating both supervised and unsupervised methods and the use of multi-modal data for flood detection. Interestingly, a simple unsupervised change detection method provided accuracy similar to that of the supervised approaches, and a digital elevation model-based predictive method yielded a comparable projected change detection map without using post-event data.
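A simple unsupervised change detection of the kind the contest highlighted can be sketched as pre/post image differencing with an automatic (Otsu) threshold. This is an illustrative baseline, not one of the awarded algorithms:

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Otsu's method: pick the threshold maximizing between-class variance
    of the value histogram."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = np.cumsum(p)              # weight of the low class
    m = np.cumsum(p * centers)    # cumulative mean of the low class
    mt = m[-1]                    # overall mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mt * w - m) ** 2 / (w * (1.0 - w))
    var_between = np.nan_to_num(var_between)
    return centers[np.argmax(var_between)]

def change_map(pre, post):
    """Unsupervised change detection: absolute difference of the two
    images, thresholded automatically by Otsu's method."""
    diff = np.abs(post.astype(float) - pre.astype(float))
    return diff > otsu_threshold(diff.ravel())
```

Real SAR change detection would typically use a log-ratio rather than a plain difference to stabilize speckle, but the thresholding step is the same.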

Relevance: 40.00%

Abstract:

The EUROCAT website www.eurocat-network.eu publishes prenatal detection rates for major congenital anomalies using data from European population-based congenital anomaly registers, covering 28% of the EU population as well as non-EU countries. Data are updated annually. This information can be useful for comparative purposes to clinicians and public health service managers involved in the antenatal care of pregnant women as well as those interested in perinatal epidemiology.

Relevance: 40.00%

Abstract:

BACKGROUND: Studies on hexaminolevulinate (HAL) cystoscopy report improved detection of bladder tumours. However, recent meta-analyses report conflicting effects on recurrence. OBJECTIVE: To assess available clinical data for blue light (BL) HAL cystoscopy on the detection of Ta/T1 and carcinoma in situ (CIS) tumours, and on tumour recurrence. DESIGN, SETTING, AND PARTICIPANTS: This meta-analysis reviewed raw data from prospective studies on 1345 patients with known or suspected non-muscle-invasive bladder cancer (NMIBC). INTERVENTION: A single application of HAL cystoscopy was used as an adjunct to white light (WL) cystoscopy. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: We studied the detection of NMIBC (intention to treat [ITT]: n=831; six studies) and recurrence (per protocol: n=634; three studies) up to 1 yr. DerSimonian and Laird's random-effects model was used to obtain pooled relative risks (RRs) and associated 95% confidence intervals (CIs) for detection outcomes. RESULTS AND LIMITATIONS: BL cystoscopy detected significantly more Ta tumours (14.7%; p<0.001; odds ratio [OR]: 4.898; 95% CI, 1.937-12.390) and CIS lesions (40.8%; p<0.001; OR: 12.372; 95% CI, 6.343-24.133) than WL. At least one additional Ta/T1 tumour was seen with BL in 24.9% of patients (p<0.001); this difference was also significant in patients with primary (20.7%; p<0.001) and recurrent cancer (27.7%; p<0.001), and in patients at high risk (27.0%; p<0.001) and intermediate risk (35.7%; p=0.004). In 26.7% of patients, CIS was detected only by BL (p<0.001); this difference was also significant in patients with primary (28.0%; p<0.001) and recurrent cancer (25.0%; p<0.001). Recurrence rates up to 12 mo were significantly lower overall with BL, 34.5% versus 45.4% (p=0.006; RR: 0.761 [0.627-0.924]), and lower in patients with T1 or CIS (p=0.052; RR: 0.696 [0.482-1.003]), Ta (p=0.040; RR: 0.804 [0.653-0.991]), and in high-risk (p=0.050) and low-risk (p=0.029) subgroups.
Some subgroups had too few patients to allow statistically meaningful analysis. Heterogeneity was minimised by the statistical analysis method used. CONCLUSIONS: This meta-analysis confirms that HAL BL cystoscopy significantly improves the detection of bladder tumours, leading to a reduction in recurrence at 9-12 mo. The benefit is independent of the level of risk and is evident in patients with Ta, T1, CIS, primary, and recurrent cancer.
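The DerSimonian-Laird random-effects pooling used here can be sketched as follows. The study inputs below are synthetic; a real analysis would derive per-study log RRs and their variances from event counts:

```python
import math

def dersimonian_laird(log_effects, variances):
    """Pool per-study log relative risks with DerSimonian-Laird random
    effects. Returns (pooled log RR, tau^2, 95% CI on the RR scale)."""
    w = [1.0 / v for v in variances]          # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, log_effects)) / sw
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_effects))
    df = len(log_effects) - 1
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)             # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_star, log_effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
    return pooled, tau2, (math.exp(lo), math.exp(hi))
```

With homogeneous studies the heterogeneity statistic Q falls below its degrees of freedom and tau^2 is truncated to zero, reducing the estimate to the fixed-effect pooled RR.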

Relevance: 40.00%

Abstract:

In this paper, steganalytic techniques designed to detect the existence of messages hidden by histogram shifting methods are presented. First, techniques to identify specific histogram shifting methods, based on visible marks in the histogram or abnormal statistical distributions, are suggested. We then present a general technique capable of detecting all of the histogram shifting techniques analyzed. This technique is based on the effect that histogram shifting methods have on the "volatility" of the histogram of differences, which is reduced whenever new data are hidden.
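One plausible reading of the "volatility" measure is the total variation of the difference histogram normalised by its mass; the sketch below is a hypothetical definition (the paper's exact measure may differ), together with the assumption from the abstract that embedding spreads the central difference peak and so lowers volatility:

```python
import numpy as np

def diff_histogram(img):
    """Histogram of horizontal pixel differences over the range [-255, 255]."""
    d = np.diff(img.astype(int), axis=1).ravel()
    hist, _ = np.histogram(d, bins=np.arange(-255, 257))
    return hist

def volatility(hist):
    """One possible 'volatility' measure: mean absolute change between
    adjacent bins, normalised by the total histogram mass."""
    h = hist.astype(float)
    return np.abs(np.diff(h)).sum() / max(h.sum(), 1.0)
```

A detector in this spirit would flag an image whose difference-histogram volatility is anomalously low for its content class.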

Relevance: 40.00%

Abstract:

PURPOSE: Signal detection on 3D medical images depends on many factors, such as foveal and peripheral vision, the type of signal, background complexity, and the speed at which the frames are displayed. In this paper, the authors focus on the speed with which radiologists and naïve observers search through medical images. Prior to the study, the authors asked the radiologists to estimate the speed at which they scrolled through CT sets; they gave a subjective estimate of 5 frames per second (fps). The aim of this paper is to measure and analyze the speed with which humans scroll through image stacks, presenting a method to visually display observer behavior during the search as well as measuring the accuracy of the decisions. This information will be useful in the development of model observers, mathematical algorithms that can be used to evaluate diagnostic imaging systems. METHODS: The authors performed a series of 3D 4-alternative forced-choice lung nodule detection tasks on volumetric stacks of chest CT images iteratively reconstructed with a lung algorithm. The strategy used by three radiologists and three naïve observers was assessed using an eye-tracker in order to establish where their gaze was fixed during the experiment and to verify that a correct answer was not due only to chance when a decision was made. In the first set of experiments, the observers were restricted to reading the images at three fixed scrolling speeds and were allowed to see each alternative once. In the second set of experiments, the subjects were allowed to scroll through the image stacks at will, with no time or gaze limits. In both the fixed-speed and free-scrolling conditions, the four image stacks were displayed simultaneously. All trials were shown at two different image contrasts. RESULTS: The authors were able to determine a histogram of scrolling speeds in frames per second.
The scrolling speed of the naïve observers and the radiologists at the moment the signal was detected was measured at 25-30 fps. For the task chosen, the performance of the observers was not affected by image contrast or by the experience of the observer. However, the naïve observers exhibited a different scrolling pattern than the radiologists, including a tendency toward a higher number of direction changes and more slices viewed. CONCLUSIONS: The authors have determined a distribution of speeds for volumetric detection tasks. The speed at detection was higher than that subjectively estimated by the radiologists before the experiment. The measured speed information will be useful in the development of 3D model observers, especially anthropomorphic model observers, which try to mimic human behavior.
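A scrolling-speed histogram of the kind reported can be computed directly from frame-change timestamps; a minimal sketch, with illustrative timestamps and bin widths:

```python
import numpy as np

def scroll_speed_fps(frame_times):
    """Instantaneous scrolling speed (frames/s) from the timestamps at
    which the displayed frame changed."""
    dt = np.diff(np.asarray(frame_times, float))
    return 1.0 / dt[dt > 0]

def speed_histogram(speeds, bin_width=5.0, max_fps=50.0):
    """Counts of observed speeds in fixed-width fps bins."""
    edges = np.arange(0.0, max_fps + bin_width, bin_width)
    hist, _ = np.histogram(speeds, bins=edges)
    return hist, edges
```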

Relevance: 40.00%

Abstract:

The increasing interconnection of information and communication systems leads to further growth in complexity, and thus to a further increase in security vulnerabilities. Classical protection mechanisms such as firewall systems and anti-malware solutions have long ceased to offer adequate protection against intrusion attempts into IT infrastructures. Intrusion detection systems (IDS) have established themselves as a very effective instrument for protection against cyber attacks. Such systems collect and analyze information from network components and hosts in order to detect unusual behaviour and security violations automatically. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to recognize new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in the optimal processing of the enormous volumes of network data and in the development of an adaptive detection model that works in real time. To address these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the large amounts of incoming network data, continuously assembles network connections, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on an Enhanced Growing Hierarchical Self-Organizing Map (EGHSOM), a model of normal network behaviour (NNB), and an update model. In OptiFilter, tcpdump and SNMP traps are used to continuously aggregate network packets and host events. These aggregated network packets and host events are further analyzed and converted into connection vectors.
To improve the detection rate of the adaptive classifier, the artificial neural network GHSOM is studied intensively and substantially extended. Different approaches are proposed and discussed in this dissertation: a classification-confidence margin threshold is defined to uncover unknown malicious connections, the stability of the growing topology is increased through novel approaches for initializing the weight vectors and through strengthening of the winner neurons, and a self-adaptive procedure is introduced so that the model can be updated continuously. Furthermore, the main task of the NNB model is to further examine the unknown connections identified by the EGHSOM and to verify whether they are in fact normal. However, network traffic changes constantly due to the concept drift phenomenon, which produces non-stationary network data in real time. This phenomenon is controlled by the update model. The EGHSOM model can effectively detect new anomalies, and the NNB model adapts optimally to changes in the network data. In the experimental evaluation, the framework showed promising results. In the first experiment, the framework was evaluated in offline mode: OptiFilter was assessed with offline, synthetic, and realistic data, and the adaptive classifier was evaluated with 10-fold cross-validation to estimate its accuracy. In the second experiment, the framework was installed on a 1-10 Gb network link and evaluated online in real time. OptiFilter successfully converted the enormous volume of network data into structured connection vectors, and the adaptive classifier classified them precisely.
The comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all of the other approaches. This can be attributed to the following key points: processing of the collected network data, achievement of the best performance (such as overall accuracy), detection of unknown connections, and development of a real-time intrusion detection model.
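The classification-confidence margin threshold described above can be sketched as a distance test against the best-matching unit (BMU) of a trained map. The tiny two-unit "map" below is an illustrative stand-in for a trained EGHSOM, not the dissertation's actual model:

```python
import numpy as np

def bmu_distance(x, weights):
    """Distance from input vector x to its best-matching unit (BMU)
    among the map's weight vectors (one row per unit)."""
    d = np.linalg.norm(weights - x, axis=1)
    return d.min(), int(d.argmin())

def classify_with_margin(x, weights, unit_labels, threshold):
    """Label a connection vector via its BMU, but flag it as 'unknown'
    when the BMU distance exceeds the confidence margin threshold."""
    dist, bmu = bmu_distance(x, weights)
    if dist > threshold:
        return "unknown", dist
    return unit_labels[bmu], dist
```

Connections flagged "unknown" would then be passed on, as in the framework, to a normal-behaviour model for a second opinion.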

Relevance: 40.00%

Abstract:

Flooding is a major hazard in both rural and urban areas worldwide, but it is in urban areas that the impacts are most severe. An investigation of the ability of high resolution TerraSAR-X data to detect flooded regions in urban areas is described. An important application for this would be the calibration and validation of the flood extent predicted by an urban flood inundation model. To date, research on such models has been hampered by lack of suitable distributed validation data. The study uses a 3m resolution TerraSAR-X image of a 1-in-150 year flood near Tewkesbury, UK, in 2007, for which contemporaneous aerial photography exists for validation. The DLR SETES SAR simulator was used in conjunction with airborne LiDAR data to estimate regions of the TerraSAR-X image in which water would not be visible due to radar shadow or layover caused by buildings and taller vegetation, and these regions were masked out in the flood detection process. A semi-automatic algorithm for the detection of floodwater was developed, based on a hybrid approach. Flooding in rural areas adjacent to the urban areas was detected using an active contour model (snake) region-growing algorithm seeded using the un-flooded river channel network, which was applied to the TerraSAR-X image fused with the LiDAR DTM to ensure the smooth variation of heights along the reach. A simpler region-growing approach was used in the urban areas, which was initialized using knowledge of the flood waterline in the rural areas. Seed pixels having low backscatter were identified in the urban areas using supervised classification based on training areas for water taken from the rural flood, and non-water taken from the higher urban areas. Seed pixels were required to have heights less than a spatially-varying height threshold determined from nearby rural waterline heights. 
Seed pixels were clustered into urban flood regions based on their close proximity, rather than requiring that all pixels in the region should have low backscatter. This approach was taken because it appeared that urban water backscatter values were corrupted in some pixels, perhaps due to contributions from side-lobes of strong reflectors nearby. The TerraSAR-X urban flood extent was validated using the flood extent visible in the aerial photos. It turned out that 76% of the urban water pixels visible to TerraSAR-X were correctly detected, with an associated false positive rate of 25%. If all urban water pixels were considered, including those in shadow and layover regions, these figures fell to 58% and 19% respectively. These findings indicate that TerraSAR-X is capable of providing useful data for the calibration and validation of urban flood inundation models.
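The urban stage of the algorithm, growing flood regions from low-backscatter seed pixels subject to a height constraint, can be sketched as a simple 4-connected region growing. The arrays and fixed thresholds below are illustrative; the paper's height threshold is spatially varying and derived from nearby rural waterline heights:

```python
import numpy as np
from collections import deque

def grow_flood(backscatter, heights, seeds, sigma_max, h_max):
    """4-connected region growing from seed pixels: a neighbour joins the
    flood region if its backscatter is low (<= sigma_max) and its height
    is below the threshold h_max."""
    flooded = np.zeros(backscatter.shape, dtype=bool)
    q = deque(seeds)
    for s in seeds:
        flooded[s] = True
    H, W = backscatter.shape
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < H and 0 <= nj < W and not flooded[ni, nj]
                    and backscatter[ni, nj] <= sigma_max
                    and heights[ni, nj] < h_max):
                flooded[ni, nj] = True
                q.append((ni, nj))
    return flooded
```

The paper's final clustering step relaxes the low-backscatter requirement and groups seeds by proximity, to tolerate pixels corrupted by side-lobe returns.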

Relevance: 40.00%

Abstract:

Heterogeneity in lifetime data may be modelled by multiplying an individual's hazard by an unobserved frailty. We test for the presence of frailty of this kind in univariate and bivariate data with Weibull-distributed lifetimes, using statistics based on the ordered Cox-Snell residuals from the null model of no frailty. The form of the statistics is suggested by outlier testing in the gamma distribution. We find through simulation that the sum of the k largest or k smallest order statistics, for suitably chosen k, provides a powerful test when the frailty distribution is assumed to be gamma or positive stable, respectively. We provide recommended values of k for sample sizes up to 100 and simple formulae for estimated critical values for tests at the 5% level.
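The test can be sketched as follows: the statistic is the sum of the k largest ordered Cox-Snell residuals, with a critical value obtained here by Monte Carlo under the null approximation that the residuals are i.i.d. unit exponential. The paper supplies formulae for estimated critical values instead; this simulation is an illustrative substitute:

```python
import numpy as np

def top_k_sum(residuals, k):
    """Test statistic: sum of the k largest ordered Cox-Snell residuals."""
    r = np.sort(np.asarray(residuals, float))
    return r[-k:].sum()

def mc_critical_value(n, k, alpha=0.05, n_sim=20000, seed=0):
    """Monte Carlo critical value under the null, approximating the
    Cox-Snell residuals by n i.i.d. unit-exponential variates."""
    rng = np.random.default_rng(seed)
    stats = np.sort(rng.exponential(size=(n_sim, n)), axis=1)[:, -k:].sum(axis=1)
    return np.quantile(stats, 1.0 - alpha)
```

Rejecting when the observed statistic exceeds the critical value indicates frailty (the k-smallest variant, for positive stable frailty, is symmetric).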

Relevance: 40.00%

Abstract:

Analysis of meteorological records from four stations (Chittagong, Cox’s Bazar, Rangamati, Sitakunda) in south-eastern Bangladesh shows coherent changes in climate over the past three decades. Mean maximum daily temperatures have increased between 1980 and 2013 by ca. 0.4 to 0.6°C per decade, with changes of comparable magnitude in individual seasons. The increase in mean maximum daily temperature is associated with decreased cloud cover and wind speed, particularly in the pre- and post-monsoon seasons. During these two seasons, the correlation between changes in maximum temperature and clouds is between -0.5 and -0.7; the correlation with wind speed is weaker, although similar values are obtained in some seasons. Changes in mean daily minimum (and hence mean) temperature differ between the northern and southern parts of the basin: northern stations show a decrease in mean daily minimum temperature during the post-monsoon season of between 0.2 and 0.5°C per decade, while southern stations show an increase of ca. 0.1 to 0.4°C per decade during the pre-monsoon and monsoon seasons. In contrast to the significant changes in temperature, there is no trend in mean or total precipitation at any station. However, there is a significant increase in the number of rain days at the northern sites during the monsoon season, of 3 days per decade at Sitakunda and 7 days per decade at Rangamati. These climate changes could have a significant impact on the hydrology of the Halda Basin, which supplies water to Chittagong and is the major pisciculture centre in Bangladesh.
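Per-decade trends of the kind quoted can be computed from a station series with an ordinary least-squares slope; a minimal sketch on synthetic data (a published analysis would typically also test trend significance, e.g. with Mann-Kendall):

```python
import numpy as np

def trend_per_decade(years, values):
    """Least-squares slope of values regressed on years, scaled to the
    change per decade."""
    years = np.asarray(years, float)
    values = np.asarray(values, float)
    slope, _ = np.polyfit(years, values, 1)   # highest-order coefficient first
    return 10.0 * slope
```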

Relevance: 40.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)