863 results for Data sources detection
Abstract:
We aim to use EEG signals to develop systems for detecting Alzheimer's disease. The available database is raw, so the first step must be to clean the signals properly. We propose a new ICA cleaning procedure applied to a database recorded from patients with Alzheimer's disease (mild AD, early stage). Two researchers visually inspected all the signals (EEG channels), and each recording's least corrupted (artefact-clean) continuous 20 s interval was chosen for the analysis. Each trial was then decomposed using ICA. Sources were ordered using a kurtosis measure, and the researchers removed up to seven sources per trial corresponding to artefacts (eye movements, EMG corruption, EKG, etc.), using three criteria: (i) isolated source on the scalp (only a few electrodes contribute to the source), (ii) abnormal wave shape (drifts, eye blinks, sharp waves, etc.), (iii) source of abnormally high amplitude (≈100 μV). We then evaluated the outcome of this cleaning by classifying patients with multilayer perceptron neural networks. Results are very satisfactory: performance increased from 50.9% to 73.1% correctly classified data using the ICA cleaning procedure.
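A minimal sketch of the kurtosis-ordered ICA decomposition described above, assuming scikit-learn's FastICA on a synthetic multichannel interval; the channel count, sampling rate, and amplitude criterion below are illustrative assumptions, not the authors' parameters:

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

# Synthetic stand-in for a 20 s, 21-channel EEG interval at 128 Hz
# (montage and rate are placeholders, not taken from the paper).
rng = np.random.default_rng(0)
fs, n_channels = 128, 21
eeg = rng.standard_normal((20 * fs, n_channels))

# Decompose the interval into independent sources.
ica = FastICA(n_components=n_channels, random_state=0)
sources = ica.fit_transform(eeg)           # shape: (samples, components)

# Order sources by kurtosis; highly kurtotic sources often capture
# transient artefacts such as eye blinks or sharp waves.
order = np.argsort(-np.abs(kurtosis(sources, axis=0)))

# Hypothetical artefact criterion: among the most kurtotic sources,
# reject up to seven whose back-projected scalp amplitude is very large.
rejected = []
for idx in order[:7]:
    scalp_projection = np.outer(sources[:, idx], ica.mixing_[:, idx])
    if np.max(np.abs(scalp_projection)) > 100.0:   # illustrative threshold
        rejected.append(idx)

# Zero out the rejected sources and reconstruct the cleaned EEG.
cleaned_sources = sources.copy()
cleaned_sources[:, rejected] = 0.0
eeg_clean = ica.inverse_transform(cleaned_sources)
```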
Abstract:
PURPOSE: Signal detection in 3D medical images depends on many factors, such as foveal and peripheral vision, the type of signal, background complexity, and the speed at which the frames are displayed. In this paper, the authors focus on the speed with which radiologists and naïve observers search through medical images. Prior to the study, the authors asked the radiologists to estimate the speed at which they scrolled through CT sets; they gave a subjective estimate of 5 frames per second (fps). The aim of this paper is to measure and analyze the speed with which humans scroll through image stacks, presenting a method to visually display the observers' behavior as the search is made, as well as measuring the accuracy of the decisions. This information will be useful in the development of model observers, mathematical algorithms that can be used to evaluate diagnostic imaging systems. METHODS: The authors performed a series of 3D 4-alternative forced-choice lung nodule detection tasks on volumetric stacks of chest CT images iteratively reconstructed with a lung algorithm. The strategy used by three radiologists and three naïve observers was assessed using an eye-tracker in order to establish where their gaze was fixed during the experiment and to verify that, when a decision was made, a correct answer was not due only to chance. In a first set of experiments, the observers were restricted to reading the images at three fixed speeds of image scrolling and were allowed to see each alternative once. In the second set of experiments, the subjects were allowed to scroll through the image stacks at will with no time or gaze limits. In both fixed-speed and free-scrolling conditions, the four image stacks were displayed simultaneously. All trials were shown at two different image contrasts. RESULTS: The authors were able to determine a histogram of scrolling speeds in frames per second. The scrolling speed of the naïve observers and the radiologists at the moment the signal was detected was measured at 25-30 fps. For the task chosen, the performance of the observers was not affected by the contrast or the experience of the observer. However, the naïve observers exhibited a different scrolling pattern than the radiologists, including a tendency toward a higher number of direction changes and slices viewed. CONCLUSIONS: The authors have determined a distribution of speeds for volumetric detection tasks. The speed at detection was higher than that subjectively estimated by the radiologists before the experiment. The measured speed information will be useful in the development of 3D model observers, especially anthropomorphic model observers, which try to mimic human behavior.
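A minimal sketch of how a per-trial scrolling-speed histogram could be derived from slice-change timestamps; the timestamps and bin edges below are invented for illustration and do not reproduce the authors' eye-tracker pipeline:

```python
import numpy as np

# Hypothetical times (seconds) at which the displayed CT slice changed during
# one trial; in the study these would come from the viewer/eye-tracker log.
frame_change_times = np.array([0.00, 0.05, 0.09, 0.12, 0.16, 0.21, 0.24])

# Instantaneous scrolling speed (frames per second) between consecutive changes.
speeds_fps = 1.0 / np.diff(frame_change_times)

# Histogram of scrolling speeds, here in 5 fps bins up to 50 fps.
counts, bin_edges = np.histogram(speeds_fps, bins=np.arange(0, 55, 5))
print(dict(zip(bin_edges[:-1], counts)))
```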
Abstract:
The increasing interconnection of information and communication systems leads to a further increase in complexity and thus also to a further increase in security vulnerabilities. Classical protection mechanisms such as firewall systems and anti-malware solutions have long ceased to provide adequate protection against intrusions into IT infrastructures. Intrusion detection systems (IDS) have established themselves as a very effective instrument for protection against cyber attacks. Such systems collect and analyze information from network components and hosts in order to detect unusual behavior and security violations automatically. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to recognize new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in the optimal processing of the enormous volume of network data and in the development of an adaptive detection model that works in real time. To address these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the large volume of incoming network data, continuously assembles network connections, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on an Enhanced Growing Hierarchical Self-Organizing Map (EGHSOM), a network normal-state model (NNB), and an update model. In OptiFilter, tcpdump and SNMP traps are used to continuously aggregate network packets and host events. These aggregated network packets and host events are further analyzed and transformed into connection vectors. To improve the detection rate of the adaptive classifier, the artificial neural network GHSOM is studied intensively and substantially extended. Different approaches are proposed and discussed in this dissertation: a classification-confidence margin threshold is defined to uncover unknown malicious connections, the stability of the growing topology is increased by novel approaches for initializing the weight vectors and by strengthening the winner neurons, and a self-adaptive procedure is introduced to keep the model continuously up to date. In addition, the main task of the NNB model is to further examine the unknown connections detected by the EGHSOM and to verify whether they are normal. However, the network traffic changes constantly due to the concept-drift phenomenon, which leads to the generation of non-stationary network data in real time. This phenomenon is kept under control by the update model. The EGHSOM model can effectively detect new anomalies, and the NNB model adapts optimally to the changes in the network data. In the experimental evaluation, the framework showed promising results. In the first experiment, the framework was evaluated in offline mode. OptiFilter was evaluated with offline, synthetic, and realistic data. The adaptive classifier was evaluated with 10-fold cross-validation to estimate its accuracy.
In the second experiment, the framework was installed on a 1 to 10 GB network link and evaluated online in real time. OptiFilter successfully converted the enormous volume of network data into structured connection vectors, and the adaptive classifier classified them precisely. A comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all other approaches. This can be attributed to the following key points: processing of the collected network data, achieving the best performance (e.g. overall accuracy), detecting unknown connections, and developing a real-time intrusion detection model.
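A minimal sketch of the classification-confidence margin idea, assuming a simple nearest-prototype classifier over connection vectors; the codebook, labels, and threshold below are invented for illustration and the actual EGHSOM training is not reproduced:

```python
import numpy as np

def classify_with_margin(x, prototypes, labels, margin_threshold=0.2):
    """Assign x to the label of its nearest prototype, but flag it as an
    'unknown' connection when the confidence margin between the best and
    second-best prototype distances is too small."""
    d = np.linalg.norm(prototypes - x, axis=1)
    best, second = np.partition(d, 1)[:2]
    margin = (second - best) / (second + 1e-12)
    if margin < margin_threshold:
        return "unknown"      # hand over to the normal-state (NNB) check
    return labels[int(np.argmin(d))]

# Illustrative codebook of two prototype connection vectors.
prototypes = np.array([[0.1, 0.2, 0.0], [0.9, 0.8, 1.0]])
labels = ["normal", "attack"]
print(classify_with_margin(np.array([0.5, 0.5, 0.5]), prototypes, labels))
```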
Abstract:
Flooding is a major hazard in both rural and urban areas worldwide, but it is in urban areas that the impacts are most severe. An investigation of the ability of high resolution TerraSAR-X data to detect flooded regions in urban areas is described. An important application for this would be the calibration and validation of the flood extent predicted by an urban flood inundation model. To date, research on such models has been hampered by lack of suitable distributed validation data. The study uses a 3m resolution TerraSAR-X image of a 1-in-150 year flood near Tewkesbury, UK, in 2007, for which contemporaneous aerial photography exists for validation. The DLR SETES SAR simulator was used in conjunction with airborne LiDAR data to estimate regions of the TerraSAR-X image in which water would not be visible due to radar shadow or layover caused by buildings and taller vegetation, and these regions were masked out in the flood detection process. A semi-automatic algorithm for the detection of floodwater was developed, based on a hybrid approach. Flooding in rural areas adjacent to the urban areas was detected using an active contour model (snake) region-growing algorithm seeded using the un-flooded river channel network, which was applied to the TerraSAR-X image fused with the LiDAR DTM to ensure the smooth variation of heights along the reach. A simpler region-growing approach was used in the urban areas, which was initialized using knowledge of the flood waterline in the rural areas. Seed pixels having low backscatter were identified in the urban areas using supervised classification based on training areas for water taken from the rural flood, and non-water taken from the higher urban areas. Seed pixels were required to have heights less than a spatially-varying height threshold determined from nearby rural waterline heights. Seed pixels were clustered into urban flood regions based on their close proximity, rather than requiring that all pixels in the region should have low backscatter. This approach was taken because it appeared that urban water backscatter values were corrupted in some pixels, perhaps due to contributions from side-lobes of strong reflectors nearby. The TerraSAR-X urban flood extent was validated using the flood extent visible in the aerial photos. It turned out that 76% of the urban water pixels visible to TerraSAR-X were correctly detected, with an associated false positive rate of 25%. If all urban water pixels were considered, including those in shadow and layover regions, these figures fell to 58% and 19% respectively. These findings indicate that TerraSAR-X is capable of providing useful data for the calibration and validation of urban flood inundation models.
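A minimal sketch of the urban seed-and-cluster step, assuming NumPy arrays for backscatter, a LiDAR height map, and a shadow/layover visibility mask; the backscatter and height thresholds are illustrative stand-ins (a single waterline height instead of the paper's spatially varying threshold):

```python
import numpy as np
from scipy import ndimage

def urban_flood_regions(backscatter_db, height_m, waterline_height_m,
                        visible_mask, backscatter_thresh_db=-12.0,
                        height_margin_m=0.5, proximity_px=3):
    """Cluster low-backscatter seed pixels into candidate urban flood regions."""
    seeds = (visible_mask                                         # not in shadow/layover
             & (backscatter_db < backscatter_thresh_db)           # smooth open water
             & (height_m < waterline_height_m + height_margin_m)) # below local waterline
    # Group seeds lying close together: dilate, label connected blobs,
    # then keep the labels only on the original seed pixels.
    grown = ndimage.binary_dilation(seeds, iterations=proximity_px)
    region_labels, n_regions = ndimage.label(grown)
    return np.where(seeds, region_labels, 0), n_regions

# Tiny synthetic scene: flat terrain with one low-backscatter patch.
rng = np.random.default_rng(1)
sigma0 = rng.normal(-6.0, 1.5, (50, 50))
sigma0[10:20, 10:20] = -15.0
heights = np.full((50, 50), 9.0)
labels, n = urban_flood_regions(sigma0, heights, waterline_height_m=10.0,
                                visible_mask=np.ones((50, 50), bool))
print(n, "candidate flood region(s)")
```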
Abstract:
Heterogeneity in lifetime data may be modelled by multiplying an individual's hazard by an unobserved frailty. We test for the presence of frailty of this kind in univariate and bivariate data with Weibull distributed lifetimes, using statistics based on the ordered Cox-Snell residuals from the null model of no frailty. The form of the statistics is suggested by outlier testing in the gamma distribution. We find through simulation that the sum of the k largest or k smallest order statistics, for suitably chosen k, provides a powerful test when the frailty distribution is assumed to be gamma or positive stable, respectively. We provide recommended values of k for sample sizes up to 100 and simple formulae for estimated critical values for tests at the 5% level.
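A minimal sketch of this kind of test statistic, assuming complete (uncensored) univariate Weibull data, scipy's weibull_min fit, and a Monte Carlo critical value; the choice of k and the simulation size are illustrative and the paper's recommended k values and critical-value formulae are not reproduced:

```python
import numpy as np
from scipy import stats

def cox_snell_residuals(times):
    """Fit a Weibull null model (no frailty) and return the Cox-Snell
    residuals, i.e. the fitted cumulative hazard evaluated at each lifetime."""
    shape, _, scale = stats.weibull_min.fit(times, floc=0)
    return (times / scale) ** shape

def top_k_statistic(times, k):
    """Sum of the k largest ordered Cox-Snell residuals (gamma-frailty direction)."""
    r = np.sort(cox_snell_residuals(times))
    return r[-k:].sum()

# Monte Carlo critical value under the null of no frailty (illustrative sizes).
rng = np.random.default_rng(2)
n, k, n_sim = 50, 5, 500
null_stats = [top_k_statistic(stats.weibull_min.rvs(1.5, scale=2.0, size=n,
                                                    random_state=rng), k)
              for _ in range(n_sim)]
critical_value_5pct = np.quantile(null_stats, 0.95)
print(round(critical_value_5pct, 3))
```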
Abstract:
This paper proposes a new iterative algorithm for OFDM joint data detection and phase noise (PHN) cancellation based on minimum mean square prediction error. We particularly highlight the problem of "overfitting", whereby the iterative approach may converge to a trivial solution. Although addressing it is essential for this joint approach, the overfitting problem has received relatively little study in existing algorithms. In this paper, specifically, we apply a hard decision procedure at every iterative step to overcome the overfitting. Moreover, compared with existing algorithms, a more accurate Padé approximation is used to represent the phase noise, and finally a more robust and compact fast process based on Givens rotation is proposed to reduce the complexity to a practical level. Numerical simulations are also given to verify the proposed algorithm.
OFDM joint data detection and phase noise cancellation based on minimum mean square prediction error
Abstract:
This paper proposes a new iterative algorithm for orthogonal frequency division multiplexing (OFDM) joint data detection and phase noise (PHN) cancellation based on minimum mean square prediction error. We particularly highlight the relatively less studied problem of "overfitting", whereby the iterative approach may converge to a trivial solution. Specifically, we apply a hard-decision procedure at every iterative step to overcome the overfitting. Moreover, compared with existing algorithms, a more accurate Padé approximation is used to represent the PHN, and finally a more robust and compact fast process based on Givens rotation is proposed to reduce the complexity to a practical level. Numerical simulations are also given to verify the proposed algorithm. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
This correspondence proposes a new algorithm for OFDM joint data detection and phase noise (PHN) cancellation for constant modulus modulations. We highlight that it is important to address the overfitting problem, since it is a major detrimental factor impairing the joint detection process. To attack the overfitting problem, we propose an iterative approach based on minimum mean square prediction error (MMSPE) subject to the constraint that the estimated data symbols have constant power. The proposed constrained MMSPE algorithm (C-MMSPE) significantly improves the performance of existing approaches with little extra complexity. Simulation results are also given to verify the proposed algorithm.
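A minimal sketch of the iterative hard-decision idea running through the three abstracts above, reduced to a single common phase error per OFDM symbol with a constant-modulus (QPSK) constellation; the full PHN model, the Padé approximation, and the Givens-rotation fast process are not reproduced, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sub = 64                                   # subcarriers in one OFDM symbol
tx_idx = rng.integers(0, 4, n_sub)           # transmitted QPSK symbol indices
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * tx_idx))   # unit-modulus constellation

phase_error = 0.3                            # unknown common phase rotation (rad)
noise = 0.05 * (rng.standard_normal(n_sub) + 1j * rng.standard_normal(n_sub))
received = qpsk * np.exp(1j * phase_error) + noise

def hard_decision(samples):
    """Index of the nearest QPSK constellation point for each sample."""
    return np.round((np.angle(samples) - np.pi / 4) / (np.pi / 2)).astype(int) % 4

# Iterate: hard-decide the data, re-estimate the common phase from the
# decisions, derotate, and decide again.  The hard decision keeps the loop
# from drifting toward a trivial (overfitted) solution.
idx_hat = hard_decision(received)
for _ in range(5):
    decided = np.exp(1j * (np.pi / 4 + np.pi / 2 * idx_hat))
    phase_hat = np.angle(np.vdot(decided, received))     # LS phase estimate
    idx_hat = hard_decision(received * np.exp(-1j * phase_hat))

print(round(phase_hat, 3), int(np.count_nonzero(idx_hat != tx_idx)), "symbol errors")
```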
Abstract:
This study evaluates model-simulated dust aerosols over North Africa and the North Atlantic from five global models that participated in the Aerosol Comparison between Observations and Models phase II model experiments. The model results are compared with satellite aerosol optical depth (AOD) data from Moderate Resolution Imaging Spectroradiometer (MODIS), Multiangle Imaging Spectroradiometer (MISR), and Sea-viewing Wide Field-of-view Sensor, dust optical depth (DOD) derived from MODIS and MISR, AOD and coarse-mode AOD (as a proxy of DOD) from ground-based Aerosol Robotic Network Sun photometer measurements, and dust vertical distributions/centroid height from Cloud Aerosol Lidar with Orthogonal Polarization and Atmospheric Infrared Sounder satellite AOD retrievals. We examine the following quantities of AOD and DOD: (1) the magnitudes over land and over ocean in our study domain, (2) the longitudinal gradient from the dust source region over North Africa to the western North Atlantic, (3) seasonal variations at different locations, and (4) the dust vertical profile shape and the AOD centroid height (altitude above or below which half of the AOD is located). The different satellite data show consistent features in most of these aspects; however, the models display large diversity in all of them, with significant differences among the models and between models and observations. By examining dust emission, removal, and mass extinction efficiency in the five models, we also find remarkable differences among the models that all contribute to the discrepancies of model-simulated dust amount and distribution. This study highlights the challenges in simulating the dust physical and optical processes, even in the best known dust environment, and stresses the need for observable quantities to constrain the model processes.
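A minimal sketch of the AOD centroid-height definition used above (the altitude above or below which half of the column AOD is located), assuming a discretized extinction profile; the profile values below are invented for illustration:

```python
import numpy as np

def aod_centroid_height(z_km, extinction_per_km):
    """Altitude above/below which half of the column AOD is located.

    z_km: layer mid-point altitudes (km), increasing.
    extinction_per_km: aerosol extinction coefficient per layer (1/km).
    """
    dz = np.gradient(z_km)                    # approximate layer thicknesses
    cumulative = np.cumsum(extinction_per_km * dz)
    return float(np.interp(0.5 * cumulative[-1], cumulative, z_km))

# Illustrative profile: most extinction below ~3 km plus an elevated dust layer.
z = np.arange(0.25, 10.0, 0.5)
ext = 0.12 * np.exp(-z / 2.0) + 0.05 * np.exp(-((z - 3.0) / 1.0) ** 2)
print(round(aod_centroid_height(z, ext), 2), "km")
```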
Abstract:
Analysis of meteorological records from four stations (Chittagong, Cox’s Bazar, Rangamati, Sitakunda) in south-eastern Bangladesh show coherent changes in climate over the past three decades. Mean maximum daily temperatures have increased between 1980 and 2013 by ca. 0.4 to 0.6°C per decade, with changes of comparable magnitude in individual seasons. The increase in mean maximum daily temperature is associated with decreased cloud cover and wind speed, particularly in the pre- and post-monsoon seasons. During these two seasons, the correlation between changes in maximum temperature and clouds is between -0.5 and -0.7; the correlation with wind speed is weaker although similar values are obtained in some seasons. Changes in mean daily minimum (and hence mean) temperature differ between the northern and southern part of the basin: northern stations show a decrease in mean daily minimum temperature during the post-monsoon season of between 0.2 and 0.5°C per decade while southern stations show an increase of ca. 0.1 to 0.4°C per decade during the pre-monsoon and monsoon seasons. In contrast to the significant changes in temperature, there is no trend in mean or total precipitation at any station. However, there is a significant increase in the number of rain days at the northern sites during the monsoon season, with an increase per decade of 3 days in Sitakunda and 7 days at Rangamati. These climate changes could have a significant impact on the hydrology of the Halda Basin, which supplies water to Chittagong and is the major pisciculture centre in Bangladesh.
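A minimal sketch of the two quantities reported above, a per-decade linear trend and a seasonal correlation, computed with scipy; the series below are random placeholders, not the Bangladesh station records:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
years = np.arange(1980, 2014)

# Placeholder seasonal-mean series for one station (values are synthetic).
tmax = 31.0 + 0.05 * (years - 1980) + rng.normal(0, 0.3, years.size)   # deg C
cloud = 5.0 - 0.02 * (years - 1980) + rng.normal(0, 0.4, years.size)   # okta

# Linear trend expressed per decade.
slope, intercept, r, p, se = stats.linregress(years, tmax)
print(f"Tmax trend: {10 * slope:.2f} deg C per decade (p = {p:.3f})")

# Correlation between maximum temperature and cloud cover for this season.
r_cloud, p_cloud = stats.pearsonr(tmax, cloud)
print(f"Tmax vs cloud correlation: r = {r_cloud:.2f} (p = {p_cloud:.3f})")
```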
Abstract:
This work describes two similar methods for calculating gamma transition intensities from multidetector coincidence measurements. In the first, applicable to experiments where the angular correlation function is explicitly fitted, the normalization parameter from this fit is used to determine the gamma transition intensities. In the second, which can be used in both angular correlation and DCO measurements, the spectra obtained for all the detector pairs are summed up in order to get the best detection statistics possible, and the analysis of the resulting bidimensional spectrum is used to calculate the transition intensities; in this method, the summation of data corresponding to different angles minimizes the influence of the angular correlation coefficient. Both methods are then tested on the calculation of intensities for well-known transitions from a (152)Eu standard source, as well as on intensities obtained in beta-decay experiments with (193)Os and (155)Sm sources, yielding excellent results in all cases. (C) 2009 Elsevier B.V. All rights reserved.
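A minimal sketch of the first method's fitting step, using the standard gamma-gamma angular correlation form W(θ) = A0[1 + a2·P2(cosθ) + a4·P4(cosθ)] and reading off the normalization A0; the angles and counts below are invented, and detector efficiencies and the DCO variant are not handled:

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.optimize import curve_fit

def w_theta(theta_deg, a0, a2, a4):
    """Angular correlation W(theta) = a0 * [1 + a2*P2(cos th) + a4*P4(cos th)]."""
    x = np.cos(np.radians(theta_deg))
    p2 = legendre.legval(x, [0, 0, 1])
    p4 = legendre.legval(x, [0, 0, 0, 0, 1])
    return a0 * (1 + a2 * p2 + a4 * p4)

# Hypothetical coincidence counts per detector-pair angle (illustrative only).
angles = np.array([90.0, 120.0, 150.0, 180.0])
counts = np.array([1020.0, 1110.0, 1230.0, 1310.0])

popt, pcov = curve_fit(w_theta, angles, counts, p0=[1000.0, 0.1, 0.01],
                       sigma=np.sqrt(counts), absolute_sigma=True)
a0, a2, a4 = popt
print(f"A0 (normalization) = {a0:.0f} +/- {np.sqrt(pcov[0, 0]):.0f}")
```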
Abstract:
Enriquillo and Azuei are saltwater lakes located in a closed water basin in the southwestern region of the island of La Hispaniola; they have been experiencing dramatic changes in total lake-surface area during the period 1980-2012. Lake Enriquillo had a surface area of approximately 276 km2 in 1984, gradually decreasing to 172 km2 in 1996. The surface area of the lake reached its lowest point in the satellite observation record in 2004, at 165 km2. The lake then began its recent growth, reaching its 1984 size by 2006. Based on surface area measurements for June and July 2013, Lake Enriquillo has a surface area of ~358 km2. Lake Azuei's sizes at both ends of the record are 116 km2 in 1984 and 134 km2 in 2013, an overall 15.8% increase in 30 years. Determining the causes of lake surface area changes is of extreme importance due to their environmental, social, and economic impacts. The overall goal of this study is to quantify the changing water balance in these lakes and their catchment area using satellite and ground observations and a regional atmospheric-hydrologic modeling approach. Data analyses of environmental variables in the region reflect a hydrological imbalance of the lakes due to changing regional hydro-climatic conditions. Historical data show precipitation, land surface temperature and humidity, and sea surface temperature (SST) increasing over the region during the past decades. Salinity levels have also decreased by more than 30% from previously reported baseline levels. Here we present a summary of the historical data obtained, the new sensors deployed in the surrounding sierras and the lakes, and the integrated modeling exercises, as well as the challenges of gathering, storing, sharing, and analyzing this large volume of data from such a diverse number of sources at a remote location.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Optical remote sensing techniques have obvious advantages for monitoring gas and aerosol emissions, since they enable operation over large distances, far from hostile environments, and fast processing of the measured signal. In this study two remote sensing devices, namely a Lidar (Light Detection and Ranging) for monitoring the vertical profile of backscattered light intensity, and a Sodar (acoustic radar, Sound Detection and Ranging) for monitoring the vertical profile of the wind vector, were operated during specific periods. The acquired data were processed and compared with air quality data obtained from ground-level monitoring stations, in order to verify the possibility of using the remote sensing techniques to monitor industrial emissions. The campaigns were carried out in the area of the Environmental Research Center (Cepema) of the University of São Paulo, in the city of Cubatão, Brazil, a large industrial site where numerous different industries are located, including an oil refinery, a steel plant, and fertilizer, cement and chemical/petrochemical plants. The local environmental problems caused by the industrial activities are aggravated by the climate and topography of the site, which are unfavorable to pollutant dispersion. Results of a campaign are presented for a 24-hour period, showing data from a Lidar, an air quality monitoring station and a Sodar. © 2011 SPIE.
Abstract:
Detecting misbehavior (such as transmissions of false information) in vehicular ad hoc networks (VANETs) is a very important problem with a wide range of implications, including safety-related and congestion-avoidance applications. We discuss several limitations of existing misbehavior detection schemes (MDS) designed for VANETs. Most MDS are concerned with the detection of malicious nodes. In most situations, vehicles would send wrong information for the selfish reasons of their owners, e.g. to gain access to a particular lane. It is therefore more important to detect false information than to identify misbehaving nodes. We introduce the concept of data-centric misbehavior detection and propose algorithms which detect false alert messages and misbehaving nodes by observing their actions after sending out the alert messages. With the data-centric MDS, each node can decide whether received information is correct or false. The decision is based on the consistency of recent messages and new alerts with reported and estimated vehicle positions. No voting or majority decision is needed, making our MDS resilient to Sybil attacks. After misbehavior is detected, we do not revoke all the secret credentials of misbehaving nodes, as is done in most schemes. Instead, we impose fines on misbehaving nodes (administered by the certification authority), discouraging them from acting selfishly. This reduces the computation and communication costs involved in revoking all the secret credentials of misbehaving nodes. © 2011 IEEE.
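A minimal sketch of a data-centric consistency check in the spirit described above: an alert is treated as suspicious when the sender's subsequently observed positions contradict the claimed event. The thresholds, beacon format, and motion model are illustrative assumptions, not the paper's algorithm:

```python
from dataclasses import dataclass

@dataclass
class Beacon:
    t: float      # time (s)
    x: float      # position along the road (m)

def alert_looks_false(alert_position_m, post_alert_beacons,
                      stop_radius_m=50.0, max_speed_ms=5.0):
    """Flag an 'accident ahead' alert as suspicious if the sender keeps
    moving at speed well past the claimed accident location."""
    for prev, cur in zip(post_alert_beacons, post_alert_beacons[1:]):
        speed = abs(cur.x - prev.x) / max(cur.t - prev.t, 1e-6)
        passed_scene = cur.x > alert_position_m + stop_radius_m
        if passed_scene and speed > max_speed_ms:
            return True   # behavior contradicts the reported event
    return False

# The sender claims an accident at 1000 m but drives past it at ~20 m/s.
beacons = [Beacon(0, 950), Beacon(2, 990), Beacon(4, 1030), Beacon(6, 1070)]
print(alert_looks_false(alert_position_m=1000.0, post_alert_beacons=beacons))
```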