875 results for Intrusion Detection Systems


Relevance:

30.00%

Publisher:

Abstract:

Objective
Pedestrian detection in video surveillance systems has long been an active topic in computer vision research. Such systems are widely used in train stations, airports, large commercial plazas, and other public places. However, pedestrian detection remains difficult because of complex backgrounds. The visual attention mechanism has attracted increasing interest in object detection and tracking research in recent years, and previous studies have achieved substantial progress. We propose a novel pedestrian detection method based on semantic features under the visual attention mechanism.
Method
The proposed semantic feature-based visual attention model is a spatial-temporal model that consists of two parts: a static visual attention model and a motion visual attention model. The static visual attention model in the spatial domain is constructed by combining bottom-up with top-down attention guidance. Based on the characteristics of pedestrians, the bottom-up visual attention model of Itti is improved by intensifying the orientation vectors of elementary visual features so that the visual saliency map becomes suitable for pedestrian detection. In terms of pedestrian attributes, skin color is selected as a semantic feature for pedestrian detection. Regional and Gaussian models are adopted to construct the skin color model. Skin feature-based visual attention guidance then completes the top-down process. The bottom-up and top-down visual attentions are linearly combined using weights obtained from experiments to construct the static visual attention model in the spatial domain. The spatial-temporal visual attention model is then constructed via motion features in the temporal domain. Based on the static visual attention model in the spatial domain, the frame difference method is combined with optical flow to detect motion vectors. Filtering is applied to the field of motion vectors. The saliency of motion vectors is evaluated via motion entropy to make the selected motion feature more suitable for the spatial-temporal visual attention model.
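As a rough illustration of the fusion step described above, the following sketch linearly combines normalized bottom-up and top-down saliency maps and scores motion saliency with an entropy measure; the function names, the example weights, and the histogram-based entropy estimate are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def combine_saliency(bottom_up: np.ndarray, top_down: np.ndarray,
                     w_bu: float = 0.6, w_td: float = 0.4) -> np.ndarray:
    """Linearly combine bottom-up and top-down saliency maps (both in [0, 1]).

    The 0.6/0.4 weights are placeholders, not the experimentally obtained values.
    """
    s = w_bu * bottom_up + w_td * top_down
    return (s - s.min()) / (s.max() - s.min() + 1e-12)  # renormalize to [0, 1]

def motion_entropy(flow_magnitudes: np.ndarray, bins: int = 16) -> float:
    """Entropy of a motion-vector magnitude histogram as a simple temporal saliency score."""
    hist, _ = np.histogram(flow_magnitudes, bins=bins)
    p = hist / (hist.sum() + 1e-12)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```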
Result
Standard datasets and practical videos are selected for the experiments. The experiments are performed on a MATLAB R2012a platform. The experimental results show that our spatial-temporal visual attention model demonstrates favorable robustness under various scenes, including indoor train station surveillance videos and outdoor scenes with swaying leaves. Our proposed model outperforms the visual attention model of Itti, the graph-based visual saliency model, the phase spectrum of quaternion Fourier transform model, and the motion channel model of Liu in terms of pedestrian detection. The proposed model achieves a 93% accuracy rate on the test video.
Conclusion
This paper proposes a novel pedestrian detection method based on the visual attention mechanism. A spatial-temporal visual attention model that uses low-level and semantic features is proposed to calculate the saliency map. Based on this model, pedestrian targets can be detected through shifts in the focus of attention. The experimental results verify the effectiveness of the proposed attention model for detecting pedestrians.

Relevance:

30.00%

Publisher:

Abstract:

This paper studies the impact of in-phase and quadrature-phase imbalance (IQI) in two-way amplify-and-forward (AF) relaying systems. In particular, the effective signal-to-interference-plus-noise ratio (SINR) is derived for each source node, considering four different linear detection schemes, namely the uncompensated (Uncomp), maximal-ratio-combining (MRC), zero-forcing (ZF), and minimum mean-square error (MMSE) based schemes. For each proposed scheme, the outage probability (OP) is investigated over independent, non-identically distributed Nakagami-m fading channels, and exact closed-form expressions are derived for the first three schemes. Based on the closed-form OP expressions, an adaptive detection mode switching scheme is designed to minimize the OP of both sources. An important observation is that, regardless of the channel conditions and transmit powers, the ZF-based scheme should always be selected if the target SINR is larger than 3 (4.77 dB), while the MRC-based scheme should be avoided if the target SINR is larger than 0.38 (-4.20 dB).
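The two stated thresholds can be turned into a simple mode-switching rule, sketched below; the fallback choices outside those two observations (here MMSE and MRC) are placeholders, since the paper's adaptive scheme selects the mode by minimizing the exact closed-form outage probability rather than by fixed rules.

```python
def select_detection_scheme(target_sinr: float) -> str:
    """Encode the abstract's two switching observations (target SINR in linear scale).

    The paper's adaptive scheme picks the mode with the smallest closed-form
    outage probability; the fallback choices below are placeholders only.
    """
    if target_sinr > 3.0:      # > 4.77 dB: the ZF-based scheme is always preferred
        return "ZF"
    if target_sinr > 0.38:     # > -4.20 dB: the MRC-based scheme should be avoided
        return "MMSE"          # placeholder among the remaining schemes
    return "MRC"               # low-target regime: MRC stays a candidate (placeholder)
```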

Relevance:

30.00%

Publisher:

Abstract:

We report the discovery, tracking, and detection circumstances for 85 trans-Neptunian objects (TNOs) from the first 42 deg2 of the Outer Solar System Origins Survey. This ongoing r-band solar system survey uses the 0.9 deg2 field of view MegaPrime camera on the 3.6 m Canada–France–Hawaii Telescope. Our orbital elements for these TNOs are precise to a fractional semimajor axis uncertainty <0.1%. We achieve this precision in just two oppositions, as compared to the normal three to five oppositions, via a dense observing cadence and innovative astrometric technique. These discoveries are free of ephemeris bias, a first for large trans-Neptunian surveys. We also provide the necessary information to enable models of TNO orbital distributions to be tested against our TNO sample. We confirm the existence of a cold "kernel" of objects within the main cold classical Kuiper Belt and infer the existence of an extension of the "stirred" cold classical Kuiper Belt to at least several au beyond the 2:1 mean motion resonance with Neptune. We find that the population model of Petit et al. remains a plausible representation of the Kuiper Belt. The full survey, to be completed in 2017, will provide an exquisitely characterized sample of important resonant TNO populations, ideal for testing models of giant planet migration during the early history of the solar system.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a basic conceptual architecture aimed at the design of computer vision systems is qualitatively described. The proposed architecture addresses the design of vision systems in a modular fashion, using modules with three distinct units or components: a processing network or diagnostics unit, a control unit, and a communications unit. The control of the system at the module level is designed based on a discrete-event model. This methodology has been used to design a real-time active vision system for the detection, tracking, and recognition of people. It is made up of three functional modules aimed at the detection, tracking, and recognition of moving individuals, plus a supervision module.
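A minimal sketch of one such module, under the assumption of a Python-style structure with hypothetical names, might pair a processing callback with a small discrete-event state machine for control and a mailbox for communications:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List, Tuple

@dataclass
class Module:
    """One module with its three units: processing, control, and communications."""
    name: str
    process: Callable[[Any], Any]                      # processing network / diagnostics unit
    state: str = "idle"                                # current state of the control unit
    transitions: Dict[Tuple[str, str], str] = field(default_factory=dict)  # (state, event) -> next state
    mailbox: List[Any] = field(default_factory=list)   # communications unit (incoming messages)

    def handle_event(self, event: str) -> None:
        # Control unit: a small discrete-event state machine.
        self.state = self.transitions.get((self.state, event), self.state)

    def receive(self, message: Any) -> Any:
        # Communications unit: queue the message and process it only when running.
        self.mailbox.append(message)
        return self.process(message) if self.state == "running" else None
```

A supervisor module could then drive the detection, tracking, and recognition modules by sending control events (e.g. "start" or "target_lost") and exchanging messages through their communications units.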

Relevance:

30.00%

Publisher:

Abstract:

Can automatic vision systems for pedestrian detection be improved by training them on perceptually defined ROIs?

Relevance:

30.00%

Publisher:

Abstract:

This paper reviews objective assessments of Parkinson's disease (PD) motor symptoms, both cardinal symptoms and dyskinesia, using sensor systems. It surveys the manifestation of PD symptoms, the sensors used for their detection, the types of signals (measures), and the signal processing (data analysis) methods. The findings of this review are summarized in a table listing the devices (sensors), measures, and methods used in each reviewed motor symptom assessment study. Among the sensors in the gathered studies, accelerometers and touch-screen devices are the most widely used to detect PD symptoms, and among the symptoms, bradykinesia and tremor were the most frequently evaluated. In general, machine learning methods appear promising for this task. PD is a complex disease that requires continuous monitoring and multidimensional symptom analysis. Combining existing technologies to develop new sensor platforms may help assess the overall symptom profile more accurately and lead to useful tools supporting better treatment.

Relevance:

30.00%

Publisher:

Abstract:

On-site detection of inoculum of polycyclic plant pathogens could potentially contribute to the management of disease outbreaks. A 6-min, in-field competitive immunochromatographic lateral flow device (CLFD) assay was developed for detection of Alternaria brassicae (the cause of dark leaf spot in brassica crops) in air sampled above the crop canopy. Visual reading of the test result by eye provides a detection threshold of approximately 50 dark leaf spot conidia; assessment using a portable reader improved test sensitivity. In combination with a weather-driven infection model, CLFD assays were evaluated as part of an in-field risk assessment to identify periods when brassica crops were at risk of A. brassicae infection. The weather-driven model overpredicted A. brassicae infection. An automated 7-day multivial cyclone air sampler combined with a daily in-field CLFD assay detected A. brassicae conidia in air samples from above the crops. Integration of information from an in-field detection system (CLFD) with weather-driven mathematical models predicting pathogen infection has potential for use within disease management systems.

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-08

Relevance:

30.00%

Publisher:

Abstract:

The FIREDASS (FIRE Detection And Suppression Simulation) project is concerned with the development of fine water mist systems as a possible replacement for the halon fire suppression system currently used in aircraft cargo holds. The project is funded by the European Commission under the BRITE EURAM programme. The FIREDASS consortium is made up of a combination of industrial, academic, research and regulatory partners. As part of this programme of work, a computational model has been developed to help engineers optimise the design of the water mist suppression system. This computational model is based on Computational Fluid Dynamics (CFD) and is composed of the following components: fire model; mist model; two-phase radiation model; suppression model and detector/activation model. The fire model - developed by the University of Greenwich - uses prescribed release rates for heat and gaseous combustion products to represent the fire load. Typical release rates have been determined through experimentation conducted by SINTEF. The mist model - developed by the University of Greenwich - is a Lagrangian particle tracking procedure that is fully coupled to both the gas phase and the radiation field. The radiation model - developed by the National Technical University of Athens - uses a six-flux formulation. The suppression model - developed by SINTEF and the University of Greenwich - is based on an extinguishment criterion that relies on oxygen concentration and temperature. The detector/activation model - developed by Cerberus - allows many different detector and mist configurations to be tested within the computational model. These sub-models have been integrated by the University of Greenwich into the FIREDASS software package. The model has been validated using data from the SINTEF/GEC test campaigns, and the computational model gives good agreement with these experimental results. The best agreement is obtained at the ceiling, which is where the detectors and misting nozzles would be located in a real system. In this paper, the model is briefly described and some results from the validation of the fire and mist models are presented.
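As an illustration of how a cell-wise suppression check of this kind could look, the sketch below flags extinguishment when the local oxygen fraction or temperature falls below limiting values; the function name and the threshold values are placeholders, not the calibrated FIREDASS criterion.

```python
def is_extinguished(o2_fraction: float, temperature_k: float,
                    o2_limit: float = 0.13, temp_limit_k: float = 1600.0) -> bool:
    """Toy cell-wise extinguishment check: combustion is assumed to cease when the
    local oxygen fraction or the local temperature drops below a limiting value.
    Both limits are placeholders, not the calibrated FIREDASS values."""
    return o2_fraction < o2_limit or temperature_k < temp_limit_k
```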

Relevance:

30.00%

Publisher:

Abstract:

Abstract: Images acquired from unmanned aerial vehicles (UAVs) can provide data with unprecedented spatial and temporal resolution for three-dimensional (3D) modeling. Solutions developed for this purpose mainly operate on photogrammetry concepts, namely UAV-Photogrammetry Systems (UAV-PS). Such systems are used in applications where both geospatial and visual information of the environment is required. These applications include, but are not limited to, natural resource management such as precision agriculture, military and police-related services such as traffic-law enforcement, precision engineering such as infrastructure inspection, and health services such as epidemic emergency management. UAV-photogrammetry systems can be differentiated based on their spatial characteristics in terms of accuracy and resolution. That is, some applications, such as precision engineering, require high-resolution and high-accuracy information of the environment (e.g. 3D modeling with less than one centimeter accuracy and resolution), whereas in other applications lower levels of accuracy might be sufficient (e.g. wildlife management needing a few decimeters of resolution). However, even in those applications, the specific characteristics of UAV-PSs should be well considered in both system development and application in order to yield satisfying results. In this regard, this thesis presents a comprehensive review of the applications of unmanned aerial imagery, with the objective of determining the challenges that remote-sensing applications of UAV systems currently face. This review also identified the specific characteristics and requirements of UAV-PSs, which are mostly ignored or not thoroughly assessed in recent studies. Accordingly, the focus of the first part of this thesis is on exploring the methodological and experimental aspects of implementing a UAV-PS. The developed system was extensively evaluated for precise modeling of an open-pit gravel mine and for performing volumetric-change measurements. This application was selected for two main reasons. Firstly, this case study provided a challenging environment for 3D modeling in terms of scale changes, terrain relief variations, and structure and texture diversities. Secondly, open-pit-mine monitoring demands high levels of accuracy, which justifies the effort to improve the developed UAV-PS to its maximum capacity. The hardware of the system consisted of an electric-powered helicopter, a high-resolution digital camera, and an inertial navigation system. The software of the system included in-house programs specifically designed for camera calibration, platform calibration, system integration, onboard data acquisition, flight planning and ground control point (GCP) detection. The detailed features of the system are discussed in the thesis, and solutions are proposed to enhance the system and its photogrammetric outputs. The accuracy of the results was evaluated under various mapping conditions, including direct georeferencing and indirect georeferencing with different numbers, distributions and types of ground control points. Additionally, the effects of imaging configuration and network stability on modeling accuracy were assessed. The second part of this thesis concentrates on improving the techniques of sparse and dense reconstruction.
The proposed solutions are alternatives to traditional aerial photogrammetry techniques, properly adapted to the specific characteristics of unmanned, low-altitude imagery. Firstly, a method was developed for robust sparse matching and epipolar-geometry estimation. The main achievement of this method is its capacity to handle a very high percentage of outliers (errors among corresponding points) with remarkable computational efficiency compared to state-of-the-art techniques. Secondly, a block bundle adjustment (BBA) strategy was proposed based on the integration of intrinsic camera calibration parameters as pseudo-observations in a Gauss-Helmert model. The principal advantage of this strategy is that it controls the adverse effect of unstable imaging networks and noisy image observations on the accuracy of self-calibration. A sparse implementation of this strategy was also developed, which allows its application to data sets containing a large number of tie points. Finally, the concept of intrinsic curves was revisited for dense stereo matching. The proposed technique achieves a high level of accuracy and efficiency by searching only a small fraction of the whole disparity search space and by internally handling occlusions and matching ambiguities. These photogrammetric solutions were extensively tested using synthetic data, close-range images, and the images acquired from the gravel-pit mine. Achieving an absolute 3D mapping accuracy of 11±7 mm illustrates the success of this system for high-precision modeling of the environment.
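For readers unfamiliar with robust epipolar-geometry estimation, the snippet below shows a conventional RANSAC-based fundamental-matrix fit using OpenCV as a stand-in baseline; it is not the thesis's own, more outlier-tolerant matching method, and the threshold values are arbitrary.

```python
import numpy as np
import cv2

def robust_epipolar_geometry(pts1: np.ndarray, pts2: np.ndarray):
    """RANSAC-based fundamental-matrix estimation between two views.

    pts1 and pts2 are Nx2 arrays of corresponding points (N >= 8). This relies on
    OpenCV's estimator as a conventional baseline; the thesis's own matching and
    estimation method is not reproduced here.
    """
    F, mask = cv2.findFundamentalMat(
        pts1, pts2, method=cv2.FM_RANSAC,
        ransacReprojThreshold=1.0,   # max point-to-epipolar-line distance, pixels
        confidence=0.999)
    inliers = mask.ravel().astype(bool) if mask is not None else np.zeros(len(pts1), bool)
    return F, inliers
```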

Relevance:

30.00%

Publisher:

Abstract:

Invasive species pose a major threat to aquatic ecosystems. Their impact can be particularly severe in tropical regions, like those in northern Australia, where >20 invasive fish species are recorded. In temperate regions, environmental DNA (eDNA) technology is gaining momentum as a tool to detect aquatic pests, but the technology's effectiveness has not been fully explored in tropical systems with their unique climatic challenges (i.e. high turbidity, temperatures and ultraviolet light). In this study, we modified conventional eDNA protocols for use in tropical environments, using the invasive fish Mozambique tilapia (Oreochromis mossambicus) as a detection model. We evaluated the effects of high water temperatures and fish density on the detection of tilapia eDNA, using filters with larger pores to facilitate filtration. Large-pore filters (20 μm) were effective in filtering turbid waters and retaining sufficient eDNA, whilst achieving filtration times of 2-3 min per 2-L sample. High water temperatures often experienced in the tropics (23, 29, 35 °C) did not affect eDNA degradation rates, although the high temperature (35 °C) did significantly increase fish eDNA shedding rates. We established a minimum detection limit for tilapia (1 fish per 0.4 megalitres after 4 days) and found that low water flow (3.17 L/s) into ponds with high fish density (>16 fish per 0.4 megalitres) did not affect eDNA detection. These results demonstrate that eDNA technology can be effectively used in tropical ecosystems to detect invasive fish species.

Relevance:

30.00%

Publisher:

Abstract:

Dendritic cells are antigen-presenting cells that provide a vital link between the innate and adaptive immune systems. Research into this family of cells has revealed that they coordinate T-cell-based immune responses, both reactive responses and the generation of tolerance. We have derived an algorithm based on the functionality of these cells, using their signals and differentiation pathways to build a control mechanism for an artificial immune system. We present the algorithmic details in addition to some preliminary results, where the algorithm was applied for the purpose of anomaly detection. We hope that this algorithm will eventually become the key component within a large, distributed immune system, based on sound immunological concepts.
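A minimal sketch of the kind of signal fusion such an algorithm performs for a single antigen sample is shown below; the signal names follow the dendritic cell literature (PAMP, danger, safe), but the weights and the decision rule are illustrative placeholders rather than the published algorithm.

```python
def dca_context(pamp: float, danger: float, safe: float,
                w_pamp: float = 2.0, w_danger: float = 1.0, w_safe: float = -2.0) -> str:
    """Dendritic-cell-style signal fusion for one antigen sample.

    The three input signals are assumed to be normalized to [0, 1]; the weights
    are illustrative placeholders, not a published weight matrix. A positive
    fused value is read as a 'mature' (anomalous) context, otherwise 'semi-mature'.
    """
    fused = w_pamp * pamp + w_danger * danger + w_safe * safe
    return "mature" if fused > 0 else "semi-mature"
```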

Relevance:

30.00%

Publisher:

Abstract:

The role of T-cells within the immune system is to confirm and assess anomalous situations and then either respond to or tolerate the source of the effect. To illustrate how these mechanisms can be harnessed to solve real-world problems, we present the blueprint of a T-cell inspired algorithm for computer security worm detection. We show how the three central T-cell processes, namely T-cell maturation, differentiation and proliferation, naturally map into this domain and further illustrate how such an algorithm fits into a complete immune inspired computer security system and framework.
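Purely as an illustration of how the three stages could be arranged in code, the skeleton below filters candidate detectors against self profiles (maturation), confirms those with enough anomalous observations (differentiation), and replicates confirmed detectors (proliferation); every name, field, and threshold here is a hypothetical placeholder, not the authors' blueprint.

```python
from typing import Any, Dict, Iterable, List

def tcell_detect(candidates: Iterable[Dict[str, Any]],
                 self_profiles: List[Dict[str, Any]],
                 confirm_threshold: int = 3) -> List[Dict[str, Any]]:
    """Hypothetical arrangement of the three T-cell stages for worm detection.

    Each candidate is a dict with a 'signature' (some behavioural fingerprint)
    and an 'observations' count; every field name and threshold is a placeholder.
    """
    self_sigs = {p["signature"] for p in self_profiles}
    # Maturation: discard candidates that match normal ('self') behaviour.
    matured = [c for c in candidates if c["signature"] not in self_sigs]
    # Differentiation: confirm candidates backed by enough anomalous observations.
    confirmed = [c for c in matured if c.get("observations", 0) >= confirm_threshold]
    # Proliferation: replicate confirmed detectors in proportion to the evidence.
    return [dict(c) for c in confirmed for _ in range(c.get("observations", 1))]
```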

Relevance:

30.00%

Publisher:

Abstract:

This thesis focuses on digital equalization of nonlinear fiber impairments for coherent optical transmission systems. Building from well-known physical models of signal propagation in single-mode optical fibers, novel nonlinear equalization techniques are proposed, numerically assessed and experimentally demonstrated. The structure of the proposed algorithms is strongly driven by the optimization of the performance versus complexity tradeoff, envisioning the near-future practical application in commercial real-time transceivers. The work is initially focused on the mitigation of intra-channel nonlinear impairments relying on the concept of digital backpropagation (DBP) associated with Volterra-based filtering. After a comprehensive analysis of the third-order Volterra kernel, a set of critical simplifications are identified, culminating in the development of reduced complexity nonlinear equalization algorithms formulated both in time and frequency domains. The implementation complexity of the proposed techniques is analytically described in terms of computational effort and processing latency, by determining the number of real multiplications per processed sample and the number of serial multiplications, respectively. The equalization performance is numerically and experimentally assessed through bit error rate (BER) measurements. Finally, the problem of inter-channel nonlinear compensation is addressed within the context of 400 Gb/s (400G) superchannels for long-haul and ultra-long-haul transmission. Different superchannel configurations and nonlinear equalization strategies are experimentally assessed, demonstrating that inter-subcarrier nonlinear equalization can provide an enhanced signal reach while requiring only marginal added complexity.
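For context on the digital backpropagation concept that the thesis builds on, the sketch below implements a plain single-channel split-step DBP with one linear and one nonlinear step per segment; the fiber parameters are typical placeholder values, and the thesis's reduced-complexity Volterra-based formulations are not reproduced here.

```python
import numpy as np

def digital_backpropagation(rx, n_spans=10, steps_per_span=1, span_km=80.0,
                            beta2=-21.7e-27, gamma=1.3e-3, alpha_db_km=0.2,
                            sample_rate=64e9):
    """Plain single-channel split-step DBP sketch.

    rx is the received complex baseband field (assumed in sqrt(W)). Each segment
    applies one dispersion-compensating linear step in the frequency domain and
    one nonlinear phase rotation that undoes self-phase modulation over the
    segment's effective length. All parameter values are typical placeholders.
    """
    n = rx.size
    omega = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / sample_rate)  # angular frequency grid
    alpha = alpha_db_km * np.log(10) / 10 / 1e3                 # power attenuation, 1/m
    dz = span_km * 1e3 / steps_per_span                         # segment length, m
    l_eff = (1.0 - np.exp(-alpha * dz)) / alpha                 # effective nonlinear length
    lin = np.exp(0.5j * beta2 * omega ** 2 * dz)                # inverse-dispersion operator
    field = rx.astype(complex)
    for _ in range(n_spans * steps_per_span):
        field = np.fft.ifft(np.fft.fft(field) * lin)            # linear (dispersion) step
        field *= np.exp(-1j * gamma * l_eff * np.abs(field) ** 2)  # nonlinear (SPM) step
    return field
```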