945 results for Compressed Objects


Relevance:

70.00%

Publisher:

Abstract:

The paper describes possibilities for investigating 43 varieties of file formats (objects), joined in 10 groups; 89 information attacks, joined in 33 groups; and 73 methods of compression, joined in 10 groups. Experimental, expert, possible and real relations between the attack groups, the method groups and the object groups are determined by means of matrix transformations, and the respective maximum and potential sets are defined. Finally, assessments and conclusions for future investigations are proposed.
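As a rough illustration of the matrix-based approach the abstract mentions, the sketch below composes two boolean relation matrices (attack groups × method groups, and method groups × object groups) to obtain a derived attack-group/object-group relation. The group counts follow the abstract, but the matrix contents and the composition rule are assumptions for illustration, not the paper's actual data.

    import numpy as np

    # Illustrative boolean relation matrices; the sizes follow the abstract
    # (33 attack groups, 10 compression-method groups, 10 object groups),
    # but the entries here are random placeholders, not the paper's data.
    rng = np.random.default_rng(0)
    attacks_to_methods = rng.integers(0, 2, size=(33, 10)).astype(bool)
    methods_to_objects = rng.integers(0, 2, size=(10, 10)).astype(bool)

    # Boolean matrix composition: attack group i is related to object group k
    # if at least one method group links them. This corresponds to taking the
    # maximal (union-style) set of derived relations.
    attacks_to_objects = (attacks_to_methods.astype(int)
                          @ methods_to_objects.astype(int)) > 0

    print(f"{attacks_to_objects.sum()} derived attack-group/object-group relations")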

Relevance:

70.00%

Publisher:

Abstract:

This paper presents a possibility for quantitatively measuring the information security of objects that are exposed to information attacks and processed with methods of compression. A coefficient of information security is proposed, which reflects both the level of compression obtained after applying a compression method to an object and the time required by an attack to gain access to that object. The method groups with the highest and, respectively, the lowest values of the coefficient of information security across all attack groups are determined. Assessments and conclusions for future investigations are proposed.
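The abstract does not reproduce the formula for the proposed coefficient, so the snippet below only sketches one plausible shape for such a measure: a score that grows both with the compression level achieved and with the time an attack needs to reach the compressed object. The product form and the parameter names are assumptions for illustration only.

    def information_security_coefficient(compression_level: float,
                                         attack_access_time: float) -> float:
        """Illustrative coefficient of information security.

        compression_level  -- e.g. the compression ratio achieved for the object
        attack_access_time -- time the attack needs to gain access to the object

        Both inputs are assumed to increase the coefficient; the actual formula
        in the paper may differ.
        """
        if compression_level <= 0 or attack_access_time <= 0:
            raise ValueError("both inputs must be positive")
        return compression_level * attack_access_time

    # Comparing two hypothetical method groups against the same attack group:
    print(information_security_coefficient(2.5, 120.0))  # stronger compression
    print(information_security_coefficient(1.2, 120.0))  # weaker compression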

Relevance:

60.00%

Publisher:

Abstract:

Some basic types of archiving programs are described in the paper, together with their advantages and disadvantages with respect to the analysis of security in archiving. The results obtained during the described experiments are analysed and appraised.

Relevance:

30.00%

Publisher:

Abstract:

The recognition of 3-D objects from sequences of their 2-D views is modeled by a family of self-organizing neural architectures, called VIEWNET, that use View Information Encoded With NETworks. VIEWNET incorporates a preprocessor that generates a compressed but 2-D invariant representation of an image, a supervised incremental learning system that classifies the preprocessed representations into 2-D view categories whose outputs are combined into 3-D invariant object categories, and a working memory that makes a 3-D object prediction by accumulating evidence from 3-D object category nodes as multiple 2-D views are experienced. The simplest VIEWNET achieves high recognition scores without the need to explicitly code the temporal order of 2-D views in working memory. Working memories are also discussed that save memory resources by implicitly coding temporal order in terms of the relative activity of 2-D view category nodes, rather than as explicit 2-D view transitions. Variants of the VIEWNET architecture may also be used for scene understanding by using a preprocessor and classifier that can determine both What objects are in a scene and Where they are located.

The present VIEWNET preprocessor includes the CORT-X 2 filter, which discounts the illuminant, regularizes and completes figural boundaries, and suppresses image noise. This boundary segmentation is rendered invariant under 2-D translation, rotation, and dilation by use of a log-polar transform. The invariant spectra undergo Gaussian coarse coding to further reduce noise and 3-D foreshortening effects, and to increase generalization. These compressed codes are input into the classifier, a supervised learning system based on the fuzzy ARTMAP algorithm. Fuzzy ARTMAP learns 2-D view categories that are invariant under 2-D image translation, rotation, and dilation as well as 3-D image transformations that do not cause a predictive error. Evidence from sequences of 2-D view categories converges at 3-D object nodes that generate a response invariant under changes of 2-D view. These 3-D object nodes input to a working memory that accumulates evidence over time to improve object recognition. In the simplest working memory, each occurrence (nonoccurrence) of a 2-D view category increases (decreases) the corresponding node's activity in working memory. The maximally active node is used to predict the 3-D object.

Recognition is studied with noisy and clean images using slow and fast learning. Slow learning at the fuzzy ARTMAP map field is adapted to learn the conditional probability of the 3-D object given the selected 2-D view category. VIEWNET is demonstrated on an MIT Lincoln Laboratory database of 128x128 2-D views of aircraft with and without additive noise. A recognition rate of up to 90% is achieved with one 2-D view and of up to 98.5% correct with three 2-D views. The properties of 2-D view and 3-D object category nodes are compared with those of cells in monkey inferotemporal cortex.
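The evidence-accumulating working memory described above lends itself to a compact sketch: each observed 2-D view category raises the activity of its node, non-occurring categories decay, and the evidence is pooled at 3-D object nodes, with the maximally active node giving the prediction. The increment and decrement constants and the view-to-object mapping below are illustrative placeholders, not values from the paper.

    def predict_object(view_sequence, view_to_object, inc=1.0, dec=0.2):
        """Accumulate working-memory evidence over a sequence of 2-D view
        categories and return the 3-D object whose node is maximally active."""
        # Working-memory activity of each 2-D view category node.
        activity = {view: 0.0 for view in view_to_object}
        for observed in view_sequence:
            for view in activity:
                if view == observed:
                    activity[view] += inc  # occurrence raises activity
                else:
                    activity[view] = max(0.0, activity[view] - dec)  # nonoccurrence lowers it
        # Evidence from 2-D view category nodes converges at 3-D object nodes.
        object_evidence = {}
        for view, obj in view_to_object.items():
            object_evidence[obj] = object_evidence.get(obj, 0.0) + activity[view]
        return max(object_evidence, key=object_evidence.get)

    # Illustrative mapping of 2-D view categories to two 3-D object categories.
    view_to_object = {"view_a": "aircraft_1", "view_b": "aircraft_1",
                      "view_c": "aircraft_2"}
    print(predict_object(["view_a", "view_c", "view_a", "view_b"], view_to_object))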

Relevance:

20.00%

Publisher:

Abstract:

Exhaust emissions from thirteen compressed natural gas (CNG) and nine ultra-low sulphur diesel in-service transport buses were monitored on a chassis dynamometer. Measurements were carried out at idle and at three steady engine loads of 25%, 50% and 100% of maximum power at a fixed speed of 60 km/h. Emission factors were estimated for particle mass and number, carbon dioxide and oxides of nitrogen for two types of CNG buses (Scania and MAN, compatible with Euro 2 and Euro 3 emission standards, respectively) and two types of diesel buses (Volvo Pre-Euro/Euro 1 and Mercedes OC500 Euro 3). All emission factors increased with load. The median particle mass emission factor for the CNG buses was less than 1% of that from the diesel buses at all loads. However, the particle number emission factors did not show a statistically significant difference between buses operating on the two types of fuel. In this paper, for the first time, particle number emission factors are presented at four steady-state engine loads for CNG buses. Median values ranged from the order of 10¹² particles min⁻¹ at idle to 10¹⁵ particles km⁻¹ at full power. Most of the particles observed in the CNG emissions were in the nanoparticle size range and likely to be composed of volatile organic compounds. The CO2 emission factors were about 20% to 30% greater for the diesel buses than for the CNG buses, while the oxides of nitrogen emission factors did not show any difference, owing to the large variation between buses.

Relevance:

20.00%

Publisher:

Abstract:

Surveillance networks are typically monitored by a few people viewing several monitors displaying the camera feeds, so it is very difficult for a human operator to effectively detect events as they happen. Recently, computer vision research has begun to address ways to automatically process some of this data to assist human operators. Object tracking, event recognition, crowd analysis and human identification at a distance are being pursued as a means to aid human operators and improve the security of areas such as transport hubs. The task of object tracking is key to the effective use of more advanced technologies: to recognize an event, people and objects must be tracked, and tracking also enhances the performance of tasks such as crowd analysis or human identification.

Before an object can be tracked, it must be detected. Motion segmentation techniques, widely employed in tracking systems, produce a binary image in which objects can be located. However, these techniques are prone to errors caused by shadows and lighting changes. Detection routines often fail, either due to erroneous motion caused by noise and lighting effects, or due to the detection routines being unable to split occluded regions into their component objects. Particle filters can be used as a self-contained tracking system, making it unnecessary for detection to be carried out separately except for an initial (often manual) detection to initialise the filter. Particle filters use one or more extracted features to evaluate the likelihood of an object existing at a given point in each frame. Such systems, however, do not easily allow multiple objects to be tracked robustly, and do not explicitly maintain the identity of tracked objects.

This dissertation investigates improvements to the performance of object tracking algorithms through improved motion segmentation and the use of a particle filter. A novel hybrid motion segmentation / optical flow algorithm, capable of simultaneously extracting multiple layers of foreground and optical flow in surveillance video frames, is proposed. The algorithm is shown to perform well in the presence of adverse lighting conditions, and the optical flow is capable of extracting a moving object. The proposed algorithm is integrated within a tracking system and evaluated using the ETISEO (Evaluation du Traitement et de l'Interpretation de Sequences vidEO - Evaluation for video understanding) database, and significant improvement in detection and tracking performance is demonstrated when compared to a baseline system. A Scalable Condensation Filter (SCF), a particle filter designed to work within an existing tracking system, is also developed. The creation and deletion of modes and the maintenance of identity are handled by the underlying tracking system, and the tracking system is able to benefit from the improved performance in uncertain conditions arising from occlusion and noise provided by a particle filter. The system is evaluated using the ETISEO database.

The dissertation then investigates fusion schemes for multi-spectral tracking systems. Four fusion schemes for combining a thermal and a visual colour modality are evaluated using the OTCBVS (Object Tracking and Classification in and Beyond the Visible Spectrum) database. It is shown that a middle fusion scheme yields the best results and demonstrates a significant improvement in performance when compared to a system using either mode individually. Findings from the thesis contribute to improving the performance of semi-automated video processing and therefore to improving security in areas under surveillance.
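To make the particle-filter idea in the abstract concrete, the following is a minimal bootstrap particle filter for 2-D position tracking, in which each particle (object hypothesis) is weighted by a single feature likelihood evaluated at its position each frame. It is only a generic sketch of the technique, not the Scalable Condensation Filter developed in the dissertation; the random-walk motion model and the Gaussian likelihood are placeholder assumptions.

    import numpy as np

    def particle_filter_step(particles, weights, likelihood_fn,
                             motion_std=3.0, rng=np.random.default_rng()):
        """One predict / update / resample cycle over (N, 2) particle positions."""
        n = len(particles)
        # Predict: a simple random-walk motion model (placeholder).
        particles = particles + rng.normal(0.0, motion_std, particles.shape)
        # Update: weight each particle by the feature likelihood at its position.
        weights = weights * np.array([likelihood_fn(p) for p in particles])
        weights = weights / weights.sum()
        # Resample when the effective sample size drops below half the particles.
        if 1.0 / np.sum(weights ** 2) < n / 2:
            idx = rng.choice(n, size=n, p=weights)
            particles, weights = particles[idx], np.full(n, 1.0 / n)
        return particles, weights

    # Toy usage: a Gaussian likelihood centred on the (here known) object position.
    rng = np.random.default_rng(1)
    true_pos = np.array([40.0, 60.0])
    likelihood = lambda p: np.exp(-np.sum((p - true_pos) ** 2) / (2 * 10.0 ** 2))
    particles = rng.uniform(0.0, 128.0, size=(200, 2))
    weights = np.full(200, 1.0 / 200)
    for _ in range(10):
        particles, weights = particle_filter_step(particles, weights, likelihood)
    print("estimated position:", np.average(particles, axis=0, weights=weights))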

Relevance:

20.00%

Publisher:

Abstract:

Cultural objects are increasingly generated and stored in digital form, yet effective methods for their indexing and retrieval remain an important area of research. The main problem arises from the disconnection between the content-based indexing approach used by computer scientists and the description-based approach used by information scientists. There is also a lack of representational schemes that allow the alignment of semantics and context with keywords and low-level features that can be automatically extracted from the content of these cultural objects. This paper presents an integrated approach to address these problems, taking advantage of both the computer science and the information science approaches. We first discuss the requirements from a number of perspectives: users, content providers, content managers and technical systems. We then present an overview of our system architecture and describe the techniques which underlie the major components of the system, including automatic object category detection; user-driven tagging; metadata transformation and augmentation; and an expression language for digital cultural objects. In addition, we discuss our experience in testing and evaluating some existing collections, analyse the difficulties encountered, and propose ways to address these problems.