906 results for Object-Oriented Programming


Relevance: 30.00%

Publisher:

Abstract:

The What-and-Where filter forms part of a neural network architecture for spatial mapping, object recognition, and image understanding. The Where filter responds to an image figure that has been separated from its background. It generates a spatial map whose cell activations simultaneously represent the position, orientation, and size of all the figures in a scene (where they are). This spatial map may be used to direct spatially localized attention to these image features. A multiscale array of oriented detectors, followed by competitive and interpolative interactions between position, orientation, and size scales, is used to define the Where filter. This analysis discloses several issues that need to be dealt with by a spatial mapping system that is based upon oriented filters, such as the role of cliff filters with and without normalization, the double-peak problem of maximum orientation across size scale, and the different self-similar interpolation properties across orientation as opposed to size scale. Several computationally efficient Where filters are proposed. The Where filter may be used for parallel transformation of multiple image figures into invariant representations that are insensitive to the figures' original position, orientation, and size. These invariant figural representations form part of a system devoted to attentive object learning and recognition (what it is). Unlike some alternative models in which serial search for a target occurs, a What-and-Where representation can be used to rapidly search in parallel for a desired target in a scene. Such a representation can also be used to learn multidimensional representations of objects and their spatial relationships for purposes of image understanding. The What-and-Where filter is inspired by neurobiological data showing that a Where processing stream in the cerebral cortex is used for attentive spatial localization and orientation, whereas a What processing stream is used for attentive object learning and recognition.
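
The core of the Where computation, oriented detectors applied at several sizes whose strongest response encodes a figure's orientation and size, can be illustrated with a short sketch. This is not the paper's exact filter bank; the elongated-Gaussian kernels, sizes and orientations below are illustrative, and NumPy/SciPy are assumed to be available.

    # A minimal sketch (not the paper's exact Where filter): the orientation and
    # size of a figure are read off the strongest response of a bank of oriented,
    # multiscale elongated-Gaussian kernels.
    import numpy as np
    from scipy.ndimage import convolve   # assumption: SciPy is available

    def oriented_kernel(size, angle, aspect=3.0):
        """Elongated Gaussian with long axis at `angle` (radians), side `size`."""
        r = size // 2
        y, x = np.mgrid[-r:r + 1, -r:r + 1]
        u = x * np.cos(angle) + y * np.sin(angle)    # along the long axis
        v = -x * np.sin(angle) + y * np.cos(angle)   # across the long axis
        k = np.exp(-(u / (aspect * r / 2)) ** 2 - (v / (r / 2)) ** 2)
        return k / k.sum()

    def where_responses(figure, sizes=(7, 11, 15), n_orient=8):
        """Response maps indexed by (size, orientation) for a binary figure."""
        angles = np.linspace(0, np.pi, n_orient, endpoint=False)
        return {(s, a): convolve(figure.astype(float), oriented_kernel(s, a))
                for s in sizes for a in angles}

    # A short horizontal bar should be matched best by a small kernel at angle 0.
    img = np.zeros((32, 32))
    img[15:17, 10:22] = 1.0
    resp = where_responses(img)
    print("estimated (size, orientation):", max(resp, key=lambda k: resp[k].max()))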

Relevance: 30.00%

Publisher:

Abstract:

This article describes advances in statistical computation for large-scale data analysis in structured Bayesian mixture models via graphics processing unit (GPU) programming. The developments are partly motivated by computational challenges arising in fitting models of increasing heterogeneity to increasingly large datasets. An example context concerns common biological studies using high-throughput technologies that generate many very large datasets and require increasingly high-dimensional mixture models with large numbers of mixture components. We outline important strategies and processes for GPU computation in Bayesian simulation and optimization approaches, give examples of the benefits of GPU implementations in terms of processing speed and scale-up in the ability to analyze large datasets, and provide a detailed, tutorial-style exposition that will benefit readers interested in developing GPU-based approaches in other statistical models. Novel, GPU-oriented approaches to modifying existing algorithms and software design can lead to vast speed-ups and, critically, enable statistical analyses that would otherwise not be performed because of compute-time limitations in traditional computational environments. Supplemental materials are provided with all source code, example data, and details that will enable readers to implement and explore the GPU approach in this mixture modeling context. © 2010 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
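
The part of mixture-model fitting that the GPU accelerates most naturally is the per-observation work, such as the component responsibilities in a simulation or EM step, because every observation is processed independently. A minimal sketch of that idea follows; it uses CuPy as a drop-in NumPy replacement (an assumption, with a CPU fallback) and a one-dimensional Gaussian mixture for brevity, not the article's actual models.

    # Minimal sketch of the GPU-friendly core of mixture-model fitting: the
    # per-observation responsibility computation is data-parallel, so it maps
    # naturally onto a GPU.  CuPy (assumed installed) is a drop-in replacement
    # for NumPy; the same code falls back to the CPU if CuPy is absent.
    import numpy as np
    try:
        import cupy as xp          # GPU path (assumption: CUDA and CuPy available)
    except ImportError:
        xp = np                    # CPU fallback

    def responsibilities(x, weights, means, variances):
        """E-step for a 1-D Gaussian mixture: p(component k | x_i) for all i, k."""
        x = xp.asarray(x)[:, None]                    # shape (n, 1)
        w = xp.asarray(weights)[None, :]              # shape (1, k)
        mu = xp.asarray(means)[None, :]
        var = xp.asarray(variances)[None, :]
        log_p = xp.log(w) - 0.5 * xp.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var)
        log_p -= log_p.max(axis=1, keepdims=True)     # stabilise the normalisation
        p = xp.exp(log_p)
        return p / p.sum(axis=1, keepdims=True)       # one independent row per x_i

    r = responsibilities(np.random.randn(1_000_000), [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0])
    print(r.shape)    # (1000000, 2); every row is computed independently of the others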

Relevance: 30.00%

Publisher:

Abstract:

Model Driven Architecture (MDA) supports the transformation of reusable models into executable software. Business representations, however, cannot be fully and explicitly captured in such models for direct transformation into running systems. Thus, once business needs change, the language abstractions used by MDA (e.g. Object Constraint Language / Action Semantics), being low level, have to be edited directly. We therefore describe an Agent-oriented Model Driven Architecture (AMDA) that uses a set of business models, kept under continuous maintenance by business people to reflect current business needs, and associates them with adaptive agents that interpret the captured knowledge to behave dynamically. Three contributions of the AMDA approach are identified: 1) to Agent-oriented Software Engineering, a method of building adaptive Multi-Agent Systems; 2) to MDA, a means of abstracting high-level business-oriented models to align executable systems with their requirements at runtime; 3) to distributed systems, the interoperability of disparate components and services via the agent abstraction.
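
The central AMDA idea, agents that interpret an editable business model at run time rather than executing code generated from it, can be sketched in a few lines. The rule format and scenario below are invented for illustration; they are not the AMDA business-model notation.

    # Minimal sketch of the AMDA idea that agents interpret an editable business
    # model at run time: change the rules and the same agent behaves differently
    # on the next request.  The rule format below is invented for illustration.
    business_model = {
        "order": [
            {"if": lambda o: o["value"] > 1000, "then": "require_approval"},
            {"if": lambda o: True,              "then": "auto_accept"},
        ]
    }

    class AdaptiveAgent:
        def __init__(self, model):
            self.model = model                    # shared model, maintained by business people

        def handle(self, kind, payload):
            for rule in self.model[kind]:         # first matching rule wins
                if rule["if"](payload):
                    return rule["then"]

    agent = AdaptiveAgent(business_model)
    print(agent.handle("order", {"value": 250}))  # auto_accept
    # Business needs change: lower the approval threshold, no redeployment needed.
    business_model["order"][0]["if"] = lambda o: o["value"] > 100
    print(agent.handle("order", {"value": 250}))  # require_approval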

Relevance: 30.00%

Publisher:

Abstract:

Functional and non-functional concerns require different programming effort, different techniques and different methodologies when attempting to program efficient parallel/distributed applications. In this work we present a "programmer-oriented" methodology based on formal tools that permits reasoning about parallel/distributed program development and refinement. The proposed methodology is semi-formal in that it does not require the exploitation of highly formal tools and techniques, while providing palatable and effective support to programmers developing parallel/distributed applications, in particular when handling non-functional concerns.

Relevance: 30.00%

Publisher:

Abstract:

Apparatus for scanning a moving object includes a visible-waveband sensor oriented to collect a series of images of the object as it passes through a field of view. An image processor uses the series of images to form a composite image. The image processor stores image pixel data for a current image and its predecessor in the series. It uses information in the current image and its predecessor to analyse images and derive likelihood measures indicating the probabilities that current image pixels correspond to parts of the object. The image processor estimates motion between the current image and its predecessor from likelihood-weighted pixels. It generates the composite image from frames positioned according to respective estimates of object image motion. Image motion may alternatively be detected by a speed sensor, such as a Doppler radar, that senses object motion directly and provides image timing signals.
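
A minimal sketch of the likelihood-weighted motion estimation described above: each pixel is weighted by the likelihood that it belongs to the object, and the frame-to-frame shift is taken from the movement of the weighted centroid. The likelihood maps are supplied directly here, whereas the apparatus derives them from the current and predecessor images; NumPy is assumed.

    # Minimal sketch of likelihood-weighted motion estimation: the object's
    # frame-to-frame shift is taken from the movement of the centroid of the
    # per-pixel object-likelihood map.
    import numpy as np

    def weighted_centroid(likelihood):
        """Centroid of the per-pixel object-likelihood map."""
        ys, xs = np.mgrid[0:likelihood.shape[0], 0:likelihood.shape[1]]
        w = likelihood / likelihood.sum()
        return (ys * w).sum(), (xs * w).sum()

    def estimate_motion(prev_likelihood, curr_likelihood):
        """Estimated (dy, dx) of the object between consecutive frames."""
        py, px = weighted_centroid(prev_likelihood)
        cy, cx = weighted_centroid(curr_likelihood)
        return cy - py, cx - px

    # The object (a bright blob) moves 3 pixels to the right between frames.
    prev = np.zeros((40, 60)); prev[18:22, 10:14] = 1.0
    curr = np.zeros((40, 60)); curr[18:22, 13:17] = 1.0
    dy, dx = estimate_motion(prev, curr)
    print(round(dy, 1), round(dx, 1))    # 0.0 3.0

Frames would then be placed in the composite at offsets accumulated from these per-frame estimates.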

Relevance: 30.00%

Publisher:

Abstract:

Data flow techniques have been around since the early '70s, when they were used in compilers for sequential languages. Shortly after their introduction they were also considered as a possible model for parallel computing, although the impact here was limited. Recently, however, data flow has been identified as a candidate for the efficient implementation of various programming models on multi-core architectures. In most cases, though, the burden of determining data flow "macro" instructions is left to the programmer, while the compiler/run-time system manages only the efficient scheduling of these instructions. We discuss a structured parallel programming approach supporting automatic compilation of programs to macro data flow and we show experimental results demonstrating the feasibility of the approach and the efficiency of the resulting "object" code on different classes of state-of-the-art multi-core architectures. The experimental results use different base mechanisms to implement the macro data flow run-time support, from plain pthreads with condition variables to more modern and effective lock- and fence-free parallel frameworks. Experimental results comparing the efficiency of the proposed approach with that achieved using other, more classical, parallel frameworks are also presented. © 2012 IEEE.
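
A macro data flow run time can be pictured as a graph of coarse-grained instructions that fire as soon as their input tokens are available, with a pool of workers executing all ready instructions in parallel. The toy interpreter below illustrates only that scheduling idea, using plain Python threads; it is not the compiler or any of the run-time supports evaluated in the paper.

    # Minimal sketch of a macro data flow run time: coarse-grained "macro
    # instructions" fire once their inputs are available, and a thread pool
    # executes all ready instructions in parallel.
    from concurrent.futures import ThreadPoolExecutor

    class MacroInstruction:
        def __init__(self, fn, deps):
            self.fn, self.deps = fn, deps      # deps: names of producer instructions

    def run_graph(graph, workers=4):
        """graph: name -> MacroInstruction (acyclic).  Runs in data flow order."""
        done, pending = {}, dict(graph)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            while pending:
                ready = {n: mi for n, mi in pending.items()
                         if all(d in done for d in mi.deps)}
                futures = {n: pool.submit(mi.fn, *[done[d] for d in mi.deps])
                           for n, mi in ready.items()}
                for n, f in futures.items():   # ready instructions run concurrently
                    done[n] = f.result()
                    del pending[n]
        return done

    # A two-stage pipeline followed by a reduction, expressed as a data flow graph.
    graph = {
        "src": MacroInstruction(lambda: list(range(10)), []),
        "sq":  MacroInstruction(lambda xs: [x * x for x in xs], ["src"]),
        "sum": MacroInstruction(lambda xs: sum(xs), ["sq"]),
    }
    print(run_graph(graph)["sum"])    # 285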

Relevance: 30.00%

Publisher:

Abstract:

The Field Programmable Gate Array (FPGA) implementation of the commonly used Histogram of Oriented Gradients (HOG) algorithm is explored. The HOG algorithm is employed to extract features for object detection. A key focus has been to explore the use of a new FPGA-based processor, IPPro, which has been targeted at image processing. The paper gives details of the mapping and scheduling factors that influence performance and the stages that were undertaken to allow the algorithm to be deployed on FPGA hardware, whilst taking into account the specific IPPro architecture features. We show that multi-core IPPro performance can exceed that of state-of-the-art FPGA designs by up to 3.2 times, with reduced design and implementation effort and increased flexibility, all on a low-cost Zynq programmable system.
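
As a software reference for what the mapped cores compute, the HOG front end reduces to gradient computation followed by per-cell orientation histograms. The sketch below shows those two stages in NumPy; block normalisation and the detector stage are omitted, and the cell size and bin count are the usual defaults rather than values taken from the paper.

    # Minimal sketch of the HOG front end in NumPy: gradient magnitude and
    # unsigned orientation, then a 9-bin orientation histogram per 8x8 cell.
    # Block normalisation and the detection stage are omitted.
    import numpy as np

    def hog_cells(image, cell=8, bins=9):
        """Magnitude-weighted orientation histograms over cell x cell blocks."""
        gy, gx = np.gradient(image.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.degrees(np.arctan2(gy, gx)) % 180       # unsigned orientation
        h_cells, w_cells = image.shape[0] // cell, image.shape[1] // cell
        hist = np.zeros((h_cells, w_cells, bins))
        for i in range(h_cells):
            for j in range(w_cells):
                m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
                a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
                hist[i, j], _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
        return hist

    features = hog_cells(np.random.rand(64, 128))
    print(features.shape)    # (8, 16, 9): one 9-bin histogram per 8x8 pixel cell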

Relevance: 30.00%

Publisher:

Abstract:

The cornerstone of the interoperability of eLearning systems is the standard definition of learning objects. Nevertheless, for some domains this standard is insufficient to fully describe all the assets, especially when they are used as input for other eLearning services. On the other hand, a standard definition of learning objects is not enough to ensure interoperability among eLearning systems; they must also use a standard API to exchange learning objects. This paper presents the design and implementation of a service-oriented repository of learning objects called crimsonHex. This repository is fully compliant with the existing interoperability standards and supports new definitions of learning objects for specialized domains. We illustrate this feature with the definition of programming problems as learning objects and their validation by the repository. The repository is also prepared to store usage data on learning objects, to tailor the presentation order and adapt it to learner profiles.
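
Repository-side validation of a specialized learning object can be as simple as checking that a package carries the parts downstream services need. The sketch below illustrates the idea with an invented required-parts list; it is not the actual crimsonHex validation rules or package layout.

    # Minimal sketch of repository-side validation of a programming-problem
    # learning object: check that the package supplies the parts other services
    # will need.  The required-parts list is illustrative, not crimsonHex's rules.
    REQUIRED = ("statement", "solution", "tests")

    def validate_exercise_lo(package):
        """Return a list of problems found in the learning-object package."""
        issues = [f"missing part: {part}" for part in REQUIRED if part not in package]
        if not package.get("tests"):
            issues.append("at least one test case is required")
        return issues

    ok_pkg = {"statement": "Sum two integers.",
              "solution": "print(int(input()) + int(input()))",
              "tests": [{"in": "1\n2", "out": "3"}]}
    print(validate_exercise_lo(ok_pkg))                              # []
    print(validate_exercise_lo({"statement": "Sum two integers."}))  # lists what is missing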

Relevance: 30.00%

Publisher:

Abstract:

The e-Framework is arguably the most prominent e-learning framework currently in use. For this reason it was selected as the basis for modelling a programming exercises evaluation service. The purpose of this type of evaluator is to mark and grade exercises in computer programming courses and in programming contests. By exposing its functions as services, a programming exercise evaluator is able to participate in business processes integrating different system types, such as Programming Contest Management Systems, Learning Management Systems, Integrated Development Environments and Learning Object Repositories. This paper formalizes the approaches to be used in the implementation of a programming exercise evaluator as a service on the e-Framework.
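
The evaluation function such a service wraps is conceptually simple: run a submission against the exercise's test cases and return a machine-readable report. The sketch below illustrates this for Python submissions with an invented interface and report shape; the paper's actual service definitions on the e-Framework are not reproduced here.

    # Minimal sketch of the kind of function an exercise-evaluation service
    # exposes: run a submission against the exercise's test cases and return a
    # machine-readable report.  The interface and report shape are invented.
    import json, os, subprocess, sys, tempfile

    def evaluate(source_code, test_cases, timeout=2):
        """Run a Python submission against (stdin, expected stdout) test cases."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(source_code)
            path = f.name
        report = []
        try:
            for stdin, expected in test_cases:
                run = subprocess.run([sys.executable, path], input=stdin,
                                     capture_output=True, text=True, timeout=timeout)
                report.append({"passed": run.stdout.strip() == expected.strip()})
        finally:
            os.unlink(path)
        passed = sum(r["passed"] for r in report)
        return {"grade": 100 * passed // len(report), "tests": report}

    print(json.dumps(evaluate("print(int(input()) * 2)", [("3", "6"), ("5", "10")])))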

Relevance: 30.00%

Publisher:

Abstract:

It is widely accepted that solving programming exercises is fundamental to learning how to program. Nevertheless, solving exercises is only effective if students receive an assessment of their work. An exercise solved incorrectly will consolidate a false belief, and without feedback many students will not be able to overcome their difficulties. However, creating, managing and accessing a large number of exercises covering all the topics in the curriculum of a programming course, in classes with large numbers of students, can be a daunting task without the appropriate tools working in unison. This involves a diversity of tools, from the environments where programs are coded, to automatic program evaluators providing feedback on students' attempts, passing through the authoring, management and sequencing of programming exercises as learning objects. We believe that the integration of these tools will have a great impact on the acquisition of programming skills. Our research objective is to manage and coordinate a network of eLearning systems where students can solve computer programming exercises. Networks of this kind include systems such as learning management systems (LMS), evaluation engines (EE), learning object repositories (LOR) and exercise resolution environments (ERE). Our strategy to achieve interoperability among these tools is based on a shared definition of a programming exercise as a Learning Object (LO).
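
What a shared definition of a programming exercise as a Learning Object might carry can be sketched as a plain data structure holding what each system in the network needs: a statement for the LMS, a reference solution and tests for the evaluation engine, and metadata for the repository. The field names below are illustrative, not the project's actual schema.

    # Minimal sketch of a programming exercise expressed as a shared Learning
    # Object: enough for an LMS to present it, an evaluation engine (EE) to
    # judge submissions, and a repository (LOR) to index it.
    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        stdin: str
        expected_stdout: str

    @dataclass
    class ProgrammingExerciseLO:
        identifier: str
        title: str
        statement_html: str                              # shown to the student by the LMS
        solution: str                                    # reference solution, used by the EE
        tests: list = field(default_factory=list)        # acceptance tests, used by the EE
        metadata: dict = field(default_factory=dict)     # LOM-style descriptors, used by the LOR

    lo = ProgrammingExerciseLO(
        identifier="urn:exercise:sum-two-numbers",
        title="Sum two numbers",
        statement_html="<p>Read two integers and print their sum.</p>",
        solution="print(int(input()) + int(input()))",
        tests=[TestCase("1\n2", "3"), TestCase("10\n-4", "6")],
    )
    print(lo.title, "with", len(lo.tests), "tests")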

Relevance: 30.00%

Publisher:

Abstract:

Assessment plays a vital role in learning. This is certainly the case with the assessment of computer programs, both in curricular and competitive learning. The lack of a standard, or at least a widely used format, creates a modern Babel tower of Learning Objects: assessment items that cannot be shared among automatic assessment systems. These systems, whose interoperability is hindered by the lack of a common format, include contest management systems, evaluation engines, repositories of learning objects and authoring tools. A pragmatic approach to remedy this problem is to create a service to convert among existing formats, a kind of translation service specialized in programming problem formats. To convert programming exercises on-the-fly among the most used formats is the purpose of BabeLO, a service to cope with the existing Babel of Learning Object formats for programming exercises. BabeLO was designed as a service to act as middleware in a network of systems typically used in automatic assessment of programs. It provides support for multiple exercise formats and can be used by: evaluation engines, to assess exercises regardless of their format; repositories, to import exercises from various sources; and authoring systems, to create exercises in multiple formats or based on exercises from other sources. This paper analyses several existing formats to highlight both their differences and their similar features. Based on this analysis it presents an approach to extensible format conversion. It also presents the features of PExIL, the pivot format on which the conversion is based, and the function definitions of the proposed service, BabeLO. Details on the design and implementation of BabeLO, including the service API and the interfaces required to extend the conversion to a new format, are also provided. To evaluate the effectiveness and efficiency of this approach the paper reports on two actual uses of BabeLO: to relocate exercises to a different repository, and to use an evaluation engine in a network of heterogeneous systems.
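
The pivot-based design can be sketched abstractly: each supported format contributes one reader (format to pivot) and one writer (pivot to format), so supporting n formats costs 2n converters instead of n*(n-1) pairwise ones. The formats and fields below are invented examples; PExIL itself is richer than this.

    # Minimal sketch of conversion through a pivot format: each supported format
    # registers one reader (to the pivot) and one writer (from the pivot), and a
    # conversion is a reader composed with a writer.
    READERS, WRITERS = {}, {}

    def reader(fmt):
        def register(fn):
            READERS[fmt] = fn
            return fn
        return register

    def writer(fmt):
        def register(fn):
            WRITERS[fmt] = fn
            return fn
        return register

    @reader("formatA")
    def read_a(doc):              # formatA keeps the statement under "text"
        return {"title": doc["name"], "statement": doc["text"]}

    @writer("formatB")
    def write_b(pivot):           # formatB wants a flat, prefixed layout
        return {"b:title": pivot["title"], "b:body": pivot["statement"]}

    def convert(doc, src, dst):
        """Convert an exercise between two formats via the pivot representation."""
        return WRITERS[dst](READERS[src](doc))

    print(convert({"name": "Sum", "text": "Add two integers."}, "formatA", "formatB"))

Extending the conversion to a new format then amounts to registering one more reader/writer pair.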

Relevance: 30.00%

Publisher:

Abstract:

This paper describes a general, trainable architecture for object detection that has previously been applied to face and people detection, with a new application to car detection in static images. Our technique is a learning-based approach that uses a set of labeled training data from which an implicit model of an object class, here, cars, is learned. Instead of pixel representations, which may be noisy and therefore not provide a compact representation for learning, our training images are transformed from pixel space to that of Haar wavelets that respond to local, oriented, multiscale intensity differences. These feature vectors are then used to train a support vector machine classifier. The detection of cars in images is an important step in applications such as traffic monitoring, driver assistance systems, and surveillance, among others. We show several examples of car detection on out-of-sample images and present an ROC curve that highlights the performance of our system.
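
A rough sketch of the two stages named above: Haar-wavelet-like features computed as differences of rectangular sums over an integral image, fed to a support vector machine. The feature layout is simplified (single-scale, two-rectangle responses on a coarse grid), the data is synthetic, and scikit-learn is assumed for the SVM.

    # Minimal sketch: Haar-wavelet-like features as differences of rectangular
    # sums (computed from an integral image), then an SVM trained on those
    # feature vectors.
    import numpy as np
    from sklearn.svm import SVC             # assumption: scikit-learn installed

    def rect_sum(ii, r0, c0, r1, c1):
        """Sum of img[r0:r1, c0:c1] in O(1) using the integral image ii."""
        total = ii[r1 - 1, c1 - 1]
        if r0 > 0: total -= ii[r0 - 1, c1 - 1]
        if c0 > 0: total -= ii[r1 - 1, c0 - 1]
        if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
        return total

    def haar_features(img, step=8):
        """Horizontal and vertical two-rectangle responses on a coarse grid."""
        ii, feats = img.cumsum(0).cumsum(1), []
        for r in range(0, img.shape[0] - step, step):
            for c in range(0, img.shape[1] - step, step):
                left   = rect_sum(ii, r, c, r + step, c + step // 2)
                right  = rect_sum(ii, r, c + step // 2, r + step, c + step)
                top    = rect_sum(ii, r, c, r + step // 2, c + step)
                bottom = rect_sum(ii, r + step // 2, c, r + step, c + step)
                feats += [left - right, top - bottom]
        return np.array(feats)

    # Synthetic stand-ins for labelled car / non-car windows.
    X = np.array([haar_features(np.random.rand(64, 64)) for _ in range(40)])
    y = np.array([0, 1] * 20)
    clf = SVC(kernel="linear").fit(X, y)
    print(clf.predict(X[:2]))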

Relevance: 30.00%

Publisher:

Abstract:

Local descriptors are increasingly used for the task of object recognition because of their perceived robustness with respect to occlusions and to global geometrical deformations. We propose a performance criterion for a local descriptor based on the tradeoff between selectivity and invariance. In this paper, we evaluate several local descriptors with respect to selectivity and invariance. The descriptors that we evaluated are Gaussian derivatives up to the third order, gray image patches, and Laplacian-based descriptors with either three-scale or single-scale filters. We compare selectivity and invariance to several affine changes such as rotation, scale, brightness, and viewpoint. Comparisons have been made keeping the dimensionality of the descriptors roughly constant. The overall results indicate good performance by the descriptor based on a set of oriented Gaussian filters. It is interesting that oriented receptive fields similar to the Gaussian derivatives, as well as receptive fields similar to the Laplacian, are found in primate visual cortex.
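
A descriptor built from oriented Gaussian derivatives can be sketched by steering the two axis-aligned first derivatives to several orientations and sampling them at the keypoint over a few scales. The scales, orientation count and normalisation below are illustrative rather than the paper's exact filter set; SciPy is assumed.

    # Minimal sketch of a descriptor built from oriented Gaussian derivatives:
    # steer the axis-aligned first derivatives to several orientations and
    # sample them at the keypoint, for a few scales.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def gaussian_derivative_descriptor(img, y, x, sigmas=(1.0, 2.0), n_orient=4):
        """Responses of oriented first-order Gaussian derivatives at (y, x)."""
        desc = []
        for s in sigmas:
            iy = gaussian_filter(img, s, order=(1, 0))   # d/dy at scale s
            ix = gaussian_filter(img, s, order=(0, 1))   # d/dx at scale s
            for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
                # A first derivative at any orientation is a steered combination
                # of the two axis-aligned derivatives.
                desc.append(np.cos(theta) * ix[y, x] + np.sin(theta) * iy[y, x])
        d = np.array(desc)
        return d / (np.linalg.norm(d) + 1e-12)           # brightness-robust scaling

    img = np.random.rand(64, 64)
    print(gaussian_derivative_descriptor(img, 32, 32).shape)   # (8,): 2 scales x 4 orientations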

Relevance: 30.00%

Publisher:

Abstract:

Local descriptors are increasingly used for the task of object recognition because of their perceived robustness with respect to occlusions and to global geometrical deformations. Such a descriptor, based on a set of oriented Gaussian derivative filters, is used in our recognition system. We report here an evaluation of several techniques for orientation estimation to achieve rotation invariance of the descriptor. We also describe feature selection based on a single training image. Virtual images are generated by rotating and rescaling the image, and robust features are selected. The results confirm robust performance in cluttered scenes, in the presence of partial occlusions, and when the object is embedded in different backgrounds.
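
One common orientation-estimation technique, taking the peak of a magnitude-weighted gradient-orientation histogram and rotating the patch to a canonical orientation, can be sketched as follows. It illustrates the kind of estimator being compared, not the specific techniques evaluated in the paper; SciPy is assumed.

    # Minimal sketch of one orientation-estimation technique: take the peak of a
    # magnitude-weighted gradient-orientation histogram and rotate the patch to
    # a canonical orientation before describing it.
    import numpy as np
    from scipy.ndimage import rotate

    def dominant_orientation(patch, bins=36):
        """Peak of the gradient-orientation histogram, weighted by magnitude."""
        gy, gx = np.gradient(patch.astype(float))
        ang = np.degrees(np.arctan2(gy, gx)) % 360
        hist, edges = np.histogram(ang, bins=bins, range=(0, 360),
                                   weights=np.hypot(gx, gy))
        peak = hist.argmax()
        return 0.5 * (edges[peak] + edges[peak + 1])

    def canonical(patch):
        """Rotate the patch so its dominant orientation maps to zero degrees."""
        return rotate(patch, -dominant_orientation(patch), reshape=False, mode="nearest")

    patch = np.zeros((32, 32))
    patch[8:24, 14:18] = 1.0      # a vertical bar
    print(dominant_orientation(patch), dominant_orientation(canonical(patch)))

The same scipy.ndimage rotate and zoom calls can also generate the rotated and rescaled virtual images used for the feature-selection step.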

Relevance: 30.00%

Publisher:

Abstract:

These are the resources used for the Computer Science course Programming Principles, designed to teach students the fundamentals of computer programming and object orientation via learning the Java language. We also touch on some software engineering basics, such as patterns, software design and testing. The course assumes no previous knowledge of programming, but there is a fairly steep learning curve, and students are encouraged to practice, practice, practice!