906 results for object-oriented programming
Abstract:
Design is being performed on an ever-increasing spectrum of complex practices arising in response to emerging markets and technologies, co-design, digital interaction, service design and cultures of innovation. This emerging notion of design has led to an expansive array of collaborative and facilitation skills to demonstrate and share how such methods can shape innovation. The meaning of these design things in practice can't be taken for granted as matters of fact, which raises a key challenge for design to represent its role through the contradictory nature of matters of concern. This paper explores an innovative, object-oriented approach within the field of design research, visually combining an actor-network theory framework with situational analysis, to report on the role of design for fledgling companies in Scotland, established and funded through the knowledge exchange hub Design in Action (DiA). Key findings and visual maps are presented from reflective discussions with actors from a selection of the businesses within DiA's portfolio. The suggestion is that any notions of strategic value, of engendering meaningful change, of sharing the vision of design, through design things, should be grounded in the reflexive interpretations of matters of concern that emerge.
Abstract:
The Portable Document Format (PDF), defined by Adobe Systems Inc. as the basis of its Acrobat product range, is discussed in some detail. Particular emphasis is given to its flexible object-oriented structure, which has yet to be fully exploited. It is currently used to represent not logical structure but simply a series of pages and associated resources. A definition of an Encapsulated PDF (EPDF) is presented, in which EPDF blocks carry with them their own resource requirements, together with geometrical and logical information. A block formatter called Juggler is described, which can lay out EPDF blocks from various sources onto new pages. Future revisions of PDF supporting uniquely named EPDF blocks tagged with semantic information would assist in composite page makeup and could even lead to fully revisable PDF.
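To make the idea of a self-describing block concrete, the following is a minimal, hypothetical sketch (in Python) of what such a block might carry and how a formatter could stack blocks; all field names and the toy layout step are illustrative assumptions, not the paper's actual EPDF definition.

```python
# Illustrative only: a dictionary-style stand-in for an "encapsulated" page block
# that carries its own resources, geometry, and logical role, in the spirit of the
# EPDF blocks described above.  Field names are assumptions.
epdf_block = {
    "Type": "EPDFBlock",
    "BBox": [0, 0, 226, 112],          # geometric extent of the block (points)
    "Resources": {                      # resources the block needs to render
        "Font": {"F1": "Helvetica"},
        "XObject": {},
    },
    "Logical": {"Role": "Figure", "Label": "fig:example-block"},
    "Content": b"BT /F1 12 Tf 10 90 Td (Hello from a block) Tj ET",
}

def stacked_height(blocks, gap=12):
    """Naive layout step a block formatter might perform: stack blocks
    vertically and report the total height they would occupy."""
    return sum(b["BBox"][3] - b["BBox"][1] for b in blocks) + gap * (len(blocks) - 1)

print(stacked_height([epdf_block, epdf_block]))  # 236 points for two stacked copies
```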
Abstract:
(OOP) is the use of design patterns (DPs). A DP is a characteristic arrangement of classes that provides a proven solution while producing reusable and understandable code. Many DPs have been defined, including 24 by the GoF [12], and several others have appeared since. The DP concept is abstract, which can lead to differing interpretations. These differences can also cause poor implementations that reduce the benefits of using the pattern. This project consists of designing a tool that facilitates the use of DPs. The tool Génération et Restructuration de Patrons de Conception (GRPC) supports the automatic generation of a design pattern skeleton as well as the restructuring of existing code by transforming it into a structure that respects a DP. Automatic generation and restructuring yield uniform, high-quality code that respects the design pattern, thereby improving code comprehension and maintenance. GRPC is an extension module (plug-in) for the Eclipse development environment, written in Java. Its code is designed to be easily understandable and extensible. The two main objectives of GRPC are to refactor a section of code toward the architecture of a design pattern and to generate design pattern skeletons. A graphical interface guides the user, collects all the information the tool needs, and allows the elements of the design pattern to be configured. To ensure that a restructuring is possible, each pattern is associated with one or more rules that analyse the code to detect the presence of a particular structure. Procedures help developers add new DPs to GRPC. GRPC provides functionality for implementing several of the OOP design patterns defined in the book Design Patterns: Elements of Reusable Object-Oriented Software.
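As an illustration of the kind of skeleton such a generator might emit, here is a minimal sketch of the Observer pattern; GRPC itself is a Java plug-in for Eclipse, so Python is used here only to keep the example short, and the class names are illustrative.

```python
# Illustrative only: an Observer-pattern skeleton of the kind a pattern
# generator could produce.  Class and method names are placeholders.
class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        # push the event to every registered observer
        for observer in self._observers:
            observer.update(event)

class Observer:
    def update(self, event):
        raise NotImplementedError  # concrete observers fill this in

class LoggingObserver(Observer):
    def update(self, event):
        print(f"received event: {event}")

subject = Subject()
subject.attach(LoggingObserver())
subject.notify("state changed")  # prints: received event: state changed
```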
Abstract:
Internship report presented for the degree of Master in Education and Multimedia Communication
Abstract:
Internship report presented for the degree of Master in Education and Multimedia Communication
Abstract:
An object-oriented model of the cardiovascular control system under dialysis conditions has been developed, applying an electrical analogy in which components are connected through interconnections. The model represents the differential equations of the cardiovascular system and of the baroreceptor control system, as well as the dynamic equations of fluid and solute exchange in the hemodialyzer. Using this model, simulation experiments were carried out under normal conditions and under hemorrhage, blood transfusion, and ultrafiltration and fluid infusion during hemodialysis treatment. The results show, first, the effectiveness of the baroreceptor system in compensating for the arterial hypotension induced by hemorrhage and blood transfusion episodes. Second, they show the response of the control system to different ultrafiltration rates during hemodialysis, and optimal values are suggested for adequate operation.
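As a hedged illustration of the electrical analogy described above, the sketch below integrates a single compliance-resistance compartment (a two-element Windkessel-style model); the parameter values and the constant inflow are placeholders, not taken from the study.

```python
# Illustrative only: an arterial compartment modeled as a capacitor
# (compliance C) drained through a resistor (peripheral resistance R).
C = 1.0      # compliance (mL/mmHg)
R = 1.0      # peripheral resistance (mmHg*s/mL)
P = 80.0     # arterial pressure (mmHg), initial value
Q_in = 90.0  # constant inflow from the heart (mL/s)
dt = 0.01    # integration step (s)

for _ in range(1000):
    dP = (Q_in - P / R) / C   # dP/dt = (Q_in - Q_out) / C, with Q_out = P / R
    P += dP * dt              # forward Euler step

print(f"steady-state pressure ~ {P:.1f} mmHg")  # approaches Q_in * R = 90
```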
Abstract:
Secure Multi-party Computation (MPC) enables a set of parties to collaboratively compute, using cryptographic protocols, a function over their private data in a way that the participants do not see each other's data; they only see the final output. Typical MPC examples include statistical computations over joint private data, private set intersection, and auctions. While these applications are examples of monolithic MPC, richer MPC applications move between "normal" (i.e., per-party local) and "secure" (i.e., joint, multi-party secure) modes repeatedly, resulting overall in mixed-mode computations. For example, we might use MPC to implement the role of the dealer in a game of mental poker -- the game will be divided into rounds of local decision-making (e.g. bidding) and joint interaction (e.g. dealing). Mixed-mode computations are also used to improve performance over monolithic secure computations. Starting with the Fairplay project, several MPC frameworks have been proposed in the last decade to help programmers write MPC applications in a high-level language, while the toolchain manages the low-level details. However, these frameworks are either not expressive enough to allow writing mixed-mode applications or lack formal specification and reasoning capabilities, thereby diminishing the parties' trust in such tools and the programs written using them. Furthermore, none of the frameworks provides a verified toolchain to run the MPC programs, leaving open the potential for security holes that can compromise the privacy of parties' data. This dissertation presents language-based techniques to make MPC more practical and trustworthy. First, it presents the design and implementation of a new MPC Domain Specific Language, called Wysteria, for writing rich mixed-mode MPC applications. Wysteria provides several benefits over previous languages, including a conceptual single thread of control, generic support for more than two parties, high-level abstractions for secret shares, and a fully formalized type system and operational semantics. Using Wysteria, we have implemented several MPC applications, including, for the first time, a card dealing application. The dissertation next presents Wys*, an embedding of Wysteria in F*, a full-featured verification-oriented programming language. Wys* improves on Wysteria along three lines: (a) it enables programmers to formally verify the correctness and security properties of their programs -- as far as we know, Wys* is the first language to provide verification capabilities for MPC programs; (b) it provides a partially verified toolchain to run MPC programs; and (c) it enables MPC programs to use, with no extra effort, standard language constructs from the host language F*, thereby making it more usable and scalable. Finally, the dissertation develops static analyses that help optimize monolithic MPC programs into mixed-mode MPC programs, while providing privacy guarantees similar to those of the monolithic versions.
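To make the distinction between "normal" and "secure" modes concrete, here is a minimal sketch of additive secret sharing, a basic building block behind secure joint computation; it is plain Python for illustration only and does not use Wysteria or Wys* syntax.

```python
import random

# Illustrative only: additive secret sharing over a prime field.  Each party
# splits its private input into shares ("normal" mode ends there); the sum is
# then computed on shares without revealing the inputs ("secure" mode).
P = 2**31 - 1  # field modulus

def share(secret, n_parties=3):
    """Split a secret into n additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

alice_shares = share(42)   # Alice's private input, locally shared
bob_shares = share(58)     # Bob's private input, locally shared

# Joint phase: each share holder adds the shares it received; no single
# party ever sees 42 or 58 in the clear.
sum_shares = [(a + b) % P for a, b in zip(alice_shares, bob_shares)]
print(reconstruct(sum_shares))  # 100
```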
Abstract:
Agent-oriented programming (AOP) is a programming paradigm that conceives of software as a set of agents that possess autonomy and proactivity and that are able to communicate with other agents. Although it has mainly been employed in the field of artificial intelligence, this type of programming proves useful for the development of distributed systems, since it can handle concurrency problems with agility. The aim of this thesis is to analyse the characteristics of the paradigm and of agent-based software, using as a case study Sarl, a very recent general-purpose language. The main part of the work consists of describing the theoretical models that led to the birth of agent programming, in particular the BDI model, and the principal frameworks for the development of multi-agent systems.
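To illustrate the BDI model mentioned above, the following is a minimal sketch of a belief-desire-intention deliberation loop; it is written in plain Python rather than Sarl, and the thermostat scenario and names are illustrative.

```python
# Illustrative only: a toy BDI (belief-desire-intention) agent that perceives
# its environment, deliberates over desires given its beliefs, and acts on
# the intentions it has committed to.
class BDIAgent:
    def __init__(self):
        self.beliefs = {"temperature": 15}
        self.desires = ["keep_room_warm"]
        self.intentions = []

    def perceive(self, temperature):
        self.beliefs["temperature"] = temperature

    def deliberate(self):
        # turn desires into concrete intentions based on current beliefs
        if "keep_room_warm" in self.desires and self.beliefs["temperature"] < 18:
            self.intentions.append("turn_on_heater")

    def act(self):
        while self.intentions:
            print("executing:", self.intentions.pop())

agent = BDIAgent()
agent.perceive(16)
agent.deliberate()
agent.act()  # executing: turn_on_heater
```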
Abstract:
Thesis (Ph.D., Education) -- Queen's University, 2016-09-22
Abstract:
Thesis (Ph.D., Computing) -- Queen's University, 2016-09-30
Abstract:
Texture has good discriminating potential that complements that of radiometric parameters in the image classification process. The multiband Compact Texture Unit (CTU) index, recently developed by Safia and He (2014), makes it possible to extract texture over several bands at once, and thus to exploit additional information ignored until now in traditional textural analyses: the interdependence between bands. However, this new tool has not yet been tested on multisource images, a use that may prove of great interest when one considers, for example, all the textural richness that radar can add to optical data when the two are combined. This study therefore completes the validation initiated by Safia (2014) by applying the CTU to an optical-radar image pair. The textural analysis of this data set produced a "colour texture" image. The texture bands thus created were combined with the initial optical bands before being integrated into a land-cover classification process in eCognition. The same classification procedure (but without the CTU) was applied respectively to the optical data, then the radar data, and finally the optical-radar combination. In addition, the CTU generated from the optical data alone (monosource) was compared with that derived from the optical-radar pair (multisource). Analysing the separating power of these different bands using histograms, together with the confusion matrix tool, allows the performance of the different configurations and parameters to be compared. These comparisons show the CTU, and in particular the multisource CTU, to be the most discriminating criterion; its presence adds variability to the image, allowing a sharper segmentation and a classification that is both more detailed and more accurate. Indeed, accuracy rises from 0.5 with the optical image to 0.74 with the CTU image, while confusion decreases from 0.30 (optical) to 0.02 (CTU).
Abstract:
Above ground biomass is frequently estimated with forest inventory data and an extrapolation method for the per unit area evaluations. This procedure is labour demanding and costly. In this study, above ground biomass functions whose independent variable is crown horizontal projection were developed. A multi-resolution segmentation method and object-oriented classification, based on very high spatial resolution satellite images, were used to obtain the area of tree crown horizontal projection for umbrella pine (Pinus pinea L.). A set of inventory plots was measured, and above ground biomass per tree and per plot was calculated with existing allometric functions for this species. The two data sets were used to fit linear functions both for individual plots and for their cumulative values. The results show a good performance of the models. Errors smaller than 10% are obtained for stand areas greater than 1.4 ha. These functions have the advantage of estimating above ground biomass for the whole area under study or surveillance without requiring a forest inventory; they allow monitoring over short time periods; and they are easily implemented in a geographical information system environment.
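The kind of linear function described above can be sketched as follows; the crown-projection and biomass numbers are synthetic placeholders, not the study's data.

```python
import numpy as np

# Illustrative only: fitting a linear function of the form
#   above-ground biomass (Mg/plot) = a + b * crown horizontal projection (m2/plot)
# to synthetic plot-level data.
crown_projection = np.array([120.0, 340.0, 510.0, 780.0, 990.0])  # m2 per plot
biomass = np.array([1.9, 5.3, 7.8, 12.1, 15.0])                   # Mg per plot

slope, intercept = np.polyfit(crown_projection, biomass, 1)
print(f"AGB ~ {intercept:.2f} + {slope:.4f} * crown_area")
```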
Abstract:
Forest biomass has been gaining importance in the world economy and in the evaluation of forest development and monitoring. It has been identified as a global strategic reserve, due to its applications in bioenergy, bioproduct development and issues related to reducing greenhouse gas emissions. The estimation of above ground biomass is frequently done with allometric functions per species using plot inventory data. An adequate sampling design and intensity for a given error threshold are required, and the estimation per unit area is done using an extrapolation method. This procedure is labour demanding and costly. The main goal of this study is the development of allometric functions for the estimation of above ground biomass with ground cover as the independent variable, for forest areas of holm oak (Quercus rotundifolia), cork oak (Quercus suber) and umbrella pine (Pinus pinea) in multiple use systems. Ground cover per species was derived from crown horizontal projection obtained by processing high resolution satellite images, orthorectified and geometrically and atmospherically corrected, with a multi-resolution segmentation method and object-oriented classification. Forest inventory data were used to estimate plot above ground biomass with published allometric functions at tree level. The functions were fitted for monospecies stands and for multispecies stands of Quercus rotundifolia and Quercus suber, and of Quercus suber and Pinus pinea. Stand composition was taken into account by adding dummy variables to distinguish monospecies from multispecies stands. The models showed a good performance. Notably, the dummy variables, which reflect the differences between species, improved the models. Significant differences were found between above ground biomass estimates obtained with and without the dummy variables. An error threshold of 10% corresponds to stand areas of about 40 ha. This method enables the evaluation of the overall area without requiring extrapolation procedures, for the three species, which frequently occur in multispecies stands.
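To illustrate the dummy-variable approach described above, here is a minimal sketch of a least-squares fit with a composition dummy; the data are synthetic placeholders and the variable names are assumptions.

```python
import numpy as np

# Illustrative only: regressing above-ground biomass on ground cover with a
# dummy variable for stand composition (0 = monospecies, 1 = multispecies).
ground_cover = np.array([10.0, 25.0, 40.0, 15.0, 30.0, 55.0])  # % per plot
multispecies = np.array([0, 0, 0, 1, 1, 1])                    # composition dummy
biomass = np.array([4.1, 9.8, 15.5, 5.2, 10.9, 19.3])          # Mg/ha

# Design matrix: intercept, ground cover, and the dummy, which shifts the
# intercept for multispecies stands.
X = np.column_stack([np.ones_like(ground_cover), ground_cover, multispecies])
coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)
print("intercept, cover slope, multispecies shift:", np.round(coef, 3))
```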
Abstract:
This paper presents a tool called Petcha that acts as an automated Teaching Assistant in computer programming courses. The ultimate objective of Petcha is to increase the number of programming exercises effectively solved by students. Petcha meets this objective by helping both teachers to author programming exercises and students to solve them. It also coordinates a network of heterogeneous systems, integrating automatic program evaluators, learning management systems, learning object repositories and integrated programming environments. This paper presents the concept and the design of Petcha and sets this tool in a service-oriented architecture for managing learning processes based on the automatic evaluation of programming exercises. The paper also presents a case study that validates the use of Petcha and of the proposed architecture.
Abstract:
Genetic Programming (GP) is a widely used methodology for solving various computational problems. GP's problem-solving ability is usually hindered by its long execution times. In this thesis, GP is applied toward real-time computer vision. In particular, object classification and tracking using a parallel GP system are discussed. First, a study of suitable GP languages for object classification is presented. Two main GP approaches for visual pattern classification, namely block-classifiers and pixel-classifiers, were studied. Results showed that the pixel-classifiers generally performed better. Using these results, a suitable language was selected for the real-time implementation. Synthetic video data was used in the experiments. The goal of the experiments was to evolve a unique classifier for each texture pattern that existed in the video. The experiments revealed that the system was capable of correctly tracking the textures in the video. The performance of the system was on par with real-time requirements.
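To show the general shape of a pixel-classifier as described above, here is a minimal sketch in which a stand-in expression reads a 3x3 neighbourhood around a pixel and outputs a class; the expression and image are synthetic placeholders, not an evolved GP individual.

```python
import random

# Illustrative only: a pixel-classifier reads intensities from a small
# neighbourhood and returns a score whose sign decides the class.
def classifier(window):
    # stand-in for an evolved expression tree over neighbourhood pixels
    return window[1][1] - 0.5 * (window[0][0] + window[2][2])

image = [[random.random() for _ in range(8)] for _ in range(8)]

def classify_pixel(img, r, c):
    window = [row[c - 1:c + 2] for row in img[r - 1:r + 2]]  # 3x3 neighbourhood
    return 1 if classifier(window) > 0 else 0

# Classify every interior pixel of the synthetic image.
labels = [[classify_pixel(image, r, c) for c in range(1, 7)] for r in range(1, 7)]
print(labels[0])
```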