968 results for "Execution semantics"
Abstract:
[EN] Programming software to control robotic systems, in order to build working systems that perform adequately according to their design requirements, remains a task that requires significant development effort. Currently, there are no clear paradigms for programming robotic systems, and the programming techniques in common use today are not adequate for dealing with the complexity associated with these systems. The work presented in this document describes a programming tool, specifically a framework, that should be considered a first step towards a tool for dealing with the complexity present in robotic systems. In this framework the software that controls a system is viewed as a dynamic network of units of execution interconnected by means of data paths. Each of these units of execution, called a component, is a port automaton which provides a given functionality, hidden behind an external interface that clearly specifies which data it needs and which data it produces. Components, once defined and built, may be instantiated, integrated, and reused as many times as needed in other systems. The framework provides the infrastructure necessary to support this component concept and the intercommunication between components by means of data paths (port connections), which can be established and de-established dynamically. Moreover, considering that the more robust the components of a system are, the more robust the system is, the framework also provides the infrastructure needed to control and monitor the components that make up a system at any given instant.
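The component model described above, units of execution exchanging data through dynamically connectable ports, might look roughly like the following minimal Python sketch. All names here are hypothetical illustrations, not the framework's actual API:

```python
import queue
import threading

class Port:
    """A buffered data path between components; connections are dynamic."""
    def __init__(self):
        self._q = queue.Queue()
        self.connected = False

    def connect(self):      # establish the data path at run time
        self.connected = True

    def disconnect(self):   # de-establish it at run time
        self.connected = False

    def write(self, item):
        if self.connected:
            self._q.put(item)

    def read(self, timeout=1.0):
        return self._q.get(timeout=timeout)

class Component(threading.Thread):
    """A unit of execution: consumes from an input port, produces to an output port."""
    def __init__(self, inp: Port, out: Port):
        super().__init__(daemon=True)
        self.inp, self.out, self.running = inp, out, True

    def step(self, data):
        """The functionality hidden behind the external interface."""
        raise NotImplementedError

    def run(self):
        while self.running:
            try:
                self.out.write(self.step(self.inp.read()))
            except queue.Empty:
                pass
```

A concrete component would subclass `Component` and implement `step`; ports can be connected and disconnected while the system runs, which is what makes the network of components dynamic.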
Abstract:
[EN] The accuracy and performance of current variational optical flow methods have increased considerably in recent years. The complexity of these techniques is high and care has to be taken in their implementation. The aim of this work is to present a comprehensible implementation of recent variational optical flow methods. We start with an energy model that relies on brightness and gradient constancy terms and a flow-based smoothness term. We minimize this energy model and derive an efficient implicit numerical scheme. In the experimental results, we evaluate the accuracy and performance of this implementation on the Middlebury benchmark database. We show that it is a competitive solution with respect to current methods in the literature. In order to increase performance, we use a simple strategy to parallelize the execution on multi-core processors.
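For reference, an energy model of the kind the abstract refers to (brightness constancy, gradient constancy, and a flow-driven smoothness term) is usually written in the variational literature in a form like the following, where w = (u, v) is the flow field, γ weights the gradient constancy term, α weights the smoothness term, and Ψ is a robust penalizer such as Ψ(s²) = √(s² + ε²):

```latex
E(u,v) = \int_{\Omega} \Psi\left( |I(\mathbf{x}+\mathbf{w}) - I(\mathbf{x})|^2
       + \gamma\, |\nabla I(\mathbf{x}+\mathbf{w}) - \nabla I(\mathbf{x})|^2 \right)
       + \alpha\, \Psi\left( |\nabla u|^2 + |\nabla v|^2 \right) \, d\mathbf{x}
```

Minimizing such a functional via its Euler-Lagrange equations is what leads to the implicit numerical scheme mentioned in the abstract.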
Abstract:
[ES] The goal of this work is to parametrize, implement the data structures, and program the applications needed to enable the exchange of information between two software environments, SAP R/3 and Knapp, each a leader in its field. Applying these changes will allow the organization not only to centralize its information in the ERP, but also to improve its business processes and speed up decision-making by managers. A study of the current situation is carried out and, after a detailed analysis, a solution is proposed that meets the stated objectives. Once the proposal has been designed, presented, and approved, SAP R/3 is parametrized, the IDoc segments and types are defined, and the functions and programs that process the information sent by Knapp are coded. When these tasks are complete, test data sets for the business processes are prepared and executed in a test environment, in collaboration with the key users, to verify the soundness of the implemented solution. The results are analysed and any deficiencies are corrected. Finally, all the changes are transported to the production system and the correct execution of the organization's business processes is verified.
Abstract:
[ES] The Stroop Effect Detector (SED) is an assistive software tool, developed through the Social Technological Development research programme of the Universidad de Las Palmas de Gran Canaria, that helps professionals in the neuropsychology field identify problems in an individual's orbitofrontal cortex, using the technique devised by Schenker in 1998. As a methodological basis, it draws on the knowledge acquired in the subjects of the adaptation course for the degree in Computer Engineering, such as Software Management, Software Architecture, and User Interface Development, as well as knowledge previously acquired in Programming and in Software Engineering I and II. Since computer science knowledge alone was not enough for this project, I carried out research on the problem, gathering information from other scientific documents on the subject and consulting professionals in the field, such as Dr. Ayoze Nauzet González Hernández, neurologist at the Doctor Negrín hospital in Las Palmas de Gran Canaria, and the psychologist José Manuel Rodríguez Pellejero, who discussed this problem in a class of the Master's in Teacher Training that I am currently taking. This work presents the Stroop test with Schenker's two versions: RCN (Reading Color Names) and NCW (Naming Colored Words). As a general rule, both tests present the subjects under study with words (names of colors) written in ink of a different color. In RCN, the subject reads the written word while ignoring the color of its ink and trying not to be influenced by it. Conversely, NCW requires naming the color of the ink in which the word is written, without being influenced by the fact that the word itself is the name of a color.
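As an illustration of the two test variants (a hypothetical sketch, not SED's actual code), stimuli and their expected answers can be modelled like this:

```python
import random

COLORS = ["red", "green", "blue", "yellow"]

def make_stimuli(n, congruent=False):
    """Each stimulus is a (word, ink) pair: a color name printed in some ink color."""
    stimuli = []
    for _ in range(n):
        word = random.choice(COLORS)
        ink = word if congruent else random.choice([c for c in COLORS if c != word])
        stimuli.append((word, ink))
    return stimuli

def correct_answer(stimulus, variant):
    word, ink = stimulus
    if variant == "RCN":   # Reading Color Names: read the word, ignore the ink
        return word
    if variant == "NCW":   # Naming Colored Words: name the ink, ignore the word
        return ink
    raise ValueError(variant)
```

The Stroop effect shows up as longer response times and more errors on the incongruent stimuli, especially in the NCW condition.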
Abstract:
[ES] IPOL is a scientific journal on digital image processing and various image analysis methods. Each publication includes a demo where anyone can try out, via the web, the method described in that publication. In this way, the method can be used without any programming knowledge and without having to install it on one's own computer. The goal of this final degree project is to develop an application that allows the demos to be run from a mobile device, thereby making the execution of image-processing algorithms more accessible and increasing their scientific dissemination.
Abstract:
[ES] The Functional Mockup Interface (FMI) standard is an open standard, independent of any application or tool, that makes it possible to share models of dynamic systems between applications. FMI defines a common interface (API) that enables the distribution and interoperability of simulations. A simulation can thus be transformed into an executable format for distribution with a publicly known interface. In this standard, a simulation is packaged in a file format called a Functional Mock-up Unit (FMU). Executing a complex simulation involving many FMUs may be infeasible on a single computer because of the amount of resources it consumes.
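As a sketch of how an FMU is consumed through the standard interface, here using the open-source FMPy library and a hypothetical FMU file name:

```python
from fmpy import read_model_description, simulate_fmu

fmu = "Pendulum.fmu"   # hypothetical FMU file

# The model description is part of the FMU package and exposes the
# public interface: model name, variables, causality, default experiment.
md = read_model_description(fmu)
print(md.modelName, [v.name for v in md.modelVariables][:5])

# Run the simulation through the standard FMI calling sequence.
result = simulate_fmu(fmu, start_time=0.0, stop_time=10.0)
print(result.dtype.names)   # 'time' plus the model outputs
```

Because the FMU exposes only this public interface, a master algorithm can orchestrate many FMUs, possibly distributed across several machines when a single computer does not have enough resources.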
Abstract:
Doctoral programme: Ingeniería de Telecomunicación Avanzada
Abstract:
Matita (which means "pencil" in Italian) is a new interactive theorem prover under development at the University of Bologna. Compared with state-of-the-art proof assistants, Matita presents both traditional and innovative aspects. The underlying calculus of the system, namely the Calculus of (Co)Inductive Constructions (CIC for short), is well known and is used as the basis of another mainstream proof assistant, Coq, with which Matita is to some extent compatible. In the same spirit as several other systems, proof authoring is conducted by the user as a goal-directed proof search, using a script to store the textual commands for the system. In the tradition of LCF, the proof language of Matita is procedural and relies on tactics and tacticals to proceed toward proof completion. The interaction paradigm offered to the user is based on the script management technique at the basis of the popularity of the Proof General generic interface for interactive theorem provers: while editing a script the user can move the execution point forward to deliver commands to the system, or backward to retract (or "undo") past commands. Matita has been developed from scratch over the past 8 years by several members of the Helm research group, of whom the author of this thesis is one. Matita is now a full-fledged proof assistant with a library of about 1,000 concepts. Several innovative solutions spun off from this development effort. This thesis is about the design and implementation of some of those solutions, in particular those relevant to user interaction with theorem provers, to which the author of this thesis was a major contributor. Joint work with other members of the research group is pointed out where needed. The main topics discussed in this thesis are briefly summarized below.

Disambiguation. Most activities connected with interactive proving require the user to input mathematical formulae. Since mathematical notation is ambiguous, parsing formulae typeset as mathematicians like to write them down on paper is a challenging task, and one neglected by several theorem provers, which usually prefer to fix an unambiguous input syntax. Exploiting features of the underlying calculus, Matita offers an efficient disambiguation engine which permits typing formulae in the familiar mathematical notation.

Step-by-step tacticals. Tacticals are higher-order constructs used in proof scripts to combine tactics. With tacticals, scripts can be made shorter, more readable, and more resilient to changes. Unfortunately, they are de facto incompatible with state-of-the-art user interfaces based on script management: such interfaces do not permit positioning the execution point inside complex tacticals, thus introducing a trade-off between the usefulness of structured scripts and a tedious big-step execution behavior during script replaying. In Matita we break this trade-off with tinycals: an alternative to a subset of LCF tacticals which can be evaluated in a more fine-grained manner.

Extensible yet meaningful notation. Proof assistant users often need to create new mathematical notation to ease the use of new concepts. The framework used in Matita for dealing with extensible notation both accounts for high-quality bidimensional rendering of formulae (with the expressivity of MathML Presentation) and provides meaningful notation, where presentational fragments are kept synchronized with the semantic representation of terms. Using our approach, interoperability with other systems can be achieved at the content level, and direct manipulation of formulae acting on their rendered forms is possible too.

Publish/subscribe hints. Automation plays an important role in interactive proving, as users like to delegate tedious proving sub-tasks to decision procedures or external reasoners. Exploiting the Web-friendliness of Matita, we experimented with a broker and a network of web services (called tutors) which independently try to complete open sub-goals of the proof currently being authored in Matita. The user receives hints from the tutors on how to complete sub-goals and can interactively or automatically apply them to the current proof.

Another innovative aspect of Matita, only marginally touched by this thesis, is the embedded content-based search engine Whelp, which is exploited to various ends, from automatic theorem proving to avoiding duplicate work for the user. We also discuss the (potential) reusability in other systems of the widgets presented in this thesis and how we envisage the evolution of user interfaces for interactive theorem provers in the Web 2.0 era.
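The tinycals idea of fine-grained evaluation can be conveyed with a toy model (purely illustrative, not Matita's implementation): if a tactic maps a goal to a list of subgoals, an LCF-style sequencing tactical runs as one atomic block, whereas a small-step variant exposes every intermediate proof state so the execution point can sit between the atoms.

```python
# A toy proof state: a list of open goals (strings).
# A tactic maps one goal to a list of subgoals (empty list = goal closed).

def then(*tactics):
    """Big-step LCF-style sequencing: the composite runs atomically, so a
    script-management UI can only place the execution point before or after it."""
    def composite(goal):
        goals = [goal]
        for tac in tactics:
            goals = [g2 for g in goals for g2 in tac(g)]
        return goals
    return composite

def step_through(goal, tactics):
    """Tinycal-like fine-grained evaluation: yield every intermediate state."""
    goals = [goal]
    for tac in tactics:
        goals = [g2 for g in goals for g2 in tac(g)]
        yield list(goals)

# Example with dummy tactics:
split = lambda g: [g + ".1", g + ".2"]   # splits a goal in two
close = lambda g: []                     # closes any goal

print(then(split, close)("G"))           # [] in one atomic step
for state in step_through("G", [split, close]):
    print(state)                         # ['G.1', 'G.2'], then []
```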
Abstract:
[ES] New demand-side management studies require new approaches in which the electrical grid is analysed as a complex system. Such systems consist of a large number of strongly interlinked entities. The challenge addressed here is adding the ability to interact with a simulation of a complex system at run time. But how can a complex system be represented so that it is easily manageable by a person? And how can a simple way of altering the simulation be offered? From this idea the Simulation Gateway Interface was born: a framework that makes simulations accessible through a graphical interface.
Abstract:
The knee joint is a key structure of the human locomotor system. Knowledge of how each anatomical structure of the knee contributes to determining its physiological function is of fundamental importance for the development of new prostheses and novel clinical, surgical, and rehabilitative procedures. In this context, a modelling approach is necessary to estimate the biomechanical function of each anatomical structure during daily living activities. The main aim of this study was to obtain a subject-specific model of the knee joint of a selected healthy subject. In particular, 3D models of the cruciate ligaments and of the tibio-femoral articular contact were proposed and developed using accurate bony geometries and kinematics reliably recorded by means of nuclear magnetic resonance and 3D video-fluoroscopy of the selected subject. Regarding the model of the cruciate ligaments, each ligament was modelled with 25 linear-elastic elements, paying particular attention to the anatomical twisting of the fibres. The devised model was as subject-specific as possible: the geometrical parameters were directly estimated from the experimental measurements, whereas the only mechanical parameter of the model, the elastic modulus, had to be taken from the literature because of the invasiveness of the required measurements. The developed model was then employed for simulations of stability tests and of daily living activities, and physiologically meaningful results were always obtained. Nevertheless, the lack of subject-specific mechanical characterization led us to design and partially develop a novel experimental method to characterize the mechanics of the human cruciate ligaments in living healthy subjects. Moreover, using the same subject-specific data, the tibio-femoral articular interaction was modelled, investigating the location of the contact point during the execution of daily motor tasks and the contact area at full extension, with and without the subject's whole body weight. Two different approaches were implemented and their efficiency was evaluated; the pros and cons of each approach were discussed in order to suggest future improvements of these methodologies. The final results of this study will contribute useful methodologies for the investigation of the in-vivo function and pathology of the knee joint during the execution of daily living activities. The developed methodologies will thus be useful tools for the development of new prostheses, tools, and procedures, both in research and in diagnostic, surgical, and rehabilitative fields.
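A minimal sketch of the kind of fibre model described, with each ligament represented by tension-only linear-elastic elements between femoral and tibial insertion points (names, structure, and values are hypothetical):

```python
import numpy as np

def fibre_force(insertion_femur, insertion_tibia, rest_length, k):
    """Tension-only linear-elastic element between two insertion points."""
    d = insertion_tibia - insertion_femur
    length = np.linalg.norm(d)
    strain = length - rest_length
    if strain <= 0.0:                   # fibres carry no load in compression
        return np.zeros(3)
    return k * strain * (d / length)    # force directed along the fibre

def ligament_force(femur_pts, tibia_pts, rest_lengths, k):
    """Total force of a cruciate ligament modelled as 25 fibres."""
    return sum(fibre_force(f, t, l0, k)
               for f, t, l0 in zip(femur_pts, tibia_pts, rest_lengths))
```

Here the insertion points and rest lengths play the role of the geometrical parameters estimated from the measurements, while the stiffness k stands in for the elastic modulus that had to be taken from the literature.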
Abstract:
This thesis investigates two aspects of Constraint Handling Rules (CHR): it proposes a compositional semantics and a technique for program transformation. CHR is a concurrent committed-choice constraint logic programming language consisting of guarded rules, which transform multisets of atomic formulas (constraints) into simpler ones until exhaustion [Frü06]; it belongs to the family of declarative languages. It was initially designed for writing constraint solvers, but it has recently also proven to be a general-purpose language, since it is Turing equivalent [SSD05a].

Compositionality is the first CHR aspect considered. A trace-based compositional semantics for CHR was previously defined in [DGM05]. The reference operational semantics for that compositional model was the original operational semantics of CHR, which, due to the propagation rule, admits trivial non-termination. In this thesis we extend the work of [DGM05] by introducing a more refined trace-based compositional semantics which also includes the history. The use of a history is a well-known technique in CHR which makes it possible to track the application of propagation rules and consequently to avoid trivial non-termination [Abd97, DSGdlBH04]. Naturally, the reference operational semantics of our new compositional one also uses the history to avoid trivial non-termination.

Program transformation is the second CHR aspect considered, with particular regard to the unfolding technique. This technique is an appealing approach that allows us to optimize a given program, more specifically to improve its run-time efficiency or space consumption. Essentially, it consists of a sequence of syntactic program manipulations which preserve a kind of semantic equivalence, called qualified answers [Frü98], between the original program and the transformed ones. Unfolding is one of the basic operations used by most program transformation systems: it consists of replacing a procedure call by its definition. In CHR, every conjunction of constraints can be considered a procedure call, every CHR rule can be considered a procedure, and the body of the rule represents the definition of the call. While there is a large body of literature on transformation and unfolding of sequential programs, very few papers have addressed this issue for concurrent languages. We define an unfolding rule, show its correctness, and discuss some conditions under which it can be used to delete an unfolded rule while preserving the meaning of the original program. Finally, maintenance of confluence and termination between the original and transformed programs is shown.

This thesis is organized as follows. Chapter 1 gives some general notions about CHR. Section 1.1 outlines the history of programming languages, with particular attention to CHR and related languages. Section 1.2 then introduces CHR using examples. Section 1.3 gives some preliminaries which will be used throughout the thesis. Subsequently, Section 1.4 introduces the syntax and the operational and declarative semantics of the first CHR language proposed. Finally, the methodologies to solve the problem of trivial non-termination related to propagation rules are discussed in Section 1.5. Chapter 2 introduces a compositional semantics for CHR in which the propagation rules are considered. In particular, Section 2.1 contains the definition of the semantics, Section 2.2 presents the compositionality results, and Section 2.3 expounds the correctness results. Chapter 3 presents a particular program transformation known as unfolding. This transformation needs a particular syntax, called annotated syntax, which is introduced in Section 3.1; its related modified operational semantics ω′t is presented in Section 3.2. Subsequently, Section 3.3 defines the unfolding rule and proves its correctness. Then, Section 3.4 discusses the problems related to the replacement of a rule by its unfolded version, which in turn gives a correctness condition that holds for a specific class of rules. Section 3.5 proves that confluence and termination are preserved by the program modifications introduced. Finally, Chapter 4 concludes by discussing related work and directions for future work.
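To give the flavour of unfolding in this setting, here is a deliberately simplified sketch (ground constraints, no guards, simplification rules only) in which a "call" occurring in a rule body is replaced by the body of the rule defining it:

```python
# A rule is (head, body): multisets of constraints, here lists of strings.
# Unfolding replaces an occurrence of `defining`'s head inside `target`'s
# body with `defining`'s body: a procedure call replaced by its definition.

def unfold(target, defining):
    t_head, t_body = target
    d_head, d_body = defining
    body = list(t_body)
    for c in d_head:            # the whole head must occur in the body
        if c not in body:
            return target       # not applicable: leave the rule unchanged
    for c in d_head:
        body.remove(c)
    return (t_head, body + list(d_body))

# Example: r2 "calls" leq_trans; r1 defines it.
r1 = (["leq_trans"], ["leq"])        # leq_trans <=> leq
r2 = (["goal"], ["leq_trans", "x"])  # goal <=> leq_trans, x
print(unfold(r2, r1))                # (['goal'], ['x', 'leq'])
```

The thesis's actual unfolding rule handles guards, propagation rules, and non-ground constraints; this sketch only shows the replace-call-by-definition idea.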
Abstract:
Interactive theorem provers (ITPs for short) are tools whose final aim is to certify proofs written by human beings. To reach that objective they have to fill the gap between the high-level language used by humans for communicating and reasoning about mathematics and the lower-level language that a machine is able to "understand" and process. The user perceives this gap in terms of missing features or inefficiencies. The developer tries to accommodate the user's requests without increasing the already high complexity of these applications. We believe that satisfactory solutions can only come from a strong synergy between users and developers. We devoted most of our Ph.D. to designing and developing the Matita interactive theorem prover. The software was born in the computer science department of the University of Bologna as the result of composing together all the technologies developed by the HELM team (to which we belong) for the MoWGLI project. The MoWGLI project aimed at giving accessibility through the web to the libraries of formalised mathematics of various interactive theorem provers, taking Coq as the main test case. The motivations for giving life to a new ITP are: • to study the architecture of these tools, with the aim of understanding the source of their complexity; • to exploit such knowledge to experiment with new solutions that, for backward compatibility reasons, would be hard (if not impossible) to test on a widely used system like Coq. Matita is based on the Curry-Howard isomorphism, adopting the Calculus of Inductive Constructions (CIC) as its logical foundation. Proof objects are thus, to some extent, compatible with the ones produced with the Coq ITP, which is itself able to import and process the ones generated using Matita. Although the systems have a lot in common, they share no code at all, and even most of the algorithmic solutions are different. The thesis is composed of two parts in which we respectively describe our experience as a user and as a developer of interactive provers. In particular, the first part is based on two different formalisation experiences: • our internship in the Mathematical Components team (INRIA), which is formalising the finite group theory required to attack the Feit-Thompson Theorem. To tackle this result, giving an effective classification of finite groups of odd order, the team adopts the SSReflect Coq extension, developed by Georges Gonthier for the proof of the four colours theorem; • our collaboration on the D.A.M.A. project, whose goal is the formalisation of abstract measure theory in Matita, leading to a constructive proof of Lebesgue's Dominated Convergence Theorem. The most notable issues we faced, analysed in this part of the thesis, are the following: the difficulties arising when using "black box" automation in large formalisations; the impossibility for a user (especially a newcomer) to master the context of a library of already formalised results; the uncomfortable big-step execution of proof commands historically adopted in ITPs; and the difficult encoding of mathematical structures with a notion of inheritance in a type theory without subtyping, like CIC. In the second part of the manuscript many of these issues are analysed through the lens of an ITP developer, describing the solutions we adopted in the implementation of Matita to solve them: integrated searching facilities to assist the user in handling large libraries of formalised results; a small-step execution semantics for proof commands; a flexible implementation of coercive subtyping allowing multiple inheritance with shared substructures; and automatic tactics, integrated with the searching facilities, that generate proof commands (and not only proof objects, usually kept hidden from the user), one of which is specifically designed to be user driven.
Abstract:
The dynamicity and heterogeneity that characterize pervasive environments raise new challenges in the design of mobile middleware. Pervasive environments exhibit a significant degree of heterogeneity, variability, and dynamicity that conventional middleware solutions are not able to manage adequately. Originally designed for use in a relatively static context, such middleware systems tend to hide low-level details to provide applications with a transparent view of the underlying execution platform. In mobile environments, however, the context is extremely dynamic and cannot be managed by a priori assumptions. Novel middleware should therefore support mobile computing applications in the task of adapting their behavior to frequent changes in the execution context; that is, it should become context-aware. In particular, this thesis has identified the following key requirements that existing context-aware middleware solutions do not yet fulfil. (i) Middleware solutions should support interoperability between possibly unknown entities by providing expressive representation models that make it possible to describe interacting entities, their operating conditions, and the surrounding world, i.e., their context, according to an unambiguous semantics. (ii) Middleware solutions should support distributed applications in the task of reconfiguring and adapting their behavior/results to ongoing context changes. (iii) Context-aware middleware support should be deployable on heterogeneous devices under variable operating conditions, such as different user needs, application requirements, available connectivity and device computational capabilities, as well as changing environmental conditions. Our main claim is that the adoption of semantic metadata to represent context information and context-dependent adaptation strategies makes it possible to build context-aware middleware suitable for all dynamically available portable devices. Semantic metadata provide powerful knowledge representation means to model even complex context information, and support automated reasoning to infer additional and/or more complex knowledge from the available context data. In addition, we suggest that, by adopting proper configuration and deployment strategies, semantic support features can be provided to different users and devices according to their specific needs and current context. This thesis has investigated novel design guidelines and implementation options for semantic-based context-aware middleware solutions targeted at pervasive environments. These guidelines have been applied to different application areas within pervasive computing that would particularly benefit from the exploitation of context. Common to all the applications is the key role of context in enabling mobile users to personalize applications based on their needs and current situation. The main contributions of this thesis are (i) the definition of a metadata model to represent and reason about context, (ii) the definition of a model for the design and development of context-aware middleware based on semantic metadata, (iii) the design of three novel middleware architectures and the development of a prototype implementation for each of them, and (iv) the proposal of a viable approach to the portability issues raised by the adoption of semantic support services in pervasive applications.
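A toy illustration of the claim that semantic context metadata support automated reasoning (forward chaining over context triples until a fixpoint; the vocabulary is hypothetical):

```python
# Context facts as (subject, predicate, object) triples.
facts = {
    ("alice_phone", "locatedIn", "meeting_room"),
    ("meeting_room", "partOf", "office_building"),
}

def infer(facts):
    """Forward chaining with one toy rule: locatedIn is transitive over partOf."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (s, p, o) in list(derived):
            if p == "locatedIn":
                for (s2, p2, o2) in list(derived):
                    if p2 == "partOf" and s2 == o:
                        new = (s, "locatedIn", o2)
                        if new not in derived:
                            derived.add(new)
                            changed = True
    return derived

# The middleware can now answer a query no raw sensor reported directly:
print(("alice_phone", "locatedIn", "office_building") in infer(facts))  # True
```

In the thesis this role is played by proper semantic technologies with unambiguous semantics rather than ad hoc rules; the sketch only shows how new context knowledge can be derived from existing metadata.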
Abstract:
This thesis deals with Visual Servoing and its closely connected disciplines: projective geometry, image processing, robotics, and non-linear control. More specifically, the work addresses the problem of controlling a robotic manipulator through one of the most widely used Visual Servoing techniques: Image Based Visual Servoing (IBVS). In Image Based Visual Servoing the robot is driven by an on-line feedback control loop that is closed directly in the 2D space of the camera sensor. The work considers the case of a monocular system with a single camera mounted on the robot end effector (eye-in-hand configuration). Through IBVS the system can be positioned with respect to a fixed 3D target by minimizing the differences between its initial view and its goal view, corresponding respectively to the initial and goal system configurations: the robot's Cartesian motion is thus generated only by means of visual information. However, executing a positioning control task by IBVS is not straightforward, because singularity problems may occur and local minima may be reached where the reached image is very close to the target one but the 3D positioning task is far from being fulfilled; this happens in particular for large camera displacements, when the initial and goal target views are noticeably different. To overcome the singularity and local-minima drawbacks while maintaining the good robustness properties of IBVS with respect to modeling and camera calibration errors, suitable image path planning can be exploited. This work deals with the problem of generating suitable image-plane trajectories for the tracked points of the servoing control scheme (a trajectory consists of a path plus a time law). The generated image-plane paths must be feasible, i.e., they must be compliant with the rigid-body motion of the camera with respect to the object, so as to avoid image Jacobian singularities and local-minima problems. In addition, the planned image trajectories must generate camera velocity screws which are smooth and within the allowed bounds of the robot. We show that a scaled 3D motion planning algorithm can be devised to generate feasible image-plane trajectories. Since the paths in the image are generated off-line, it is also possible to tune the planning parameters so as to keep the target inside the camera field of view even if, in some unfortunate cases, the feature target points would otherwise leave the camera image due to the 3D robot motion. To test the validity of the proposed approach, both experimental and simulation results are reported, also taking into account the influence of noise on the path planning strategy. The experiments were carried out with a 6-DOF anthropomorphic manipulator with a FireWire camera installed on its end effector: the results demonstrate the good performance and feasibility of the proposed approach.
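For reference, the classical IBVS loop the abstract builds on computes the camera velocity screw from the image-feature error through the pseudo-inverse of the interaction matrix. A minimal numpy sketch, assuming the interaction (image Jacobian) matrix L is available:

```python
import numpy as np

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Classical IBVS control law: v = -lambda * pinv(L) * (s - s*).

    s, s_star : current and desired image-feature vectors, shape (2N,)
    L         : interaction (image Jacobian) matrix, shape (2N, 6)
    returns   : 6D camera velocity screw (v_x, v_y, v_z, w_x, w_y, w_z)
    """
    error = s - s_star
    return -lam * np.linalg.pinv(L) @ error
```

The singularities and local minima discussed above arise precisely when L loses rank or when pinv(L) maps a non-zero image error to a (near) zero camera velocity, which is what the planned image-plane trajectories are designed to avoid.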
Abstract:
[EN] A new parallel algorithm for simultaneous untangling and smoothing of tetrahedral meshes is proposed in this paper. We provide a detailed analysis of its performance on shared-memory many-core computer architectures. This performance analysis includes the evaluation of execution time, parallel scalability, load balancing, and parallelism bottlenecks. Additionally, we compare the impact of three previously published graph coloring procedures on the performance of our parallel algorithm. We use six benchmark meshes with a wide range of sizes. Using these experimental data sets, we describe the behavior of the parallel algorithm for different data sizes. We demonstrate that this algorithm is highly scalable when it runs on two different high-performance many-core computers with up to 128 processors...
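The role of graph coloring in such parallel mesh algorithms can be sketched as follows: vertices with the same color share no edge, so all vertices of one color class can be processed concurrently without data races. A toy sketch, with greedy coloring and simple Laplacian smoothing standing in for the paper's actual untangling/smoothing kernel:

```python
import numpy as np

def greedy_coloring(adjacency):
    """adjacency: dict mapping each vertex to the set of its neighbours."""
    color = {}
    for v in adjacency:
        used = {color[n] for n in adjacency[v] if n in color}
        color[v] = next(c for c in range(len(adjacency)) if c not in used)
    return color

def smooth_by_color(coords, adjacency, color, rounds=10):
    """Vertices in one color class share no edge, so each class could be
    distributed across threads; here the classes are processed in turn."""
    classes = {}
    for v, c in color.items():
        classes.setdefault(c, []).append(v)
    for _ in range(rounds):
        for cls in classes.values():
            # every vertex in `cls` could be handled by a separate thread
            for v in cls:
                nbrs = list(adjacency[v])
                if nbrs:
                    coords[v] = np.mean([coords[n] for n in nbrs], axis=0)
    return coords
```

Because the coloring fixes which vertices may be moved simultaneously, the choice of coloring procedure directly affects load balancing and scalability, which is why the paper compares three of them.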