928 results for Iterative Implementation Model
Abstract:
This thesis deals with the problem of efficiently tracking 3D objects in sequences of images. We tackle the efficient 3D tracking problem by using direct image registration. This problem is posed as an iterative optimization procedure that minimizes a brightness error norm. We review the most popular iterative methods for image registration in the literature, turning our attention to those algorithms that use efficient optimization techniques. Two types of efficient registration algorithms are investigated. The first type comprises the additive registration algorithms: these algorithms incrementally compute the motion parameters by linearly approximating the brightness error function. We centre our attention on Hager and Belhumeur’s factorization-based algorithm for image registration. We propose a fundamental requirement that factorization-based algorithms must satisfy to guarantee good convergence, and introduce a systematic procedure that automatically computes the factorization. Finally, we also introduce two warp functions, for registering rigid and nonrigid 3D targets, that satisfy this requirement. The second type comprises the compositional registration algorithms, where the brightness error function is written using function composition. We study the current approaches to compositional image alignment, and we emphasize the importance of the Inverse Compositional method, which is known to be the most efficient image registration algorithm. We introduce a new algorithm, the Efficient Forward Compositional image registration: this algorithm avoids the need to invert the warping function, and provides a new interpretation of the working mechanisms of the inverse compositional alignment. Using this information, we propose two fundamental requirements that guarantee the convergence of compositional image registration methods. Finally, we support our claims with extensive experimental testing on synthetic and real-world data. We propose a distinction between image registration and tracking when using efficient algorithms. We show that, depending on whether the fundamental requirements hold, some efficient algorithms are suitable for image registration but not for tracking.
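For readers unfamiliar with the additive family of registration algorithms described above, the following is a minimal, self-contained sketch (not code from the thesis) of a Gauss-Newton additive update for a purely translational warp, written in Python with NumPy/SciPy; the synthetic pattern, tolerance and iteration count are illustrative choices only.

```python
import numpy as np
from scipy.ndimage import shift as warp_translate

def gauss_newton_translation(template, image, p=np.zeros(2), n_iters=20):
    """Additive registration sketch: warp the image with the current translation p,
    linearise the brightness error around p, and solve for an increment dp."""
    for _ in range(n_iters):
        warped = warp_translate(image, -p, order=1, mode="nearest")
        error = (warped - template).ravel()              # brightness error
        gy, gx = np.gradient(warped)                     # image gradients (rows, cols)
        J = np.stack([gy.ravel(), gx.ravel()], axis=1)   # Jacobian wrt (row shift, col shift)
        dp, *_ = np.linalg.lstsq(J, -error, rcond=None)  # Gauss-Newton normal equations
        p = p + dp                                       # additive parameter update
        if np.linalg.norm(dp) < 1e-4:
            break
    return p

# Toy usage: recover a known (2, 1) pixel shift of a smooth synthetic pattern.
yy, xx = np.mgrid[0:64, 0:64]
template = np.sin(xx / 6.0) + np.cos(yy / 9.0)
image = warp_translate(template, (2.0, 1.0), order=1, mode="nearest")
print(gauss_newton_translation(template, image))  # roughly (2.0, 1.0)
```

The factorization-based and compositional algorithms studied in the thesis reorganise exactly this kind of iteration so that the costly Jacobian-related terms can be precomputed.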
Abstract:
This paper describes new approaches to improve the local and global approximation (matching) and modelling capability of the Takagi–Sugeno (T-S) fuzzy model. The main aim is to obtain high function-approximation accuracy and fast convergence. The main problem encountered is that the T-S identification method cannot be applied when the membership functions are overlapped by pairs. This restricts the application of the T-S method, because this type of membership function has been widely used over the last two decades in the stability analysis and controller design of fuzzy systems and is popular in industrial control applications. The approach developed here can be considered a generalized version of the T-S identification method with optimized performance in approximating nonlinear functions. We propose a noniterative method based on a weighting-of-parameters approach, and an iterative algorithm that applies the extended Kalman filter, built on the same parameter-weighting idea. We show that the Kalman filter is an effective tool for identifying the T-S fuzzy model. A fuzzy controller based on a linear quadratic regulator is proposed in order to show the effectiveness of the estimation method developed here in control applications. An illustrative example of an inverted pendulum is chosen to evaluate the robustness and performance of the proposed method, locally and globally, in comparison with the original T-S model. Simulation results indicate the potential, simplicity, and generality of the algorithm. We also prove that these algorithms converge very fast, making them very practical to use.
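As a point of reference for the T-S structure being identified, here is a minimal sketch (not the paper's algorithm) of how a single-input Takagi-Sugeno model blends local linear consequents through normalised membership functions; the Gaussian memberships and parameter values are hypothetical.

```python
import numpy as np

def ts_output(x, centers, widths, theta):
    """First-order Takagi-Sugeno evaluation (illustrative only): rule i fires with
    Gaussian membership w_i(x); the output is the membership-weighted average of
    the local linear consequents theta_i @ [1, x]."""
    w = np.exp(-0.5 * ((x - centers) / widths) ** 2)  # rule firing strengths
    w = w / w.sum()                                   # normalised memberships
    phi = np.array([1.0, x])                          # regressor [bias, x]
    local = theta @ phi                               # local linear outputs, one per rule
    return w @ local

centers = np.array([-1.0, 0.0, 1.0])                  # hypothetical rule centres
widths = np.array([0.7, 0.7, 0.7])
theta = np.array([[0.0, -1.0], [0.0, 0.5], [1.0, 2.0]])  # rows: consequent [a0_i, a1_i]
print(ts_output(0.3, centers, widths, theta))
```

The identification problem the paper addresses is estimating the consequent parameters (the rows of `theta` here) when neighbouring memberships overlap, either noniteratively through parameter weighting or iteratively with an extended Kalman filter.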
Abstract:
Despite the increasing presence of Semantic Web facilities, only a limited number of the resources available on the Internet provide semantic access. Recent initiatives such as the emerging Linked Data Web provide semantic access to existing data by porting existing resources to the Semantic Web using different technologies, such as database-to-semantic mapping and scraping. Nevertheless, existing scraping solutions are ad hoc, complemented with graphical interfaces for speeding up scraper development. This article proposes a generic framework for web scraping based on semantic technologies. The framework is structured in three levels: scraping services, the semantic scraping model and syntactic scraping. The first level provides an interface through which generic applications or intelligent agents can gather information from the web at a high level. The second level defines a semantic RDF model of the scraping process, in order to provide a declarative approach to the scraping task. Finally, the third level provides an implementation of the RDF scraping model for specific technologies. The work has been validated in a scenario that illustrates its application to mashup technologies.
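To make the idea of a declarative, RDF-described scraper more concrete, the sketch below builds a few triples with rdflib; the vocabulary (`ex:Scraper`, `ex:selector`, `ex:extracts`) is hypothetical and is not the framework's actual ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/scraping#")   # hypothetical vocabulary
g = Graph()
g.bind("ex", EX)

# Declaratively describe a scraper: which fragment to select and what it extracts.
g.add((EX.newsScraper, RDF.type, EX.Scraper))
g.add((EX.newsScraper, EX.selector, Literal("div.headline > a")))
g.add((EX.newsScraper, EX.extracts, EX.NewsTitle))

print(g.serialize(format="turtle"))
```

A syntactic-scraping layer would then read such a description and execute it with a concrete extraction technology, which is the role the third level of the framework plays.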
Abstract:
A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing for fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a fast 2D implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction as compared to conventional 2D approaches based on rebinning schemes. At the same time, it demonstrates that fully 3D methodologies can be efficiently applied to the image reconstruction problem for high-resolution rotational PET cameras by applying accurate pre-calculated system models and taking advantage of the system's symmetries.
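As background for the reconstruction scheme described above, the following is a minimal OSEM sketch (not the authors' implementation) that uses a random SciPy sparse matrix as a stand-in for the Monte Carlo precalculated system matrix; subset choice, sizes and the toy phantom are arbitrary, and none of the symmetry handling is reproduced.

```python
import numpy as np
from scipy.sparse import random as sparse_random

def osem(y, A, subsets, n_iters=5, eps=1e-12):
    """Ordered-subsets EM sketch: A is a sparse system matrix (LORs x voxels),
    y the measured counts, `subsets` a list of row-index arrays partitioning the LORs."""
    x = np.ones(A.shape[1])
    for _ in range(n_iters):
        for rows in subsets:
            As = A[rows]                                     # sub-matrix for this subset
            sens = np.asarray(As.sum(axis=0)).ravel() + eps  # subset sensitivity A_s^T 1
            fp = As @ x + eps                                # forward projection
            ratio = y[rows] / fp                             # measured / estimated counts
            x = x * (As.T @ ratio) / sens                    # multiplicative EM update
    return x

# Toy usage with a random sparse system matrix and a known activity map.
rng = np.random.default_rng(0)
A = sparse_random(200, 50, density=0.1, random_state=0, format="csr")
x_true = rng.uniform(0.5, 2.0, 50)
y = A @ x_true
subsets = np.array_split(np.arange(200), 4)
x_hat = osem(y, A, subsets)
```

In a real scanner geometry the matrix rows correspond to lines of response and would be assembled from the precomputed symmetric blocks, but the multiplicative update itself has this same form.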
Abstract:
In this paper, the hardware implementation of an inner hair cell model is presented. The main features of the design are the use of Meddis’ transduction structure and the Design with Reusability methodology, which allows future migration to new hardware and design refinements for speech processing and custom-made hearing aids.
Abstract:
A modified shared circuits model is presented in this work. The shared circuits model approach to sociocognitive capacities, recently proposed by Hurley in “The shared circuits model (SCM): how control, mirroring, and simulation can enable imitation, deliberation, and mindreading”, Behavioral and Brain Sciences 31(1) (2008) 1–22, is enriched and improved here. A five-layer computational architecture for designing artificial cognitive control systems is proposed on the basis of a modified shared circuits model for emulating sociocognitive experiences such as imitation, deliberation, and mindreading. In order to show the potential of this approach, a simplified implementation is applied to a case study: an artificial cognitive control system is used to control force in a manufacturing process, which demonstrates the suitability of the suggested approach.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by a higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies will be transferred to (and even magnified in) the annotations of the high-level annotation tool.
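The following toy sketch (purely illustrative, unrelated to any specific tagger) shows how a wrong POS tag from a low-level tool propagates into the sense annotation produced by a higher-level tool; the lexicon and sense inventory are invented for the example.

```python
# Hypothetical, minimal pipeline: a naive POS tagger feeding a sense tagger.
POS_LEXICON = {"book": "NOUN", "a": "DET", "flight": "NOUN"}   # invented, incomplete
SENSES = {("book", "NOUN"): "book#printed_work",
          ("book", "VERB"): "book#make_a_reservation"}

def pos_tag(tokens):
    # Naive lookup tagger: always assigns each known word its most frequent POS.
    return [(t, POS_LEXICON.get(t, "NOUN")) for t in tokens]

def sense_tag(tagged):
    # The sense tagger trusts the POS tag it receives from the lower level.
    return [(t, pos, SENSES.get((t, pos), None)) for t, pos in tagged]

print(sense_tag(pos_tag(["book", "a", "flight"])))
# Here 'book' is really a VERB; the wrong POS yields the wrong sense downstream.
```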
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied so far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
This paper describes a model of persistence in (C)LP languages and two different and practically very useful ways to implement this model in current systems. The fundamental idea is that persistence is a characteristic of certain dynamic predicates (i.e., those which encapsulate state). The main effect of declaring a predicate persistent is that the dynamic changes made to such predicates persist from one execution to the next one. After proposing a syntax for declaring persistent predicates, a simple, file-based implementation of the concept is presented and some examples shown. An additional implementation is presented which stores persistent predicates in an external database. The abstraction of the concept of persistence from its implementation allows developing applications which can store their persistent predicates alternatively in files or databases with only a few simple changes to a declaration stating the location and modality used for persistent storage. The paper presents the model, the implementation approach in both the cases of using files and relational databases, a number of optimizations of the process (using information obtained from static global analysis and goal clustering), and performance results from an implementation of these ideas.
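The persistence model itself is defined for (C)LP languages; as a rough, language-neutral analogy (not the paper's syntax or implementation), the Python sketch below mimics a file-backed dynamic predicate whose asserted facts survive from one execution to the next. The file name and API are invented for the illustration.

```python
import json, os

class PersistentFacts:
    """Analogy for a persistent dynamic predicate: asserts/retracts are
    transparently saved to a file, so changes persist across executions."""
    def __init__(self, path):
        self.path = path
        self.facts = []
        if os.path.exists(path):
            with open(path) as f:
                self.facts = [tuple(t) for t in json.load(f)]

    def assertz(self, fact):      # add a fact and persist it
        self.facts.append(tuple(fact))
        self._save()

    def retract(self, fact):      # remove a fact and persist the change
        self.facts.remove(tuple(fact))
        self._save()

    def _save(self):
        with open(self.path, "w") as f:
            json.dump(self.facts, f)

counter = PersistentFacts("counter_facts.json")
counter.assertz(("count", len(counter.facts) + 1))
print(counter.facts)   # grows on every run, like a persistent dynamic predicate
```

In the paper's model the same effect is obtained declaratively, by marking a dynamic predicate as persistent and choosing files or an external database as the storage modality.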
Abstract:
Recent research into the implementation of logic programming languages has demonstrated that global program analysis can be used to speed up execution by an order of magnitude. However, currently such global program analysis requires the program to be analysed as a whole: separate compilation of modules is not supported. We describe and empirically evaluate a simple model for extending global program analysis to support separate compilation of modules. Importantly, our model supports context-sensitive program analysis and multi-variant specialization of procedures in the modules.
Abstract:
Andorra-I is the first implementation of a language based on the Andorra Principle, which states that determinate goals can (and should) be run before other goals, and even in a parallel fashion. This principle has materialized in a framework called the Basic Andorra model, which allows or-parallelism as well as (dependent) and-parallelism for determinate goals. In this report we show that it is possible to further extend this model in order to allow general independent and-parallelism for nondeterminate goals, without greatly modifying the underlying implementation machinery. A simple and easy way to realize such an extension is to make each (nondeterminate) independent goal determinate, by using a special "bagof" construct. We also show that this can be achieved automatically by compile-time translation from original Prolog programs. A transformation that fulfils this objective and which can be easily automated is presented in this report.
Abstract:
We argue that in order to exploit both Independent And- and Or-parallelism in Prolog programs there is an advantage in recomputing some of the independent goals, as opposed to reusing all their solutions. We present an abstract model, called the Composition-Tree, for representing and-or parallelism in Prolog programs. The Composition-Tree closely mirrors sequential Prolog execution by recomputing some independent goals rather than fully reusing them. We also outline two environment representation techniques for and-or parallel execution of full Prolog based on the Composition-Tree model abstraction. We argue that these techniques have advantages over earlier proposals for exploiting and-or parallelism in Prolog.
Abstract:
Minimally invasive surgery (MIS) techniques are nowadays becoming consolidated as an alternative to traditional surgery, owing to their numerous benefits for patients. This paradigm shift implies that surgeons must learn a set of skills different from those required in open surgery. Training and assessment of these skills has become one of the main concerns in surgical training programmes, largely due to the pressure of a society that demands well-prepared surgeons and a reduction in the number of medical errors. Special attention is therefore being paid to the definition of new programmes that allow the training and assessment of psychomotor skills in safe environments before new surgeons operate on real patients. To this end, hospitals and training centres are gradually incorporating training facilities where residents can practise and learn without risk. It is increasingly common for these laboratories to have virtual simulators or physical simulators capable of registering the movements of each resident's instruments. These simulators offer a wide variety of training and assessment tasks, as well as the possibility of obtaining objective information on the exercises. The various validation studies carried out attest to their usefulness; nevertheless, the levels of evidence reported are in many cases insufficient. More importantly, there is no clear consensus on which metrics are most useful for characterising surgical skill. The objective of this doctoral thesis is to design and validate a conceptual framework for the definition and validation of environments for skills assessment in MIS, based on a three-stage model: pedagogical (tasks and metrics to employ), technological (metric acquisition technologies) and analytical (interpretation of competence based on the metrics). To this end, the practical implementation of an environment is described, based on (1) an instrument tracking system founded on the analysis of the laparoscopic video; and (2) the determination of skill based on instrument motion metrics. For the pedagogical stage, a set of tasks for the assessment of basic psychomotor skills was designed and implemented, together with a series of motion metrics. The construct validation carried out on them showed good results for time, path length, depth, average speed, average acceleration, economy of area and economy of volume. Additionally, the results obtained in the face validation were generally positive in all the groups considered (novices, residents, experts). For the technological stage, the EVA Tracking System was introduced, a solution for tracking surgical instruments based on the analysis of the endoscopic video. The accuracy of the system was assessed at 16.33 pp RMS for 2D tracking of the tool in the image, and at 13 mm RMS for its spatial tracking. Construct validation with one of the assessment tasks showed good results for time, path length, depth, average speed, average acceleration, economy of area and economy of volume. Concurrent validation with the TrEndo® Tracking System, in turn, showed high correlation values for 8 of the 9 metrics analysed.
Finally, for the analytical stage, the behaviour of three supervised classifiers was compared for automatically determining surgical skill from instrument motion information, based on linear (linear discriminant analysis, LDA), non-linear (support vector machines, SVM) and fuzzy (adaptive neuro-fuzzy inference systems, ANFIS) approaches. The results show that, on average, SVM performs slightly better: 78.2% versus the 71% and 71.7% obtained by ANFIS and LDA respectively. However, the statistical differences measured between the three were not shown to be significant. Overall, this doctoral thesis corroborates the postulated research hypotheses concerning the definition of skills assessment systems for minimally invasive surgery, the usefulness of video analysis as a source of information, and the importance of instrument motion information for characterising surgical skill. Building on these foundations, new research fields should open up that contribute to the definition of structured and objective training programmes, able to guarantee the accreditation of well-prepared surgeons and to promote patient safety in the operating room.
Abstract
Minimally invasive surgery (MIS) techniques have become a standard in many surgical sub-specialties, due to their many benefits for patients. However, this shift in paradigm implies that surgeons must acquire a completely different set of skills than those normally attributed to open surgery. Training and assessment of these skills has become a major concern in surgical learning programmes, especially considering the social demand for better-prepared professionals and for the decrease of medical errors. Therefore, much effort is being put into the definition of structured MIS learning programmes, where practice with real patients in the operating room (OR) can be delayed until the resident can attest to a minimum level of psychomotor competence. To this end, skills laboratory settings are being introduced in hospitals and training centres where residents may practice and be assessed on their psychomotor skills. Technological advances in the field of tracking technologies and virtual reality (VR) have enabled the creation of new learning systems such as VR simulators or enhanced box trainers. These systems offer a wide range of tasks, as well as the capability of registering objective data on the trainees' performance. Validation studies give proof of their usefulness; however, the levels of evidence reported are in many cases low. More importantly, there is still no clear consensus on topics such as the optimal metrics that must be used to assess competence, the validity of VR simulation, the portability of tracking technologies into real surgeries (for advanced assessment) or the degree to which the skills measured and obtained in laboratory environments transfer to the OR. The purpose of this PhD is to design and validate a conceptual framework for the definition and validation of MIS assessment environments, based on a three-pillared model defining three main stages: pedagogical (tasks and metrics to employ), technological (metric acquisition technologies) and analytical (interpretation of competence based on metrics).
To this end, a practical implementation of the framework is presented, focused on (1) a video-based instrument tracking system and (2) the determination of surgical competence based on the laparoscopic instruments' motion-related data. The pedagogical stage resulted in the design and implementation of a set of basic tasks for MIS psychomotor skills assessment, as well as the definition of motion analysis parameters (MAPs) to measure performance on those tasks. Validation yielded good construct results for parameters such as time, path length, depth, average speed, average acceleration, economy of area and economy of volume. Additionally, face validation showed positive acceptance from the experts, residents and novices. For the technological stage, the EVA Tracking System is introduced. EVA provides a solution for tracking laparoscopic instruments from the analysis of the monoscopic video image. Accuracy tests for the system are presented, which yielded an average RMSE of 16.33 pp for 2D tracking of the instrument on the image and of 13 mm for 3D spatial tracking. A validation experiment was conducted using one of the tasks and the most relevant MAPs. Construct validation showed significant differences for time, path length, depth, average speed, average acceleration, economy of area and economy of volume, especially between novices and residents/experts. More importantly, concurrent validation with the TrEndo® Tracking System presented high correlation values (>0.7) for 8 of the 9 MAPs proposed. Finally, the analytical stage allowed comparing the performance of three different supervised classification strategies for determining surgical competence from motion-related information. The three classifiers were based on linear (linear discriminant analysis, LDA), non-linear (support vector machines, SVM) and fuzzy (adaptive neuro-fuzzy inference systems, ANFIS) approaches. Results for SVM show slightly better performance than the other two classifiers: on average, accuracy for LDA, SVM and ANFIS was 71.7%, 78.2% and 71% respectively. However, no statistically significant differences were found among the three. Overall, this PhD corroborates the research hypotheses investigated regarding the definition of MIS assessment systems, the use of endoscopic video analysis as the main source of information and the relevance of motion analysis in the determination of surgical competence. New research fields in the training and assessment of MIS surgeons can be proposed on these foundations, in order to contribute to the definition of structured and objective learning programmes that guarantee the accreditation of well-prepared professionals and the promotion of patient safety in the OR.
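To illustrate the analytical stage in code (a sketch over invented data, not the thesis's datasets, MAP definitions or ANFIS classifier), the snippet below computes a few simplified motion parameters from synthetic instrument trajectories and cross-validates LDA against an RBF SVM with scikit-learn.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def motion_parameters(traj, dt=0.04):
    """A few illustrative motion analysis parameters from an (N, 3) instrument
    trajectory sampled every dt seconds (simplified, not EVA's exact definitions)."""
    steps = np.diff(traj, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()
    time = dt * (len(traj) - 1)
    depth = traj[:, 2].max() - traj[:, 2].min()
    avg_speed = path_length / time
    return np.array([time, path_length, depth, avg_speed])

# Synthetic stand-in data: smoother trajectories for "experts", noisier for "novices".
rng = np.random.default_rng(1)
def trajectory(noise):
    t = np.linspace(0, 1, 200)[:, None]
    return np.hstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), t]) + rng.normal(0, noise, (200, 3))

X = np.array([motion_parameters(trajectory(0.01)) for _ in range(20)]
             + [motion_parameters(trajectory(0.08)) for _ in range(20)])
y = np.array([1] * 20 + [0] * 20)   # 1 = expert-like, 0 = novice-like

for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("SVM", SVC(kernel="rbf"))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```

An ANFIS model would require a separate neuro-fuzzy library and is omitted from this sketch.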
Abstract:
There have been several previous proposals for the integration of object-oriented programming features into logic programming, resulting in a substantial body of supporting theory and several language proposals. However, none of these proposals seems to have made it into the mainstream. Perhaps one of the reasons for this is that the resulting languages depart too much from standard logic programming languages to entice the average Prolog programmer. Another reason may be that most of what can be done with object-oriented programming can already be done in Prolog through the meta- and higher-order programming facilities that the language includes, albeit sometimes in a more cumbersome way. In light of this, in this paper we propose an alternative solution which is driven by two main objectives. The first is to include only those characteristics of object-oriented programming which are cumbersome to implement in standard Prolog systems. The second is to do this in such a way that there is minimal impact on the syntax and complexity of the language, i.e., to introduce the minimum number of new constructs, declarations, and concepts to be learned. Finally, we would like the implementation to be as straightforward as possible, ideally based on simple source-to-source expansions.
Abstract:
A mathematical model for finite strain elastoplastic consolidation of fully saturated soil media is implemented into a finite element program. The algorithmic treatment of finite strain elastoplasticity for the solid phase is based on multiplicative decomposition and is coupled with the algorithm for fluid flow via the Kirchhoff pore water pressure. A two-field mixed finite element formulation is employed in which the nodal solid displacements and the nodal pore water pressures are coupled via the linear momentum and mass balance equations. The constitutive model for the solid phase is represented by modified Cam-Clay theory formulated in the Kirchhoff principal stress space, and return mapping is carried out in the strain space defined by the invariants of the elastic logarithmic principal stretches. The constitutive model for fluid flow is represented by a generalized Darcy's law formulated with respect to the current configuration. The finite element model is fully amenable to exact linearization. Numerical examples with and without finite deformation effects are presented to demonstrate the impact of geometric nonlinearity on the predicted responses. The paper concludes with an assessment of the performance of the finite element consolidation model with respect to accuracy and numerical stability.
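For orientation, the following LaTeX fragment sketches the kind of coupled equations such a two-field (u, p) consolidation formulation discretises: the multiplicative elastoplastic split together with the linear momentum and mass balance equations with Darcy flow. Signs and symbols follow common textbook conventions and are written in Cauchy form for brevity, whereas the paper works with the Kirchhoff stress and Kirchhoff pore water pressure.

```latex
% Sketch of the coupled u--p consolidation equations (textbook form, not the paper's notation)
\begin{align}
  \mathbf{F} &= \mathbf{F}^{e}\,\mathbf{F}^{p}
    && \text{multiplicative elastoplastic split of the deformation gradient} \\
  \nabla\!\cdot\!\left(\boldsymbol{\sigma}' - p\,\mathbf{1}\right) + \rho\,\mathbf{g} &= \mathbf{0}
    && \text{linear momentum balance (effective stress and pore pressure)} \\
  \dot{\varepsilon}_{v} + \nabla\!\cdot\!\mathbf{v}_{w} &= 0, \qquad
  \mathbf{v}_{w} = -\frac{k}{\gamma_{w}}\left(\nabla p - \rho_{w}\,\mathbf{g}\right)
    && \text{mass balance with generalized Darcy's law}
\end{align}
```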