33 results for Computer arithmetic and logic units
Abstract:
We present a parallel graph narrowing machine, which is used to implement a functional logic language on a shared memory multiprocessor. It is an extension of an abstract machine for a purely functional language. The result is a programmed graph reduction machine which integrates the mechanisms of unification, backtracking, and independent and-parallelism. In the machine, the subexpressions of an expression can run in parallel. In the case of backtracking, the structure of an expression is used to avoid the reevaluation of subexpressions as far as possible. Deterministic computations are detected. Their results are maintained and need not be reevaluated after backtracking.
Abstract:
This paper presents a technique for achieving a class of optimizations related to the reduction of checks within cycles. The technique uses both Program Transformation and Abstract Interpretation. After a first pass of an abstract interpreter which detects simple invariants, program transformation is used to build a hypothetical situation that simplifies some predicates that should be executed within the cycle. This transformation implements the heuristic hypothesis that once conditional tests hold they may continue doing so recursively. Specialized versions of predicates are generated to detect and exploit those cases in which the invariance may hold. Abstract interpretation is then used again to verify the truth of such hypotheses and confirm the proposed simplification. This allows optimizations that go beyond those possible with only one pass of the abstract interpreter over the original program, as is normally the case. It also allows selective program specialization using a standard abstract interpreter not specifically designed for this purpose, thus simplifying the design of this already complex module of the compiler. In the paper, a class of programs amenable to such optimization is presented, along with some examples and an evaluation of the proposed techniques in some application areas such as floundering detection and reducing run-time tests in automatic logic program parallelization. The analysis of the examples presented has been performed automatically by an implementation of the technique using existing abstract interpretation and program transformation tools.
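Although the abstract targets logic programs, the underlying heuristic can be sketched in a language-neutral way. Below is a minimal Python illustration (the function names and the type check are invented for the example, not taken from the paper): a generic version repeats a test inside the recursion, while a specialized version verifies the invariant once and then runs check-free, falling back to the generic code when the hypothesis does not hold.

```python
# Illustrative sketch only (Python, not the logic programs the paper targets):
# once a run-time check succeeds it is assumed to keep succeeding on recursive
# calls, so a specialized, check-free version can be used inside the "cycle".

def sum_list_checked(xs):
    """Generic version: re-validates every element on every recursive call."""
    if not xs:
        return 0
    head, tail = xs[0], xs[1:]
    if not isinstance(head, (int, float)):   # run-time check inside the cycle
        raise TypeError("non-numeric element")
    return head + sum_list_checked(tail)

def _sum_unchecked(xs):
    """Specialized version: no per-element checks inside the loop."""
    return sum(xs)

def sum_list_specialized(xs):
    """Entry point: verify the invariant once, then dispatch to the fast version."""
    if all(isinstance(x, (int, float)) for x in xs):
        return _sum_unchecked(xs)
    return sum_list_checked(xs)   # fall back to the generic, checking version

if __name__ == "__main__":
    print(sum_list_specialized([1, 2, 3.5]))   # fast path: checks hoisted out of the cycle
    try:
        sum_list_checked([1, "a"])
    except TypeError as exc:
        print("generic path still detects the error:", exc)
```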
Abstract:
Most implementations of parallel logic programming rely on complex low-level machinery which is arguably difficult to implement and modify. We explore an alternative approach aimed at taming that complexity by raising core parts of the implementation to the source language level for the particular case of and-parallelism. Therefore, we handle a significant portion of the parallel implementation mechanism at the Prolog level with the help of a comparatively small number of concurrency-related primitives which take care of lower-level tasks such as locking, thread management, stack set management, etc. The approach does not eliminate altogether modifications to the abstract machine, but it does greatly simplify them and it also facilitates experimenting with different alternatives. We show how this approach allows implementing both restricted and unrestricted (i.e., non-fork-join) parallelism. Preliminary experiments show that the amount of performance sacrificed is reasonable, although granularity control is required in some cases. Also, we observe that the availability of unrestricted parallelism contributes to better observed speedups.
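As a rough, language-neutral illustration of the restricted (fork-join) case, the sketch below uses a Python thread pool; it is not the Ciao source-level machinery described in the abstract, and the two goals are invented stand-ins for independent subgoals of a conjunction.

```python
# Rough analogue (Python threads, not Prolog) of restricted fork-join
# and-parallelism: two independent goals run on separate workers and are joined
# before the continuation. The locking, thread and stack-set management the
# paper keeps at the source level is reduced here to a thread pool.

from concurrent.futures import ThreadPoolExecutor

def goal_a(n):
    return sum(range(n))

def goal_b(n):
    return max(range(n))

def parallel_conjunction(n):
    with ThreadPoolExecutor(max_workers=2) as pool:
        fa = pool.submit(goal_a, n)          # fork
        fb = pool.submit(goal_b, n)
        return fa.result() + fb.result()     # join, then continue sequentially

print(parallel_conjunction(1_000_000))
```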
Abstract:
Certain aspects of functional programming provide syntactic convenience, such as having a designated implicit output argument, which allows function call nesting and sometimes results in more compact code. Functional programming also sometimes allows a more direct encoding of lazy evaluation, with its ability to deal with infinite data structures. We present a syntactic functional extension of Prolog covering function application, predefined evaluable functors, functional definitions, quoting, and lazy evaluation. The extension is also composable with higher-order features. We also highlight the Ciao features which help implementation and present some data on the overhead of using lazy evaluation with respect to eager evaluation.
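The lazy-evaluation point is easiest to see with a small example. The sketch below uses Python generators rather than the Prolog/Ciao functional notation of the paper, but it shows the same idea: an infinite data structure from which only the demanded prefix is ever computed.

```python
from itertools import count, islice

def lazy_map(f, xs):
    """Apply f element by element, on demand."""
    for x in xs:
        yield f(x)          # computed only when the consumer asks for it

squares = lazy_map(lambda n: n * n, count(0))   # conceptually infinite list of squares
print(list(islice(squares, 10)))                # only the first 10 elements are computed
```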
Improving the compilation of Prolog to C using type and determinism information: Preliminary results
Abstract:
We describe the current status of and provide preliminary performance results for a compiler of Prolog to C. The compiler is novel in that it is designed to accept different kinds of high-level information (typically obtained via an analysis of the initial Prolog program and expressed in a standardized language of assertions) and use this information to optimize the resulting C code, which is then further processed by an off-the-shelf C compiler. The basic translation process used essentially mimics an unfolding of a C-coded bytecode emulator with respect to the particular bytecode corresponding to the Prolog program. Optimizations are then applied to this unfolded program. This is facilitated by a more flexible design of the bytecode instructions and their lower-level components. This approach allows reusing a sizable amount of the machinery of the bytecode emulator: ancillary pieces of C code, data definitions, memory management routines and areas, etc., as well as mixing bytecode-emulated code with natively compiled code in a relatively straightforward way. We report on the performance of programs compiled by the current version of the system, both with and without analysis information.
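To make the translation scheme more concrete, here is a toy sketch (Python, with an invented two-instruction bytecode, not the actual Ciao/Prolog instruction set or code generator) of what unfolding an emulator with respect to a fixed bytecode means: the run-time dispatch loop disappears and one straight-line C statement is emitted per instruction.

```python
# Toy illustration of the principle: specializing ("unfolding") a small
# emulator with respect to one fixed bytecode removes the dispatch loop.

def emulate(bytecode, acc=0):
    """Generic emulator: dispatches on every instruction at run time."""
    for op, arg in bytecode:
        if op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
    return acc

def unfold_to_c(bytecode):
    """Specializer: emits dispatch-free C code for this particular bytecode."""
    lines = ["int run(int acc) {"]
    for op, arg in bytecode:
        if op == "add":
            lines.append(f"    acc += {arg};")
        elif op == "mul":
            lines.append(f"    acc *= {arg};")
    lines.append("    return acc;")
    lines.append("}")
    return "\n".join(lines)

prog = [("add", 3), ("mul", 5), ("add", 1)]
print(emulate(prog))        # 16, via the generic emulator
print(unfold_to_c(prog))    # equivalent straight-line C, ready for a C compiler
```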
Abstract:
The Boundary Element Method is a powerful numerical technique well rooted in everyday engineering practice. This is shown by the inclusion of boundary element methods in the most important commercial computer packages and by the continuous publication of books written to explain the features of the method to beginners and practicing engineers. Our first paper in Computers & Structures on Boundary Elements was published in 1979 (C & S 10, pp. 351–362), so this Special Issue is for us not only the accomplishment of our obligation to show other colleagues the possibilities of a numerical technique in which we believe, but also the celebration of our particular silver jubilee with this Journal.
Abstract:
This paper describes a framework to combine tabling evaluation and constraint logic programming (TCLP). While this combination has been studied previously from a theoretical point of view and some implementations exist, they either suffer from a lack of efficiency, flexibility, or generality, or have inherent limitations with respect to the programs they can execute to completion (either with success or failure). Our framework addresses these issues directly, including the ability to check for answer/call entailment, which allows it to terminate in more cases than other approaches. The proposed framework is experimentally compared with existing solutions in order to provide evidence of the mentioned advantages.
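A toy sketch of the call-entailment idea follows (Python, with interval constraints standing in for general constraint domains; the data structures are invented for illustration and are not the paper's TCLP framework): a new tabled call whose constraint is entailed by an already-completed, more general call reuses and filters the stored answers instead of re-executing.

```python
# Toy tabling with call entailment over interval constraints.

table = {}   # tabled interval -> set of answers

def entails(new, old):
    """The new call's constraint is at least as restrictive as the tabled one."""
    return old[0] <= new[0] and new[1] <= old[1]

def solve(interval, generate):
    for old, answers in table.items():
        if entails(interval, old):
            lo, hi = interval
            return {a for a in answers if lo <= a <= hi}     # answer reuse
    answers = set(generate(interval))                         # first evaluation
    table[interval] = answers
    return answers

even = lambda iv: (x for x in range(iv[0], iv[1] + 1) if x % 2 == 0)
print(sorted(solve((0, 10), even)))   # computed and stored in the table
print(sorted(solve((2, 6), even)))    # entailed by (0, 10): answers are reused
```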
Abstract:
There is an increasing interest in the intersection of human-computer interaction and public policy. This day-long workshop will examine successes and challenges related to public policy and human-computer interaction, in order to provide a forum to create a baseline of examples and to start the process of writing a white paper on the topic.
Abstract:
Image segmentation is an important field in computer vision and one of its most active research areas, with applications in image understanding, object detection, face recognition, video surveillance and medical image processing. Image segmentation is a challenging problem in general, but especially in the biological and medical imaging fields, where the acquisition techniques usually produce cluttered and noisy images and near-perfect accuracy is required in many cases. In this thesis we first review and compare some standard techniques widely used for medical image segmentation. These techniques use pixel-wise classifiers and introduce weak pairwise regularization, which is insufficient in many cases. We study their difficulties in capturing high-level structural information about the objects to segment. This deficiency leads to many erroneous detections, ragged boundaries, incorrect topological configurations and wrong shapes. To deal with these problems, we propose a new high-level regularization method that learns shape and topological information from training data in a nonparametric way using higher-order potentials. Higher-order potentials are becoming increasingly popular in computer vision, but the exact representation of a general higher-order potential defined over many variables is computationally infeasible. We use a compact representation of the potentials based on a finite set of patterns learned from training data that, in turn, depends on the observations. Thanks to this representation, higher-order potentials can be converted into pairwise potentials with some added auxiliary variables and minimized with tree-reweighted message passing (TRW) and belief propagation (BP) techniques. Both synthetic and real experiments confirm that our model fixes the errors of weaker approaches. Even with high-level regularization, perfect accuracy is still unattainable, and manual editing of the segmentation results is necessary. Manual editing is tedious and cumbersome, and tools that assist the user are greatly appreciated. These tools need to be precise, but also fast enough to be used interactively. Active contours are a good solution: they are well suited to precise boundary detection and, instead of finding a global solution, they provide fine tuning of previously existing results. However, they require an implicit representation to deal with topological changes of the contour, and this leads to partial differential equations (PDEs) that are computationally costly to solve and may present numerical stability issues. We present a morphological approach to contour evolution based on a new curvature morphological operator valid for surfaces of any dimension. We approximate the numerical solution of the contour evolution PDE by the successive application of a set of morphological operators defined on a binary level set. These operators are very fast, do not suffer from numerical stability issues, and do not degrade the level set function, so there is no need to reinitialize it. Moreover, their implementation is much easier than their PDE counterparts, since they do not require the use of sophisticated numerical algorithms. From a theoretical point of view, we delve into the connections between differential and morphological operators and introduce novel results in this area. We validate the approach by providing a morphological implementation of geodesic active contours, active contours without edges, and turbopixels. In the experiments conducted, the morphological implementations converge to solutions equivalent to those achieved by traditional numerical solutions, but with significant gains in simplicity, speed, and stability.
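As an illustration of the morphological approach (a simplified sketch, not the thesis implementation; the 3x3 line structuring elements and the alternation schedule are assumptions made for the example), the following Python/NumPy code applies sup-inf and inf-sup compositions to a binary level set to smooth a contour without solving a PDE.

```python
import numpy as np
from scipy import ndimage as ndi

# 3x3 line structuring elements (0, 45, 90 and 135 degrees).
_LINES = [np.array(p, dtype=bool) for p in (
    [[0, 0, 0], [1, 1, 1], [0, 0, 0]],
    [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
    [[0, 1, 0], [0, 1, 0], [0, 1, 0]],
    [[0, 0, 1], [0, 1, 0], [1, 0, 0]],
)]

def sup_inf(u):
    """Erode by every line and take the pointwise supremum."""
    return np.max([ndi.binary_erosion(u, line) for line in _LINES], axis=0)

def inf_sup(u):
    """Dilate by every line and take the pointwise infimum."""
    return np.min([ndi.binary_dilation(u, line) for line in _LINES], axis=0)

def curvature_smooth(u, iterations=1):
    """Alternate the two compositions; each pass smooths the binary level set."""
    for i in range(iterations):
        u = sup_inf(inf_sup(u)) if i % 2 == 0 else inf_sup(sup_inf(u))
    return u

if __name__ == "__main__":
    u = np.zeros((64, 64), dtype=bool)
    u[20:44, 20:44] = True          # binary level set: a square region
    u[30:34, 10:20] = True          # with a thin protrusion
    smoothed = curvature_smooth(u, iterations=4)
    print(u.sum(), smoothed.sum())  # sharp corners are progressively rounded
```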
Abstract:
Sitting at the interface between Engineering, Computer Science and Biology, Computational Neuron Mechanics appears as a new interdisciplinary field potentially able to tackle clinical problems from a new perspective. This field is multiscale by nature, ranging from the nanoscale (e.g., tubulin dimers) to the macroscale (e.g., brain tissue), and aims at tackling problems that are complex, and sometimes impossible, to study through experimental means. Computational modeling has been widely used in Neuroscience applications as diverse as neuronal growth or compound action potential propagation. However, in the majority of the modeling approaches developed in this field to date, the interactions between the cell and its surrounding media/stimulus have rarely been explored. Despite the tremendous importance of such a relationship in several medical challenges, e.g., traumatic brain injury (TBI), cancer, or Alzheimer's disease (AD), a bridge between the electrophysiological-chemical and mechanical properties of neurons from the molecular scale to the cell level is still lacking. To this end, this research proposes a multiscale computational framework particularized for two representative scenarios: axon growth and the electrophysiological-mechanical coupling of neurites. In the former case, the relation between the molecular constituents of the axon during its growth and its resulting mechanical properties is explored, whereas in the latter, a mechanical stimulus provokes functional deficits at the cell level as a consequence of its electrophysiological-chemical alterations. The computational modeling approach chosen in this work is the finite difference method (FDM), implemented in a new program called Neurite. Although the finite element method (FEM) is also explored as part of this research, the FDM provides the necessary flexibility and versatility to implement biological models, as well as the mathematical simplicity to extend them to large-scale simulations at a low computational cost. Focusing first on the effect of electrophysiological-chemical properties on the mechanical properties, an adaptation of Neurite was developed to simulate microtubule polymerization in axonal growth and provide the axon mechanical properties as a function of microtubule occupancy. After calibrating the axon growth model against experimental results available in the literature, the mechanical characteristics can be tracked during the simulation. The axon mechanical properties show dramatic variations at the tip of the axon, where the growth cone supports the chemical and mechanical signaling. Based on the knowledge gained from the FDM scheme, and in order to go from 1D to 3D, this preliminary yet novel scheme paves the way for future studies with FEM. Focusing then on the effect of mechanical properties on the electrophysiological-chemical properties, Neurite was used to relate macroscopic mechanical loading to microscopic strains and strain rates, and to simulate the electrical signal propagation along neurites under mechanical loading. The simulations were calibrated against experimental results published in the literature, thus providing a model able to predict the alteration of neuronal electrophysiological function under external damaging loads, and linking mechanical injuries to subsequent acute functional deficits. To undertake large-scale simulations, although other state-of-the-art architectures based on many integrated cores (MICs) were considered, the explicit and implicit solvers were implemented for central processing units (CPUs) and graphics processing units (GPUs). Scalability studies were carried out for both implementations, showing promising results for extremely large-scale simulations with GPUs. This thesis opens the avenue for future mechanical modeling approaches aimed at linking electrophysiological-chemical properties to mechanical properties. Its overarching goal is to enhance the bioengineering and medical communities' knowledge of neuronal mechanics and of the functional deficits arising from damage produced by direct mechanical insults, such as TBI, or by evolving neurodegenerative illnesses, such as AD.
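Neurite itself is not reproduced here. As a rough illustration of the explicit finite-difference approach mentioned above, the sketch below integrates a passive 1D cable equation along a neurite; all parameter values, units and boundary conditions are illustrative placeholders, not the thesis's model.

```python
import numpy as np

# Toy passive-cable model, Cm * dV/dt = (a / (2 * Ri)) * d2V/dx2 - V / Rm,
# integrated with an explicit finite-difference scheme.
def simulate_cable(nx=200, nt=5000, dx=1e-5, dt=5e-7,
                   a=1e-6, Ri=1.0, Rm=1e4, Cm=1e-2, v_inj=0.05):
    V = np.zeros(nx)
    coeff = a / (2.0 * Ri * Cm)                    # effective diffusivity
    assert coeff * dt / dx**2 < 0.5, "explicit-scheme stability condition violated"
    for _ in range(nt):
        V[0] = v_inj                               # clamped potential at the proximal end
        lap = (V[:-2] - 2.0 * V[1:-1] + V[2:]) / dx**2
        V[1:-1] += dt * (coeff * lap - V[1:-1] / (Rm * Cm))
        V[-1] = V[-2]                              # sealed (zero-flux) distal end
    return V

if __name__ == "__main__":
    V = simulate_cable()
    print(V[0], V[50], V[100])                     # the potential decays along the neurite
```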
Abstract:
A recent study by the Vicerrectorado de Ordenación Académica y Planificación Estratégica of the Technical University of Madrid (UPM) defines the satisfaction of the university student body as "the response that the University offers to the expectations and service demands of the students, considered in a general way". Besides being an indicator of the academic and institutional integration of students, assessing student engagement allows the academic offering and the extension services of the University to be adapted to the real needs of the students. The process of convergence towards the European Higher Education Area (EHEA) raises the need to train in competences, that is, to develop in our students capacities and knowledge beyond the purely theoretical and practical. Therefore, the students' perception and experience of the educational process and environment is an important issue to be addressed in order to meet their expectations and achieve a curriculum in line with EHEA requirements. The present study aims to explore student motivation and approval of the educational environment at UPM. To this end, a total of 97 students enrolled in the undergraduate programs of Civil Engineering, Computer Engineering and Agronomic Engineering at UPM were surveyed. The survey consisted of 40 questions divided into three blocks. The first block of 20 questions was of a personal character and gathered, besides sex and age, the degree of fulfilment, involvement and dedication towards the institution and academic tasks. The second block contained 10 questions related to the students' perception of teaching quality, and the final block contained 10 questions regarding the Bologna Process. The students' personal motivation was moderately high, with a score of 3.6 (all scores are given on a 5-point scale), the most valued items being obtaining a university degree (4.3) and friendship between students (4.2). No significant difference was found between sexes (P=0.23), since the averages for this block of questions were 3.7±0.3 and 3.5±0.4 for women and men respectively. The students are moderately satisfied with their studies, with an average score of 3.2; the questions reflecting lower satisfaction concern the research profile of the teachers (2.8) and the organization of the Schools (2.9). The best-valued questions relate to the usefulness and quality of the degrees (3.5 and 3.4 respectively) and to the interest of the courses within the degree (3.4). By sex, the results of this block of questions are similar (3.1±0.3 and 3.2±0.3 for men and women respectively; P=0.79). Likewise, there were no differences (P=0.39) between students who combine work and studies and those who do not work (3.1±0.2 and 3.2±0.3 respectively). In conclusion, students at UPM present an acceptable degree of motivation and satisfaction with regard to the studies and services offered by their respective Schools. Both characteristics receive the same value for men and for women, and for students who combine work and studies as well as for those who devote themselves only to studying. Significantly, the most engaged students and regular class attendants show the highest degree of satisfaction. Overall, there is a great lack of information regarding the Bologna Process; in fact, most students would like to know more about what it is, what it means and what changes its implementation will involve.
Abstract:
A nonlinear analysis of an elastic tube subjected to gravity forces and buoyancy pressure is carried out. An updated Lagrangian formulation is used. The efficiency of the structural analysis, in terms of computer time and accuracy, is improved by introducing load stiffness matrices. In this way, the characteristics of the follower forces, such as changes in their intensity and direction, can be well represented. A sensitivity study of the influence of the different variables involved on the final deformed pipeline shape is carried out.
Abstract:
The use of Project Based Learning has spread widely over the last decades, not only across countries but also across disciplines. One of the most significant characteristics of this methodology is the use of ill-structured problems as the central activity during the course, which represents an important difficulty for both teachers and students. This work presents a model, supported by a tool, focused on helping teachers and students in Project Based Learning and on overcoming these difficulties. First, teachers are guided in designing the project following the main principles of this methodology. Once the project has been specified at the desired level of depth, the same tool helps students to complete the project specification and organize the implementation. Collaborative work among different users is allowed in both phases. This tool has been satisfactorily tested by designing two real projects used in Computer Engineering and Software Engineering degrees.
Abstract:
The main purpose of this work is to describe the case of an online Java Programming course for engineering students to learn computer programming and to practice other non-technical abilities: online training, self-assessment, teamwork and use of foreign languages. It is important that students develop confidence and competence in these skills, which will be required later in their professional tasks and/or in other engineering courses (life-long learning). Furthermore, this paper presents the pedagogical methodology, the results drawn from this experience and an objective performance comparison with another conventional (face-to-face) Java course.
Abstract:
Many computer vision and human-computer interaction applications developed in recent years need to evaluate complex and continuous mathematical functions as an essential step toward proper operation. However, rigorous evaluation of this kind of function often implies a very high computational cost, unacceptable in real-time applications. To alleviate this problem, functions are commonly approximated by simpler piecewise-polynomial representations. Following this idea, we propose a novel, efficient, and practical technique to evaluate complex and continuous functions using a nearly optimal design of two types of piecewise linear approximations in the case of a large budget of evaluation subintervals. To this end, we develop a thorough error analysis that yields asymptotically tight bounds to accurately quantify the approximation performance of both representations. It provides an improvement upon previous error estimates and allows the user to control the trade-off between the approximation error and the number of evaluation subintervals. To guarantee real-time operation, the method is suitable for, but not limited to, an efficient implementation in modern Graphics Processing Units (GPUs), where it outperforms previous alternative approaches by exploiting the fixed-function interpolation routines present in their texture units. The proposed technique is a perfect match for any application requiring the evaluation of continuous functions. We have measured its quality and efficiency in detail on several functions, in particular the Gaussian function, because it is extensively used in many areas of computer vision and cybernetics and is expensive to evaluate.
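The paper's nearly optimal breakpoint designs and GPU texture-unit implementation are not reproduced here. As a minimal sketch of the underlying error/subinterval trade-off, the NumPy code below builds a plain uniform-breakpoint piecewise linear approximation of a Gaussian and compares the measured maximum error against the classical interpolation bound.

```python
import numpy as np

def build_pwl_table(f, a, b, n):
    """Sample f at n + 1 uniformly spaced breakpoints over [a, b]."""
    xs = np.linspace(a, b, n + 1)
    return xs, f(xs)

def eval_pwl(xq, xs, ys):
    """Evaluate the piecewise linear interpolant at the query points xq."""
    return np.interp(xq, xs, ys)

if __name__ == "__main__":
    gaussian = lambda x: np.exp(-0.5 * x**2)
    xs, ys = build_pwl_table(gaussian, -4.0, 4.0, 64)       # 64 subintervals
    xq = np.linspace(-4.0, 4.0, 100001)
    err = np.max(np.abs(eval_pwl(xq, xs, ys) - gaussian(xq)))
    # Classical bound for uniform linear interpolation: max|f''| * h^2 / 8,
    # with h the subinterval width (max|f''| = 1 for this Gaussian).
    h = 8.0 / 64
    print(f"measured max error = {err:.2e}, bound = {h**2 / 8:.2e}")
```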