36 results for symbolic computation
Abstract:
An abstract is not available.
Abstract:
Several types of parallelism can be exploited in logic programs while preserving correctness and efficiency, i.e. ensuring that the parallel execution obtains the same results as the sequential one and the amount of work performed is not greater. However, such results do not take into account a number of overheads which appear in practice, such as process creation and scheduling, which can induce a slow-down, or, at least, limit speedup, if they are not controlled in some way. This paper describes a methodology whereby the granularity of parallel tasks, i.e. the work available under them, is efficiently estimated and used to limit parallelism so that the effect of such overheads is controlled. The run-time overhead associated with the approach is usually quite small, since as much work is done at compile time as possible. Also, a number of run-time optimizations are proposed. Moreover, a static analysis of the overhead associated with the granularity control process is performed in order to decide its convenience. The performance improvements resulting from the incorporation of grain size control are shown to be quite good, especially for systems with medium to large parallel execution overheads.
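The compile-time/run-time split can be illustrated with a minimal sketch (ours, in Python rather than a logic programming system; the cost function and threshold below are assumptions standing in for the compiler-derived estimates): a recursive call is spawned as a parallel task only when its estimated grain size exceeds the assumed task-creation and scheduling overhead.

from concurrent.futures import ThreadPoolExecutor

THRESHOLD = 50_000                  # assumed task-creation/scheduling overhead

def estimated_cost(lo, hi):         # stands in for a compile-time cost bound
    n = hi - lo
    return n * n

def sum_squares(xs, lo, hi, pool):
    if hi - lo <= 1:
        return xs[lo] ** 2 if hi > lo else 0
    mid = (lo + hi) // 2
    if estimated_cost(lo, hi) > THRESHOLD:        # coarse grain: parallelize
        left = pool.submit(sum_squares, xs, lo, mid, pool)
        right = sum_squares(xs, mid, hi, pool)    # overlaps with the left task
        return right + left.result()
    # fine grain: the overheads would dominate, so run sequentially
    return sum_squares(xs, lo, mid, pool) + sum_squares(xs, mid, hi, pool)

with ThreadPoolExecutor() as pool:
    print(sum_squares(list(range(1000)), 0, 1000, pool))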
Abstract:
This paper presents a study of the effectiveness of three different algorithms for the parallelization of logic programs based on compile-time detection of independence among goals. The algorithms are embedded in a complete parallelizing compiler, which incorporates different abstract interpretation-based program analyses. The complete system shows the task of automatic program parallelization to be practical. The trade-offs involved in using each of the algorithms in this task are studied experimentally, their weaknesses identified, and possible improvements discussed.
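The independence notion such algorithms detect can be sketched roughly as follows (ours; the goal representation and groundness information are placeholders, not the compiler's actual data structures): under strict independence, two goals may run in parallel if every variable they share is known to be ground.

def independent(goal1_vars, goal2_vars, ground_vars):
    """Strict independence: all variables shared by the two goals are ground."""
    return (goal1_vars & goal2_vars) <= ground_vars

# p(X, Y) and q(Y, Z): parallel only if the shared variable Y is ground
print(independent({"X", "Y"}, {"Y", "Z"}, ground_vars={"Y"}))  # True
print(independent({"X", "Y"}, {"Y", "Z"}, ground_vars=set()))  # False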
Abstract:
Several types of parallelism can be exploited in logic programs while preserving correctness and efficiency, i.e. ensuring that the parallel execution obtains the same results as the sequential one and the amount of work performed is not greater. However, such results do not take into account a number of overheads which appear in practice, such as process creation and scheduling, which can induce a slow-down, or, at least, limit speedup, if they are not controlled in some way. This paper describes a methodology whereby the granularity of parallel tasks, i.e. the work available under them, is efficiently estimated and used to limit parallelism so that the effect of such overheads is controlled. The run-time overhead associated with the approach is usually quite small, since as much work is done at compile time as possible. Also, a number of run-time optimizations are proposed. Moreover, a static analysis of the overhead associated with the granularity control process is performed in order to decide its convenience. The performance improvements resulting from the incorporation of grain size control are shown to be quite good, especially for systems with medium to large parallel execution overheads.
Abstract:
This project is framed within Symbolic Computation and the mathematical foundations of Computer Aided Geometric Design (CAGD). It addresses one of the main problems in CAGD, namely the manipulation of conchoid curves. Progress in the manipulation of conchoid curves matters because of the fundamental role these curves currently play in applications across fields as diverse as medicine, optics, electromagnetism and construction. The main goal of the project is the design and implementation of algorithms for the study, computation and manipulation of conchoid curves using symbolic computation techniques; the implementation is programmed in the symbolic computation system Maple. The project consists of two clearly differentiated parts, one theoretical and the other practical. The first part includes the geometric description and formal definition of conchoid curves, together with the basic ideas and properties. More precisely, it presents a mathematical study of the rationality analysis of these curves, explaining the algorithms that are implemented in the second part, which constitutes the main objective of the project; this part closes with a short introduction to the Maple system and to Maple programming. The second part is entirely original: in it the author develops the Maple implementations of the algorithms presented in the first part, collects them into a Maple package, and creates the Maple help pages needed for the correct use of that package. Once the implementation is complete, the algorithms are applied to a collection of well-known classical curves, and the data and results obtained are gathered in an atlas of curves. Finally, the project compiles the most notable applications in which conchoids play an important role, together with a brief note on conchoids of surfaces, currently the subject of several studies, for which the present work may prove very useful.
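For reference, the classical construction the package manipulates can be written as follows (our notation): given a plane curve C, a focus A and a distance d, the conchoid consists of the points at distance d from C along the lines through A,

\[
  \mathfrak{C}_d(\mathcal{C};A)
  = \Bigl\{\, P \pm d\,\frac{P-A}{\lVert P-A\rVert} \;:\; P\in\mathcal{C},\; P\neq A \Bigr\},
\]

so that, for a rational parametrization P(t) of C, the rationality analysis hinges, roughly speaking, on whether the norm \(\lVert P(t)-A\rVert\) can be made a rational function under a suitable reparametrization.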
Abstract:
Over the past 20 years, the use of Computer Algebra Systems (CAS) has helped with the teaching of mathematics in engineering schools. However, the traditional use of CAS only in math labs has led to a narrow view by the student: the CAS is additional work, not included in the learning process. The didactic guidelines of the European Higher Education Area (EHEA) propose a new teaching–learning model based on competencies. We suggest that the use of the CAS be adapted to the new rules. In this paper, we present a model for the integrated use of the CAS, and we describe and analyze two experiments carried out in the academic year 2011–2012. Our analysis suggests that the use of CAS in all learning and assessment activities has the potential to positively influence the development of competencies.
Abstract:
This paper is framed within the problem of analyzing the rationality of the components of two classical geometric constructions, namely the offset and the conchoid to an algebraic plane curve and, in the affirmative case, the actual computation of parametrizations. We recall some of the basic definitions and main properties of offsets (see [13]) and conchoids (see [15]), as well as the algorithms for parametrizing their rational components (see [1] and [16], respectively). Moreover, we implement the basic ideas, creating two packages in the computer algebra system Maple to analyze the rationality of conchoid and offset curves, together with the corresponding help pages. In addition, we present a brief atlas in which the offsets and conchoids of several algebraic plane curves are obtained, their rationality analyzed, and parametrizations provided using the created packages.
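For orientation, the two constructions differ only in the displacement direction (our notation): the offset moves each point P of the curve C a distance d along the unit normal N(P), while the conchoid moves it along the line joining P to a fixed focus A,

\[
  \mathcal{O}_d(\mathcal{C}) = \bigl\{\, P \pm d\,\mathcal{N}(P) \;:\; P \in \mathcal{C} \,\bigr\},
  \qquad
  \mathfrak{C}_d(\mathcal{C};A) = \Bigl\{\, P \pm d\,\frac{P-A}{\lVert P-A\rVert} \;:\; P \in \mathcal{C} \Bigr\}.
\]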
Abstract:
Nowadays computing platforms consist of a very large number of components that must be supplied with different voltage levels and power requirements. Even a very small platform, like a handheld computer, may contain more than twenty different loads and voltage regulators. The power delivery designers of these systems are required to provide, in a very short time, the right power architecture that optimizes the performance and meets electrical specifications plus cost and size targets. The appropriate selection of the architecture and converters directly defines the performance of a given solution. Therefore, the designer needs to be able to evaluate a significant number of options in order to know with good certainty whether the selected solutions meet the size, energy efficiency and cost targets. The difficulty of selecting the right solution arises from the wide range of power conversion products provided by different manufacturers. These products range from discrete components (to build converters) to complete power conversion modules that employ different manufacturing technologies. Consequently, in most cases it is not possible to analyze all the alternatives (combinations of power architectures and converters) that can be built; the designer has to select a limited number of converters in order to simplify the analysis. To overcome these difficulties, this thesis proposes a new design methodology for power supply systems. The methodology integrates evolutionary computation techniques to make it possible to analyze a large number of possibilities. This exhaustive analysis helps the designer to quickly define a set of feasible solutions and select the best performance trade-off for each application. The proposed approach consists of two key steps, one for the automatic generation of architectures and the other for the optimized selection of components; the thesis details the implementation of both steps. The usefulness of the methodology is corroborated by contrasting the results on real problems and on experiments designed to test the limits of the algorithms.
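As a rough illustration of the optimized-selection step, here is a generic evolutionary sketch in Python (the catalog, encoding, fitness weights and operators are invented for illustration and are not the thesis's actual ones): each individual assigns one catalog converter to each load, and fitness trades efficiency off against cost.

import random

CATALOG = [                       # hypothetical converter catalog: cost ($), efficiency
    {"cost": 1.0, "eff": 0.80}, {"cost": 2.5, "eff": 0.90},
    {"cost": 4.0, "eff": 0.95}, {"cost": 6.0, "eff": 0.97},
]
N_LOADS, POP, GENS = 6, 30, 40    # loads to supply, population size, generations

def fitness(ind):                 # assumed scalarized efficiency/cost trade-off
    cost = sum(CATALOG[i]["cost"] for i in ind)
    eff = sum(CATALOG[i]["eff"] for i in ind) / N_LOADS
    return eff - 0.05 * cost

def mutate(ind):                  # replace the converter chosen for one load
    j = random.randrange(N_LOADS)
    return ind[:j] + [random.randrange(len(CATALOG))] + ind[j + 1:]

pop = [[random.randrange(len(CATALOG)) for _ in range(N_LOADS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]                        # truncation selection
    pop = elite + [mutate(random.choice(elite)) for _ in elite]
print(max(pop, key=fitness))      # best converter assignment found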
Abstract:
Membrane systems are computationally equivalent to Turing machines, but their distributed and massively parallel nature yields polynomial solutions to problems whose traditional solutions are non-polynomial. It is therefore important to develop dedicated hardware and software implementations that exploit these two features of membrane systems. In distributed implementations of P systems, a communication bottleneck arises: when the number of membranes grows, the network becomes congested. The purpose of distributed architectures is to reach a compromise between the massively parallel character of the system and the evolution-step time needed to transit from one configuration of the system to the next, thereby solving the communication bottleneck. The goal of this paper is twofold. First, to survey in a systematic and uniform way the main results on how membranes can be placed on processors in order to obtain a software/hardware simulation of P systems in a distributed environment. Second, we improve some results on the membrane dissolution problem, prove that it is connected, and discuss the possibility of simulating this property in the distributed model. All this improves the implementation of the system's parallelism, since it increases the parallelism of the external communication among processors. The proposed ideas improve previous architectures that tackle the communication bottleneck, reducing the total time of an evolution step, increasing the number of membranes that can run on a processor, and reducing the number of processors.
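The compromise can be made concrete with a simple cost model (a simplification of ours, not the exact model of the surveyed architectures): distributing M membranes over P processors, an evolution step costs roughly the local rule-application time plus the external communication time,

\[
  T(P) \approx \frac{M}{P}\,T_{\mathrm{apl}} + P\,T_{\mathrm{com}},
  \qquad
  \frac{dT}{dP} = 0 \;\Rightarrow\; P^{*} = \sqrt{\frac{M\,T_{\mathrm{apl}}}{T_{\mathrm{com}}}},
\]

so too few processors under-exploit parallelism while too many congest the network, which is precisely the bottleneck the surveyed architectures attack.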
Abstract:
Dendritic computation is a term that has been present in neurophysiological research for a long time [1]. It is still controversial and far from being clarified within the concepts of both computation and neurophysiology [2], [3]. In any case, it has not been integrated either into a formal computational scheme or structure, or into formulations of artificial neural nets. Our objective here is to formulate a type of distributed computation that resembles dendritic trees, in such a way that it shows the advantages of neural network distributed computation, chiefly the reliability shown in the presence of holes (scotomas) in the computing net, without "blind spots".
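A toy numerical illustration of the claimed robustness (ours, not the authors' formalism): when output units read overlapping windows of the input, knocking one unit out (a scotoma) leaves every input still covered by its neighbours, so the computation degrades gracefully instead of developing a blind spot.

inputs = list(range(12))

def covered_inputs(active_units):
    """Unit u reads the overlapping input window [u, u+3)."""
    return {i for u in active_units for i in range(u, u + 3)}

all_units = set(range(10))                    # 10 units cover inputs 0..11
print(covered_inputs(all_units) == set(inputs))          # True: full coverage
print(covered_inputs(all_units - {4}) == set(inputs))    # True: no blind spot after the lesion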
Abstract:
Knowing the size of the terms to which program variables are bound at run-time in logic programs is required in a class of optimizations which includes granularity control and recursion elimination. Such size is difficult to even approximate at compile time and is thus generally computed at run-time by using (possibly predefined) predicates which traverse the terms involved. We propose a technique which has the potential of performing this computation much more efficiently. The technique is based on finding program procedures which are called before those in which knowledge regarding term sizes is needed and which traverse the terms whose size is to be determined, and transforming such procedures so that they compute term sizes "on the fly". We present a systematic way of determining whether a given program can be transformed in order to compute a given term size at a given program point without additional term traversal. Also, if several such transformations are possible our approach allows finding minimal transformations under certain criteria. We also discuss the advantages and applications of our technique (specifically in the task of granularity control) and present some performance results.
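The flavor of the transformation, illustrated in Python rather than in the paper's logic-program setting: a procedure that already traverses a term is extended to return the sizes of its outputs as extra results, so a later granularity test needs no second traversal.

def partition_with_sizes(pivot, xs):
    """Quicksort-style partition that also computes the sizes 'on the fly'."""
    smaller, larger = [], []
    n_small = n_large = 0
    for x in xs:                   # single traversal of the input term
        if x <= pivot:
            smaller.append(x)
            n_small += 1
        else:
            larger.append(x)
            n_large += 1
    return smaller, n_small, larger, n_large

s, ns, l, nl = partition_with_sizes(5, [7, 2, 9, 1, 5])
print(ns, nl)   # sizes available for a granularity test, with no extra len() pass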
Abstract:
Knowing the size of the terms to which program variables are bound at run-time in logic programs is required in a class of applications related to program optimization such as, for example, recursion elimination and granularity analysis. Such size is difficult to even approximate at compile time and is thus generally computed at run-time by using (possibly predefined) predicates which traverse the terms involved. We propose a technique based on program transformation which has the potential of performing this computation much more efficiently. The technique is based on finding program procedures which are called before those in which knowledge regarding term sizes is needed and which traverse the terms whose size is to be determined, and transforming such procedures so that they compute term sizes "on the fly". We present a systematic way of determining whether a given program can be transformed in order to compute a given term size at a given program point without additional term traversal. Also, if several such transformations are possible our approach allows finding minimal transformations under certain criteria. We also discuss the advantages and present some applications of our technique.
Abstract:
In this thesis, the reduced Navier-Stokes (RNS) equations are integrated numerically. This formulation describes the flow in a three-dimensional boundary layer that also presents a short characteristic spatial scale in the spanwise direction. The RNS equations are used to compute nonlinear finite-amplitude "streaks", and the results agree with those reported in the literature, typically obtained using direct numerical simulation (DNS) or nonlinear parabolized stability equations (PSE). Computing "streaks" via RNS integration is much cheaper than using DNS and avoids the stability problems that appear in the PSE formulation when the streak amplitude is no longer small. The RNS integration code is also used to compute the "streaks" that emerge naturally at the leading edge of a flat-plate boundary layer in the absence of any free-stream perturbations. Until now, existing results computed these "streaks" only in the linear limit (small amplitude); in this thesis their computation is carried out in the fully nonlinear regime (finite amplitude). In the second part of the thesis, the RNS code is generalized to allow a non-flat plate, curved in the spanwise direction and varying slowly in the streamwise direction. This is achieved by applying a change of coordinates that transforms the physical domain into a rectangular one; the RNS formulation expressed in the corresponding curvilinear coordinates is also integrated numerically. This generalized RNS code is finally used to study the boundary-layer flow over a plate with grooves that vary slowly in the streamwise direction, and to simulate the flow over grooves that grow in that direction.
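Schematically, and in our simplified nondimensional notation rather than the thesis's exact formulation, the reduction keeps cross-plane viscous diffusion but drops streamwise diffusion, making the system parabolic in the streamwise coordinate x and therefore cheap to march:

\[
  u_x + v_y + w_z = 0, \qquad
  u\,u_x + v\,u_y + w\,u_z = -\,\frac{dp_e}{dx} + u_{yy} + u_{zz},
\]

with analogous cross-plane momentum equations determining v, w and the cross-plane pressure; this parabolic character is what makes the streak computations much cheaper than DNS.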
Abstract:
The paper summarizes the results obtained by applying various implementations of the direct boundary element method (BEM) to the solution of the Laplace equation governing the potential flow problem during everyday service manoeuvres of high-speed trains. In particular, the results of train passing events at three different speed combinations are presented. Some recommendations are given to reduce calculation times, which, as is demonstrated, can be kept within reasonable limits even on today's office PCs. The method is thus shown to be a very valuable tool for the design engineer.
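For context, the direct BEM discretizes the standard boundary integral identity for a harmonic potential φ (our notation; c(x) = 1 at interior points and 1/2 on a smooth boundary):

\[
  c(\mathbf{x})\,\phi(\mathbf{x})
  = \int_{\Gamma}\!\Bigl( G(\mathbf{x},\mathbf{y})\,\frac{\partial\phi}{\partial n}(\mathbf{y})
    - \phi(\mathbf{y})\,\frac{\partial G}{\partial n_{\mathbf{y}}}(\mathbf{x},\mathbf{y}) \Bigr)\,d\Gamma_{\mathbf{y}},
  \qquad
  G(\mathbf{x},\mathbf{y}) = \frac{1}{4\pi\,\lVert\mathbf{x}-\mathbf{y}\rVert};
\]

collocating this identity at the boundary nodes produces the dense linear systems whose assembly and solution dominate the calculation times discussed above.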
Abstract:
Knowing the size of the terms to which program variables are bound at run-time in logic programs is required in a class of applications related to program optimization such as, for example, granularity analysis and selection among different algorithms or control rules whose performance may be dependent on such size. Such size is difficult to even approximate at compile time and is thus generally computed at run-time by using (possibly predefined) predicates which traverse the terms involved. We propose a technique based on program transformation which has the potential of performing this computation much more efficiently. The technique is based on finding program procedures which are called before those in which knowledge regarding term sizes is needed and which traverse the terms whose size is to be determined, and transforming such procedures so that they compute term sizes "on the fly". We present a systematic way of determining whether a given program can be transformed in order to compute a given term size at a given program point without additional term traversal. Also, if several such transformations are possible our approach allows finding minimal transformations under certain criteria. We also discuss the advantages and applications of our technique and present some performance results.