23 results for Theory of proportion and its application

at Universidad Politécnica de Madrid


Relevance:

100.00%

Publisher:

Abstract:

Case-based reasoning (CBR) is a unique tool for the evaluation of possible failure of firms (EOPFOF) because of its ease of interpretation and implementation. Ensemble computing, a variation of group decision-making in society, provides a potential means of improving the predictive performance of CBR-based EOPFOF. This research aims to integrate bagging and proportion case-basing with CBR to generate a proportion bagging CBR method for EOPFOF. Diverse multiple case bases are first produced by multiple case-basing, in which a volume parameter is introduced to control the size of each case base. Then, the classic case retrieval algorithm is implemented to generate diverse member CBR predictors. Majority voting, the most frequently used mechanism in ensemble computing, is finally used to aggregate the outputs of the member CBR predictors to produce the final prediction of the CBR ensemble. In an empirical experiment, we statistically validated the results of the CBR ensemble from multiple case bases by comparing them with those of multivariate discriminant analysis, logistic regression, classic CBR, the best member CBR predictor and a bagging CBR ensemble. The results for Chinese EOPFOF up to 3 years prior to failure indicate that the new CBR ensemble, which significantly improved CBR's predictive ability, outperformed all the comparative methods.
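
To make the pipeline concrete, here is a minimal sketch of the proportion-bagging idea described above, assuming numeric case features, binary failure labels and plain k-nearest-neighbour retrieval; all names and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def knn_predict(case_base_X, case_base_y, query, k=5):
    # Classic case retrieval: vote among the k most similar stored cases.
    dists = np.linalg.norm(case_base_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return int(np.round(case_base_y[nearest].mean()))

def proportion_bagging_cbr(X, y, queries, n_members=25, volume=0.6, k=5, seed=0):
    rng = np.random.default_rng(seed)
    size = int(volume * len(X))        # volume parameter: size of each case base
    votes = np.zeros((len(queries), n_members), dtype=int)
    for m in range(n_members):
        idx = rng.choice(len(X), size=size, replace=True)  # bootstrap case base
        for q, query in enumerate(queries):
            votes[q, m] = knn_predict(X[idx], y[idx], query, k)
    # Majority voting aggregates the member CBR predictors.
    return (votes.mean(axis=1) >= 0.5).astype(int)
```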

Relevance:

100.00%

Publisher:

Abstract:

In this paper, calculus of variations and combined blade element and momentum theory (BEMT) are used to demonstrate that, in hover, when neither root nor tip losses are considered, the rotor which minimizes the total power (MPR) generates an induced velocity that varies linearly along the blade span, and the angle of attack of every blade element is constant and equal to its optimum value. The traditional ideal twist (ITR) and optimum (OR) rotors are revisited in the context of this variational framework. Two more optimum rotors are obtained by considering root and tip losses: the ORL and the MPRL. A comparison between these five rotors is presented and discussed. The MPR and MPRL offer a remarkable saving of power for low values of both thrust coefficient and maximum aerodynamic efficiency. This result can be exploited to improve the aerodynamic behaviour of rotary-wing micro air vehicles (MAVs). A comparison with experimental results obtained from the literature is presented.
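
In schematic form, the variational problem solved by the MPR can be written as follows; the notation is standard hover momentum-theory notation of ours, and the profile-drag term is an assumption, not taken verbatim from the paper.

```latex
% Schematic variational statement (our symbols, standard hover BEMT notation):
% annulus thrust from momentum theory, total power = induced + profile.
\min_{v_i(r)} \; P \;=\;
    \int_0^R v_i \,\mathrm{d}T
    \;+\; \int_0^R \tfrac{1}{2}\,\rho\,(\Omega r)^3\, c\, c_d(\alpha)\,\mathrm{d}r,
\qquad \text{s.t.} \quad \int_0^R \mathrm{d}T = T,
\qquad \mathrm{d}T = 4\pi\rho\, v_i^{2}\, r\,\mathrm{d}r .
```

The paper's stated solution is an induced velocity linear in the radial coordinate, $v_i(r) \propto r$, with every blade element held at its optimum angle of attack.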

Relevance:

100.00%

Publisher:

Abstract:

Air traffic management (ATM) is undergoing a paradigm shift towards so-called trajectory-based operations. Under this paradigm, the role of air traffic controllers changes from one of continuous tactical intervention to one of longer-term supervision, supported by growing confidence in the solutions provided by modern automated decision-support tools. Supporting this concept requires a significant investment in development, together with the acquisition of new ground and airborne equipment, to enable the precise synchronisation of the trajectory view based on information exchange between the two actors. Over the past 30 to 40 years, airlines have generated one of the lowest returns on investment of any industry. Without tangible profits, the airline industry finds it difficult to attract the capital required for its modernisation, which delays the deployment of these improvements. This thesis aims to answer the question of whether the capabilities currently installed on commercial aircraft can be applied to achieve trajectory synchronisation with the required level of quality. It also analyses whether, together with improvements to the ground-based trajectory prediction tools that support arrival management, those capabilities can deliver the benefits expected within the framework of trajectory-based operations. This could provide an incentive for future avionics upgrades that could lead to further improvements. The operational concept proposed in this thesis aims to allow aircraft to be flown in a manner consistent with current optimised flight techniques. Aircraft are allowed to descend in the so-called path-managed mode, which is preferred by most airlines because of its low fuel consumption. The drawback of this mode is that the time of arrival at the point of interest is not actively controlled. In our operational concept, temporal uncertainty is managed by metering at strategically chosen points along the aircraft's trajectory and by allowing ground control to modify the aircraft's speed. Although the concept is based on managing the speed advisories given to the pilot, so as to be able to operate with today's typical levels of equipage, it also constitutes a framework into which more advanced avionics (for example, FMS-based time-of-arrival control) can be integrated naturally once that technology is installed. In addition to managing temporal uncertainty through metering at multiple points, the concept seeks to minimise that uncertainty by improving the ground-based trajectory prediction tools. This thesis presents a novel decomposition of the trajectory prediction process into two stages. This decomposition makes it possible to properly integrate the reference trajectory data computed by the Flight Management System (FMS), available via the Future Air Navigation System (FANS), into the ground-based trajectory prediction system.
FANS is fitted to the wide-body commercial aircraft currently in production, and even some narrow-body aircraft can be equipped with FANS avionics. In addition to automatically reporting the aircraft's position, FANS makes it possible to provide (part of) the reference trajectory held by the FMS, but the exploitation of this capability to improve trajectory prediction has not previously been studied in depth. The two-stage prediction provides a suitable solution to the air-ground trajectory synchronisation problem, since it synchronises the dimensions controlled by the guidance system using the reference trajectory information provided via FANS, and it also improves the prediction of the remaining open dimensions using a guidance model that exploits the improved meteorological models available on the ground. This two-stage trajectory prediction process was applied to a sample of 438 real flights that performed a continuous descent (without controller intervention) into Melbourne. These flights were flown by Boeing 737-800 aircraft, although the methodology described can be extrapolated to other aircraft types. The proposed trajectory prediction method reduces the standard deviation of the error in the estimated time of arrival at the point of interest by 30% with respect to that obtained by the FMS. This improved predicted trajectory can be used to establish the arrival sequence and to allocate a landing slot to each flight. Based on the allocated slot, a speed profile is determined that meets the slot with minimal impact on flight efficiency. The thesis proposes a new algorithm that determines the required speeds without an iterative search over the trajectory prediction system. The algorithm is based on an intelligent parameterisation of the trajectory prediction process, which relates the estimated time of arrival to a polynomial function. Solving that polynomial for the desired arrival time naturally yields the optimal speed profile that meets that arrival time without compromising efficiency. The arrival management system design proposed in this thesis exploits the installed avionics and communication systems far more efficiently, providing added value for the industry. The solution is therefore compatible with the transition towards the advanced avionics systems currently under development. The benefits obtained during this transition are an incentive for subsequent investment in avionics and in ground-based air traffic control systems. ABSTRACT Air traffic management (ATM) is undergoing a paradigm shift towards trajectory based operations where the role of an air traffic controller evolves from that of continuous intervention towards supervision, as decision making is improved based on increased confidence in the solutions provided by advanced automation. To support this concept, significant investment for the development and acquisition of new equipment is required on the ground as well as in the air, to facilitate the high degree of trajectory synchronisation and information exchange required.
Over the past 30-40 years the airline industry has generated one of the lowest returns on invested capital among all industries. Without tangible benefits realised, the airline industry may find it difficult to attract the required investment capital, delaying the acquisition of the equipment needed to realise the concept of trajectory based operations. In response to these challenges facing the modernisation of ATM, this thesis aims to answer the question of whether existing aircraft capabilities can be applied to achieve sufficient trajectory synchronisation and improvements to ground-based trajectory prediction in support of the arrival management process, to realise some of the benefits envisioned under trajectory based operations, and to provide an incentive for further avionics upgrades. The proposed operational concept aims to permit aircraft to operate in a manner consistent with current optimal aircraft operating techniques. It allows aircraft to descend in the fuel-efficient path managed mode as preferred by a majority of airlines, with arrival time not actively controlled by the airborne automation. The temporal uncertainty is managed through metering at strategically chosen points along the aircraft's trajectory with primary use of speed advisories. While the focus is on speed advisories to support all aircraft and different levels of equipage, the concept also constitutes a framework in which advanced avionics such as airborne time-of-arrival control can be integrated once this technology is widely available. In addition to managing temporal uncertainty through metering at multiple points, this temporal uncertainty is minimised by improving the supporting trajectory prediction capability. A novel two-stage trajectory prediction process is presented to adequately integrate aircraft trajectory data available through the Future Air Navigation System (FANS) into the ground-based trajectory predictor. FANS is standard equipment on any wide-body aircraft in production today, and some single-aisle aircraft are easily capable of being fitted with FANS. In addition to automatic position reporting, FANS provides the ability to supply (part of) the reference trajectory held by the aircraft's Flight Management System (FMS), but this capability has so far been widely overlooked. The two-stage process provides a 'best of both worlds' solution to the air-ground synchronisation problem by synchronising with the FMS reference trajectory those dimensions controlled by the guidance mode, and improving on the prediction of the remaining open dimensions by exploiting the high-resolution meteorological forecast available to a ground-based system. The two-stage trajectory prediction process was applied to a sample of 438 FANS-equipped Boeing 737-800 flights into Melbourne conducting a continuous descent free from ATC intervention; the methodology can be extrapolated to other types of aircraft. Trajectories predicted through the two-stage approach provided estimated times of arrival with a 30% reduction in the standard deviation of the error compared to the estimated time of arrival calculated by the FMS. This improved predicted trajectory can subsequently be used to set the sequence and allocate landing slots. Based on the allocated landing slot, the proposed system calculates a speed schedule for the aircraft to meet this landing slot at minimal flight efficiency impact. A novel algorithm is presented that determines this speed schedule without requiring an iterative process in which multiple calls to a trajectory predictor need to be made.
The algorithm is based on a parameterisation of the trajectory prediction process, allowing the estimated time of arrival to be represented by a polynomial function of the speed schedule, providing an analytical solution for the speed schedule required to meet a set arrival time. The arrival management solution proposed in this thesis leverages existing avionics and communications systems, creating new value for industry from current investment. The solution therefore supports a transition concept from mixed equipage towards the advanced avionics currently under development. Benefits realised during this transition may provide an incentive for ongoing investment in avionics.
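
The polynomial speed-schedule idea lends itself to a compact sketch. The snippet below is illustrative only, not the thesis implementation; the toy predictor and speed-scale bounds are assumptions. It calibrates the estimated time of arrival as a polynomial in a single speed-scale parameter and then solves the polynomial for the allocated slot time instead of iterating over the full predictor.

```python
import numpy as np

def fit_eta_polynomial(predict_eta, scales, degree=2):
    # predict_eta(scale) -> ETA in seconds; a handful of predictor runs
    # is enough to calibrate the polynomial once.
    etas = np.array([predict_eta(s) for s in scales])
    return np.polynomial.Polynomial.fit(scales, etas, degree)

def speed_for_slot(poly, target_eta, lo=0.9, hi=1.1):
    # Roots of poly(scale) - target_eta give the speed scale that meets the
    # allocated landing slot; keep the physically meaningful real root.
    roots = (poly - target_eta).roots()
    real = [r.real for r in roots if abs(r.imag) < 1e-9 and lo <= r.real <= hi]
    if not real:
        raise ValueError("slot not reachable within the allowed speed range")
    return real[0]

# Toy stand-in predictor: flying faster shortens a nominal 1500 s descent.
eta = lambda s: 1500.0 / s + 20.0 * (s - 1.0) ** 2
poly = fit_eta_polynomial(eta, scales=[0.92, 0.96, 1.0, 1.04, 1.08])
print(speed_for_slot(poly, target_eta=1460.0))   # ~2.7% speed-up needed
```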

Relevance:

100.00%

Publisher:

Abstract:

We review the evolution, state of the art and future lines of research on the sources, transport pathways and sinks of particulate trace elements in urban terrestrial environments, including the atmosphere, soils, and street and indoor dusts. Such studies reveal reductions in the emissions of some elements of historical concern, such as Pb, with interest consequently focusing on other toxic trace elements such as As, Cd, Hg, Zn and Cu. While establishing the levels of these elements is important in assessing the potential impact of human society on the urban environment, this knowledge must also be applied in conjunction with information on the toxicity of those trace elements and the degree of exposure of human receptors, in an assessment of whether such contamination represents a real risk to a city's inhabitants and of how that risk can be addressed.

Relevance:

100.00%

Publisher:

Abstract:

The implementation of abstract machines involves complex decisions regarding, e.g., data representation, opcodes, or instruction specialization levels, all of which affect the final performance of the emulator and the size of the bytecode programs in ways that are often difficult to foresee. Besides, studying alternatives by implementing abstract machine variants is a time-consuming and error-prone task because of the level of complexity and optimization of competitive implementations, which makes them generally difficult to understand, maintain, and modify. This also makes it hard to generate specific implementations for particular purposes. To ameliorate those problems, we propose a systematic approach to the automatic generation of implementations of abstract machines. Different parts of their definition (e.g., the instruction set or the internal data and bytecode representation) are kept separate and automatically assembled in the generation process. Alternative versions of the abstract machine are therefore easier to produce, and variants of their implementation can be created mechanically, with specific characteristics for a particular application if necessary. We illustrate the practicality of the approach by reporting on an implementation of a generator of production-quality WAMs which are specialized for executing a particular fixed (set of) program(s). The experimental results show that the approach is effective in reducing emulator size.
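
As an illustration of the generation approach (with a toy stack machine standing in for the WAM; the instruction set and all names are ours), the sketch below keeps the instruction definitions separate from the emulator skeleton and mechanically assembles an emulator containing only the opcodes a fixed program actually uses, which is what reduces emulator size.

```python
# Instruction set kept separate from the emulator skeleton.
INSTRUCTIONS = {
    "push": lambda st, arg: st.append(arg),
    "add":  lambda st, _:  st.append(st.pop() + st.pop()),
    "dup":  lambda st, _:  st.append(st[-1]),
}

def generate_emulator(program):
    # Specialise for the fixed program: keep only the opcodes it uses.
    used = {op for op, _ in program}
    table = {op: fn for op, fn in INSTRUCTIONS.items() if op in used}
    def run():
        stack = []
        for op, arg in program:          # classic dispatch loop
            table[op](stack, arg)
        return stack
    return run, len(table)               # emulator + its "size"

run, size = generate_emulator([("push", 2), ("push", 3), ("add", None)])
print(run(), "opcodes kept:", size)      # -> [5] opcodes kept: 2
```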

Relevance:

100.00%

Publisher:

Abstract:

This paper describes the design of an original twin capacitive load that is capable of tracing simultaneously the I-V characteristics of two photovoltaic modules. An example of the application of this dual system to the outdoor rating of photovoltaic modules is also presented; its results show a good degree of repeatability.
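
How a capacitive load sweeps an I-V curve can be sketched briefly: as the capacitor charges, the module's operating point moves from short-circuit current to open-circuit voltage, so simultaneously sampled voltage and current records from the two modules are the two I-V curves. The snippet below is a minimal illustration under that assumption, reconstructing the current from a logged voltage record via i = C dv/dt; the waveform and component values are synthetic.

```python
import numpy as np

def iv_from_capacitor(t, v, C):
    # The current through the charging capacitor equals the module current.
    i = C * np.gradient(v, t)
    return v, i

# Toy voltage record of a charge-up (stand-in for logged data):
t = np.linspace(0.0, 0.5, 500)              # s
v = 40.0 * (1.0 - np.exp(-t / 0.08))        # V
v_pts, i_pts = iv_from_capacitor(t, v, C=0.1)   # (V, I) points of the curve
```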

Relevance:

100.00%

Publisher:

Abstract:

Directed hypergraphs have been employed in problems related to propositional logic, relational databases, computational linguistics and machine learning. They have also been used as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot easily be modelled using binary relations alone. In this context, this type of representation is known as a hyper-network. A directed hypergraph is a generalisation of a directed graph that is especially suited to representing many-to-many relationships. Whereas an edge in a directed graph defines a relation between two of its nodes, a hyperedge in a directed hypergraph defines a relation between two sets of its nodes. Strong connectivity is an equivalence relation that partitions the set of nodes of a directed hypergraph, and each partition defines an equivalence class known as a strongly-connected component. Studying the strongly-connected components of a directed hypergraph can lead to a better understanding of the structure of this kind of hypergraph when its size is considerable. For directed graphs, very efficient algorithms exist for computing the strongly-connected components of large graphs. Thanks to these algorithms, it has been established that the structure of the WWW has a "bow-tie" shape, in which more than 70% of the nodes are distributed over three large sets, one of which is a strongly-connected component. This kind of structure has also been observed in complex networks in other areas, such as biology. Studies of a similar nature could not be carried out on directed hypergraphs because no algorithms existed for computing the strongly-connected components of this kind of hypergraph. In this doctoral thesis, we have investigated how to compute the strongly-connected components of a directed hypergraph. Specifically, we have developed two algorithms for this problem, established their correctness and determined their computational complexity. Both algorithms have been evaluated empirically to compare their running times. For the evaluation, we produced a selection of randomly generated directed hypergraphs inspired by well-known random graph models such as Erdős–Rényi, Newman–Watts–Strogatz and Barabási–Albert. Several optimisations of both algorithms were implemented and analysed in the thesis. In particular, collapsing the strongly-connected components of the directed graph that can be built by removing certain complex hyperedges from the original directed hypergraph notably improves the running times of the algorithms for several of the hypergraphs used in the evaluation. Besides the application examples mentioned above, directed hypergraphs have also been employed in the area of knowledge representation. Specifically, this kind of hypergraph has been used to compute modules of ontologies. An ontology can be defined as a set of axioms that formally specify a set of symbols and their relationships, while a module can be understood as a subset of the axioms of the ontology that captures all the knowledge the ontology holds about a specific set of symbols and their relationships.
In the thesis we have focused only on modules computed using the syntactic locality technique. Because ontologies can be very large, the computation of modules can facilitate their reuse and maintenance. However, analysing all the possible modules of an ontology is, in general, very costly, because the number of modules grows exponentially with the number of symbols and axioms of the ontology. Fortunately, the axioms of an ontology can be divided into partitions known as atoms. Each atom represents a maximal set of axioms that always appear together in a module. The atomic decomposition of an ontology is defined as a directed graph in which each node corresponds to an atom and each edge defines a dependency between a pair of atoms. In this thesis we introduce the concept of an "axiom dependency hypergraph", which generalises the concept of the atomic decomposition of an ontology. A module of an ontology corresponds to a connected component in this kind of hypergraph, and an atom of an ontology to a strongly-connected component. We have adapted the implementation of our algorithms so that they also work on axiom dependency hypergraphs and can thereby compute the atoms of an ontology. To demonstrate the viability of this idea, we have incorporated our algorithms into an application we developed for module extraction and the atomic decomposition of ontologies, which we have named HyS, and we have studied its running times using a selection of well-known ontologies from the biomedical domain, most of them available on the NCBO Internet portal. The evaluation results show that the running times of HyS are far better than those of the fastest known applications. ABSTRACT Directed hypergraphs are an intuitive modelling formalism that have been used in problems related to propositional logic, relational databases, computational linguistics and machine learning. Directed hypergraphs are also presented as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot naturally be modelled as binary relations. In this context, they are known as hyper-networks. A directed hypergraph is a generalization of a directed graph suitable for representing many-to-many relationships. While an edge in a directed graph defines a relation between two nodes of the graph, a hyperedge in a directed hypergraph defines a relation between two sets of nodes. Strong-connectivity is an equivalence relation that induces a partition of the set of nodes of a directed hypergraph into strongly-connected components. These components can be collapsed into single nodes. As a result, the size of the original hypergraph can be significantly reduced if the strongly-connected components have many nodes. This approach might contribute to a better understanding of how the nodes of a hypergraph are connected, in particular when the hypergraphs are large. In the case of directed graphs, there are efficient algorithms that can be used to compute the strongly-connected components of large graphs. For instance, it has been shown that the macroscopic structure of the World Wide Web can be represented as a "bow-tie" diagram where more than 70% of the nodes are distributed into three large sets and one of these sets is a large strongly-connected component.
This particular structure has also been observed in complex networks in other fields such as biology. Similar studies cannot be conducted on directed hypergraphs because no algorithm exists for computing the strongly-connected components of a hypergraph. In this thesis, we investigate ways to compute the strongly-connected components of directed hypergraphs. We present two new algorithms and we show their correctness and computational complexity. One of these algorithms is inspired by Tarjan's algorithm for directed graphs. The second algorithm follows a simple approach to compute the strongly-connected components. This approach is based on the fact that two nodes of a graph that are strongly-connected reach exactly the same nodes; in other words, the connected component of each node is the same. Both algorithms are empirically evaluated to compare their performance. To this end, we have produced a selection of random directed hypergraphs inspired by existing and well-known random graph models like Erdős–Rényi and Newman–Watts–Strogatz. Besides the application examples that we mentioned earlier, directed hypergraphs have also been employed in the field of knowledge representation. In particular, they have been used to compute the modules of an ontology. An ontology is defined as a collection of axioms that provides a formal specification of a set of terms and their relationships, and a module is a subset of an ontology that completely captures the meaning of certain terms as defined in the ontology. In particular, we focus on the modules computed using the notion of syntactic locality. As ontologies can be very large, the computation of modules facilitates the reuse and maintenance of these ontologies. Analysing all modules of an ontology, however, is in general not feasible, as the number of modules grows exponentially in the number of terms and axioms of the ontology. Nevertheless, the modules can succinctly be represented using the atomic decomposition of an ontology. Using this representation, an ontology can be partitioned into atoms, which are maximal sets of axioms that co-occur in every module. The atomic decomposition is then defined as a directed graph such that each node corresponds to an atom and each edge represents a dependency relation between two atoms. In this thesis, we introduce the notion of an axiom dependency hypergraph which is a generalization of the atomic decomposition of an ontology. A module in the ontology corresponds to a connected component in the hypergraph, and the atoms of the ontology to the strongly-connected components. We apply our algorithms for directed hypergraphs to axiom dependency hypergraphs and in this manner compute the atoms of an ontology. To demonstrate the viability of this approach, we have implemented the algorithms in the application HyS, which computes the modules of ontologies and calculates their atomic decomposition. In the thesis, we provide an experimental evaluation of HyS with a selection of large and prominent biomedical ontologies, most of which are available in the NCBO Bioportal. HyS outperforms state-of-the-art implementations in the tasks of extracting modules and computing the atomic decomposition of these ontologies.
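
The second, reachability-based algorithm admits a short sketch. Under B-hyperedge semantics (a hyperedge fires once all of its tail nodes have been reached), reachable sets are closed under the firing rule, so two nodes are strongly-connected exactly when their reachable sets coincide; grouping nodes by reachable set therefore yields the strongly-connected components. The code below is an illustrative naive version, not the HyS implementation.

```python
def reachable(source, hyperedges):
    # Forward closure of {source}: a hyperedge fires only when its whole
    # tail has been reached, then its head nodes become reached.
    reached, changed = {source}, True
    while changed:
        changed = False
        for tail, head in hyperedges:
            if set(tail) <= reached and not set(head) <= reached:
                reached |= set(head)
                changed = True
    return frozenset(reached)

def strongly_connected_components(nodes, hyperedges):
    # Nodes with identical reachable sets are mutually reachable,
    # hence in the same strongly-connected component.
    groups = {}
    for v in nodes:
        groups.setdefault(reachable(v, hyperedges), set()).add(v)
    return list(groups.values())

# Hyperedges are (tail, head) pairs of node tuples: ((0, 1), (2,)) relates
# the set {0, 1} to the set {2}.
edges = [((0,), (1,)), ((1,), (0,)), ((0, 1), (2,))]
print(strongly_connected_components(range(3), edges))   # [{0, 1}, {2}]
```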

Relevance:

100.00%

Publisher:

Abstract:

The mechanical behavior of living murine T-lymphocytes was assessed by atomic force microscopy (AFM). A robust experimental procedure was developed to overcome some features of lymphocytes, in particular their spherical shape and non-adherent character. The procedure included the immobilization of the lymphocytes on amine-functionalized substrates, the use of hydrodynamic effects on the deflection of the AFM cantilever to monitor the approach, and the use of the jumping mode for obtaining the images. Indentation curves were analyzed according to Hertz's model for contact mechanics. The calculated values of the elastic modulus are consistent both when considering the results obtained from a single lymphocyte and when comparing the curves recorded from cells of different specimens.
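
For a spherical tip, Hertz's model gives F = (4/3) · E/(1 − ν²) · √R · δ^(3/2), so the elastic modulus can be extracted by fitting this expression to the force-indentation record. A minimal sketch follows; the tip radius, Poisson ratio and the synthetic data are assumptions for illustration, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

R, nu = 20e-9, 0.5                      # assumed tip radius (m), Poisson ratio

def hertz(delta, E):
    # Hertz contact force for a spherical tip on an elastic half-space.
    return (4.0 / 3.0) * E / (1.0 - nu**2) * np.sqrt(R) * delta**1.5

rng = np.random.default_rng(0)
delta = np.linspace(0.0, 400e-9, 100)   # indentation depth (m)
force = hertz(delta, 1.2e3) + rng.normal(0.0, 2e-12, delta.size)  # synthetic

(E_fit,), _ = curve_fit(hertz, delta, force, p0=[1e3])
print(f"elastic modulus ~ {E_fit:.0f} Pa")
```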

Relevance:

100.00%

Publisher:

Abstract:

Program specialization optimizes programs for known values of the input. It is often the case that the set of possible input values is unknown, or this set is infinite. However, a form of specialization can still be performed in such cases by means of abstract interpretation, specialization then being with respect to abstract values (substitutions), rather than concrete ones. We study the multiple specialization of logic programs based on abstract interpretation. This involves in principle, and based on information from global analysis, generating several versions of a program predicate for different uses of such predicate, optimizing these versions, and, finally, producing a new, "multiply specialized" program. While multiple specialization has received theoretical attention, little previous evidence exists on its practicality. In this paper we report on the incorporation of multiple specialization in a parallelizing compiler and quantify its effects. A novel approach to the design and implementation of the specialization system is proposed. The resulting implementation techniques result in identical specializations to those of the best previously proposed techniques but require little or no modification of some existing abstract interpreters. Our results show that, using the proposed techniques, the resulting "abstract multiple specialization" is indeed a relevant technique in practice. In particular, in the parallelizing compiler application, a good number of run-time tests are eliminated and invariants extracted automatically from loops, resulting generally in lower overheads and in several cases in increased speedups.
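
The essence of multiple specialization can be conveyed with a toy example (written in Python for brevity rather than in a logic language; the procedure and the inferred property are invented): a single generic procedure is compiled into several versions, each optimized for an abstract description of its calling context, and call sites are rewritten to the matching version, which is what eliminates run-time tests.

```python
def sumsq_generic(xs, strict):
    # Generic version: performs the run-time test on every call.
    if strict and any(x is None for x in xs):
        raise ValueError("None not allowed")
    return sum(x * x for x in xs)

def sumsq_nonstrict(xs):
    # Version specialised for the abstract context "strict is always False",
    # as global analysis might infer: the run-time test has been removed.
    return sum(x * x for x in xs)

# The "compiler" rewrites each call site to the version matching the
# abstract information inferred for it:
print(sumsq_nonstrict([1, 2, 3]))             # analysed site: specialised
print(sumsq_generic([1, 2, 3], strict=True))  # unknown context: generic
```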

Relevance:

100.00%

Publisher:

Abstract:

A number of data description languages initially designed as standards for the WWW are currently being used to implement user interfaces to programs. This is done independently of whether such programs are executed in the same or a different host than the one running the user interface itself. The advantage of this approach is that it provides a portable, standardized, and easy to use solution for the application programmer, and a familiar behavior for the user, typically well versed in the use of WWW browsers. Among the proposed standard description languages, VRML is aimed at representing three-dimensional scenes including hyperlink capabilities. VRML is already used as an import/export format in many 3-D packages and tools, and has been shown effective in displaying complex objects and scenarios. We propose and describe a Prolog library which allows parsing and checking VRML code, transforming it, and writing it out as VRML again. The library converts such code to an internal representation based on first order terms which can then be arbitrarily manipulated. We also present as an example application the use of this library to implement a novel 3-D visualization for examining and understanding certain aspects of the behavior of CLP(FD) programs.
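
The round-trip design (parse to terms, transform, write out again) can be miniaturized as follows. This is an illustrative Python sketch, not the Prolog library: it handles only a toy `Name { field value }` subset of VRML, with the parsed structure playing the role of a first-order term.

```python
import re

TOKEN = re.compile(r"[{}]|[^\s{}]+")

def parse(tokens):
    # Parse `Name { field value ... }` into a term-like (functor, args) pair.
    name = next(tokens)
    assert next(tokens) == "{"
    fields = {}
    for tok in tokens:
        if tok == "}":
            return (name, fields)
        fields[tok] = float(next(tokens))

def unparse(term):
    # Write the internal term back out as VRML-like text.
    name, fields = term
    body = " ".join(f"{k} {v}" for k, v in fields.items())
    return f"{name} {{ {body} }}"

term = parse(iter(TOKEN.findall("Sphere { radius 2.5 }")))
term[1]["radius"] *= 2                   # transform the internal term
print(unparse(term))                     # Sphere { radius 5.0 }
```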

Relevance:

100.00%

Publisher:

Abstract:

Program specialization optimizes programs for known values of the input. It is often the case that the set of possible input values is unknown, or this set is infinite. However, a form of specialization can still be performed in such cases by means of abstract interpretation, specialization then being with respect to abstract values (substitutions), rather than concrete ones. This paper reports on the application of abstract multiple specialization to automatic program parallelization in the &-Prolog compiler. Abstract executability, the main concept underlying abstract specialization, is formalized, the design of the specialization system presented, and a non-trivial example of specialization in automatic parallelization is given.

Relevance:

100.00%

Publisher:

Abstract:

We provide an overall description of the Ciao multiparadigm programming system, emphasizing some of the novel aspects and motivations behind its design and implementation. An important aspect of Ciao is that, in addition to supporting logic programming (and, in particular, Prolog), it provides the programmer with a large number of useful features from different programming paradigms and styles, and that the use of each of these features (including those of Prolog) can be turned on and off at will for each program module. Thus, a given module may be using, e.g., higher-order functions and constraints, while another module may be using assignment, predicates, Prolog meta-programming, and concurrency. Furthermore, the language is designed to be extensible in a simple and modular way. Another important aspect of Ciao is its programming environment, which provides a powerful preprocessor (with an associated assertion language) capable of statically finding non-trivial bugs, verifying that programs comply with specifications, and performing many types of optimizations (including automatic parallelization). Such optimizations produce code that is highly competitive with that of other dynamic languages or, with the (experimental) optimizing compiler, even with that of static languages, all while retaining the flexibility and interactive development of a dynamic language. This compilation architecture supports modularity and separate compilation throughout. The environment also includes a powerful autodocumenter and a unit testing framework, both closely integrated with the assertion system. The paper provides an informal overview of the language and program development environment. It aims at illustrating the design philosophy rather than at being exhaustive, which would be impossible in a single journal paper, and points instead to previous Ciao literature.

Relevance:

100.00%

Publisher:

Abstract:

When users face a problem that requires a product, service, or action to solve it, selecting the best alternative can be a difficult task due to the uncertainty of their quality. This is especially the case in domains where users lack expertise, as for example in Software Engineering. Multiple criteria decision making (MCDM) methods help make better decisions when facing the complex problem of selecting the best solution among a group of alternatives that can be compared according to different conflicting criteria. In MCDM problems, alternatives represent concrete products, services or actions that will help in achieving a goal, while criteria represent the characteristics of these alternatives that are important for making a decision.
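
A minimal worked example of the MCDM setting described above, using a simple weighted-sum aggregation (one of the most basic MCDM methods; the scores, weights and criteria here are invented for illustration):

```python
import numpy as np

scores = np.array([[7.0, 3.0, 200.0],     # alternative A
                   [9.0, 5.0, 350.0],     # alternative B
                   [6.0, 2.0, 150.0]])    # alternative C
weights = np.array([0.5, 0.2, 0.3])       # importance of each criterion
is_cost = np.array([False, False, True])  # third criterion: price, lower is better

norm = scores / scores.max(axis=0)        # scale benefit criteria to [0, 1]
norm[:, is_cost] = scores[:, is_cost].min(axis=0) / scores[:, is_cost]  # invert costs
ranking = (norm * weights).sum(axis=1)    # weighted-sum aggregation
print("best alternative:", "ABC"[int(ranking.argmax())])
```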

Relevance:

100.00%

Publisher:

Abstract:

In this work, a unified algorithm-architecture-circuit co-design environment for complex FPGA system development is presented. The main objective is to find an efficient methodology for designing a configurable, optimized FPGA system with as little effort as possible in the verification stage, so as to shorten the development period. The design process of a proposed high-performance FFT/iFFT processor for Multiband Orthogonal Frequency Division Multiplexing Ultra Wideband (MB-OFDM UWB) systems is given as an example to demonstrate the proposed methodology. This design methodology was tested and is considered suitable for almost all types of complex FPGA system designs and verifications.
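
At the algorithm level, such a co-design flow typically checks the candidate datapath against a golden model before any RTL is written. The sketch below illustrates that step for a 128-point FFT (the transform size used in MB-OFDM UWB), comparing a crude fixed-point model against numpy's floating-point FFT; the quantization width and error tolerance are assumptions, not the paper's figures.

```python
import numpy as np

def quantize(x, frac_bits=10):
    # Model the fixed-point datapath the hardware would use.
    scale = 2 ** frac_bits
    return np.round(x * scale) / scale

rng = np.random.default_rng(1)
x = rng.normal(size=128) + 1j * rng.normal(size=128)   # random test vector

golden = np.fft.fft(x)                                  # floating-point reference
candidate = np.fft.fft(quantize(x.real) + 1j * quantize(x.imag))  # stand-in DUT

err = np.max(np.abs(candidate - golden)) / np.max(np.abs(golden))
assert err < 1e-2, "fixed-point FFT deviates too much from the golden model"
print(f"max relative error: {err:.2e}")
```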

Relevance:

100.00%

Publisher:

Abstract:

The use of electrodynamic bare tethers to explore the Jovian system by tapping its rotational energy for power and propulsion is studied. The position of the perijove and apojove of elliptical orbits relative to the synchronous orbit at 2.24 Jupiter radii is exploited so that the induced Lorentz force conveniently acts as either drag or thrust, while generating power and allowing navigation of the system. Capture and evolution to a low elliptical orbit near Jupiter, and capture into low circular orbits at the moons Io and Europa, are discussed.
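
The drag-or-thrust switch around the synchronous orbit can be checked with a back-of-the-envelope calculation: inside 2.24 Jupiter radii the corotating plasma moves slower than an orbiting spacecraft, so the Lorentz force on the tether brakes the orbit (drag, with power generation); outside, the plasma overtakes the spacecraft and the force boosts the orbit (thrust). The sketch below uses textbook constants; it is a sanity check of the sign of the effect, not a mission analysis.

```python
import math

MU_J = 1.267e17        # Jupiter GM, m^3/s^2
R_J = 7.149e7          # Jupiter equatorial radius, m
OMEGA_J = 1.758e-4     # Jupiter spin rate, rad/s (9.925 h period)

def tether_force_sense(r_over_RJ):
    r = r_over_RJ * R_J
    v_orbit = math.sqrt(MU_J / r)   # circular orbital speed
    v_plasma = OMEGA_J * r          # corotating plasma speed
    return "thrust" if v_plasma > v_orbit else "drag"

for x in (1.5, 4.0):
    print(f"{x:4.2f} R_J: {tether_force_sense(x)}")
# -> drag below the synchronous radius (2.24 R_J), thrust above it
```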