388 resultados para UNIFICATION


Relevância: 10.00%

Resumo:

It is generally recognized that information about the runtime cost of computations can be useful for a variety of applications, including program transformation, granularity control during parallel execution, and query optimization in deductive databases. Most of the work to date on compile-time cost estimation of logic programs has focused on the estimation of upper bounds on costs. However, in many applications, such as parallel implementations on distributed-memory machines, one would prefer to work with lower bounds instead. The problem with estimating lower bounds is that in general, it is necessary to account for the possibility of failure of head unification, leading to a trivial lower bound of 0. In this paper, we show how, given type and mode information about procedures in a logic program, it is possible to (semi-automatically) derive nontrivial lower bounds on their computational costs. We also discuss the cost analysis for the special and frequent case of divide-and-conquer programs and show how, as a pragmatic short-term solution, it may be possible to obtain useful results simply by identifying and treating divide-and-conquer programs specially.
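
A minimal sketch of the divide-and-conquer idea mentioned above, in Python rather than in a logic-programming setting: once mode and type information guarantees that head unification cannot fail, a nontrivial lower bound follows from a cost recurrence in which, at every level, the split that makes the recursion cheapest is assumed. The recurrence, the unit costs and the function names are illustrative assumptions, not the paper's actual analysis.

```python
from functools import lru_cache

# Hypothetical divide-and-conquer cost recurrence: a problem of size n is
# split (at cost proportional to n) into two subproblems of sizes k and
# n - 1 - k.  For a *lower* bound we take, at each level, the split that
# makes the total recursive cost smallest.  All unit costs are made up.

@lru_cache(maxsize=None)
def lower_bound(n: int) -> float:
    """Nontrivial lower bound on the cost of a size-n call, assuming the
    mode/type information guarantees that head unification cannot fail."""
    if n <= 1:
        return 1.0                                   # cost of the base clause
    split_cost = float(n)                            # e.g. traversing the input
    cheapest = min(lower_bound(k) + lower_bound(n - 1 - k) for k in range(n))
    return split_cost + cheapest

if __name__ == "__main__":
    for size in (4, 8, 16, 32):
        print(size, lower_bound(size))
```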

Relevância: 10.00%

Resumo:

We present a parallel graph narrowing machine, which is used to implement a functional logic language on a shared memory multiprocessor. It is an extension of an abstract machine for a purely functional language. The result is a programmed graph reduction machine which integrates the mechanisms of unification, backtracking, and independent and-parallelism. In the machine, the subexpressions of an expression can run in parallel. In the case of backtracking, the structure of an expression is used to avoid the reevaluation of subexpressions as far as possible. Deterministic computations are detected; their results are maintained and need not be reevaluated after backtracking.
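
As a loose illustration of the last point, assuming only what the abstract states: results of subexpressions detected to be deterministic can be cached so that backtracking never recomputes them. The Python sketch below shows just that caching behaviour; it does not model the graph narrowing machine itself, and all names are hypothetical.

```python
# Cache of results for subexpressions recognised as deterministic: once
# evaluated, a result survives backtracking and is never recomputed.
# 'expr' is any hashable representation of a subexpression (hypothetical).
cache = {}

def eval_deterministic(expr, compute):
    """Evaluate a deterministic subexpression at most once."""
    if expr not in cache:
        cache[expr] = compute(expr)
    return cache[expr]

print(eval_deterministic(("sum_to", 20), lambda e: sum(range(e[1]))))  # computed once
# After (simulated) backtracking the cached value is reused, not recomputed:
print(eval_deterministic(("sum_to", 20), lambda e: 1 / 0))
```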

Relevância: 10.00%

Resumo:

In this paper, abstract interpretation algorithms are described for computing the sharing as well as the freeness information about the run-time instantiations of program variables. An abstract domain is proposed which accurately and concisely represents combined freeness and sharing information for program variables. Abstract unification and all other domain-specific functions for an abstract interpreter working on this domain are presented. These functions are illustrated with an example. The importance of inferring freeness is stressed by showing (1) the central role it plays in non-strict goal independence, and (2) the improved accuracy it brings to the analysis of sharing information when both are computed together. Conversely, it is shown that keeping accurate track of sharing allows more precise inference of freeness, thus resulting in an overall much more powerful abstract interpreter.
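
The following Python sketch illustrates the flavour of such a combined domain, under simplifying assumptions (it is not the authors' exact definition): sharing is a set of groups of program variables that may share a run-time variable, freeness is the set of variables known to be unbound, and abstract unification for a binding x = t updates both components.

```python
from itertools import combinations

# Sharing: set of frozensets of program variables that may share a run-time
# variable.  Freeness: set of variables known to be free (unbound).

def star_union(groups):
    """Close a set of sharing groups under pairwise union."""
    closed = set(groups)
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(closed), 2):
            if a | b not in closed:
                closed.add(a | b)
                changed = True
    return closed

def amgu(x, t_vars, t_is_var, sharing, free):
    """Abstract unification for the binding x = t, where vars(t) = t_vars."""
    rel_x = {g for g in sharing if x in g}
    rel_t = {g for g in sharing if g & t_vars}
    irrelevant = sharing - (rel_x | rel_t)
    new = {a | b for a in star_union(rel_x) for b in star_union(rel_t)}
    # Conservative freeness update: a variable-to-free-variable binding keeps
    # both free; otherwise every variable touched by the binding may lose it.
    if t_is_var and x in free and t_vars <= free:
        free2 = set(free)
    else:
        touched = set().union(*new) if new else set()
        free2 = free - touched
    return irrelevant | new, free2

# Example: X = f(Y), with X, Y, Z initially free and pairwise independent.
sh = {frozenset({'X'}), frozenset({'Y'}), frozenset({'Z'})}
print(amgu('X', frozenset({'Y'}), False, sh, {'X', 'Y', 'Z'}))
# -> sharing {{X, Y}, {Z}}, freeness {'Z'}: X and Y may now share and are not free.
```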

Relevância: 10.00%

Resumo:

This paper presents improved unification algorithms, an implementation, and an analysis of the effectiveness of an abstract interpreter based on the sharing + freeness domain presented in a previous paper, which was designed to accurately and concisely represent combined freeness and sharing information for program variables. We first briefly review this domain and the unification algorithms previously proposed. We then improve these algorithms and correct them to deal with some cases which were not well analyzed previously, illustrating the improvement with an example. We then present the implementation of the improved algorithm and evaluate its performance by comparing the effectiveness of the information inferred to that of other interpreters available to us for an application (program parallelization) that is common to all these interpreters. All these systems have been embedded in a real parallelizing compiler. Effectiveness of the analysis is measured in terms of actual final performance of the system: i.e. in terms of the actual speedups obtained. The results show good performance for the combined domain in that it improves the accuracy of both types of information and also in that the analyzer using the combined domain is more effective in the application than any of the other analyzers it is compared to.
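
To make the application concrete, the sketch below shows one way sharing information is typically exploited for program parallelization: two goals may run in independent and-parallel fashion if no inferred sharing group touches variables of both. The representation and function names are assumptions for illustration, not the system described in the paper.

```python
# Two goals are strictly independent if, according to the inferred sharing,
# no run-time variable may be shared between them; such goals can be run in
# independent and-parallel fashion without affecting the search behaviour.

def may_share(v, w, sharing):
    """True if variables v and w may be bound to terms with a common variable."""
    return any(v in g and w in g for g in sharing)

def strictly_independent(goal1_vars, goal2_vars, sharing):
    return not any(may_share(v, w, sharing)
                   for v in goal1_vars for w in goal2_vars)

# Suppose the analysis infers the sharing {{X}, {Y}, {Z, W}}.
sharing = {frozenset({'X'}), frozenset({'Y'}), frozenset({'Z', 'W'})}
print(strictly_independent({'X', 'Z'}, {'Y', 'W'}, sharing))  # False: Z and W may share
print(strictly_independent({'X'}, {'Y'}, sharing))            # True
```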

Relevância: 10.00%

Resumo:

La tesis doctoral desarrolla una investigación original, dentro del marco disciplinario de la historia de la construcción, sobre los fundamentos constructivos de las fortificaciones bajomedievales fronterizas entre las Coronas de Castilla y Aragón en la actual provincia de Soria. En el título de la tesis ya queda expresado el objeto fundamental y fundacional, así como el ámbito temporal —desde la reconquista del oriente soriano por parte de Alfonso I el Batallador a principios del siglo XII hasta la unificación de las coronas hispánicas en el siglo XV bajo el común mandato de los Reyes Católicos— y la extensión territorial que delimita la investigación: aquéllas comarcas castellanas lindantes con Aragón pertenecientes a la actual provincia de Soria. Durante este período bajomedieval se produjeron una serie de enfrentamientos fronterizos que obligó a fortificar la frontera y las vías de comunicación entre ambas coronas. La falta de estudios de conjunto de estas fortificaciones entendiéndolas como participantes en un sistema fortificado ha constituido la justificación de la investigación, que se realiza en varios niveles de análisis: territorial, histórico, arquitectónico, poliorcético y constructivo. Así mismo, se ha detectado cierta falta de rigor acompañada de inexactitudes en las consideraciones constructivas publicadas sobre algunas de las fortificaciones del ámbito de estudio, lo que ha provocado errores en su datación al no más haber elementos de corte artístico o estilístico que marquen indudablemente la pertenencia a una época. En la tesis se ponen en duda las dataciones tradicionalmente aceptadas planteando la hipótesis que da pie a la investigación: ante la falta de elementos artísticos o estilísticos en unos sobrios edificios eminentemente funcionales es posible establecer con suficiente aproximación la fecha de construcción en base a criterios constructivos una vez formada una clasificación cronotipológica de cada técnica constructiva. La hipótesis, por lo tanto, plantea un objetivo principal —el estudio de la razón constructiva del sistema fortificado fronterizo— desarrollado en una serie de objetivos específicos cuya consecución programa los sucesivos niveles de análisis: - Conocer y detallar los elementos históricos que originaron los enfrentamientos entre las Coronas de Castilla y Aragón y su desarrollo mediante herramientas historiográficas y analizar las características naturales del territorio en litigio mediante instrumentos cartográficos. - Conocer y analizar los tipos arquitectónicos y las tradiciones constructivas empleadas en las construcciones castrenses en el ámbito temporal en que se enmarca la investigación. - Localizar, documentar y seleccionar para su análisis las fortalezas y construcciones militares erigidas durante dichas luchas fronterizas en la actual provincia de Soria a través del trabajo de campo y métodos cartográficos y bibliográficos. - Realizar un estudio general sobre el sistema fortificado a escala territorial - Investigar la tipología arquitectónica, poliorcética y constructiva del conjunto de estas fortificaciones bajomedievales fronterizas. - Analizar los fundamentos constructivos de los casos de estudio seleccionados entre estas construcciones y caracterizarlas en cuanto al material, elementos, sistemas y procesos constructivos. - Ordenar la información histórica dispersa y corregir errores para hacer una base sobre la que establecer un discurso histórico de cada caso de estudio. 
- Comparar y relacionar las técnicas constructivas empleadas en estas fortalezas con los utilizados en el mismo ámbito temporal. - Difundir para su debate los resultados de la investigación por los foros científicos habituales. El método empleado combina los trabajos de gabinete con una intensa labor de campo, en la que se han documentado cincuenta fortificaciones y se han redactado sus correspondientes fichas de toma de datos. La recopilación de datos se ha incluido en una base de datos que incluye aspectos generales, tipológicos, constructivos y bibliográficos básicos del conjunto, a modo de inventario, de fortificaciones de la provincia. Las fortificaciones seleccionadas se agrupan según una clasificación tipológica y constructiva que marca las líneas de estudio posteriores. Se desarrolla un capítulo de antecedentes en el que se estudia la historia de la construcción fortificada medieval tanto en Europa como en España analizando la evolución de los tipos arquitectónicos y las múltiples influencias culturales que surcaron el Mediterráneo desde el Oriente cruzado e islámico al Poniente donde se desarrollaba la empresa reconquistadora que mantuvo en estado de guerra continuo a la Península Ibérica durante ochocientos años. El análisis del territorio como contenedor del hecho fortificado revela que hay una relación íntima entre la ubicación de las fortificaciones y las formas naturales que definen las vías de comunicación entre los valles del Duero, del Ebro y del Tajo. En efecto, el ámbito de estudio ha supuesto desde la Antigüedad un territorio de paso fundamental en la articulación de las comunicaciones en la Península Ibérica. Este carácter de paso más que de frontera explica las inquietudes y la preocupación por su control tanto por Roma como por el califato cordobés como por los reinos cristianos medievales. El análisis de los elementos históricos se complementa con el estudio detallado de los enfrentamientos fronterizos entre Castilla y Aragón así como los aspectos sociales y políticos que provocaron la fortificación como sistema de definición de la frontera y de organización espacial, jurisdiccional, social y administrativa del territorio. La arquitectura fortificada es esencialmente funcional: su cometido es la defensa. En este sentido, tras un estudio morfológico de los castillos seleccionados se realiza un extenso análisis poliorcético de sus elementos, investigando su origen y aplicación para servir también de parámetros de datación. Siendo el objeto inaugural de la tesis el estudio de los fundamentos constructivos, se explican los distintos materiales de construcción empleados y se agrupan las fábricas de las fortificaciones seleccionadas en dos grandes grupos constructivos: las fábricas aparejadas y las fábricas encofradas. Se han destacado y estudiado la evolución histórica y la tipología y mensiología constructiva de tres técnicas destacadas: el uso del ladrillo, la tapia de cal y canto o mampostería encofrada y la tapia de tierra. Para el estudio de la componente histórica y de la dimensión constructiva de cada técnica ha sido necesario documentar numerosos casos tanto en el ámbito de estudio como en la Península Ibérica con el fin de establecer grupos cronotipológicos constructivos entre los que poder ubicar las fábricas de estas técnicas presentes en el ámbito de estudio. 
Se ha observado una evolución dimensional de las fábricas de tapia que es más evidente en las hispanomusulmanas al modularse en codos, pero que también se advierte significativamente en las cristianas bajomedievales. De cada una de las técnicas analizadas se ha seleccionado un caso de estudio singular y representativo. El castillo de Arcos de Jalón es un ejemplo significativo del empleo de la fábrica mixta de mampostería con verdugadas de ladrillo, así como las murallas de la ciudad fortificada de Peñalcázar lo son de la fábrica de mampostería encofrada, y el castillo de Serón de Nágima constituye un caso característico y principal de la utilización de la tapia de tierra en la arquitectura militar bajomedieval. Cada uno de estos tres casos de estudio se examina bajo los mismos cuatro niveles anteriormente mencionados: territorial, histórico, arquitectónico y defensivo, y constructivo. El sistemático método de estudio ha facilitado el orden en la investigación y la obtención de unos resultados y conclusiones que verifican la hipótesis y cumplen los objetivos marcados al comienzo. Se ha revisado la datación de la construcción de las fortificaciones analizadas mediante el estudio cronotipológico de sus fábricas, pudiendo trasladarse el método a otros sistemas fortificados. La tesis abre, finalmente, dos vías principales de investigación encaminadas a completar el estudio del sistema fortificado fronterizo bajomedieval en la raya oriental soriana de Castilla: la caracterización y datación por métodos físico-químicos de las muestras de piezas de madera de construcción que se conservan embebidas en las fábricas, y la búsqueda documental y archivística que pueda revelar nuevos datos respecto a la fundación, reparación, venta o cualquier aspecto económico, legislativo, organizativo o administrativo relativo a las fortificaciones en documentos coetáneos. ABSTRACT This doctoral thesis develops original research, within the field of construction history, on the constructive rationale of the late medieval frontier fortifications between the Crowns of Castile and Aragón in the present-day province of Soria, Spain. The title states the main objective, as well as the temporal scope (from the reconquest in the 12th century by Alfonso I of Aragón to the unification of the Hispanic crowns under the common rule of the Catholic Monarchs) and the territorial extent that delimits the research: those Castilian regions bordering Aragón in the present-day province of Soria. During this period a series of border wars took place, and this is the reason for the fortification of the border line and of the main roads between both Crowns. The lack of studies of these fortifications as participants in a fortified system is the justification for the research, which is carried out at several levels of analysis: territorial, historical, architectural, defensive and constructive. Likewise, a certain lack of rigor, together with inaccuracies, has been detected in the published constructive descriptions of several fortifications in the study area. This has led to errors in their dating, because there are neither artistic nor stylistic elements that unequivocally determine an epoch. The traditionally accepted datings are therefore challenged, and a hypothesis is formulated: in the absence of artistic or stylistic elements in these sober, functional buildings, it is possible to date their construction with sufficient approximation on the basis of constructive criteria, once a chronotypological classification of each building technique has been established.
The hypothesis therefore proposes a main aim, the study of the constructive rationale of the fortified border system, developed through a series of specific objectives whose achievement structures the successive levels of analysis: - To identify and detail the historical elements which triggered the wars between Castile and Aragón, and their development, using historiographical tools, and to analyze the natural characteristics of the disputed territory with cartographical tools. - To understand and analyze the architectural types and the building traditions employed in military constructions during the period under study. - To locate, document and select for analysis the fortresses and military constructions erected during these border wars in the present-day province of Soria, through fieldwork and bibliographical and cartographical methods. - To conduct a general study of the fortified system at a territorial scale. - To research the architectural, defensive and constructive typology of this system of late medieval border fortifications. - To analyze the constructive logic of the selected case studies and to characterize them in terms of materials, elements, systems and construction processes. - To organize the scattered historical information and correct errors, so as to provide a basis on which to establish a historical account of each case study. - To compare and relate the construction techniques employed in these fortresses with those used elsewhere in the same period. - To disseminate the research results for discussion in the usual scientific forums. The method combines desk work with intense fieldwork. Fifty fortifications have been documented, and their corresponding data collection sheets have been drawn up. The collected data have been entered into a database that gathers, as an inventory of the fortifications of the province, their basic general, typological, constructive and bibliographical aspects. The selected fortifications are grouped according to a typological and constructive classification which guides the subsequent lines of study. A chapter on antecedents studies the history of medieval fortified construction in Europe and in Spain, analyzing the evolution of architectural types and the many cultural influences that crossed the Mediterranean from the Crusader and Islamic East to the West, where the Reconquista kept the Iberian Peninsula in a continuous state of war for eight hundred years. The territory is analyzed as the container of the fortifications. This analysis reveals an intimate relationship between the location of the fortifications and the natural forms that define the communication routes between the Duero, Ebro and Tajo valleys. Indeed, since ancient times the study area has been a territory of passage rather than a frontier territory. This character explains the concern for its control by Rome, by the Caliphate of Córdoba and by the medieval Christian kingdoms alike. The analysis of the historical elements is supplemented by a detailed study of the border wars between Castile and Aragón and of the social and political issues that led to fortification as a system of border definition and of spatial, jurisdictional, social and administrative organization of the territory. Fortified architecture is essentially functional: its purpose is defense. In this sense, after a morphological study of the selected castles, an extensive analysis of their defensive elements is carried out, investigating their origin and application; this analysis also serves to define dating parameters.
The purpose of the thesis is the study of the constructive logic. First, the various building materials are explained. Then, the masonry fabrics are grouped into two major constructive groups: coursed masonry and formwork masonry. The historical evolution and the constructive typology and mensiology are studied for each of the three main techniques: the use of brick, the lime-and-stone formwork wall (mampostería encofrada) and the rammed-earth wall. Many cases have been documented throughout the Iberian Peninsula as well as in the study area. As a conclusion, a dimensional evolution of the rammed-earth walls is observed. This evolution is more evident in the Hispano-Muslim walls than in the late medieval Christian ones, the reason being the use of the cubit as a module. From each of the techniques discussed, a singular and representative case study has been selected. The castle of Arcos de Jalón is a significant example of mixed masonry of stone with brick courses. The walled city of Peñalcázar is built with formwork masonry. The castle of Serón de Nágima, finally, is a characteristic and principal case of the use of the rammed-earth wall in late medieval military architecture. Each of these three case studies is examined at the same four levels of analysis mentioned above: territorial, historical, architectural and defensive, and constructive. The systematic method of study has facilitated the order of the research and the obtaining of results and conclusions that verify the hypothesis and achieve the research objectives. The dating of the construction of the fortifications has been revised by studying the chronotypological features of their masonry, and the method can be transferred to the study of other fortified systems. Finally, the thesis describes two main new lines of research aimed at completing the study of the late medieval fortified border of Castile in the present-day province of Soria. The first of them is the characterization and dating, by physicochemical methods, of the samples of construction timber preserved embedded in the masonry. The second is the archival and documentary search that may reveal new information about the foundation, repair, sale or any economic, legal, organizational or administrative aspect concerning these fortifications in contemporary documents.

Relevância: 10.00%

Resumo:

Basándonos en la recopilación inicial de preposiciones, locuciones preposicionales, términos con preposición dependiente y phrasal verbs utilizados en el texto técnico, realizada en otros proyectos anteriores del Departamento de Lingüística Aplicada a la Ciencia y a la Tecnología, el objetivo de este trabajo es completar, organizar, actualizar y dar visibilidad a esta información inicial. Tras realizar un proceso exhaustivo de verificación, unificación, clasificación y ampliación de la información existente, en caso necesario, el listado resultante se utiliza para elaborar un glosario de términos con preposición. El objetivo final de este proyecto es que este glosario esté a disposición de los usuarios, a través de una consulta on-line, en la página del ILLLab (http://illlab.euitt.upm.es/wordpress/), dependiente del Departamento de Lingüística Aplicada a la Ciencia y a la Tecnología. Para incluir en el glosario ejemplos actualizados de textos técnicos, se ha recopilado un corpus lingüístico de textos técnicos, tomando como base diferentes números de la revista IEEE Spectrum, en su edición digital, publicados entre los años 2009 y 2012. El objetivo de esta recopilación es el de ofrecer al consultante diferentes ejemplos de uso en el texto técnico de los distintos términos con preposición que componen el glosario, de manera que pueda acceder de manera rápida y sencilla a ejemplos de uso real de los términos que está buscando, con objeto de clarificar aspectos relacionados con su uso o, en su caso, facilitar su aprendizaje. Toda esta información, tanto el listado de términos con preposición como las frases pertenecientes al corpus recopilado, se incorpora a una base de datos, alojada dentro de la misma página web del ILLLab. A través de un formulario de consulta, a disposición del usuario en dicha página, se pueden obtener todos los términos recopilados que coincidan con los criterios de búsqueda introducidos. El usuario puede realizar dos tipos de búsqueda principales: por preposición o por término completo. Además, puede elegir una búsqueda global (entre todos los términos que integran el glosario) o parcial (en una sola de las categorías en las que se han dividido los diferentes términos, de acuerdo con su función gramatical). Por último, se presentan unas estadísticas de uso de los términos recopilados dentro de los diferentes textos que integran el corpus lingüístico, de manera que pueda establecerse una relación de los que aparecen con más frecuencia en el texto técnico. ABSTRACT. Based on the initial collection of prepositions, prepositional phrases, terms with dependent prepositions and phrasal verbs used in technical texts, compiled in previous projects of the Department of Applied Linguistics to Science and Technology, the aim of this project is to improve, organize, update and provide visibility to this initial information. Following a process of verification, unification, classification and extension of the existing information, where necessary, the resulting list is used to build a glossary of terms with prepositions. The ultimate objective of this project is to make this glossary available to users through an online consultation on the ILLLab webpage (http://illlab.euitt.upm.es/wordpress/). The administration of this webpage depends on the Department of Applied Linguistics to Science and Technology. A linguistic corpus of technical texts has been compiled, based on different issues of the IEEE Spectrum magazine, in its online edition, published between the years 2009 and 2012.
The aim of this collection is to provide different examples of use in technical texts for the terms included in the glossary, so that examples of the actual use of the terms consulted can be easily and quickly accessed, in order to clarify doubts regarding their meaning or translation into Spanish and to facilitate learning. All this information, both the list of terms with prepositions and the sentences belonging to the compiled corpus, is incorporated into a database. Through a search form, the ILLLab user may obtain all the terms matching the search criteria entered. The user can perform two main types of search: by preposition or by full term. Additionally, a global search can be selected (covering all the terms included in the glossary) or a partial one (covering only one of the glossary's categories). Finally, some usage statistics are presented for the various texts included in the corpus, so that a list of the terms appearing most frequently in technical texts can be established.
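
A hypothetical sketch of the two search modes described above (by preposition or by full term, either globally or restricted to one grammatical category), using an in-memory SQLite table. The schema and the sample rows are assumptions; they are not the actual ILLLab database.

```python
import sqlite3

# Toy glossary with the fields implied above: the term, its preposition and
# its grammatical category.  Schema and rows are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE glossary (term TEXT, preposition TEXT, category TEXT)")
conn.executemany("INSERT INTO glossary VALUES (?, ?, ?)", [
    ("depend on", "on", "phrasal verb"),
    ("according to", "to", "prepositional phrase"),
    ("interest in", "in", "term with dependent preposition"),
])

def search(by, value, category=None):
    """by is 'preposition' or 'term'; category=None gives a global search."""
    assert by in ("preposition", "term")
    sql = f"SELECT term, preposition, category FROM glossary WHERE {by} = ?"
    args = [value]
    if category is not None:               # partial search within one category
        sql += " AND category = ?"
        args.append(category)
    return conn.execute(sql, args).fetchall()

print(search("preposition", "on"))                             # global search
print(search("term", "according to", "prepositional phrase"))  # partial search
```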

Relevância: 10.00%

Resumo:

La importancia de los sistemas de recomendación ha experimentado un crecimiento exponencial como consecuencia del auge de las redes sociales. En esta tesis doctoral presentaré una amplia visión sobre el estado del arte de los sistemas de recomendación. Inicialmente, estos estaban basados en filtrado demográfico, basado en contenido o colaborativo. En la actualidad, estos sistemas incorporan alguna información social al proceso de recomendación. En el futuro utilizarán información implícita, local y personal proveniente del Internet de las cosas. Los sistemas de recomendación basados en filtrado colaborativo se pueden modificar con el fin de realizar recomendaciones a grupos de usuarios. Existen trabajos previos que han incluido estas modificaciones en diferentes etapas del algoritmo de filtrado colaborativo: búsqueda de los vecinos, predicción de las votaciones y elección de las recomendaciones. En esta tesis doctoral proporcionaré un nuevo método que realiza el proceso de unificación (pasar de varios usuarios a un grupo) en el primer paso del algoritmo de filtrado colaborativo: el cálculo de la métrica de similaridad. Proporcionaré una formalización completa del método propuesto. Explicaré cómo obtener el conjunto de k vecinos del grupo de usuarios y mostraré cómo obtener recomendaciones usando dichos vecinos. Asimismo, incluiré un ejemplo detallando cada paso del método propuesto en un sistema de recomendación compuesto por 8 usuarios y 10 ítems. Las principales características del método propuesto son: (a) es más rápido (más eficiente) que las alternativas proporcionadas por otros autores, y (b) es al menos tan exacto y preciso como otras soluciones estudiadas. Para contrastar esta hipótesis realizaré varios experimentos que miden la precisión, la exactitud y el rendimiento del método. Los resultados obtenidos se compararán con los resultados de otras alternativas utilizadas en la recomendación a grupos. Los experimentos se realizarán con las bases de datos de MovieLens y Netflix. ABSTRACT The importance of recommender systems has grown exponentially with the advent of social networks. In this PhD thesis I will provide a wide vision of the state of the art of recommender systems. They were initially based on demographic, content-based and collaborative filtering. Currently, these systems incorporate some social information into the recommendation process. In the future, they will use implicit, local and personal information from the Internet of Things. As we will see here, recommender systems based on collaborative filtering can be modified to perform recommendations to groups of users. Previous works have made this modification in different stages of the collaborative filtering algorithm: establishing the neighborhood, the prediction phase and the determination of recommended items. In this PhD thesis I will provide a new method that carries out the unification process (many users to one group) in the first stage of the collaborative filtering algorithm: the similarity metric computation. I will provide a full formalization of the proposed method. I will explain how to obtain the k nearest neighbors of the group of users and I will show how to get recommendations using those neighbors. I will also include a running example of a recommender system with 8 users and 10 items, detailing all the steps of the proposed method.
The main highlights of the proposed method are: (a) it will be faster (more efficient) than the alternatives provided by other authors, and (b) it will be at least as precise and accurate as the other solutions studied. To check this hypothesis I will conduct several experiments measuring the accuracy, the precision and the performance of my method. I will compare these results with the results generated by other methods of group recommendation. The experiments will be carried out using the MovieLens and Netflix datasets.
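
A hedged sketch of the central idea, unifying the group into a single profile before the similarity metric is computed, in Python. Averaging the group's votes and using Pearson correlation are illustrative choices for the sketch, not necessarily the thesis's exact metric; the toy 8 x 10 rating matrix mirrors the running example mentioned above.

```python
import numpy as np

# Ratings matrix: rows are users, columns are items, np.nan marks "not voted".

def group_profile(ratings, group):
    """Unify several users into one pseudo-user by averaging their votes."""
    sub = ratings[group, :]
    counts = (~np.isnan(sub)).sum(axis=0)
    sums = np.nansum(sub, axis=0)
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

def similarity(profile, user_row):
    """Pearson correlation over the items voted by both the group and the user."""
    common = ~np.isnan(profile) & ~np.isnan(user_row)
    if common.sum() < 2:
        return -1.0
    a, b = profile[common], user_row[common]
    if a.std() == 0 or b.std() == 0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])

def k_neighbours(ratings, group, k):
    """The k users most similar to the unified group profile."""
    prof = group_profile(ratings, group)
    sims = [(u, similarity(prof, ratings[u]))
            for u in range(ratings.shape[0]) if u not in group]
    return sorted(sims, key=lambda s: s[1], reverse=True)[:k]

# Toy data: 8 users x 10 items, with the group formed by users 0 and 1.
rng = np.random.default_rng(0)
R = rng.integers(1, 6, size=(8, 10)).astype(float)
R[rng.random(R.shape) < 0.3] = np.nan
print(k_neighbours(R, group=[0, 1], k=3))
```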

Relevância: 10.00%

Resumo:

This paper presents an approach to create what we have called a Unified Sentiment Lexicon (USL). This approach aims at aligning, unifying, and expanding the set of sentiment lexicons which are available on the web in order to increase their robustness of coverage. One problem related to the task of the automatic unification of different scores of sentiment lexicons is that there are multiple lexical entries for which the classification as positive, negative, or neutral {P, Z, N} depends on the unit of measurement used in the annotation methodology of the source sentiment lexicon. Our USL approach computes the unified strength of polarity of each lexical entry based on the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between 1 and -1, where 1 indicates that the lexical entries are perfectly correlated, 0 indicates no correlation, and -1 means they are perfectly inversely correlated; the corresponding UnifiedMetrics procedure is implemented for both CPU and GPU. Another problem is the high processing time required for computing all the lexical entries in the unification task. Thus, the USL approach computes a subset of lexical entries in each of the 1344 GPU cores and uses parallel processing in order to unify 155,802 lexical entries. The results of the analysis conducted using the USL approach show that the USL has 95,430 lexical entries, out of which 35,201 are considered to be positive, 22,029 negative, and 38,200 neutral. Finally, the runtime was 10 minutes for the 95,430 lexical entries; this represents a three-fold reduction in the computing time for the UnifiedMetrics procedure.
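
A small illustration of the unification step on two toy lexicons: the Pearson coefficient is computed over the entries the lexicons share, and a unified polarity and {P, Z, N} label is assigned to each shared entry. Aggregating by the plain mean is an assumption of the sketch, not the USL definition, and the word lists are invented.

```python
import numpy as np

# Two invented source lexicons with polarity scores in [-1, 1].
lexicon_a = {"good": 0.8, "bad": -0.7, "house": 0.0, "awful": -0.9, "nice": 0.5}
lexicon_b = {"good": 0.6, "bad": -0.9, "house": 0.1, "awful": -0.8}

shared = sorted(set(lexicon_a) & set(lexicon_b))
a = np.array([lexicon_a[w] for w in shared])
b = np.array([lexicon_b[w] for w in shared])

# Pearson coefficient between the two lexicons on their shared entries:
# 1 = perfectly correlated, 0 = no correlation, -1 = perfectly inversely correlated.
pearson = float(np.corrcoef(a, b)[0, 1])

# Unified strength of polarity (here: plain mean) and {P, Z, N} label per entry.
unified = {w: (lexicon_a[w] + lexicon_b[w]) / 2 for w in shared}
labels = {w: "P" if s > 0 else "N" if s < 0 else "Z" for w, s in unified.items()}

print(round(pearson, 3))
print(unified)
print(labels)
```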

Relevância: 10.00%

Resumo:

Esta tesis presenta un modelo, una metodología, una arquitectura, varios algoritmos y programas para crear un lexicón de sentimientos unificado (LSU) que cubre cuatro lenguas: inglés, español, portugués y chino. El objetivo principal es alinear, unificar, y expandir el conjunto de lexicones de sentimientos disponibles en Internet y los desarrollados a lo largo de esta investigación. Así, el principal problema a resolver es la tarea de unificar de forma automatizada los diferentes lexicones de sentimientos obtenidos por el crawler CSR, porque la unidad de medida para asignar la intensidad de los valores de la polaridad (de forma manual, semiautomática y automática) varía de acuerdo con las diferentes metodologías utilizadas para la construcción de cada lexicón. La representación codificada de la estructura de datos de los términos presenta también una variación en la estructura de lexicón a lexicón. Por lo que al unificar en un lexicón de sentimientos se hace posible la reutilización del conocimiento recopilado por los diferentes grupos de investigación y se incrementa, a la vez, el alcance, la calidad y la robustez de los lexicones. Nuestra metodología LSU calcula un valor unificado de la intensidad de la polaridad para cada entrada léxica que está presente en al menos dos de los lexicones de sentimientos que forman parte de este estudio. En contraste, las entradas léxicas que no son comunes en al menos dos de los lexicones conservan su valor original. El coeficiente de Pearson resultante permite medir la correlación existente entre las entradas léxicas asignándoles un rango de valores de uno a menos uno, donde uno indica que los valores de los términos están perfectamente correlacionados, cero indica que no existe correlación y menos uno significa que están inversamente correlacionados. Este procedimiento se lleva acabo con la función de MetricasUnificadas tanto en la CPU como en la GPU. Otro problema a resolver es el tiempo de procesamiento que se requiere para realizar la tarea de unificación de la intensidad de la polaridad y con ello alcanzar una cobertura mayor de lemas en los lexicones de sentimientos existentes. Asimismo, la metodología LSU utiliza el procesamiento paralelo para unificar los 155 802 términos. El algoritmo LSU procesa mediante cargas iguales el subconjunto de entradas léxicas en cada uno de los 1344 núcleos en la GPU. Los resultados de nuestro análisis arrojaron un total de 95 430 entradas léxicas donde 35 201 obtuvieron valores positivos, 22 029 negativos y 38 200 neutrales. Finalmente, el tiempo de ejecución fue de 2,506 segundos para el total de las entradas léxicas, lo que permitió reducir el procesamiento de cómputo hasta en una tercera parte con respecto al algoritmo secuencial. De estos resultados se concluye que al lograr un lexicón de sentimientos unificado que permite homogeneizar la intensidad de la polaridad de las unidades léxicas (con valores positivos, negativos y neutrales) deriva no sólo en el análisis semántico del corpus basado en los términos con una mayor carga de polaridad, o del resumen de las valoraciones o las tendencias de neuromarketing, sino también en aplicaciones como el etiquetado subjetivo de sitios web o de portales sintácticos y semánticos, por mencionar algunas. ABSTRACT This thesis presents an approach to create what we have called a Unified Sentiment Lexicon (USL). This approach aims at aligning, unifying, and expanding the set of sentiment lexicons which are available on the web in order to increase their robustness of coverage. 
One problem related to the task of the automatic unification of different scores of sentiment lexicons is that there are multiple lexical entries for which the classification as positive, negative, or neutral {P, Z, N} depends on the unit of measurement used in the annotation methodology of the source sentiment lexicon. Our USL approach computes the unified strength of polarity of each lexical entry based on the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between 1 and -1, where 1 indicates that the lexical entries are perfectly correlated, 0 indicates no correlation, and -1 means they are perfectly inversely correlated; the corresponding UnifiedMetrics procedure is implemented for both CPU and GPU. Another problem is the high processing time required for computing all the lexical entries in the unification task. Thus, the USL approach computes a subset of lexical entries in each of the 1344 GPU cores and uses parallel processing in order to unify 155,802 lexical entries. The results of the analysis conducted using the USL approach show that the USL has 95,430 lexical entries, out of which 35,201 are considered to be positive, 22,029 negative, and 38,200 neutral. Finally, the runtime was 2.505 seconds for the 95,430 lexical entries; this represents a three-fold reduction in the computing time for UnifiedMetrics with respect to the sequential implementation. A key contribution of this work is that we preserve the use of a unified sentiment lexicon for all tasks. Such a lexicon is used to define resources and resource-related properties that can be verified based on the results of the analysis, and it is powerful, general and extensible enough to express a large class of interesting properties. Some applications of this work include merging, aligning, pruning and extending the current sentiment lexicons.

Relevância: 10.00%

Resumo:

El presente trabajo tiene como objetivo general el análisis de las técnicas de diseño y optimización de redes topográficas, observadas mediante topografía convencional (no satelital) el desarrollo e implementación de un sistema informático capaz de ayudar a la definición de la geometría más fiable y precisa, en función de la orografía del terreno donde se tenga que ubicar. En primer lugar se realizará un estudio de la metodología del ajuste mediante mínimos cuadrados y la propagación de varianzas, para posteriormente analizar su dependencia de la geometría que adopte la red. Será imprescindible determinar la independencia de la matriz de redundancia (R) de las observaciones y su total dependencia de la geometría, así como la influencia de su diagonal principal (rii), números de redundancia, para garantizar la máxima fiabilidad interna de la misma. También se analizará el comportamiento de los números de redundancia (rii) en el diseño de una red topográfica, la variación de dichos valores en función de la geometría, analizando su independencia respecto de las observaciones así como los diferentes niveles de diseño en función de los parámetros y datos conocidos. Ha de señalarse que la optimización de la red, con arreglo a los criterios expuestos, está sujeta a los condicionantes que impone la necesidad de que los vértices sean accesibles, y además sean visibles entre sí, aquellos relacionados por observaciones, situaciones que dependen esencialmente del relieve del terreno y de los obstáculos naturales o artificiales que puedan existir. Esto implica la necesidad de incluir en el análisis y en el diseño, cuando menos de un modelo digital del terreno (MDT), aunque lo más útil sería la inclusión en el estudio del modelo digital de superficie (MDS), pero esta opción no siempre será posible. Aunque el tratamiento del diseño esté basado en un sistema bidimensional se estudiará la posibilidad de incorporar un modelo digital de superficie (MDS); esto permitirá a la hora de diseñar el emplazamiento de los vértices de la red la viabilidad de las observaciones en función de la orografía y los elementos, tanto naturales como artificiales, que sobre ella estén ubicados. Este sistema proporcionaría, en un principio, un diseño óptimo de una red constreñida, atendiendo a la fiabilidad interna y a la precisión final de sus vértices, teniendo en cuenta la orografía, lo que equivaldría a resolver un planteamiento de diseño en dos dimensiones y media1; siempre y cuando se dispusiera de un modelo digital de superficie o del terreno. Dado que la disponibilidad de obtener de manera libre el MDS de las zonas de interés del proyecto, hoy en día es costoso2, se planteará la posibilidad de conjuntar, para el estudio del diseño de la red, de un modelo digital del terreno. Las actividades a desarrollar en el trabajo de esta tesis se describen en esta memoria y se enmarcan dentro de la investigación para la que se plantean los siguientes objetivos globales: 1. Establecer un modelo matemático del proceso de observación de una red topográfica, atendiendo a todos los factores que intervienen en el mismo y a su influencia sobre las estimaciones de las incógnitas que se obtienen como resultado del ajuste de las observaciones. 2. Desarrollar un sistema que permita optimizar una red topográfica en sus resultados, aplicando técnicas de diseño y simulación sobre el modelo anterior. 3. 
Presentar una formulación explícita y rigurosa de los parámetros que valoran la fiabilidad de una red topográfica y de sus relaciones con el diseño de la misma. El logro de este objetivo se basa, además de en la búsqueda y revisión de las fuentes, en una intensa labor de unificación de notaciones y de construcción de pasos intermedios en los desarrollos matemáticos. 4. Elaborar una visión conjunta de la influencia del diseño de una red, en los seis siguientes factores (precisiones a posteriori, fiabilidad de las observaciones, naturaleza y viabilidad de las mismas, instrumental y metodología de estacionamiento) como criterios de optimización, con la finalidad de enmarcar el tema concreto que aquí se aborda. 5. Elaborar y programar los algoritmos necesarios para poder desarrollar una aplicación que sea capaz de contemplar las variables planteadas en el apartado anterior en el problema del diseño y simulación de redes topográficas, contemplando el modelo digital de superficie. Podrían considerarse como objetivos secundarios, los siguientes apartados: Desarrollar los algoritmos necesarios para interrelacionar el modelo digital del terreno con los propios del diseño. Implementar en la aplicación informática la posibilidad de variación, por parte del usuario, de los criterios de cobertura de los parámetros (distribución normal o t de Student), así como los grados de fiabilidad de los mismos ABSTRACT The overall purpose of this work is the analysis of the techniques of design and optimization for geodetic networks, measured with conventional survey methods (not satellite), the development and implementation of a computational system capable to help on the definition of the most liable and accurate geometry, depending on the land orography where the network has to be located. First of all, a study of the methodology by least squares adjustment and propagation of variances will be held; then, subsequently, analyze its dependency of the geometry that the network will take. It will be essential to determine the independency of redundancy matrix (R) from the observations and its absolute dependency from the network geometry, as well as the influence of the diagonal terms of the R matrix (rii), redundancy numbers, in order to ensure maximum re liability of the network. It will also be analyzed first the behavior of redundancy numbers (rii) in surveying network design, then the variation of these values depending on the geometry with the analysis of its independency from the observations, and finally the different design levels depending on parameters and known data. It should be stated that network optimization, according to exposed criteria, is subject to the accessibility of the network points. In addition, common visibility among network points, which of them are connected with observations, has to be considered. All these situations depends essentially on the terrain relief and the natural or artificial obstacles that should exist. Therefore, it is necessary to include, at least, a digital terrain model (DTM), and better a digital surface model (DSM), not always available. Although design treatment is based on a bidimensional system, the possibility of incorporating a digital surface model (DSM) will be studied; this will allow evaluating the observations feasibility based on the terrain and the elements, both natural and artificial, which are located on it, when selecting network point locations. 
This system would provide, at first, an optimal design of a constrained network, considering both the internal reliability and the accuracy of its points (including the relief). This approach would amount to solving a “two and a half dimensional” design, provided a digital surface model is available. As obtaining a free DSM of the areas of interest of the project is still expensive today, the possibility of combining a digital terrain model will be considered. The activities to be developed in this PhD thesis are described in this document and are part of the research for which the following overall objectives are posed: 1. To establish a mathematical model of the observation process of a survey network, considering all the factors involved and their influence on the estimates of the unknowns obtained as a result of the adjustment of the observations. 2. To develop a system to optimize the results of a survey network, applying design and simulation techniques to the previous model. 3. To present an explicit and rigorous formulation of the parameters which assess the reliability of a survey network, and of their relations with its design. The achievement of this objective is based, besides the search and review of sources, on an intense work of unification of notation and construction of intermediate steps in the mathematical developments. 4. To develop an overall view of the influence of the network design on six major factors (a posteriori precision, reliability of the observations, their nature and viability, instrumentation and stationing methodology) as optimization criteria, in order to frame the specific subject addressed in this document. 5. To elaborate and program the algorithms needed to develop an application capable of handling the variables described above in the problem of design and simulation of surveying networks, taking the digital surface model into account. The following may be considered secondary objectives: to develop the necessary algorithms to interrelate the digital terrain model with the design ones, and to implement in the software application the possibility for the user to vary the coverage criteria of the parameters (normal or Student's t distribution) as well as their degree of reliability.
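
As a compact numerical companion to the discussion of redundancy numbers, the sketch below computes r_ii for a tiny, hypothetical network design: R = I - A (A^T P A)^-1 A^T P depends only on the design matrix A and the weights P, i.e. on geometry and instrumentation, never on the observed values, and the r_ii sum to the degrees of freedom.

```python
import numpy as np

def redundancy_numbers(A, P):
    """Diagonal of R = I - A (A^T P A)^(-1) A^T P for design A and weights P."""
    N = A.T @ P @ A                                  # normal equations matrix
    R = np.eye(A.shape[0]) - A @ np.linalg.inv(N) @ A.T @ P
    return np.diag(R)

# Minimal, invented design: 4 observations linking 2 unknown heights.
A = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [-1.0,  1.0],
              [ 1.0, -1.0]])
P = np.diag([1.0, 1.0, 2.0, 2.0])                   # observation weights

r = redundancy_numbers(A, P)
print(r)              # one redundancy number r_ii per observation
print(r.sum())        # equals m - n = 4 - 2, the degrees of freedom
```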

Relevância: 10.00%

Resumo:

Esta tese é o resultado de uma pesquisa sobre o uso e a influência do dinheiro no que tange às questões existenciais, no contexto capitalista, ligadas ao trinômio: saúde, amor e espiritualidade. Para isso, foram analisados vários tipos de vínculo, afetivos ou não, existentes nas relações humanas, no âmbito da família, do mercado e do Estado, convergindo para a busca do sagrado que dá sentido e significado existenciais. O eixo teórico se localiza em uma interface entre as Ciências Sociais e a Psicologia Analítica, de Carl Gustav Jung (1875-1961), que expressa em sua obra a necessidade humana de encontrar a realização do ser pela conquista consciente de um estado de integração evolutiva. Esta dimensão integral existe quando é realizada a unificação dos vários aspectos do eu com o inconsciente, expressos teleologicamente no processo de individuação. O resultado da evolução científica e tecnológica, acrescido pela supremacia do mercado, abrange praticamente todas as esferas da vida humana, imprimindo uma importância excessiva ao dinheiro. Por exemplo, até o campo religioso foi invadido pela lógica monetária, que se instalou impondo uma atitude monetarizada nas práticas e ritos religiosos, como ocorre em algumas igrejas neopentecostais. Por sua vez, a supervalorização do dinheiro contribui para um processo que combina dessacralização e exclusão social, bem como para o aumento significativo de doenças em todas as instâncias em que as trocas deixaram de acontecer livremente. Com a interdição das trocas, a vida se esvai, comprometendo a evolução humana nas instâncias físicas, psíquicas, sociais, espirituais, familiares, afetivas ou profissionais. Como os desejos de lucro e de acúmulo impedem as trocas, a conquista da dimensão integral vai ficando sombreada até ser substituída pela anestesia do consumo, no sentido de aliviar, apesar de não eliminar, os sentimentos de angústia pela falta de sentido existencial. Busca-se neste trabalho o entendimento da razão pela qual o ser humano contemporâneo deixou de trocar livremente e passou a acumular, muitas vezes por meio de consumo do supérfluo, ficando à mercê de um mercado que pretende ser hegemônico, colocando inclusive o dinheiro como caminho de cura e salvação. Fizemos um levantamento das possibilidades que podem restar para a concretização de uma readequação do uso do dinheiro a serviço da individuação e da realização existencial.(AU)


Relevância: 10.00%

Resumo:

Analysis of human and fly life tables proves that, with the specified accuracy, their entire survival and mortality curves are uniquely determined by a single point (e.g., by the birth mortality q0), according to a law which is universal for species as remote as humans and flies. Mortality at any age decreases with the birth mortality q0. According to the life tables, in the narrow vicinity of a certain q0 value (which is the same for all animals of a given species, independent of their living conditions), the curves change very rapidly and nearly simultaneously for an entire population of different ages. The change is largest in old age. Because the probability of surviving to the mean reproductive age quantifies biological fitness and evolution, its universal rapid change with q0 (which itself changes with living conditions) manifests a new kind of evolutionary spurt of an entire population. Agreement between the theoretical and life table data is explicitly seen in the figures. Analysis of the data on basic metabolism reduces it to the maximal mean lifespan (for animals from invertebrates to mammals), or to the maximal mean fission time (for bacteria), and universally scales both with the total number of body atoms only. A phenomenological origin of this unification and of the universality of metabolism, survival, and evolution is suggested. Their implications and challenges are discussed.
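
For reference, the elementary life-table arithmetic underlying such an analysis: given age-specific mortalities q(x), the survival curve follows from l(x+1) = l(x)(1 - q(x)), and the probability of surviving to the mean reproductive age is read off l. The q(x) values in the sketch are invented; the paper's specific law relating the whole curve to the birth mortality q0 is not reproduced here.

```python
def survival_curve(q):
    """Survival probabilities l(x) from age-specific mortalities q(x)."""
    l = [1.0]
    for qx in q:
        l.append(l[-1] * (1.0 - qx))
    return l

# Invented age-specific mortalities, q[0] being the birth mortality q0.
q = [0.05, 0.01, 0.01, 0.02, 0.03, 0.05, 0.08, 0.13, 0.21, 0.34]
l = survival_curve(q)

mean_reproductive_age = 4
print(l[mean_reproductive_age])   # probability of surviving to that age
```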

Relevância: 10.00%

Resumo:

Detection of similarity is particularly difficult for small proteins and thus connections between many of them remain unnoticed. Structure and sequence analysis of several metal-binding proteins reveals unexpected similarities in structural domains classified as different protein folds in SCOP and suggests unification of seven folds that belong to two protein classes. The common motif, termed treble clef finger in this study, forms the protein structural core and is 25–45 residues long. The treble clef motif is assembled around the central zinc ion and consists of a zinc knuckle, loop, β-hairpin and an α-helix. The knuckle and the first turn of the helix each incorporate two zinc ligands. Treble clef domains constitute the core of many structures such as ribosomal proteins L24E and S14, RING fingers, protein kinase cysteine-rich domains, nuclear receptor-like fingers, LIM domains, phosphatidylinositol-3-phosphate-binding domains and His-Me finger endonucleases. The treble clef finger is a uniquely versatile motif adaptable for various functions. This small domain with a 25 residue structural core can accommodate eight different metal-binding sites and can have many types of functions from binding of nucleic acids, proteins and small molecules, to catalysis of phosphodiester bond hydrolysis. Treble clef motifs are frequently incorporated in larger structures or occur in doublets. Present analysis suggests that the treble clef motif defines a distinct structural fold found in proteins with diverse functional properties and forms one of the major zinc finger groups.

Relevância: 10.00%

Resumo:

Predictions for the apparent velocity statistics under simple beaming models are presented and compared to the observations. The potential applications for tests of unification models and for cosmology (source counts, measurements of the Hubble constant H0 and the deceleration parameter q0) are discussed. First results from a large homogeneous survey are presented. The data do not show compelling evidence for the existence of intrinsically different populations of galaxies, BL Lacertae objects, or quasars. Apparent velocities β_app in the range 1-5 h^-1, where h = H0/100 km s^-1 Mpc^-1 [1 megaparsec (Mpc) = 3.09 x 10^22 m], occur with roughly equal frequency; higher values, up to β_app = 10 h^-1, are rather more scarce than appeared to be the case from earlier work, which evidently concentrated on sources that are not representative of the general population. The β_app distribution suggests that there might be a skewed distribution of Lorentz factors over the sample, with a peak at γ_b ≈ 2 h^-1 and a tail up to at least γ_b ≈ 10 h^-1. There appears to be a clearly rising upper envelope to the β_app distribution when plotted as a function of observed 5-GHz luminosity; a combination of source counts and the apparent velocity statistics in a larger sample could provide much insight into the properties of radio jet sources.
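
For context, the standard beaming-model relation behind apparent velocity statistics (a textbook formula, not taken from the survey itself): a component moving with Lorentz factor γ at angle θ to the line of sight shows β_app = β sinθ / (1 - β cosθ), which peaks at β_app = βγ when cosθ = β. A short numerical check:

```python
import numpy as np

def beta_app(gamma, theta):
    """Apparent transverse speed (in units of c) for Lorentz factor gamma
    and viewing angle theta (radians)."""
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    return beta * np.sin(theta) / (1.0 - beta * np.cos(theta))

thetas = np.linspace(0.01, np.pi / 2, 500)
for gamma in (2.0, 5.0, 10.0):
    # The maximum over viewing angle is close to beta * gamma.
    print(gamma, round(beta_app(gamma, thetas).max(), 3))
```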