135 results for 120323 Programming languages
Abstract:
After creating a clause identifier for Basque, we set out to build a clause identifier for Spanish, in the expectation that it will be helpful for machine translation. To build the Spanish clause identifier we have used machine learning techniques.
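A minimal sketch of how a machine-learning clause identifier of this kind could be trained (the features, labels and toy data below are invented for illustration and are not the system described in the abstract):

```python
# Hypothetical sketch of a clause-boundary classifier; the real system,
# its features and its training data are not described in the abstract.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: (token, part-of-speech) sequences with S/I/E labels
# marking clause Start, Inside and End positions.
sentences = [
    ([("Cuando", "CONJ"), ("llegó", "VERB"), (",", "PUNCT"),
      ("todos", "PRON"), ("salieron", "VERB"), (".", "PUNCT")],
     ["S", "I", "E", "S", "I", "E"]),
]

def token_features(tokens, i):
    """Features for token i: the word, its PoS and a small context window."""
    word, pos = tokens[i]
    return {
        "word": word.lower(),
        "pos": pos,
        "prev_pos": tokens[i - 1][1] if i > 0 else "BOS",
        "next_pos": tokens[i + 1][1] if i + 1 < len(tokens) else "EOS",
    }

X, y = [], []
for tokens, labels in sentences:
    for i, label in enumerate(labels):
        X.append(token_features(tokens, i))
        y.append(label)

clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(X, y)
```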
Adapting machine translation tools to Basque: post-editing, evaluation and pre-editing
Abstract:
This project has dealt with Machine Translation and the tools that surround it. Natural Language Processing and machine translation were studied and analysed from a broad perspective. Besides general machine translation and its different kinds of application, other topics were also covered, such as computer-aided translation tools, machine translation evaluation, and the pre-editing and post-editing of texts for machine translation. Beyond that study and analysis, related tools were used or adapted, including for machine translation into Basque. Three main parts can be highlighted. First, OmegaT, a computer-aided translation tool, was extended by adding Matxin, the machine translation system for Basque. Moreover, in a collaboration between the IXA Group and the Basque Wikipedia, OmegaT was adapted so that Wikipedia articles can be fetched, translated and uploaded, and its use was promoted within the Basque Wikipedia community and among Computer Science students and lecturers at the UPV/EHU. Building on this work, a mechanism was also created to collect the translation memories that OmegaT produces, based on the post-editing of Matxin's output, so that they can be used to improve Matxin. Second, Basque was integrated into Asiya, an application for the evaluation and meta-evaluation of machine translation. Several metrics were examined to check whether they are valid for Basque. Among others, an attempt was made to add Basque support to four metrics using the syntactic information provided by an analyser of Basque texts from the IXA Group, but only two of the metrics could be adapted. Finally, the DiSeg sentence segmenter was applied to a Spanish corpus to split long sentences. After this pre-editing the sentences were translated, and the results were evaluated and compared using Asiya, in order to examine whether shorter sentences lead to more effective machine translation.
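As a hedged illustration of the pre-editing idea mentioned at the end of the abstract, the sketch below splits overly long Spanish sentences at simple punctuation and conjunction boundaries; it is not DiSeg, which performs proper discourse segmentation.

```python
# Naive illustration of pre-editing by sentence splitting; DiSeg itself is
# a discourse segmenter and is not reproduced here.
import re

def split_long_sentence(sentence, max_words=20):
    """Split a sentence at ';' or before 'y'/'pero'/'aunque' after a comma
    when it exceeds max_words (purely illustrative heuristics)."""
    if len(sentence.split()) <= max_words:
        return [sentence]
    parts = re.split(r";\s+|,\s+(?=(?:y|pero|aunque)\b)", sentence)
    return [p.strip() for p in parts if p.strip()]

long_sentence = ("El sistema traduce los artículos automáticamente, "
                 "pero los editores revisan el resultado antes de publicarlo; "
                 "las memorias de traducción se reutilizan después.")
for chunk in split_long_sentence(long_sentence, max_words=10):
    print(chunk)
```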
Abstract:
Duration (in hours): More than 50 hours. Intended audience: Student
Abstract:
[ES] Sculptural creation in the Basque Country during the nineties continued to feed on the creative fabric of the previous decade, while the last programmatic throes of postmodernism gradually faded away. The young artists of that final decade did not propose original languages, nor did they seek out novel subject matter; they simply chose to place themselves in a spatial and temporal framework that allowed them to keep creating on the basis of earlier processes, but with perspectives that helped them analyse and experiment with situations rooted in the present.
Abstract:
This article addresses the resolution of an Operations Research problem using PHPSimplex (an online tool for solving optimisation problems with the Simplex method), Microsoft Excel's Solver, and a hybrid prototype that combines Genetic Algorithms with a local-search heuristic. The hybridisation of these two techniques is known as a Memetic Algorithm. The prototype is able to solve optimisation problems with a known maximisation or minimisation objective function, subject to whatever constraints are posed. The three methods achieve good results on simple Operations Research problems; however, another problem is proposed for which the Memetic Algorithm and Microsoft Excel's Solver reach the optimal solution, while solving it with PHPSimplex turns out to be unfeasible. The goal, besides solving the proposed problem, is to compare how the three methods behave on it and how they cope with the difficulties it presents. In addition, this article aims to present different decision-support techniques, with the intention that they be used more and more in the business environment, thereby grounding decisions in mathematics or Artificial Intelligence rather than relying solely on experience.
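As a hedged sketch of the memetic approach described above (a genetic algorithm hybridised with local search), the toy example below maximises a small linear objective under linear constraints via a penalty function; the article's actual prototype, encoding and operators are not reproduced here.

```python
# Illustrative memetic algorithm (GA + hill-climbing local search) for a toy
# maximisation problem: max 3x + 2y  s.t.  x + y <= 4, x <= 3, x, y >= 0.
# A sketch of the general technique, not the prototype from the article.
import random

random.seed(0)

def fitness(ind):
    x, y = ind
    penalty = 1000 * (max(0, x + y - 4) + max(0, x - 3) + max(0, -x) + max(0, -y))
    return 3 * x + 2 * y - penalty

def local_search(ind, step=0.1, iters=50):
    """Simple hill climbing: accept small moves that improve fitness."""
    best = list(ind)
    for _ in range(iters):
        cand = [v + random.uniform(-step, step) for v in best]
        if fitness(cand) > fitness(best):
            best = cand
    return best

def crossover(a, b):
    return [(va + vb) / 2 for va, vb in zip(a, b)]

def mutate(ind, rate=0.2):
    return [v + random.uniform(-0.5, 0.5) if random.random() < rate else v
            for v in ind]

population = [[random.uniform(0, 4), random.uniform(0, 4)] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    # Memetic step: refine offspring with local search before reinsertion.
    population = parents + [local_search(c) for c in children]

best = max(population, key=fitness)
print("best solution:", best, "objective:", 3 * best[0] + 2 * best[1])
```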
Abstract:
In this thesis we propose a new approach to deduction methods for temporal logic. Our proposal is based on an inductive definition of eventualities that is different from the usual one. On the basis of this non-customary inductive definition of eventualities, we first provide dual systems of tableaux and sequents for Propositional Linear-time Temporal Logic (PLTL). Then, we adapt the deductive approach introduced by means of these dual tableau and sequent systems to the resolution framework and present a clausal temporal resolution method for PLTL. Finally, we make use of this new clausal temporal resolution method to establish logical foundations for declarative temporal logic programming languages. The key issue for deduction systems for temporal logic is how to deal with eventualities and with hidden invariants that may prevent their fulfillment. Different ways of addressing this issue can be found in the literature on deduction systems for temporal logic. Traditional tableau systems for temporal logic generate an auxiliary graph in a first pass. Then, in a second pass, unsatisfiable nodes are pruned; in particular, the second pass must check whether the eventualities are fulfilled. The one-pass tableau calculus introduced by S. Schwendimann requires additional bookkeeping in order to detect cyclic branches that contain unfulfilled eventualities. In traditional sequent calculi for temporal logic, the issue of eventualities and hidden invariants is tackled with a kind of inference rule (mainly invariant-based rules or infinitary rules) that complicates automation. A remarkable consequence of using either a two-pass approach based on auxiliary graphs or a one-pass approach that requires additional bookkeeping in the tableau framework, and either invariant-based rules or infinitary rules in the sequent framework, is that temporal logic fails to preserve the classical correspondence between tableaux and sequents. In this thesis, we first provide a one-pass tableau method, TTM, which builds a cyclic tree instead of a graph to decide whether a set of PLTL-formulas is satisfiable. In TTM, tableaux are classical-like. For unsatisfiable sets of formulas, TTM produces tableaux whose leaves contain a formula and its negation. For satisfiable sets of formulas, TTM builds tableaux where each fully expanded open branch characterizes a collection of models for the set of formulas in the root. The tableau method TTM is complete and yields a decision procedure for PLTL. This tableau method is directly associated with a one-sided sequent calculus called TTC. Since TTM is free from the structural rules that hinder the mechanization of deduction, e.g. weakening and contraction, the resulting sequent calculus TTC is also free from this kind of structural rule. In particular, TTC is free of any kind of cut, including invariant-based cut. From the deduction system TTC, we obtain a two-sided sequent calculus GTC that preserves all these good freeness properties and is finitary, sound and complete for PLTL. Therefore, we show that the classical correspondence between tableaux and sequent calculi can be extended to temporal logic. The most fruitful approach in the literature on resolution methods for temporal logic, which started with the seminal paper of M. Fisher, deals with PLTL and requires generating invariants in order to perform resolution on eventualities.
In this thesis, we present a new approach to resolution for PLTL. The main novelty of our approach is that we do not generate invariants for performing resolution on eventualities. Our method is based on the dual methods of tableaux and sequents for PLTL mentioned above. Our resolution method involves translation into a clausal normal form that is a direct extension of classical CNF. We first show that any PLTL-formula can be transformed into this clausal normal form. Then, we present our temporal resolution method, called TRS-resolution, that extends classical propositional resolution. Finally, we prove that TRS-resolution is sound and complete. In fact, it finishes for any input formula deciding its satisfiability, hence it gives rise to a new decision procedure for PLTL. In the field of temporal logic programming, the declarative proposals that provide a completeness result do not allow eventualities, whereas the proposals that follow the imperative future approach either restrict the use of eventualities or deal with them by calculating an upper bound based on the small model property for PLTL. In the latter, when the length of a derivation reaches the upper bound, the derivation is given up and backtracking is used to try another possible derivation. In this thesis we present a declarative propositional temporal logic programming language, called TeDiLog, that is a combination of the temporal and disjunctive paradigms in Logic Programming. We establish the logical foundations of our proposal by formally defining operational and logical semantics for TeDiLog and by proving their equivalence. Since TeDiLog is, syntactically, a sublanguage of PLTL, the logical semantics of TeDiLog is supported by PLTL logical consequence. The operational semantics of TeDiLog is based on TRS-resolution. TeDiLog allows both eventualities and always-formulas to occur in clause heads and also in clause bodies. To the best of our knowledge, TeDiLog is the first declarative temporal logic programming language that achieves this high degree of expressiveness. Since the tableau method presented in this thesis is able to detect that the fulfillment of an eventuality is prevented by a hidden invariant without checking for it by means of an extra process, since our finitary sequent calculi do not include invariant-based rules and since our resolution method dispenses with invariant generation, we say that our deduction methods are invariant-free.
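For reference, the customary fixpoint characterisation of eventualities that the thesis departs from can be stated as follows (the alternative inductive definition underlying TTM, TTC, GTC and TRS-resolution is not reproduced here):

```latex
% Standard fixpoint unfoldings of the PLTL eventuality operators.
\Diamond\varphi \;\equiv\; \varphi \,\vee\, \circ\Diamond\varphi
\qquad
\varphi\,\mathcal{U}\,\psi \;\equiv\; \psi \,\vee\, \bigl(\varphi \,\wedge\, \circ(\varphi\,\mathcal{U}\,\psi)\bigr)
```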
Abstract:
Although blogs have existed since the beginning of the Internet, their use has increased considerably in the last decade. Nowadays, they are ready to be used by a broad range of people. From teenagers to multinationals, everyone can have a global communication space. Companies know that blogs are a valuable publicity tool for sharing information with participants, and they know the importance of creating consumer communities around them: participants come together to exchange ideas, review and recommend new products, and even support each other. Companies can also use blogs for different purposes, such as a content management system to manage the content of websites, a bulletin board to support communication and document sharing in teams, a marketing instrument to communicate with Internet users, or a Knowledge Management tool. However, an increasing amount of blog content does not find its source in the personal experiences of the writer. The information may currently be kept in the user's desktop documents, in the companies' catalogues, or in other blogs. Although the gap between blog and data source can be traversed by manual coding, this is a cumbersome task that defeats the blog's easiness principle. Moreover, depending on the quantity of information and its characterisation (i.e., structured content, unstructured content, etc.), an automatic approach can be more effective. Based on these observations, the aim of this dissertation is to assist blog publication through annotation, model transformation and crossblogging techniques. These techniques have been implemented to give rise to Blogouse, Catablog, and BlogUnion. These tools strive to improve the publication process considering the aforementioned data sources.
Abstract:
At a time when Technology Supported Learning Systems are widely used, there is a lack of tools that allow their development in an automatic or semi-automatic way. To be effective, Technology Supported Learning Systems require an appropriate Domain Module, i.e. the pedagogical representation of the domain to be mastered. However, content authoring is a time- and effort-consuming task; therefore, efforts to automatise the acquisition of the Domain Module are necessary. Traditionally, textbooks have been the main mechanism for maintaining and transmitting the knowledge of a certain subject or domain. Textbooks are authored by domain experts who organise the contents in a way that facilitates understanding and learning, taking pedagogical issues into account. Given that textbooks are appropriate sources of information, they can be used to facilitate the development of the Domain Module, allowing the identification of the topics to be mastered and the pedagogical relationships among them, as well as the extraction of Learning Objects, i.e. meaningful fragments of the textbook with an educational purpose. Consequently, in this work DOM-Sortze, a framework for the semi-automatic construction of Domain Modules from electronic textbooks, has been developed. DOM-Sortze uses NLP techniques, heuristic reasoning and ontologies to carry out its work. DOM-Sortze has been designed and developed with the aim of automatising the development of the Domain Module, regardless of the subject, promoting knowledge reuse and facilitating the collaboration of users during the process.
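A hedged sketch of one ingredient such a pipeline might contain, namely deriving a topic hierarchy from the numbered headings of an electronic textbook; DOM-Sortze's actual combination of NLP, heuristics and ontologies is far richer and is not reproduced here.

```python
# Illustrative only: build a naive topic tree from numbered textbook headings.
import re

headings = [
    "1 Programming languages",
    "1.1 Syntax",
    "1.2 Semantics",
    "2 Compilers",
    "2.1 Lexical analysis",
]

def build_topic_tree(lines):
    """Return a nested dict keyed by section number, with title and children."""
    root = {"title": None, "children": {}}
    for line in lines:
        match = re.match(r"([\d.]+)\s+(.+)", line)
        if not match:
            continue
        number, title = match.groups()
        node = root
        for part in number.strip(".").split("."):
            node = node["children"].setdefault(part, {"title": None, "children": {}})
        node["title"] = title
    return root

tree = build_topic_tree(headings)
print(tree["children"]["1"]["title"])                   # Programming languages
print(tree["children"]["1"]["children"]["2"]["title"])  # Semantics
```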
Abstract:
Development of a system for retrieving and storing the multilingual news items that appear in the Europe Media Monitor
Abstract:
This project takes on the challenge of building a web application for managing and monitoring a rural guest house from mobile devices such as smartphones and tablets, with an owner interface for dynamically managing its different areas, as well as a children's zone applying Artificial Intelligence techniques, specifically knowledge representation with "frames", where users can ask the system questions in order to guess a tree that the owner of the house has previously defined. The goal is not to build an expert system, a task that would require far more hours than a final-year project allows, but to test whether these tools can be integrated into an application aimed at mobile devices. HTML5 features are used to implement the "explorer zone", where children can geolocate trees and later search for them in a GPS-like fashion, seeing where the tree they are looking for is located as well as their own position, which is updated automatically.
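A hedged sketch of what frame-based knowledge representation for the tree-guessing game could look like; the slots, values and question flow below are invented for illustration and do not come from the project itself.

```python
# Illustrative frame representation for the tree-guessing game; the slots and
# values are made up and are not taken from the project.
frames = {
    "oak":   {"is_a": "tree", "leaf_type": "deciduous", "has_acorns": True},
    "pine":  {"is_a": "tree", "leaf_type": "evergreen", "has_acorns": False},
    "beech": {"is_a": "tree", "leaf_type": "deciduous", "has_acorns": False},
}

def answer(secret_tree, slot, value):
    """Answer a yes/no question about one slot of the secret tree's frame."""
    return frames[secret_tree].get(slot) == value

secret = "oak"  # chosen beforehand by the owner in the real application
print(answer(secret, "leaf_type", "deciduous"))  # True
print(answer(secret, "has_acorns", False))       # False
```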
Abstract:
In the last decades, big improvements have been made in the field of computer-aided learning, building on advances in computer science and computer systems. Although the field has always lagged somewhat behind, not using the very latest solutions, it has constantly moved forward, taking advantage of innovations as they show up. As long as the train of computer science does not stop (and it will not, at least in the near future), the systems that profit from those improvements will not stop either, because we humans will always need to study, sometimes for pleasure and many other times out of need. Not all the attempts in the field of computer-aided learning have gone in the same direction. Most of them address one or a few of the problems that show up while studying and do not take into account solutions proposed for other problems. The reasons for this can be varied. Sometimes the solutions are simply not compatible. Other times, because a project is a piece of research, it is interesting to isolate the problem. And, in commercial products, licences and patents often prevent new projects from using previous work. The world has moved forward, and this is an attempt to use some of the options offered by technology, mixing some old ideas with new ones.
Abstract:
Application for transport management for small businesses
Abstract:
Seneko is a didactic application offered for working with texts written in Basque. In general, the system automatically generates questions from the files received from users and offers them as exercises. In addition, the system has been developed as an alternative, in terms of learning/teaching methodology, to the applications that can be found on the web. Indeed, the goal has been to adopt a methodology that enables cooperation and collaborative work, building on methods for sharing files and exercises.
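As a hedged illustration of automatic question generation of this kind, the naive sketch below turns each sentence of an uploaded text into a fill-in-the-blank exercise; the application's actual generation rules are not described in the abstract and are not reproduced here.

```python
# Naive fill-in-the-blank question generation; purely illustrative, not the
# method used by the application described above.
import random
import re

random.seed(1)

def make_cloze_questions(text, min_word_len=6):
    """Blank out one longer word per sentence and keep it as the answer."""
    questions = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        words = [w for w in re.findall(r"\w+", sentence) if len(w) >= min_word_len]
        if not words:
            continue
        answer = random.choice(words)
        questions.append((sentence.replace(answer, "_____", 1), answer))
    return questions

text = "Lehen euskal liburua 1545ean argitaratu zen. Bernart Etxepare izan zen egilea."
for question, answer in make_cloze_questions(text):
    print(question, "->", answer)
```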
Abstract:
Duration (in hours): More than 50 hours. Intended audience: Student and Teacher
Abstract:
Traditional software development captures the user needs during the requirement analysis. The Web makes this endeavour even harder due to the difficulty to determine who these users are. In an attempt to tackle the heterogeneity of the user base, Web Personalization techniques are proposed to guide the users’ experience. In addition, Open Innovation allows organisations to look beyond their internal resources to develop new products or improve existing processes. This thesis sits in between by introducing Open Personalization as a means to incorporate actors other than webmasters in the personalization of web applications. The aim is to provide the technological basis that builds up a trusty environment for webmasters and companion actors to collaborate, i.e. "an architecture of participation". Such architecture very much depends on these actors’ profile. This work tackles three profiles (i.e. software partners, hobby programmers and end users), and proposes three "architectures of participation" tuned for each profile. Each architecture rests on different technologies: a .NET annotation library based on Inversion of Control for software partners, a Modding Interface in JavaScript for hobby programmers, and finally, a domain specific language for end-users. Proof-of-concept implementations are available for the three cases while a quantitative evaluation is conducted for the domain specific language.
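As a hedged sketch of the end-user side only, a personalization rule in a hypothetical mini-DSL could pair a trigger with a page modification; the syntax and interpreter below are invented for illustration and are not the domain specific language developed in the thesis.

```python
# Hypothetical mini-DSL for end-user web personalization rules; the syntax and
# parser are invented for illustration and are not the thesis's language.
import re

RULE = re.compile(r"when (?P<event>\w+) on (?P<target>\S+) do (?P<action>\w+) (?P<arg>\S+)")

def parse_rule(text):
    """Parse a single 'when <event> on <target> do <action> <arg>' rule."""
    match = RULE.match(text.strip())
    if not match:
        raise ValueError(f"cannot parse rule: {text!r}")
    return match.groupdict()

rules = [
    parse_rule("when load on /news do hide .ads"),
    parse_rule("when click on #help do show #faq-panel"),
]
for rule in rules:
    print(rule)
```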