983 results for programming language processing


Relevance:

80.00%

Publisher:

Abstract:

This thesis presents software for acquiring data from a production process, in this case an automatic pallet nailing line. Recording these data makes it possible to track the process and analyse it later, either with the analytical tools of the application or by transferring the data to an Excel sheet or a database. The PLC application that controls the nailing line was developed in the Ladder programming language, the control pages were created for the HMI application that monitors the process, and the application for the production department computer was written in Visual Basic. To extract production variables from the process, the developed software communicates over the Modbus TCP/IP protocol with the network formed by the PLC and the HMI terminal, which store data and control the process.
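
The data link described above can be sketched briefly. The following Python snippet is an illustration only (the thesis application itself is written in Visual Basic): it issues a single Modbus TCP "Read Holding Registers" request of the kind used to pull production variables from the PLC/HMI network. The host address and register layout are hypothetical, and error handling is omitted.

```python
# Minimal sketch: read production counters from a PLC over Modbus TCP.
import socket
import struct

def read_holding_registers(host, start, count, unit=1, port=502):
    """Issue one Modbus TCP 'Read Holding Registers' (function 0x03) request."""
    with socket.create_connection((host, port), timeout=5) as sock:
        pdu = struct.pack(">BHH", 0x03, start, count)          # function, start, quantity
        mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit)   # transaction, protocol, length, unit
        sock.sendall(mbap + pdu)
        header = sock.recv(9)          # MBAP (7 bytes) + function code + byte count
        byte_count = header[8]
        data = sock.recv(byte_count)
        return list(struct.unpack(">" + "H" * count, data))

# Hypothetical usage: registers 0..9 holding pallet counts and cycle times
# print(read_holding_registers("192.168.0.10", start=0, count=10))
```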

Relevance:

80.00%

Publisher:

Abstract:

In this thesis we study the field of opinion mining by giving a comprehensive review of the research that has been done on this topic. Using this knowledge, we also present a case study of a multilevel opinion mining system for a student organization's sales management system. We describe the field of opinion mining by discussing its historical roots, its motivations and applications, as well as the different scientific approaches that have been used to solve the challenging problem of mining opinions. To deal with this large subfield of natural language processing, we first give an abstraction of the problem of opinion mining and describe the theoretical frameworks available for dealing with appraisal language. We then discuss the relation between opinion mining and computational linguistics, which provides crucial pre-processing steps for the accuracy of the subsequent stages of opinion mining. The second part of the thesis deals with the semantics of opinions, where we describe the different ways used to collect lists of opinion words as well as the methods and techniques available for extracting knowledge from opinions present in unstructured textual data. Regarding the collection of lists of opinion words, we describe manual, semi-manual and automatic approaches and review the available lists that are used as gold standards in opinion mining research. For the methods and techniques of opinion mining, we divide the task into three levels: the document, sentence and feature level. The techniques presented at the document and sentence levels are divided into supervised and unsupervised approaches used to determine the subjectivity and polarity of texts and sentences at these levels of analysis. At the feature level we describe the techniques available for finding the opinion targets, the polarity of the opinions about these targets, and the opinion holders. Also at the feature level, we discuss the various ways to summarize and visualize the results of this level of analysis. In the third part of the thesis we present a case study of a sales management system that uses free-form text and that can benefit from an opinion mining system. Using the knowledge gathered in the review of the field, we propose a theoretical multilevel opinion mining system (MLOM) that can perform most of the tasks expected of an opinion mining system. Based on previous research, we suggest that such a system could relieve the sales force that uses this sales management system of many laborious market research tasks, improve their insight into their partners, and thereby increase the quality of their sales services and their overall results.
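
As a concrete illustration of the document-level supervised techniques reviewed above, the sketch below trains a simple polarity classifier in Python with scikit-learn. The four training sentences are invented for the example; a real system would be trained on a labelled corpus of the sales management system's free-form texts.

```python
# Minimal sketch of document-level supervised polarity classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "the service was excellent and fast",
    "great product, very satisfied",
    "terrible experience, would not recommend",
    "the delivery was late and the item was broken",
]
train_labels = ["positive", "positive", "negative", "negative"]

# Bag-of-words / n-gram features fed into a supervised linear classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict(["fast delivery and great service"]))  # expected: positive
```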

Relevance:

80.00%

Publisher:

Abstract:

Context awareness is emerging on mobile devices. It can be used to improve the usability of a mobile device and is particularly important on mobile devices because of the limitations they have. This work first presents a literature review on context awareness and the mobile environment. To aid context awareness, an implementation of a Context Framework exists for Symbian S60 devices. It allows contexts to be exchanged inside the device between the client applications of the local Context Framework. The main contribution of this thesis is the design and implementation of an enhancement to the S60 Context Framework that makes it possible to exchange contexts across device boundaries. With the implemented Context Exchange System, the context exchange depends neither on the type of the context nor on the type of the client, and the clients and the contexts can reside on any interconnected device. Use of the system is also independent of the programming language: in addition to the Symbian C++ function interfaces, it can be utilized through XML scripts. The Meeting Sniffer application, which uses the Context Exchange System, was also developed in this work. With this application it is possible to recognize a meeting situation and suggest a device profile change to the user.
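
A minimal illustration of the language-independent exchange the thesis aims at: a context serialised to XML can be consumed by a client on any interconnected device, whether it uses the Symbian C++ interfaces or XML scripts. The element and attribute names below are hypothetical and do not reflect the framework's actual schema.

```python
# Sketch of a device-independent context message: (source, type, value) as XML.
import xml.etree.ElementTree as ET

def context_to_xml(source, context_type, value):
    """Serialise a single context object so any client can parse it."""
    root = ET.Element("context", attrib={"source": source, "type": context_type})
    ET.SubElement(root, "value").text = str(value)
    return ET.tostring(root, encoding="unicode")

print(context_to_xml("phone-A", "location.cell", "24045"))
# <context source="phone-A" type="location.cell"><value>24045</value></context>
```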

Relevance:

80.00%

Publisher:

Abstract:

NURBS are widely used parametric approximation curves and surfaces. They can be applied in many applications; examples include computer-aided design and some medical applications, and their use is very intuitive. The objective of this work was to implement a NURBS toolbox in the Matlab environment. Matlab is a program for many kinds of computational purposes and is also a programming language. The NURBS toolbox implemented in this work offers users the opportunity to use its functions as parts of their own programs. The current version of the toolbox includes functions for NURBS curve and surface evaluation, and it is designed so that it allows extensions and enhancements in the future.
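
The core operation such a toolbox provides is evaluating a curve point from a knot vector, control points and weights. The sketch below shows that computation in Python rather than Matlab, purely for illustration; it evaluates a quadratic NURBS representation of a quarter circle, a standard textbook example.

```python
# Minimal sketch of NURBS curve evaluation.
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, u, knots)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, degree, knots, control_points, weights):
    """Evaluate a NURBS curve point as a weighted rational combination."""
    numerator = np.zeros(len(control_points[0]))
    denominator = 0.0
    for i, (cp, w) in enumerate(zip(control_points, weights)):
        b = bspline_basis(i, degree, u, knots) * w
        numerator += b * np.asarray(cp, dtype=float)
        denominator += b
    return numerator / denominator

# Quadratic NURBS quarter circle
knots = [0, 0, 0, 1, 1, 1]
ctrl = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
weights = [1.0, np.sqrt(2) / 2, 1.0]
print(nurbs_point(0.5, 2, knots, ctrl, weights))  # lies on the unit circle
```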

Relevance:

80.00%

Publisher:

Abstract:

Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves predicting an ordering of the data points rather than a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates a better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. To improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data, used to take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics. Training kernel-based ranking algorithms can be infeasible when the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be trained efficiently with large amounts of data. For situations where only a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
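
To make the regularized least-squares view of preference learning concrete, the sketch below learns a linear ranking function from pairwise preferences on synthetic data. It is a simplified, linear stand-in for the kernel-based algorithms developed in the thesis; the data, dimensions and regularization parameter are arbitrary.

```python
# Sketch: learn a scoring function from pairwise preferences with regularized least squares.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                     # items described by 5 features
true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
scores = X @ true_w + 0.1 * rng.normal(size=100)  # latent utilities

# Pairwise preference examples: x_i preferred over x_j when score_i > score_j
pairs_i, pairs_j = np.triu_indices(len(X), k=1)
D = X[pairs_i] - X[pairs_j]                       # difference vectors
y = np.sign(scores[pairs_i] - scores[pairs_j])    # +1 / -1 preference labels

# Regularized least-squares solution: w = (D^T D + lam*I)^-1 D^T y
lam = 1.0
w = np.linalg.solve(D.T @ D + lam * np.eye(X.shape[1]), D.T @ y)

# Rank new items by their predicted utility w^T x
ranking = np.argsort(-(X @ w))
print(ranking[:10])
```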

Relevance:

80.00%

Publisher:

Abstract:

This work develops a general-purpose service request model through which the service bus of the Kuntalaistili (citizen account) system of the City of Lahti's Lahti Fenix project can be used to call the system's database tier or other systems integrated via the service bus. The goal of the work was to streamline the development of services related to system integrations by designing a service request builder that contains no static references to the classes or other features used by any particular service. The work exploited advanced features of the Java language: reflective programming, generic programming, and reading the method stack of the Java virtual machine. Achievement of the goal was measured using McCabe's cyclomatic complexity and the number of lines in the methods. The work was started in December 2008 and completed in February 2009. The result is a working, easy-to-use service call builder with low cyclomatic complexity.
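
The idea of a request builder without static references can be sketched in a few lines. The Python snippet below is only an analogy (the thesis itself relies on Java reflection and generics): it resolves the target service class and operation by name at run time, so the caller never imports service-specific classes. The module, class and method names are hypothetical.

```python
# Sketch of a reflective service call builder: no static references to service classes.
import importlib

def build_service_request(module_name, class_name, operation, **parameters):
    """Instantiate the named service class and invoke the named operation."""
    module = importlib.import_module(module_name)
    service_class = getattr(module, class_name)   # reflective class lookup
    service = service_class()
    operation_fn = getattr(service, operation)    # reflective method lookup
    return operation_fn(**parameters)

# Hypothetical usage: the caller only names the service and operation.
# result = build_service_request("citizen_services", "AccountService",
#                                "fetch_account", citizen_id=1234)
```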

Relevance:

80.00%

Publisher:

Abstract:

The starting point of our investigation was the longstanding notion that bilingual individuals need effective mechanisms to prevent interference from one language while processing material in the other (e.g. Penfield and Roberts, 1959). To demonstrate how the prevention of interference is implemented in the brain we employed event-related brain potentials (ERPs; see Münte, Urbach, Düzel and Kutas, 2000, for an introductory review) and functional magnetic resonance imaging (fMRI) techniques, thus pursuing a combined temporal and spatial imaging approach. In contrast to previous investigations using neuroimaging techniques in bilinguals, which had been mainly concerned with the localization of the primary and secondary languages (e.g. Perani, Paulesu, Galles, Dupoux, Dehaene, Bettinardi, Cappa, Fazio and Mehler, 1998; Chee, Caplan, Soon, Sriram, Tan, Thiel and Weekes, 1999), our study addressed the dynamic aspects of bilingual language processing.

Relevance:

80.00%

Publisher:

Abstract:

An important issue in language learning is how new words are integrated into the brain representations that sustain language processing. To identify the brain regions involved in meaning acquisition and word learning, we conducted a functional magnetic resonance imaging study. Young participants were required to deduce the meaning of a novel word presented within increasingly constrained sentence contexts that were read silently during the scanning session. Inconsistent contexts were also presented in which no meaning could be assigned to the novel word. Participants showed meaning acquisition in the consistent but not in the inconsistent condition. A distributed brain network was identified comprising the left anterior inferior frontal gyrus (BA 45), the middle temporal gyrus (BA 21), the parahippocampal gyrus, and several subcortical structures (the thalamus and the striatum). Drawing on previous neuroimaging evidence, we tentatively identify the roles of these brain areas in the retrieval, selection, and encoding of the new word's meaning.

Relevance:

80.00%

Publisher:

Abstract:

For some years, the EPS of UVic has been developing an electronic device that provides the capability to capture data about a bird nest. The e-niu project, which can be followed at www.tutara.info/e-niu, is currently in a test phase, and so far mainly the hardware part has been developed. The main objective of this project is to create a web environment to manage the data obtained from the computerized bird nest (e-niu). The data that arrive from the e-nius come in a text file, and the intention is that the user who controls the nest can perform various analyses of its data. In addition to viewing the results in several types of charts, the user is also given the option of exporting the results in table format or in Excel format; the latter is particularly interesting because it would give the data great potential for later processing, such as selecting types of data, computing percentages, creating other kinds of charts, etc. The other major objective is to work on the creation of a complete web environment at an almost professional level, with the learning that this entails, since client-server technology has been applied: the programming language resides on the server and, when a user runs the application, the system only sends the presentation in HTML. The programming model used is the three-layer one: the data layer, formed by a MySQL relational database, where all the information is stored; the programming layer, handled by the PHP language, where all the data processing takes place; and finally the presentation layer, which is in charge of showing the data to the client in the browser using HTML templates.

Relevance:

80.00%

Publisher:

Abstract:

Software integration is the stage in a software development process in which separate components are assembled to produce a single product. It is important to manage the risks involved and to be able to integrate smoothly, because software cannot be released without integrating it first. Furthermore, it has been shown that the integration and testing phase can make up 40% of the overall project costs. These issues can be mitigated by using a software engineering practice called continuous integration. This thesis presents how continuous integration was introduced to the author's employer organisation. This includes studying how the continuous integration process works and creating the technical basis for using the process on future projects. The implemented system supports software written in the C and C++ programming languages on the Linux platform, but the general concepts can be applied to any programming language and platform by selecting the appropriate tools. The results demonstrate in detail which issues need to be solved when the process is adopted in a corporate environment. Additionally, they provide an implementation and a process description suited to the organisation. The results show that continuous integration can reduce the risks involved in a software process and increase the quality of the product as well.
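
In practice, a continuous integration job for a C/C++ project boils down to a scripted configure-build-test sequence whose non-zero exit status breaks the build. The sketch below shows such a step in Python; the choice of CMake and CTest is an assumption made for illustration, not something the thesis prescribes.

```python
# Sketch of a CI build-and-test step for a C/C++ project on Linux.
import subprocess
import sys

def run(cmd, cwd="."):
    """Run one build step and report its exit status."""
    print("+", " ".join(cmd))
    return subprocess.run(cmd, cwd=cwd).returncode

def ci_build_and_test(source_dir=".", build_dir="build"):
    if run(["cmake", "-S", source_dir, "-B", build_dir]) != 0:      # configure
        return 1
    if run(["cmake", "--build", build_dir]) != 0:                   # compile
        return 1
    if run(["ctest", "--output-on-failure"], cwd=build_dir) != 0:   # unit tests
        return 1
    return 0

if __name__ == "__main__":
    # A CI server would invoke this on every commit and flag a non-zero exit.
    sys.exit(ci_build_and_test())
```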

Relevance:

80.00%

Publisher:

Abstract:

Passenger information on a train consists of the train's departure, intermediate and destination station information shown on the exterior side displays of the carriages, together with the sales numbers of the train and the carriages, as well as, inside the carriages, automatic announcements and the static and changing information shown on the cabin displays. In this work, a passenger information system is implemented for use in passenger trains. The train's data are entered into the system before the journey begins, after which the system operates automatically without requiring any actions from the train crew. In exceptional situations, the train crew can disable the system or select pre-programmed special announcements. The chosen implementation method was the C programming language on an embedded hardware platform designed for railway use and running the Linux operating system.

Relevance:

80.00%

Publisher:

Abstract:

Web application performance testing is an emerging and important field of software engineering. As web applications become more commonplace and complex, the need for performance testing will only increase. This paper discusses common concepts, practices and tools that lie at the heart of web application performance testing. A pragmatic, hands-on approach is assumed where applicable; real-life examples of test tooling, execution and analysis are presented right next to the underpinning theory. At the client-side, web application performance is primarily driven by the amount of data transmitted over the wire. At the server-side, selection of programming language and platform, implementation complexity and configuration are the primary contributors to web application performance. Web application performance testing is an activity that requires delicate coordination between project stakeholders, developers, system administrators and testers in order to produce reliable and useful results. Proper test definition, execution, reporting and repeatable test results are of utmost importance. Open-source performance analysis tools such as Apache JMeter, Firebug and YSlow can be used to realise effective web application performance tests. A sample case study using these tools is presented in this paper. The sample application was found to perform poorly even under the moderate load incurred by the sample tests.
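
The client-side measurements discussed above can be reproduced with even a very small script: a pool of concurrent users issues requests and the response-time distribution is summarised. The sketch below is a bare-bones illustration of the idea in Python; the paper's own case study uses Apache JMeter, and the target URL here is only a placeholder.

```python
# Sketch of a tiny client-side load test with concurrent users.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # placeholder target

def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

def load_test(users=10, requests_per_user=20):
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(timed_request, range(users * requests_per_user)))
    print(f"requests: {len(times)}")
    print(f"mean: {statistics.mean(times):.3f}s  "
          f"95th percentile: {sorted(times)[int(0.95 * len(times))]:.3f}s")

if __name__ == "__main__":
    load_test()
```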

Relevance:

80.00%

Publisher:

Abstract:

This study proposes an activity to introduce scientific programming. In particular, the multidisciplinary concepts of scientific programming, quantum mechanics, and spectroscopy are presented through the study of the electronic spectrum of the I2 molecule. We use the Python programming language and, in particular, the IPython command shell for their user-friendliness and versatility.
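
To give a flavour of such an activity: the positions of the I2 B←X vibronic bands can be computed from anharmonic-oscillator term values in a few lines of Python. The spectroscopic constants below are rounded literature values and are included purely for illustration.

```python
# Sketch: vibronic transition energies of I2 (X -> B) from term values.
import numpy as np

def term_value(v, we, wexe):
    """Vibrational term value G(v) = we(v + 1/2) - wexe(v + 1/2)^2, in cm^-1."""
    return we * (v + 0.5) - wexe * (v + 0.5) ** 2

# Approximate constants (cm^-1): ground X state and excited B state of I2
we_X, wexe_X = 214.5, 0.61
Te_B, we_B, wexe_B = 15769.0, 125.7, 0.76

v_upper = np.arange(0, 30)
# Transitions from v'' = 0 of the X state to v' of the B state
transition_cm = Te_B + term_value(v_upper, we_B, wexe_B) - term_value(0, we_X, wexe_X)
wavelength_nm = 1e7 / transition_cm

for v, wl in zip(v_upper[:5], wavelength_nm[:5]):
    print(f"0 -> {v}: {wl:.1f} nm")
```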

Relevance:

80.00%

Publisher:

Abstract:

In this thesis, computer software for defining the geometry of a centrifugal compressor impeller is designed and implemented. The project was done under the supervision of the Laboratory of Fluid Dynamics at Lappeenranta University of Technology. This thesis is similar to the thesis written by Tomi Putus (2009), in which the flow channel of a centrifugal compressor impeller is researched and commonly used design practices are reviewed. Putus wrote computer software that can be used to define the impeller's three-dimensional geometry based on the basic geometrical dimensions given by a preliminary design. The software designed in this thesis is broadly similar, but it uses a different programming language (C++) and a different way to define the shape of the impeller's meridional projection.

Relevance:

80.00%

Publisher:

Abstract:

The use of domain-specific languages (DSLs) has been proposed as an approach to cost-effectively develop families of software systems in a restricted application domain. Domain-specific languages, in combination with the accumulated knowledge and experience of previous implementations, can in turn be used to generate new applications with unique sets of requirements. For this reason, DSLs are considered an important approach to software reuse. However, the toolset supporting a particular domain-specific language is also domain-specific and is by definition not reusable. Therefore, creating and maintaining a DSL requires additional resources that could be even larger than the savings associated with using one. As a solution, different tool frameworks have been proposed to simplify and reduce the cost of developing DSLs. Developers of tool support for DSLs need to instantiate, customize or configure the framework for a particular DSL. There are different approaches for this. One approach is to use an application programming interface (API) and to extend the basic framework using an imperative programming language; an example of a tool based on this approach is Eclipse GEF. Another approach is to configure the framework using declarative languages that are independent of the underlying framework implementation. We believe this second approach can bring important benefits, as it puts the focus on specifying what the tool should be like instead of writing a program specifying how the tool achieves this functionality. In this thesis we explore this second approach. We use graph transformation as the basic approach to customize a domain-specific modeling (DSM) tool framework. The contributions of this thesis include a comparison of different approaches for defining, representing and interchanging software modeling languages and models, and a tool architecture for an open domain-specific modeling framework that efficiently integrates several model transformation components and visual editors. We also present several specific algorithms and tool components for the DSM framework. These include an approach for graph queries based on region operators and the star operator, and an approach for reconciling models and diagrams after executing model transformation programs. We exemplify our approach with two case studies, MICAS and EFCO, in which we show how our experimental modeling tool framework has been used to define tool environments for domain-specific languages.
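
The flavour of configuring a tool through graph transformation rather than imperative code can be conveyed with a toy example: a rule whose left-hand side matches a pattern in a typed model graph and whose right-hand side adds new structure. The node types, edge labels and rule below are invented for illustration and are unrelated to the MICAS and EFCO case studies.

```python
# Sketch of a graph transformation rule applied to a tiny typed model graph.
# Nodes carry a type; edges are (source, label, target) triples.
nodes = {"p1": "Process", "d1": "DataStore", "p2": "Process"}
edges = [("p1", "writes", "d1"), ("d1", "readBy", "p2")]

def match_write_read(nodes, edges):
    """Find (writer, store, reader) occurrences of the rule's left-hand side."""
    writes = {(s, t) for s, l, t in edges if l == "writes"}
    reads = {(s, t) for s, l, t in edges if l == "readBy"}
    return [(w, store, r) for (w, store) in writes
            for (store2, r) in reads if store == store2]

def apply_rule(nodes, edges):
    """Right-hand side: add a direct data-flow edge from writer to reader."""
    for writer, store, reader in match_write_read(nodes, edges):
        edges.append((writer, "flowsTo", reader))
    return nodes, edges

apply_rule(nodes, edges)
print(edges)  # now also contains ("p1", "flowsTo", "p2")
```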