Abstract:
We present an algorithm for the computation of reducible invariant tori of discrete dynamical systems that is suitable for tori of dimension greater than one. It is based on a quadratically convergent scheme that simultaneously approximates the Fourier series of the torus, its Floquet transformation, and its Floquet matrix. The Floquet matrix describes the linearization of the dynamics around the torus and, hence, its linear stability. The algorithm presents a high degree of parallelism, and the computational effort grows linearly with the number of Fourier modes needed to represent the solution. For these reasons it is a very good option for computing quasi-periodic solutions with several basic frequencies. The paper includes some examples (flows) to show the efficiency of the method on a parallel computer. In these flows we compute invariant tori of dimensions up to 5 by taking suitable sections.
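For context, the objects this algorithm approximates can be stated compactly. In standard notation for a map F (ours, not necessarily the paper's), a d-dimensional invariant torus parameterized by K with rotation vector ω, its Floquet transformation P, and its Floquet matrix Λ satisfy:

```latex
% Invariance equation: the torus K is carried to itself by the map F,
% the internal dynamics being a rigid rotation by \omega.
\[
  F\bigl(K(\theta)\bigr) = K(\theta + \omega), \qquad \theta \in \mathbb{T}^d .
\]
% Reducibility: the Floquet transformation P conjugates the linearized
% dynamics around the torus to the constant Floquet matrix \Lambda,
% whose eigenvalues give the linear stability of the torus.
\[
  DF\bigl(K(\theta)\bigr)\, P(\theta) = P(\theta + \omega)\, \Lambda .
\]
```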
Abstract:
We present the CLARIN project, whose objective is to promote the use of technological tools in research in the Humanities and Social Sciences.
Abstract:
This article reflects the analysis of personal and social competences through the study of creative tension in engineering students, using a computer application called Cycloid. The main objective was to compare the students' creative tension by assigning them the task of acting as project leader of a given project: their own university major. The process consisted of evaluating a group of students through special surveys, using fuzzy logic analysis, to assess the current state of their competences. From the self-knowledge provided by the survey, students can identify their strong and weak characteristics regarding their study habits. Results showed that stress tolerance and language courses are the weakest points. The application is useful for designing study strategies that students themselves can adopt to better face their courses.
Abstract:
This thesis seeks to answer whether communication challenges in virtual teams can be overcome with the help of computer-mediated communication (CMC). Virtual teams are becoming a more common way of working in many global companies. For virtual teams to reach their maximum potential, effective asynchronous and synchronous communication methods are needed. The thesis covers communication in virtual teams, as well as leadership and trust building in virtual environments with the help of CMC. First, the communication challenges in virtual teams are identified using the framework of knowledge-sharing barriers in virtual teams by Rosen et al. (2007). Second, leadership and trust in virtual teams are defined in the context of CMC. The performance of virtual teams is evaluated in the case study along these three dimensions. Through a case study of two virtual teams, the practical issues of selecting and implementing communication technologies, as well as overcoming knowledge-sharing barriers, are discussed. The case study involves a complex inter-organisational setting in which four companies work together to maintain a new IT system. The communication difficulties are related to inadequate communication technologies, lack of trust, and the undefined relationships of the stakeholders and the team members. As a result, it is suggested that communication technologies are needed to improve virtual team performance, but they alone cannot solve the communication challenges in virtual teams. In addition, suitable leadership and trust between team members are required to improve knowledge sharing and communication in virtual teams.
Abstract:
Recent advances in machine learning methods increasingly enable the automatic construction of various types of computer-assisted methods that have been difficult or laborious for human experts to program. The tasks for which such tools are needed arise in many areas, especially in bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question. However, their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers developing kernel-based learning algorithms that incorporate this kind of prior knowledge of the task in question in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In the context of kernel-based learning methods, prior knowledge is often incorporated by designing appropriate kernel functions. Another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take into account the positional information and the mutual similarities of words. It is shown that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to information retrieval, and to more general ranking problems, than the cost functions designed for regression and classification. We also consider other applications of the kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions. We also design a fast cross-validation algorithm for learning algorithms of the regularized least-squares type. Further, an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks is proposed. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used in algorithms efficiently.
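As a point of reference, the regularized least-squares learner on which the efficient training and cross-validation algorithms above build can be sketched in a few lines. This is a generic textbook formulation, not the thesis code; all names and the toy RBF kernel are our assumptions.

```python
import numpy as np

def rls_train(K, y, lam):
    """Kernel regularized least-squares: solve (K + lam*I) alpha = y.
    K: (n, n) kernel (Gram) matrix, y: (n,) labels, lam: regularization."""
    return np.linalg.solve(K + lam * np.eye(K.shape[0]), y)

def rls_predict(K_test, alpha):
    """Predict with f(x) = sum_i alpha_i k(x_i, x); K_test is (m, n)."""
    return K_test @ alpha

# Toy usage with a Gaussian (RBF) kernel of unit width.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = np.sin(X[:, 0])
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq_dists)
alpha = rls_train(K, y, lam=0.1)
print(rls_predict(K, alpha)[:3])
```

The thesis's fast cross-validation algorithm and its preference-learning and ranking variants build on this basic learner.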
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web; hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, a long time ago by the standards of web-related concepts and technologies, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on deep web sites in English. One can therefore expect that findings from these surveys may be biased, especially owing to the steady increase in non-English web content. Thus, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from this national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest have already been discovered and are known to query systems. However, such assumptions rarely hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other existing approaches to the deep Web, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Thus, automating the querying and retrieval of data behind search interfaces is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts, and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
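To make the form-querying task concrete, the following is a minimal sketch of automated form submission of the kind the thesis automates; it is not the I-Crawler or its query language, and the URL and the query field name are placeholders.

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

page_url = "http://example.org/search"  # placeholder URL
html = requests.get(page_url, timeout=10).text
form = BeautifulSoup(html, "html.parser").find("form")
if form is None:
    raise SystemExit("no search form found on the page")

# Collect the form's named input fields with their default values,
# then set the query term (field name 'q' is an assumption).
fields = {inp.get("name"): inp.get("value", "")
          for inp in form.find_all("input") if inp.get("name")}
fields["q"] = "deep web"

# Submit to the form's action URL with the declared HTTP method
# and fetch the dynamic result page.
action = urljoin(page_url, form.get("action", ""))
if form.get("method", "get").lower() == "post":
    result = requests.post(action, data=fields, timeout=10)
else:
    result = requests.get(action, params=fields, timeout=10)
print(result.status_code, len(result.text))
```

A real deep web crawler must additionally handle JavaScript-rich and non-HTML forms, which is precisely what the abstract notes the I-Crawler recognizes.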
Abstract:
Objective: STOPP/START is a screening tool to detect potentially inappropriate prescribing in persons aged 65 or older. Its Irish authors recently updated and improved the initially published version of 2008. We present the French-language adaptation and validation of this updated tool. Methods: STOPP/START.v2 was adapted into French by two experts, confirmed by a translation/back-translation method, and finalised according to the comments of nine French-speaking assessors (geriatricians, clinical pharmacists, and a general practitioner) from four countries (France, Belgium, Switzerland, and Canada). The validation was completed by an inter-rater reliability (IRR) analysis of the STOPP/START.v2 criteria applied to ten standardized clinical vignettes. Results: Compared with the original English version, the 115 STOPP/START.v2 criteria in French are classified identically, but the presentation has been adjusted (each START.v2 criterion first specifies the clinical condition, followed by an explanation of the inappropriateness of the omission), as has the wording of some criteria. This French adaptation was validated by means of (i) the translation/back-translation, which showed that the French version complies with the clinical meaning of the original criteria; (ii) the similar identification of criteria when applied by the nine assessors to the ten clinical vignettes; and (iii) the high inter-rater reliability of these nine evaluations, for both STOPP.v2 (IRR 0.849) and START.v2 (IRR 0.921). Conclusion: The French-language adaptation of the STOPP/START.v2 criteria provides clinicians with a logical, reliable, and easy-to-use screening tool to detect potentially inappropriate prescribing in patients aged 65 and older.
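For illustration only, an inter-rater reliability analysis like the one reported (ICC over nine raters and ten vignettes) can be computed as follows; the long-format layout, column names, and dummy ratings are our assumptions, and pingouin is just one of several libraries offering ICC.

```python
import pandas as pd
import pingouin as pg

# Long format: one row per (vignette, rater) pair; 'score' stands in for
# the number of criteria a rater identified in a vignette (dummy data).
data = pd.DataFrame({
    "vignette": [v for v in range(10) for _ in range(9)],
    "rater":    [r for _ in range(10) for r in range(9)],
    "score":    [(v * 3 + r) % 7 for v in range(10) for r in range(9)],
})

icc = pg.intraclass_corr(data=data, targets="vignette",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC"]])
```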
Abstract:
Language diversity has become greatly endangered in the past centuries owing to processes of language shift from indigenous languages to languages seen as socially and economically more advantageous, resulting in the death or decline of minority languages. In this paper, we define a new language competition model that can describe the historical decline of minority languages in competition with more advantageous languages. We then implement this non-spatial model as an interaction term in a reaction-diffusion system to model the evolution of the two competing languages. We use the results to estimate the speed at which the more advantageous language spreads geographically, resulting in the shrinkage of the area of dominance of the minority language. We compare the results from our model with the observed retreat in the area of influence of the Welsh language in the UK, obtaining good agreement between the model and the observed data.
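The paper's specific competition term is not reproduced in the abstract, but the modelling step it describes has a standard generic shape: a non-spatial interaction term f(u) embedded in a reaction-diffusion equation whose travelling fronts give the geographic spreading speed. Under Fisher-KPP-type assumptions (ours, for illustration):

```latex
% u(x,t): local fraction of speakers of the advantageous language;
% f(u): the non-spatial competition term; D: diffusion (mobility).
\[
  \frac{\partial u}{\partial t} = D\,\nabla^2 u + f(u).
\]
% For fronts invading the minority-language region (u \approx 0), the
% classical Fisher--KPP result gives the front speed
\[
  c = 2\sqrt{D\,f'(0)},
\]
% the kind of retreat rate compared against the Welsh-language data.
```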
Abstract:
Following their detection and seizure by police and border guard authorities, false identity and travel documents are usually scanned, producing digital images. This research investigates the potential of these images to classify false identity documents, to highlight links between documents produced by the same modus operandi or source, and thus to support forensic intelligence efforts. Inspired by previous research on digital images of Ecstasy tablets, a systematic and complete method has been developed to acquire, collect, process, and compare images of false identity documents. This first part of the article highlights the critical steps of the method and the development of a prototype that processes regions of interest extracted from the images. Acquisition conditions were fine-tuned to optimise the reproducibility and comparability of images. Different filters and comparison metrics were evaluated, and the performance of the method was assessed using two calibration and validation sets of documents, made up of 101 Italian driving licences and 96 Portuguese passports seized in Switzerland, among which some were known to come from common sources. Results indicate that the use of Hue and Edge filters, or their combination, to extract profiles from images, followed by the comparison of profiles with a Canberra distance-based metric, provides the most accurate classification of documents. The method also appears to be quick, efficient, and inexpensive. It can easily be operated from remote locations and shared among different organisations, which makes it very convenient for future operational applications. The method could serve as a fast first triage step that may help target more resource-intensive profiling methods (based, for instance, on a visual, physical, or chemical examination of documents). Its contribution to forensic intelligence and its application to several sets of false identity documents seized by police and border guards will be developed in a forthcoming article (part II).
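As an illustration of the profile-extraction and comparison steps described above, a minimal sketch follows. The file names are placeholders, and profiling the whole image's hue channel is our simplification; the prototype works on extracted regions of interest and also evaluates Edge filters.

```python
import numpy as np
from PIL import Image
from scipy.spatial.distance import canberra

def hue_profile(path, bins=64):
    """Normalized histogram of the hue channel of a scanned document image."""
    hue = np.asarray(Image.open(path).convert("HSV"))[:, :, 0]
    hist, _ = np.histogram(hue, bins=bins, range=(0, 256))
    return hist / hist.sum()

# Lower Canberra distance suggests a closer match between two documents.
d = canberra(hue_profile("doc_a.png"), hue_profile("doc_b.png"))
print(f"Canberra distance between hue profiles: {d:.4f}")
```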
Abstract:
In this paper, we study student interaction in English and Swedish courses at a Finnish university. We focus on the language choices made in task-related activities in small-group interaction. Our research interest arose from a change in the teaching curriculum at Tampere University of Technology in 2013, in which content and language courses were integrated. Using conversation analysis, we analysed groups of 4-5 students who worked collaboratively on a task via a video conference programme. The results show how language alternation has different functions in 1) situations where students orient to managing the task, e.g., in transitions into the task, or where they orient to technical problems, and 2) situations where students accomplish the task. With these results, we aim to show how language alternation can provide interactional opportunities for language learning. The findings will be useful in designing tasks in the future.
Abstract:
Tampere University of Technology (TUT) is undergoing a degree reform that started in 2013. One of the major changes in the reform was the integration of the compulsory Finnish, Swedish, and English language courses into content courses at the bachelor level. The integration of content and language courses aims at higher-quality language learning, more fluent studies, and increased motivation toward language studies. In addition, the integration is an opportunity to optimize the use of resources and to offer courses that are better tailored to the students' field of study and to the skills needed in working life. The reform also aims to increase and develop co-operation between different departments at the university and to develop scientific follow-up. This paper gives an overview of the integration process conducted at TUT and gives examples of adjunct CLIL implementations in three different languages.