909 results for I SEARCH (Program)
Abstract:
Helsingfors 1892
Abstract:
The aim of this article is to discuss an old communicative matrix partially noted by A. Comte, Ch. S. Peirce and U. Eco. These authors recognized the relationship between marks, arrows, ensigns and mirrors as different basic metaphors for various cognitive operations, although they did not present an overall view of them. All three worked on the distinction between metaphor and metonymy, which proved so fruitful in different research domains, as Jakobson (among others) taught us. My hypothesis is that these four basic metaphors (marks, arrows, ensigns and mirrors) can be properly derived from a matrix relating the metaphor-metonymy axis to types of codification. E. Verón launched an interesting reflection on the analogue and digital modes, starting from a similar problem. Marks, arrows, ensigns and mirrors can readily evoke operations such as pointing and signalling, representing and reflecting, respectively: stipulating, even approximately, how these objects and operations are connected could be a small contribution to the old programme of Peirce and the others.
Abstract:
Development of an application based on location tracking and route creation for people with disabilities, the elderly, people with reduced mobility, people with Alzheimer's or senile dementia, and people who cannot communicate verbally. The application's function is to know their location at all times and, should they suffer a crisis, to call their carers or relatives.
Abstract:
The ontology that has been designed covers the basic concepts of Twitter, the relationships between them, and the constraints that must be respected. The ontology was designed with the Protégé tool and is available in OWL format. An application has been developed to populate the ontology with the tweets obtained from a Twitter search. Twitter is accessed via the API it offers for third-party applications to retrieve its data. The result of running the application is an RDF/XML file with the triples corresponding to the instances of the objects in the ontology.
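A minimal, stdlib-only sketch of the last step described above: serializing tweet instances as RDF/XML triples. The namespace and the class/property names (`Tweet`, `hasAuthor`, `hasText`) are illustrative assumptions, not the ontology actually designed with Protégé.

```python
# Sketch: turn tweet records into RDF/XML triples using only the
# standard library. Namespace and term names are hypothetical.
from xml.etree import ElementTree as ET

RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
EX_NS = "http://example.org/twitter#"  # placeholder ontology namespace

def tweets_to_rdfxml(tweets):
    """Build an rdf:RDF document with one rdf:Description per tweet."""
    ET.register_namespace("rdf", RDF_NS)
    ET.register_namespace("ex", EX_NS)
    root = ET.Element(f"{{{RDF_NS}}}RDF")
    for t in tweets:
        # One resource per tweet, typed as ex:Tweet.
        desc = ET.SubElement(root, f"{{{RDF_NS}}}Description",
                             {f"{{{RDF_NS}}}about": EX_NS + f"tweet_{t['id']}"})
        ET.SubElement(desc, f"{{{RDF_NS}}}type",
                      {f"{{{RDF_NS}}}resource": EX_NS + "Tweet"})
        # Object property linking the tweet to its author resource.
        author = ET.SubElement(desc, f"{{{EX_NS}}}hasAuthor")
        author.set(f"{{{RDF_NS}}}resource", EX_NS + f"user_{t['user']}")
        # Datatype property holding the tweet text as a literal.
        text = ET.SubElement(desc, f"{{{EX_NS}}}hasText")
        text.text = t["text"]
    return ET.tostring(root, encoding="unicode")

rdf = tweets_to_rdfxml([{"id": 1, "user": "alice", "text": "hello world"}])
```

In a real pipeline the list of tweet dicts would come from the Twitter search API rather than being hard-coded.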
Abstract:
In this work we have tried to provide an up-to-date view of the world of linked open data in the field of education. We have reviewed both the applications aimed at implementing these technologies in existing data repositories (web pages, repositories of educational objects, repositories of courses and educational programmes) and those supporting new paradigms within the world of education.
Abstract:
In this article we investigate the factors that lead Spanish and Dutch university graduates to regret the studies they pursued. Spain and the Netherlands have very different education systems in terms of the rigidity of secondary education and the link between education and the labour market. Comparing Spain and the Netherlands allows us to learn about the consequences of two highly differentiated education systems for the probability of regretting one's chosen studies. Drawing on the psychological literature on regret, we derive a set of initial hypotheses that we test empirically. The results show that both the rigidity of secondary education and the mismatch between education and employment are important factors in explaining regret over the university studies pursued. The article concludes with recommendations on the university education system.
Abstract:
Quest for Orthologs (QfO) is a community effort with the goal of improving and benchmarking orthology predictions. As quality assessment assumes prior knowledge of species phylogenies, we investigated the congruence between existing species trees by comparing the relationships of 147 QfO reference organisms from six Tree of Life (ToL)/species tree projects: the National Center for Biotechnology Information (NCBI) taxonomy, the Open Tree of Life, the sequenced species/species ToL, the 16S ribosomal RNA (rRNA) database, and trees published by Ciccarelli et al. (Ciccarelli FD, et al. 2006. Toward automatic reconstruction of a highly resolved tree of life. Science 311:1283-1287) and by Huerta-Cepas et al. (Huerta-Cepas J, Marcet-Houben M, Gabaldon T. 2014. A nested phylogenetic reconstruction approach provides scalable resolution in the eukaryotic Tree Of Life. PeerJ PrePrints 2:223). Our study reveals that each species tree suggests a different phylogeny: 87 of the 146 (60%) possible splits of a dichotomous, rooted tree are congruent, while all other splits are incongruent in at least one of the species trees. Topological differences are observed not only at deep speciation events but also within younger clades, such as Hominidae, Rodentia, Laurasiatheria, or rosids. The evolutionary relationships of 27 archaea and bacteria are highly inconsistent. By assessing 458,108 gene trees from 65 genomes, we show that consistent species topologies are more often supported by gene phylogenies than contradicting ones. The largest concordant species tree includes at most 77 of the QfO reference organisms. Results are summarized in the form of a consensus ToL (http://swisstree.vital-it.ch/species_tree) that can serve different benchmarking purposes.
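An illustrative sketch, not the study's actual pipeline, of what "congruent splits" means here: representing each rooted tree as nested tuples of leaf names, collecting its non-trivial clades (leaf sets), and counting the clades two trees share.

```python
# Hypothetical sketch: count splits (clades) shared by two rooted trees
# given as nested tuples, e.g. (("human", "chimp"), ("mouse", "rat")).
def clades(tree, acc):
    """Recursively collect every internal clade's leaf set into acc."""
    if isinstance(tree, tuple):
        leaves = frozenset()
        for child in tree:
            leaves |= clades(child, acc)
        acc.add(leaves)  # record this internal node's clade
        return leaves
    return frozenset([tree])  # a leaf: no clade recorded

def congruent_splits(t1, t2):
    """Number of clades present in both trees."""
    a, b = set(), set()
    clades(t1, a)
    clades(t2, b)
    return len(a & b)

t1 = (("human", "chimp"), ("mouse", "rat"))
t2 = (("human", "mouse"), ("chimp", "rat"))
shared = congruent_splits(t1, t2)  # only the root clade is shared here
```

A tree is congruent with another on a split exactly when both contain that clade; disagreements like the two toy trees above are what the abstract counts as incongruent splits.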
Abstract:
The international HyMeX (Hydrological Mediterranean Experiment) program aims to improve our understanding of the water cycle in the Mediterranean, using a multidisciplinary, multiscale approach with an emphasis on extreme events. The program will improve our understanding of hydrometeorological hazards and our ability to predict them, including their evolution over the next century. One of its most important outcomes will be its observational campaigns, which will greatly enrich the available data and lead to significant scientific results. The program's interest for Spanish research groups is described, as well as the active participation of some of them in the design and execution of the observational activities. At the same time, owing to its location, Spain is key to the program as an excellent observation platform. HyMeX will enrich the work of the Spanish research groups, improve the predictive ability of the weather services, help us better understand the impacts of hydrometeorological extremes on our society, and lead to better strategies for adapting to climate change.
Abstract:
For decades, lung cancer has been the most common cancer in terms of both incidence and mortality, and there has been very little improvement in its prognosis. Early treatment following early diagnosis is considered a promising avenue. The National Lung Screening Trial (NLST), a large, well-designed randomized controlled trial, evaluated low-dose computed tomography (LDCT) as a screening tool for lung cancer. Compared with chest X-ray, annual LDCT screening reduced death from lung cancer and overall mortality by 20% and 6.7%, respectively, in high-risk people aged 55-74 years. Several smaller trials of LDCT screening are under way, but none is sufficiently powered to detect a 20% reduction in lung cancer death; thus, it is very unlikely that the NLST results will be replicated. In addition, the NLST raises several issues related to screening, such as the high false-positive rate, overdiagnosis, and cost. Healthcare providers and systems are now left with the question of whether the available findings should be translated into practice. We present the main reasons for implementing lung cancer screening in high-risk adults and discuss the main issues related to it. We stress the importance of eligibility criteria, smoking cessation programs, primary care physicians, and informed decision-making should lung cancer screening be implemented. Seven years ago, we were waiting for the results of trials; such evidence is now available. As with almost all other cancer screens, uncertainties exist and persist even after recent scientific efforts and data. We believe that, by staying within the characteristics of the original trial and appropriately sharing the evidence as well as the uncertainties, it is reasonable to implement an LDCT lung cancer screening program for smokers and former smokers.
Abstract:
This final-year project (PFC) implements the control of an acceleration-triggered alarm. It was built on the LPC1769 board with the FreeRTOS operating system. The project allows the alarm to be controlled from Android devices through an intermediate server that handles the communication.
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriad databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is a huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases.
Characterizing the deep Web: Though the term deep Web was coined in 2000, long ago by the standards of any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on deep web sites in English, so their findings may be biased, especially given the steady increase in non-English web content. Surveying national segments of the deep Web is therefore of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment.
Finding deep web resources: The deep Web has been growing at a very fast pace; it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions rarely hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.
Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. The automation of querying and retrieving data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
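A minimal sketch of the basic step any such automation rests on: submitting a search form programmatically by encoding its fields the way a browser submits a GET form. The form URL and field names are hypothetical; real search interfaces additionally involve POST submissions, labels, and client-side scripts, which is what makes the problem hard.

```python
# Hypothetical sketch: encode form fields into a GET query URL,
# i.e. what a browser does when a user submits a search form.
from urllib.parse import urlencode

def build_form_query(action_url, fields):
    """Return the URL a GET form submission with these fields would request."""
    return f"{action_url}?{urlencode(fields)}"

url = build_form_query("http://example.org/search",
                       {"q": "deep web", "lang": "en"})
# -> 'http://example.org/search?q=deep+web&lang=en'
```

Fetching `url` and extracting data from the resulting pages are the parts the thesis's data model and form query language are designed to handle.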
Abstract:
This work explores the language of the diary of the peasant Joan de la Guàrdia, of L'Esquirol, written in the 17th century during the Reapers' War (Guerra dels Segadors). The study is based on an analysis of the main morphological, syntactic and lexical-semantic features of the diary, as well as of the spelling and writing characteristics of the original document. The work also compares this text with others from the same area, from the same period and from before and after, as well as with the present-day speech of the Collsacabra.
Abstract:
This work presents a textual analysis of Maria Aurèlia Capmany's 'Feliçment, jo sóc una dona'. Through a brief presentation of the author up to the time of the work's publication, on the one hand, and a synthesis of feminism in Catalonia, on the other, the author of the study brings us closer to the feminist theories that Capmany pours into her novel, a work that represents a turning point in her career as a novelist. The textual analysis allows us to approach the concept of female identity and its construction, and to relate it to more recent feminist theories.
Abstract:
The study of post-mortem inventories has allowed us to approach the everyday life of the tavern keepers and innkeepers who lived in seventeenth-century Barcelona. This type of notarial document leaves us, among other things, a record of material culture: the objects, utensils, furniture, clothing... that can help us understand different uses and customs, as well as other aspects related to mentalities.