973 results for Unstructured content search
Abstract:
Project submitted to obtain the degree of Master in Informatics and Computer Engineering.
Abstract:
The objective of this study was to evaluate, using different anthropometric indicators, the nutritional status of elderly people in Fortaleza. This is a population-based, cross-sectional study with primary data collection. The anthropometric variables analyzed were body mass index (BMI), triceps skinfold thickness (TST), and arm muscle circumference (AMC). Nutritional status was defined from the diagnoses obtained with these anthropometric variables: eutrophic (an elderly person for whom all three variables, BMI, TST, and AMC, simultaneously indicated eutrophy according to the adopted standards) and non-eutrophic (all other elderly people). A total of 385 households were selected for the study sample, in which 483 elderly people were interviewed (68% women). Regarding BMI, 47.3% of all elderly people were considered eutrophic. Women showed a higher proportion of excessive BMI values (21.9%) than men (13.5%), and a statistically significant association was found between BMI adequacy and sex. The TST values showed that 54.4% of all elderly people were eutrophic; there was no statistically significant association between TST adequacy and sex. Regarding AMC, men showed a higher prevalence of malnutrition (66.5%) than women (40.6%), and a statistically significant association was found between AMC adequacy and sex. When nutritional status was assessed through the combined anthropometric variables, 83.9% of the men were considered non-eutrophic, as were most of the women (74.2%), and a statistically significant association was observed between nutritional status and sex. Given the prevalence of non-eutrophic individuals, the elderly population of Fortaleza presents a vulnerable nutritional status.
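As an illustration of the kind of classification and association test described above, the sketch below derives a eutrophic/non-eutrophic label from anthropometric values and runs a chi-square test against sex. The cut-off values and the contingency counts are illustrative assumptions, not the study's adopted standards or data.

```python
# Illustrative sketch (not the study's actual code): classify elderly subjects as
# eutrophic / non-eutrophic from anthropometric variables and test the association
# with sex using a chi-square test. Cut-off values below are assumptions.
from scipy.stats import chi2_contingency

def bmi(weight_kg, height_m):
    """Body mass index = weight / height^2."""
    return weight_kg / height_m ** 2

def is_eutrophic(bmi_value, tst_adequacy_pct, amc_adequacy_pct):
    # Assumed cut-offs: BMI 22-27 kg/m^2 for the elderly, and 90-110% adequacy
    # for triceps skinfold (TST) and arm muscle circumference (AMC).
    return (22 <= bmi_value <= 27
            and 90 <= tst_adequacy_pct <= 110
            and 90 <= amc_adequacy_pct <= 110)

# Hypothetical 2x2 contingency table: rows = sex (men, women),
# columns = (eutrophic, non-eutrophic) counts.
table = [[25, 130],   # men
         [85, 243]]   # women
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant association
```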
Abstract:
Dissertation presented to obtain the degree of Master in Informatics Engineering at Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
Abstract:
This project deals primarily with web scraping of HTML documents on Android. As a result, a methodology is proposed for performing web scraping in applications built for this operating system, and an application based on this methodology was developed to be useful to the school's students. Web scraping can be defined as a technique based on a set of content-search algorithms whose goal is to obtain specific information from web pages while discarding what is not relevant. As the central part of the work, considerable time was devoted to studying web browsers and servers, the HTML language present in almost all of today's web pages, and the mechanisms used for client-server communication, since these are the pillars on which the technique rests. A study of the necessary techniques and tools was carried out, providing the required theoretical concepts as well as a proposed methodology for their implementation. Finally, the UPMdroid application was coded to exemplify the proposed methodology and, at the same time, to give ETSIST students a mobile Android tool that eases access to and visualization of the most important data of the academic year, such as class schedules and the grades of the courses in which they are enrolled. Besides implementing the proposed methodology, this application is a very useful tool for students, since it lets them use a large number of the school's online services in a simple and intuitive way, thereby solving the problems of viewing the school's web content on mobile devices.
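The abstract's definition of web scraping (fetch a page, parse its HTML, keep only the relevant content) can be illustrated with a minimal Python sketch using requests and BeautifulSoup. The URL and the CSS selector are hypothetical placeholders, and UPMdroid itself targets Android rather than Python.

```python
# Minimal web-scraping sketch (illustrative only; UPMdroid itself is an Android app).
# The URL and the CSS selector are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

def scrape_schedule(url: str) -> list[str]:
    """Fetch an HTML page and extract only the relevant rows, discarding the rest."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # Keep only the table cells we care about; everything else is discarded.
    return [cell.get_text(strip=True) for cell in soup.select("table.schedule td")]

if __name__ == "__main__":
    for entry in scrape_schedule("https://example.org/schedule.html"):
        print(entry)
```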
Abstract:
Colombia is going through a demobilization process, and one of its goals is labor reintegration, understood as the process through which people who have belonged to an illegal armed group obtain a job and definitively rejoin society. The main objective of this study is to understand, through a qualitative design, the attitudes of a group of three managers toward hiring people undergoing labor reintegration (PPR). To this end, a series of semi-structured interviews was conducted with a sample of three managers from the public and private sectors. The information obtained was analyzed through an axial coding process. The results show that the attitudes of the three managers toward hiring people undergoing labor reintegration can be positive or negative. Likewise, one of the predominant attitudes is the managers' appraisal of beliefs and prejudices about the labor-integration process, namely: uncertainty about the PPR's job performance, the PPR's presumed lack of commitment, possible labor conflicts, and the PPR's difficulty in relating to others. In conclusion, the organizational behavior model plays a very important role, since it encompasses the elements that influence and determine the construction of attitudes. These attitudes guide the evaluation of behaviors, which may be for or against various aspects of the process of hiring demobilized people.
Abstract:
The value of online business has grown to over one trillion USD. This thesis is about search engine optimization, whose focus is to improve search engine rankings. Search engine optimization is an important branch of online marketing because the first page of search engine results generates the majority of search traffic. Current articles about search engine optimization and Google indicate that, with the proper use of quality content, there is potential to improve search engine rankings. However, the existing search engine optimization literature does not address content at a sufficient level. To close that gap, a content-centered method for search engine optimization is constructed, and the role of content in search engine optimization is studied. The content-centered method consists of three search engine optimization tactics: 1) content, 2) keywords, and 3) links. Two propositions were used to test these tactics in a real business environment, and the results suggest that the content-centered method improves search engine rankings. Search engine optimization changes constantly because Google adjusts its search algorithm regularly. Still, some long-term trends can be recognized. Google has stated that content will grow in importance as a ranking factor, and the content-centered method takes advantage of this trend, which should keep it relevant for years to come.
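As a toy illustration of the three tactics named above (content, keywords, links), the sketch below audits a single page for content volume, keyword placement, and outbound links. It is a hedged sketch under an assumed HTML structure, not the thesis's actual method.

```python
# Toy on-page audit illustrating the three tactics above (content, keywords, links).
# Illustrative sketch only; not the thesis's content-centered method.
from bs4 import BeautifulSoup

def audit_page(html: str, keyword: str) -> dict:
    soup = BeautifulSoup(html, "html.parser")
    text = soup.get_text(" ", strip=True).lower()
    words = text.split()
    title = (soup.title.string or "").lower() if soup.title else ""
    return {
        "word_count": len(words),                                 # content volume
        "keyword_in_title": keyword in title,                     # keyword placement
        "keyword_density": words.count(keyword) / max(len(words), 1),
        "outbound_links": len(soup.find_all("a", href=True)),     # links
    }

html = ("<html><head><title>Quality content guide</title></head>"
        "<body><h1>Content</h1><p>Quality content wins. "
        "<a href='https://example.org'>source</a></p></body></html>")
print(audit_page(html, "content"))
```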
Abstract:
There are three key driving forces behind the development of Internet Content Management Systems (CMS): a desire to manage the explosion of content, a desire to provide structure and meaning to content in order to make it accessible, and a desire to work collaboratively to manipulate content in some meaningful way. Yet the traditional CMS has been unable to meet the last of these requirements, often failing to provide sufficient tools for collaboration in a distributed context. Peer-to-Peer (P2P) systems are networks in which every node is an equal participant (whether transmitting data, exchanging content, or invoking services) and there is no centralised administrative or coordinating authority. P2P systems are inherently more scalable than equivalent client-server implementations, as they tend to use resources at the edge of the network much more effectively. This paper details the rationale and design of a P2P middleware for collaborative content management.
Abstract:
This presentation was offered as part of the CUNY Library Assessment Conference, Reinventing Libraries: Reinventing Assessment, held at the City University of New York in June 2014.
Abstract:
Since multimedia data, such as images and videos, are far more expressive and informative than ordinary text-based data, people find it more attractive to communicate and express themselves with them. Additionally, with the rising popularity of social networking tools such as Facebook and Twitter, multimedia information retrieval can no longer be considered a solitary task; rather, people constantly collaborate with one another while searching for and retrieving information. But the very cause of the popularity of multimedia data, the huge and varied amount of information a single data object can carry, makes its management a challenging task. Multimedia data are commonly represented as multidimensional feature vectors and carry high-level semantic information. These two characteristics make them very different from traditional alphanumeric data, so trying to manage them with frameworks and rationales designed for primitive alphanumeric data is inefficient. An index structure is the backbone of any database management system, and the index structures present in existing relational database management frameworks cannot handle multimedia data effectively. Thus, in this dissertation, a generalized multidimensional index structure is proposed that accommodates both the atypical multidimensional representation and the semantic information carried by different multimedia data seamlessly within a single framework. Additionally, the dissertation investigates the evolving relationships among multimedia data in a collaborative environment and how such information can help customize the design of the proposed index structure when it is used to manage multimedia data in a shared environment. Extensive experiments were conducted to demonstrate the usability and better performance of the proposed framework over current state-of-the-art approaches.
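The abstract's premise, multimedia objects represented as multidimensional feature vectors that must be searched by similarity rather than exact match, can be sketched with a k-nearest-neighbor query over a KD-tree. This is a generic illustration, not the dissertation's proposed index structure; the data and dimensionality are invented.

```python
# Generic illustration of similarity search over multidimensional feature vectors
# (not the dissertation's proposed index structure).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
features = rng.random((10_000, 128))   # 10k multimedia objects, 128-d feature vectors
index = cKDTree(features)              # spatial index over the feature space

query = rng.random(128)                    # feature vector of a query image/video
distances, ids = index.query(query, k=5)   # 5 most similar objects
print("nearest object ids:", ids)
print("distances:", np.round(distances, 3))
```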
Abstract:
The growth of the Internet has made information search one of the most relevant activities in industry and one of the most active topics in research. The Internet is the largest information container in history, and the ease with which it generates new information creates new challenges in retrieving information and discerning which of it is more relevant than the rest. In parallel with the growth in the quantity of information, the way information is provided has also changed. One of the changes that has generated the most information traffic has been the emergence of social networks; we have seen how social networks can generate more traffic than search engines themselves. From this we can draw conclusions that allow us to take a new approach to the information retrieval problem: people place the most trust in information coming from known contacts. In this document we explore a possible change to classic search engines to bring them closer to the social side and acquire those social advantages.
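One way to picture bringing classic ranking "closer to the social side" is a re-ranking step that blends a textual relevance score with a score derived from known contacts. The blend weight and both scoring functions below are assumptions for illustration, not the document's proposal.

```python
# Illustrative sketch of socially-aware re-ranking (the blend weight and the two
# scoring inputs are assumptions, not the document's proposal).
def rerank(results, relevance, social, alpha=0.7):
    """Order results by a blend of textual relevance and social signal.

    relevance[doc] and social[doc] are assumed normalized to [0, 1];
    alpha controls how much classic relevance dominates the social signal.
    """
    def score(doc):
        return alpha * relevance[doc] + (1 - alpha) * social[doc]
    return sorted(results, key=score, reverse=True)

results = ["page_a", "page_b", "page_c"]
relevance = {"page_a": 0.9, "page_b": 0.6, "page_c": 0.5}
social = {"page_a": 0.1, "page_b": 0.9, "page_c": 0.8}   # e.g. shares by known contacts
print(rerank(results, relevance, social))   # page_b overtakes page_a via social signal
```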
Abstract:
Searches for field horizontal-branch (FHB) stars in the halo of the Galaxy have in the past been carried out by several techniques, such as objective-prism surveys and visual or infrared photometric surveys. By choosing adequate color criteria, it is possible to improve the efficiency of identifying bona fide FHB stars among the other objects that exhibit similar characteristics, such as main-sequence A-type stars, blue stragglers, subdwarfs, etc. In this work, we report the results of a spectroscopic survey carried out near the south Galactic pole intended to validate FHB stars originally selected from the HK objective-prism survey of Beers and colleagues, based on near-infrared color indices. A comparison between the stellar spectra obtained in this survey and theoretical stellar atmosphere models allows us to determine T_eff, log g, and [Fe/H] for 13 stars in the sample. Stellar temperatures were calculated from the measured (B-V)_0 when this measurement was available (16 stars). The color-index criteria adopted in this work are shown to correctly classify 30% of the sample as FHB and 25% as non-FHB (main-sequence stars and subdwarfs), whereas 40% could not be distinguished between FHB and main-sequence stars. We compare the efficacy of different color criteria in the literature intended to select FHB stars, and discuss the use of the Mg II 4481 line to estimate the metallicity.
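A color-index pre-selection of the kind described above can be sketched as a simple window cut on photometric colors. The numerical cut values below are placeholders chosen only for illustration; they are not the criteria actually adopted in this work.

```python
# Sketch of a color-index pre-selection for FHB candidates. The cut values are
# placeholders for illustration; they are NOT the criteria adopted in this work.
def select_fhb_candidates(stars, bv_range=(0.0, 0.3), jk_max=0.15):
    """Keep stars whose (B-V)_0 and (J-K) colors fall inside assumed FHB-like windows."""
    candidates = []
    for star in stars:
        bv_ok = bv_range[0] <= star["B-V_0"] <= bv_range[1]
        jk_ok = star["J-K"] <= jk_max
        if bv_ok and jk_ok:
            candidates.append(star["id"])
    return candidates

stars = [
    {"id": "HK-001", "B-V_0": 0.12, "J-K": 0.05},   # FHB-like colors
    {"id": "HK-002", "B-V_0": 0.65, "J-K": 0.40},   # too red: likely subdwarf/MS star
]
print(select_fhb_candidates(stars))   # ['HK-001']
```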
Abstract:
We announce the discovery of the transiting planet CoRoT-13b. Ground-based follow-up observations with CFHT and IAC80 confirmed the CoRoT observations. The mass of the planet was measured with the HARPS spectrograph, and the properties of the host star were obtained by analyzing HIRES spectra from the Keck telescope. It is a hot Jupiter-like planet with an orbital period of 4.04 days, 1.3 Jupiter masses, 0.9 Jupiter radii, and a density of 2.34 g cm^-3. It orbits a G0V star with T_eff = 5945 K, M_star = 1.09 M_Sun, R_star = 1.01 R_Sun, solar metallicity, a lithium content of +1.45 dex, and an estimated age of between 0.12 and 3.15 Gyr. The lithium abundance of the star is consistent with its effective temperature, activity level, and the age range derived from the stellar analysis. The density of the planet is extreme for its mass and implies that heavy elements are present, with a mass of between about 140 and 300 M_Earth.
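The quoted bulk density follows directly from the reported mass and radius via rho = M / ((4/3) * pi * R^3). The short check below uses standard Jupiter constants and is purely illustrative.

```python
# Consistency check of CoRoT-13b's bulk density from the reported mass and radius:
# rho = M / ((4/3) * pi * R^3), using standard Jupiter constants.
import math

M_JUP_KG = 1.898e27   # Jupiter mass in kg
R_JUP_M = 6.9911e7    # Jupiter radius in m

mass = 1.3 * M_JUP_KG          # reported planet mass
radius = 0.9 * R_JUP_M         # reported planet radius
volume = (4.0 / 3.0) * math.pi * radius ** 3
density_g_cm3 = (mass / volume) / 1000.0   # kg/m^3 -> g/cm^3

print(f"{density_g_cm3:.2f} g/cm^3")   # ~2.4 g/cm^3, consistent with the quoted 2.34
```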
Abstract:
We have developed a new procedure to search for carbon-enhanced metal-poor (CEMP) stars from the Hamburg/ESO (HES) prism-survey plates. This method employs an extended line index for the CH G band, which we demonstrate to have superior performance when compared to the narrower G-band index formerly employed to estimate G-band strengths for these spectra. Although CEMP stars have been found previously among candidate metal-poor stars selected from the HES, the selection on metallicity undersamples the population of intermediate-metallicity CEMP stars (-2.5 <= [Fe/H] <= -1.0); such stars are of importance for constraining the onset of the s-process in metal-deficient asymptotic giant branch stars (thought to be associated with the origin of carbon for roughly 80% of CEMP stars). The new candidates also include substantial numbers of warmer carbon-enhanced stars, which were missed in previous HES searches for carbon stars due to selection criteria that emphasized cooler stars. A first subsample, biased toward brighter stars (B < 15.5), has been extracted from the scanned HES plates. After visual inspection (to eliminate spectra compromised by plate defects, overlapping spectra, etc., and to carry out rough spectral classifications), a list of 669 previously unidentified candidate CEMP stars was compiled. Follow-up spectroscopy for a pilot sample of 132 candidates was obtained with the Goodman spectrograph on the SOAR 4.1 m telescope. Our results show that most of the observed stars lie in the targeted metallicity range, and possess prominent carbon absorption features at 4300 angstrom. The success rate for the identification of new CEMP stars is 43% (13 out of 30) for [Fe/H] < -2.0. For stars with [Fe/H] < -2.5, the ratio increases to 80% (four out of five objects), including one star with [Fe/H] < -3.0.
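In general terms, a line index for the CH G band compares the mean flux in a band centered on the feature (near 4300 angstrom) with the mean flux in flanking continuum bands. The sketch below is a generic version with band edges chosen only for illustration; it is not the extended index actually defined by the authors.

```python
# Generic line-index sketch for the CH G band near 4300 Angstrom. The band edges are
# illustrative only; they are not the extended index defined in this work.
import numpy as np

def g_band_index(wavelength, flux,
                 feature=(4290.0, 4320.0),
                 continuum=((4230.0, 4260.0), (4350.0, 4380.0))):
    """Return -2.5*log10(mean feature flux / mean continuum flux), in magnitudes."""
    wavelength = np.asarray(wavelength)
    flux = np.asarray(flux)
    in_feature = (wavelength >= feature[0]) & (wavelength <= feature[1])
    in_cont = np.zeros_like(in_feature)
    for lo, hi in continuum:
        in_cont |= (wavelength >= lo) & (wavelength <= hi)
    return -2.5 * np.log10(flux[in_feature].mean() / flux[in_cont].mean())

# Synthetic spectrum with an absorption dip at the G band: the index comes out positive,
# and a carbon-enhanced star (deeper dip) would yield a larger index.
wl = np.linspace(4200.0, 4400.0, 2000)
fl = np.ones_like(wl) - 0.4 * np.exp(-0.5 * ((wl - 4305.0) / 10.0) ** 2)
print(f"G-band index ~ {g_band_index(wl, fl):.2f} mag")
```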
Abstract:
Introduction: Internet users are increasingly using the worldwide web to search for information relating to their health. This situation makes it necessary to create specialized tools capable of supporting users in their searches. Objective: To apply and compare strategies developed to investigate the use of the Portuguese version of Medical Subject Headings (MeSH) for constructing an automated classifier of Brazilian Portuguese-language web content as within or outside the field of healthcare, focusing on the lay public. Methods: 3658 Brazilian web pages were used to train the classifier and 606 Brazilian web pages were used to validate it. The proposed strategies were constructed using content-based vector methods for text classification, with Naive Bayes used to classify the vector patterns whose features were obtained through the proposed strategies. Results: A strategy named InDeCS was developed specifically to adapt MeSH to the problem at hand. This approach achieved the best accuracy for this pattern classification task (sensitivity, specificity, and area under the ROC curve all equal to 0.94). Conclusions: Because of the significant results achieved by InDeCS, the tool has been successfully applied to the Brazilian healthcare search portal known as Busca Saude. Furthermore, it could be shown that MeSH yields important results when used for classifying web content aimed at the lay public. This study also showed that MeSH was able to map out mutable, non-deterministic characteristics of the web.
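A minimal sketch of the kind of Naive Bayes text classification described above, using scikit-learn's MultinomialNB over bag-of-words vectors. The tiny training set and labels are invented placeholders, and the MeSH-based feature engineering of InDeCS is not reproduced here.

```python
# Minimal Naive Bayes text-classification sketch in the spirit of the setup above
# (health vs. non-health pages). The documents and labels are invented placeholders;
# the MeSH-based InDeCS feature engineering is not reproduced here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_pages = [
    "symptoms and treatment of diabetes in adults",
    "vaccination schedule and flu prevention tips",
    "best travel destinations for the summer holidays",
    "stock market analysis and investment strategies",
]
train_labels = ["health", "health", "other", "other"]

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(train_pages, train_labels)

print(classifier.predict(["flu symptoms treatment and prevention"]))   # ['health']
print(classifier.predict(["stock investment and travel strategies"]))  # ['other']
```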