982 results for Web-Resource
Abstract:
This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and content to build new web applications. However, developers face a problem of information overload when searching for appropriate services or resources to combine. To help overcome this problem, a framework is defined for the discovery of services and resources. In this framework, three levels are defined at which discovery is performed: the content, service and agent levels. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as screen scraping in the literature, is the content discovery of the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described content can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to the content discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as content, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web. The agent level involves the orchestration of the discovery of services and content. At this level, agent rules allow specifying behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that introspect the data and services discovered from the web, as well as the knowledge present in service and content discovery rules, to anticipate the content and services to be found on specific web resources. By defining plans, an agent can be configured to target specific resources. The discovery framework has been evaluated in different scenarios, each covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered from component repositories on the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web. The main contributions of the thesis are the unified discovery framework, which allows configuring agents to perform automated tasks; a scraping ontology defined for the construction of mappings for scraping web resources; a novel first-order logic rule induction algorithm for the automated construction and maintenance of these mappings from the visual information in web resources; and a common unified model for the discovery of services, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the Scraping Ontology, extensions of the agent model, and constructing a base of discovery rules.
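As a minimal illustrative sketch (not the thesis's actual rule language or scraping ontology), a content discovery rule can be thought of as a mapping from CSS-selected HTML fragments to semantic terms; the rule format, the schema.org vocabulary and the apply_rule helper below are assumptions for illustration only.

# Illustrative sketch only: a content-level "discovery rule" that maps CSS-selected
# HTML fragments to semantic entities. Rule fields and vocabulary are hypothetical.
from bs4 import BeautifulSoup  # assumed available (pip install beautifulsoup4)

SCHEMA = "http://schema.org/"

# A discovery rule: which fragments of a representation map onto which semantic property.
news_item_rule = {
    "scope": "article.news-item",            # one semantic entity per matched node
    "type": SCHEMA + "NewsArticle",
    "properties": {
        SCHEMA + "headline": "h2.title",
        SCHEMA + "datePublished": "time",
        SCHEMA + "url": ("a.permalink", "href"),  # (selector, attribute) pair
    },
}

def apply_rule(html: str, rule: dict) -> list[dict]:
    """Apply a content discovery rule to an HTML representation, returning semantic entities."""
    soup = BeautifulSoup(html, "html.parser")
    entities = []
    for node in soup.select(rule["scope"]):
        entity = {"@type": rule["type"]}
        for prop, spec in rule["properties"].items():
            selector, attr = spec if isinstance(spec, tuple) else (spec, None)
            target = node.select_one(selector)
            if target is not None:
                entity[prop] = target.get(attr) if attr else target.get_text(strip=True)
        entities.append(entity)
    return entities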
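Similarly, service probing can be pictured as applying a service discovery rule to the content obtained from a REST interaction. The sketch below reuses the hypothetical apply_rule helper above; the rule fields and the probe function are likewise illustrative assumptions rather than the thesis's actual model.

# Illustrative sketch only: "service probing" as a predicate over content discovered
# in a REST interaction. Feature names and the probe() helper are hypothetical.
import requests  # assumed available

search_service_rule = {
    "feature": "search",
    # The rule fires if probing the URL template with a sample query yields entities.
    "url_template": "{base}?q={query}",
    "expects_type": "http://schema.org/NewsArticle",
}

def probe(base_url: str, rule: dict, content_rule: dict) -> bool:
    """Probe a REST resource and decide whether it offers the described service feature."""
    url = rule["url_template"].format(base=base_url, query="test")
    response = requests.get(url, timeout=10)
    # Content-level discovery on the returned representation (apply_rule from the sketch above).
    entities = apply_rule(response.text, content_rule)
    return any(entity.get("@type") == rule["expects_type"] for entity in entities)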
Abstract:
Compile-time program analysis techniques can be applied to Web service orchestrations to prove or check various properties. In particular, service orchestrations can be subjected to resource analysis, in which safe approximations of upper and lower resource usage bounds are deduced. A uniform analysis can be performed simultaneously for different generalized resources that can be directly correlated with cost- and performance-related quality attributes, such as invocations of partners, network traffic, number of activities, iterations, and data accesses. The resulting safe upper and lower bounds do not depend on probabilistic assumptions, and are expressed as functions of the size or length of data components of an initiating message, using a fine-grained structured data model that corresponds to the XML style of information structuring. The analysis is performed by transforming a BPEL-like representation of an orchestration into an equivalent program in another programming language for which the appropriate analysis tools already exist.
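As a hypothetical example of the kind of bounds produced (not taken from the paper), an orchestration that invokes a partner once per element of a list l carried in the initiating message, plus one closing notification, would admit safe invocation bounds of the form

\[ \mathrm{inv}_{lb}(|l|) = |l| \;\le\; \mathrm{invocations}(|l|) \;\le\; |l| + 1 = \mathrm{inv}_{ub}(|l|) \]

i.e., both bounds are functions of the length |l| of a data component of the input message rather than probabilistic estimates.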
Abstract:
Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as the Internet of Things (IoT) and the Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have a solid understanding of specific platforms and web technologies. In order to alleviate this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at the hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym.
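As a rough illustrative sketch only (ROOD's actual metamodel, tooling and generated artefacts are not described here), model-driven generation of this kind can be pictured as turning a platform-independent resource description into a web-facing stub; the model fields and the generate_handler helper below are assumptions for illustration.

# Illustrative sketch only: a platform-independent resource model and a trivial generator.
heart_rate_sensor = {
    "name": "heart_rate",
    "kind": "sensor",
    "unit": "bpm",
    "path": "/gym/locker-room/heart-rate",
}

def generate_handler(model: dict) -> str:
    """Emit a minimal HTTP handler stub for a modelled resource (illustration only)."""
    return (
        f"GET {model['path']} -> "
        f"{{'resource': '{model['name']}', 'unit': '{model['unit']}', 'value': <{model['kind']} reading>}}"
    )

print(generate_handler(heart_rate_sensor))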
Abstract:
How to create or integrate large Smart Spaces (considered as mash-ups of sensors and actuators) into the paradigm of the 'Web of Things' has been the motivation of many recent works. A cutting-edge approach deals with developing and deploying web-enabled embedded devices with two major objectives: 1) to integrate sensor and actuator technologies into everyday objects, and 2) to allow a diversity of devices to plug into the Internet. Currently, developers who want to use this Internet-oriented approach need to have a solid understanding of sensor platforms and semantic technologies. In this paper we propose a Resource-Oriented and Ontology-Driven Development (ROOD) methodology, based on Model Driven Architecture (MDA), to facilitate the development and deployment of Smart Spaces for any developer. Early evaluations of the ROOD methodology have been successfully accomplished through a partial deployment of a Smart Hotel.
Abstract:
This paper presents a focused crawler to get Semantic Web resources (CSR). Structured web data are available in formats such as the Extensible Markup Language (XML), the Resource Description Framework (RDF) and the Web Ontology Language (OWL) that can be used for processing. One of the main challenges in manually searching for and downloading semantic web resources is that this task consumes a lot of time. Our research work proposes a focused crawler which allows downloading these resources automatically and storing them on disk in order to have a collection that can be used for data processing. CSR consists of three layers: (a) the User Interface Layer, (b) the Focus Crawler Layer and (c) the Base Crawler Layer. CSR uses the Shark-Search method as its selection policy. CSR was evaluated in two experiments. The first one started on December 15, 2012 at 7:11 am and ended on December 16, 2012 at 4:01, obtaining 448,123,537 bytes of data. CSR ended by itself after analyzing 80,4375 seeds with an unlimited depth. CSR got 16,576 semantic resource files, of which 89% were RDF, 10% were XML and 1% were OWL. The second one was based on the Web Data Commons work of the Research Group Data and Web Science at the University of Mannheim and the Institute AIFB at the Karlsruhe Institute of Technology. It began at 4:46 am on June 2, 2013 and ended at 1:37 am on June 9, 2013. After 162.51 hours of execution the result was 285,279 semantic resources, where XML resources predominated with 99%, and OWL and RDF accounted for 1% each.
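As an illustrative sketch of the selection policy only (the decay factor, similarity measure and frontier handling are simplified assumptions, not CSR's implementation), a Shark-Search-style focused crawler scores each outgoing link by combining the parent page's relevance with the relevance of the link's own context:

# Illustrative sketch only: simplified Shark-Search-style scoring for a focused crawler.
import heapq

def similarity(text: str, query: str) -> float:
    """Crude term-overlap similarity in [0, 1]; a real crawler would use TF-IDF or better."""
    t, q = set(text.lower().split()), set(query.lower().split())
    return len(t & q) / len(q) if q else 0.0

def score_child(parent_score: float, anchor_text: str, query: str, decay: float = 0.5) -> float:
    """Shark-Search idea: a link inherits part of its parent's relevance plus its context's."""
    return decay * parent_score + (1.0 - decay) * similarity(anchor_text, query)

# Frontier as a max-priority queue (heapq is a min-heap, so scores are negated).
frontier = []
heapq.heappush(frontier, (-1.0, "http://example.org/seed"))  # seed with maximal priority
while frontier:
    neg_score, url = heapq.heappop(frontier)
    # fetch(url), extract links, then for each (anchor_text, child_url):
    #     heapq.heappush(frontier, (-score_child(-neg_score, anchor_text, query), child_url))
    break  # sketch only: stop after the seed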
Abstract:
Multiuser videoconferencing and real-time collaboration systems allow users to communicate using video, audio and data streams. These systems have historically been expensive to obtain and maintain. Over the last few decades, technological breakthroughs have mitigated those costs and popularized real-time video communication, allowing its use in environments such as education or health. The last big evolutionary leap forward has been the transition of these types of applications towards the Web.
Several technologies have allowed this journey to the Web browser. Firstly, Rich Internet Applications (RIAs) enable the creation of dynamic Web pages that defy the classical request-response interaction and provide an experience similar to their native counterparts. On the other hand, Cloud Computing brings the leasing of virtualized hardware resources in a pay-per-use model and, with it, better scalability in resource-demanding services. However, as with every change, this evolution imposes a set of challenges on existing videoconferencing solutions. This dissertation proposes a set of architectures, mechanisms and algorithms that aim to adapt multi-conferencing systems to the Web platform, taking into account the variety of devices and access networks that come with it. To this end, this thesis starts with a study of the requirements that must be met by new Web videoconferencing systems. The result of this study is the design, development and implementation of a new videoconferencing service that provides advanced collaboration to its users through video and audio communication as well as desktop sharing. After this, a new communication system between Web and native applications is presented. This system proposes adaptation mechanisms to bridge the two worlds, providing a seamless integration that is transparent to users, who can now access the powerful native application via an easy Web interface. The next step is to identify the main challenges posed by multi-conferencing on small devices (smartphones) with heterogeneous access networks. This dissertation proposes a mechanism that combines transcoding and adaptive quality algorithms to overcome those limitations. A second iteration in this dissertation is motivated by WebRTC, which appears as a disruptive technology enabling new real-time communication possibilities in browsers. A new mechanism for flexible videoconferencing server scalability is presented. This mechanism aims to address the strong scalability requirements of the Web environment by taking advantage of Cloud Computing. Finally, the dissertation discusses the results obtained throughout the study, capturing the evolution of Web videoconferencing systems.
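As an illustrative sketch of the kind of quality-adaptation decision, combined with server-side transcoding, described for small devices on heterogeneous networks (the bitrate ladder, thresholds and pick_layer helper are assumptions, not the thesis's algorithm):

# Illustrative sketch only: choose a transcoding target layer from an estimated bandwidth.
LADDER = [  # (label, video bitrate in kbps, resolution)
    ("low", 150, "320x240"),
    ("medium", 500, "640x480"),
    ("high", 1500, "1280x720"),
]

def pick_layer(estimated_kbps: float, headroom: float = 0.8):
    """Choose the highest layer whose bitrate fits within the estimated bandwidth."""
    usable = estimated_kbps * headroom
    chosen = LADDER[0]
    for layer in LADDER:
        if layer[1] <= usable:
            chosen = layer
    return chosen  # the server transcodes the conference mix down to this layer

print(pick_layer(700))  # -> ('medium', 500, '640x480') under these assumed thresholds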
Abstract:
MetaFam is a comprehensive relational database of protein family information. This web-accessible resource integrates data from several primary sequence and secondary protein family databases. By pooling together the information from these disparate sources, MetaFam is able to provide the most complete protein family sets available. Users are able to explore the interrelationships among these primary and secondary databases using a powerful graphical visualization tool, MetaFamView. Additionally, users can identify corresponding sequence entries among the sequence databases, obtain a quick summary of corresponding families (and their sequence members) among the family databases, and even attempt to classify their own unassigned sequences. Hypertext links to the appropriate source databases are provided at every level of navigation. Global family database statistics and information are also provided. Public access to the data is available at http://metafam.ahc.umn.edu/.
Abstract:
The Mouse Tumor Biology (MTB) Database serves as a curated, integrated resource for information about tumor genetics and pathology in genetically defined strains of mice (i.e., inbred, transgenic and targeted mutation strains). Sources of information for the database include the published scientific literature and direct data submissions by the scientific community. Researchers access MTB using Web-based query forms and can use the database to answer such questions as ‘What tumors have been reported in transgenic mice created on a C57BL/6J background?’, ‘What tumors in mice are associated with mutations in the Trp53 gene?’ and ‘What pathology images are available for tumors of the mammary gland regardless of genetic background?’. MTB has been available on the Web since 1998 from the Mouse Genome Informatics web site (http://www.informatics.jax.org). We have recently implemented a number of enhancements to MTB including new query options, redesigned query forms and results pages for pathology and genetic data, and the addition of an electronic data submission and annotation tool for pathology data.
Abstract:
The Protein Information Resource, in collaboration with the Munich Information Center for Protein Sequences (MIPS) and the Japan International Protein Information Database (JIPID), produces the most comprehensive and expertly annotated protein sequence database in the public domain, the PIR-International Protein Sequence Database. To provide timely and high quality annotation and promote database interoperability, the PIR-International employs rule-based and classification-driven procedures based on controlled vocabulary and standard nomenclature and includes status tags to distinguish experimentally determined from predicted protein features. The database contains about 200 000 non-redundant protein sequences, which are classified into families and superfamilies and their domains and motifs identified. Entries are extensively cross-referenced to other sequence, classification, genome, structure and activity databases. The PIR web site features search engines that use sequence similarity and database annotation to facilitate the analysis and functional identification of proteins. The PIR-International databases and search tools are accessible on the PIR web site at http://pir.georgetown.edu/ and at the MIPS web site at http://www.mips.biochem.mpg.de. The PIR-International Protein Sequence Database and other files are also available by FTP.
Abstract:
The BioKnowledge Library is a relational database and web site (http://www.proteome.com) composed of protein-specific information collected from the scientific literature. Each Protein Report on the web site summarizes and displays published information about a single protein, including its biochemical function, role in the cell and in the whole organism, localization, mutant phenotype and genetic interactions, regulation, domains and motifs, interactions with other proteins and other relevant data. This report describes four species-specific volumes of the BioKnowledge Library, concerned with the model organisms Saccharomyces cerevisiae (YPD), Schizosaccharomyces pombe (PombePD) and Caenorhabditis elegans (WormPD), and with the fungal pathogen Candida albicans (CalPD™). Protein Reports of each species are unified in format, easily searchable and extensively cross-referenced between species. The relevance of these comprehensively curated resources to analysis of proteins in other species is discussed, and is illustrated by a survey of model organism proteins that have similarity to human proteins involved in disease.
Abstract:
One challenge presented by large-scale genome sequencing efforts is effective display of uniform information to the scientific community. The Comprehensive Microbial Resource (CMR) contains robust annotation of all complete microbial genomes and allows for a wide variety of data retrievals. The bacterial information has been placed on the Web at http://www.tigr.org/CMR for retrieval using standard web browsing technology. Retrievals can be based on protein properties such as molecular weight or hydrophobicity, GC-content, functional role assignments and taxonomy. The CMR also has special web-based tools to allow data mining using pre-run homology searches, whole genome dot-plots, batch downloading and traversal across genomes using a variety of datatypes.
Abstract:
The Homeodomain Resource is an annotated collection of non-redundant protein sequences, three-dimensional structures and genomic information for the homeodomain protein family. Release 3.0 contains 795 full-length homeodomain-containing sequences, 32 experimentally-derived structures and 143 homeobox loci implicated in human genetic disorders. Entries are fully hyperlinked to facilitate easy retrieval of the original records from source databases. A simple search engine with a graphical user interface is provided to query the component databases and assemble customized data sets. A new feature for this release is the addition of DNA recognition sites for all human homeodomain proteins described in the literature. The Homeodomain Resource is freely available through the World Wide Web at http://genome.nhgri.nih.gov/homeodomain.
Abstract:
As the number of protein folds is quite limited, a mode of analysis that will be increasingly common in the future, especially with the advent of structural genomics, is to survey and re-survey the finite parts list of folds from an expanding number of perspectives. We have developed a new resource, called PartsList, that lets one dynamically perform these comparative fold surveys. It is available on the web at http://bioinfo.mbb.yale.edu/partslist and http://www.partslist.org. The system is based on the existing fold classifications and functions as a form of companion annotation for them, providing ‘global views’ of many already completed fold surveys. The central idea in the system is that of comparison through ranking; PartsList will rank the approximately 420 folds based on more than 180 attributes. These include: (i) occurrence in a number of completely sequenced genomes (e.g. it will show the most common folds in the worm versus yeast); (ii) occurrence in the structure databank (e.g. most common folds in the PDB); (iii) both absolute and relative gene expression information (e.g. most changing folds in expression over the cell cycle); (iv) protein–protein interactions, based on experimental data in yeast and comprehensive PDB surveys (e.g. most interacting fold); (v) sensitivity to inserted transposons; (vi) the number of functions associated with the fold (e.g. most multi-functional folds); (vii) amino acid composition (e.g. most Cys-rich folds); (viii) protein motions (e.g. most mobile folds); and (ix) the level of similarity based on a comprehensive set of structural alignments (e.g. most structurally variable folds). The integration of whole-genome expression and protein–protein interaction data with structural information is a particularly novel feature of our system. We provide three ways of visualizing the rankings: a profiler emphasizing the progression of high and low ranks across many pre-selected attributes, a dynamic comparer for custom comparisons and a numerical rankings correlator. These allow one to directly compare very different attributes of a fold (e.g. expression level, genome occurrence and maximum motion) in the uniform numerical format of ranks. This uniform framework, in turn, highlights the way that the frequency of many of the attributes falls off with approximate power-law behavior (i.e. according to V^(-b), for attribute value V and constant exponent b), with a few folds having large values and most having small values.
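As an illustrative sketch of the 'comparison through ranking' idea (the attribute values below are invented, not PartsList data), heterogeneous fold attributes become directly comparable once each is converted to ranks:

# Illustrative sketch only: comparing different fold attributes in the uniform format of ranks.
genome_occurrence = {"TIM-barrel": 95, "Ig-like": 80, "Rossmann": 60, "SH3": 12}
expression_change = {"TIM-barrel": 0.4, "Ig-like": 2.1, "Rossmann": 1.3, "SH3": 0.2}

def to_ranks(values: dict) -> dict:
    """Rank 1 = largest value; ties broken by insertion order (good enough for a sketch)."""
    ordered = sorted(values, key=values.get, reverse=True)
    return {fold: rank for rank, fold in enumerate(ordered, start=1)}

r1, r2 = to_ranks(genome_occurrence), to_ranks(expression_change)
for fold in r1:
    print(fold, r1[fold], r2[fold])   # directly comparable despite different units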
MEDLINEplus: building and maintaining the National Library of Medicine's consumer health Web service
Abstract:
MEDLINEplus is a Web-based consumer health information resource, made available by the National Library of Medicine (NLM). MEDLINEplus has been designed to provide consumers with a well-organized, selective Web site facilitating access to reliable full-text health information. In addition to full-text resources, MEDLINEplus directs consumers to dictionaries, organizations, directories, libraries, and clearinghouses for answers to health questions. For each health topic, MEDLINEplus includes a preformulated MEDLINE search created by librarians. The site has been designed to match consumer language to medical terminology. NLM has used advances in database and Web technologies to build and maintain MEDLINEplus, allowing health sciences librarians to contribute remotely to the resource. This article describes the development and implementation of MEDLINEplus, its supporting technology, and plans for future development.
Abstract:
The Internet is an indispensable working resource for the online journalist. Professionals in this field use the network as a source of information and to create, edit and distribute news content. These new ways of working, supported by telematic environments, are being extended to Spanish universities, where a process of adaptation of methodologies, techniques and tools is taking place in Journalism studies. To ease this adjustment process for teachers of online journalism, this research aims to establish a typology of the use of these online tools according to the objectives set out in the course guides. To this end, the syllabi of the subjects related to the teaching of online journalism in the Comunidad Valenciana have been studied. The Web tools most used by teachers have been selected and organized in a table according to criteria of purpose, action and type of interaction that the student performs with the tools, together with the online-journalism objectives that can be achieved with each of them. The table offers general information on each tool. The aim is to provide teachers with a selection criterion for choosing, and exploring in depth, the teaching spaces and tools best suited to the competences that students must acquire during the teaching and learning process in these studies.