994 results for Apis mellifera (honey bees)


Relevance: 10.00%

Abstract:

New biologically active β-lactams were designed and synthesized with the aim of developing novel antibiotics and enzymatic inhibitors directed toward specific targets. Within work directed at the synthesis of mimetics of the RGD (Arg-Gly-Asp) sequence able to interact with αvβ3- and α5β1-type integrins, new activators were developed and their Structure-Activity Relationship (SAR) analysis was deepened, extending their activity range toward the α4β1 isoform. Moreover, to obtain compounds active against both bacterial infections and the pulmonary conditions of cystic fibrosis patients, new β-lactam candidates were studied. Among the extensive library of β-lactams prepared, mainly with dual antioxidant and antibacterial activity, a single lead was identified for pharmacological testing in vivo. Its synthesis was optimized up to the gram scale, and a pretreatment method and an HPLC-MS/MS analytical protocol for sub-nanomolar quantification were developed. Furthermore, replacement of the acetoxy group of 4-acetoxy-azetidinone derivatives with different nucleophiles in aqueous media was studied. A phosphate group was introduced and its reactivity exploited with different hydroxyapatites, yielding biomaterials with multiple biological activities. Following the same type of reactivity, a small series of molecules with a hybrid β-lactam/retinoic structure was synthesized as epigenetic regulators. Through their interaction with HDACs, two compounds were identified, respectively, as an inhibitor of cell proliferation and as a differentiating agent on stem cells. Additionally, in collaboration with Professor L. De Cola at ISIS, University of Strasbourg, new photochemically active β-lactam Pt(II) complexes were designed and synthesized for use as bioprobes or theranostics. Finally, the preparation of new chiral proline-derived α-aminonitriles through an enantioselective Strecker reaction was set up and optimized, and a chemo-enzymatic oxidative method was developed for selectively converting alcohols to aldehydes or acids, and amines to the corresponding aldehydes, amides or imines. Moreover, enzymes and other green chemistry methodologies were used to prepare Active Pharmaceutical Ingredients (APIs).

Relevance: 10.00%

Abstract:

This doctorate was funded by the Regione Emilia Romagna, within a Spinner PhD project coordinated by the University of Parma and involving the universities of Bologna, Ferrara and Modena. The aims of the project were: production of polymorphs, solvates, hydrates and co-crystals of active pharmaceutical ingredients (APIs) and agrochemicals by green chemistry methods; and optimization of molecular and crystalline forms of APIs and pesticides with respect to activity, bioavailability and patentability. In recent decades, a growing interest in the solid-state properties of drugs, in addition to their solution chemistry, has blossomed. Achieving the desired and/or the more stable polymorph during the production process can be a challenge for industry. The study of crystalline forms can be a valuable step toward producing new polymorphs and/or co-crystals with better physico-chemical properties, such as solubility, permeability, thermal stability, habit, bulk density, compressibility, friability, hygroscopicity and dissolution rate, with a view to potential industrial applications. Selected APIs were studied and the relationship between their crystal structure and properties was investigated, both in the solid state and in solution. Polymorph screening and the synthesis of solvates and molecular/ionic co-crystals were performed according to green chemistry principles. Part of this project was developed in collaboration with chemical/pharmaceutical companies such as BASF (Germany) and UCB (Belgium). We focused on the optimization of the conditions and parameters of crystallization processes (additives, concentration, temperature) and on the synthesis and characterization of ionic co-crystals. Moreover, during a four-month research period in the laboratories of Professor Nair Rodriguez-Hormedo (University of Michigan), the equilibrium stability in aqueous solution of ionic co-crystals (ICCs) of the API piracetam was investigated, to understand the relationship between their solid-state and solution properties, with a view to the future design of new crystalline drugs with predefined solid and solution properties.

Relevance: 10.00%

Abstract:

Polylactide (PLA) is a biodegradable polymer that has been used in particle form for drug release due to its biocompatibility, tailorable degradation kinetics, and desirable mechanical properties. Active pharmaceutical ingredients (APIs) may be either dissolved or encapsulated within these biomaterials to create micro- or nanoparticles. Delivery of an API within fine particles may overcome solubility or stability issues that can otherwise lead to early elimination or degradation of the API in a hostile biological environment. Furthermore, it is a promising method for controlling the rate of drug delivery and dosage. The goal of this project is to develop a simple and cost-effective device that allows us to produce monodisperse micro- and nanocapsules with controllable size and adjustable sheath thickness on demand. To achieve this goal, we studied dual-capillary electrospray and pulsed electrospray. Dual-capillary electrospray has received considerable attention in recent years due to its ability to create core-shell structures in a single step. However, it also makes it more difficult to control the inner and outer particle morphology, since two simultaneous flows are required. Conventional electrospraying has mainly been conducted using direct-current (DC) voltage, with little control over anything but the electrical potential. In contrast, control over the input voltage waveform (i.e. pulsing) offers greater control over the process variables. Poly(L-lactic acid) (PLLA) microspheres and microcapsules were successfully fabricated via pulsed-DC electrospray and dual-capillary electrospray, respectively. Core/shell combinations produced include water/PLLA, PLLA/poly(ethylene glycol) (PEG), and oleic acid/PLLA. In this study, we designed a novel high-voltage pulse-forming network and a set of new designs for coaxial electrospray nozzles. We also investigated the effect of the pulsed-voltage characteristics (e.g. pulse frequency, pulse amplitude and pulse width) on particle size and uniformity. We found that pulse frequency, pulse amplitude, pulse width, and combinations of these factors had a statistically significant effect on particle size. In addition, factors such as polymer concentration, solvent type, feed flow rate, collection method, temperature, and humidity can significantly affect the size and shape of the particles formed.
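As a rough illustration of the statistical analysis described (testing pulse factors and their combinations for effects on particle size), the following sketch runs a full-factorial ANOVA; the column names, factor levels and measurements are invented placeholders, not the study's data:

```python
# Illustrative full-factorial ANOVA of pulse settings vs. particle size.
# Data frame layout, factor levels and values are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "freq_hz":  [10, 10, 50, 50, 10, 10, 50, 50] * 3,
    "amp_kv":   [4,  8,  4,  8,  4,  8,  4,  8 ] * 3,
    "width_ms": [5,  5,  5,  5, 20, 20, 20, 20] * 3,
    "size_um":  [12.1, 9.8, 8.4, 7.1, 13.0, 10.2, 9.1, 7.6,
                 11.7, 10.1, 8.0, 7.4, 12.6, 9.9, 9.4, 7.2,
                 12.4, 9.5, 8.7, 6.9, 13.3, 10.5, 8.8, 7.9],
})

# Three-factor model including all interaction terms.
model = smf.ols("size_um ~ C(freq_hz) * C(amp_kv) * C(width_ms)", data=df).fit()
print(anova_lm(model, typ=2))
```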

Relevance: 10.00%

Abstract:

1. Pollinating insects provide crucial and economically important ecosystem services to crops and wild plants, but pollinators, particularly bees, are globally declining as a result of various driving factors, including the prevalent use of pesticides for crop protection. Sublethal pesticide exposure negatively impacts numerous pollinator life-history traits, but its influence on reproductive success remains largely unknown. Such information is pivotal, however, to our understanding of the long-term effects on population dynamics.
2. We investigated the influence of field-realistic trace residues of the routinely used neonicotinoid insecticides thiamethoxam and clothianidin in nectar substitutes on the entire life-time fitness performance of the red mason bee Osmia bicornis.
3. We show that chronic, dietary neonicotinoid exposure has severe detrimental effects on solitary bee reproductive output. Neonicotinoids did not affect adult bee mortality; however, monitoring of fully controlled experimental populations revealed that sublethal exposure resulted in almost 50% reduced total offspring production and a significantly male-biased offspring sex ratio.
4. Our data add to the accumulating evidence indicating that sublethal neonicotinoid effects on non-Apis pollinators are expressed most strongly in a rather complex, fitness-related context. Consequently, to fully mitigate long-term impacts on pollinator population dynamics, present pesticide risk assessments need to be expanded to include whole life-cycle fitness estimates, as demonstrated in the present study using O. bicornis as a model.

Relevance: 10.00%

Abstract:

This document describes a possible use of the YouReputation API. A mashup combining the YouReputation and Flickr APIs attempts to improve the visualization of reputation. The paper first gives an introduction to Web services and APIs and then explains the developed prototype, presenting an API that can easily be combined with other APIs to improve the representation of reputation terms and thereby enhance usability and design.
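As a rough illustration of such a mashup, the following sketch fetches reputation terms from a hypothetical YouReputation endpoint and looks up matching photos through the public Flickr photo-search method; endpoint paths, parameters and response fields are assumptions for illustration only:

```python
# Hypothetical mashup: get reputation terms for a name, then look up
# matching photos on Flickr to visualize each term. URLs, parameters and
# response fields are illustrative assumptions, not the real YouReputation API.
import requests

def reputation_terms(name: str) -> list[str]:
    # Placeholder for a YouReputation call; the actual API may differ.
    resp = requests.get("https://youreputation.example/api", params={"q": name})
    return [t["term"] for t in resp.json().get("terms", [])]

def flickr_photos(term: str, api_key: str) -> list[dict]:
    # Flickr's REST photo search, returning a few photos per term.
    resp = requests.get("https://api.flickr.com/services/rest/", params={
        "method": "flickr.photos.search", "api_key": api_key,
        "text": term, "format": "json", "nojsoncallback": 1, "per_page": 3,
    })
    return resp.json()["photos"]["photo"]

def mashup(name: str, api_key: str) -> dict:
    # One image set per reputation term, ready to render in the prototype UI.
    return {term: flickr_photos(term, api_key) for term in reputation_terms(name)}
```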

Relevance: 10.00%

Abstract:

Background: High and ultra-high dilutions of various starting materials, e.g. copper sulfate, Hypericum perforatum and sulfur, showed significant differences in ultraviolet light (UV) transmission from controls and amongst different dilution levels [1,2]. Verum and placebo globules of Aconitum napellus 30c or calcium carbonate/quercus e cortice 6x from the same packs as used in previous clinical trials, dissolved in water, could be distinguished by UV spectroscopy [3]. However, it was unclear whether the differences in UV absorbance originated from specific characteristics of the starting materials, from differences in the production of verum and placebo globules, and/or from other unknown interference factors.
Aims: The aim of this study was to investigate whether globules produced with high and ultra-high dilutions (6x, 12x, 30c, 200c, 200CF (centesimal discontinuous fluxion), 10,000CF) of various starting materials (Aconitum napellus, Atropa belladonna, phosphorus, sulfur, Apis mellifica, quartz) could be distinguished by UV spectroscopy.
Methodology: The globules were specially produced for this study by Spagyros AG (Gümligen, Switzerland) and differed only in the starting materials of the dilutions (but not in the batch of globules or ethanol used). Globules were dissolved in water at 10 mg/ml, in quadruplicate, approximately 22 h prior to the measurements. Absorbance of the samples in the UV range (190 to 340 nm) was measured in randomized order with a Shimadzu double-beam UV-1800 spectrophotometer equipped with an autosampler. Samples of each starting material were prepared and measured on 5 independent days. The daily variations of the spectrophotometer as well as the drift during the measurements were corrected for. The average absorbance from 200 to 340 nm was compared among the various starting materials within equal dilution levels using a Kruskal-Wallis test.
Results: Statistically significant differences were found among the 30c (Figure 1), 200c and 200CF dilutions of the various starting materials. No differences were found among the 6x, 12x and 10,000CF dilutions.
Conclusions: Globules prepared from high dilutions of various starting materials may show significantly different UV absorbance when dissolved in water.
References
[1] Wolf U, Wolf M, Heusser P, Thurneysen A, Baumgartner S. Homeopathic preparations of quartz, sulfur and copper sulfate assessed by UV-spectroscopy. Evid Based Complement Alternat Med. 2011;2011:692798.
[2] Klein SD, Sandig A, Baumgartner S, Wolf U. Differences in median ultraviolet light transmissions of serial homeopathic dilutions of copper sulfate, Hypericum perforatum, and sulfur. Evid Based Complement Alternat Med. 2013;2013:370609.
[3] Klein SD, Wolf U. Investigating homeopathic verum and placebo globules with ultraviolet spectroscopy. Forsch Komplementmed. 2013, accepted.
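As a rough illustration of the comparison described, the following sketch applies a Kruskal-Wallis test to average-absorbance values grouped by starting material within one dilution level; the numbers and group labels are invented for the example, not measured data:

```python
# Kruskal-Wallis comparison of average UV absorbance (200-340 nm) among
# starting materials at a single dilution level. Values are invented
# placeholders, not the study's measurements.
from scipy.stats import kruskal

absorbance_30c = {
    "Aconitum napellus": [0.112, 0.108, 0.115, 0.110, 0.113],
    "sulfur":            [0.098, 0.101, 0.097, 0.102, 0.099],
    "quartz":            [0.105, 0.104, 0.107, 0.103, 0.106],
}

statistic, p_value = kruskal(*absorbance_30c.values())
print(f"H = {statistic:.3f}, p = {p_value:.4f}")
```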

Relevance: 10.00%

Abstract:

Objective: Homeopathic globules are commonly used in clinical practice, while research focuses on liquid potencies. The sequential dilution and succussion in their production process have been proposed to change the physico-chemical properties of the solvent(s). It has been reported that aqueous potencies of various starting materials showed significant differences in ultraviolet light transmission compared to controls and between different dilution levels. The aim of the present study was to repeat and expand these experiments with homeopathic globules.
Methods: Globules were specially produced for this study by Spagyros AG (Gümligen, Switzerland) from 6 starting materials (Aconitum napellus, Atropa belladonna, phosphorus, sulfur, Apis mellifica, quartz) and at 6 dilution levels (6x, 12x, 30c, 200c, 200CF (centesimal discontinuous fluxion), 10,000CF). Native globules and globules impregnated with solvents were used as controls. Globules were dissolved in ultrapure water, and absorbance in the ultraviolet range was measured. The average absorbance from 200 to 340 nm was calculated and corrected for differences between measurement days and instrumental drift.
Results: Statistically significant differences were found for A. napellus, sulfur, and A. mellifica when the normalized average absorbance of the various dilution levels from the same starting material (including control and solvent-control globules) was compared. Additionally, absorbance within dilution levels was compared among the various starting materials; statistically significant differences were found among the 30c, 200c and 200CF dilutions.
Conclusion: This study has expanded previous findings from aqueous potencies to globules and may indicate that characteristics of aqueous high dilutions can be preserved and remain detectable in dissolved globules.
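As a rough illustration of the correction for measurement-day differences mentioned above, the following sketch subtracts each day's mean absorbance before comparing samples; the data layout and the correction by daily means are assumptions for illustration:

```python
# Correct average absorbance for measurement-day offsets by subtracting
# each day's mean, keeping only within-day contrasts. Layout and values
# are illustrative only.
import pandas as pd

df = pd.DataFrame({
    "day":        [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "sample":     ["verum", "control", "solvent"] * 3,
    "absorbance": [0.121, 0.117, 0.119, 0.128, 0.124, 0.125, 0.115, 0.111, 0.113],
})

df["absorbance_corrected"] = (
    df["absorbance"] - df.groupby("day")["absorbance"].transform("mean")
)
print(df.groupby("sample")["absorbance_corrected"].mean())
```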

Relevance: 10.00%

Abstract:

The density, species composition, and possible change in the status of pack ice seals within the Weddell Sea were investigated during the 1997/1998 summer cruise of the RV "Polarstern" (ANT-XV/3, PS48). Comparisons were made with previous surveys in the Weddell Sea, in which it was assumed that all seals were counted in a narrow strip on either side of the ship or aircraft. A total of 15 aerial censuses were flown during the period 23 January - 7 March 1998 in the area bounded by 07°08' and 45°33' West longitude. The censused area in the eastern Weddell Sea was largely devoid of pack ice, while a well-circumscribed pack ice field remained in the western Weddell Sea. A total of 3,636 (95.4%) crabeater seals, 21 (0.5%) Ross seals, 45 (1.2%) leopard seals and 111 (2.9%) Weddell seals were observed on the pack ice along a total of 1,356.57 linear nautical miles of transect line censused (244.2 nm² of censused strip area). The mean density of 21.16 seals/nm² over this 244.2 nm² area is the highest on record for crabeater seals, with densities of up to 411.7 seals/nm² found in small areas. The overall high density of seals (30.18 seals/nm²) recorded for the eastern Weddell Sea (27.46, 0.27, and 0.66 seals/nm² for crabeater, leopard and Weddell seals, respectively) is a consequence of the drastically reduced ice cover and the inverse relationship that exists between ice cover and seal densities. Ross seal densities (0.08 seals/nm²) were the lowest on record for the area. It is suggested that seals largely remain within the confines of the pack ice despite seasonal and annual changes in its distribution. Indications are that in 1998 El Niño manifested itself in the Weddell Sea, markedly influencing the density and distribution of pack ice seals.
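Strip-transect densities of this kind are simply the animals counted divided by the censused strip area (transect length times total strip width). A minimal sketch with hypothetical numbers, since the strip width used in the survey is not given in this summary:

```python
# Strip-transect density: seals counted divided by censused strip area.
# The strip width and counts below are hypothetical, not the survey's values.
def strip_density(count: int, transect_length_nm: float, strip_width_nm: float) -> float:
    """Seals per square nautical mile over the censused strip."""
    censused_area_nm2 = transect_length_nm * strip_width_nm
    return count / censused_area_nm2

# Example: 500 seals along 100 nm of transect with a 0.18 nm total strip width.
print(strip_density(count=500, transect_length_nm=100.0, strip_width_nm=0.18))  # ~27.8 seals/nm²
```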

Relevance: 10.00%

Abstract:

The initial idea for this project arose from the need to develop a software tool that would help students in an introductory linear algebra course grasp the concepts presented in the course, by assisting with calculations and by visually representing concepts and ideas. Some of the features the tool should provide are: symbolic computation, symbolic representation, an interactive graphical interface (point-and-click to perform operations and calculations, insertion of graphical elements via drag-and-drop from an element palette, schematic visual representation, 2D and 3D plotting...), persistence of the data model, etc. This phase of a project can be defined as the "what". The next step or phase of the project concerns its design, the "how": how to perform numerical computation, how to render mathematical symbols on screen, how to create an element palette... Libraries or programming APIs surely exist for all of these tasks; however, using them requires learning time from the programmer and a design for integrating the different libraries (version compatibility, communication mechanisms between them, configuration, etc.). The first can easily be solved by spending time studying the documentation, but it still takes time. The second also involves decisions about how to carry out the integration: it is not trivial to get to the point of drawing on screen, through a graphics API, a matrix that results from certain operations performed through a linear algebra API. Several linear algebra libraries are available to support the calculations, so it is easy to find a library or API with functions for matrix operations. What is not so easy is to find an API that lets the programmer define the mechanisms for displaying the matrix on screen or for the user to enter its values. It is in these latter tasks that the programmer is forced to spend most of the development time. The result of this project is a substantial simplification of this second phase, the "how", establishing a platform on which future developments can build to obtain high-quality results without having to worry about tasks unrelated to the program's logic.
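As a rough illustration of the integration problem described above (computing with one API and rendering the result with another), the following sketch combines SymPy for the matrix algebra with matplotlib for drawing the result on screen; the choice of libraries is an assumption for illustration, not the platform actually built in the project:

```python
# Minimal sketch of the integration problem: compute with a linear algebra
# API (SymPy) and render the result with a separate graphics API (matplotlib).
import sympy as sp
import matplotlib.pyplot as plt

A = sp.Matrix([[1, 2], [3, 4]])
B = sp.Matrix([[0, 1], [1, 0]])
C = A * B  # the computation itself is the easy part

# The glue code lives here: the computation API knows nothing about pixels,
# so the values must be handed to the plotting API cell by cell.
fig, ax = plt.subplots()
ax.set_axis_off()
table = ax.table(cellText=[[str(C[i, j]) for j in range(C.cols)]
                           for i in range(C.rows)],
                 loc="center", cellLoc="center")
table.scale(1, 2)
ax.set_title("C = A * B")
plt.show()
```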

Relevance: 10.00%

Abstract:

Social networks are highly relevant today: not only do they take up a large part of people's daily lives, they also serve millions of companies for advertising, among other things. The business side has joined the social network phenomenon, and the release of the APIs of some social networks has enabled the development of applications of all kinds and with different goals, such as this project. The project started from Ericsson's interest in a study of the Google+ API and in suggestions for adding value for telecommunications companies. It also complements the reference material available at Ericsson and two other projects on information retrieval from social networks, adding a number of options for the user in the application. To this end, an example of what can be obtained from social networks, mainly Twitter and Google+, was analysed and implemented. The first part of the project was a theoretical study of the origin of social networks, their development and their current state, analysing the main existing social networks and giving an overview of all of them. A state of the art was also compiled on a number of websites dedicated to exploiting the information available on the Internet. Subsequently, of all the social networks with available APIs, Google+ was chosen because it is a new social network still to be explored and improved, and Twitter because of the range of options and data that can be obtained from it. The APIs of both were studied and, with the information obtained, a prototype application was built that provides a set of useful functions based on data from these social networks. Finally, a simple interface was implemented through which account data can be accessed as if the user were on Twitter or Google+; with Twitter data it is also possible to run an advanced search with alerts, perform sentiment analysis, see the most retweeted posts among one's followers, and track and compare what is being said about two given topics. This project is intended to provide a general idea of everything related to social networks, the applications available for working with them, information on the Twitter and Google+ APIs, and a notion of what can be obtained from them.
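As a rough illustration of the Twitter-based functions mentioned (topic comparison and sentiment analysis), the following sketch scores and compares two topics; fetch_tweets() is a stand-in for a real Twitter API client and the word lists are placeholder assumptions, not the project's actual implementation:

```python
# Hypothetical sketch: score the sentiment of recent tweets about two topics
# and compare them. fetch_tweets() stands in for a real Twitter API client;
# the word lists are placeholders, not the project's lexicon.
from typing import List

POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def fetch_tweets(query: str) -> List[str]:
    # In the real application this would call the Twitter search API.
    return ["I love this, it is great", "awful service, really bad"]

def sentiment_score(text: str) -> int:
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def compare_topics(topic_a: str, topic_b: str) -> None:
    for topic in (topic_a, topic_b):
        scores = [sentiment_score(t) for t in fetch_tweets(topic)]
        avg = sum(scores) / len(scores) if scores else 0.0
        print(f"{topic}: average sentiment {avg:+.2f} over {len(scores)} tweets")

compare_topics("topic one", "topic two")
```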

Relevance: 10.00%

Abstract:

This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources to combine. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: content, service and agent. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, constitutes content discovery in the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that allow introspecting the discovered data and services from the web, as well as the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific resources. By defining plans, an agent can be configured to target specific resources. The discovery framework has been evaluated on different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers; here the framework was used for the discovery and extraction of news items from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered in component repositories on the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web. The main contributions of the thesis are the unified discovery framework, which allows configuring agents to perform automated tasks; a scraping ontology for the construction of mappings for scraping web resources; a novel first-order logic rule induction algorithm for the automated construction and maintenance of these mappings from the visual information in web resources; and a common, unified model for the discovery of services, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the scraping ontology, extensions of the agent model, and constructing a base of discovery rules.
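As a rough illustration of the content-discovery idea above, the following sketch maps pieces of an HTML representation onto semantic properties via simple selector-based discovery rules; the selectors, vocabulary and use of BeautifulSoup are illustrative assumptions, not the thesis's actual rule language:

```python
# Toy content-discovery rules: CSS-style selectors mapped to semantic
# properties, applied to an HTML representation to emit triples.
# Selectors and vocabulary are invented for illustration.
from bs4 import BeautifulSoup

DISCOVERY_RULES = {
    "schema:headline":      "article h1.title",
    "schema:author":        "article span.byline",
    "schema:datePublished": "article time",
}

HTML = """
<article>
  <h1 class="title">Neonicotinoids and solitary bees</h1>
  <span class="byline">J. Doe</span>
  <time>2014-03-01</time>
</article>
"""

def discover(html: str, subject: str):
    # Apply each rule to the representation and yield (subject, property, value).
    soup = BeautifulSoup(html, "html.parser")
    for prop, selector in DISCOVERY_RULES.items():
        node = soup.select_one(selector)
        if node is not None:
            yield (subject, prop, node.get_text(strip=True))

for triple in discover(HTML, "ex:page1"):
    print(triple)
```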