942 results for Model-driven Web engineering
Abstract:
The performance of the reanalysis-driven Canadian Regional Climate Model, version 5 (CRCM5) in reproducing the present climate over the North American COordinated Regional climate Downscaling EXperiment domain for the 1989–2008 period has been assessed against several observation-based datasets. The model reproduces the near-surface temperature and precipitation characteristics satisfactorily over most of North America. Coastal and mountainous zones remain problematic: a cold bias (2–6 °C) prevails over the Rocky Mountains in summertime and all year round over Mexico, and winter precipitation in mountainous coastal regions is overestimated. The precipitation patterns related to the North American Monsoon are well reproduced, except at its northern limit. The spatial and temporal structure of the Great Plains Low-Level Jet is well reproduced by the model; however, the night-time precipitation maximum in the jet area is underestimated. The performance of CRCM5 was also assessed against earlier CRCM versions and other RCMs. CRCM5 is shown to be substantially improved over CRCM3 and CRCM4 in terms of seasonal mean statistics and to be comparable to other modern RCMs.
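As a rough illustration of the kind of seasonal-mean comparison described in this abstract, the sketch below computes a summer (JJA) near-surface temperature bias between model output and a gridded observational dataset. The array names, shapes and the assumption of a common grid are purely illustrative; this is not the CRCM5 evaluation code.

```python
import numpy as np

def seasonal_mean_bias(model_t2m, obs_t2m, months, season=(6, 7, 8)):
    """Seasonal-mean near-surface temperature bias (model minus observations).

    model_t2m, obs_t2m : arrays of shape (time, lat, lon), already on a common grid
    months             : array of shape (time,) with the calendar month of each step
    season             : months defining the season (default JJA, i.e. summer)
    """
    mask = np.isin(months, season)
    model_clim = model_t2m[mask].mean(axis=0)   # seasonal climatology of the model
    obs_clim = obs_t2m[mask].mean(axis=0)       # seasonal climatology of the observations
    return model_clim - obs_clim                # positive = warm bias, negative = cold bias

# Hypothetical usage: a 2-6 degC cold bias over a mountain grid cell would show up
# as values between -6 and -2 in the returned field.
```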
Abstract:
In this paper, we propose a new method for fully automatic landmark detection and shape segmentation in X-ray images. To detect landmarks, we estimate the displacements from randomly sampled image patches to the (unknown) landmark positions, and then integrate these predictions via a voting scheme. Our key contribution is a new algorithm for estimating these displacements. Unlike other methods, where each image patch independently predicts its displacement, we jointly estimate the displacements from all patches together in a data-driven way, considering not only the training data but also geometric constraints on the test image. The displacement estimation is formulated as a convex optimization problem that can be solved efficiently. Finally, we use the sparse shape composition model as a priori information to regularize the landmark positions and thus generate the segmented shape contour. We validate our method on X-ray image datasets of three different anatomical structures: the complete femur, the proximal femur, and the pelvis. Experiments show that our method is accurate and robust in landmark detection and, combined with the shape model, gives better or comparable performance in shape segmentation compared to state-of-the-art methods. Finally, a preliminary study using CT data shows the extensibility of our method to 3D data.
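To illustrate the voting step described above, here is a minimal sketch in which each sampled patch casts a vote at its centre plus its predicted displacement, and the votes are fused with a coordinate-wise median. The paper's actual contribution, the joint convex estimation of all displacements, is not reproduced here, and all names and numbers are hypothetical.

```python
import numpy as np

def vote_landmark(patch_centers, predicted_displacements):
    """Aggregate per-patch displacement predictions into one landmark estimate.

    patch_centers           : (N, 2) array of sampled patch positions (x, y)
    predicted_displacements : (N, 2) array of predicted offsets patch -> landmark
    Each patch votes at patch_center + displacement; the votes are fused with a
    coordinate-wise median, a simple robust stand-in for the paper's joint
    convex displacement estimation.
    """
    votes = patch_centers + predicted_displacements
    return np.median(votes, axis=0)

# Hypothetical usage with three patches voting for a femur landmark.
centers = np.array([[10.0, 20.0], [50.0, 60.0], [30.0, 40.0]])
displacements = np.array([[22.0, 15.0], [-18.0, -25.0], [2.0, -5.0]])
print(vote_landmark(centers, displacements))  # -> [32., 35.]
```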
Abstract:
Correct predictions of future blood glucose levels in individuals with Type 1 Diabetes (T1D) can be used to provide early warning of upcoming hypo-/hyperglycemic events and thus to improve the patient's safety. To increase prediction accuracy and efficiency, various approaches have been proposed that combine multiple predictors to produce results superior to those of single predictors. Three methods for model fusion are presented and comparatively assessed. Data from 23 T1D subjects under sensor-augmented pump (SAP) therapy were used in two adaptive data-driven models (an autoregressive model with output correction, cARX, and a recurrent neural network, RNN). Data fusion techniques based on i) Dempster-Shafer Evidential Theory (DST), ii) Genetic Algorithms (GA), and iii) Genetic Programming (GP) were used to merge the complementary performances of the prediction models. The fused output is used in a warning algorithm to issue alarms of upcoming hypo-/hyperglycemic events. The fusion schemes showed improved performance, with lower root mean square errors, lower time lags, and higher correlation. In the warning algorithm, median daily false alarms (DFA) of 0.25% and 100% correct alarms (CA) were obtained for both event types. The detection times (DT) before the occurrence of events were 13.0 and 12.1 min for hypo- and hyperglycemic events, respectively. Compared to the cARX and RNN models, and to a linear fusion of the two, the proposed fusion schemes represent a significant improvement.
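As a hedged illustration of the fusion-plus-warning pipeline, the sketch below linearly fuses two predicted glucose trajectories and raises threshold alarms. The weights, thresholds and sample values are invented, and the study's DST/GA/GP fusion schemes are not reproduced; only the simple linear baseline mentioned in the abstract is shown.

```python
import numpy as np

HYPO_THRESHOLD = 70.0    # mg/dL, illustrative hypoglycemia threshold
HYPER_THRESHOLD = 180.0  # mg/dL, illustrative hyperglycemia threshold

def fuse_predictions(pred_carx, pred_rnn, w_carx=0.5, w_rnn=0.5):
    """Weighted linear fusion of two predicted glucose trajectories (mg/dL)."""
    return w_carx * np.asarray(pred_carx) + w_rnn * np.asarray(pred_rnn)

def warning_alarms(fused):
    """Return indices of predicted hypo- and hyperglycemic samples."""
    fused = np.asarray(fused)
    hypo = np.flatnonzero(fused < HYPO_THRESHOLD)
    hyper = np.flatnonzero(fused > HYPER_THRESHOLD)
    return hypo, hyper

# Hypothetical horizon of three predicted samples from each model.
fused = fuse_predictions([95, 80, 62], [105, 85, 70])
print(warning_alarms(fused))  # hypo alarm at index 2 (fused value 66 mg/dL), no hyper alarm
```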
Abstract:
In any physicochemical process in liquids, the dynamical response of the solvent to solutes out of equilibrium plays a crucial role in the rates and products: the solvent molecules react to the changes in volume and electron density of the solutes to minimize the free energy of the solution, thus modulating the activation barriers and stabilizing (or destabilizing) intermediate states. In charge transfer (CT) processes in polar solvents, the response of the solvent always assists the formation of charge-separated states by stabilizing the energy of the localized charges. A deep understanding of the solvation mechanisms and time scales is therefore essential for a correct description of any photochemical process in the dense phase and for designing molecular devices based on photosensitizers with CT excited states. In the last two decades, with the advent of ultrafast time-resolved spectroscopies, microscopic models describing the relevant case of polar solvation (where both the solvent and the solute molecules have a permanent electric dipole and the mutual interaction is mainly dipole-dipole) have progressed dramatically. Regardless of the details of each model, they all assume that the effect of the electrostatic fields of the solvent molecules on the internal electronic dynamics of the solute is perturbative, and that the solvent-solute coupling is mainly an electrostatic interaction between the constant permanent dipoles of the solute and the solvent molecules. This well-established picture has proven to quantitatively rationalize spectroscopic effects of environmental and electric dynamics (time-resolved Stokes shifts, inhomogeneous broadening, etc.). However, recent computational and experimental studies, including ours, have shown that further improvement is required. Indeed, in recent years we have investigated several molecular complexes exhibiting photoexcited CT states, and we found that the current description of the formation and stabilization of CT states in an important group of molecules, such as transition metal complexes, is inaccurate. In particular, we proved that the solvent molecules are not just spectators of the intramolecular electron density redistribution but significantly modulate it. Our results call for further development of quantum mechanical computational methods that treat the solute and (at least) the closest solvent molecules, including a nonperturbative treatment of the effects of local electrostatics and direct solvent-solute interactions, in order to describe the dynamical changes of the solute excited states during the solvent response.
Abstract:
“Import content of exports”, based on Leontief’s demand-driven input-output model, has been widely used as an indicator of a country’s degree of participation in vertical specialisation trade. At the sectoral level, this indicator represents the share of intermediates imported by all sectors that is embodied in a given sector’s exported output. However, it reflects only one aspect of vertical specialisation: the demand side. This paper discusses the possibility of using the input-output model developed by Ghosh to measure vertical specialisation from the supply side. At the sector level, the Ghosh-type indicator measures the share of imported intermediates used in a sector’s production that are subsequently embodied in the exports of all sectors. We estimate these two indicators of vertical specialisation for 47 selected economies for 1995, 2000, and 2005 using the OECD’s harmonised input-output database. The potential biases of both indicators due to the treatment of net withdrawals from inventories are also discussed.
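A toy numerical sketch of the demand-driven indicator may help: in a two-sector economy, the import content of exports is obtained from the import coefficient matrix and the domestic Leontief inverse. All figures below are illustrative, not the paper's estimates, and the Ghosh-type supply-side indicator would replace the Leontief inverse with a Ghosh inverse built from output (allocation) coefficients.

```python
import numpy as np

Z_dom = np.array([[20.0, 30.0],   # domestic intermediate flows (sector x sector)
                  [15.0, 10.0]])
Z_imp = np.array([[ 5.0, 12.0],   # imported intermediate flows (sector x sector)
                  [ 8.0,  4.0]])
x = np.array([200.0, 150.0])      # gross output by sector
e = np.array([60.0, 40.0])        # exports by sector

A_dom = Z_dom / x                 # domestic input coefficients (column j divided by x_j)
A_imp = Z_imp / x                 # import input coefficients
L = np.linalg.inv(np.eye(2) - A_dom)   # domestic Leontief inverse

# Import content per unit of each sector's exports (share), then in export value.
import_content_share = np.ones(2) @ A_imp @ L
import_content_of_exports = import_content_share * e
print(import_content_share)       # share of imported intermediates embodied in each sector's exports
print(import_content_of_exports)  # same, expressed in export value
```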
Abstract:
The competence evaluation promoted by the European Higher Education Area entails a very important methodological change that requires guiding support to help teachers carry out this new and complex task. In this regard, the Technical University of Madrid (UPM, by its Spanish acronym) has financed a series of coordinated projects with a two-fold objective: a) to develop a model for teaching and evaluating core competences that is useful and easily applicable to its different degrees, and b) to provide support to teachers by creating an area within the Website for Educational Innovation where they can search for information on the model corresponding to each core competence approved by UPM. The information available on each competence includes its definition, the formulation of indicators providing evidence of the level of acquisition, the recommended teaching and evaluation methodology, examples of evaluation rules for the different levels of competence acquisition, and descriptions of best practices. These best practices correspond to pilot tests applied to several academic subjects taught at UPM in order to validate the model. This work describes the general procedure used and presents the model developed specifically for the problem-solving competence. Some of the pilot experiences are also summarised and their results analysed.
Abstract:
This work characterizes the Engineering, Multidisciplinary area in Colombia by reviewing, at the institutional level through the Web of Science database, the work carried out by researchers at Colombian universities and published in international journals with an impact factor between 1997 and 2009. Within the Latin American context, 2,195 articles or reviews were published in 83 journals, and at the Colombian level 419 articles published in 23 journals were found. The universities were also analysed using bibliometric indicators (Weighted and Relative Impact Factor and the mean number of citations per document). All of the scientific production was located in 37 universities, with the Universidad Nacional de Colombia standing out for its number of documents, the Universidad Pontificia Bolivariana for its citations-to-documents ratio, and the Universidad Pedagógica y Tecnológica de Colombia for its Impact Factor.
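For illustration, the sketch below computes the indicators mentioned in this abstract (Weighted Impact Factor, Relative Impact Factor, citations per document) using common textbook definitions. The abstract does not spell out the exact formulas, so these definitions and all figures should be read as assumptions.

```python
def weighted_impact_factor(journal_docs, journal_if):
    """Average journal impact factor weighted by the documents published in each journal."""
    total_docs = sum(journal_docs.values())
    return sum(n * journal_if[j] for j, n in journal_docs.items()) / total_docs

def relative_impact_factor(institution_wif, reference_wif):
    """Institution's weighted impact factor normalized by a reference set (e.g. the country)."""
    return institution_wif / reference_wif

def citations_per_document(total_citations, total_docs):
    return total_citations / total_docs

# Hypothetical example for one university.
docs = {"Journal A": 12, "Journal B": 5}
ifs = {"Journal A": 1.8, "Journal B": 0.9}
wif = weighted_impact_factor(docs, ifs)
print(wif, relative_impact_factor(wif, 1.2), citations_per_document(140, 17))
```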
Abstract:
The purpose of this work was to characterize the Chemical Engineering area in Mexico. To this end, the work on Chemical Engineering carried out by researchers at Mexican institutions and published in international journals with an impact factor between 1997 and 2008 was reviewed at the institutional level through the Web of Science (WoS) database. Starting from the Latin American context, where 6,183 articles or reviews were published in 119 journals, 1,302 articles published in 87 journals were found for Mexico, most of them in English (96.08%), but also in Spanish (3.69%) and French (0.23%). In addition, universities and research centres were analysed quantitatively and qualitatively using several bibliometric indicators, such as the Weighted Impact Factor, the Relative Impact Factor, and the ratio between the number of citations and the number of documents. Among the five most productive institutions, the Instituto Mexicano del Petróleo stands out for its number of documents, and the Universidad Nacional Autónoma de México for its citations-to-documents ratio and its Weighted Impact Factor.
Abstract:
This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources for their combination. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: the content, service and agent levels. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, is the content discovery of the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after analysing service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that introspect the discovered data and services from the web, as well as the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific resources from the web. By defining plans, an agent can be configured to target specific resources. The discovery framework has been evaluated on different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered in component repositories from the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web. The main contributions of the thesis are the unified framework for discovery, which allows configuring agents to perform automated tasks; a scraping ontology for the construction of mappings for scraping web resources; a novel first-order logic rule induction algorithm for the automated construction and maintenance of these mappings from the visual information in web resources; and a common unified model for the discovery of services, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the scraping ontology, extensions of the agent model, and constructing a base of discovery rules.
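As an illustration of the content-level discovery rules described in this abstract, the following sketch maps CSS selectors in an HTML representation to RDF-like triples. The selectors, property names and example page are hypothetical, and the thesis expresses such mappings through its own scraping ontology and induces them automatically rather than writing them by hand.

```python
from bs4 import BeautifulSoup  # third-party parser; any HTML parser with CSS selectors would do

# Hypothetical content discovery rule: CSS selectors mapped to semantic properties.
DISCOVERY_RULE = {
    "resource_type": "schema:NewsArticle",
    "mappings": {
        "schema:headline": "h1.title",
        "schema:author": "span.byline",
        "schema:datePublished": "time.published",
    },
}

def apply_discovery_rule(html, rule, subject="_:news1"):
    """Turn a resource's HTML representation into RDF-like triples using the rule."""
    soup = BeautifulSoup(html, "html.parser")
    triples = [(subject, "rdf:type", rule["resource_type"])]
    for prop, selector in rule["mappings"].items():
        node = soup.select_one(selector)
        if node is not None:
            triples.append((subject, prop, node.get_text(strip=True)))
    return triples

html = """<article><h1 class="title">Sample headline</h1>
<span class="byline">Some author</span><time class="published">2012-05-01</time></article>"""
for triple in apply_discovery_rule(html, DISCOVERY_RULE):
    print(triple)
```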
Abstract:
In this introductory chapter we put in context and briefly outline the work that we present in detail in the rest of the dissertation. We consider this work to be divided into two main parts. The first part is the Firenze Framework, a knowledge-level description framework rich enough to express the semantics required for describing both semantic Web services and semantic Grid services. We start by defining what the Semantic Grid is and its relation to the Semantic Web, as well as the possibility of their convergence, since both initiatives have become mainly service-oriented. We also introduce the main motivations for creating this framework: one is to provide a valid description framework that works at the knowledge level; the other is to provide a description framework that takes into account the characteristics of Grid services in order to describe them properly. The other part of the dissertation is devoted to Vega, an event-driven architecture that, by means of the proposed knowledge-level description framework, is able to achieve large-scale provisioning of knowledge-intensive services. In this introductory chapter we portray the anatomy of a generic event-driven architecture and briefly enumerate its main characteristics, which are the reasons we chose it.
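To make the "anatomy of a generic event-driven architecture" concrete, here is a minimal publish/subscribe sketch in which producers emit events on topics and a broker dispatches them to subscribers. It is a generic illustration under invented names, not Vega's actual implementation.

```python
from collections import defaultdict
from typing import Callable, Dict, List

class EventBroker:
    """Tiny broker: handlers subscribe to topics, publishers emit events on topics."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)

# Hypothetical usage: a consumer reacts to service-provisioning events.
broker = EventBroker()
broker.subscribe("service.requested", lambda e: print("provisioning", e["service"]))
broker.publish("service.requested", {"service": "knowledge-intensive-service-42"})
```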
Abstract:
Because of the growing availability of third-party APIs, services, widgets and other reusable web components, mashup developers now face a vast number of candidate components for their developments. Moreover, these components are quite often scattered across many different repositories and web sites, which makes their selection or discovery difficult. In this paper, we discuss the problem of component selection in Service-Oriented Architectures (SOA) and Mashup-Driven Development, and introduce the Linked Mashups Ontology (LiMOn), a model that allows describing mashups and their components in order to integrate and share mashup information such as categorization or dependencies. The model has enabled the construction of an integrated, centralized metadirectory of web components for query and selection, which has served to evaluate the model. The metadirectory provides access to various heterogeneous repositories of mashups and web components while using external information from the Linked Data cloud, helping mashup development.
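As a simple illustration of what such an integrated metadirectory enables, the sketch below merges component records from two hypothetical repositories and answers a category query together with declared dependencies. Field names and records are invented; LiMOn itself expresses this information as an ontology on top of Linked Data rather than as Python dictionaries.

```python
# Hypothetical harvested component records, keyed by source repository.
REPOSITORIES = {
    "programmableweb": [
        {"name": "maps-api", "type": "service", "category": "geo", "requires": []},
    ],
    "widget-store": [
        {"name": "route-widget", "type": "widget", "category": "geo",
         "requires": ["maps-api"]},
    ],
}

def metadirectory():
    """Flatten all repositories into one catalogue of component records."""
    return [dict(record, repository=repo)
            for repo, records in REPOSITORIES.items() for record in records]

def select_components(category):
    """Return components in a category together with their resolvable dependencies."""
    catalogue = metadirectory()
    by_name = {c["name"]: c for c in catalogue}
    selected = [c for c in catalogue if c["category"] == category]
    return [(c["name"], [d for d in c["requires"] if d in by_name]) for c in selected]

print(select_components("geo"))  # [('maps-api', []), ('route-widget', ['maps-api'])]
```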