984 results for Service Interruption Modelling


Relevance:

30.00%

Publisher:

Abstract:

This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a flood of information when searching for appropriate services or resources to combine. To help overcome this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: content, service and agent. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as screen scraping in the literature, constitutes content discovery in the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level.
This lightweight description framework allows service discovery rules to be defined for identifying operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings from service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after analysing service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow behaviours to be specified for crawling and executing services, resulting in the fulfilment of a high-level goal. Agent rules are plans that introspect the discovered data and services from the web, together with the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific web resources. Through the definition of plans, an agent can be configured to target specific resources. The discovery framework has been evaluated in different scenarios, each covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used for the discovery and extraction of news items from the web. Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered from component repositories on the web.
The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can support complex behaviours and orchestrations of tasks on the web. The main contribution of the thesis is the unified discovery framework, which allows agents to be configured to perform automated tasks. In addition, a scraping ontology has been defined for the construction of mappings for scraping web resources, and a novel first-order logic rule induction algorithm is defined for the automated construction and maintenance of these mappings from the visual information in web resources. Additionally, a common unified model for the discovery of services is defined, which allows service descriptions to be shared. Future work comprises the further extension of service probing, resource ranking, the extension of the scraping ontology, extensions of the agent model, and the construction of a base of discovery rules.
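The content-discovery rules described above map pieces of a resource's representation onto semantic entities. A minimal sketch of the idea follows; the rule format and property names are illustrative stand-ins, not the thesis's actual scraping ontology:

```python
import re

# Hypothetical discovery rules: RDF-style property -> pattern locating it in markup.
RULES = {
    "dc:title": r"<h1[^>]*>(.*?)</h1>",
    "dc:creator": r'<span class="author">(.*?)</span>',
}

def discover_content(html, rules=RULES):
    """Apply content-discovery rules to a resource's HTML representation,
    yielding a semantically keyed description of its data."""
    entity = {}
    for prop, pattern in rules.items():
        match = re.search(pattern, html, re.DOTALL)
        if match:
            entity[prop] = match.group(1).strip()
    return entity
```

In the thesis, such mappings are not written by hand but induced and maintained automatically from the visual information in web resources.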


Current CPV trends, mostly based on lens-parqueted flat modules, enable the separate design of the sun tracker. To enable this possibility, a set of specifications must be prescribed for the tracker design team, taking into account fundamental requisites such as the maximum service loads, both permanent and variable, the sun-tracking accuracy, and the tracker structural stiffness required to keep the CPV array acceptance angle loss below a certain threshold. In its first part, this paper outlines the authors' approach to these issues. Next, a method is introduced to estimate the acceptance angle losses due to the tracker's structural flexure, which ultimately relies on the computation of the minimum enclosing circle of a set of points in the plane. This method is also useful to simulate the drifts in the tracker's pointing vector due to structural deformation as a function of the aperture orientation angle. Results of this method, applied to the design of a two-axis CPV pedestal tracker, are presented.
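The minimum-enclosing-circle computation the method relies on can be sketched directly. A brute-force version, adequate for the small point sets produced by sampling pointing-vector drifts (the paper does not specify which algorithm it uses):

```python
import math
from itertools import combinations

def _circle_two(p, q):
    # Circle with segment pq as diameter.
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2, math.dist(p, q) / 2)

def _circle_three(a, b, c):
    # Circumcircle of three points; None if (nearly) collinear.
    d = 2 * (a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))
    if abs(d) < 1e-12:
        return None
    ux = ((a[0]**2+a[1]**2)*(b[1]-c[1]) + (b[0]**2+b[1]**2)*(c[1]-a[1])
          + (c[0]**2+c[1]**2)*(a[1]-b[1])) / d
    uy = ((a[0]**2+a[1]**2)*(c[0]-b[0]) + (b[0]**2+b[1]**2)*(a[0]-c[0])
          + (c[0]**2+c[1]**2)*(b[0]-a[0])) / d
    return (ux, uy, math.dist((ux, uy), a))

def _contains(circle, pts, eps=1e-9):
    cx, cy, r = circle
    return all(math.dist((cx, cy), p) <= r + eps for p in pts)

def min_enclosing_circle(pts):
    """Exact minimum enclosing circle (cx, cy, r) by brute force: the optimal
    circle is determined by two or three of the points."""
    if len(pts) == 1:
        return (pts[0][0], pts[0][1], 0.0)
    best = None
    for p, q in combinations(pts, 2):
        c = _circle_two(p, q)
        if _contains(c, pts) and (best is None or c[2] < best[2]):
            best = c
    for a, b, c3 in combinations(pts, 3):
        c = _circle_three(a, b, c3)
        if c and _contains(c, pts) and (best is None or c[2] < best[2]):
            best = c
    return best
```

The circle's radius then bounds the angular drift of the pointing vector, which is how structural flexure translates into acceptance angle loss.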


The aim of this paper is to explain the chloride concentration profiles obtained experimentally from control samples of an offshore platform after 25 years of service life. The platform is located 12 km off the coast of the Brazilian state of Rio Grande do Norte, in the north-east of Brazil. The samples were extracted at different orientations and heights above mean sea level. A simple model based on Fick's second law is considered and compared with a finite element model which takes into account the transport of chloride ions by both diffusion and convection. Results show that convective flows significantly affect the studied chloride penetrations. The convection velocity, obtained by fitting the finite element solution to the experimental data, appears to be directly proportional to the height above mean sea level and also seems to depend on the orientation of the face of the platform. This work shows that considering diffusion as the sole transport mechanism does not allow a good prediction of the chloride profiles, whereas accounting for capillary suction due to moisture gradients permits a better interpretation of the material's behaviour.
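The two transport mechanisms the paper compares can be illustrated with a one-dimensional explicit finite-difference scheme for dC/dt = D d²C/dx² − v dC/dx. The parameter values below are illustrative; this is not the paper's finite element model or its fitted velocities:

```python
def chloride_profile(D=30.0, v=0.0, C_s=1.0, depth=50.0, nx=51, years=25.0, dt=0.01):
    """Explicit 1-D finite-difference solution of dC/dt = D d2C/dx2 - v dC/dx.
    Units are illustrative: mm and years, surface concentration normalised to 1."""
    dx = depth / (nx - 1)
    assert D * dt / dx**2 <= 0.5 and v * dt / dx <= 1.0, "explicit scheme stability"
    C = [0.0] * nx
    C[0] = C_s                                   # constant surface chloride level
    for _ in range(int(years / dt)):
        new = C[:]
        for i in range(1, nx - 1):
            diffusion = D * (C[i+1] - 2*C[i] + C[i-1]) / dx**2
            convection = -v * (C[i] - C[i-1]) / dx    # upwind, inward flow v >= 0
            new[i] = C[i] + dt * (diffusion + convection)
        C = new
        C[0] = C_s                               # deep boundary stays at zero
    return C
```

Comparing a run with v = 0 against one with v > 0 reproduces the paper's qualitative finding: convection pushes chloride markedly deeper than diffusion alone.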



The banking industry is observing how new competitors threaten its long-established business model by targeting unbanked people, offering new financial services to their customer base, and even enabling new channels for existing services and customers. Knowledge of users, their behaviour, and their expectations becomes a key asset in this new context. Well aware of this situation, the Center for Open Middleware, a joint technology center created by Santander Bank and Universidad Politécnica de Madrid, has launched a set of initiatives to allow the experimental analysis and management of socio-economic information. One of them is the PosdataP2P service, which seeks to model the economic ties between the holders of university smart cards, leveraging the social networks to which the holders are subscribed. In this paper we describe the design principles guiding the development of the system, its architecture and some implementation details.
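At their simplest, the economic ties the service models can be represented as a weighted directed graph aggregated from card-to-card payments. The record layout below is hypothetical; the abstract does not describe the actual data model:

```python
from collections import defaultdict

def build_tie_graph(payments):
    """Aggregate individual card-to-card payments into weighted directed ties.
    Each payment is a (payer_id, payee_id, amount) tuple -- a hypothetical layout."""
    ties = defaultdict(float)
    for payer, payee, amount in payments:
        ties[(payer, payee)] += amount
    return dict(ties)
```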


The usage of HTTP adaptive streaming (HAS) has become widespread in multimedia services, because it allows service providers to improve network resource utilization and users' Quality of Experience (QoE). Using this technology, video playback interruptions are reduced, since the network and server status, in addition to the capability of the user device, are all taken into account by the HAS client to adapt the quality to the current conditions. Adaptation can be done using different strategies. In order to provide optimal QoE, the perceptual impact of adaptation strategies from the point of view of the user should be studied. However, the time-varying video quality due to adaptation, which usually takes place over a long interval, introduces a new type of impairment, making the subjective evaluation of adaptive streaming systems challenging. The contribution of this paper is two-fold. First, it investigates the testing methodology used to evaluate HAS QoE by comparing the subjective experimental outcomes obtained from the standardized ACR method and from a semi-continuous method developed to evaluate long sequences; in addition, the influence of using audiovisual stimuli to evaluate video-related impairments is examined. Second, the impact of some of the technical adaptation factors, including the quality switching amplitude and the chunk size, in combination with a wide range of commercial content types, is investigated. The results of this study provide good insight towards an appropriate testing method for evaluating HAS QoE, as well as towards designing switching strategies with optimal visual quality.
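One of the technical factors studied, the quality switching amplitude, can be made concrete with a toy adaptation rule that caps how many bitrate-ladder rungs a single switch may jump. This is an illustrative strategy, not one evaluated in the paper:

```python
def next_representation(ladder, current, throughput_kbps, max_step=1):
    """Pick the highest bitrate sustainable by the measured throughput,
    but limit the switching amplitude to max_step ladder rungs per adaptation.
    `ladder` is a list of bitrates in kbps, sorted ascending."""
    target = 0
    for i, bitrate in enumerate(ladder):
        if bitrate <= throughput_kbps:
            target = i
    # Clamp the jump so a single switch never exceeds max_step rungs.
    return max(current - max_step, min(current + max_step, target))
```

A smaller `max_step` trades slower convergence to the sustainable bitrate for gentler quality switches, which is exactly the perceptual trade-off the paper's subjective tests probe.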


A mathematical model of the oculomotor plant, based on experimental data in cats, is presented: the system that generates, from the neuronal processes at the motoneuron, the control signals to the eye muscles that move the eye. In contrast with previous models, which base the eye-movement-related motoneuron behaviour on a first-order linear differential equation, non-linear effects are described: a dependency of the model parameters on the eye angular position.
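The kind of non-linearity described can be sketched as a first-order plant whose time constant depends on eye angular position. The functional form and constants below are invented for illustration; they are not the fitted cat data:

```python
def simulate_eye_position(u, theta0=0.0, dt=0.001, duration=0.5):
    """Euler integration of tau(theta) * dtheta/dt + theta = u, where the
    time constant tau grows with eccentricity -- an illustrative non-linearity."""
    def tau(theta):
        return 0.2 * (1.0 + 0.01 * abs(theta))   # seconds; invented dependency
    theta = theta0
    for _ in range(int(duration / dt)):
        theta += dt * (u - theta) / tau(theta)
    return theta
```

In a linear first-order model tau would be constant; making it position-dependent is the simplest way to capture parameters that vary with eye angle.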


Society today is completely dependent on computer networks, the Internet and distributed systems, which place at our disposal the services necessary to perform our daily tasks. Subconsciously, we rely increasingly on network management systems. These systems allow us, in general, to maintain, manage, configure, scale, adapt, modify, edit, protect, and enhance the main distributed systems. Their role is secondary, and it is unknown and transparent to the users. They provide the necessary support to maintain the distributed systems whose services we use every day. If we do not consider network management systems during the development stage of distributed systems, there could be serious consequences, or even total failures, in the development of the distributed system. It is necessary, therefore, to consider the management of the systems within the design of distributed systems, and to systematise their design to minimise the impact of network management in distributed systems projects. In this paper, we present a framework that allows network management systems to be designed systematically. To accomplish this goal, formal modelling tools are used to model, sequentially, different proposed views of the same problem. These views cover all the aspects involved in the system; process definitions are used to identify those responsible and to define the agents involved, in order to propose a deployment in a distributed architecture that is both feasible and appropriate.


Society, as we know it today, is completely dependent on computer networks, the Internet, and distributed systems, which place at our disposal the services necessary to perform our daily tasks. Moreover, and unconsciously, all services and distributed systems require network management systems. These systems allow us, in general, to maintain, manage, configure, scale, adapt, modify, edit, protect, or improve the main distributed systems. Their role is secondary, and it is unknown and transparent to the users. They provide the necessary support to maintain the distributed systems whose services we use every day. If we do not consider network management systems during the development stage of the main distributed systems, there could be serious consequences, or even total failures, in the development of those systems. It is necessary, therefore, to consider the management of the systems within the design of distributed systems, and to systematise their conception to minimise the impact of network management within distributed systems projects. In this paper, we present a method for formalising the conceptual modelling involved in the design of a network management system through the use of formal modelling tools, allowing those responsible to be identified from the definition of processes. Finally, we propose a use case: the design of a conceptual model of an intrusion detection system in a network.


Background. The present paper describes a component of a large population cost-effectiveness study that aimed to identify the averted burden and economic efficiency of current and optimal treatment for the major mental disorders. This paper reports the findings for the anxiety disorders (panic disorder/agoraphobia, social phobia, generalized anxiety disorder, post-traumatic stress disorder and obsessive-compulsive disorder). Method. Outcome was calculated as averted 'years lived with disability' (YLD), a population summary measure of disability burden. Costs were the direct health care costs in 1997-8 Australian dollars. The cost per YLD averted (efficiency) was calculated for those already in contact with the health system for a mental health problem (current care) and for a hypothetical optimal care package of evidence-based treatment for this same group. Data sources included the Australian National Survey of Mental Health and Well-being and published treatment effects and unit costs. Results. Current coverage was around 40% for most disorders, with the exception of social phobia at 21%. Receipt of interventions consistent with evidence-based care ranged from 32% of those in contact with services for social phobia to 64% for post-traumatic stress disorder. The cost of this care was estimated at $400 million, resulting in a cost per YLD averted ranging from $7,761 for generalized anxiety disorder to $34,389 for panic/agoraphobia. Under optimal care, costs remained similar but health gains were increased substantially, reducing the cost per YLD to under $20,000 for all disorders. Conclusions. Evidence-based care for anxiety disorders would produce greater population health gain at a similar cost to that of current care, resulting in a substantial increase in the cost-effectiveness of treatment.
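The efficiency measure used throughout is simply cost divided by YLDs averted. A toy comparison makes the study's pattern concrete; the YLD counts below are invented to mirror the reported finding (similar spend, larger health gain under optimal care), not the study's figures:

```python
def cost_per_yld(total_cost, ylds_averted):
    """Dollars spent per year lived with disability averted (lower = more efficient)."""
    return total_cost / ylds_averted

# Same $400M spend, but optimal care averts more YLDs -> lower cost per YLD.
current = cost_per_yld(400e6, 15_000)   # hypothetical YLD count under current care
optimal = cost_per_yld(400e6, 25_000)   # hypothetical YLD count under optimal care
```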


The majority of previous research into service quality and services marketing has concentrated upon the measurement of service quality outcomes, rather than the enhancement of the process by which service is delivered. In this study a conceptual model of the service acculturation process is proposed, modelling the input of service managers and employees in the delivery of service quality to customers. The conceptualisation is then empirically tested using a dyadic study of the New Zealand hotel industry. Results indicate that 1) a strong commitment to service is important for both managers and employees; and 2) employees' teamwork may have an adverse effect on the perceived quality of customer service. Implications of the results and future research directions are subsequently discussed.


Simulation modelling has been used for many years in the manufacturing sector but has now become a mainstream tool in business situations. This is partly because of the popularity of business process re-engineering (BPR) and other process-based improvement methods that use simulation to help analyse changes in process design. This textbook includes case studies in both manufacturing and service situations to demonstrate the usefulness of the approach. A further reason for the increasing popularity of the technique is the development of business-oriented and user-friendly Windows-based software. This text provides a guide to the use of the ARENA, SIMUL8 and WITNESS simulation software systems, which are widely used in industry and available to students. Overall, this text provides a practical guide to building a simulation model and implementing its results. All the steps in a typical simulation study are covered, including data collection, input data modelling and experimentation.
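Stripped to its core, the kind of model such a study builds is a queueing simulation. A minimal single-server sketch in plain Python, for flavour (the packages covered in the text, ARENA, SIMUL8 and WITNESS, build the same logic graphically):

```python
import random

def mm1_mean_wait(arrival_rate, service_rate, n=50_000, seed=1):
    """Simulate n customers through a single-server queue with exponential
    inter-arrival and service times; return the mean time spent waiting."""
    rng = random.Random(seed)
    t_arrive = 0.0
    server_free = 0.0     # time at which the server next becomes idle
    total_wait = 0.0
    for _ in range(n):
        t_arrive += rng.expovariate(arrival_rate)
        start = max(t_arrive, server_free)       # wait if the server is busy
        total_wait += start - t_arrive
        server_free = start + rng.expovariate(service_rate)
    return total_wait / n
```

For an M/M/1 queue the theoretical mean wait is rho/(mu - lambda), so the simulated estimate can be checked against it, which mirrors the validation step of a typical simulation study.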


INTAMAP is a web processing service for the automatic interpolation of measured point data. The requirements were (i) to use open standards for spatial data, such as those developed in the context of the Open Geospatial Consortium (OGC); (ii) to use a suitable environment for statistical modelling and computation; and (iii) to produce an open-source solution. The system couples the 52-North web processing service, accepting data in the form of an observations and measurements (O&M) document, with a computing back-end realized in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a new markup language for encoding uncertain data. Automatic interpolation needs to be useful for a wide range of applications, and the algorithms have been designed to cope with anisotropies and extreme values. In the light of the INTAMAP experience, we discuss the lessons learnt.
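For a minimal flavour of automatic point-data interpolation, inverse distance weighting serves as a simple stand-in (INTAMAP's actual back-end applies geostatistical methods in R, with uncertainty estimates; this is not its algorithm):

```python
def idw(points, values, x, y, power=2.0):
    """Inverse-distance-weighted interpolation of scattered (x, y) observations
    at a query location: nearer points get larger weights."""
    num = den = 0.0
    for (px, py), v in zip(points, values):
        d2 = (px - x)**2 + (py - y)**2
        if d2 == 0.0:
            return v              # query coincides with an observation
        w = d2 ** (-power / 2)    # weight = 1 / distance**power
        num += w * v
        den += w
    return num / den
```

Unlike kriging, IDW provides no error distribution, which is precisely what INTAMAP adds via its statistical back-end and UncertML encoding.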


Customer satisfaction and service quality are two important concepts in the marketing literature. However, there has been some confusion about the conceptualisation and measurement of these two concepts and the nature of the relationship between them. The primary objective of this research was to develop a more thorough understanding of these concepts, and a model that could help to explain the links between them and their relationships with post-purchase behaviour. A preliminary theoretical model was developed, based on an exhaustive review of the literature. Following exploratory research, the model was revised by incorporating "Perceived Value" and "Perceived Sacrifice" to help explain customers' post-purchase behaviour. A longitudinal survey was conducted in the context of the restaurant industry, and the data were analysed using structural equation modelling. The results provided evidence to support the main research hypotheses. However, the effect of "Normative Expectations" on "Encounter Quality" was insignificant, and "Perceived Value" had a direct effect on "Behavioural Intentions", despite expectations that such an effect would be mediated through "Customer Satisfaction". It was also found that "Normative Expectations" were relatively more stable than "Predictive Expectations". It is argued that the present research contributes significantly to the marketing literature, in particular regarding the role of perceived value in the formation of customers' post-purchase behaviour. Further research efforts in this area are warranted.