953 results for Event-driven Framework


Relevance:

80.00%

Publisher:

Abstract:

The DNDC (DeNitrification and DeComposition) model was first developed by Li et al. (1992) as a rain-event-driven, process-oriented simulation model for nitrous oxide, carbon dioxide and nitrogen gas emissions from agricultural soils in the U.S. Over the last 20 years, the model has been modified and adapted by various research groups around the world to suit specific purposes and circumstances. The Global Research Alliance Modelling Platform (GRAMP) is a UK-led initiative to establish a purposeful and credible web-based platform initially aimed at users of the DNDC model. With the aim of improving predictions of soil C and N cycling in the context of climate change, the objectives of GRAMP are to: 1) document the existing versions of the DNDC model; 2) create a family tree of the individual DNDC versions; 3) provide information on model use and development; and 4) identify strengths, weaknesses and potential improvements for the model.

Relevance:

80.00%

Publisher:

Abstract:

Business information has become a critical asset for companies, and it has even more value when obtained and exploited in real time. This paper analyses how to integrate this information into an existing banking Enterprise Architecture following an event-driven approach. The work entails the study of three main issues: the definition of business events, the specification of a reference architecture that identifies the specific integration points, and the description of a governance approach to manage the new elements. All the proposed solutions have been validated with a proof-of-concept test bed in an open-source environment, based on a case study from the banking sector that allows an operational validation to be carried out and compliance with non-functional requirements, focused here on performance, to be verified.
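As a hedged illustration of the event-driven integration described above (the event fields, class names and event type are illustrative assumptions, not definitions taken from the paper, and Java 17+ is assumed for the record syntax), a business event can be modelled as an immutable message handed to a publish/subscribe channel that decouples the core banking process from its consumers:

```java
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// A business event as an immutable value: what happened, when, and with which attributes.
record BusinessEvent(String type, Instant occurredAt, Map<String, String> attributes) { }

// A deliberately tiny in-memory publish/subscribe channel standing in for real middleware.
class EventChannel {
    private final CopyOnWriteArrayList<Consumer<BusinessEvent>> subscribers = new CopyOnWriteArrayList<>();

    void subscribe(Consumer<BusinessEvent> handler) { subscribers.add(handler); }

    void publish(BusinessEvent event) { subscribers.forEach(s -> s.accept(event)); }
}

public class EventDrivenIntegrationSketch {
    public static void main(String[] args) {
        EventChannel channel = new EventChannel();

        // A monitoring component reacts to events as they arrive.
        channel.subscribe(e -> System.out.println("Received " + e.type() + " at " + e.occurredAt()));

        // A core banking process emits an event when something relevant happens.
        channel.publish(new BusinessEvent(
                "payment.rejected",                       // illustrative event type
                Instant.now(),
                Map.of("account", "ES91-XXXX", "reason", "insufficient-funds")));
    }
}
```

In a production setting the in-memory channel would be replaced by the organisation's messaging infrastructure; the point of the sketch is only that producers and consumers share nothing but the event definition.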

Relevance:

80.00%

Publisher:

Abstract:

As society advances, the amount of data stored in information systems and processed by software applications and servers grows exponentially. In addition, new technologies have entrusted their development to the internationally connected network: the Internet. As a consequence, machine-to-machine (M2M) connections over the Internet have been exploited and the concept of the "Internet of Things" has emerged: a network of devices and terminals in which any everyday object can establish connections with other objects or with a smartphone through the services deployed on that network. However, these new data and events must be processed in real time and effectively in order to react to any situation. Event-driven architectures address the challenge of handling this real-time message exchange: an EDA (Event-Driven Architecture) makes it possible to implement a software architecture with an exhaustive definition of the messages, notifying the user of the facts that have occurred around them and of the actions taken in response. This final-year degree project (Trabajo Final de Grado, TFG) focuses on the study of event-driven architectures, contrasting them with the other main architectural patterns. The comparison has been carried out according to the non-functional requirements of each one, such as security against external threats. The main objective is the study of EDA (Event-Driven Architecture) and its relationship with the Internet of Things, which allows any device to access the services deployed on that network through the Internet. The purpose of the project is to observe and verify the advantages of this architecture, owing to its immediate nature, through the sending and reception of messages in real time and asynchronously. A state-of-the-art study of these software architecture patterns has also been carried out, as well as of the IoT (Internet of Things) network and its services. Alongside this project, a simulation of a complete EDA has been developed, with all its elements: producers, consumers and a complex event processor, plus visualization of the data. To highlight the services provided by the IoT network and their relationship with an EDA, a simulation of a personalized tele-assistance service has been implemented. This proof of concept has helped to reinforce the learning and to understand more precisely the knowledge acquired through the theoretical study of an EDA. It has been implemented in the Java programming language, using the open-source solutions RabbitMQ and Esper, with the AMQP standard binding them together so that the message transfer completes correctly.
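As a hedged sketch of the producer/consumer exchange in the simulation described above (it assumes a RabbitMQ broker on localhost, the amqp-client Java library on the classpath, and an illustrative queue name and payload that are not taken from the project), one channel publishes a tele-assistance event over AMQP and a consumer reacts to it asynchronously:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;
import java.nio.charset.StandardCharsets;

public class TeleAssistEventDemo {
    private static final String QUEUE = "teleassist.events"; // illustrative queue name

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumes a local RabbitMQ broker

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            channel.queueDeclare(QUEUE, false, false, false, null);

            // Consumer: reacts asynchronously to each incoming event.
            DeliverCallback onEvent = (consumerTag, delivery) -> {
                String body = new String(delivery.getBody(), StandardCharsets.UTF_8);
                System.out.println("Event received: " + body);
            };
            channel.basicConsume(QUEUE, true, onEvent, consumerTag -> { });

            // Producer: publishes a simulated sensor reading as an event.
            String event = "{\"sensor\":\"fall-detector\",\"value\":\"ALERT\"}";
            channel.basicPublish("", QUEUE, null, event.getBytes(StandardCharsets.UTF_8));

            Thread.sleep(1000); // give the consumer time to process before closing
        }
    }
}
```

In the full simulation the consumed stream would additionally be fed into a complex event processor such as Esper; that step is omitted here.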

Relevance:

80.00%

Publisher:

Abstract:

Improving air quality is an eminently interdisciplinary task. The wide variety of sciences and stakeholders involved calls for simple yet fully integrated and reliable evaluation tools. Integrated assessment modeling has proved to be a suitable solution for the description of air pollution systems because it considers each of the stages involved: emissions, atmospheric chemistry and dispersion, the associated environmental impacts and the abatement potential. Several integrated assessment models are already available at the European scale, covering each of the aforementioned stages, the Greenhouse Gas and Air Pollution Interactions and Synergies (GAINS) model being the most recognized and widely used within the European policy-making context. However, addressing air quality at the national/regional scale under an integrated assessment framework is also desirable, and this cannot be done satisfactorily with European-scale models because of their lack of spatial resolution and of detail in the ancillary data, mainly emission inventories and local meteorological patterns. The objective of this dissertation is to present the developments in the design and application of an integrated assessment model especially conceived for Spain and Portugal. The Atmospheric Evaluation and Research Integrated system for Spain (AERIS) is able to quantify concentration profiles for several pollutants (NO2, SO2, PM10, PM2.5, NH3 and O3), the atmospheric deposition of sulfur and nitrogen species, and their related impacts on crops, vegetation, ecosystems and health as a response to percentage changes in the emissions of relevant sectors. The current version of AERIS considers 20 emission sectors, corresponding either to individual SNAP sectors or to macrosectors, whose contributions to air quality levels, deposition and impacts have been modeled through source-receptor matrices (SRMs). These matrices are proportionality constants that relate emission changes to different air quality indicators and have been derived through statistical parameterizations of an air quality modeling system (AQM). In the specific case of AERIS, the parent AQM relied on the WRF model for meteorology and on the CMAQ model for atmospheric chemical processes. The quantification of atmospheric deposition and of impacts on ecosystems, crops, vegetation and human health follows the standard methodologies established under international negotiation frameworks such as CLRTAP. The programming structure is MATLAB®-based, allowing great compatibility with typical desktop software such as Microsoft Excel® or ArcGIS®. Regarding air quality levels, AERIS provides mean annual and mean monthly concentration values as well as the indicators established in Directive 2008/50/EC, namely the 19th highest hourly value for NO2; the 25th highest hourly value and the 4th highest daily value for SO2; the 36th highest daily value for PM10; and the 26th highest maximum daily 8-hour value, SOMO35 and AOT40 for O3. Regarding atmospheric deposition, the annual accumulated deposition per unit area of oxidized and reduced nitrogen species, as well as of sulfur, can be estimated.
When these values are related to specific characteristics of the modeling domain, such as land use, forest and crop covers, population counts and epidemiological studies, a wide array of impacts can be calculated. Focusing on impacts on ecosystems and soils, AERIS is able to estimate critical load exceedances and accumulated average exceedances for nitrogen and sulfur species. Damage to forests is estimated as an exceedance of the established critical levels of NO2 and SO2. Additionally, AERIS is able to quantify damage caused by O3 and SO2 to grapes, maize, potato, rice, sunflower, tobacco, tomato, watermelon and wheat. Impacts on human health are modeled as a consequence of exposure to PM2.5 and O3 and quantified as losses in statistical life expectancy and premature mortality indicators. The accuracy of the integrated assessment model has been tested by statistically contrasting its results with those yielded by the conventional AQM, showing a good level of agreement in most cases. Because the impacts cannot be produced directly by the AQM, a credibility analysis was carried out by comparing the outputs of AERIS for a given emission scenario, through probability tests, against the performance of GAINS for the same scenario; this analysis revealed a good correspondence in the means and the probability distributions of the datasets.
The verification tests applied to AERIS suggest that its results are consistent enough to be credited as reasonable and realistic. In conclusion, the main motivation for creating the model was to produce a reliable yet simple screening tool that supports decision and policy making when analyzing different "what-if" scenarios at a low computational cost. The interaction with policy makers and other stakeholders dictated finding a compromise between the complexity of environmental modeling and the conciseness of policies, which AERIS reflects in both its conceptual and computational structures. Finally, it should be noted that AERIS has been created for use within a policy-driven assessment framework and should by no means be considered a substitute for ordinary air quality models.
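As a hedged illustration of the source-receptor matrices mentioned above (the exact functional form used in AERIS is not given in the abstract; a linear response is assumed here purely for illustration), an SRM maps sectoral emission changes onto an air quality indicator:

\[
C_j \;=\; C_j^{\mathrm{base}} \;+\; \sum_{s=1}^{20} S_{js}\,\Delta E_s ,
\]

where \(\Delta E_s\) is the percentage change in the emissions of sector \(s\), \(S_{js}\) is the source-receptor coefficient linking sector \(s\) to indicator \(j\) (obtained from the statistical parameterization of the WRF/CMAQ runs), and \(C_j^{\mathrm{base}}\) is the baseline value of the indicator.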

Relevance:

80.00%

Publisher:

Abstract:

We reviewed the use of advanced display technologies for monitoring in anesthesia. Researchers are investigating displays that integrate information and that, in some cases, also deliver the results continuously to the anesthesiologist. Integrated visual displays reveal higher-order properties of the patient's state and speed up responses to events, but their benefits under an intensely time-shared workload are unknown. Head-mounted displays seem to shorten the time to respond to changes, but their impact on peripheral awareness and attention is unknown. Continuous auditory displays extending pulse oximetry seem to shorten response times and improve the ability to time-share other tasks, but their integration into the already noisy operating environment still needs to be tested. We reviewed the advantages and disadvantages of the three approaches, drawing on findings from other fields, such as aviation, to suggest outcomes where there are still no results for the anesthesia context. Proving that advanced patient monitoring displays improve patient outcomes is difficult; a more realistic goal is probably to show that such displays lead to better situational awareness, earlier responses, and lower workload, all of which keep anesthesia practice away from the outer boundaries of safe operation.

Relevance:

80.00%

Publisher:

Abstract:

Distributed digital control systems provide alternatives to conventional, centralised digital control systems. Typically, a modern distributed control system will comprise a multi-processor or network of processors, a communications network, an associated set of sensors and actuators, and the systems and applications software. This thesis addresses the problem of how to design robust decentralised control systems, such as those used to control event-driven, real-time processes in time-critical environments. Emphasis is placed on studying the dynamical behaviour of a system and identifying ways of partitioning it so that it may be controlled in a distributed manner. A structural partitioning technique is adopted which makes use of natural physical sub-processes in the system, which are then mapped onto the software processes that control the system. However, communication is required between the processes because of the disjoint nature of the distributed (i.e. partitioned) state of the physical system. The structural partitioning technique, together with recent developments in the theory of potential controllability and observability, forms the basis for the design of the controllers. In particular, the method is used to derive a decentralised estimate of the state vector for a continuous-time system, and the work is extended to derive a distributed estimate for a discrete-time system. Emphasis is also given to the role of communications in the distributed control of processes and to the partitioning technique needed to design distributed and decentralised systems with resilient structures. A method is presented for the systematic identification of the communications necessary for distributed control. It is also shown that the structural partitions can be used directly in the design of software fault-tolerant concurrent controllers. In particular, a structural partition can be used to identify the boundary of the conversation that protects a specific part of the system, and, for certain classes of system, the partitions can be used to identify processes which may be dynamically reconfigured in the event of a fault. These methods should be of use in the design of robust distributed systems.
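A hedged sketch of the kind of decentralised state estimate referred to above, written in standard observer notation rather than the thesis' own: for a structural partition into subsystems with local state \(x_i\), input \(u_i\) and measurement \(y_i = C_i x_i\), each partition can run a local observer of the form

\[
\dot{\hat{x}}_i \;=\; A_{ii}\hat{x}_i + B_i u_i + \sum_{j \neq i} A_{ij}\,\hat{x}_j^{\,\mathrm{comm}} + L_i\bigl(y_i - C_i\hat{x}_i\bigr),
\]

where \(\hat{x}_j^{\,\mathrm{comm}}\) are estimates communicated by neighbouring partitions and \(L_i\) is a local observer gain; the communication links that must exist are exactly those corresponding to non-zero coupling blocks \(A_{ij}\), which is the sort of systematic identification of necessary communications the thesis is concerned with.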

Relevance:

80.00%

Publisher:

Abstract:

Thematic Harmonisation in Electrical and Information EngineeRing in Europe, Project Nr. 10063-CP-1-2000-1-PT-ERASMUS-ETNE.

Relevance:

80.00%

Publisher:

Abstract:

This paper is devoted to the teaching of event-driven programming with Visual C# in specialized Informatics training in high schools. Some basic tools and technologies for implementing graphics and animation in C# are discussed, and two example problems are proposed.

Relevance:

80.00%

Publisher:

Abstract:

With the proliferation of multimedia data and ever-growing requests for multimedia applications, there is an increasing need for efficient and effective indexing, storage and retrieval of multimedia data, such as graphics, images, animation, video, audio and text. Due to the special characteristics of multimedia data, Multimedia Database Management Systems (MMDBMSs) have emerged and attracted great research attention in recent years. Though much research effort has been devoted to this area, it is still far from maturity and many open issues remain. In this dissertation, with a focus on addressing three of the essential challenges in developing an MMDBMS, namely the semantic gap, perception subjectivity and data organization, a systematic and integrated framework is proposed, with a video database and an image database serving as the testbed. In particular, the framework addresses these challenges separately yet coherently from the three main aspects of an MMDBMS: multimedia data representation, indexing and retrieval. In terms of multimedia data representation, the key to addressing the semantic gap is to intelligently and automatically model the mid-level representation and/or semi-semantic descriptors besides extracting the low-level media features. The data organization challenge is mainly addressed through media indexing, where various levels of indexing are required to support the diverse query requirements. In particular, the focus of this study is to facilitate high-level video indexing by proposing a multimodal event mining framework associated with temporal knowledge discovery approaches. With respect to the perception subjectivity issue, advanced techniques are proposed to support user interaction and to effectively model users' perception from the feedback at both the image level and the object level.

Relevance:

80.00%

Publisher:

Abstract:

In this work, the complexity profile is applied to business process modeling based on the Event-driven Process Chain (EPC) formalism. Since the complexity profile is a function of the mutual information between the elements of a system at each of its scale levels, applying it to process modeling makes it possible to establish how much information the models capture and how that information is distributed across each possible sequence of their subprocesses. We first explore the qualitative characteristics of the profile according to how the diagram is structured and what meaning lies behind the most basic structures from which a process can be built. Insofar as process modeling is carried out with the aim of optimizing how a business operates, applying the complexity profile can also provide a quantitative measure with which to establish how much modifying a subprocess affects the final result.
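For reference, the mutual information on which the complexity profile is built is the standard information-theoretic quantity (a sketch; the scale-indexed definition used in the work itself is not reproduced in the abstract):

\[
I(X;Y) \;=\; H(X) + H(Y) - H(X,Y)
       \;=\; \sum_{x,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)} ,
\]

with the complexity profile assigning to each scale \(k\) the amount of information shared by at least \(k\) elements of the system, so that tightly coordinated subprocesses contribute information at coarse scales while independent subprocesses contribute only at the finest scale.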

Relevance:

80.00%

Publisher:

Abstract:

This research aims to determine whether announcements of CEO and board chairman changes at companies listed on the Lima Stock Exchange (BVL) affect firm value in the days around the announcement. It was the abrupt departure of Steve Jobs as CEO of Apple Inc., due to a fatal illness, that led us to ask how the shares of companies going through similar events perform. As is known, the market punished Apple's stock on the day of Steve Jobs' death, with drops of more than 2% on the day of the announcement. Would developed and emerging markets behave in the same way? Do CEO-change events generate the same reactions in emerging countries? To our surprise, at the local level we observed general-management changes, such as the one at Backus & Johnston (3 September 2013), with no significant market effect; indeed, the stock was not even traded again until 19 September 2013. In order to answer the questions initially posed, the Event Analysis methodology was applied, which has already been used to evaluate the existence of abnormal returns following CEO and board chairman changes in developed markets such as the United States, the Netherlands, Australia and Spain, as well as in emerging markets such as Colombia, Chile and Mexico. Our study of the Peruvian market was based on a sample of the fifty companies whose shares are the most frequently traded on the BVL, considering all CEO and board chairman change events between 1992 and 2014. According to the results of the research, the abnormal returns around CEO and board chairman changes are not statistically significant, so they could not be used by hedge funds to generate event-driven strategies. The predominant reasons are the high volatility of the results and the shallowness of a market with little liquidity, together with the fact that relevant events are simply not taken into account by the market.
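For context, the quantities usually computed under this event-study methodology are (a hedged sketch using the market model; the abstract does not state which expected-return model was actually estimated):

\[
AR_{it} \;=\; R_{it} - \bigl(\hat{\alpha}_i + \hat{\beta}_i R_{mt}\bigr),
\qquad
CAR_i(t_1,t_2) \;=\; \sum_{t=t_1}^{t_2} AR_{it},
\]

where \(R_{it}\) is the return of stock \(i\) on day \(t\), \(R_{mt}\) is the market index return, \(\hat{\alpha}_i\) and \(\hat{\beta}_i\) are estimated over a pre-event window, and the statistical significance of the average \(CAR\) across announcement events is what determines whether abnormal returns exist.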

Relevance:

80.00%

Publisher:

Abstract:

The next generation of vehicles will be equipped with automated Accident Warning Systems (AWSs) capable of warning neighbouring vehicles about hazards that might lead to accidents. The key enabling technology for these systems is the Vehicular Ad-hoc Network (VANET), but the dynamics of such networks make the crucial timely delivery of warning messages challenging. While most previously attempted implementations have used broadcast-based data dissemination schemes, these do not cope well as data traffic load or network density increases. This thesis addresses the problem of sending warning messages in a timely manner by employing a network coding technique. The proposed NETwork COded DissEmination (NETCODE) is a VANET-based AWS responsible for generating and sending warnings to the vehicles on the road. NETCODE offers an XOR-based data dissemination scheme that sends multiple warnings in a single transmission and therefore reduces the total number of transmissions required to send the same number of warnings that broadcast schemes send. Hence, it reduces contention and collisions in the network, improving the delivery time of the warnings. The first part of this research (Chapters 3 and 4) asserts that in order to build a warning system it is necessary to ascertain the system requirements, the information to be exchanged, and the protocols best suited for communication between vehicles. Therefore, a study of these factors, along with a review of existing proposals identifying their strengths and weaknesses, is carried out. An analysis of existing broadcast-based warning schemes is then conducted, which concludes that although broadcasting is the most straightforward scheme, load can result in an effective collapse, leading to unacceptably long transmission delays. The second part of this research (Chapter 5) proposes the NETCODE design, including the main contribution of this thesis: a pair of encoding and decoding algorithms that use an XOR-based technique to reduce transmission overheads and thus allow warnings to be delivered in time. The final part of this research (Chapters 6--8) evaluates the performance of the proposed scheme in terms of how it reduces the number of transmissions in the network in response to growing data traffic load and network density, and investigates its capacity to detect potential accidents. The evaluations use a custom-built simulator to model real-world scenarios such as city areas, junctions, roundabouts, motorways and so on. The study shows that the reduction in the number of transmissions significantly reduces contention in the network, allowing vehicles to deliver warning messages more rapidly to their neighbours. It also examines the relative performance of NETCODE when handling both sudden event-driven and longer-term periodic messages in diverse scenarios under stress caused by increasing numbers of vehicles and transmissions per vehicle. This work confirms the thesis' primary contention that XOR-based network coding provides a potential solution on which a more efficient AWS data dissemination scheme can be built.
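As a hedged sketch of the XOR combination underlying NETCODE (illustrative only; the thesis' actual encoding and decoding algorithms also handle packet scheduling and knowledge of what neighbours have already received), a relay XORs two warnings into one transmission and a vehicle that already holds one of them recovers the other:

```java
import java.nio.charset.StandardCharsets;

public class XorCodingSketch {

    // XOR two byte arrays, implicitly zero-padding the shorter one.
    static byte[] xor(byte[] a, byte[] b) {
        byte[] out = new byte[Math.max(a.length, b.length)];
        for (int i = 0; i < out.length; i++) {
            byte x = i < a.length ? a[i] : 0;
            byte y = i < b.length ? b[i] : 0;
            out[i] = (byte) (x ^ y);
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] w1 = "WARN:icy-road@junction-12".getBytes(StandardCharsets.UTF_8);
        byte[] w2 = "WARN:obstacle@roundabout-3".getBytes(StandardCharsets.UTF_8);

        // The relay sends one coded packet instead of two plain ones.
        byte[] coded = xor(w1, w2);

        // A vehicle that already overheard w1 recovers w2 from the coded packet.
        byte[] recovered = xor(coded, w1);
        System.out.println(new String(recovered, StandardCharsets.UTF_8).trim());
    }
}
```

The saving is exactly the point made in the abstract: one transmission carries two warnings, so contention and collisions drop as load grows.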

Relevance:

40.00%

Publisher:

Abstract:

In this paper an approach to extreme-event control in wastewater treatment plant operation by means of automatic supervisory control is discussed. The framework presented is based on the fact that different operational conditions manifest themselves as clusters in a multivariate measurement space. These clusters are identified and linked to specific events by use of principal component analysis and fuzzy c-means clustering. A reduced system model is assigned to each type of extreme event and used to calculate appropriate local controller set points. In earlier work we showed that this approach is applicable to wastewater treatment control using look-up tables to determine the current set points. In this work we focus on the automatic determination of appropriate set points by means of steady-state and dynamic predictions. The performance of a relatively simple steady-state supervisory controller is compared with that of a model predictive supervisory controller. A look-up table approach is also included in the comparison, as it provides a simple and robust alternative to the steady-state and model predictive controllers. The methodology is illustrated in a simulation study.
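For reference, the standard fuzzy c-means updates behind the cluster identification step are (a sketch in the usual notation, with fuzziness exponent \(m>1\); the paper's exact configuration is not given in the abstract):

\[
u_{ik} \;=\; \left[ \sum_{j=1}^{c} \left( \frac{\lVert x_k - v_i \rVert}{\lVert x_k - v_j \rVert} \right)^{\frac{2}{m-1}} \right]^{-1},
\qquad
v_i \;=\; \frac{\sum_{k=1}^{N} u_{ik}^{\,m}\, x_k}{\sum_{k=1}^{N} u_{ik}^{\,m}} ,
\]

iterated until the memberships \(u_{ik}\) converge; incoming measurement vectors (typically after projection onto the leading principal components) are then assigned to the cluster, and hence the operational condition, with the highest membership.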

Relevance:

40.00%

Publisher:

Abstract:

With the advent of object-oriented languages and the portability of Java, the development and use of class libraries has become widespread. Effective class reuse depends on class reliability which in turn depends on thorough testing. This paper describes a class testing approach based on modeling each test case with a tuple and then generating large numbers of tuples to thoroughly cover an input space with many interesting combinations of values. The testing approach is supported by the Roast framework for the testing of Java classes. Roast provides automated tuple generation based on boundary values, unit operations that support driver standardization, and test case templates used for code generation. Roast produces thorough, compact test drivers with low development and maintenance cost. The framework and tool support are illustrated on a number of non-trivial classes, including a graphical user interface policy manager. Quantitative results are presented to substantiate the practicality and effectiveness of the approach. Copyright (C) 2002 John Wiley & Sons, Ltd.
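As a hedged illustration of the tuple idea (generic Java, not the Roast API, whose interfaces are not described in the abstract): each test case is a tuple of parameter values, and taking the Cartesian product of per-parameter boundary values yields a thorough set of combinations.

```java
import java.util.ArrayList;
import java.util.List;

public class BoundaryTuples {

    // Cartesian product of per-parameter boundary values; each resulting tuple is one test case.
    static List<List<String>> tuples(List<List<String>> domains) {
        List<List<String>> result = new ArrayList<>();
        result.add(new ArrayList<>());
        for (List<String> domain : domains) {
            List<List<String>> next = new ArrayList<>();
            for (List<String> partial : result) {
                for (String value : domain) {
                    List<String> extended = new ArrayList<>(partial);
                    extended.add(value);
                    next.add(extended);
                }
            }
            result = next;
        }
        return result;
    }

    public static void main(String[] args) {
        // Illustrative boundary values for a method taking (size, name).
        List<List<String>> domains = List.of(
                List.of("-1", "0", "1", "2147483647"),        // around the lower bound and int overflow
                List.of("", "a", "name-at-maximum-length"));  // empty, minimal and long strings
        tuples(domains).forEach(t -> System.out.println("test case: " + t));
    }
}
```

In a framework such as Roast, each generated tuple would be fed into a standardized test driver rather than printed; the sketch only shows where the combinatorial coverage comes from.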

Relevance:

40.00%

Publisher:

Abstract:

The need for better adaptation of networks to the flows they transport has led to research on new approaches such as content-aware networks and network-aware applications. In parallel, recent developments in multimedia and content-oriented services and applications such as IPTV, video streaming, video on demand, and Internet TV have reinforced interest in multicast technologies. IP multicast has not been widely deployed due to inter-domain and QoS support problems; therefore, alternative solutions have been investigated. This article proposes a management-driven hybrid multicast solution that is multi-domain and media-oriented, and combines overlay multicast, IP multicast, and P2P. The architecture is developed in a content-aware network and network-aware application environment, based on light network virtualization. The multicast trees can be seen as parallel virtual content-aware networks, spanning a single IP domain or multiple IP domains, customized to the type of content to be transported while fulfilling the quality of service requirements of the service provider.