893 results for Policy-based network management
Abstract:
Seventy percent of Myanmar's population lives in rural areas. Although health workers are adequately trained, they are overburdened by understaffing and insufficient supplies. The literature confirms that information and communication technologies can extend the reach of healthcare. In this paper, we present an SMS-based social network that aims to help health workers interact with other medical professionals through topic-based message delivery. Topics describe users' interests and message content; a message is delivered by matching its content against user interests. Users express topics as ICD-10 codes, drawn from a comprehensive medical taxonomy. In this ICD-10-coded SMS scheme, a set of prearranged codes provides a common language for sending structured information that fits inside a single SMS.
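As a rough illustration of the topic-based delivery the abstract describes, the Python sketch below routes a message to every user whose ICD-10 interests overlap the message's topic codes; the code values and the any-overlap rule are assumptions for illustration, not the paper's exact protocol.

# Minimal sketch of topic-based SMS delivery keyed on ICD-10 codes.
# The matching rule (any shared code triggers delivery) is an assumption.

from dataclasses import dataclass, field

@dataclass
class User:
    phone: str
    interests: set = field(default_factory=set)  # ICD-10 codes the user follows

def route_message(users, sender, body, topics):
    """Deliver an SMS to every user whose interests overlap the message topics."""
    recipients = [u for u in users if u.interests & topics and u.phone != sender]
    for u in recipients:
        print(f"SMS to {u.phone}: [{','.join(sorted(topics))}] {body}")
    return recipients

users = [
    User("+95-111", {"A09", "B54"}),   # diarrhoea, malaria
    User("+95-222", {"J18"}),          # pneumonia
]
route_message(users, "+95-333", "Outbreak reported in village X", {"B54"})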
Abstract:
Electronic waste is a fairly new and largely unknown phenomenon, and governments have only recently acknowledged it as a threat to the environment and public health. In attempting to mitigate the hazards associated with this rapidly growing toxic waste stream, governments at all levels have started to implement e-waste management programs. The legislation enacted to create these programs is based on extended producer responsibility (EPR) policy. EPR shifts the burden of final disposal of e-waste from the consumer or municipal solid waste system to the manufacturer of electronic equipment. Applying an EPR policy is intended to send signals up the production chain to the manufacturer; the desired outcome is to change methods of production so as to reduce production inputs and outputs, with the ultimate goal of changing product design. This thesis performs a policy analysis of current e-waste policies at the federal and state levels of government, focusing specifically on Texas e-waste policies. The Texas e-waste law, known as HB 2714 or the Texas Computer TakeBack Law, requires manufacturers to provide individual consumers with a free and convenient method for returning their used computers. The law is based on individual producer responsibility and shared responsibility among consumers, retailers, recyclers, and the TCEQ. Using a set of evaluation criteria created by the Organization for Economic Co-operation and Development, the Texas e-waste law was examined to determine its effectiveness at reducing the threat of e-waste in Texas, and recommendations were made for the legislature to incorporate into HB 2714. The results of the policy analysis show that HB 2714 is a poorly constructed law and does not deliver the results seen in other states with EPR policies. The TakeBack Law does little to change manufacturers' collection methods and even less to change their production habits. If the e-waste problem is to be taken seriously, HB 2714 must be amended to reflect the changes proposed in this thesis.
Abstract:
Improving energy efficiency in buildings is one of the goals of Smart City initiatives and a challenge for the European Union. This paper presents a 6LoWPAN wireless transducer network (BatNet) as part of an open energy management system. The network has been designed to operate in buildings, collecting environmental information (temperature, humidity, illumination and presence) and real-time electrical consumption data (voltage, current and power factor). The system has been implemented and tested in the Energy Efficiency Research Facility at CeDInt-UPM.
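A minimal sketch of the kind of record such a transducer node might report, assuming illustrative field names and units (the paper does not specify a schema); it also shows the standard single-phase active-power relation P = V * I * PF implied by the measured quantities:

# Illustrative sketch of a BatNet-style reading; field names and units
# are assumptions, not the paper's actual data model.

from dataclasses import dataclass

@dataclass
class NodeReading:
    node_id: str
    temperature_c: float
    humidity_pct: float
    illumination_lux: float
    presence: bool
    voltage_v: float
    current_a: float
    power_factor: float

    @property
    def active_power_w(self) -> float:
        # P = V * I * PF for a single-phase AC load
        return self.voltage_v * self.current_a * self.power_factor

r = NodeReading("lab-03", 22.5, 41.0, 310.0, True, 230.0, 1.8, 0.92)
print(f"{r.node_id}: {r.active_power_w:.1f} W")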
Abstract:
An integrated approach combining a random-utility-based multiregional input-output model with a road transport network model was developed to evaluate the application of a fee to heavy-goods vehicles (HGVs) in Spain. For this purpose, a distance-based charge scenario (in euros per vehicle-kilometer) for HGVs was evaluated for a selected motorway network in Spain. Although the aim of this charging policy was to increase the efficiency of transport, the approach also identified clear direct and indirect impacts on the regional economy. Estimates of the magnitude and extent of indirect effects on aggregated macroeconomic indicators (employment and gross domestic product) are provided. The macroeconomic effects of the charging policy were found to be positive for some regions and negative for others.
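For intuition, a distance-based charge of this kind is a simple product of rate, traffic and segment length; the sketch below uses invented figures, not the study's values:

# Toy illustration of a distance-based HGV charge (EUR per vehicle-km).

def corridor_revenue(rate_eur_per_vkm: float, hgv_trips: float, length_km: float) -> float:
    """Annual charge revenue on one motorway segment."""
    return rate_eur_per_vkm * hgv_trips * length_km

# e.g. 0.15 EUR/vkm, 1.2 million HGV trips/year over an 85 km segment
print(f"{corridor_revenue(0.15, 1_200_000, 85):,.0f} EUR/year")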
Abstract:
A participatory modelling process has been conducted in two areas of the Guadiana river basin (the upper and middle sub-basins) in Spain, with the aim of supporting decision making in water management. The area has a semi-arid climate where irrigated agriculture plays a key role in the economic development of the region and accounts for around 90% of water use. Following the guidelines of the European Water Framework Directive, we promote stakeholder involvement in water management with the aim of achieving an improved understanding of the water system and encouraging the exchange of knowledge and views between stakeholders, in order to help build a shared vision of the system. At the same time, the resulting models, which integrate the different sectors and views, provide some insight into the impacts that different management options and possible future scenarios could have. The methodology is based on a Bayesian network combined with an economic model and, in the middle Guadiana sub-basin, with a crop model. The resulting integrated modelling framework is used to simulate possible water policy, market and climate scenarios and to determine the impacts of those scenarios on farm income and on the environment. At the end of the modelling process, an evaluation questionnaire was completed by participants in both sub-basins. Results show that stakeholders find this type of process very helpful for improving their understanding of the system, understanding each other's views and reducing conflict where it exists. In addition, they found the model an extremely useful tool to support management. The graphical interface, the quantitative output and the explicit representation of uncertainty helped stakeholders to better understand the implications of the scenarios tested. Finally, the combination of different types of models was also found very useful, as it allowed specific aspects of the water management problems to be explored in detail.
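As a hedged sketch of how a Bayesian network can propagate a policy scenario through to farm income, the toy model below samples an assumed conditional table; all variables and probabilities are invented for illustration and are not taken from the Guadiana models:

# Toy forward-sampling of a two-node Bayesian network:
# policy -> water availability -> farm income. All numbers are invented.

import random

def sample_scenario(policy: str) -> float:
    """Draw one plausible farm-income outcome under a water policy scenario."""
    # P(high water availability | policy) -- assumed conditional table
    p_high_water = {"quota": 0.4, "market": 0.6, "status_quo": 0.7}[policy]
    water = "high" if random.random() < p_high_water else "low"
    # farm income (kEUR/farm) given water availability -- assumed distributions
    return random.gauss(60, 8) if water == "high" else random.gauss(35, 10)

random.seed(1)
for policy in ("quota", "market", "status_quo"):
    incomes = [sample_scenario(policy) for _ in range(10_000)]
    print(policy, round(sum(incomes) / len(incomes), 1), "kEUR mean income")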
Abstract:
The aim of this paper is to discuss the use of knowledge models to formulate general applications. First, the paper presents the recent evolution of the software field, where increasing attention is paid to conceptual modelling. Then, the current state of knowledge modelling techniques is described, where increased reliability is available through modern knowledge acquisition techniques and supporting tools. The KSM (Knowledge Structure Manager) tool is described next. First, the concept of a knowledge area is introduced as a building block, which includes methods to perform a collection of tasks together with the bodies of knowledge providing the basic methods to perform the basic tasks. Then, the CONCEL language for defining domain vocabularies and the LINK language for formulating methods are introduced. Finally, the object-oriented implementation of a knowledge area is described, and a general methodology for application design and maintenance supported by KSM is proposed. To illustrate the concepts and methods, an example of a system for intelligent traffic management in a road network is described, followed by a proposal for generalizing the resulting architecture for reuse. Finally, some concluding comments are offered on the feasibility of using these knowledge modelling tools and methods for general application design.
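The following sketch suggests what the object-oriented "knowledge area" idea might look like in code: a unit bundling a vocabulary (CONCEL-like), task methods (LINK-like) and component sub-areas. Class and method names are assumptions, not KSM's actual API:

# Sketch of a knowledge area as an object: vocabulary + methods + sub-areas.
# Names and delegation strategy are illustrative assumptions.

class KnowledgeArea:
    def __init__(self, name, vocabulary, methods):
        self.name = name
        self.vocabulary = vocabulary      # domain terms (CONCEL-like)
        self.methods = methods            # task name -> callable (LINK-like)
        self.subareas = []

    def add_subarea(self, area):
        self.subareas.append(area)

    def perform(self, task, **facts):
        if task in self.methods:
            return self.methods[task](**facts)
        for sub in self.subareas:         # delegate to component areas
            if task in sub.methods:
                return sub.perform(task, **facts)
        raise LookupError(f"no method for task '{task}' in {self.name}")

congestion = KnowledgeArea(
    "congestion-diagnosis",
    vocabulary={"occupancy", "speed"},
    methods={"diagnose": lambda occupancy, speed:
             "congested" if occupancy > 0.35 and speed < 40 else "free-flow"},
)
traffic = KnowledgeArea("traffic-management", vocabulary=set(), methods={})
traffic.add_subarea(congestion)
print(traffic.perform("diagnose", occupancy=0.5, speed=25))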
Abstract:
In this paper we focus on the selection of safeguards in a fuzzy risk analysis and management methodology for information systems (IS). Assets are connected by dependency relationships, and a failure of one asset may affect other assets. After computing the impact and risk indicators associated with previously identified threats, we identify and apply safeguards to reduce risks in the IS by minimizing the transmission probabilities of failures throughout the asset network. However, as safeguards have associated costs, the aim is to select the safeguards that minimize costs while keeping the risk within acceptable levels. To do this, we propose a dynamic programming-based method that incorporates simulated annealing to tackle the resulting optimization problem.
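A minimal sketch of the simulated-annealing part of such a method, assuming a made-up multiplicative risk model, safeguard costs and cooling schedule (none of which come from the paper):

# Simulated annealing over safeguard subsets: minimise cost while keeping
# residual risk under a threshold. All figures are illustrative.

import math, random

costs  = [10, 4, 7, 3, 9]                 # cost of each candidate safeguard
effect = [0.30, 0.10, 0.20, 0.05, 0.25]   # fraction of risk each one removes
BASE_RISK, MAX_RISK = 100.0, 40.0

def residual_risk(sel):
    r = BASE_RISK
    for i, on in enumerate(sel):
        if on:
            r *= (1 - effect[i])          # assumed multiplicative reduction
    return r

def objective(sel):
    cost = sum(c for c, on in zip(costs, sel) if on)
    penalty = max(0.0, residual_risk(sel) - MAX_RISK) * 100  # infeasibility
    return cost + penalty

random.seed(0)
sel = [random.random() < 0.5 for _ in costs]
T = 10.0
while T > 0.01:
    cand = sel[:]
    cand[random.randrange(len(cand))] ^= True        # flip one safeguard
    d = objective(cand) - objective(sel)
    if d < 0 or random.random() < math.exp(-d / T):
        sel = cand                                   # accept better or, sometimes, worse
    T *= 0.95                                        # geometric cooling
print(sel, round(residual_risk(sel), 1), objective(sel))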
Abstract:
The application of pervasive computing is extending from field-specific settings to everyday use. The Internet of Things (IoT) is the most prominent example of its application and of its intrinsic complexity compared with classical application development. The main characteristic that differentiates pervasive computing from other forms of computing lies in the use of contextual information. Some classical applications do not use any contextual information whatsoever; others use only part of it, integrated in an ad hoc fashion through an application-specific implementation. This information is handled in a one-off manner because of the difficulty of sharing context across applications. As a matter of fact, the application type determines what counts as contextual information. For instance, for an image editor, the image is the information and its metadata, such as the time of the shot or the camera settings, are the context, whereas for a file system the image, including its camera settings, is the information, and the metadata external to the file, such as the modification date or the last-access timestamp, constitute the context. This means that contextual information is hard to share, and a communication middleware that supports context explicitly decidedly eases application development in pervasive computing.
However, the use of context should not be mandatory; otherwise the middleware would be reduced to a context-only middleware and lose compatibility with non-context-aware applications. SilboPS, our implementation of content-based publish/subscribe inspired by SIENA [11, 9], solves this problem by adding two new elements to the paradigm: the context and the context function. The context represents the contextual information attached to the message to be sent, or that required by the subscriber in order to receive notifications, whereas the context function is evaluated over the publisher's and the subscriber's contexts to decide whether the current message and context are useful to the subscriber. In this manner, context-management logic is decoupled from context-function logic, increasing the flexibility of communication across different applications. Since the default context is empty, context-aware and classical applications can use the same SilboPS, resolving the syntactic mismatch between the two categories. The possible semantic mismatch nevertheless remains, because it depends on how each application interprets the data and cannot be resolved by an agnostic third party. The IoT environment introduces not only context but also scaling challenges. The number of sensors, the volume of data they produce and the number of applications that could be interested in harvesting such data are growing all the time. Today's response to this need is cloud computing, but cloud applications need to be able to scale elastically [22]. Unfortunately, there is no distributed-system slicing primitive that supports internal state partitioning [33] together with hot swapping, and current cloud systems like OpenStack or OpenNebula do not provide elastic monitoring out of the box. This means there is a two-sided problem: 1) how to scale an application elastically and 2) how to monitor the application and know when it should scale in or out. E-SilboPS is the elastic version of SilboPS. It is well suited as a solution to the monitoring problem thanks to its content-based publish/subscribe nature and, unlike other solutions [5], it scales efficiently so as to meet workload demand without overprovisioning or underprovisioning. Additionally, it is based on a newly designed algorithm that shows how to add elasticity to an application under different state constraints: stateless, isolated stateful with external coordination, and shared stateful with general coordination. Its evaluation shows that it achieves remarkable speedups, with the network layer as the main limiting factor: the calculated efficiency (see Figure 5.8) shows how each configuration performs with respect to adjacent configurations. This provides insight into the current trend of the whole system, in order to predict whether the next configuration would offset its cost with the resulting gain in notification throughput. Particular attention has been paid to the evaluation of same-cost deployments, in order to find out which one is best for a given workload demand. Finally, the overhead introduced by the different configurations has been estimated to identify the primary limiting factor for throughput. This helps to determine the intrinsic sequential part and base overhead [26] of an optimal versus a suboptimal deployment.
Depending on the type of workload, this estimate can be as low as 10% at a local optimum or as high as 60% when an overprovisioned configuration is deployed for the given workload. This Karp-Flatt metric estimation is important for system management because it indicates the direction (scale in or out) in which the deployment has to be changed to improve its performance, rather than simply applying a scale-out policy.
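A minimal sketch of the context-extended publish/subscribe matching described above, in the spirit of SilboPS; the function names, message format and example context function are assumptions, and an empty context is treated as matching everything so that classical subscribers keep working:

# Content-based matching plus a context function over publisher and
# subscriber contexts. Structure is illustrative, not SilboPS's actual API.

def matches_content(message, subscription):
    """Classic content-based matching: every filter constraint must hold."""
    return all(message.get(k) == v for k, v in subscription["filter"].items())

def context_ok(pub_ctx, sub_ctx, ctx_fn):
    """Evaluate the context function; empty contexts match everything."""
    if not pub_ctx and not sub_ctx:
        return True
    return ctx_fn(pub_ctx, sub_ctx)

# Example: deliver only when publisher and subscriber are in the same room.
same_room = lambda p, s: p.get("room") == s.get("room")

subscription = {"filter": {"type": "temperature"}, "context": {"room": "lab"}}
message, pub_ctx = {"type": "temperature", "value": 22.5}, {"room": "lab"}

if matches_content(message, subscription) and \
   context_ok(pub_ctx, subscription["context"], same_room):
    print("notify subscriber:", message)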
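The Karp-Flatt metric mentioned at the end estimates the experimentally determined serial fraction e from the measured speedup psi on p nodes, e = (1/psi - 1/p) / (1 - 1/p); the numbers below are hypothetical, not the thesis's measurements:

# Karp-Flatt metric: a rising serial fraction as p grows signals that
# adding nodes no longer pays off (i.e. scale in rather than out).

def karp_flatt(speedup: float, p: int) -> float:
    return (1 / speedup - 1 / p) / (1 - 1 / p)

# Hypothetical measurements: (nodes, measured speedup)
for p, psi in [(2, 1.8), (4, 3.1), (8, 4.9)]:
    print(p, "nodes -> serial fraction ~", round(karp_flatt(psi, p), 3))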
Abstract:
The Habitats Directive has created a European network of protected areas combining environmental protection with social and economic activities. Although not clearly advocated in the Directive, participatory approaches have incrementally emerged in order to ensure adequate management of the Natura 2000 network. This paper looks at the reasons why the European Commission, on the one hand, and national/local authorities, on the other, chose to engage in participatory approaches, and assesses the structure, degree and scope of these approaches in the light of input and output legitimacy. The main findings are that participation was mostly implemented as a reaction to conflicts and out of concern over policy implementation, two elements that continue to drive the philosophy of the Natura 2000 network's management. The limits of participation in Brussels are contrasted with the potential for more genuine and effective participation mechanisms in the field.
Abstract:
A lively debate has emerged on the proposed ‘Connected Continent’ legislative package presented by the European Commission in September 2013. The package contains a proposed rule on the ‘open Internet’, which was heavily discussed in European Parliament hearings in early December. This commentary argues that while the proposed rule is in principle balanced and appealing, it is utterly impractical due to the enormous uncertainty that its application would entail. At the same time, the rule is very far from what neutrality proponents have argued for almost a decade: rather than a haven for Internet freedom, it would turn the Web into a place requiring constant micro-management and tutoring of user behaviour. Both arguments lead to the conclusion that the current proposal should be both reformed and analysed under a more holistic lens. On the one hand, Europe should launch an ambitious project for the future converged infrastructure by mobilising resources and reforming rules to encourage investment in ubiquitous, converged, ‘always on’ connectivity. On the other hand, enhanced legal certainty for broadband investment could justify a more neutrality-oriented approach to traffic management practices on the Internet. The author proposes a new approach to Internet regulation which, altogether, will lead to a more balanced and sustainable model for the future, without jeopardising user freedom.
Abstract:
When they look at Internet policy, EU policymakers seem mesmerised, if not bewitched, by the word ‘neutrality’. Originally confined to the infrastructure layer, today the neutrality rhetoric is being expanded to multi-sided platforms such as search engines and more generally online intermediaries. Policies for search neutrality and platform neutrality are invoked to pursue a variety of policy objectives, encompassing competition, consumer protection, privacy and media pluralism. This paper analyses this emerging debate and comes to a number of conclusions. First, mandating net neutrality at the infrastructure layer might have some merit, but it certainly would not make the Internet neutral. Second, since most of the objectives initially associated with network neutrality cannot be realistically achieved by such a rule, the case for network neutrality legislation would have to stand on different grounds. Third, the fact that the Internet is not neutral is mostly a good thing for end users, who benefit from intermediaries that provide them with a selection of the over-abundant information available on the Web. Fourth, search neutrality and platform neutrality are fundamentally flawed principles that contradict the economics of the Internet. Fifth, neutrality is a very poor and ineffective recipe for media pluralism, and as such should not be invoked as the basis of future media policy. All these conclusions have important consequences for the debate on the future EU policy for the Digital Single Market.
Abstract:
By switching the level of analysis and aggregating data from the micro-level of individual cases to the macro-level, quantitative data can be analysed within a more case-based approach. This paper presents such an approach in two steps. In the first step, it discusses the combination of Social Network Analysis (SNA) and Qualitative Comparative Analysis (QCA) in a sequential mixed-methods research design. In such a design, quantitative social network data on individual cases and their relations at the micro-level are used to describe the structure of the network that these cases constitute at the macro-level. Different network structures can then be compared by QCA. This strategy adds an element of potential causal explanation to SNA, while SNA indicators allow for a systematic description of the cases to be compared by QCA. Because mixing methods can be a promising but also risky endeavour, the methodological part also discusses the possibility that the underlying assumptions of the two methods could clash. In the second step, this research design is applied to an empirical study of policy network structures in Swiss politics. Through a comparison of 11 policy networks, causal paths that lead to a conflictual or consensual policy network structure are identified and discussed. The analysis reveals that different theoretical factors matter and that multiple conjunctural causation is at work. Based on both the methodological discussion and the empirical application, a combination of SNA and QCA appears to be a helpful methodological design for social science research and a way of using quantitative data within a more case-based approach.
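To make the SNA-to-QCA handoff concrete, the sketch below computes one network-level SNA indicator (density) per case and dichotomises it into a crisp-set QCA condition; the toy networks and the 0.5 threshold are invented for illustration:

# Per-case SNA indicator feeding a QCA truth-table condition.
# Density = observed ties / possible ties in an undirected network.

def density(n_nodes: int, edges: set) -> float:
    possible = n_nodes * (n_nodes - 1) / 2
    return len(edges) / possible if possible else 0.0

# Three toy policy networks: (number of actors, set of ties)
cases = {
    "transport": (5, {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4), (1, 3)}),
    "energy":    (5, {(0, 1), (2, 3)}),
    "health":    (4, {(0, 1), (1, 2), (0, 2), (2, 3)}),
}

for name, (n, edges) in cases.items():
    d = density(n, edges)
    dense = int(d >= 0.5)            # crisp-set membership for QCA
    print(f"{name}: density={d:.2f}  DENSE={dense}")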
Abstract:
There is a growing need for innovative methods of dealing with complex social problems. New types of collaborative efforts have emerged as a result of the inability of more traditional bureaucratic hierarchical arrangements, such as departmental programs, to resolve these problems. Network structures are one such arrangement at the forefront of this movement. Although collaboration through network structures offers an innovative response to social issues, there remains an expectation that outcomes and processes will follow traditional ways of working. Practitioners and policy makers alike need to understand the realities of what can be expected from network structures in order to maximize the benefits of these unique mechanisms.