787 results for Green IT framework
Abstract:
This paper describes the open source framework MARVIN for rapid application development in the field of biomedical and clinical research. MARVIN applications consist of modules that can be plugged together in order to provide the functionality required for a specific experimental scenario. Application modules work on a common patient database that is used to store and organize medical data as well as derived data. MARVIN provides a flexible input/output system with support for many file formats including DICOM, various 2D image formats and surface mesh data. Furthermore, it implements an advanced visualization system and interfaces to a wide range of 3D tracking hardware. Since it uses only highly portable libraries, MARVIN applications run on Unix/Linux, Mac OS X and Microsoft Windows.
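The plug-together module design described above can be sketched as follows. This is a minimal illustration only, assuming a shared patient database that modules read from and write derived data to; the class and method names (`PatientDatabase`, `Module`, `Application`) are hypothetical and not MARVIN's actual API.

```python
# Sketch of MARVIN-style pluggable modules sharing a common patient database.
# All names are illustrative, not the real MARVIN API.

class PatientDatabase:
    """Common store for medical and derived data, keyed by patient ID."""
    def __init__(self):
        self._records = {}

    def put(self, patient_id, key, value):
        self._records.setdefault(patient_id, {})[key] = value

    def get(self, patient_id, key):
        return self._records[patient_id][key]


class Module:
    """Base class: every module works on the shared database."""
    def __init__(self, db):
        self.db = db

    def run(self, patient_id):
        raise NotImplementedError


class DicomImportModule(Module):
    def run(self, patient_id):
        # A real module would parse a DICOM file; here we store a stub image.
        self.db.put(patient_id, "image", [[0, 1], [1, 0]])


class SegmentationModule(Module):
    def run(self, patient_id):
        image = self.db.get(patient_id, "image")
        # Derived data goes back into the same database.
        self.db.put(patient_id, "mask", [[v > 0 for v in row] for row in image])


class Application:
    """An application is just a sequence of modules plugged together."""
    def __init__(self, modules):
        self.modules = modules

    def run(self, patient_id):
        for m in self.modules:
            m.run(patient_id)


db = PatientDatabase()
app = Application([DicomImportModule(db), SegmentationModule(db)])
app.run("patient-42")
```

The point of the sketch is the composition model: a new experimental scenario is assembled by listing modules, not by writing a new monolithic application.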
Abstract:
Wireless Mesh Networks (WMN) have proven to be a key technology for increased network coverage of Internet infrastructures. The development process for new protocols and architectures in the area of WMN is typically split into evaluation by network simulation and testing of a prototype in a test-bed. Testing a prototype in a real test-bed is time-consuming and expensive. Uncontrollable external interference can occur, which makes debugging difficult. Moreover, the test-bed usually supports only a limited number of test topologies. Finally, mobility tests are impractical. Therefore, we propose VirtualMesh as a new testing architecture which can be used before going to a real test-bed. It provides instruments to test the real communication software, including the network stack, inside a controlled environment. VirtualMesh is implemented by capturing real traffic through a virtual interface at the mesh nodes. The traffic is then redirected to the network simulator OMNeT++. In our experiments, VirtualMesh has proven to be scalable and introduces moderate delays. Therefore, it is suitable for predeployment testing of communication software for WMNs.
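The capture-and-redirect idea can be illustrated with a toy sketch: frames sent through a virtual interface are handed to a simulated medium instead of radio hardware. Everything here is invented for illustration; the real system uses kernel-level virtual interfaces and the OMNeT++ simulator, not these classes.

```python
# Toy sketch of the VirtualMesh idea: a virtual interface captures frames
# from the node's software stack and forwards them to a simulator process
# that models the wireless medium. All names are hypothetical.

class VirtualInterface:
    """Stands in for a mesh node's wireless interface."""
    def __init__(self, node_id, medium):
        self.node_id = node_id
        self.medium = medium
        self.inbox = []

    def send(self, frame):
        # Instead of radio hardware, hand the frame to the simulated medium.
        self.medium.transmit(self.node_id, frame)

    def deliver(self, frame):
        self.inbox.append(frame)


class SimulatedMedium:
    """Crude stand-in for the network simulator: broadcast with fixed delay."""
    def __init__(self, delay_ms=5):
        self.delay_ms = delay_ms
        self.nodes = {}

    def attach(self, iface):
        self.nodes[iface.node_id] = iface

    def transmit(self, sender_id, frame):
        for node_id, iface in self.nodes.items():
            if node_id != sender_id:
                iface.deliver({"payload": frame, "delay_ms": self.delay_ms})


medium = SimulatedMedium()
a, b = VirtualInterface("A", medium), VirtualInterface("B", medium)
medium.attach(a)
medium.attach(b)
a.send("hello")
```

The communication software above the interface is unchanged; only the medium beneath it is simulated, which is what makes controlled, repeatable topology and mobility experiments possible.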
Abstract:
This paper presents the software architecture of a framework that simplifies the development of applications in the area of Virtual and Augmented Reality. It is based on VRML/X3D to enable rendering of audio-visual information. We extended our VRML rendering system with a device management system based on the concept of a data-flow graph. The aim of the system is to create Mixed Reality (MR) applications simply by plugging together small prefabricated software components, instead of compiling monolithic C++ applications. The flexibility and advantages of the presented framework are explained on the basis of an exemplary implementation of a classic Augmented Reality application and its extension to a collaborative remote expert scenario.
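A data-flow graph of this kind can be sketched in a few lines: small components expose output fields that are routed to input fields of other components, and events propagate along the routes. The node types and routing API below are hypothetical, invented purely to illustrate the concept.

```python
# Minimal sketch of a data-flow graph for device management: components
# emit field values that are routed to the inputs of downstream components.
# All names are illustrative, not the framework's actual API.

class Node:
    def __init__(self, name):
        self.name = name
        self.routes = []   # (field, target_node, target_field)
        self.fields = {}

    def route(self, field, target, target_field):
        self.routes.append((field, target, target_field))

    def emit(self, field, value):
        self.fields[field] = value
        for f, target, tf in self.routes:
            if f == field:
                target.receive(tf, value)

    def receive(self, field, value):
        self.fields[field] = value


class Tracker(Node):
    """Produces position events, e.g. from 6-DOF tracking hardware."""
    def update(self, position):
        self.emit("position", position)


class Transform(Node):
    """Applies a fixed offset and forwards the result downstream."""
    def __init__(self, name, offset):
        super().__init__(name)
        self.offset = offset

    def receive(self, field, value):
        shifted = tuple(v + o for v, o in zip(value, self.offset))
        self.emit("position", shifted)


tracker = Tracker("tracker")
transform = Transform("offset", (0.0, 1.5, 0.0))
viewer = Node("viewer")
tracker.route("position", transform, "position")
transform.route("position", viewer, "position")
tracker.update((1.0, 0.0, 0.0))
```

Plugging an application together then amounts to instantiating prefabricated nodes and wiring routes, rather than compiling application-specific C++ code.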
Abstract:
Simbrain is a visually-oriented framework for building and analyzing neural networks. It emphasizes the analysis of networks which control agents embedded in virtual environments, and the visualization of the structures which occur in the high-dimensional state spaces of these networks. The program was originally intended to facilitate analysis of representational processes in embodied agents; however, it is also well suited to teaching neural network concepts to a broader audience than is traditional for neural networks courses. Simbrain was used to teach a course at a new university, UC Merced, in its inaugural year. Experiences from the course and sample lessons are provided.
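The kind of network-controlled agent Simbrain is built around can be reduced to a toy example: a one-neuron network maps a sensor reading to a movement decision, and the agent-environment loop is iterated. The weights, threshold rule and environment here are invented for the example, not taken from Simbrain.

```python
# Toy network-controlled agent: a single threshold neuron reads a
# "food ahead" sensor and decides whether to step right. Invented example.

def step(sensors, weights, bias):
    """One discrete neuron update: weighted sum, threshold activation."""
    activation = sum(s * w for s, w in zip(sensors, weights)) + bias
    return 1 if activation > 0 else 0


def run_agent(position, food_at, steps=5):
    """Agent moves right while the 'food ahead' neuron fires, then stops."""
    for _ in range(steps):
        sensors = [1.0 if food_at > position else 0.0]
        move = step(sensors, weights=[1.0], bias=-0.5)
        position += move
    return position


final = run_agent(position=0, food_at=3)
```

Even this trivial loop exhibits the coupling Simbrain is designed to visualize: the network's state depends on the environment, and the environment's state depends on the network's output.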
Abstract:
Component commonality - the use of the same version of a component across multiple products - is increasingly considered a promising way to offer high external variety while retaining low internal variety in operations. However, increasing commonality has both positive and negative cost effects, so that optimization approaches are required to identify an optimal commonality level. As components influence, to a greater or lesser extent, nearly every process step along the supply chain, it is not surprising that a multitude of diverging commonality problems is investigated in the literature, each addressed by a specific algorithm designed for the respective problem. This paper aims at a general framework which is flexible and efficient enough to be applied to a wide range of commonality problems. Such a procedure, based on a two-stage graph approach, is presented and tested. Finally, the flexibility of the procedure is shown by customizing the framework to account for different types of commonality problems.
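The trade-off at the core of such problems can be illustrated with a deliberately simplified model: each product needs one component variant, and every distinct variant used incurs a fixed setup cost, so sharing a variant saves money while possibly raising per-unit cost. The brute-force search and all cost figures below are invented for illustration; the paper's two-stage graph procedure is far more general and efficient.

```python
# Much-simplified commonality model: assign one component variant to each
# product; cost = per-unit costs + one setup cost per distinct variant.
# Brute force over all assignments (tiny instances only). Invented example.
from itertools import product as cartesian


def total_cost(assignment, unit_cost, setup_cost):
    """Sum per-unit costs plus one setup cost per distinct variant used."""
    units = sum(unit_cost[v] for v in assignment)
    return units + setup_cost * len(set(assignment))


def best_commonality(products, variants, unit_cost, setup_cost):
    """Enumerate all variant-to-product assignments and keep the cheapest."""
    best = None
    for assignment in cartesian(variants, repeat=len(products)):
        cost = total_cost(assignment, unit_cost, setup_cost)
        if best is None or cost < best[0]:
            best = (cost, assignment)
    return best


# Two products, two variants: sharing one variant avoids a second setup.
cost, plan = best_commonality(
    products=["P1", "P2"],
    variants=["basic", "premium"],
    unit_cost={"basic": 3, "premium": 4},
    setup_cost=5,
)
```

Here any common plan (both products on one variant) beats a mixed plan because the saved setup cost outweighs the unit-cost differences; with other figures the balance tips the other way, which is exactly why optimization is needed.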
Abstract:
Mixed Reality (MR) aims to link virtual entities with the real world and has many applications, for example in the military and medical domains [JBL+00, NFB07]. In many MR systems, and more precisely in augmented scenes, the application needs to render the virtual part accurately at the right time. To achieve this, such systems acquire data related to the real world from a set of sensors before rendering virtual entities. A suitable system architecture should minimize the delays to keep the overall system delay (also called end-to-end latency) within the requirements for real-time performance. In this context, we propose a compositional modeling framework for MR software architectures in order to specify, simulate and formally validate the time constraints of such systems. Our approach is first based on a functional decomposition of such systems into generic components. The obtained elements, as well as their typical interactions, give rise to generic representations in terms of timed automata. A whole system is then obtained as a composition of such defined components. To write specifications, a textual language named MIRELA (MIxed REality LAnguage) is proposed, along with the corresponding compilation tools. The generated output contains timed automata in UPPAAL format for simulation and verification of time constraints. These automata may also be used to generate source code skeletons for an implementation on an MR platform. The approach is illustrated first on a small example. A realistic case study is also developed. It is modeled by several timed automata synchronizing through channels and including a large number of time constraints. Both systems have been simulated in UPPAAL and checked against the required behavioral properties.
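The latency budget such a framework verifies can be sketched with back-of-the-envelope arithmetic: for a sequential sensing-to-rendering pipeline, the worst-case end-to-end latency is bounded by the sum of the stage bounds, and that sum must stay within the real-time requirement. The stage names, delays and the 50 ms deadline below are invented; MIRELA performs this kind of check formally, on timed automata in UPPAAL, not with arithmetic like this.

```python
# Back-of-the-envelope end-to-end latency check for a sequential
# sensing-to-rendering pipeline. All figures are invented for illustration.

def end_to_end_latency(stage_delays_ms):
    """Worst case of a sequential pipeline: the sum of per-stage bounds."""
    return sum(stage_delays_ms.values())


pipeline = {"sensor_acquisition": 10, "processing": 15, "rendering": 16}
latency = end_to_end_latency(pipeline)
meets_deadline = latency <= 50   # e.g. a 50 ms real-time requirement
```

What timed-automata verification adds over this sum is coverage of concurrency: components that run in parallel, synchronize through channels, or block on each other can violate a deadline even when the per-stage sums look safe.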
Abstract:
The Audiovisual Media Services Directive (AVMSD) which regulates broadcasting and on-demand audiovisual media services is at the nexus of current discussions about the convergence of media. The Green Paper of the Commission of April 2013 reflects the struggle of the European Union to come to terms with the phenomenon of convergence and highlights current legal uncertainties. The (theoretical) quest for an appropriate and future-oriented regulatory framework at the European level may be contrasted to the practice of national regulatory authorities. When faced with new media services and new business models, national regulators will inevitably have to make decisions and choices that take into account providers’ interests to offer their services as well as viewers’ interests to receive information. This balancing act performed by national regulators may tip towards the former or latter depending on the national legal framework; social, political and economic considerations; as well as cultural perceptions. This paper thus examines how certain rules contained in the AVMSD are applied by national regulators. It focuses first on the definition of an on-demand audiovisual media service and its scope. Second, it analyses the measures adopted with a view to protecting minors in on-demand services and, third, discusses national approaches towards the promotion of European works in on-demand services. It aims at underlining the significance of national regulatory authorities and the guidelines these adopt to clarify the rules of a key EU Directive of the “media law acquis”.
Abstract:
Around 30 universities have joined forces in a development consortium within the ZKI Web working group, pursuing the idea of a generic student app framework adaptable to the diverse requirements of individual universities. The goal is to evaluate a comprehensive compilation of all electronic study services at the participating institutions, to create overarching data and metadata models for describing these services, and to develop interfaces to the common campus management systems as well as to e-learning infrastructures (LMS, print services, electronic catalogs, etc.). In a final step, study management apps for students will be built on top of this middleware; they bundle the various data and communication streams of the standardized services and channels and present them in a form that is easy for students to understand and to navigate. Designing the effort as a decentralized development project, distributed across many universities under a central project management, ensures that redundant development is avoided, nationally standardized service offerings are provided, and knowledge transfer between a large number of universities on the use of mobile devices (smartphones, tablets and the corresponding apps) is stimulated. This broad community of interest can also strengthen vendor support for implementing clear interface specifications for campus management systems. A further central element of the plan is an offering that lets app users build a personal e-portfolio in compliance with data protection law. Details can be found in the Project Goals chapter below.
Abstract:
The paper deals with the development of a general as well as integrative and holistic framework to systematize and assess vulnerability, risk and adaptation. The framework is a thinking tool meant as a heuristic that outlines key factors and different dimensions that need to be addressed when assessing vulnerability in the context of natural hazards and climate change. The approach underlines that the key factors of such a common framework are related to the exposure of a society or system to a hazard or stressor, the susceptibility of the system or community exposed, and its resilience and adaptive capacity. Additionally, it underlines the necessity to consider key factors and multiple thematic dimensions when assessing vulnerability in the context of natural and socio-natural hazards. In this regard, it shows key linkages between the different concepts used within disaster risk management (DRM) and climate change adaptation (CCA) research and illustrates the strong relationships between them. The framework is also a tool for communicating complexity and stresses the need for societal change in order to reduce risk and to promote adaptation. With regard to this, the policy relevance of the framework and first results of its application are outlined. Overall, the framework presented enhances the discussion on how to frame and link vulnerability, disaster risk, risk management and adaptation concepts.
Abstract:
The regulation of nanomaterials is being discussed at various levels. This article offers a historical description of governmental activities concerning the safety of nanomaterials at the United Nations (UN) level since 2006, with a focus on the UN Strategic Approach to International Chemicals Management (SAICM). The outcomes of the SAICM process were a nanospecific resolution and the addition of new activities on nanotechnologies and manufactured nanomaterials to the SAICM’s Global Plan of Action. The article discusses the implications of these decisions for multilateral environmental agreements. In addition, it studies the consequences of the regulation of nanotechnology activities for trade governance, in particular the relationship of the SAICM to the legally binding World Trade Organization (WTO) agreements (notably the General Agreement on Tariffs and Trade and the Agreement on Technical Barriers to Trade). The article concludes that the SAICM decisions on manufactured nanomaterials are compatible with WTO law.
Abstract:
The fuzzy online reputation analysis framework, or “foRa” (plural of forum, the Latin word for marketplace) framework, is a method for searching the Social Web to find meaningful information about reputation. Based on an automatic, fuzzy-built ontology, this framework queries the social marketplaces of the Web for reputation, combines the retrieved results, and generates navigable Topic Maps. Using these interactive maps, communications operatives can zero in on precisely what they are looking for and discover unforeseen relationships between topics and tags. Thus, using this framework, it is possible to scan the Social Web for a name, product, brand, or combination thereof and determine query-related topic classes with related terms and thus identify hidden sources. This chapter also briefly describes the youReputation prototype (www.youreputation.org), a free web-based application for reputation analysis. In the course of this, a small example will explain the benefits of the prototype.
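The graded matching behind such a query can be sketched as follows: instead of an exact ontology lookup, a search term receives a fuzzy membership score for each topic, and topics above a threshold are returned. The similarity function and mini-ontology below are invented for illustration and are much cruder than foRa's automatically built fuzzy ontology.

```python
# Sketch of fuzzy ontology matching: score each topic's membership for a
# query term; keep topics above a threshold. Invented scoring, not foRa's.

def similarity(a, b):
    """Crude fuzzy score: shared-character overlap (Jaccard on letter sets)."""
    sa, sb = set(a.lower()), set(b.lower())
    return len(sa & sb) / len(sa | sb)


def fuzzy_query(term, ontology, threshold=0.5):
    """Return ontology topics whose membership score passes the threshold."""
    scored = {topic: similarity(term, topic) for topic in ontology}
    return {t: round(s, 2) for t, s in scored.items() if s >= threshold}


ontology = ["reputation", "reputable", "marketplace"]
matches = fuzzy_query("reputation", ontology)
```

The graded scores are what make the retrieved topic classes navigable as a map: related-but-not-identical terms stay attached to a query instead of being discarded, which is how unforeseen relationships between topics and tags surface.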
Abstract:
The future Internet architecture aims to reformulate the way content and services are requested, making them location-independent. Information-Centric Networking is a new network paradigm which tries to achieve this goal by making content objects identified and requested by name instead of by address. In this paper, we extend the Information-Centric Networking architecture so that services, too, can be requested and invoked by name. We present the NextServe framework, a service framework with a human-readable, self-explanatory naming scheme. NextServe is inspired by the object-oriented programming paradigm and is applicable to real-world scenarios.
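Name-based service invocation can be sketched in a few lines: a human-readable name carries the provider, service, method and arguments, and a resolver dispatches it. The `provider/service/method(args)` syntax and the registry below are hypothetical, invented to illustrate the idea rather than reproduce NextServe's actual naming scheme.

```python
# Sketch of invoking a service by a human-readable, self-explanatory name.
# The name syntax 'provider/service/method(arg1,arg2)' and the registry
# are invented for illustration, not NextServe's actual scheme.

def parse_name(name):
    """Split 'provider/service/method(arg1,arg2)' into its parts."""
    path, _, arg_str = name.partition("(")
    provider, service, method = path.split("/")
    args = [a for a in arg_str.rstrip(")").split(",") if a]
    return provider, service, method, args


# Toy registry mapping resolved names to callables.
REGISTRY = {
    ("weather", "forecast", "temperature"): lambda city: f"21C in {city}",
}


def invoke(name):
    """Resolve the named service and invoke it with the embedded arguments."""
    provider, service, method, args = parse_name(name)
    return REGISTRY[(provider, service, method)](*args)


result = invoke("weather/forecast/temperature(Bern)")
```

The contrast with address-based invocation is the point: the requester names what it wants, and the network (here, the toy registry) decides where and how the service runs.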
Abstract:
The European Commission’s proposals for the Legislative Framework of the Common Agricultural Policy (CAP) in the period 2014-2020 include, inter alia, the introduction of a “strong greening component”. For the first time, all EU farmers in receipt of support are to “go beyond the requirements of cross compliance and deliver environmental and climate benefits as part of their everyday activities”, with crop diversification as one such contribution. In a legal opinion prepared at the request of APRODEV, the Association of World Council of Churches related Development Organisations in Europe (www.aprodev.eu), Christian Häberli examines the WTO implications of this proposal, as compared with an alternative proposal to instead link direct payments to crop rotation. The conclusions are twofold: 1. Crop rotation is at least as likely to be found Green Box-compatible as crop diversification. Moreover, it will be more difficult to argue that crop diversification is “not more than minimally production-distorting”, because it entails less cost and work for most farmers. 2. Even if either of the two cropping schemes were to be found “amber”, the EU would not have to relinquish this conditionality. This is because the direct payments involved would in all likelihood not, together with the other price support instruments, exceed the amount available under the presently scheduled maximum.
Abstract:
This paper applies a policy analysis approach to the question of how to effectively regulate micropollution in a sustainable manner. Micropollution is a complex policy problem characterized by a huge number and diversity of chemical substances, as well as various entry paths into the aquatic environment. It challenges traditional water quality management by calling for new technologies in wastewater treatment and behavioral changes in industry, agriculture and civil society. In light of such challenges, the question arises as to how to regulate such a complex phenomenon so as to ensure water quality is maintained in the future. What can we learn from past experiences in water quality regulation? To answer these questions, policy analysis strongly focuses on the design and choice of policy instruments and the mix of such measures. In this paper, we review instruments commonly used in past water quality regulation. We evaluate their ability to respond to the characteristics of a more recent water quality problem, i.e., micropollution, in a sustainable way. In this way, we develop a new framework that integrates both the problem dimension (i.e., causes and effects of a problem) and the sustainability dimension (e.g., long-term, cross-sectoral and multi-level) to assess which policy instruments are best suited to regulate micropollution. We conclude that sustainability criteria help to identify an appropriate instrument mix of end-of-pipe and source-directed measures to reduce aquatic micropollution.