992 results for Multi-deployment
Abstract:
The diversity in the way cloud providers offer their services, define their SLAs, present their QoS, or support different technologies makes the portability and interoperability of cloud applications very difficult and favours the well-known vendor lock-in problem. We propose a model that describes cloud applications and their required resources in an agnostic, provider- and resource-independent way, in which individual application modules, and entire applications, may be re-deployed using different services without modification. To support this model, and building on the variety of cross-cloud application management tools proposed by different authors, we go one step further in the unification of cloud services with a management approach in which IaaS and PaaS services are integrated into a unified interface. We provide support for deploying applications whose components are distributed over different cloud providers, using IaaS and PaaS services interchangeably.
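As a hedged illustration of such an agnostic description (hypothetical types and offering names, not the paper's actual model), each module below declares only its resource requirements, and a planner matches modules to IaaS or PaaS offerings interchangeably:

```python
from dataclasses import dataclass, field

# Hypothetical provider-agnostic application description: each module
# states only the resources it needs, never a provider or service type.

@dataclass
class ResourceRequirements:
    cores: int = 1
    memory_gb: float = 1.0
    runtime: str | None = None      # e.g. "java11"; None means any host

@dataclass
class Module:
    name: str
    artifact: str                   # deployable artifact (image, WAR, ...)
    requires: ResourceRequirements = field(default_factory=ResourceRequirements)

@dataclass
class Application:
    modules: list[Module]

def plan(app: Application, offerings: dict[str, ResourceRequirements]) -> dict[str, str]:
    """Map each module to the first offering (an IaaS VM size or a PaaS
    container plan) that satisfies its requirements."""
    mapping: dict[str, str] = {}
    for m in app.modules:
        for offer_name, offer in offerings.items():
            if (offer.cores >= m.requires.cores
                    and offer.memory_gb >= m.requires.memory_gb
                    and (m.requires.runtime is None or offer.runtime == m.requires.runtime)):
                mapping[m.name] = offer_name
                break
    return mapping

app = Application([Module("web", "web:1.0", ResourceRequirements(2, 4.0)),
                   Module("db", "db:1.0", ResourceRequirements(4, 8.0))])
offers = {"small-vm": ResourceRequirements(2, 4.0),          # IaaS
          "large-container": ResourceRequirements(8, 16.0)}  # PaaS
print(plan(app, offers))  # {'web': 'small-vm', 'db': 'large-container'}
```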
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow-water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating-point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on a core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and non-standard task-to-core mappings can dramatically alter performance. Finding this out, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular-grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
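The methodology can be sketched as follows: per-phase timings measured at a few problem sizes are interpolated to predict a deployment scenario. The sample timings below are hypothetical; only the structure, one model per work type, mirrors the approach described above:

```python
import bisect

def interpolate(samples: list[tuple[int, float]], size: int) -> float:
    """Linearly interpolate a timing for `size` from (size, seconds) samples."""
    sizes = [s for s, _ in samples]
    i = bisect.bisect_left(sizes, size)
    if i == 0:
        return samples[0][1]
    if i == len(samples):
        return samples[-1][1]
    (x0, y0), (x1, y1) = samples[i - 1], samples[i]
    return y0 + (y1 - y0) * (size - x0) / (x1 - x0)

# Hypothetical timings benchmarked under one execution scenario.
compute_bench = [(128, 0.020), (256, 0.075), (512, 0.310)]   # loop-based updates
halo_bench    = [(128, 0.004), (256, 0.007), (512, 0.013)]   # halo exchanges

def predict_step_time(local_size: int) -> float:
    """Predicted time per step = compute phase + halo-exchange phase."""
    return interpolate(compute_bench, local_size) + interpolate(halo_bench, local_size)

print(predict_step_time(384))  # prediction for an unbenchmarked local size
```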
Abstract:
The last decade has witnessed a major shift towards the deployment of embedded applications on multi-core platforms. However, real-time applications have not been able to fully benefit from this transition, as the computational gains offered by multi-cores are often offset by performance degradation due to shared resources, such as main memory. To use multi-core platforms efficiently for real-time systems, it is hence essential to tightly bound the interference when accessing shared resources. Although there has been much recent work in this area, a key remaining problem is to address the diversity of memory arbiters in the analysis so as to make it applicable to a wide range of systems. This work handles diverse arbiters by proposing a general framework to compute the maximum interference caused by the shared memory bus and its impact on the execution time of the tasks running on the cores, considering different bus arbiters. Our novel approach clearly demarcates the arbiter-dependent and arbiter-independent stages in the analysis of these upper bounds. The arbiter-dependent phase takes the arbiter and the task memory-traffic pattern as inputs and produces a model of the availability of the bus to a given task. Then, based on the availability of the bus, the arbiter-independent phase determines the worst-case request-release scenario that maximizes the interference experienced by the tasks due to contention for the bus. We show that the framework addresses the diversity problem by applying it to a memory bus shared by a fixed-priority arbiter, a time-division multiplexing (TDM) arbiter, and an unspecified work-conserving arbiter, using applications from the MediaBench test suite. We also experimentally evaluate the quality of the analysis by comparing it with a state-of-the-art TDM analysis approach, consistently showing a considerable reduction in maximum interference.
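The two-stage structure can be illustrated with a deliberately simplified sketch for a TDM arbiter; the slot layout, service time and worst-case release assumption below are illustrative and much coarser than the paper's analysis:

```python
# Hedged sketch of the two-stage structure, with TDM as the
# arbiter-dependent stage. All parameters are illustrative.

def tdm_availability(core_slots: int, total_slots: int, slot_len: float):
    """Arbiter-dependent stage: under TDM, a core owning `core_slots` of
    `total_slots` slots is guaranteed the bus for core_slots * slot_len
    out of every total_slots * slot_len time units."""
    period = total_slots * slot_len
    return core_slots * slot_len, period

def max_interference(requests: int, service: float,
                     core_slots: int, total_slots: int, slot_len: float) -> float:
    """Arbiter-independent stage (simplified): assume every request is
    released just after the core's slot window closes, so each waits for
    the rest of the TDM period before being served."""
    own, period = tdm_availability(core_slots, total_slots, slot_len)
    worst_wait_per_request = period - own
    return requests * (worst_wait_per_request + service)

# e.g. 40 memory requests, 50 ns service, 1 of 4 slots of 100 ns each
print(max_interference(40, 50e-9, 1, 4, 100e-9))
```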
Abstract:
Smart Cities are designed to be living systems that make urban dwellers' lives more comfortable and interactive by keeping them aware of what surrounds them, while leaving a greener footprint. The Future Cities Project [1] aims to create infrastructures for research in smart cities, including a vehicular network, the BusNet, and an environmental sensor platform, the Urban Sense. Vehicles within the BusNet are equipped with On Board Units (OBUs) that offer free Wi-Fi to passengers and devices near the street. The Urban Sense platform is composed of a set of Data Collection Units (DCUs) that include sensors measuring environmental parameters such as air pollution, meteorology and noise. The Urban Sense platform is expanding and receptive to new sensors. A partnership with companies such as TNL was formed, and the need to monitor garbage street containers emerged as a means of air pollution prevention. If refuse collection companies know, prior to collection, which route collects the maximum amount of garbage over the shortest path, they can reduce costs and lower pollution levels, leaving behind a greener footprint. This dissertation work arises from the need to monitor the garbage street containers and integrate these sensors into an Urban Sense DCU. Due to the remote locations of the garbage street containers, an extension to the vehicular network had to be created. This dissertation work therefore also focuses on the Multi-hop network designed to extend the vehicular network coverage area to the remote garbage street containers. In locations where garbage street containers already have access to the vehicular network, via Roadside Units (RSUs) or Access Points (APs), the Multi-hop network serves as a redundant path to send the data collected from DCUs to the Urban Sense cloud database. To plan this highly dynamic network, the Wi-Fi Planner Tool was developed. This tool allowed taking measurements in the field that led to an optimized placement of the Multi-hop network nodes with the use of radio propagation models. It also renders a temperature-map-style overlay for the Google Earth [2] application. For the garbage-street-container DCU, the partner company provided access to a HUB (a device that communicates with the sensors inside the garbage containers). The Future Cities Project uses the Raspberry Pi as the platform for the DCUs. To collect the data from the HUB, an RS485-to-RS232 converter was used at the physical level and the Modbus protocol at the application level. To determine the location and status of the vehicles within the vehicular network, a TCP Server was developed. This application runs on the OBUs and provides the vehicle's Global Positioning System (GPS) location as well as information on whether the vehicle is stopped, moving or idling, and even its slope. To deploy the Multi-hop network in the field, scripts such as pingLED and "shark" were developed; these helped during node deployment as well as in performing all the tests on the network. Two setups were implemented in the field: an urban setup for a Multi-hop network coverage survey, and a sub-urban setup to test the Multi-hop network routing protocols, Optimized Link State Routing Protocol (OLSR) and Babel.
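As a concrete illustration of the HUB data path, the hedged sketch below (assuming the pymodbus library; the register map, serial port and slave address are hypothetical, since the real register map belongs to the partner company's HUB) reads two holding registers over the RS485-to-RS232 link:

```python
# Hedged sketch: poll the garbage-container HUB over Modbus RTU from the
# Raspberry Pi DCU. Register addresses, port and slave id are hypothetical;
# keyword names (e.g. slave=) vary between pymodbus versions.
from pymodbus.client import ModbusSerialClient

client = ModbusSerialClient(
    port="/dev/ttyUSB0",            # RS485-to-RS232 converter on the Pi
    baudrate=9600, parity="N", stopbits=1, bytesize=8, timeout=1,
)

if client.connect():
    result = client.read_holding_registers(address=0, count=2, slave=1)
    if not result.isError():
        fill_level, battery = result.registers  # hypothetical register meaning
        print(f"fill level: {fill_level}%, battery: {battery}%")
    client.close()
```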
Abstract:
EMC2 finds solutions for dynamic adaptability in open systems. It provides handling of mixed-criticality multi-core applications under real-time conditions, with scalability and utmost flexibility, and full-scale deployment and management of integrated tool chains throughout the entire lifecycle.
Abstract:
This paper describes the development and first results of the “Community Integrated Assessment System” (CIAS), a unique multi-institutional, modular and flexible integrated assessment system for modelling climate change. Key to this development is the supporting software infrastructure, SoftIAM. Through it, CIAS is distributed among the community of institutions, each of which has contributed modules to the CIAS system. At the heart of SoftIAM is the Bespoke Framework Generator (BFG), which enables flexibility in the assembly and composition of individual modules from a pool to form coupled models within CIAS, and flexibility in their deployment onto the available software and hardware resources. Such flexibility greatly enhances modellers’ ability to re-configure the CIAS coupled models to answer different questions, thus tracking evolving policy needs. It also allows rigorous testing of the robustness of IA modelling results to the use of different component modules representing the same processes (for example, the economy); such processes are often modelled in very different ways, using different paradigms, at the participating institutions. An illustrative application to the study of the relationship between the economy and the earth’s climate system is provided.
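The kind of composition flexibility BFG provides can be sketched as follows (module names and the toy economy/climate equations are hypothetical, not CIAS components): swapping one identifier in the composition swaps the representation of a process without touching the rest of the coupled model:

```python
# Illustrative sketch of declarative module composition; not BFG's API.

class Module:
    def step(self, state: dict) -> dict:
        raise NotImplementedError

class SimpleEconomy(Module):
    def step(self, state):
        state["emissions"] = 1.02 * state.get("emissions", 10.0)  # toy growth
        return state

class SimpleClimate(Module):
    def step(self, state):
        state["temperature"] = state.get("temperature", 14.0) + 0.001 * state["emissions"]
        return state

POOL = {"economy.simple": SimpleEconomy, "climate.simple": SimpleClimate}

def assemble(composition: list[str]) -> list[Module]:
    """Build a coupled model from module identifiers drawn from the pool."""
    return [POOL[name]() for name in composition]

model = assemble(["economy.simple", "climate.simple"])
state: dict = {}
for year in range(100):
    for module in model:
        state = module.step(state)
print(state)
```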
Abstract:
Fingerprinting is a well-known approach for identifying multimedia data without having the original data present, using instead what amounts to its essence or "DNA". Current approaches show insufficient deployment of three types of knowledge that could be brought to bear in providing a fingerprinting framework that remains effective and efficient and can accommodate both whole and elemental protection at appropriate levels of abstraction to suit various Foci of Interest (FoI) in an image or cross-media artefact. Our proposed framework thus aims to deliver selective composite fingerprinting that remains responsive to the requirements for protecting the whole or parts of an image which may be of particular interest and especially vulnerable to attempts at rights violation. This is powerfully aided by leveraging both multi-modal information and a rich spectrum of collateral context knowledge, including image-level collaterals as well as the inevitably needed market-intelligence knowledge, such as profiling of customers' social-network interests, which we deploy as a crucial component of our Fingerprinting Collateral Knowledge. This is used in selecting the special FoIs within an image or other media content that have to be selectively and collaterally protected.
Abstract:
Fingerprinting is a well-known approach for identifying multimedia data without having the original data present, using instead what amounts to its essence or 'DNA'. Current approaches show insufficient deployment of the various types of knowledge that could be brought to bear in providing a fingerprinting framework that remains effective and efficient and can accommodate both whole and elemental protection at appropriate levels of abstraction to suit various Zones of Interest (ZoI) in an image or cross-media artefact. The proposed framework aims to deliver selective composite fingerprinting that is powerfully aided by leveraging both multi-modal information and a rich spectrum of collateral context knowledge, including image-level collaterals and the inevitably needed market-intelligence knowledge, such as profiling of customers' social-network interests, which we deploy as a crucial component of our fingerprinting collateral knowledge.
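One way to make the selective composite idea concrete is sketched below, assuming a simple average hash per Zone of Interest rather than the framework's actual fingerprinting method; the ZoI boxes and the file name are hypothetical:

```python
# Minimal sketch: a whole-image hash plus one hash per Zone of Interest,
# so vulnerable regions can be matched independently of the whole image.
from PIL import Image

def average_hash(img: Image.Image, hash_size: int = 8) -> int:
    """Classic average hash: downscale, grayscale, threshold at the mean."""
    small = img.convert("L").resize((hash_size, hash_size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def composite_fingerprint(path: str, zones: list[tuple[int, int, int, int]]) -> dict:
    """Hash the whole image and each ZoI box (left, top, right, bottom)."""
    img = Image.open(path)
    return {"whole": average_hash(img),
            "zones": [average_hash(img.crop(box)) for box in zones]}

# Hypothetical: protect the whole image plus one especially vulnerable region.
print(composite_fingerprint("artwork.png", [(0, 0, 128, 128)]))
```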
Abstract:
Information and communication technologies (ICT) are present in the most diverse areas and everyday activities but, notwithstanding the actions of governments and private institutions, the computerization of healthcare is still an open challenge in Brazil. The current situation raises questions about the difficulties associated with the computerization of healthcare practices, and about the effects these difficulties have had on Brazilian society. To discuss these questions, this thesis presents four papers on the health informatization process in Brazil. The first paper reviews the literature on ICT in healthcare and, based on two theoretical perspectives, namely European studies on Health Information Systems (HIS) in developing countries and studies on Health Information and Informatics within the Sanitary Reform Movement, formulates an integrated model that combines dimensions of analysis and contextual factors for understanding HIS in Brazil. The second paper presents the theoretical and methodological concepts of Actor-Network Theory (ANT), an approach for studying the controversies associated with scientific discoveries and technological innovations through the networks of actors involved in such actions. This approach has underpinned information systems studies since the 1990s and inspired the analyses in the two empirical papers of this thesis. The last two papers are based on the analysis of the implementation of an HIS in a Brazilian public hospital between 2010 and 2012. For the case analysis, the actors involved in the controversies that arose during the HIS implementation were followed. The third paper focused on the activities of the systems analysts and users involved in the HIS implementation. The changes observed during the implementation reveal that the system's success was not achieved through the strict, technical execution of the initially planned activities. On the contrary, success was built collectively, through negotiation among the actors and through interessement devices introduced during the project. The fourth paper, based on the concept of Information Infrastructures, discussed how the CATMAT system was incorporated into E-Hosp. The analysis revealed how CATMAT's installed base was a relevant condition for its selection during the E-Hosp implementation. In addition, the heterogeneous negotiations and operations that took place during the incorporation of CATMAT into the E-Hosp system are described. This thesis thus argues that implementing an HIS is a collectively constructed endeavour involving systems analysts, health professionals, politicians and technical artefacts. Moreover, it shows how HIS inscribe definitions and agreements, influencing the preferences of actors in the healthcare field.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Optical networks based on passive star couplers and employing wavelength-division multiplexing (WDM) have been proposed for deployment in local and metropolitan areas. Amplifiers are required in such networks to compensate for the power losses due to splitting and attenuation. However, an optical amplifier has constraints on the maximum gain and the maximum output power it can supply; thus optical amplifier placement becomes a challenging problem. The general problem of minimizing the total amplifier count, subject to the device constraints, is a mixed-integer non-linear problem. Previous studies have attacked the amplifier-placement problem by adding the “artificial” constraint that all wavelengths, which are present at a particular point in a fiber, be at the same power level. In this paper, we present a method to solve the minimum-amplifier-placement problem while avoiding the equally-powered-wavelength constraint. We demonstrate that, by allowing signals to operate at different power levels, our method can reduce the number of amplifiers required in several small to medium-sized networks.
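A toy sketch of the placement loop follows; it is not the paper's mixed-integer formulation, and the dB figures are invented. It shows the mechanism the paper exploits: wavelengths ride at unequal power levels between amplifiers, and an amplifier is inserted only when the weakest channel would fall below receiver sensitivity:

```python
# Toy amplifier-placement walk along a chain of fiber/splitter spans.
# The max-output-power device constraint is omitted for brevity.

SENSITIVITY_DBM = -30.0   # minimum detectable power per wavelength
MAX_GAIN_DB = 20.0        # amplifier gain limit

def place_amplifiers(launch_dbm: list[float], losses_db: list[float]) -> int:
    """Count amplifiers needed, letting wavelengths ride at different
    power levels rather than forcing them to be equally powered."""
    power = list(launch_dbm)
    amps = 0
    for loss in losses_db:
        if min(power) - loss < SENSITIVITY_DBM:
            # amplify all wavelengths by one common gain before this span;
            # channels need not end up at equal power levels
            gain = min(MAX_GAIN_DB, SENSITIVITY_DBM + loss - min(power))
            power = [p + gain for p in power]
            amps += 1
        power = [p - loss for p in power]
    return amps

# three wavelengths launched at different levels, four spans of loss
print(place_amplifiers([0.0, -3.0, -5.0], [12.0, 9.0, 15.0, 10.0]))  # -> 2
```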
Abstract:
In this paper, we report on an optical tolerance analysis of the submillimeter atmospheric multi-beam limb sounder, STEAMR. Physical optics and ray-tracing methods were used to quantify and separate errors in beam pointing and distortion due to reflector misalignment and primary reflector surface deformations. Simulations were performed concurrently with the manufacturing of a multi-beam demonstrator of the relay optical system which shapes and images the beams to their corresponding receiver feed horns. Results from Monte Carlo simulations show that the inserts used for reflector mounting should be positioned with an overall accuracy better than 100 μm (~ 1/10 wavelength). Analyses of primary reflector surface deformations show that a deviation of magnitude 100 μm can be tolerable before deployment, whereas the corresponding variations should be less than 30 μm during operation. The most sensitive optical elements in terms of misalignments are found near the focal plane. This localized sensitivity is attributed to the off-axis nature of the beams at this location. Post-assembly mechanical measurements of the reflectors in the demonstrator show that alignment better than 50 μm could be obtained.
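A hedged sketch of the Monte Carlo tolerancing workflow follows; the six-reflector layout, the uniform tolerance draws and the linear pointing sensitivity are illustrative assumptions, whereas the paper's figures come from full physical-optics and ray-tracing models:

```python
import numpy as np

# Illustrative Monte Carlo tolerance run: draw random mount-insert
# misalignments and accumulate a hypothetical, linearized pointing error.

rng = np.random.default_rng(0)
N = 100_000
POS_TOL_UM = 100.0          # insert positioning accuracy (~1/10 wavelength)
SENS_ARCSEC_PER_UM = 0.5    # assumed pointing sensitivity per um of offset

# 3-axis misalignment for each of, say, 6 relay reflectors
offsets = rng.uniform(-POS_TOL_UM, POS_TOL_UM, size=(N, 6, 3))
pointing_err = SENS_ARCSEC_PER_UM * np.linalg.norm(offsets, axis=(1, 2))

print(f"95th-percentile pointing error: {np.percentile(pointing_err, 95):.1f} arcsec")
```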
Abstract:
The complexity of planning a wireless sensor network depends on the aspects being optimized and on the application requirements. Even though Murphy's Law applies everywhere in reality, a good planning algorithm will help designers become aware of the weak points of their design and improve them before the problems are exposed in a real deployment. A 3D multi-objective planning algorithm is proposed in this paper to provide solutions for the locations of nodes and their properties. It employs a purpose-built ray-tracing scheme for sensing-signal and radio propagation modelling, and is therefore sensitive to obstacles, making the models of sensing coverage and link quality more practical than those of other heuristics that use ideal unit-disk models. The proposed algorithm aims at an overall optimization of hardware cost, coverage, link quality and lifetime. Each of these metrics is therefore modelled and normalized to compose a desirability function. An evolutionary algorithm is designed to efficiently tackle this NP-hard multi-objective optimization problem. The proposed algorithm is applicable to both indoor and outdoor 3D scenarios. Different parameters that affect performance are analyzed through extensive experiments, and two state-of-the-art algorithms are rebuilt and tested with the same configuration as the proposed algorithm. The results indicate that the proposed algorithm converges efficiently within 600 iterations and performs better than the compared heuristics.
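The composite desirability idea can be sketched as follows; the metric ranges and weights are illustrative, not those of the paper. A weighted geometric mean is one common choice, since it prevents the optimizer from trading any single objective away entirely:

```python
# Sketch: each metric is normalized to [0, 1] ("1 is best") and combined
# into one scalar that an evolutionary algorithm can maximize.

def normalize(value: float, worst: float, best: float) -> float:
    """Map a raw metric onto [0, 1], with 1 at `best` (either direction)."""
    t = (value - worst) / (best - worst)
    return min(1.0, max(0.0, t))

def desirability(cost, coverage, link_quality, lifetime,
                 weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Weighted geometric mean: any metric at 0 drives the score to 0."""
    d = [normalize(cost, worst=10_000, best=0),      # lower cost is better
         normalize(coverage, worst=0.0, best=1.0),
         normalize(link_quality, worst=0.0, best=1.0),
         normalize(lifetime, worst=0, best=365)]     # days
    score = 1.0
    for di, wi in zip(d, weights):
        score *= di ** wi
    return score

print(desirability(cost=4_000, coverage=0.92, link_quality=0.8, lifetime=200))
```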
Abstract:
One of the key factors for an application to take advantage of cloud computing is the ability to scale in an efficient, fast and reliable way. In centralized multi-party video conferencing, dynamically scaling a running conversation is a complex problem. In this paper we propose a methodology to divide the Multipoint Control Unit (the video-conferencing server) into simpler units, broadcasters. Each broadcaster receives the media from one participant, processes it and forwards it to the rest. These broadcasters can be distributed among a group of CPUs. By using this methodology, video-conferencing systems can scale in a more granular way, improving deployment.
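A minimal sketch of the broadcaster decomposition follows (hypothetical API; the media processing itself is not shown). It illustrates why scaling becomes more granular: each broadcaster is an independent unit that can be placed on its own CPU or host:

```python
import asyncio

# Sketch: one broadcaster per participant receives that participant's
# media, processes it, and forwards it to every subscriber.

class Broadcaster:
    def __init__(self, participant: str):
        self.participant = participant
        self.subscribers: list[asyncio.Queue] = []

    def subscribe(self) -> asyncio.Queue:
        q: asyncio.Queue = asyncio.Queue()
        self.subscribers.append(q)
        return q

    async def publish(self, frame: bytes) -> None:
        processed = frame  # placeholder for transcoding/mixing
        for q in self.subscribers:
            await q.put((self.participant, processed))

async def demo():
    alice, bob = Broadcaster("alice"), Broadcaster("bob")
    bob_inbox = alice.subscribe()      # bob receives alice's stream
    await alice.publish(b"frame-1")
    print(await bob_inbox.get())       # ('alice', b'frame-1')

asyncio.run(demo())
```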