Abstract:
Modern IT infrastructures are built from large-scale computing systems and administered by IT service providers. Manually maintaining such large computing systems is costly and inefficient, so service providers seek automatic or semi-automatic methods of detecting and resolving system issues to improve their service quality and efficiency. This dissertation investigates several data-driven approaches for assisting service providers in achieving this goal. The problems studied fall into three aspects of the service workflow: 1) preprocessing raw textual system logs into structured events; 2) refining monitoring configurations to eliminate false positives and false negatives; and 3) improving the efficiency of system diagnosis for detected alerts. Solving these problems usually requires extensive domain knowledge about the particular computing systems. The approaches investigated in this dissertation are built on event mining algorithms, which can automatically derive part of that knowledge from historical system logs, events and tickets. In particular, two textual clustering algorithms are developed for converting raw textual logs into system events. For refining the monitoring configuration, a rule-based alert prediction algorithm is proposed for eliminating false alerts (false positives) without losing any real alert, and a textual classification method is applied to identify missing alerts (false negatives) from manual incident tickets. For system diagnosis, the dissertation presents an efficient algorithm for discovering temporal dependencies between system events, together with their corresponding time lags, which can help administrators determine redundancies among deployed monitoring situations and dependencies among system components.
To improve the efficiency of incident ticket resolution, several KNN-based algorithms are investigated that recommend relevant historical tickets, with their resolutions, for incoming tickets. Finally, the dissertation offers a novel algorithm for searching similar textual event segments over large system logs, which helps administrators locate similar system behaviors in the logs. Extensive empirical evaluation on system logs, events and tickets from real IT infrastructures demonstrates the effectiveness and efficiency of the proposed approaches.
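The KNN-based ticket recommendation idea can be illustrated with a minimal bag-of-words sketch: rank historical tickets by cosine similarity to the incoming ticket text and return the resolutions of the top k. The `recommend` helper, the toy ticket corpus, and the plain whitespace tokenization below are illustrative assumptions, not the dissertation's actual algorithms.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counter vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(incoming, history, k=2):
    """Return resolutions of the k historical tickets most similar
    to the incoming ticket text (text, resolution) pairs."""
    q = Counter(incoming.lower().split())
    scored = sorted(history,
                    key=lambda t: cosine(q, Counter(t[0].lower().split())),
                    reverse=True)
    return [resolution for _, resolution in scored[:k]]

history = [
    ("disk usage above threshold on db server", "clean temp files, extend volume"),
    ("cpu load high on web node", "restart stuck worker process"),
    ("disk full on backup server", "rotate old backups"),
]
print(recommend("disk usage alert on file server", history, k=2))
```

In practice the dissertation's evaluation would involve far richer ticket features; this sketch only shows why nearest-neighbor retrieval fits the task, since similar symptoms tend to share resolutions.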
Abstract:
Cloud computing realizes the long-held dream of converting computing capability into a type of utility. It has the potential to fundamentally change the landscape of the IT industry and our way of life. However, as cloud computing expands substantially in both scale and scope, ensuring its sustainable growth becomes a critical problem. Service providers have long suffered from high operational costs, especially those associated with the skyrocketing power consumption of large data centers. At the same time, while efficient power and energy utilization is indispensable for the sustainable growth of cloud computing, service providers must also satisfy users' quality-of-service (QoS) requirements. The problem becomes even more challenging given increasingly stringent power/energy and QoS constraints, as well as the highly dynamic, heterogeneous, and distributed nature of the computing infrastructures. In this dissertation, we study the problem of delay-sensitive cloud service scheduling for the sustainable development of cloud computing. We first focus on scheduling methods for delay-sensitive cloud services on a single server, with the goal of maximizing a service provider's profit. We then extend our study to scheduling cloud services in distributed environments. In particular, we develop a queue-based model and derive efficient request dispatching and processing decisions in a multi-electricity-market environment to improve service providers' profits. We next study a multi-tier service scheduling problem: by carefully assigning sub-deadlines to the service tiers, our approach can significantly improve resource usage efficiency with statistically guaranteed QoS. Finally, we study the power-conscious resource provisioning problem for service requests with different QoS requirements.
By properly sharing computing resources among different requests, our method statistically guarantees all QoS requirements with a minimized number of powered-on servers, and thus minimized power consumption. The significance of this research is that it forms part of the integrated effort from both industry and academia to ensure the sustainable growth of cloud computing as it continues to evolve and change our society profoundly.
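As a rough illustration of queue-based dispatching in a multi-electricity-market setting, the sketch below models each candidate site as an M/M/1 queue, keeps the sites whose mean response time 1/(mu - lambda) would stay within the deadline after accepting the new load, and picks the cheapest electricity price among them. The M/M/1 approximation, the site names, and all numbers are assumptions for illustration, not the dissertation's model.

```python
def pick_site(sites, rate_in, deadline):
    """Pick the cheapest site whose M/M/1 mean response time
    1/(mu - lam - rate_in) still meets the deadline after taking
    on an extra arrival rate of rate_in requests/sec."""
    feasible = []
    for name, mu, lam, price in sites:
        residual = mu - lam - rate_in        # leftover service capacity
        if residual > 0 and 1.0 / residual <= deadline:
            feasible.append((price, name))
    return min(feasible)[1] if feasible else None

# (name, service rate mu, current load lam, electricity price $/kWh)
sites = [
    ("east", 100.0, 80.0, 0.12),
    ("west", 100.0, 50.0, 0.10),
    ("north", 100.0, 95.0, 0.08),
]
print(pick_site(sites, rate_in=10.0, deadline=0.05))
```

Note the tension the abstract describes: "north" is cheapest but overloaded, so the dispatcher must trade electricity price against the QoS constraint rather than simply minimizing cost.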
Abstract:
The future power grid will effectively utilize renewable energy resources and distributed generation to respond to energy demand, while incorporating information technology and communication infrastructure for optimal operation. This dissertation contributes to the development of real-time techniques for wide-area monitoring and for secure real-time control and operation of hybrid power systems. To handle the increased level of real-time data exchange, the dissertation develops a supervisory control and data acquisition (SCADA) system equipped with a state estimation scheme driven by the real-time data. The system is verified on a specially developed laboratory-based test-bed facility, a hardware and software platform that emulates the scenarios of a real hybrid power system with a high degree of similarity and capability relative to practical utility systems. It includes phasor measurements at hundreds of measurement points on the system, obtained from a specially developed laboratory-based Phasor Measurement Unit (PMU) used in conjunction with the interconnected system alongside existing commercial PMUs. The studies tested include a new technique for detecting partially islanded microgrids, in addition to several real-time techniques for synchronization and parameter identification of hybrid systems. Moreover, because renewable energy resources are increasingly integrated through DC microgrids, the dissertation examines several practical cases for improving the interoperability of such systems.
Furthermore, the growing number of small, dispersed generating stations and their need to connect quickly and properly to AC grids motivated this work to explore the challenges that arise in synchronizing generators to the grid, and to introduce a Dynamic Brake system that improves the process of connecting distributed generators to the power grid. Real-time operation and control also require secure data communication. A research effort in this dissertation therefore develops a Trusted Sensing Base (TSB) process for data communication security. The innovative TSB approach improves the security of the power grid as a cyber-physical system; it builds on available GPS synchronization technology and provides protection against confidentiality attacks in critical power system infrastructures.
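The core computation of a PMU can be sketched with a textbook single-bin DFT phasor estimate over one cycle of waveform samples, which recovers the RMS magnitude and phase angle of the fundamental. This is a common estimation technique for synchrophasors, not necessarily the one implemented in the laboratory PMU described above; the sample count and signal values are illustrative.

```python
import cmath
import math

def phasor(samples, n_per_cycle):
    """Estimate the fundamental-frequency phasor (RMS magnitude,
    angle in radians) from one cycle of evenly spaced samples
    using a single-bin DFT."""
    n = n_per_cycle
    s = sum(samples[k] * cmath.exp(-2j * math.pi * k / n) for k in range(n))
    x = (2.0 / n) * s                 # peak-value complex phasor
    return abs(x) / math.sqrt(2), cmath.phase(x)

# One cycle of a cosine with 100 V peak and a 30-degree phase lead,
# sampled 64 times per cycle.
N = 64
wave = [100.0 * math.cos(2 * math.pi * k / N + math.radians(30)) for k in range(N)]
mag, ang = phasor(wave, N)
print(round(mag, 2), round(math.degrees(ang), 1))
```

Time-stamping such estimates against a GPS-disciplined clock is what turns them into synchrophasors comparable across wide-area measurement points.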
Abstract:
This study aims to discuss the dialogue between two ways of thinking about the health-disease process: traditional medicine and scientific medicine, the latter represented by the Unified Health System, particularly by a Family Health Team. Starting from the question "what is the role of each health system in sanitary problems as well as in disease prevention?", the study seeks to verify whether the "Ecology of Knowledges" discussed by Santos (2007) has been applied in the two ways of thinking about and practicing health policies, bearing in mind that each of them, with its own field of possibilities and impossibilities, acts directly on the daily life of the geographic areas covered by the research: the District of São João do Abade in the Municipality of Curuçá/PA and the Abade Family Health Team located in that same district. The study was carried out using data-collection instruments such as field research, direct observation and semi-structured interviews (HAGUETTE, 1997), together with theoretical contributions drawn from some of the key concepts most used by the Epistemologies of the South, among which we highlight the "Ecology of Knowledges" (SANTOS, 2007), the "work of Translation" (SANTOS, 2008), the "Sociology of Absences and the Sociology of Emergences" (SANTOS, 2004), the concept of "Health-Disease" (MINAYO, 1988), and the discussions around the "Knowledges of Tradition" (ALMEIDA, 2010; RAMALHO and ALMEIDA, 2011). The Family Health strategy was considered through the lens of Guedelha (2008) and Vilar et al. (2011), the latter making it possible to discuss the concept of "Community Medicine" (DONNANGELO and PEREIRA, 1976) as an important resource in the search for the "Ecology of Knowledges" within the Family Health Strategy.
Abstract:
Archaeologists are often considered frontrunners in employing spatial approaches within the social sciences and humanities, including geospatial technologies such as geographic information systems (GIS) that are now routinely used in archaeology. Since the late 1980s, GIS has mainly been used to support data collection and management as well as spatial analysis and modeling. While fruitful, these efforts have arguably neglected the potential contribution of advanced visualization methods to the generation of broader archaeological knowledge. This paper reviews the use of GIS in archaeology from a geographic visualization (geovisual) perspective and examines how these methods can broaden the scope of archaeological research in an era of more user-friendly cyber-infrastructures. Like most computational databases, GIS do not easily support temporal data. This limitation is particularly problematic in archaeology because processes and events are best understood in space and time. To deal with such shortcomings in existing tools, archaeologists often end up having to reduce the diversity and complexity of archaeological phenomena. Recent developments in geographic visualization begin to address some of these issues, and are pertinent in the globalized world as archaeologists amass vast new bodies of geo-referenced information and work towards integrating them with traditional archaeological data. Greater effort in developing geovisualization and geovisual analytics appropriate for archaeological data can create opportunities to visualize, navigate and assess different sources of information within the larger archaeological community, thus enhancing possibilities for collaborative research and new forms of critical inquiry.
Abstract:
GIA acknowledges funding from the Carnegie Trust to undertake fieldwork for this project. SM acknowledges the Israel Science Foundation (ISF grant no. 1436/14) and the Ministry of National Infrastructures, Energy and Water Resources (grant no. 214-17-027). RW was supported by the Israel Science Foundation (ISF grant no. 1245/11). We thank Hugo Ortner and Pedro Alfaro for careful and constructive reviews.
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
The protection of cyberspace has become one of the highest security priorities of governments worldwide. The EU is no exception, given its rapidly developing cyber security policy. Since the 1990s, three broad areas of policy interest have emerged: cyber-crime, critical information infrastructures and cyber-defence. One of the main trends cutting across these areas is the importance that the private sector has come to assume within them. In the area of critical information infrastructure protection in particular, the private sector is seen as a key stakeholder, since it currently operates most of the infrastructures in this field. As a result of this operative capacity, the private sector has come to be understood as the expert in network and information systems (NIS) security, whose knowledge is crucial for regulating the field. Adopting a Regulatory Capitalism framework, complemented by insights from Network Governance, we can trace the shifting role of the private sector in this field: from a victim in need of protection in the first phase, to a commercial actor bearing responsibility for ensuring network resilience in the second, to an active policy shaper in the third, participating in the regulation of NIS by providing technical expertise. By drawing on these frameworks, we can better understand how private actors are involved in shaping regulatory responses, as well as why they have been incorporated into these regulatory networks.
Abstract:
The effects of vehicle speed on Structural Health Monitoring (SHM) of bridges under operational conditions are studied in this paper. The moving vehicle is modelled as a single-degree-of-freedom oscillator traversing a damaged beam at constant speed. The bridge is modelled as a simply supported Euler-Bernoulli beam with a breathing crack, which is treated as a nonlinear system with bilinear stiffness characteristics related to the opening and closing of the crack. The unevenness of the bridge deck is modelled using the road classification of ISO 8608:1995(E). The stochastic description of the road surface unevenness is used as an aid to monitor the health of the structure in its operational condition. Numerical simulations examine the effects of changing vehicle speed on cumulant-based statistical damage detection parameters. The detection and calibration of damage at different levels is based on an algorithm that depends on the responses of the damaged beam to passages of the load. Damage detection and calibration are considered for both benchmarked and non-benchmarked cases, and the sensitivity of the calibration values is studied. The findings are important for establishing what to expect from different vehicle speeds when using bridge-vehicle interaction for damage detection, so that the bridge does not need to be closed for monitoring. The identification of bunching in these speed ranges provides guidelines for applying the methodology developed in the paper.
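The bilinear stiffness of a breathing crack can be sketched as a piecewise-linear restoring force that switches between the intact stiffness when the crack is closed and a reduced stiffness when it is open. The sign convention (crack closed under negative response) and the 20% stiffness loss below are illustrative assumptions, not values from the paper.

```python
def restoring_force(u, k_intact, k_cracked):
    """Bilinear restoring force of a breathing-crack section:
    full stiffness while the response u closes the crack (u <= 0),
    reduced stiffness while it opens the crack (u > 0)."""
    return (k_intact if u <= 0 else k_cracked) * u

# Illustrative values: 20% stiffness loss when the crack is open.
k_intact, k_cracked = 1.0e6, 0.8e6   # N/m
print(restoring_force(-0.002, k_intact, k_cracked))  # crack closed
print(restoring_force(0.002, k_intact, k_cracked))   # crack open
```

This switching is what makes the system nonlinear: the response to a passing load mixes the two stiffness regimes, and the resulting asymmetry is what cumulant-based statistics can pick up.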
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Cloud computing offers the massive scalability and elasticity required by many scientific and commercial applications. Combining the computational and data handling capabilities of clouds with parallel processing also has the potential to tackle Big Data problems efficiently. Science gateway frameworks and workflow systems enable application developers to implement complex applications and make these available to end-users via simple graphical user interfaces. The integration of such frameworks with Big Data processing tools on the cloud opens new opportunities for application developers. This paper investigates how workflow systems and science gateways can be extended with Big Data processing capabilities. A generic approach based on infrastructure-aware workflows is suggested, and a proof of concept is implemented based on the WS-PGRADE/gUSE science gateway framework and its integration with the Hadoop parallel data processing solution, based on the MapReduce paradigm, in the cloud. The analysis demonstrates that the described methods for integrating Big Data processing with workflows and science gateways work well in different cloud infrastructures and application scenarios, and can be used to create massively parallel applications for scientific analysis of Big Data.
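The MapReduce paradigm mentioned above is conventionally illustrated with a word count: a map phase emits (key, 1) pairs from each input chunk, and a reduce phase sums the counts per key. The sketch below is the canonical textbook example in plain Python, not WS-PGRADE/gUSE or Hadoop code; in Hadoop the chunks, shuffling and reduction would be distributed across cluster nodes.

```python
from collections import defaultdict
from itertools import chain

def map_phase(chunk):
    """Map: emit a (word, 1) pair for each word in an input chunk."""
    return [(w.lower(), 1) for w in chunk.split()]

def reduce_phase(pairs):
    """Reduce: group pairs by key and sum the emitted counts."""
    groups = defaultdict(int)
    for key, value in pairs:
        groups[key] += value
    return dict(groups)

# Each string stands in for a data block processed by one mapper.
chunks = ["big data on the cloud", "big workflows for big data"]
counts = reduce_phase(chain.from_iterable(map_phase(c) for c in chunks))
print(counts["big"], counts["data"])
```

The value of wrapping such jobs in a science gateway is precisely that end-users never write this plumbing; they configure inputs through the graphical interface and the workflow system provisions the Hadoop infrastructure.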
Abstract:
Since the 1950s, the global consumption of natural resources has skyrocketed, both in magnitude and in the range of resources used. Closely coupled with emissions of greenhouse gases, land consumption, pollution of environmental media, and degradation of ecosystems, as well as with economic development, increasing resource use is a key issue to be addressed in order to keep planet Earth in a safe and just operating space. This requires thinking about absolute reductions in resource use and associated environmental impacts and, in the context of the current refocusing on economic growth at the European level, absolute decoupling, i.e., maintaining economic development while absolutely reducing resource use and associated environmental impacts. Changing the behavioural, institutional and organisational structures that lock in unsustainable resource use is thus a formidable challenge, as existing world views, social practices, infrastructures and power structures make initiating change difficult. Hence, policy mixes are needed that target different drivers in a systematic way. When designing policy mixes for decoupling, the effect of individual instruments on other drivers and on other instruments in a mix should be considered and potential negative effects mitigated. This requires smart and time-dynamic policy packaging. This Special Issue investigates the following research questions: What is decoupling and how does it relate to resource efficiency and environmental policy? How can we develop and realize policy mixes for decoupling economic development from resource use and associated environmental impacts? And how can we do this in a systemic way, so that all relevant dimensions and linkages—including across economic and social issues, such as production, consumption, transport, growth and wellbeing—are taken into account?
In addressing these questions, the overarching goals of this Special Issue are to: address the challenges related to more sustainable resource-use; contribute to the development of successful policy tools and practices for sustainable development and resource efficiency (particularly through the exploration of socio-economic, scientific, and integrated aspects of sustainable development); and inform policy debates and policy-making. The Special Issue draws on findings from the EU and other countries to offer lessons of international relevance for policy mixes for more sustainable resource-use, with findings of interest to policy makers in central and local government and NGOs, decision makers in business, academics, researchers, and scientists.
Abstract:
Community networks are IP-based computer networks that are operated by a community as a common good. In Europe, the most well-known community networks are Guifi in Catalonia, Freifunk in Berlin, Ninux in Italy, Funkfeuer in Vienna and the Athens Wireless Metropolitan Network in Greece. This paper deals with community networks as alternative forms of Internet access and alternative infrastructures, and asks: What do sustainability and unsustainability mean in the context of community networks? What advantages do such networks have over conventional forms of Internet access and infrastructure provided by large telecommunications corporations? And what disadvantages do they face at the same time? The article provides a framework for thinking dialectically about the un/sustainability of community networks, offering a set of practical questions that can be asked when assessing power structures in the context of Internet infrastructures and access. It presents an overview of the environmental, economic, political and cultural contradictions that community networks may face, together with a typology of questions that can be asked in order to identify such contradictions.
Abstract:
Big Data Analytics is an emerging field, as massive storage and computing capabilities have been made available by advanced e-infrastructures. The Earth and environmental sciences are likely to benefit from Big Data Analytics techniques supporting the processing of the large number of Earth Observation datasets currently acquired and generated through observations and simulations. However, Earth science data and applications have specific characteristics: the central relevance of geospatial information, wide heterogeneity of data models and formats, and complexity of processing. Big Earth Data Analytics therefore requires specifically tailored techniques and tools. The EarthServer Big Earth Data Analytics engine offers a solution for coverage-type datasets, built around a high-performance array database technology and the adoption and enhancement of standards for service interaction (OGC WCS and WCPS). The EarthServer solution, guided by requirements collected from scientific communities and international initiatives, provides a holistic approach that ranges from query languages and scalability to mobile access and visualization. The result is demonstrated and validated through the development of lighthouse applications in the Marine, Geology, Atmospheric, Planetary and Cryospheric science domains.
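Conceptually, a WCPS-style aggregation over a spatial subset of a coverage reduces to slicing an array and aggregating the selected cells, which is the kind of operation an array database pushes down to the data. The pure-Python sketch below mimics that on a tiny 2-D grid; the helper, the grid values, and the mapping of index slices to Lat/Long trims are illustrative assumptions, not EarthServer's implementation.

```python
def avg_over_subset(coverage, lat_slice, lon_slice):
    """Average a 2-D coverage (nested lists indexed [lat][lon]) over
    a rectangular subset, analogous to a WCPS aggregation such as
    avg(c[Lat(lo:hi), Long(lo:hi)]) evaluated inside the array database."""
    rows = coverage[lat_slice]
    cells = [v for row in rows for v in row[lon_slice]]
    return sum(cells) / len(cells)

# A toy 3x3 coverage, e.g. gridded temperature values.
grid = [[1.0, 2.0, 3.0],
        [4.0, 5.0, 6.0],
        [7.0, 8.0, 9.0]]
print(avg_over_subset(grid, slice(0, 2), slice(1, 3)))
```

The point of the array-database approach is that such reductions run next to the stored arrays, so only the small aggregate result, not the full coverage, crosses the network to the client.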