821 results for "Infrastructures sanitaires"
Abstract:
The commercial far-range (>10 m) spatial data collection methods for acquiring infrastructure geometric data are not fully automated because of the manual pre- and/or post-processing work they require. The amount of human intervention required and, in some cases, the high equipment costs associated with these methods impede their adoption in the majority of infrastructure mapping activities. This paper presents an automated stereo vision-based method, as an alternative and inexpensive solution, for producing a sparse Euclidean 3D point cloud of an infrastructure scene from two video streams captured by a pair of calibrated cameras. In this process, SURF features are automatically detected and matched between each pair of stereo video frames, and the 3D coordinates of the matched feature points are then calculated via triangulation. The SURF features detected in two successive video frames are automatically matched, and the RANSAC algorithm is used to discard mismatches. The quaternion motion estimation method is then used along with bundle adjustment optimization to register successive point clouds. The method was tested on a database of infrastructure stereo video streams. The validity and statistical significance of the results were evaluated by comparing the spatial distances of randomly selected feature points with their corresponding tape measurements.
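A minimal sketch of the core of such a per-frame pipeline (detect, match, reject mismatches with RANSAC, triangulate), assuming OpenCV and using ORB features in place of SURF, since SURF is patented and absent from many OpenCV builds. The projection matrices P1 and P2 are assumed to come from the calibration step; the registration of successive point clouds (quaternion motion estimation and bundle adjustment) is not shown.

```python
import cv2
import numpy as np

def sparse_stereo_cloud(img_left, img_right, P1, P2):
    """Triangulate a sparse 3D point cloud from one calibrated stereo pair.

    P1, P2: 3x4 projection matrices K[R | t] of the left/right cameras,
    assumed known from calibration.
    """
    # Detect and describe features (ORB here; the paper uses SURF).
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)

    # Match descriptors between the two views.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Reject mismatches with RANSAC on the epipolar constraint.
    _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    inliers = mask.ravel() == 1
    pts1, pts2 = pts1[inliers], pts2[inliers]

    # Linear triangulation returns 4xN homogeneous coordinates.
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T   # Nx3 Euclidean points
```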
Abstract:
Advances in computer vision, miniature Micro-Electro-Mechanical Systems (MEMS) and Wireless Sensor Networks (WSN) offer intriguing possibilities that can radically alter the paradigms underlying existing methods of condition assessment and monitoring of ageing civil engineering infrastructure. This paper describes some of the outcomes of the European Science Foundation project "Micro-Measurement and Monitoring System for Ageing Underground Infrastructures (Underground M3)". The main aim of the project was to develop a system that uses a tiered approach to monitor the degree and rate of tunnel deterioration. The system comprises (1) Tier 1: micro-detection using advances in computer vision and (2) Tier 2: micro-monitoring and communication using advances in MEMS and WSN. These potentially low-cost technologies will be able to reduce the costs associated with end-of-life structures, which is essential to the viability of rehabilitation, repair and reuse. The paper describes the actual deployment and testing of these innovative monitoring tools in tunnels of London Underground, Prague Metro and Barcelona Metro.
Abstract:
Quantum key distribution (QKD) uniquely allows the distribution of cryptographic keys with security verified by quantum mechanical limits. Both protocol execution and subsequent applications require the assistance of classical data communication channels. While using separate fibers is one option, it is economically more viable if data and quantum signals are transmitted simultaneously through a single fiber. However, noise-photon contamination arising from the intense data signal has severely restricted both QKD distances and secure key rates. Here, we exploit a novel temporal-filtering effect for noise-photon rejection. This allows high-bit-rate QKD over fibers up to 90 km in length that are populated with error-free bidirectional Gb/s data communications. With a bit rate and range sufficient for important information infrastructures, such as smart cities and 10 Gbit Ethernet, QKD is a significant step closer to wide-scale deployment in fiber networks.
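A toy illustration of the temporal-filtering idea (not the authors' implementation): single-photon detection events are accepted only if they fall inside a narrow gate around the expected arrival times of the quantum signal, which rejects most of the noise photons scattered by the co-propagating data channel. The clock period and gate width below are arbitrary placeholder values.

```python
import numpy as np

def temporal_filter(timestamps_ns, clock_period_ns=1.0, gate_width_ns=0.1):
    """Keep only detection events within +/- gate_width/2 of the nearest
    expected signal arrival time (multiples of the clock period)."""
    phase = np.mod(timestamps_ns, clock_period_ns)
    # Distance to the nearest clock edge.
    offset = np.minimum(phase, clock_period_ns - phase)
    return timestamps_ns[offset <= gate_width_ns / 2.0]

# Example: mostly random (noise) arrival times plus a few on-clock signal events.
events = np.concatenate([np.random.uniform(0, 100, 1000), np.arange(1, 100, 1.0)])
print(len(temporal_filter(events)), "events survive the gate out of", len(events))
```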
Abstract:
RSA-based Password-Authenticated Key Exchange (PAKE) protocols have been proposed to realize both mutual authentication and the generation of secure session keys in a setting where a client shares his/her password only with a server, and the server must generate its RSA public/private key pair (e, n), (d, n) every time due to the lack of a PKI (Public-Key Infrastructure). One way to avoid a special kind of off-line attack (the so-called e-residue attack) in RSA-based PAKE protocols is to deploy a challenge/response method by which a client interactively verifies with the server that e is relatively prime to φ(n). However, previous RSA-based PAKE protocols of this kind did not give any proof of the underlying challenge/response method and therefore could not specify the exact complexity of their protocols, since the challenge/response method requires an additional security parameter. In this paper, we first present an RSA-based PAKE (RSA-PAKE) protocol that can deploy two different challenge/response methods (denoted Challenge/Response Method1 and Challenge/Response Method2). The main contributions of this work are: (1) based on number theory, we prove that Challenge/Response Method1 and Challenge/Response Method2 are secure against e-residue attacks for any odd prime e; (2) with the security parameter for on-line attacks, we show that the RSA-PAKE protocol is provably secure in the random oracle model, where no off-line attack is more efficient than an on-line dictionary attack; and (3) by considering the Hamming weight of e and its impact on the complexity of the RSA-PAKE protocol, we identify primes that can be recommended for practical use. We also compare the RSA-PAKE protocol with previous protocols, mainly in terms of computation and communication complexities.
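A small sketch of the two elementary checks the abstract refers to, under the usual notation: e must be relatively prime to φ(n) = (p-1)(q-1) (otherwise e-residue attacks become possible), and a low Hamming weight of e keeps modular exponentiation cheap. The candidate list below is illustrative only, not the paper's recommendation.

```python
from math import gcd

def valid_exponent(e: int, p: int, q: int) -> bool:
    """e must be relatively prime to phi(n) = (p-1)(q-1); this is the property
    the challenge/response method lets the client verify interactively."""
    phi_n = (p - 1) * (q - 1)
    return gcd(e, phi_n) == 1

def hamming_weight(e: int) -> int:
    """Number of 1-bits in e; fewer 1-bits means fewer multiplications
    in square-and-multiply exponentiation."""
    return bin(e).count("1")

# Illustrative odd-prime candidates with low Hamming weight (Fermat-style primes).
for e in (3, 5, 17, 257, 65537):
    print(e, "weight:", hamming_weight(e))
```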
Abstract:
Expansion of economic activities, urbanisation, increased resource use and population growth are continuously increasing the vulnerability of the coastal zone. This vulnerability is now further increased by the threat of climate change and accelerated sea level rise. The potentially severe impacts force policy-makers to also consider long-term planning for climate change and sea level rise. For reasons of efficiency and effectiveness, this long-term planning should be integrated with existing short-term plans, thus creating an Integrated Coastal Zone Management programme. As a starting point for coastal zone management, the assessment of a country's or region's vulnerability to accelerated sea level rise is of utmost importance. The Intergovernmental Panel on Climate Change has developed a common methodology for this purpose. Studies carried out according to this Common Methodology have been compared and combined, and from them general conclusions on local, regional and global vulnerability have been drawn, the latter in the form of a Global Vulnerability Assessment. In order to address the challenge of coping with climate change and accelerated sea level rise, it is essential to foresee the possible impacts and to take precautionary action. Because of the long lead times needed to create the required technical and institutional infrastructures, such action should be taken in the short term. Furthermore, it should be part of a broader coastal zone management and planning context. This will require a holistic view, shared across the different institutional levels, along which different needs and interests should be balanced.
Abstract:
Although new manufacturing systems continue to emerge, CIM still enjoys broad recognition among enterprises. This paper discusses several key issues in CIM design. First, it is essential that CIM be implemented on the basis of business process reengineering; second, a CIM functional architecture must be adopted to specify and accommodate the various functional systems so that business processes can be executed more effectively. At the same time, a corresponding integration infrastructure is needed to support enterprise-wide data management.
Abstract:
The dream of pervasive computing is slowly becoming a reality. A number of projects around the world are constantly contributing ideas and solutions that are bound to change the way we interact with our environments and with one another. An essential component of this future is a software infrastructure capable of supporting interactions on scales ranging from a single physical space to intercontinental collaborations. Such an infrastructure must help applications adapt to very diverse environments, and must protect people's privacy and respect their personal preferences. In this paper we point out a number of limitations of the software infrastructures proposed so far (including our own previous work). We then describe a framework for building an infrastructure that satisfies the above criteria. This framework hinges on the concepts of delegation, arbitration and high-level service discovery. Components of our own implementation of such an infrastructure are presented.
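As one way to picture the high-level service discovery and arbitration concepts named above (this is not the authors' framework), a registry could map abstract capabilities to concrete providers and let an arbitration policy, informed by user preferences, choose among them. All names below are hypothetical.

```python
class ServiceRegistry:
    """Toy registry: abstract capability name -> list of candidate providers."""

    def __init__(self):
        self._providers = {}

    def register(self, capability, provider, attributes):
        self._providers.setdefault(capability, []).append((provider, attributes))

    def discover(self, capability, prefer=None):
        """Arbitrate among candidates using a user-supplied preference scoring."""
        candidates = self._providers.get(capability, [])
        if not candidates:
            return None
        if prefer is None:
            return candidates[0][0]
        return max(candidates, key=lambda c: prefer(c[1]))[0]

registry = ServiceRegistry()
registry.register("display", "wall_projector", {"resolution": 1080, "private": False})
registry.register("display", "personal_tablet", {"resolution": 720, "private": True})
# A privacy-conscious user prefers private displays over higher resolution.
print(registry.discover("display", prefer=lambda a: a["private"]))
```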
Abstract:
Malicious software (malware) has significantly increased in both number and effectiveness over the past years. Until 2006, such software was mostly used to disrupt network infrastructures or to show off its authors' skills. Nowadays, malware constitutes a very important source of economic profit and is very difficult to detect. Thousands of novel variants are released every day, and modern obfuscation techniques are used to ensure that signature-based anti-malware systems cannot detect such threats. This tendency has also appeared on mobile devices, with Android being the most targeted platform. To counteract this phenomenon, the scientific community has developed many approaches that attempt to increase the resilience of anti-malware systems. Most of these approaches rely on machine learning and have also become very popular in commercial applications. However, attackers are now knowledgeable about these systems and have started preparing countermeasures, which has led to an arms race between attackers and developers: novel systems are progressively built to tackle attacks that get more and more sophisticated. For this reason, developers increasingly need to anticipate the attackers' moves. This means that defense systems should be built proactively, i.e., by introducing security design principles into their development. The main goal of this work is to show that such a proactive approach can be employed in a number of case studies. To do so, I adopted a global methodology that can be divided into two steps: first, understanding the vulnerabilities of current state-of-the-art systems (this anticipates the attacker's moves); then, developing novel systems that are robust to these attacks, or suggesting research guidelines along which current systems can be improved. This work presents two main case studies, concerning the detection of PDF and Android malware; the idea is to show that a proactive approach can be applied to both the x86 and the mobile world. The contributions provided in these two case studies are manifold. With respect to PDF files, I first develop novel attacks that can empirically and optimally evade current state-of-the-art detectors; I then propose possible solutions with which it is possible to increase the robustness of such detectors against known and novel attacks. With respect to the Android case study, I first show how current signature-based tools and academically developed systems are weak against empirical obfuscation attacks, which can easily be employed without particular knowledge of the targeted systems; I then examine a possible strategy for building a machine learning detector that is robust against both empirical obfuscation and optimal attacks. Finally, I show how proactive approaches can also be employed to develop systems that are not aimed at detecting malware, such as mobile fingerprinting systems. In particular, I propose a methodology for building a powerful mobile fingerprinting system and examine possible attacks with which users might be able to evade it, thus preserving their privacy.
To provide the aforementioned contributions, I co-developed (in cooperation with researchers at PRALab and Ruhr-Universität Bochum) various systems: a library to perform optimal attacks against machine learning systems (AdversariaLib), a framework for automatically obfuscating Android applications, a system for the robust detection of JavaScript malware inside PDF files (LuxOR), a robust machine learning system for the detection of Android malware, and a system to fingerprint mobile devices. I also contributed to developing Android PRAGuard, a dataset containing a large number of empirical obfuscation attacks against the Android platform. Finally, I entirely developed Slayer NEO, an evolution of a previous system for the detection of PDF malware. The results attained with the aforementioned tools show that it is possible to proactively build systems that predict possible evasion attacks. This suggests that a proactive approach is crucial for building systems that provide concrete security against general and evasion attacks.
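A minimal sketch, using scikit-learn, of one proactive ingredient common in this line of work: anticipate a simple feature-addition evasion (the attacker can only switch benign-looking features on, not remove malicious ones) and retrain on the anticipated evaded samples. It is illustrative only; it does not reproduce the thesis's detectors (Slayer NEO, LuxOR, etc.), and all data here is synthetic.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy binary feature vectors: 200 benign (label 0) and 200 malicious (label 1).
X = rng.integers(0, 2, size=(400, 50)).astype(float)
y = np.array([0] * 200 + [1] * 200)
X[y == 1, :5] = 1.0          # pretend the first 5 features flag malicious behaviour

clf = LinearSVC(C=1.0, max_iter=5000).fit(X, y)

def evade(x, w, budget=3):
    """Feature-addition attack: turn on the still-zero features whose weights
    push the score most towards the benign class."""
    x = x.copy()
    added = 0
    for j in np.argsort(w):              # most benign-indicative weights first
        if added >= budget:
            break
        if x[j] == 0 and w[j] < 0:
            x[j] = 1.0
            added += 1
    return x

w = clf.coef_.ravel()
X_evaded = np.array([evade(x, w) for x in X[y == 1]])

# Proactive step: retrain with the anticipated evasion samples included.
X_aug = np.vstack([X, X_evaded])
y_aug = np.concatenate([y, np.ones(len(X_evaded), dtype=int)])
robust_clf = LinearSVC(C=1.0, max_iter=5000).fit(X_aug, y_aug)

print("plain detector, detection rate on evaded malware   :", clf.predict(X_evaded).mean())
print("retrained detector, detection rate on evaded malware:", robust_clf.predict(X_evaded).mean())
```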
Abstract:
Spink, S., Urquhart, C., Cox, A. & Higher Education Academy - Information and Computer Sciences Subject Centre. (2007). Procurement of electronic content across the UK National Health Service and Higher Education sectors. Report to JISC executive and LKDN executive. Sponsorship: JISC/LKDN
Abstract:
The pervasiveness of personal computing platforms offers an unprecedented opportunity to deploy large-scale services that are distributed over wide physical spaces. Two major challenges face the deployment of such services: the often resource-limited nature of these platforms, and the necessity of preserving the autonomy of the owners of these devices. These challenges preclude centralized control and preclude considering services that are subject to performance guarantees. To that end, this thesis advances a number of new distributed resource management techniques that are shown to be effective in such settings, focusing on two application domains: distributed Field Monitoring Applications (FMAs) and Message Delivery Applications (MDAs). In the context of FMAs, this thesis presents two techniques that are well suited to the fairly limited storage and power resources of autonomously mobile sensor nodes. The first technique relies on amorphous placement of sensory data through the use of novel storage management and sample diffusion techniques. The second relies on an information-theoretic framework to optimize local resource management decisions. Both approaches are proactive in that they aim to provide nodes with a view of the monitored field that reflects the characteristics of queries over that field, enabling them to handle more queries locally and thus reduce communication overheads. This thesis then recognizes node mobility as a resource to be leveraged and, in that respect, proposes novel mobility coordination techniques for FMAs and MDAs. Assuming that node mobility is governed by a spatio-temporal schedule featuring some slack, this thesis presents novel algorithms of various computational complexities to orchestrate the use of this slack to improve the performance of the supported applications. The findings in this thesis, which are supported by analysis and extensive simulations, highlight the importance of two general design principles for distributed systems. First, a priori knowledge (e.g., about the target phenomena of FMAs and/or the workload of either FMAs or MDAs) can be used effectively for local resource management. Second, judicious leverage and coordination of node mobility can lead to significant performance gains for distributed applications deployed over resource-impoverished infrastructures.
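A toy sketch of the query-aware idea behind both FMA techniques (not the thesis's actual algorithms): when a node's local storage overflows, it keeps the samples most likely to be needed given an assumed spatial distribution of queries, so that more queries can be answered locally. The sample and query-distribution values are hypothetical.

```python
import heapq

def retain_samples(samples, query_prob, capacity):
    """Keep the `capacity` samples whose locations are most likely to be queried.

    samples    : list of (location, value) pairs held by the node
    query_prob : dict mapping location -> estimated probability of being queried
    capacity   : number of samples the node can store
    """
    scored = ((query_prob.get(loc, 0.0), loc, val) for loc, val in samples)
    return [(loc, val) for _, loc, val in heapq.nlargest(capacity, scored)]

samples = [("cell_a", 3.2), ("cell_b", 1.7), ("cell_c", 9.9), ("cell_d", 0.4)]
query_prob = {"cell_a": 0.5, "cell_b": 0.1, "cell_c": 0.3, "cell_d": 0.1}
print(retain_samples(samples, query_prob, capacity=2))
```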
Abstract:
We introduce Collocation Games as the basis of a general framework for modeling, analyzing, and facilitating the interactions between the various stakeholders in distributed systems in general, and in cloud computing environments in particular. Cloud computing enables fixed-capacity (processing, communication, and storage) resources to be offered by infrastructure providers as commodities for sale at a fixed cost in an open marketplace to independent, rational parties (players) interested in setting up their own applications over the Internet. Virtualization technologies enable the partitioning of such fixed-capacity resources so as to allow each player to dynamically acquire appropriate fractions of the resources for unencumbered use. In such a paradigm, the resource management problem reduces to that of partitioning the entire set of applications (players) into subsets, each of which is assigned to fixed-capacity cloud resources. If the infrastructure and the various applications are under a single administrative domain, this partitioning reduces to an optimization problem whose objective is to minimize the overall deployment cost. In a marketplace, in which the infrastructure provider is interested in maximizing its own profit, and in which each player is interested in minimizing its own cost, it should be evident that a global optimization is precisely the wrong framework. Rather, in this paper we use a game-theoretic framework in which the assignment of players to fixed-capacity resources is the outcome of a strategic "Collocation Game". Although we show that determining the existence of an equilibrium for collocation games in general is NP-hard, we present a number of simplified, practically-motivated variants of the collocation game for which we establish convergence to a Nash Equilibrium, and for which we derive convergence and price of anarchy bounds. In addition to these analytical results, we present an experimental evaluation of implementations of some of these variants for cloud infrastructures consisting of a collection of multidimensional resources of homogeneous or heterogeneous capacities. Experimental results using trace-driven simulations and synthetically generated datasets corroborate our analytical results and also illustrate how collocation games offer a feasible distributed resource management alternative for autonomic/self-organizing systems, in which the adoption of a global optimization approach (centralized or distributed) would be neither practical nor justifiable.
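A minimal best-response sketch for one simplified setting (uniform server cost shared equally by the tenants of a server, subject to a capacity constraint); it only illustrates the strategic framing, not the paper's specific game variants or bounds, and convergence is guaranteed only for such restricted cases. The demands, capacity and cost below are placeholders.

```python
def best_response_dynamics(demands, capacity, server_cost, max_rounds=100):
    """Each player repeatedly moves to the feasible server where its share of
    the (equally split) server cost is lowest; stop when nobody wants to move."""
    assignment = list(range(len(demands)))          # start: one server per player
    for _ in range(max_rounds):
        moved = False
        for i, d in enumerate(demands):
            loads = {}                              # server -> [total demand, tenant count]
            for j, s in enumerate(assignment):
                loads.setdefault(s, [0.0, 0])
                loads[s][0] += demands[j]
                loads[s][1] += 1
            current = assignment[i]
            best_server, best_cost = current, server_cost / loads[current][1]
            for s, (load, count) in loads.items():
                if s == current:
                    continue
                if load + d <= capacity:            # feasibility: capacity constraint
                    cost = server_cost / (count + 1)
                    if cost < best_cost:
                        best_server, best_cost = s, cost
            if best_server != current:
                assignment[i] = best_server
                moved = True
        if not moved:                               # no profitable deviation: a Nash equilibrium
            return assignment
    return assignment

print(best_response_dynamics(demands=[0.4, 0.3, 0.3, 0.2], capacity=1.0, server_cost=10.0))
```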
Abstract:
Emerging configurable infrastructures such as large-scale overlays and grids, distributed testbeds, and sensor networks comprise diverse sets of available computing resources (e.g., CPU and OS capabilities and memory constraints) and network conditions (e.g., link delay, bandwidth, loss rate, and jitter) whose characteristics are both complex and time-varying. At the same time, distributed applications to be deployed on these infrastructures exhibit increasingly complex constraints and requirements on the resources they wish to utilize. Examples include selecting nodes and links to schedule an overlay multicast file transfer across the Grid, or embedding a network experiment with specific resource constraints in a distributed testbed such as PlanetLab. Thus, a common problem facing the efficient deployment of distributed applications on these infrastructures is that of "mapping" application-level requirements onto the network in such a manner that the requirements of the application are realized, assuming that the underlying characteristics of the network are known. We refer to this problem as the network embedding problem. In this paper, we propose a new approach to tackle this combinatorially hard problem. Thanks to a number of heuristics, our approach greatly improves performance and scalability over previously existing techniques. It does so by pruning large portions of the search space without overlooking any valid embedding. We present a construction that allows a compact representation of candidate embeddings, which is maintained by carefully controlling the order in which candidate mappings are inserted and invalid mappings are removed. We present an implementation of our proposed technique, which we call NETEMBED – a service that identifies feasible mappings of a virtual network configuration (the query network) to an existing real infrastructure or testbed (the hosting network). We present results of extensive performance evaluation experiments of NETEMBED using several combinations of real and synthetic network topologies. Our results show that our NETEMBED service is quite effective in identifying one (or all) possible embeddings for quite sizable queries and hosting networks – much larger than any of the existing techniques or services are able to handle.
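A compact backtracking sketch of the embedding problem itself (not the NETEMBED service or its pruning construction): map every query node to a distinct hosting node with enough capacity, and every query edge to a hosting link whose bandwidth suffices, abandoning a branch as soon as either constraint fails. Node capacities and link bandwidths below are placeholders.

```python
def embed(query_nodes, query_edges, host_nodes, host_bw):
    """query_nodes: {name: cpu_demand};  query_edges: {(a, b): bw_demand}
    host_nodes:  {name: cpu_capacity}; host_bw:     {(u, v): bandwidth}
    Returns one feasible mapping {query node -> host node}, or None."""
    order = list(query_nodes)

    def link_bw(u, v):
        return host_bw.get((u, v), host_bw.get((v, u), 0.0))

    def feasible(q, h, mapping):
        if h in mapping.values() or host_nodes[h] < query_nodes[q]:
            return False                       # prune: host reused or too small
        for (a, b), demand in query_edges.items():
            other = b if a == q else a if b == q else None
            if other in mapping and link_bw(h, mapping[other]) < demand:
                return False                   # prune: an incident link is too thin
        return True

    def backtrack(i, mapping):
        if i == len(order):
            return dict(mapping)
        q = order[i]
        for h in host_nodes:
            if feasible(q, h, mapping):
                mapping[q] = h
                result = backtrack(i + 1, mapping)
                if result is not None:
                    return result
                del mapping[q]
        return None

    return backtrack(0, {})

hosts = {"h1": 4, "h2": 2, "h3": 4}
host_links = {("h1", "h2"): 100, ("h2", "h3"): 100, ("h1", "h3"): 10}
print(embed({"a": 2, "b": 3}, {("a", "b"): 50}, hosts, host_links))
```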
Abstract:
Scyphomedusae are receiving increasing recognition as key components of marine ecosystems. However, information on their distribution and abundance beyond coastal waters is generally lacking. Organising access to such data is critical to effectively transpose findings from laboratory, mesocosm and small-scale studies to the scale of ecological processes. These data are also required to identify the risks of detrimental impacts of jellyfish blooms on human activities. In Ireland, such risks raise concerns among the public, but foremost among professionals of the aquaculture and fishing sectors. The present work examined the opportunity to obtain new information on the distribution of jellyfish around Ireland, mostly by using existing infrastructures and programmes. The analysis of bycatch data collected during the Irish groundfish surveys provided new insights into the distribution of Pelagia noctiluca over an area >160 000 km2 (140 sampling stations), a scale never reached before in this region of the Northeast Atlantic. Similarly, 4 years of data collected during the Irish Sea juvenile gadoid fish survey provided the first spatially explicit information on the abundance of Aurelia aurita and Cyanea spp. (Cyanea capillata and Cyanea lamarckii) throughout the Irish Sea (>200 sampling events). In addition, the use of ships of opportunity allowed repeated sampling (N = 37) of a >100 km long transect between Dublin (Ireland) and Holyhead (Wales, UK), providing two years of seasonal monitoring of the occurrence of scyphomedusae in that region. Finally, in order to characterize the movements of C. capillata in an area where many negative interactions with bathers occur, the horizontal and vertical movements of 5 individual C. capillata were investigated through acoustic tracking.
Abstract:
This paper presents a design science approach to solving persistent problems in the international shipping ecosystem by creating the missing common information infrastructures. Specifically, this paper reports on an ongoing dialogue between stakeholders in the shipping industry and information systems researchers engaged in the design and development of a prototype for an innovative IT artifact called the Shipping Information Pipeline, a kind of "internet" for shipping information. The instrumental aim is to enable information to cross organizational boundaries and national borders seamlessly within international shipping, which is a rather complex domain. The intellectual objective is to generate and evaluate the efficacy and effectiveness of design principles for inter-organizational information infrastructures in the international shipping domain that can have positive impacts on global trade and local economies.
Abstract:
Most studies on the environmental performance of buildings focus on energy demand and associated greenhouse gas emissions. They often neglect to consider the range of other resource demands and environmental impacts associated with buildings, including water. Studies that assess water use in buildings typically consider only operational water, excluding the embodied water in building materials and the water associated with the mobility of building occupants. A new framework is presented that quantifies water requirements at the building scale (i.e. the embodied and operational water of the building, as well as that of its maintenance and refurbishment), at the city scale (i.e. the embodied water of nearby infrastructures such as roads, gas distribution and others), and the transport-related indirect water use of building occupants. A case study house located in Melbourne, Australia, is analysed using the new framework. The results show that the embodied, operational and transport requirements are each nearly equally important. By integrating these three water requirements, the developed framework provides architects, building designers, planners and decision-makers with a powerful means to understand and effectively reduce the overall water use and associated environmental impacts of residential buildings.
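The framework essentially sums three flows over the building's service life: initial and recurrent embodied water, operational water, and the occupants' transport-related indirect water. A trivial accounting sketch follows; the figures are placeholders, not results from the Melbourne case study.

```python
def life_cycle_water(embodied_kL, recurrent_embodied_kL,
                     operational_kL_per_yr, transport_kL_per_yr,
                     service_life_yr=50):
    """Total and annualised life-cycle water demand of a dwelling:
    initial + recurrent embodied water plus operational and
    transport-related indirect water accrued over the service life."""
    total = (embodied_kL + recurrent_embodied_kL
             + (operational_kL_per_yr + transport_kL_per_yr) * service_life_yr)
    return total, total / service_life_yr

# Placeholder figures in kilolitres (kL).
total, per_year = life_cycle_water(embodied_kL=2000, recurrent_embodied_kL=800,
                                   operational_kL_per_yr=200, transport_kL_per_yr=150)
print(f"total: {total:.0f} kL over 50 years; about {per_year:.0f} kL per year")
```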