984 results for "large infrastructure"
Abstract:
Participatory Sensing combines the ubiquity of mobile phones with the sensing capabilities of Wireless Sensor Networks. It targets pervasive collection of information, e.g., temperature, traffic conditions, or health-related data. As users produce measurements from their mobile devices, voluntary participation becomes essential. However, a number of privacy concerns -- due to the personal information conveyed by data reports -- hinder large-scale deployment of participatory sensing applications. Prior work on privacy protection for participatory sensing has often relied on unrealistic assumptions and offered no provably-secure guarantees. The goal of this project is to introduce PEPSI: a Privacy-Enhanced Participatory Sensing Infrastructure. We explore realistic architectural assumptions and a minimal set of (formal) privacy requirements, aiming to protect the privacy of both data producers and consumers. We design a solution that attains provably-secure privacy guarantees at very low additional computational cost and almost no extra communication overhead.
Abstract:
To date, big data applications have focused on the store-and-process paradigm. In this paper we describe an initiative to deal with big data applications for continuous streams of events. In many emerging applications, the volume of data being streamed is so large that the traditional store-then-process paradigm is either not suitable or too inefficient. Moreover, soft real-time requirements might severely limit the engineering solutions. Many scenarios fit this description. In network security for cloud data centres, for instance, very high volumes of IP packets and events from sensors at firewalls, network switches, routers and servers need to be analyzed to detect attacks in minimal time, in order to limit the effect of the malicious activity on the IT infrastructure. Similarly, in the fraud department of a credit card company, payment requests need to be processed online, as quickly as possible, in order to provide meaningful results in real time. An ideal system would detect fraud during the authorization process, which lasts hundreds of milliseconds, and deny the payment authorization, minimizing the damage to the user and the credit card company.
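The online fraud-check scenario above can be sketched as a decision made per event, without storing the stream first. This is a minimal illustration, not the paper's system: the threshold, window length, and card identifiers are hypothetical.

```python
from collections import deque
from datetime import datetime, timedelta

# Hypothetical online check: deny authorization when a card issues more than
# MAX_REQUESTS payment requests inside a sliding WINDOW, deciding on each
# event as it arrives instead of storing the full stream for later analysis.
MAX_REQUESTS = 3
WINDOW = timedelta(seconds=60)

class SlidingWindowDetector:
    def __init__(self):
        self.events = {}  # card_id -> deque of request timestamps

    def authorize(self, card_id, ts):
        q = self.events.setdefault(card_id, deque())
        while q and ts - q[0] > WINDOW:
            q.popleft()                    # evict events outside the window
        q.append(ts)
        return len(q) <= MAX_REQUESTS      # deny once the rate is suspicious

d = SlidingWindowDetector()
t0 = datetime(2024, 1, 1)
decisions = [d.authorize("card-1", t0 + timedelta(seconds=s)) for s in (0, 10, 20, 30)]
print(decisions)  # [True, True, True, False]
```

The decision itself runs in constant time per event, which is what keeps the latency within an authorization budget of hundreds of milliseconds.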
Abstract:
The well-documented re-colonisation of the large French river basins of the Loire and Rhone by the European otter and beaver allowed the analysis of explanatory factors and threats to species movement in the river corridor. To what extent anthropogenic disturbance of the riparian zone influences corridor functioning is a central question in the understanding of ecological networks and the definition of restoration goals for river networks. The generalist or specialist nature of target species may determine their responses to habitat quality and barriers in the riparian corridor. Detailed datasets of land use, human stressors and hydro-morphological characteristics of river segments for the entire river basins allowed identification of the habitat requirements of the two species for the riparian zone. The identified critical factors were entered into a network analysis based on the ecological niche factor approach. Both species showed significant responses to riparian corridor quality (forest cover, channel straightening, and urbanisation and infrastructure in the riparian zone), so they may well serve as indicators of corridor functioning. The hypothesis that generalists are less sensitive to human disturbance was rejected, since the otter, as a generalist species, responded most strongly to hydro-morphological alterations and human presence in general. The beaver responded most strongly to the physical environment, as expected for this specialist species. The difference in responses between generalist and specialist species is clearly present, and the two species have a strong complementary indicator value. The interpretation of the network analysis outcomes stresses the need to estimate the ecological requirements of more species in the evaluation of riparian corridor functioning and in conservation planning.
Abstract:
The scalability of security event correlation has become a major concern for security analysts and IT administrators when considering complex IT infrastructures that need to handle gargantuan amounts of events or wide correlation window spans. The current correlation capabilities of Security Information and Event Management (SIEM) systems, based on a single node in centralized servers, have proved to be insufficient to process large event streams. This paper introduces a step forward in the current state of the art to address the aforementioned problems. The proposed model takes into account the two main aspects of this field: distributed correlation and query parallelization. We present a case study of a multiple-step attack on the Olympic Games IT infrastructure to illustrate the applicability of our approach.
Abstract:
Between 2003 and 2007, an urban network of road tunnels with a total constructed tube length of 45 km was built in the city of Madrid. This remarkable engineering work, known as the "Calle30 Project", comprised different tunnel typologies and ventilation systems. Due to the length of the tunnels and the impact of the work itself, the tunnels were endowed with a great variety of installations to provide the maximum levels of safety for both users and the infrastructure, including, in some parts of the tunnel, fixed fire-fighting systems based on water mist. Within this framework, a large-scale campaign of fire tests was planned to study different aspects of fire safety in the tunnels, including the interaction between the ventilation and extinction systems. In addition, these large-scale fire tests gave the fire brigades of the city of Madrid an opportunity to define operational procedures for fire fighting in tunnels and to evaluate the possibilities of fixed fire-fighting systems. The tests were carried out at the Center of Experimentation "San Pedro of Anes", which features a 600 m tunnel with a removable false ceiling for reproducing different ceiling heights and ventilation conditions (transverse and longitudinal). Interesting conclusions were obtained on the interaction of ventilation and water mist systems, but also on other aspects, including the performance of the water mist system in terms of reduction of gas temperatures and visibility conditions. This paper presents a description of the test campaign carried out and some preliminary results.
Abstract:
Thesis (Master, Computing) -- Queen's University, 2016-05-29 18:11:34.114
Abstract:
This research develops a methodology and model formulation which suggests locations for rapid chargers to assist infrastructure development and enable greater battery electric vehicle (BEV) usage. The model considers the likely travel patterns of BEVs and their subsequent charging demands across a large road network, where no prior candidate site information is required. Using a GIS-based methodology, polygons are constructed which represent the charging demand zones for particular routes across a real-world road network. The use of polygons allows the maximum number of charging combinations to be considered whilst limiting the input intensity needed for the model. Further polygons are added to represent deviation possibilities, meaning that placement of charge points away from the shortest path is possible, given a penalty function. The model is validated by assessing the expected demand at current rapid charging locations and comparing it to recorded empirical usage data. Results suggest that the developed model provides a good approximation to real-world observations and that, for the provision of charging, location matters. Because no prior candidate site information is required, locations are chosen based on the weighted overlay between several different routes where BEV journeys may be expected. In doing so, many locations, or types of locations, can be compared against one another and then analysed in relation to siting practicalities such as cost, land permission and infrastructure availability. Results show that efficient facility location can be achieved given numerous siting possibilities across a large road network. Slight improvements to the standard greedy adding technique are made by adding combination weightings which aim to reward important long-distance routes that require more than one charge to complete.
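The greedy adding technique with route weightings can be sketched as follows. This is a minimal illustration under assumed inputs: the demand zones, candidate sites, and route weights below are hypothetical, not the study's data, and the coverage sets stand in for the polygon overlay step.

```python
# Greedy adding for charger placement: at each step, pick the candidate site
# that covers the largest remaining weighted demand. Up-weighting a route
# models the combination weighting that rewards long-distance routes.
def greedy_adding(candidates, routes, k):
    """Pick k sites; each route carries a weight and the set of sites covering it."""
    chosen = set()
    for _ in range(k):
        def gain(site):
            # weight of routes this site would newly cover
            return sum(r["weight"] for r in routes
                       if site in r["covered_by"] and not (chosen & r["covered_by"]))
        best = max((s for s in candidates if s not in chosen), key=gain)
        chosen.add(best)
    return chosen

routes = [
    {"weight": 5.0, "covered_by": {"A", "B"}},   # long-distance route, up-weighted
    {"weight": 1.0, "covered_by": {"B"}},
    {"weight": 2.0, "covered_by": {"C"}},
]
print(sorted(greedy_adding({"A", "B", "C"}, routes, 2)))  # ['B', 'C']
```

Site B is picked first because it covers both the heavy route and a local one; C is picked second since A's route is already served.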
Abstract:
To benefit from the advantages that Cloud Computing brings to the IT industry, management policies must be implemented as part of the operation of the Cloud. For example, policies can be specified for the management of energy to reduce the cost of running the IT system, or for security while handling the privacy issues of users. As cloud platforms are large, manual enforcement of policies is not scalable; hence, autonomic approaches to management policies have recently received considerable attention. These approaches allow the specification of rules that are executed via rule engines. The process of rule creation starts with the interpretation of the policies drafted by high-rank managers. Technical IT staff then translate such policies into operational activities to implement them. This process can start from a textual declarative description and, after numerous steps, terminate in a set of rules to be executed on a rule engine. To simplify these steps and to bridge the considerable gap between declarative policies and executable rules, we propose a domain-specific language called CloudMPL. We also design a method of automated transformation of the rules captured in CloudMPL to the popular rule engine Drools. As policies change over time, code generation will reduce the time required for their implementation. In addition, using a declarative language for writing the specifications is expected to make the authoring of rules easier. We demonstrate the use of the CloudMPL language on a running example extracted from an energy consumption management case study.
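The policy-to-rule code generation step might look like the sketch below. The abstract does not give CloudMPL's concrete syntax, so the declarative policy is modelled here as a plain dictionary, and the emitted text only follows the generic Drools DRL skeleton (rule / when / then / end); the fact and action names are hypothetical.

```python
# Hypothetical sketch of a CloudMPL -> Drools generation step: a declarative
# policy (here a dictionary) is turned into a Drools-style rule string.
def to_drools(policy):
    return (
        f'rule "{policy["name"]}"\n'
        f'when\n'
        f'    $m : Metric( name == "{policy["metric"]}", value > {policy["threshold"]} )\n'
        f'then\n'
        f'    insert( new Action("{policy["action"]}") );\n'
        f'end'
    )

policy = {"name": "ReduceEnergyCost", "metric": "energy",
          "threshold": 80, "action": "consolidateVMs"}
print(to_drools(policy))
```

Generating the rule text mechanically is what lets a policy change propagate to the rule engine without hand-translating it again.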
Abstract:
Internet Protocol Television (IPTV) is a system in which a digital television service is delivered using Internet Protocol over a network infrastructure. There is considerable confusion and concern about IPTV, since two different technologies have to be melded together to provide end customers with something better than conventional television. In this research, the functional architecture of the IPTV system was investigated. A Very Large Scale Integration (VLSI) based system for the streaming server controller was designed, and different ways of hosting a web server that can be used to send control signals to the streaming server controller were studied. The web server accepts inputs from the keyboard and FPGA board switches and, depending on the preset configuration, opens a selected web page and also sends the control signals to the streaming server controller. It was observed that the applications run faster on the PowerPC since it is embedded into the FPGA. The commercial market and global deployment of IPTV are also discussed.
Abstract:
Over the past few years, logging has evolved from simple printf statements to more complex and widely used logging libraries. Today, logging information is used to support various development activities such as fixing bugs, analyzing the results of load tests, monitoring performance, and transferring knowledge. Recent research has examined how to improve logging practices by informing developers what to log and where to log. Furthermore, the strong dependence on logging has led to the development of logging libraries that have reduced the intricacies of logging, which has resulted in an abundance of log information. Two recent challenges have emerged as modern software systems start to treat logging as a core aspect of their software. In particular, 1) infrastructural challenges have emerged due to the plethora of logging libraries available today, and 2) processing challenges have emerged due to the large number of log processing tools that ingest logs and produce useful information from them. In this thesis, we explore these two challenges. We first explore the infrastructural challenges that arise due to the plethora of logging libraries available today. As systems evolve, their logging infrastructure has to evolve (commonly this is done by migrating to new logging libraries). We explore logging library migrations within Apache Software Foundation (ASF) projects. We find that close to 14% of the projects within the ASF migrate their logging libraries at least once. For processing challenges, we explore the different factors which can affect the likelihood of a logging statement changing in the future in four open source systems, namely ActiveMQ, Camel, Cloudstack and Liferay. Such changes are likely to negatively impact the log processing tools that must be updated to accommodate them. We find that 20%-45% of the logging statements within the four systems are changed at least once.
We construct random forest classifiers and Cox models to determine the likelihood of both just-introduced and long-lived logging statements changing in the future. We find that file ownership, developer experience, log density and SLOC are important factors in determining the stability of logging statements.
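The classifier side of this setup can be sketched as below. This is a minimal illustration, not the thesis's model: the feature values and labels are synthetic, and only the random forest (not the Cox model) is shown.

```python
# Sketch of a random forest predicting whether a logging statement will
# change, using features like those named above: file ownership, developer
# experience, log density, and SLOC. The rows are synthetic examples.
from sklearn.ensemble import RandomForestClassifier

# Each row: [file_ownership, developer_experience, log_density, sloc]
X = [
    [0.9, 120, 0.02,  400],
    [0.2,   3, 0.10, 2500],
    [0.8,  80, 0.03,  600],
    [0.1,   1, 0.12, 3000],
]
y = [0, 1, 0, 1]  # 1 = logging statement was later changed

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[0.15, 2, 0.11, 2800]]))  # resembles the "changed" rows
```

In practice the feature importances of such a model are what support the claim that ownership, experience, log density and SLOC drive stability.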
Abstract:
We present Dithen, a novel computation-as-a-service (CaaS) cloud platform specifically tailored to the parallel execution of large-scale multimedia tasks. Dithen handles the upload/download of both multimedia data and executable items, the assignment of compute units to multimedia workloads, and the reactive control of the available compute units to minimize the cloud infrastructure cost under deadline-abiding execution. Dithen combines three key properties: (i) the reactive assignment of individual multimedia tasks to available computing units according to availability and predetermined time-to-completion constraints; (ii) optimal resource estimation based on Kalman-filter estimates; (iii) the use of additive-increase multiplicative-decrease (AIMD) algorithms (well known as the resource management mechanism of the Transmission Control Protocol) for the control of the number of units servicing workloads. The deployment of Dithen over Amazon EC2 spot instances is shown to be capable of processing more than 80,000 video transcoding, face detection and image processing tasks (equivalent to the processing of more than 116 GB of compressed data) for less than $1 in billing cost from EC2. Moreover, the proposed AIMD-based control mechanism, in conjunction with the Kalman estimates, is shown to provide for more than 27% reduction in EC2 spot instance cost against methods based on reactive resource estimation. Finally, Dithen is shown to offer a 38% to 500% reduction of the billing cost against the current state-of-the-art in CaaS platforms on Amazon EC2 (Amazon Lambda and Amazon Autoscale). A baseline version of Dithen is currently available at dithen.com.
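The AIMD control of the compute-unit count can be sketched in a few lines. This is a generic illustration of the AIMD principle under assumed parameters (the increase step, decrease factor, and backlog signal are hypothetical, not Dithen's tuning):

```python
# AIMD over compute units: add capacity linearly while the workload backlog
# grows, and cut capacity multiplicatively once it stops growing -- the same
# probe-and-back-off shape TCP uses for its congestion window.
def aimd_step(units, backlog_growing, alpha=1, beta=0.5, floor=1):
    """One control step: additive increase or multiplicative decrease."""
    if backlog_growing:
        return units + alpha                 # add one unit of capacity
    return max(floor, int(units * beta))     # release half the units

units = 4
trace = []
for growing in (True, True, True, False, False):
    units = aimd_step(units, growing)
    trace.append(units)
print(trace)  # [5, 6, 7, 3, 1]
```

The multiplicative decrease is what lets the platform shed paid spot instances quickly once the workload drains, which is where the cost savings come from.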
Abstract:
We propose a very long baseline atom interferometer test of Einstein's equivalence principle (EEP) with ytterbium and rubidium extending over 10 m of free fall. In view of existing parametrizations of EEP violations, this choice of test masses significantly broadens the scope of atom interferometric EEP tests with respect to other performed or proposed tests by comparing two elements with high atomic numbers. In a first step, our experimental scheme will allow us to reach an accuracy in the Eötvös ratio of 7 × 10^-13. This achievement will constrain violation scenarios beyond our present knowledge and will represent an important milestone for exploring a variety of schemes for further improvements of the tests, as outlined in the paper. We discuss the technical realisation in the new infrastructure of the Hanover Institute of Technology (HITec) and give a short overview of the requirements needed to reach this accuracy. The experiment will demonstrate a variety of techniques which will be employed in future tests of EEP, high-accuracy gravimetry and gravity gradiometry. It includes the operation of a force-sensitive atom interferometer with an alkaline-earth-like element in free fall, beam splitting over macroscopic distances, and novel source concepts.
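For context, the Eötvös ratio quoted above compares the free-fall accelerations of the two test species; a standard form of the definition, written here for the two elements of this proposal, is:

```latex
\eta = 2\,\frac{a_{\mathrm{Yb}} - a_{\mathrm{Rb}}}{a_{\mathrm{Yb}} + a_{\mathrm{Rb}}}
```

so that $\eta = 0$ corresponds to universality of free fall, and the target accuracy of $7 \times 10^{-13}$ bounds any differential acceleration between ytterbium and rubidium at that fractional level.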
Abstract:
Members of the General Assembly asked the Legislative Audit Council to review the operations of the South Carolina Transportation Infrastructure Bank, a state agency that awards grants and loans to local and state agencies primarily for large transportation construction projects. The primary audit objectives were to review compliance with state law and policies regarding: the awarding of grants and loans for transportation construction projects; the use of project revenues and whether funds dedicated to specific projects have been commingled with funds dedicated to other projects; proper accounting and reporting procedures; the process for repayment of revenue bonds; the hiring of consultants, attorneys, and bond credit rating agencies; and ethics.
Abstract:
Forested areas within cities host a large number of species, responsible for many ecosystem services in urban areas. The biodiversity in these areas is influenced by human disturbances such as atmospheric pollution and the urban heat island effect. To ameliorate the effects of these factors, an increase in urban green areas is often considered sufficient. However, this approach assumes that all types of green cover have the same importance for species. Our aim was to show that not all forested green areas are equally important for species, and that, based on a multi-taxa and functional diversity approach, it is possible to value green infrastructure in urban environments. After evaluating the diversity of lichens, butterflies and other arthropods, birds and mammals in 31 Mediterranean urban forests in south-west Europe (Almada, Portugal), bird and lichen functional groups responsive to urbanization were found. A community shift (tolerant species replacing sensitive ones) along the urbanization gradient was found, and this must be considered when using these groups as indicators of the effect of urbanization. Bird and lichen functional groups were then analyzed together with the characteristics of the forests and their surroundings. Our results showed that, contrary to previous assumptions, vegetation density and, more importantly, the amount of urban area around the forest (the matrix) are more important for biodiversity than forest quantity alone. This indicated that not all types of forested green areas have the same importance for biodiversity. An index of forest functional diversity was then calculated for all sampled forests of the area. This could help decision-makers improve the management of urban green infrastructure with the goal of increasing functionality and ultimately ecosystem services in urban areas.
Abstract:
Visualisation provides good support for software analysis. It copes with the intangible nature of software by providing concrete representations of it. By reducing the complexity of software, visualisations are especially useful when dealing with large amounts of code. One domain that usually deals with large amounts of source code is empirical analysis. Although there are many tools for analysis and visualisation, they do not cope well with software corpora. In this paper we present Explora, an infrastructure that is specifically targeted at visualising corpora. We report on early results from conducting a sample analysis on Smalltalk and Java corpora.