893 results for OpenFlow, SDN, Software-Defined Networking, Cloud
Abstract:
Wireless sensor networks (WSNs) have shown wide applicability to many fields, including monitoring of environmental, civil, and industrial settings. WSNs, however, are resource constrained by many competing factors that span their hardware, software, and networking. One of the central resource constraints is the charge consumption of WSN nodes. With finite energy supplies, low charge consumption is needed to ensure long lifetimes and the success of WSNs. This thesis details the design of a power system to support long-term operation of WSNs. The power system’s development occurred in parallel with a custom WSN from the Queen’s MEMS Lab (QML-WSN), with the goal of supporting a 1+ year lifetime without sacrificing functionality. The final power system design utilizes a TPS62740 DC-DC converter with AA alkaline batteries to efficiently supply the nodes while providing battery monitoring functionality and an expansion slot for future development. Testing tools for measuring current draw and charge consumption were created, along with analysis and processing software. Through their use, the charge consumption of the power system was drastically lowered and issues in the QML-WSN were identified and resolved, including improper shutdown of the accelerometers and an incorrect microcontroller unit (MCU) power pin connection. Controlled current profiling revealed unexpected node behaviour and detailed current-voltage relationships. These relationships were used with a lifetime projection model to estimate a lifetime between 521 and 551 days, depending on the mode of operation. The power system and QML-WSN were tested over a long-term trial lasting 272+ days in an industrial testbed monitoring an air compressor pump. Environmental factors were found to influence node behaviour, leading to increased charge consumption, while a node in an office setting was still operating at the conclusion of the trial. This agrees with the lifetime projection and gives a strong indication that a 1+ year lifetime is achievable. Additionally, a lightweight charge consumption model was developed which allows the charge consumption of nodes in a distributed WSN to be monitored. This model was tested in a laboratory setting, demonstrating over 95% accuracy for high packet reception rate WSNs across varying data rates, battery supply capacities, and runtimes up to full battery depletion.
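To make the lifetime projection concrete, the sketch below divides an assumed usable battery capacity by a duty-cycle-weighted average current; all figures (capacity, currents, converter efficiency) are illustrative assumptions, not measurements from the thesis.

```python
# Minimal sketch of a battery-lifetime projection for a duty-cycled WSN node.
# All numbers below are illustrative assumptions, not values from the thesis.

def projected_lifetime_days(capacity_mah, active_ma, sleep_ma, duty_cycle, converter_eff=0.9):
    """Estimate node lifetime from average current draw.

    capacity_mah  : usable battery capacity in mAh (assumed)
    active_ma     : current while sampling/transmitting (assumed)
    sleep_ma      : current while sleeping (assumed)
    duty_cycle    : fraction of time spent active (0..1)
    converter_eff : efficiency of the DC-DC converter (assumed)
    """
    avg_ma = duty_cycle * active_ma + (1.0 - duty_cycle) * sleep_ma
    effective_capacity = capacity_mah * converter_eff
    hours = effective_capacity / avg_ma
    return hours / 24.0

if __name__ == "__main__":
    # e.g. a 2000 mAh AA pack, 15 mA active, 0.02 mA sleep, 1% duty cycle
    print(f"{projected_lifetime_days(2000, 15.0, 0.02, 0.01):.0f} days")
```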
Abstract:
The ability to use Software Defined Radio (SDR) in civilian mobile applications will make it possible for the next generation of mobile devices to handle multi-standard personal wireless devices and ubiquitous wireless devices. The original military standard created many beneficial characteristics for SDR, but resulted in a number of disadvantages as well. Many challenges in commercializing SDR are still of interest to the software radio research community. Four main issues that have already been addressed are performance, size, weight, and power. This investigation presents an in-depth study of SDR inter-component communications in terms of total link delay as a function of the number of components and packet sizes in systems based on the Software Communication Architecture (SCA). The study is based on the investigation of a controlled-environment platform. Results suggest that the total link delay does not increase linearly with the number of components and the packet sizes. A closed-form expression of the delay was modeled using a logistic function in terms of the number of components and packet sizes. The model performed well when the number of components was large. For mobile applications, energy consumption has become one of the most crucial limitations. SDR will not only provide the flexibility of multi-protocol support, but this desirable feature will also bring a choice of mobile protocols. Having such a variety of choices available creates the problem of selecting the most appropriate protocol for transmission. An investigation into a real-time algorithm to optimize energy efficiency was also performed. Communication energy models, including switching estimation, were used to develop a waveform selection algorithm. Simulations were performed to validate the concept.
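As a rough illustration of the kind of saturating model described above, the sketch below evaluates a logistic function of the number of components and packet size; its functional form and coefficients are assumptions for illustration, not the fitted expression from the dissertation.

```python
# Sketch of a logistic-style delay model in the number of components and packet size.
# The combination of predictors and all coefficients are illustrative assumptions.
import math

def total_link_delay_ms(n_components, packet_bytes, d_max=50.0, k=0.15, x0=20.0):
    """Logistic saturation of total link delay.

    d_max : asymptotic delay in ms (assumed)
    k     : steepness of the logistic curve (assumed)
    x0    : midpoint of the curve (assumed)
    """
    # Combine the two predictors into a single load score (assumed weighting).
    x = n_components + packet_bytes / 256.0
    return d_max / (1.0 + math.exp(-k * (x - x0)))

if __name__ == "__main__":
    for n in (2, 8, 32, 128):
        print(n, round(total_link_delay_ms(n, 1024), 2))
```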
Abstract:
Orthogonal Frequency-Division Multiplexing (OFDM) has proved to be a promising technology that enables transmission at higher data rates. Multicarrier Code-Division Multiple Access (MC-CDMA) is a transmission technique that combines the advantages of both OFDM and Code-Division Multiple Access (CDMA), so as to allow high transmission rates over severely time-dispersive multi-path channels without the need for a complex receiver implementation. MC-CDMA also exploits frequency diversity via the different subcarriers, and therefore allows high-code-rate systems to achieve good Bit Error Rate (BER) performance. Furthermore, spreading in the frequency domain makes the time synchronization requirement much less stringent than in traditional direct-sequence CDMA schemes. There are still some problems with MC-CDMA. One is the high Peak-to-Average Power Ratio (PAPR) of the transmit signal. High PAPR leads to nonlinear distortion in the amplifier and results in inter-carrier self-interference plus out-of-band radiation. Suppressing Multiple Access Interference (MAI) is another crucial problem in MC-CDMA systems. Imperfect cross-correlation characteristics of the spreading codes and multipath fading destroy the orthogonality among users and thus cause MAI, which produces serious BER degradation in the system. Moreover, in the uplink the signals received at a base station are always asynchronous. This also destroys the orthogonality among users and hence generates MAI, which degrades system performance. Besides these two problems, interference should always be considered seriously for any communication system. In this dissertation, we design a novel MC-CDMA system with low PAPR and mitigated MAI. New semi-blind channel estimation and multi-user data detection based on Parallel Interference Cancellation (PIC) are applied in the system. Low-Density Parity-Check (LDPC) codes are also introduced into the system to improve its performance. Different interference models are analyzed in multi-carrier communication systems, and effective interference suppression for MC-CDMA systems is then employed. The experimental results indicate that our system not only significantly reduces the PAPR and MAI but also effectively suppresses outside interference with low complexity. Finally, we present a practical cognitive application of the proposed system on a software defined radio platform.
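A minimal sketch of how the PAPR of a multicarrier symbol can be measured is given below; the subcarrier count, modulation, and spreading code are toy assumptions, not the system designed in the dissertation.

```python
# Minimal sketch: PAPR of a multicarrier (OFDM/MC-CDMA-style) symbol.
# Subcarrier count, modulation, and spreading code are illustrative assumptions.
import numpy as np

def papr_db(time_signal):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(time_signal) ** 2
    return 10.0 * np.log10(power.max() / power.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_subcarriers = 64
    # Random QPSK data spread with a +/-1 code (toy stand-in for MC-CDMA spreading).
    data = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], n_subcarriers)
    code = rng.choice([1.0, -1.0], n_subcarriers)
    symbol = np.fft.ifft(data * code)   # frequency-domain chips -> time-domain symbol
    print(f"PAPR = {papr_db(symbol):.2f} dB")
```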
Abstract:
A Software Defined Radio (SDR) system is a programmable radio system that delegates a large part of the processing classically done in hardware to software running on a computer. Two immediate advantages of an SDR device over a traditional radio device are the lower cost of the hardware (less hardware, and less complex) and the ease of modifying the specific functionality of the radio (software implementations, as simple as programming any desired protocol). Because these products have become cheaper and are easy to program and to interconnect with a personal computer, the world of radio communication has become considerably more accessible. When a domain is little known or hard to access, it is typical for systems to be secure not by design but by obscurity. If, in a short period of time, accessibility to that domain increases considerably, systems that are secure through obscurity are put at risk. This work studies whether, as the radio communication domain becomes more accessible thanks to the availability of SDR devices, certain systems are indeed exposed. A survey of the state of the art and a practical study of public systems at the local level will allow us to understand to what extent real risks exist. If a vulnerability is found in a local system, it will be documented and a possible way to exploit it and to fix it will be proposed.
Abstract:
To exploit the full potential of radio measurements of cosmic-ray air showers at MHz frequencies, a detector timing synchronization within 1 ns is needed. Large distributed radio detector arrays such as the Auger Engineering Radio Array (AERA) rely on timing via the Global Positioning System (GPS) for the synchronization of individual detector station clocks. Unfortunately, GPS timing is expected to have an accuracy no better than about 5 ns. In practice, in particular in AERA, the GPS clocks exhibit drifts on the order of tens of ns. We developed a technique to correct for the GPS drifts, and an independent method is used to cross-check that indeed we reach a nanosecond-scale timing accuracy by this correction. First, we operate a "beacon transmitter" which emits defined sine waves detected by AERA antennas recorded within the physics data. The relative phasing of these sine waves can be used to correct for GPS clock drifts. In addition to this, we observe radio pulses emitted by commercial airplanes, the position of which we determine in real time from Automatic Dependent Surveillance Broadcasts intercepted with a software-defined radio. From the known source location and the measured arrival times of the pulses we determine relative timing offsets between radio detector stations. We demonstrate with a combined analysis that the two methods give a consistent timing calibration with an accuracy of 2 ns or better. Consequently, the beacon method alone can be used in the future to continuously determine and correct for GPS clock drifts in each individual event measured by AERA.
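The sketch below illustrates the beacon idea on synthetic traces: the relative phase of a narrowband tone recorded at two stations is converted into a timing offset. The sampling rate and beacon frequency are illustrative assumptions, not AERA parameters.

```python
# Sketch: estimating a relative timing offset between two detector stations from the
# phase of a beacon sine wave present in both recorded traces. The sampling rate,
# beacon frequency, and trace length below are illustrative assumptions.
import numpy as np

def beacon_timing_offset_ns(trace_a, trace_b, fs_hz, beacon_hz):
    """Lag of trace_b relative to trace_a (ns), from the phase of the beacon tone."""
    n = len(trace_a)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs_hz)
    k = int(np.argmin(np.abs(freqs - beacon_hz)))      # FFT bin closest to the beacon
    phase_a = np.angle(np.fft.rfft(trace_a)[k])
    phase_b = np.angle(np.fft.rfft(trace_b)[k])
    dphi = np.angle(np.exp(1j * (phase_b - phase_a)))   # wrap into (-pi, pi]
    return -dphi / (2.0 * np.pi * beacon_hz) * 1e9      # a delay shows up as negative phase

if __name__ == "__main__":
    fs, f_beacon, true_lag_s = 1e9, 50e6, 3e-9          # beacon sits exactly on an FFT bin
    t = np.arange(1000) / fs
    a = np.sin(2 * np.pi * f_beacon * t)
    b = np.sin(2 * np.pi * f_beacon * (t - true_lag_s)) # station B lags station A by 3 ns
    print(f"estimated offset: {beacon_timing_offset_ns(a, b, fs, f_beacon):.2f} ns")
```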
Abstract:
Content Distribution Networks (CDNs) are mandatory components of modern web architectures, with plenty of vendors offering their services. Despite the maturity of the field, new paradigms and architecture models are still being developed in this area. Cloud Computing, on the other hand, is a more recent concept which has expanded extremely quickly, with new services being regularly added to cloud management software suites such as OpenStack. The main contribution of this paper is the architecture and the development of an open source CDN that can be provisioned in an on-demand, pay-as-you-go model, thereby enabling the CDN as a Service (CDNaaS) paradigm. We describe our experience with the integration of the CDNaaS framework in a cloud environment, as a service for enterprise users. We emphasize the flexibility and elasticity of such a model, with each CDN instance being delivered on demand and associated with personalized caching policies as well as an optimized choice of Points of Presence based on the exact requirements of an enterprise customer. Our development is based on the framework developed in the Mobile Cloud Networking (MCN) EU FP7 project, which offers its enterprise users a common framework to instantiate and control services. CDNaaS is one of the core support components in this project, as it is tasked with delivering different types of multimedia content to several thousand geographically distributed users. It integrates seamlessly into the MCN service life cycle and as such enjoys all the benefits of a common design environment, allowing for improved interoperability with the rest of the services within the MCN ecosystem.
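As a hedged illustration of what an on-demand, per-tenant CDN instance request could look like in such a model, consider the sketch below; the field names, values, and provision() stub are hypothetical and do not represent the actual MCN or CDNaaS API.

```python
# Hypothetical sketch of an on-demand CDN instance request in a CDNaaS-style model.
# Field names, values, and the provision() stub are illustrative, not the real MCN API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CachingPolicy:
    ttl_seconds: int = 3600          # how long objects stay cached
    max_object_mb: int = 512         # largest cacheable object

@dataclass
class CDNInstanceRequest:
    tenant: str                                                    # enterprise customer id
    points_of_presence: List[str] = field(default_factory=list)    # requested PoP regions
    caching: CachingPolicy = field(default_factory=CachingPolicy)
    pay_as_you_go: bool = True                                     # billed per actual usage

def provision(request: CDNInstanceRequest) -> str:
    """Stand-in for the orchestrator call that instantiates a CDN instance on demand."""
    return f"cdn-instance-{request.tenant}-{len(request.points_of_presence)}pops"

if __name__ == "__main__":
    req = CDNInstanceRequest(tenant="acme", points_of_presence=["eu-west", "us-east"])
    print(provision(req))
```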
Abstract:
Software as a Service (SaaS) has recently been gaining more and more attention from software users and providers. This raises many new challenges for SaaS providers in delivering better SaaS offerings that suit everyone's needs at minimum cost. One of the emerging approaches to tackling this challenge is to deliver the SaaS as a composite SaaS. Delivering it in this way has a number of benefits, including flexible offering of SaaS functions and decreased subscription costs for users. However, this approach also introduces new problems for SaaS resource management in a Cloud data centre. We present the problem of composite SaaS resource management in a Cloud data centre, specifically its initial placement and resource optimization problems, aiming to improve SaaS performance in terms of execution time as well as to minimize resource usage. Our approach differs from the existing literature because it addresses the problems resulting from composite SaaS characteristics, focusing on the SaaS requirements, constraints, and interdependencies. The problems are tackled using evolutionary algorithms. Experimental results demonstrate the efficiency and scalability of the proposed algorithms.
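The sketch below shows, under toy assumptions, the flavour of an evolutionary placement search for composite-SaaS components; the cost function, problem sizes, and GA parameters are illustrative and not the formulation used in the paper.

```python
# Toy sketch of an evolutionary placement of composite-SaaS components onto servers.
# Cost function, problem sizes, and GA parameters are illustrative assumptions.
import random

N_COMPONENTS, N_SERVERS = 8, 3
random.seed(1)
CPU_DEMAND = [random.randint(1, 4) for _ in range(N_COMPONENTS)]   # per-component demand
CPU_CAPACITY = [8, 8, 8]                                           # per-server capacity

def cost(placement):
    """Penalize overloaded servers and components of the composite split across servers."""
    load = [0] * N_SERVERS
    for comp, srv in enumerate(placement):
        load[srv] += CPU_DEMAND[comp]
    overload = sum(max(0, l - c) for l, c in zip(load, CPU_CAPACITY))
    spread = len(set(placement)) - 1          # crude proxy for inter-server communication
    return 10 * overload + spread

def evolve(pop_size=30, generations=100, mutation_rate=0.2):
    pop = [[random.randrange(N_SERVERS) for _ in range(N_COMPONENTS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]                   # keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_COMPONENTS)        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:            # random reassignment mutation
                child[random.randrange(N_COMPONENTS)] = random.randrange(N_SERVERS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

if __name__ == "__main__":
    best = evolve()
    print("placement:", best, "cost:", cost(best))
```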
Abstract:
The mobile cloud computing paradigm can offer relevant and useful services to the users of smart mobile devices. Such public services already exist on the web and in cloud deployments, implementing common web service standards. However, these services are described by mark-up languages, such as XML, that cannot be comprehended by non-specialists. Furthermore, the lack of common interfaces for related services makes discovery and consumption difficult for both users and software. The problem of service description, discovery, and consumption for the mobile cloud must be addressed to allow users to benefit from these services on mobile devices. This paper introduces our work on a mobile cloud service discovery solution, which is utilised by our mobile cloud middleware, Context Aware Mobile Cloud Services (CAMCS). The aim of our approach is to remove complex mark-up languages from the description and discovery process. By means of the Cloud Personal Assistant (CPA) assigned to each user of CAMCS, relevant mobile cloud services can be discovered and consumed easily by the end user from the mobile device. We present the discovery process, the architecture of our own service registry, and the service description structure. CAMCS allows services to be used from the mobile device through a user's CPA, by means of user-defined tasks. We present the task model of the CPA enabled by our solution, including automatic tasks, which can perform work for the user without an explicit request.
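A minimal, hypothetical sketch of a markup-free registry lookup and CPA-style task invocation is given below; the registry contents, field names, and functions are illustrative and not the CAMCS implementation.

```python
# Minimal sketch of a markup-free service registry lookup of the kind such middleware
# might use; the registry contents, field names, and stubs are hypothetical.
from typing import Dict, List

# In-memory stand-in for a service registry with plain-text descriptions (no XML).
REGISTRY: List[Dict[str, str]] = [
    {"name": "weather", "category": "information", "endpoint": "https://example.org/weather"},
    {"name": "translate", "category": "language", "endpoint": "https://example.org/translate"},
]

def discover(category: str) -> List[Dict[str, str]]:
    """Return service descriptions matching a plain-text category."""
    return [s for s in REGISTRY if s["category"] == category]

def run_task(service: Dict[str, str], task: str) -> str:
    """Stand-in for a personal-assistant component consuming a discovered service."""
    return f"assistant invoking {service['endpoint']} for task '{task}'"

if __name__ == "__main__":
    for svc in discover("information"):
        print(run_task(svc, "get tomorrow's forecast"))
```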
Abstract:
This work proposes a solution comprising a cloud-based automatic speech recognition system. With this approach there is no need for a recognizer running on the client machine itself, since it is available over the Internet. Besides cloud-based automatic speech recognition, another focus of this work is high availability. This topic matters because the server environment where the cloud recognition is meant to run cannot become unavailable to the user. Of the various aspects that require robustness, such as the Internet connection itself, the scope of this work was defined as the free software that allows companies to increase the availability of their services. Among the results achieved, and for the simulated conditions, the cloud speech recognizer developed by the group reached a performance close to that of Google.
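As a hedged sketch of the failover idea behind such a highly available deployment, the code below has a client try a primary recognition endpoint and fall back to a secondary one; the host names and the recognize() stub are hypothetical, not the project's actual services.

```python
# Hedged sketch of client-side failover for a highly available cloud recognizer.
# Host names and the recognize() stub are hypothetical stand-ins.
from typing import List, Optional

ENDPOINTS: List[str] = ["https://asr-primary.example", "https://asr-backup.example"]

def recognize(endpoint: str, audio: bytes) -> Optional[str]:
    """Stand-in for sending audio to a cloud speech recognizer; returns None on failure."""
    if "primary" in endpoint:
        return None                     # simulate the primary being unavailable
    return "transcript of the audio"    # the backup answers instead

def recognize_with_failover(audio: bytes) -> str:
    for endpoint in ENDPOINTS:          # try each endpoint in order until one answers
        result = recognize(endpoint, audio)
        if result is not None:
            return result
    raise RuntimeError("all recognition endpoints unavailable")

if __name__ == "__main__":
    print(recognize_with_failover(b"\x00\x01"))
```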
Abstract:
Cloud computing is probably the most debated topic in the Information and Communication Technology (ICT) world today. The spread of this new way of delivering IT services is the evolution of a set of technologies that are revolutionizing the way organizations build their computing infrastructures. The advantages of using cloud computing infrastructures include, for example, greater control over services, over the cost structure, and over the assets employed. Costs are proportional to the actual use of the services (pay-per-use), thus avoiding waste and making the sourcing system more efficient. Several companies have already started trying out some cloud services, and many others are considering starting down a similar path. The first organization to provide a cloud computing platform was Amazon, with its Elastic Compute Cloud (EC2). In July 2010 OpenStack was born, an open-source project created by merging code developed by the US government agency NASA [10] and by the US hosting company Rackspace. The resulting software performs the same functions as Amazon's; unlike it, however, it was released under the Apache license, so there are no restrictions on its use or implementation. Today the OpenStack project counts numerous partner companies such as Dell, HP, IBM, Cisco, and Microsoft. The objective of this work is to understand and analyze how the OpenStack software works. The main goal is to become familiar with the different components it consists of and to understand how they interact with each other, in order to build Infrastructure as a Service (IaaS) cloud infrastructures. The reader will find the topics organized in the following chapters. The first chapter introduces the definition of cloud computing, discusses its main characteristics, and then describes the different service and deployment models, outlining the resulting advantages and disadvantages. The second chapter deals with one of the technologies used to build cloud computing infrastructures, virtualization, covering its various forms and types. The third chapter analyzes and describes in detail how the OpenStack project works: for each software component, the architecture is illustrated, accompanied by diagrams, together with the related mechanism. The fourth chapter covers the installation and configuration of the software, along with some tests carried out on the machine on which the software was installed. Finally, the fifth chapter presents the conclusions, with considerations on the objectives achieved and on the characteristics of the software under examination.
Abstract:
Design and development of a web-based management software application on a cloud platform.
Abstract:
This thesis aims to assess similarities and mismatches between the outputs of two independent methods for cloud cover quantification and classification that rest on quite different physical bases. One is the SAFNWC software package, designed to process radiance data acquired by the SEVIRI sensor in the VIS/IR. The other is the MWCC algorithm, which uses the brightness temperatures acquired by the AMSU-B and MHS sensors in their channels centred on the MW water vapour absorption band. In a first stage, their cloud detection capability was tested by comparing the cloud masks they produced. These showed good agreement between the two methods, although some critical situations stand out: the MWCC fails to reveal clouds which, according to SAFNWC, are fractional, cirrus, very low, or high opaque clouds. In the second stage of the inter-comparison, the pixels classified as cloudy according to both software packages were compared. The overall observed tendency of the MWCC method is an overestimation of the lower cloud classes. Conversely, the higher the cloud top, the more often the MWCC misses a cloud portion that is detected by the SAFNWC tool. This also emerges from a series of tests carried out using the cloud top height information to evaluate the height ranges in which each MWCC category is defined. Therefore, although the two methods are intended to provide the same kind of information, in reality they return quite different details on the same atmospheric column. The SAFNWC retrieval, being very sensitive to the top temperature of a cloud, returns the actual level reached by it. The MWCC, by exploiting the penetration capability of microwaves, is able to give information about levels located more deeply within the atmospheric column.
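The sketch below shows one simple way such a cloud-mask inter-comparison can be quantified, using synthetic boolean masks in place of the SAFNWC and MWCC outputs; the statistics are generic agreement counts, not the exact methodology of the thesis.

```python
# Sketch: simple agreement statistics between two binary cloud masks on a common grid.
# The masks here are synthetic arrays; in the thesis they come from SAFNWC and MWCC output.
import numpy as np

def cloud_mask_agreement(mask_a, mask_b):
    """Return contingency counts and fractional agreement for two boolean cloud masks."""
    both_cloudy = np.logical_and(mask_a, mask_b).sum()
    only_a = np.logical_and(mask_a, ~mask_b).sum()
    only_b = np.logical_and(~mask_a, mask_b).sum()
    both_clear = np.logical_and(~mask_a, ~mask_b).sum()
    agreement = (both_cloudy + both_clear) / mask_a.size
    return {"both_cloudy": int(both_cloudy), "only_a": int(only_a),
            "only_b": int(only_b), "both_clear": int(both_clear),
            "agreement": float(agreement)}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    safnwc_like = rng.random((100, 100)) > 0.4     # synthetic stand-in for the SAFNWC mask
    mwcc_like = rng.random((100, 100)) > 0.5       # synthetic stand-in for the MWCC mask
    print(cloud_mask_agreement(safnwc_like, mwcc_like))
```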