813 results for wireless networks user-centric networking
Abstract:
Recent advances in electronic and computer technologies have led to the widespread deployment of wireless sensor networks (WSNs). WSNs have a wide range of applications, including military sensing and tracking, environment monitoring, and smart environments. Many WSNs carry out mission-critical tasks, such as military applications, so security issues in WSNs remain at the forefront of research. Compared with other wireless networks, such as ad hoc and cellular networks, security in WSNs is more complicated due to the constrained capabilities of sensor nodes and the properties of the deployment, such as large scale and hostile environments. Security issues mainly come from attacks. In general, attacks in WSNs can be classified as external attacks and internal attacks. In an external attack, the attacking node is not an authorized participant of the sensor network. Cryptography and other security methods can prevent some external attacks. However, node compromise, the major and unique problem that leads to internal attacks, can undo all efforts to prevent attacks. Knowing the probability of node compromise helps systems detect and defend against it. Although there are some approaches that can be used to detect and defend against node compromise, few of them can estimate the probability of node compromise. Hence, using probability theory, we develop basic uniform, basic gradient, intelligent uniform, and intelligent gradient models of node compromise distribution in order to adapt to different application environments. These models allow systems to estimate the probability of node compromise. Applying these models in system security designs can improve system security and decrease overheads in nearly every security area. Moreover, based on these models, we design a novel secure routing algorithm to address the routing security issue caused by nodes that have already been compromised but have not yet been detected by the node compromise detection mechanism. Routing paths in our algorithm detour around nodes that have already been detected as compromised or that have a higher probability of being compromised. Simulation results show that our algorithm is effective in protecting routing paths from node compromise, whether detected or not.
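To illustrate the kind of compromise-aware routing described above, the following minimal Python sketch assigns each node a hypothetical compromise probability and picks the path that maximizes the probability of traversing no compromised node; the topology, probabilities, and cost function are illustrative assumptions, not the dissertation's actual models or algorithm.

```python
import heapq
import math

def safest_path(graph, p_compromise, src, dst):
    """graph: {node: [neighbors]}; p_compromise: {node: probability of compromise}.
    Returns the path that maximizes the probability that no node on it is compromised."""
    # Traversing a node costs -log(1 - p): summing costs multiplies survival probabilities.
    cost = {}
    for n in graph:
        p = p_compromise[n]
        cost[n] = math.inf if p >= 1.0 else -math.log(1.0 - p)
    dist = {n: math.inf for n in graph}
    prev = {}
    dist[src] = cost[src]
    heap = [(dist[src], src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist[u]:
            continue
        for v in graph[u]:
            nd = d + cost[v]
            if nd < dist[v]:
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the path from dst back to src.
    path, n = [dst], dst
    while n != src:
        n = prev[n]
        path.append(n)
    return path[::-1]

# Example: node "B" is detected as compromised (p = 0.9) and gets detoured.
g = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
p = {"A": 0.05, "B": 0.9, "C": 0.1, "D": 0.05}
print(safest_path(g, p, "A", "D"))  # -> ['A', 'C', 'D']
```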
Abstract:
Recently, wireless network technology has grown at such a pace that scientific research has become a practical reality in a very short time span. One mobile system that features high data rates and open network architecture is 4G. Currently, the research community and industry, in the field of wireless networks, are working on possible choices for solutions in the 4G system. The researcher considers one of the most important characteristics of future 4G mobile systems the ability to guarantee reliable communications at high data rates, in addition to high efficiency in the spectrum usage. On mobile wireless communication networks, one important factor is the coverage of large geographical areas. In 4G systems, a hybrid satellite/terrestrial network is crucial to providing users with coverage wherever needed. Subscribers thus require a reliable satellite link to access their services when they are in remote locations where a terrestrial infrastructure is unavailable. The results show that good modulation and access technique are also required in order to transmit high data rates over satellite links to mobile users. The dissertation proposes the use of OFDM (Orthogonal Frequency Multiple Access) for the satellite link by increasing the time diversity. This technique will allow for an increase of the data rate, as primarily required by multimedia applications, and will also optimally use the available bandwidth. In addition, this dissertation approaches the use of Cooperative Satellite Communications for hybrid satellite/terrestrial networks. By using this technique, the satellite coverage can be extended to areas where there is no direct link to the satellite. The issue of Cooperative Satellite Communications is solved through a new algorithm that forwards the received data from the fixed node to the mobile node. This algorithm is very efficient because it does not allow unnecessary transmissions and is based on signal to noise ratio (SNR) measures.
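As an illustration of SNR-driven cooperative forwarding of the kind described above, the sketch below uses a hypothetical threshold rule; the thresholds and decision logic are assumptions for illustration, not the dissertation's algorithm.

```python
# Hypothetical SNR-threshold forwarding decision for the fixed relay node.
MIN_DIRECT_SNR_DB = 6.0    # assumed SNR below which the direct satellite link is unusable
MIN_RELAY_SNR_DB = 10.0    # assumed SNR the fixed-to-mobile link must offer to make relaying worthwhile

def should_forward(direct_snr_db: float, relay_snr_db: float) -> bool:
    """Fixed node forwards satellite data to the mobile node only when the mobile
    cannot decode the direct link and the relay link is good enough, so that no
    unnecessary transmissions are made."""
    direct_link_fails = direct_snr_db < MIN_DIRECT_SNR_DB
    relay_link_usable = relay_snr_db >= MIN_RELAY_SNR_DB
    return direct_link_fails and relay_link_usable

# Example: mobile user shadowed from the satellite (2 dB) but with a strong
# terrestrial link to the fixed node (15 dB) -> forward.
print(should_forward(2.0, 15.0))   # True
print(should_forward(12.0, 15.0))  # False: direct link already sufficient
```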
Abstract:
The increasing demand for Internet data traffic in wireless broadband access networks requires both the development of efficient, novel wireless broadband access technologies and the allocation of new spectrum bands for that purpose. The introduction of a great number of small cells in cellular networks, allied to the complementary adoption of Wireless Local Area Network (WLAN) technologies in unlicensed spectrum, is one of the most promising concepts to meet this demand. One alternative is the aggregation of Industrial, Scientific and Medical (ISM) unlicensed spectrum to licensed bands, using wireless networks defined by the Institute of Electrical and Electronics Engineers (IEEE) and the Third Generation Partnership Project (3GPP). While IEEE 802.11 (Wi-Fi) networks are aggregated to Long Term Evolution (LTE) small cells via LTE/WLAN Aggregation (LWA), in proposals like Unlicensed LTE (LTE-U) and License Assisted Access (LAA) the LTE air interface itself is used for transmission on the unlicensed band. Wi-Fi technology is widespread and operates in the same 5 GHz ISM spectrum bands as the LTE proposals, which may degrade performance due to the coexistence of both technologies in the same spectrum bands. Besides, there is a need to improve Wi-Fi operation to support scenarios with a large number of neighboring Overlapping Basic Service Set (OBSS) networks, with a large number of Wi-Fi nodes (i.e., dense deployments). It has long been known that overall Wi-Fi performance falls sharply as the number of Wi-Fi nodes sharing the channel increases; therefore, mechanisms to increase its spectral efficiency are needed. This work is dedicated to the study of coexistence between different wireless broadband access systems operating in the same unlicensed spectrum bands, and of how to solve the coexistence problems via distributed coordination mechanisms. The problem of coexistence between different networks (i.e., LTE and Wi-Fi) and the problem of coexistence between different networks of the same technology (i.e., multiple Wi-Fi OBSSs) are analyzed both qualitatively and quantitatively via system-level simulations, and the main issues to be faced are identified from these results. From that, distributed coordination mechanisms are proposed and evaluated via system-level simulations, both for the inter-technology and the intra-technology coexistence problem. Results indicate that the proposed solutions provide significant gains when compared to the situation without distributed coordination.
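The thesis's actual coordination mechanisms are not detailed in the abstract; the sketch below shows one generic distributed scheme in this spirit, in which each network (an LTE-U/LAA cell or a Wi-Fi OBSS) independently measures channel occupancy and probabilistically moves to the least-occupied unlicensed channel, so that concurrent switches do not all collide on the same channel. Channel numbers, thresholds, and the switching rule are illustrative assumptions.

```python
import random

def pick_channel(current: int, occupancy: dict[int, float], switch_prob: float = 0.5) -> int:
    """occupancy maps channel id -> observed busy fraction in [0, 1]."""
    best = min(occupancy, key=occupancy.get)
    if best == current:
        return current
    # Switch only if the gain is noticeable, and only with some probability,
    # to avoid synchronized oscillation between coexisting networks.
    if occupancy[current] - occupancy[best] > 0.1 and random.random() < switch_prob:
        return best
    return current

# Example: a network on channel 36 observing three 5 GHz channels.
# Prints 40 with probability switch_prob, otherwise stays on 36.
print(pick_channel(36, {36: 0.8, 40: 0.3, 44: 0.5}))
```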
Abstract:
This dissertation presents work on receiver synchronization for OFDM systems, aiming at the integration of the developed architecture into the research project "CROWN - Co-operative Radio over Fibre for Wireless Networks", currently under way at the Instituto de Telecomunicações. This receiver architecture was implemented on a development platform based on FPGA programmable devices, using the MatLab, System Generator, and ISE development tools. The implemented system has the particularity of operating asynchronously and relies on the algorithms of Van de Beek [1] and Carlos Ribeiro [2] for estimation and subsequent synchronization. Both algorithms were used for CFO estimation, and the Van de Beek algorithm was also used to estimate the start of the frame. The system's performance was analyzed under different conditions in order to evaluate the implemented estimators. Performance was assessed in terms of the resulting BER and of the estimation error of the frame start and of the CFO value. In addition to the individual analysis of the results, a comparison of the accuracy of both estimators is also made.
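The Van de Beek estimator mentioned above exploits the cyclic prefix: correlating each received sample with the sample one FFT length later yields a timing metric whose peak marks the frame start, and the phase of the correlation at that peak gives the fractional CFO. The Python/NumPy sketch below is a simplified version of that idea (it omits the SNR-dependent term of the full ML metric and does not reflect the FPGA implementation); the signal parameters are illustrative.

```python
import numpy as np

def cp_sync(r: np.ndarray, N: int, L: int):
    """Cyclic-prefix correlation: N = FFT size, L = CP length.
    Returns (frame start estimate, fractional CFO in subcarrier spacings)."""
    best_theta, best_metric, best_gamma = 0, -np.inf, 0.0 + 0.0j
    for theta in range(len(r) - N - L + 1):
        # Correlate the CP with the tail of the same OFDM symbol, N samples later.
        gamma = np.vdot(r[theta + N:theta + N + L], r[theta:theta + L])
        if np.abs(gamma) > best_metric:
            best_theta, best_metric, best_gamma = theta, np.abs(gamma), gamma
    cfo = -np.angle(best_gamma) / (2 * np.pi)
    return best_theta, cfo

# Example: one OFDM symbol with CP, a small CFO, and noise.
N, L, eps = 64, 16, 0.1
rng = np.random.default_rng(1)
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
sym = np.concatenate([x[-L:], x])                    # prepend cyclic prefix
r = np.concatenate([np.zeros(30), sym])
r = r * np.exp(2j * np.pi * eps * np.arange(len(r)) / N)   # apply CFO
r = r + 0.05 * (rng.standard_normal(len(r)) + 1j * rng.standard_normal(len(r)))
print(cp_sync(r, N, L))  # timing estimate near 30, fractional CFO near 0.1
```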
Abstract:
Software engineering researchers are challenged to provide increasingly more powerful levels of abstraction to address the rising complexity inherent in software solutions. One new development paradigm that places models as the abstraction at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD considers models as first-class artifacts, extending the capability for engineers to use concepts from the problem domain of discourse to specify appropriate solutions. A key component in MDSD is domain-specific modeling languages (DSMLs), which are languages with focused expressiveness targeting a specific taxonomy of problems. The de facto approach is to first transform DSML models into an intermediate artifact in a high-level language (HLL), e.g., Java or C++, and then execute the resulting code. Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), where models are directly interpreted by a specialized execution engine with semantics based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the layer in the DSVM that transforms i-DSML models into executable scripts for the next lower layer to process. The appeal of an i-DSML is constrained because it possesses unique semantics contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment in resources. At the onset of this research, only one i-DSML had been created for the user-centric communication domain using the aforementioned approach. This i-DSML is the Communication Modeling Language (CML), and its DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception with no reuse of expertise. This dissertation investigates how to decouple the DSK from the MoE and subsequently produce a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters utilizes a reusable framework loosely coupled to the DSK as swappable framework extensions. This approach involves first creating an i-DSML and its DSVM for a second domain, demand-side smart grid or microgrid energy management, and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, the synthesis engines are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced development effort.
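The general idea of a GMoE delegating all domain-specific knowledge to a swappable extension can be sketched as below. The interface and class names are hypothetical and are not taken from the CML/CVM implementation; the toy domain logic only illustrates the decoupling.

```python
from abc import ABC, abstractmethod

class DomainKnowledge(ABC):
    """Swappable DSK extension consulted by the generic synthesis engine."""
    @abstractmethod
    def delta(self, old_model: dict, new_model: dict) -> list[str]:
        """Turn a runtime model change into domain-level operations."""

    @abstractmethod
    def to_script(self, operation: str) -> str:
        """Translate one domain operation into an executable script step."""

class GenericSynthesisEngine:
    """GMoE: the control loop is domain-independent; only `dsk` differs per DSVM."""
    def __init__(self, dsk: DomainKnowledge):
        self.dsk = dsk

    def synthesize(self, old_model: dict, new_model: dict) -> list[str]:
        return [self.dsk.to_script(op) for op in self.dsk.delta(old_model, new_model)]

class CommunicationDSK(DomainKnowledge):
    """Toy DSK for a user-centric communication domain."""
    def delta(self, old_model, new_model):
        added = set(new_model.get("participants", [])) - set(old_model.get("participants", []))
        return [f"add_participant:{p}" for p in sorted(added)]

    def to_script(self, operation):
        return f"EXEC {operation}"

# Instantiating a synthesis engine by plugging a DSK into the reusable GMoE.
engine = GenericSynthesisEngine(CommunicationDSK())
print(engine.synthesize({"participants": ["alice"]}, {"participants": ["alice", "bob"]}))
```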
Abstract:
In order to cope with the ever-increasing demand for larger transmission bandwidth, Radio over Fiber (RoF) technology is a very beneficial solution. These systems are expected to play a major role within future fifth-generation wireless networks due to their inherent capillary distribution properties. Nonlinear compensation techniques are becoming increasingly important to improve the performance of telecommunication channels by compensating for channel nonlinearities. Indeed, significant bounds on the technology's usability and performance degradation occur due to the nonlinear characteristics of the optical transmitter and the nonlinear generation of spurious frequencies, which, in the case of RoF links exploiting Directly Modulated Lasers, has the combined effect of laser chirp and optical fiber dispersion among its prevailing causes. The purpose of the research is to analyze some of the main causes of harmonic and intermodulation distortion present in RoF links, and to suggest a solution to reduce their effects through a digital predistortion technique. Predistortion is an effective and interesting linearization solution, and it allows demonstrating that the laser's chirp and the optical fiber's dispersion are the main causes of harmonic distortion. The improvements illustrated are only theoretical, from a feasibility point of view. The simulations performed lead to significant improvements for both short and long radio over fiber link lengths. The algorithm used for simulation has been implemented in MATLAB. The effects of chirp and fiber nonlinearity in a directly modulated fiber transmission system are investigated by simulation, and a cost-effective and rather simple technique for compensating these effects is discussed. A detailed description of its functional model is given, and its attractive features, both in terms of quality improvement of the received signal and cost effectiveness of the system, are illustrated.
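Digital predistortion pre-applies an approximate inverse of the transmitter nonlinearity so that the cascaded response is close to linear. The minimal memoryless polynomial sketch below illustrates this; the third-order laser model and its coefficients are made-up assumptions, not the MATLAB algorithm of the dissertation.

```python
import numpy as np

def laser_model(x):
    """Assumed memoryless third-order nonlinearity of the directly modulated laser."""
    return x - 0.2 * x**3

def predistorter(x):
    """Approximate inverse: pre-expand the signal so the cascade is ~linear."""
    return x + 0.2 * x**3

t = np.linspace(0, 1e-6, 2000)
rf_in = 0.5 * np.sin(2 * np.pi * 10e6 * t)

direct = laser_model(rf_in)                   # distorted output (third harmonic present)
linearized = laser_model(predistorter(rf_in)) # predistorted, then passed through the laser

# Compare residual distortion power with and without predistortion.
err_direct = np.mean((direct - rf_in) ** 2)
err_pd = np.mean((linearized - rf_in) ** 2)
print(f"distortion power without PD: {err_direct:.2e}, with PD: {err_pd:.2e}")
```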
Abstract:
Over the past few years, the number of wireless network users has been increasing. Until now, Radio Frequency (RF) has been the dominant technology. However, the electromagnetic spectrum in this region is becoming saturated, demanding alternative wireless technologies. Recently, with the growing market of LED lighting, Visible Light Communication (VLC) has been drawing attention from the research community. First, the LED is an efficient device for illumination. Second, it offers easy modulation and high bandwidth. Finally, it can combine illumination and communication in the same device; in other words, it allows the implementation of highly efficient wireless communication systems. One of the most important aspects of a communication system is its reliability when working over noisy channels. In these scenarios, the received data can be affected by errors. For the system to work properly, a channel encoder is usually employed. Its function is to encode the data to be transmitted in order to increase system performance. It commonly uses error-correcting codes (ECC), which append redundant information to the original data. At the receiver side, the redundant information is used to recover the erroneous data. This dissertation presents the implementation steps of a channel encoder for VLC. Several techniques were considered, such as Reed-Solomon and convolutional codes, block and convolutional interleaving, CRC, and puncturing. A detailed analysis of the characteristics of each technique was made in order to choose the most appropriate ones. Simulink models were created to simulate how different codes behave in different scenarios. Later, the models were implemented in an FPGA and simulations were performed. Hardware co-simulations were also implemented to obtain simulation results faster. In the end, different techniques were combined to create a complete channel encoder capable of detecting and correcting random and burst errors, thanks to the use of an RS(255,213) code with a block interleaver. Furthermore, after the decoding process, the proposed system can identify uncorrectable errors in the decoded data thanks to the CRC-32 algorithm.
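The block interleaver mentioned above spreads burst errors across several Reed-Solomon codewords: symbols are written row by row into a rows x cols matrix and read out column by column. The Python sketch below shows the principle; the dimensions are small illustrative values, not the ones used in the implemented encoder.

```python
def block_interleave(data: bytes, rows: int, cols: int) -> bytes:
    """Write row by row, read column by column."""
    assert len(data) == rows * cols
    return bytes(data[r * cols + c] for c in range(cols) for r in range(rows))

def block_deinterleave(data: bytes, rows: int, cols: int) -> bytes:
    """Inverse operation at the receiver."""
    assert len(data) == rows * cols
    return bytes(data[c * rows + r] for r in range(rows) for c in range(cols))

msg = bytes(range(12))                  # a 3 x 4 block of symbols
tx = block_interleave(msg, 3, 4)
# A 3-symbol burst error on the channel lands in 3 different rows after
# deinterleaving, so each RS codeword only has to correct one erroneous symbol.
print(list(tx))
print(block_deinterleave(tx, 3, 4) == msg)  # True
```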
Abstract:
This paper provides an exploratory study of how rewards-based crowdfunding affects business model development for music industry artists, labels, and live sector companies. The empirical methodology incorporated a qualitative, semi-structured, three-stage interview design with fifty-seven senior executives from industry crowdfunding platforms and three stakeholder groups. The results and analysis cover new research ground and provide conceptual models to develop theoretical foundations for further research in this field. The findings indicate that the financial-model benefits of crowdfunding for independent artists depend on fan base demographic variables relating to age group and genre, owing to sustained apprehension from younger audiences. Furthermore, major labels are now considering a more user-centric financial model as an innovation strategy, and the impact of crowdfunding on their marketing model may already be initiating its development in terms of creativity, strength, and artist relations.
Abstract:
Situational awareness provides a user-centric approach to security and privacy. The human factor is often recognised as the weakest link in security; therefore, situational perception and risk awareness play a leading role in the adoption and implementation of security mechanisms. In this study we assess the understanding of security and privacy of users in possession of wearable devices. The findings demonstrate privacy complacency, as the majority of users trust the application and the wearable device manufacturer. Moreover, the survey findings demonstrate a lack of understanding of security and privacy by the sample population. Finally, the theoretical implications of the findings are discussed.
Abstract:
Nowadays, information security is a very important topic. In particular, wireless networks are experiencing ongoing widespread diffusion, also thanks to the increasing number of Internet of Things devices, which generate and transmit a lot of data: protecting wireless communications is of fundamental importance, ideally through an easy but secure method. Physical Layer Security is an umbrella of techniques that leverage the characteristics of the wireless channel to provide security for the transmission. In particular, Physical Layer-based Key generation aims at allowing two users to generate a random symmetric key autonomously, hence without the aid of a trusted third entity. Physical Layer-based Key generation relies on observations of the wireless channel, from which entropy is harvested; however, an attacker might possess a channel simulator, for example a Ray Tracing simulator, to replicate the channel between the legitimate users in order to guess the secret key and break the security of the communication. This thesis focuses on the possibility of carrying out a so-called Ray Tracing attack: the assessment method consists of a set of channel measurements, under different channel conditions, which are then compared with the channel simulated by the ray tracer in order to compute the mutual information between measurements and simulations. Furthermore, the possibility of using Ray Tracing as a tool to evaluate the impact of channel parameters (e.g., the bandwidth or the directivity of the antenna) on Physical Layer-based Key generation is also presented. The measurements were carried out at the Barkhausen Institut gGmbH in Dresden, Germany, within the framework of the existing cooperation agreement between BI and the Dept. of Electrical, Electronics and Information Engineering "G. Marconi" (DEI) at the University of Bologna.
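The mutual information between measured and simulated channel observations quantifies how much key material a ray-tracing attacker could infer. The sketch below estimates it with a simple 2-D histogram estimator; the synthetic Gaussian data stand in for real measurement campaigns and are not the thesis's data.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based estimate of I(X;Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
measured = rng.normal(size=5000)                        # stand-in for measured channel gains
simulated_good = 0.8 * measured + 0.2 * rng.normal(size=5000)  # simulator that tracks the channel
simulated_poor = rng.normal(size=5000)                  # simulator unrelated to the channel

print(mutual_information(measured, simulated_good))  # clearly > 0: the simulation leaks key material
print(mutual_information(measured, simulated_poor))  # close to 0: no useful leakage
```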
Abstract:
The Internet of Vehicles (IoV) paradigm has emerged in recent times, where, with the support of technologies like the Internet of Things and V2X, Vehicular Users (VUs) can access different services through internet connectivity. With the support of 6G technology, the IoV paradigm will evolve further and converge into a fully connected and intelligent vehicular system. However, this brings new challenges to dynamic and resource-constrained vehicular systems, and advanced solutions are demanded. This dissertation analyzes the demands of future 6G-enabled IoV systems and the corresponding challenges, and provides various solutions to address them. Vehicular service and application requests demand proper data processing solutions with the support of distributed computing environments such as Vehicular Edge Computing (VEC). While analyzing the performance of VEC systems, it is important to take the limited resources, coverage, and vehicular mobility into account. Recently, Non-Terrestrial Networks (NTN) have gained huge popularity for boosting the coverage and capacity of terrestrial wireless networks. Integrating such NTN facilities into the terrestrial VEC system can address the above-mentioned challenges. Additionally, such integrated Terrestrial and Non-Terrestrial Networks (T-NTN) can also be considered to provide advanced intelligent solutions with the support of the edge intelligence paradigm. In this dissertation, we propose an edge-computing-enabled joint T-NTN-based vehicular system architecture to serve VUs. Next, we analyze the performance of terrestrial VEC systems for VU data processing problems and propose solutions to improve the performance in terms of latency and energy costs. We then extend the scenario toward the joint T-NTN system and address the problem of distributed data processing through ML-based solutions. We also propose advanced distributed learning frameworks supported by a joint T-NTN framework with edge computing facilities. In the end, concluding remarks and several future directions are provided for the proposed solutions.
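A toy example of the latency/energy trade-off that drives VEC offloading decisions is sketched below: a VU task can run locally or be offloaded to a terrestrial or non-terrestrial edge server, and the option with the lowest weighted cost is chosen. All parameter values and the cost model are hypothetical illustrations, not the dissertation's solutions.

```python
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float      # CPU cycles required
    data_bits: float   # input size to upload when offloading

def local_cost(task, cpu_hz=1e9, energy_per_cycle=1e-9, w_latency=0.5):
    latency = task.cycles / cpu_hz
    energy = task.cycles * energy_per_cycle
    return w_latency * latency + (1 - w_latency) * energy

def offload_cost(task, rate_bps, tx_power_w, server_hz=10e9, w_latency=0.5):
    latency = task.data_bits / rate_bps + task.cycles / server_hz
    energy = tx_power_w * task.data_bits / rate_bps   # only the upload costs the VU energy
    return w_latency * latency + (1 - w_latency) * energy

task = Task(cycles=5e8, data_bits=2e6)
options = {
    "local": local_cost(task),
    "terrestrial edge": offload_cost(task, rate_bps=50e6, tx_power_w=0.2),
    "NTN edge": offload_cost(task, rate_bps=5e6, tx_power_w=0.5),
}
print(min(options, key=options.get), options)  # terrestrial edge wins for these numbers
```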
Abstract:
The evolution of wireless access technologies and mobile devices, together with the constant demand for video services, has created new Human-Centric Multimedia Networking (HCMN) scenarios. However, HCMN poses several challenges for content creators and network providers in delivering multimedia data with an acceptable quality level based on the user experience. Moreover, human experience and context, as well as network information, play an important role in adapting and optimizing video dissemination. In this paper, we discuss trends for providing video dissemination with Quality of Experience (QoE) support by integrating HCMN with cloud computing approaches. We identify five trends arising from such integration, namely Participatory Sensor Networks, Mobile Cloud Computing formation, QoE assessment, QoE management, and video or network adaptation.
Abstract:
Wireless Sensor Networks (WSNs) are being used for a number of applications involving infrastructure monitoring, building energy monitoring, and industrial sensing. The difficulty of programming individual sensor nodes and the associated overhead have encouraged researchers to design macro-programming systems which can help program the network as a whole or as a combination of subnets. Most current macro-programming schemes do not support multiple users seamlessly deploying diverse applications on the same shared sensor network. As WSNs become more common, it is important to provide such support, since it enables higher-level optimizations such as code reuse, energy savings, and traffic reduction. In this paper, we propose a macro-programming framework called Nano-CF, which, in addition to supporting in-network programming, allows multiple applications written by different programmers to be executed simultaneously on a sensor networking infrastructure. This framework enables the use of a common sensing infrastructure for a number of applications without the users having to worry about the applications already deployed on the network. The framework also supports timing constraints and resource reservations using the Nano-RK operating system. Nano-CF is effective at improving WSN performance by (a) combining multiple user programs, (b) aggregating packets for data delivery, and (c) satisfying timing and energy specifications using Rate-Harmonized Scheduling. Using representative applications, we demonstrate that Nano-CF achieves a 90% reduction in Source Lines of Code (SLoC) and 50% energy savings from aggregated data delivery.
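Rate-Harmonized Scheduling aligns task periods so that releases coincide and the node can batch wakeups (and aggregate outgoing packets) instead of waking for each task separately. The sketch below shows one simple harmonization step in that spirit: rounding each requested period down to the nearest power-of-two multiple of a base period. The task set and base period are hypothetical, and this is only an illustration of the idea, not the Nano-CF implementation.

```python
def harmonize(periods_ms, base_ms):
    """Round each period down to the largest base_ms * 2**i that does not exceed it,
    so that all task releases align on a common harmonic grid."""
    harmonized = []
    for p in periods_ms:
        k = 1
        while base_ms * k * 2 <= p:
            k *= 2
        harmonized.append(base_ms * k)
    return harmonized

tasks = [25, 40, 95, 210]            # requested sampling periods (ms) from four user programs
print(harmonize(tasks, base_ms=10))  # -> [20, 40, 80, 160]: releases align, wakeups can be batched
```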
Abstract:
Wireless Sensor Networks (WSNs) are increasingly used in various application domains such as home automation, agriculture, industry, and infrastructure monitoring. As applications tend to leverage larger geographical deployments of sensor networks, the availability of an intuitive and user-friendly programming abstraction becomes a crucial factor in enabling faster and more efficient development and reprogramming of applications. We propose a programming pattern named sMapReduce, inspired by the Google MapReduce framework, for mapping application behaviors onto a sensor network and enabling complex data aggregation. The proposed pattern requires a user to create a network-level application in two functions, sMap and Reduce, in order to abstract away the low-level details without sacrificing the control needed to develop complex logic. Such a two-fold division of programming logic is a natural fit for typical sensor network operation and makes sensing and topological modalities accessible to the user.
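The actual sMapReduce API is not given in the abstract, so the sketch below uses hypothetical function names and data flow purely to illustrate the two-fold division: an sMap function run on every node to map the application behavior onto local sensing, and a Reduce function aggregating the mapped readings.

```python
def smap(node):
    """Runs on each sensor node: sample locally and tag with node metadata."""
    return {"node": node["id"], "region": node["region"], "temp": node["sensor"]()}

def reduce_by_region(mapped_readings):
    """Runs at the sink (or in-network): aggregate per-region temperature averages."""
    acc = {}
    for r in mapped_readings:
        total, count = acc.get(r["region"], (0.0, 0))
        acc[r["region"]] = (total + r["temp"], count + 1)
    return {region: total / count for region, (total, count) in acc.items()}

# Example with three simulated nodes standing in for deployed sensors.
nodes = [
    {"id": 1, "region": "roof", "sensor": lambda: 31.0},
    {"id": 2, "region": "roof", "sensor": lambda: 33.0},
    {"id": 3, "region": "lab",  "sensor": lambda: 22.5},
]
print(reduce_by_region(smap(n) for n in nodes))  # {'roof': 32.0, 'lab': 22.5}
```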
Abstract:
In-network storage of data in wireless sensor networks helps reduce communications inside the network and favors data aggregation. In this paper, we consider the use of n out of m codes and data dispersal in combination with in-network storage. In particular, we provide an abstract model of in-network storage to show how n out of m codes can be used, and we discuss how this can be achieved in five case studies. We also define a model aimed at evaluating the probability of correct data encoding and decoding, and we exploit this model and simulations to show how, in the case studies, the parameters of the n out of m codes and of the network should be configured in order to achieve correct data encoding and decoding with high probability.
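With an n out of m code, data dispersed over m storage nodes can be decoded from any n surviving fragments. If each node is assumed to remain available independently with probability p, the decoding probability is the binomial tail P = sum over k from n to m of C(m,k) p^k (1-p)^(m-k); the sketch below computes it. The n, m, and p values are examples, not the configurations studied in the paper.

```python
from math import comb

def decode_probability(n: int, m: int, p: float) -> float:
    """Probability that at least n of the m fragments survive, so decoding succeeds."""
    return sum(comb(m, k) * p**k * (1 - p)**(m - k) for k in range(n, m + 1))

print(decode_probability(n=3, m=5, p=0.9))   # ~0.991
print(decode_probability(n=3, m=5, p=0.7))   # ~0.837
```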