862 results for Android,Peer to Peer,Wifi,Mesh Network


Relevance: 100.00%

Abstract:

Lidar is an optical remote sensing instrument that can measure atmospheric parameters. A Raman lidar instrument (UCLID) was established at University College Cork to contribute to the European lidar network, EARLINET. System performance tests were carried out to ensure strict data quality assurance for submission to the EARLINET database; procedures included overlap correction, the telecover test, the Rayleigh test and the zero-bin test. Raman backscatter coefficients, extinction coefficients and lidar ratios were measured from April 2010 to May 2011 and from February 2012 to June 2012. Statistical analysis of the profiles over these periods provided new information about the typical atmospheric scenarios over Southern Ireland in terms of aerosol load in the lower troposphere, planetary boundary layer (PBL) height, aerosol optical depth (AOD) at 532 nm and lidar ratio values. The arithmetic average of the PBL height was found to be 608 ± 138 m with a median of 615 m, while the average AOD at 532 nm was 0.119 ± 0.023 for clean marine air masses and 0.170 ± 0.036 for polluted air masses. The lidar ratio showed a seasonal dependence, with lower values found in winter and autumn (20 ± 5 sr) and higher values during spring (30 ± 12 sr). Volcanic particles from the eruption of the Eyjafjallajökull volcano in Iceland were detected between 21 April and 7 May 2010. The backscatter coefficient of the ash layer varied between 2.5 Mm⁻¹sr⁻¹ and 3.5 Mm⁻¹sr⁻¹, and the AOD at 532 nm was estimated to be between 0.090 and 0.215. Several aerosol loads due to Saharan dust particles were detected in spring 2011 and 2012. The lidar ratios of the dust layers were determined to be between 45 and 77 sr, and the AOD at 532 nm during the dust events ranged between 0.084 and 0.494.

Relevance: 100.00%

Abstract:

Absorption heat transformers are thermodynamic systems capable of recycling industrial waste heat by increasing its temperature. Triple-stage absorption heat transformers (TAHTs) can increase the temperature of this waste heat by up to approximately 145 °C. The principal factors influencing the thermodynamic performance of a TAHT, and general points of operating optima, were identified using a multivariate statistical analysis, before heat exchange network modelling techniques were used to dissect the design of the TAHT and systematically reassemble it to minimise internal exergy destruction within the unit. This enabled first- and second-law efficiency improvements of up to 18.8% and 31.5% respectively over conventional TAHT designs. The economic feasibility of such a thermodynamically optimised cycle was investigated by applying it to an oil refinery in Ireland, demonstrating that in general the capital cost of a TAHT makes it difficult to achieve acceptable rates of return. The TAHT's capital cost may be decreased by redesigning its individual pieces of equipment and reducing their size. The potential benefits of using a bubble column absorber were therefore investigated in this thesis. An experimental bubble column was constructed and used to track the collapse of steam bubbles being absorbed into a hotter lithium bromide salt solution. Extremely high mass transfer coefficients of approximately 0.0012 m/s were observed, a significant improvement over previously investigated absorbers. Two separate models were developed: a combined heat and mass transfer model describing the rate of collapse of the bubbles, and a stochastic model describing the hydrodynamic motion of the collapsing vapour bubbles, taking into consideration random fluctuations observed in the experimental data.
Both models showed good agreement with the collected data, and demonstrated that the difference between the solution's temperature and its boiling temperature is the primary factor influencing the absorber's performance.

Relevance: 100.00%

Abstract:

Executive Summary

The programme of work was commissioned in September 1998 to supply information to underpin the UK's commitments to protection and conservation of the ecosystems and biodiversity of the marine environment under the 1992 OSPAR Convention on the Protection of the Marine Environment of the North East Atlantic. The programme also provided support for the implementation of the Biodiversity Convention and the EU Habitats Directive. The MarLIN programme initiated a new approach to assessing the sensitivity and recoverability characteristics of seabed species and biotopes, based on structures (such as the seabed biotopes classification) and criteria (such as for assessing rarity and defining 'sensitivity') developed since 1997. It also developed tools to disseminate the information on the Internet. The species researched were those that were listed in conventions and directives, included in Biodiversity Action Plans, or were nationally rare or scarce. In addition, species were researched if they maintained community composition or structure, provided a distinctive habitat, or were special to or especially abundant in a particular situation or biotope.

At its conclusion in August 2001, the work carried out under the contract with DETR/DEFRA had:
· Developed protocols, criteria and structures for identifying 'sensitivity' and 'recoverability', which were tested by a programme management group.
· Developed a database to hold research data on the biology and sensitivity of species and biotopes.
· Defined the link between human activities and the environmental factors likely to be affected by those activities.
· Developed a user-friendly Web site to access information from the database on the sensitivity and recoverability characteristics of over 100 species, with basic information on over 200 species.

Additionally, the project team have:
· Brought together and facilitated discussion between current developers and users of electronic resources for environmental management, protection and education in the conference 'Using Marine Biological Information in the Electronic Age' (19-21 July 1999).
· Contributed to the development of Ecological Quality Objectives for the North Sea (Scheveningen, 11-13 September 1999 and subsequent papers).
· Provided detailed information on species as a supplement to the National Biodiversity Network Gateway demonstration www.searchnbn.net.
· Developed a peer-reviewed approach to electronic publication of updateable information.
· Promoted the contract results and the MarLIN approach to the support of marine environmental management and protection at European research fora and, through the web site, internationally.

The information available through the Web site is now being used by consultants and Government agencies. The DEFRA contract has been of critical importance in establishing the Marine Life Information Network (MarLIN) programme and has encouraged support from other organisations. Other related work in the MarLIN programme is on-going, especially to identify the sensitivity of biotopes to support management of SACs (contract from English Nature in collaboration with Scottish Natural Heritage), to access data sources (in collaboration with the National Biodiversity Network) and to establish volunteer recording schemes for marine life. The results of the programme are best viewed on the Web site (www.marlin.ac.uk). Three reports have been produced during the project: a final report detailing the work undertaken, a brochure 'Identifying the sensitivity of seabed ecosystems', and a CD-ROM describing the programme and demonstrating the Web site, delivered as final products in addition to the Web site.

Relevance: 100.00%

Abstract:

The development of smart grid technologies and appropriate charging strategies are key to accommodating large numbers of Electric Vehicles (EV) charging on the grid. In this paper a general framework is presented for formulating the EV charging optimization problem and three different charging strategies are investigated and compared from the perspective of charging fairness while taking into account power system constraints. Two strategies are based on distributed algorithms, namely, Additive Increase and Multiplicative Decrease (AIMD), and Distributed Price-Feedback (DPF), while the third is an ideal centralized solution used to benchmark performance. The algorithms are evaluated using a simulation of a typical residential low voltage distribution network with 50% EV penetration. © 2013 IEEE.
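The AIMD strategy named above can be sketched in a few lines. The increment, back-off factor, feeder capacity and per-charger limit below are illustrative assumptions, not values from the paper:

```python
# Illustrative AIMD charging-rate control (hypothetical parameters).
# Each EV additively increases its charging rate until the aggregate
# load exceeds a feeder capacity limit, then multiplicatively backs off.

def aimd_step(rates, capacity, alpha=0.5, beta=0.5, r_max=7.0):
    """One synchronous AIMD update over all EV charging rates (kW)."""
    if sum(rates) > capacity:                        # congestion event
        return [r * beta for r in rates]             # multiplicative decrease
    return [min(r + alpha, r_max) for r in rates]    # additive increase

rates = [0.0] * 10          # ten EVs, initially idle
for _ in range(100):
    rates = aimd_step(rates, capacity=40.0)
# The aggregate load now oscillates just around the 40 kW limit,
# with each EV receiving an equal share (the fairness property).
```

In a real distributed deployment each charger would run this update locally, detecting congestion from a local voltage or price signal rather than the global sum used here for brevity.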

Relevance: 100.00%

Abstract:

In this paper we consider charging strategies that mitigate the impact of domestic EV charging on low-voltage distribution networks and that seek to reduce peak power by responding to time-of-day pricing. The strategies are based on the distributed Additive Increase and Multiplicative Decrease (AIMD) charging algorithms proposed in [5]. The strategies are evaluated using simulations conducted on a custom OpenDSS-Matlab platform for a typical low voltage residential feeder network. Results show that by using AIMD-based smart charging, 50% EV penetration can be accommodated on our test network, compared to only 10% with uncontrolled charging, without needing to reinforce existing network infrastructure. © Springer-Verlag Berlin Heidelberg 2013.

Relevance: 100.00%

Abstract:

In modern society, new devices, applications and technologies with sophisticated capabilities are converging on the same network infrastructure. Users are also increasingly demanding in their personal preferences and expectations, desiring Internet connectivity anytime and everywhere. These aspects have triggered many research efforts, since the current Internet is reaching a breaking point in trying to provide enough flexibility for users and profits for operators while dealing with the complex requirements raised by this recent evolution. Fully aligned with future Internet research, many solutions have been proposed to enhance current Internet-based architectures and protocols so that they become context-aware, that is, dynamically adapted to changes in the information characterizing any network entity. In this sense, this Thesis proposes a new architecture that allows several networks with different characteristics to be created according to their context, on top of a single Wireless Mesh Network (WMN), whose infrastructure and protocols are flexible and self-adaptable. More specifically, this Thesis models the context of users, which can span their security, cost and mobility preferences, their devices' capabilities, or their services' quality requirements, in order to turn a WMN into a set of logical networks. Each logical network is configured to meet a set of user context needs (for instance, support for high mobility and low security). To implement this user-centric architecture, this Thesis uses network virtualization, which has often been advocated as a means to deploy independent network architectures and services towards the future Internet while allowing dynamic resource management. In this way, network virtualization allows a flexible and programmable configuration of a WMN, so that it can be shared by multiple logical networks (or virtual networks - VNs).
Moreover, the high level of isolation introduced by network virtualization can be used to differentiate the protocols and mechanisms of each context-aware VN. This architecture raises several challenges in controlling and managing the VNs on demand, in response to user and WMN dynamics. In this context, we target mechanisms to: (i) discover and select the VN to assign to a user; and (ii) create, adapt and remove VN topologies and routes. We also explore how the rate of variation of the user context requirements can be taken into account to improve the performance and reduce the complexity of VN control and management. Finally, due to the scalability limitations of centralized control solutions, we propose a mechanism to distribute the control functionalities across the architectural entities, which can cooperate to control and manage the VNs in a distributed way.
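The VN discovery-and-selection step described in the thesis can be illustrated with a toy matching routine. The attribute names, levels and VN identifiers below are hypothetical, invented only to show the shape of the mechanism:

```python
# Hypothetical sketch of context-based VN selection: each virtual
# network advertises the context levels it supports, and a user is
# assigned the first VN meeting all of its stated requirements.
# Attribute names and levels are illustrative, not from the thesis.

VNS = {
    "vn-secure": {"security": 3, "mobility": 1, "cost": 2},
    "vn-mobile": {"security": 1, "mobility": 3, "cost": 2},
    "vn-budget": {"security": 1, "mobility": 1, "cost": 1},
}

def select_vn(user_ctx, vns=VNS):
    """Return the id of a VN whose attributes satisfy the user context.

    Security and mobility must meet or exceed the requirement;
    cost must not exceed the user's budget level.
    """
    for vn_id, attrs in vns.items():
        if (attrs["security"] >= user_ctx.get("security", 0)
                and attrs["mobility"] >= user_ctx.get("mobility", 0)
                and attrs["cost"] <= user_ctx.get("cost", 3)):
            return vn_id
    return None  # no suitable VN: would trigger VN creation/adaptation

chosen = select_vn({"mobility": 3, "security": 1})
```

Returning `None` corresponds to the "create, adapt and remove" path in the text: when no existing VN matches, the control plane must instantiate or adapt one.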

Relevance: 100.00%

Abstract:

This thesis contributes to the advancement of Fiber-Wireless (FiWi) access technologies through the development of algorithms for resource allocation and energy-efficient routing. FiWi access networks use both optical and wireless/cellular technologies to provide the high bandwidth and ubiquity required by users and by current highly demanding services. FiWi access networks are divided into two parts. In one part, fiber is brought from the central office to near the users, while in the other part wireless routers or base stations take over and provide Internet access to users. Many technologies can be used in both the optical and wireless parts, which leads to different integration and optimization problems to be solved. In this thesis, the focus is on FiWi access networks that use a passive optical network in the optical section and a wireless mesh network in the wireless section. In such networks, two important aspects that influence network performance are the allocation of resources and traffic routing throughout the mesh section. In this thesis, both problems are addressed. A fair bandwidth allocation algorithm is developed, which provides fairness in terms of bandwidth and in terms of experienced delays among all users. As for routing, an energy-efficient routing algorithm is proposed that optimizes sleeping and productive periods throughout the wireless and optical sections. To develop the stated algorithms, game theory and network formation theory were used. These are powerful mathematical tools that can be used to solve problems involving agents with conflicting interests. Since these tools are usually not common knowledge, a brief survey of game theory and network formation theory is provided to explain the concepts used throughout the thesis. As such, this thesis also serves as a showcase for the use of game theory and network formation theory in developing new algorithms.

Relevance: 100.00%

Abstract:

Master's degree in Electrical and Computer Engineering - Telecommunications Specialization Area

Relevance: 100.00%

Abstract:

The growing trend in mobile access has been driven by IEEE 802.11 technology. However, these networks have limited radio range. To extend their coverage, wireless mesh networks based on IEEE 802.11 technology can be used, with cost and deployment-flexibility advantages over wired solutions. Wireless mesh networks composed of single-interface nodes have limited scalability. The main reason for this limitation is the use of the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) shared-medium access mechanism in multi-hop topologies. Specifically, CSMA/CA does not avoid the hidden-node problem, leading to an increased number of collisions and a corresponding performance degradation, with direct impact on throughput and latency. With the decreasing cost and size of radio technology, it becomes viable to use multiple radios per node without significantly increasing the cost of the final communications solution. Using more than one radio per communication node makes it possible to overcome the performance problems inherent to networks formed by single-radio nodes. The goal of this thesis is to develop a new solution for multi-channel, dual-radio mesh networks, using new mechanisms that complement those defined in IEEE 802.11 for establishing a Basic Service Set (BSS). The solution is based on WiFIX, a routing protocol for single-interface mesh networks, and reuses the mechanisms already implemented in IEEE 802.11 networks to disseminate metrics that allow the network to scale effectively while minimizing the impact on performance. The multi-hop network is formed by nodes equipped with two interfaces, organized in a hierarchical topology over multiple Access Point (AP) – Station (STA) associations. The experimental results obtained show the effectiveness and good performance of the proposed solution compared to the original WiFIX solution.

Relevance: 100.00%

Abstract:

Mobile malware is increasing with the growing number of mobile users. Mobile malware can perform several operations which lead to cybersecurity threats, such as stealing financial or personal information, installing malicious applications, sending premium SMS, creating backdoors, keylogging and crypto-ransomware attacks. Given that many illegitimate applications are available on the app stores, most mobile users remain careless about the security of their devices and become potential victims of these threats. Previous studies have shown that not every antivirus is capable of detecting all threats, because mobile malware uses advanced techniques to avoid detection. A network-based IDS at the operator side brings an extra layer of security to subscribers and can detect many advanced threats by analyzing their traffic patterns. Machine Learning (ML) gives these systems the ability to detect unknown threats for which signatures are not yet known. This research is focused on the evaluation of Machine Learning classifiers in network-based intrusion detection systems for mobile networks. In this study, different techniques of network-based intrusion detection are discussed, with their advantages, disadvantages and the state of the art in hybrid solutions. Finally, an ML-based NIDS is proposed which works as a subsystem of a network-based IDS deployed by mobile operators, helping to detect unknown threats and reduce false positives. In this research, several ML classifiers were implemented and evaluated. The study focuses on Android-based malware, as Android is the most popular OS among users and hence the most targeted by cyber criminals. Supervised ML classifiers were built using a dataset containing labeled instances of relevant features, extracted from the traffic generated by samples of several malware families and benign applications.
These classifiers were able to detect malicious traffic patterns with a TPR of up to 99.6% during cross-validation. Several experiments were also conducted to detect unknown malware traffic and to measure false positives; the classifiers were able to detect unknown threats with an accuracy of 97.5%. These classifiers could be integrated with current NIDSs, which use signature-based, statistical or knowledge-based techniques to detect malicious traffic. A technique for integrating the output of the ML classifier with a traditional NIDS is discussed and proposed for future work.
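The supervised-classification step described above can be illustrated with a deliberately tiny stand-in. The feature vectors and labels below are synthetic placeholders for flow statistics (e.g. mean packet size, connections per minute, upload/download ratio), and the nearest-centroid rule is just one simple classifier, not the one evaluated in the study:

```python
# Minimal sketch of supervised traffic classification in the spirit of
# the ML-based NIDS described above. Data and features are synthetic.
import math

# label: 0 = benign, 1 = malicious (toy training set)
TRAIN = [
    ([520.0, 2.0, 0.9], 0), ([610.0, 3.0, 1.1], 0),
    ([480.0, 1.0, 0.8], 0), ([90.0, 40.0, 6.0], 1),
    ([120.0, 35.0, 5.5], 1), ([70.0, 50.0, 7.2], 1),
]

def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

CENTROIDS = {
    label: centroid([x for x, y in TRAIN if y == label]) for label in (0, 1)
}

def classify(features):
    """Nearest-centroid decision: label of the closest class centroid."""
    return min(CENTROIDS, key=lambda lbl: math.dist(features, CENTROIDS[lbl]))

verdict = classify([100.0, 42.0, 6.1])   # resembles the malicious profile
```

A production system would use richer features, a stronger model and a held-out test set, but the train-then-classify structure is the same.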

Relevance: 100.00%

Abstract:

The Niagara Grape and Wine Community (NGWC) is an industry that has undergone rapid change and expansion as a result of changes in governmental regulations and consumer preferences. As a result of these changes, the demands of the wine industry workforce have changed to reflect the need to implement new strategies and practices to remain viable and competitive. The influx of people into the community with little or no prior practical experience in grape growing (viticulture) or winemaking (oenology) has created a need for additional training and learning opportunities to meet workforce needs. This case study investigated the learning needs of the members of this community and how these needs are currently being met. The barriers to, and the opportunities for, members acquiring new knowledge and developing skills were also explored. Participants were those involved in all levels of the industry and sectors (viticulture, processing, and retail), and their views on needs and suggestions for programs of study were collected. Through cross analyses of sectors, areas of common and unique interest were identified as well as formats for delivery. A common fundamental component was identified by all sectors - any program must have a significant applied component or demonstration of proficiency and should utilize members as peer instructors, mentors, and collaborators to generate a larger shared collective of knowledge. Through the review of learning organizations, learning communities, communities of practice, and learning networks, the principles for the development of a Grape and Wine Learning Network to meet the learning needs of the NGWC outside of formal institutional or academic programs were developed. The roles and actions of members to make such a network successful are suggested.

Relevance: 100.00%

Abstract:

A distributed Lagrangian moving-mesh finite element method is applied to problems involving changes of phase. The algorithm uses a distributed conservation principle to determine nodal mesh velocities, which are then used to move the nodes. The nodal values are obtained from an ALE (Arbitrary Lagrangian-Eulerian) equation, which represents a generalization of the original algorithm presented in Applied Numerical Mathematics, 54:450--469 (2005). Having described the details of the generalized algorithm it is validated on two test cases from the original paper and is then applied to one-phase and, for the first time, two-phase Stefan problems in one and two space dimensions, paying particular attention to the implementation of the interface boundary conditions. Results are presented to demonstrate the accuracy and the effectiveness of the method, including comparisons against analytical solutions where available.
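For reference, the interface condition that drives the moving boundary in the one-phase Stefan problems mentioned above takes the standard textbook form (with density ρ, latent heat L, conductivity k, temperature u and front position s(t); this is the classical statement, not a formula quoted from the paper):

```latex
% One-phase Stefan condition: the latent heat absorbed at the moving
% front x = s(t) balances the conductive heat flux into the interface.
\rho L \frac{\mathrm{d}s}{\mathrm{d}t}
  = -k \left. \frac{\partial u}{\partial x} \right|_{x = s(t)^{-}},
\qquad u_t = \alpha\, u_{xx} \quad \text{for } 0 < x < s(t).
```

In the two-phase case the right-hand side becomes the jump in conductive flux across the interface, which is the boundary condition the moving-mesh algorithm must implement at the phase front.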

Relevance: 100.00%

Abstract:

We introduce a model for a pair of nonlinear evolving networks, defined over a common set of vertices, subject to edgewise competition. Each network may grow new edges spontaneously or through triad closure. Both networks inhibit the other's growth and encourage the other's demise. These nonlinear stochastic competition equations yield to a mean field analysis resulting in a nonlinear deterministic system. There may be multiple equilibria, and bifurcations of different types are shown to occur within a reduced parameter space. This situation models competitive peer-to-peer communication networks such as BlackBerry Messenger displacing SMS, or instant messaging displacing email.
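The displacement behaviour described above can be illustrated numerically. The paper's own mean-field equations are not reproduced here; the sketch below uses a generic Lotka-Volterra-style competition pair as a stand-in, integrated with forward Euler, to show how strong mutual inhibition produces winner-takes-all equilibria:

```python
# Illustrative mean-field competition dynamics (stand-in equations,
# not those of the paper). x and y are the edge densities of two
# networks that each inhibit the other's growth.

def simulate(x0, y0, a=1.0, b=1.5, c=1.0, d=1.5, dt=0.01, steps=5000):
    """Forward-Euler integration of a competition ODE pair."""
    x, y = x0, y0
    for _ in range(steps):
        dx = x * (a - x - b * y)   # growth limited by self and rival
        dy = y * (c - y - d * x)
        x = max(x + dt * dx, 0.0)  # densities stay non-negative
        y = max(y + dt * dy, 0.0)
    return x, y

x, y = simulate(0.6, 0.3)
# With inhibition coefficients b, d > 1 the coexistence equilibrium is
# unstable: the initially denser network displaces the other.
```

The bifurcation mentioned in the abstract corresponds to crossing the parameter boundary where coexistence loses stability and these exclusive equilibria appear.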

Relevance: 100.00%

Abstract:

Fully-connected mesh networks, which can potentially be employed in a range of applications, are inherently associated with major deficiencies in interference management and network capacity improvement. The tree-connected (routing-based) mesh networks used in today's applications have major deficiencies in routing delays and reconfiguration delays at the implementation stage. This paper introduces a CDMA-based fully-connected mesh network which controls the transmission powers of the nodes in order to ensure that the communication channels remain interference-free and to minimize energy consumption. Moreover, bounds on the number of nodes and the spatial configuration are provided to ensure that the communication links satisfy the QoS (Quality of Service) requirements at all times.
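Transmit-power control of the kind described above is commonly done with the classic Foschini-Miljanic iteration, in which each link scales its power by the ratio of target to measured SINR; the paper's own control law may differ, and the gain matrix, noise level and SINR target below are invented for illustration:

```python
# Illustrative distributed power control in the Foschini-Miljanic style.
# GAIN[i][j] is the (hypothetical) channel gain from transmitter j to
# receiver i; each link updates only from its own measured SINR.

GAIN = [
    [1.0, 0.1, 0.1],
    [0.1, 1.0, 0.1],
    [0.1, 0.1, 1.0],
]
NOISE, TARGET = 0.01, 4.0

def sinr(p, i):
    """Signal-to-interference-plus-noise ratio of link i at powers p."""
    interference = sum(GAIN[i][j] * p[j] for j in range(len(p)) if j != i)
    return GAIN[i][i] * p[i] / (NOISE + interference)

p = [1.0, 1.0, 1.0]
for _ in range(50):
    p = [p[i] * TARGET / sinr(p, i) for i in range(len(p))]
# When the target is feasible, all links converge to exactly the target
# SINR at the minimal power vector that achieves it.
```

The bounds on node count and spatial configuration mentioned in the abstract play the role of the feasibility condition: if the target SINR is infeasible for the given gains, this iteration diverges instead of converging.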

Relevance: 100.00%

Abstract:

Nowadays, online social networks (OSN) have gained considerable popularity. More and more people use OSNs to share their interests and make friends, and OSNs help users overcome geographical barriers. With the development of OSNs, users face an important problem: trust evaluation. Before making friends with a stranger, a user needs to consider the following issues: Can the stranger be trusted? How much can the stranger be trusted? How can the trust of a stranger be measured? In this paper, we take two factors, Degree and Contact Interval, into consideration to produce a new trust evaluation model (T-OSN). T-OSN aims to solve how to evaluate the trust value of an OSN user in a way that is efficient, reliable and easy to implement. Based on our research, this model can be used in a wide range of settings, such as online social network (OSN) trust evaluation, mobile network message forwarding, ad hoc wireless networking, routing messages on the Internet and peer-to-peer file sharing networks. The T-OSN model has the following advantages compared to other trust evaluation methods. First of all, it is not based on features of traditional social networks, such as distance and shortest path. We chose special features of OSNs to build the model, namely the number of friends (Degree) and the contact frequency (Contact Interval). These features make our model more suitable for evaluating the trust values of OSN users. Second, the formulations of our model are quite simple but effective, which means that calculating the result with our formulations does not cost too many resources. Last but not least, our model is easy to implement for an OSN website, because the features used in the model, such as the number of friends and contact frequency, are easy to obtain.
To sum up, our model uses few resources to obtain a valuable trust value that can help OSN users solve an important security problem, which we believe will be a big step for the development of OSNs.
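The two T-OSN factors combine naturally into a score that rises with Degree and falls as the Contact Interval grows. The weighting below is NOT the paper's formula (which is not reproduced in this abstract); it is a hypothetical illustration of the intended monotonic behaviour:

```python
# Hypothetical trust score over the two T-OSN factors: Degree (number
# of friends) and Contact Interval (days between interactions).
# The functional form and constants are illustrative assumptions.
import math

def trust(degree, contact_interval_days, w=0.5):
    """Score in (0, 1): more friends and more frequent contact -> higher."""
    degree_score = degree / (degree + 10.0)                  # saturates
    contact_score = math.exp(-contact_interval_days / 7.0)   # weekly decay
    return w * degree_score + (1 - w) * contact_score

# A well-connected, frequently contacted user scores higher than a
# sparsely connected, rarely contacted one.
a = trust(degree=200, contact_interval_days=1)
b = trust(degree=3, contact_interval_days=30)
```

The saturating degree term reflects the abstract's point that both features are cheap to obtain from an OSN site, while keeping any single feature from dominating the score.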