Abstract:
The development of the Internet, and in particular of social networks, has arguably given a new perspective on the different aspects that surround human behavior, including those associated with addictions, specifically the ones related to technologies. Following a correlational descriptive design, we present the results of a study on addiction to the Internet, and in particular to social networks, whose participants were university students from the Social and Legal Sciences. The sample comprised 373 participants from the cities of Granada, Sevilla, Málaga, and Córdoba. To gather the data, a questionnaire designed by Young was translated into Spanish. The main research objective was to determine whether university students could be considered social network addicts. The most prominent result was that the participants do not consider themselves to be addicted to the Internet or to social networks; women in particular reported a greater distance from social networks. It is noteworthy that these results differ from those found in the literature review, which raises the question: are the participants in a phase of denial about their addiction?
Abstract:
In this article, we address the importance and relevance of social networks when used as an educational resource. This relevance lies in the possibility of implementing new learning resources, increasing participants' connectivity, and developing learning communities. The risks entailed by their use are also discussed, especially for students who have little technological education or who place excessive confidence in the medium. It is important to highlight that the educational use of social networks is not a simple extension or translation of students' habitual recreational use; rather, it implies an important change in the roles given to teachers as well as learners, from accommodative learning environments that only encourage memorization to environments that demand an active, reflective, collaborative, and proactive attitude and that require the development and acquisition of technological as well as social abilities, aptitudes, and values. It is also important to highlight that a correct implementation and adequate use will foster not only formal learning but also informal and non-formal learning.
Abstract:
The performance of a new pointer-based medium-access control protocol, designed to significantly improve the energy efficiency of user terminals in quality-of-service-enabled wireless local area networks, was analysed. The new protocol, the pointer-controlled slot allocation and resynchronisation protocol (PCSARe), is based on the hybrid coordination function-controlled channel access mode of the IEEE 802.11e standard. PCSARe reduces energy consumption by removing the need for power-saving stations to remain awake for channel listening. Discrete event network simulations were performed to compare the performance of PCSARe with the non-automatic power save delivery (non-APSD) and scheduled APSD (S-APSD) power-saving modes of IEEE 802.11e. The simulation results show a demonstrable improvement in energy efficiency without significant reduction in performance when using PCSARe. For a wireless network consisting of an access point and eight stations in power-saving mode, the energy saving was up to 39% when using PCSARe instead of IEEE 802.11e non-APSD. The results also show that PCSARe offers significantly reduced uplink access delay over IEEE 802.11e non-APSD, while modestly improving the uplink throughput. Furthermore, although both had the same energy consumption, PCSARe gave a 25% reduction in downlink access delay compared with IEEE 802.11e S-APSD.
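The kind of saving reported can be illustrated with a simple two-state energy model; the power draws and awake/doze durations below are assumptions for illustration, not figures from the paper:

```python
# Minimal two-state energy model for a power-saving WLAN station.
# Power values and durations are illustrative assumptions, not
# figures from the paper.

AWAKE_POWER_W = 1.4   # assumed power draw while listening/transmitting
DOZE_POWER_W = 0.045  # assumed power draw while dozing

def energy_joules(awake_s: float, doze_s: float) -> float:
    """Energy consumed over one interval split into awake/doze time."""
    return AWAKE_POWER_W * awake_s + DOZE_POWER_W * doze_s

# Without scheduled delivery the station stays awake to listen for the
# whole interval; a pointer-based scheme lets it doze outside its slot.
interval = 0.1                      # 100 ms interval (assumed)
always_awake = energy_joules(interval, 0.0)
pointer_based = energy_joules(0.02, interval - 0.02)  # assumed 20 ms awake

saving = 1.0 - pointer_based / always_awake
print(f"Energy saving: {saving:.0%}")  # illustrative, not the paper's 39%
```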
Abstract:
Key pre-distribution schemes have been proposed as a means to overcome Wireless Sensor Network constraints such as limited communication and processing power. Two sensor nodes can establish a secure link with some probability, based on the information stored in their memories, but it is not always possible for two sensor nodes to set up a secure link. In this paper, we propose a new approach that elects trusted common nodes, called "proxies", which reside on an existing secure path linking two sensor nodes. These nodes are used to relay the generated key, which is divided into parts ("nuggets") according to the number of elected proxies. Our approach has been assessed against previously developed algorithms, and the results show that our algorithm discovers proxies more quickly and finds proxies closer to both end nodes, thus producing shorter path lengths. We have also assessed the impact of our algorithm on the average time to establish a secure link when the transmitters and receivers of the sensor nodes are "ON". The results show the superiority of our algorithm in this regard. Overall, the proposed algorithm is well suited to Wireless Sensor Networks.
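The key-splitting step can be illustrated with an n-out-of-n XOR sharing, a minimal sketch assuming all nuggets must be recombined to recover the key; the abstract only states that the key is divided according to the number of proxies, so the XOR construction is an assumption:

```python
# Sketch of splitting a pairwise key into "nuggets", one per elected proxy,
# so that all nuggets are needed to recover the key (n-out-of-n XOR sharing).
# The XOR construction is an illustrative assumption, not the paper's scheme.
import os

def split_key(key: bytes, num_proxies: int) -> list[bytes]:
    """Return num_proxies nuggets whose XOR equals the key."""
    nuggets = [os.urandom(len(key)) for _ in range(num_proxies - 1)]
    last = key
    for n in nuggets:
        last = bytes(a ^ b for a, b in zip(last, n))
    nuggets.append(last)
    return nuggets

def recover_key(nuggets: list[bytes]) -> bytes:
    key = bytes(len(nuggets[0]))
    for n in nuggets:
        key = bytes(a ^ b for a, b in zip(key, n))
    return key

key = os.urandom(16)                 # 128-bit link key
nuggets = split_key(key, 3)          # three proxies on the secure path
assert recover_key(nuggets) == key   # receiver reassembles all nuggets
```

Sending each nugget along a different proxy means no single intermediate node ever sees the whole key, which is the property the proxy election is meant to exploit.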
Abstract:
Indoor wireless-network-based client localisation requires the use of a radio map relating received signal strength to specific locations. However, signal strength measurements are time consuming, expensive, and usually require unrestricted access to all parts of the building concerned. An obvious option for circumventing this difficulty is to estimate the radio map using a propagation model. This paper compares the effect of measured and simulated radio maps on the accuracy of two different methods of wireless-network-based localisation. The results presented indicate that, although the propagation model used underestimated the signal strength by up to 15 dB at certain locations, there was no significant reduction in localisation performance. In general, the difference in performance between the simulated and measured radio maps was around a 30% increase in RMS error.
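The fingerprinting step this relies on can be sketched as a nearest-neighbour search in signal space; the radio-map values below are invented for illustration, and a constant 15 dB offset stands in for the propagation model's underestimate:

```python
# Minimal RSS-fingerprinting sketch: locate a client by finding the
# radio-map entry whose signal-strength vector is closest to the
# observation. Map values and the offset experiment are illustrative.

# radio map: location (x, y) in metres -> RSS from each access point (dBm)
radio_map = {
    (0.0, 0.0): [-40.0, -62.0, -71.0],
    (5.0, 0.0): [-48.0, -55.0, -69.0],
    (0.0, 5.0): [-52.0, -60.0, -58.0],
    (5.0, 5.0): [-58.0, -50.0, -57.0],
}

def locate(observation: list[float]) -> tuple[float, float]:
    """Nearest-neighbour match in signal space."""
    return min(
        radio_map,
        key=lambda loc: sum((o - r) ** 2
                            for o, r in zip(observation, radio_map[loc])),
    )

# A simulated (propagation-model) map that underestimates RSS by a roughly
# constant offset tends to preserve the *ranking* of candidate locations,
# which is one reason accuracy degrades less than the offset suggests.
measured = [-49.0, -54.0, -70.0]
print(locate(measured))                      # -> (5.0, 0.0)
print(locate([r - 15.0 for r in measured]))  # same answer despite offset
```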
Abstract:
We present a generic Service Level Agreement (SLA)-driven service provisioning architecture, which enables dynamic and flexible bandwidth reservation schemes on a per-user or a per-application basis. Various session-level SLA negotiation schemes involving bandwidth allocation, service start time, and service duration parameters are introduced and analysed. The results show that these negotiation schemes can be utilised for the benefit of both the end user and the network provider, such as achieving the highest individual SLA optimisation in terms of Quality of Service (QoS) and price. A prototype based on an industrial agent platform has also been built to demonstrate the negotiation scenario, and this is presented and discussed.
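A session-level negotiation of this kind might be sketched as a bounded offer/counter-offer loop over bandwidth, start time, and price; the utility function, concession strategy, and thresholds below are illustrative assumptions, not the schemes analysed in the paper:

```python
# Hypothetical SLA negotiation loop: the provider counters an initial
# request, the user accepts once its utility threshold is met. All
# weights and strategies are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Offer:
    bandwidth_mbps: float
    start_delay_s: float   # how long the user waits for service start
    duration_s: float
    price: float

def user_utility(o: Offer) -> float:
    """Higher bandwidth and lower delay/price are better for the user."""
    return o.bandwidth_mbps / 10 - o.start_delay_s / 600 - o.price / 5

def provider_counter(o: Offer, load: float) -> Offer:
    """Provider defers start time and raises price under high load."""
    return Offer(o.bandwidth_mbps, o.start_delay_s + 120 * load,
                 o.duration_s, o.price * (1 + 0.2 * load))

offer = Offer(bandwidth_mbps=8, start_delay_s=0, duration_s=3600, price=2.0)
for round_no in range(1, 6):               # bounded negotiation rounds
    offer = provider_counter(offer, load=0.5)
    if user_utility(offer) >= 0.2:         # user's acceptance threshold
        print(f"accepted in round {round_no}: {offer}")
        break
else:
    print("negotiation failed; user rejects final offer")
```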
Abstract:
This article presents a novel classification of wavelet neural networks based on the orthogonality or non-orthogonality of the neurons and the type of nonlinearity employed. On the basis of this classification, different network types are studied and their characteristics illustrated by means of simple one-dimensional nonlinear examples. For multidimensional problems, which are affected by the curse of dimensionality, the idea of spherical wavelet functions is considered. The behaviour of these networks is also studied for the modelling of a low-dimensional map.
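A minimal example of the non-orthogonal variety in such a classification is a 1-D network of dilated and translated Mexican-hat wavelets with least-squares output weights; all parameter choices below are illustrative assumptions:

```python
# One-dimensional wavelet network sketch: a linear combination of dilated
# and translated Mexican-hat wavelets fitted by least squares. Grid,
# dilation, and target function are illustrative assumptions.
import numpy as np

def mexican_hat(x):
    return (1 - x**2) * np.exp(-x**2 / 2)

# target: a simple 1-D nonlinear map on [-4, 4]
x = np.linspace(-4, 4, 200)
y = np.sin(x) * np.exp(-0.1 * x**2)

# fixed grid of translations and a single dilation; only the linear
# output weights are trained, which reduces fitting to least squares
translations = np.linspace(-4, 4, 9)
dilation = 1.0
Phi = np.stack([mexican_hat((x - t) / dilation) for t in translations],
               axis=1)

weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ weights
print(f"rms error: {np.sqrt(np.mean((y - y_hat) ** 2)):.4f}")
```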
Abstract:
Data identification is a key task for any Internet Service Provider (ISP) or network administrator. As port fluctuation and encryption become more common in P2P traffic seeking to avoid identification, new strategies must be developed to detect and classify such flows. This paper introduces a new method of separating P2P and standard web traffic, based on the activity of the hosts on the network, that can be applied as part of a data mining process. Unlike other research, our method is aimed at classifying individual flows rather than just identifying P2P hosts or ports. Heuristics are analysed and a classification system proposed. The accuracy of the system is then tested using real network traffic from a core Internet router, showing over 99% accuracy in some cases. We expand on this proposed strategy to investigate its application to real-time, early classification problems. New proposals are made and the results of real-time experiments are compared with those obtained in the data mining research. To the best of our knowledge, this is the first research to use host-based flow identification to determine a flow's application within the early stages of the connection.
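One plausible host-based heuristic of this kind, sketched under the assumption that P2P hosts tend to use roughly one distinct port per peer while web traffic concentrates on a few ports (the paper's actual heuristics are not reproduced here):

```python
# Host-activity sketch: flag a flow as likely P2P when its source host's
# distinct-port count is close to its distinct-peer count. Thresholds and
# the flow tuple layout are illustrative assumptions.
from collections import defaultdict

# flows as (src_ip, dst_ip, dst_port) tuples
flows = [
    ("10.0.0.5", "93.184.216.34", 80),    # web: many peers, one port
    ("10.0.0.5", "151.101.1.69", 80),
    ("10.0.0.7", "82.10.4.1", 51413),     # P2P-like: one port per peer
    ("10.0.0.7", "91.44.2.9", 28761),
    ("10.0.0.7", "77.3.8.2", 60001),
]

peers = defaultdict(set)
ports = defaultdict(set)
for src, dst, dport in flows:
    peers[src].add(dst)
    ports[src].add(dport)

def likely_p2p(host: str) -> bool:
    """A port/peer ratio near 1 suggests P2P behaviour for this host."""
    if not peers[host]:
        return False
    return len(ports[host]) / len(peers[host]) > 0.8

for src, dst, dport in flows:
    label = "P2P" if likely_p2p(src) else "web"
    print(f"{src} -> {dst}:{dport}  classified as {label}")
```

Because the decision attaches a host-level behaviour to each individual flow, it can be evaluated on the first packets of a connection, which is what makes early, real-time classification plausible.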
Abstract:
A periodic finite-difference time-domain (FDTD) analysis is presented and applied for the first time in the study of a two-dimensional (2-D) leaky-wave planar antenna based on dipole frequency selective surfaces (FSSs). First, the effect of certain aspects of the FDTD modeling in the modal analysis of complex waves is studied in detail. Then, the FDTD model is used for the dispersion analysis of the antenna of interest. The calculated values of the leaky-wave attenuation constants suggest that, for an antenna of this type and moderate length, a significant amount of power reaches the edges of the antenna, and thus diffraction can play an important role. To test the validity of our dispersion analysis, measured radiation patterns of a fabricated prototype are presented and compared with those predicted by a leaky-wave approach based on the periodic FDTD results.
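Two standard leaky-wave relations connect such a dispersion analysis to the claims above: the beam angle follows the normalised phase constant, and the unradiated power at the antenna edge follows the attenuation constant. The numbers below are assumptions for illustration, not the paper's results:

```python
# Leaky-wave bookkeeping from a dispersion analysis:
#   sin(theta) = beta / k0          (beam angle from broadside)
#   P(L)/P(0)  = exp(-2 alpha L)    (power left at the antenna edge)
# All numerical values are illustrative assumptions.
import math

k0 = 2 * math.pi / 0.025   # free-space wavenumber at ~12 GHz (assumed)
beta = 0.35 * k0           # assumed leaky-mode phase constant
alpha = 0.02 * k0          # assumed leaky-mode attenuation constant
length = 0.10              # antenna length in metres (assumed)

theta = math.degrees(math.asin(beta / k0))   # beam angle from broadside
power_left = math.exp(-2 * alpha * length)   # unradiated power at the edge

print(f"beam peak ~ {theta:.1f} deg from broadside")
print(f"{power_left:.0%} of the power reaches the edge "
      "-> edge diffraction matters for a moderate-length antenna")
```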
Abstract:
Planar metamaterial surfaces with negative reflection phase values are proposed as ground planes in a high-gain resonant cavity antenna configuration. The antenna is formed by the metamaterial ground plane (MGP) and a superimposed metallodielectric electromagnetic band gap (MEBG) array that acts as a partially reflective surface (PRS). A single dipole positioned between the PRS and the ground is utilised as the excitation. Ray analysis is employed to describe the functioning of the antennas and to qualitatively predict the effect of the MGP on the antenna performance. By employing MGPs with negative reflection phase values, the planar antenna profile is reduced to subwavelength values (less than lambda/6) whilst maintaining high directivity. Full-wave simulations have been carried out with commercially available software (Microstripes (TM)). The effect of the finite PRS size on the antenna radiation performance (directivity and sidelobe level) is studied. A prototype has been fabricated and tested experimentally in order to validate the predictions.
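The ray-analysis argument can be made concrete with the usual cavity resonance condition, where the sum of the two reflection phases sets the cavity height; the phase values below are assumed for illustration:

```python
# Resonance condition for a PRS-over-ground-plane cavity (ray analysis):
# the round-trip phase must equal a multiple of 2*pi, giving
#   h = (phi_PRS + phi_GP) * lambda / (4*pi) + n * lambda / 2.
# Reflection phase values below are illustrative assumptions.
import math

def cavity_height(phi_prs_rad: float, phi_gp_rad: float,
                  wavelength: float, n: int = 0) -> float:
    return ((phi_prs_rad + phi_gp_rad) * wavelength / (4 * math.pi)
            + n * wavelength / 2)

lam = 0.025  # operating wavelength in metres (assumed)

# Conventional PEC ground (reflection phase pi): half-wavelength cavity.
pec = cavity_height(math.pi, math.pi, lam)
# Metamaterial ground with a negative reflection phase: subwavelength cavity.
mgp = cavity_height(math.pi, -0.5 * math.pi, lam)

print(f"PEC ground:  h = {pec / lam:.2f} lambda")
print(f"MGP ground:  h = {mgp / lam:.3f} lambda (< lambda/6)")
```

With an assumed ground reflection phase of -pi/2 the profile drops from lambda/2 to lambda/8, consistent with the sub-lambda/6 figure quoted above.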
Abstract:
The tailpipe emissions from automotive engines have been subject to steadily reducing legislative limits. This reduction has been achieved through the addition of sub-systems to the basic four-stroke engine, thereby increasing its complexity. To ensure the entire system functions correctly, each system and/or sub-system needs to be continuously monitored for the presence of any faults or malfunctions. This is a requirement detailed within the On-Board Diagnostic (OBD) legislation. To date, a physical model approach has been adopted by the automotive industry to meet the monitoring requirement of OBD legislation. However, this approach is restricted by the available knowledge base and the computational load required. A neural network technique incorporating Multivariate Statistical Process Control (MSPC) has been proposed as an alternative method of building interrelationships between the measured variables and monitoring the correct operation of the engine. Building upon earlier work on steady-state fault detection, this paper details the use of non-linear models based on an auto-associative neural network (ANN) for fault detection under transient engine operation. The theory and use of the technique are shown in this paper, with an application to the detection of air leaks within the inlet manifold system of a modern gasoline engine operated on a pseudo-drive cycle. Copyright © 2007 by ASME.
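The residual-based monitoring idea can be sketched with a squared-prediction-error (SPE) test; a linear PCA reconstruction is used here as a simplified stand-in for the auto-associative neural network, and the data and threshold are invented for illustration:

```python
# Sketch of MSPC-style fault detection: learn a compressed reconstruction
# of fault-free data, then flag samples whose squared prediction error
# (SPE) exceeds a threshold set on fault-free data. PCA stands in for the
# auto-associative network; all data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# fault-free training data: three correlated "sensor" channels
t = rng.normal(size=(500, 1))
normal = np.hstack([t, 2 * t, -t]) + 0.05 * rng.normal(size=(500, 3))

mean = normal.mean(axis=0)
X = normal - mean
_, _, vt = np.linalg.svd(X, full_matrices=False)
P = vt[:1].T                      # principal subspace of normal operation

def spe(sample: np.ndarray) -> float:
    """Squared prediction error of the auto-associative reconstruction."""
    x = sample - mean
    return float(np.sum((x - P @ (P.T @ x)) ** 2))

threshold = np.quantile([spe(x) for x in normal], 0.99)

healthy = np.array([1.0, 2.0, -1.0])
air_leak = np.array([1.0, 2.0, 0.2])   # one channel breaks the correlation
print(spe(healthy) < threshold)         # True  -> no fault flagged
print(spe(air_leak) > threshold)        # True  -> fault flagged
```

A fault such as an inlet-manifold air leak shows up not as an out-of-range sensor value but as a break in the learned correlation structure, which is exactly what the reconstruction residual detects.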
Abstract:
In this paper, the compression of multispectral images is addressed. Such 3-D data are characterized by a high correlation across the spectral components. The efficiency of the state-of-the-art wavelet-based coder 3-D SPIHT is considered. Although the 3-D SPIHT algorithm provides the obvious way to process a multispectral image as a volumetric block and, consequently, maintains the attractive properties exhibited in 2-D (excellent performance, low complexity, and embeddedness of the bit-stream), its 3-D tree structure is shown to be inadequately suited to 3-D discrete wavelet transformed (DWT) multispectral images. The fact that each parent has eight children in the 3-D structure considerably increases the list of insignificant sets (LIS) and the list of insignificant pixels (LIP), since the partitioning of any set produces eight subsets which will be processed similarly during the sorting pass. Thus, a significant portion of the overall bit budget is wasted sorting insignificant information. Through an analysis of the results, we demonstrate that a straightforward 2-D SPIHT technique, when suitably adjusted to maintain rate scalability and carried out in the 3-D DWT domain, overcomes this weakness. In addition, a new SPIHT-based scalable multispectral image compression algorithm exploits, in its initial iterations, the redundancies within each group of two consecutive spectral bands. Numerical experiments on a number of multispectral images have shown that the proposed scheme provides significant improvements over related works.
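The root cause identified above is the branching factor of the spatial orientation tree; a small sketch of the usual 2-D versus 3-D parent-child conventions makes the growth in sets to be sorted explicit (coordinate conventions assumed, not the authors' code):

```python
# Why the 3-D tree inflates the sorting lists: a 2-D SPIHT parent has 4
# children, the 3-D extension has 8, so each set-partitioning step emits
# twice as many subsets to test for significance.

def children_2d(x: int, y: int):
    """Four children of a 2-D wavelet coefficient at (x, y)."""
    return [(2 * x + dx, 2 * y + dy) for dx in (0, 1) for dy in (0, 1)]

def children_3d(x: int, y: int, z: int):
    """Eight children of a 3-D wavelet coefficient at (x, y, z)."""
    return [(2 * x + dx, 2 * y + dy, 2 * z + dz)
            for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]

print(len(children_2d(1, 1)))     # 4 subsets per partitioning in 2-D
print(len(children_3d(1, 1, 1)))  # 8 subsets per partitioning in 3-D

# Descendant counts after d generations grow as 4**d vs 8**d, so at low
# bit rates far more insignificant sets/pixels populate the LIS and LIP.
for d in range(1, 4):
    print(d, 4 ** d, 8 ** d)
```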
Abstract:
The traditional Time Division Multiple Access (TDMA) protocol provides deterministic, periodic, collision-free data transmissions. However, TDMA lacks flexibility and exhibits low efficiency in dynamic environments such as wireless LANs. On the other hand, contention-based MAC protocols such as the IEEE 802.11 DCF are adaptive to network dynamics but are generally inefficient in heavily loaded or large networks. To take advantage of both types of protocols, the D-CVDMA protocol is proposed. It is based on the k-round elimination contention (k-EC) scheme, which provides fast contention resolution for wireless LANs. D-CVDMA uses a contention mechanism to achieve TDMA-like collision-free data transmissions without needing to reserve time slots for forthcoming transmissions. These features make D-CVDMA robust and adaptive to network dynamics such as nodes leaving and joining and changes in packet size and arrival rate, which in turn makes it suitable for the delivery of hybrid traffic including multimedia and data content. Analyses and simulations demonstrate that D-CVDMA outperforms the IEEE 802.11 DCF and k-EC in terms of network throughput, delay, jitter, and fairness.
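An elimination contention round of the kind k-EC builds on can be sketched as follows; the burst probability and round count are illustrative assumptions rather than the k-EC parameters:

```python
# Sketch of k-round elimination contention: in each round every surviving
# contender independently transmits a burst with probability p or listens;
# listeners who hear any burst withdraw. A few rounds whittle a large
# contender set down to one or a few survivors, which is why resolution
# is fast. Parameters p and k are illustrative assumptions.
import random

def eliminate(contenders: list[str], k: int = 3, p: float = 0.5) -> list[str]:
    survivors = list(contenders)
    for _ in range(k):
        bursting = [n for n in survivors if random.random() < p]
        if bursting:               # listeners are defeated by any burst
            survivors = bursting
    return survivors

random.seed(1)
nodes = [f"sta{i}" for i in range(16)]
print(eliminate(nodes))  # typically 1-3 survivors after 3 rounds
```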