39 results for role-based access control
Abstract:
Various flexible mechanisms for quality of service (QoS) provisioning have been specified for uplink traffic at the medium access control (MAC) layer in the IEEE 802.16 standards. Among these mechanisms, the contention-based bandwidth request scheme can be used to indicate bandwidth demands to the base station for the non-real-time polling and best-effort services. These two services are used for most applications with unknown traffic characteristics. Owing to the diverse QoS requirements of these applications, service differentiation (SD) is anticipated over the contention-based bandwidth request scheme. In this paper we investigate SD with the bandwidth request scheme by assigning different channel access parameters and bandwidth allocation priorities at different packet arrival probabilities. The effectiveness of the differentiation schemes is evaluated by simulations. It is observed that the initial backoff window can be efficient for SD, and that combining it with the bandwidth allocation priority further improves SD performance.
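To make the backoff-window differentiation idea concrete, the following is a minimal illustrative sketch (not the paper's simulator): two service classes contend in the bandwidth-request slots of each frame, and the higher-priority class is given a smaller initial backoff window. All class names, parameter names and values are assumptions for illustration only.

```python
import random

SLOTS_PER_FRAME = 8                       # contention slots per frame (assumed)
FRAMES = 20_000
ARRIVAL_P = {"nrtPS": 0.2, "BE": 0.2}     # per-frame request arrival probability
W0 = {"nrtPS": 4, "BE": 32}               # initial backoff window: the SD knob
W_MAX = 128

class Station:
    def __init__(self, cls):
        self.cls, self.pending = cls, False

    def new_request(self, frame):
        if not self.pending and random.random() < ARRIVAL_P[self.cls]:
            self.pending, self.t_req = True, frame
            self.window = W0[self.cls]
            self.backoff = random.randrange(self.window)

stations = [Station("nrtPS") for _ in range(10)] + [Station("BE") for _ in range(10)]
grants = {"nrtPS": 0, "BE": 0}
delay = {"nrtPS": 0, "BE": 0}

for frame in range(FRAMES):
    for s in stations:
        s.new_request(frame)
    # stations whose backoff counter expires inside this frame transmit in that slot
    tx_by_slot = {}
    for s in stations:
        if s.pending and s.backoff < SLOTS_PER_FRAME:
            tx_by_slot.setdefault(s.backoff, []).append(s)
    transmitted = set()
    for slot, txs in tx_by_slot.items():
        transmitted.update(txs)
        if len(txs) == 1:                      # request heard by the base station
            s = txs[0]
            s.pending = False
            grants[s.cls] += 1
            delay[s.cls] += frame - s.t_req
        else:                                  # collision: double window, redraw backoff
            for s in txs:
                s.window = min(2 * s.window, W_MAX)
                s.backoff = random.randrange(s.window)
    for s in stations:                         # unexpired backoff carries over
        if s.pending and s not in transmitted:
            s.backoff -= SLOTS_PER_FRAME

for cls in ("nrtPS", "BE"):
    print(f"{cls}: {grants[cls]} granted, mean access delay "
          f"{delay[cls] / max(grants[cls], 1):.2f} frames")
```

Run as-is, the smaller initial window of the higher-priority class shows up as a shorter mean access delay, which is the differentiation effect the abstract refers to.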
Abstract:
With its low-power operation and flexible networking capabilities, IEEE 802.15.4 has been widely regarded as a strong candidate communication technology for wireless sensor networks (WSNs). With an increasing number of deployments of 802.15.4-based WSNs, multiple WSNs may coexist with full or partial overlap in residential or enterprise areas. As WSNs are usually deployed without coordination, communication can suffer significant degradation under the 802.15.4 channel access scheme, which has a large impact on system performance. In this thesis we investigate the effectiveness of 802.15.4 networks supporting WSN applications in various environments, especially when hidden terminals are present due to the uncoordinated coexistence problem. Both analytical models and system-level simulators are developed to analyse the performance of the random access scheme specified by the IEEE 802.15.4 medium access control (MAC) standard for several network scenarios. The first part of the thesis investigates the effectiveness of a single 802.15.4 network supporting WSN applications. A Markov chain based analytic model is applied to model the MAC behaviour of the IEEE 802.15.4 standard, and a discrete event simulator is also developed to analyse the performance and verify the proposed analytical model. It is observed that 802.15.4 networks can sufficiently support most WSN applications with their various functionalities. Following the investigation of a single network, the uncoordinated coexistence problem of multiple 802.15.4 networks deployed with fully or partially overlapping communication ranges is investigated in the next part of the thesis. Both non-sleep and sleep modes are investigated under different channel conditions by analytic and simulation methods to obtain a comprehensive performance evaluation. It is found that the uncoordinated coexistence problem can significantly degrade the performance of 802.15.4 networks, which is then unlikely to satisfy the QoS requirements of many WSN applications. The proposed analytic model is validated by simulations and can be used to obtain optimal parameter settings before WSN deployment to mitigate interference risks.
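The hidden-terminal effect behind the coexistence problem can be illustrated with a toy Monte-Carlo sketch (illustrative assumptions only, not the thesis's analytic model): a receiver R hears both sender S and a hidden node H from a coexisting network, but S and H cannot hear each other, so S's clear channel assessment (CCA) never detects H. The packet lengths and start probabilities below are assumed values.

```python
import random

SLOTS = 200_000
PKT_LEN = 6      # packet duration in backoff slots (assumed)
P_TX_S = 0.02    # per-slot start probability for S when idle (assumed)

def success_ratio(p_tx_hidden):
    s_left = h_left = 0          # remaining slots of an ongoing packet
    s_corrupted = False
    sent = ok = 0
    for _ in range(SLOTS):
        if h_left == 0 and random.random() < p_tx_hidden:
            h_left = PKT_LEN     # H transmits regardless of S (it cannot sense S)
        if s_left == 0 and random.random() < P_TX_S:
            s_left, s_corrupted, sent = PKT_LEN, False, sent + 1
        if s_left > 0 and h_left > 0:
            s_corrupted = True   # any overlap at the receiver corrupts S's packet
        if s_left > 0:
            s_left -= 1
            if s_left == 0 and not s_corrupted:
                ok += 1
        if h_left > 0:
            h_left -= 1
    return ok / max(sent, 1)

for p in (0.0, 0.01, 0.02, 0.05):
    print(f"hidden-node load {p:.2f}: success ratio {success_ratio(p):.3f}")
```

Even modest load from the hidden network sharply reduces the delivery ratio, which is the degradation mechanism the thesis quantifies with its analytic model and simulator.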
Abstract:
Distributed network utility maximization (NUM) is receiving increasing interest for cross-layer optimization problems in multihop wireless networks. Traditional distributed NUM algorithms rely heavily on feedback information between different network elements, such as traffic sources and routers. Because of the distinct features of multihop wireless networks, such as time-varying channels and dynamic network topology, the feedback information is usually inaccurate, which represents a major obstacle to applying distributed NUM to wireless networks. The questions to be answered include whether a distributed NUM algorithm can converge with inaccurate feedback and how to design an effective distributed NUM algorithm for wireless networks. In this paper, we first use the infinitesimal perturbation analysis technique to provide an unbiased gradient estimate of the aggregate rate of traffic sources at the routers based on locally available information. On that basis, we propose a stochastic approximation algorithm to solve the distributed NUM problem with inaccurate feedback. We then prove that the proposed algorithm converges to the optimum solution of distributed NUM with perfect feedback under certain conditions. The proposed algorithm is applied to the joint rate and medium access control problem for wireless networks. Numerical results demonstrate the convergence of the proposed algorithm. © 2013 John Wiley & Sons, Ltd.
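A minimal stochastic-approximation sketch of distributed NUM with noisy feedback is shown below (illustrative, not the paper's algorithm): log-utility sources and capacity-constrained links run a price-based dual update, each link sees only a noisy estimate of its aggregate rate, and diminishing Robbins-Monro step sizes let the iteration converge anyway. The topology, weights and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[1, 1, 0],          # routing matrix: R[l, s] = 1 if source s uses link l
              [0, 1, 1]], dtype=float)
c = np.array([1.0, 2.0])          # link capacities
w = np.array([1.0, 2.0, 1.0])     # log-utility weights, U_s(x) = w_s * log(x)
L, S = R.shape

lam = np.ones(L)                  # link prices (dual variables)
for k in range(1, 20_001):
    q = R.T @ lam                 # path price seen by each source
    x = w / np.maximum(q, 1e-6)   # source rate maximising w_s*log(x) - q_s*x
    y = R @ x                     # true aggregate link rates
    y_noisy = y + rng.normal(0, 0.1, size=L)    # inaccurate feedback at the links
    step = 1.0 / k                # Robbins-Monro steps: sum = inf, sum of squares < inf
    lam = np.maximum(lam + step * (y_noisy - c), 0.0)

print("rates:", np.round(x, 3), " link loads:", np.round(R @ x, 3))
```

The diminishing step size averages out the feedback noise over iterations, which is the intuition behind the convergence result stated in the abstract.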
Abstract:
Link quality-based rate adaptation has been widely used for IEEE 802.11 networks. However, network performance is affected by both link quality and random channel access. Selecting transmit modes for optimal link throughput can cause medium access control (MAC) throughput loss. In this paper, we investigate this issue and propose a generalised cross-layer rate adaptation algorithm that jointly considers link quality and channel access to optimise network throughput. The objective is to examine the potential benefits of cross-layer design. An efficient analytic model is proposed to evaluate rate adaptation algorithms under dynamic channel and multi-user access environments. The proposed algorithm is compared to a link throughput optimisation-based algorithm. It is found that rate adaptation optimising link-layer throughput alone can result in a large performance loss, which cannot be compensated for by optimising the MAC access mechanism alone. Results show that cross-layer design can achieve consistent and considerable performance gains of up to 20%, and it deserves to be exploited in practical designs for IEEE 802.11 networks.
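The gap between link-optimal and MAC-optimal mode selection can be shown with a toy comparison (all numbers are illustrative assumptions, not the paper's model): the mode maximising link throughput rate*(1-PER) can differ from the mode maximising MAC goodput once fixed per-frame overheads (backoff, DIFS/SIFS, ACK) are included, and closing that gap is what a cross-layer rate-adaptation rule does.

```python
PAYLOAD_BITS = 12_000
T_OVERHEAD_US = 300.0          # contention + preamble + SIFS + ACK, rate-independent (assumed)

# (PHY rate in Mbit/s, frame error rate at the current SNR) -- assumed values
modes = [(6.0, 0.00), (24.0, 0.10), (54.0, 0.45)]

def link_throughput(rate, per):
    return rate * (1.0 - per)

def mac_goodput(rate, per):
    t_frame_us = PAYLOAD_BITS / rate + T_OVERHEAD_US
    return (1.0 - per) * PAYLOAD_BITS / t_frame_us   # Mbit/s

best_link = max(modes, key=lambda m: link_throughput(*m))
best_mac = max(modes, key=lambda m: mac_goodput(*m))
for rate, per in modes:
    print(f"{rate:4.0f} Mbit/s  link={link_throughput(rate, per):6.2f}  "
          f"MAC={mac_goodput(rate, per):6.2f}")
print("link-optimal mode:", best_link[0], "Mbit/s;  MAC-optimal mode:", best_mac[0], "Mbit/s")
```

With these assumed numbers the link-optimal mode is 54 Mbit/s but the MAC-optimal mode is 24 Mbit/s, illustrating how optimising link throughput alone can lose MAC throughput.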
Abstract:
Fiber to the premises has promised to increase the capacity of telecommunications access networks for well over 30 years. While it is widely recognized that optical-fiber-based access networks will be a necessity in the short- to medium-term future, their large upfront cost and regulatory issues are pushing many operators to further postpone deployment, installing instead intermediate, unambitious solutions such as fiber to the cabinet. Such high investment cost for both network access and core capacity upgrades often derives from poor planning strategies that do not consider the need to adequately modify the network architecture to fully exploit the cost benefit that a fiber-centric solution can bring. DISCUS is a European Framework 7 Integrated Project that, building on optical-centric solutions such as long-reach passive optical access and a flat optical core, aims to deliver a cost-effective architecture for ubiquitous broadband services. DISCUS analyzes, designs, and demonstrates end-to-end architectures and technologies capable of saving cost and energy by reducing the number of electronic terminations in the network and sharing the deployment costs among a larger number of users compared to current fiber access systems. This article describes the network architecture and the supporting technologies behind DISCUS, giving an overview of the concepts and methodologies that will be used to deliver our end-to-end network solution. © 2013 IEEE.
Abstract:
Background: Over the last decade the use of ECG recordings in biometric recognition studies has increased. ECG characteristics make it suitable for subject identification: it is unique, present in all living individuals, and hard to forge. However, in spite of the great number of approaches found in the literature, no agreement exists on the most appropriate methodology. This study aimed at providing a survey of the techniques used so far in ECG-based human identification. Specifically, a pattern recognition perspective is proposed, providing a unifying framework to appreciate previous studies and, hopefully, guide future research. Methods: We searched for papers on the subject from the earliest available date using relevant electronic databases (Medline, IEEEXplore, Scopus, and Web of Knowledge). The following terms were used in different combinations: electrocardiogram, ECG, human identification, biometric, authentication and individual variability. The electronic sources were last searched on 1 March 2015. Our selection included published research in peer-reviewed journals, book chapters and conference proceedings. The search was limited to English language documents. Results: 100 pertinent papers were found. The number of subjects involved in the journal studies ranges from 10 to 502, ages range from 16 to 86, and male and female subjects are generally present. The number of analysed leads varies, as do the recording conditions. Identification performance differs widely, as does verification rate. Many studies refer to publicly available databases (the Physionet ECG database repository) while others rely on proprietary recordings, making them difficult to compare. As a measure of overall accuracy we computed a weighted average of the identification rate and of the equal error rate in authentication scenarios. The identification rate was 94.95% while the equal error rate was 0.92%. Conclusions: Biometric recognition is a mature field of research. Nevertheless, the use of physiological signal features, such as ECG traits, needs further improvement. ECG features have the potential to be used in daily activities such as access control and patient handling as well as in wearable electronics applications. However, some barriers still limit its growth. Further analysis should address the use of single-lead recordings and the study of features which are not dependent on the recording sites (e.g. fingers, hand palms). Moreover, it is expected that new techniques will be developed using fiducial and non-fiducial based features in order to catch the best of both approaches. ECG recognition in pathological subjects is also worthy of additional investigation.
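A hedged sketch of the pooling step described above: an overall accuracy figure is obtained as a weighted average of per-study identification rates, weighting each study by its number of subjects. The study figures below are made-up placeholders, not data from the survey.

```python
studies = [           # (number of subjects, identification rate in %) -- placeholder values
    (50, 97.0),
    (200, 95.5),
    (30, 89.0),
]
total_subjects = sum(n for n, _ in studies)
weighted_id_rate = sum(n * rate for n, rate in studies) / total_subjects
print(f"weighted identification rate: {weighted_id_rate:.2f}%")
```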
Abstract:
The multiple-input multiple-output (MIMO) technique can be used to improve the performance of ad hoc networks. Various medium access control (MAC) protocols with multiple contention slots have been proposed to exploit spatial multiplexing and increase the transport throughput of MIMO ad hoc networks. However, the existence of multiple request-to-send/clear-to-send (RTS/CTS) contention slots represents a severe overhead that limits the improvement in transport throughput achieved by spatial multiplexing. In addition, when the number of contention slots is fixed, the efficiency of RTS/CTS contention is affected by the transmit power of the network nodes. In this study, a joint optimisation scheme over both transmit power and the number of contention slots for maximising the transport throughput is presented. This includes the establishment of an analytical model of a simplified MAC protocol with multiple contention slots, the derivation of transport throughput as a function of both transmit power and the number of contention slots, and the optimisation process based on the derived transport throughput formula. The analytical results obtained, verified by simulation, show that much higher transport throughput can be achieved using the proposed joint optimisation scheme, compared with the non-optimised cases and the results previously reported.
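The structure of the joint optimisation can be sketched with a toy grid search (every formula and constant below is an illustrative assumption, not the paper's analytical model): transmit power sets the range and hence how many neighbours contend, the number of RTS/CTS contention slots trades reservation success against overhead, and transport throughput, taken here as delivered rate times hop distance, is maximised over the two jointly.

```python
import numpy as np

M = 4                      # antennas per node -> max parallel streams (assumed)
DENSITY = 5e-4             # nodes per square metre (assumed)
P_ATTEMPT = 0.25           # per-contender attempt probability per slot (assumed)
T_SLOT, T_DATA = 0.1, 1.0  # contention-slot and data-phase durations (normalised)
ALPHA = 4.0                # path-loss exponent

def transport_throughput(power, k_slots):
    rng_m = 50.0 * power ** (1.0 / ALPHA)                    # range grows slowly with power
    n = max(DENSITY * np.pi * rng_m ** 2, 1.0)               # mean number of contenders
    per_slot = n * P_ATTEMPT * (1 - P_ATTEMPT) ** (n - 1)    # one winner in a slot
    streams = min(k_slots * per_slot, M)                     # reservations usable by MIMO
    efficiency = T_DATA / (k_slots * T_SLOT + T_DATA)        # contention overhead
    return streams * efficiency * rng_m                      # rate x distance (normalised)

powers = np.linspace(0.5, 8.0, 60)
slots = range(1, 13)
best = max(((transport_throughput(p, k), p, k) for p in powers for k in slots))
print(f"best transport throughput {best[0]:.2f} at power {best[1]:.2f}, {best[2]} slots")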
Abstract:
This research describes the development of a groupware system which adds security services to a Computer Supported Cooperative Work system operating over the Internet. The security services use cryptographic techniques to provide a secure access control service and an information protection service. These security services are implemented as protection layers for the groupware system, called the External Security Layer (ESL) and the Internal Security Layer (ISL) respectively. The security services are sufficiently flexible to allow the groupware system to operate in both synchronous and asynchronous modes. The groupware system developed, known as Secure Software Inspection Groupware (SecureSIG), provides security for a distributed group performing software inspection. SecureSIG extends previous work on developing flexible software inspection groupware (FlexSIG) (Sahibuddin, 1999): the SecureSIG model extends the FlexSIG model, and the prototype system was added to the FlexSIG prototype. The prototype was built by integrating existing software, communication and cryptography tools and technology; the Java Cryptography Extension (JCE) and Internet technology were used to build the prototype. To test the suitability and transparency of the system, an evaluation was conducted using a questionnaire to assess user acceptability.
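A hedged Python analogue of the two-layer idea is sketched below (the original prototype used Java and the JCE; the role names, actions and helper functions here are illustrative assumptions): an outer access-control check gates what each participant may do in an inspection session, and an inner layer encrypts the shared artefact so it is protected in transit and at rest.

```python
from cryptography.fernet import Fernet

ROLE_PERMISSIONS = {            # outer layer: who may do what (assumed roles and actions)
    "moderator": {"read", "annotate", "close"},
    "inspector": {"read", "annotate"},
    "observer": {"read"},
}

def authorise(role: str, action: str) -> None:
    if action not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"role '{role}' may not '{action}'")

# inner layer: symmetric encryption of the inspected document
session_key = Fernet.generate_key()
cipher = Fernet(session_key)

def publish(role: str, document: bytes) -> bytes:
    authorise(role, "annotate")
    return cipher.encrypt(document)          # ciphertext distributed to the group

def retrieve(role: str, token: bytes) -> bytes:
    authorise(role, "read")
    return cipher.decrypt(token)

token = publish("inspector", b"defect: missing null check in module X")
print(retrieve("observer", token).decode())
```

The separation mirrors the ESL/ISL split described in the abstract: authorisation decisions are made before any cryptographic operation on the shared content.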
Abstract:
As a central integrator of basal ganglia function, the external segment of the globus pallidus (GP) plays a critical role in the control of voluntary movement. Driven by intrinsic mechanisms and excitatory glutamatergic inputs from the subthalamic nucleus, GP neurons receive GABAergic inhibitory input from the striatum (Str-GP) and from local collaterals of neighbouring pallidal neurons (GP-GP). Here we provide electrophysiological evidence for functional differences between these two inhibitory inputs. The basic synaptic characteristics of GP-GP and Str-GP GABAergic synapses were studied using whole-cell recordings with paired-pulse and train stimulation protocols and variance-mean (VM) analysis. We found that (i) IPSC kinetics are consistent with local collaterals innervating the soma and proximal dendrites of GP neurons, whereas striatal inputs innervate more distal regions; (ii) compared to GP-GP synapses, Str-GP synapses have a greater paired-pulse ratio, indicative of a lower probability of release, which was confirmed using VM analysis; and (iii) in response to 20 and 50 Hz train stimulation, GP-GP synapses are weakly facilitatory in 1 mM external calcium and depressant in 2.4 mM calcium, in contrast to Str-GP synapses, which display facilitation under both conditions. This is the first quantitative study comparing the properties of GP-GP and Str-GP synapses. The results are consistent with the differential location of these inhibitory synapses and subtle differences in their release probability, which underpin stable GP-GP responses and robust short-term facilitation of Str-GP responses. These fundamental differences may provide the physiological basis for functional specialization.
Abstract:
As a central integrator of basal ganglia function, the external segment of the globus pallidus (GP) plays a critical role in the control of voluntary movement. The GP is composed of a network of inhibitory GABA-containing projection neurons which receive GABAergic input from axons of the striatum (Str) and local collaterals of GP neurons. Here, using electrophysiological techniques and immunofluorescent labeling, we have investigated the differential cellular distribution of α1, α2 and α3 GABAA receptor subunits in relation to striatopallidal (Str-GP) and pallidopallidal (GP-GP) synapses. Electrophysiological investigations showed that zolpidem (100 nM; selective for the α1 subunit) increased the amplitude and the decay time of both Str-GP and GP-GP IPSCs, indicating the presence of α1 subunits at both synapses. However, the application of drugs selective for the α2, α3 and α5 subunits (zolpidem at 400 nM, L-838,417 and TP003) revealed differential effects on amplitude and decay time of IPSCs, suggesting the nonuniform distribution of non-α1 subunits. Immunofluorescence revealed widespread distribution of the α1 subunit at both soma and dendrites, while double- and triple-immunofluorescent labeling for parvalbumin, enkephalin, gephyrin and the γ2 subunit indicated strong immunoreactivity for GABAA α3 subunits in perisomatic synapses, a region mainly targeted by local axon collaterals. In contrast, immunoreactivity for synaptic GABAA α2 subunits was observed in dendritic compartments where striatal synapses are preferentially located. Due to the kinetic properties which each GABAA α subunit confers, this distribution is likely to contribute differentially to both physiological and pathological patterns of activity.
Abstract:
This thesis explores the interrelationships between the labour process, the development of technology and patterns of gender differentiation. The introduction of front office terminals into building society branches forms the focus of the research. Case studies were carried out in nine branches, three each from three building societies. Statistical data for the whole movement and a survey of ten of the top thirty societies provided the context for the studies. In the process of the research it became clear that it was not technology itself but the way that it was used, that was the main factor in determining outcomes. The introduction of new technologies is occurring at a rapid pace, facilitated by continuing high growth rates, although front office technology could seldom be cost justified. There was great variety between societies in their operating philosophies and their reasons for and approach to computerisation, but all societies foresaw an ultimate saving in staff. Computerisation has resulted in the deskilling of the cashiering role and increased control over work at all stages. Some branch managers experienced a decrease in autonomy and an increase in control over their work. Subsequent to this deskilling there has been a greatly increased use of part time staff which has enabled costs to be reduced. There has also been a polarisation between career and non-career staff which, like the use of part time staff, has occurred along gender lines. There is considerable evidence that societies' policies, structures and managerial attitudes continue to directly and indirectly discriminate against women. It is these practices which confine women to lower grades and ensure their dependence on the family and which create the pool of cheap skilled labour that societies so willingly exploit by increasing part time work. Gender strategies enter management strategies throughout the operations of the organisation.
Abstract:
This thesis examines young children's early collaborative development when engaged in joint tasks with both a peer and a parent. It begins by examining how the term "collaborative" has been applied and researched in previous literature. As collaboration is found usually to require dialogue, and intersubjectivity is seen as an important component in the construction of both collaboration and dialogue, the ability to construct intersubjectivity is the subject of the rest of the chapter. The chapter concludes by introducing the research questions that underpin the experiments that follow. A number of experiments are then described. Experiments 1 and 2 investigate age differences in interaction styles and the communication strategies used by similarly aged dyads. Experiments 3 and 4 investigate differences due to the age of the child and/or the status of the information giver (either parent or child) in the styles of interaction and the communication strategies used by parent and child dyads. Experiment 5 investigates the benefits of collaborating with a parent, and finally, Experiment 6 examines the collaborative ability of pre-schoolers. The thesis identifies a series of skills required for successful collaboration. These include recognition of a joint goal and the need to suppress individual desires, the ability to structure joint interaction, movement from a role-based to a negotiating style, and communicative skills, for example asking for clarification. Other reasons for children's failure in collaborative tasks involve task-related skills, such as the development of spatial terms, and failure to recognise the need for accuracy. The findings support Vygotsky's theory that when working with an adult, children perform at a higher level than when working with a peer. Evidence was also found of parents scaffolding the interaction for their children. However, further research is necessary to establish whether such scaffolding skills affect the child's development of collaborative interactive skills.
Abstract:
FULL TEXT: Like many people, one of my favourite pastimes over the holiday season is to watch the great movies offered on the television channels and the new releases in the movie theatres, or to catch up on those DVDs that you have been wanting to watch all year. Recently we had the new ‘Star Wars’ movie, ‘The Force Awakens’, which is reckoned to become the highest grossing movie of all time, and the latest offering from James Bond, ‘Spectre’ (which included, for the car aficionados amongst you, the gorgeous new Aston Martin DB10). It is always amusing to see how vision correction or eye injury is dealt with by movie makers. Spy movies and science fiction movies have a free hand to design aliens with multiple eyes on stalks, retina-scanning door locks or goggles that can see through walls. Eye surgery is usually shown as some kind of day-case, simplified laser treatment that gives instant results, apart from the great scene in the original ‘Terminator’ movie where Arnold Schwarzenegger's android character sustains an injury to one eye and then proceeds to remove the humanoid covering of this mechanical eye over a bathroom sink. I suppose it is much more difficult to try and include contact lenses in such movies. You may recall the film ‘Charlie's Angels’, which did have a scene where one of the Angels wore a contact lens with a retinal image imprinted on it so she could by-pass a retinal scan door lock, and an Eddie Murphy spy movie, ‘I-Spy’, in which he wore contact lenses with electronic gadgetry that allowed whatever he was looking at to be beamed back to someone else, a kind of remote video camera device. Maybe we aren’t quite there in terms of the devices available, but these things are probably no longer the preserve of science fiction, as the technology does exist to put them together. The technology to incorporate electronics into contact lenses is being developed and I am sure we will be reporting on it in the near future. In the meantime we can continue to enjoy the unrealistic scenes of eye swapping as in the film ‘Minority Report’ (with Tom Cruise). Much closer to home than a galaxy far, far away, in this issue you can find articles on topics much nearer to the near future. More and more optometrists in the UK are becoming registered for therapeutic work as independent prescribers and the number is likely to rise in the near future. These practitioners will be interested in the review paper by Michael Doughty, who is a member of the CLAE editorial panel (soon to be renamed the Jedi Council!), on prescribing drugs as part of the management of chronic meibomian gland dysfunction. Contact lenses play an active role in myopia control, and orthokeratology has been used not only to help provide refractive correction but also in the retardation of myopia. In this issue there are three articles related to this topic. Firstly, an excellent paper looking at the association between higher spherical equivalent refractive errors and slower axial elongation. Secondly, a paper that discusses the effectiveness and safety of overnight orthokeratology with a high-permeability lens material. Finally, a paper that looks at the stabilisation of early adult-onset myopia. Whilst we are always eager for new and exciting developments in contact lenses and related instrumentation, in this issue of CLAE there is a demonstration of a novel and practical use of a smartphone to assist anterior segment imaging, and suggestions of how this may be used in telemedicine.
It is not hard to imagine someone taking an image remotely and transmitting it back to a central diagnostic centre, with the relevant expertise housed in one place where the information can be interpreted and instructions given back to the remote site. Back to ‘Star Wars’: you will recall in the film ‘The Phantom Menace’, when Qui-Gon Jinn first meets Anakin Skywalker on Tatooine, he takes a sample of his blood and sends a scan of it to Obi-Wan Kenobi for analysis, and they find that the boy has the highest midichlorian count ever seen. On behalf of the CLAE Editorial Board (or Jedi Council) and the BCLA Council (the Senate of the Republic) we wish for you a great 2016 and ‘may the contact lens force be with you’. Or let me put that another way: ‘the CLAE Editorial Board and BCLA Council, on behalf of, a great 2016, we wish for you!’
Abstract:
This paper investigates a cross-layer design approach for minimizing energy consumption and maximizing the network lifetime (NL) of a multiple-source and single-sink (MSSS) WSN with energy constraints. The optimization problem for an MSSS WSN can be formulated as a mixed-integer convex optimization problem with the adoption of time division multiple access (TDMA) at the medium access control (MAC) layer, and it becomes a convex problem by relaxing the integer constraint on time slots. The impacts of data rate, link access and routing are jointly taken into account in the optimization problem formulation. Both linear and planar network topologies are considered for NL maximization (NLM). For linear MSSS and planar single-source and single-sink (SSSS) topologies, we use the Karush-Kuhn-Tucker (KKT) optimality conditions to derive analytical expressions for the optimal NL when all nodes are exhausted simultaneously. The problem for the planar MSSS topology is more complicated, and a decomposition and combination (D&C) approach is proposed to compute suboptimal solutions. An analytical expression for the suboptimal NL is derived for a small-scale planar network, and an iterative algorithm is proposed for the D&C approach to deal with larger-scale planar networks. Numerical results show that the upper bounds of the network lifetime obtained by the proposed optimization models are tight. Important insights into the NL and the benefits of cross-layer design for WSN NLM are obtained.
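A small worked sketch of this kind of lifetime-maximisation formulation is given below for a two-source linear chain 2 -> 1 -> 0 (sink), where node 2 can also reach the sink directly. Using as variables the total bits carried by each link over the whole lifetime plus the lifetime T itself keeps every constraint linear; all energy and rate constants are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import linprog

E = {1: 50.0, 2: 50.0}          # initial energy per node (J), assumed
r = {1: 1.0, 2: 1.0}            # source rates (kbit/s), assumed
e_tx = {(2, 1): 1e-3, (2, 0): 4e-3, (1, 0): 1e-3}   # J per kbit on each link, assumed
e_rx = 0.5e-3                   # J per kbit received, assumed

# decision vector x = [b21, b20, b10, T]
c = [0.0, 0.0, 0.0, -1.0]       # maximise T  <=>  minimise -T
A_eq = [[1, 1, 0, -r[2]],       # node 2 flow balance: b21 + b20 = r2 * T
        [-1, 0, 1, -r[1]]]      # node 1 flow balance: b10 = r1 * T + b21
b_eq = [0.0, 0.0]
A_ub = [[e_tx[(2, 1)], e_tx[(2, 0)], 0.0, 0.0],            # node 2 energy budget
        [e_rx,          0.0,         e_tx[(1, 0)], 0.0]]   # node 1 energy budget
b_ub = [E[2], E[1]]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4)
b21, b20, b10, T = res.x
print(f"optimal lifetime T = {T:.1f} s;  relayed {b21:.1f} kbit, direct {b20:.1f} kbit")
```

With these assumed constants the optimum splits node 2's traffic between the relay and the direct path so that both nodes exhaust their energy simultaneously, mirroring the condition under which the paper derives its closed-form lifetime expressions.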
Abstract:
Recently, underwater sensor networks (UWSNs) have attracted significant research interest. Medium access control (MAC) is one of the major challenges faced by UWSNs due to the large propagation delay and narrow channel bandwidth of the acoustic communications they use. The widely used slotted Aloha (S-Aloha) protocol suffers a large performance loss in UWSNs, achieving performance only close to that of pure Aloha (P-Aloha). In this paper we theoretically model the performance of the S-Aloha and P-Aloha protocols and analyze the adverse impact of propagation delay. Based on observations of S-Aloha performance, we propose two enhanced S-Aloha protocols in order to minimize the adverse impact of propagation delay on S-Aloha. The first enhancement is a synchronized-arrival S-Aloha (SA-Aloha) protocol, in which frames are transmitted at carefully calculated times so that frame arrivals align with the start of time slots; propagation delay is taken into consideration in the calculation of transmit time. As estimation errors on propagation delay may exist and can affect network performance, an improved SA-Aloha (denoted ISA-Aloha) is proposed, which adjusts the slot size according to the range of delay estimation errors. Simulation results show that both SA-Aloha and ISA-Aloha perform remarkably better than S-Aloha and P-Aloha for UWSNs, and that ISA-Aloha is robust even when the propagation delay estimation error is large. © 2011 IEEE.
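The SA-Aloha timing rule can be illustrated with a minimal sketch (slot length and node positions are assumptions): a node at propagation delay d from the receiver chooses its transmit instant so that the frame arrives exactly on a slot boundary, rather than leaving on one as in ordinary slotted Aloha. ISA-Aloha, not shown, would additionally enlarge the slot by a guard time covering the delay-estimation error.

```python
import math

SLOT = 1.0   # slot length in seconds (assumed equal to the frame duration)

def sa_aloha_tx_time(t_ready: float, prop_delay: float) -> float:
    """Earliest transmit time >= t_ready whose arrival aligns with a slot boundary."""
    arrival = t_ready + prop_delay
    aligned_arrival = math.ceil(arrival / SLOT) * SLOT
    return aligned_arrival - prop_delay

# two nodes at very different ranges still arrive on the same slot grid
for t_ready, d in [(3.20, 0.45), (3.20, 1.30)]:
    tx = sa_aloha_tx_time(t_ready, d)
    print(f"ready {t_ready:.2f}s, delay {d:.2f}s -> transmit {tx:.2f}s, arrive {tx + d:.2f}s")
```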