24 results for body area network (BAN)

in Aston University Research Archive


Relevance: 100.00%

Abstract:

In perceptual terms, the human body is a complex 3D shape which has to be interpreted by the observer to judge its attractiveness. Both body mass and shape have been suggested as strong predictors of female attractiveness. Normally body mass and shape co-vary, and it is difficult to differentiate their separate effects. A recent study suggested that altering body mass does not modulate activity in the reward mechanisms of the brain, but shape does. However, using computer-generated female body-shaped greyscale images, based on a Principal Component Analysis (PCA) of female bodies, we were able to construct images which covary with real female body mass (indexed by body mass index, BMI) and not with body shape (indexed by waist-to-hip ratio, WHR), and vice versa. Twelve observers (6 male and 6 female) rated these images for attractiveness during an fMRI study. The attractiveness ratings were correlated with changes in BMI and not WHR. Our primary fMRI results demonstrated that, in addition to activation in higher visual areas (such as the extrastriate body area), changing BMI also modulated activity in the caudate nucleus and other parts of the brain reward system. This shows that BMI, not WHR, modulates reward mechanisms in the brain, and we infer that this may have important implications for judgements of ideal body size in eating-disordered individuals.
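A rough sketch of how such decorrelated stimuli can be built in principle, assuming PCA scores together with measured BMI and WHR are available for each body: fit a linear gradient for each variable in PC space and project the BMI gradient off the WHR gradient, leaving an axis along which predicted BMI changes while predicted WHR stays (to first order) constant. The function and the synthetic data below are illustrative only, not the authors' stimulus pipeline.

```python
import numpy as np

def bmi_only_axis(pc_scores, bmi, whr):
    """Direction in PCA space that changes predicted BMI but not predicted WHR.

    pc_scores: (n_bodies, n_components) PCA scores of the body shapes;
    bmi, whr: measured values per body.  Linear regressions give a gradient
    direction for each variable; projecting the BMI gradient off the WHR
    gradient leaves an axis that is (to first order) WHR-neutral.
    """
    X = np.column_stack([pc_scores, np.ones(len(bmi))])   # add intercept column
    bmi_dir = np.linalg.lstsq(X, bmi, rcond=None)[0][:-1]  # drop intercept term
    whr_dir = np.linalg.lstsq(X, whr, rcond=None)[0][:-1]
    whr_unit = whr_dir / np.linalg.norm(whr_dir)
    axis = bmi_dir - (bmi_dir @ whr_unit) * whr_unit        # remove WHR component
    return axis / np.linalg.norm(axis)

# Hypothetical data: 60 bodies, 5 PCA components, BMI and WHR loading on them.
rng = np.random.default_rng(0)
scores = rng.normal(size=(60, 5))
bmi = scores @ np.array([2.0, 0.5, 0.0, 0.0, 0.0]) + 22.0
whr = scores @ np.array([0.02, 0.04, 0.01, 0.0, 0.0]) + 0.7
print(bmi_only_axis(scores, bmi, whr))
```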

Relevance: 100.00%

Abstract:

Many studies have attempted to identify the different cognitive components of body representation (BR). Due to methodological issues, the data reported in these studies are often confusing. Here we summarize the fMRI data from previous studies and explore the possibility of a neural segregation between BRs that support action (body schema, BS) and those that do not (non-action-oriented body representation, NA). We performed a general activation likelihood estimation meta-analysis of 59 fMRI experiments and two individual meta-analyses to identify the neural substrates of the different BRs. Body processing involves a wide network of areas in the occipital, parietal, frontal and temporal lobes. NA selectively activates the primary somatosensory cortex and the supramarginal gyrus. BS involves the primary motor area and the right extrastriate body area. Our data suggest that motor information and recognition of body parts are fundamental to building the BS, whereas sensory information and processing of the egocentric perspective are more important for the NA. In conclusion, our results strongly support the idea that distinct, segregated neural substrates underlie body representations that are, or are not, oriented to action.

Relevance: 100.00%

Abstract:

Previous research suggests that changing consumer and producer knowledge structures play a role in market evolution and that the sociocognitive processes of product markets are revealed in the sensemaking stories of market actors that are rebroadcast in commercial publications. In this article, the authors lend further support to the story-based nature of market sensemaking and the use of the sociocognitive approach in explaining the evolution of high-technology markets. They examine the content (i.e., subject matter or topic) and volume (i.e., the number) of market stories and the extent to which both evolve as a technology emerges. Data were obtained from a content analysis of 10,412 article abstracts, published in key trade journals, pertaining to Local Area Network (LAN) technologies and spanning the period 1981 to 2000. Hypotheses concerning the evolving nature (content and volume) of market stories in technology evolution are tested. The analysis identified four categories of market stories: technical, product availability, product adoption, and product discontinuation. The findings show that the emerging technology passes initially through a 'technical-intensive' phase, in which technology-related stories dominate; then through a 'supply-push' phase, in which stories presenting products embracing the technology tend to exceed technical stories and the number of product adoption stories rises; and finally into a 'product-focus' phase, with stories predominantly focusing on product availability. Overall story volume declines as a technology matures and the need for sensemaking diminishes. When stories about product discontinuation surface, they signal the decline of the current technology. New technologies that fail to maintain the 'product-focus' stage also reflect limited market acceptance. The article also discusses the theoretical and managerial implications of the study's findings. © 2002 Elsevier Science Inc. All rights reserved.

Relevance: 100.00%

Abstract:

Groupe Spécial Mobile (GSM) has been developed as the pan-European second generation of digital mobile systems. GSM operates in the 900 MHz frequency band and employs digital technology instead of the analogue technology of its predecessors. Digital technology enables the GSM system to operate in much smaller zones than the analogue systems. The GSM system will offer greater roaming facilities to its subscribers, extended throughout the countries that have installed the system, and could be seen as a further enhancement to European integration. GSM has adopted a contention-based protocol for multipoint-to-point transmission. In particular, the slotted-ALOHA medium access protocol is used to coordinate the transmission of channel request messages between the scattered mobile stations. Collisions still happen when more than one mobile station with the same random reference number attempts to transmit in the same time-slot. In this research, a modified version of this protocol has been developed in order to reduce the number of collisions and hence increase the random access channel throughput compared to the existing protocol. The performance of the protocol has been evaluated using simulation methods. Due to the growing demand for mobile radio telephony as well as for data services, optimal usage of the scarce radio spectrum is becoming increasingly important. In this research, a protocol has been developed whereby the number of information packets transmitted over the GSM system is increased without any additional allocation of radio spectrum. Simulation results are presented to show the improvements achieved by the proposed protocol. Cellular mobile radio networks commonly respond to an increase in service demand by using smaller coverage areas; as a result, the volume of signalling exchanges increases. In this research, a proposal for interconnecting the various entities of the mobile radio network over future broadband networks based on the IEEE 802.6 Metropolitan Area Network (MAN) is outlined. Simulation results are presented to show the benefits achieved by interconnecting these entities over the broadband networks.
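As a point of reference for the random access discussion, the sketch below is a minimal Monte Carlo model of slotted ALOHA on a shared request channel: each station transmits in a slot with some probability, and a request succeeds only if it is the sole transmission in that slot. The station count and transmission probability are arbitrary illustrative values, not parameters from the thesis.

```python
import random

def slotted_aloha_throughput(n_stations=50, p_tx=0.02, n_slots=100_000, seed=1):
    """Monte Carlo estimate of slotted-ALOHA throughput.

    Each station independently transmits in a slot with probability p_tx;
    a slot carries a successful request only if exactly one station transmits.
    """
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_slots):
        transmitters = sum(1 for _ in range(n_stations) if rng.random() < p_tx)
        if transmitters == 1:
            successes += 1
    return successes / n_slots

if __name__ == "__main__":
    # Throughput peaks near 1/e (about 0.368) when the offered load G = n * p is about 1.
    print(f"estimated throughput: {slotted_aloha_throughput():.3f}")
```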

Relevance: 100.00%

Abstract:

A local area network that can support both voice and data packets offers economic advantages from using a single network for both types of traffic, greater flexibility in meeting changing user demands, and more efficient use of the transmission capacity. The latter aspect is very important in local broadcast networks, where capacity is a scarce resource, for example in mobile radio. This research has examined two types of local broadcast network: the Ethernet-type bus local area network and a mobile radio network with a central base station. In such contention networks, medium access control (MAC) protocols are required to gain access to the channel. MAC protocols must provide efficient scheduling of the channel among the distributed population of stations that want to transmit. No access scheme can exceed the performance of a single-server queue, because of the spatial distribution of the stations: stations cannot in general form a queue without using part of the channel capacity to exchange protocol information. In this research, several medium access protocols have been examined and developed in order to increase the channel throughput compared to existing protocols. However, the established performance measures of average packet time delay and throughput cannot adequately characterise protocol performance for packet voice; rather, the percentage of bits delivered within a given time bound becomes the relevant performance measure. Performance evaluation of the protocols has been carried out using discrete event simulation and, in some cases, mathematical modelling. All the protocols use either implicit or explicit reservation schemes, and their efficiency depends on the fact that many voice packets are generated periodically within a talkspurt. Two of the protocols are based on the existing 'Reservation Virtual Time CSMA/CD' protocol, which forms a distributed queue through implicit reservations. This protocol has been improved, firstly, by utilising two channels, a packet transmission channel and a packet contention channel, so that packet contention is performed in parallel with packet transmission to increase throughput. The second protocol uses variable-length packets to reduce the contention time between transmissions on a single channel. A third protocol, developed here, is based on contention for explicit reservations: once a station has achieved a reservation, it maintains this effective queue position for the remainder of the talkspurt and transmits after it has sensed the transmission from the preceding station in the queue. In the mobile radio environment, adaptations to the protocols were necessary so that their operation was robust to signal fading. This was achieved through centralised control at a base station, unlike the local area network versions, where control was distributed among the stations. The results show an improvement in throughput compared to some previous protocols. Further work includes subjective testing to validate the protocols' effectiveness.
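The delay-bound performance measure mentioned above can be made concrete with a small helper: given per-packet access delays and sizes, it reports the fraction of voice bits delivered within the bound, counting late packets as lost. This is a generic illustration of the metric, not code from the thesis; the 50 ms bound and packet sizes are hypothetical.

```python
def fraction_within_bound(packet_delays_ms, packet_bits, bound_ms=50.0):
    """Fraction of voice bits delivered within a delay bound.

    packet_delays_ms and packet_bits are parallel sequences: the access delay
    of each voice packet and the number of bits it carries.  Bits from packets
    that miss the bound are counted as lost (clipped) rather than merely late.
    """
    total = sum(packet_bits)
    on_time = sum(b for d, b in zip(packet_delays_ms, packet_bits) if d <= bound_ms)
    return on_time / total if total else 0.0

# Example: five packets of 512 bits each, one of which misses a 50 ms bound.
print(fraction_within_bound([12.0, 18.5, 49.9, 60.2, 31.0], [512] * 5))  # 0.8
```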

Relevance: 100.00%

Abstract:

Computerised production control developments have concentrated on Manufacturing Resources Planning (MRP II) systems. The literature suggests, however, that despite the massive investment in hardware, software and management education, successful implementation of such systems in manufacturing industries has proved difficult. This thesis reviews the development of production planning and control systems and, in particular, investigates the causes of failures in implementing MRP/MRP II systems in industrial environments, arguing that the centralised and top-down planning structure, as well as the routine operational methodology of such systems, is inherently prone to failure. The thesis reviews the control benefits of cellular manufacturing systems but concludes that, in more dynamic manufacturing environments, techniques such as Kanban are inappropriate. The basic shortcomings of MRP II systems are highlighted and a new, enhanced operational methodology based on distributed planning and control principles is introduced. Distributed Manufacturing Resources Planning (DMRP) was developed as a capacity-sensitive production planning and control solution for cellular manufacturing environments. The system utilises cell-based, independently operated MRP II systems, integrated into a plant-wide control system through a Local Area Network. The potential benefits of adopting the system in industrial environments are discussed, and the results of computer simulation experiments comparing the performance of the DMRP system against conventional MRP II systems are presented. The DMRP methodology is shown to offer significant potential advantages, which include ease of implementation, cost effectiveness, capacity sensitivity, shorter manufacturing lead times, lower work-in-progress levels and improved customer service.
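For context, the core calculation that each cell-based MRP II system repeats is period-by-period netting of gross requirements against the projected available balance. The sketch below shows that standard netting step in isolation; it is not the DMRP implementation itself, and the example quantities are invented.

```python
def mrp_net_requirements(gross, on_hand, scheduled_receipts):
    """Period-by-period MRP netting for a single manufacturing cell.

    gross and scheduled_receipts are lists indexed by planning period; the
    function returns the net requirement in each period after projecting the
    available balance forward.
    """
    net = []
    available = on_hand
    for period, demand in enumerate(gross):
        available += scheduled_receipts[period]
        shortfall = max(0, demand - available)   # what must be planned for this period
        net.append(shortfall)
        available = max(0, available - demand)   # carry any surplus forward
    return net

# Example: 40 units on hand, a receipt of 20 in period 2, demand of 30 per period.
print(mrp_net_requirements([30, 30, 30, 30], 40, [0, 0, 20, 0]))  # [0, 20, 10, 30]
```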

Relevance: 100.00%

Abstract:

IEEE 802.15.4 networks (also known as ZigBee networks) have the features of low data rate and low power consumption. In this paper we propose an adaptive data transmission scheme, based on the CSMA/CA access control scheme, for applications which may have heavy traffic loads, such as smart grids. In the proposed scheme, the personal area network (PAN) coordinator adaptively broadcasts a frame-length threshold, which the sensors use to decide whether a data frame should be transmitted directly to the target destination or be preceded by a short data request frame. If the data frame is long and prone to collision, use of a short data request frame can efficiently reduce the energy and bandwidth costs of a potential collision. Simulation results demonstrate the effectiveness of the proposed scheme, with largely improved bandwidth and power efficiency. © 2011 Springer-Verlag.
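A minimal sketch of the sensor-side decision rule described above, assuming the PAN coordinator has already broadcast a frame-length threshold: frames at or below the threshold are sent directly, while longer ones are preceded by a short data request frame so that a collision, if it occurs, wastes only the short frame. The threshold and frame lengths in the example are hypothetical.

```python
def choose_access_mode(frame_length_bytes, broadcast_threshold_bytes):
    """Sensor-side decision in an adaptive threshold scheme of this kind.

    Short frames are transmitted directly under CSMA/CA; frames longer than
    the coordinator's broadcast threshold are announced first with a short
    data request frame to limit the cost of a potential collision.
    """
    if frame_length_bytes <= broadcast_threshold_bytes:
        return "direct"
    return "request-first"

# Example with a hypothetical 40-byte threshold broadcast by the PAN coordinator.
for length in (18, 40, 96):
    print(length, choose_access_mode(length, 40))
```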

Relevance: 100.00%

Abstract:

We analyze, through numerical simulations, the performance of a new modulation format: the serial dark soliton (SDS) for wide-area 100-Gb/s applications. We compare the performance of the SDS with conventional dark soliton, amplitude-modulation phase-shift keying (also known as duobinary), nonreturn-to-zero, and return-to-zero modulation formats when subjected to typical wide-area-network impairments. We show that the SDS has a strong chromatic-dispersion and polarization-mode-dispersion tolerance, while maintaining a compact spectrum suitable for the strong filtering requirements of ultradense wavelength-division-multiplexing applications. The SDS can be generated using components commercially available for 40-Gb/s applications and is cost-efficient compared with other 100-Gb/s electrical-time-division-multiplexing systems.

Relevance: 30.00%

Abstract:

Over the last ten years our understanding of early spatial vision has improved enormously. The long-standing model of probability summation amongst multiple independent mechanisms with static output nonlinearities responsible for masking is obsolete. It has been replaced by a much more complex network of additive, suppressive, and facilitatory interactions and nonlinearities across eyes, area, spatial frequency, and orientation that extend well beyond the classical receptive field (CRF). A review of a substantial body of psychophysical work performed by ourselves (20 papers) and others leads us to the following tentative account of the processing path for signal contrast. The first suppression stage is monocular, isotropic, non-adaptable, accelerates with RMS contrast, is most potent for low spatial and high temporal frequencies, and extends slightly beyond the CRF. Second and third stages of suppression are difficult to disentangle but are possibly pre- and post-binocular summation, and involve components that are scale invariant, isotropic, anisotropic, chromatic, achromatic, adaptable, interocular, substantially larger than the CRF, and saturated by contrast. The monocular excitatory pathways begin with half-wave rectification, followed by a preliminary stage of half-binocular summation, a square-law transducer, full binocular summation, pooling over phase, cross-mechanism facilitatory interactions, additive noise, linear summation over area, and a slightly uncertain decision-maker. The purpose of each of these interactions is far from clear, but the system benefits from area and binocular summation of weak contrast signals as well as area and ocularity invariances above threshold (a herd of zebras doesn't change its contrast when it increases in number or when you close one eye). One of many remaining challenges is to determine the stage or stages of spatial tuning in the excitatory pathway.
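Purely to illustrate the ordering of the excitatory stages listed above (rectification, partial then full binocular summation via a square-law transducer, additive noise, summation over area), the sketch below chains simplified placeholder versions of those operations. The combination rules and constants are arbitrary stand-ins, not the authors' fitted model.

```python
import numpy as np

def excitatory_pathway(contrast_left, contrast_right, noise_sd=0.1, seed=0):
    """Chain of placeholder stages in the order listed in the abstract.

    contrast_left / contrast_right: signed contrast signals sampled over
    space for the two eyes.  Every constant and combination rule here is an
    illustrative stand-in.
    """
    rng = np.random.default_rng(seed)
    # Half-wave rectification of each monocular signal.
    rect_l = np.maximum(contrast_left, 0.0)
    rect_r = np.maximum(contrast_right, 0.0)
    # Preliminary (partial) binocular summation, then a square-law transducer.
    trans_l = (rect_l + 0.5 * rect_r) ** 2
    trans_r = (rect_r + 0.5 * rect_l) ** 2
    # Full binocular summation with additive noise.
    binoc = trans_l + trans_r + rng.normal(0.0, noise_sd, size=trans_l.shape)
    # Linear summation over area gives the decision variable.
    return float(binoc.sum())

# Example: a weak grating shown to both eyes, sampled at 16 spatial positions.
signal = np.full(16, 0.05)
print(excitatory_pathway(signal, signal))
```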

Relevance: 30.00%

Abstract:

Automatically generating maps of a measured variable of interest can be problematic. In this work we focus on the monitoring network context, where observations are collected and reported by a network of sensors and are then transformed into interpolated maps for use in decision making. Using traditional geostatistical methods, estimating the covariance structure of data collected in an emergency situation can be difficult. Variogram determination, whether by method-of-moments estimators or by maximum likelihood, is very sensitive to extreme values. Even when a monitoring network is in a routine mode of operation, sensors can sporadically malfunction and report extreme values. If these extreme data destabilise the model, causing the covariance structure of the observed data to be incorrectly estimated, the generated maps will be of little value, and the uncertainty estimates in particular will be misleading. Marchant and Lark [2007] propose a REML estimator for the covariance, which is shown to work on small data sets with manual selection of the damping parameter in the robust likelihood. We show how this can be extended to handle large data sets, together with an automated approach to all parameter estimation. The projected process kriging framework of Ingram et al. [2007] is extended to allow the use of robust likelihood functions, including the two-component Gaussian and the Huber function. We show how our algorithm is further refined to reduce the computational complexity while minimising any loss of information. To show the benefits of this method, we use data collected from radiation monitoring networks across Europe. We compare our results to those obtained from traditional kriging methodologies and include comparisons with Box-Cox transformations of the data. We discuss the issue of whether to treat or ignore extreme values, making the distinction between the robust methods, which ignore outliers, and transformation methods, which treat them as part of the (transformed) process. Using a case study based on an extreme radiological event over a large area, we show how radiation data collected from monitoring networks can be analysed automatically and then used to generate reliable maps to inform decision making. We show the limitations of the methods and discuss potential extensions to remedy these.
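The robustification relies on bounded influence functions such as Huber's: residuals in the central region contribute quadratically, extreme residuals only linearly, so a rogue sensor cannot dominate the covariance fit. A minimal illustration (using the conventional tuning constant c = 1.345) is given below; it is not the projected-process implementation used in the study.

```python
import numpy as np

def huber_psi(residuals, c=1.345):
    """Huber influence function applied to standardised residuals.

    Residuals smaller than c in magnitude are left untouched; larger ones are
    clipped, which limits the leverage of extreme sensor readings on the
    covariance (variogram) estimate.
    """
    return np.clip(np.asarray(residuals, dtype=float), -c, c)

def huber_loss(residuals, c=1.345):
    """Negative log of the Huber 'robust likelihood': quadratic in the centre,
    linear in the tails."""
    r = np.abs(np.asarray(residuals, dtype=float))
    quadratic = 0.5 * r ** 2
    linear = c * r - 0.5 * c ** 2
    return float(np.where(r <= c, quadratic, linear).sum())

# One extreme reading among routine ones contributes linearly, not quadratically.
print(huber_loss([0.1, -0.4, 0.2, 8.0]))
```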

Relevance: 30.00%

Abstract:

B-ISDN is a universal network which supports diverse mixes of services, applications and traffic. ATM has been accepted worldwide as the transport technique for future use in B-ISDN. ATM, being a simple packet-oriented transfer technique, provides a flexible means of supporting a continuum of transport rates and is efficient because network resources can be statistically shared by multiple users. In order to fully exploit this potential statistical gain while at the same time supporting diverse service and traffic mixes, efficient traffic control must be designed. Traffic controls, which include congestion and flow control, are a fundamental necessity for the success and viability of future B-ISDN. Congestion and flow control are difficult in the broadband environment because of the high link speeds, the wide-area distances, diverse service requirements and diverse traffic characteristics. Most congestion and flow control approaches in conventional packet-switched networks are reactive in nature and are not applicable in the B-ISDN environment. In this research, traffic control procedures based mainly on preventive measures for a private ATM-based network are proposed and their performance evaluated. The various traffic controls include connection admission control (CAC), traffic flow enforcement, priority control and an explicit feedback mechanism. These functions operate at call level and cell level, and are carried out distributively by the end terminals, the network access points and the internal elements of the network. During the connection set-up phase, the CAC decides whether to accept or deny a connection request and allocates bandwidth to the new connection according to one of three schemes: peak bit rate, statistical rate or average bit rate. The statistical multiplexing rate is based on a 'bufferless fluid flow model', which is simple and robust. Allocating an average bit rate to data traffic improves network bandwidth utilisation at the expense of delay.
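As an illustration of bufferless statistical allocation, the sketch below applies one common textbook approximation: each on/off source is summarised by its mean and variance, the aggregate is treated as Gaussian, and a connection mix is admitted if the sum of the means plus a tail-probability margin fits within the link. The admission rule and figures are generic, not necessarily the three schemes evaluated in this research.

```python
import math

def cac_admit(connections, link_capacity, eps=1e-6):
    """Connection admission control under a simple bufferless fluid-flow model.

    Each connection is (peak_rate, mean_rate) in Mbit/s.  On/off sources are
    summed and approximated as Gaussian: required bandwidth = sum of means
    plus a safety margin scaled by the aggregate standard deviation and the
    target overflow probability eps.
    """
    means = [m for _, m in connections]
    variances = [m * (p - m) for p, m in connections]  # variance of an on/off source
    alpha = math.sqrt(-2.0 * math.log(eps))            # Gaussian tail factor
    required = sum(means) + alpha * math.sqrt(sum(variances))
    return required <= link_capacity, required

# Example: ten bursty sources with 1.0 Mbit/s peak and 0.2 Mbit/s mean on a 34 Mbit/s link.
admitted, bandwidth = cac_admit([(1.0, 0.2)] * 10, link_capacity=34.0)
print(admitted, round(bandwidth, 2))
```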

Relevance: 30.00%

Abstract:

The density of Lewy bodies (LB), senile plaques (SP), and neurofibrillary tangles (NFT) was studied in the temporal lobe in four patients diagnosed with ‘pure’ dementia with Lewy bodies (DLB) and eight patients diagnosed with DLB with associated Alzheimer’s disease (DLB/AD). In both patient groups, the density of LB was greatest in the lateral occipitotemporal gyrus (LOT) and least in areas CA1 and CA4 of the hippocampus. In DLB/AD, the densities of SP and NFT were greatest in the cortical regions and in area CA1 of the hippocampus, respectively. Mean LB densities in the temporal lobe were similar in ‘pure’ DLB and DLB/AD patients, but mean SP and NFT densities were greater in DLB/AD. No significant correlations were observed between the densities of LB, SP and NFT in any brain region. The data suggest that in the temporal lobe LB and SP/NFT are distributed differently, that SP and NFT in DLB/AD are distributed similarly to ‘pure’ AD, and that LB and AD pathologies appear to develop independently. Hence, the data support the hypothesis that some cases of DLB combine the features of DLB and AD.

Relevance: 30.00%

Abstract:

The introduction of Regional Development Agencies (RDAs) in the English regions in 1999 presented a new set of collaborative challenges to existing local institutions. The key objectives of the new policy impetus emphasise increased joined-up thinking and holistic regional governance. Partners were enjoined to promote cross-sector collaboration and present a coherent regional voice. This study aims to evaluate the impact of an RDA on the partnership infrastructure of the West Midlands. The RDA network incorporates a wide spectrum of interests and organisations with diverse collaborative histories, competencies and capacities. The study has followed partners through the process over an eighteen-month period and has sought to explore the complexities and tensions of partnership working 'on the ground'. A strong qualitative methodology has been employed in generating 'thick descriptions' of the policy domain. The research has probed beyond the 'rhetoric' of partnerships and explores the sensitivities of the collaboration process. A number of theoretical frameworks have been employed, including policy network theory; partnership and collaboration theory; organisational learning; and trust and social capital. The structural components of the West Midlands RDA network are explored, including the structural configuration of the network and its stocks of human and social capital assets, which combine to form the asset base of the network. Three sets of network behaviours are then explored: strategy, the management of perceptions, and learning. The thesis explores how this combination of assets and behaviours affects, and is in turn affected by, each element. The findings contribute to the growing body of knowledge and understanding surrounding policy networks and collaborative governance.

Relevance: 30.00%

Abstract:

Enhancing the resilience of local communities to weather extremes has gained significant interest over the years, amidst the increased intensity and frequency of such events. The fact that such weather extremes are forecast to increase further in number and severity in the future adds extra weight to the importance of the issue. As a local community consists of a number of community groups, such as households, businesses and policy makers, the actions of these groups in combination will determine the resilience of the community as a whole. Small and Medium-sized Enterprises (SMEs), an integral segment of local communities in the UK, have an important role to play in this regard. While it is recognised that they are vital to the economy of a country and determine the prosperity of communities, they are increasingly vulnerable to the effects of extreme weather. This paper discusses some of the exploratory studies conducted in the UK on SMEs and their ability to cope with extreme weather events, specifically flooding. Although a reasonable level of awareness of the risk was observed among the SMEs, this has not always resulted in increased preparedness, even among those located in areas at risk of flooding. The attitude and the motivation to change differed widely between SMEs. The paper presents schemas by which SMEs can better identify their vulnerability, so that these can be disseminated among a community of SMEs and taken forward to inform policy making in this area. The main contribution the paper makes to the body of knowledge in the area is therefore a novel way of communicating to SMEs about improving resilience against extreme weather, which will inform some of the policy-making initiatives in the UK.

Relevance: 30.00%

Abstract:

Dynamic asset rating (DAR) is one of a number of techniques that could be used to facilitate low-carbon electricity network operation. Previous work has looked at this technique from an asset perspective. This paper instead takes a network perspective, proposing a dynamic network rating (DNR) approach. The models available for use with DAR are discussed and compared using measured load and weather data from a trial network area within Milton Keynes, in the central area of the U.K. The paper then uses the most appropriate model to investigate, through a network case study, the potential gains of dynamic rating compared to static rating for the different network assets - transformers, overhead lines, and cables. This will inform the network operator of the potential DNR gains on an 11-kV network with all assets present and highlight the limiting assets within each season.
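To give a flavour of what a dynamic rating calculation involves, the sketch below uses a deliberately simplified steady-state thermal balance for an overhead line: permissible current scales with the square root of the temperature headroom between the conductor limit and the ambient temperature. The constants are illustrative, and this is not the IEC/IEEE model set compared in the paper.

```python
import math

def ambient_adjusted_rating(static_rating_amps, ambient_c,
                            conductor_max_c=75.0, reference_ambient_c=20.0):
    """Simplified ambient-adjusted dynamic rating for an overhead line.

    Assumes joule heating proportional to I^2 balanced against cooling
    proportional to the temperature rise above ambient, so the permissible
    current scales with the square root of the available temperature headroom
    relative to the headroom at the reference (static-rating) ambient.
    """
    headroom = conductor_max_c - ambient_c
    reference_headroom = conductor_max_c - reference_ambient_c
    if headroom <= 0:
        return 0.0
    return static_rating_amps * math.sqrt(headroom / reference_headroom)

# A 400 A static rating gains headroom on a cold day and loses it on a hot one.
for ambient in (5.0, 20.0, 35.0):
    print(ambient, round(ambient_adjusted_rating(400.0, ambient), 1))
```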