35 results for Gender in Performance


Relevance:

90.00%

Publisher:

Abstract:

Wireless Mesh Networks (WMNs) have emerged as a key technology for the next generation of wireless networking. Rather than being just another type of ad-hoc networking, WMNs diversify the capabilities of ad-hoc networks. Many kinds of protocols work over WMNs, such as IEEE 802.11a/b/g, 802.15 and 802.16. To achieve high throughput under varying conditions, these protocols have to adapt their transmission rate. Although rate adaptation is a significant component, only a few algorithms, such as Auto Rate Fallback (ARF) and Receiver Based Auto Rate (RBAR), have been published. In this paper we show that MAC, packet-loss and physical-layer conditions all play an important role in maintaining good channel conditions. We also perform rate adaptation along with multiple packet transmission to improve throughput. Performance gains can be obtained by dynamically monitoring the channel, transmitting multiple packets, and adapting to changes in channel quality by adjusting the packet transmission rates according to certain optimization criteria. The proposed method detects channel congestion by measuring the fluctuation of the signal relative to its standard deviation, and detects packet loss before channel performance diminishes. We show that the use of such techniques in WMNs can significantly improve performance. The effectiveness of the proposed method is demonstrated on an experimental wireless network testbed via packet-level simulation. Our simulation results show that, regardless of the channel condition, we were able to improve throughput.
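
The abstract leaves the mechanics implicit; as a rough sketch of the ARF-style rate adaptation it builds on, with a congestion hint based on signal fluctuation, the following Python fragment may help (the rate ladder, thresholds and counters are illustrative assumptions, not taken from the paper):

```python
import statistics

# Hypothetical rate ladder (Mbit/s) for an 802.11a/g-style radio.
RATES = [6, 9, 12, 18, 24, 36, 48, 54]

class ArfLikeAdapter:
    """Minimal ARF-style rate adaptation: step the rate up after a run of
    successful transmissions, step it down after consecutive failures."""

    def __init__(self, up_after=10, down_after=2):
        self.idx = 0                 # start at the most robust rate
        self.successes = 0
        self.failures = 0
        self.up_after = up_after
        self.down_after = down_after

    def on_tx_result(self, ok: bool) -> int:
        if ok:
            self.successes += 1
            self.failures = 0
            if self.successes >= self.up_after and self.idx < len(RATES) - 1:
                self.idx += 1        # try the next faster rate
                self.successes = 0
        else:
            self.failures += 1
            self.successes = 0
            if self.failures >= self.down_after and self.idx > 0:
                self.idx -= 1        # fall back to a more robust rate
                self.failures = 0
        return RATES[self.idx]

def congested(rssi_samples, threshold=5.0):
    """Crude congestion hint in the spirit of the abstract: flag the channel
    when signal fluctuation (its standard deviation) grows too large."""
    return statistics.stdev(rssi_samples) > threshold
```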

Relevance:

90.00%

Publisher:

Abstract:

Switching between tasks produces decreases in performance as compared to repeating the same task. Asymmetrical switch costs occur when switching between two tasks of unequal difficulty. This asymmetry occurs because the cost is greater when switching to the less difficult task than when switching to the more difficult task. Various theories about the origins of these asymmetrical switch costs have emerged from numerous and detailed experiments with adults. There is no documented evidence of asymmetrical switch costs in children. We conducted a series of studies that examined age-related changes in asymmetrical switch costs, within the same paradigm. Similarities in the patterns of asymmetrical switch costs between children and adults suggested that theoretical explanations of the cognitive mechanisms driving asymmetrical switch costs in adults could be applied to children. Age-related differences indicate that these theoretical explanations need to incorporate the relative contributions and interactions of developmental processes and task mastery. © 2006 Elsevier Inc. All rights reserved.
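
For readers unfamiliar with the measure, a switch cost is simply the performance difference between switch and repeat trials, computed per task; a minimal worked example with invented reaction times (not data from the study):

```python
# Illustrative switch-cost computation with made-up reaction times (ms).
# Switch cost = mean RT on switch trials minus mean RT on repeat trials,
# computed per task. An asymmetry means the cost for the easier task
# exceeds the cost for the harder task.

def mean(xs):
    return sum(xs) / len(xs)

rt = {
    # (task, trial type): reaction times in ms (hypothetical)
    ("easy", "repeat"): [420, 430, 415],
    ("easy", "switch"): [560, 575, 550],   # large cost switching TO easy
    ("hard", "repeat"): [610, 620, 605],
    ("hard", "switch"): [660, 655, 670],   # smaller cost switching TO hard
}

for task in ("easy", "hard"):
    cost = mean(rt[(task, "switch")]) - mean(rt[(task, "repeat")])
    print(f"switch cost for the {task} task: {cost:.0f} ms")
# Prints a larger cost for the easy task, i.e. an asymmetrical switch cost.
```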

Relevance:

90.00%

Publisher:

Abstract:

An efficient Bayesian inference method for problems that can be mapped onto dense graphs is presented. The approach is based on message passing where messages are averaged over a large number of replicated variable systems exposed to the same evidential nodes. An assumption about the symmetry of the solutions is required for carrying out the averages; here we extend the previous derivation based on a replica-symmetric (RS)-like structure to include a more complex one-step replica-symmetry-breaking-like (1RSB-like) ansatz. To demonstrate the potential of the approach it is employed for studying critical properties of the Ising linear perceptron and for multiuser detection in code division multiple access (CDMA) under different noise models. Results obtained under the RS assumption in the noncritical regime give rise to a highly efficient signal detection algorithm in the context of CDMA, while in the critical regime one observes a first-order transition line that ends in a continuous phase transition point. Finite size effects are also observed. While the 1RSB ansatz is not required for the original problems, it was applied to the CDMA signal detection problem with a more complex noise model that exhibits RSB behavior, resulting in an improvement in performance. © 2007 The American Physical Society.
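
As background, a minimal model of the CDMA detection problem being studied: this is the baseline matched filter and brute-force detector, not the paper's message-passing algorithm (the system sizes and noise level are arbitrary):

```python
from itertools import product
import numpy as np

rng = np.random.default_rng(0)

N, K = 16, 4          # spreading-code length and number of users (arbitrary)
sigma = 0.5           # additive white Gaussian noise level (arbitrary)

S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)  # random spreading codes
b = rng.choice([-1.0, 1.0], size=K)                     # transmitted bits
y = S @ b + sigma * rng.standard_normal(N)              # received chip vector

# Matched-filter (single-user) detection: ignores inter-user interference.
b_mf = np.sign(S.T @ y)

# Jointly optimal detection by exhaustive search, exponential in K, which is
# exactly why approximate message-passing detectors are of interest.
candidates = np.array(list(product([-1.0, 1.0], repeat=K)))
b_opt = candidates[np.argmin(np.sum((y - candidates @ S.T) ** 2, axis=1))]

print("true bits      :", b)
print("matched filter :", b_mf)
print("exhaustive MAP :", b_opt)
```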

Relevance:

90.00%

Publisher:

Abstract:

Task classification is introduced as a method for the evaluation of monitoring behaviour in different task situations. On the basis of an analysis of different monitoring tasks, a task classification system comprising four task 'dimensions' is proposed. The perceptual speed and flexibility of closure categories, which are identified with signal discrimination type, comprise the principal dimension in this taxonomy, the others being sense modality, the time course of events, and source complexity. It is also proposed that decision theory provides the most complete method for the analysis of performance in monitoring tasks. Several different aspects of decision theory in relation to monitoring behaviour are described. A method is also outlined whereby both accuracy and latency measures of performance may be analysed within the same decision theory framework. Eight experiments and an organizational study are reported. The results show that a distinction can be made between the perceptual efficiency (sensitivity) of a monitor and his criterial level of response, and that in most monitoring situations there is no decrement in efficiency over the work period, but an increase in the strictness of the response criterion. The range of tasks exhibiting either or both of these performance trends can be specified within the task classification system. In particular, it is shown that a sensitivity decrement is only obtained for 'speed' tasks with a high stimulation rate. A distinctive feature of 'speed' tasks is that target detection requires the discrimination of a change in a stimulus relative to preceding stimuli, whereas in 'closure' tasks, the information required for the discrimination of targets is presented at the same point in time. In the final study, the specification of tasks yielding sensitivity decrements is shown to be consistent with a task classification analysis of the monitoring literature. It is also demonstrated that the signal type dimension has a major influence on the consistency of individual differences in performance in different tasks. The results provide an empirical validation for the 'speed' and 'closure' categories, and suggest that individual differences are not completely task specific but are dependent on the demands common to different tasks. Task classification is therefore shown to enable improved generalizations to be made of the factors affecting 1) performance trends over time, and 2) the consistency of performance in different tasks. A decision theory analysis of response latencies is shown to support the view that criterion shifts are obtained in some tasks, while sensitivity shifts are obtained in others. The results of a psychophysiological study also suggest that evoked potential latency measures may provide temporal correlates of criterion shifts in monitoring tasks. Among other results, the finding that the latencies of negative responses do not increase over time is taken to invalidate arousal-based theories of performance trends over a work period. An interpretation in terms of expectancy, however, provides a more reliable explanation of criterion shifts. Although the mechanisms underlying the sensitivity decrement are not completely clear, the results rule out 'unitary' theories such as observing response and coupling theory. It is suggested that an interpretation in terms of the memory data limitations on information processing provides the most parsimonious explanation of all the results in the literature relating to sensitivity decrement.
Task classification therefore enables the refinement and selection of theories of monitoring behaviour in terms of their reliability in generalizing predictions to a wide range of tasks. It is thus concluded that task classification and decision theory provide a reliable basis for the assessment and analysis of monitoring behaviour in different task situations.
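
The sensitivity/criterion distinction at the heart of this analysis comes from signal detection theory; a standard sketch of how the two indices are estimated from hit and false-alarm rates (the rates below are invented to mimic the reported pattern of stable sensitivity and a stricter criterion):

```python
from scipy.stats import norm

def sdt_indices(hit_rate, fa_rate):
    """Standard signal detection theory estimates:
    d' (sensitivity) and c (response criterion)."""
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_f
    criterion = -0.5 * (z_h + z_f)
    return d_prime, criterion

# Hypothetical monitoring data: both hit and false-alarm rates fall over the
# work period, the pattern attributed to a stricter criterion rather than a
# loss of sensitivity.
early = sdt_indices(hit_rate=0.80, fa_rate=0.10)
late = sdt_indices(hit_rate=0.65, fa_rate=0.04)
print("early: d'=%.2f c=%.2f" % early)   # d' ~ 2.12, c ~ 0.22
print("late : d'=%.2f c=%.2f" % late)    # similar d', larger c ~ 0.68
```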

Relevance:

90.00%

Publisher:

Abstract:

As a source or sink of reactive power, compensators can be made from a voltage-sourced inverter circuit with the a.c. terminals of the inverter connected to the system through an inductive link and with a capacitor connected across the d.c. terminals. Theoretical calculations on linearised models of the compensators have shown that the parameters characterising the performance are the reduced firing angle and the resonance ratio. The resonance ratio is the ratio of the natural frequency of oscillation of the energy storage components in the circuit to the system frequency. The reduced firing angle is the firing angle of the inverter divided by the damping coefficient, β, where β is half the R to X ratio of the link between the inverter and the system. The theoretical results have been verified by computer simulation and experiment. There is a narrow range of values for the resonance ratio below which there is no appreciable improvement in performance, despite an increase in the cost of the energy storage components, and above which the performance of the equipment is poor, with the current being dominated by harmonics. The harmonic performance of the equipment is improved by using multiple inverters and phase-shifting transformers to increase the pulse number. The optimum value of the resonance ratio increases with pulse number, indicating a reduction in the energy storage components needed at high pulse numbers. The reactive power output from the compensator varies linearly with the reduced firing angle, while the losses vary as the square of it.
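
A small numerical sketch of the two characterising parameters as defined above, reading the "natural frequency of the energy storage components" as the LC resonance of the link inductor and d.c. capacitor (an assumption; all component values are arbitrary):

```python
import math

# Arbitrary illustrative values for the inverter link and d.c. capacitor.
L = 10e-3        # link inductance, H
C = 500e-6       # d.c.-side capacitance, F
R = 0.5          # link resistance, ohm
f_system = 50.0  # system frequency, Hz
alpha = math.radians(2.0)  # inverter firing angle, rad

# Resonance ratio: natural frequency of the energy-storage components
# over the system frequency (here taken as the LC resonance).
f_natural = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
n = f_natural / f_system

# Damping coefficient beta: half the R-to-X ratio of the link.
X = 2.0 * math.pi * f_system * L
beta = 0.5 * R / X

# Reduced firing angle: firing angle divided by the damping coefficient.
alpha_reduced = alpha / beta

print(f"resonance ratio n = {n:.2f}")
print(f"beta = {beta:.4f}, reduced firing angle = {alpha_reduced:.2f} rad")
```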

Relevance:

90.00%

Publisher:

Abstract:

This thesis includes analysis of disordered spin ensembles corresponding to Exact Cover, a multi-access channel problem, and composite models combining sparse and dense interactions. The satisfiability problem in Exact Cover is addressed using a statistical analysis of a simple branch and bound algorithm. The algorithm can be formulated in the large system limit as a branching process, for which critical properties can be analysed. Far from the critical point a set of differential equations may be used to model the process, and these are solved by numerical integration and exact bounding methods. The multi-access channel problem is formulated as an equilibrium statistical physics problem for the case of bit transmission on a channel with power control and synchronisation. A sparse code division multiple access method is considered; the optimal detection properties are examined in the typical case by use of the replica method and compared with the detection performance achieved by iterative decoding methods. These codes are found to exhibit phenomena closely resembling those of the well-understood dense codes. The composite model is introduced as an abstraction of canonical sparse and dense disordered spin models. The model includes couplings due to both dense and sparse topologies simultaneously. The new type of code is shown to outperform sparse and dense codes in some regimes, both in optimal performance and in the performance achieved by iterative detection methods in finite systems.
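
As a hedged illustration of the branching-process formulation mentioned for the branch and bound analysis (the offspring distributions are arbitrary examples, not taken from the thesis): a branching process dies out, hovers, or explodes according to whether the mean offspring number m is below, at, or above 1.

```python
import random

def simulate_branching(p_children, generations=50, rng=random.Random(1)):
    """Simulate one branching process where each node independently has
    k children with probability p_children[k]. Returns population sizes."""
    ks = list(range(len(p_children)))
    pop, sizes = 1, [1]
    for _ in range(generations):
        pop = sum(rng.choices(ks, weights=p_children)[0] for _ in range(pop))
        sizes.append(pop)
        if pop == 0 or pop > 10**6:   # died out, or clearly supercritical
            break
    return sizes

# Mean offspring m = sum(k * p_k); the process is critical at m = 1.
for p in ([0.5, 0.3, 0.2],      # m = 0.7: subcritical, dies out
          [0.3, 0.4, 0.3],      # m = 1.0: critical
          [0.2, 0.3, 0.5]):     # m = 1.3: supercritical, tends to explode
    m = sum(k * pk for k, pk in enumerate(p))
    print(f"m={m:.1f}:", simulate_branching(p)[:10])
```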

Relevance:

90.00%

Publisher:

Abstract:

Combining the results of classifiers has shown much promise in machine learning generally. However, published work on combining text categorizers suggests that, for this particular application, improvements in performance are hard to attain. Explorative research using a simple voting system is presented and discussed in the light of a probabilistic model that was originally developed for safety critical software. It was found that typical categorization approaches produce predictions which are too similar for combining them to be effective since they tend to fail on the same records. Further experiments using two less orthodox categorizers are also presented which suggest that combining text categorizers can be successful, provided the essential element of ‘difference’ is considered.
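
A minimal sketch of the kind of simple voting system discussed, showing why 'difference' matters: when the categorizers fail on the same records the vote cannot help, whereas diverse errors are voted away (all predictions below are invented):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label predictions by simple majority."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

truth = ["sport", "politics", "politics", "tech"]

# Three categorizers that fail on the SAME record: voting cannot recover it.
similar = [["sport", "politics", "tech", "tech"],
           ["sport", "politics", "tech", "tech"],
           ["sport", "politics", "tech", "tech"]]

# Three categorizers with 'different' errors: each mistake is voted away.
diverse = [["sport", "politics", "tech",     "tech"],
           ["sport", "sport",    "politics", "tech"],
           ["sport", "politics", "politics", "tech"]]

for name, preds in (("similar", similar), ("diverse", diverse)):
    combined = majority_vote(preds)
    acc = sum(c == t for c, t in zip(combined, truth)) / len(truth)
    print(f"{name}: combined accuracy = {acc:.2f}")
```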

Relevance:

90.00%

Publisher:

Abstract:

The objective of Total Productive Maintenance (TPM) is to maximise plant and equipment effectiveness, to create a sense of ownership for operators, and to promote continuous improvement through small group activities involving production, engineering and maintenance personnel. This paper describes and analyses a case study of TPM implementation at a newspaper printing house in Singapore. Rather than adopting more conventional implementation methods, such as employing consultants or running a project with external training, a unique approach was adopted based on Action Research, using a spiral of cycles of planning, acting, observing and reflecting. An Action Research team of company personnel was specially formed to undertake the necessary fieldwork. The team subsequently assisted with administering the resulting action plan. The main sources of maintenance and operational data were interviews with shop floor workers, participative observation and reviews conducted with members of the team. Content analysis using appropriate statistical techniques was used to test the significance of changes in performance between the start and completion of the TPM programme. The paper identifies the characteristics associated with the Action Research method when used to implement TPM and discusses the applicability of the approach in related industries and processes.
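
The abstract does not name the statistical techniques used; as one hedged example of how a before/after change in performance might be tested, a paired t-test on invented equipment-effectiveness scores:

```python
from scipy import stats

# Hypothetical equipment-effectiveness scores (%) for eight presses,
# measured at the start and at the completion of a TPM programme.
before = [61, 58, 64, 55, 60, 57, 63, 59]
after  = [68, 63, 70, 61, 66, 62, 69, 64]

# Paired t-test: were the changes in performance significant?
t, p = stats.ttest_rel(after, before)
print(f"t = {t:.2f}, p = {p:.4f}")  # a small p suggests a real improvement
```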

Relevance:

90.00%

Publisher:

Abstract:

EEG hyperscanning is a method for studying two or more individuals simultaneously with the objective of elucidating how co-variations in their neural activity (i.e., hyperconnectivity) are influenced by their behavioral and social interactions. The aim of this study was to compare the performance of different hyperconnectivity measures using (i) simulated data, where the degree of coupling could be systematically manipulated, and (ii) individually recorded human EEG combined into pseudo-pairs of participants where no hyperconnections could exist. With simulated data we found that each of the most widely used measures of hyperconnectivity was biased and detected hyperconnections where none existed. With pseudo-pairs of human data we found spurious hyperconnections that arose because there were genuine similarities between the EEG recorded from different people independently but under the same experimental conditions. Specifically, there were systematic differences between experimental conditions in terms of the rhythmicity of the EEG that were common across participants. As any imbalance between experimental conditions in terms of stimulus presentation or movement may affect the rhythmicity of the EEG, this problem could apply in many hyperscanning contexts. Furthermore, as these spurious hyperconnections reflected real similarities between the EEGs, they were not Type-1 errors that could be overcome by some appropriate statistical control. However, some measures that have not previously been used in hyperconnectivity studies, notably the circular correlation coefficient (CCorr), were less susceptible to detecting spurious hyperconnections of this type. The reason for this advantage in performance is discussed and the use of the CCorr as an alternative measure of hyperconnectivity is advocated. © 2013 Burgess.
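
The circular correlation coefficient advocated here has a standard closed form (due to Fisher and Lee); a minimal NumPy sketch, assuming the inputs are phase-angle series in radians:

```python
import numpy as np

def circular_mean(phase):
    """Circular mean of a series of phase angles (radians)."""
    return np.angle(np.mean(np.exp(1j * phase)))

def ccorr(a, b):
    """Circular correlation coefficient between two phase series (radians)."""
    da = np.sin(a - circular_mean(a))
    db = np.sin(b - circular_mean(b))
    return np.sum(da * db) / np.sqrt(np.sum(da**2) * np.sum(db**2))

rng = np.random.default_rng(0)
a = rng.uniform(-np.pi, np.pi, 1000)
noise = 0.3 * rng.standard_normal(1000)
print(ccorr(a, a + noise))                          # coupled phases: near 1
print(ccorr(a, rng.uniform(-np.pi, np.pi, 1000)))   # independent: near 0
```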

Relevance:

90.00%

Publisher:

Abstract:

The intensity of global competition and ever-increasing economic uncertainties have led organizations to search for more efficient and effective ways to manage their business operations. Data envelopment analysis (DEA) has been widely used as a conceptually simple yet powerful tool for evaluating organizational productivity and performance. Fuzzy DEA (FDEA) is a promising extension of the conventional DEA proposed for dealing with imprecise and ambiguous data in performance measurement problems. This book is the first volume in the literature to present the state-of-the-art developments and applications of FDEA. It is designed for students, educators, researchers, consultants and practicing managers in business, industry, and government with a basic understanding of the DEA and fuzzy logic concepts.

Relevance:

90.00%

Publisher:

Abstract:

Data envelopment analysis (DEA) is a methodology for measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. Crisp input and output data are fundamentally indispensable in conventional DEA. However, the observed values of the input and output data in real-world problems are sometimes imprecise or vague. Many researchers have proposed various fuzzy methods for dealing with the imprecise and ambiguous data in DEA. This chapter provides a taxonomy and review of the fuzzy DEA (FDEA) methods. We present a classification scheme with six categories, namely the tolerance approach, the α-level based approach, the fuzzy ranking approach, the possibility approach, the fuzzy arithmetic approach, and the fuzzy random/type-2 fuzzy set approach. We discuss each category and group the FDEA papers published in the literature over the past 30 years. © 2014 Springer-Verlag Berlin Heidelberg.
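
For context, the crisp model these fuzzy extensions generalize: the input-oriented CCR efficiency of a decision making unit can be obtained from a small linear program. A sketch with invented data (using scipy's linprog for brevity):

```python
import numpy as np
from scipy.optimize import linprog

# Invented data: 4 DMUs, 2 inputs (rows of X), 1 output (rows of Y).
X = np.array([[4.0, 6.0, 8.0, 5.0],
              [3.0, 2.0, 4.0, 6.0]])
Y = np.array([[2.0, 3.0, 3.0, 2.0]])

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR model (envelopment form):
    min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]              # decision variables: [theta, lam]
    A_in = np.c_[-X[:, [o]], X]              # X lam - theta * x_o <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y]      # -Y lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[0]

for o in range(X.shape[1]):
    print(f"DMU {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```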

Relevance:

90.00%

Publisher:

Abstract:

This paper experimentally explores the impairments in performance that are generated when multiple single-sideband (SSB) subcarrier multiplexing (SCM) signals are closely allocated in frequency to establish a spectrally efficient wavelength division multiplexing (WDM) link. The performance of cost-effective SSB WDM/SCM implementations, without optical filters in the transmitter, shows a strong dependence on the imperfect sideband suppression ratio that can be achieved directly with the electro-optical modulator. A directly detected broadband multichannel SCM link composed of a state-of-the-art optical IQ modulator and five quadrature phase-shift keyed (QPSK) subcarriers per optical channel is presented, showing that a suppression ratio of 20 dB obtained directly with the modulator produced a penalty of 2 dB in overall performance due to interference between adjacent optical channels.
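
A rough numerical illustration of why the sideband suppression ratio matters: not the paper's experiment, but a simple baseband model in which an I/Q gain mismatch stands in for imperfect modulator suppression:

```python
import numpy as np

fs, f_sc = 10_000.0, 1_000.0      # sample rate and subcarrier frequency, Hz
t = np.arange(0, 0.1, 1 / fs)     # 0.1 s of signal, so f_sc lands on an FFT bin

def sideband_suppression_db(iq_gain_mismatch):
    """Build an upper-sideband tone as I + jQ (Q = sin, the Hilbert transform
    of cos), apply a gain error on Q, and compare wanted vs image power."""
    i = np.cos(2 * np.pi * f_sc * t)
    q = (1.0 + iq_gain_mismatch) * np.sin(2 * np.pi * f_sc * t)
    s = i + 1j * q                    # with zero mismatch only +f_sc survives
    spec = np.abs(np.fft.fft(s))
    freqs = np.fft.fftfreq(len(s), 1 / fs)
    wanted = spec[np.argmin(np.abs(freqs - f_sc))]
    image = spec[np.argmin(np.abs(freqs + f_sc))]
    return 20 * np.log10(wanted / image)

for g in (0.02, 0.05, 0.2):
    print(f"gain mismatch {g:4.2f}: suppression = {sideband_suppression_db(g):5.1f} dB")
# A gain mismatch around 0.2 already limits suppression to roughly 20 dB,
# the regime in which the paper measures a 2 dB penalty.
```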