883 results for "Performance evolution due time"
Abstract:
We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time-sharing methods when algebraic codes are used. A statistical-physics-based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC-based time-sharing codes, while the best performance, when received transmissions are optimally decoded, is bounded by the time-sharing limit.
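For concreteness, the belief propagation decoder mentioned above passes log-likelihood messages between variable and check nodes until all parity checks are satisfied. Below is a minimal sketch of log-domain sum-product decoding for a toy parity-check matrix; the matrix, noise level and transmitted word are illustrative, not the codes analysed in the paper.

```python
# Minimal sketch of log-domain belief propagation (sum-product) decoding
# for a toy parity-check matrix. H, sigma and the message are illustrative.
import numpy as np

np.random.seed(0)
H = np.array([[1, 1, 0, 1, 0, 0],      # toy (3 x 6) parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])

def bp_decode(llr, H, iters=20):
    """Sum-product decoding; llr[i] = log P(bit i = 0) / P(bit i = 1)."""
    m, n = H.shape
    rows, cols = np.nonzero(H)
    M = np.zeros((m, n))
    M[rows, cols] = llr[cols]                  # initialise with channel LLRs
    for _ in range(iters):
        E = np.zeros((m, n))                   # check-to-variable messages
        for i in range(m):
            idx = np.nonzero(H[i])[0]
            t = np.tanh(M[i, idx] / 2.0)
            for k, j in enumerate(idx):        # tanh rule over the other edges
                prod = np.prod(np.delete(t, k))
                E[i, j] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
        total = llr + E.sum(axis=0)            # variable-node update
        M[rows, cols] = total[cols] - E[rows, cols]
        hard = (total < 0).astype(int)
        if not np.any(H @ hard % 2):           # all parity checks satisfied
            break
    return hard

# BPSK over AWGN: transmit the all-zeros codeword as +1 symbols
sigma = 0.8
rx = 1.0 + sigma * np.random.randn(6)
print(bp_decode(2.0 * rx / sigma**2, H))       # expect [0 0 0 0 0 0]
```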
Abstract:
This thesis presents a novel high-performance approach to time-division-multiplexing (TDM) fibre Bragg grating (FBG) optical sensors, known as the resonant cavity architecture. A background theory of FBG optical sensing includes several techniques for multiplexing sensors. The limitations of current wavelength-division-multiplexing (WDM) schemes are contrasted against the technological and commercial advantages of TDM. The author's hypothesis that 'it should be possible to achieve TDM FBG sensor interrogation using an electrically switched semiconductor optical amplifier (SOA)' is then explained. Research and development of a commercially viable optical sensor interrogator based on the resonant cavity architecture forms the remainder of this thesis. A fully programmable SOA drive system allows interrogation of sensor arrays 10 km long with a spatial resolution of 8 cm, and a variable gain system provides dynamic compensation for fluctuating system losses. Ratiometric filter- and diffractive-element spectrometer-based wavelength measurement systems are developed and analysed for different commercial applications. The ratiometric design provides a low-cost solution with picometre resolution and low noise using 4% reflective sensors, but is less tolerant to variation in system loss. The spectrometer design is more expensive, but delivers exceptional performance: picometre resolution, low noise and tolerance to 13 dB of system loss variation. Finally, this thesis details the interrogator's peripheral components, its compliance for operation in harsh industrial environments and several examples of commercial applications where it has been deployed. Applications include laboratory instruments, temperature monitoring systems for oil production, dynamic control for wind energy, and battery-powered, self-contained sub-sea strain monitoring.
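For orientation, the quoted 8 cm spatial resolution fixes the required SOA gating time through the round-trip delay in fibre, dt = 2 n dz / c. A quick back-of-envelope check (the group index value is an assumption, not from the thesis):

```python
# Back-of-envelope check of the TDM gating time implied by 8 cm spatial
# resolution in fibre (refractive index assumed, not from the thesis).
c = 2.998e8          # speed of light in vacuum, m/s
n = 1.468            # group index of standard single-mode fibre (assumed)
dz = 0.08            # quoted spatial resolution, m

dt = 2 * n * dz / c  # round trip: the pulse travels to the sensor and back
print(f"required SOA gate resolution: {dt * 1e9:.2f} ns")  # ~0.78 ns
```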
Abstract:
Traditional high-speed machinery actuators are powered and coordinated by mechanical linkages driven from a central drive, but these linkages may be replaced by independently synchronised electric drives. Problems associated with utilising such electric drives for this form of machinery were investigated. The research concentrated on a high-speed rod-making machine, which required control of high inertias (0.01-0.5 kg m²) at continuous high speed (2500 r/min), with low relative phase errors between two drives (0.0025 radians). Traditional minimum-energy drive selection techniques for incremental motions were not applicable to continuous applications, which require negligible energy dissipation, so new selection techniques were developed. A brushless configuration constant enabled the comparison of seven different servo systems; the rare-earth brushless drives had the best power rates (a performance measure). Simulation was used to review control strategies, and a microprocessor controller with a proportional velocity loop nested within a proportional position loop with velocity feedforward was designed. Local control schemes were investigated as means of reducing relative errors between drives: the slave of a master/slave scheme compensates for the master's errors; the matched scheme has drives with similar absolute errors so the relative error is minimised; and the feedforward scheme minimises error by adding compensation from previous knowledge. Simulation gave an approximate velocity loop bandwidth and position loop gain required to meet the specification. Theoretical limits for these parameters were defined in terms of digital sampling delays, quantisation, and system phase shifts. Performance degradation due to mechanical backlash was evaluated. Thus any drive could be checked to ensure that the performance specification could be realised. A two-drive demonstrator was commissioned with 0.01 kg m² loads. By use of simulation the performance of one drive was improved by increasing the velocity loop bandwidth fourfold. With the master/slave scheme, relative errors were within 0.0024 radians at a constant 2500 r/min for two 0.01 kg m² loads.
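The control structure named above (a proportional velocity loop nested within a proportional position loop with velocity feedforward) can be sketched in a few lines. The gains, inertia and disturbance torque below are illustrative stand-ins, not the thesis design values:

```python
# Minimal sketch of the cascaded loop: a proportional position loop with
# velocity feedforward around a proportional velocity loop driving a pure
# inertia, rejecting a constant disturbance torque. All values assumed.
import numpy as np

J, Kv, Kp = 0.01, 5.0, 40.0        # inertia (kg m^2) and loop gains (assumed)
dt = 1e-4                          # integration step, s
w_ref = 2500 * 2 * np.pi / 60      # 2500 r/min in rad/s
Td = 0.1                           # constant disturbance torque, N m (assumed)

theta_ref = theta = 0.0
w = w_ref                          # start already at running speed
for _ in range(int(1.0 / dt)):     # 1 s of constant-speed running
    theta_ref += w_ref * dt
    w_cmd = Kp * (theta_ref - theta) + w_ref    # position loop + feedforward
    torque = Kv * (w_cmd - w)                   # proportional velocity loop
    w += (torque - Td) / J * dt
    theta += w * dt

# steady-state error ~ Td / (Kv * Kp) = 5e-4 rad here
print(f"position error after 1 s: {theta_ref - theta:.6f} rad")
```

The steady-state error for a constant disturbance scales as Td / (Kv Kp), which is one way to see why raising the velocity loop bandwidth, as done for the demonstrator, tightens relative phase errors.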
Abstract:
Task classification is introduced as a method for the evaluation of monitoring behaviour in different task situations. On the basis of an analysis of different monitoring tasks, a task classification system comprising four task 'dimensions' is proposed. The perceptual speed and flexibility of closure categories, which are identified with signal discrimination type, comprise the principal dimension in this taxonomy, the others being sense modality, the time course of events, and source complexity. It is also proposed that decision theory provides the most complete method for the analysis of performance in monitoring tasks. Several different aspects of decision theory in relation to monitoring behaviour are described. A method is also outlined whereby both accuracy and latency measures of performance may be analysed within the same decision theory framework. Eight experiments and an organizational study are reported. The results show that a distinction can be made between the perceptual efficiency (sensitivity) of a monitor and his criterial level of response, and that in most monitoring situations there is no decrement in efficiency over the work period, but an increase in the strictness of the response criterion. The range of tasks exhibiting either or both of these performance trends can be specified within the task classification system. In particular, it is shown that a sensitivity decrement is only obtained for 'speed' tasks with a high stimulation rate. A distinctive feature of 'speed' tasks is that target detection requires the discrimination of a change in a stimulus relative to preceding stimuli, whereas in 'closure' tasks, the information required for the discrimination of targets is presented at the same point in time. In the final study, the specification of tasks yielding sensitivity decrements is shown to be consistent with a task classification analysis of the monitoring literature. It is also demonstrated that the signal type dimension has a major influence on the consistency of individual differences in performance in different tasks. The results provide an empirical validation for the 'speed' and 'closure' categories, and suggest that individual differences are not completely task specific but are dependent on the demands common to different tasks. Task classification is therefore shown to enable improved generalizations to be made of the factors affecting 1) performance trends over time, and 2) the consistency of performance in different tasks. A decision theory analysis of response latencies is shown to support the view that criterion shifts are obtained in some tasks, while sensitivity shifts are obtained in others. The results of a psychophysiological study also suggest that evoked potential latency measures may provide temporal correlates of criterion shifts in monitoring tasks. Among other results, the finding that the latencies of negative responses do not increase over time is taken to invalidate arousal-based theories of performance trends over a work period. An interpretation in terms of expectancy, however, provides a more reliable explanation of criterion shifts. Although the mechanisms underlying the sensitivity decrement are not completely clear, the results rule out 'unitary' theories such as observing response and coupling theory. It is suggested that an interpretation in terms of the memory and data limitations on information processing provides the most parsimonious explanation of all the results in the literature relating to sensitivity decrement.
Task classification therefore enables the refinement and selection of theories of monitoring behaviour in terms of their reliability in generalizing predictions to a wide range of tasks. It is thus concluded that task classification and decision theory provide a reliable basis for the assessment and analysis of monitoring behaviour in different task situations.
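As a small illustration of the decision-theory measures underlying these results, sensitivity (d') and response criterion (c) can be computed from hit and false-alarm rates. The rates below are invented to mimic the reported pattern (stable sensitivity, stricter criterion over the work period):

```python
# Signal detection theory measures: sensitivity (d') and criterion (c)
# from hit and false-alarm rates. The rates are made up for illustration.
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    zH, zF = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = zH - zF                 # perceptual sensitivity
    criterion = -0.5 * (zH + zF)      # strictness of the response criterion
    return d_prime, criterion

# Early vs. late in the work period: hits and false alarms both fall,
# so sensitivity is essentially unchanged but the criterion grows stricter.
print(sdt_measures(0.80, 0.20))   # early: d' ~ 1.68, c ~ 0.00
print(sdt_measures(0.64, 0.10))   # late:  d' ~ 1.64, c ~ 0.46
```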
Abstract:
This thesis describes an investigation by the author into the spares operation of CompAir BroomWade Ltd. Whilst the complete system, including the warehousing and distribution functions, was investigated, the thesis concentrates on the provisioning aspect of the spares supply problem. Analysis of the historical data showed the presence of significant fluctuations in all the measures of system performance. Two Industrial Dynamics simulation models were developed to study this phenomenon. The models showed that any fluctuation in end customer demand would be amplified as it passed through the distributor and warehouse stock control systems. The evidence from the historical data available supported this view of the system's operation. The models were utilised to determine which parts of the total system could be expected to exert a critical influence on its performance. The lead time parameters of the supply sector were found to be critical, and further study showed that the manner in which the lead time changed with work-in-progress levels was also an important factor. The problem therefore resolved into the design of a spares manufacturing system which exhibited the appropriate dynamic performance characteristics. The gross level of entity representation inherent in the Industrial Dynamics methodology was found to limit the value of these models in the development of detailed design proposals. Accordingly, an interacting job shop simulation package was developed to allow detailed evaluation of organisational factors on the performance characteristics of a manufacturing system. The package was used to develop a design for a pilot spares production unit. The need for a manufacturing system to perform successfully under conditions of fluctuating demand is not limited to the spares field. Thus, although the spares exercise provides an example of the approach, the concepts and techniques developed can be considered to have broad application throughout the batch manufacturing industry.
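The amplification mechanism described above can be sketched with a toy two-stage stock-control chain; the order-up-to policy and all parameters are invented for illustration, not taken from the Industrial Dynamics models:

```python
# Toy demonstration of demand amplification: a step in end-customer demand
# passes through two stock-control stages (distributor, then warehouse),
# each ordering to cover demand plus a proportional correction of its
# stock shortfall. All parameters are invented.
def stage(orders_in, target_stock=100.0, k=0.4):
    stock = target_stock + orders_in[0]          # start in steady state
    orders_out = []
    for d in orders_in:
        stock -= d                               # ship against incoming orders
        order = d + k * (target_stock - stock)   # replenish + correct shortfall
        stock += order                           # assume immediate resupply
        orders_out.append(order)
    return orders_out

demand = [10.0] * 5 + [14.0] * 15                # 40% step in customer demand
distributor = stage(demand)
warehouse = stage(distributor)
for name, series in [("demand", demand), ("distributor", distributor),
                     ("warehouse", warehouse)]:
    print(f"{name:12s} peak order: {max(series):.2f}")
# peaks grow stage by stage: 14.00 -> 15.60 -> 17.84
```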
Abstract:
This thesis first considers the calibration and signal processing requirements of a neuromagnetometer for the measurement of human visual function. Gradiometer calibration using straight wire grids is examined and optimal grid configurations determined, given realistic constructional tolerances. Simulations show that for a gradiometer balance of 1:10⁴ and a wire spacing error of 0.25 mm, the achievable calibration accuracy is 0.3% in gain, 0.3 mm in position and 0.6° in orientation. Practical results with a 19-channel second-order gradiometer based system exceed this performance. The real-time application of adaptive reference noise cancellation filtering to running-average evoked response data is examined. In the steady state, the filter can be assumed to be driven by a non-stationary step input arising at epoch boundaries. Based on empirical measures of this driving step, an optimal progression for the filter time constant is proposed which improves upon fixed-time-constant filter performance. The incorporation of the time derivatives of the reference channels was found to improve the performance of the adaptive filtering algorithm by 15-20% for unaveraged data, falling to 5% with averaging. The thesis concludes with a neuromagnetic investigation of evoked cortical responses to chromatic and luminance grating stimuli. The global magnetic field power of evoked responses to the onset of sinusoidal gratings was shown to have distinct chromatic and luminance sensitive components. Analysis of the results, using a single equivalent current dipole model, shows that these components arise from activity within two distinct cortical locations. Co-registration of the resulting current source localisations with MRI shows a chromatically responsive area lying along the midline within the calcarine fissure, possibly extending onto the lingual and cuneal gyri. It is postulated that this area is the human homologue of the primate cortical area V4.
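A minimal sketch of adaptive reference noise cancellation of the kind described, with the reference channel augmented by its time derivative; the signal shapes, noise mixing and LMS step size are all illustrative assumptions:

```python
# LMS adaptive noise cancellation: predict the noise in the signal channel
# from a reference channel and its time derivative, then subtract it.
# Signal shapes, mixing and step size are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
t = np.arange(n) / 1000.0
noise = np.sin(2 * np.pi * 50 * t) + 0.3 * rng.standard_normal(n)
signal = np.where((t % 1.0) < 0.1, 1.0, 0.0)         # toy "evoked response"
primary = signal + 0.8 * noise                        # measurement channel

ref = noise + 0.1 * rng.standard_normal(n)            # reference channel
dref = np.gradient(ref)                               # its time derivative
X = np.column_stack([ref, dref])

w = np.zeros(2)
mu = 0.01                                             # LMS step size
out = np.empty(n)
for i in range(n):
    y = X[i] @ w                                      # predicted noise
    out[i] = primary[i] - y                           # cleaned output
    w += mu * out[i] * X[i]                           # LMS weight update

print("input  noise power:", np.var(primary - signal))
print("residual noise power:", np.var(out - signal))
```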
Abstract:
To what extent does competitive entry create a structural change in key marketing metrics? New players may be just a temporary nuisance to incumbents, but could also fundamentally change the latter's performance evolution, or induce them to permanently alter their spending levels and/or pricing decisions. Similarly, the addition of a new marketing channel could permanently shift shopping preferences, or could just create a short-lived migration from existing channels. The steady-state impact of a given entry or channel addition on various marketing metrics is intrinsically an empirical issue for which we need an appropriate testing procedure. In this study, we introduce a testing sequence that allows for the endogenous determination of potential change (break) locations, thereby accounting for lead and/or lagged effects of the introduction of interest. By not restricting the number of potential breaks to one (as is commonly done in the marketing literature), we quantify the impact of the new entrant(s) while controlling for other events that may have taken place in the market. We illustrate the methodology in the context of the Dutch television advertising market, which was characterized by the entry of several late movers. We find that the steady-state growth of private incumbents' revenues was slowed by the quasi-simultaneous entry of three new players. Contrary to industry observers' expectations, such a slowdown was not experienced in the related markets of print and radio advertising.
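A minimal sketch of endogenous break-location detection in its simplest form: choose the break date that minimises the residual sum of squares of a segmented trend regression, searching over all admissible dates with 15% trimming. The series is simulated, and unlike the paper's procedure this toy version allows only one break:

```python
# Endogenous single-break detection in a linear trend model by grid search
# over candidate break dates. The series is simulated; the true break is
# at t = 70, where the growth rate slows.
import numpy as np

rng = np.random.default_rng(1)
n = 120
t = np.arange(n, dtype=float)
y = 2.0 + 0.5 * t - 0.3 * np.maximum(t - 70, 0) + rng.standard_normal(n)

def ssr(y_seg, t_seg):
    X = np.column_stack([np.ones_like(t_seg), t_seg])
    beta = np.linalg.lstsq(X, y_seg, rcond=None)[0]
    return np.sum((y_seg - X @ beta) ** 2)

trim = int(0.15 * n)                       # 15% trimming at each end
candidates = range(trim, n - trim)
best = min(candidates, key=lambda b: ssr(y[:b], t[:b]) + ssr(y[b:], t[b:]))
print("estimated break date:", best)       # close to the true break at 70
```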
Abstract:
The IEEE 802.15.4 standard is a relatively new standard designed for low-power, low-data-rate wireless sensor networks (WSNs), which have a wide range of applications, e.g., environment monitoring, e-health, and home and industry automation. In this paper, we investigate the problem of hidden devices in coverage-overlapped IEEE 802.15.4 WSNs, which is likely to arise when multiple 802.15.4 WSNs are deployed closely and independently. We consider a typical scenario of two 802.15.4 WSNs with partial coverage overlapping and propose a Markov-chain-based analytical model to reveal the performance degradation due to the hidden devices that result from the coverage overlapping. The impacts of the hidden devices and network sleeping modes on saturated throughput and energy consumption are modelled. The analytical model is verified by simulations and can provide insights for network design and planning when multiple 802.15.4 WSNs are deployed closely. © 2013 IEEE.
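A toy calculation in the spirit of this analysis: the probability that a packet survives hidden-device interference when a hidden device, which cannot be sensed by carrier sensing, can corrupt the packet by starting anywhere in a vulnerable window of roughly twice the packet length. The numbers are illustrative, not the paper's Markov-chain model:

```python
# Toy hidden-device collision model: with per-slot transmit probability tau
# and packet length L slots, each hidden device must stay silent over a
# ~2L-slot vulnerable window for the packet to survive. Values assumed.
tau = 0.01          # per-slot transmission attempt probability (assumed)
L = 10              # packet length in backoff slots (assumed)

for n_hidden in (0, 1, 2, 5, 10):
    p_success = (1 - tau) ** (2 * L * n_hidden)
    print(f"{n_hidden:2d} hidden devices -> P(no collision) = {p_success:.3f}")
```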
Abstract:
In this paper we consider the possibility of using intermediate solutions, in which the ideal apodisation profile for a dispersion-free, sharp-reflection-profile fibre Bragg grating is approximated to different degrees. The ideal apodisation profile for a flat-dispersion, 50 GHz bandwidth grating was obtained using the layer-peeling algorithm. To verify the modelled results, a version of the 5-section grating was manufactured, with excellent agreement between the model and the experimental results. The performance penalty due to multiple reflections from the FBGs in different situations was studied. The results showed that with the approximated gratings some post-compensation must be included to account for the local deviations from zero dispersion. © 2003 IEEE.
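For context, layer peeling inverts the forward model that maps an apodisation profile to a reflection spectrum. Below is a minimal sketch of that forward model, a piecewise-uniform transfer-matrix evaluation from standard coupled-mode theory; the grating parameters are illustrative, not the 50 GHz design of the paper:

```python
# Piecewise-uniform transfer-matrix evaluation of an apodised FBG's
# reflection spectrum (standard coupled-mode theory). Length, coupling
# strength and the raised-cosine apodisation are illustrative.
import numpy as np

L = 0.01                                   # grating length, m
N = 200                                    # number of uniform sections
z = (np.arange(N) + 0.5) / N
kappa = 400.0 * np.cos(np.pi * (z - 0.5)) ** 2   # raised-cosine apodisation, 1/m
dz = L / N

detunings = np.linspace(-1500, 1500, 601)        # detuning delta, 1/m
R = []
for delta in detunings:
    T = np.eye(2, dtype=complex)
    for k in kappa:                              # cascade section matrices
        g = np.sqrt(complex(k**2 - delta**2))
        c, s = np.cosh(g * dz), np.sinh(g * dz)
        M = np.array([[c - 1j * delta / g * s, -1j * k / g * s],
                      [1j * k / g * s,          c + 1j * delta / g * s]])
        T = T @ M
    R.append(abs(T[1, 0] / T[0, 0]) ** 2)        # power reflectivity
print(f"peak reflectivity: {max(R):.3f}")
```

As a sanity check, a single uniform section at zero detuning reduces to the textbook result R = tanh²(κL).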
Abstract:
This PhD thesis analyses networks of knowledge flows, focusing on the role of indirect ties in the knowledge transfer, knowledge accumulation and knowledge creation process. It extends and improves existing methods for mapping networks of knowledge flows in two different applications and contributes to two streams of research. To support the underlying idea of this thesis, which is finding an alternative method to rank indirect network ties to shed new light on the dynamics of knowledge transfer, we apply Ordered Weighted Averaging (OWA) to two different network contexts. Knowledge flows in patent citation networks and in a company supply chain network are analysed using Social Network Analysis (SNA) and the OWA operator. The OWA is used here for the first time (i) to rank indirect citations in patent networks, providing new insight into their role in transferring knowledge among network nodes, and to analyse a long chain of patent generations over 13 years; and (ii) to rank indirect relations in a company supply chain network, to shed light on the role of indirectly connected individuals in the knowledge transfer and creation processes and to contribute to the literature on knowledge management in a supply chain. In doing so, indirect ties are measured and their role as a means of knowledge transfer is shown. Thus, this thesis represents a first attempt to bridge the OWA and SNA fields and to show that the two methods can be used together to enrich the understanding of the role of indirectly connected nodes in a network. More specifically, the OWA scores enrich our understanding of knowledge evolution over time within complex networks. Future research can show the usefulness of the OWA operator in different complex networks, such as online social networks that consist of thousands of nodes.
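The OWA operator itself is compact: the inputs are sorted in descending order before the weights are applied, so the weight vector expresses how much the strongest links count. A minimal sketch with invented tie strengths:

```python
# Ordered Weighted Averaging (OWA): weights are applied to the inputs
# after sorting them in descending order. Tie strengths are invented.
import numpy as np

def owa(values, weights):
    """OWA aggregation of `values` with a weight vector summing to 1."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]   # descending order
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0) and len(v) == len(w)
    return float(v @ w)

# Strengths along a 3-step indirect citation path between two patents.
path = [0.9, 0.4, 0.7]
print(owa(path, [1/3, 1/3, 1/3]))   # plain average                 -> 0.667
print(owa(path, [0.6, 0.3, 0.1]))   # emphasise the strongest hops  -> 0.790
print(owa(path, [0.1, 0.3, 0.6]))   # emphasise the weakest hop     -> 0.540
```

Shifting weight toward the tail makes the score behave like a bottleneck measure, which is one way an indirect tie's weakest link can be made to dominate its ranking.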
Abstract:
A new generation of high-capacity WDM systems with extremely robust performance has been enabled by coherent transmission and digital signal processing. To facilitate widespread deployment of this technology, particularly in the metro space, new photonic components and subsystems are being developed to support cost-effective, compact, and scalable transceivers. We briefly review the recent progress in InP-based photonic components, and report numerical simulation results of an InP-based transceiver comprising a dual-polarization I/Q modulator and a commercial DSP ASIC. Predicted performance penalties due to the nonlinear response, lower bandwidth, and finite extinction ratio of these transceivers are less than 1 and 2 dB for 100-G PM-QPSK and 200-G PM-16QAM, respectively. Using the well-established Gaussian-Noise model, the estimated system reach of 100-G PM-QPSK is greater than 600 km for typical ROADM-based metro-regional systems with internode losses up to 20 dB. © 1983-2012 IEEE.
Abstract:
The accurate and reliable estimation of travel time based on point detector data is needed to support Intelligent Transportation System (ITS) applications. It has been found that the quality of travel time estimation is a function of the method used in the estimation and varies for different traffic conditions. In this study, two hybrid on-line travel time estimation models, and their corresponding off-line methods, were developed to achieve better estimation performance under various traffic conditions, including recurrent congestion and incidents. The first model combines the Mid-Point method, which is a speed-based method, with a traffic flow-based method. The second model integrates two speed-based methods: the Mid-Point method and the Minimum Speed method. In both models, the switch between travel time estimation methods is based on the congestion level and queue status automatically identified by clustering analysis. During incident conditions with rapidly changing queue lengths, shock wave analysis-based refinements are applied for on-line estimation to capture the fast queue propagation and recovery. Travel time estimates obtained from existing speed-based methods, traffic flow-based methods, and the models developed were tested using both simulation and real-world data. The results indicate that all tested methods performed at an acceptable level during periods of low congestion. However, their performances vary with an increase in congestion. Comparisons with other estimation methods also show that the developed hybrid models perform well in all cases. Further comparisons between the on-line and off-line travel time estimation methods reveal that off-line methods perform significantly better only during fast-changing congested conditions, such as during incidents. The impacts of major influential factors on the performance of travel time estimation, including data preprocessing procedures, detector errors, detector spacing, frequency of travel time updates to traveler information devices, travel time link length, and posted travel time range, were investigated in this study. The results show that these factors have more significant impacts on the estimation accuracy and reliability under congested conditions than during uncongested conditions. For the incident conditions, the estimation quality improves with the use of a short rolling period for data smoothing, more accurate detector data, and frequent travel time updates.
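The Mid-Point method named above can be stated in a few lines: each detector's spot speed is assumed to hold over half the distance to its neighbour, and the link travel time is the sum of the two half-segment times. The spacing and speeds below are illustrative:

```python
# Mid-Point travel time estimation: each detector's speed is assumed to
# apply over half the link. Detector spacing and speeds are illustrative.
def midpoint_travel_time(length_km, v_up_kmh, v_down_kmh):
    """Link travel time in seconds from upstream/downstream detector speeds."""
    half = length_km / 2.0
    return 3600.0 * (half / v_up_kmh + half / v_down_kmh)

# 1.0 km link: free flow vs. a queue forming at the downstream detector
print(midpoint_travel_time(1.0, 100.0, 95.0))  # ~36.9 s
print(midpoint_travel_time(1.0, 90.0, 25.0))   # ~92.0 s
```

The second case shows why the hybrid models switch methods under congestion: a speed-based estimate reacts strongly to the queued downstream detector, while queue dynamics between detectors still need the shock-wave refinements described above.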
Abstract:
The general knowledge of the hydrographic structure of the Southern Ocean is still rather incomplete, since observations, particularly in the ice-covered regions, are cumbersome to carry out. But we know from the available information that thermohaline processes have large amplitudes and cover a wide range of scales in this part of the world ocean. The modification of water masses around Antarctica has indeed a worldwide impact; these processes ultimately determine the cold state of the present climate in the world ocean. We have combined the efforts of the German and Russian polar research institutions to collect and validate the presently available temperature, salinity and oxygen data of the ocean south of 30°S latitude. We have carried out this work in spite of the fact that the hydrographic programme of the World Ocean Circulation Experiment (WOCE) will provide more new information in due time, because its contribution to the high latitudes of the Southern Ocean is quite sparse. The modified picture of the hydrographic structure of the Southern Ocean presented in this atlas may serve the oceanographic community in many ways and help to unravel the role of this ocean in the global climate system. This atlas could only be prepared with the altruistic assistance of many colleagues from various institutions worldwide who have provided us with their data and their advice. Their generous help is gratefully acknowledged. For two years, scientists from the Arctic and Antarctic Research Institute in St. Petersburg and the Alfred Wegener Institute for Polar and Marine Research in Bremerhaven have cooperated fruitfully to establish the atlas and the archive of about 38,749 validated hydrographic stations. We hope that both sources of information will be widely applied in future ocean studies and will serve as a reference state for global change considerations.
Abstract:
As human activities push many ecosystems toward different functional regimes, the resilience of social-ecological systems is becoming a pressing issue. Local actors, involved in a wide variety of groups (ranging from independent local initiatives to large formal institutions), can act on these questions by collaborating on the development, promotion or implementation of practices better aligned with what the environment can supply. Complex networks emerge from these repeated collaborations, and it has been shown that the topology of these networks can improve the resilience of the social-ecological systems (SESs) in which they take part. The topology of actor networks that favours the resilience of their SES is characterised by a combination of several factors: the structure must be modular, to help the different groups develop and propose solutions that are both more innovative (by reducing homogenisation of the network) and closer to their own interests; it must be well connected and easily synchronisable, to facilitate consensus and to increase social capital and learning capacity; finally, it must be robust, so that the first two characteristics do not suffer from the voluntary withdrawal or exclusion of certain actors. These characteristics, which are relatively intuitive both conceptually and in their mathematical application, are often used separately to analyse the structural qualities of empirical actor networks. However, some of them are inherently incompatible: for example, the modularity of a network cannot increase at the same rate as its connectivity, and connectivity cannot be improved while also improving robustness. This obstacle makes it difficult to build a global measure, because the degree to which an actor network helps improve the resilience of the SES cannot be the simple sum of the characteristics above, but is rather the result of a subtle compromise between them. The work presented here aims (1) to explore the trade-offs between these characteristics; (2) to propose a measure of the degree to which an empirical actor network contributes to the resilience of its SES; and (3) to analyse an empirical network in the light of, among other things, these structural qualities. The thesis is organised around an introduction and four chapters numbered 2 to 5. Chapter 2 is a literature review on SES resilience. It identifies a series of structural characteristics (and the corresponding network measures) linked to improved resilience in SESs. Chapter 3 is a case study of the Eyre Peninsula, a rural region of South Australia where land use and climate change contribute to the erosion of biodiversity. For this case study, fieldwork was carried out in 2010 and 2011, during which a series of interviews produced a list of the actors involved in the co-management of biodiversity on the peninsula. The data collected were used to develop an online questionnaire documenting the interactions between these actors. These two steps allowed the reconstruction of a weighted, directed network of 129 individual actors and 1180 relations.
Chapter 4 describes a methodology for measuring the degree to which an actor network contributes to the resilience of the SES in which it is embedded. The method has two steps: first, an optimisation algorithm (simulated annealing) is used to build a semi-random archetype corresponding to a compromise between high levels of modularity, connectivity and robustness; second, an empirical network (such as that of the Eyre Peninsula) is compared to the archetypal network through a structural distance measure. The shorter the distance, the closer the empirical network is to its optimal configuration. The fifth and final chapter improves the simulated annealing algorithm used in Chapter 4. As is usual for this kind of algorithm, the simulated annealing projected the dimensions of the multi-objective problem onto a single dimension (as a weighted average). While this technique gives very good point solutions, it produces only one solution among the multitude of possible compromises between the objectives. To explore these trade-offs better, we propose a multi-objective simulated annealing algorithm that, rather than optimising a single solution, optimises a multidimensional surface of solutions. This study, which concentrates on the social part of social-ecological systems, improves our understanding of the actor structures that contribute to SES resilience. It shows that while some resilience-enhancing characteristics are incompatible (modularity and connectivity, or, to a lesser extent, connectivity and robustness), others are more easily reconciled (connectivity and synchronisability, or, to a lesser extent, modularity and robustness). It also provides an intuitive method for quantitatively measuring empirical actor networks, opening the way to, for example, case-study comparisons or the monitoring of actor networks over time. In addition, the thesis includes a case study that sheds light on the importance of certain institutional groups for coordinating collaboration and knowledge exchange between actors with potentially divergent interests.
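A minimal sketch of the Chapter 4 idea under stated assumptions: simulated annealing over small graphs, scalarising two partly conflicting objectives (modularity, and algebraic connectivity as a synchronisability proxy) into a single weighted score. The graph size, rewiring move, weights and cooling schedule are all illustrative; the thesis additionally optimises robustness and, in Chapter 5, a whole surface of compromises rather than a single scalarised solution:

```python
# Weighted-sum simulated annealing over graphs: maximise a compromise
# between modularity and algebraic connectivity by rewiring edges.
# All parameters are illustrative. Requires networkx and scipy.
import math, random
import networkx as nx
from networkx.algorithms import community

random.seed(0)

def score(G, alpha=0.5):
    mod = community.modularity(G, community.greedy_modularity_communities(G))
    conn = nx.algebraic_connectivity(G)          # synchronisability proxy
    return alpha * mod + (1 - alpha) * conn      # weighted-sum scalarisation

def rewire(G):
    """Move one random edge to a random non-edge, keeping G connected."""
    H = G.copy()
    u, v = random.choice(list(H.edges))
    a, b = random.choice(list(nx.non_edges(H)))
    H.remove_edge(u, v)
    H.add_edge(a, b)
    return H if nx.is_connected(H) else G

G = nx.gnm_random_graph(30, 60, seed=1)
while not nx.is_connected(G):
    G = nx.gnm_random_graph(30, 60)

current, T = score(G), 1.0
for step in range(300):
    H = rewire(G)
    s = score(H)
    # accept improvements, and occasionally accept downhill moves
    if s > current or random.random() < math.exp((s - current) / T):
        G, current = H, s
    T *= 0.99                                    # geometric cooling
print(f"final weighted score: {current:.3f}")
```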