46 results for Performance evolution due time
in Aston University Research Archive
Abstract:
Orthodox contingency theory links effective organisational performance to compatible relationships between the environment and organisation strategy and structure, and assumes that organisations have the capacity to adapt as the environment changes. Recent contributions to the literature on organisation theory claim that the key to effective performance is effective adaptation, which in turn requires the simultaneous reconciliation of efficiency and innovation afforded by a unique environment-organisation configuration. The literature on organisation theory recognises the continuing confusion caused by the fragmented and often conflicting results from cross-sectional studies. Although the case is made for longitudinal studies which comprehensively describe the evolving relationship between the environment and the organisation, there is little to suggest how such studies should be executed in practice. Typically the choice is between the approaches of the historicised case study and statistical analysis of large populations, which examine the relationship between environment and organisation strategy and/or structure and ignore the product-process relationship. This study combines the historicised case study with the multi-variable and ordinal-scale approach of statistical analysis to construct an analytical framework which tracks and exposes the environment-organisation-performance relationship over time. The framework examines changes in the environment, strategy and structure and uniquely includes an assessment of the organisation's product-process relationship and its contribution to organisational efficiency and innovation. The analytical framework is applied to examine the evolving environment-organisation relationship of two organisations in the same industry over the same twenty-five year period, providing a sector perspective of organisational adaptation. The findings demonstrate the significance of the environment-organisation configuration to the scope and frequency of adaptation, and suggest that the level of sector homogeneity may be linked to the level of product-process standardisation.
Abstract:
This collection of papers records a series of studies, carried out over a period of some 50 years, on two aspects of river pollution control: the prevention of pollution by sewage biological filtration and the monitoring of river pollution by biological surveillance. The earlier studies were carried out to develop methods of controlling flies which bred in the filters and caused serious nuisance, and a possible public health hazard, when they dispersed to surrounding villages. Although the application of insecticides proved effective as an alleviative measure, it was considered ecologically unsound as a long-term solution because it resulted in only a temporary disturbance of the ecological balance. Subsequent investigations showed that the fly populations in filters were largely determined by the amount of food available to the grazing larval stage in the form of filter film. It was also established that the winter deterioration in filter performance was due to the excessive accumulation of film. Subsequent investigations were therefore carried out to determine the factors responsible for the accumulation of film in different types of filter. Methods of filtration which were considered to control film accumulation by increasing the flushing action of the sewage were found to control fungal film by creating nutrient-limiting conditions. In some filters increasing the hydraulic flushing reduced the grazing fauna population in the surface layers and resulted in an increase in film. The results of these investigations were successfully applied in modifying filters and in the design of a Double Filtration process. These studies on biological filters led to the conclusion that they should be designed and operated as ecological systems and not merely as hydraulic ones. Studies on the effects of sewage effluents on Birmingham streams confirmed the findings of earlier workers, justifying their claim for using biological methods for detecting and assessing river pollution. Further ecological studies showed the sensitivity of benthic riffle communities to organic pollution. Using experimental channels and laboratory studies, the different environmental conditions associated with organic pollution were investigated. The degree and duration of the oxygen depletion during the dark hours were found to be a critical factor. The relative tolerance of different taxa to other pollutants, such as ammonia, differed. Although colonisation samplers proved of value in sampling difficult sites, the invertebrate data generated were not suitable for processing as any of the commonly used biotic indices. Several of the papers, which were written by request for presentation at conferences etc., presented the biological viewpoint on river pollution and water quality issues at the time and advocated the use of biological methods. The information and experience gained in these investigations were used as the "domain expert" in the development of artificial intelligence systems for use in the biological surveillance of river water quality.
Abstract:
N-doped ZnO/g-C3N4 hybrid core–shell nanoplates have been successfully prepared via a facile, cost-effective and eco-friendly ultrasonic dispersion method for the first time. HRTEM studies confirm the formation of the N-doped ZnO/g-C3N4 hybrid core–shell nanoplates with an average diameter of 50 nm, and the g-C3N4 shell thickness can be tuned by varying the content of loaded g-C3N4. The direct contact of the N-doped ZnO surface and the g-C3N4 shell, without any adhesive interlayer, introduced a new carbon energy level in the N-doped ZnO band gap and thereby effectively lowered the band gap energy. Consequently, the as-prepared hybrid core–shell nanoplates showed greatly enhanced visible-light photocatalytic activity for the degradation of Rhodamine B compared to that of pure N-doped ZnO and g-C3N4. Based on the experimental results, a mechanism for the N-doped ZnO/g-C3N4 photocatalyst is proposed and discussed. Interestingly, the hybrid core–shell nanoplates possess high photostability. The improved photocatalytic performance is due to a synergistic effect at the interface of the N-doped ZnO and g-C3N4, including the large exposed surface area, the energy band structure and enhanced charge-separation properties. Significantly, the enhanced performance also demonstrates the importance of evaluating new core–shell composite photocatalysts with g-C3N4 as the shell material.
Abstract:
We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time-sharing methods when algebraic codes are used. A statistical-physics-based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC-based time-sharing codes, while the best performance, obtained when received transmissions are optimally decoded, is bounded by the time-sharing limit.
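As a rough illustration of the time-sharing baseline referred to above, the sketch below compares the rate pairs achievable by superposition coding with those of simple time sharing on a degraded binary symmetric broadcast channel; the crossover probabilities and the formulation are illustrative assumptions, not taken from the paper.

```python
# Sketch: superposition coding vs. time sharing on a degraded binary
# symmetric broadcast channel (all parameters assumed for illustration).
import numpy as np

def h2(p):
    """Binary entropy in bits, clipped away from 0 and 1."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def bconv(a, b):
    """Binary convolution a*b = a(1-b) + (1-a)b."""
    return a * (1 - b) + (1 - a) * b

p1, p2 = 0.05, 0.15                  # crossover probabilities (good user, degraded user)
C1, C2 = 1 - h2(p1), 1 - h2(p2)      # point-to-point capacities

for beta in [0.1, 0.2, 0.3, 0.4]:
    R1 = h2(bconv(beta, p1)) - h2(p1)   # satellite-message rate to the good user
    R2 = 1 - h2(bconv(beta, p2))        # cloud-centre rate to the degraded user
    R1_ts = C1 * (1 - R2 / C2)          # time-sharing rate for user 1 at the same R2
    print(f"beta={beta:.1f}  superposition ({R1:.3f}, {R2:.3f})  "
          f"time sharing ({R1_ts:.3f}, {R2:.3f})")
```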
Abstract:
This thesis presents a novel high-performance approach to time-division-multiplexing (TDM) fibre Bragg grating (FBG) optical sensors, known as the resonant cavity architecture. A background theory of FBG optical sensing is presented, including several techniques for multiplexing sensors. The limitations of current wavelength-division-multiplexing (WDM) schemes are contrasted against the technological and commercial advantages of TDM. The author’s hypothesis that ‘it should be possible to achieve TDM FBG sensor interrogation using an electrically switched semiconductor optical amplifier (SOA)’ is then explained. Research and development of a commercially viable optical sensor interrogator based on the resonant cavity architecture forms the remainder of this thesis. A fully programmable SOA drive system allows interrogation of sensor arrays 10 km long with a spatial resolution of 8 cm, and a variable gain system provides dynamic compensation for fluctuating system losses. Ratiometric filter- and diffractive-element spectrometer-based wavelength measurement systems are developed and analysed for different commercial applications. The ratiometric design provides a low-cost solution that has picometre resolution and low noise using 4% reflective sensors, but is less tolerant to variation in system loss. The spectrometer design is more expensive, but delivers exceptional performance with picometre resolution, low noise and tolerance to 13 dB of system loss variation. Finally, this thesis details the interrogator’s peripheral components, its compliance for operation in harsh industrial environments and several examples of commercial applications where it has been deployed. Applications include laboratory instruments, temperature monitoring systems for oil production, dynamic control for wind energy and battery-powered, self-contained sub-sea strain monitoring.
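A back-of-envelope timing check (a sketch, not taken from the thesis) relates the quoted 8 cm spatial resolution and 10 km array length to the required SOA gate width and the maximum interrogation rate; the fibre group index is an assumed value.

```python
# Sketch: TDM timing figures implied by the quoted resolution and array length.
c = 299_792_458.0      # speed of light in vacuum, m/s
n_g = 1.468            # assumed group index of standard single-mode fibre

dz = 0.08              # 8 cm spatial resolution quoted above
L = 10_000.0           # 10 km sensor array length quoted above

gate_width = 2 * n_g * dz / c     # round-trip delay over one resolution cell
round_trip = 2 * n_g * L / c      # round-trip delay to the far end of the array

print(f"required SOA gate width ≈ {gate_width * 1e9:.2f} ns")      # ≈ 0.78 ns
print(f"round-trip time over 10 km ≈ {round_trip * 1e6:.1f} µs")   # ≈ 98 µs
print(f"maximum interrogation rate ≈ {1 / round_trip / 1e3:.1f} kHz")
```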
Abstract:
Traditional high-speed machinery actuators are powered and coordinated by mechanical linkages driven from a central drive, but these linkages may be replaced by independently synchronised electric drives. Problems associated with utilising such electric drives for this form of machinery were investigated. The research concentrated on a high-speed rod-making machine, which required control of high inertias (0.01–0.5 kgm²), at continuous high speed (2500 r/min), with low relative phase errors between two drives (0.0025 radians). Traditional minimum-energy drive selection techniques for incremental motions were not applicable to continuous applications which require negligible energy dissipation, so new selection techniques were developed. A brushless configuration constant enabled the comparison of seven different servo systems; the rare earth brushless drives had the best power rates, a measure of performance. Simulation was used to review control strategies, from which a microprocessor controller with a proportional velocity loop within a proportional position loop with velocity feedforward was designed. Local control schemes were investigated as means of reducing relative errors between drives: the slave of a master/slave scheme compensates for the master's errors; the matched scheme has drives with similar absolute errors so the relative error is minimised; and the feedforward scheme minimises error by adding compensation from previous knowledge. Simulation gave an approximate velocity loop bandwidth and position loop gain required to meet the specification. Theoretical limits for these parameters were defined in terms of digital sampling delays, quantisation, and system phase shifts. Performance degradation due to mechanical backlash was evaluated. Thus any drive could be checked to ensure that the performance specification could be realised. A two-drive demonstrator was commissioned with 0.01 kgm² loads. By use of simulation the performance of one drive was improved by increasing the velocity loop bandwidth fourfold. With the master/slave scheme, relative errors were within 0.0024 radians at a constant 2500 r/min for two 0.01 kgm² loads.
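The control structure named above (a proportional velocity loop nested inside a proportional position loop with velocity feedforward) can be illustrated with a minimal discrete-time simulation of a pure inertia load; the gains, sample period and friction-free plant below are illustrative assumptions, not the thesis parameters.

```python
# Sketch: P position loop + P velocity loop with velocity feedforward on an inertia.
import numpy as np

J = 0.01          # load inertia, kg·m² (lower end of the abstract's range)
dt = 0.0005       # controller sample period, s (assumed)
Kp = 50.0         # position loop proportional gain, 1/s (assumed)
Kv = 5.0          # velocity loop proportional gain, N·m·s/rad (assumed)

w_ref = 2500.0 * 2.0 * np.pi / 60.0    # constant 2500 r/min reference, rad/s
theta = omega = theta_ref = err = 0.0

for _ in range(int(10.0 / dt)):                    # 10 s of simulated time
    theta_ref += w_ref * dt                        # position reference ramp
    v_cmd = Kp * (theta_ref - theta) + w_ref       # position loop + velocity feedforward
    torque = Kv * (v_cmd - omega)                  # proportional velocity loop
    omega += (torque / J) * dt                     # integrate the inertia
    theta += omega * dt
    err = theta_ref - theta

print(f"position error after 10 s at 2500 r/min: {err:.6f} rad")
```

With feedforward and no load friction, the tracking error of the ramp reference decays towards zero, which is why the feedforward term matters for keeping relative phase errors between drives small.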
Abstract:
Task classification is introduced as a method for the evaluation of monitoring behaviour in different task situations. On the basis of an analysis of different monitoring tasks, a task classification system comprising four task 'dimensions' is proposed. The perceptual speed and flexibility of closure categories, which are identified with signal discrimination type, comprise the principal dimension in this taxonomy, the others being sense modality, the time course of events, and source complexity. It is also proposed that decision theory provides the most complete method for the analysis of performance in monitoring tasks. Several different aspects of decision theory in relation to monitoring behaviour are described. A method is also outlined whereby both accuracy and latency measures of performance may be analysed within the same decision theory framework. Eight experiments and an organizational study are reported. The results show that a distinction can be made between the perceptual efficiency (sensitivity) of a monitor and his criterial level of response, and that in most monitoring situations there is no decrement in efficiency over the work period, but an increase in the strictness of the response criterion. The range of tasks exhibiting either or both of these performance trends can be specified within the task classification system. In particular, it is shown that a sensitivity decrement is only obtained for 'speed' tasks with a high stimulation rate. A distinctive feature of 'speed' tasks is that target detection requires the discrimination of a change in a stimulus relative to preceding stimuli, whereas in 'closure' tasks, the information required for the discrimination of targets is presented at the same point in time. In the final study, the specification of tasks yielding sensitivity decrements is shown to be consistent with a task classification analysis of the monitoring literature. It is also demonstrated that the signal type dimension has a major influence on the consistency of individual differences in performance in different tasks. The results provide an empirical validation for the 'speed' and 'closure' categories, and suggest that individual differences are not completely task specific but are dependent on the demands common to different tasks. Task classification is therefore shown to enable improved generalizations to be made of the factors affecting 1) performance trends over time, and 2) the consistency of performance in different tasks. A decision theory analysis of response latencies is shown to support the view that criterion shifts are obtained in some tasks, while sensitivity shifts are obtained in others. The results of a psychophysiological study also suggest that evoked potential latency measures may provide temporal correlates of criterion shifts in monitoring tasks. Among other results, the finding that the latencies of negative responses do not increase over time is taken to invalidate arousal-based theories of performance trends over a work period. An interpretation in terms of expectancy, however, provides a more reliable explanation of criterion shifts. Although the mechanisms underlying the sensitivity decrement are not completely clear, the results rule out 'unitary' theories such as observing response and coupling theory. It is suggested that an interpretation in terms of the memory data limitations on information processing provides the most parsimonious explanation of all the results in the literature relating to sensitivity decrement.
Task classification therefore enables the refinement and selection of theories of monitoring behaviour in terms of their reliability in generalizing predictions to a wide range of tasks. It is thus concluded that task classification and decision theory provide a reliable basis for the assessment and analysis of monitoring behaviour in different task situations.
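The sensitivity/criterion distinction drawn above comes from signal detection theory; the short sketch below computes d′ and the criterion c from hit and false-alarm rates for two invented watch periods, showing a roughly stable d′ alongside a stricter criterion.

```python
# Sketch: d' (sensitivity) and c (criterion) from hit / false-alarm rates.
from scipy.stats import norm

def sdt(hit_rate, fa_rate):
    """Return (d', criterion c) under the equal-variance Gaussian model."""
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_h - z_f, -(z_h + z_f) / 2.0

# invented data: efficiency stays flat while the response criterion becomes stricter
periods = {"first 20 min": (0.80, 0.20), "last 20 min": (0.62, 0.08)}
for label, (h, f) in periods.items():
    d_prime, c = sdt(h, f)
    print(f"{label}: hits={h:.2f} false alarms={f:.2f}  d'={d_prime:.2f}  c={c:.2f}")
```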
Abstract:
This thesis describes an investigation by the author into the spares operation of CompAir BroomWade Ltd. Whilst the complete system, including the warehousing and distribution functions, was investigated, the thesis concentrates on the provisioning aspect of the spares supply problem. Analysis of the historical data showed the presence of significant fluctuations in all the measures of system performance. Two Industrial Dynamics simulation models were developed to study this phenomenon. The models showed that any fluctuation in end-customer demand would be amplified as it passed through the distributor and warehouse stock control systems. The evidence from the historical data available supported this view of the system's operation. The models were utilised to determine which parts of the total system could be expected to exert a critical influence on its performance. The lead time parameters of the supply sector were found to be critical, and further study showed that the manner in which the lead time changed with work-in-progress levels was also an important factor. The problem therefore resolved into the design of a spares manufacturing system which exhibited the appropriate dynamic performance characteristics. The gross level of entity representation, inherent in the Industrial Dynamics methodology, was found to limit the value of these models in the development of detail design proposals. Accordingly, an interacting job shop simulation package was developed to allow detailed evaluation of organisational factors on the performance characteristics of a manufacturing system. The package was used to develop a design for a pilot spares production unit. The need for a manufacturing system to perform successfully under conditions of fluctuating demand is not limited to the spares field. Thus, although the spares exercise provides an example of the approach, the concepts and techniques developed can be considered to have broad application throughout the batch manufacturing industry.
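The demand amplification the Industrial Dynamics models exposed can be illustrated with a toy two-echelon order-up-to simulation; the policy, smoothing constant, lead times and the step in demand below are assumptions for illustration, not the thesis models.

```python
# Sketch: a step in end-customer demand is amplified as it passes through
# two order-up-to stock-control echelons (distributor, then warehouse).
import numpy as np

T, lead = 60, 2
demand = np.r_[np.full(20, 100.0), np.full(T - 20, 120.0)]   # 20% step at week 20

def echelon(orders_in, lead_time, cover=3.0, alpha=0.3):
    """Order-up-to echelon: smooths the demand it sees and orders to restore
    its inventory position to 'cover' periods of the smoothed forecast."""
    d0 = orders_in[0]
    forecast = d0
    stock = (cover - lead_time + 1) * d0          # steady-state starting stock
    pipeline = [d0] * lead_time                   # orders already in transit
    orders_out = []
    for d in orders_in:
        stock += pipeline.pop(0) - d              # receive oldest order, then ship demand
        forecast += alpha * (d - forecast)        # exponential smoothing of observed demand
        order = max(0.0, cover * forecast + forecast - stock - sum(pipeline))
        pipeline.append(order)
        orders_out.append(order)
    return np.array(orders_out)

dist_orders = echelon(demand, lead)        # distributor orders placed on the warehouse
wh_orders = echelon(dist_orders, lead)     # warehouse orders placed on the factory

for name, series in [("customer demand", demand),
                     ("distributor orders", dist_orders),
                     ("warehouse orders", wh_orders)]:
    print(f"{name:20s} min={series.min():6.1f}  max={series.max():6.1f}")
```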
Abstract:
This thesis first considers the calibration and signal processing requirements of a neuromagnetometer for the measurement of human visual function. Gradiometer calibration using straight wire grids is examined and optimal grid configurations determined, given realistic constructional tolerances. Simulations show that for a gradiometer balance of 1:10⁴ and a wire spacing error of 0.25 mm, the achievable calibration accuracy is 0.3% in gain, 0.3 mm in position and 0.6° in orientation. Practical results with a 19-channel 2nd-order gradiometer-based system exceed this performance. The real-time application of adaptive reference noise cancellation filtering to running-average evoked response data is examined. In the steady state, the filter can be assumed to be driven by a non-stationary step input arising at epoch boundaries. Based on empirical measures of this driving step, an optimal progression for the filter time constant is proposed which improves upon fixed time constant filter performance. The incorporation of the time-derivatives of the reference channels was found to improve the performance of the adaptive filtering algorithm by 15-20% for unaveraged data, falling to 5% with averaging. The thesis concludes with a neuromagnetic investigation of evoked cortical responses to chromatic and luminance grating stimuli. The global magnetic field power of evoked responses to the onset of sinusoidal gratings was shown to have distinct chromatic and luminance sensitive components. Analysis of the results, using a single equivalent current dipole model, shows that these components arise from activity within two distinct cortical locations. Co-registration of the resulting current source localisations with MRI shows a chromatically responsive area lying along the midline within the calcarine fissure, possibly extending onto the lingual and cuneal gyri. It is postulated that this area is the human homologue of the primate cortical area V4.
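A minimal sketch of adaptive reference noise cancellation using the least-mean-squares (LMS) algorithm, with the reference channel and its time-derivative as inputs as described above; the signals are synthetic (the interference is constructed as a mix of the reference and its derivative) and the thesis's actual filter implementation may differ.

```python
# Sketch: two-input LMS reference noise canceller (reference + its derivative).
import numpy as np

rng = np.random.default_rng(0)
fs = 1000
t = np.arange(0, 5.0, 1 / fs)

# synthetic data: a repeating "evoked response" plus interference that couples
# into the signal channel as a mix of a reference channel and its derivative
evoked = 0.5 * np.exp(-((t % 1.0 - 0.15) ** 2) / 0.002)
reference = np.sin(2 * np.pi * 50 * t) + 0.3 * rng.standard_normal(t.size)
signal = evoked + 0.8 * reference + 5.0 * np.gradient(reference)

x = np.vstack([reference, np.gradient(reference)])   # canceller inputs
w = np.zeros(2)
mu = 0.01
clean = np.empty_like(signal)
for n in range(t.size):
    est = w @ x[:, n]          # current estimate of the interference
    e = signal[n] - est        # error = cleaned output
    w += 2 * mu * e * x[:, n]  # LMS weight update
    clean[n] = e

before = np.mean((signal[1000:] - evoked[1000:]) ** 2)
after = np.mean((clean[1000:] - evoked[1000:]) ** 2)
print(f"learned weights ≈ {np.round(w, 2)}; interference reduced by "
      f"{10 * np.log10(before / after):.1f} dB")
```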
Abstract:
To what extent does competitive entry create a structural change in key marketing metrics? New players may be just a temporary nuisance to incumbents, but could also fundamentally change the latter's performance evolution, or induce them to permanently alter their spending levels and/or pricing decisions. Similarly, the addition of a new marketing channel could permanently shift shopping preferences, or could just create a short-lived migration from existing channels. The steady-state impact of a given entry or channel addition on various marketing metrics is intrinsically an empirical issue for which we need an appropriate testing procedure. In this study, we introduce a testing sequence that allows for the endogenous determination of potential change (break) locations, thereby accounting for lead and/or lagged effects of the introduction of interest. By not restricting the number of potential breaks to one (as is commonly done in the marketing literature), we quantify the impact of the new entrant(s) while controlling for other events that may have taken place in the market. We illustrate the methodology in the context of the Dutch television advertising market, which was characterized by the entry of several late movers. We find that the steady-state growth of private incumbents' revenues was slowed by the quasi-simultaneous entry of three new players. Contrary to industry observers' expectations, such a slowdown was not experienced in the related markets of print and radio advertising.
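The idea of letting the data determine the break location can be illustrated with a minimal single-break search that picks the candidate date minimising the residual sum of squares of a mean-shift model; the paper's testing sequence additionally allows multiple breaks and marketing covariates, and the series below is synthetic.

```python
# Sketch: endogenous break-date estimation by minimising the residual sum of
# squares of a mean-shift model over candidate break dates (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
T, true_break = 120, 70
y = np.r_[np.full(true_break, 10.0), np.full(T - true_break, 8.0)] + rng.normal(0, 0.8, T)

trim = int(0.15 * T)                          # keep the search away from the sample ends
candidates = list(range(trim, T - trim))
ssr = [np.var(y[:k]) * k + np.var(y[k:]) * (T - k) for k in candidates]
k_hat = candidates[int(np.argmin(ssr))]

print(f"estimated break date: t = {k_hat} (true break at t = {true_break})")
print(f"pre-break mean {y[:k_hat].mean():.2f}, post-break mean {y[k_hat:].mean():.2f}")
```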
Abstract:
The IEEE 802.15.4 standard is a relatively new standard designed for low-power, low-data-rate wireless sensor networks (WSNs), which have a wide range of applications, e.g., environment monitoring, e-health, and home and industry automation. In this paper, we investigate the problem of hidden devices in coverage-overlapped IEEE 802.15.4 WSNs, which is likely to arise when multiple 802.15.4 WSNs are deployed closely and independently. We consider a typical scenario of two 802.15.4 WSNs with partial coverage overlapping and propose a Markov-chain based analytical model to reveal the performance degradation due to the hidden devices resulting from the coverage overlapping. The impacts of the hidden devices and network sleeping modes on saturated throughput and energy consumption are modeled. The analytical model is verified by simulations and can provide insights for network design and planning when multiple 802.15.4 WSNs are deployed closely. © 2013 IEEE.
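Analytical models of this kind ultimately reduce to solving a Markov chain for its stationary distribution; the sketch below does this for a toy three-state node model (idle / backoff / transmit) whose transition probabilities are purely illustrative and are not the paper's 802.15.4 model.

```python
# Sketch: stationary distribution of a toy Markov-chain node model and an
# illustrative throughput metric derived from it.
import numpy as np

# Row-stochastic transition matrix over states [idle, backoff, transmit]
P = np.array([
    [0.70, 0.30, 0.00],   # idle: start a backoff with probability 0.3
    [0.05, 0.60, 0.35],   # backoff: a busy channel keeps the node in backoff
    [0.80, 0.20, 0.00],   # transmit: success -> idle, collision -> new backoff
])

# Stationary distribution: solve pi P = pi together with sum(pi) = 1
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.r_[np.zeros(3), 1.0]
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

p_success = 0.9           # assumed per-attempt success probability
print(f"stationary distribution (idle, backoff, tx): {np.round(pi, 3)}")
print(f"illustrative normalised throughput ≈ {pi[2] * p_success:.3f}")
```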
Abstract:
In this paper we consider the possibility of using intermediate solutions, in which the ideal apodisation profile for a dispersion-free, sharp reflection-profile fibre Bragg grating is approximated to different degrees. The ideal apodisation profile for a flat-dispersion, 50 GHz bandwidth grating was obtained using the layer-peeling algorithm. To verify the modelled results, a version of the 5-section grating was manufactured, with excellent agreement between the model and the experimental results. The performance penalty due to multiple reflections from the FBGs in different situations was studied. The results showed that in the approximated gratings some post-compensation must be included to account for the local deviations from zero dispersion. © 2003 IEEE.
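An N-section approximation of a continuous apodisation profile is typically evaluated with a piecewise-uniform transfer-matrix calculation of the reflection spectrum; the sketch below shows such a calculation for an assumed 5-section sine profile and is not the paper's layer-peeling design code (grating length, coupling strength and index are illustrative).

```python
# Sketch: piecewise-uniform transfer-matrix evaluation of an FBG reflection
# spectrum for a coarse (5-section) apodisation profile.
import numpy as np

def fbg_reflectivity(kappa_sections, L, n_eff, lambda_B, wavelengths):
    """|r|^2 of a grating built from uniform sections with coupling kappa (1/m)."""
    dz = L / len(kappa_sections)
    R = np.empty_like(wavelengths)
    for i, lam in enumerate(wavelengths):
        sigma = 2 * np.pi * n_eff * (1 / lam - 1 / lambda_B)   # detuning from Bragg
        F = np.eye(2, dtype=complex)
        for kappa in kappa_sections:
            g = np.sqrt(complex(kappa**2 - sigma**2))
            M = np.array([
                [np.cosh(g*dz) - 1j*(sigma/g)*np.sinh(g*dz), -1j*(kappa/g)*np.sinh(g*dz)],
                [1j*(kappa/g)*np.sinh(g*dz),  np.cosh(g*dz) + 1j*(sigma/g)*np.sinh(g*dz)],
            ])
            F = M @ F
        R[i] = abs(F[1, 0] / F[0, 0]) ** 2
    return R

L, n_eff, lambda_B = 0.01, 1.45, 1550e-9                 # 10 mm grating (assumed)
wl = np.linspace(1549.2e-9, 1550.8e-9, 400)
z = (np.arange(5) + 0.5) / 5                             # centres of the 5 sections
kappa5 = 800.0 * np.sin(np.pi * z)                       # coarse sine apodisation, 1/m
print(f"peak reflectivity (5-section): "
      f"{fbg_reflectivity(kappa5, L, n_eff, lambda_B, wl).max():.3f}")
```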
Abstract:
This PhD thesis analyses networks of knowledge flows, focusing on the role of indirect ties in the knowledge transfer, knowledge accumulation and knowledge creation process. It extends and improves existing methods for mapping networks of knowledge flows in two different applications and contributes to two streams of research. To support the underlying idea of this thesis, which is finding an alternative method to rank indirect network ties to shed new light on the dynamics of knowledge transfer, we apply Ordered Weighted Averaging (OWA) to two different network contexts. Knowledge flows in patent citation networks and in a company supply chain network are analysed using Social Network Analysis (SNA) and the OWA operator. The OWA operator is used here for the first time (i) to rank indirect citations in patent networks, providing new insight into their role in transferring knowledge among network nodes, and to analyse a long chain of patent generations over 13 years; and (ii) to rank indirect relations in a company supply chain network, to shed light on the role of indirectly connected individuals involved in the knowledge transfer and creation processes and to contribute to the literature on knowledge management in a supply chain. In doing so, indirect ties are measured and their role as a means of knowledge transfer is shown. Thus, this thesis represents a first attempt to bridge the OWA and SNA fields and to show that the two methods can be used together to enrich the understanding of the role of indirectly connected nodes in a network. More specifically, the OWA scores enrich our understanding of knowledge evolution over time within complex networks. Future research can show the usefulness of the OWA operator in different complex networks, such as online social networks that consist of thousands of nodes.
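The OWA operator itself is compact: weights are applied to the sorted arguments rather than to particular ties. The sketch below applies it to an invented vector of indirect-path strengths with two assumed weight vectors; neither the values nor the weights come from the thesis.

```python
# Sketch: Ordered Weighted Averaging (OWA) aggregation of tie strengths.
import numpy as np

def owa(values, weights):
    """OWA: weights are applied to the values sorted in descending order."""
    values = np.sort(np.asarray(values, dtype=float))[::-1]
    weights = np.asarray(weights, dtype=float)
    assert values.shape == weights.shape and np.isclose(weights.sum(), 1.0)
    return float(values @ weights)

# e.g. strengths of 1st-, 2nd- and 3rd-generation citation paths reaching a patent
path_strengths = [0.9, 0.4, 0.1]
print(owa(path_strengths, [0.5, 0.3, 0.2]))   # 'optimistic' weighting  -> 0.59
print(owa(path_strengths, [0.2, 0.3, 0.5]))   # 'pessimistic' weighting -> 0.35
```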
Abstract:
A new generation of high-capacity WDM systems with extremely robust performance has been enabled by coherent transmission and digital signal processing. To facilitate widespread deployment of this technology, particularly in the metro space, new photonic components and subsystems are being developed to support cost-effective, compact, and scalable transceivers. We briefly review the recent progress in InP-based photonic components, and report numerical simulation results of an InP-based transceiver comprising a dual-polarization I/Q modulator and a commercial DSP ASIC. Predicted performance penalties due to the nonlinear response, lower bandwidth, and finite extinction ratio of these transceivers are less than 1 and 2 dB for 100-G PM-QPSK and 200-G PM-16QAM, respectively. Using the well-established Gaussian-Noise model, we estimate the system reach of 100-G PM-QPSK to be greater than 600 km for typical ROADM-based metro-regional systems with internode losses up to 20 dB. © 1983-2012 IEEE.
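For orientation, a back-of-envelope ASE-limited OSNR estimate of the kind underlying such reach calculations is sketched below; it ignores the nonlinear noise that the Gaussian-Noise model adds and the transceiver penalties quoted above, and all parameter values are assumptions rather than figures from the paper.

```python
# Sketch: ASE-limited OSNR budget using the common 0.1 nm rule of thumb
#   OSNR [dB] ~= 58 + P_ch - NF - node_loss - 10*log10(N_hops)
import math

P_ch = 0.0         # launch power per channel, dBm (assumed)
NF = 5.0           # amplifier noise figure, dB (assumed)
node_loss = 20.0   # per-internode loss, dB (figure quoted in the abstract)
osnr_req = 15.0    # required OSNR in 0.1 nm incl. margin, dB (assumed)

max_hops = int(10 ** ((58 + P_ch - NF - node_loss - osnr_req) / 10))
osnr_8 = 58 + P_ch - NF - node_loss - 10 * math.log10(8)

print(f"ASE-limited maximum: {max_hops} hops of {node_loss:.0f} dB")
print(f"OSNR after 8 such hops ≈ {osnr_8:.1f} dB (vs. {osnr_req:.1f} dB required)")
```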