50 results for distributed control and estimation
in Aston University Research Archive
Abstract:
Flow control in computer communication systems is generally implemented as a multi-layered structure, consisting of several mechanisms operating independently at different levels. Evaluation of the performance of networks in which different flow control mechanisms act simultaneously is an important area of research, and is examined in depth in this thesis. This thesis presents the modelling of a finite-resource computer communication network equipped with three levels of flow control, based on closed queueing network theory. The flow control mechanisms considered are: end-to-end control of virtual circuits, network access control of external messages at the entry nodes, and hop-level control between nodes. The model is solved by a heuristic technique, based on an equivalent reduced network and heuristic extensions to the mean value analysis algorithm. The method has significant computational advantages, and overcomes the limitations of the exact methods. It can be used to solve large network models with finite buffers and many virtual circuits. The model and its heuristic solution are validated by simulation. The interaction between the three levels of flow control is investigated. A queueing model is developed for the admission delay on virtual circuits with end-to-end control, in which messages arrive from independent Poisson sources. The selection of the optimum window limit is considered. Several advanced network access schemes are postulated to improve the performance of the network as well as that of selected traffic streams, and numerical results are presented. A model for the dynamic control of input traffic is developed. Based on Markov decision theory, an optimal control policy is formulated. Numerical results are given, and throughput-delay performance is shown to be better with dynamic control than with static control.
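The heuristic solution described above extends the exact mean value analysis (MVA) recursion for closed queueing networks. A minimal sketch of the standard single-class MVA algorithm on which such heuristic extensions build (the service demands and population below are illustrative, not taken from the thesis):

```python
def mva(demands, n_customers):
    """Exact single-class Mean Value Analysis for a closed product-form
    network of FCFS queues.  demands: mean service demand per queue."""
    q = [0.0] * len(demands)          # mean queue lengths at population 0
    for n in range(1, n_customers + 1):
        # residence time per queue: demand times (1 + queue length seen on arrival)
        r = [d * (1.0 + qk) for d, qk in zip(demands, q)]
        x = n / sum(r)                # system throughput at population n
        q = [x * rk for rk in r]      # Little's law applied per queue
    return x, r, q
```

For example, `mva([0.1, 0.2], 3)` returns the throughput, residence times and queue lengths for three circulating customers at two queues; the queue lengths always sum to the population, which is a useful sanity check.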
Abstract:
Distributed digital control systems provide alternatives to conventional, centralised digital control systems. Typically, a modern distributed control system will comprise a multi-processor or network of processors, a communications network, an associated set of sensors and actuators, and the systems and applications software. This thesis addresses the problem of how to design robust decentralised control systems, such as those used to control event-driven, real-time processes in time-critical environments. Emphasis is placed on studying the dynamical behaviour of a system and identifying ways of partitioning the system so that it may be controlled in a distributed manner. A structural partitioning technique is adopted which makes use of natural physical sub-processes in the system, which are then mapped into the software processes to control the system. However, communications are required between the processes because of the disjoint nature of the distributed (i.e. partitioned) state of the physical system. The structural partitioning technique, and recent developments in the theory of potential controllability and observability of a system, are the basis for the design of controllers. In particular, the method is used to derive a decentralised estimate of the state vector for a continuous-time system. The work is also extended to derive a distributed estimate for a discrete-time system. Emphasis is also given to the role of communications in the distributed control of processes and to the partitioning technique necessary to design distributed and decentralised systems with resilient structures. A method is presented for the systematic identification of the communications necessary for distributed control. It is also shown that the structural partitions can be used directly in the design of software fault-tolerant concurrent controllers.
In particular, the structural partition can be used to identify the boundary of the conversation which can be used to protect a specific part of the system. In addition, for certain classes of system, the partitions can be used to identify processes which may be dynamically reconfigured in the event of a fault. These methods should be of use in the design of robust distributed systems.
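As a rough illustration of the distributed estimation idea described above, the sketch below runs two local observers for a two-way partition of a discrete-time linear system: each node measures only its own partition and communicates its latest estimate to the other, supplying the coupling term that the disjoint partitions would otherwise lack. The matrices, observer gains and measurement model are assumptions chosen for illustration, not the thesis's design:

```python
# Two-node sketch of a distributed estimator for a partitioned
# discrete-time system x(k+1) = A x(k).  All numbers are illustrative.
A = [[0.9, 0.1],
     [0.05, 0.8]]
L = [0.5, 0.5]                 # local observer gains (assumed)

x = [1.0, -1.0]                # true partitioned state
xh = [0.0, 0.0]                # local estimates held at each node

for _ in range(50):
    y = [x[0], x[1]]           # each node measures only its own partition
    comm = [xh[1], xh[0]]      # communicated: the neighbour's current estimate
    new_xh = [
        A[0][0] * xh[0] + A[0][1] * comm[0] + L[0] * (y[0] - xh[0]),
        A[1][1] * xh[1] + A[1][0] * comm[1] + L[1] * (y[1] - xh[1]),
    ]
    x = [A[0][0] * x[0] + A[0][1] * x[1],   # advance the true state
         A[1][0] * x[0] + A[1][1] * x[1]]
    xh = new_xh
```

With these (assumed) gains the joint estimation-error dynamics are stable, so both local estimates converge to the corresponding true partition states after a few dozen steps.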
Abstract:
This thesis is concerned with the inventory control of items that can be considered independent of one another. The decisions of when to order, and in what quantity, are the controllable or independent variables in cost expressions which are minimised. The four systems considered are referred to as (Q,R), (nQ,R,T), (M,T) and (M,R,T). With (Q,R), a fixed quantity Q is ordered each time the order cover (i.e. stock in hand plus on order) equals or falls below R, the re-order level. With the other three systems, reviews are made only at intervals of T. With (nQ,R,T), an order for nQ is placed if on review the inventory cover is less than or equal to R, where n, an integer, is chosen at the time so that the new order cover just exceeds R. In (M,T), each order increases the order cover to M. Finally, in (M,R,T), when on review the order cover does not exceed R, enough is ordered to increase it to M. The (Q,R) system is examined at several levels of complexity, so that the theoretical savings in inventory costs obtained with more exact models could be compared with the increases in computational costs. Since the exact model was preferable for the (Q,R) system, only exact models were derived for the other three systems. Several methods of optimisation were tried, but most were found inappropriate for the exact models because of non-convergence; however, one method did work for each of the exact models. Demand is considered continuous and, with one exception, the distribution assumed is the normal distribution truncated so that demand is never less than zero. Shortages are assumed to result in backorders, not lost sales. However, the shortage cost is a function of three items, one of which, the backorder cost, may be either a linear, quadratic or exponential function of the length of time of a backorder, with or without a period of grace. Lead times are assumed constant or gamma distributed. Lastly, the actual supply quantity is allowed to be distributed.
All the sets of equations were programmed for a KDF 9 computer and the computed performances of the four inventory control procedures are compared under each assumption.
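As an illustration of the kind of cost expression minimised for the (Q,R) system, the sketch below evaluates a standard textbook approximation of the expected annual cost under normally distributed lead-time demand with full backordering and a linear backorder cost. The cost structure and parameter names are generic assumptions, not the thesis's exact models:

```python
import math

def _phi(z):   # standard normal density
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def _Phi(z):   # standard normal cumulative distribution
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def qr_cost(Q, R, demand, mu_L, sigma_L, K, h, p):
    """Approximate annual cost of a (Q, R) policy.
    demand: annual demand; mu_L, sigma_L: lead-time demand parameters;
    K: fixed order cost; h: holding cost/unit/year; p: backorder cost/unit."""
    z = (R - mu_L) / sigma_L
    # expected shortage per cycle via the standard normal loss function
    esc = sigma_L * (_phi(z) - z * (1 - _Phi(z)))
    ordering = K * demand / Q
    holding = h * (Q / 2 + R - mu_L)       # cycle stock plus safety stock
    shortage = p * (demand / Q) * esc      # shortage cost per year
    return ordering + holding + shortage
```

Evaluating the expression over a grid of (Q, R) pairs, or iterating the usual first-order conditions, then yields the minimising policy.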
Abstract:
The open content creation process has proven itself to be a powerful and influential way of developing text-based content, as demonstrated by the success of Wikipedia and related sites. Distributed individuals independently edit, revise, or refine content, thereby creating knowledge artifacts of considerable breadth and quality. Our study explores the mechanisms that control and guide the content creation process and develops an understanding of open content governance. The repertory grid method is employed to systematically capture the experiences of individuals involved in the open content creation process and to determine the relative importance of the diverse control and guiding mechanisms. Our findings illustrate the important control and guiding mechanisms and highlight the multifaceted nature of open content governance. A range of governance mechanisms is discussed with regard to the varied levels of formality, the different loci of authority, and the diverse interaction environments involved. Limitations and opportunities for future research are provided.
Abstract:
A new approach to optimisation is introduced based on a precise probabilistic statement of what is ideally required of an optimisation method. It is convenient to express the formalism in terms of the control of a stationary environment. This leads to an objective function for the controller which unifies the objectives of exploration and exploitation, thereby providing a quantitative principle for managing this trade-off. This is demonstrated using a variant of the multi-armed bandit problem. This approach opens new possibilities for optimisation algorithms, particularly by using neural network or other adaptive methods for the adaptive controller. It also opens possibilities for deepening understanding of existing methods. The realisation of these possibilities requires research into practical approximations of the exact formalism.
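The exploration-exploitation trade-off on the multi-armed bandit can be illustrated with a conventional Bayesian scheme; the sketch below uses Thompson sampling on Bernoulli arms, in which both exploration and exploitation emerge from sampling the posterior. This is a standard illustration of the trade-off, not the paper's exact formalism:

```python
import random

def thompson_bandit(true_means, horizon, seed=0):
    """Thompson sampling on a Bernoulli bandit: each arm keeps a Beta
    posterior; at each step, play the arm whose posterior sample is best."""
    rng = random.Random(seed)
    k = len(true_means)
    wins = [1] * k        # Beta(1, 1) priors: one pseudo-success,
    losses = [1] * k      # one pseudo-failure per arm
    total = 0
    for _ in range(horizon):
        # draw one plausible mean per arm, then exploit the best draw
        samples = [rng.betavariate(wins[i], losses[i]) for i in range(k)]
        arm = max(range(k), key=samples.__getitem__)
        reward = 1 if rng.random() < true_means[arm] else 0
        total += reward
        wins[arm] += reward
        losses[arm] += 1 - reward
    return total
```

With arms of success probability 0.2 and 0.8, the sampler quickly concentrates its pulls on the better arm, so the total reward over a long horizon approaches what the best arm alone would earn.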
Abstract:
In this article we introduce the notions of knowledge policy and the politics of knowledge. These have to be distinguished from the older, well-known terms of research policy, or science and technology policy. While the latter aim to foster the development of innovations in knowledge and its applications, the former are aware of the side effects of new knowledge and try to address them. While research policy takes the aims of innovations as largely unproblematic (insofar as they help improve national competitiveness), knowledge policy tries to govern (regulate, control, restrict, or even forbid) the production of knowledge.
Abstract:
The methods used by the UK Police to investigate complaints of rape have, unsurprisingly, come under much scrutiny in recent times, with a 2007 joint report on behalf of HM Crown Prosecution Service Inspectorate and HM Inspectorate of Constabulary concluding that there were many areas where improvements should be made. The research reported here forms part of a larger project which draws on various discourse analytical tools to identify the processes at work during police interviews with women reporting rape. Drawing on a corpus of video-recorded police interviews with women reporting rape, this study applies a two-pronged analysis to reveal the ideologies present in these interactions. Firstly, an analysis of the discourse markers ‘well’ and ‘so’ demonstrates the control exerted on the interaction by interviewing officers, as they attach importance to certain facts while omitting much of the information provided by the victim. Secondly, the interpretative repertoires relied upon by officers to ‘make sense’ of victims’ accounts are subjected to scrutiny. As well as providing micro-level analyses which demonstrate processes of interactional control at the local level, the findings of these analyses can be shown to relate to a wider context – specifically, prevailing ideologies about sexual violence in society as a whole.
Abstract:
The relationships between locus of control, the quality of exchanges between subordinates and leaders (LMX), and a variety of work-related reactions (intrinsic/extrinsic job satisfaction, work-related well-being, and organizational commitment) are examined. It was predicted that people with an internal locus of control develop better-quality relations with their manager and that this, in turn, results in more favourable work-related reactions. Results from two different samples (N=404 and N=51) supported this prediction, and also showed that LMX either fully or partially mediated the relationship between locus of control and all the work-related reactions.
Abstract:
Different types of numerical data can be collected in a scientific investigation and the choice of statistical analysis will often depend on the distribution of the data. A basic distinction between variables is whether they are ‘parametric’ or ‘non-parametric’. When a variable is parametric, the data come from a symmetrically shaped distribution known as the ‘Gaussian’ or ‘normal distribution’ whereas non-parametric variables may have a distribution which deviates markedly in shape from normal. This article describes several aspects of the problem of non-normality including: (1) how to test for two common types of deviation from a normal distribution, viz., ‘skew’ and ‘kurtosis’, (2) how to fit the normal distribution to a sample of data, (3) the transformation of non-normally distributed data and scores, and (4) commonly used ‘non-parametric’ statistics which can be used in a variety of circumstances.
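A minimal sketch of points (1) and (3) above: computing sample skewness and excess kurtosis, and normalising a positively skewed sample with a log transformation. The log-normal sample here is illustrative:

```python
import math
import random

def moments(xs):
    """Sample skewness and excess kurtosis, the two deviations from
    normality ('skew' and 'kurtosis') discussed in the article."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    skew = (sum((x - m) ** 3 for x in xs) / n) / s2 ** 1.5
    kurt = (sum((x - m) ** 4 for x in xs) / n) / s2 ** 2 - 3.0
    return skew, kurt

rng = random.Random(1)
# a positively skewed (log-normal) sample, and its log transform
raw = [math.exp(rng.gauss(0.0, 0.5)) for _ in range(5000)]
logged = [math.log(x) for x in raw]
```

The raw sample shows clear positive skew, while the log-transformed scores have skewness near zero, illustrating why a transformation can make a non-normal variable suitable for parametric analysis.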
Abstract:
Background Atrial fibrillation (AF) patients with a high risk of stroke are recommended anticoagulation with warfarin. However, the benefit of warfarin is dependent upon the time spent within the target therapeutic range (TTR) of the international normalised ratio (INR) (2.0 to 3.0). AF patients possess limited knowledge of their disease and warfarin treatment, and this can impact on INR control. Education can improve patients' understanding of warfarin therapy and the factors which affect INR control. Methods/Design This randomised controlled trial of an intensive educational intervention will consist of group sessions (between 2 and 8 patients) containing standardised information about the risks and benefits associated with oral anticoagulation (OAC) therapy, lifestyle interactions, and the importance of monitoring and control of the INR. Information will be presented within an 'expert-patient' focussed DVD, a revised educational booklet and patient worksheets. 200 warfarin-naïve patients who are eligible for warfarin will be randomised to either the intervention or usual care group. All patients must have ECG-documented AF and be eligible for warfarin (according to the NICE AF guidelines). Exclusion criteria include: age < 18 years, contraindication(s) to warfarin, history of warfarin use, valvular heart disease, cognitive impairment, inability to speak/read English, and disease likely to cause death within 12 months. The primary endpoint is time spent in the TTR. Secondary endpoints include measures of quality of life (AF-QoL-18), anxiety and depression (HADS), knowledge of AF and anticoagulation, beliefs about medication (BMQ) and illness representations (IPQ-R). Clinical outcomes, including bleeding, stroke and interruption to anticoagulation, will be recorded. All outcome measures will be assessed at baseline and 1, 2, 6 and 12 months post-intervention.
Discussion More data are needed on the clinical benefit of educational interventions for AF patients receiving warfarin. Trial registration ISRCTN93952605
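The primary endpoint, time in therapeutic range, is conventionally computed with the Rosendaal linear-interpolation method; the sketch below assumes daily interpolation between consecutive INR tests (the protocol above does not specify the exact computation):

```python
def ttr_rosendaal(days, inrs, low=2.0, high=3.0):
    """Percentage of time the INR stays within [low, high], using
    Rosendaal linear interpolation between consecutive test results.
    days: test days as integers; inrs: INR values measured on those days."""
    in_range = total = 0.0
    pairs = list(zip(days, inrs))
    for (d0, i0), (d1, i1) in zip(pairs, pairs[1:]):
        span = d1 - d0
        total += span
        for day in range(span):
            # assume the INR changes linearly between the two tests
            inr = i0 + (i1 - i0) * day / span
            if low <= inr <= high:
                in_range += 1
    return 100.0 * in_range / total
```

For instance, a patient whose INR rises linearly from 1.0 to 3.0 over ten days spends half that interval below the 2.0 threshold, giving a TTR of 50%.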
Abstract:
In construction projects, the aim of project control is to ensure projects finish on time, within budget, and achieve other project objectives. During the last few decades, numerous project control methods have been developed and adopted by project managers in practice. However, many existing methods focus on describing what the processes and tasks of project control are, not on how these tasks should be conducted. There is also a potential gap between the principles that underlie these methods and project control practice. As a result, time and cost overruns are still common in construction projects, partly attributable to deficiencies of existing project control methods and difficulties in implementing them. This paper describes a new project cost and time control model, the project control and inhibiting factors management (PCIM) model, developed through a study involving extensive interaction with construction practitioners in the UK, which better reflects the real needs of project managers. A good practice checklist is also developed to facilitate implementation of the model. © 2013 American Society of Civil Engineers.
Abstract:
We present a novel distributed sensor that utilizes the temperature and strain dependence of the frequency at which the Brillouin loss is maximized in the interaction between a cw laser and a pulsed laser. With a 22-km sensing length, a strain resolution of 20 µε and a temperature resolution of 2°C have been achieved with a spatial resolution of 5 m.
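A common way to separate the two dependences is to measure two Brillouin observables with different strain and temperature sensitivities and invert a 2×2 sensitivity matrix. The coefficients below are illustrative assumptions (of the order of typical silica-fibre values), not the paper's calibration:

```python
# Assumed sensitivity matrix: row 1 for the Brillouin frequency shift,
# row 2 for a hypothetical second observable with different sensitivities.
C = [[0.05, 1.1],     # [MHz per microstrain, MHz per degree C]  (assumed)
     [-0.08, 0.36]]   # (assumed)

def solve_strain_temp(d1, d2):
    """Invert [d1, d2] = C @ [strain, dT] for strain and temperature
    change, via the explicit 2x2 inverse."""
    det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
    strain = (d1 * C[1][1] - d2 * C[0][1]) / det
    dtemp = (C[0][0] * d2 - C[1][0] * d1) / det
    return strain, dtemp
```

Feeding the forward model's outputs back through the inverse recovers the original strain and temperature change, confirming the discrimination is well posed whenever the two rows are not proportional.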