50 results for distributed control and estimation
Abstract:
Annual losses of cocoa to mirids in Ghana are significant, so accurate timing of insecticide application is critical to enhancing yields. However, cocoa farmers often lack information on the expected mirid population for each season that would enable them to optimise pesticide use. This study assessed farmers’ knowledge and perceptions of mirid control and their willingness to use forecasting systems informing them of expected mirid peaks and the timing of pesticide application. A total of 280 farmers were interviewed in the Eastern and Ashanti regions of Ghana using a structured questionnaire with open- and closed-ended questions. Most farmers (87%) considered mirids the most important insect pest of cocoa, with 47% of them attributing 30-40% annual crop loss to mirid damage. There was wide variation in the timing of insecticide application because farmers used different sources of information to decide when to start applying. The majority of farmers (56%) did not have access to information on the type, frequency and timing of insecticides to use, although respondents who were members of farmer groups had better access to such information. Extension officers were the preferred channel for information transfer, with 72% of farmers preferring them to other available methods of communication. Almost all respondents (99%) saw the need for a comprehensive forecasting system to help farmers manage cocoa mirids. The importance of accurately timed mirid control based on forecast information supplied to farmer groups and extension officers is discussed.
Abstract:
Today, transparency is hailed as a key to good governance and economic efficiency, with nation states implementing new laws to give citizens access to information. It is therefore paradoxical that, as shown by a series of crises and scandals, modern governments and international agencies have frequently paid only lip-service to such ideals. Since Jeremy Bentham first introduced the concept of transparency into the language in 1789, few societal debates have sparked so much interest within the academic community, across a variety of disciplines and using different approaches and methodologies. Within these current debates, however, one fact is striking: the lack of historical reflection on the development of the concept of transparency, both as a principle and as applied in practice, prior to its inception. Accordingly, the aim of this special issue is to contribute to historicising the ways in which communication and control over fiscal policy and state finances operated in early modern European polities.
Abstract:
An input variable selection procedure is introduced for the identification and construction of multi-input multi-output (MIMO) neurofuzzy operating-point-dependent models. The algorithm extends a forward modified Gram-Schmidt orthogonal least squares procedure for linear model structures to nonlinear system modeling by incorporating piecewise locally linear model fitting. The proposed input node selection procedure effectively tackles the curse of dimensionality associated with lattice-based modeling algorithms such as radial basis function neurofuzzy networks, enabling the resulting neurofuzzy operating-point-dependent model to be widely applied in control and estimation. Numerical examples demonstrate the effectiveness of the proposed construction algorithm.
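To make the selection mechanism concrete, here is a minimal sketch of forward selection by the error reduction ratio (ERR) inside a modified Gram-Schmidt orthogonal least squares loop. The function name, the plain column regressors and the fixed selection count are assumptions for illustration; the paper's MIMO neurofuzzy construction is more elaborate.

```python
import numpy as np

def forward_ols_select(X, y, n_select):
    """Greedily pick candidate input columns by error reduction ratio,
    deflating the remaining columns against each chosen one with a
    modified Gram-Schmidt step (illustrative sketch only)."""
    X = X.astype(float).copy()
    y = y.astype(float)
    selected, errs = [], []
    for _ in range(n_select):
        num = (X.T @ y) ** 2                         # energy of y along each column
        den = np.einsum('ij,ij->j', X, X) * (y @ y)  # column norm^2 times output energy
        err = np.where(den > 1e-12, num / den, 0.0)  # error reduction ratio
        k = int(np.argmax(err))
        selected.append(k)
        errs.append(float(err[k]))
        w = X[:, k] / np.linalg.norm(X[:, k])
        X -= np.outer(w, w @ X)                      # orthogonalise the rest against column k
        X[:, k] = 0.0                                # spent column cannot be re-picked
    return selected, errs
```

In practice selection usually stops when the cumulative ERR stops improving rather than at a fixed count.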
Abstract:
Associative memory networks such as radial basis function, neurofuzzy and fuzzy logic networks used for modelling nonlinear processes suffer from the curse of dimensionality (COD): as the input dimension increases, the parameterization, computational cost, training data requirements, etc. increase exponentially. Here a new algorithm is introduced for the construction of optimal piecewise locally linear models over a Delaunay partition of the input space, which overcomes the COD and yields locally linear models directly amenable to linear control and estimation algorithms. Training is configured as a new mixture-of-experts network with a fast decision rule derived using convex set theory. A very fast simulated reannealing (VFSR) algorithm is used to search for a globally optimal Delaunay input space partition. A benchmark nonlinear time series is used to demonstrate the new approach.
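As a rough illustration of the objective such a search optimises, the sketch below scores one candidate Delaunay partition by fitting an independent affine model on each simplex and summing the squared residuals. The use of `scipy.spatial.Delaunay`, the vertex parameterisation and the per-cell least-squares fits are assumptions for illustration; a VFSR-style annealer would then perturb the vertices to minimise the returned value.

```python
import numpy as np
from scipy.spatial import Delaunay

def piecewise_linear_sse(vertices, X, y):
    """Total squared error of independent affine fits on each simplex of
    the Delaunay triangulation of `vertices` (illustrative objective)."""
    tri = Delaunay(vertices)
    cells = tri.find_simplex(X)              # simplex index per sample, -1 if outside hull
    sse = 0.0
    for c in np.unique(cells[cells >= 0]):
        m = cells == c
        A = np.c_[X[m], np.ones(m.sum())]    # local affine model: [x, 1] @ coef
        coef, *_ = np.linalg.lstsq(A, y[m], rcond=None)
        r = y[m] - A @ coef
        sse += float(r @ r)
    return sse
```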
Abstract:
Where users interact in a distributed virtual environment, the actions of each user must be observed by peers with sufficient consistency, and within a limited delay, so as not to be detrimental to the interaction. The consistency control problem may be split into three parts: update control; consistent enactment and evolution of events; and causal consistency. The delay in the presentation of events, termed latency, depends primarily on the network propagation delay and on the consistency control algorithms. The latency induced by the consistency control algorithm, in particular by causal ordering, is proportional to the number of participants. This paper describes how the effect of network delays may be reduced and introduces a scalable solution that provides sufficient consistency control while minimising its effect on latency. The principles described have been developed at Reading over the past five years; similar principles are now emerging in the simulation community through the HLA standard. The paper validates the suggested principles within the schema of distributed simulation and virtual environments and compares and contrasts them with those described in the HLA definition documents.
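The participant-count dependence is easy to see in a textbook causal-ordering scheme such as vector-clock broadcast, sketched below. This is the standard algorithm the paper improves upon, not the paper's scalable solution, and the class and method names are invented for the example.

```python
class Peer:
    """Causal delivery via vector clocks: a message is held back until
    every update it causally depends on has been delivered."""

    def __init__(self, pid, n_peers):
        self.pid = pid
        self.clock = [0] * n_peers        # one entry per participant
        self.held = []                    # received but not yet deliverable

    def send(self, payload):
        self.clock[self.pid] += 1
        return (self.pid, list(self.clock), payload)

    def _deliverable(self, sender, stamp):
        return (stamp[sender] == self.clock[sender] + 1 and
                all(stamp[i] <= self.clock[i]
                    for i in range(len(stamp)) if i != sender))

    def receive(self, msg):
        self.held.append(msg)
        delivered, progress = [], True
        while progress:                   # deliver everything now unblocked
            progress = False
            for m in list(self.held):
                sender, stamp, payload = m
                if self._deliverable(sender, stamp):
                    self.clock[sender] = stamp[sender]
                    delivered.append(payload)
                    self.held.remove(m)
                    progress = True
        return delivered
```

Every update carries one clock entry per participant and may be held back waiting on any of them, which is exactly why the induced latency grows with the number of participants.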
Abstract:
As part of a large European coastal operational oceanography project (ECOOP), we have developed a web portal for the display and comparison of model and in situ marine data. The distributed model and in situ datasets are accessed via an Open Geospatial Consortium Web Map Service (WMS) and Web Feature Service (WFS) respectively. These services were developed independently and readily integrated for the purposes of the ECOOP project, illustrating the ease of interoperability that results from adherence to international standards. The key feature of the portal is the ability to display co-plotted timeseries of the in situ and model data and to quantify the misfits between the two. By using standards-based web technology we allow the user to quickly and easily explore over twenty model data feeds and compare these with dozens of in situ data feeds, without being concerned with the low-level details of differing file formats or the physical location of the data. Scientific and operational benefits of this work include model validation, quality control of observations, data assimilation and decision support in near real time. In these areas it is essential to be able to bring together different data streams from often disparate locations.
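The misfit quantification reduces, in essence, to co-locating the two timeseries and computing summary statistics. The sketch below shows one plausible form of that calculation; the function name and the choice of linear interpolation and bias/RMSE statistics are assumptions, not the portal's documented behaviour.

```python
import numpy as np

def misfit_stats(t_obs, y_obs, t_model, y_model):
    """Interpolate a model timeseries onto observation times (t_model
    must be increasing), then quantify the model-observation misfit."""
    y_at_obs = np.interp(t_obs, t_model, y_model)
    resid = y_at_obs - y_obs
    return {"bias": float(resid.mean()),
            "rmse": float(np.sqrt((resid ** 2).mean()))}
```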
Abstract:
The development of large scale virtual reality and simulation systems has been mostly driven by the DIS and HLA standards community. A number of issues are coming to light about the applicability of these standards, in their present state, to the support of general multi-user VR systems. This paper pinpoints four issues that must be addressed before large scale virtual reality systems become accessible to a larger commercial and public domain: reduction of the effects of network delays; scalable causal event delivery; update control; and scalable reliable communication. Each of these issues is tackled through a common theme: combining wall-clock and causal time-related entity behaviour, knowledge of network delays, and prediction of entity behaviour, which together overcome many of the effects of network delay.
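Of these themes, prediction of entity behaviour is most often realised by dead reckoning, as in the DIS standard. The sketch below shows the first-order version with an owner-side error threshold; the names and the threshold value are invented for illustration.

```python
def dead_reckon(last_pos, last_vel, t_update, t_now):
    """Peers extrapolate an entity from its last broadcast position and
    velocity, masking network delay between updates."""
    dt = t_now - t_update
    return [p + v * dt for p, v in zip(last_pos, last_vel)]

def owner_should_rebroadcast(true_pos, predicted_pos, threshold=0.1):
    """The owning host runs the same prediction and re-broadcasts only
    when the true state drifts past the threshold, saving bandwidth."""
    err2 = sum((a - b) ** 2 for a, b in zip(true_pos, predicted_pos))
    return err2 > threshold ** 2
```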
Abstract:
In Uganda, control of vector-borne diseases consists mainly of vector control and chemotherapy. There have been reports that acaricides are being misused in pastoralist systems in Uganda, a concern stemming from the belief among scientists that intensive acaricide application is uneconomical and unsustainable, particularly for indigenous cattle. The objective of this study was to investigate the strategies, rationale and effectiveness of vector-borne disease control by pastoralists. To carry out these investigations systematically, a combination of qualitative and quantitative research methods was used in both the collection and the analysis of data. Cattle keepers were found to control tick-borne diseases (TBDs) mainly through spraying, in contrast with the control of trypanosomosis, for which the main method was chemotherapy. The majority of herders applied acaricides weekly and used an acaricide of lower strength than recommended by the manufacturers. They used very little acaricide wash, and spraying was preferred to dipping. Furthermore, pastoralists either treated sick animals themselves or did nothing at all, rather than calling on veterinary personnel. Oxytetracycline (OTC) was the drug commonly used in the treatment of TBDs. Nevertheless, although pastoralists may not have been following recommended practices in their control of ticks and tick-borne diseases, they were neither wasteful nor uneconomical, and their methods appeared to be effective. Trypanosomosis was not a problem in either Sembabule or Mbarara district. Those who used trypanocides were found to use more drugs than necessary.
Abstract:
This paper presents the Gentle/G integrated system for reach-and-grasp therapy retraining following brain injury. The design, control and integration of an experimental grasp assistance unit are described for use in robot-assisted stroke rehabilitation. The grasp assist unit is intended to work with the hardware and software of the Gentle/S robot, although the hardware could be adapted to other rehabilitation applications. When used with the Gentle/S robot, a total of 6 active and 3 passive degrees of freedom are available to provide active, active-assist or passive grasp retraining in combination with reaching movements in a reach-grasp-transfer-release sequence.
Abstract:
Tycho was conceived in 2003 in response to a need by the GridRM [1] resource-monitoring project for a "light-weight", scalable and easy-to-use wide-area distributed registry and messaging system. Since Tycho's first release in 2006, a number of modifications have been made to make the system easier to use and more flexible. Since its inception, Tycho has been utilised across a number of application domains, including wide-area resource monitoring, distributed queries across archival databases, providing services for the nodes of a Cray supercomputer, and transferring multi-terabyte scientific datasets across the Internet. This paper provides an overview of the initial Tycho system, describes a number of applications that utilise Tycho and a number of new utilities, and discusses how the Tycho infrastructure has evolved in response to experience of building applications with it.
Abstract:
This paper considers left-invariant control systems defined on the Lie groups SU(2) and SO(3). Such systems have a number of applications in both classical and quantum control problems. The purpose of this paper is two-fold. Firstly, the optimal control problem for a system varying on these Lie groups, with a cost that is quadratic in the control, is lifted to the associated Hamiltonian vector fields through the Maximum Principle of optimal control and solved explicitly. Secondly, the control systems are integrated down to the level of the group to give the solutions for the optimal paths corresponding to the optimal controls. In addition, it is shown that integrating these equations on the Lie algebra su(2) gives simpler solutions than integrating them on the Lie algebra so(3).
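For orientation, the standard form of this lift can be written down directly. The display below uses generic notation (a basis A_i of the Lie algebra, cost weights c_i, extremal momenta M_i) that is assumed for illustration and need not match the paper's.

```latex
% Left-invariant system on G = SU(2) or SO(3) with quadratic cost:
\dot{g} = g\,(u_1 A_1 + u_2 A_2 + u_3 A_3), \qquad
J = \tfrac{1}{2}\int_0^T \bigl(c_1 u_1^2 + c_2 u_2^2 + c_3 u_3^2\bigr)\,dt .
% The Maximum Principle lifts the problem to the quadratic Hamiltonian
H = \tfrac{1}{2}\Bigl(\tfrac{M_1^2}{c_1} + \tfrac{M_2^2}{c_2} + \tfrac{M_3^2}{c_3}\Bigr),
% whose Lie-Poisson equations on the dual of the Lie algebra are of Euler type:
\dot{M}_1 = \Bigl(\tfrac{1}{c_3} - \tfrac{1}{c_2}\Bigr) M_2 M_3,
\quad \text{and cyclically for } M_2,\ M_3 .
```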
Abstract:
A novel optimising controller is designed that leads a slow process from a sub-optimal operating condition to the steady-state optimum in a continuous way, based on dynamic information. Using standard results from optimisation theory and discrete optimal control, the solution of a steady-state optimisation problem is obtained by solving a receding-horizon optimal control problem that uses derivative and state information from the plant via a shadow model and a state-space identifier. The paper analyses the steady-state optimality of the procedure, develops algorithms with and without control rate constraints, and applies the procedure to a high-fidelity simulation study of distillation column optimisation.
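As a hedged sketch of the outer loop, the fragment below estimates the steady-state cost gradient from perturbed plant evaluations (standing in for the paper's shadow model and state-space identifier) and takes a rate-limited step each iteration. All names, the finite-difference gradient and the step sizes are assumptions for illustration.

```python
import numpy as np

def run_optimising_controller(plant_cost, u0, n_iter=50,
                              rate_limit=0.05, step=0.5, h=1e-3):
    """Lead the input u toward the steady-state optimum by repeated
    gradient steps, each clipped to honour a control rate constraint."""
    u = np.asarray(u0, dtype=float)
    for _ in range(n_iter):
        # Central-difference estimate of the steady-state cost gradient.
        g = np.array([(plant_cost(u + h * e) - plant_cost(u - h * e)) / (2 * h)
                      for e in np.eye(u.size)])
        du = np.clip(-step * g, -rate_limit, rate_limit)  # rate constraint
        u = u + du
    return u
```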
Abstract:
This paper discusses how numerical gradient estimation methods may be used to reduce the computational demands of a class of multidimensional clustering algorithms. The study is motivated by the recognition that several current point-density-based cluster identification algorithms could benefit from a reduction in computational demand if approximate a priori estimates of the cluster centres present in a given data set could be supplied as starting conditions. Here, the algorithm shown to benefit from the technique is the Mean-Tracking (M-T) cluster algorithm, but the results obtained from the gradient estimation approach may also be applied to other clustering algorithms and their related disciplines.
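As a toy version of that idea, the sketch below drifts a few start points up a numerically estimated density gradient (a mean-shift-style kernel update) to produce approximate centre estimates with which a cluster algorithm such as Mean-Tracking could be seeded. The Gaussian kernel, bandwidth and update rule are illustrative assumptions, not taken from the M-T algorithm itself.

```python
import numpy as np

def density_gradient_seeds(X, n_seeds=5, bandwidth=1.0, n_steps=10):
    """Move randomly chosen start points toward local density peaks by
    following a kernel-weighted mean, i.e. an estimated density gradient."""
    rng = np.random.default_rng(0)
    seeds = X[rng.choice(len(X), n_seeds, replace=False)].astype(float)
    for _ in range(n_steps):
        for i, s in enumerate(seeds):
            w = np.exp(-np.sum((X - s) ** 2, axis=1) / (2 * bandwidth ** 2))
            seeds[i] = (w[:, None] * X).sum(axis=0) / w.sum()  # shift toward density peak
    return seeds
```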