924 results for Adaptive Control Design


Relevance:

90.00%

Publisher:

Abstract:

When proposing primary control (PC; changing the world to fit the self) / secondary control (SC; changing the self to fit the world) theory, Weisz et al. (1984) argued for the importance of the "serenity to accept the things I cannot change, the courage to change the things I can" (p. 967), and the wisdom to choose the control strategy that fits the context. Although the dual-process theory of control has generated hundreds of empirical studies, most of them focused on the dichotomy of PC and SC, and none tapped into the critical concept: individuals' ability to know when to use which. This project addressed this issue by using scenario questions to study the impact of situationally adaptive control strategies on youth well-being. To understand the antecedents of youths' preference for PC or SC, we also connected PC/SC theory with Dweck's implicit theory about the changeability of the world. We hypothesized that youths' beliefs about the world's changeability affect how difficult it is for them to choose the situationally adaptive control orientation, which in turn affects their well-being. The study included adolescents and emerging adults between 18 and 28 years of age (M = 20.87 years) from the US (n = 98), China (n = 100), and Switzerland (n = 103). Participants answered a questionnaire including a measure of implicit theories about the fixedness of the external world, a scenario-based measure of control orientation, and several measures of well-being. Preliminary analyses of the scenario-based control orientation measure showed striking cross-cultural similarity in preferred control responses: for three of the six scenarios, primary control was the predominantly chosen response in all cultures, while for the other three, secondary control was the predominantly chosen response. This suggests that youths across cultures are aware that some situations call for primary control while others demand secondary control. We considered the control strategy winning the majority of the votes to be the situationally adaptive strategy. A multi-group structural equation mediation model, with the extent of belief in a fixed world as the independent variable, the difficulties of carrying out the adaptive versus the non-adaptive control response as two mediating variables, and latent well-being as the dependent variable, showed a cross-culturally similar pattern of effects: belief in a fixed world was significantly related to greater difficulty in carrying out both the normative and the non-normative control response, but only the difficulty of carrying out the normative response (be it primary control in situations where primary control is normative, or secondary control in situations where secondary control is normative) was significantly related to lower reported well-being, while the difficulty of carrying out the non-normative response was unrelated to well-being. Whereas previous research focused on cross-cultural differences in the choice of PC or SC, this study sheds light on the universal necessity of applying the right kind of control to fit the situation.
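The hypothesized mediation structure can be written as a standard two-mediator path model; the following is a hedged rendering with illustrative symbols of our choosing (X for fixed-world belief, M1 and M2 for the difficulty of the normative and non-normative response, Y for latent well-being), not the authors' notation:

$$
M_1 = a_1 X + \varepsilon_1, \qquad
M_2 = a_2 X + \varepsilon_2, \qquad
Y = b_1 M_1 + b_2 M_2 + \varepsilon_3
$$

The reported pattern corresponds to significant paths $a_1$, $a_2$, and $b_1$, with $b_2$ not significantly different from zero.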

Relevance:

90.00%

Publisher:

Abstract:

My dissertation focuses mainly on Bayesian adaptive designs for phase I and phase II clinical trials. It covers three specific topics: (1) proposing a novel two-dimensional dose-finding algorithm for biological agents, (2) developing Bayesian adaptive screening designs to provide more efficient and ethical clinical trials, and (3) incorporating missing late-onset responses into early stopping decisions.

Treating patients with novel biological agents is becoming a leading trend in oncology. Unlike cytotoxic agents, for which toxicity and efficacy increase monotonically with dose, biological agents may exhibit non-monotonic dose-response relationships. Using a trial with two biological agents as an example, we propose a phase I/II trial design to identify the biologically optimal dose combination (BODC), defined as the dose combination of the two agents with the highest efficacy and tolerable toxicity. A change-point model is used to reflect the fact that the dose-toxicity surface of the combined agents may plateau at higher dose levels, and a flexible logistic model is proposed to accommodate a possibly non-monotonic dose-efficacy relationship. During the trial, we continuously update the posterior estimates of toxicity and efficacy and assign patients to the most appropriate dose combination. We propose a novel dose-finding algorithm to encourage sufficient exploration of untried dose combinations in the two-dimensional space. Extensive simulation studies show that the proposed design has desirable operating characteristics in identifying the BODC under various patterns of dose-toxicity and dose-efficacy relationships.

Trials of combination therapies for the treatment of cancer are playing an increasingly important role in the battle against this disease. To handle more efficiently the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design to select simultaneously among possible treatment combinations involving multiple agents. Our design formulates the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination is equated to a single hypothesis. During the trial, we use the current posterior probabilities of all hypotheses to adaptively allocate patients to treatment combinations. Simulation studies show that the proposed design substantially outperforms the conventional multi-arm balanced factorial trial design: it yields a significantly higher probability of selecting the best treatment, allocates substantially more patients to efficacious treatments, and provides higher power to identify the best treatment at the end of the trial. The design is most appropriate for trials that combine multiple agents and screen for the efficacious combinations to be investigated further.

Phase II studies are usually single-arm trials conducted to test the efficacy of experimental agents and to decide whether an agent is promising enough to be sent to a phase III trial. Interim monitoring is employed to stop a trial early for futility, to avoid assigning an unacceptable number of patients to inferior treatments. We propose a Bayesian single-arm phase II design with continuous monitoring for estimating the response rate of the experimental drug. To address the issue of late-onset responses, we use a piecewise exponential model to estimate the hazard function of the time-to-response data and handle the missing responses using a multiple imputation approach. We evaluate the operating characteristics of the proposed method through extensive simulation studies and show that it reduces the total trial duration and yields desirable operating characteristics for different physician-specified lower bounds of the response rate under different true response rates.
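As a hedged illustration of the continuous-monitoring idea in topic (3), the following minimal sketch assumes a Beta-Binomial model and a simple posterior futility rule; the prior, threshold, and lower bound p0 are illustrative placeholders, not the dissertation's actual design (which additionally imputes missing late-onset responses):

```python
from scipy.stats import beta

def futility_stop(n_resp, n_enrolled, p0=0.20, prior=(0.5, 0.5), threshold=0.05):
    """Stop early for futility if the posterior probability that the true
    response rate exceeds the physician-specified lower bound p0 is small."""
    a, b = prior
    posterior = beta(a + n_resp, b + n_enrolled - n_resp)  # conjugate update
    prob_promising = 1.0 - posterior.cdf(p0)               # P(p > p0 | data)
    return prob_promising < threshold

# Re-check the rule as each new patient becomes evaluable, e.g.:
print(futility_stop(2, 20))
```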

Relevance:

90.00%

Publisher:

Abstract:

This paper proposes a novel design for a reconfigurable humanoid robot head, based on the biological likeness of the human being, so that the humanoid robot can interact agreeably with people in various everyday tasks. The proposed humanoid head has a modular and adaptive structural design and is equipped with three main modules: the frame, the neck motion system, and the omnidirectional stereovision system. The omnidirectional stereovision module, the most distinctive contribution with regard to the computer vision systems implemented in earlier humanoids, opens new research possibilities for achieving human-like behaviour. A proposal for a real-time catadioptric stereovision system is presented, including the stereo geometry for rectifying the system configuration and estimating depth. The methodology for an initial approach to visual servoing tasks is divided into two phases: the first concerns the robust detection of moving objects, their depth estimation, and position calculation; the second concerns the development of attention-based control strategies. The perception capabilities provided allow the extraction of 3D information over a wide field of view in uncontrolled dynamic environments, and the results of this work are illustrated through a number of experiments.
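For orientation, depth estimation in any rectified stereo pair reduces to triangulation. The following minimal sketch assumes a rectified pinhole pair; the focal length, baseline, and disparity values are illustrative, and the paper's catadioptric geometry requires its own rectification before this step applies:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulated depth for a rectified stereo pair: Z = f * B / d.
    Larger disparity means the point is closer to the cameras."""
    d = np.asarray(disparity_px, dtype=float)
    return np.where(d > 0, focal_px * baseline_m / d, np.inf)

# Example: a 12 px disparity with a 700 px focal length and 0.10 m baseline.
print(depth_from_disparity(12, 700, 0.10))  # ~5.83 m
```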

Relevance:

90.00%

Publisher:

Abstract:

The rise of the "Internet of Things" (IoT) and its associated technologies has enabled their application in diverse domains, including the monitoring of forest ecosystems, disaster and emergency management, home automation, industrial automation, smart-city services, energy efficiency in buildings, intrusion detection, and the monitoring of body signals, among many others. The drawback of an IoT network is that, once deployed, it is left unattended: it is subject, among other things, to changing weather conditions and exposed to natural disasters, software or hardware failures, and malicious third-party attacks, so such networks can be considered failure-prone. The main requirement on the nodes of an IoT network is that they must be able to keep operating despite errors in the system itself. The ability of the network to recover from unexpected internal and external failures is what is currently known as the "resilience" of the network. Therefore, when designing and deploying IoT applications or services, the network is expected to be fault-tolerant and to be self-configuring, self-adaptive, and self-optimizing with respect to new conditions that may arise during its operation. This leads to the analysis of a fundamental problem in the study of IoT networks: the "connectivity" problem. A network is said to be connected if every pair of nodes in the network can find at least one communication path between them. However, the network can become disconnected for several reasons, such as battery depletion or the destruction of a node. It is therefore necessary to manage the resilience of the network in order to maintain connectivity between its nodes, so that each IoT node can provide continuous services to other nodes, other networks, or other services and applications. In this context, the main objective of this doctoral thesis is the study of the IoT connectivity problem, more specifically the development of models for the analysis and management of resilience, put into practice through wireless sensor networks (WSNs), in order to improve the fault tolerance of the nodes that make up the network. This challenge is addressed from two distinct angles: on the one hand, unlike other kinds of conventional device networks, the nodes of an IoT network are prone to losing their connection, because they are deployed in isolated environments or in environments with extreme conditions; on the other hand, the nodes are typically resources with low capabilities in terms of processing, storage, and battery, among others, which requires the design of their resilience management to be lightweight, distributed, and energy-efficient. Accordingly, this thesis develops self-adaptive techniques that allow an IoT network, from the perspective of its topology control, to be resilient to node failures. To this end, techniques based on fuzzy logic and on proportional-integral-derivative (PID) control are used to improve the connectivity of the network, bearing in mind that energy consumption must be preserved as much as possible.
Likewise, the control algorithm is required to be distributed, because centralized approaches are generally not feasible for large-scale deployments. This thesis involves several challenges concerning network connectivity, including: the creation and analysis of mathematical models describing the network, the proposal of a self-adaptive control system that responds to node failures, the optimization of the control-system parameters, validation through an implementation following a software engineering approach, and finally evaluation in a real application. Addressing these challenges, this work justifies, through mathematical analysis, the relationship between the "node degree" (defined as the number of nodes in the neighborhood of a given node) and the connectivity of the network, and proves the effectiveness of several types of controllers that adjust the transmission power of the network nodes in response to failures, taking energy consumption into account as part of the control objectives. The work also carries out an evaluation and comparison against other representative algorithms, showing that the developed approach tolerates more random node failures and is more energy-efficient. Additionally, the use of bio-inspired algorithms has enabled the optimization of the control parameters of large dynamic networks. Regarding the implementation in a real system, the proposals of this thesis have been integrated into an OSGi ("Open Services Gateway Initiative") programming model in order to create a self-adaptive middleware that improves resilience management, especially the runtime reconfiguration of software components when a failure has occurred. In conclusion, the results of this doctoral thesis contribute to theoretical research and to the practical application of resilient topology control in large distributed networks. The designs and algorithms presented can be seen as novel trials of some techniques for the coming era of the IoT. The main contributions of this thesis are summarized as follows: (1) Properties related to network connectivity have been analyzed mathematically; for example, how the probability of network connection varies as the communication range of the nodes is modified, and what minimum number of nodes must be added to a disconnected system to re-connect it. (2) Fuzzy-logic-based control systems have been proposed to attain the desired node degree while maintaining full network connectivity. Different types of fuzzy-logic controllers have been evaluated through simulations, and the results have been compared with other representative algorithms. (3) The double-loop control system has been investigated further, with a simpler and more applicable approach, and its control parameters have been optimized using heuristic algorithms such as the Cross-Entropy method (CE), Particle Swarm Optimization (PSO), and Differential Evolution (DE).
(4) Most of the designs presented here have been evaluated by simulation; in addition, part of the work has been implemented and validated in a real application, combining self-adaptive software techniques such as those of a service-oriented architecture (SOA).

ABSTRACT

The advent of the Internet of Things (IoT) enables a tremendous number of applications, such as forest monitoring, disaster management, home automation, factory automation, and smart cities. However, various kinds of unexpected disturbances may cause node failure in the IoT, for example battery depletion, software/hardware malfunctions, and malicious attacks, so the IoT can be considered prone to failure. The ability of the network to recover from unexpected internal and external failures is known as the "resilience" of the network. Resilience usually serves as an important non-functional requirement when designing the IoT, and can be further broken down into "self-*" properties, such as self-adaptation, self-healing, self-configuration, and self-optimization. One of the consequences that node failure brings to the IoT is that some nodes may be disconnected from others, so that they are no longer capable of providing continuous services to other nodes, networks, and applications. In this sense, the main objective of this dissertation focuses on the IoT connectivity problem. A network is regarded as connected if any pair of distinct nodes can communicate with each other, either directly or via a limited number of intermediate nodes. More specifically, this thesis focuses on the development of models for the analysis and management of resilience, implemented through wireless sensor networks (WSNs), which is a challenging task. On the one hand, unlike other conventional network devices, nodes in the IoT are more likely to be disconnected from each other due to their deployment in hostile or isolated environments. On the other hand, nodes are resource-constrained in terms of processing capability, storage, and battery capacity, which requires the design of resilience management for the IoT to be lightweight, distributed, and energy-efficient. In this context, the thesis presents self-adaptive techniques for the IoT, with the aim of making it resilient against node failures from the network topology control point of view. Fuzzy-logic and proportional-integral-derivative (PID) control techniques are leveraged to improve the network connectivity of the IoT in response to node failures, while taking into consideration that energy consumption must be preserved as much as possible. The control algorithm itself is designed to be distributed, because centralized approaches are usually not feasible in large-scale IoT deployments. The thesis addresses various aspects concerning network connectivity, including: creation and analysis of mathematical models describing the network, the proposal of self-adaptive control systems responding to node failures, control-system parameter optimization, implementation following a software engineering approach, and evaluation in a real application. The thesis also justifies, through mathematical analysis, the relation between the "node degree" (the number of neighbors of a node) and network connectivity, and proves the effectiveness of various types of controllers that adjust the transmission power of IoT nodes in response to node failures.
The controllers also take energy consumption into consideration as part of the control goals. An evaluation is performed and a comparison is made with other representative algorithms; the simulation results show that the proposals in this thesis can tolerate more random node failures and save more energy than those representative algorithms. Additionally, the simulations demonstrate that the use of bio-inspired algorithms allows the parameters of the controller to be optimized. With respect to the implementation in a real system, the OSGi (Open Services Gateway Initiative) programming model is integrated with the proposals in order to create a self-adaptive middleware, in particular one that reconfigures software components at runtime when failures occur. The outcomes of this thesis contribute to theoretical research on, and practical applications of, resilient topology control for large distributed networks. The presented controller designs and optimization algorithms can be viewed as novel trials of control and optimization techniques for the coming era of the IoT. The contributions of this thesis can be summarized as follows: (1) The fault-tolerance probability of a large-scale stochastic network is analyzed mathematically. It is studied how the probability of network connectivity depends on the communication range of the nodes, and what minimum number of neighbors must be added to re-connect the network. (2) A fuzzy-logic control system is proposed, which attains the desired node degree and in turn maintains network connectivity when the network is subject to node failures. Different types of fuzzy-logic controllers are evaluated by simulations, and the results demonstrate the improvement in fault-tolerance capability compared with other representative algorithms. (3) A simpler but more applicable approach, the two-loop control system, is investigated further, and its control parameters are optimized using heuristic algorithms such as Cross-Entropy (CE), Particle Swarm Optimization (PSO), and Differential Evolution (DE). (4) Most of the designs are evaluated by means of simulations, but part of the proposals are implemented and tested in a real-world application by combining the self-adaptive software techniques and the control algorithms presented in this thesis.
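A minimal sketch of the degree-based topology control idea described above, assuming each node runs a local PID loop that nudges its transmission power toward a desired node degree; the gains, power limits, and degree values are illustrative placeholders, not the thesis's tuned design:

```python
class DegreePIDController:
    """Per-node PID loop: adjust transmission power so that the measured
    node degree (number of neighbors) tracks a desired degree."""

    def __init__(self, kp=0.5, ki=0.1, kd=0.05, p_min=-10.0, p_max=10.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.p_min, self.p_max = p_min, p_max  # dBm bounds (illustrative)
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, desired_degree, measured_degree, power, dt=1.0):
        error = desired_degree - measured_degree
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        power += self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(max(power, self.p_min), self.p_max)  # saturate actuator

# Each node runs its own instance: fully distributed, no global state.
ctrl = DegreePIDController()
power = 3.0
for degree in [6, 4, 3, 5]:  # neighbor counts observed after failures
    power = ctrl.update(desired_degree=6, measured_degree=degree, power=power)
```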

Relevance:

90.00%

Publisher:

Abstract:

Emotion is generally argued to be an influence on the behavior of living systems, largely concerning flexibility and adaptivity. The way in which living systems act in response to particular situations in the environment has revealed the decisive importance of this feature for the success of behavior, and this source of inspiration has influenced the way artificial systems are conceived. Over the last decades, artificial systems have undergone such an evolution that more of them are integrated into our daily lives every day. They have grown in complexity, and the consequence is an increased demand for systems that ensure resilience, robustness, availability, security, or safety, among others: all questions that raise fundamental challenges in control design. This thesis has been developed within the framework of the Autonomous Systems project (the ASys Project). Short-term objectives of immediate application focus on designing improved systems and bringing intelligence into control strategies; beyond this, the long-term objectives underlying the ASys Project concentrate on higher-order capabilities such as cognition, awareness, and autonomy. This thesis is placed within the general fields of engineering and emotion science, and provides a theoretical foundation for engineering and designing computational emotion for artificial systems. The starting question that grounds this thesis concerns the problem of emotion-based autonomy, and how to feed systems back with valuable meaning constitutes the general objective. Both the starting question and the general objective have underlain the study of emotion: its influence on system behavior, the key foundations that justify this feature in living systems, how emotion is integrated within normal operation, and how this entire problem of emotion can be explained in artificial systems. Assuming essential differences in structure, purpose, and operation between living and artificial systems, the essential motivation has been to explore what emotion solves in nature and then to analyze analogies for man-made systems. This work provides a reference model in which a collection of entities, relationships, models, functions, and informational artifacts interact to provide the system with non-explicit knowledge in the form of emotion-like relevances. The solution aims to provide a reference model under which to design solutions for emotional operation that are related to the real needs of artificial systems. The proposal consists of a multi-purpose architecture that implements two broad modules in order to attend to: (a) the range of processes related to environmental affectation, and (b) the range of processes related to emotion-like perception and the higher levels of reasoning. This has required an intense and critical analysis, beyond the state of the art, of the most relevant theories of emotion and of technical systems, in order to obtain the support required for the foundations that sustain each model. The problem has been interpreted and is described on the basis of AGSys, an agent assumed to have the minimum rationality needed to perform emotional assessment. AGSys is a conceptualization of a model-based cognitive agent that embodies an inner agent, ESys, which is responsible for performing the emotional operation inside AGSys.
The solution consists of multiple computational modules working in federation, aimed at forming a mutual feedback loop between AGSys and ESys. Throughout this solution, the environment and the effects that might influence the system are described as different problems. While AGSys operates as an ordinary system within the external environment, ESys is designed to operate within a conceptualized inner environment, built from the relevances that may occur inside AGSys in its interaction with the external environment. This allows high-quality separate reasoning about the mission goals defined in AGSys and the emotional goals defined in ESys, and thereby provides a possible path for high-level reasoning under the influence of goal congruence. The high-level reasoning model uses knowledge about the stability of emotional goals, opening new directions in which mission goals can be assessed in light of the situational state of this stability. This high-level reasoning is grounded in the work of MEP, a model of emotion perception conceived as an analogy of a well-known theory in emotion science. The operation of this model is described as a recursive process labeled the R-Loop, together with a system of emotional goals that are treated as individual agents. In this way, AGSys integrates knowledge concerning the relation between a perceived object and the effect that this perception induces on the situational state of the emotional goals. This knowledge enables a higher-order system of information that sustains high-level reasoning; the extent to which this reasoning might be pursued is only delineated here and left as future work. This thesis has drawn on a wide range of fields of knowledge, structured around two main objectives: (a) psychology, cognitive science, neurology, and the biological sciences, in order to understand the problem of emotional phenomena; and (b) a large number of computer science branches, such as Autonomic Computing (AC), self-adaptive software, self-X systems, Model Integrated Computing (MIC), and the models@runtime paradigm, in order to obtain tools for designing each part of the solution. The final approach is based on the entire body of acquired knowledge and is described within the fields of Artificial Intelligence and Model-Based Systems (MBS), with additional mathematical formalizations providing precise understanding where required. This approach describes a reference model for feeding systems back with valuable meaning, allowing reasoning with regard to (a) the relationship between the environment and the relevance of its effects on the system, and (b) dynamic evaluations of the inner situational state of the system as a result of those effects. This reasoning provides a framework of distinguishable states of AGSys derived from its own circumstances, which can be regarded as artificial emotion.
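A highly schematic sketch of the mutual feedback loop between the outer agent and the inner emotional agent described above; all class names, method names, and the toy appraisal rule are our illustrative inventions, not the thesis's actual AGSys/ESys design:

```python
class ESys:
    """Inner emotional agent: appraises the relevances arising inside AGSys."""
    def appraise(self, relevances):
        # Toy appraisal: more goal-threatening relevances -> lower score.
        return -sum(r["threat"] for r in relevances) / max(len(relevances), 1)

class AGSys:
    """Outer model-based cognitive agent embedding an inner ESys."""
    def __init__(self):
        self.esys = ESys()
        self.relevances = []  # the "inner environment" built from interaction

    def perceive(self, event):
        # Map an external event to an inner relevance (illustrative mapping).
        self.relevances.append({"threat": event.get("threat", 0.0)})

    def step(self, event):
        self.perceive(event)
        emotional_state = self.esys.appraise(self.relevances)
        # Mission-goal reasoning modulated by emotional-goal stability.
        return "replan" if emotional_state < -0.5 else "proceed"

agent = AGSys()
print(agent.step({"threat": 0.9}))  # -> "replan"
```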

Relevance:

90.00%

Publisher:

Abstract:

The subject of this thesis is the real-time implementation of algebraic derivative estimators as observers in the nonlinear control of magnetic levitation systems. These estimators are based on operational calculus and implemented as FIR filters, resulting in a feasible real-time implementation. The algebraic method provides fast, non-asymptotic state estimation. For magnetic levitation systems, the algebraic estimators may replace the standard asymptotic observers while assuring very good performance and robustness. To validate the estimators as observers in closed-loop control, several nonlinear controllers are proposed and implemented in an experimental magnetic levitation prototype. The results show excellent performance of the proposed control laws together with the algebraic estimators.
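A minimal sketch of a derivative estimator realized as a causal FIR filter; Savitzky-Golay differentiation coefficients are used here as a readily available stand-in for the operational-calculus estimators of the thesis, and the window length, polynomial order, and sampling period are illustrative:

```python
import numpy as np
from scipy.signal import savgol_coeffs

dt = 1e-3  # sampling period (illustrative)
# FIR taps that estimate the first derivative at the newest of 21 samples.
taps = savgol_coeffs(21, polyorder=3, deriv=1, delta=dt, pos=20, use="dot")

def estimate_derivative(window):
    """Non-asymptotic derivative estimate: one dot product per sample,
    cheap enough for real-time execution inside a control loop."""
    return float(np.dot(taps, window))

t = np.arange(21) * dt
x = np.sin(2 * np.pi * 5 * t)                         # test signal
print(estimate_derivative(x))                          # FIR estimate
print(2 * np.pi * 5 * np.cos(2 * np.pi * 5 * t[-1]))   # true derivative
```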

Relevance:

90.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance:

90.00%

Publisher:

Abstract:

In this paper, a new control design method is proposed for stable processes that can be described by Hammerstein-Wiener models. The internal model control (IMC) framework is extended to accommodate multiple IMC controllers, one for each subsystem. The concept of passive systems is used to construct IMC controllers that approximate the inverses of the subsystems in order to achieve dynamic control performance, and the Passivity Theorem is used to ensure closed-loop stability.
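For orientation, a Hammerstein-Wiener model sandwiches linear dynamics between two static nonlinearities. A minimal simulation sketch follows; the particular nonlinearities and the first-order linear block are illustrative, not taken from the paper:

```python
import numpy as np

def hammerstein_wiener(u, a=0.8, b=0.2):
    """y[k] = g_out( G(q) g_in(u[k]) ), with G a first-order lag."""
    g_in = lambda v: np.tanh(v)          # input static nonlinearity
    g_out = lambda v: v + 0.1 * v ** 3   # output static nonlinearity
    x, y = 0.0, []
    for uk in u:
        x = a * x + b * g_in(uk)         # linear dynamic block
        y.append(g_out(x))
    return np.array(y)

# An IMC-style inverse would invert the static maps (e.g., via np.arctanh
# for g_in) around an approximate inverse of the linear block.
print(hammerstein_wiener(np.ones(5)))
```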

Relevance:

90.00%

Publisher:

Abstract:

The absence of a definitive approach to the design of manufacturing systems signifies the importance of a control mechanism to ensure the timely application of relevant design techniques. To provide effective control, design development needs to be continually assessed in relation to the required system performance, which can only be achieved analytically through computer simulation: the only technique able to accurately replicate the highly complex and dynamic interrelationships inherent within manufacturing facilities and realistically predict system behaviour. Owing to these unique capabilities, the application of computer simulation should support and encourage a thorough investigation of all alternative designs, allowing attention to focus specifically on critical design areas and enabling continuous assessment of system evolution. To achieve this, system analysis needs to be efficient in terms of data requirements and of both speed and accuracy of evaluation. To provide an effective control mechanism, a hierarchical or multi-level modelling procedure has therefore been developed, specifying the appropriate degree of evaluation support necessary at each phase of design. An underlying assumption of the proposal is that evaluation is quick and easy and allows models to expand in line with design developments. However, current approaches to computer simulation are wholly inappropriate for supporting such hierarchical evaluation. Implementation of computer simulation through traditional approaches is typically characterized by a requirement for very specialist expertise, a lengthy model development phase, and a correspondingly high expenditure, resulting in very little, and rather inappropriate, use of the technique. Simulation, when used, is generally applied only to check or verify a final design proposal; rarely is its full potential utilized to aid, support, or complement the manufacturing system design procedure. To implement the proposed modelling procedure, the concept of a generic simulator was therefore adopted, as such systems require no specialist expertise and instead facilitate quick and easy model creation, execution, and modification through simple data inputs. Previously, generic simulators have tended to be too restricted, lacking the flexibility to be generally applicable to manufacturing systems. The development of the ATOMS manufacturing simulator, however, has proven that such systems can be relevant to a wide range of applications, besides verifying the benefits of multi-level modelling.
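To illustrate the data-driven style of model building argued for above, here is a minimal sketch using the simpy discrete-event library, in which each workstation is defined by simple data inputs rather than bespoke code; the station names and cycle times are illustrative, and the ATOMS simulator itself is not reproduced here:

```python
import simpy

# Data-driven model: each workstation is just a (name, cycle time) pair.
STATIONS = [("turning", 4.0), ("milling", 6.0)]

def station(env, cycle, inbox, outbox):
    while True:
        part = yield inbox.get()     # wait for an upstream part
        yield env.timeout(cycle)     # process it
        yield outbox.put(part)       # pass it downstream

env = simpy.Environment()
buffers = [simpy.Store(env) for _ in range(len(STATIONS) + 1)]
for i, (_, cycle) in enumerate(STATIONS):
    env.process(station(env, cycle, buffers[i], buffers[i + 1]))
for p in range(10):                  # release ten parts at time zero
    buffers[0].put(f"part-{p}")
env.run(until=100)
print(len(buffers[-1].items), "parts completed")  # quick throughput check
```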

Relevance:

90.00%

Publisher:

Abstract:

The unmitigated transmission of undesirable vibration can cause human discomfort, machinery and equipment failure, and degraded quality in a manufacturing process. When identifiable transmission paths are discernible, vibrations from the source can be isolated from the rest of the system, preventing or minimising these problems. The approach proposed here for vibration isolation is active force cancellation at points close to the vibration source. It uses force feedback for multiple-input multiple-output (MIMO) control at the mounting locations, which is particularly attractive for the rigid mounting of a machine on a relatively flexible base where machine alignment and motions are to be restricted. The force transfer function matrix is used as a disturbance rejection performance specification for the design of MIMO controllers. For a machine soft-mounted via flexible isolators, a model for this matrix has been derived. Under certain conditions, a simple multiplicative uncertainty model is obtained that shows the amount of perturbation a flexible base introduces into the machine-isolator-rigid-base transmissibility matrix. Such a model is very suitable for use within the robust control design paradigm. A different model is derived for a machine on hard mounts without flexible isolators. With this model, the level of force transmitted from a machine to a final mounting structure can be determined using measurements of the machine running on another mounting structure, even when the two mounting structures have dissimilar dynamic characteristics. Experiments have verified the usefulness of the expression, and the model compares well with other methods in the literature; the disadvantage lies in the large amount of data that has to be collected. Active force cancellation is demonstrated on an experimental rig using an AC industrial motor hard-mounted onto a relatively flexible structure. The force transfer function matrix, determined from measurements, is used to design H-infinity and static output feedback controllers. Both types of controller are stable and robust to modelling errors within the identified frequency range. They reduce the RMS of the transmitted force by 30-80% at all mounting locations with the machine running at 1340 rpm. At the rated speed of 1440 rpm, only the static gain controller is able to provide a 30-55% reduction at all locations; the H-infinity controllers could only give a small reduction at one mounting location. This is due in part to deficiencies in the model used in the design: higher-frequency dynamics were ignored, which could be resolved by using a higher-order model, at the cost of a correspondingly high-order controller. A low-order static gain controller, with some tuning, performs better, but it lacks the analytical framework for analysis and design.
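The "simple multiplicative uncertainty model" mentioned above has, in standard robust-control notation, the following generic form; this is our rendering of the usual structure, not the thesis's exact expression:

$$
T_{\mathrm{flex}}(s) = T_{\mathrm{rigid}}(s)\left(I + W(s)\,\Delta(s)\right),
\qquad \lVert \Delta \rVert_\infty \le 1,
$$

where $T_{\mathrm{rigid}}$ is the machine-isolator-rigid-base transmissibility matrix, $W$ weights the size of the perturbation introduced by base flexibility, and $\Delta$ is an unknown but norm-bounded perturbation.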

Relevance:

90.00%

Publisher:

Abstract:

In this letter, an energy-efficient adaptive code position modulation scheme is proposed for wireless sensor networks, to provide the relatively stable bit error ratio (BER) performance expected by the upper layers. The system is designed with a focus on the adaptive control of transmission power, which is adjusted based on the measured power density of the background noise. Interfaces among the modulation module, the packet scheduling module, and the upper layer are provided for flexible adjustment, to adapt to the background noise and deliver the expected application quality. Simulations with the Signal Processing Worksystem (SPW) validate the effectiveness of the scheme.
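A minimal sketch of noise-adaptive power control in the spirit described above; the target SNR, fixed path loss, and power limits are illustrative placeholders, not the letter's actual scheme:

```python
def adapt_tx_power(noise_dbm, target_snr_db=12.0, path_loss_db=70.0,
                   p_min=-20.0, p_max=20.0):
    """Choose the lowest transmission power (dBm) that keeps the received
    SNR near the target for the measured background-noise density, so the
    BER stays roughly constant as the noise floor moves."""
    power = noise_dbm + target_snr_db + path_loss_db  # simple link budget
    return min(max(power, p_min), p_max)              # clamp to radio limits

# Example: the measured noise floor rises from -95 dBm to -80 dBm.
print(adapt_tx_power(-95.0), adapt_tx_power(-80.0))   # -13.0 dBm, 2.0 dBm
```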

Relevance:

90.00%

Publisher:

Abstract:

* The research work reviewed in this paper has been carried out in the context of the project "Adaptable Intelligent Interfaces Research and Development for Distance Learning Systems", funded by the Russian Foundation for Basic Research (grant No. 02-01-81019). The authors wish to acknowledge the co-operation with the Byelorussian partners of this project.

Relevance:

90.00%

Publisher:

Abstract:

Computational and communication complexities call for distributed, robust, and adaptive control. This paper proposes a promising way of designing distributed control bottom-up, in which simple controllers are responsible for individual nodes. The overall behavior of the network can be achieved by interconnecting such controlled loops, for example in cascade control, and by enabling the individual nodes to share information about data with their neighbors, without aiming at an unattainable global solution. The problem is addressed by employing fully probabilistic design, which can cope with inherent uncertainties, can be implemented adaptively, and provides a systematic, rich way of sharing information. This paper elaborates the overall solution, applies it to the linear-Gaussian case, and provides simulation results.
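For context, fully probabilistic design selects the randomized control law so that the closed-loop joint probability density is as close as possible, in the Kullback-Leibler sense, to an ideal density. Our hedged rendering of the standard formulation follows; the notation is generic and not necessarily the paper's:

$$
f^{\mathrm{opt}} = \arg\min_{f} \, D\!\left(f \,\middle\|\, {}^{I}\!f\right)
= \arg\min_{f} \int f(d) \, \ln\!\frac{f(d)}{{}^{I}\!f(d)} \, \mathrm{d}d,
$$

where $d$ is the closed-loop data trajectory, $f$ its joint density under the chosen controller, and ${}^{I}\!f$ the ideal density. In the linear-Gaussian case the minimization admits an explicit recursive solution, which is what makes the adaptive, per-node implementation practical.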

Relevance:

90.00%

Publisher:

Abstract:

Nanoparticles offer an ideal platform for the delivery of small-molecule drugs, subunit vaccines, and genetic constructs. Besides the necessity of a homogeneous size distribution, defined loading efficiencies, and reasonable production and development costs, one of the major bottlenecks in translating nanoparticles into clinical application is the need for rapid, robust, and reproducible development techniques. Within this thesis, microfluidic methods were investigated for the manufacturing, drug or protein loading, and purification of pharmaceutically relevant nanoparticles. Initially, methods to prepare small liposomes were evaluated and compared with a microfluidics-directed nanoprecipitation method. To support the implementation of statistical process control, design-of-experiments models aided process robustness and validation for the methods investigated, gave an initial overview of the size ranges obtainable with each method, and allowed the advantages and disadvantages of each method to be evaluated. The lab-on-a-chip system resulted in high-throughput vesicle manufacturing, enabling a rapid process and a high degree of process control. To investigate this method further, cationic low-transition-temperature lipids, cationic bola-amphiphiles with delocalized charge centers, neutral lipids, and polymers were used in the microfluidics-directed nanoprecipitation method to formulate vesicles. Whereas both the total flow rate (TFR) and the ratio of solvent to aqueous stream (flow rate ratio, FRR) were shown to influence vesicle size for high-transition-temperature lipids, the FRR was found to be the most influential factor controlling the size of vesicles consisting of low-transition-temperature lipids and of polymer-based nanoparticles. The biological activity of the resulting constructs was confirmed by in vitro transfection of pDNA constructs using cationic nanoprecipitated vesicles. Design of experiments and multivariate data analysis revealed the mathematical relationship between the factors TFR and FRR in the microfluidics process and the liposome size, polydispersity, and transfection efficiency, and established their significance. Multivariate tools were used to cluster and predict specific in vivo immune responses, dependent on key liposome adjuvant characteristics, upon delivery of a tuberculosis antigen in a vaccine candidate. The addition of a low-solubility model drug (propofol) in the nanoprecipitation method resulted in significantly higher solubilisation of the drug within the liposomal bilayer compared with the control method. The microfluidics method underwent scale-up work by increasing the channel diameter and parallelising the mixers in a planar fashion, resulting in an overall 40-fold increase in throughput. Furthermore, microfluidic tools were developed based on microfluidics-directed tangential flow filtration, which allowed for continuous manufacturing, purification, and concentration of liposomal drug products.
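A minimal sketch of the kind of two-factor design-of-experiments model described above, fitting size as a function of TFR and FRR with an interaction term; the coded factor levels and least-squares form are illustrative, and no measured responses from the thesis are reproduced:

```python
import numpy as np
from itertools import product

# Two-level full factorial in coded units (-1, +1) for TFR and FRR.
design = np.array(list(product([-1.0, 1.0], repeat=2)))

def fit_doe(sizes):
    """Least-squares fit of size = b0 + b1*TFR + b2*FRR + b12*TFR*FRR.
    `sizes` are measured responses, one per design row."""
    tfr, frr = design[:, 0], design[:, 1]
    X = np.column_stack([np.ones(len(design)), tfr, frr, tfr * frr])
    coef, *_ = np.linalg.lstsq(X, np.asarray(sizes, dtype=float), rcond=None)
    return dict(zip(["b0", "bTFR", "bFRR", "bTFRxFRR"], coef))

# Usage: run the four corner experiments, then pass the measured sizes,
# e.g. fit_doe([d1, d2, d3, d4]).
```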

Relevance:

90.00%

Publisher:

Abstract:

This thesis describes the development of an adaptive control algorithm for Computerized Numerical Control (CNC) machines, implemented in a multi-axis motion control board based on the TMS320C31 DSP chip. The adaptive process involves two stages: plant modeling and inverse control application. The first stage builds a non-recursive model of the CNC system (the plant) using the Least-Mean-Square (LMS) algorithm. The second stage consists of the definition of a recursive structure (the controller) that implements an inverse model of the plant by using the coefficients of the model in an algorithm called Forward-Time Calculation (FTC). In this way, when the inverse controller is implemented in series with the plant, it pre-compensates for the modification that the original plant introduces into the input signal. The performance of this solution was verified at three different levels: software simulation, implementation on a set of isolated motor-encoder pairs, and implementation on a real CNC machine. The use of the adaptive inverse controller effectively improved the step response of the system at all three levels. In the simulation, an ideal response was obtained. In the motor-encoder tests, the rise time was reduced by as much as 80%, without overshoot, in some cases. Even with the larger mass of the actual CNC machine, a decrease in rise time and elimination of the overshoot were obtained in most cases. These results lead to the conclusion that the adaptive inverse controller is a viable approach to position control in CNC machinery.
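A minimal sketch of the first stage (LMS plant modeling); the tap count and step size are illustrative placeholders, and the Forward-Time Calculation inverse of the second stage is only indicated in a comment:

```python
import numpy as np

def lms_identify(u, d, n_taps=32, mu=0.01):
    """Stage 1 of the adaptive process: build a non-recursive (FIR) model
    of the plant with the LMS rule w <- w + mu * e * x."""
    w = np.zeros(n_taps)
    x = np.zeros(n_taps)             # tapped delay line of plant input
    for k in range(len(u)):
        x = np.roll(x, 1)
        x[0] = u[k]                  # newest input sample first
        y_hat = w @ x                # model output
        e = d[k] - y_hat             # error vs. measured plant output d[k]
        w += mu * e * x              # LMS coefficient update
    return w

# w approximates the plant's impulse response; the FTC stage would then
# use these coefficients to build the recursive inverse controller.
```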