880 results for Time-Delayed Systems


Relevance:

90.00%

Publisher:

Abstract:

During the last 30 years, atomic force microscopy has become the most powerful tool for surface probing at the atomic scale. The tapping-mode atomic force microscope is used to generate high-quality, accurate images of the sample surface. In this mode of operation, however, the microcantilever frequently exhibits chaotic motion due to the nonlinear characteristics of the tip-sample force interactions, degrading image quality. This kind of irregular motion must be avoided by the control system. In this work, the tip-sample interaction is modelled using the Lennard-Jones potential and a two-term Galerkin approximation. Additionally, the State-Dependent Riccati Equation and time-delayed feedback control techniques are used to force the motion of the tapping-mode atomic force microscope onto a periodic orbit, preventing chaotic motion of the microcantilever.
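The delayed-feedback idea summarized above can be illustrated with a minimal numerical sketch: a forced Duffing-type oscillator (a common single-mode caricature of a tapping-mode cantilever, not the authors' Galerkin model) under Pyragas-style control u = K[x(t−τ) − x(t)], which vanishes once the motion locks onto a τ-periodic orbit. All parameter values below are illustrative.

```python
import math

def simulate(K=0.35, tau=2 * math.pi, dt=0.001, steps=200_000):
    """Euler integration of x'' + d*x' - x + x**3 = f*cos(t) + u(t) with
    Pyragas-type control u = K*(x(t - tau) - x(t)).  The Duffing form is a
    stand-in for a tapping-mode cantilever; parameters are illustrative."""
    d, f = 0.25, 0.3
    delay = int(round(tau / dt))       # delay expressed in integration steps
    hist = [0.1] * delay               # circular buffer of past x values
    x, v, t, u = 0.1, 0.0, 0.0, 0.0
    for n in range(steps):
        x_delayed = hist[n % delay]    # x(t - tau), from the delay line
        u = K * (x_delayed - x)        # vanishes on a tau-periodic orbit
        a = -d * v + x - x**3 + f * math.cos(t) + u
        x, v, t = x + v * dt, v + a * dt, t + dt
        hist[n % delay] = x            # overwrite the oldest sample
    return x, v, u

x_end, v_end, u_end = simulate()
```

Because the control term is proportional to the difference between the delayed and the current state, it leaves any τ-periodic orbit of the uncontrolled system unchanged (noninvasive control).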

Relevance:

90.00%

Publisher:

Abstract:

Motor movements are monitored for accuracy via visual feedback and corrected if necessary. A technical intervention, such as prism goggles, can introduce a discrepancy between the optically perceived and haptically experienced environment, allowing the capabilities of the visuomotor system to be tested. In this work, a computer-based method was developed to simulate such a visuomotor discrepancy. The test subjects perform a ballistic movement with arm and hand, aiming to hit a given target. The hit points are recorded by a computer using a digitizing tablet. The visual environment presented to the subjects is displayed on a monitor. The subjects view the monitor image, a cross on a white background, via a mirror mounted at an appropriate angle between monitor and digitizing tablet, so that the target image is projected onto the tablet. The subjects thus perceive the target cross as lying on the digitizing tablet. When a subject performs a target movement, the recorded coordinates can be displayed as points on the monitor, giving the subject visual feedback on the movement. The working area of the digitizing tablet can be configured via the computer, allowing motor displacements to be simulated. The various possibilities of this setup were partly tested in preliminary experiments in order to align research questions, methodology, and technical equipment. The main experiments focused on the temporal delay of the visual feedback and on intermanual transfer. The following results were obtained: ● The subjects adapt to a spatially shifted environment. 
The time course of adaptation can be computed and represented mathematically by an exponential function. ● This course is independent of the type of visual feedback: observing the hand movement during adaptation yields the same sequence of hits as a simple point projection marking the end point of the movement. ● The exponential course of the adaptation is independent of the tested temporal delays of the visual feedback. ● The after-effect results show that as the temporal delay of the visual feedback during the adaptation phase increases, the magnitude of the after-effect decreases; that is, the persistent adaptation to a visuomotor discrepancy declines. ● The after-effects show individual characteristics: subjects adapt to a simulated displacement to different degrees. A comparison with the visuomotor challenges in the subjects' prior lives suggests that the human visuomotor system is trainable and adapts to perceived discrepancies differently depending on its training state. ● Intermanual transfer was demonstrated under various conditions. ● A markedly stronger after-effect is observed when the perceived visuomotor discrepancy between target and hit point is projected into one brain hemisphere and the after-effect is executed with the hand controlled by that hemisphere. Intermanual transfer is therefore favored when the visual projection of the error observation goes to the hemisphere that is motorically passive during the adaptation phase.
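The exponential adaptation course reported above can be made concrete with a small sketch: fitting error(t) = y∞ + (y0 − y∞)·e^(−t/τ) to per-trial pointing errors by log-linear least squares, assuming the asymptote y∞ is known. The data and parameter values here are synthetic illustrations, not the thesis's measurements.

```python
import math

def fit_exponential(trials, errors, y_inf):
    """Fit error(t) = y_inf + (y0 - y_inf) * exp(-t / tau) by linear
    least squares on log(error - y_inf); assumes the errors decay toward
    the known asymptote y_inf (illustrative model, not the thesis code)."""
    ys = [math.log(e - y_inf) for e in errors]
    n = len(trials)
    mx, my = sum(trials) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(trials, ys))
             / sum((x - mx) ** 2 for x in trials))
    intercept = my - slope * mx
    tau = -1.0 / slope                  # time constant of adaptation
    y0 = y_inf + math.exp(intercept)    # initial error at trial 0
    return y0, tau

# synthetic pointing errors (cm) decaying over trials
trials = list(range(20))
true_y0, true_tau, y_inf = 8.0, 5.0, 0.5
errors = [y_inf + (true_y0 - y_inf) * math.exp(-t / true_tau) for t in trials]
y0, tau = fit_exponential(trials, errors, y_inf)
```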

Relevance:

90.00%

Publisher:

Abstract:

The growing share of electricity from renewable energy sources calls for a dynamic concept to compensate for peak-load periods and supply gaps in wind and solar power. Owing to their high energetic availability and the storability of biogas, biogas plants can provide a flexible energy supply and, via a power-to-gas process, prevent overloading of the electricity grid during short-term power surpluses. Demand-driven operation of biogas plants, however, places high demands on the microbiology in the reactor, which must adapt to frequently changing process conditions such as the organic loading rate. Real-time monitoring of the fermentation process is therefore indispensable in order to detect disturbances in the microbial fermentation pathways early and to counteract them adequately. Previous microbial population analyses have been limited to laborious molecular-biological investigations of the fermentation substrate, whose results are thus available to the operator only with a delay. In this work, a laser absorption spectrometer for the continuous measurement of the carbon isotope ratios of methane was tested for the first time at a research biogas plant. Isotope ratios varying with the organic loading rate and process conditions could be measured. Using isolates from the reactor under investigation, it was first shown that each methanogenesis pathway (hydrogenotrophic, acetoclastic, and methylotrophic) leaves a characteristic natural isotope signature in the biogas, so that the currently dominant methanogenic reactions can be identified from the isotope ratios in the biogas. 
Through the use of 13C- and 2H-isotope-labelled substrates in pure and mixed cultures and batch reactors, together with HPLC and GC analyses of the metabolic products, several previously unknown carbon fluxes in bioreactors were identified, which can in turn affect the measured isotope ratios in the biogas. The formation of methanol and of its microbial degradation products up to the final CH4 formation could thus be reconstructed for the first time in an agricultural biogas plant on the basis of five isolates, and the occurrence of methylotrophic methanogenesis pathways was demonstrated. Using molecular-biological methods, methane-oxidizing bacteria of numerous unknown species were additionally detected in the reactor, whose presence had not been expected given the low O2 content of biogas plants. By constructing a synthetic DNA strand carrying the binding sequences for eleven specific primer pairs, a new method was established by which a large number of microbial target organisms can be quantified by real-time PCR using a single uniform copy standard. A weekly qPCR analysis of fermenter samples carried out over 70 days showed that the isotope ratios in the biogas are significantly influenced by the composition of the reactor microbiota. Besides the currently dominant methanogenesis pathways, it was also possible to identify several bacterial reactions, such as syntrophic acetate oxidation, acetogenesis, and sulfate reduction, from the δ13C(CH4) values, demonstrating the high potential of continuous isotope measurement for process analytics in biogas plants.
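As a hedged illustration of the kind of absolute quantification a unified copy standard enables, the sketch below converts qPCR Ct values to copy numbers via a standard curve Ct = slope·log10(copies) + intercept; the slope and intercept are placeholder values, not calibrations from this work.

```python
def copies_from_ct(ct, slope=-3.32, intercept=37.0):
    """Absolute quantification from a qPCR standard curve
    Ct = slope * log10(copies) + intercept.  A slope of -3.32
    corresponds to roughly 100% amplification efficiency; both
    values here are illustrative placeholders, not from the thesis."""
    return 10 ** ((ct - intercept) / slope)

def efficiency(slope=-3.32):
    """Amplification efficiency implied by the standard-curve slope."""
    return 10 ** (-1.0 / slope) - 1.0
```

With these placeholder values, a Ct of 37 maps to a single template copy, and each 3.32-cycle decrease in Ct corresponds to a ten-fold increase in copy number.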

Relevance:

90.00%

Publisher:

Abstract:

Through dedicated measurements in the optical regime, we demonstrate that ptychography can be applied to reconstruct complex-valued object functions that vary with time from a sequence of spectral measurements. A probe pulse of approximately 1 ps duration, time-delayed in increments of 0.25 ps, is shown to recover dynamics on a ten-times-faster time scale, with an experimental limit of approximately 5 fs.

Relevance:

90.00%

Publisher:

Abstract:

A set of software development tools for building real-time control systems on a simple robotics platform is described in the paper. The tools are being used in a real-time systems course as a basis for student projects. The development platform is a low-cost PC running GNU/Linux, and the target system is LEGO MINDSTORMS NXT, thus keeping the cost of the laboratory low. Real-time control software is developed using a mixed paradigm. Functional code for control algorithms is automatically generated in C from Simulink models. This code is then integrated into a concurrent, real-time software architecture based on a set of components written in Ada. This approach enables the students to take advantage of the high-level, model-oriented features that Simulink offers for designing control algorithms, and of the comprehensive support for concurrency and real-time constructs provided by Ada.
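The mixed paradigm described above, a generated step function wrapped in a periodic real-time task, can be sketched in outline. This Python stand-in mimics an Ada periodic task with delay-until semantics and substitutes a toy PI step function for the Simulink-generated C code; note that a general-purpose `time.sleep` gives no hard real-time guarantee, so this is only an illustration of the structure.

```python
import time

def controller_step(setpoint, measurement, state):
    """Stand-in for a Simulink-generated step function: a discrete
    PI controller (illustrative; the real code would be generated C)."""
    kp, ki, dt = 2.0, 0.5, 0.01
    error = setpoint - measurement
    state["integral"] += error * dt
    return kp * error + ki * state["integral"]

def run_periodic(period_s=0.01, cycles=50):
    """Fixed-rate executive: call the step function once per period and
    flag overruns, mimicking a periodic Ada task using delay-until."""
    state = {"integral": 0.0}
    overruns, measurement = 0, 0.0
    next_release = time.monotonic()
    for _ in range(cycles):
        u = controller_step(1.0, measurement, state)
        measurement += 0.05 * (u - measurement)   # toy plant response
        next_release += period_s                  # absolute release time
        slack = next_release - time.monotonic()
        if slack < 0:
            overruns += 1                         # deadline miss
        else:
            time.sleep(slack)
    return measurement, overruns

m, ov = run_periodic()
```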

Relevance:

90.00%

Publisher:

Abstract:

The development of mixed-criticality virtualized multi-core systems poses new challenges that are the subject of active research work. There is an additional complexity: it is now required to identify a set of partitions and to allocate applications to partitions. In this process, a number of issues have to be considered, such as the criticality level of the application, security and dependability requirements, the granularity of timing requirements, etc. The MultiPARTES [11] toolset relies on Model-Driven Engineering (MDE), which is a suitable approach in this setting, as it helps to bridge the gap between design issues and partitioning concerns. MDE is changing the way systems are developed nowadays, reducing development time. In general, modelling approaches have shown their benefits when applied to embedded systems. These benefits have been achieved by fostering reuse through an intensive use of abstractions, or by automating the generation of boilerplate code.

Relevance:

90.00%

Publisher:

Abstract:

Paper presented at the V Jornadas de Computación Empotrada (5th Workshop on Embedded Computing), Valladolid, Spain, 17-19 September 2014.

Relevance:

90.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

90.00%

Publisher:

Abstract:

The kinematic mapping of a rigid open-link manipulator is a homomorphism between Lie groups. The homomorphism has solution groups that act on an inverse kinematic solution element. A canonical representation of solution group operators that act on a solution element of three and seven degree-of-freedom (dof) dextrous manipulators is determined by geometric analysis. Seven canonical solution groups are determined for the seven-dof Robotics Research K-1207 and Hollerbach arms. The solution element of a dextrous manipulator is a collection of trivial fibre bundles with solution fibres homotopic to the torus. If fibre solutions are parameterised by a scalar, a direct inverse function that maps the scalar and Cartesian base space coordinates to solution element fibre coordinates may be defined. A direct inverse parameterisation of a solution element may be approximated by a local linear map generated by an inverse augmented Jacobian correction of a linear interpolation. The action of canonical solution group operators on a local linear approximation of the solution element of inverse kinematics of dextrous manipulators generates cyclical solutions. The solution representation is proposed as a model of inverse kinematic transformations in primate nervous systems. Simultaneous calibration of a composition of stereo-camera and manipulator kinematic models is under-determined by equi-output parameter groups in the composition of stereo-camera and Denavit-Hartenberg (DH) models. An error measure for simultaneous calibration of a composition of models is derived, and parameter subsets with no equi-output groups are determined by numerical experiments to simultaneously calibrate the composition of homogeneous or pan-tilt stereo-camera with DH models. 
To accelerate exact Newton second-order re-calibration of DH parameters after a sequential calibration of stereo-camera and DH parameters, an optimal numerical evaluation of DH matrix first-order and second-order error derivatives with respect to a re-calibration error function is derived, implemented and tested. A distributed object environment for point-and-click image-based tele-command of manipulators and stereo-cameras is specified and implemented that supports rapid prototyping of numerical experiments in distributed system control. The environment is validated by a hierarchical k-fold cross-validated calibration to Cartesian space of a radial basis function regression correction of an affine stereo model. Basic design and performance requirements are defined for scalable virtual micro-kernels that broker inter-Java-virtual-machine remote method invocations between components of secure, manageable, fault-tolerant, open, distributed, agile, Total Quality Managed, ISO 9000+ conformant, Just-in-Time manufacturing systems.
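The Jacobian-corrected local linear inverse described above can be illustrated on a planar two-link arm: a rough joint estimate is refined by inverse-Jacobian (Newton) corrections until the forward kinematics reach the target. This is a simplified analogue with assumed unit link lengths, not the thesis's augmented-Jacobian implementation.

```python
import math

def fk(q1, q2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar two-link arm (assumed geometry)."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

def jacobian(q1, q2, l1=1.0, l2=1.0):
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def ik_corrected(target, q_guess, iters=5):
    """Local linear approximation of the inverse map refined by
    inverse-Jacobian (Newton) corrections -- a simplified analogue of
    Jacobian-corrected interpolation, not the thesis's actual code."""
    q1, q2 = q_guess
    for _ in range(iters):
        x, y = fk(q1, q2)
        ex, ey = target[0] - x, target[1] - y
        (a, b), (c, d) = jacobian(q1, q2)
        det = a * d - b * c
        if abs(det) < 1e-9:
            break                       # near-singular: stop correcting
        dq1 = ( d * ex - b * ey) / det  # solve J dq = e by Cramer's rule
        dq2 = (-c * ex + a * ey) / det
        q1, q2 = q1 + dq1, q2 + dq2
    return q1, q2

q = ik_corrected((1.2, 0.8), (0.3, 1.0))
```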

Relevance:

90.00%

Publisher:

Abstract:

This research is concerned with the development of distributed real-time systems, in which software is used for the control of concurrent physical processes. These distributed control systems are required to periodically coordinate the operation of several autonomous physical processes, with the property of an atomic action. The implementation of this coordination must be fault-tolerant if the integrity of the system is to be maintained in the presence of processor or communication failures. Commit protocols have been widely used to provide this type of atomicity and ensure consistency in distributed computer systems. The objective of this research is the development of a class of robust commit protocols, applicable to the coordination of distributed real-time control systems. Extended forms of the standard two-phase commit protocol, which provide fault-tolerant and real-time behaviour, were developed. Petri nets are used for the design of the distributed controllers, and to embed the commit protocol models within these controller designs. This composition of controller and protocol model allows the analysis of the complete system in a unified manner. A common problem for Petri-net-based techniques is that of state space explosion; a modular approach to both design and analysis helps cope with this problem. Although extensions to Petri nets that allow module construction exist, the modularisation is generally restricted to the specification, and analysis must be performed on the (flat) detailed net. The Petri net designs for the type of distributed systems considered in this research are both large and complex. The top-down, bottom-up and hybrid synthesis techniques that are used to model large systems in Petri nets are considered. A hybrid approach to Petri net design for a restricted class of communicating processes is developed. Designs produced using this hybrid approach are modular and allow re-use of verified modules. 
In order to use this form of modular analysis, it is necessary to project an equivalent but reduced behaviour on the modules used. These projections conceal events local to modules that are not essential for the purpose of analysis. To generate the external behaviour, each firing sequence of the subnet is replaced by an atomic transition internal to the module, and the firing of these transitions transforms the input and output markings of the module. Thus local events are concealed through the projection of the external behaviour of modules. This hybrid design approach preserves properties of interest, such as boundedness and liveness, while the systematic concealment of local events allows the management of state space. The approach presented in this research is particularly suited to distributed systems, as the underlying communication model is used as the basis for the interconnection of modules in the design procedure. This hybrid approach is applied to Petri net based design and analysis of distributed controllers for two industrial applications that incorporate the robust, real-time commit protocols developed. Temporal Petri nets, which combine Petri nets and temporal logic, are used to capture and verify causal and temporal aspects of the designs in a unified manner.
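A minimal place/transition net, of the kind used to embed the commit protocol in the controller designs, can be sketched as follows. The two-phase-commit fragment modelled here (one coordinator, one participant, failure-free path only) is an illustrative reduction, not the robust real-time protocols developed in the research.

```python
class PetriNet:
    """Minimal place/transition net: a transition is enabled when every
    input place holds a token, and firing moves tokens from inputs to
    outputs.  Illustrative only -- the research uses temporal Petri nets
    and a much richer commit-protocol model."""
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name), name
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# One failure-free round of a two-phase commit with a single participant.
net = PetriNet({"coord_init": 1, "part_init": 1})
net.add_transition("send_prepare", ["coord_init"], ["waiting", "prepare_msg"])
net.add_transition("vote_yes", ["part_init", "prepare_msg"], ["ready", "yes_msg"])
net.add_transition("commit", ["waiting", "yes_msg"], ["committed", "commit_msg"])
net.add_transition("ack", ["ready", "commit_msg"], ["done"])
for t in ["send_prepare", "vote_yes", "commit", "ack"]:
    net.fire(t)
```

Message exchanges appear as shared places (`prepare_msg`, `commit_msg`), which is also how the underlying communication model drives module interconnection in the hybrid design approach.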

Relevance:

90.00%

Publisher:

Abstract:

Catering to society's demand for high-performance computing, billions of transistors are now integrated on IC chips to deliver unprecedented performance. With increasing transistor density, the power consumption/density is growing exponentially. The increasing power consumption directly translates into high chip temperature, which not only raises packaging/cooling costs, but also degrades the performance/reliability and life span of computing systems. Moreover, high chip temperature also greatly increases the leakage power consumption, which is becoming more and more significant with the continuous scaling of the transistor size. As the semiconductor industry continues to evolve, power and thermal challenges have become the most critical challenges in the design of new generations of computing systems. In this dissertation, we addressed the power/thermal issues from the system-level perspective. Specifically, we sought to employ real-time scheduling methods to optimize the power/thermal efficiency of real-time computing systems, with the leakage/temperature dependency taken into consideration. In our research, we first explored the fundamental principles of how to employ dynamic voltage scaling (DVS) techniques to reduce the peak operating temperature when running a real-time application on a single-core platform. We further proposed a novel real-time scheduling method, "M-Oscillations", to reduce the peak temperature when scheduling a hard real-time periodic task set. We also developed three checking methods to guarantee the feasibility of a periodic real-time schedule under a peak temperature constraint. We further extended our research from single-core platforms to multi-core platforms. We investigated the energy estimation problem on multi-core platforms and developed a lightweight and accurate method to calculate the energy consumption for a given voltage schedule on a multi-core platform. 
Finally, we concluded the dissertation with elaborated discussions of future extensions of our research.
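The leakage/temperature dependency and the intuition behind oscillating speed schedules can be illustrated with a lumped RC thermal model: under equal delivered work, spreading the busy intervals out in time lowers the peak temperature compared with running flat-out and then idling. All parameters (thermal conductance, leakage coefficients) are invented for illustration and are not from the dissertation.

```python
def simulate_temperature(speed_of_t, horizon=10.0, dt=0.001):
    """Lumped RC thermal model  T' = P(v, T) - b*(T - T_amb)  with a
    temperature-dependent leakage term P_leak = (g0 + g1*T)*v and
    dynamic power a*v**3.  Parameters are illustrative placeholders."""
    b, T_amb = 0.5, 25.0
    g0, g1, a = 0.1, 0.02, 8.0
    T, peak, t, work = T_amb, T_amb, 0.0, 0.0
    for _ in range(int(horizon / dt)):
        v = speed_of_t(t)
        power = a * v ** 3 + (g0 + g1 * T) * v   # dynamic + leakage
        T += (power - b * (T - T_amb)) * dt      # explicit Euler step
        work += v * dt                           # delivered computation
        peak = max(peak, T)
        t += dt
    return peak, work

# equal work (5 speed-seconds) delivered two ways
run_then_idle = lambda t: 1.0 if t < 5.0 else 0.0
oscillate     = lambda t: 1.0 if (t % 1.0) < 0.5 else 0.0
peak_a, work_a = simulate_temperature(run_then_idle)
peak_b, work_b = simulate_temperature(oscillate)
```

The oscillating schedule completes the same work but interleaves cooling slack, which caps the temperature rise; this is the qualitative effect the M-Oscillations method exploits.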

Relevance:

90.00%

Publisher:

Abstract:

For the past several decades, we have experienced tremendous growth, in both scale and scope, of real-time embedded systems, thanks largely to advances in IC technology. However, the traditional approach of boosting performance by increasing the CPU frequency is now a thing of the past. Researchers from both industry and academia are turning their focus to multi-core architectures for continued improvement of computing performance. In our research, we seek to develop efficient scheduling algorithms and analysis methods for the design of real-time embedded systems on multi-core platforms. Real-time systems are those in which the response time is as critical as the logical correctness of the computational results. In addition, a variety of stringent constraints such as power/energy consumption, peak temperature and reliability are also imposed on these systems. Therefore, real-time scheduling plays a critical role in the design of such computing systems at the system level. We started our research by addressing timing constraints for real-time applications on multi-core platforms, and developed both partitioned and semi-partitioned scheduling algorithms to schedule fixed-priority, periodic, hard real-time tasks on multi-core platforms. We then extended our research by taking temperature constraints into consideration. We developed a closed-form solution to capture the temperature dynamics for a given periodic voltage schedule on multi-core platforms, and also developed three methods to check the feasibility of a periodic real-time schedule under a peak temperature constraint. We further extended our research by incorporating the power/energy constraint with thermal awareness into our research problem. We investigated the energy estimation problem on multi-core platforms, and developed a computationally efficient method to calculate the energy consumption for a given voltage schedule on a multi-core platform. 
In this dissertation, we present our research in details and demonstrate the effectiveness and efficiency of our approaches with extensive experimental results.
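A partitioned fixed-priority scheme of the general kind discussed above can be sketched with first-fit-decreasing bin packing guarded by the Liu and Layland rate-monotonic utilization bound n(2^(1/n) − 1). This is a textbook baseline for illustration, not the dissertation's algorithms.

```python
def rm_bound(n):
    """Liu & Layland utilization bound for n rate-monotonic tasks."""
    return n * (2 ** (1.0 / n) - 1)

def partition_first_fit(utils, num_cores):
    """First-fit-decreasing partitioning of periodic task utilizations
    onto cores, admitting a task only if the core still passes the RM
    bound.  A simplified stand-in for the dissertation's algorithms."""
    cores = [[] for _ in range(num_cores)]
    for u in sorted(utils, reverse=True):    # largest utilization first
        for core in cores:
            if sum(core) + u <= rm_bound(len(core) + 1):
                core.append(u)               # fits on this core
                break
        else:
            return None                      # unschedulable by this test
    return cores

cores = partition_first_fit([0.4, 0.3, 0.3, 0.2, 0.2, 0.1], 3)
```

The RM bound is sufficient but not necessary, so this admission test is conservative; exact response-time analysis would admit more task sets.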

Relevance:

90.00%

Publisher:

Abstract:

We consider the suppression of spatiotemporal chaos in the complex Ginzburg-Landau equation by a combined global and local time-delay feedback. The feedback terms are implemented as a control scheme, i.e., they are proportional to the difference between the time-delayed state of the system and its current state. We perform a linear stability analysis of uniform oscillations with respect to space-dependent perturbations and compare with numerical simulations. Similarly, for the fixed-point solution that corresponds to amplitude death in the spatially extended system, a linear stability analysis with respect to space-dependent perturbations is performed and complemented by numerical simulations.
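Restricted to the spatially uniform mode, the control scheme above reduces to a Stuart-Landau oscillator with time-delayed feedback; choosing the delay equal to the period of the uniform oscillation makes the control noninvasive, since the feedback term vanishes on the target orbit. The sketch below uses illustrative parameter values, not those of the paper.

```python
import math

def stuart_landau_tdf(omega=1.0, c=0.5, K=0.3, dt=0.001, periods=10):
    """Time-delayed feedback on the uniform mode of the complex
    Ginzburg-Landau equation (a Stuart-Landau oscillator):
        dA/dt = (1 + i*omega) A - (1 + i*c) |A|^2 A + K (A(t-tau) - A(t)).
    The uniform oscillation A = exp(i*Omega*t), Omega = omega - c, has
    period tau = 2*pi/|Omega|; with that delay the control term vanishes
    on the orbit (noninvasive control).  Parameters are illustrative."""
    Omega = omega - c                      # frequency of uniform oscillation
    tau = 2 * math.pi / abs(Omega)
    delay = int(round(tau / dt))
    hist = [complex(0.8, 0.1)] * delay     # delay line of past A values
    A = complex(0.8, 0.1)
    steps = int(periods * tau / dt)
    for n in range(steps):
        A_del = hist[n % delay]            # A(t - tau)
        dA = ((1 + 1j * omega) * A
              - (1 + 1j * c) * abs(A) ** 2 * A
              + K * (A_del - A))           # control term
        A += dA * dt                       # explicit Euler step
        hist[n % delay] = A
    return A, K * (hist[steps % delay] - A)

A_end, control = stuart_landau_tdf()
```

After the transient, |A| settles on the unit-amplitude orbit and the residual control signal is negligible, confirming the noninvasive character of the scheme.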

Relevance:

90.00%

Publisher:

Abstract:

Due to the variable and stochastic nature of wind power systems, accurate wind power forecasting plays an important role in developing reliable and economic power system operation and control strategies. Gaussian process regression has recently been introduced to capture the randomness of wind energy. However, the disadvantages of Gaussian process regression include its computational complexity and its inability to adapt to time-varying time-series systems. A variant Gaussian process for time-series forecasting is introduced in this study to address these issues. The new method is shown to reduce computational complexity and increase prediction accuracy. It is further proved that the forecasting result converges as the number of available data points approaches infinity. In addition, a teaching-learning-based optimization (TLBO) method is used to train the model and to accelerate the learning rate. The proposed modelling and optimization method is applied to forecast both the wind power generation of Ireland and that of a single wind farm, demonstrating the effectiveness of the proposed method.
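For context, the textbook Gaussian process regression that the proposed variant builds on can be sketched in a few lines: exact inference with an RBF kernel on a toy series. This is the baseline method only, not the variant GP or the TLBO training described above, and the data are synthetic.

```python
import math

def rbf(x1, x2, length=1.0):
    """Squared-exponential (RBF) covariance between two scalar inputs."""
    return math.exp(-0.5 * ((x1 - x2) / length) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, x_star, noise=1e-6, length=1.0):
    """Plain GP regression mean/variance with an RBF kernel -- the
    textbook baseline the abstract's variant builds on."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j], length) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, ys)                       # K^{-1} y
    k_star = [rbf(x, x_star, length) for x in xs]
    mean = sum(k_star[i] * alpha[i] for i in range(n))
    v = solve(K, k_star)                       # K^{-1} k_*
    var = rbf(x_star, x_star, length) - sum(k_star[i] * v[i] for i in range(n))
    return mean, var

# toy "wind power" series: predict an intermediate point from four samples
xs, ys = [0.0, 1.0, 2.0, 3.0], [0.1, 0.5, 0.9, 0.7]
mean, var = gp_predict(xs, ys, 1.5)
```

The O(n³) linear solves in `gp_predict` are exactly the computational cost the abstract's variant aims to reduce.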
