953 results for shortest paths
Abstract:
This thesis investigates a method for human-robot interaction (HRI) that maintains the productivity of industrial robots, e.g. by minimizing operation time, while ensuring human safety, e.g. through collision avoidance. To solve such problems, an online motion planning approach for robotic manipulators with HRI is proposed. The approach is based on model predictive control (MPC) with embedded mixed-integer programming. The planning strategies considered in the thesis operate directly in the workspace, which allows easy obstacle representation. The non-convex optimization problem is approximated by a mixed-integer program (MIP), which is further reformulated so that the number of binary variables and the number of feasible integer solutions are drastically decreased. Safety-relevant regions, which are potentially occupied by human operators, can be generated online by a proposed method based on hidden Markov models. In contrast to previous approaches, which derive predictions from probability density functions in the form of single points, such as the most likely or expected human positions, the proposed method computes safety-relevant subsets of the workspace as a region possibly occupied by the human at future time instants. The method is further enhanced by combining it with reachability analysis to increase prediction accuracy. These safety-relevant regions can subsequently serve as safety constraints when the motion is planned by optimization. This yields motion plans that are safe, i.e. plans that avoid collision with a probability not less than a predefined threshold. The developed methods have been successfully applied to a demonstrator in which an industrial robot works in the same space as a human operator. The task of the industrial robot is to drive its end-effector through a nominal sequence of gripping, motion, and releasing operations while avoiding collision with a human arm.
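For orientation, workspace obstacle avoidance is commonly encoded in a MIP with big-M disjunctive constraints; the following is a generic textbook formulation (not necessarily the thesis's reformulation, which further reduces the number of binaries). To keep the end-effector position p_k = (x_k, y_k, z_k) outside an axis-aligned box obstacle at each time step k:

```latex
\begin{aligned}
& x_k \le x^{\min} + M b_{k,1}, \qquad x_k \ge x^{\max} - M b_{k,2},\\
& y_k \le y^{\min} + M b_{k,3}, \qquad y_k \ge y^{\max} - M b_{k,4},\\
& z_k \le z^{\min} + M b_{k,5}, \qquad z_k \ge z^{\max} - M b_{k,6},\\
& \sum_{i=1}^{6} b_{k,i} \le 5, \qquad b_{k,i} \in \{0,1\},
\end{aligned}
```

where M is a sufficiently large constant. The constraint on the sum forces at least one of the six face constraints to remain active, keeping p_k on the outside of the box.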
Abstract:
Presentation given at the Al-Azhar Engineering First Conference, AEC'89, Dec. 9-12, 1989, Cairo, Egypt. The paper presented at AEC'89 proposes an infinite storage scheme consisting of one online volume and an arbitrary number of offline volumes, arranged in a linear chain, which hold records that have not been accessed recently. The online volume holds its records in sorted order (e.g. as a B-tree) and contains the shortest prefixes of keys of records already pushed offline. As new records enter, older ones are retired to the volume that will go offline next. Statistical arguments are given for the rate at which an offline volume needs to be fetched to reload a record that was retired earlier. This rate depends on the distribution of access probabilities as a function of time. Applications include medical records, production records, and other data that must be kept for a long time for legal reasons.
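The retirement step can be sketched as follows; this is a minimal illustration under the assumption that each retired record is represented online by the shortest key prefix distinguishing it from its sorted neighbours (class and method names are hypothetical, not from the paper):

```python
import bisect

def shortest_distinct_prefix(key: str, neighbors: list[str]) -> str:
    """Shortest prefix of `key` not shared by any neighbouring key."""
    for n in range(1, len(key) + 1):
        prefix = key[:n]
        if not any(o.startswith(prefix) for o in neighbors if o != key):
            return prefix
    return key

class OnlineVolume:
    """Toy stand-in for the online volume (names are hypothetical)."""

    def __init__(self):
        self.keys = []        # sorted record keys (stands in for a B-tree)
        self.records = {}     # key -> record payload
        self.retired = {}     # shortest distinguishing prefix -> offline volume no.

    def insert(self, key, record):
        bisect.insort(self.keys, key)
        self.records[key] = record

    def retire(self, key, volume_no):
        """Push a record offline, keeping only a short key prefix online."""
        i = bisect.bisect_left(self.keys, key)
        neighbors = self.keys[max(0, i - 1):i + 2]  # sorted predecessor/successor
        self.retired[shortest_distinct_prefix(key, neighbors)] = volume_no
        self.keys.pop(i)
        return self.records.pop(key)
```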
Abstract:
Throughout the higher-education landscape, eLearning scenarios accompany processes of organizational renewal and thus represent a promising instrument for supporting and improving classical face-to-face teaching. On this basis, between 2010 and 2011 the Kasseler Sportspiel-Modell (Kassel sports-game model) was extended to cover the integrative teaching of single-contact racket games (Heyer, Albert, Scheid & Blömeke-Rumpf, 2011) and embedded in modularized eLearning content consisting of 4 modules (17 learning courses, 171 course pages, 73 graphics, 73 videos, 38 self-check questions). Within an evaluation study, this content was examined in blended learning seminars, which combine the didactic advantages of online and face-to-face phases in a single seminar format (Treumann, Ganguin & Arens, 2012), in comparison with classical face-to-face teaching in sports degree programs. The study comprises three phases: 1) a pilot study at the IfSS in Kassel (winter term 2011/12; N=17, teacher training), 2) main study I at the IfSS in Kassel (summer term 2012; N=67, teacher training), and 3) main study II at the IfS in Frankfurt a. M. (winter term 2012/13; N=112, BA). Using analysis-of-variance procedures, the study captures the following aspects of teaching-learning research on three quality levels: 1) input quality: rating of the seminar format (BS); 2) process quality: motivation (SELLMO-ST), learning strategies (LIST), and computer-related attitudes (FIDEC); 3) outcome quality: learning achievement (final test and transfer task). The comparison of the two main studies contrasts one face-to-face seminar with two different variants of blended learning seminars (BL-1, BL-2). During the online phases, the sports students in BL-1 work through the modules in learning groups. Participants in BL-2 additionally keep personal learning diaries during these phases, intended to prompt a comparatively more intensive engagement with the content of the learning courses and with one's own learning process at the cognitive and metacognitive level (Hübner, Nückles & Renkl, 2007), and consequently to lead to better results on the three quality levels. In the direct, site-specific comparison of all three seminar formats, the results of the two main studies show predominantly no statistically significant differences. The expected positive effect of introducing the learning diary likewise fails to materialize. In the cross-site comparison of the blended learning seminars, it is noteworthy that the Frankfurt participants take a somewhat more critical stance toward their seminar format, which may correspond to the different degree programs involved (teacher training vs. BA). In summary, for the investigated area of racket-game teaching, blended learning seminars prove to be a qualitatively equivalent alternative to classical face-to-face teaching in sports degree programs.
Abstract:
We are currently at the cusp of a revolution in quantum technology that relies not just on the passive use of quantum effects, but on their active control. At the forefront of this revolution is the implementation of a quantum computer. Encoding information in quantum states as "qubits" makes it possible to use entanglement and quantum superposition to perform calculations that are infeasible on classical computers. The fundamental challenge in the realization of quantum computers is to avoid decoherence, the loss of quantum properties, due to unwanted interaction with the environment. This thesis addresses the problem of implementing entangling two-qubit quantum gates that are robust with respect to both decoherence and classical noise. It covers three aspects: the use of efficient numerical tools for the simulation and optimal control of open and closed quantum systems, the role of advanced optimization functionals in facilitating robustness, and the application of these techniques to two of the leading implementations of quantum computation, trapped atoms and superconducting circuits. After a review of the theoretical and numerical foundations, the central part of the thesis starts with the idea of using ensemble optimization to achieve robustness with respect to both classical fluctuations in the system parameters and decoherence. For the example of a controlled phase gate implemented with trapped Rydberg atoms, this approach is demonstrated to yield a gate that is at least one order of magnitude more robust than the best known analytic scheme. Moreover, this robustness is maintained even for gate durations significantly shorter than those obtained in the analytic scheme. Superconducting circuits are a particularly promising architecture for the implementation of a quantum computer. Their flexibility is demonstrated by performing optimizations for both diagonal and non-diagonal quantum gates. In order to achieve robustness with respect to decoherence, it is essential to implement quantum gates in the shortest possible amount of time. This may be facilitated by using an optimization functional that targets an arbitrary perfect entangler, based on a geometric theory of two-qubit gates. For the example of superconducting qubits, it is shown that this approach leads to significantly shorter gate durations, higher fidelities, and faster convergence than optimization towards specific two-qubit gates. Performing optimization in Liouville space in order to properly take decoherence into account poses significant numerical challenges, since the dimension of Liouville space scales quadratically with that of Hilbert space. However, it can be shown that for a unitary target, the optimization only requires propagation of at most three states, instead of a full basis of Liouville space. Both for the example of trapped Rydberg atoms and for superconducting qubits, the successful optimization of quantum gates is demonstrated, at a numerical cost significantly lower than previously thought possible. Together, the results of this thesis point towards a comprehensive framework for the optimization of robust quantum gates, paving the way for the future realization of quantum computers.
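To make the quoted scaling concrete: for n qubits the Hilbert-space dimension is N = 2^n, while density matrices live in a Liouville space of dimension N^2, so naive optimization over a full basis would require propagating 4^n states, which the result above reduces to at most three for a unitary target:

```latex
\dim \mathcal{H} = N = 2^n
\quad\Longrightarrow\quad
\dim \mathcal{L} = N^2 = 4^n .
```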
Abstract:
A closed-form solution formula for the kinematic control of manipulators with redundancy is derived using the Lagrangian multiplier method. A differential relationship equivalent to the Resolved Motion Method is also derived. The proposed method is proved to provide the exact equilibrium state for the Resolved Motion Method. This exactness fixes the repeatability problem of the Resolved Motion Method and establishes a fixed transformation from the workspace to the joint space. Owing to this exactness, the method is also demonstrated to give more accurate trajectories than the Resolved Motion Method. In addition, a new performance measure for redundancy control is developed. This measure, when used with kinematic control methods, helps achieve dexterous movements, including singularity avoidance. Compared to other measures, such as the manipulability measure and the condition number, it tends to give superior performance in terms of preserving the repeatability property and providing smoother joint velocity trajectories. Using the fixed-transformation property, Taylor's Bounded Deviation Paths algorithm is extended to redundant manipulators.
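For reference, the classical Lagrangian-multiplier treatment of redundancy (the standard minimum-norm derivation that this line of work builds on, not the thesis's refined formula) minimizes the joint-velocity norm subject to the differential kinematics J q̇ = ẋ:

```latex
\min_{\dot q}\ \tfrac{1}{2}\dot q^{\top}\dot q
\quad \text{s.t.} \quad J\dot q = \dot x,
\qquad
L = \tfrac{1}{2}\dot q^{\top}\dot q + \lambda^{\top}(\dot x - J\dot q),
```
```latex
\frac{\partial L}{\partial \dot q} = 0
\;\Rightarrow\;
\dot q = J^{\top}\lambda,
\qquad
J\dot q = \dot x
\;\Rightarrow\;
\lambda = (JJ^{\top})^{-1}\dot x,
\qquad
\dot q = J^{\top}(JJ^{\top})^{-1}\dot x ,
```

i.e. the familiar pseudoinverse solution, valid away from singularities of J.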
Abstract:
This paper describes a new statistical, model-based approach to building a contact state observer. The observer uses measurements of the contact force and position, together with prior information about the task encoded in a graph, to determine the current location of the robot in the task configuration space. Each node represents what the measurements will look like in a small region of configuration space by storing a predictive statistical measurement model. The approach assumes that the measurements are statistically block-independent conditioned on knowledge of the model, which is a fairly good model of the actual process. Arcs in the graph represent possible transitions between models. Beam Viterbi search is used to match the measurement history against possible paths through the model graph in order to estimate the most likely path for the robot. The resulting approach provides a new decision process that can be used as an observer for event-driven manipulation programming. The decision procedure is significantly more robust than simple threshold decisions because the measurement history is used to make decisions. The approach can be used to enhance the capabilities of autonomous assembly machines and in quality control applications.
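A minimal sketch of beam-limited Viterbi decoding over such a model graph, assuming hypothetical Gaussian measurement models at each node (the structure and names are illustrative, not the paper's implementation):

```python
import math

def beam_viterbi(graph, log_likelihood, start_nodes, measurements, beam_width=10):
    """Beam-limited Viterbi search over a model graph.

    graph: dict node -> list of successor nodes (possible model transitions)
    log_likelihood(node, z): log p(z | node's measurement model)
    Returns the most likely node path explaining the measurement history."""
    beam = [(0.0, (n,)) for n in start_nodes]   # (log prob, path) hypotheses
    for z in measurements:
        scored = []
        for logp, path in beam:
            node = path[-1]
            # either stay in the current model or follow an arc to a successor
            for nxt in [node] + graph.get(node, []):
                scored.append((logp + log_likelihood(nxt, z), path + (nxt,)))
        scored.sort(key=lambda h: h[0], reverse=True)  # prune to the best hypotheses
        beam = scored[:beam_width]
    return max(beam, key=lambda h: h[0])[1]

# Illustrative: a 1-D Gaussian measurement model per contact state
def gaussian_ll(means, sigma=1.0):
    def ll(node, z):
        return (-0.5 * ((z - means[node]) / sigma) ** 2
                - math.log(sigma * math.sqrt(2 * math.pi)))
    return ll
```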
Abstract:
Research on autonomous intelligent systems has focused on how robots can robustly carry out missions in uncertain and harsh environments with very little or no human intervention. Robotic execution languages such as RAPs, ESL, and TDL improve robustness by managing functionally redundant procedures for achieving goals. The model-based programming approach extends this by guaranteeing correctness of execution through pre-planning of non-deterministic timed threads of activities. Executing model-based programs effectively on distributed autonomous platforms requires distributing this pre-planning process. This thesis presents a distributed planner for model-based programs whose planning and execution are distributed among agents with widely varying levels of processor power and memory resources. We make two key contributions. First, we reformulate a model-based program, which describes cooperative activities, into a hierarchical dynamic simple temporal network. This enables efficient distributed coordination of robots and supports deployment on heterogeneous robots. Second, we introduce a distributed temporal planner, called DTP, which solves hierarchical dynamic simple temporal networks with the assistance of the distributed Bellman-Ford shortest path algorithm. The implementation of DTP has been demonstrated successfully on a wide range of randomly generated examples and on a pursuer-evader challenge problem in simulation.
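For context, a simple temporal network is consistent exactly when its distance graph has no negative cycle, which Bellman-Ford detects. A minimal single-agent sketch follows (DTP distributes this computation across agents, which is not reproduced here):

```python
def stn_consistent(num_nodes, edges):
    """Check consistency of a simple temporal network.

    edges: list of (u, v, w) meaning t_v - t_u <= w.
    The STN is consistent iff its distance graph has no negative cycle.
    Returns (True, potentials) if consistent, else (False, None)."""
    dist = [0.0] * num_nodes           # implicit source connected to every node
    for _ in range(num_nodes):         # one extra pass for the implicit source
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
    for u, v, w in edges:              # any further relaxation => negative cycle
        if dist[u] + w < dist[v]:
            return False, None
    return True, dist

# Example: two activities of duration [1, 3] each, overall deadline of 4 units
edges = [(0, 1, 3), (1, 0, -1), (1, 2, 3), (2, 1, -1), (0, 2, 4), (2, 0, 0)]
print(stn_consistent(3, edges))        # -> (True, [...])
```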
Abstract:
We have developed a technique called RISE (Random Image Structure Evolution), by which one may systematically sample continuous paths in a high-dimensional image space. A basic RISE sequence depicts the evolution of an object's image from a random field, along with the reverse sequence, which depicts the transformation of this image back into randomness. The processing steps are designed to ensure that important low-level image attributes, such as the frequency spectrum and luminance, are held constant throughout a RISE sequence. Experiments based on the RISE paradigm can be used to address some key open issues in object perception. These include determining the neural substrates underlying object perception, the role of prior knowledge and expectation in object perception, and the developmental changes in object perception skills from infancy to adulthood.
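One standard way to generate such a sequence, a sketch of the general idea rather than the published RISE processing pipeline, is to hold the image's amplitude spectrum (and hence frequency content) fixed while interpolating its phase spectrum between random and veridical values:

```python
import numpy as np

def rise_like_sequence(image, steps=10, seed=0):
    """Morph from noise to `image` while holding the amplitude spectrum
    (and mean luminance) constant -- a sketch of the idea behind RISE,
    not the published algorithm."""
    rng = np.random.default_rng(seed)
    F = np.fft.fft2(image)
    amp, phase = np.abs(F), np.angle(F)
    rand_phase = rng.uniform(-np.pi, np.pi, size=phase.shape)
    for alpha in np.linspace(0.0, 1.0, steps):
        # interpolate the phase; the amplitude spectrum is left untouched
        p = (1.0 - alpha) * rand_phase + alpha * phase
        frame = np.real(np.fft.ifft2(amp * np.exp(1j * p)))
        frame += image.mean() - frame.mean()   # re-pin mean luminance
        yield frame
```

Taking the real part discards small imaginary residue from the non-symmetric random phase; a more careful implementation would enforce conjugate symmetry.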
Abstract:
Conventional floating-gate non-volatile memories (NVMs) present critical issues for device scalability beyond the sub-90 nm node, such as gate length and tunnel oxide thickness reduction. Nanocrystalline germanium (nc-Ge) quantum dot flash memories are a fully CMOS-compatible technology based on discrete, isolated charge storage nodules, which has the potential to push the scalability of conventional NVMs further. Quantum dot memories offer lower operating voltages than conventional floating-gate (FG) flash memories because thinner tunnel dielectrics allow higher tunneling probabilities. The isolated charge nodules suppress charge loss through lateral paths, thereby achieving superior charge retention time. Despite the considerable effort devoted to the study of nanocrystal flash memories, the charge storage mechanism remains obscure. Recent studies suggest that interfacial defects of the nanocrystals play a role in charge storage, although storage in the nanocrystal conduction band by quantum confinement was reported earlier. In this work, a single-transistor memory structure with a threshold voltage shift ΔVth exceeding ~1.5 V, corresponding to interface charge trapping in nc-Ge and operating at 0.96 MV/cm, is presented. The trapping effect is eliminated when nc-Ge is synthesized in forming gas, thus excluding the possibility of quantum confinement and Coulomb blockade effects. Through discharging kinetics, the model of deep-level trap charge storage is confirmed. The trap energy level depends on the matrix that confines the nc-Ge.
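For orientation, the first-order relation between trapped areal charge and threshold-voltage shift in a floating-gate-like stack is the textbook approximation below (not a fit to the device reported here):

```latex
\Delta V_{th} \approx \frac{Q_t}{C_{ctrl}} = \frac{q\, n_t\, t_{ctrl}}{\varepsilon_{ox}} ,
```

where n_t is the areal density of trapped electrons, t_ctrl the control-oxide thickness, and ε_ox its permittivity; it neglects the charge centroid's position within the nanocrystal layer.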
Abstract:
This paper sets out to identify the initial positions of the different decision makers who intervene in a group decision making process with a reduced number of actors, and to establish possible consensus paths between these actors. As methodological support, it employs one of the most widely known multicriteria decision techniques, the Analytic Hierarchy Process (AHP). Assuming that the judgements elicited by the decision makers follow the so-called multiplicative model (Crawford and Williams, 1985; Altuzarra et al., 1997; Laininen and Hämäläinen, 2003) with log-normal errors and unknown variance, a Bayesian approach is used to estimate the relative priorities of the alternatives being compared. These priorities, estimated by the median of the posterior distribution and normalised in a distributive manner (priorities add up to one), are a clear example of compositional data, which is used in the search for consensus between the actors involved in the resolution of the problem through Multidimensional Scaling tools.
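Written out, the multiplicative judgement model with log-normal errors referred to above takes the form (with unknown variance σ², as stated):

```latex
a_{ij} = \frac{w_i}{w_j}\, e_{ij},
\qquad
\log a_{ij} = \log w_i - \log w_j + \varepsilon_{ij},
\qquad
\varepsilon_{ij} \sim N(0, \sigma^2),
```

where a_ij is the pairwise comparison judgement for alternatives i and j and w is the priority vector; the Bayesian approach then summarizes the posterior of the normalised priorities by its median.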
Abstract:
Nowadays, the oceanographic and geospatial communities are closely related worlds. The problem is that they follow parallel paths in data storage, distribution, modelling, and data analysis. This situation produces different data model implementations for the same features. While geospatial information systems use 2 or 3 dimensions, oceanographic models use multidimensional parameters such as temperature, salinity, currents, and ocean colour. This implies significant differences between the data models of both communities and leads to difficulties in dataset analysis for both sciences. These problems directly affect the Mediterranean Institute for Advanced Studies (IMEDEA, CSIC-UIB). Researchers at this institute perform intensive processing of data from oceanographic instruments such as CTDs, moorings, and gliders, as well as geospatial data collected for the integrated management of coastal zones. In this paper, we present a solution based on THREDDS (Thematic Real-time Environmental Distributed Data Services). THREDDS allows data access through the standard geospatial data protocol Web Coverage Service (WCS), within the European ECOOP project (European Coastal Sea Operational Observing and Forecasting system). The goal of ECOOP is to consolidate, integrate, and further develop existing European coastal and regional sea operational observing and forecasting systems into an integrated pan-European system targeted at detecting environmental and climate changes.
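As an illustration of the kind of access THREDDS enables, a WCS 1.0.0 GetCoverage request is a plain key-value HTTP GET. The sketch below builds such a request; the server URL and coverage name are hypothetical:

```python
from urllib.parse import urlencode

# Hypothetical THREDDS WCS endpoint and coverage name, for illustration only
base = "http://example.org/thredds/wcs/ocean/model_output.nc"
params = {
    "service": "WCS",
    "version": "1.0.0",
    "request": "GetCoverage",
    "coverage": "sea_water_temperature",  # hypothetical coverage identifier
    "crs": "EPSG:4326",
    "bbox": "0.0,38.0,6.0,41.0",          # lon/lat box (Balearic Sea area)
    "time": "2009-06-01T00:00:00Z",
    "width": "256",
    "height": "256",
    "format": "GeoTIFF",
}
print(base + "?" + urlencode(params))
```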
Abstract:
One of the most effective techniques for offering QoS routing is minimum interference routing. However, it is complex in terms of computation time and is not oriented toward improving the network protection level. In order to include better levels of protection, new minimum interference routing algorithms are necessary. Minimizing the failure recovery time is also a complex process involving different failure recovery phases, some of which depend entirely on correct route selection, such as minimizing the failure notification time. The level of protection also involves other aspects, such as the amount of resources used; in this case, shared backup techniques should be considered. Therefore, minimum interference techniques should also be modified to include resource sharing for protection among their objectives. These aspects are reviewed and analyzed in this article, and a new proposal combining minimum interference with fast protection using shared segment backups is introduced. Results show that the proposed method improves both the request rejection ratio and the percentage of bandwidth allocated to backup paths in networks with low and medium protection requirements.
Abstract:
This article presents a survey of MPLS protection methods and their use in combination with online routing methods. Usually, fault management methods pre-establish backup paths to recover traffic after a failure. In addition, MPLS allows the creation of different backup types, and hence MPLS is a suitable method for supporting traffic-engineered networks. The article introduces several label switched path (LSP) backup types and points out their pros and cons. The creation of an LSP involves a routing phase, which should include QoS aspects; similarly, to achieve a reliable network, the LSP backups must also be routed by a QoS routing method. When LSP creation requests arrive one by one (a dynamic network scenario), online routing methods are applied. The relationship between MPLS fault management and QoS online routing methods is unavoidable, in particular during the creation of LSP backups. Both aspects are discussed in this article, and several ideas on how these technologies could be applied together are presented and compared.
Abstract:
Quantitatively assessing the importance or criticality of each link in a network is of practical value to operators, as it can help them increase the network's resilience, provide more efficient services, or improve other aspects of the service. Betweenness is a graph-theoretical measure of centrality that can be applied to communication networks to evaluate link importance. However, as we illustrate in this paper, the basic definition of betweenness centrality produces inaccurate estimations because it does not take into account aspects relevant to networking, such as heterogeneity in link capacity or the difference between node pairs in their contribution to the total traffic. A new algorithm for discovering link centrality in transport networks is proposed in this paper. It requires only static or semi-static network and topology attributes, yet produces estimations of good accuracy, as verified through extensive simulations. Its potential value is demonstrated by an example application in which the simple shortest-path routing algorithm is improved in such a way that it outperforms other, more advanced algorithms in terms of blocking ratio.
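A minimal sketch of the kind of refinement argued for here, weighting each node pair's contribution by its traffic demand and normalizing by link capacity (illustrative, not the paper's exact algorithm):

```python
import networkx as nx

def traffic_weighted_link_centrality(G, demand, capacity="capacity"):
    """Demand-weighted, capacity-normalized edge centrality.

    G: nx.Graph whose edges carry a `capacity` attribute.
    demand: dict mapping node pairs (s, t) to offered traffic volume."""
    score = {tuple(sorted(e)): 0.0 for e in G.edges()}
    for (s, t), d in demand.items():
        paths = list(nx.all_shortest_paths(G, s, t))
        for path in paths:
            for u, v in zip(path, path[1:]):
                # split each pair's demand evenly over its equal-cost paths
                score[tuple(sorted((u, v)))] += d / len(paths)
    # normalize by capacity so small, heavily loaded links rank highest
    return {e: val / G.edges[e][capacity] for e, val in score.items()}
```

Plain edge betweenness is the special case of unit demand for every pair and unit capacity on every link.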
Abstract:
Most network operators have considered reducing Label Switched Router (LSR) label spaces (i.e. the number of labels that can be used) as a means of simplifying the management of underlying Virtual Private Networks (VPNs) and, hence, reducing operational expenditure (OPEX). This letter discusses the problem of reducing label spaces in Multiprotocol Label Switching (MPLS) networks using label merging, better known as MultiPoint-to-Point (MP2P) connections. Because of its origins in IP, MP2P connections have been considered to have tree shapes with Label Switched Paths (LSPs) as branches. For this reason, previous works by many authors state that the problem of minimizing the label space using MP2P in MPLS, the Merging Problem, cannot be solved optimally with a polynomial algorithm (it is NP-complete), since it involves a hard decision problem. In this letter, however, the Merging Problem is analyzed from the perspective of MPLS, and it is deduced that tree shapes in MP2P connections are irrelevant. By discarding this tree-shape assumption, it is possible to perform label merging in polynomial time. Based on how MPLS signaling works, this letter proposes an algorithm to compute the minimum number of labels using label merging: the Full Label Merging algorithm. As a conclusion, we reclassify the Merging Problem as polynomial-time solvable instead of NP-complete. In addition, simulation experiments confirm that without the tree-branch selection problem, more labels can be saved.
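To illustrate the effect of label merging (a toy illustration of MP2P label sharing, not the Full Label Merging algorithm itself): without merging, each LSP consumes its own label on every link it traverses, whereas a merge-capable LSR can use a single incoming label per egress (FEC), so the per-link label count drops from the number of LSPs to the number of distinct egresses crossing that link.

```python
from collections import defaultdict

def label_counts(lsps):
    """lsps: list of (path, egress), where path is a node sequence to egress.
    Returns (labels per link without merging, labels per link with MP2P merging)."""
    per_link_lsps = defaultdict(int)   # one label per LSP per link
    per_link_fecs = defaultdict(set)   # one label per egress (FEC) per link
    for path, egress in lsps:
        for u, v in zip(path, path[1:]):
            per_link_lsps[(u, v)] += 1
            per_link_fecs[(u, v)].add(egress)
    merged = {link: len(fecs) for link, fecs in per_link_fecs.items()}
    return dict(per_link_lsps), merged

# Three LSPs toward the same egress E share a label on their common link
lsps = [(["A", "C", "E"], "E"), (["B", "C", "E"], "E"), (["D", "C", "E"], "E")]
plain, merged = label_counts(lsps)
print(plain[("C", "E")], merged[("C", "E")])   # 3 without merging, 1 with merging
```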