899 results for multi-output distance function
Abstract:
Recently, a generalized passivity concept for linear multivariable systems was obtained which allows circumventing the restrictiveness of the usual passivity concept. The latter is associated with the classical SPR (Strictly Positive Real) condition, whereas the new concept of passivity is associated with the so-called WSPR condition; its advantage in multivariable systems is that it does not require the restrictive symmetry condition of SPR systems. As a result, it allows the design of multivariable adaptive controllers that, unlike some existing factorization approaches, do not entail additional overparameterization of the adaptive controller. In this paper, we complete a previously established WSPR sufficient condition and prove that it is also necessary. We also propose methods of passification that premultiply either the system output tracking error vector or the system input vector by an adequate passifying matrix multiplier, so that the resulting input/output transfer function becomes WSPR. The efficiency of our proposals is illustrated by simulation of a well-known adaptive visual servoing problem in robotics. © 2011 IFAC.
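For context, the symmetry restriction mentioned in this abstract can be read off the textbook Kalman–Yakubovich–Popov-type characterization of SPR transfer matrices. The sketch below is a hedged statement of that standard result for a strictly proper, square, minimal realization (A, B, C); it is not the paper's WSPR condition, and the notation is assumed rather than taken from the paper.

```latex
% Hedged sketch: textbook KYP-type characterization of a strictly proper,
% square SPR transfer matrix with minimal realization (A, B, C).
G(s) = C\,(sI - A)^{-1}B \ \text{is SPR}
\;\Longleftrightarrow\;
\exists\, P = P^{\top} \succ 0,\; Q = Q^{\top} \succ 0 :
\quad A^{\top}P + PA = -Q, \qquad PB = C^{\top}.
```

The condition PB = C^T forces CB = B^T P B to be symmetric and positive definite (for full-column-rank B); this is the symmetry restriction of SPR systems that the WSPR concept and the passifying multipliers discussed above are designed to avoid.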
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Using the directional distance function, we study a cross-section of 110 countries to examine how efficiently they manage the trade-off between pollution and income. The DEA model is reformulated to permit 'reverse disposability' of the bad output. Further, we interpret the optimal solution of the multiplier form of the DEA model as an iso-inefficiency line. This permits us to measure the shadow cost of the bad output for a country that lies in the interior, rather than on the frontier, of the production possibilities set. We also compare the relative environmental performance of countries in terms of emission intensity adjusted for technical efficiency. Only 10% of the countries are found to be on the frontier, and there is considerable inter-country variation in the imputed opportunity cost of CO2 reduction. Further, differences in technical efficiency contribute substantially to differences in observed levels of CO2 intensity.
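As a hedged illustration of the basic tool referenced here, the sketch below solves an output-oriented directional distance function DEA linear program for one country, treating GDP as the good output and CO2 as the bad output. It uses a plain strong-disposability formulation; the paper's 'reverse disposability' reformulation and the multiplier-form (iso-inefficiency) reading are not reproduced, and the data and function names are made up.

```python
# Hedged sketch of a directional distance function DEA LP (envelopment form).
import numpy as np
from scipy.optimize import linprog

def directional_inefficiency(x, y, b, o, gy=None, gb=None):
    """max beta s.t. sum(lam*y) >= y[o] + beta*gy,
                     sum(lam*b) + beta*gb <= b[o],
                     sum(lam*x) <= x[o],  lam >= 0, beta >= 0."""
    n = len(y)
    gy = y[o] if gy is None else gy           # proportional direction by default
    gb = b[o] if gb is None else gb
    c = np.zeros(n + 1); c[0] = -1.0          # linprog minimizes, so minimize -beta
    A_ub = np.vstack([
        np.concatenate(([gy], -y)),           # y[o] + beta*gy - sum(lam*y) <= 0
        np.concatenate(([gb],  b)),           # sum(lam*b) + beta*gb <= b[o]
        np.concatenate(([0.0], x)),           # sum(lam*x) <= x[o]
    ])
    b_ub = np.array([-y[o], b[o], x[o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0] if res.success else None  # beta = 0 means "on the frontier"

# toy data: one aggregate input, GDP (good output), CO2 (bad output)
x = np.array([1.0, 1.2, 0.9, 1.5, 1.1])
y = np.array([2.0, 2.5, 1.6, 2.8, 1.9])
b = np.array([1.0, 0.9, 1.2, 1.4, 1.3])
for o in range(len(y)):
    print(f"country {o}: beta = {directional_inefficiency(x, y, b, o):.3f}")
```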
Abstract:
When we interact with the environment in our daily life (using a toothbrush, opening doors, using cell phones, etc.) or in professional situations (medical interventions, manufacturing processes, etc.), we typically perform dexterous manipulations that involve multiple fingers and the palms of both hands. Multi-finger haptic methods can therefore provide a more realistic and natural human-machine interface and enhance immersion when interacting with simulated or remote environments. Most commercial devices allow haptic interaction through only one contact point, which may be sufficient for some exploration or palpation tasks but is not enough to perform advanced object manipulations such as grasping. In this thesis, I investigate the mechanical design, control and applications of a modular haptic device that can provide force feedback to the index, thumb and middle fingers of the user. The mechanical design is optimized with a multi-objective design function to achieve low inertia, a large workspace, high manipulability and force feedback of up to 3 N within the workspace; the bandwidth and stiffness of the device are assessed through simulation and real experimentation. One of the most important areas when designing haptic devices is the end-effector, since it is the part in contact with the user. The thesis describes the design and evaluation of a thimble-like, lightweight, user-adaptable and cost-effective end-effector that incorporates four contact force sensors and allows estimation of the forces applied by a user during manipulation of virtual and real objects.
The design of a real-time, modular control architecture for multi-finger haptic interaction is also described, starting from the control requirements of such devices: a large number of control and instrumentation signals have to be acquired, processed and exchanged over the network, and mathematical computations such as the direct and inverse kinematics of the device, the Jacobian, grasp-detection algorithms, etc., have to be carried out in real time to assure the required fidelity of the haptic interaction. The hardware control architecture is modular and consists of an FPGA for the low-level controller and a real-time controller for the more complex calculations (Jacobian, kinematics, etc.); this provides a compact and scalable solution with the required computational capability while guaranteeing a 1 kHz control-loop rate. A set-up for dexterous virtual and real manipulation is described, and a new algorithm, the iterative kinematic decoupling method, is introduced to solve the inverse kinematics of a robotic manipulator and is compared with other current methods.
To understand the importance of multi-modal interaction including haptics, a subject study was carried out to identify the sensory stimuli that correlate with faster response times and enhanced accuracy. This experiment was carried out in collaboration with neuroscientists from the Technion Israel Institute of Technology. By comparing grasping response times in unimodal (auditory, visual and haptic) events with those in bimodal and trimodal combinations, it is concluded that in grasping tasks the synchronized motion of the fingers that generates the grasping response relies mainly on haptic cues. This processing-speed advantage of haptic cues suggests that multimodal haptic virtual environments are superior in generating motor contingencies and enhance the plausibility of events; applications that include haptics give users more time at the cognitive stages to fill in missing information creatively and form a richer experience. A major application of haptic devices is the design of new simulators to train manual skills in the medical sector. In collaboration with physical therapists from Griffith University in Australia, we developed a simulator for hand rehabilitation exercises: the non-linear stiffness properties of the metacarpophalangeal joint of the index finger were estimated using the designed end-effector, and these parameters were implemented in a scenario that simulates the behaviour of the human hand and allows haptic interaction through the designed device. The potential applications of this simulator are related to education and the training of physical therapy students. Finally, new methods are presented to simultaneously control the position and orientation of a robotic manipulator and the grasp of a robotic hand when interacting with large real environments. The reachable workspace is extended by automatically switching between rate and position control modes, and the user's hand gesture is recognized from the relative movements of the index, thumb and middle fingers during the early stages of the approach to the object and then mapped to the robotic hand actuators. These methods, developed in collaboration with researchers from the Harvard BioRobotics Laboratory, are validated through dexterous manipulation of objects with a robotic manipulator and different robotic hands; the experiments show that the overall task time is reduced and that the methods allow dexterous manipulations to be completed accurately.
Abstract:
Bayesian network classifiers are widely used in machine learning because they intuitively represent causal relations. Multi-label classification problems require each instance to be assigned a subset of a defined set of h labels. This problem is equivalent to finding a multi-valued decision function that predicts a vector of h binary classes. In this paper we obtain the decision boundaries of two widely used Bayesian network approaches for building multi-label classifiers: Multi-label Bayesian network classifiers built using the binary relevance method and Bayesian network chain classifiers. We extend our previous single-label results to multi-label chain classifiers, and we prove that, as expected, chain classifiers provide a more expressive model than the binary relevance method.
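As a hedged illustration of the two strategies this abstract compares (the dataset, base classifier and scikit-learn API below are my choices, not the paper's construction), binary relevance trains one independent Bayesian classifier per label, while a chain classifier feeds earlier label predictions as extra features into later classifiers, which is what gives it the richer decision boundaries discussed in the paper.

```python
# Hedged sketch: binary relevance vs. chain classifiers with naive Bayes bases.
from sklearn.datasets import make_multilabel_classification
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier, ClassifierChain
from sklearn.naive_bayes import BernoulliNB
from sklearn.metrics import hamming_loss

X, Y = make_multilabel_classification(n_samples=500, n_features=20,
                                      n_classes=4, random_state=0)
Xtr, Xte, Ytr, Yte = train_test_split(X, Y, random_state=0)

binary_relevance = MultiOutputClassifier(BernoulliNB()).fit(Xtr, Ytr)     # one model per label
chain = ClassifierChain(BernoulliNB(), order=[0, 1, 2, 3],                # labels fed forward
                        random_state=0).fit(Xtr, Ytr)

print("binary relevance:", hamming_loss(Yte, binary_relevance.predict(Xte)))
print("chain classifier:", hamming_loss(Yte, chain.predict(Xte)))
```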
Abstract:
This paper proposes an adaptive algorithm for clustering the cumulative probability distribution functions (c.p.d.f.) of a continuous random variable, observed in different populations, into the minimum number of homogeneous clusters, making no parametric assumptions about the c.p.d.f.'s. The proposed distance function for clustering c.p.d.f.'s is based on the Kolmogorov–Smirnov two-sample statistic, which is able to detect differences in position, dispersion or shape of the c.p.d.f.'s. In our context, this statistic allows us to cluster the recorded data with a homogeneity criterion based on the whole distribution of each data set, and to decide whether it is necessary to add more clusters. In this sense, the proposed algorithm is adaptive, as it automatically increases the number of clusters only when necessary; there is therefore no need to fix the number of clusters in advance. The outputs of the algorithm are, for each cluster, the common c.p.d.f. of all observed data in the cluster (the centroid) and the Kolmogorov–Smirnov statistic between the centroid and the most distant c.p.d.f. The proposed algorithm has been applied to a large data set of solar global irradiation spectra distributions. The results obtained make it possible to reduce all the information of more than 270,000 c.p.d.f.'s to only 6 different clusters, corresponding to 6 different c.p.d.f.'s.
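The sketch below is a hedged, simplified version of the adaptive idea described above: each observed sample stands in for one empirical c.p.d.f., the Kolmogorov–Smirnov two-sample statistic is the distance, and a sample joins its closest cluster only if that distance stays below a homogeneity threshold, otherwise a new cluster is opened. The threshold value, the greedy single-pass assignment and the synthetic data are my assumptions, not the paper's algorithm.

```python
# Hedged sketch: adaptive clustering of empirical distributions with the KS statistic.
import numpy as np
from scipy.stats import ks_2samp

def adaptive_ks_clustering(samples, threshold=0.10):
    clusters = []                                # each cluster is a list of samples
    for s in samples:
        if clusters:
            # distance of s to each cluster centroid (pooled data of the cluster)
            dists = [ks_2samp(s, np.concatenate(c)).statistic for c in clusters]
            k = int(np.argmin(dists))
            if dists[k] < threshold:
                clusters[k].append(s)            # homogeneous enough: join the cluster
                continue
        clusters.append([s])                     # otherwise open a new cluster
    return clusters

rng = np.random.default_rng(0)
samples = [rng.normal(0, 1, 300) for _ in range(20)] + \
          [rng.normal(2, 1, 300) for _ in range(20)]
clusters = adaptive_ks_clustering(samples)
print(len(clusters), "clusters with sizes", [len(c) for c in clusters])
```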
Abstract:
This paper investigates the input-output characteristics of structural health monitoring systems for composite plates based on permanently attached piezoelectric transmitter and sensor elements. Using dynamic piezoelectricity theory and a multiple integral transform method to describe the propagating and scattered flexural waves, an electro-mechanical model is developed for simulating the voltage input-output transfer function of circular piezoelectric transmitters and sensors adhesively attached to an orthotropic composite plate. The method enables the characterization of all three physical processes, i.e. wave generation, wave propagation and wave reception. The influence of transducer, plate and attached electrical circuit characteristics on the voltage output behaviour of the system is examined through numerical calculations in both the frequency and time domains. The results show that the input-output behaviour of the system is not properly predicted by the transducers' properties alone: coupling effects between the transducers and the tested structure have to be taken into account, and adding backing materials to the piezoelectric elements can significantly improve the sensitivity of the system. It is shown that, in order to achieve maximum sensitivity, piezoelectric transmitters and sensors need to be designed according to the structure to be monitored and the specific frequency regime of interest.
Abstract:
In recent years, the cross-entropy method has been successfully applied to a wide range of discrete optimization tasks. In this paper we consider the cross-entropy method in the context of continuous optimization. We demonstrate the effectiveness of the cross-entropy method for solving difficult continuous multi-extremal optimization problems, including those with non-linear constraints.
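The sketch below is a hedged, generic textbook variant of the cross-entropy method for continuous optimization, not the exact algorithm of the paper: sample candidates from a Gaussian, keep the elite fraction, refit the mean and standard deviation to the elite, and repeat until the sampling distribution concentrates. The Rastrigin test function, sample sizes and smoothing are my choices, and the paper's treatment of non-linear constraints is omitted.

```python
# Hedged sketch: cross-entropy method for continuous minimization.
import numpy as np

def cross_entropy_minimize(f, mu, sigma, n_samples=100, elite_frac=0.1,
                           n_iters=100, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iters):
        X = rng.normal(mu, sigma, size=(n_samples, mu.size))   # candidate solutions
        elite = X[np.argsort([f(x) for x in X])[:n_elite]]     # best-scoring samples
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-12
    return mu

# multi-extremal test function: Rastrigin in 5 dimensions
rastrigin = lambda x: 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
print(cross_entropy_minimize(rastrigin, mu=np.full(5, 3.0), sigma=np.full(5, 3.0)))
```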
Abstract:
In for-profit organizations, efficiency measurement with reference to the potential for profit augmentation is particularly important, as is its decomposition into technical and allocative components. Different profit efficiency approaches can be found in the literature to measure and decompose overall profit efficiency. In this paper, we highlight some problems within existing approaches and propose a new measure of profit efficiency based on a geometric mean of the input/output adjustments needed to maximize profits. Overall profit efficiency is calculated through this efficiency measure and is decomposed into its technical and allocative components. Technical efficiency is calculated based on a non-oriented geometric distance function (GDF) that is able to incorporate all the sources of inefficiency, while allocative efficiency is retrieved residually. We also define a measure of profitability efficiency which complements profit efficiency in that it makes it possible to retrieve the scale efficiency of a unit as a component of its profitability efficiency. In addition, the measure of profitability efficiency allows for a dual profitability interpretation of the GDF measure of technical efficiency. The concepts introduced in the paper are illustrated using a numerical example.
Abstract:
Traditional machinery for manufacturing processes is characterised by actuators powered and co-ordinated by mechanical linkages driven from a central drive. Increasingly, these linkages are replaced by independent electrical drives, each performing a different task and following a different motion profile, co-ordinated by computers. A design methodology for the servo control of high-speed multi-axis machinery is proposed, based on the concept of a highly adaptable generic machine model. In addition to the dynamics of the drives and the loads, the model includes the inherent interactions between the motion axes and thus provides a Multi-Input Multi-Output (MIMO) description. In general, inherent interactions such as structural couplings between groups of motion axes are undesirable and need to be compensated, whereas imposed interactions such as the synchronisation of different groups of axes are often required. It is recognised that a suitable MIMO controller can simultaneously achieve these objectives and reconcile their potential conflicts. Both analytical and numerical methods for the design of MIMO controllers are investigated. At present, it is not possible to implement high-order MIMO controllers for practical reasons. Based on simulations of the generic machine model under full MIMO control, however, it is possible to determine a suitable topology for a blockwise decentralised control scheme. The Block Relative Gain array (BRG) is used to compare the relative strength of closed-loop interactions between sub-systems. A number of approaches to the design of the smaller decentralised MIMO controllers for these sub-systems have been investigated. For the purpose of illustration, a benchmark problem based on a 3-axis test rig has been carried through the design cycle to demonstrate the working of the design methodology.
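The work above uses the Block Relative Gain array (BRG) to rank closed-loop interactions between sub-systems. As a hedged, simpler illustration of the same screening idea, the sketch below computes the ordinary element-wise relative gain array, RGA(G) = G .* inv(G)^T, the scalar special case of the BRG; the 3x3 steady-state gain matrix is made up and is not from the thesis.

```python
# Hedged sketch: element-wise relative gain array as an interaction screen.
import numpy as np

def relative_gain_array(G):
    """RGA of a square steady-state gain matrix: entries near 1 on the diagonal
    suggest weakly interacting pairings suited to decentralised control."""
    return G * np.linalg.inv(G).T

# made-up 3x3 steady-state gain matrix for a hypothetical 3-axis rig
G = np.array([[2.0, 0.3, 0.1],
              [0.2, 1.5, 0.4],
              [0.1, 0.2, 1.8]])
print(np.round(relative_gain_array(G), 3))
```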
Abstract:
After the ten Regional Water Authorities (RWAs) of England and Wales were privatized in November 1989, the successor Water and Sewerage Companies (WASCs) faced a new regulatory regime that was designed to promote economic efficiency while simultaneously improving drinking water and environmental quality. As legally mandated quality improvements necessitated a costly capital investment programme, the industry's economic regulator, the Office of Water Services (Ofwat), implemented a retail price index (RPI)+K pricing system, which was designed to compensate the WASCs for their capital investment programme while also encouraging gains in economic efficiency. In order to analyse jointly the impact of privatization, as well as the impact of increasingly stringent economic and environmental regulation on the WASCs' economic performance, this paper estimates a translog multiple output cost function model for the period 1985–1999. Given the significant costs associated with water quality improvements, the model is augmented to include the impact of drinking water quality and environmental quality on total costs. The model is then employed to determine the extent of scale and scope economies in the water and sewerage industry, as well as the impact of privatization and economic regulation on economic efficiency.
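For reference, the sketch below is a hedged statement of a generic translog multiple-output cost function of the kind estimated in the paper, with outputs y_m and input prices w_i; the paper's additional drinking-water and environmental quality regressors, time effects and privatization controls are not reproduced here.

```latex
% Hedged sketch of a generic translog multiple-output cost function.
\ln C \;=\; \alpha_0
 + \sum_m \alpha_m \ln y_m
 + \sum_i \beta_i \ln w_i
 + \tfrac{1}{2}\sum_m\sum_n \alpha_{mn}\,\ln y_m \ln y_n
 + \tfrac{1}{2}\sum_i\sum_j \beta_{ij}\,\ln w_i \ln w_j
 + \sum_m\sum_i \gamma_{mi}\,\ln y_m \ln w_i ,
```

with linear homogeneity in input prices typically imposed through the restrictions \(\sum_i\beta_i = 1\), \(\sum_j\beta_{ij} = 0\) and \(\sum_i\gamma_{mi} = 0\).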
An improved conflicting evidence combination approach based on a new supporting probability distance
Abstract:
To avoid the counter-intuitive results of the classical Dempster's combination rule when dealing with highly conflicting information, many improved combination methods have been developed that modify the basic probability assignments (BPAs) of bodies of evidence (BOEs) using a measure of the degree of conflict or uncertainty, such as Jousselme's distance, the pignistic probability distance and the ambiguity measure. However, if BOEs contain non-singleton elements and the differences among their BPAs are larger than 0.5, the current conflict-measure methods have limitations in describing the interrelationship among the conflicting BOEs and may even lead to wrong combination results. To solve this problem, a new distance function, called the supporting probability distance, is proposed to characterize the differences among BOEs. With the new distance, the extent to which a focal element is supported by the other focal elements in the BOEs can be quantified. A new combination rule based on the supporting probability distance is also proposed for combining conflicting evidence: the credibility and the discounting factor of each BOE are generated from the supporting probability distance, and the weighted BOEs are combined directly using Dempster's rule. Analytical results for numerical examples show that the new distance better describes the interrelationships among BOEs, especially for highly conflicting BOEs containing non-singleton elements, and that the proposed combination method has better applicability and effectiveness than existing methods.
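The sketch below is a hedged implementation of the classical Dempster's rule for two BPAs over a frame of discernment, the building block that the method above applies to the reweighted BOEs; the supporting probability distance itself and the credibility/discounting scheme are not reproduced, and the example BPAs are made up.

```python
# Hedged sketch: classical Dempster's rule of combination for two BPAs.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BPAs given as {frozenset(elements): mass}."""
    combined, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            combined[C] = combined.get(C, 0.0) + a * b
        else:
            conflict += a * b                      # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {C: m / (1.0 - conflict) for C, m in combined.items()}

# two conflicting BOEs over the frame {a, b, c}, one with a non-singleton element
m1 = {frozenset("a"): 0.9, frozenset("bc"): 0.1}
m2 = {frozenset("b"): 0.9, frozenset("ac"): 0.1}
print(dempster_combine(m1, m2))
```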
Abstract:
We propose a novel skeleton-based approach to gait recognition using our Skeleton Variance Image. The core of our approach consists of employing the screened Poisson equation to construct a family of smooth distance functions associated with a given shape. The screened Poisson distance function approximation nicely absorbs, and is relatively stable to, shape boundary perturbations, which allows us to define a rough shape skeleton. We demonstrate that the Skeleton Variance Image is a powerful gait cycle descriptor, leading to a significant improvement over the existing state-of-the-art gait recognition rate.
Abstract:
The aim of this study was to examine the influence of athletes' performance level on pacing strategy during a simulated 10-km running race, and the relationship between physiological variables and pacing strategy. Twenty-four male runners performed an incremental exercise test on a treadmill, three 6-min bouts of running at 9, 12 and 15 km h⁻¹, and a self-paced 10-km running performance trial; at least 48 h separated each test. Based on 10-km running performance, subjects were divided into tertiles, with the lower tertile designated the low-performing (LP) group and the upper tertile the high-performing (HP) group. For the HP group, velocity peaked at 18.8 ± 1.4 km h⁻¹ in the first 400 m and was higher than the average race velocity (P < 0.05). The velocity then decreased gradually until 2,000 m (P < 0.05), remaining constant until 9,600 m, when it increased again (P < 0.05). The LP group ran the first 400 m at a significantly lower velocity than the HP group (15.6 ± 1.6 km h⁻¹; P > 0.05), and this initial velocity was not different from the LP average racing velocity (14.5 ± 0.7 km h⁻¹). The velocity then decreased non-significantly until 9,600 m (P > 0.05), followed by an increase at the end (P < 0.05). The peak treadmill running velocity (PV), running economy (RE), lactate threshold (LT) and net blood lactate accumulation at 15 km h⁻¹ were significantly correlated with the start, middle, last and average velocities during the 10-km race. These results demonstrate that high- and low-performance runners adopt different pacing strategies during a 10-km race. Furthermore, important determinants of the chosen pacing strategy appear to include PV, LT and RE.
Abstract:
Standard tools for the analysis of economic problems involving uncertainty, including risk premiums, certainty equivalents and the notions of absolute and relative risk aversion, are developed without making specific assumptions on functional form beyond the basic requirements of monotonicity, transitivity, continuity, and the presumption that individuals prefer certainty to risk. Individuals are not required to display probabilistic sophistication. The approach relies on the distance and benefit functions to characterize preferences relative to a given state-contingent vector of outcomes. The distance and benefit functions are used to derive absolute and relative risk premiums and to characterize preferences exhibiting constant absolute risk aversion (CARA) and constant relative risk aversion (CRRA). A generalization of the notion of Schur-concavity is presented. If preferences are generalized Schur concave, the absolute and relative risk premiums are generalized Schur convex, and the certainty equivalents are generalized Schur concave.