90 results for order-picking system


Relevance:

30.00%

Publisher:

Abstract:

El presente Trabajo fin Fin de Máster, versa sobre una caracterización preliminar del comportamiento de un robot de tipo industrial, configurado por 4 eslabones y 4 grados de libertad, y sometido a fuerzas de mecanizado en su extremo. El entorno de trabajo planteado es el de plantas de fabricación de piezas de aleaciones de aluminio para automoción. Este tipo de componentes parte de un primer proceso de fundición que saca la pieza en bruto. Para series medias y altas, en función de las propiedades mecánicas y plásticas requeridas y los costes de producción, la inyección a alta presión (HPDC) y la fundición a baja presión (LPC) son las dos tecnologías más usadas en esta primera fase. Para inyección a alta presión, las aleaciones de aluminio más empleadas son, en designación simbólica según norma EN 1706 (entre paréntesis su designación numérica); EN AC AlSi9Cu3(Fe) (EN AC 46000) , EN AC AlSi9Cu3(Fe)(Zn) (EN AC 46500), y EN AC AlSi12Cu1(Fe) (EN AC 47100). Para baja presión, EN AC AlSi7Mg0,3 (EN AC 42100). En los 3 primeros casos, los límites de Silicio permitidos pueden superan el 10%. En el cuarto caso, es inferior al 10% por lo que, a los efectos de ser sometidas a mecanizados, las piezas fabricadas en aleaciones con Si superior al 10%, se puede considerar que son equivalentes, diferenciándolas de la cuarta. Las tolerancias geométricas y dimensionales conseguibles directamente de fundición, recogidas en normas como ISO 8062 o DIN 1688-1, establecen límites para este proceso. Fuera de esos límites, las garantías en conseguir producciones con los objetivos de ppms aceptados en la actualidad por el mercado, obligan a ir a fases posteriores de mecanizado. Aquellas geometrías que, funcionalmente, necesitan disponer de unas tolerancias geométricas y/o dimensionales definidas acorde a ISO 1101, y no capaces por este proceso inicial de moldeado a presión, deben ser procesadas en una fase posterior en células de mecanizado. En este caso, las tolerancias alcanzables para procesos de arranque de viruta se recogen en normas como ISO 2768. Las células de mecanizado se componen, por lo general, de varios centros de control numérico interrelacionados y comunicados entre sí por robots que manipulan las piezas en proceso de uno a otro. Dichos robots, disponen en su extremo de una pinza utillada para poder coger y soltar las piezas en los útiles de mecanizado, las mesas de intercambio para cambiar la pieza de posición o en utillajes de equipos de medición y prueba, o en cintas de entrada o salida. La repetibilidad es alta, de centésimas incluso, definida según norma ISO 9283. El problema es que, estos rangos de repetibilidad sólo se garantizan si no se hacen esfuerzos o éstos son despreciables (caso de mover piezas). Aunque las inercias de mover piezas a altas velocidades hacen que la trayectoria intermedia tenga poca precisión, al inicio y al final (al coger y dejar pieza, p.e.) se hacen a velocidades relativamente bajas que hacen que el efecto de las fuerzas de inercia sean menores y que permiten garantizar la repetibilidad anteriormente indicada. 
No ocurre así si se quitara la garra y se intercambia con un cabezal motorizado con una herramienta como broca, mandrino, plato de cuchillas, fresas frontales o tangenciales… Las fuerzas ejercidas de mecanizado generarían unos pares en las uniones tan grandes y tan variables que el control del robot no sería capaz de responder (o no está preparado, en un principio) y generaría una desviación en la trayectoria, realizada a baja velocidad, que desencadenaría en un error de posición (ver norma ISO 5458) no asumible para la funcionalidad deseada. Se podría llegar al caso de que la tolerancia alcanzada por un pretendido proceso más exacto diera una dimensión peor que la que daría el proceso de fundición, en principio con mayor variabilidad dimensional en proceso (y por ende con mayor intervalo de tolerancia garantizable). De hecho, en los CNCs, la precisión es muy elevada, (pudiéndose despreciar en la mayoría de los casos) y no es la responsable de, por ejemplo la tolerancia de posición al taladrar un agujero. Factores como, temperatura de la sala y de la pieza, calidad constructiva de los utillajes y rigidez en el amarre, error en el giro de mesas y de colocación de pieza, si lleva agujeros previos o no, si la herramienta está bien equilibrada y el cono es el adecuado para el tipo de mecanizado… influyen más. Es interesante que, un elemento no específico tan común en una planta industrial, en el entorno anteriormente descrito, como es un robot, el cual no sería necesario añadir por disponer de él ya (y por lo tanto la inversión sería muy pequeña), puede mejorar la cadena de valor disminuyendo el costo de fabricación. Y si se pudiera conjugar que ese robot destinado a tareas de manipulación, en los muchos tiempos de espera que va a disfrutar mientras el CNC arranca viruta, pudiese coger un cabezal y apoyar ese mecanizado; sería doblemente interesante. Por lo tanto, se antoja sugestivo poder conocer su comportamiento e intentar explicar qué sería necesario para llevar esto a cabo, motivo de este trabajo. La arquitectura de robot seleccionada es de tipo SCARA. La búsqueda de un robot cómodo de modelar y de analizar cinemática y dinámicamente, sin limitaciones relevantes en la multifuncionalidad de trabajos solicitados, ha llevado a esta elección, frente a otras arquitecturas como por ejemplo los robots antropomórficos de 6 grados de libertad, muy populares a nivel industrial. Este robot dispone de 3 uniones, de las cuales 2 son de tipo par de revolución (1 grado de libertad cada una) y la tercera es de tipo corredera o par cilíndrico (2 grados de libertad). La primera unión, de tipo par de revolución, sirve para unir el suelo (considerado como eslabón número 1) con el eslabón número 2. La segunda unión, también de ese tipo, une el eslabón número 2 con el eslabón número 3. Estos 2 brazos, pueden describir un movimiento horizontal, en el plano X-Y. El tercer eslabón, está unido al eslabón número 4 por la unión de tipo corredera. El movimiento que puede describir es paralelo al eje Z. El robot es de 4 grados de libertad (4 motores). En relación a los posibles trabajos que puede realizar este tipo de robot, su versatilidad abarca tanto operaciones típicas de manipulación como operaciones de arranque de viruta. Uno de los mecanizados más usuales es el taladrado, por lo cual se elige éste para su modelización y análisis. Dentro del taladrado se elegirá para acotar las fuerzas, taladrado en macizo con broca de diámetro 9 mm. 
El robot se ha considerado por el momento que tenga comportamiento de sólido rígido, por ser el mayor efecto esperado el de los pares en las uniones. Para modelar el robot se utiliza el método de los sistemas multicuerpos. Dentro de este método existen diversos tipos de formulaciones (p.e. Denavit-Hartenberg). D-H genera una cantidad muy grande de ecuaciones e incógnitas. Esas incógnitas son de difícil comprensión y, para cada posición, hay que detenerse a pensar qué significado tienen. Se ha optado por la formulación de coordenadas naturales. Este sistema utiliza puntos y vectores unitarios para definir la posición de los distintos cuerpos, y permite compartir, cuando es posible y se quiere, para definir los pares cinemáticos y reducir al mismo tiempo el número de variables. Las incógnitas son intuitivas, las ecuaciones de restricción muy sencillas y se reduce considerablemente el número de ecuaciones e incógnitas. Sin embargo, las coordenadas naturales “puras” tienen 2 problemas. El primero, que 2 elementos con un ángulo de 0 o 180 grados, dan lugar a puntos singulares que pueden crear problemas en las ecuaciones de restricción y por lo tanto han de evitarse. El segundo, que tampoco inciden directamente sobre la definición o el origen de los movimientos. Por lo tanto, es muy conveniente complementar esta formulación con ángulos y distancias (coordenadas relativas). Esto da lugar a las coordenadas naturales mixtas, que es la formulación final elegida para este TFM. Las coordenadas naturales mixtas no tienen el problema de los puntos singulares. Y la ventaja más importante reside en su utilidad a la hora de aplicar fuerzas motrices, momentos o evaluar errores. Al incidir sobre la incógnita origen (ángulos o distancias) controla los motores de manera directa. El algoritmo, la simulación y la obtención de resultados se ha programado mediante Matlab. Para realizar el modelo en coordenadas naturales mixtas, es preciso modelar en 2 pasos el robot a estudio. El primer modelo se basa en coordenadas naturales. Para su validación, se plantea una trayectoria definida y se analiza cinemáticamente si el robot satisface el movimiento solicitado, manteniendo su integridad como sistema multicuerpo. Se cuantifican los puntos (en este caso inicial y final) que configuran el robot. Al tratarse de sólidos rígidos, cada eslabón queda definido por sus respectivos puntos inicial y final (que son los más interesantes para la cinemática y la dinámica) y por un vector unitario no colineal a esos 2 puntos. Los vectores unitarios se colocan en los lugares en los que se tenga un eje de rotación o cuando se desee obtener información de un ángulo. No son necesarios vectores unitarios para medir distancias. Tampoco tienen por qué coincidir los grados de libertad con el número de vectores unitarios. Las longitudes de cada eslabón quedan definidas como constantes geométricas. Se establecen las restricciones que definen la naturaleza del robot y las relaciones entre los diferentes elementos y su entorno. La trayectoria se genera por una nube de puntos continua, definidos en coordenadas independientes. Cada conjunto de coordenadas independientes define, en un instante concreto, una posición y postura de robot determinada. Para conocerla, es necesario saber qué coordenadas dependientes hay en ese instante, y se obtienen resolviendo por el método de Newton-Rhapson las ecuaciones de restricción en función de las coordenadas independientes. 
El motivo de hacerlo así es porque las coordenadas dependientes deben satisfacer las restricciones, cosa que no ocurre con las coordenadas independientes. Cuando la validez del modelo se ha probado (primera validación), se pasa al modelo 2. El modelo número 2, incorpora a las coordenadas naturales del modelo número 1, las coordenadas relativas en forma de ángulos en los pares de revolución (3 ángulos; ϕ1, ϕ 2 y ϕ3) y distancias en los pares prismáticos (1 distancia; s). Estas coordenadas relativas pasan a ser las nuevas coordenadas independientes (sustituyendo a las coordenadas independientes cartesianas del modelo primero, que eran coordenadas naturales). Es necesario revisar si el sistema de vectores unitarios del modelo 1 es suficiente o no. Para este caso concreto, se han necesitado añadir 1 vector unitario adicional con objeto de que los ángulos queden perfectamente determinados con las correspondientes ecuaciones de producto escalar y/o vectorial. Las restricciones habrán de ser incrementadas en, al menos, 4 ecuaciones; una por cada nueva incógnita. La validación del modelo número 2, tiene 2 fases. La primera, al igual que se hizo en el modelo número 1, a través del análisis cinemático del comportamiento con una trayectoria definida. Podrían obtenerse del modelo 2 en este análisis, velocidades y aceleraciones, pero no son necesarios. Tan sólo interesan los movimientos o desplazamientos finitos. Comprobada la coherencia de movimientos (segunda validación), se pasa a analizar cinemáticamente el comportamiento con trayectorias interpoladas. El análisis cinemático con trayectorias interpoladas, trabaja con un número mínimo de 3 puntos máster. En este caso se han elegido 3; punto inicial, punto intermedio y punto final. El número de interpolaciones con el que se actúa es de 50 interpolaciones en cada tramo (cada 2 puntos máster hay un tramo), resultando un total de 100 interpolaciones. El método de interpolación utilizado es el de splines cúbicas con condición de aceleración inicial y final constantes, que genera las coordenadas independientes de los puntos interpolados de cada tramo. Las coordenadas dependientes se obtienen resolviendo las ecuaciones de restricción no lineales con el método de Newton-Rhapson. El método de las splines cúbicas es muy continuo, por lo que si se desea modelar una trayectoria en el que haya al menos 2 movimientos claramente diferenciados, es preciso hacerlo en 2 tramos y unirlos posteriormente. Sería el caso en el que alguno de los motores se desee expresamente que esté parado durante el primer movimiento y otro distinto lo esté durante el segundo movimiento (y así sucesivamente). Obtenido el movimiento, se calculan, también mediante fórmulas de diferenciación numérica, las velocidades y aceleraciones independientes. El proceso es análogo al anteriormente explicado, recordando la condición impuesta de que la aceleración en el instante t= 0 y en instante t= final, se ha tomado como 0. Las velocidades y aceleraciones dependientes se calculan resolviendo las correspondientes derivadas de las ecuaciones de restricción. Se comprueba, de nuevo, en una tercera validación del modelo, la coherencia del movimiento interpolado. La dinámica inversa calcula, para un movimiento definido -conocidas la posición, velocidad y la aceleración en cada instante de tiempo-, y conocidas las fuerzas externas que actúan (por ejemplo el peso); qué fuerzas hay que aplicar en los motores (donde hay control) para que se obtenga el citado movimiento. 
En la dinámica inversa, cada instante del tiempo es independiente de los demás y tiene una posición, una velocidad, una aceleración y unas fuerzas conocidas. En este caso concreto, se desean aplicar, de momento, sólo las fuerzas debidas al peso, aunque se podrían haber incorporado fuerzas de otra naturaleza si se hubiese deseado. Las posiciones, velocidades y aceleraciones proceden del cálculo cinemático. El efecto inercial de las fuerzas tenidas en cuenta (el peso) es calculado. Como resultado final del análisis dinámico inverso, se obtienen los pares que han de ejercer los cuatro motores para replicar el movimiento prescrito con las fuerzas que estaban actuando. La cuarta validación del modelo consiste en confirmar que el movimiento obtenido al aplicar los pares calculados en la dinámica inversa coincide con el obtenido en el análisis cinemático (movimiento teórico). Para ello, es necesario acudir a la dinámica directa. La dinámica directa se encarga de calcular el movimiento del robot resultante de aplicar unos pares en los motores y unas fuerzas en el robot. Por lo tanto, el movimiento real resultante, al no haber cambiado ninguna condición de las obtenidas en la dinámica inversa (pares de motor y fuerzas inerciales debidas al peso de los eslabones), ha de ser el mismo que el movimiento teórico. Siendo así, se considera que el robot está listo para trabajar. Si se introduce una fuerza exterior de mecanizado no contemplada en la dinámica inversa y se asignan a los motores los mismos pares resultantes de la resolución del problema dinámico inverso, el movimiento real obtenido no es igual al movimiento teórico. El control de lazo cerrado se basa en ir comparando el movimiento real con el deseado e introducir las correcciones necesarias para minimizar o anular las diferencias. Se aplican ganancias en forma de correcciones en posición y/o velocidad para eliminar esas diferencias. Se evalúa el error de posición como la diferencia, en cada punto, entre el movimiento teórico deseado en el análisis cinemático y el movimiento real obtenido para cada fuerza de mecanizado y una ganancia concreta. Finalmente, se mapea el error de posición obtenido para cada fuerza de mecanizado y las diferentes ganancias previstas, graficando la mejor precisión que puede dar el robot para cada operación que se le requiere, y en qué condiciones. -------------- This Master's Thesis deals with a preliminary characterization of the behaviour of an industrial robot, configured with 4 links and 4 degrees of freedom, and subjected to machining forces at its end effector. The proposed working conditions are those typical of plants manufacturing aluminium-alloy parts for the automotive industry. These components come from an initial casting process that produces the rough part. For medium and high volumes, high pressure die casting (HPDC) and low pressure casting (LPC) are the most widely used technologies in this first phase. For high pressure die casting, the most common aluminium alloys are, in symbolic designation according to the EN 1706 standard (numerical designation in brackets): EN AC AlSi9Cu3(Fe) (EN AC 46000), EN AC AlSi9Cu3(Fe)(Zn) (EN AC 46500), and EN AC AlSi12Cu1(Fe) (EN AC 47100). For low pressure casting, EN AC AlSi7Mg0,3 (EN AC 42100) is used. For the first 3 alloys, the allowed silicon content can exceed 10%. The fourth alloy admits less than 10% Si.
From the machining point of view, this means that components made of alloys with Si content above 10% can be considered equivalent, while the fourth alloy must be studied separately. The geometrical and dimensional tolerances directly achievable from casting, gathered in standards such as ISO 8062 or DIN 1688-1, set the limits of this process. Beyond those limits, the guarantee of producing batches within the ppm targets currently accepted by the market forces the parts into subsequent machining operations. Geometries that functionally require geometrical and/or dimensional tolerances defined according to ISO 1101, and that cannot be achieved by the initial moulding process, must therefore be finished afterwards in machining cells. In this case, the tolerances achievable with cutting processes are gathered in standards such as ISO 2768. In general terms, a machining cell contains several CNC machining centres, interrelated and connected by robots that transfer the in-process parts between them. Those robots carry a gripper at their end in order to pick and place parts in machining fixtures, on interchange tables used to reposition the part, in measurement and inspection tooling, or on entrance/exit conveyors. Robot repeatability is tight, even a few hundredths of a millimetre, as defined in ISO 9283. The problem is that those repeatability ranges are only guaranteed when there are no loads, or when the loads are negligible (for example, when only moving parts). Although the inertia of moving parts at high speed means that the intermediate path has little accuracy, the beginning and end of each trajectory (for example, when picking or releasing a part) are executed at relatively low speeds, which reduces the effect of the inertial forces and allows the repeatability mentioned above to be achieved. The same does not hold if the gripper is removed and replaced by a spindle carrying a machining tool such as a drill, a PCD boring tool, or a face or tangential milling cutter. The machining forces would create such large and variable torques at the joints that the robot controller would not be able to react (or is not prepared to, in principle) and would produce a deviation in the working trajectory, executed at low speed, triggering a position error (see ISO 5458) that is not acceptable for the required function. It could even happen that the tolerance achieved by a supposedly more exact process turns out to be worse than the one achievable by the casting process, which in principle has a larger in-process dimensional variability (and hence a wider guaranteed tolerance range). In fact, in CNC machines the accuracy is very high (its influence can be ignored in most cases) and it is not responsible for, say, the position tolerance when drilling a hole. Factors such as room and part temperature, the build quality of the machining fixtures, the stiffness of the clamping system, the indexing error of the rotary tables and the part positioning error, whether there are previous holes, whether the tool is properly balanced, and whether the tool holder is suitable for that type of machining have more influence. It is interesting that an element as common and non-specific in a manufacturing plant, in the environment described above, as a robot (which does not need to be added because it is already there, so the extra investment is minimal) can improve the value chain by decreasing the manufacturing cost.
And if it were possible for that robot, dedicated to handling tasks, to pick up a spindle and support the machining during the many idle periods it enjoys while the CNC machines cut, it would be doubly interesting. It is therefore attractive to characterise its behaviour and to explain what would be needed to make this possible, which is the purpose of this work. The selected robot architecture is of SCARA type. The search for a robot that is easy to model and to analyse kinematically and dynamically, without significant limitations in the multifunctionality of the requested operations, led to this choice over other architectures that are very popular in industry, such as 6-DOF anthropomorphic robots. This robot has 3 joints: 2 of them are revolute joints (1 DOF each) and the third is a cylindrical joint (2 DOFs). The first joint, a revolute one, connects the ground (body 1) with body 2. The second, also a revolute joint, connects body 2 with body 3. These 2 links can move horizontally in the X-Y plane. Body 3 is linked to body 4 by the cylindrical joint, whose motion is parallel to the Z axis. The robot has 4 degrees of freedom (4 motors). Regarding the work this type of robot can perform, its versatility covers both typical handling operations and cutting operations. One of the most common machining operations is drilling, which is why it was chosen for modelling and analysis. Within drilling, in order to bound the force spectrum, solid drilling with a 9 mm diameter drill is considered. For the moment the robot is assumed to behave as a rigid body, since the largest expected effect is that of the joint torques. The robot is modelled using the multibody system method. Within this method there are different formulations (e.g. Denavit-Hartenberg). D-H generates a very large number of equations and unknowns; those unknowns are difficult to interpret and, for each position, one must stop and think about their meaning. The formulation chosen is therefore that of natural coordinates. This system uses points and unit vectors to define the position of the different bodies, and allows them to be shared, when possible and desired, to define the kinematic pairs while reducing the number of variables. The unknowns are intuitive, the constraint equations are simple, and the number of equations and unknowns is greatly reduced. However, "pure" natural coordinates suffer from 2 problems. The first is that 2 elements at an angle of 0° or 180° give rise to singular positions that can cause problems in the constraint equations and must therefore be avoided. The second is that they do not act directly on the definition or origin of the motions. It is therefore highly advisable to complement this formulation with angles and distances (relative coordinates). This leads to mixed natural coordinates, which is the final formulation chosen for this thesis. Mixed natural coordinates do not have the problem of singular positions, and their most important advantage lies in their usefulness when applying driving forces or torques, or when evaluating errors: since they act directly on the originating variable (angles or distances), they control the motors directly. The algorithm, the simulation and the processing of results have been programmed in Matlab. To build the model in mixed natural coordinates, the robot under study must be modelled in 2 steps.
The first model is based on natural coordinates. To validate it, a defined trajectory is prescribed and the robot is analysed kinematically to check whether it fulfils the requested movement while keeping its integrity as a multibody system. The points (in this case the starting and ending points) that configure the robot are identified. Since the links are treated as rigid bodies, each one is defined by its starting and ending points (the most interesting ones for kinematics and dynamics) and by a unit vector that is not collinear with those 2 points. Unit vectors are placed wherever there is a rotation axis or wherever information about an angle is needed. Unit vectors are not needed to measure distances, nor does the number of degrees of freedom have to coincide with the number of unit vectors. The length of each link is defined as a geometric constant. The constraints that define the nature of the robot and the relationships between the different elements and their environment are then established. The trajectory is generated as a continuous cloud of points defined in independent coordinates. Each set of independent coordinates defines, at a given instant, a specific position and posture of the robot. To know it, the dependent coordinates at that instant must be determined, and they are obtained by solving the constraint equations with the Newton-Raphson method as a function of the independent coordinates. This is done because the dependent coordinates must satisfy the constraints, which is not the case for the independent coordinates. Once the validity of the model has been checked (first validation), the next step is model 2. Model 2 adds to the natural coordinates of model 1 the relative coordinates, in the form of angles at the revolute pairs (3 angles: ϕ1, ϕ2 and ϕ3) and a distance at the prismatic pair (1 distance: s). These relative coordinates become the new independent coordinates (replacing the Cartesian independent coordinates of model 1, which were natural coordinates). It must then be checked whether the set of unit vectors of model 1 is sufficient. In this specific case, 1 additional unit vector had to be added so that the angles are fully determined by the corresponding dot and/or cross product equations. The constraints must be increased by at least 4 equations, one per new unknown. The validation of model 2 has two phases. The first, as for model 1, is a kinematic analysis of the behaviour along a defined trajectory. Velocities and accelerations could be obtained from model 2 in this analysis, but they are not needed; only the motions and finite displacements are of interest. Once the consistency of the motions has been checked (second validation), the behaviour with interpolated trajectories is analysed kinematically. The kinematic analysis with interpolated trajectories works with a minimum of 3 master points. In this case 3 were chosen: starting point, intermediate point and ending point. The number of interpolations used is 50 per stretch (there is one stretch between every 2 master points), giving a total of 100 interpolations. The interpolation method used is cubic splines with the condition of constant initial and final accelerations, which generates the independent coordinates of the interpolated points of each stretch. The dependent coordinates are obtained by solving the nonlinear constraint equations with the Newton-Raphson method.
The cubic spline method is highly continuous, so if a trajectory with at least 2 clearly different movements is to be modelled, it must be built in 2 stretches that are joined afterwards. This would be the case when one of the motors is deliberately kept stopped during the first movement and a different one during the second movement (and so on). Once the motion is obtained, the independent velocities and accelerations are also calculated, using numerical differentiation formulas. The process is analogous to the one explained before, recalling the imposed condition that the acceleration at t = 0 and at t = end is taken as 0. The dependent velocities and accelerations are calculated by solving the corresponding derivatives of the constraint equations. A third validation of the model again checks the consistency of the interpolated motion. Inverse dynamics calculates, for a defined motion (with position, velocity and acceleration known at every instant) and for known external forces (for example the weight), the forces that must be applied at the motors (where there is control) to obtain that motion. In inverse dynamics, each instant of time is independent of the others, with its own known position, velocity, acceleration and forces. In this specific case only the forces due to weight are applied for the moment, although forces of another nature could have been included if desired. The positions, velocities and accelerations come from the kinematic calculation, and the inertial effect of the forces taken into account (the weight) is computed. As the final result of the inverse dynamic analysis, the torques that the 4 motors must exert to reproduce the prescribed motion under the acting forces are obtained. The fourth validation of the model consists in confirming that the motion obtained by applying the torques computed in the inverse dynamics coincides with the motion from the kinematic analysis (the theoretical motion). For this, direct dynamics is required. Direct dynamics computes the motion of the robot that results from applying given torques at the motors and given forces on the robot. Therefore, since none of the conditions obtained in the inverse dynamics have changed (motor torques and inertial forces due to the weight of the links), the resulting real motion must be the same as the theoretical motion. When this is the case, the robot is considered ready to work. If an external machining force not considered in the inverse dynamics is introduced, while the motors are assigned the same torques resulting from the inverse dynamic problem, the real motion obtained is no longer equal to the theoretical motion. Closed-loop control is based on continuously comparing the real motion with the desired one and introducing the corrections needed to minimise or cancel the differences. Gains are applied in the form of position and/or velocity corrections to remove those differences. The position error is evaluated as the difference, at each point, between the theoretical motion desired in the kinematic analysis and the real motion obtained for each machining force and a specific gain. Finally, the position error obtained for each machining force and for the different gains considered is mapped, charting the best accuracy the robot can provide for each requested operation and under which conditions.
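As a minimal illustration of the position problem described above (dependent natural coordinates obtained from the constraint equations by Newton-Raphson for each set of independent coordinates), the following Python sketch solves the constant-length constraints of a planar two-link chain. The link lengths, the driven end point and all names are illustrative assumptions for the example; they are not values or code from the thesis, which was programmed in Matlab.

```python
import numpy as np

# Illustrative link lengths (geometric constants), not thesis values.
L2, L3 = 0.4, 0.3

def phi(q, p_end):
    """Constraint vector: constant-length conditions for links 2 and 3.
    q = (x2, y2) is the intermediate point (dependent coordinates);
    p_end is the prescribed end point (independent coordinates)."""
    x2, y2 = q
    xe, ye = p_end
    return np.array([
        x2**2 + y2**2 - L2**2,
        (xe - x2)**2 + (ye - y2)**2 - L3**2,
    ])

def jac(q, p_end):
    """Analytic Jacobian of phi with respect to the dependent coordinates."""
    x2, y2 = q
    xe, ye = p_end
    return np.array([
        [2 * x2,          2 * y2],
        [-2 * (xe - x2), -2 * (ye - y2)],
    ])

def solve_position(p_end, q0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration for the dependent coordinates."""
    q = np.asarray(q0, dtype=float)
    for _ in range(max_iter):
        r = phi(q, p_end)
        if np.linalg.norm(r) < tol:
            break
        q -= np.linalg.solve(jac(q, p_end), r)
    return q

# Example: a reachable end point (|p_end| <= L2 + L3).
print(solve_position(p_end=(0.5, 0.3), q0=(0.4, 0.0)))
```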

Relevance:

30.00%

Publisher:

Abstract:

We propose a novel control scheme for bilateral teleoperation of n-degree-of-freedom (DOF) nonlinear robotic systems with time-varying communication delay. A major contribution of this work lies in the demonstration that the structure of a state convergence algorithm can also be applied to nth-order nonlinear teleoperation systems. By choosing a Lyapunov-Krasovskii functional, we show that the local-remote teleoperation system is asymptotically stable. The time delay of the communication channel is assumed to be unknown and randomly time varying, but the upper bounds of the delay interval and of the delay derivative are assumed to be known.
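For illustration only, a delay-dependent Lyapunov-Krasovskii functional of the generic form commonly used in this kind of stability proof is shown below; it is an assumed textbook form, not necessarily the functional chosen in the paper.

```latex
V(x_t) = x^{\top}(t)\,P\,x(t)
       + \int_{t-d(t)}^{t} x^{\top}(s)\,Q\,x(s)\,\mathrm{d}s
       + \int_{-\bar d}^{0}\!\int_{t+\theta}^{t} \dot x^{\top}(s)\,R\,\dot x(s)\,\mathrm{d}s\,\mathrm{d}\theta ,
\qquad 0 \le d(t) \le \bar d,\quad \dot d(t) \le \mu < 1,
```

with P, Q, R positive definite; the known bounds (the maximum delay and the bound on its derivative) correspond to the assumptions on the delay stated in the abstract.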

Relevance:

30.00%

Publisher:

Abstract:

In this paper a glucose-insulin regulator for Type 1 Diabetes using artificial neural networks (ANN) is proposed. This is done using a discrete recurrent high-order neural network to identify and control a nonlinear dynamical system that represents the pancreatic beta-cell behavior of a virtual patient. The ANN that reproduces and identifies the dynamical behavior of the system is configured in a series-parallel scheme and trained online using the extended Kalman filter algorithm, to achieve fast convergence of the in-silico identification. The control objective is to regulate the glucose-insulin level under different glucose inputs and is based on a nonlinear neural block control law. A safety block is included between the control output signal and the virtual patient with type 1 diabetes mellitus. Simulations cover a period of three days. Simulation results during the overnight fasting period are compared in Open-Loop (OL) versus Closed-Loop (CL). Tests in Semi-Closed-Loop (SCL) add a feedforward term in order to give information to the control algorithm. We conclude that the controller is able to drive the glucose to target during overnight periods and that the feedforward is necessary to control the postprandial period.
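The following Python sketch illustrates the kind of series-parallel high-order neural identifier with online EKF weight training that the abstract describes. The regressor, dimensions, tuning values and the toy plant are assumptions made for the example and are not taken from the paper.

```python
import numpy as np

def regressor(x, u):
    """High-order terms of the measured state x and input u (assumed form)."""
    s = np.tanh(x)
    return np.array([s, s * u, s**2, u, 1.0])

class RHONNIdentifierEKF:
    """Series-parallel identifier, linear in the weights, trained with an EKF."""
    def __init__(self, n_w=5, p0=1e2, q=1e-3, r=1e-1):
        self.w = np.zeros(n_w)          # synaptic weights
        self.P = p0 * np.eye(n_w)       # weight-error covariance
        self.Q = q * np.eye(n_w)        # process-noise covariance
        self.R = r                      # measurement-noise covariance

    def step(self, x_k, u_k, x_next):
        """Predict x(k+1) from measured x(k), u(k); then EKF weight update."""
        h = regressor(x_k, u_k)         # output gradient w.r.t. the weights
        x_hat = self.w @ h
        err = x_next - x_hat
        Ph = self.P @ h
        k_gain = Ph / (self.R + h @ Ph)
        self.w = self.w + k_gain * err
        self.P = self.P - np.outer(k_gain, Ph) + self.Q
        return x_hat, err

# Toy usage: identify a scalar nonlinear map online.
ident = RHONNIdentifierEKF()
x = 0.1
for k in range(200):
    u = np.sin(0.05 * k)
    x_next = 0.8 * x + 0.2 * np.tanh(x) * u   # stand-in "plant", not the glucose model
    ident.step(x, u, x_next)
    x = x_next
print(ident.w.round(3))
```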

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a three-year experimental learning activity developed by a group of teachers in a wind tunnel facility. The authors, leading a team of students, carried out a project consisting of the design, assembly and testing of a wind tunnel. The project included all stages of the process, from the initial specifications to the final flow quality assessments, passing through the calculation of each element and the construction of the whole wind tunnel. The group of final-year students was responsible for the whole wind tunnel project as part of their bachelor degree project. The paper focuses on the development of the wind tunnel data acquisition software. This automatic tool is essential to improve the automation of data acquisition in the wind tunnel facility, in particular for a 6-DOF multi-axis force/torque sensor. This work can be considered a typical example of real engineering practice: a set of specifications that has to be modified due to the constraints imposed throughout the project in order to obtain the final result.
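As a hypothetical illustration of the processing such data acquisition software performs for a 6-DOF force/torque sensor, the sketch below maps raw channel voltages to forces and moments through a calibration matrix and averages an acquisition block. The calibration values, channel layout and function names are invented for the example and do not describe the actual sensor or software.

```python
import numpy as np

# Assumed diagonal calibration (N/V for forces, N*m/V for moments).
CAL = np.eye(6) * np.array([50.0, 50.0, 100.0, 2.0, 2.0, 2.0])

def to_wrench(raw_volts, bias_volts):
    """Convert one raw 6-channel sample to [Fx, Fy, Fz, Mx, My, Mz]."""
    return CAL @ (np.asarray(raw_volts) - np.asarray(bias_volts))

def average_wrench(samples, bias_volts):
    """Average a block of samples acquired at a fixed tunnel condition."""
    wrenches = np.array([to_wrench(s, bias_volts) for s in samples])
    return wrenches.mean(axis=0), wrenches.std(axis=0)

# Example: 3 fake samples with a zero bias.
mean_w, std_w = average_wrench(np.random.rand(3, 6) * 0.01, np.zeros(6))
print(mean_w.round(4))
```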

Relevance:

30.00%

Publisher:

Abstract:

Background: Early and effective identification of developmental disorders during childhood remains a critical task for the international community. Language delays have the second highest prevalence among common developmental disorders in children and are frequently the first symptoms of a possible disorder. Objective: This paper evaluates a Web-based Clinical Decision Support System (CDSS) whose aim is to enhance the screening of language disorders at a nursery school. The common lack of early diagnosis of language disorders led us to deploy an easy-to-use CDSS in order to evaluate its accuracy in the early detection of language pathologies. This CDSS can be used by pediatricians to support the screening of language disorders in primary care. Methods: This paper details the evaluation results of the "Gades" CDSS at a nursery school with 146 children, 12 educators, and 1 language therapist. The methodology embraces two consecutive phases. The first stage involves the observation of each child's language abilities, carried out by the educators, to facilitate the evaluation of the language acquisition level performed by a language therapist. Next, the same language therapist evaluates the reliability of the observed results. Results: The Gades CDSS was integrated to provide the language therapist with the required clinical information. The validation process showed a global 83.6% (122/146) success rate in language evaluation and a 7% (7/94) rate of non-accepted system decisions for children from 0 to 3 years old. The system helped language therapists to identify new children with potential disorders who required further evaluation. This process will revalidate the CDSS output and allow the enhancement of early detection of language disorders in children. The system does need minor refinement, since the therapists disagreed with some questions from the CDSS knowledge base (KB) and suggested adding a few questions about speech production and pragmatic abilities. The refinement of the KB will address these issues and include the requested improvements, with the support of the experts who took part in the original KB development. Conclusions: This research demonstrated the benefit of a Web-based CDSS for monitoring children's neurodevelopment via the early detection of language delays at a nursery school. Current next steps focus on the design of a model that includes pseudo auto-learning capacity, supervised by experts.

Relevance:

30.00%

Publisher:

Abstract:

La adquisición de la competencia grupal es algo básico en la docencia universitaria. Esta tarea supone evaluar diferentes factores en un número elevado de alumnos, lo que puede suponer gran complejidad y un esfuerzo elevado. De cara a evitar este esfuerzo se puede pensar en emplear los registros de la interacción de los usuarios almacenados en las plataformas de aprendizaje. Para ello, el presente trabajo se basa en el desarrollo de un sistema de Learning Analytics que es utilizado como herramienta para analizar las evidencias individuales de los distintos miembros de un equipo de trabajo. El trabajo desarrolla un modelo teórico, apoyado en la herramienta, que permite relacionar las evidencias observadas de forma empírica para cada alumno con indicadores obtenidos tanto de la acción individual como cooperativa de los miembros de un equipo, realizada a través de los foros de trabajo. Abstract — The development of the group work competence is a basic goal in university teaching. It should be evaluated, but this means analyzing several factors for a large number of students, which is very complex and requires a lot of effort. In order to facilitate this evaluation, it is possible to analyze the logs of students' interaction in Learning Management Systems. The present work describes the development of a Learning Analytics system that analyzes the interaction of each member of a working group. The tool is supported by a theoretical model, which allows establishing links between the empirical evidence of each student and the indicators of their individual and cooperative activity in the working forums.

Relevance:

30.00%

Publisher:

Abstract:

Division of labor is a widely studied aspect of colony behavior of social insects. Division of labor models indicate how individuals distribute themselves in order to perform different tasks simultaneously. However, models that study division of labor from a dynamical system point of view cannot be found in the literature. In this paper, we define a division of labor model as a discrete-time dynamical system, in order to study the equilibrium points and their properties related to convergence and stability. By making use of this analytical model, an adaptive algorithm based on division of labor can be designed to satisfy dynamic criteria. In this way, we have designed and tested an algorithm that varies the response thresholds in order to modify the dynamic behavior of the system. This behavior modification allows the system to adapt to specific environmental and collective situations, making the algorithm a good candidate for distributed control applications. The variable threshold algorithm is based on specialization mechanisms. It is able to achieve an asymptotically stable behavior of the system in different environments and independently of the number of individuals. The algorithm has been successfully tested under several initial conditions and number of individuals.
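A minimal sketch of a response-threshold division-of-labor model written as a discrete-time system is given below for illustration; the threshold-adaptation (specialization) rule and all parameter values are assumptions for the example, not the variable-threshold algorithm proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_tasks = 20, 2
theta = rng.uniform(1.0, 10.0, size=(n_agents, n_tasks))  # response thresholds
stim = np.array([5.0, 5.0])                                # task stimuli
delta, alpha = 1.0, 3.0        # stimulus growth rate / work performed per agent
xi_learn, phi_forget = 0.1, 0.05   # assumed threshold adaptation rates

for t in range(200):
    # Threshold response: probability of engaging each task.
    p = stim**2 / (stim**2 + theta**2)
    engaged = rng.random((n_agents, n_tasks)) < p
    work = np.zeros(n_tasks)
    for i in range(n_agents):
        tasks = np.flatnonzero(engaged[i])
        if tasks.size:
            k = tasks[0]                 # each agent works on at most one task
            work[k] += 1.0
            # Specialization: lower the threshold of the performed task,
            # raise the thresholds of the others.
            theta[i, k] = max(theta[i, k] - xi_learn, 0.1)
            theta[i, np.arange(n_tasks) != k] += phi_forget
    # Discrete-time stimulus dynamics: grows by delta, decreases with work done.
    stim = np.maximum(stim + delta - alpha * work / n_agents, 0.0)

print(stim.round(2), theta.mean(axis=0).round(2))
```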

Relevance:

30.00%

Publisher:

Abstract:

This document contains a detailed description of the design and implementation of a multi-agent application that controls the traffic lights of a city, together with a system for simulating traffic and testing. The goal of this thesis is to design and build a simplified, intelligent and distributed solution to the traffic problem of big cities, following good practices so that the model of the real world can be refined in the future. Traffic in big cities is still an unsolved problem. Not only does the increasing number of cars cause traffic jams, but also the way the traffic is organized. Usually, intersections with traffic lights are replaced by roundabouts or interchanges to increase the number of cars that can cross the intersection in a given time. But there are still places where the infrastructure cannot be changed and traffic lights are the only way to control the car flows. In real life, traffic lights either follow a predefined switching plan or receive information from a centralized system about when and how they have to change. But what if the traffic lights could cooperate and decide on their own when and how to change? Using this problem, the purpose of the thesis is to explore different agent-based software engineering approaches to design and build a non-conventional distributed system. From the software engineering point of view, the goal of the thesis is to apply the knowledge and skills acquired during the courses of the master program in Software Engineering while solving a practical and complex problem such as urban traffic.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a novel excitation-system ground-fault location method is described and tested in a 106 MVA synchronous machine. In this unit, numerous rotor ground-fault trips took place, always about an hour after synchronization to the network. However, when the field winding insulation was checked after the trips, no failure was found. The data indicated that the faults in the rotor were caused by centrifugal forces and temperature. Unexpectedly, by applying this new method, the failure was located in a cable between the excitation transformer and the automatic voltage regulator. In addition, several intentional ground faults were applied along the field winding with different fault resistance values, in order to test the accuracy of this method in locating defects in rotor windings of large generators. This new on-line rotor ground-fault detection algorithm is thus tested in high-power synchronous generators with satisfactory results.

Relevance:

30.00%

Publisher:

Abstract:

Current fusion devices consist of multiple diagnostics and hundreds or even thousands of signals. This situation often makes distributed data acquisition systems the best approach. In this type of distributed system, one of the most important issues is the synchronization between signals, so that the temporal correlation between the acquired samples of all channels is as accurate as possible. In recent decades, many fusion devices have used different types of video cameras to provide inside views of the vessel during operation and to monitor plasma behavior. The synchronization between each video frame and the rest of the signals acquired from any other diagnostic is essential in order to know the plasma evolution correctly, since all the information can then be analyzed jointly with accurate knowledge of its temporal correlation. The system described in this paper allows timestamping image frames in a real-time acquisition and processing system using IEEE 1588 clock distribution. The system has been implemented using FPGA-based devices together with a 1588-synchronized timing card (see Fig. 1). The solution is based on a previous system [1] that allows image acquisition and real-time image processing based on PXIe technology. This architecture is fully compatible with the ITER Fast Controllers [2] and offers integration with EPICS to control and monitor the entire system. However, that set-up is not able to timestamp the acquired frames, since the frame grabber module does not provide any type of timing input (IRIG-B, GPS, PTP). To overcome this limitation, an IEEE 1588 PXI timing device is used to provide an accurate way to synchronize distributed data acquisition systems using the Precision Time Protocol (PTP) IEEE 1588-2008 standard. This local timing device can be connected to a master clock device for global synchronization. The timing device has a timestamp buffer for each PXI trigger line and requires that a software application assign each frame its corresponding timestamp. This action is critical and cannot be achieved if the frame rate is high. To solve this problem, a solution has been designed that distributes the clock from the IEEE 1588 timing card to all FlexRIO devices [3]. This solution uses two PXI trigger lines, which provide the capacity to assign timestamps to every acquired frame and to register events in hardware in a deterministic way. The system thus provides a solution for timestamping frames in order to synchronize them with the rest of the signals.
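The following sketch illustrates, in simplified Python form, the pairing scheme described above: each trigger latches an IEEE 1588 timestamp into a buffer and software assigns the oldest unconsumed timestamp to the next frame. Class and method names are hypothetical; the actual system is implemented on PXIe/FlexRIO hardware, not in Python.

```python
from collections import deque

class FrameTimestamper:
    """Pair frames with PTP timestamps latched by a hardware trigger line."""

    def __init__(self):
        self._stamps = deque()   # timestamps latched by the timing card (FIFO)

    def on_trigger(self, ptp_time_ns: int) -> None:
        """Hardware event: a trigger latched an IEEE 1588 timestamp."""
        self._stamps.append(ptp_time_ns)

    def on_frame(self, frame) -> dict:
        """Software event: a frame arrived from the frame grabber.
        It is paired with the oldest unconsumed timestamp, in arrival order."""
        if not self._stamps:
            raise RuntimeError("frame received without a latched timestamp")
        return {"t_ns": self._stamps.popleft(), "frame": frame}

# Example: two triggers followed by two frames.
ts = FrameTimestamper()
ts.on_trigger(1_000_000)
ts.on_trigger(1_040_000)
print(ts.on_frame("frame_0")["t_ns"], ts.on_frame("frame_1")["t_ns"])
```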

Relevance:

30.00%

Publisher:

Abstract:

The Internet of Things makes use of a huge variety of technologies, at very different levels, that complement each other to accomplish goals that were previously regarded as unthinkable in terms of ubiquity or scalability. If the Internet of Things is expected to interconnect everyday devices and appliances and enable communication between them, a broad range of new services, applications and products can be foreseen. For example, monitoring is a process where sensors are widely used to measure environmental parameters (temperature, light, chemical agents, etc.), but obtaining readings at the exact physical point where they are wanted, or about the exact parameter of interest, can be a clumsy, time-consuming task that is not easily adaptable to new requirements. In order to tackle this challenge, this paper presents in detail a system used to monitor any conceivable environment, which additionally is able to monitor the status of its own components and heal some of the most usual issues of a Wireless Sensor Network, covering all the layers that give it shape in terms of devices, communications and services.

Relevance:

30.00%

Publisher:

Abstract:

Smart and green cities are hot topics in current research because people are becoming more conscious of their impact on the environment and of the sustainability of their cities as the population increases. Many researchers are searching for mechanisms that can reduce power consumption and pollution in the city environment. This paper addresses the issue of public lighting and how it can be improved in order to achieve a more energy-efficient city. This work is focused on making the process of turning the streetlights on and off more intelligent, so that they consume less power and cause less light pollution. The proposed solution comprises a radar device and an expert system implemented on a low-cost DSP-based platform. By analyzing the radar echo in both the frequency and time domains, the system is able to detect and identify objects moving in front of it. This information is used to decide whether or not the streetlight should be turned on. Experimental results show that the proposed system can provide hit rates over 80%, promising good performance. In addition, the proposed solution could be useful in other kinds of applications, such as intelligent security and surveillance systems and home automation.
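As a hedged illustration of the decision logic described above (radar echo analyzed in the time and frequency domains before deciding whether to switch the light on), the following Python sketch combines a time-domain energy check with a Doppler-band FFT check. The sampling rate, band limits and thresholds are assumptions for the example, not the values of the DSP implementation in the paper.

```python
import numpy as np

FS = 2000.0          # assumed sampling frequency of the baseband echo [Hz]
ENERGY_THR = 0.01    # assumed time-domain energy threshold
DOPPLER_THR = 5e-4   # assumed spectral magnitude threshold

def should_turn_on(echo: np.ndarray) -> bool:
    """Return True if the echo suggests an object moving in front of the radar."""
    echo = echo - echo.mean()                      # remove DC / static clutter
    if np.mean(echo**2) < ENERGY_THR:              # time-domain check
        return False
    spec = np.abs(np.fft.rfft(echo * np.hanning(echo.size))) / echo.size
    freqs = np.fft.rfftfreq(echo.size, d=1.0 / FS)
    moving = (freqs > 2.0) & (freqs < 500.0)       # plausible Doppler band
    return bool(spec[moving].max() > DOPPLER_THR)

# Example: a synthetic 60 Hz Doppler tone should trigger the light.
t = np.arange(0, 1.0, 1.0 / FS)
print(should_turn_on(0.5 * np.sin(2 * np.pi * 60.0 * t)))
```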

Relevance:

30.00%

Publisher:

Abstract:

Dynamic and Partial Reconfiguration (DPR) allows a system to modify certain parts of itself during run-time. This feature gives rise to the capability of evolution: changing parts of the configuration according to the online evaluation of performance or other parameters. The evolution is achieved through a bio-inspired model in which the features of the system are identified as genes. The objective of the evolution need not be a single one; in this work, power consumption is taken into consideration together with the quality of filtering of a noisy image, used as the measure of performance. Pareto optimality is applied to the evolutionary process in order to find a representative set of solutions that are optimal in terms of performance and power consumption. The main contributions of this paper are: implementing an evolvable system on a low-power Spartan-6 FPGA included in a Wireless Sensor Network node and, by making a real measure of power consumption available at run-time, achieving the capability of multi-objective evolution, which yields different optimal configurations, among which the selected one will depend on the relative "weights" given to performance and power consumption.
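The Pareto-selection step mentioned above can be illustrated with the following minimal sketch, which keeps the configurations that are non-dominated in the two objectives (filtering error and power consumption, both to be minimized). It is only an example of the concept under assumed data; it is not the evolutionary engine used in the paper.

```python
def pareto_front(configs):
    """configs: list of (genes, filtering_error, power_mw); return the
    non-dominated subset (lower error and lower power are both better)."""
    front = []
    for c in configs:
        _, e_c, p_c = c
        dominated = any(
            (e_o <= e_c and p_o <= p_c) and (e_o < e_c or p_o < p_c)
            for _, e_o, p_o in configs
        )
        if not dominated:
            front.append(c)
    return front

# Example with made-up candidates: "a" and "b" are non-dominated, "c" is not.
print(pareto_front([("a", 0.10, 120), ("b", 0.08, 150), ("c", 0.12, 130)]))
```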

Relevance:

30.00%

Publisher:

Abstract:

El principal objetivo de la tesis es estudiar el acoplamiento entre los subsistemas de control de actitud y de control térmico de un pequeño satélite, con el fin de buscar la solución a los problemas relacionados con la determinación de los parámetros de diseño. Se considera la evolución de la actitud y de las temperaturas del satélite bajo la influencia de dos estrategias de orientación diferentes: 1) estabilización magnética pasiva de la orientación (PMAS, passive magnetic attitude stabilization), y 2) control de actitud magnético activo (AMAC, active magnetic attitude control). En primer lugar se presenta el modelo matemático del problema, que incluye la dinámica rotacional y el modelo térmico. En el problema térmico se considera un satélite cúbico modelizado por medio de siete nodos (seis externos y uno interno) aplicando la ecuación del balance térmico. Una vez establecido el modelo matemático del problema, se estudia la evolución que corresponde a las dos estrategias mencionadas. La estrategia PMAS se ha seleccionado por su simplicidad, fiabilidad, bajo coste, ahorrando consumo de potencia, masa coste y complejidad, comparado con otras estrategias. Se ha considerado otra estrategia de control que consigue que el satélite gire a una velocidad requerida alrededor de un eje deseado de giro, pudiendo controlar su dirección en un sistema inercial de referencia, ya que frecuentemente el subsistema térmico establece requisitos de giro alrededor de un eje del satélite orientado en una dirección perpendicular a la radiación solar incidente. En relación con el problema térmico, para estudiar la influencia de la velocidad de giro en la evolución de las temperaturas en diversos puntos del satélite, se ha empleado un modelo térmico linealizado, obtenido a partir de la formulación no lineal aplicando un método de perturbaciones. El resultado del estudio muestra que el tiempo de estabilización de la temperatura y la influencia de las cargas periódicas externas disminuye cuando aumenta la velocidad de giro. Los cambios de temperatura se reducen hasta ser muy pequeños para velocidades de rotación altas. En relación con la estrategia PMAC se ha observado que a pesar de su uso extendido entre los micro y nano satélites todavía presenta problemas que resolver. Estos problemas están relacionados con el dimensionamiento de los parámetros del sistema y la predicción del funcionamiento en órbita. Los problemas aparecen debido a la dificultad en la determinación de las características magnéticas de los cuerpos ferromagnéticos (varillas de histéresis) que se utilizan como amortiguadores de oscilaciones en los satélites. Para estudiar este problema se presenta un modelo analítico que permite estimar la eficiencia del amortiguamiento, y que se ha aplicado al estudio del comportamiento en vuelo de varios satélites, y que se ha empleado para comparar los resultados del modelo con los obtenidos en vuelo, observándose que el modelo permite explicar satisfactoriamente el comportamiento registrado. ABSTRACT The main objective of this thesis is to study the coupling between the attitude control and thermal control subsystems of a small satellite, and address the solution to some existing issues concerning the determination of their parameters. Through the thesis the attitude and temperature evolution of the satellite is studied under the influence of two independent attitude stabilization and control strategies: (1) passive magnetic attitude stabilization (PMAS), and (2) active magnetic attitude control (AMAC). 
In this regard the mathematical model of the problem is presented. It includes both the rotational dynamics and the thermal model. The thermal model is derived for a cubic satellite by solving the heat balance equation for 6 external nodes and 1 internal node. Once the mathematical model of the problem is established, the above-mentioned attitude strategies are applied to the system and the temperature evolution of the 7 nodes of the satellite is studied. The PMAS technique has been selected for study due to its widespread use, simplicity, reliability and low cost, as this strategy significantly saves power, mass and cost, and reduces the complexity of the system compared with other attitude control strategies. In addition, another control law is considered that provides the satellite with a desired spin rate about a desired axis of the satellite, whose direction can be controlled with respect to the inertial reference frame; the thermal subsystem of a satellite usually demands a spin about an axis of the satellite oriented perpendicular to the direction of the incident solar radiation. Concerning the thermal problem, to study the influence of the spin rate on the temperature evolution of the satellite, a linearized thermal model is used, based on perturbation theory applied to the nonlinear differential equations of the thermal model of a spacecraft moving in a closed orbit. The results of this study show that the temperature stabilization time and the periodic influence of the external thermal loads decrease as the spin rate increases; however, the changes become insignificant for higher spin rates. Concerning the PMAS strategy, it was observed that, in spite of its extended application to micro and nano satellites, there are still some issues to be solved. These issues are related to the sizing of the system parameters and to the prediction of the in-orbit performance, and they were found to be rooted in the difficulty of determining the magnetic characteristics of the ferromagnetic bodies (hysteresis rods) that are used as damping devices on board satellites. To address these issues, an analytic model for estimating their damping efficiency is proposed and applied to several existing satellites in order to compare the results with their respective in-flight data. The model satisfactorily explains the behavior shown by these satellites.
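For reference, a generic form of the nodal heat balance equation mentioned in the abstract is shown below; the notation is standard and purely illustrative, and the specific couplings and loads used in the thesis are not reproduced here.

```latex
m_i c_i \frac{\mathrm{d}T_i}{\mathrm{d}t}
  = \dot Q_{\mathrm{ext},i}(t)
  + \sum_{j \ne i} k_{ij}\,\bigl(T_j - T_i\bigr)
  + \sum_{j \ne i} r_{ij}\,\bigl(T_j^{4} - T_i^{4}\bigr)
  - \varepsilon_i \sigma A_i T_i^{4},
\qquad i = 1,\dots,7,
```

where k_ij and r_ij are the conductive and radiative couplings between nodes, the external load term gathers solar, albedo and planetary infrared inputs, and the last term is the radiation emitted to deep space.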

Relevance:

30.00%

Publisher:

Abstract:

La iluminación con diodos emisores de luz (LED) está reemplazando cada vez en mayor medida a las fuentes de luz tradicionales. La iluminación LED ofrece ventajas en eficiencia, consumo de energía, diseño, tamaño y calidad de la luz. Durante más de 50 años, los investigadores han estado trabajando en mejoras LED. Su principal relevancia para la iluminación está aumentando rápidamente. Esta tesis se centra en un campo de aplicación importante, como son los focos. Se utilizan para enfocar la luz en áreas definidas, en objetos sobresalientes en condiciones profesionales. Esta iluminación de alto rendimiento requiere una calidad de luz definida, que incluya temperaturas ajustables de color correlacionadas (CCT), de alto índice de reproducción cromática (CRI), altas eficiencias, y colores vivos y brillantes. En el paquete LED varios chips de diferentes colores (rojo, azul, fósforo convertido) se combinan para cumplir con la distribución de energía espectral con alto CRI. Para colimar la luz en los puntos concretos deseados con un ángulo de emisión determinado, se utilizan blancos sintonizables y diversos colores de luz y ópticas secundarias. La combinación de una fuente LED de varios colores con elementos ópticos puede causar falta de homogeneidad cromática en la distribución espacial y angular de la luz, que debe resolverse en el diseño óptico. Sin embargo, no hay necesidad de uniformidad perfecta en el punto de luz debido al umbral en la percepción visual del ojo humano. Por lo tanto, se requiere una descripción matemática del nivel de uniformidad del color con respecto a la percepción visual. Esta tesis está organizada en siete capítulos. Después de un capítulo inicial que presenta la motivación que ha guiado la investigación de esta tesis, en el capítulo 2 se presentan los fundamentos científicos de la uniformidad del color en luces concentradas, como son: el espacio de color aplicado CIELAB, la percepción visual del color, los fundamentos de diseño de focos respecto a los motores de luz y ópticas no formadoras de imágenes, y los últimos avances en la evaluación de la uniformidad del color en el campo de los focos. El capítulo 3 desarrolla diferentes métodos para la descripción matemática de la distribución espacial del color en un área definida, como son la diferencia de color máxima, la desviación media del color, el gradiente de la distribución espacial de color, así como la suavidad radial y axial. Cada función se refiere a los diferentes factores que influyen en la visión, los cuales necesitan un tratamiento distinto que el de los datos que se tendrán en cuenta, además de funciones de ponderación que pre- y post-procesan los datos simulados o medidos para la reducción del ruido, la luminancia de corte, la aplicación de la ponderación de luminancia, la función de sensibilidad de contraste, y la función de distribución acumulativa. En el capítulo 4, se obtiene la función de mérito Usl para la estimación de la uniformidad del color percibida en focos. Se basó en los resultados de dos conjuntos de experimentos con factor humano realizados para evaluar la percepción visual de los sujetos de los patrones de focos típicos. El primer experimento con factor humano dio lugar al orden de importancia percibida de los focos. El orden de rango percibido se utilizó para correlacionar las descripciones matemáticas de las funciones básicas y la función ponderada sobre la distribución espacial del color, que condujo a la función Usl. 
El segundo experimento con factor humano probó la percepción de los focos bajo condiciones ambientales diversas, con el objetivo de proporcionar una escala absoluta para Usl, para poder así sustituir la opinión subjetiva personal de los individuos por una función de mérito estandarizada. La validación de la función Usl se presenta en relación con el alcance de la aplicación y sus condiciones, así como las limitaciones y restricciones, que se realizan en el capítulo 5. Se compararon los datos medidos y simulados de varios sistemas ópticos. Se discuten los campos de aplicación, así como validaciones y restricciones de la función. El capítulo 6 presenta el diseño del sistema de focos y su optimización. Una evaluación muestra el análisis de sistemas basados en el reflector y la lente TIR. Los sistemas ópticos simulados se comparan en la uniformidad del color Usl, sensibilidad a las sombras coloreadas, eficiencia e intensidad luminosa máxima. Se ha comprobado que no hay un sistema único que obtenga los mejores resultados en todas las categorías, y que una excelente uniformidad de color se pudo alcanzar por la conjunción de dos sistemas diferentes. Finalmente, el capítulo 7 presenta el resumen de esta tesis y la perspectiva para investigar otros aspectos. ABSTRACT Illumination with light-emitting diodes (LED) is increasingly replacing traditional light sources. LEDs provide advantages in efficiency, energy consumption, design, size and light quality. For more than 50 years, researchers have been working on LED improvements, and their relevance for illumination is rapidly increasing. This thesis is focused on one important field of application: spotlights. They are used to focus light on defined areas and outstanding objects under professional conditions. This high-performance illumination requires a defined light quality, including tunable correlated color temperatures (CCT), a high color rendering index (CRI), high efficiencies, and bright, vivid colors. Several differently colored chips (red, blue, phosphor-converted) are combined in the LED package to meet a spectral power distribution with high CRI, tunable white and several light colors, and secondary optics are used to collimate the light into the desired narrow spots with a defined angle of emission. The combination of a multi-color LED source and optical elements may cause chromatic inhomogeneities in the spatial and angular light distribution, which need to be solved in the optical design. However, there is no need for perfect uniformity in the spot due to the threshold in the visual perception of the human eye. Therefore, a mathematical description of the color uniformity level with regard to visual perception is required. This thesis is organized in seven chapters. After an initial one presenting the motivation that has guided the research of this thesis, Chapter 2 introduces the scientific basics of color uniformity in spotlights, including: the applied color space CIELAB, visual color perception, the spotlight design fundamentals with regard to light engines and nonimaging optics, and the state of the art in the evaluation of color uniformity in the far field of spotlights. Chapter 3 develops different methods for the mathematical description of the spatial color distribution in a defined area, namely the maximum color difference, the average color deviation, the gradient of the spatial color distribution, and the radial and axial smoothness.
Each function refers to different visual influencing factors and requires a different treatment of the data to be taken into account, together with weighting functions that pre- and post-process the simulated or measured data for noise reduction, luminance cutoff, the application of luminance weighting, the contrast sensitivity function, and the cumulative distribution function. In Chapter 4, the merit function Usl for the estimation of the perceived color uniformity in spotlights is derived. It is based on the results of two sets of human-factor experiments performed to evaluate the visual perception of typical spotlight patterns by subjects. The first human-factor experiment resulted in the perceived rank order of the spotlights. This rank order was used to correlate the mathematical descriptions of the basic functions and the weighted function concerning the spatial color distribution, which led to the Usl function. The second human-factor experiment tested the perception of spotlights under varied environmental conditions, with the objective of providing an absolute scale for Usl, so that the subjective personal opinion of individuals could be replaced by a standardized merit function. The validation of the Usl function is presented in Chapter 5 with regard to the application range and conditions, as well as its limitations and restrictions. Measured and simulated data of several optical systems were compared. Fields of application are discussed, as well as validations and restrictions of the function. Chapter 6 presents the spotlight system design and its optimization. An evaluation shows the analysis of reflector-based and TIR lens systems. The simulated optical systems are compared in terms of color uniformity Usl, sensitivity to colored shadows, efficiency, and peak luminous intensity. It has been found that no single system performed best in all categories, and that excellent color uniformity could be reached by two different system assemblies. Finally, Chapter 7 summarizes the conclusions of the present thesis and gives an outlook on further investigation topics.
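As a hedged example of the kind of computation behind the basic descriptors named above, the following Python sketch evaluates a maximum color difference and an average color deviation on a spatial CIELAB map of the spot after a luminance cutoff. The weighting functions and the Usl scaling itself are not reproduced, and the function and parameter names are assumptions for the example, not the thesis implementation.

```python
import numpy as np

def color_uniformity_basics(lab_map: np.ndarray, luminance: np.ndarray,
                            cutoff: float = 0.1) -> dict:
    """lab_map: (H, W, 3) array of CIELAB values over the spot;
    luminance: (H, W) array used to discard the dim surround."""
    mask = luminance > cutoff * luminance.max()    # simple luminance cutoff
    a = lab_map[..., 1][mask]
    b = lab_map[..., 2][mask]
    a_mean, b_mean = a.mean(), b.mean()
    # Chromatic distance of every retained point to the mean chromaticity.
    d = np.hypot(a - a_mean, b - b_mean)
    return {"max_color_difference": float(d.max()),
            "average_color_deviation": float(d.mean())}

# Example with synthetic data: a 64x64 spot with a mild chromatic gradient.
H = W = 64
y, x = np.mgrid[0:H, 0:W]
lab = np.dstack([70.0 * np.ones((H, W)), 0.05 * (x - W / 2), 0.02 * (y - H / 2)])
lum = np.exp(-((x - W / 2)**2 + (y - H / 2)**2) / (2 * 15.0**2))
print(color_uniformity_basics(lab, lum))
```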