990 results for Graphical programming language
Abstract:
Data on zooplankton abundance and biovolume were collected in concert with data on the biophysical environment at 9 stations in the North Atlantic, from the Iceland Basin in the east to the Labrador Sea in the west. The data were sampled along vertical profiles by a Laser Optical Plankton Counter (LOPC, Rolls Royce Canada Ltd.) that was mounted on a carousel water sampler together with a Conductivity-Temperature-Depth sensor (CTD, SBE19plusV2, Seabird Electronics, Inc., USA) and a fluorescence sensor (F, ECO Puck chlorophyll a fluorometer, WET Labs Inc., USA). Based on the LOPC data, abundance (individuals/m³) and biovolume (mm³/m³) were calculated as described in the LOPC Software Operation Manual [(Anonymous, 2006), http://www.brooke-ocean.com/index.html]. LOPC data were regrouped into 49 size groups of equal log10(body volume) increments, see Edvardsen et al. (2002, doi:10.3354/meps227205). LOPC data quality was checked as described in Basedow et al. (2013, doi:10.1016/j.pocean.2012.10.005). Fluorescence was roughly converted into chlorophyll based on filtered chlorophyll values obtained from station 10 in the Labrador Sea; because of the low number of filtered samples used for the conversion, the resulting chlorophyll values should be treated with care. CTD data were screened for erroneous (out-of-range) values and then averaged to the same frequency as the LOPC data (2 Hz). All data were processed using purpose-built scripts written in the Python programming language. The LOPC is an optical instrument designed to count and measure particles (0.1 to 30 mm equivalent spherical diameter) in the water column, see Herman et al. (2004, doi:10.1093/plankt/fbh095). The size of particles as equivalent spherical diameter (ESD) was computed as described in the manual (Anonymous, 2006), and in more detail in Checkley et al. (2008, doi:10.4319/lo.2008.53.5_part_2.2123) and Gaardsted et al. (2010, doi:10.1111/j.1365-2419.2010.00558.x).
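As a rough illustration of the size-group regrouping step, here is a minimal sketch in Python (the language the abstract itself mentions): particle body volumes are binned into 49 groups of equal log10(body volume) increments. The bin limits shown are hypothetical stand-ins derived from the LOPC's nominal 0.1-30 mm ESD range; the actual limits follow Edvardsen et al. (2002).

```python
import numpy as np

# Body volume of a sphere from its equivalent spherical diameter (ESD).
def esd_to_volume(esd_mm):
    return (np.pi / 6.0) * esd_mm ** 3

# Hypothetical particle sizes (mm ESD) used as example input.
esd_mm = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
volume_mm3 = esd_to_volume(esd_mm)

# 49 size groups of equal log10(body volume) increments, spanning the
# LOPC's nominal 0.1-30 mm ESD range (50 edges delimit 49 bins).
edges = np.logspace(np.log10(esd_to_volume(0.1)),
                    np.log10(esd_to_volume(30.0)), num=50)

group = np.digitize(volume_mm3, edges) - 1    # size-group index per particle
counts = np.bincount(group, minlength=49)     # particle count per size group
print(group)                                  # -> [ 7 13 19 25 31]
```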
Abstract:
Analyzing how software engineers use the Integrated Development Environment (IDE) is essential to a better understanding of how engineers carry out their daily tasks. Spotter is a code search engine for the Pharo programming language. Since its inception, Spotter has been rapidly and broadly adopted within the Pharo community. However, little is known about how practitioners employ Spotter to search and navigate within the Pharo code base. This paper evaluates how software engineers use Spotter in practice. To achieve this, we remotely gather user actions, called events, which are then visually rendered using a dedicated navigation tool chain; sequences of events are represented using a visual alphabet. We found a number of usage patterns and identified underused Spotter features. Such findings are essential for improving Spotter.
Abstract:
Since the beginning of digital video coding, both the uncompressed video fed to the encoder and the uncompressed decoded output have used 8 bits to represent each sample, regardless of resolution, chroma subsampling scheme, etc. Likewise, video coding standards require encoders to work internally with 8 bits of precision when operating on samples that have not yet been transformed to the frequency domain. However, the widely adopted H.264 standard allows video with more than 8 bits per sample to be coded in some of its professionally oriented profiles. When these profiles are used, all operations on samples still in the spatial domain are carried out with the same precision as the input video. This increase in internal precision has the potential to allow more precise predictions, reducing the residual to be encoded and thus increasing coding efficiency for a given bitrate. The goal of this Project is to study, using the objective video quality metrics PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity), the effects on coding efficiency and performance of an H.264 10-bit encoding/decoding chain compared to a traditional 8-bit chain. To this end the open-source x264 encoder is used, which can encode video with 8 and 10 bits per sample using the H.264 High, High 10, High 4:2:2 and High 4:4:4 Predictive profiles. Given that no proper tools exist for computing PSNR and SSIM values of video with more than 8 bits per sample and chroma subsampling schemes other than 4:2:0, an analysis application written in the C programming language is also developed as part of this Project. The application computes both metrics from two uncompressed video files in YUV or Y4M format.
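For reference, the PSNR part of such a tool generalizes to higher bit depths simply by using a peak value of 2^b - 1 for b-bit samples. Below is a minimal sketch in Python/NumPy rather than the project's C implementation; the frame shapes and sample values are hypothetical.

```python
import numpy as np

def psnr(ref, test, bit_depth=10):
    """PSNR in dB between two planes of b-bit samples (e.g. Y planes
    extracted from two uncompressed YUV/Y4M files)."""
    peak = (1 << bit_depth) - 1          # 255 for 8-bit, 1023 for 10-bit
    diff = ref.astype(np.float64) - test.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")              # identical planes
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical 10-bit 1280x720 luma planes.
rng = np.random.default_rng(0)
ref = rng.integers(0, 1024, (720, 1280), dtype=np.uint16)
test = np.clip(ref + rng.integers(-2, 3, ref.shape), 0, 1023).astype(np.uint16)
print(f"PSNR: {psnr(ref, test):.2f} dB")
```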
Abstract:
Digital signage is a digital communications technology that has been used in recent years to replace traditional printed advertising. This technology improves the presentation and promotion of the advertised products and facilitates the exchange of information thanks to its placement in public places or outdoors. The applications of this new advertising method are very varied, ranging from private settings in companies to public places such as shopping malls. Although the first and main use of digital signage is advertising that prompts the user to purchase products, the possibility of offering further information about particular items through new technologies is also very important in this field. The application developed in this project is a program built in Adobe Flash, with its content defined in XML. Through a touch screen, a museum visitor can interactively access a menu presenting the different styles of art of a given period of history. A timeline gives access to information about every object on display in the exhibition, together with images of the most important details of each object, details that cannot be seen with the naked eye since handling the objects is not permitted. The interactive screen serves exhibition visitors as an extra tool for gathering information about what they are seeing, through a technology that is new and easy for everyone to use, since it requires nothing but one's own hands. Ease of use is very important in applications like this one, because the end user may have no technical background, so the information must be presented clearly. In conclusion, digital signage is an expanding market and companies should invest in content development: technologies will advance whether digital signage does or not, and this sector could be very useful in the not-too-distant future, since the information it can convey to viewers everywhere is far more valuable and useful than that provided by a simple printed poster on a billboard.
Abstract:
We present a parallel graph narrowing machine, which is used to implement a functional logic language on a shared memory multiprocessor. It is an extension of an abstract machine for a purely functional language. The result is a programmed graph reduction machine which integrates the mechanisms of unification, backtracking, and independent and-parallelism. In the machine, the subexpressions of an expression can run in parallel. In the case of backtracking, the structure of an expression is used to avoid the reevaluation of subexpressions as far as possible. Deterministic computations are detected. Their results are maintained and need not be reevaluated after backtracking.
Abstract:
This paper presents a technique for achieving a class of optimizations related to the reduction of checks within cycles. The technique uses both Program Transformation and Abstract Interpretation. After a first pass of an abstract interpreter which detects simple invariants, program transformation is used to build a hypothetical situation that simplifies some predicates that should be executed within the cycle. This transformation implements the heuristic hypothesis that once conditional tests hold they may continue doing so recursively. Specialized versions of predicates are generated to detect and exploit those cases in which the invariance may hold. Abstract interpretation is then used again to verify the truth of such hypotheses and confirm the proposed simplification. This allows optimizations that go beyond those possible with only one pass of the abstract interpreter over the original program, as is normally the case. It also allows selective program specialization using a standard abstract interpreter not specifically designed for this purpose, thus simplifying the design of this already complex module of the compiler. In the paper, a class of programs amenable to such optimization is presented, along with some examples and an evaluation of the proposed techniques in some application areas such as floundering detection and reducing run-time tests in automatic logic program parallelization. The analysis of the examples presented has been performed automatically by an implementation of the technique using existing abstract interpretation and program transformation tools.
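To convey the flavor of the transformation (an informal analogue only: the paper operates on logic programs via abstract interpretation, while the sketch below is hypothetical Python), a test executed inside a cycle is hoisted into a one-time hypothesis check that selects a specialized, check-free version of the cycle:

```python
def scale_checked(xs, k):
    """Original cycle: the run-time test executes on every iteration."""
    out = []
    for x in xs:
        if not isinstance(x, (int, float)):   # test inside the cycle
            raise TypeError("non-numeric element")
        out.append(k * x)
    return out

def scale(xs, k):
    """Transformed program: verify the hypothesis once; if it holds,
    run a specialized version of the cycle with the test removed
    (the invariant, once established, keeps holding recursively)."""
    if all(isinstance(x, (int, float)) for x in xs):
        return [k * x for x in xs]            # specialized, check-free cycle
    return scale_checked(xs, k)               # fall back to the original

print(scale([1, 2.5, 3], 2))                  # -> [2, 5.0, 6]
```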
Abstract:
While logic programming languages offer a great deal of scope for parallelism, there is usually some overhead associated with the execution of goals in parallel because of the work involved in task creation and scheduling. In practice, therefore, the "granularity" of a goal, i.e. an estimate of the work available under it, should be taken into account when deciding whether or not to execute a goal concurrently as a separate task. This paper describes a method for estimating the granularity of a goal at compile time. The runtime overhead associated with our approach is usually quite small, and the performance improvements resulting from the incorporation of grainsize control can be quite good. This is shown by means of experimental results.
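A minimal sketch of the resulting run-time behavior (in Python for illustration; the paper targets logic programs, and the cost model and threshold below are hypothetical): each independent goal is spawned as a separate task only when its compile-time granularity estimate exceeds the task-creation overhead.

```python
from concurrent.futures import ThreadPoolExecutor

def estimated_grainsize(chunk):
    """Hypothetical compile-time cost estimate for one goal:
    work proportional to the length of the input list."""
    return len(chunk)

THRESHOLD = 50_000      # hypothetical break-even point vs. task creation

def solve(chunk):
    # Stand-in for sequentially executing one goal.
    return sum(x * x for x in chunk)

def run_goals(chunks):
    """Run a conjunction of independent goals: large goals become
    separate tasks, small goals execute inline."""
    inline, futures = [], []
    with ThreadPoolExecutor() as pool:
        for chunk in chunks:
            if estimated_grainsize(chunk) > THRESHOLD:
                futures.append(pool.submit(solve, chunk))   # parallel task
            else:
                inline.append(solve(chunk))                 # run inline
        return inline + [f.result() for f in futures]

print(run_goals([list(range(100_000)), list(range(10))]))
```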
Abstract:
Compile-time program analysis techniques can be applied to Web service orchestrations to prove or check various properties. In particular, service orchestrations can be subjected to resource analysis, in which safe approximations of upper and lower resource usage bounds are deduced. A uniform analysis can be simultaneously performed for different generalized resources that can be directly correlated with cost- and performance-related quality attributes, such as invocations of partners, network traffic, number of activities, iterations, and data accesses. The resulting safe upper and lower bounds do not depend on probabilistic assumptions, and are expressed as functions of size or length of data components from an initiating message, using a fine-grained structured data model that corresponds to the XML style of information structuring. The analysis is performed by transforming a BPEL-like representation of an orchestration into an equivalent program in another programming language for which the appropriate analysis tools already exist.
Abstract:
The objective of this research project is to analyze the set of ten time series of the prices of ten metals (silver, aluminum, gold, copper, nickel, palladium, lead, platinum, tin and zinc) over the period January 2008 to September 2013, with the aim of reducing the dimensionality of the data set and making it easier both to compare the series and to predict future values. To this end we apply a methodology that makes it possible to deduce, from a single summary series and a set of multiplicative coefficients, the evolution of the behavior of any of the other metals. Despite the very scarce documentation available on the subject, a recent methodology called the "Metodología del haz de rectas" ("pencil-of-lines methodology") is assessed and applied. Matlab® was used as the working tool for the programs developed and for plotting the graphs associated with the project, given its enormous power and its simple programming language, versatile enough for the tasks required.
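The core of the multiplicative model can be sketched in a few lines (a minimal illustration with synthetic data, in Python rather than the project's Matlab®; the coefficient fitting shown is plain least squares, not the full pencil-of-lines methodology):

```python
import numpy as np

# Hypothetical data: rows = metals, columns = monthly prices (69 months,
# January 2008 to September 2013).
rng = np.random.default_rng(0)
summary = 100 + np.cumsum(rng.normal(size=69))        # common summary series
prices = np.outer([1.0, 2.5, 40.0], summary)          # three "metals"
prices += rng.normal(scale=0.5, size=prices.shape)    # observation noise

# Fit one multiplicative coefficient per metal: price_i(t) ~ k_i * summary(t).
# Least-squares solution: k_i = <price_i, summary> / <summary, summary>.
coeffs = prices @ summary / (summary @ summary)

# Reconstruct any metal's series from the summary series alone.
reconstructed = np.outer(coeffs, summary)
rel_error = np.abs(reconstructed - prices) / np.abs(prices)
print("coefficients:", np.round(coeffs, 3))
print("max relative error:", rel_error.max())
```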
Abstract:
The Nintendo DS portable console has received great interest within the independent developers' community, with a huge homebrew scene. The 2D capabilities of the console are well known and heavily exploited, since most efforts of amateur creators have been focused on this area; its 3D engine (which handles the on-screen representation of three-dimensional models), however, is not equally used. The main objective of this project is therefore to assess the graphics capabilities of the Nintendo DS. For this purpose, a library of functions in the C programming language has been written. The library allows the programmer to take advantage of the possibilities the console offers in the 3D domain, and it serves the homebrew community as a tool for creating 3D applications in a simple way, since it has been designed as a modular and accessible system. Regarding the rendering process, several conclusions were drawn. First, it is possible to assign several colour components to the same vertex (material colour reactive to the lighting, direct per-vertex colour, and texture colour), both independently and simultaneously. This can be used to apply various effects to the model, such as pre-computed lighting or the simulation of a texture using per-vertex colour, saving video memory. In addition, a multi-layer rendering system has been implemented that allows several render passes over the same model, making it possible to apply a second texture blended with the main one or to produce a spherical reflection effect. One of the main advances of this tool over existing ones is its animation support. The renderer provides, on the one hand, transform animation, in which meshes or groups of vertices of the model are animated through the movement of an associated joint that determines their position and rotation at each frame of the animation. On the other hand, an animation system based on vertex sampling has been implemented, in which the position of the vertices is specified at every instant of the animation, generating frame by frame the poses that make up the movement (the latter method is necessary when a mesh cannot be animated by transform). A single model can contain several skeletons, animated independently of one another, and each skeleton can define several animation sets corresponding to different contextual movements (walking, running, jumping, etc.). The system also allows any joint to be extracted so that its transform can be attached to an external static object, which then follows the movement of the animation; in this way an object can, for example, be equipped in the hand of a character. Finally, several effects useful for building three-dimensional scenes have been implemented, such as billboarding (both spherical and cylindrical), which constrains the rotation of a model so that it always faces the camera. This makes it possible to emulate the appearance of a three-dimensional object with a flat image, saving geometry, or to implement particle effects. A texture animation system based on sub-images has also been implemented, which generates movement effects using images as textures, without transforming any geometry.
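As an aside on one of the listed effects, cylindrical billboarding amounts to a single yaw computation per object. Here is a hedged sketch in Python rather than the library's C; function and variable names are hypothetical.

```python
import math

def cylindrical_billboard_yaw(obj_pos, cam_pos):
    """Yaw angle (radians, about the vertical Y axis) that makes a model
    at obj_pos face the camera at cam_pos. Pitch is left untouched,
    which is what distinguishes cylindrical from spherical billboarding
    (a spherical billboard would also cancel the pitch)."""
    dx = cam_pos[0] - obj_pos[0]
    dz = cam_pos[2] - obj_pos[2]
    return math.atan2(dx, dz)

# Hypothetical scene: object at the origin, camera at (5, 0, 5).
print(math.degrees(cylindrical_billboard_yaw((0, 0, 0), (5, 0, 5))))  # 45.0
```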
Abstract:
The aim of this project is the development of a MIDI interface, based on digital image processing techniques, able to control several parameters of an audio application using gestural information: the movement of the hands. The image is captured by a commercial Kinect camera and the data it produces are processed in real time. The purpose is to convert the positions of several control points of the body into MIDI musical control information. The interface has been developed in the Processing programming language and environment, which is based on Java, freely available, and easy to use. The audio application selected is Ableton Live, version 8.2.2, chosen because it is useful both for music composition and for live performance, the latter being the main intended use of the interface. The project is divided into two main blocks: the graphic design of the controller, and the management of the musical information. The first block justifies the design of the controller, which consists of virtual buttons: it explains how the controller works and, briefly, the function of each button (the latter topic is covered in detail in Annex II: User Manual). The second block explains the path the MIDI information follows from the gestural processor to the music synthesizer. The path begins in Processing, from which messages are sent and later interpreted by the selected sequencer, Ableton Live. After the detailed explanation of the project, the author's conclusions are presented, including the pros and cons to bear in mind in order to get the most out of the controller, together with possible future lines of work. A budget is also provided, broken down into material and personnel costs.
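The essential mapping such an interface performs, from a tracked hand coordinate to a MIDI Control Change message, fits in a few lines. A minimal sketch follows (in Python rather than the project's Processing environment, with a hypothetical controller number; the 3-byte message layout is standard MIDI):

```python
def hand_to_cc(y_norm, control=1, channel=0):
    """Map a normalized hand height (0.0 = bottom, 1.0 = top) to a raw
    3-byte MIDI Control Change message: status byte, controller, value."""
    value = max(0, min(127, round(y_norm * 127)))
    status = 0xB0 | (channel & 0x0F)   # 0xB0 = Control Change, channel 1
    return bytes([status, control & 0x7F, value])

# Hypothetical Kinect reading: hand at 75% of the tracked height range.
print(hand_to_cc(0.75).hex(" "))       # -> "b0 01 5f": CC#1 = 95, channel 1
```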