30 results for Online application
Abstract:
Applied colorimetry is an important module in the program of the elective subject "Colour Science: industrial applications". This course is taught in the Optics and Optometry Degree and has been used as a testbed for applying new teaching and assessment techniques consistent with the new European Higher Education Area. In particular, the main objective was to reduce classroom attendance and to encourage both individual and collective student work. This approach is based on the idea that students are able to work at their own learning pace. Within this working dynamic, we propose online lab practicals based on Excel templates that our research group has developed ad hoc for different aspects of colorimetry, such as conversion between colour spaces, calculation of perceptual descriptors (hue, saturation, lightness), calculation of colour differences, dye colour matching, etc. The practical presented in this paper focuses on the learning of colour differences. The session is based on a specific Excel template that computes colour differences with different formulas (CIE ΔE and CIE ΔE94) in the CIELAB colour space and plots the corresponding graphs. This template is embedded in a website that organizes the students' work in a proper, structured way. The aim was to unify all the student work in a single website, so that students can learn autonomously, sequentially and at their own pace. To this end, all the tools, links and documents for each proposed activity are gathered together in order to meet specific guided objectives. In the context of educational innovation, this type of website is usually called a WebQuest. The design of a WebQuest is established according to criteria of usability and simplicity. Using a WebQuest offers clear advantages over the "Campus Virtual" toolbox available at the University of Alicante.
The Campus Virtual is an unfriendly environment for this specific purpose, as activities are organized in different sections depending on whether they are a discussion, an activity, a self-assessment or a download of materials. With this separation, it is harder for students to follow an organized sequence. Our WebQuest, in contrast, provides a more intuitive graphical environment in which all the tasks, and the resources needed to complete them, are grouped and organized in a linear sequence. In this way, guided student learning is optimized. Furthermore, with this simplification, students focus on learning instead of wasting effort navigating resources. Finally, this tool has a wide set of potential applications: online courses on applied colorimetry for postgraduate students, Open Course Ware, etc.
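As a sketch of the kind of computation such a template automates, the two colour-difference formulas can be written in a few lines (a minimal illustration assuming Lab triples as input; the function names are ours, not the template's):

```python
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean colour difference in CIELAB (CIE 1976 Delta E)."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(lab1, lab2)))

def delta_e_cie94(lab1, lab2, kL=1.0, K1=0.045, K2=0.015):
    """CIE 1994 colour difference with graphic-arts weighting."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    C1, C2 = math.hypot(a1, b1), math.hypot(a2, b2)
    dC = C1 - C2
    da, db = a1 - a2, b1 - b2
    # dH^2 = da^2 + db^2 - dC^2, clamped at 0 against rounding error
    dH2 = max(da ** 2 + db ** 2 - dC ** 2, 0.0)
    SL, SC, SH = 1.0, 1.0 + K1 * C1, 1.0 + K2 * C1
    return math.sqrt((dL / (kL * SL)) ** 2 + (dC / SC) ** 2 + dH2 / SH ** 2)
```

For a neutral reference colour (chroma 0) the two formulas coincide; they diverge as the reference chroma grows, which is exactly what the template's comparative plots make visible.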
Abstract:
Paper submitted to MML 2013, 6th International Workshop on Machine Learning and Music, Prague, September 23, 2013.
Abstract:
This paper presents an iOS application that guides visually impaired people around the campus of the University of Alicante by means of voice indications. The user interface is adapted to visually impaired users, with larger typography and larger areas for the touch buttons. Moreover, the application issues voice indications when users touch any element of the interface, telling them where they are and how to reach their destination.
Abstract:
Self-organising neural models have the ability to provide a good representation of the input space. In particular, the Growing Neural Gas (GNG) is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation. However, this type of learning is time-consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process so that it completes within a predefined time. This paper proposes a Graphics Processing Unit (GPU) parallel implementation of the GNG using the Compute Unified Device Architecture (CUDA). In contrast to existing algorithms, the proposed GPU implementation accelerates the learning process while maintaining good representation quality. Comparative experiments with iterative, parallel and hybrid implementations are carried out to demonstrate the effectiveness of the CUDA implementation. The results show that GNG learning with the proposed implementation achieves a speed-up of 6× over the single-threaded CPU implementation. The GPU implementation has also been applied to a real application with time constraints, the acceleration of 3D scene reconstruction for egomotion, in order to validate the proposal.
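The adaptation step that dominates GNG running time, and whose distance scan over all nodes is the natural target for GPU parallelization, can be sketched as follows (a minimal single-threaded illustration; the data layout and learning rates are ours, not those of the paper):

```python
def gng_adapt_step(nodes, edges, x, eps_w=0.2, eps_n=0.006):
    """One GNG adaptation step: find the two nodes nearest to the input x
    and move the winner (strongly) and its topological neighbours (weakly)
    towards x.  nodes: list of reference vectors; edges: set of (i, j) pairs.
    The distance scan over all nodes is the part a GPU parallelizes."""
    d2 = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in nodes]
    order = sorted(range(len(nodes)), key=d2.__getitem__)
    s1, s2 = order[0], order[1]          # winner and runner-up
    nodes[s1] = [wi + eps_w * (xi - wi) for wi, xi in zip(nodes[s1], x)]
    for i, j in edges:                   # pull the winner's neighbours
        if s1 in (i, j):
            n = j if i == s1 else i
            nodes[n] = [wi + eps_n * (xi - wi) for wi, xi in zip(nodes[n], x)]
    return s1, s2
```

A full GNG additionally ages edges, accumulates error and periodically inserts nodes; this fragment only shows the per-sample inner loop whose cost grows with the number of nodes and the input dimensionality.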
Abstract:
Humans and machines have shared the same physical space for many years. If they are to share that space, we want robots to behave like human beings, as this will facilitate their social integration and their interaction with humans, and give rise to intelligent behavior. To achieve this goal, we need to understand how human behavior is generated, analyzing the tasks carried out by our nervous system and how they relate to one another. Only then can we implement these mechanisms in robots. In this study, we propose a model of competencies, based on the human neuroregulatory system, for analyzing and decomposing behavior into functional modules. This model makes it possible to separate and locate the tasks to be implemented in a robot that displays human-like behavior. As an example, we show the application of the model to autonomous movement in unfamiliar environments and its implementation in several simulated and real robots with different physical configurations and devices of a different nature. The main result of this study is a model of competencies that is being used to build robotic systems capable of displaying human-like behaviors while taking into account the specific characteristics of robots.
Abstract:
With advances in the synthesis and design of chemical processes, there is an increasing need for more complex mathematical models, which must be accurate and reliable, with which to screen alternatives. Despite the wide availability of sophisticated tools for the simulation, optimization and synthesis of chemical processes, the user frequently wants to use the 'best available model'. In practice, however, these models are usually little more than a black box with a rigid input-output structure. In this paper we propose to handle all these models using generalized disjunctive programming, capturing the numerical characteristics of each model (equation-based, modular, noisy, etc.) and dealing with each of them according to its individual characteristics. The result is a hybrid modular/equation-based approach that allows complex processes to be synthesized using different models in a robust and reliable way. The capabilities of the proposed approach are discussed with a case study: the design of a utility system power plant, decomposed into its constitutive elements, each treated differently from a numerical point of view. Finally, numerical results and conclusions are presented.
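For reference, the generalized disjunctive programming framework mentioned above is conventionally stated in the following standard form, where the Boolean variables Y_ik select which model r_ik applies in each disjunction k (this is the generic GDP statement, not the paper's specific power plant model):

```latex
\begin{aligned}
\min_{x,\,c,\,Y}\quad & f(x) + \textstyle\sum_{k \in K} c_k \\
\text{s.t.}\quad      & g(x) \le 0, \\
& \bigvee_{i \in D_k}
  \begin{bmatrix} Y_{ik} \\ r_{ik}(x) \le 0 \\ c_k = \gamma_{ik} \end{bmatrix},
  \quad k \in K, \\
& \Omega(Y) = \mathrm{True}, \qquad
  Y_{ik} \in \{\mathrm{True},\,\mathrm{False}\}
\end{aligned}
```

Here $g$ collects the global constraints, $\gamma_{ik}$ the fixed charges, and $\Omega(Y)$ the logical relations between the Boolean selections; a modular or noisy black-box model simply replaces the algebraic constraints $r_{ik}(x) \le 0$ inside its disjunct.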
Abstract:
A microwave-based thermal nebulizer (MWTN) has been employed for the first time as an on-line preconcentration device in inductively coupled plasma atomic emission spectrometry (ICP-AES). Through the appropriate selection of the experimental conditions, the MWTN can be operated either as a conventional thermal nebulizer or as an on-line analyte preconcentration and nebulization device. Thus, when operating at microwave powers above 100 W with highly concentrated alcohol solutions, the amount of energy per unit mass of liquid solvent (EMR) is high enough to evaporate the solvent completely inside the system and, as a consequence, the analyte is deposited (and thereby preconcentrated) on the inner walls of the MWTN capillary. When the EMR is then reduced to an appropriate value (e.g., by reducing the microwave power at a constant sample uptake rate), the retained analyte is swept along by the liquid-gas stream and an analyte-enriched aerosol is generated and introduced into the plasma cell. Emission signals obtained with the MWTN operating in preconcentration-nebulization mode improved with increasing preconcentration time and sample uptake rate, as well as with decreasing nozzle inner diameter. When running with a pure ethanol solution under optimum experimental conditions, the MWTN in preconcentration-nebulization mode afforded limits of detection up to one order of magnitude lower than those obtained when operating the MWTN exclusively as a nebulizer. To validate the method, a multi-element analysis (i.e., Al, Ca, Cd, Cr, Cu, Fe, K, Mg, Mn, Na, Pb and Zn) of different commercial spirit samples was performed by ICP-AES. Analyte recoveries for all the elements studied ranged between 93% and 107%, and the linear dynamic range covered up to 4 orders of magnitude (i.e., from 0.1 to 1000 μg L−1). In these analyses, both MWTN operating modes afforded similar results. Nevertheless, the preconcentration-nebulization mode makes it possible to determine a larger number of analytes owing to its higher detection capability.
Abstract:
Customizing shoe manufacturing is one of the great challenges in the footwear industry. It entails a change of production model in which design becomes not only the central stage but also the main bottleneck. It is therefore necessary to accelerate this process by improving the accuracy of current methods. Rapid prototyping techniques are based on the reuse of manufactured footwear lasts, which can be modified with CAD systems to rapidly produce new shoe models. In this work, we present a fast shoe last reconstruction method that fits current design and manufacturing processes. The method is based on scanning the shoe last to obtain sections and establishing a fixed number of landmarks on those sections in order to reconstruct the 3D surface of the shoe last. Automated landmark extraction is accomplished with a self-organizing network, the growing neural gas (GNG), which is able to topographically map the low dimensionality of the network to the high dimensionality of the contour manifold without requiring a priori knowledge of the input space structure. Moreover, our GNG landmark method is tolerant to noise and eliminates outliers. Our method accelerates the surface reconstruction and filtering processes used by current shoe last design software by a factor of up to 12, and offers higher accuracy than methods of similar efficiency, such as voxel grid.
Abstract:
Automatic Text Summarization has been shown to be useful for Natural Language Processing tasks such as Question Answering or Text Classification, and for related fields of computer science such as Information Retrieval. Since Geographical Information Retrieval can be considered an extension of Information Retrieval, the generation of summaries could be integrated into these systems as an intermediate stage whose purpose is to reduce document length. In this manner, the access time for information searching is improved while relevant documents are still retrieved. We therefore propose in this paper the generation of two types of summaries (generic and geographical) at several compression rates, in order to evaluate their effectiveness in the Geographical Information Retrieval task. The evaluation was carried out using GeoCLEF as the evaluation framework, following an Information Retrieval perspective and without considering the geo-reranking phase commonly used in these systems. Although single-document summarization did not perform well in general, the slight improvements obtained for some of the proposed summary types, particularly those based on geographical information, lead us to believe that integrating Text Summarization with Geographical Information Retrieval may be beneficial; consequently, the experimental set-up developed in this research work serves as a basis for further investigation in this field.
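The idea of shortening documents at a fixed compression rate can be illustrated with a toy frequency-based extractive summarizer (our own minimal sketch, unrelated to the actual summarization system evaluated in the paper):

```python
import re
from collections import Counter

def summarize(text, rate=0.5):
    """Score sentences by average term frequency and keep the top
    `rate` fraction of them, preserving the original sentence order."""
    sents = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    tf = Counter(re.findall(r'\w+', text.lower()))
    def score(s):
        toks = re.findall(r'\w+', s.lower())
        return sum(tf[t] for t in toks) / max(len(toks), 1)
    k = max(1, round(len(sents) * rate))
    top = sorted(sorted(sents, key=score, reverse=True)[:k], key=sents.index)
    return ' '.join(top)
```

A geographically oriented variant would simply bias the scoring function towards sentences containing place names, which is the distinction the paper's generic versus geographical summaries draw.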
Abstract:
The present work concerns clay-graphene nanomaterials prepared by a green route using caramel from sucrose and two types of natural clays (montmorillonite and sepiolite) as precursors, with the aim of evaluating their potential use in hydrogen storage. Impregnation of the clay substrates with caramel in aqueous media, followed by thermal treatment of these clay-caramel intermediates in the absence of oxygen, gives rise to graphene-like materials that remain strongly bound to the silicate support. The nature of the resulting materials was characterized by techniques such as XRD, Raman spectroscopy and TEM, as well as by adsorption isotherms of N2, CO2 and H2O. These carbon-clay nanocomposites can act as adsorbents for hydrogen storage, achieving, at 298 K and 20 MPa, over 0.1 wt% of hydrogen adsorption excess relative to the total mass of the system, and a maximum value close to 0.4 wt% of hydrogen specifically relative to the carbon mass. The very high isosteric heat of hydrogen sorption determined from adsorption isotherms at different temperatures (14.5 kJ mol−1) fits well with the theoretical values available for hydrogen storage on materials that strongly stabilize the H2 molecule upon adsorption.
Abstract:
This work reports the synthesis of nanosheets of the layered titanosilicate JDF-L1 supported on commercial E-type glass fibers, with the aim of developing novel nanoarchitectures useful as robust and easy-to-handle hydrogen adsorbents. These materials are prepared by hydrothermal reaction from the corresponding gel precursor in the presence of the glass support. Because of the basic character of the synthesis medium, silica from the silicate-based glass fibers can take part in the reaction, cementing the associated titanosilicate and giving rise to strong linkages to the support, with very stable heterostructures as a result. The nanoarchitectures built up by this approach promote the growth and disposition of the titanosilicate nanosheets as a house-of-cards radially distributed around the fiber axis. Such an open arrangement is a suitable geometry for potential uses in adsorption and catalytic applications where the active surface has to remain accessible. The crystalline titanosilicate phase represents about 12 wt % of the system, and this adsorbent fraction can achieve, at 298 K and 20 MPa, 0.14 wt % hydrogen adsorption with respect to the total mass of the system. Through postsynthesis treatments, small amounts of Pd (<0.1 wt %) have been incorporated into the resulting nanoarchitectures in order to improve their hydrogen adsorption capacity. The Pd-layered titanosilicate supported on glass fibers was then tested as a hydrogen adsorbent at various pressures and temperatures, giving values of around 0.46 wt % at 298 K and 20 MPa. A hydrogen spillover mechanism involving the titanosilicate framework and the Pd nanoparticles is proposed to explain the large increase in hydrogen uptake capacity after the incorporation of Pd into the nanoarchitecture.
Abstract:
Different kinds of algorithms can be chosen to compute elementary functions. Among them, the shift-and-add algorithms are worth mentioning because they have been specifically designed to be very simple and to save computer resources. In fact, essentially the only operations involved in these methods are additions and shifts, which can be performed easily and efficiently by a digital processor. Shift-and-add algorithms achieve fairly good precision with low-cost iterations. The most famous algorithm of this type is CORDIC, which can approximate a wide variety of functions with only a slight change in its iterations. In this paper, we analyze the requirements of some engineering and industrial problems in terms of the type of operands and the functions to approximate. We then propose the application of shift-and-add algorithms based on CORDIC to these problems, and compare the different methods in terms of the precision of the results and the number of iterations required.
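The rotation mode of CORDIC can be sketched in a few lines (a floating-point illustration of the classical algorithm; in fixed-point hardware the multiplications by 2**-i are plain shifts and the angle table and gain are precomputed constants):

```python
import math

# Precomputed rotation angles atan(2^-i) and the accumulated CORDIC gain.
N = 32
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
K = 1.0
for i in range(N):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_sin_cos(theta):
    """Approximate (sin, cos) of theta in [-pi/2, pi/2] using only
    add/subtract steps and scalings by powers of two."""
    x, y, z = K, 0.0, theta          # start pre-scaled by 1/gain
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0  # rotate towards residual angle z
        x, y, z = (x - d * y * 2.0 ** -i,
                   y + d * x * 2.0 ** -i,
                   z - d * ANGLES[i])
    return y, x                      # (sin(theta), cos(theta))
```

The same iteration with a different decision rule and angle table yields hyperbolic functions, logarithms and square roots, which is the "slight change" the abstract alludes to.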
Abstract:
In this study, a digital CMOS camera was calibrated for use as a non-contact colorimeter for measuring the color of granite artworks. The low chroma values of the granite, which stimulate the three color channels of the camera almost equally, proved to be the most challenging aspect of the task. The appropriate parameters for converting the device-dependent RGB color space into a device-independent color space were established. For this purpose, the color of a large number of Munsell samples (corresponding to the previously defined color gamut of granite) was measured with the digital camera and with a spectrophotometer (the reference instrument), and the color data were compared using the CIELAB color formulae. The best correlations between measurements were obtained when the camera operated at 10 bits and the spectrophotometer measured in SCI mode. Finally, the calibrated instrument was used successfully to measure the color of six commercial varieties of Spanish granite.
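The device-independent mapping that such a calibration targets can be illustrated with the standard sRGB-to-CIELAB conversion (a generic textbook pipeline with a D65 white point, not the camera model actually fitted in the paper):

```python
def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB values to CIELAB (D65 reference white):
    inverse gamma -> linear RGB -> XYZ -> Lab."""
    def inv_gamma(u):
        u /= 255.0
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4
    rl, gl, bl = inv_gamma(r), inv_gamma(g), inv_gamma(b)
    # Linear RGB -> XYZ using the sRGB/D65 matrix
    x = 0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl
    y = 0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = 0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl
    # XYZ -> Lab, normalized by the D65 white point
    def f(t):
        return t ** (1 / 3) if t > 216 / 24389 else (24389 / 27 * t + 16) / 116
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

A camera calibration replaces the fixed sRGB matrix and gamma with coefficients fitted to the measured Munsell samples; the low-chroma granite colors make that fit hard precisely because the three channels carry nearly identical signals.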
Abstract:
Modern compilers offer a large and ever-increasing number of options that can modify the features and behavior of a compiled program. Many of these options go unused because exploiting them requires comprehensive knowledge of both the underlying architecture and the internal processes of the compiler. In this context, there is usually not a single design goal but a more complex set of objectives, and the dependencies between the different goals are difficult to infer a priori. This paper proposes a strategy for tuning the compilation of any given application, accomplished by automatically varying the compilation options through multi-objective optimization and evolutionary computation driven by the NSGA-II algorithm. This makes it possible to find compilation options that simultaneously optimize several objectives. The advantages of our proposal are illustrated by a case study based on the well-known Apache web server. Our strategy has been able to find improvements of up to 7.5% in context switches and up to 27% in L2 cache misses, and also uncovers the most important bottlenecks affecting the application's performance.
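The multi-objective selection at the heart of NSGA-II rests on Pareto dominance: one flag configuration is kept only if no other configuration beats it on every measured objective at once. A minimal sketch (with hypothetical objective tuples standing in for measured metrics such as context switches and L2 cache misses, both minimized):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of candidate configurations: those whose
    objective vectors are not dominated by any other candidate."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

NSGA-II additionally ranks dominated candidates into successive fronts and uses crowding distance to keep the surviving front spread out, but the dominance test above is the primitive every generation is built on.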