51 results for Reinforcement Learning, resource-constrained devices, iOS devices, on-device machine learning
Abstract:
Driven by the latest discoveries enabled by recent technological advances and space missions, the study of asteroids has awakened the interest of the scientific community. In fact, asteroid missions have become very popular in recent years (Hayabusa, Dawn, OSIRIS-REx, ARM, AIMS-DART, ...), motivated by their outstanding scientific interest. Asteroids are fundamental constituents in the evolution of the Solar System, can be seen as vast concentrations of valuable natural resources, and are also considered strategic targets for the future of space exploration. It has long been hypothesized that small near-Earth asteroids could be captured and delivered to the vicinity of the Earth in order to allow affordable access to them for in-situ science, resource utilization and other purposes.
On the other side of the balance, asteroids are often seen as potential planetary hazards, since impacts with the Earth happen all the time, and an asteroid large enough could eventually trigger catastrophic events. In spite of the severity of such occurrences, they are also utterly hard to predict. In fact, the rich dynamical aspects of asteroids, their complex modeling and observational uncertainties make it exceptionally challenging to predict their future position accurately enough. This becomes particularly relevant when asteroids exhibit close encounters with the Earth, and more so when these happen recurrently. In such situations, where mitigation measures may need to be taken, it is of paramount importance to be able to accurately estimate their trajectories and collision probabilities. As a consequence, advanced tools are needed to model their dynamics and accurately predict their orbits, as well as new technological concepts to manipulate their orbits if necessary. The goal of this Thesis is to provide new methods, techniques and solutions to address these challenges. The contributions of this Thesis fall into two areas: one devoted to the numerical propagation of asteroids, and another to asteroid deflection and capture concepts. Hence, the first part of the dissertation presents novel advances applicable to the high-accuracy dynamical propagation of near-Earth asteroids using regularization and perturbation techniques, with a special emphasis on the DROMO method, whereas the second part presents pioneering ideas for asteroid retrieval missions and discusses the use of an "ion beam shepherd" (IBS) for asteroid deflection purposes.
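As background for the propagation part of the thesis, the sketch below (standard notation, not the thesis's own formulation) shows the perturbed two-body problem in Cowell form, which regularized element methods such as DROMO reformulate into better-conditioned, slowly varying variables for long numerical propagations:

```latex
% Perturbed two-body problem in Cowell form (standard notation);
% \mathbf{a}_p gathers all perturbing accelerations (third bodies,
% non-gravitational effects, ...). Regularized methods such as DROMO
% recast this stiff Cartesian formulation into generalized elements.
\ddot{\mathbf{r}} = -\frac{\mu}{r^{3}}\,\mathbf{r} + \mathbf{a}_{p},
\qquad r = \lVert \mathbf{r} \rVert
```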
Abstract:
This approach aims at aligning, unifying and expanding the set of sentiment lexicons available on the web in order to increase the robustness of their coverage. A sentiment lexicon is a critical and essential resource for tagging subjective corpora on the web or elsewhere. In many situations, the multilingual property of the sentiment lexicon is important because the writer uses two languages alternately in the same text, message or post. Our USL approach computes the unified strength of polarity of each lexical entry based on the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between 1 and -1, where 1 indicates that the lexical entries are perfectly correlated, 0 indicates no correlation, and -1 means they are perfectly inversely correlated, and on the UnifiedMetrics procedure for CPU and GPU, respectively.
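For reference, the Pearson correlation coefficient mentioned above has the standard textbook form (not a formula quoted from the paper), where x_i and y_i would be the polarity scores that two lexicons assign to their n shared entries:

```latex
% Pearson correlation between the polarity scores of shared lexical entries;
% r_{xy} lies in [-1, 1].
r_{xy} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}
              {\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^{2}}\;
               \sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^{2}}}
```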
Abstract:
This paper describes the UPM system for the translation task at the EMNLP 2011 workshop on statistical machine translation (http://www.statmt.org/wmt11/); it has been used for both directions: Spanish-English and English-Spanish. The system is based on Moses with two new modules for pre- and post-processing the sentences. The main contribution is the proposed method (based on the similarity with the source-language test set) for selecting the sentences used to train the models and adjust the weights. With this system, we obtained a BLEU score of 23.2 for Spanish-English and 21.7 for English-Spanish.
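The abstract does not detail the similarity measure, so the following Python sketch is only illustrative: it ranks candidate training sentences by lexical overlap with the source-language test set and keeps the most similar fraction (the function name, scoring and threshold are assumptions, not the paper's actual method):

```python
from collections import Counter

def select_similar_sentences(candidates, test_set, keep_ratio=0.2):
    """Rank candidate training sentences by lexical overlap with the
    source-language test set and keep the most similar fraction.
    Illustrative stand-in for the paper's selection method."""
    # Vocabulary of the test set, weighted by frequency.
    test_vocab = Counter(w for sent in test_set for w in sent.lower().split())

    def overlap(sentence):
        words = sentence.lower().split()
        if not words:
            return 0.0
        # Fraction of candidate tokens that also occur in the test set.
        return sum(1 for w in words if w in test_vocab) / len(words)

    ranked = sorted(candidates, key=overlap, reverse=True)
    return ranked[: max(1, int(len(ranked) * keep_ratio))]

# Toy usage example.
train = ["la casa es roja", "el perro corre", "me gusta programar"]
test = ["la casa roja es bonita"]
print(select_similar_sentences(train, test, keep_ratio=0.5))
```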
Abstract:
This paper proposes an architecture, based on statistical machine translation, for developing the text normalization module of a text-to-speech conversion system. The main target is to generate a language-independent text normalization module, based on data and flexible enough to deal with all situations presented in this task. The proposed architecture is composed of three main modules: a tokenizer module for splitting the text input into a token graph (tokenization), a phrase-based translation module (token translation) and a post-processing module for removing some tokens. This paper presents initial experiments for numbers and abbreviations. The very good results obtained validate the proposed architecture.
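A minimal Python sketch of the three-module pipeline described above; the rules and token classes here are invented placeholders, whereas the paper's token-translation module is phrase-based and learned from data:

```python
import re

def tokenize(text):
    """Split the input into tokens, tagging digit runs so they can be
    expanded later (a toy stand-in for the paper's token graph)."""
    return [("NUM", tok) if tok.isdigit() else ("WORD", tok)
            for tok in re.findall(r"\d+|\w+|[^\w\s]", text)]

def translate_tokens(tokens):
    """Toy 'token translation': expand small numbers into words.
    The paper trains a phrase-based SMT module on data instead."""
    units = ["zero", "one", "two", "three", "four",
             "five", "six", "seven", "eight", "nine", "ten"]
    return [units[int(tok)] if kind == "NUM" and int(tok) <= 10 else tok
            for kind, tok in tokens]

def postprocess(tokens):
    """Remove tokens that should not be spoken (here: bare punctuation)."""
    return " ".join(t for t in tokens if re.search(r"\w", t))

print(postprocess(translate_tokens(tokenize("I have 3 cats, not 10."))))
# -> "I have three cats not ten"
```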
Abstract:
This paper describes the UPM system for the Spanish-English translation task at the NAACL 2012 workshop on statistical machine translation. The system is based on Moses. We have used all available free corpora, cleaning them and deleting some repetitions. In this paper, we also propose a technique for selecting the sentences used to tune the system, based on their similarity with the sentences to translate. With our approach, we improve the BLEU score from 28.37% to 28.57%. As a result of the WMT12 challenge, we obtained a 31.80% BLEU score with the 2012 test set. Finally, we explain different experiments that we carried out after the competition.
Abstract:
This paper describes the text normalization module of a fully trainable text-to-speech conversion system and its application to number transcription. The main target is to generate a language-independent text normalization module based on data instead of on expert rules. This paper proposes a general architecture based on statistical machine translation techniques, composed of three main modules: a tokenizer for splitting the text input into a token graph, a phrase-based translation module for token translation, and a post-processing module for removing some tokens. This architecture has been evaluated for number transcription in several languages: English, Spanish and Romanian. Number transcription is an important aspect of the text normalization problem.
Abstract:
The major obstacle to the widespread utilization of parallel machines is the lack of programming tools that allow the development of software portable between machines with different performance. This dissertation analyzes whether languages with explicit parallelism fulfil this requirement. The approach of using programs with more parallelism than available on the machine (parallel slackness) is presented. This technique can solve the efficiency problems that appear when programs with explicit parallelism are executed on machines with too coarse a granularity; with it, programs written in these languages can therefore be ported efficiently to different machines. A new abstract model of parallelism, allowing the generic study of the implementation of languages with explicit parallelism, is developed. In this model, a parallel system is described by a hierarchy of parallel virtual machines. This generic analysis is applied to the Ada language: Ada-specific features whose efficient implementation is problematic are identified and analyzed, and the change proposals to the language within the Ada 9X revision process are also examined. The specific problems of implementing the language on top of an operating system are studied within the scope of the parallelism model. With this kind of implementation, interactions of the program with external environments can lead to problems, such as the blocking of the corresponding operating system process, which decrease the program's execution performance. A practical example of this kind of problem, the access to the GKS (Graphic Kernel System) standard from Ada programs, is analyzed and the implemented solution is described.
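The parallel slackness idea (expressing more parallelism than the machine offers so that communication latency can be hidden) can be illustrated, outside Ada, with the following Python sketch; the task and latency figures are invented for illustration only:

```python
import asyncio
import time

async def task(i):
    """A task dominated by 'communication' latency (simulated with sleep)."""
    await asyncio.sleep(0.1)   # waiting on a remote partner
    return i * i

async def one_at_a_time(n):
    # No slackness: each wait is paid in full, one after another.
    return [await task(i) for i in range(n)]

async def with_slackness(n):
    # Excess parallelism: all the waits overlap, so the latency is hidden.
    return await asyncio.gather(*(task(i) for i in range(n)))

for label, runner in (("sequential ", one_at_a_time),
                      ("overlapped ", with_slackness)):
    start = time.time()
    asyncio.run(runner(32))
    print(label, round(time.time() - start, 2), "s")
```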
Abstract:
It is well known that the response of any photovoltaic solar cell depends on the spectral characteristics of the incident radiation. This dependency is crucial in the output characteristics of a multijunction (MJ) cell, where the spectral composition of the radiation determines the overall photocurrent produced, as either the top or the middle subcell will be limiting its response. The current mismatch between the top and middle subcells translates into energy losses, affecting the yield of the system. For research and commercial purposes it is interesting to accurately measure the incident solar radiation on an MJ cell in terms of its spectral composition. This measurement will allow us to determine the photocurrent generated in each band of the multijunction device. Nowadays, the only way of measuring the photocurrent generated by each subcell is with isotype cells or with spectroradiometers; there is no device capable of directly measuring each subcell photocurrent. This paper describes a device, based on a commercial multijunction solar cell, that is capable of measuring the direct irradiance for the top and middle bands, thus offering information on the limiting subcell (top or middle) under outdoor conditions.
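The current-limiting behaviour described above can be summarised with the usual first-order relations (textbook expressions, not formulas from the paper), where E(λ) is the incident spectral irradiance and SR_i(λ) the spectral response of subcell i, assuming the bottom subcell is never the limiting one, as is common for triple-junction cells under terrestrial spectra:

```latex
% Subcell photocurrent densities and series-connection current limit.
J_{i} = \int E(\lambda)\, SR_{i}(\lambda)\, d\lambda ,
\qquad i \in \{\text{top}, \text{mid}\},
\qquad
J_{\text{cell}} \approx \min\!\left(J_{\text{top}}, J_{\text{mid}}\right)
```

Because the subcells are connected in series, the least-illuminated junction fixes the overall photocurrent, which is why measuring the top- and middle-band irradiances identifies the limiting subcell.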
Abstract:
Liquid crystal properties make them useful for the development of security devices in applications of authentication and detection of fakes. Induced orientation of liquid crystal molecules and birefringence are the two main properties used in security devices. Employing liquid crystals and dichroic colorants, we have developed devices that show, with the aid of a polarizer, multiple images on each side of the device. Rubbed polyimide is used as the alignment layer on each substrate of the LC cell. By rubbing the polyimide in different directions on each substrate it is possible to create any kind of symbols, drawings or motifs with a greyscale; the more complex the created device is, the more difficult it is to fake. To identify the motifs it is necessary to use polarized light. Depending on whether the polarizer is located in front of the LC cell or behind it, different motifs from one or the other substrate are shown. The effect arises from the dopant colour dye added to the liquid crystal, the induced orientation and the twist structure. In practice, a grazing reflection on a dielectric surface is polarized enough to see the effect. Any LC flat-panel display can obviously be used as a backlight as well.
Abstract:
Application of arc erosion to the patterning of metallic contacts in organic devices is presented. A home-made system and details of its working principles are described. Advantages and drawbacks of this novel technology are discussed.
Abstract:
Due to recent scientific and technological advances in information systems, it is now possible to perform almost every application on a mobile device. The desire to make such devices more intelligent opens an opportunity to design data mining algorithms that are able to execute autonomously on local devices to provide the device with knowledge. The problem behind autonomous mining is the proper configuration of the algorithm to produce the most appropriate results. Contextual information, together with resource information of the device, has a strong impact both on the feasibility of a particular execution and on the production of the proper patterns. On the other hand, the performance of the algorithm, expressed in terms of efficacy and efficiency, highly depends on the features of the dataset to be analyzed together with the parameter values of a particular implementation of an algorithm. However, few existing approaches deal with the autonomous configuration of data mining algorithms, and in any case they do not deal with contextual or resource information. Both issues are of particular significance for social network applications. In fact, the widespread use of social networks, and consequently the amount of information shared, have made the need to model context in social applications a priority. Resource consumption also plays a crucial role in such platforms, as users access social networks mainly on their mobile devices. This PhD thesis addresses the aforementioned open issues, focusing on i) analyzing the behavior of algorithms, ii) mapping contextual and resource information to find the most appropriate configuration, and iii) applying the model to the case of a social recommender. Four main contributions are presented:
- The EE-Model: able to predict the behavior of a data mining algorithm in terms of the resources consumed and the accuracy of the mining model it will obtain.
- The SC-Mapper: maps a situation, defined by the context and resource state, to a data mining configuration.
- SOMAR: a social activity (events and informal ongoings) recommender for mobile devices.
- D-SOMAR: an evolution of SOMAR which incorporates the configurator in order to provide updated recommendations.
Finally, the experimental validation of the proposed contributions using synthetic and real datasets allows us to achieve the objectives and answer the research questions proposed for this dissertation.
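As a purely hypothetical illustration of the mapping idea (the thesis's EE-Model and SC-Mapper are learned models; every name and threshold below is invented), a rule that turns a context/resource situation into a mining configuration might look like this:

```python
def sc_mapper(context, resources):
    """Toy stand-in for the SC-Mapper: choose a mining configuration from
    the current context and device resources. Thresholds are illustrative."""
    config = {"algorithm": "k-means", "max_iterations": 100, "sample_ratio": 1.0}

    if resources["battery_pct"] < 20 or resources["free_mem_mb"] < 128:
        # Low resources: trade some accuracy for a cheaper execution.
        config.update(max_iterations=20, sample_ratio=0.25)

    if context.get("on_the_move"):
        # Mobile context: favour fast, incremental updates.
        config["max_iterations"] = min(config["max_iterations"], 50)

    return config

print(sc_mapper({"on_the_move": True}, {"battery_pct": 15, "free_mem_mb": 512}))
```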
Abstract:
In this paper, an AlN/free-standing nanocrystalline diamond (NCD) system is proposed for processing high-frequency surface acoustic wave (SAW) resonators for sensing applications. The main problem of synthetic diamond is its high surface roughness, which worsens the quality of the sputtered AlN and hence the device response. In order to study the feasibility of this structure, AlN films from 150 nm up to 1200 nm thick have been deposited on free-standing NCD. We have then analysed the influence of the AlN layer thickness on its crystal quality and on the device response. Optimized thin films of 300 nm have been used to fabricate one-port SAW resonators operating in the 10–14 GHz frequency range. A SAW-based pressure sensor with a sensitivity of 0.33 MHz/bar has been fabricated.
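For orientation, the resonance frequency of a SAW resonator follows the usual relation below (standard expressions, not quoted from the paper), which is why a high acoustic velocity substrate such as diamond reaches the 10–14 GHz range for a given lithographic pitch p between adjacent IDT fingers; the last relation is simply the linearized definition of the reported pressure sensitivity:

```latex
% SAW resonance frequency and linearized pressure sensitivity.
f_{0} = \frac{v_{\text{SAW}}}{\lambda}, \qquad \lambda = 2p,
\qquad
\Delta f \approx S\,\Delta P \quad (S \approx 0.33\ \text{MHz/bar for the reported sensor})
```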
Abstract:
Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as the Internet of Things (IoT) and the Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices, with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who apply this Internet-oriented approach need a solid understanding of specific platforms and web technologies. In order to alleviate this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at the hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym.
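As a rough illustration of the web-enabled sensor idea only (this is not code generated by ROOD; the endpoint, port and payload are invented), a device can expose a sensed value as an HTTP resource:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_heart_rate():
    """Placeholder for the real sensor driver."""
    return 72

class SensorResource(BaseHTTPRequestHandler):
    def do_GET(self):
        # Resource-oriented access: the sensor reading is just a URL.
        if self.path == "/sensors/heart-rate":
            body = json.dumps({"bpm": read_heart_rate()}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), SensorResource).serve_forever()
```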
Abstract:
We consider a mathematical model related to the stationary regime of a nuclear fusion plasma, magnetically confined in a Stellarator device. Using the geometric properties of the fusion device, a suitable system of coordinates and averaging methods, the mathematical problem may be reduced to a two-dimensional free boundary problem of nonlocal type, where the corresponding differential equation is of the Grad-Shafranov type. The current balance within each magnetic flux surface gives us the possibility to define the third covariant magnetic field component with respect to the averaged poloidal flux function. We present here some numerical experiences and give a numerical approach for the averaged poloidal flux and for the third covariant magnetic field component.
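For context, the classical Grad-Shafranov equation that the averaged problem resembles has the standard form below (a textbook expression; the paper's averaged, nonlocal equation differs in its right-hand side), with ψ the poloidal flux, p(ψ) the plasma pressure and F(ψ) the covariant toroidal field function:

```latex
% Classical Grad-Shafranov equation in cylindrical coordinates (R, z).
\Delta^{*}\psi \;\equiv\;
R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\,\frac{\partial \psi}{\partial R}\right)
+ \frac{\partial^{2}\psi}{\partial z^{2}}
\;=\; -\,\mu_{0} R^{2}\, p'(\psi) \;-\; F(\psi)\,F'(\psi)
```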
Abstract:
In this paper we propose to employ an instability that occurs in bistable devices as a control signal at the reception stage to generate the clock signal. One of the adopted configurations is composed of two semiconductor optical amplifiers arranged in a cascaded structure. This configuration has an output equivalent to that obtained from Self-Electrooptic Effect Devices (SEEDs), and it can implement the main Boolean functions of two binary inputs. These outputs, obtained from the addition of two binary signals, show a short spike in the transition from "1" to "2" in the internal processing. A similar result is obtained for a simple semiconductor amplifier with bistable behavior. The paper shows how these structures may help recover clock signals in any optical transmission system.
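The Boolean functions of two binary inputs mentioned above can be sketched at the logic level only (this is not a model of the optical device): summing the two inputs and thresholding the result reproduces OR, AND and XOR, and the spike-producing "1" to "2" transition corresponds to the internal sum crossing from 1 to 2.

```python
def optical_adder_logic(a, b):
    """Logic-level sketch of what the cascaded-SOA structure implements:
    both binary inputs are summed and the internal level is thresholded."""
    total = a + b                 # internal level: 0, 1 or 2
    return {
        "OR":  int(total >= 1),
        "AND": int(total >= 2),   # only the '2' level (both inputs on)
        "XOR": int(total == 1),   # exactly one input on
        "sum_level": total,       # the 1 -> 2 transition produces the spike
    }

for a in (0, 1):
    for b in (0, 1):
        print(a, b, optical_adder_logic(a, b))
```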