963 results for automatic generation
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Using data from the Camorim field (Sergipe-Alagoas Basin), a set of multivariate statistical techniques (cluster, principal component, and discriminant analyses) was tested and applied in order to identify, from well logs, the facies previously defined in cored wells, making it possible to recognize those facies in the remaining uncored wells of the area. The second stage of the facies determination process consisted of employing auxiliary methods (compositional and facies-sequence analyses), which, combined with the multivariate techniques, yielded better results in the rock-log calibration. Once established, the facies determination made it possible to refine the formation evaluation process by allowing each reservoir facies to be examined separately. This procedure thus made it possible to choose, for each lithology, the parameters used in log interpretation, while also allowing thickness, porosity, and fluid saturation values to be totaled separately and different cut-off values to be adopted for each group considered. Other applications included improved porosity and permeability estimation, the adaptation of algorithms for preliminary porosity calculation, the construction of facies maps, and the automatic generation of stratigraphic cross sections. Finally, the prospects of integrating this study with statistical reservoir description systems, with other facies determination techniques under development, and the renewed use of multivariate statistical methods on log data as an exploration tool were highlighted.
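As a rough illustration of the workflow described in this abstract (not the author's code), the sketch below combines cluster analysis, principal components, and discriminant analysis on synthetic well-log data; the curve names (GR, RHOB, NPHI, DT), the number of facies, and all values are assumptions for the example.

```python
# Minimal sketch of a log-based facies workflow: cluster analysis + PCA for
# exploratory grouping, discriminant analysis for rock-log calibration.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
logs_cored = rng.normal(size=(300, 4))        # GR, RHOB, NPHI, DT in cored wells
facies_core = rng.integers(0, 3, size=300)    # facies labels from core description
logs_uncored = rng.normal(size=(500, 4))      # same curves in uncored wells

scaler = StandardScaler().fit(logs_cored)
pca = PCA(n_components=2).fit(scaler.transform(logs_cored))

# Exploratory grouping of log responses (cluster analysis on PCA scores)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    pca.transform(scaler.transform(logs_cored)))
print("cluster sizes:", np.bincount(clusters))

# Rock-log calibration: discriminant analysis trained on core facies,
# then applied to the uncored wells
lda = LinearDiscriminantAnalysis().fit(scaler.transform(logs_cored), facies_core)
facies_pred = lda.predict(scaler.transform(logs_uncored))
print("predicted facies (first 10 samples):", facies_pred[:10])
```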
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
With the increasing production of information from e-government initiatives, there is also the need to transform a large volume of unstructured data into useful information for society. All this information should be easily accessible and made available in a meaningful and effective way in order to achieve semantic interoperability in electronic government services, which is a challenge to be pursued by governments around the world. Our aim is to discuss the context of e-Government Big Data and to present a framework to promote semantic interoperability through the automatic generation of ontologies from unstructured information found on the Internet. We propose the use of fuzzy mechanisms to deal with natural language terms and review related work in this area. The results achieved in this study comprise the architectural definition and the major components and requirements of the proposed framework. With this, it is possible to take advantage of the large volume of information generated by e-Government initiatives and use it to benefit society.
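A minimal sketch of the kind of fuzzy mechanism the abstract alludes to, assuming a plain string-similarity ratio as the fuzzy membership degree and an invented list of concepts; it is not the framework's actual component.

```python
# Illustrative fuzzy matching of natural-language terms to candidate ontology
# concepts; the similarity ratio stands in for a fuzzy membership function.
from difflib import SequenceMatcher

ontology_concepts = ["citizen", "tax payment", "public service", "license renewal"]

def fuzzy_match(term, concepts, threshold=0.6):
    """Return (concept, membership) pairs whose similarity exceeds the threshold."""
    matches = []
    for concept in concepts:
        degree = SequenceMatcher(None, term.lower(), concept.lower()).ratio()
        if degree >= threshold:
            matches.append((concept, round(degree, 2)))
    return sorted(matches, key=lambda m: m[1], reverse=True)

# A term extracted from an unstructured government document
print(fuzzy_match("tax payments", ontology_concepts))  # [('tax payment', 0.96)]
```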
Abstract:
The use of XML for representing eLearning content and for automatically generating different kinds of teaching media from this material is, with all its advantages, nowadays state of the art. In recent years there have been numerous projects that leveraged XML-based production environments. Once funding ends, the created materials have to be maintained with limited (human) resources. In the majority of cases this is only possible if the authors can maintain their teaching material without extensive IT support. From our point of view there has so far been a lack of intuitively usable XML editors. The prototype of such an XML editor, “aXess”, is introduced with the intention of encouraging a broad discussion about the features required to manage the content of eLearning materials.
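For illustration only, the following sketch shows the single-source idea behind such XML production environments: one XML master document rendered into an HTML teaching medium. The element names are hypothetical, not the project's actual schema.

```python
# Toy single-source publishing: an XML master document transformed into HTML.
import xml.etree.ElementTree as ET

xml_source = """
<unit>
  <title>Sorting Algorithms</title>
  <section>Quicksort partitions the input around a pivot.</section>
  <section>Mergesort splits, sorts and merges sublists.</section>
</unit>
"""

root = ET.fromstring(xml_source)
html = ["<html><body>", f"<h1>{root.findtext('title')}</h1>"]
for section in root.findall("section"):
    html.append(f"<p>{section.text}</p>")
html.append("</body></html>")
print("\n".join(html))
```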
Abstract:
CampusContent (CC) is a DFG-funded competence center for eLearning with its own portal. It links content and people who support sharing and reuse of high quality learning materials and codified pedagogical know-how, such as learning objectives, pedagogical scenarios, recommended learning activities, and learning paths. The heart of the portal is a distributed repository whose contents are linked to various other CampusContent portals. Integrated into each portal are user-friendly tools for designing reusable learning content, exercises, and templates for learning units and courses. Specialized authoring tools permit the configuration, adaptation, and automatic generation of interactive Flash animations using Adobe's Flex Builder technology. More coarse-grained content components, such as complete learning units and entire courses in which contents and materials taken from the repository are embedded, can be created with XML-based authoring tools. Open service interfaces allow the deep or shallow integration of the portal provider's preferred authoring and learning tools. The portal is built on top of the Enterprise Content Management System Alfresco, which comes with social networking functionality that has been adapted to accommodate collaboration, sharing, and reuse within trusted communities of practice.
Abstract:
Non-verbal communication (NVC) is considered to represent more than 90 percent of everyday communication. In virtual worlds, this important aspect of interaction between virtual humans (VH) is strongly neglected. This paper presents a user-test study to demonstrate the impact of automatically generated, graphics-based NVC expression on dialog quality: first, we wanted to compare impassive and emotional facial expression simulation for their impact on chatting. Second, we wanted to see whether people like chatting within a 3D graphical environment. Our model only proposes facial expressions and head movements induced from spontaneous chatting between VHs. Only subtle facial expressions are used as nonverbal cues, i.e. related to the emotional model. Motion-capture animations related to hand gestures, such as cleaning glasses, were used at random to make the virtual human lively. After briefly introducing the technical architecture of the 3D chatting system, we focus on two aspects of chatting through VHs. First, what is the influence of facial expressions that are induced from the text dialog? For this purpose, we exploited a previously developed emotion engine [GAS11] that extracts emotional content from a text and depicts it on a virtual character. Second, as our goal was not to address automatic generation of text, we compared the impact of nonverbal cues in conversation with a chatbot or with a human operator using a wizard-of-Oz approach. Among the main results, the within-group study, involving 40 subjects, suggests that subtle facial expressions have a significant impact not only on the quality of experience but also on dialog understanding.
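A toy sketch of the text-to-nonverbal-cue mapping discussed above, assuming a keyword lexicon and invented expression weights; it is not the [GAS11] emotion engine.

```python
# Extract a coarse emotion from a chat message and map it to subtle
# facial-expression weights and a head movement (all values illustrative).
EMOTION_KEYWORDS = {
    "joy": {"great", "happy", "thanks", "love"},
    "sadness": {"sorry", "sad", "miss", "lost"},
    "anger": {"angry", "hate", "annoying"},
}
EXPRESSIONS = {
    "joy": {"smile": 0.4, "brow_raise": 0.2, "head": "nod"},
    "sadness": {"mouth_down": 0.3, "brow_inner_up": 0.3, "head": "tilt"},
    "anger": {"brow_lower": 0.4, "lip_press": 0.3, "head": "shake"},
    "neutral": {"head": "idle"},
}

def nonverbal_cues(message: str) -> dict:
    """Return subtle expression parameters for the first emotion detected."""
    words = set(message.lower().split())
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return EXPRESSIONS[emotion]
    return EXPRESSIONS["neutral"]

print(nonverbal_cues("thanks that was great"))  # subtle joy expression
```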
Abstract:
Nowadays, computing platforms consist of a very large number of components that must be supplied with different voltage levels and meet different power requirements. Even a very small platform, like a handheld computer, may contain more than twenty different loads and voltage regulators. The power delivery designers of these systems are required to provide, in a very short time, the right power architecture that optimizes performance and meets electrical specifications as well as cost and size targets. The appropriate selection of the architecture and converters directly defines the performance of a given solution. Therefore, the designer needs to be able to evaluate a significant number of options in order to know with good certainty whether the selected solutions meet the size, energy efficiency, and cost targets. The difficulty of selecting the right solution arises from the wide range of power conversion products provided by different manufacturers. These products range from discrete components (to build converters) to complete power conversion modules that employ different manufacturing technologies. Consequently, in most cases it is not possible to analyze all the alternatives (combinations of power architectures and converters) that can be built, and the designer has to select a limited number of converters in order to simplify the analysis. In this thesis, in order to overcome these difficulties, a new design methodology for power supply systems is proposed. This methodology integrates evolutionary computation techniques to make it possible to analyze a large number of possibilities. This exhaustive analysis helps the designer to quickly define a set of feasible solutions and select the best performance trade-off for each application. The proposed approach consists of two key steps, one for the automatic generation of architectures and the other for the optimized selection of components. This thesis details the implementation of these two steps. The usefulness of the methodology is corroborated by contrasting the results on real problems and on experiments designed to test the limits of the algorithms.
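The sketch below illustrates, with invented catalogue data, what an optimized-selection step driven by evolutionary computation can look like: each candidate assigns one converter to each load and fitness trades off cost against conversion losses. It is a schematic example, not the thesis methodology.

```python
# Simple genetic loop selecting converters for each load (all numbers invented).
import random

CATALOGUE = [  # (name, cost_usd, efficiency)
    ("buck_A", 1.2, 0.90), ("buck_B", 2.0, 0.94), ("module_C", 4.5, 0.96),
]
N_LOADS, POP, GENERATIONS = 6, 30, 40
load_power = [2.0, 5.0, 1.0, 3.0, 0.5, 8.0]   # watts per load (hypothetical)

def fitness(solution):
    cost = sum(CATALOGUE[g][1] for g in solution)
    loss = sum(p * (1 - CATALOGUE[g][2]) for g, p in zip(solution, load_power))
    return cost + 2.0 * loss          # weighted cost + power-loss penalty

random.seed(1)
population = [[random.randrange(len(CATALOGUE)) for _ in range(N_LOADS)]
              for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness)
    parents = population[:POP // 2]                    # keep the best half
    children = []
    for _ in range(POP - len(parents)):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, N_LOADS)
        child = a[:cut] + b[cut:]                      # one-point crossover
        if random.random() < 0.2:                      # mutation
            child[random.randrange(N_LOADS)] = random.randrange(len(CATALOGUE))
        children.append(child)
    population = parents + children

best = min(population, key=fitness)
print([CATALOGUE[g][0] for g in best], round(fitness(best), 2))
```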
Abstract:
The implementation of abstract machines involves complex decisions regarding, e.g., data representation, opcodes, or instruction specialization levels, all of which affect the final performance of the emulator and the size of the bytecode programs in ways that are often difficult to foresee. Besides, studying alternatives by implementing abstract machine variants is a time-consuming and error-prone task because of the level of complexity and optimization of competitive implementations, which makes them generally difficult to understand, maintain, and modify. This also makes it hard to generate specific implementations for particular purposes. To ameliorate those problems, we propose a systematic approach to the automatic generation of implementations of abstract machines. Different parts of their definition (e.g., the instruction set or the internal data and bytecode representation) are kept separate and automatically assembled in the generation process. Alternative versions of the abstract machine are therefore easier to produce, and variants of their implementation can be created mechanically, with specific characteristics for a particular application if necessary. We illustrate the practicality of the approach by reporting on an implementation of a generator of production-quality WAMs which are specialized for executing a particular fixed (set of) program(s). The experimental results show that the approach is effective in reducing emulator size.
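A minimal sketch of the generation idea: the instruction set is kept as data, separate from the dispatch loop that is assembled from it. The toy stack machine below is purely illustrative and far simpler than the WAM generator reported in the abstract.

```python
# The instruction-set specification is plain data; the emulator is generated from it.
INSTRUCTION_SET = {
    0: ("push", lambda st, arg: st.append(arg)),
    1: ("add",  lambda st, arg: st.append(st.pop() + st.pop())),
    2: ("mul",  lambda st, arg: st.append(st.pop() * st.pop())),
}

def generate_emulator(spec):
    """Build an emulator function from the instruction-set specification."""
    def run(bytecode):
        stack, pc = [], 0
        while pc < len(bytecode):
            opcode, arg = bytecode[pc]
            _, handler = spec[opcode]
            handler(stack, arg)
            pc += 1
        return stack[-1]
    return run

emulate = generate_emulator(INSTRUCTION_SET)
# (2 + 3) * 4 encoded as bytecode
print(emulate([(0, 2), (0, 3), (1, None), (0, 4), (2, None)]))  # 20
```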
Abstract:
Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as the Internet of Things (IoT) and the Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who apply this Internet-oriented approach need a solid understanding of specific platforms and web technologies. In order to ease this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at the hardware level. The feasibility of ROOD is demonstrated by building an adaptive health monitoring service for a Smart Gym.
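As a hedged illustration of the model-to-code step in an MDA-style methodology such as ROOD (not the actual toolchain), the sketch below turns a declarative device description into handler stubs; the model fields and generated functions are invented for the example.

```python
# A declarative description of a smart-space device is turned into skeleton code.
DEVICE_MODEL = {
    "name": "heart_rate_sensor",
    "resources": [
        {"path": "/heart_rate", "type": "sensor", "unit": "bpm"},
        {"path": "/alarm", "type": "actuator", "unit": None},
    ],
}

def generate_stub(model):
    """Generate read/write handler stubs for every resource in the model."""
    lines = [f"# Auto-generated handlers for device '{model['name']}'"]
    for res in model["resources"]:
        func = res["path"].strip("/")
        verb = "read" if res["type"] == "sensor" else "write"
        lines.append(f"def {verb}_{func}():")
        lines.append(f"    # TODO: {verb} value ({res['unit'] or 'no unit'}) on the hardware")
        lines.append("    pass\n")
    return "\n".join(lines)

print(generate_stub(DEVICE_MODEL))
```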
Abstract:
We present an undergraduate course on concurrent programming where formal models are used in different stages of the learning process. The main practical difference with other approaches lies in the fact that the ability to develop correct concurrent software relies on a systematic transformation of formal models of inter-process interaction (so-called shared resources), rather than on the specific constructs of some programming language. Using a resource-centric rather than a language-centric approach has some benefits for both teachers and students. Besides the obvious advantage of being independent of the programming language, the models help in the early validation of concurrent software design, provide students and teachers with a lingua franca that greatly simplifies communication in the classroom and during supervision, and help in the automatic generation of tests for the practical assignments. This method has been in use, with slight variations, for some 15 years, surviving changes in the programming language and course length. In this article, we describe the components and structure of the current incarnation of the course, which uses Java as the target language, and some tools used to support our method. We provide a detailed description of the different outcomes that the model-driven approach delivers (validation of the initial design, automatic generation of tests, and mechanical generation of code) from a teaching perspective. A critical discussion on the perceived advantages and risks of our approach follows, including some proposals on how these risks can be minimized. We include a statistical analysis to show that our method has a positive impact on students' ability to understand concurrency and to generate correct code.
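A small sketch of how tests can be derived from a shared-resource model, assuming a bounded buffer specified by per-operation preconditions; legal call sequences are enumerated directly from the model. This illustrates the idea only, not the course's actual toolchain.

```python
# Enumerate operation sequences that a bounded-buffer resource model allows;
# each sequence can then be replayed as a test against an implementation.
from itertools import product

CAPACITY = 2

def pre(op, size):
    """Per-operation preconditions (CPREs) taken from the resource model."""
    return size < CAPACITY if op == "put" else size > 0

def generate_tests(length=3):
    """Return all operation sequences of the given length allowed by the model."""
    tests = []
    for seq in product(["put", "get"], repeat=length):
        size, legal = 0, True
        for op in seq:
            if not pre(op, size):
                legal = False
                break
            size += 1 if op == "put" else -1
        if legal:
            tests.append(seq)
    return tests

for t in generate_tests():
    print(t)   # e.g. ('put', 'get', 'put') is a valid test scenario
```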
Abstract:
Using a digital photogrammetric camera results in a demonstrable increase in radiometric quality due to an improved signal-to-noise ratio and a radiometric resolution of 12 bits per image pixel. At the same time, a significant saving of time and cost is achieved thanks to the elimination of the film developing and scanning stages, as well as to the increase in flying hours per day. On the other hand, the airborne laser system (LIDAR - Light Detection and Ranging) offers high performance and cost-effectiveness for capturing elevation data to generate a digital terrain model (DTM), as well as data on the objects above the terrain, thus achieving high accuracy and information density. Both the LIDAR and the digital photogrammetric camera system are combined with other well-known techniques: the Global Positioning System (GPS) and inertial measurement unit (IMU) orientation, which make it possible to reduce or eliminate ground control and to perform direct orientation of the sensors using precise satellite ephemeris data. By combining these technologies, a methodology for the automatic generation of orthophotos in South American countries is proposed and put into practice. By analyzing the accuracy of these orthophotos, comparing them against more accurate sources and against the technical specifications of the Plan Nacional de Ortofotografía Aérea (PNOA), the viability of applying this methodology to rural areas will be determined.
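Purely as an illustration of the orthophoto-generation principle (not the proposed methodology), the sketch below projects a DTM grid into a perfectly vertical aerial image with the collinearity equations and resamples by nearest neighbour; every number is invented.

```python
# Highly simplified orthophoto resampling: project each DTM cell into the image
# (nadir case of the collinearity equations) and copy the nearest pixel.
import numpy as np

f = 0.10                              # focal length [m]
X0, Y0, Z0 = 500.0, 500.0, 1000.0     # camera projection centre [m]
pixel_size = 0.00001                  # 10 micrometres
img = np.random.rand(2000, 2000)      # stand-in for the aerial image

# DTM: 100 x 100 m patch, 1 m grid, gentle slope
xs = np.arange(450.0, 550.0)
ys = np.arange(450.0, 550.0)
X, Y = np.meshgrid(xs, ys)
Z = 100.0 + 0.05 * (X - 450.0)

# Collinearity (vertical image): image coordinates of every ground point
x_img = f * (X - X0) / (Z0 - Z)
y_img = f * (Y - Y0) / (Z0 - Z)
col = np.clip((x_img / pixel_size + img.shape[1] / 2).astype(int), 0, img.shape[1] - 1)
row = np.clip((y_img / pixel_size + img.shape[0] / 2).astype(int), 0, img.shape[0] - 1)

ortho = img[row, col]                 # nearest-neighbour resampling onto the DTM grid
print(ortho.shape)                    # (100, 100) orthophoto patch
```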
Abstract:
With the ever-growing popularity of smart phones and tablets, Android is becoming more and more popular every day. With more than one billion active users to date, Android is the leading technology in the smart phone arena. In addition, Android also runs on Android TV, Android smart watches, and cars. Therefore, in recent years, Android applications have become one of the major development sectors in the software industry. As of mid-2013, the number of published applications on Google Play had exceeded one million and the cumulative number of downloads was more than 50 billion. A 2013 survey also revealed that 71% of mobile application developers work on developing Android applications. Considering this number of applications, it is quite evident that people rely on them on a daily basis, from simple tasks like keeping track of the weather to rather complex tasks like managing one's bank accounts. Hence, like every other kind of code, Android code also needs to be verified in order to work properly and achieve a certain confidence level. Because of the gigantic number of applications, it becomes really hard to test Android applications manually, especially when they have to be verified for various versions of the OS and various device configurations, such as different screen sizes and different hardware availability. Hence, there has recently been a lot of work in the computer science community on developing different testing methods for Android applications. Android attracts researchers because of its open-source nature: the whole research model becomes more streamlined when the code for both the application and the platform is readily available to analyze. Hence, there has been a great deal of research in testing and static analysis of Android applications, much of it focused on input test generation. As a result, several testing tools are now available that focus on the automatic generation of test cases for Android applications. These tools differ from one another in the strategies and heuristics they use for generating test cases. But there is still very little work on comparing these testing tools and the strategies they use. Recently, some research work has been carried out in this regard that compared the performance of various available tools with respect to their code coverage, fault detection, ability to work on multiple platforms, and ease of use. It was done by running these tools on a total of 60 real-world Android applications. The results of this research showed that, although effective, the strategies used by the tools also face limitations and hence have room for improvement.
The purpose of this thesis is to extend this research in a more specific and attribute-oriented way. Attributes refer to tasks that can be completed using the Android platform, ranging from a basic system call for receiving an SMS to more complex tasks like sending the user to another application from the current one. The idea is to develop a benchmark for Android testing tools based on their performance with respect to these attributes, which allows the tools to be compared attribute by attribute. For example, if there is an application that plays some audio file, will the testing tool be able to generate a test input that triggers the execution of this audio file? By using multiple applications exercising different attributes, it can be seen which testing tool is more useful for which kinds of attributes. In this thesis, it was decided that 9 attributes covering the basic nature of tasks would be targeted for the assessment of three testing tools. Later this can be done for many more attributes to compare even more testing tools. The aim of this work is to show that this approach is effective and can be used on a much larger scale. One of the flagship features of this work, which also differentiates it from previous work, is that the applications used were all made specifically for this research. The reason for doing so is to analyze just the specific attribute the application focuses on, in isolation, and not allow the tool to get bottlenecked by something trivial that is not the main attribute under test. This means 9 applications, each focused on one specific attribute. The main contributions of this thesis are:
• A summary of the three existing testing tools and their respective techniques for automatic test input generation for Android applications.
• A detailed study of the usage of these testing tools on the 9 applications specially designed and developed for this study.
• An analysis of the results of the study carried out and a comparison of the performance of the selected tools.
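A schematic example of how such an attribute-oriented benchmark could be tabulated, with placeholder tool names and invented pass/fail outcomes rather than measured data.

```python
# Summarize, per attribute, whether each tool's generated inputs exercised it.
RESULTS = {            # attribute -> {tool: attribute exercised?}
    "play_audio":    {"tool_A": True,  "tool_B": False, "tool_C": True},
    "receive_sms":   {"tool_A": False, "tool_B": True,  "tool_C": True},
    "open_activity": {"tool_A": True,  "tool_B": True,  "tool_C": True},
}

def summarize(results):
    tools = sorted({t for row in results.values() for t in row})
    print("attribute      " + "  ".join(f"{t:>7}" for t in tools))
    for attribute, row in results.items():
        marks = "  ".join(f"{'yes' if row[t] else 'no':>7}" for t in tools)
        print(f"{attribute:<15}{marks}")
    for t in tools:
        score = sum(row[t] for row in results.values())
        print(f"{t}: {score}/{len(results)} attributes covered")

summarize(RESULTS)
```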
Abstract:
The use of unmanned aerial vehicles (UAVs) has become increasingly common, especially in civilian applications. In the military scenario, the use of UAVs has focused on carrying out specific missions that can be divided into two broad categories: remote sensing and transport of military materiel. This work concentrates on the remote-sensing category. It focuses on defining a model and a reference architecture for the development of smart sensors oriented to specific missions. The main objective of these missions is the generation of thematic maps, and this work investigates processes and mechanisms that make the generation of this category of maps possible. In this context, the concept of MOSA (Mission Oriented Sensor Array) is proposed and modeled. As case studies of the presented concepts, two systems for the automatic mapping of sound sources are proposed, one for the civilian case and another for the military case. These sources may originate from the noise generated by large animals (including humans), by internal combustion engines of vehicles, or by artillery activity (including hunters). The MOSAs modeled for this application are based on the integration of data from a thermal imaging sensor and a ground-based acoustic sensor network. The integration of the positioning information provided by the sensors, on a single cartographic base, is one of the important aspects addressed in this work. The main contributions of this work are the proposal of MOSA systems, including concepts, models, and architecture, and the reference implementation represented by the automatic sound-source mapping system.
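To illustrate one building block that such a sound-source mapping system could use (an assumption for the example, not necessarily the MOSA implementation), the sketch below locates a source from time-difference-of-arrival measurements at three ground acoustic sensors by grid search, yielding a geo-referenced point that could be placed on a thematic map.

```python
# Locate a sound source from TDOA at three ground sensors via grid search.
import numpy as np

C = 343.0                                                        # speed of sound [m/s]
sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])     # local map coords [m]
true_source = np.array([60.0, 40.0])

# Simulated arrival times; the unknown emission time cancels in the differences
toa = np.linalg.norm(sensors - true_source, axis=1) / C
tdoa = toa[1:] - toa[0]

# Grid search over candidate positions, minimizing the TDOA residual
xs = ys = np.arange(0.0, 101.0, 1.0)
best, best_err = None, np.inf
for x in xs:
    for y in ys:
        d = np.linalg.norm(sensors - np.array([x, y]), axis=1) / C
        err = np.sum(((d[1:] - d[0]) - tdoa) ** 2)
        if err < best_err:
            best, best_err = (x, y), err

print("estimated source position:", best)   # close to (60, 40)
```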