958 results for WEB (Computer program language)
Abstract:
In this work we propose a new method for the conceptual organization of the areas involved in assistive technology, categorizing them in a logical and simple manner. We also propose the implementation of an interface based on electrooculography that is able to generate high-level commands to trigger robotic, computational and electromechanical devices. To validate the eye interface, an electronic circuit was developed, together with a computer program, that captured the signals generated by the users' eye movements and turned them into high-level commands capable of triggering an active brace and several other electromechanical systems. The results showed that it was possible to control many electromechanical systems through eye movements alone. The interface proved to be a viable way to perform the proposed task, and its digital-level signal analysis can still be improved. The diagrammatic model developed proved to be an easy tool to use and understand, meeting the conceptual organization needs of assistive technology.
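As a rough illustration of how such an interface can turn eye-movement signals into high-level commands, the sketch below thresholds the mean deflection of a horizontal and a vertical EOG channel over a short window. The threshold values, channel layout and command names are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

# Hypothetical calibration thresholds (after amplification); the thesis
# does not specify these values.
H_THRESHOLD = 0.3   # horizontal channel deflection
V_THRESHOLD = 0.3   # vertical channel deflection

def classify_eog_window(h_signal, v_signal):
    """Map one window of horizontal/vertical EOG samples to a high-level command."""
    h = np.mean(h_signal)
    v = np.mean(v_signal)
    if h > H_THRESHOLD:
        return "MOVE_RIGHT"
    if h < -H_THRESHOLD:
        return "MOVE_LEFT"
    if v > V_THRESHOLD:
        return "MOVE_UP"
    if v < -V_THRESHOLD:
        return "MOVE_DOWN"
    return "IDLE"

# Example: a saccade to the right gives a sustained positive horizontal deflection.
window_h = np.full(200, 0.45) + np.random.normal(0, 0.02, 200)
window_v = np.random.normal(0, 0.02, 200)
print(classify_eog_window(window_h, window_v))  # -> "MOVE_RIGHT"
```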
Abstract:
The main objective of this work was to enable the recognition of human gestures through the development of a computer program. The program captures the gestures executed by the user through a camera attached to the computer and sends the robot the command corresponding to each gesture. In total, five gestures made by the human hand were interpreted. The software, developed in C++, made extensive use of computer vision concepts and of the open-source OpenCV library, which directly affect the overall efficiency of the control of mobile robots. The computer vision concepts include the use of filters to smooth/blur the image for noise reduction, colour spaces suited to the developer's needs, and other information useful for manipulating digital images. The OpenCV library was essential to the project, providing functions/procedures for the complete control of filters, image borders, image area, the geometric centre of contours, conversion between colour spaces, convex hull and convexity defects, plus everything needed for the characterization of image features. During the development of the software several problems appeared, such as false positives (noise), poor performance caused by inserting several filters with oversized masks, and problems arising from the choice of colour space for processing human skin tones. However, after seven versions of the control software, it was possible to minimize the occurrence of false positives through better use of filters combined with a well-dimensioned mask size (tested at run time), all associated with a programming logic refined over the construction of the seven versions. At the end of the development, the software met the established requirements. After its completion, the overall effectiveness of the successive programs was measured, in particular version V: 84.75%, version VI: 93.00% and version VII: 94.67%, showing that the final program performed well in interpreting gestures. This proved that it was possible to control the mobile robot through human gestures without external accessories, giving it better mobility and reducing the cost of maintaining such a system. The great merit of the program was its capacity to help demystify the man/machine relationship, since it uses an easy and intuitive interface for the control of mobile robots. Another important observation is that it is not necessary to be close to the mobile robot in order to control it: to control the equipment it is only necessary to receive the address that the Robotino passes to the program via the network or Wi-Fi.
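The convex hull / convexity-defect step described above is the core of this kind of hand-gesture recognizer. The sketch below shows one common way of implementing it with OpenCV's Python bindings (the thesis itself used C++); the YCrCb skin-tone bounds, the defect angle/depth cut-offs and the finger-counting heuristic are assumptions made for illustration, not the values used in the seven software versions.

```python
import cv2
import numpy as np

# Illustrative skin-tone bounds in the YCrCb colour space; the thesis tested
# several colour spaces and mask sizes, so these exact values are assumptions.
SKIN_LOWER = np.array([0, 135, 85], dtype=np.uint8)
SKIN_UPPER = np.array([255, 180, 135], dtype=np.uint8)

def count_extended_fingers(frame_bgr):
    """Rough gesture cue: count fingers via convex hull and convexity defects."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, SKIN_LOWER, SKIN_UPPER)
    mask = cv2.GaussianBlur(mask, (7, 7), 0)                 # smoothing filter for noise reduction

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)   # OpenCV 4.x return signature
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)                 # assume the largest blob is the hand

    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0

    fingers = 0
    for start_idx, end_idx, far_idx, depth in defects[:, 0]:
        start = hand[start_idx][0]
        end = hand[end_idx][0]
        far = hand[far_idx][0]
        # Angle at the defect point (cosine rule); a sharp angle between two hull
        # points usually corresponds to the valley between two extended fingers.
        a = np.linalg.norm(end - start)
        b = np.linalg.norm(far - start)
        c = np.linalg.norm(end - far)
        angle = np.degrees(np.arccos((b**2 + c**2 - a**2) / (2 * b * c + 1e-9)))
        if angle < 90 and depth / 256.0 > 20:                 # illustrative cut-offs
            fingers += 1
    return fingers + 1 if fingers else 0
```

In a full controller, the returned count would be mapped to one of the five commands sent to the Robotino over the network.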
Agronomic and bromatological performance and phenotypic stability of silage sorghum in Uberlândia - MG
Abstract:
Sorghum (Sorghum bicolor (L.) Moench) is a good alternative for silage, especially in places with water scarcity and high temperatures, owing to its morphological and physiological characteristics. Appropriate management, such as the ideal sowing time, affects both the productivity and the quality of the silage. The work was conducted with the objective of evaluating the agronomic and bromatological performance of varieties and hybrids of silage sorghum and their phenotypic stability in two growing periods, the main season and the off-season, in the city of Uberlândia, Minas Gerais. The experiments were performed at the Capim Branco Experimental Farm of the Federal University of Uberlândia (UFU), located in that city. There were two sowing dates in the same experimental area, off-season (March to June 2014) and main season (November 2014 to March 2015), and the varieties and hybrids were evaluated in both situations. The design was a randomized block with 25 treatments (hybrids and varieties of sorghum) and three replications. Agronomic and bromatological data were subjected to analysis of variance; means were grouped by the Scott-Knott test at 5% probability using the Genes computer program; and stability was estimated with the Annicchiarico method. The flowering of the cultivars, dry matter productivity, plant height, Acid Detergent Fiber (ADF), Neutral Detergent Fiber (NDF) and Crude Protein (CP) are affected by the environment and by the variety. Regarding productivity and fiber quality, the SF11 variety was superior, independently of the evaluated environment. Regarding the stability of dry matter performance, the varieties SF15, SF11, SF25, PROG 134 IPA, 1141572, 1141570 and 1141562 stood out. For the stability of fiber quality (ADF and NDF), the variety 1141562 stood out. The environment reduces the expression of the characters "days to flowering", "plant height" and "dry matter productivity" of the hybrids. Of the 25 hybrids analyzed for productivity and stability of dry matter performance, seven were highlighted regardless of the evaluated environment: the commercial hybrid Volumax and the experimental hybrids 12F39006, 12F39007, 12F37014, 12F39014, 12F38009 and 12F02006.
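The Annicchiarico stability method mentioned above ranks genotypes by a confidence (reliability) index computed from yields expressed as percentages of each environment's mean. The actual analysis was run in the Genes program; the minimal sketch below only illustrates the calculation, with made-up yield figures and the 75% confidence level (z ≈ 0.6745) commonly used with the method rather than a value stated in the abstract.

```python
import numpy as np

def annicchiarico_index(yields, z=0.6745):
    """Annicchiarico confidence index per genotype.

    yields: array of shape (n_genotypes, n_environments), e.g. dry matter
            yield of each cultivar in the off-season and main-season trials.
    z:      one-sided standard normal quantile; 0.6745 corresponds to the
            75% confidence level commonly used with this method.
    """
    yields = np.asarray(yields, dtype=float)
    env_mean = yields.mean(axis=0)                  # mean of each environment
    pct = 100.0 * yields / env_mean                 # yields as % of the environment mean
    return pct.mean(axis=1) - z * pct.std(axis=1, ddof=1)

# Illustrative (made-up) dry matter yields (t/ha) for three genotypes in
# the off-season and main-season environments.
data = [[12.1, 15.4],
        [10.8, 16.0],
        [11.5, 14.2]]
print(annicchiarico_index(data))  # higher index = more reliable performance
```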
Abstract:
In this dissertation we present the development and use of radiofrequency pulses simultaneously modulated in frequency, amplitude and phase (Strongly Modulated Pulses, SMP) to create initial states and execute unitary operations that serve as building blocks for quantum information processing using Nuclear Magnetic Resonance (NMR). The experimental implementations were carried out in a 3-qubit system formed by the nuclear spins of Caesium-133 (nuclear spin 7/2) in a liquid crystal sample in the nematic phase. The SMPs were constructed theoretically using a program developed specifically for this purpose, based on the Nelder-Mead Simplex numerical optimization method. Through this program, the SMPs were optimized to execute the desired logic operations in considerably shorter times than those obtained with the usual NMR procedure, that is, sequences of pulses and free evolutions. This has the advantage of reducing the decoherence effects arising from the relaxation of the system. The theoretical concepts involved in the creation of the SMPs are presented, and the main difficulties (experimental and theoretical) that may arise from the use of these procedures are discussed. As application examples, the pseudo-pure states used as initial states for logic operations in NMR were produced, as well as logic operations that were subsequently applied to them. Using the SMPs it was also possible to implement experimentally the Grover and Deutsch-Jozsa quantum algorithms for 3 qubits. The fidelity of the experimental implementations was determined using the experimental density matrices obtained with a previously developed density matrix tomography method.
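To give an idea of how such pulses are obtained, the sketch below optimizes a segmented single-qubit pulse toward a target gate with the Nelder-Mead simplex routine from SciPy, maximizing gate fidelity. It is only a toy version of the procedure: the dissertation optimized strongly modulated pulses for a 3-qubit spin-7/2 caesium system with a far more detailed Hamiltonian, so the segment count, Hamiltonian and parameter ranges here are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

# Pauli matrices and a single-qubit target gate (NOT), used only to
# illustrate the optimization loop.
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
TARGET = SX            # target unitary

N_SEGMENTS = 4
DT = 1.0               # duration of each pulse segment (arbitrary units)

def propagator(params):
    """Total propagator of a pulse made of N_SEGMENTS pieces, each with its
    own RF amplitude and phase (the parameters being optimized)."""
    amps = params[:N_SEGMENTS]
    phases = params[N_SEGMENTS:]
    U = np.eye(2, dtype=complex)
    for a, ph in zip(amps, phases):
        H = a * (np.cos(ph) * SX + np.sin(ph) * SY) / 2.0
        U = expm(-1j * H * DT) @ U
    return U

def infidelity(params):
    U = propagator(params)
    fid = abs(np.trace(TARGET.conj().T @ U)) / 2.0   # gate fidelity, 1 = perfect
    return 1.0 - fid

x0 = np.random.uniform(0, 1, 2 * N_SEGMENTS)          # random initial guess
result = minimize(infidelity, x0, method="Nelder-Mead",
                  options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-10})
print("final gate fidelity:", 1.0 - result.fun)
```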
Abstract:
Based on close examinations of instant message (IM) interactions, this chapter argues that an interactional sociolinguistic approach to computer-mediated language use could provide explanations for phenomena that previously could not be accounted for in computer-mediated discourse analysis (CMDA). Drawing on the theoretical framework of relational work (Locher, 2006), the analysis focuses on non-task-oriented talk and its function in forming and establishing communication norms in the team, as well as on micro-level phenomena such as hesitation, backchannel signals and emoticons. The conclusions of this preliminary research suggest that the linguistic strategies used to substitute for audio-visual signals are deployed strategically in discursive functions and play an important role in relational work.
Abstract:
This paper focuses on James March’s 1991 article on ‘Exploration and Exploitation in Organizational Learning’, which is now the seventh most highly cited paper in management and organisation studies. March’s paper is based on a computer program that simulates the collective and individual learning of a group of fifty individuals. The largely forgotten story that this paper re-calls is the real-life experiment that March, in large part, designed and conducted when he was the new ‘boy Dean’ of the School of Social Sciences in the University of California at Irvine between 1964 and 1969. Taken together, both stories illuminate important moments in the history of organisation studies. The comparison suggests that March’s model, which was probably the first simulation of an organisation learning, also worked to constitute rather than model the phenomenon.
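For readers unfamiliar with the simulation the chapter revisits, the sketch below is a stylized reconstruction of the kind of mutual-learning model March describes: an organizational code and fifty individuals hold beliefs about an external reality, individuals learn from the code at rate p1, and the code learns from individuals who currently know more than it does at rate p2. The parameter values and the simplified code-updating rule are assumptions for illustration, not March's exact 1991 specification.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 30      # number of belief dimensions
N = 50      # number of individuals (March simulated a group of fifty)
P1 = 0.1    # rate at which individuals learn from the organizational code
P2 = 0.5    # rate at which the code learns from superior individuals (simplified rule)
STEPS = 200

reality = rng.choice([-1, 1], size=M)
beliefs = np.zeros((N, M), dtype=int)          # individuals start uncommitted (0)
code = np.zeros(M, dtype=int)                  # so does the organizational code

def knowledge(b):
    """Proportion of dimensions on which beliefs match reality."""
    return np.mean(b == reality)

for _ in range(STEPS):
    # Socialization: individuals adopt the code's (nonzero) beliefs with prob. P1.
    for i in range(N):
        learn = (rng.random(M) < P1) & (code != 0)
        beliefs[i, learn] = code[learn]
    # Code learning: on each dimension, the code moves toward the majority belief
    # of individuals who currently know more than it does, with probability P2.
    superior = [i for i in range(N) if knowledge(beliefs[i]) > knowledge(code)]
    if superior:
        majority = np.sign(beliefs[superior].sum(axis=0))
        adopt = (rng.random(M) < P2) & (majority != 0)
        code[adopt] = majority[adopt]

print("code knowledge:", knowledge(code))
print("mean individual knowledge:", np.mean([knowledge(b) for b in beliefs]))
```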
Abstract:
This article presents a study analysing the difficulties teachers face in planning, coordinating and evaluating key competences in a sample of 23 schools. The topic has far-reaching implications, since poor educational practice regarding key competences can infringe one of the fundamental rights of students, namely to be assessed objectively (LODE: Art. 6b and RD 732/1995: Art. 13.1) and to be able to pass the assessments considered necessary to obtain the minimum academic qualification awarded by the Spanish state. The research was carried out from a twofold methodological perspective: first, it is a descriptive study in which we present the fundamental characteristics of the key competences and the basic regulations for their development and assessment. Second, we applied a two-pronged qualitative analysis procedure, using the Atlas-Ti program together with the reticular-categorial approach of social network analysis, supported by UCINET and the yEd Graph Editor viewer, to examine the main difficulties and obstacles detected. The results show that there are serious difficulties in the three dimensions analysed, the "planning", "coordination" and "evaluation" of key competences, especially regarding the need for teacher training, the assessment of the competences, the methodology for their development and the internal coordination processes required to achieve them in schools.
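To illustrate the reticular-categorial step of such an analysis, the sketch below builds a small network of co-occurring difficulty categories and computes a centrality measure. It uses networkx as a stand-in for the UCINET/yEd workflow actually used in the study, and the categories and co-occurrence counts are invented for the example.

```python
import networkx as nx

# Made-up co-occurrence counts between difficulty categories coded in the
# interviews (the study built these networks with UCINET and viewed them in
# yEd; networkx is used here only to illustrate the idea).
cooccurrences = [
    ("teacher training", "competence assessment", 12),
    ("teacher training", "development methodology", 9),
    ("competence assessment", "internal coordination", 7),
    ("development methodology", "internal coordination", 5),
]

G = nx.Graph()
for cat_a, cat_b, weight in cooccurrences:
    G.add_edge(cat_a, cat_b, weight=weight)

# Degree centrality gives a first indication of which difficulties are most
# entangled with the others across the coded interviews.
for category, centrality in sorted(nx.degree_centrality(G).items(),
                                   key=lambda kv: -kv[1]):
    print(f"{category}: {centrality:.2f}")
```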
Abstract:
In this talk, I will describe various computational modelling and data mining solutions that form the basis of how the office of Deputy Head of Department (Resources) works to serve you. These include lessons I learn about, and from, optimisation issues in resource allocation, uncertainty analysis on league tables, modelling the process of winning external grants, and lessons we learn from student satisfaction surveys, some of which I have attempted to inject into our planning processes.
Abstract:
Abstract Ordnance Survey, our national mapping organisation, collects vast amounts of high-resolution aerial imagery covering the entirety of the country. Currently, photogrammetrists and surveyors use this to manually capture real-world objects and characteristics for a relatively small number of features. Arguably, the vast archive of imagery that we have obtained portraying the whole of Great Britain is highly underutilised and could be ‘mined’ for much more information. Over the last year the ImageLearn project has investigated the potential of "representation learning" to automatically extract relevant features from aerial imagery. Representation learning is a form of data-mining in which the feature-extractors are learned using machine-learning techniques, rather than being manually defined. At the beginning of the project we conjectured that representations learned could help with processes such as object detection and identification, change detection and social landscape regionalisation of Britain. This seminar will give an overview of the project and highlight some of our research results.
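As a toy illustration of what it means to learn a feature extractor rather than define one by hand, the sketch below learns a compact patch-level representation of an image tile with PCA, whose codes could then feed a downstream detector or classifier. The random tile, the patch size and the use of PCA instead of the deeper models investigated in ImageLearn are assumptions made purely for brevity.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_extraction.image import extract_patches_2d

# Stand-in for an aerial image tile (grey-scale); ImageLearn worked with real
# Ordnance Survey imagery and richer models, so this is only a toy example.
rng = np.random.default_rng(42)
tile = rng.random((256, 256))

# Cut the tile into small patches and learn a compact representation of them,
# instead of hand-crafting edge/texture filters.
patches = extract_patches_2d(tile, (16, 16), max_patches=2000, random_state=0)
X = patches.reshape(len(patches), -1)

encoder = PCA(n_components=32).fit(X)       # the learned feature extractor
codes = encoder.transform(X)                # 32-dimensional code per patch

print(codes.shape)   # (2000, 32) -- features a detector or classifier could consume
```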
Abstract:
Abstract The World Wide Web Consortium, W3C, is known for standards like HTML and CSS, but there's a lot more to it than that: mobile, automotive, publishing, graphics, TV and more. Then there are horizontal issues like privacy, security, accessibility and internationalisation. Many of these assume that there is an underlying data infrastructure to power applications. In this session, W3C's Data Activity Lead, Phil Archer, will describe the overall vision for better use of the Web as a platform for sharing data and how that translates into recent, current and possible future work. What's the difference between using the Web as a data platform and as a glorified USB stick? Why does it matter? And what makes a standard a standard anyway? Speaker Biography: Phil Archer is Data Activity Lead at W3C, the industry standards body for the World Wide Web, coordinating W3C's work in the Semantic Web and related technologies. He is most closely involved in the Data on the Web Best Practices, Permissions and Obligations Expression and Spatial Data on the Web Working Groups. His key themes are interoperability through common terminology and URI persistence. As well as his work at the W3C, his career has encompassed broadcasting, teaching, linked data publishing, copy writing, and, perhaps incongruously, countryside conservation. The common thread throughout has been a knack for communication, particularly communicating complex technical ideas to a more general audience.
Abstract:
The generation of heterogeneous big data sources with ever increasing volumes, velocities and veracities over the last few years has inspired the data science and research community to address the challenge of extracting knowledge from big data. Such a wealth of generated data across the board can be intelligently exploited to advance our knowledge about our environment, public health, critical infrastructure and security. In recent years we have developed generic approaches to process such big data at multiple levels for advancing decision support. These specifically concern data processing with semantic harmonisation, low-level fusion, analytics, and knowledge modelling with high-level fusion and reasoning. Such approaches will be introduced and presented in the context of the TRIDEC project results on critical oil and gas industry drilling operations, and also of the ongoing large-scale eVacuate project on critical crowd behaviour detection in confined spaces.
Abstract:
Abstract: Decision support systems have been widely used for years in companies to gain insights from internal data and thus make successful decisions. Lately, thanks to the increasing availability of open data, these systems are also integrating open data to enrich the decision-making process with external data. On the other hand, within an open-data scenario, decision support systems can also be useful for deciding which data should be opened, considering not only technical or legal constraints but also other requirements, such as the "reusing potential" of the data. In this talk, we focus on both issues: (i) open data for decision making, and (ii) decision making for opening data. We will first briefly comment on some research problems regarding the use of open data for decision making. Then, we will give an outline of a novel decision-making approach (based on how open data is actually being used in open-source projects hosted on GitHub) for supporting open data publication. Bio of the speaker: Jose-Norberto Mazón holds a PhD from the University of Alicante (Spain). He is head of the "Cátedra Telefónica" on Big Data and coordinator of the Computing degree at the University of Alicante. He is also a member of the WaKe research group at the University of Alicante. His research work focuses on open data management, data integration and business intelligence within "big data" scenarios, and their application to the tourism domain (smart tourism destinations). He has published his research in international journals, such as Decision Support Systems, Information Sciences, Data & Knowledge Engineering and ACM Transactions on the Web. Finally, he is involved in the open data project at the University of Alicante, including its open data portal at http://datos.ua.es
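One crude way to picture the GitHub-based signal the talk alludes to is to count how many public repositories mention a candidate dataset, using GitHub's public search API. The query terms, the metric and the idea of comparing two datasets this way are illustrative assumptions, not the approach actually presented in the talk.

```python
import requests

def github_reuse_signal(dataset_keyword):
    """Crude proxy for the 'reusing potential' of a dataset: how many public
    GitHub repositories mention it. The approach outlined in the talk is more
    elaborate; this only illustrates the general idea."""
    resp = requests.get(
        "https://api.github.com/search/repositories",
        params={"q": dataset_keyword},
        headers={"Accept": "application/vnd.github+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["total_count"]

# Example: compare two hypothetical candidate datasets before deciding which to open first.
for keyword in ("air quality alicante", "bus timetable alicante"):
    print(keyword, "->", github_reuse_signal(keyword))
```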
Abstract:
Abstract: In the mid-1990s when I worked for a telecommunications giant I struggled to gain access to basic geodemographic data. It cost hundreds of thousands of dollars at the time to simply purchase a tile of satellite imagery from Marconi, and it was often cheaper to create my own maps using a digitizer and A0 paper maps. Everything from granular administrative boundaries to rights-of-way to points of interest and geocoding capabilities was either unavailable for the places I was working in throughout Asia or very limited. This data was controlled either by a government's census and statistical bureau or created by a handful of forward-thinking corporations. Twenty years on we find ourselves inundated with data (location and other) that we are challenged to amalgamate, and much of it still "dirty" in nature. Open data initiatives such as ODI give us great hope for how we might be able to share information together and capitalize not only on the crowdsourcing behavior but also on the implications for positive usage for the environment and for the advancement of humanity. We are already gathering and amassing a great deal of data and insight through excellent citizen science participatory projects across the globe. In early 2015, I delivered a keynote at the Data Made Me Do It conference at UC Berkeley, and in the preceding year an invited talk at the inaugural QSymposium. In gathering research for these presentations, I began to ponder on the effect that social machines (in effect, autonomous data collection subjects and objects) might have on social behaviors. I focused on studying the problem of data from various veillance perspectives, with an emphasis on the shortcomings of uberveillance, which included the potential for misinformation, misinterpretation, and information manipulation when context was entirely missing. As we build advanced systems that rely almost entirely on social machines, we need to ponder on the risks associated with following a purely technocratic approach where machines devoid of intelligence may one day dictate what humans do at the fundamental praxis level. What might be the fallout of uberveillance? Bio: Dr Katina Michael is a professor in the School of Computing and Information Technology at the University of Wollongong. She presently holds the position of Associate Dean – International in the Faculty of Engineering and Information Sciences. Katina is the IEEE Technology and Society Magazine editor-in-chief, and IEEE Consumer Electronics Magazine senior editor. Since 2008 she has been a board member of the Australian Privacy Foundation, and until recently was its Vice-Chair. Michael researches the socio-ethical implications of emerging technologies with an emphasis on an all-hazards approach to national security. She has written and edited six books, and guest edited numerous special journal issues on themes related to radio-frequency identification (RFID) tags, supply chain management, location-based services, innovation and surveillance/uberveillance for Proceedings of the IEEE, Computer and IEEE Potentials. Prior to academia, Katina worked for Nortel Networks as a senior network engineer in Asia, and also in information systems for OTIS and Andersen Consulting. She holds cross-disciplinary qualifications in technology and law.
Abstract:
Abstract Heading into the 2020s, Physics and Astronomy are undergoing experimental revolutions that will reshape our picture of the fabric of the Universe. The Large Hadron Collider (LHC), the largest particle physics project in the world, produces 30 petabytes of data annually that need to be sifted through, analysed, and modelled. In astrophysics, the Large Synoptic Survey Telescope (LSST) will be taking a high-resolution image of the full sky every 3 days, leading to data rates of 30 terabytes per night over ten years. These experiments endeavour to answer the question of why 96% of the content of the universe currently eludes our physical understanding. Both the LHC and LSST share the 5-dimensional nature of their data, with position, energy and time being the fundamental axes. This talk will present an overview of the experiments and the data they gather, and outline the challenges in extracting information. The strategies commonly employed are very similar to those of industrial data science problems (e.g., data filtering, machine learning, statistical interpretation) and provide a seed for the exchange of knowledge between academia and industry. Speaker Biography: Mark Sullivan is a Professor of Astrophysics in the Department of Physics and Astronomy. Mark completed his PhD at Cambridge and, following postdoctoral study in Durham, Toronto and Oxford, now leads a research group at Southampton studying dark energy using exploding stars called "type Ia supernovae". Mark has many years' experience of research that involves repeatedly imaging the night sky to track the arrival of transient objects, involving significant challenges in data handling, processing, classification and analysis.
Abstract:
Abstract Mandevillian intelligence is a specific form of collective intelligence in which individual cognitive vices (i.e., shortcomings, limitations, constraints and biases) are seen to play a positive functional role in yielding collective forms of cognitive success. In this talk, I will introduce the concept of mandevillian intelligence and review a number of strands of empirical research that help to shed light on the phenomenon. I will also attempt to highlight the value of the concept of mandevillian intelligence from a philosophical, scientific and engineering perspective. Inasmuch as we accept the notion of mandevillian intelligence, it seems that the cognitive and epistemic value of a specific social or technological intervention will vary according to whether our attention is focused at the individual or collective level of analysis. This has a number of important implications for how we think about the cognitive impacts of a number of Web-based technologies (e.g., personalized search mechanisms). It also forces us to take seriously the idea that the exploitation (or even the accentuation!) of individual cognitive shortcomings could, in some situations, provide a productive route to collective forms of cognitive and epistemic success. Speaker Biography: Dr Paul Smart is a senior research fellow in the Web and Internet Science research group at the University of Southampton in the UK. He is a Fellow of the British Computer Society, a professional member of the Association for Computing Machinery, and a member of the Cognitive Science Society. Paul's research interests span a number of disciplines, including philosophy, cognitive science, social science, and computer science. His primary area of research interest relates to the social and cognitive implications of Web and Internet technologies. Paul received his bachelor's degree in Psychology from the University of Nottingham. He also holds a PhD in Experimental Psychology from the University of Sussex.