941 results for Open Robot Project
Abstract:
There are now almost 700 Open Access policies around the world, two-thirds of them in universities and research institutes. These policies vary considerably in the conditions they lay down for authors and in their effectiveness. This briefing paper lays out the main issues that affect the effectiveness of a policy in providing high levels of Open Access research material.
Abstract:
Open Access (OA) policies have been adopted at the national, institutional and funder levels in the UK, and various infrastructural support mechanisms are available to facilitate open access. In July 2012, following an independent study on ‘Accessibility, sustainability, excellence: how to expand access to research publications’, the UK Government announced its OA policy. The Government’s policy determines that ‘support for publication in open access or hybrid journals, funded by Article Processing Charges (APCs), [i]s the main vehicle for the publication of research’. At the same time that the UK Government announced its OA policy, the UK’s major research funder, the Research Councils UK (RCUK), revised its OA policy, announcing its ‘preference for immediate [Gold] Open Access with the maximum opportunity for re-use’. In March 2014, the UK Funding Councils announced their OA policy for the post-2014 Research Excellence Framework (REF). The policy requires the deposit of peer-reviewed articles and conference proceedings in repositories (Green OA) and applies from 1 April 2016. By and large, two distinct OA routes are being promoted: by the UK Government and RCUK (Gold OA) and by the Funding Councils (Green OA). This scenario requires continued efforts to ensure that advice and support are provided to universities, academic libraries and researchers on the distinct OA policies and on policy compliance. UK research institutions have been adopting OA policies since as early as 2003, and there are currently 85 institutional OA policies. Although distinct OA policies have been adopted by policymakers, national research funders and research institutions, the UK’s movement towards OA has been the result of stakeholders’ coordinated efforts and is considered a case of good practice.
Abstract:
Phase 4: Review of the conditions under which individual services and platforms can be sustained. On Tuesday 1 October 2013, in Bristol, United Kingdom, Knowledge Exchange brought together a group of international Open Access service providers to discuss the sustainability of their services. A number of recurring lessons learned were mentioned. Though project funding can be used to start up a service, it does not guarantee the continuation of that service, and it can be hard to establish the service as a viable entity standing on its own feet. Research funders should be aware that if they have policies or mandates for making research outputs available, they will eventually also be responsible for ongoing support of the underlying infrastructure. At present some services are used globally, but the costs are covered by only a limited geographic spread, sometimes a handful of institutions or a single country. Finding other funding sources can be challenging; various routes were mentioned, including commercial partnerships, memberships, offering additional paid services or using a freemium model, and there is no one model that will fit all. As more services turn to library sponsorship to sustain them, one strategy might be to bundle the requests and approach a group of research and infrastructure funders or institutions (and others) with a package, rather than each service going through the same resource-consuming process of soliciting funding. This would also allow the community to identify gaps, dependencies and overlap in the services. The possibility of setting up an organisation to bundle the services was discussed, and a number of risks were identified.
Abstract:
SANTANA, André M.; SOUZA, Anderson A. S.; BRITTO, Ricardo S.; ALSINA, Pablo J.; MEDEIROS, Adelardo A. D. Localization of a mobile robot based on odometry and natural landmarks using extended Kalman Filter. In: INTERNATIONAL CONFERENCE ON INFORMATICS IN CONTROL, AUTOMATION AND ROBOTICS, 5., 2008, Funchal, Portugal. Proceedings... Funchal, Portugal: ICINCO, 2008.
Abstract:
This article concerns how higher education institutions across the United Kingdom are implementing systems and workflows in order to meet open access requirements for the next Research Excellence Framework. The way that institutions are preparing is not uniform, although there are key areas which require attention: cost management, advocacy, systems and metadata, structural workflows, and internal policy. Examples of preparative work in these areas are taken from institutions that have participated in the Open Access Good Practice initiative supported by Jisc.
Abstract:
This study discusses the importance of creating Open Innovation (OI) teams for optimizing the costs of Research and Development (R&D), dividing risks and maximizing profits. The purpose of this study is to determine the team characteristics beneficial for a successful OI project, with emphasis on the fact that such teams are formed of professionals belonging to different organizations, both private and state-owned, with different educational and professional backgrounds and personal qualities. This purpose is supported by three sub-objectives: to observe the phenomenon of OI and its implementation in emerging economies, particularly in Russia; to specify the professional and personal competencies of OI team members essential for successful collaboration; and to identify the role of the leader in OI teams. The theoretical part of this study consists of knowledge from the academic literature related to OI, cross-functional and innovation teams, and leadership in innovation. The practical part of the study takes the form of a multiple case study, and the empirical research is based on six semi-structured interviews collected in October 2014 from the CEOs, Innovation Managers and Technical Directors of innovation companies participating actively in OI projects. The findings of the study demonstrate that many of the necessary competencies, such as professionalism and communication skills, are the same for innovation or cross-functional teams and OI teams. However, due to the specific nature of OI, additional personal characteristics were recognized as beneficial for OI teams, such as flexibility, empathy and success-orientation. The role of the leader is also considered a critical success factor for OI teams, with an emphasis on flexibility and autonomy. The findings contribute to understanding the connection between the notions of team member, team climate and team leader, and their influence on OI project success. Thus, the study supports existing knowledge on OI teams and develops new insights into this newly emerged topic.
Abstract:
The first version of this text was presented in the “Philosophy of Communication” section at the ECREA’s 5th European Communication Conference, “Communication for Empowerment,” in Lisbon in November 2014. I would like to thank the audience for the lively post-presentation discussion.
Abstract:
POSTDATA is a 5-year European Research Council (ERC) Starting Grant project that started in May 2016 and is hosted by the Universidad Nacional de Educación a Distancia (UNED), Madrid, Spain. The context of the project is the corpora of European Poetry (EP), with a special focus on poetic materials from different languages and literary traditions. POSTDATA aims to offer a standardized model in the philological field and a metadata application profile (MAP) for EP in order to build a common classification of all these poetic materials. The information of the Spanish, Italian and French repertoires will be published in the Linked Open Data (LOD) ecosystem. Later we expect to extend the model to include additional corpora. There are a number of Web Based Information Systems (WIS) in Europe with repertoires of poems available for human consumption but not in an appropriate condition to be accessible and reusable by the Semantic Web. These systems are not interoperable; they are in fact locked in their databases and proprietary software, not suitable to be linked in the Semantic Web. A way to make this data interoperable is to develop a MAP so that this data can be published in the LOD ecosystem, along with new data that will be created and modeled based on this MAP. Creating a common data model for EP is not simple, since the existing data models are based on conceptualizations and terminology belonging to their own poetical traditions, and each tradition has developed an idiosyncratic analytical terminology in a different and independent way for years. The result of this uncoordinated evolution is a set of varied terminologies to explain analogous metrical phenomena across the different poetic systems, whose correspondences have hardly been studied (see examples in González-Blanco & Rodríguez, 2014a and b). This work has to be done by domain experts before the modeling actually starts. On the other hand, the development of a MAP is a complex task, and it is imperative to follow a method for this development. In recent years Curado Malta & Baptista (2012, 2013a, 2013b) have been studying the development of MAPs in a Design Science Research (DSR) methodological process in order to define a method for the development of MAPs (see Curado Malta (2014)). The output of this DSR process was a first version of a method for the development of Metadata Application Profiles (Me4MAP) (paper to be published). The DSR process is now in the validation phase of the Relevance Cycle to validate Me4MAP. The development of this MAP for poetry will follow the guidelines of Me4MAP, and this development will be used to validate Me4MAP. The final goal of the POSTDATA project is: i) to publish all the data locked in the WIS as LOD, where any interested agent will be able to build applications over the data in order to serve final users; ii) to build a Web platform where: a) researchers, students and other final users interested in EP will be able to access poems (and their analyses) from all databases; b) researchers, students and other final users will be able to upload poems and the digitized images of manuscripts, and fill in the information concerning the analysis of the poem, collaboratively contributing to a LOD dataset of poetry.
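To illustrate the kind of publication the abstract above describes, the following is a minimal sketch, in Python with rdflib, of expressing a poem record as RDF for the Linked Open Data ecosystem. The namespace, class and property names are illustrative assumptions, not the actual POSTDATA metadata application profile.

```python
# A minimal sketch (not the actual POSTDATA model) of how a poem record
# might be expressed as RDF for publication as Linked Open Data.
# The namespace, class and property names below are illustrative assumptions.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/poetry/")          # hypothetical namespace
g = Graph()
g.bind("ex", EX)

poem = URIRef("http://example.org/poetry/poem/123")   # hypothetical identifier
g.add((poem, RDF.type, EX.Poem))
g.add((poem, EX.title, Literal("Soneto XXIII", lang="es")))
g.add((poem, EX.metricalScheme, Literal("hendecasyllabic sonnet")))
g.add((poem, EX.literaryTradition, Literal("Spanish")))

# Serialising as Turtle yields a document that LOD-aware agents can consume.
print(g.serialize(format="turtle"))
```

A shared application profile would fix which classes and properties are allowed, so that records from the Spanish, Italian and French repertoires become directly comparable once published this way.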
Abstract:
This article describes the Robot Vision challenge, a competition that evaluates solutions to the visual place classification problem. Since its origin, this challenge has been proposed as a common benchmark in which proposals from around the world are measured using a common overall score. Each new edition of the competition has introduced novelties, both in the type of input data and in the sub-objectives of the challenge. All the techniques used by the participants have been gathered and published to make them accessible for future developments. The legacy of the Robot Vision challenge includes data sets, benchmarking techniques, and a wide experience in place classification research that is reflected in this article.
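As a rough illustration of what an overall score for visual place classification can look like, here is a small Python sketch. The reward, penalty and abstention values are assumptions chosen for the example, not the official rules of the Robot Vision challenge.

```python
# A minimal sketch of an overall-score computation of the kind used to rank
# entries in place-classification challenges. The exact reward/penalty values
# are assumptions, not the official Robot Vision challenge rules.
def overall_score(predictions, ground_truth,
                  reward=1.0, penalty=-0.5, abstain=0.0):
    """predictions: list of predicted labels (None = frame left unclassified)."""
    score = 0.0
    for pred, truth in zip(predictions, ground_truth):
        if pred is None:
            score += abstain          # no answer given for this frame
        elif pred == truth:
            score += reward           # correct place label
        else:
            score += penalty          # wrong place label
    return score

print(overall_score(["corridor", None, "office"],
                    ["corridor", "kitchen", "lab"]))   # 1.0 + 0.0 - 0.5 = 0.5
```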
Abstract:
In this article we describe a semantic localization dataset for indoor environments named ViDRILO. The dataset provides five sequences of frames acquired with a mobile robot in two similar office buildings under different lighting conditions. Each frame consists of a point cloud representation of the scene and a perspective image. The frames in the dataset are annotated with the semantic category of the scene, as well as with the presence or absence of a list of predefined objects appearing in the scene. In addition to the frames and annotations, the dataset is distributed with a set of tools for its use in both place classification and object recognition tasks. The large number of labeled frames, in conjunction with the annotation scheme, makes this dataset different from existing ones. The ViDRILO dataset is released for use as a benchmark for different problems such as multimodal place classification and object recognition, 3D reconstruction or point cloud data compression.
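The following Python sketch shows one assumed way to represent an annotated frame of the kind the abstract describes (point cloud, perspective image, scene category, object-presence flags); the field names and layout are illustrative, not the actual ViDRILO distribution format.

```python
# A minimal sketch (assumed layout, not the actual ViDRILO distribution format)
# of how one annotated frame in such a dataset could be represented.
from dataclasses import dataclass
from typing import Dict, List
import numpy as np

@dataclass
class Frame:
    point_cloud: np.ndarray            # (N, 3) XYZ points for the scene
    image: np.ndarray                  # (H, W, 3) perspective RGB image
    scene_category: str                # e.g. "corridor", "office"
    objects_present: Dict[str, bool]   # predefined object list -> presence flag

def category_histogram(frames: List[Frame]) -> Dict[str, int]:
    """Count frames per semantic category, e.g. to check class balance
    before training a place classifier."""
    counts: Dict[str, int] = {}
    for f in frames:
        counts[f.scene_category] = counts.get(f.scene_category, 0) + 1
    return counts
```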
Abstract:
Our proposal aims to present the analysis techniques and methodologies, as well as the most relevant results expected within the Exhibitium project framework (http://www.exhibitium.com). Funded by the BBVA Foundation, the Exhibitium project is being developed by an international consortium of several research groups. Its main purpose is to build a comprehensive and structured data repository about temporary art exhibitions, captured from the web, to make them useful and reusable in various domains through open and interoperable data systems.
Abstract:
A camera maps 3-dimensional (3D) world space to a 2-dimensional (2D) image space. In the process it loses the depth information, i.e., the distance from the camera focal point to the imaged objects. It is impossible to recover this information from a single image. However, by using two or more images from different viewing angles this information can be recovered, which in turn can be used to obtain the pose (position and orientation) of the camera. Using this pose, a 3D reconstruction of the imaged objects in the world can be computed. Numerous algorithms have been proposed and implemented to solve this problem; they are commonly called Structure from Motion (SfM). State-of-the-art SfM techniques have been shown to give promising results. However, unlike a Global Positioning System (GPS) or an Inertial Measurement Unit (IMU), which directly give the position and orientation respectively, the camera system estimates them by running SfM as described above. This makes the pose obtained from a camera highly sensitive to the images captured and to other effects, such as low lighting conditions, poor focus or improper viewing angles. In some applications, for example an Unmanned Aerial Vehicle (UAV) inspecting a bridge or a robot mapping an environment using Simultaneous Localization and Mapping (SLAM), it is often difficult to capture images under ideal conditions. This report examines the use of SfM methods in such applications and the role of combining multiple sensors, viz., sensor fusion, to achieve more accurate and usable position and reconstruction information. The project investigates the role of sensor fusion in accurately estimating the pose of a camera for the application of 3D reconstruction of a scene. The first set of experiments is conducted in a motion capture room; these results are taken as ground truth in order to evaluate the strengths and weaknesses of each sensor and to map their coordinate systems. Then a number of scenarios are targeted where SfM fails. The pose estimates obtained from SfM are replaced by those obtained from other sensors, and the 3D reconstruction is completed. Quantitative and qualitative comparisons are made between the 3D reconstruction obtained using only a camera and that obtained using the camera along with a LIDAR and/or an IMU. Additionally, the project addresses the performance issues faced when handling large data sets of high-resolution images by implementing the system on the Superior high-performance computing cluster at Michigan Technological University.
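As a concrete illustration of the two-view geometry that SfM builds on, the sketch below uses OpenCV to match features between two images and recover the relative camera pose from the essential matrix. The image file names and the intrinsic matrix K are placeholders, and this is a generic sketch rather than the report's actual pipeline.

```python
# A minimal sketch of the two-view step at the heart of SfM: matching features
# between two images and recovering the relative camera pose (R, t) from the
# essential matrix. File names and the intrinsic matrix K are placeholders.
import cv2
import numpy as np

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)   # placeholder images
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)
K = np.array([[700.0, 0.0, 320.0],                     # assumed intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

# Detect and match ORB features between the two views.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Essential matrix with RANSAC, then decompose it into rotation and translation.
# Translation is recovered only up to scale, which is one reason fusing GPS,
# IMU or LIDAR measurements, as discussed above, helps anchor the reconstruction.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("Relative rotation:\n", R, "\nTranslation direction:\n", t)
```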