9 results for lean implementation time

at Universidad de Alicante


Relevance:

30.00%

Publisher:

Abstract:

Self-organising neural models have the ability to provide a good representation of the input space. In particular, the Growing Neural Gas (GNG) is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation. However, this type of learning is time-consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process so that it completes within a predefined time. This paper proposes a Graphics Processing Unit (GPU) parallel implementation of the GNG with Compute Unified Device Architecture (CUDA). In contrast to existing algorithms, the proposed GPU implementation accelerates the learning process while keeping a good quality of representation. Comparative experiments using iterative, parallel and hybrid implementations are carried out to demonstrate the effectiveness of the CUDA implementation. The results show that GNG learning with the proposed implementation achieves a speed-up of 6× compared with the single-threaded CPU implementation. The GPU implementation has also been applied, in order to validate the proposal, to a real application with time constraints: the acceleration of 3D scene reconstruction for egomotion.
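
As a rough illustration of the learning loop that the paper accelerates on the GPU, the following is a minimal single-threaded Python sketch of one GNG adaptation step. The parameter values and data layout are illustrative only and are not taken from the paper.

```python
import numpy as np

def gng_step(x, units, error, edges, age, eps_w=0.05, eps_n=0.006, max_age=50):
    """One adaptation step of Growing Neural Gas for a single input sample x.

    units : (N, d) array of unit positions
    error : (N,) accumulated error per unit
    edges : set of frozenset({i, j}) topological connections
    age   : dict mapping each edge to its age
    """
    # 1. Find the two units nearest to the input sample.
    d2 = np.sum((units - x) ** 2, axis=1)
    s1, s2 = np.argsort(d2)[:2]

    # 2. Accumulate squared error on the winner.
    error[s1] += d2[s1]

    # 3. Move the winner and its topological neighbours towards x,
    #    and age every edge emanating from the winner.
    units[s1] += eps_w * (x - units[s1])
    for e in list(edges):
        if s1 in e:
            n = next(iter(e - {s1}))
            units[n] += eps_n * (x - units[n])
            age[e] += 1

    # 4. Connect (or refresh) the edge between the two nearest units.
    e12 = frozenset({int(s1), int(s2)})
    edges.add(e12)
    age[e12] = 0

    # 5. Remove edges that grew too old.
    for e in [e for e in edges if age[e] > max_age]:
        edges.discard(e)
        age.pop(e, None)

    return units, error, edges, age

# Example: adapt 20 random 3-D units to one random sample.
units = np.random.rand(20, 3)
state = gng_step(np.random.rand(3), units, np.zeros(20), set(), {})
```

Periodic unit insertion and the removal of isolated units are omitted here; the per-sample nearest-unit search and neighbour updates shown above are the part of the loop that lends itself to parallelisation across units on a GPU.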

Relevance:

30.00%

Publisher:

Abstract:

Feature vectors can be anything from simple surface normals to more complex feature descriptors. Feature extraction is important for solving various computer vision problems, e.g. registration, object recognition and scene understanding. Most of these techniques cannot be computed online due to their complexity and the context in which they are applied, so computing these features in real time for many points in the scene is not feasible. In this work, a hardware-based implementation of 3D feature extraction and 3D object recognition is proposed to accelerate these methods and therefore the entire pipeline of RGB-D-based computer vision systems in which such features are typically used. Using a GPU as a general-purpose processor can achieve considerable speed-ups compared with a CPU implementation. In this work, advantageous results are obtained using the GPU to accelerate the computation of a 3D descriptor based on the calculation of 3D semi-local surface patches of partial views. This allows descriptor computation at several points of a scene in real time. The benefits of the accelerated descriptor have been demonstrated in object recognition tasks. The source code will be made publicly available as a contribution to the open-source Point Cloud Library.
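
The abstract does not specify the descriptor beyond "semi-local surface patches", so the following Python sketch only illustrates the general pattern of such a descriptor: gather the points around a keypoint, express them in a local reference frame derived from the surface normal, and bin them into a fixed-size signature. All names, radii and bin counts are placeholders, not values from the paper.

```python
import numpy as np

def local_patch_descriptor(cloud, keypoint, radius=0.05, bins=8):
    """Toy semi-local surface-patch descriptor for one keypoint of a point cloud.

    cloud    : (N, 3) array of 3D points
    keypoint : (3,) query point
    Returns a flattened (bins*bins,) histogram describing the local patch.
    """
    # 1. Collect the neighbourhood of the keypoint.
    diff = cloud - keypoint
    mask = np.einsum('ij,ij->i', diff, diff) < radius ** 2
    patch = diff[mask]
    if len(patch) < 3:
        return np.zeros(bins * bins)

    # 2. Build a local reference frame from the patch covariance
    #    (the eigenvector of the smallest eigenvalue approximates the normal).
    cov = patch.T @ patch / len(patch)
    _, vecs = np.linalg.eigh(cov)
    frame = vecs[:, ::-1]            # columns: tangent, tangent, normal
    local = patch @ frame

    # 3. Histogram the tangent-plane coordinates into a fixed-size signature.
    hist, _, _ = np.histogram2d(local[:, 0], local[:, 1],
                                bins=bins, range=[[-radius, radius]] * 2)
    return (hist / hist.sum()).ravel()
```

In a GPU pipeline this kind of computation would be evaluated at many keypoints in parallel; a CPU reference like this one is mainly useful for checking the accelerated output.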

Relevance:

30.00%

Publisher:

Abstract:

Paper submitted to the XVIII Conference on Design of Circuits and Integrated Systems (DCIS), Ciudad Real, Spain, 2003.

Relevance:

30.00%

Publisher:

Abstract:

Paper submitted to the 10th IEEE International Conference on Electronics, Circuits and Systems (ICECS), Sharjah, United Arab Emirates, 2003.

Relevance:

30.00%

Publisher:

Abstract:

The development of applications and services for mobile systems faces a wide range of devices with very heterogeneous capabilities whose response times are difficult to predict. The research described in this work aims to address this issue by developing a computational model that formalises the problem and defines adaptive computing methods. The proposal combines imprecise-computation strategies with cloud computing paradigms in order to provide flexible implementation frameworks for embedded and mobile devices. The resulting imprecise-computation scheduling method decides, for the workload of the embedded system, when to move computation to the cloud according to the priority and response time of the tasks to be executed, so that the desired productivity and quality of service can be met. A technique to estimate network delays and schedule tasks more accurately is presented, together with an application example in which the technique is tested in running contexts with heterogeneous workloads in order to check the validity of the proposed model.
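
The abstract does not give the scheduling rule itself, so the sketch below only illustrates the kind of decision such a scheduler has to make for each task: deliver the full result from the cloud when the estimated network delay still fits inside the deadline, otherwise run the task locally, and degrade to the mandatory part only (the core idea of imprecise computation) when even that is too slow. All field names, times and the preference order are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline: float         # seconds until the result is needed
    local_mandatory: float  # local run time of the mandatory part
    local_optional: float   # extra local run time of the optional (refinement) part
    cloud_time: float       # run time of the full task on the cloud back-end

def schedule(task: Task, est_network_delay: float) -> str:
    """Pick where/how to run one task under an imprecise-computation policy.

    Preference order (an assumption, not the paper's rule):
    full result from the cloud > full result locally > mandatory part only.
    """
    cloud_response = task.cloud_time + est_network_delay
    if cloud_response <= task.deadline:
        return "offload full task to cloud"
    if task.local_mandatory + task.local_optional <= task.deadline:
        return "run full task locally"
    if task.local_mandatory <= task.deadline:
        return "run mandatory part only (imprecise result)"
    return "reject: deadline cannot be met"

# Example: a 120 ms deadline with an 80 ms estimated round-trip delay.
print(schedule(Task("feature_match", 0.120, 0.090, 0.060, 0.030), 0.080))
```

The quality of the decision hinges on the network-delay estimate, which is why the paper pairs the scheduler with a delay-estimation technique.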

Relevance:

30.00%

Publisher:

Abstract:

In this project, we propose the implementation of a 3D object recognition system optimized to operate under demanding time constraints. The system must be robust, so that objects can be recognized properly in poor lighting conditions and in cluttered scenes with significant levels of occlusion. An important requirement must be met: the system must exhibit reasonable performance on a low-power mobile GPU computing platform (NVIDIA Jetson TK1), so that it can be integrated into mobile robotics systems, ambient intelligence or ambient assisted living applications. The acquisition system is based on color and depth (RGB-D) data streams provided by low-cost 3D sensors such as the Microsoft Kinect or PrimeSense Carmine. The range of algorithms and applications to be implemented and integrated is quite broad, ranging from acquisition, outlier removal and filtering of the input data, through segmentation and characterization of regions of interest in the scene, to the object recognition and pose estimation itself. Furthermore, in order to validate the proposed system, we will create a 3D object dataset. It will be composed of a set of 3D models reconstructed from common household objects, as well as a handful of test scenes in which those objects appear. The scenes will feature different levels of occlusion, diverse distances from the elements to the sensor and variations in the pose of the target objects. The creation of this dataset implies the additional development of 3D data acquisition and 3D object reconstruction applications. The resulting system has many possible applications, ranging from mobile robot navigation and semantic scene labeling to human-computer interaction (HCI) systems based on visual information.
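
As a very rough skeleton of the stages listed in the project (acquisition, filtering, segmentation, description, recognition and pose estimation), the following Python sketch back-projects a depth frame into a point cloud and applies a simple pass-through filter; the later stages are only marked as placeholders, and the camera intrinsics and thresholds are illustrative values, not figures from the project.

```python
import numpy as np

def depth_to_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a depth image (metres) into an unorganised point cloud.
    The intrinsics are typical Kinect-style placeholders."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.dstack((x, y, z)).reshape(-1, 3)

def remove_outliers(cloud, z_min=0.4, z_max=4.0):
    """Drop invalid or far-away points (a pass-through filter; a statistical
    outlier-removal step would normally follow)."""
    z = cloud[:, 2]
    return cloud[(z > z_min) & (z < z_max)]

def recognition_pipeline(depth_frame):
    """Skeleton of the pipeline stages named in the project description."""
    cloud = depth_to_cloud(depth_frame)
    cloud = remove_outliers(cloud)
    # Segmentation of regions of interest, keypoint/descriptor extraction,
    # matching against the 3D model dataset and pose estimation would be
    # plugged in here.
    return cloud

# Example with a synthetic 480x640 depth frame around 1.5 m.
frame = np.full((480, 640), 1.5) + 0.01 * np.random.randn(480, 640)
print(recognition_pipeline(frame).shape)
```

On the target Jetson TK1 platform, the per-point stages of such a pipeline are the natural candidates for GPU acceleration.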

Relevance:

30.00%

Publisher:

Abstract:

Although it is known that the current Spanish education system promotes the use of the Communicative Approach to teach foreign languages in schools, other recently designed approaches are also used to help students improve their skills when communicating in a foreign language. One of these approaches is Content and Language Integrated Learning, also known as CLIL, which is used to teach content courses using English as the language of instruction. This approach improves students' skills in English at the same time as they learn content from other areas. The goal of this thesis is to present a research project carried out at the University of Alicante during the academic year 2011-2012. The research yielded quantitative and qualitative data explaining how the use of the CLIL methodology affects the English level of students in the "Didactics of the English Language in Preschool Education" course of the Preschool Education Teacher Undergraduate Program as they acquire the contents of the course.

Relevance:

30.00%

Publisher:

Abstract:

The construction industry has long been considered a highly fragmented and non-collaborative industry. This fragmentation stems from the complex and unstructured traditional coordination processes and information exchanges amongst all parties involved in a construction project. This nature, coupled with risk and uncertainty, has pushed clients and their supply chains to search for new ways of improving their business processes in order to deliver better quality and higher-performing products. This research investigates the need to implement a Digital Nervous System (DNS), analogous to a biological nervous system, for the flow and management of digital information across the project lifecycle. This will be done through direct examination of the key processes and information produced in a construction project, and of how a DNS can provide a well-integrated flow of digital information throughout the project lifecycle. The research also investigates how a DNS can create a tight digital feedback loop that enables the organisation to sense, react and adapt to changing project conditions. A Digital Nervous System is a digital infrastructure that provides a well-integrated flow of digital information to the right part of the organisation at the right time, giving the organisation the relevant and up-to-date information it needs on critical project issues to aid near-real-time decision-making. A literature review and survey questionnaires were used in this research to collect and analyse data about the information management problems of the industry, e.g. disruption and discontinuity of digital information flow due to interoperability issues, disintegration/fragmentation of the adopted digital solutions, and paper-based transactions. Analysis of the results revealed that efficient and effective information management requires the creation and implementation of a DNS.

Relevance:

30.00%

Publisher:

Abstract:

In this study, a methodology based on a dynamical framework is proposed to incorporate additional sources of information into normalized difference vegetation index (NDVI) time series of agricultural observations for a phenological state estimation application. The proposed implementation is based on the particle filter (PF) scheme, which is able to integrate multiple sources of data. Moreover, the dynamics-led design is able to perform real-time (online) estimation, i.e., without having to wait until the end of the campaign. The algorithm is evaluated by estimating the phenological states of a set of rice fields in Seville (SW Spain). A Landsat-5/7 NDVI image series is complemented with two distinct sources of information: SAR images from the TerraSAR-X satellite and air temperature measurements from a ground-based station. An improvement in the overall estimation accuracy is obtained, especially when the NDVI time series is incomplete. The sensitivity to different development intervals and the mitigation of discontinuities in the time series are also evaluated, demonstrating the benefits of this dynamic-systems-based data fusion approach.
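
The abstract does not detail the state-transition or observation models, so the sketch below only shows the generic bootstrap particle-filter update that such a scheme builds on: propagate the particles with a phenology dynamics model, weight them by the likelihood of whichever observations are available on a given date, and resample. The growth model, likelihoods and sensor names here are placeholders, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_update(particles, weights, observations, likelihoods, growth=1.0, noise=0.5):
    """One bootstrap particle-filter step for a scalar phenological state.

    particles    : (N,) current state samples
    weights      : (N,) current normalised weights
    observations : dict of sensor name -> measured value (may be incomplete)
    likelihoods  : dict of sensor name -> p(obs | state) function
    """
    # 1. Propagate with a simple growth model plus process noise (placeholder).
    particles = particles + growth + noise * rng.standard_normal(len(particles))

    # 2. Weight by every observation actually available on this date;
    #    missing sources (e.g. a cloudy NDVI acquisition) are simply skipped.
    for name, value in observations.items():
        weights = weights * likelihoods[name](value, particles)
    weights = weights / weights.sum()

    # 3. Systematic resampling to avoid weight degeneracy.
    cum = np.cumsum(weights)
    cum[-1] = 1.0                      # guard against floating-point round-off
    positions = (rng.random() + np.arange(len(particles))) / len(particles)
    idx = np.searchsorted(cum, positions)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Example: fuse two of the three sources with Gaussian likelihoods.
gauss = lambda obs, x: np.exp(-0.5 * ((obs - x) / 1.0) ** 2)
p = rng.uniform(0, 10, 500)
w = np.full(500, 1 / 500)
p, w = pf_update(p, w, {"ndvi_state": 5.2, "sar_state": 4.8},
                 {"ndvi_state": gauss, "sar_state": gauss})
print(p.mean())
```

Because each available source simply multiplies into the particle weights, dates with missing NDVI observations still produce an estimate from the remaining sources, which is the behaviour the abstract highlights for incomplete time series.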