11 results for The ISTRION platform
at Universidad de Alicante
Abstract:
This paper analyzes the learning experiences and opinions of a group of undergraduate students in a course on Robotics. The contents of the course were taught as a set of seminars. In each seminar, the students learned interdisciplinary knowledge from computer science, control engineering, electronics, and other fields related to Robotics. The aim of the course is for students to design and implement their own custom robotic solution for a series of tests planned by the teachers. These tests measure the behavior and mechatronic features of the students' robots. Finally, the students' robots face each other in several competitions. In this paper, the low-cost robotic architecture used by the students, the contents of the course, the tests used to compare the students' solutions, and the students' opinions are discussed in detail.
Abstract:
Data mining is one of the most important analysis techniques for automatically extracting knowledge from large amounts of data. Nowadays, data mining is based on low-level specifications of the employed techniques, typically bound to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to treat it as a true software-engineering process. Bearing this situation in mind, we propose a model-driven approach based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (deployed via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on understanding the analysis problem via conceptual data-mining models instead of wasting effort on low-level programming tasks related to the underlying platform's technical details. These time-consuming tasks are now entrusted to the model-transformation scaffolding. The feasibility of our approach is shown by means of a hypothetical data-mining scenario where a time-series analysis is required.
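The model-driven idea above can be pictured with a toy transformation: a platform-independent mining model is mechanically turned into both the warehouse table holding the data under analysis and a platform-specific analysis specification. The following Python sketch is purely illustrative; the metamodel fields, function names, and generated outputs are hypothetical and do not reproduce the authors' actual framework.

```python
# Hypothetical sketch of the model-driven approach described above: one
# conceptual (platform-independent) data-mining model is transformed into
# (a) DDL for the data under analysis and (b) a platform-specific spec.

CONCEPTUAL_MODEL = {
    "analysis": "time_series",                # kind of mining task
    "subject": "monthly_sales",               # what is being analysed
    "attributes": [("month", "DATE"), ("amount", "DECIMAL(10,2)")],
    "target": "amount",
}

def to_warehouse_ddl(model):
    """Transformation 1: derive the table holding the data under analysis."""
    cols = ",\n  ".join(f"{name} {sqltype}" for name, sqltype in model["attributes"])
    return f"CREATE TABLE {model['subject']} (\n  {cols}\n);"

def to_platform_spec(model):
    """Transformation 2: derive a (toy) platform-specific mining configuration."""
    return {
        "algorithm": "ARIMA" if model["analysis"] == "time_series" else "generic",
        "input_table": model["subject"],
        "target_column": model["target"],
    }

if __name__ == "__main__":
    print(to_warehouse_ddl(CONCEPTUAL_MODEL))
    print(to_platform_spec(CONCEPTUAL_MODEL))
```

The point of the sketch is the division of labour: the analyst edits only the conceptual model, and both generated artifacts stay consistent with it by construction.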
Abstract:
The subject Construction of Structures I studies, from a constructive point of view and taking current legislation into account, the reinforced concrete structures used in buildings, through the acquisition of the knowledge and construction criteria required in the profession of Technical Architect. The contents acquired in this course are essential for the further professional development of technicians and are closely related to many of the subjects taught in the same or other courses of the Degree in Technical Architecture at the University of Alicante. The aim of this paper is to present, analyze, and discuss the development of a new methodology proposed for this subject, as it represents an important change from the traditional way of teaching Construction of Structures I. In order to incorporate new teaching tools in 2013-2014, the course was implemented using the Moodle software platform to promote blended learning with online exercises. Our Moodle community allows collaborative work within an open-source platform where teachers and students share a new, personalized learning environment. Students quickly become accustomed to the interface and the platform, value the constant connection with teachers and fellow students, and fully embrace the possibility of asking questions or sharing documents 24 hours a day. The proposed methodology consists of lectures and practical classes. In the lectures, the basics of each topic are discussed; class attendance, daily study, and completion of the scheduled exercises are indispensable. Practical classes allow students to consolidate the knowledge gained in the theory classes by solving professional exercises and actual construction problems related to structures, which must be submitted online. After correction by the teacher and subsequent feedback to the students, the practical exercises thus support the students' ongoing learning, and students can download any kind of material at any time (constructive details, practical exercises, and even corrected exams). Regarding the general evaluation system, goal achievement is assessed on an ongoing basis (65% of the final mark) throughout the course through written and graphic evidence, in person and online, as well as through the individual development of a workbook. In all cases, the acquisition of skills, the ability to synthesize, and the capacity for logical and critical thinking are assessed. The remaining 35% of the mark is evaluated through a complementary graphic exam. Participation in the computing platform is essential, and students are required to complete and present at least 90% of the proposed practical exercises; those who do not meet the deadlines for the practical exercises cannot be assessed continuously and may only take the final exam. In conclusion, the subject Construction of Structures I is essential in the development of the regulated profession of Technical Architect, whose practitioners are considered, among other professional profiles, specialists in the construction of building structures. The use of a new communication platform and online teaching allows the acquisition of knowledge and constructive approaches in a continuous way, with more direct and personal monitoring by the teacher, which has been highly appreciated by almost 100% of the students. Ultimately, Moodle has proved a very valuable tool in this subject and was very well received by students in one of the densest and most important subjects of the Degree in Technical Architecture.
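As a quick illustration of the assessment scheme described in this abstract (65% continuous assessment, 35% graphic exam, and a 90% practice-completion threshold for continuous-assessment eligibility), here is a minimal sketch; the function name and sample inputs are hypothetical.

```python
# Illustrative computation of the described marking scheme: 65% continuous
# assessment, 35% final graphic exam, with a 90% practice-completion
# threshold for eligibility. All names and inputs are hypothetical.

def final_mark(continuous, exam, practices_done, practices_total):
    """Return the final mark (0-10 scale) under the described weighting."""
    if practices_done / practices_total < 0.90:
        raise ValueError("Below 90% of practices: only the final-exam route applies")
    return 0.65 * continuous + 0.35 * exam

print(final_mark(continuous=8.0, exam=7.0, practices_done=19, practices_total=20))
# -> 7.65
```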
Abstract:
Nowadays, data mining is based on low-level specifications of the employed techniques, typically bound to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to treat it as a true software-engineering process. Here, we propose a model-driven approach based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on the analysis problem via conceptual data-mining models instead of on low-level programming tasks related to the underlying platform's technical details. These tasks are now entrusted to the model-transformation scaffolding.
Abstract:
A parallel algorithm for image noise removal is proposed. The algorithm is based on the peer-group concept and uses a fuzzy metric. An optimization study of the use of the CUDA platform to remove impulsive noise with this algorithm is presented. Moreover, an implementation of the algorithm on multi-core platforms using OpenMP is presented. Performance is evaluated in terms of execution time, and the multi-core, GPU, and combined parallel implementations are compared. A performance analysis with large images is conducted in order to identify the number of pixels to allocate to the CPU and the GPU. The observed execution times show that both devices should share the workload, with most of it assigned to the GPU. The results show that parallel implementations of denoising filters on GPUs and multi-core CPUs are highly advisable, and they open the door to using such algorithms for real-time processing.
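For readers unfamiliar with the detection step, the following minimal sequential sketch illustrates the peer-group criterion, assuming the fuzzy metric commonly used in this line of work, M(x, y) = ∏_i (min(x_i, y_i) + K) / (max(x_i, y_i) + K). The parameter values and function names are illustrative; the paper's actual contribution, the CUDA/OpenMP parallelisation, is not reproduced here.

```python
# Minimal sequential sketch of peer-group impulsive-noise detection with a
# fuzzy metric (an assumption: this is the metric common in the literature,
# not necessarily the paper's exact formulation). K, d, m are illustrative.

def fuzzy_metric(x, y, K=1024.0):
    """Fuzzy similarity between two RGB pixels; 1.0 means identical."""
    m = 1.0
    for xi, yi in zip(x, y):
        m *= (min(xi, yi) + K) / (max(xi, yi) + K)
    return m

def is_impulsive(center, neighbours, d=0.97, m=3):
    """Flag a pixel as noisy if fewer than m neighbours are 'peers',
    i.e. at least d-similar to it under the fuzzy metric."""
    peers = sum(1 for n in neighbours if fuzzy_metric(center, n) >= d)
    return peers < m

# 3x3 window: a bright outlier surrounded by dark pixels is flagged.
centre = (250, 250, 250)
window = [(10, 12, 11)] * 8
print(is_impulsive(centre, window))  # True
```

Because each pixel's test depends only on its own neighbourhood, the per-pixel loop maps directly onto OpenMP threads on the CPU and CUDA thread blocks on the GPU, which is what makes the CPU/GPU workload split discussed above possible.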
Abstract:
This paper describes a stage in the COMENEGO project, which is creating comparable corpora of business texts in order to distribute them among translation practitioners so that they can use this resource when translating economic, business, or financial texts. This stage consists of a discursive analysis of a pilot specialised corpus initially compiled in French and Spanish. Its textual resources are classified into different categories, which need to be confirmed so that they can be useful when they are included in the virtual platform that will allow users to exploit the corpus and filter their searches according to their specific needs. The aim of this paper is to propose a discursive analysis approach based on the concept of 'metadiscourse' (Hyland, 2005).
Abstract:
The Libro blanco, título de grado en traducción e interpretación (white paper on the undergraduate degree in Translation and Interpreting), published in 2004 by ANECA, reveals that "specialisation in literary translation is a minority option". So what can be done to train adequately and effectively those students who wish to devote themselves to this very particular field? To answer this question, we first study the translation teaching-learning process in the era of information and communication technologies. Next, we describe the working environment we use to adapt both to the European Higher Education Area (EHEA) and to the needs of the market. Finally, we present a series of actions carried out within the framework of a master's degree in literary translation.
Abstract:
Scientific and technical knowledge is represented and transferred through words that have a specialised, precise, and concise meaning. Access to specialised knowledge enables the appropriate use of terminology. Working on language together with scientific knowledge from the very beginning is crucial. In the project «Jugant a definir la ciència» ("Playing at defining science", I and II), we start from the assumption that the foundations of specialised knowledge begin to be acquired in the first years of a person's life. Our aim is to present resources for working collaboratively at school on basic science words such as water, space, star, brain, ice, death, sun, heat, speed, air, life, etc. This year, the project has moved online. In this article we also present the digital platform «El club lèxic» ("The Lexicon Club"), which fosters collaborative work.
Abstract:
Aware of the pedagogical, participatory, and scientific-collaboration possibilities of Web 2.0, the authors of this article set out to identify the needs and expectations surrounding the creation of a virtual platform conceived as an interactive space or social network for the History of Education and historical-educational heritage. Through an evaluation by means of a questionnaire answered by a group of expert teachers and researchers in the History of Education, a series of conclusions were obtained concerning certain aspects (knowledge and use, expectations, components, etc.) that will guide the ad hoc construction of this platform. The limited knowledge and use of social networks, both generic and specific, as well as of other kinds of web applications, is notable. It is also noteworthy that the participants' age did not condition the perceived importance of the capabilities of ICT for development in teaching and research.
Abstract:
Camera traps have become a widely used technique for conducting biological inventories, generating a large number of database records of great interest. The main aim of this paper is to describe a new free and open-source software (FOSS) application, developed to facilitate the management of camera-trap data originating from a protected Mediterranean area (SE Spain). In the last decade, some other useful alternatives have been proposed, but ours focuses especially on collaborative work and on the importance of the spatial information underpinning common camera-trap studies. This FOSS application, named "Camera Trap Manager" (CTM), has been designed to expedite the processing of pictures on the .NET platform. CTM has a very intuitive user interface, automatic extraction of some image metadata (date, time, moon phase, location, temperature, and atmospheric pressure, among others), analytical capabilities (Geographical Information Systems, statistics, and charts, among others), and reporting capabilities (ESRI Shapefiles, Microsoft Excel spreadsheets, and PDF reports, among others). Using this application, we have achieved very simple management, fast analysis, and a significant reduction of costs. While we were able to classify an average of 55 pictures per hour manually, CTM has made it possible to process over 1,000 photographs per hour, consequently retrieving a greater amount of data.
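The kind of metadata extraction CTM automates can be illustrated with a short sketch. CTM itself runs on .NET; the analogue below uses Python's Pillow library and the standard EXIF DateTime tag, and the file name and output shown are hypothetical.

```python
# Illustrative analogue of CTM's automatic metadata extraction, using
# Pillow rather than CTM's .NET code. Tag 306 is the standard EXIF
# 'DateTime' field written by most cameras, including camera traps.

from PIL import Image

EXIF_DATETIME = 306  # 'DateTime' tag in the primary image directory

def capture_time(path):
    """Return the capture timestamp stored in the picture, if any."""
    with Image.open(path) as img:
        return img.getexif().get(EXIF_DATETIME)

print(capture_time("IMG_0001.JPG"))  # e.g. '2015:06:21 03:14:07'
```

Applied in a loop over a folder of pictures, this is the step that turns raw camera-trap images into queryable database records.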
Abstract:
PURPOSE: To evaluate, in a pilot study, the visual, refractive, corneal topographic, and aberrometric changes after wavefront-guided LASIK or photorefractive keratectomy (PRK) using a high-resolution aberrometer to calculate the treatment for aberrated eyes. METHODS: Twenty aberrated eyes of 18 patients undergoing wavefront-guided LASIK or PRK using the VISX STAR S4 IR excimer laser and the iDesign aberrometer (Abbott Medical Optics, Inc., Santa Ana, CA) were enrolled in this prospective study. Three groups were differentiated: a keratoconus post-CXL group including 11 keratoconic eyes (10 patients), a post-LASIK group including 5 eyes (5 patients) with previous decentered LASIK treatments, and a post-RK group including 4 eyes (3 patients) with previous radial keratotomy. Visual, refractive, contrast sensitivity, corneal topographic, and ocular aberrometric changes were evaluated during a 6-month follow-up. RESULTS: An improvement in uncorrected distance visual acuity (UDVA) and corrected distance visual acuity (CDVA), associated with a reduction in the spherical equivalent, was observed in all three groups, but it was statistically significant only in the keratoconus post-CXL and post-LASIK groups (P ≤ .04). All eyes gained one or more lines of CDVA after surgery. Improvements in contrast sensitivity were observed in all three groups, but they were statistically significant only in the keratoconus post-CXL and post-LASIK groups (P ≤ .04). Regarding aberrations, a reduction in trefoil aberrations was observed in the keratoconus post-CXL group (P = .05), and significant reductions in higher-order and primary coma aberrations were observed in the post-LASIK group (P = .04). CONCLUSIONS: Wavefront-guided laser enhancements using the evaluated platform seem to be safe and effective for restoring visual function in aberrated eyes.