1000 results for Robótica e Informática Industrial


Relevance: 80.00%

Abstract:

Automated human behaviour analysis has been, and still remains, a challenging problem. It has been approached from different points of view: from primitive actions to human interaction recognition. This paper focuses on trajectory analysis, which allows a simple high-level understanding of complex human behaviour. We propose a novel representation of trajectory data, called the Activity Description Vector (ADV), based on the number of times a person occupies a specific point of the scenario and on the local movements performed there. The ADV is calculated for each cell of the spatially sampled scenario, providing a cue for different clustering methods. The ADV representation has been tested as the input of several classic classifiers and compared with other approaches on CAVIAR dataset sequences, obtaining high accuracy in recognizing the behaviour of people in a shopping centre.
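
To make the idea concrete, below is a minimal sketch of how an ADV-style descriptor could be computed from a 2D trajectory. The grid size, the five-component cell vector (occupancy plus four movement directions) and the function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def activity_description_vector(trajectory, grid_shape=(16, 16), scene_size=(384, 288)):
    """Illustrative ADV-style descriptor: for each grid cell, count how often
    a person occupies it and histogram the local movement directions there.
    The cell count and 4-direction binning are assumptions, not the paper's values."""
    cw = scene_size[0] / grid_shape[0]
    ch = scene_size[1] / grid_shape[1]
    # per cell: [occupancy, moves up, moves down, moves left, moves right]
    adv = np.zeros(grid_shape + (5,))
    for (x0, y0), (x1, y1) in zip(trajectory[:-1], trajectory[1:]):
        i = min(int(x0 // cw), grid_shape[0] - 1)
        j = min(int(y0 // ch), grid_shape[1] - 1)
        adv[i, j, 0] += 1                      # occurrence count in this cell
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) >= abs(dy):                 # dominant horizontal movement
            adv[i, j, 3 if dx < 0 else 4] += 1
        else:                                  # dominant vertical movement
            adv[i, j, 1 if dy < 0 else 2] += 1
    return adv.ravel()                         # flat vector for a classifier
```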

Relevance: 80.00%

Abstract:

The aim of this work is to improve students' learning by designing a teaching model that seeks to increase student motivation to acquire new knowledge. To design the model, the methodology is based on a study of students' opinions on several aspects that we believe significantly affect the quality of teaching (such as overcrowded classrooms, the time allotted to the subject, or the type of classroom where classes are taught), and on our experience with several experimental activities in the classroom (for instance, peer reviews and oral presentations). Besides the feedback from the students, it is essential to rely on the experience and reflections of lecturers who have taught the subject for several years. In this way we could detect several key aspects that, in our opinion, must be considered when designing a teaching proposal: motivation, assessment, progressiveness and autonomy. As a result we have obtained a teaching model based on instructional design as well as on the principles of fractal geometry, in the sense that different levels of abstraction are presented for the various training activities and the activities are self-similar, that is, they are decomposed again and again. At each level, an activity decomposes into lower-level tasks and their corresponding evaluation. This model encourages immediate feedback and student motivation. We are convinced that greater motivation will lead to an increase in students' working time and performance. Although the study was carried out on a single subject, the results are fully generalizable to other subjects.

Relevance: 80.00%

Abstract:

Human behaviour recognition has been, and still remains, a challenging problem that involves different areas of computational intelligence. The automated understanding of people's activities from video sequences is an open research topic in which the computer vision and pattern recognition areas have made big efforts. In this paper, the problem is studied from a prediction point of view. We propose a novel method able to detect behaviour early, using only a small portion of the input, in addition to its ability to predict behaviour from new inputs. Specifically, we propose a predictive method based on a simple representation of the trajectories of a person in the scene, which allows a high-level understanding of global human behaviour. The representation of the trajectory is used as a descriptor of the activity of the individual, and the descriptors are used as the cue of a classification stage for pattern recognition purposes. Classifiers are trained using the trajectory representation of the complete sequence; partial sequences are then processed to evaluate the early prediction capabilities for a given observation time of the scene. The experiments have been carried out using the three different datasets of the CAVIAR database, taking into account the behaviour of an individual. Additionally, different classic classifiers have been used in the experimentation in order to evaluate the robustness of the proposal. Results confirm the high accuracy of the proposal in the early recognition of people's behaviours.
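
A minimal sketch of this evaluation protocol follows: classifiers are fitted on descriptors of complete trajectories and then queried with growing prefixes of the test trajectories. The k-NN classifier, the `descriptor` callable and the fraction grid are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def evaluate_early_recognition(train_trajs, train_labels, test_trajs, test_labels,
                               descriptor, fractions=(0.2, 0.4, 0.6, 0.8, 1.0)):
    """Train on descriptors of complete sequences, then test on growing prefixes
    to measure accuracy as a function of observation time."""
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit([descriptor(t) for t in train_trajs], train_labels)
    accuracy = {}
    for f in fractions:
        cut = [t[:max(2, int(len(t) * f))] for t in test_trajs]  # observed portion
        preds = clf.predict([descriptor(t) for t in cut])
        accuracy[f] = float(np.mean(preds == np.asarray(test_labels)))
    return accuracy
```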

Relevance: 80.00%

Abstract:

Since the beginning of 3D computer vision, it has been necessary to use techniques that reduce the data to a tractable size while preserving the important aspects of the scene. Currently, with the new low-cost RGB-D sensors, which provide a stream of color and 3D data at approximately 30 frames per second, this is even more relevant. Many applications make use of these sensors and need preprocessing to downsample the data in order to either reduce the processing time or improve the data (e.g., reducing noise or enhancing the important features). In this paper, we present a comparison of downsampling techniques based on different principles. Concretely, five downsampling methods are included: a bilinear-based method, a normal-based method, a color-based method, a combination of the normal- and color-based samplings, and a growing neural gas (GNG)-based approach. For the comparison, two different models acquired with the Blensor software have been used. Moreover, to evaluate the effect of the downsampling in a real application, a 3D non-rigid registration is performed with the sampled data. From the experimentation we can conclude that, depending on the purpose of the application, some kernels of the sampling methods can drastically improve the results. The bilinear- and GNG-based methods provide homogeneous point clouds, whereas the color- and normal-based methods provide datasets with a higher density of points in areas with specific features. In the non-rigid application, if a color-based sampled point cloud is used, it is possible to properly register two datasets in cases where intensity data are relevant in the model, outperforming the results obtained when only a homogeneous sampling is used.
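
As a rough illustration of the family of methods compared, the sketch below implements a generic feature-weighted downsampling in which points with larger weights (e.g., a normal-variation or colour-gradient score) are retained more often; uniform weights recover a homogeneous sampling. This is an assumed simplification, not the paper's exact kernels.

```python
import numpy as np

def weighted_downsample(points, weights, n_samples, seed=None):
    """Feature-weighted downsampling of a point cloud (NumPy arrays):
    points with larger weights are kept with higher probability."""
    rng = np.random.default_rng(seed)
    p = weights / weights.sum()
    idx = rng.choice(len(points), size=n_samples, replace=False, p=p)
    return points[idx]

# Homogeneous sampling falls out as the uniform-weight special case:
# weighted_downsample(cloud, np.ones(len(cloud)), 1000)
```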

Relevance: 80.00%

Abstract:

In many classification problems, it is necessary to consider the specific location in an n-dimensional space from which features have been calculated. For example, considering the location of features extracted from specific areas of a two-dimensional space, such as an image, could improve the understanding of a scene for a video surveillance system. In the same way, the same features extracted from different locations could mean different actions for a 3D HCI system. In this paper, we present a self-organizing feature map able to preserve the topology of the locations of the n-dimensional space from which the feature vectors have been extracted. The main contribution is the implicit preservation of the topology of the original space, since considering the locations of the extracted features and their topology can ease the solution of certain problems. Specifically, the paper proposes the n-dimensional constrained self-organizing map preserving the input topology (nD-SOM-PINT). Features from adjacent areas of the n-dimensional space used to extract the feature vectors lie explicitly in adjacent areas of the nD-SOM-PINT, constraining the structure and learning of the neural network. As a case study, the neural network has been instantiated to represent and classify trajectory features extracted from a sequence of images at a high level of semantic understanding. Experiments have been thoroughly carried out using the CAVIAR datasets (Corridor, Frontal and Inria), taking into account the global behaviour of an individual, in order to validate that preserving the topology of the two-dimensional space yields high-performance trajectory classification, in contrast with approaches that do not consider the location of the features. Moreover, a brief example has been included to validate the nD-SOM-PINT proposal in a domain other than individual trajectories. Results confirm the high accuracy of the nD-SOM-PINT, which outperforms previous methods aimed at classifying the same datasets.
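
One way to read this constraint is that each neuron is pinned to a fixed cell of the input space, so that map adjacency mirrors scene adjacency while only the feature part of each prototype is learned. The sketch below follows that reading; the grid size, learning rate and neighbourhood kernel are illustrative assumptions, not the published nD-SOM-PINT.

```python
import numpy as np

class ConstrainedSOM:
    """Sketch of a location-constrained SOM: neurons sit at frozen positions
    in the 2-D input space, and only their feature prototypes are adapted.
    This interpretation of the nD-SOM-PINT constraint is an assumption."""
    def __init__(self, grid=(10, 10), dim=8, lr=0.5, sigma=2.0):
        gx, gy = np.meshgrid(np.linspace(0, 1, grid[0]), np.linspace(0, 1, grid[1]))
        self.loc = np.stack([gx.ravel(), gy.ravel()], axis=1)   # frozen positions
        self.w = np.random.rand(self.loc.shape[0], dim)          # learned features
        self.lr, self.sigma = lr, sigma

    def train_step(self, xy, feat):
        bmu = np.argmin(np.linalg.norm(self.loc - xy, axis=1))   # winner by location
        d = np.linalg.norm(self.loc - self.loc[bmu], axis=1)
        h = np.exp(-(d / self.sigma) ** 2)                       # neighbourhood kernel
        self.w += self.lr * h[:, None] * (feat - self.w)         # adapt features only
```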

Relevance: 80.00%

Abstract:

In this work, a modified version of the elastic bunch graph matching (EBGM) algorithm for face recognition is introduced. First, faces are detected using a fuzzy skin detector based on the RGB color space. Then, the fiducial points for the facial graph are extracted automatically by adjusting a grid of points to the result of an edge detector. After that, the positions of the nodes, their relations with their neighbors and their Gabor jets are calculated in order to obtain the feature vector defining each face. A self-organizing map (SOM) framework is then presented. Thus, the calculation of the winning neuron and the recognition process are performed using a similarity function that takes into account both the geometric and the texture information of the facial graph. The set of experiments carried out for our SOM-EBGM method shows the accuracy of our proposal when compared with other state-of-the-art methods.
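
A minimal sketch of such a combined similarity function is shown below: a normalized-correlation term over the Gabor jets is mixed with a geometric term penalizing node displacement. The weighting parameter `alpha` and the exact terms are assumptions for illustration, not the authors' function.

```python
import numpy as np

def graph_similarity(graph_a, graph_b, alpha=0.5):
    """Illustrative similarity mixing Gabor-jet texture and node geometry.
    graph_* = (jets: n x d array, nodes: n x 2 array); alpha is an assumed weight."""
    jets_a, nodes_a = graph_a
    jets_b, nodes_b = graph_b
    # normalized-correlation similarity between corresponding jets
    num = np.sum(jets_a * jets_b, axis=1)
    den = np.linalg.norm(jets_a, axis=1) * np.linalg.norm(jets_b, axis=1)
    s_tex = np.mean(num / np.maximum(den, 1e-12))
    # geometric term: penalize displacement of corresponding fiducial points
    s_geo = -np.mean(np.linalg.norm(nodes_a - nodes_b, axis=1))
    return alpha * s_tex + (1.0 - alpha) * s_geo
```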

Relevance: 80.00%

Abstract:

In the 2010-2011 academic year, the implementation of the degree in Multimedia Engineering began. This degree is close to Computer Engineering, but focused on training professionals capable of managing multimedia projects both in the field of leisure and in that of content management in information networks. The implementation has been progressive, with a new year of the degree starting each academic year, which is why this year, 2014-2015, is the first in which the degree is fully implemented from the start of the course. This led us to carry out a study of how the subjects in the different years are interconnected. The aim of this study has been to identify the problems or knowledge gaps that students have in the 2nd year, on the one hand, and those they may encounter in the 3rd year, on the other, as well as to establish possible ways of solving these problems, with the goal of improving students' learning performance. We have also monitored the assessment of students in the 2nd-year subjects to check its fit with the continuous assessment system promoted by the Bologna Process.

Relevance: 80.00%

Abstract:

This thesis investigates the interaction of acoustic waves and fiber Bragg gratings (FBGs) in standard and suspended-core fibers (SCFs), to evaluate the influence of the fiber, grating and modulator design on the increase of the modulation efficiency, bandwidth and frequency. Initially, the frequency response and the resonant acoustic modes of a low-frequency acousto-optic modulator (f < 1.2 MHz) are numerically investigated using the finite element method. Later, the interaction of longitudinal acoustic waves and FBGs in SCFs is also numerically investigated. The fiber geometric parameters are varied, and the strain and grating properties are simulated by means of the finite element method and the transfer matrix method. The study indicates that the air holes composing the SCF cause a significant reduction of the amount of silica in the fiber cross section, increasing the acousto-optic interaction in the core. Experimental modulation of the reflectivity of FBGs inscribed in two distinct SCFs shows evidence of this increased interaction. Besides, a method to acoustically induce a dynamic phase shift in a chirped FBG, employing an optimized modulator design, is demonstrated. Afterwards, a combination of this modulator and an FBG inscribed in a three-air-hole SCF is applied to mode-lock an ytterbium-doped fiber laser. To improve the modulator design for future applications, two other distinct devices are investigated to increase the acousto-optic interaction, bandwidth and frequency (f > 10 MHz). A high reflectivity modulation has been achieved for a modulator based on a tapered fiber. Moreover, an increased modulated bandwidth (320 pm) has been obtained for a modulator based on the interaction of a radial long-period grating (RLPG) and an FBG inscribed in a standard fiber. In summary, the results show a considerable reduction of the grating/fiber length and of the modulator size, indicating possibilities for compact and faster acousto-optic fiber devices. Additionally, the increased interaction efficiency, modulated bandwidth and frequency can be useful for shortening the pulse width of future all-fiber mode-locked lasers, as well as for other photonic devices that require the control of light in optical fibers by electrically tunable acoustic waves.
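
For background, the standard relations behind this kind of modulation (textbook FBG physics, not results of this thesis) are the Bragg condition and the strain-induced wavelength shift:

```latex
\lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda, \qquad
\frac{\Delta\lambda_B}{\lambda_B} = (1 - p_e)\,\varepsilon(t),
```

where n_eff is the effective index, Λ the grating period, p_e ≈ 0.22 the effective photoelastic coefficient of silica, and ε(t) the strain imposed by the acoustic wave.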

Relevance: 80.00%

Abstract:

An ideal biomaterial for dental implants must have very high biocompatibility, which means that such materials should not provoke any serious adverse tissue response. The metal alloys used must also have high fatigue resistance, owing to the masticatory forces, and good corrosion resistance. These properties are obtained by using alpha and beta stabilizers, such as Al, V, Ni, Fe, Cr, Cu and Zn. Commercially pure titanium (TiCP) is often used for manufacturing dental and orthopedic implants. However, other alloys are sometimes employed, and consequently it is essential to investigate the chemical elements present in those alloys that could be harmful to health. The present work investigated TiCP metal alloys used for dental implant manufacturing and evaluated whether the stabilizing elements present are within the existing limits and standards for such materials. The EDXRF technique was used for alloy characterization and identification of stabilizing elements. This method allows qualitative and quantitative analysis of the materials from the spectra of the characteristic X-rays emitted by the elements present in the metal samples. The experimental setup was based on two X-ray tubes (AMPTEK Mini-X, with Ag and Au targets), an X-123SDD detector (AMPTEK) and a 0.5 mm Cu collimator developed for the sample characteristics. A complementary setup consisted of an X-ray tube with a Mo target, a 0.65 mm collimator and an XFlash (SDD) detector - ARTAX 200 (BRUKER). Elemental characterization by energy-dispersive spectroscopy (EDS) was also applied, based on a scanning electron microscope (SEM) EVO® (Zeiss); this method was also used to evaluate the surface microstructure of the samples. The percentage of Ti obtained in the elemental characterization ranged between 93.35 ± 0.17% and 95.34 ± 0.19%. These values are below the reference range of 98.635% to 99.5% for TiCP established by ASM International (ASM). The presence of Al and V in all samples further supports the conclusion that these are not TiCP implants. The values for Al vary between 3.7 ± 2.0% and 6.3 ± 1.3%, and for V between 0.112 ± 0.048% and 0.26 ± 0.09%. According to the American Society for Testing and Materials (ASTM), these elements should not be present in TiCP, and according to the National Institute of Standards and Technology (NIST), the presence of Al should be <0.01% and of V should be 0.009 ± 0.001%. The results showed that the implant materials are not TiCP but were manufactured from a Ti-Al-V alloy containing Fe, Ni, Cu and Zn. The quantitative analysis and elemental characterization show that the best accuracy and precision were reached with the Au-target X-ray tube and the 0.5 mm collimator. The EDS technique confirmed the EDXRF results for the Ti-Al-V alloy. Evaluating the surface microstructure of the implants by SEM, it was possible to infer that ten of the thirteen studied samples have a contemporary rough surface and three have a machined surface.

Relevance: 80.00%

Abstract:

Communication in vehicular ad hoc networks (VANETs) is commonly divided into two scenarios, namely vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I). Aiming at establishing communication that is secure against eavesdroppers, recent works have proposed the exchange of secret keys based on the variation in received signal strength (RSS). However, the performance of such a scheme depends on the channel variation rate, making it more appropriate for scenarios where the channel varies rapidly, as is usually the case in V2V communication. In V2I communication, the channel commonly undergoes slow fading. In this work we propose the use of multiple antennas to artificially generate a fast-fading channel, so that the extraction of secret keys from the RSS becomes feasible in a V2I scenario. Numerical analysis shows that the proposed model can outperform, in terms of secret bit extraction rate, a frequency-hopping-based method proposed in the literature.
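
For illustration, a common level-crossing quantizer used in RSS-based key extraction schemes is sketched below; both legitimate nodes would run it on their own (reciprocal) RSS measurements, followed in practice by information reconciliation and privacy amplification. The guard-band factor is an assumed parameter, and this is generic background rather than the paper's exact algorithm.

```python
import numpy as np

def rss_to_bits(rss, guard=0.5):
    """Level-crossing quantizer: samples well above/below the mean become
    1/0 bits; samples inside the guard band are dropped to reduce bit
    disagreement between the two legitimate nodes."""
    rss = np.asarray(rss, dtype=float)
    mu, sd = rss.mean(), rss.std()
    upper, lower = mu + guard * sd, mu - guard * sd
    bits = [1 if v > upper else 0 for v in rss if v > upper or v < lower]
    return bits
```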

Relevance: 80.00%

Abstract:

Portland cement, a very common construction material, contains natural gypsum in its composition. To decrease manufacturing costs, the cement industry is substituting the gypsum in its composition with small quantities of phosphogypsum, the residue generated by fertilizer production, which consists essentially of calcium sulfate dihydrate and some impurities, such as fluoride, metals in general, and radionuclides. Currently, tons of phosphogypsum are stored in the open air near the fertilizer industries, contaminating the environment. The 226Ra present in these materials produces 222Rn gas when it undergoes radioactive decay. This radioactive gas, when inhaled together with its decay products, which deposit in the lungs, produces exposure to radiation and is a potential cause of lung cancer. Thus, the objective of this study was to measure the concentration levels of 222Rn from cylindrical samples of Portland cement, gypsum and phosphogypsum mortar from the state of Paraná, as well as to characterize the materials and to estimate the radon concentration in a hypothetical dwelling with walls covered by such materials. The experimental setup for the 222Rn activity measurements was based on an AlphaGUARD detector (Saphymo GmbH). The qualitative and quantitative analyses were performed by gamma spectrometry and by EDXRF with Au- and Ag-target tubes (AMPTEK) and a Mo-target tube (ARTAX); mechanical testing was performed with X-ray equipment (Gilardoni) and a mechanical press (EMIC). The average values of radon activity from the studied materials in the air of the containers were 854 ± 23 Bq/m3, 60.0 ± 7.2 Bq/m3 and 52.9 ± 5.4 Bq/m3 for Portland cement, gypsum and phosphogypsum mortar, respectively. These results, extrapolated to the volume of a hypothetical dwelling of 36 m3 with walls covered by such materials, were 3366 ± 91 Bq/m3, 237 ± 28 Bq/m3 and 208 ± 21 Bq/m3 for Portland cement, gypsum and phosphogypsum mortar, respectively. Considering the limit of 300 Bq/m3 established by the ICRP, it can be concluded that the use of Portland cement plaster in dwellings is not safe and requires a specific mitigation procedure. From the gamma spectrometry results, the radium equivalent activity concentrations (Raeq) were calculated for Portland cement, gypsum and phosphogypsum mortar, yielding 78.2 ± 0.9 Bq/kg, 58.2 ± 0.9 Bq/kg and 68.2 ± 0.9 Bq/kg, respectively. All values of radium equivalent activity concentration for the studied samples are below the maximum level of 370 Bq/kg. The qualitative and quantitative analysis of the EDXRF spectra obtained from the studied mortar samples allowed the identification and quantification of the elements that constitute the material, such as Ca, S and Fe, among others.
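
For reference, the radium equivalent activity quoted above is conventionally defined from the measured activity concentrations of 226Ra, 232Th and 40K (a standard formula from the radiological protection literature, not derived in this work):

```latex
\mathrm{Ra}_{eq} = C_{\mathrm{Ra}} + 1.43\, C_{\mathrm{Th}} + 0.077\, C_{\mathrm{K}} \quad [\mathrm{Bq/kg}]
```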

Relevance: 80.00%

Abstract:

The knowledge-intensive character of software production and its rising demand suggest the need to establish mechanisms to properly manage the knowledge involved, in order to meet the requirements of deadlines, costs and quality. Knowledge capitalization is a process that ranges from the identification to the evaluation of the knowledge produced and used. For software development specifically, capitalization enables easier access to knowledge, minimizes its loss, reduces the learning curve, and avoids repeated errors and rework. Thus, this thesis presents Know-Cap, a method developed to organize and guide the capitalization of knowledge in software development. Know-Cap facilitates the location, preservation, value addition and updating of knowledge, so that it can be used in the execution of new tasks. The method was proposed on the basis of a set of methodological procedures: a literature review, a systematic review and an analysis of related work. The feasibility and appropriateness of Know-Cap were analyzed through an application study, conducted on a real case, and an analytical study of software development companies. The results obtained indicate that Know-Cap supports the capitalization of knowledge in software development.

Relevance: 80.00%

Abstract:

Humans have a high ability to extract information from visual data acquired by sight. Through a learning process, which starts at birth and continues throughout life, image interpretation becomes almost instinctive. At a glance, one can easily describe a scene with reasonable precision, naming its main components. Usually, this is done by extracting low-level features such as edges, shapes and textures, and associating them with high-level meanings. In this way, a semantic description of the scene is made. An example of this is the human capacity to recognize and describe other people's physical and behavioral characteristics, or biometrics. Soft biometrics also represent inherent characteristics of the human body and behaviour, but do not allow unique identification of a person. The computer vision field aims to develop methods capable of performing visual interpretation with performance similar to humans. This thesis proposes computer vision methods that allow high-level information extraction from images in the form of soft biometrics. The problem is approached in two ways: with unsupervised and with supervised learning methods. The first seeks to group images by automatically learning feature extraction, using convolution techniques, evolutionary computation and clustering; the images employed in this approach contain faces and people. The second approach employs convolutional neural networks, which have the ability to operate on raw images, learning both the feature extraction and the classification processes. Here, images are classified according to gender and clothing, the latter divided into the upper and lower parts of the human body. The first approach, when tested on different image datasets, obtained an accuracy of approximately 80% for faces versus non-faces and 70% for persons versus non-persons. The second, tested on images and videos, obtained an accuracy of about 70% for gender, 80% for upper-body clothing and 90% for lower-body clothing. The results of these case studies show that the proposed methods are promising, enabling automatic high-level annotation of images. This opens possibilities for the development of applications in diverse areas, such as content-based image and video retrieval and automatic video surveillance, reducing the human effort required for manual annotation and monitoring.
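
As an illustration of the second approach, a minimal CNN that maps raw images to a soft-biometric label is sketched below in PyTorch; the layer sizes and the two-class head are assumptions for illustration, not the thesis architecture.

```python
import torch
import torch.nn as nn

class SoftBiometricCNN(nn.Module):
    """Minimal CNN of the kind described: raw image in, soft-biometric
    label out (e.g., gender). Layer sizes are illustrative assumptions."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),          # fixed-size map for any input size
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):                     # x: (batch, 3, H, W) raw image tensor
        return self.classifier(self.features(x).flatten(1))
```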

Relevance: 80.00%

Abstract:

In this research work, a new routing protocol for opportunistic networks is presented. The proposed protocol is called PSONET (PSO for Opportunistic Networks), since it uses a hybrid system based on a Particle Swarm Optimization (PSO) algorithm. The main motivation for using PSO is to take advantage of its individual-based search and learning adaptation. PSONET uses the PSO technique to drive the network traffic through a good subset of message forwarders. PSONET analyzes the network communication conditions, detecting whether each node has sparse or dense connections, and thus makes better message-routing decisions. The PSONET protocol is compared with the Epidemic and PROPHET protocols in three different mobility scenarios: an activity-based mobility model, which simulates people's everyday life in their work, leisure and rest activities; a community-based mobility model, which simulates groups of people in their communities who eventually contact other people, who may or may not belong to their community, to exchange information; and a random mobility pattern, which simulates a scenario divided into communities where people choose a destination at random and, based on the restriction map, move to this destination along the shortest path. The simulation results, obtained with The ONE simulator, show that in the community-based and random mobility scenarios the PSONET protocol achieves a higher message delivery rate and lower message replication than the Epidemic and PROPHET protocols.
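
To make the underlying mechanism concrete, a generic PSO core of the kind PSONET builds on is sketched below; the cost function (e.g., scoring a candidate set of forwarders) and all coefficients are illustrative assumptions, not the protocol's actual fitness model.

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic PSO: particles track their personal best and the swarm best,
    and velocities blend inertia with attraction to both."""
    rng = np.random.default_rng(seed)
    x = rng.random((n_particles, dim))          # particle positions
    v = np.zeros_like(x)                        # particle velocities
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_val)]             # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)]
    return g, pbest_val.min()
```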

Relevance: 80.00%

Abstract:

This work presents an application of optical fiber sensors based on Bragg gratings integrated into a transtibial prosthesis tube manufactured from a polymeric composite system of epoxy resin reinforced with glass fiber. The main objective of this study is to characterize the sensors during the gait cycle and during changes in the center of gravity of a transtibial amputee, through the analysis of the deformation and strength of the transtibial prosthesis tube. For this investigation, a tube of the composite material described above was produced by resin transfer molding (RTM), with four optical sensors embedded. The prosthesis whose original tube was replaced is classified as endoskeletal, with a vacuum fitting, an aluminium connector tube and carbon fiber foot cushioning. The volunteer for the tests was a 41-year-old left-handed man, 1.65 m tall and weighing 72 kg. His amputation was due to trauma (the surgical section is at the medial level, below the knee of the left lower limb). He has been a transtibial prosthesis user for two years and eight months. The optical sensors were characterized, and the mechanical deformation and resistance of the tube analyzed, during the gait cycle and the variation of the body's center of gravity in the following tests: standing up, supporting the body on the leg without the prosthesis, supporting it on the leg with the prosthesis, walking forward and walking backward. Besides the characterization of the optical sensors during the gait cycle and the variation of the center of gravity of a transtibial amputee, the results also showed a high degree of integration of the sensors into the composite and a high mechanical strength of the material.
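
For reference, the measured Bragg wavelength shift maps to axial strain in the tube through the standard relation (generic FBG sensing background, not specific to this work):

```latex
\varepsilon = \frac{\Delta\lambda_B}{(1 - p_e)\,\lambda_B},
```

which, with an effective photoelastic coefficient p_e ≈ 0.22 for silica, corresponds to roughly 1.2 pm of wavelength shift per microstrain for gratings around 1550 nm.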