58 results for Jogos virtuais - Classificação indicativa


Relevância: 20.00%

Resumo:

This work has two objectives: to evaluate the usability of three distance-education virtual environment interfaces using two evaluation techniques, and to identify the factors that influence the perceived usability of the evaluated environments. The distance-education systems chosen were AulaNet, E-Proinfo, and Teleduc, because they were developed in Brazil and are freely distributed. Usability was assessed with two techniques documented in the literature. The first, a predictive (diagnostic) evaluation, was carried out by the author and a graduating student of the Information Systems program at the Federal Center of Technological Education of the state of Piauí (CEFET-PI), guided by a checklist known as Ergolist. The second, a prospective evaluation, had the users themselves assess the interfaces by answering a questionnaire. The sample comprised 15 teachers and 15 students from CEFET-PI. The collected data were analyzed with descriptive statistics and chi-square tests. The results showed that the environments have adaptability problems: they lack flexibility and do not take the user's experience into account. The inferential analysis showed that time of Internet use did not significantly affect the usability evaluation of the three environments, and most usability variables were likewise unaffected by user type, gender, and education level. On the other hand, for several of the ergonomic criteria evaluated, the system variables (environment type and computer experience) and the demographic variable age group did affect the perceived usability of the distance-education virtual environments
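The chi-square tests mentioned in the abstract can be illustrated with a minimal sketch of Pearson's statistic for a contingency table; the counts below are hypothetical, not data from the study:

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for a contingency table
    (rows: user groups, columns: usability ratings)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: teachers vs. students rating adaptability good/poor.
table = [[10, 5],   # teachers
         [8, 7]]    # students
stat = chi_square_statistic(table)
df = (len(table) - 1) * (len(table[0]) - 1)  # degrees of freedom
```

The statistic is then compared against the chi-square distribution with `df` degrees of freedom to decide whether user group and rating are independent.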

Relevância: 20.00%

Resumo:

The objective of this work is to draw attention to the importance of loss-prevention techniques in small retail organizations by analyzing their use in companies and building a classification model based on it. The work identifies the weaknesses and strengths of the companies and classifies them according to their use of loss-prevention techniques. The methodology is based on a review of the available literature on loss-prevention measures and techniques, analyzing the processes these techniques require in order to reduce losses and covering the "pillars" of loss prevention, the life cycle of retail products, and cycles of continuous improvement in business. Based on the objectives of this work and in light of the techniques surveyed, a case study was defined and developed through a questionnaire and the researcher's observation of a chain of 16 small supermarkets. From these studies a model for classifying the companies was created. The practical implications of this work are useful for pointing out mistakes in retail management that can turn into losses, reducing the companies' profitability or even making them unviable. The academic contribution of this study is the proposal of a novel classification model for small supermarkets based on their use of loss-prevention techniques. As a result of the research, 14 companies were classified as Companies with Minimum Use of Loss Prevention Techniques (CMULPT) and 2 as Companies with Deficient Use of Loss Prevention Techniques (CDULPT). The research concludes that, on average, the group is classified as Companies with Minimum Use of Loss Prevention Techniques (CMULPT), and that the companies should adopt a loss-prevention program focused on identifying and quantifying losses and on establishing a loss-prevention culture

Relevância: 20.00%

Resumo:

This work studies the existing classification systems for quality-management assurance in the hotel industry, focusing on EMBRATUR's Classification Matrix for Lodging Facilities and on ISO 9000, and examining the benefits these systems and management processes can bring to the quality of hotel services. To obtain this information, a comparative analysis of the quality-management systems was carried out through bibliographic research and questionnaires sent to certified and classified hotel enterprises; the main findings of the survey were then organized so as to show clearly the superiority of one system over the other

Relevância: 20.00%

Resumo:

Maps obtained from remotely sensed orbital images submitted to digital processing have become essential to optimize the conservation and monitoring of coral reefs. However, the accuracy achieved when mapping submerged areas is limited by variations in the water column, which degrade the signal received by the orbital sensor and introduce errors into the final classification. The limited capacity of traditional methods based on conventional statistical techniques to resolve inter-class confusion motivated the search for alternative strategies in the field of Computational Intelligence. In this work an ensemble of classifiers was built by combining Support Vector Machines with a Minimum Distance Classifier in order to classify remotely sensed images of a coral reef ecosystem. The system consists of three stages through which the classification is progressively refined: patterns that receive an ambiguous classification at one stage are re-evaluated at the next, and an unambiguous prediction for all the data is reached by reducing or eliminating false positives. The images were classified into five bottom types: deep water, underwater corals, intertidal corals, algal bottom, and sandy bottom. The highest overall accuracy (89%) was obtained with an SVM using a polynomial kernel. The accuracy of the classified image was compared, using an error matrix, with the results of classification methods based on a single classifier (a neural network and the k-means algorithm). The comparison demonstrated the potential of classifier ensembles for classifying images of submerged areas subject to the noise caused by atmospheric effects and the water column
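One stage of such an ensemble, the Minimum Distance (nearest-centroid) Classifier with an ambiguity flag for patterns to be re-evaluated later, can be sketched as follows; the feature vectors, margin rule, and class labels are illustrative assumptions, not the thesis's actual pipeline:

```python
import numpy as np

def fit_centroids(X, y):
    """Class centroids for a minimum-distance (nearest-centroid) classifier."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict_with_ambiguity(X, classes, centroids, margin=0.1):
    """Assign the nearest centroid; flag a pattern as ambiguous when the two
    closest centroids are nearly equidistant (to be re-evaluated later)."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    idx = np.arange(len(X))
    nearest, second = d[idx, order[:, 0]], d[idx, order[:, 1]]
    ambiguous = (second - nearest) < margin * second
    return classes[order[:, 0]], ambiguous

# Toy pixels in feature space: two well-separated bottom types.
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1], [0.5, 0.5]])
y = np.array([0, 0, 1, 1, 1])
classes, centroids = fit_centroids(X, y)
labels, ambiguous = predict_with_ambiguity(X[:4], classes, centroids)
```

In the staged design described above, only the patterns flagged as ambiguous would be passed on to the next classifier (e.g. the SVM).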

Relevância: 20.00%

Resumo:

Skin cancer is the most common of all cancers, and the increase in its incidence is due, in part, to people's behavior regarding sun exposure. In Brazil, non-melanoma skin cancer is the most frequent in most regions. Dermatoscopy and videodermatoscopy are the main examinations for diagnosing dermatological diseases of the skin. The field that applies computational tools to support medical diagnosis of skin lesions is still very recent, and several methods have been proposed for the automatic classification of skin pathologies from images. The present work presents a new intelligent methodology for the analysis and classification of skin cancer images, based on digital image processing techniques for extracting color, shape, and texture features, using the Wavelet Packet Transform (WPT) and a learning technique called the Support Vector Machine (SVM). The Wavelet Packet Transform is applied to extract texture features from the images: it consists of a set of basis functions that represent the image in different frequency bands, each with a distinct resolution corresponding to a scale. In addition, the color features of the lesion are computed; these depend on the visual context and are influenced by the surrounding colors. Shape attributes are obtained through Fourier descriptors. The Support Vector Machine, which is grounded in the structural risk minimization principle of statistical learning theory, is used for the classification task: it constructs optimal hyperplanes that separate the classes, each hyperplane being determined by a subset of the training patterns called support vectors.
On the database used in this work, the results showed good performance, with an overall accuracy of 92.73% for melanoma and 86% for non-melanoma and benign lesions. The extracted descriptors combined with the SVM classifier proved capable of recognizing and classifying the analyzed skin lesions
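The idea of wavelet-based texture features can be illustrated with a one-level 2D Haar decomposition and the energy of each subband; this is a simplified stand-in for the full Wavelet Packet Transform used in the thesis, and the "patches" below are synthetic:

```python
import numpy as np

def haar_2d(img):
    """One-level 2D Haar transform: approximation and three detail subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

def subband_energies(img):
    """Energy of each subband, a simple texture descriptor per frequency band."""
    return [float(np.sum(b ** 2)) for b in haar_2d(img)]

flat = np.ones((4, 4))                   # textureless patch
stripes = np.tile([1.0, 0.0], (4, 2))    # vertically striped patch
e_flat = subband_energies(flat)
e_stripes = subband_energies(stripes)
```

A textureless patch concentrates all its energy in the approximation band, while a striped patch shifts energy into a detail band; feature vectors of such energies are what a classifier like the SVM would consume.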

Relevância: 20.00%

Resumo:

Post-dispatch analysis of signals obtained from digital disturbance recorders provides important information for identifying and classifying disturbances in power systems, supporting more efficient management of the supply. Digital signal processing techniques can help automate this identification and classification. The Wavelet Transform has become a very efficient tool for analyzing voltage or current signals obtained immediately after the occurrence of disturbances in the network. This work presents a methodology based on the Discrete Wavelet Transform to implement this process: it compares the energy distribution curves of signals with and without disturbances at different resolution levels of the decomposition, in order to obtain descriptors that allow classification using artificial neural networks
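The energy-per-level comparison can be sketched with a simple multi-level Haar DWT; the signal and the injected transient below are synthetic, and the thesis may use a different wavelet:

```python
import numpy as np

def haar_dwt_energies(signal, levels=3):
    """Multi-level 1D Haar DWT: energy of the detail coefficients
    at each resolution level."""
    x = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        energies.append(float(np.sum(detail ** 2)))
        x = approx
    return energies

t = np.arange(64)
clean = np.sin(2 * np.pi * t / 16)       # steady-state waveform
disturbed = clean.copy()
disturbed[30:33] += 2.0                  # short transient, as in a fault record
e_clean = haar_dwt_energies(clean)
e_disturbed = haar_dwt_energies(disturbed)
deviation = [abs(a - b) for a, b in zip(e_clean, e_disturbed)]
```

The per-level deviations between the two energy curves form the descriptor vector that would feed the neural network classifier.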

Relevância: 20.00%

Resumo:

In this work, we propose a solution to the scalability problem found in large-scale collaborative virtual and mixed-reality environments that use the hierarchical client-server model. Essentially, we use a hierarchy of servers: when a server's capacity is reached, a new server is created as a child of the first, and the system load is distributed between them (parent and child). We propose efficient tools and techniques for solving problems inherent to the client-server model, such as defining clusters of users, distributing and redistributing users across servers, and the mixing and filtering operations needed to reduce the flow between servers. The new model was tested in simulation, in emulation, and in interactive applications that we implemented. The results of these experiments show improvements over previous models, indicating the applicability of the proposal to all-to-all communication problems. This is the case for interactive games and other Internet applications (including multi-user environments), as well as interactive applications for the Brazilian Digital Television System to be developed by the research group. Keywords: large scale virtual environments, interactive digital tv, distributed
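The parent/child server split described above can be sketched in a few lines; the capacity value and the "move half the users" policy are illustrative assumptions, not the thesis's actual redistribution algorithm:

```python
class Server:
    """A node in the server hierarchy: when capacity is reached, a child
    server is spawned and the load is split between parent and child."""
    def __init__(self, name, capacity=4):
        self.name = name
        self.capacity = capacity
        self.users = []
        self.children = []

    def add_user(self, user):
        if len(self.users) < self.capacity:
            self.users.append(user)
            return self
        # Capacity reached: create a child and move half of the load to it.
        child = Server(f"{self.name}.{len(self.children)}", self.capacity)
        half = len(self.users) // 2
        child.users, self.users = self.users[half:], self.users[:half]
        self.children.append(child)
        return child.add_user(user)

root = Server("root", capacity=4)
placements = [root.add_user(f"u{i}") for i in range(6)]
```

A real system would also cluster users by region of interest before splitting, so that the mixing/filtering traffic between parent and child stays low.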

Relevância: 20.00%

Resumo:

In this work, we propose the Interperception paradigm, a new approach comprising a set of rules and a software architecture for bringing together users of different interfaces in the same virtual environment. The system detects each user's resources and transforms the data so that it can be presented on 3D, 2D, and textual (1D) interfaces. This allows any user to connect, access information, and exchange information with other users in a practical way, without having to change hardware or software. As results, two virtual environments built according to this paradigm are presented

Relevância: 20.00%

Resumo:

Precise and fast identification of downhole abnormalities is essential to prevent damage and increase production in the oil industry. This work presents a new automatic approach to detecting and classifying the operating mode of sucker-rod pumping systems from downhole dynamometer cards. The main idea is to recognize the well's production status through image processing of the downhole dynamometer card (boundary descriptors) together with statistical and similarity tools such as Fourier descriptors, Principal Component Analysis (PCA), and Euclidean distance. Real data from sucker-rod pumping systems are used to validate the proposal
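The Fourier-descriptor-plus-Euclidean-distance idea can be sketched as follows; the "card" boundaries here are synthetic closed curves, and the descriptor normalization is one common choice, not necessarily the thesis's exact formulation:

```python
import numpy as np

def fourier_descriptors(boundary, n=4):
    """Fourier descriptors of a closed boundary given as complex points
    x + iy: drop the DC term (translation invariance) and keep the
    magnitudes of the first n positive and negative harmonics,
    normalized by the first harmonic (scale invariance)."""
    coeffs = np.fft.fft(boundary)
    mags = np.concatenate([np.abs(coeffs[1:n + 1]), np.abs(coeffs[-n:])])
    return mags / np.abs(coeffs[1])

def classify(card, references):
    """Nearest reference card by Euclidean distance between descriptors."""
    d = fourier_descriptors(card)
    dists = {label: np.linalg.norm(d - fourier_descriptors(ref))
             for label, ref in references.items()}
    return min(dists, key=dists.get)

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.cos(theta) + 1j * np.sin(theta)        # "normal" card shape
ellipse = 2 * np.cos(theta) + 1j * np.sin(theta)   # "anomalous" card shape
refs = {"normal": circle, "anomalous": ellipse}
label = classify(1.9 * np.cos(theta) + 1j * 0.9 * np.sin(theta), refs)
```

In the actual method, PCA would additionally reduce the descriptor vectors before the distance comparison.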

Relevância: 20.00%

Resumo:

The use of three-dimensional (3D) graphical objects in multimedia applications is gaining ever more space in the media. Networks with high transmission rates and computers with powerful processing and graphics capabilities boost and popularize such three-dimensional applications, which range from military uses to entertainment and education. Among the educational applications, we highlight those that create virtual copies of cultural spaces such as museums. Through such a copy one can visit a museum virtually, see other users, communicate, and exchange information about the works, allowing remote users to visit physically distant museums. A major problem with such virtual environments is keeping them up to date: because they combine several media (text, images, sounds, and 3D models), handling and updating them requires staff with specialized knowledge, and museums rarely have people with this profile on their teams. Within the GT-MV (Grupo de Trabalho de Museus Virtuais), funded by RNP (Rede Nacional de Ensino e Pesquisa), we propose a portal for registering, updating, and collaboratively visiting the virtual museums of Brazil. Updating a nationwide system like this, whether the changes concern the works or the physical space, would be impossible if done only by the project team. In this scenario, we propose the modeling and implementation of a tool that allows virtual spaces to be edited more easily and intuitively than with the available tools. Within the context of the GT-MV, we apply the SAMVC (Sistema de Autoria de Museus Virtuais Colaborativos) to museums whose curators build the museum from a 2D floor plan: from this two-dimensional information, the system recreates the three-dimensional equivalent. As a result, with little or no training, team members of each museum can be responsible for updating the system
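The 2D-floor-plan-to-3D step can be illustrated by a minimal wall extrusion; the data layout (wall segments as point pairs, quads as corner lists) is a hypothetical simplification of what an authoring tool like SAMVC would produce:

```python
def extrude_floor_plan(walls_2d, height=3.0):
    """Turn 2D wall segments ((x1, y1), (x2, y2)) from a floor plan into
    3D quads (four xyz corners), as a museum-editing tool might do."""
    quads = []
    for (x1, y1), (x2, y2) in walls_2d:
        quads.append([(x1, y1, 0.0), (x2, y2, 0.0),
                      (x2, y2, height), (x1, y1, height)])
    return quads

plan = [((0, 0), (5, 0)), ((5, 0), (5, 4))]   # two walls of a room
quads = extrude_floor_plan(plan, height=3.0)
```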

Relevância: 20.00%

Resumo:

Brain-Computer Interfaces (BCI) have as their main purpose to establish a communication path with the central nervous system (CNS), independent of the standard pathways (nerves and muscles), in order to control a device. The main objective of the present research is to develop an off-line BCI that separates the different EEG patterns produced by strictly mental tasks performed by an experimental subject, comparing the effectiveness of different signal pre-processing approaches. We also tested different classification schemes: all-versus-all, one-versus-one, and a hierarchical approach. No pre-processing technique was found to improve system performance; the hierarchical approach, however, proved capable of producing results above those expected from the literature
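The hierarchical classification scheme can be sketched as a root classifier that picks a group of mental tasks followed by a group-specific classifier; the task names, features, and threshold rules below are invented stand-ins for trained models, not the study's actual classes:

```python
def hierarchical_classify(x, root_clf, leaf_clfs):
    """Hierarchical scheme: a root classifier picks a group of mental
    tasks, then a group-specific classifier picks the task itself."""
    group = root_clf(x)
    return leaf_clfs[group](x)

# Toy stand-ins for trained classifiers on 2-feature EEG patterns.
root = lambda x: "math" if x[0] > 0.5 else "motor"
leaves = {
    "math": lambda x: "multiplication" if x[1] > 0.5 else "counting",
    "motor": lambda x: "left_hand" if x[1] > 0.5 else "right_hand",
}
label = hierarchical_classify((0.9, 0.2), root, leaves)
```

Splitting the decision this way lets each classifier specialize on an easier binary-like problem, which is one plausible reason the hierarchical approach outperformed the flat schemes.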

Relevância: 20.00%

Resumo:

Reinforcement learning is a machine learning technique that, although it has found a large number of applications, may have yet to reach its full potential. One inadequately explored possibility is combining reinforcement learning with other methods to solve pattern classification problems. The generalization problems faced by ensembles of support vector machines are well documented in the literature: algorithms such as AdaBoost do not deal appropriately with the imbalances that arise in these situations, and the several alternatives proposed have met with varying degrees of success. This dissertation presents a new approach to building committees of support vector machines. The algorithm combines AdaBoost with a reinforcement learning layer that adjusts committee parameters so that imbalances among the committee members do not harm the generalization performance of the final hypothesis. Comparisons were made between ensembles with and without the reinforcement learning layer on benchmark data sets widely known in the pattern classification field
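A reinforcement-style adjustment of committee weights can be sketched as rewarding members that vote with the observed label and penalizing the rest; this is a loose illustration of the idea, not the dissertation's actual algorithm, and the learning rate and update rule are assumptions:

```python
def committee_predict(votes, weights):
    """Weighted majority vote of a committee over labels {-1, +1}."""
    score = sum(w * v for w, v in zip(weights, votes))
    return 1 if score >= 0 else -1

def reinforce(weights, votes, true_label, lr=0.1):
    """Reinforcement-style update: members that voted with the true label
    are rewarded, members that voted against it are penalized; weights
    are renormalized to sum to one."""
    new = [w * (1 + lr) if v == true_label else w * (1 - lr)
           for w, v in zip(weights, votes)]
    total = sum(new)
    return [w / total for w in new]

weights = [1 / 3] * 3
votes = [1, 1, -1]            # third member disagrees with the true label (+1)
weights = reinforce(weights, votes, true_label=1)
pred = committee_predict(votes, weights)
```

Repeating such updates over a validation stream shifts influence away from members whose errors are concentrated on the same patterns, which is the kind of imbalance AdaBoost alone handles poorly.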

Relevância: 20.00%

Resumo:

Modern wireless systems employ adaptive techniques to provide high throughput while observing the desired coverage, Quality of Service (QoS), and capacity. An alternative for further enhancing the data rate is to apply cognitive radio concepts, in which a system exploits unused spectrum on existing licensed bands by sensing the spectrum and opportunistically accessing unused portions. Techniques such as Automatic Modulation Classification (AMC) can help, or even be vital, in such scenarios. AMC implementations usually rely on some form of signal pre-processing, which may introduce a high computational cost or rest on assumptions about the received signal that may not hold (e.g., Gaussianity of the noise). This work proposes a new AMC method that uses a similarity measure from the Information Theoretic Learning (ITL) framework known as the correntropy coefficient. It extracts similarity measurements between a pair of random processes using higher-order statistics, yielding better similarity estimates than, for example, the correlation coefficient. Experiments carried out by computer simulation show that the proposed technique achieves a high success rate in classifying digital modulations, even in the presence of additive white Gaussian noise (AWGN)
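One common empirical form of the correntropy coefficient (centered correntropy with a Gaussian kernel, normalized like a correlation coefficient) can be sketched as below; the kernel width and the test signals are illustrative choices, not the paper's exact setup:

```python
import numpy as np

def centered_correntropy(x, y, sigma=1.0):
    """Centered cross-correntropy: mean Gaussian-kernel similarity of the
    paired samples minus the mean over all sample pairs."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    k = lambda a, b: np.exp(-(a - b) ** 2 / (2 * sigma ** 2))
    return float(np.mean(k(x, y)) - np.mean(k(x[:, None], y[None, :])))

def correntropy_coefficient(x, y, sigma=1.0):
    """Normalized centered correntropy: behaves like a correlation
    coefficient but uses higher-order (kernel-induced) statistics."""
    num = centered_correntropy(x, y, sigma)
    den = np.sqrt(centered_correntropy(x, x, sigma)
                  * centered_correntropy(y, y, sigma))
    return num / den

t = np.linspace(0, 1, 200, endpoint=False)
sine = np.sin(2 * np.pi * 5 * t)            # one modulation shape
square = np.sign(np.sin(2 * np.pi * 5 * t)) # a different modulation shape
same = correntropy_coefficient(sine, sine)
cross = correntropy_coefficient(sine, square)
```

An AMC scheme along these lines would compare the received signal against templates of each candidate modulation and pick the one with the highest coefficient.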

Relevância: 20.00%

Resumo:

Pattern classification is one of the most prominent subareas of machine learning. Among the various approaches to pattern classification problems, Support Vector Machines (SVM) receive great emphasis due to their ease of use and good generalization performance. The Least Squares formulation of the SVM (LS-SVM) finds the solution by solving a set of linear equations instead of the quadratic programming required by the SVM. LS-SVMs have free parameters that must be chosen correctly to achieve satisfactory results on a given task. Although LS-SVMs perform well, many tools have been developed to improve them, mainly new classification methods and the use of ensembles, that is, combinations of several classifiers. In this work, we propose using an ensemble together with a Genetic Algorithm (GA), a search algorithm based on the evolution of species, to enhance LS-SVM classification. To build the ensemble we randomly select attributes of the original problem, splitting it into smaller problems on which each classifier acts. We then apply a genetic algorithm both to find effective values for the LS-SVM parameters and to find a weight vector that measures the importance of each machine in the final classification, which is obtained by a linear combination of the LS-SVMs' decision values with the weight vector. We evaluated the algorithm on several benchmark classification problems and compared the results with other classifiers
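The linear-system character of the LS-SVM can be shown in a few lines of numpy; this uses one common formulation of the LS-SVM system with an RBF kernel on a toy XOR-like problem, and omits the ensemble and GA parts entirely:

```python
import numpy as np

def rbf_kernel(A, B, gamma_k=1.0):
    """Gaussian RBF kernel matrix between two sets of points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma_k * d2)

def lssvm_fit(X, y, gamma=10.0):
    """LS-SVM training: instead of SVM quadratic programming, solve the
    linear system  [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y.astype(float)]))
    return sol[0], sol[1:]               # bias b and coefficients alpha

def lssvm_predict(X, Xtr, b, alpha):
    """Decision: sign of the kernel expansion plus the bias."""
    return np.sign(rbf_kernel(X, Xtr) @ alpha + b)

# Toy two-class problem (XOR-like, not linearly separable).
X = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], float)
y = np.array([-1, -1, 1, 1])
b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, X, b, alpha)
```

In the proposed ensemble, several such machines, each trained on a random subset of attributes, would have their decision values combined linearly with the GA-found weight vector.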

Relevância: 20.00%

Resumo:

This work presents an auxiliary method for measuring bone density through the attenuation of electromagnetic waves. To do so, an arrangement of two rectangular microstrip antennas was used, operating at a frequency of 2.49 GHz and fed by a microstrip line on a fiberglass substrate with relative permittivity 4.4 and height 0.9 cm. Simulations were performed with samples of silica, bone meal, and silica-and-gypsum blocks to demonstrate the variation in attenuation level across different combinations. Samples of bovine bone were used because they reproduce the relevant aspects of human bone well; they were weighed, measured, and irradiated with microwaves. The samples then had their masses altered and the process was repeated. The data obtained were fed into a neural network, whose training achieved the best results by correctly classifying 100% of the samples. We conclude that a single non-ionizing wave in the 2.49 GHz range makes it possible to evaluate the attenuation level in bone tissue, and that a neural network fed with the characteristics obtained in the experiment can classify a sample as having low or high bone density
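The final low/high classification step can be sketched with a minimal logistic unit trained by gradient descent; this is a much simpler stand-in for the thesis's neural network, and the attenuation readings below are invented for illustration:

```python
import numpy as np

def train_logistic(x, y, lr=0.5, epochs=500):
    """Single-input logistic model: learns a threshold on the
    (standardized) attenuation level via gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        w -= lr * float(np.mean((p - y) * x))
        b -= lr * float(np.mean(p - y))
    return w, b

# Hypothetical attenuation readings (dB): denser bone attenuates more.
atten = np.array([2.0, 2.5, 3.0, 7.0, 7.5, 8.0])
dense = np.array([0, 0, 0, 1, 1, 1])           # 0 = low, 1 = high density
xs = (atten - atten.mean()) / atten.std()      # standardize the feature
w, b = train_logistic(xs, dense)
pred = (1.0 / (1.0 + np.exp(-(w * xs + b))) > 0.5).astype(int)
```

A real setup would feed several measured characteristics (not one scalar) into a multi-layer network, but the decision structure is the same.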