949 results for "Indexação automática" (automatic indexing)
Abstract:
Models and modelling techniques are nowadays fundamental in software engineering, given the complexity and sophistication of current information systems. The Unified Modeling Language (UML) [OMG, 2005a] [OMG, 2005b] has become a standard for modelling in software engineering and in other areas and domains, but its lack of support for modelling interactivity and the user interface is well recognized [Nunes and Falcão e Cunha, 2000]. This work explores the connection between software engineering and human-computer interaction, for which the Wisdom development process [Nunes and Falcão e Cunha, 2000] [Nunes, 2001] was chosen. The Wisdom method is driven by essential use cases and by the principle of evolutionary prototyping, focusing on user-interface design through the presentation structure, using the Canonical Abstract Prototypes (CAP) notation [Constantine and Lockwood, 1999] [Constantine, 2003], and through the interaction behaviour, using the ConcurTaskTrees (CTT) notation [Paternò, 1999] [Mori, Paternò, et al., 2004], in UML. This work also proposes a new step in the Wisdom process, defining a specific model built according to the requirements of the Model Driven Architecture (MDA) recommendation [Soley and OMG, 2000] [OMG, 2003] issued by the Object Management Group (OMG). This specific model acts as an intermediary between the design model and the implementation of the final user interface. The proposal aligns the Wisdom method with the MDA recommendation, making it possible to automatically generate functional user-interface prototypes from the conceptual analysis and design models. The MetaSketch modelling and metamodelling tool [Nóbrega, Nunes, et al., 2006] was used to define and manipulate the proposed models and elements. The Model2Model and Model2Code applications were created to support the transformations between models and the generation of code from them. The Hydra framework, developed in the PHP language [PHP, 2006], was chosen as the implementation platform and was extended with some concepts in order to support the approach advocated in this work.
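As an illustration of the kind of transformation a tool like Model2Code performs, the sketch below turns a toy abstract interaction-space description into skeleton markup. It is only a sketch: the dictionary format, the element kinds and the HTML-like output are assumptions for illustration and do not reproduce the Wisdom/CAP notation or the actual tool.

```python
def model2code(interaction_space):
    """Toy model-to-code step: emit skeleton markup for an abstract interaction space.
    The 'input'/'action' element kinds are illustrative assumptions."""
    lines = [f"<form name='{interaction_space['name']}'>"]
    for element in interaction_space["elements"]:
        if element["kind"] == "input":
            lines.append(f"  <input name='{element['name']}' />")
        elif element["kind"] == "action":
            lines.append(f"  <button name='{element['name']}'>{element['name']}</button>")
    lines.append("</form>")
    return "\n".join(lines)

# Example: a hypothetical 'login' interaction space with two inputs and one action.
print(model2code({"name": "login", "elements": [
    {"kind": "input", "name": "user"},
    {"kind": "input", "name": "password"},
    {"kind": "action", "name": "enter"},
]}))
```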
Abstract:
Debris flows, also called flash floods or "aluviões", are rapid slope movements driven by water and are considered one of the most dangerous phenomena in mountainous regions, causing damage wherever they pass. As a means of prevention, but also of studying the conditions behind the formation of these phenomena and their behaviour along their course, an important role should be given to the automatic monitoring of watercourses. On Madeira Island, the most recent debris flow occurred on 20 February 2010, affecting the municipalities of the southern slope, particularly Funchal and Ribeira Brava. The event was triggered by an adverse meteorological situation with very intense rainfall, which resulted in the transport of a large volume of solid material, caused the streams to overflow and completely blocked downtown Funchal and other locations. This dissertation is therefore an introduction to the study and monitoring of debris flows on Madeira Island. It first gives an overview of the phenomenon, describing its causes and characteristics and mentioning some of the debris flows that have struck the island. It then describes some of the equipment used in monitoring systems, with the respective advantages and disadvantages, as well as some existing systems. As the objective of this work, an automatic monitoring solution was designed for the Ribeira de Machico drainage basin: the morphological properties of the basin were analysed, followed by a description of the sensors used and their locations. Finally, as a case study, the monitoring system to be implemented by LREC (Laboratório Regional de Engenharia Civil) is presented, defining the devices to be used and their locations.
Abstract:
Analyses the indexing of the documents of the Chemistry Sectional Library through an informetric study of the search tool of the Library System (SISBI) of the Universidade Federal do Rio Grande do Norte (UFRN). Describes an informetric study carried out on the UFRN SISBI search tool, focused on the documents of the Chemistry Sectional Library. Highlights the importance of informetric studies for analysing information retrieval as it relates to indexing. Discusses the relationship between informetrics, indexing and information retrieval, with the aim of making librarians more analytical and giving them a broader understanding of the field of information science. Uses a methodology of queries on predefined subjects, followed by quantitative filtering, in order to verify whether the indexing of the documents is adequate for information retrieval through the SISBI search tool. Assesses the relevance of the documents returned by each search, together with its precision and recall, showing that good information retrieval requires indexing that avoids ambiguity with other terms; this underlines the importance of regularly performing informetric studies that assess information retrieval so that it can be continuously improved.
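For reference, the precision and recall figures discussed in this study are the standard information-retrieval ratios, computed per query:

\[
\text{Precision} = \frac{|\text{relevant} \cap \text{retrieved}|}{|\text{retrieved}|},
\qquad
\text{Recall} = \frac{|\text{relevant} \cap \text{retrieved}|}{|\text{relevant}|}
\]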
Abstract:
VANTI, Nadia et al. Linguagens de indexação: uso das linguagens presentes na prática da indexação. In: ENCONTRO REGIONAL DE ESTUDANTES DE BIBLIOTECONOMIA, DOCUMENTAÇÃO, CIÊNCIA DA INFORMAÇÃO E GESTÃO DA INFORMAÇÃO, 14., 2011, Maranhão. Anais... Maranhão: EREBD, 2011.
Abstract:
NASCIMENTO, Bruna Laís C. do; FELIPE, Carla Beatriz Marques; BEZERRA, Midinai Gomes. Política de indexação: visando a qualidade para a recuperação da informação. In: SEMINÁRIO DE PESQUISA DO CCSA, 16., 2010, Natal. Anais eletrônicos... Natal: UFRN, 2010. Disponível em:
Abstract:
Deaf people face serious difficulties in accessing information. Support for sign languages is rarely addressed in Information and Communication Technologies (ICT). Furthermore, the scientific literature lacks works on machine translation for sign languages in real-time, open-domain scenarios such as TV. To minimize these problems, this work proposes a solution for the automatic generation of Brazilian Sign Language (LIBRAS) video tracks for captioned digital multimedia content. These tracks are generated by a real-time machine translation strategy that translates a Brazilian Portuguese subtitle stream (e.g., a movie subtitle or a closed caption stream). Furthermore, the proposed solution is open-domain and has a set of mechanisms that exploit human computation to generate and maintain its linguistic constructions. Implementations of the proposed solution were developed for digital TV, Web and Digital Cinema platforms, and a set of experiments with deaf users was carried out to evaluate the main aspects of the solution. The results showed that the proposed solution is efficient, able to generate and embed LIBRAS tracks in real-time scenarios, and a practical and feasible alternative for reducing the barriers deaf people face in accessing information, especially when human interpreters are not available.
Abstract:
Due to the large amount of television content that emerged with Digital TV, viewers face a new challenge: how to find interesting content intuitively and efficiently. Personalized Electronic Programming Guides (pEPG) arise as an answer to this complex challenge. We propose TrendTV, a layered architecture that allows the formation of social networks among viewers of Interactive Digital TV based on online microblogging. Associated with a pEPG, this social network allows viewers to filter content on a particular subject based on the indications made by other viewers in their network, to create their own indications for a particular piece of content while it is being displayed, and to analyse the importance of a particular program online, based on these indications. Any user can thus filter content and generate or exchange information with other users in a flexible and transparent way, using several different devices (TVs, smartphones, tablets or PCs). Moreover, the architecture defines a mechanism to switch channels automatically based on the best program being shown at the moment, suggesting new components to be added to the middleware of the Brazilian Digital TV System (Ginga). The result is a dynamically built database containing the classification of several TV programs, as well as an application that automatically switches to the best channel of the moment.
Abstract:
E-learning, which refers to the use of Internet-related technologies to improve knowledge and learning, has emerged as a complementary form of education, bringing advantages such as increased accessibility to information, personalized learning, democratization of education, and ease of updating, distributing and standardizing content. In this context, this work develops a tool named ISE-SPL, whose purpose is the automatic generation of E-learning systems for medical education using Software Product Line concepts. It constitutes an innovative methodology for medical education that aims to assist healthcare professors in their teaching through the use of educational technologies, all based on computing applied to healthcare (Informatics in Health). The tests performed to validate ISE-SPL were divided into two stages: the first used a software analysis tool similar to ISE-SPL, called SPLOT, and the second used usability questionnaires answered by healthcare professors who used ISE-SPL. Both tests showed positive results, demonstrating that ISE-SPL is an efficient tool for generating E-learning software and useful for healthcare professors.
Abstract:
Modern wireless systems employ adaptive techniques to provide high throughput while meeting coverage, Quality of Service (QoS) and capacity requirements. An alternative to further enhance data rates is to apply cognitive radio concepts, in which a system exploits unused spectrum on existing licensed bands by sensing the spectrum and opportunistically accessing unused portions. Techniques such as Automatic Modulation Classification (AMC) can help, or even be vital, in such scenarios. AMC implementations usually rely on some form of signal pre-processing, which may introduce a high computational cost or make assumptions about the received signal that may not hold (e.g., Gaussianity of the noise). This work proposes a new AMC method that uses a similarity measure from the Information Theoretic Learning (ITL) framework, known as the correntropy coefficient. It can extract similarity measurements over a pair of random processes using higher-order statistics, yielding better similarity estimates than, for example, the correlation coefficient. Experiments carried out by means of computer simulation show that the proposed technique achieves a high success rate in the classification of digital modulations, even in the presence of additive white Gaussian noise (AWGN).
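As a rough illustration of the similarity measure mentioned above, the sketch below estimates the centered correntropy coefficient between two sampled signals using a Gaussian kernel. The kernel choice and kernel width are assumptions, and the snippet is not the exact estimator or classification pipeline used in the work; in an AMC setting, a received signal would be compared against reference waveforms of each candidate modulation and the class with the highest coefficient selected.

```python
import numpy as np

def correntropy_coefficient(x, y, sigma=1.0):
    """Centered correntropy coefficient between two 1-D signals (ITL similarity).
    Sketch only: Gaussian kernel and kernel width are assumptions."""
    k = lambda u: np.exp(-u ** 2 / (2.0 * sigma ** 2))

    def centered_cross(a, b):
        # E[k(A - B)] estimated on paired samples, minus its estimate under independence.
        paired = np.mean(k(a - b))
        independent = np.mean(k(a[:, None] - b[None, :]))
        return paired - independent

    u_xy = centered_cross(x, y)
    return u_xy / np.sqrt(centered_cross(x, x) * centered_cross(y, y))

# Example: a clean reference tone versus a noisy copy of it.
rng = np.random.default_rng(0)
ref = np.cos(2 * np.pi * 0.05 * np.arange(512))
received = ref + 0.3 * rng.standard_normal(512)
print(correntropy_coefficient(ref, received, sigma=0.5))
```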
Abstract:
The increasing demand for high-performance wireless communication systems has exposed the inefficiency of the current model of fixed radio spectrum allocation. In this context, cognitive radio appears as a more efficient alternative, providing opportunistic spectrum access with the maximum possible bandwidth. To meet these requirements, the transmitter must identify transmission opportunities and the receiver must recognize the parameters defined for the communication signal. Techniques based on cyclostationary analysis can be applied both to spectrum sensing and to modulation classification, even in low signal-to-noise ratio (SNR) environments. However, despite this robustness, one of the main disadvantages of cyclostationarity is the high computational cost of calculating its functions. This work proposes efficient architectures for obtaining cyclostationary features to be employed in both spectrum sensing and automatic modulation classification (AMC). In the context of spectrum sensing, a parallelized algorithm for extracting cyclostationary features of communication signals is presented, and the performance of this parallelization is evaluated through speedup and parallel efficiency metrics. The spectrum sensing architecture is analysed for several configurations of false alarm probability, SNR levels and observation times for BPSK and QPSK modulations. In the context of AMC, the reduced alpha-profile is proposed as a cyclostationary signature computed over a reduced set of cyclic frequencies. This signature is validated by a modulation classification architecture based on pattern matching. The AMC architecture is investigated in terms of correct classification rates for AM, BPSK, QPSK, MSK and FSK modulations, considering several scenarios of observation length and SNR levels. The numerical performance results obtained in this work show the efficiency of the proposed architectures.
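For context, a conventional estimator of the cyclic autocorrelation, one of the cyclostationary features referred to above, is sketched below. The rectangular window, the lag-wise normalization and the way the alpha-profile is summarized are assumptions; the parallel architectures and the reduced alpha-profile proposed in the work are not reproduced here.

```python
import numpy as np

def cyclic_autocorrelation(x, alpha, max_lag=16):
    """Estimate R_x^alpha(tau) for tau = 0..max_lag from a sampled (complex) signal x,
    with alpha given in cycles per sample."""
    N = len(x)
    phase = np.exp(-2j * np.pi * alpha * np.arange(N))
    r = np.zeros(max_lag + 1, dtype=complex)
    for tau in range(max_lag + 1):
        # Lag product demodulated at the candidate cyclic frequency, then averaged.
        r[tau] = np.mean(x[tau:N] * np.conj(x[:N - tau]) * phase[:N - tau])
    return r

# A crude alpha-profile: the strongest |R_x^alpha(tau)| over lags, for a set of candidate alphas.
def alpha_profile(x, alphas, max_lag=16):
    return np.array([np.max(np.abs(cyclic_autocorrelation(x, a, max_lag))) for a in alphas])
```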
Abstract:
There has been an increasing tendency toward the use of selective image compression, since many applications make use of digital images and, in some cases, loss of information in certain regions is not allowed. However, there are applications in which these images are captured and stored automatically, making it impossible for the user to select the regions of interest to be compressed losslessly. A possible solution is the automatic selection of these regions, a very difficult problem to solve in the general case; nevertheless, intelligent techniques can be used to detect such regions in specific cases. This work proposes a selective color image compression method in which previously chosen regions of interest are compressed losslessly. The method uses the wavelet transform to decorrelate the pixels of the image, a competitive neural network to perform vector quantization, mathematical morphology, and adaptive Huffman coding. In addition to manual selection, there are two options for automatic detection: a texture segmentation method, in which the highest-frequency texture is selected as the region of interest, and a new face detection method, in which the face region is compressed losslessly. The results show that both can be successfully used with the compression method, taking the map of the region of interest as input.
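The role of the region-of-interest map in a selective scheme can be illustrated with a deliberately simplified sketch: pixels inside the mask bypass the lossy branch, everything else is degraded. The coarse scalar quantization below is only a stand-in for the wavelet / competitive-network vector quantization / adaptive Huffman pipeline described above.

```python
import numpy as np

def selective_quantize(image, roi_mask, step=16):
    """Toy selective compression: keep ROI pixels untouched (lossless path) and
    coarsely quantize the background (stand-in for the lossy path).
    Assumes an integer-valued image (e.g., uint8) and a boolean ROI mask."""
    lossy = (image // step) * step
    return np.where(roi_mask, image, lossy)
```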
Abstract:
This paper proposes a methodology for the automatic extraction of building roof contours from a Digital Elevation Model (DEM), which is generated through the regularization of an available laser point cloud. The methodology is based on two steps. First, in order to detect high objects (buildings, trees etc.), the DEM is segmented through a recursive splitting technique followed by a Bayesian merging technique. The recursive splitting technique uses the quadtree structure to subdivide the DEM into homogeneous regions. In order to minimize the fragmentation commonly observed in the results of recursive splitting segmentation, a region merging technique based on the Bayesian framework is applied to the previously segmented data. The high-object polygons are then extracted by using vectorization and polygonization techniques. Second, the building roof contours are identified among all the high objects extracted previously. Taking into account some roof properties and feature measurements (e.g., area, rectangularity, and angles between the principal axes of the roofs), an energy function was developed based on the Markov Random Field (MRF) model. The solution of this function is a polygon set corresponding to the building roof contours and is found by a minimization technique such as the Simulated Annealing (SA) algorithm. Experiments carried out with laser scanning DEMs showed that the methodology works properly, as it delivered roof contours with approximately 90% shape accuracy and no false positives.
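A minimal sketch of the recursive splitting step is given below: a DEM block is kept as a leaf when it is homogeneous enough, otherwise it is divided into four quadrants. The variance-based homogeneity test and the thresholds are assumptions; the Bayesian merging, the polygonization and the MRF optimization described above are not reproduced.

```python
import numpy as np

def quadtree_split(dem, r0, c0, h, w, var_thresh=0.25, min_size=4, leaves=None):
    """Recursively split the DEM block at (r0, c0) of size h x w into homogeneous regions."""
    if leaves is None:
        leaves = []
    block = dem[r0:r0 + h, c0:c0 + w]
    if h <= min_size or w <= min_size or np.var(block) <= var_thresh:
        leaves.append((r0, c0, h, w))   # homogeneous (or minimal) block: keep as a leaf
        return leaves
    h2, w2 = h // 2, w // 2             # otherwise split into four quadrants
    quadtree_split(dem, r0, c0, h2, w2, var_thresh, min_size, leaves)
    quadtree_split(dem, r0, c0 + w2, h2, w - w2, var_thresh, min_size, leaves)
    quadtree_split(dem, r0 + h2, c0, h - h2, w2, var_thresh, min_size, leaves)
    quadtree_split(dem, r0 + h2, c0 + w2, h - h2, w - w2, var_thresh, min_size, leaves)
    return leaves

# Example: segment a synthetic 64 x 64 DEM.
dem = np.random.default_rng(0).random((64, 64))
regions = quadtree_split(dem, 0, 0, 64, 64)
```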
Abstract:
This paper proposes a monoscopic method for the automatic determination of building heights in digital photographs of urban areas, based on the radial displacement of points in the image plane and on the acquisition geometry of the photograph. Building heights can be used, among other applications, to model the surface of urban areas and to support urban planning and management. The proposed methodology employs a sequence of steps to detect straight segments arranged radially with respect to the photogrammetric coordinate system, which characterize the lateral edges of the buildings present in the photograph. In a first stage, the search area is reduced by detecting the shadows cast by buildings, generating sub-images of the areas around each detected shadow. Then, for each sub-image, edges are automatically extracted and consistency tests are applied to characterize them as radially arranged straight segments. Finally, with the lateral edges selected and the flying height known, the building heights can be calculated. Experimental results obtained with real images showed that the proposed approach is suitable for the automatic determination of building heights in digital images.
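The acquisition geometry invoked above is, in essence, the classical relief displacement relation of a (near-)vertical photograph. Under that assumption, and ignoring tilt, the height of a building follows from

\[
h = \frac{d}{r}\,H
\]

where \(d\) is the radial displacement between the imaged top and base of a lateral edge, \(r\) is the radial distance from the principal point to the imaged top, and \(H\) is the flying height above the base of the building. This is only the textbook form of the relation; the paper's own formulation may differ in detail.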
Abstract:
This article proposes a method for 3D road extraction from a stereopair of aerial images. The dynamic programming (DP) algorithm is used to carry out the optimization process in object space, instead of in image space as in traditional DP methodologies. This means that road centerlines are traced directly in object space, which requires a mathematical relationship connecting road points in object and image space. This allows radiometric information from the images to be integrated into the associated mathematical road model. As the approach depends on an initial approximation of each road, a few seed points are needed to coarsely describe the road. The proposed method usually yields good results, although large anomalies along the road can disturb its performance. The method can therefore be used in practical applications, although some local manual editing of the extracted road centerline is to be expected.
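To make the optimization concrete, the sketch below runs a generic dynamic-programming trace of a minimum-cost centerline through a cost grid. It is an image-space toy with an assumed linear smoothness penalty, whereas the paper formulates the same optimization directly in object space through a mathematical road model.

```python
import numpy as np

def dp_trace(cost, smooth_weight=1.0):
    """Minimum-cost path through a cost grid (rows = candidate positions,
    columns = stations along the road), with a penalty on jumps between stations."""
    n_pos, n_col = cost.shape
    acc = cost.astype(float)                      # accumulated cost
    back = np.zeros((n_pos, n_col), dtype=int)    # backpointers to the previous station
    positions = np.arange(n_pos)
    for j in range(1, n_col):
        for i in range(n_pos):
            trans = acc[:, j - 1] + smooth_weight * np.abs(positions - i)
            back[i, j] = int(np.argmin(trans))
            acc[i, j] += trans[back[i, j]]
    path = [int(np.argmin(acc[:, -1]))]           # best position at the last station
    for j in range(n_col - 1, 0, -1):
        path.append(int(back[path[-1], j]))       # follow backpointers to the first station
    return path[::-1]
```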
Abstract:
This paper proposes a fully automatic strategy to reduce the complexity of the patterns (vegetation, buildings, soils etc.) that interact with the object 'road' in color images, thereby reducing the difficulty of automatically extracting this object. The proposed methodology consists of three sequential steps. In the first step, a pointwise operator known as NandA (Natural and Artificial) is applied to compute an artificiality index; the result is an image whose intensity attribute is the NandA response. The second step consists of automatically thresholding the image obtained in the previous step, resulting in a binary image, which usually allows artificial and natural objects to be separated. The third step consists of applying a pre-existing road seed extraction methodology to the previously generated binary image. Several experiments carried out with real images allowed the potential of the proposed methodology to be verified. A comparison of the results with those obtained by a similar road seed extraction methodology applied to gray-level images showed that the main benefit was a drastic reduction in computational effort.
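The automatic thresholding in the second step can be illustrated with a standard Otsu threshold applied to the artificiality-index image; the NandA operator itself and the road seed extraction stage are not reproduced, and the choice of Otsu here is an assumption rather than a detail stated in the abstract.

```python
import numpy as np

def binarize_artificiality(index_image, nbins=256):
    """Otsu thresholding of an artificiality-index image into an artificial/natural map."""
    hist, edges = np.histogram(index_image.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                              # probability of the 'low' class
    w1 = 1.0 - w0
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.maximum(w0, 1e-12)         # mean of the 'low' class
    mu1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (mu0 - mu1) ** 2           # between-class variance
    threshold = centers[np.argmax(between)]
    return index_image >= threshold                # True where the response looks 'artificial'
```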