954 results for Automatic indexing
Abstract:
E-learning, which refers to the use of Internet-related technologies to improve knowledge and learning, has emerged as a complementary form of education, bringing advantages such as increased access to information, personalized learning, democratization of education, and ease of updating, distributing and standardizing content. In this context, this paper develops a tool named ISE-SPL, whose purpose is the automatic generation of E-learning systems for medical education using concepts from Software Product Lines. It consists of an innovative methodology for medical education that aims to assist healthcare professors in their teaching through the use of educational technologies, all based on computing applied to healthcare (Informatics in Health). The tests performed to validate ISE-SPL were divided into two stages: the first used SPLOT, a software analysis tool similar to ISE-SPL, and the second applied usability questionnaires to healthcare professors who used ISE-SPL. Both tests showed positive results, indicating that ISE-SPL is an efficient tool for generating E-learning software and useful for healthcare professors.
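The abstract does not detail ISE-SPL's internal feature model, but the Software Product Line idea it relies on can be illustrated with a minimal sketch: a hypothetical feature model for e-learning modules and a check that a selected configuration respects mandatory and alternative features. All feature names below are illustrative assumptions, not the actual ISE-SPL model.

```python
# Minimal Software Product Line sketch: a hypothetical feature model for
# e-learning products and a validity check for a chosen configuration.
# Feature names are illustrative only; they are not the real ISE-SPL model.

FEATURE_MODEL = {
    "mandatory": {"content_viewer", "quiz"},             # must be in every product
    "optional": {"forum", "video_lectures", "glossary"},
    "alternative": {"web_ui", "mobile_ui"},              # exactly one must be chosen
}

def is_valid_product(selection):
    """Return True if the selected features form a valid product."""
    known = set().union(*FEATURE_MODEL.values())
    if not selection <= known:
        return False                                     # unknown feature selected
    if not FEATURE_MODEL["mandatory"] <= selection:
        return False                                     # a mandatory feature is missing
    if len(selection & FEATURE_MODEL["alternative"]) != 1:
        return False                                     # alternative group needs exactly one
    return True

if __name__ == "__main__":
    product = {"content_viewer", "quiz", "forum", "web_ui"}
    print(is_valid_product(product))                     # True for this configuration
```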
Abstract:
Modern wireless systems employ adaptive techniques to provide high throughput while meeting coverage, Quality of Service (QoS) and capacity requirements. An alternative to further enhance the data rate is to apply cognitive radio concepts, in which a system exploits unused spectrum on existing licensed bands by sensing the spectrum and opportunistically accessing unused portions. Techniques such as Automatic Modulation Classification (AMC) can help, or even be vital, in such scenarios. Usually, AMC implementations rely on some form of signal pre-processing, which may introduce a high computational cost or make assumptions about the received signal that may not hold (e.g., Gaussianity of the noise). This work proposes a new AMC method that uses a similarity measure from the Information Theoretic Learning (ITL) framework known as the correntropy coefficient. It extracts similarity measurements over a pair of random processes using higher-order statistics, yielding better similarity estimates than, for example, the correlation coefficient. Experiments carried out by means of computer simulation show that the proposed technique achieves a high success rate in the classification of digital modulations, even in the presence of additive white Gaussian noise (AWGN).
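The correntropy coefficient referenced above can be estimated directly from samples. The sketch below, assuming a Gaussian kernel, real-valued equal-length sequences, and an arbitrary kernel width, computes the centered correntropy and its normalized coefficient; the crude BPSK-like test signal is also an assumption, not the paper's experimental setup.

```python
import numpy as np

def gaussian_kernel(e, sigma):
    """Gaussian kernel evaluated on error samples e."""
    return np.exp(-np.asarray(e, dtype=float) ** 2 / (2.0 * sigma ** 2))

def correntropy_coefficient(x, y, sigma=1.0):
    """Sample estimate of the centered correntropy coefficient between x and y."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)

    def centered(a, b):
        # joint expectation minus the product-of-marginals (cross) term
        return (np.mean(gaussian_kernel(a - b, sigma))
                - np.mean(gaussian_kernel(a[:, None] - b[None, :], sigma)))

    return centered(x, y) / np.sqrt(centered(x, x) * centered(y, y))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(512)
    ref = np.cos(2 * np.pi * 0.1 * t + np.pi * rng.integers(0, 2, 512))   # crude BPSK-like reference
    noisy = ref + 0.3 * rng.standard_normal(512)                          # AWGN-corrupted observation
    print(correntropy_coefficient(noisy, ref))
```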
Abstract:
The increasing demand for high-performance wireless communication systems has exposed the inefficiency of the current model of fixed radio spectrum allocation. In this context, cognitive radio appears as a more efficient alternative, providing opportunistic spectrum access with the maximum possible bandwidth. To meet these requirements, the transmitter must identify transmission opportunities and the receiver must recognize the parameters defined for the communication signal. Techniques based on cyclostationary analysis can be applied to both spectrum sensing and modulation classification, even in low signal-to-noise ratio (SNR) environments. However, despite this robustness, one of the main disadvantages of cyclostationarity is the high computational cost of calculating its functions. This work proposes efficient architectures for obtaining cyclostationary features to be employed in both spectrum sensing and automatic modulation classification (AMC). In the context of spectrum sensing, a parallelized algorithm for extracting cyclostationary features of communication signals is presented, and the performance of this parallelized feature extractor is evaluated through speedup and parallel efficiency metrics. The spectrum sensing architecture is analyzed for several configurations of false alarm probability, SNR levels and observation times for BPSK and QPSK modulations. In the context of AMC, the reduced alpha-profile is proposed as a cyclostationary signature calculated over a reduced set of cyclic frequencies. This signature is validated by a modulation classification architecture based on pattern matching. The AMC architecture is investigated in terms of correct classification rates for AM, BPSK, QPSK, MSK and FSK modulations, considering several scenarios of observation length and SNR levels. The numerical performance results obtained in this work show the efficiency of the proposed architectures.
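The cyclostationary features mentioned above are typically derived from the cyclic autocorrelation function, R_x^alpha(tau) = mean over n of x(n) * conj(x(n + tau)) * exp(-j*2*pi*alpha*n). As a minimal, non-parallel reference (the thesis's parallelized architectures and reduced alpha-profile are not reproduced), the sketch below scans a grid of cyclic frequencies for a synthetic rectangular-pulse BPSK-like signal; the grid, lag, and signal parameters are illustrative assumptions.

```python
import numpy as np

def cyclic_autocorrelation(x, alpha, tau):
    """Estimate R_x^alpha(tau) = mean_n[ x(n) * conj(x(n + tau)) * exp(-j*2*pi*alpha*n) ]."""
    x = np.asarray(x)
    n = np.arange(len(x) - tau)
    return np.mean(x[:len(x) - tau] * np.conj(x[tau:]) * np.exp(-2j * np.pi * alpha * n))

def alpha_profile(x, alphas, tau):
    """Magnitude of the cyclic autocorrelation over a grid of cyclic frequencies."""
    return np.array([abs(cyclic_autocorrelation(x, a, tau)) for a in alphas])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sps, n_sym = 8, 512                                  # 8 samples per symbol, 512 symbols
    x = np.repeat(rng.choice([-1.0, 1.0], n_sym), sps) + 0.5 * rng.standard_normal(n_sym * sps)
    alphas = np.linspace(0.0, 0.5, 65)                   # normalized cyclic frequencies
    profile = alpha_profile(x, alphas, tau=sps // 2)
    # The largest non-zero cyclic-frequency peak sits near the symbol rate 1/sps = 0.125.
    print(alphas[1 + np.argmax(profile[1:])])
```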
Abstract:
There has been an increasing tendency toward the use of selective image compression, since several applications make use of digital images and, in some cases, the loss of information in certain regions is not allowed. However, there are applications in which these images are captured and stored automatically, making it impossible for the user to select the regions of interest to be compressed losslessly. A possible solution is the automatic selection of these regions, a very difficult problem to solve in the general case. Nevertheless, it is possible to use intelligent techniques to detect these regions in specific cases. This work proposes a selective color image compression method in which previously chosen regions of interest are compressed in a lossless manner. The method uses the wavelet transform to decorrelate the pixels of the image, a competitive neural network for vector quantization, mathematical morphology, and adaptive Huffman coding. In addition to manual selection, there are two options for automatic detection: a texture segmentation method, in which the highest-frequency texture is selected as the region of interest, and a new face detection method, in which the face region is compressed losslessly. The results show that both can be successfully used with the compression method, providing the region-of-interest map as input.
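As a toy illustration of the selective idea only (not the wavelet, neural-network, morphology and Huffman pipeline described above), the sketch below keeps a supplied region-of-interest mask lossless while coarsely quantizing the background; the quantization step and the synthetic image are arbitrary assumptions.

```python
import numpy as np

def selective_compress(image, roi_mask, step=16):
    """Toy selective compression: ROI pixels kept exactly, background quantized.

    image    : 2-D uint8 array
    roi_mask : boolean array of the same shape, True inside the region of interest
    step     : quantization step applied outside the ROI (information is lost there)
    """
    out = ((image // step) * step + step // 2).astype(np.uint8)   # coarse, lossy background
    out[roi_mask] = image[roi_mask]                               # region of interest stays lossless
    return out

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    mask = np.zeros_like(img, dtype=bool)
    mask[16:48, 16:48] = True                                     # e.g., a detected face region
    rec = selective_compress(img, mask)
    assert np.array_equal(rec[mask], img[mask])                   # ROI reconstructed exactly
    print(np.mean(np.abs(rec[~mask].astype(int) - img[~mask].astype(int))))  # background error
```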
Abstract:
This paper proposes a methodology for the automatic extraction of building roof contours from a Digital Elevation Model (DEM), which is generated through the regularization of an available laser point cloud. The methodology is based on two steps. First, in order to detect high objects (buildings, trees etc.), the DEM is segmented through a recursive splitting technique followed by a Bayesian merging technique. The recursive splitting technique uses the quadtree structure to subdivide the DEM into homogeneous regions. In order to minimize the fragmentation commonly observed in the results of recursive splitting segmentation, a region merging technique based on the Bayesian framework is applied to the previously segmented data. The high-object polygons are then extracted using vectorization and polygonization techniques. Second, the building roof contours are identified among all the high objects extracted previously. Taking into account some roof properties and feature measurements (e.g., area, rectangularity, and angles between the principal axes of the roofs), an energy function was developed based on the Markov Random Field (MRF) model. The solution of this function is a polygon set corresponding to building roof contours and is found by a minimization technique such as the Simulated Annealing (SA) algorithm. Experiments carried out with laser scanning DEMs showed that the methodology works properly, delivering roof contours with approximately 90% shape accuracy and no false positives.
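The first step's quadtree splitting can be sketched as follows: a square block of the DEM is recursively subdivided while its height variation exceeds a homogeneity threshold. The threshold, minimum block size, and synthetic DEM below are assumptions for illustration; the Bayesian merging and MRF stages are not reproduced.

```python
import numpy as np

def quadtree_split(dem, r0, c0, size, threshold=0.5, min_size=4):
    """Recursively split a DEM block until it is homogeneous; return leaf blocks."""
    block = dem[r0:r0 + size, c0:c0 + size]
    if size <= min_size or block.max() - block.min() <= threshold:
        return [(r0, c0, size)]                          # homogeneous leaf region
    half = size // 2
    leaves = []
    for dr, dc in ((0, 0), (0, half), (half, 0), (half, half)):
        leaves += quadtree_split(dem, r0 + dr, c0 + dc, half, threshold, min_size)
    return leaves

if __name__ == "__main__":
    dem = np.zeros((64, 64))
    dem[20:40, 24:44] = 8.0                              # a raised "building" block
    regions = quadtree_split(dem, 0, 0, 64)
    print(len(regions), "leaf regions")
```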
Abstract:
This paper proposes a monoscopic method for the automatic determination of building heights in digital photographs, based on the radial displacement of points in the image plane and on the imaging geometry at the moment the photograph is taken. Building heights can be used for surface modeling in urban areas and for urban planning and management, among other applications. The proposed methodology employs a sequence of steps to detect straight segments arranged radially with respect to the photogrammetric coordinate system, which characterize the lateral edges of the buildings present in the photograph. In a first stage, the search area is reduced through the detection of the shadows cast by buildings, generating sub-images of the areas around each detected shadow. Then, for each sub-image, edges are automatically extracted and consistency tests are applied in order to characterize them as radially arranged straight segments. Next, with the lateral edges selected and the flight height known, the building heights can be calculated. The experimental results obtained with real images showed that the proposed approach is suitable for the automatic identification of building heights in digital images.
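The monoscopic principle above reduces, for a vertical photograph, to the classical relief displacement relation h = d * H / r, where d is the radial displacement between the base and the top of a lateral edge, r is the radial distance from the principal (nadir) point to the imaged top, and H is the flying height above the ground. The sketch below applies this relation, assuming photo coordinates already reduced to the principal point and expressed in consistent units; it is a simplification, not the paper's full procedure.

```python
import math

def building_height(top_xy, base_xy, flying_height):
    """Height from relief displacement: h = d * H / r (vertical photo, nadir at origin).

    top_xy, base_xy : photo coordinates (same units) of the top and base of a
                      lateral building edge, reduced to the principal point
    flying_height   : flying height H above the ground, in metres
    """
    r_top = math.hypot(*top_xy)                 # radial distance to the edge top
    r_base = math.hypot(*base_xy)
    d = r_top - r_base                          # radial displacement of the edge
    return d * flying_height / r_top

if __name__ == "__main__":
    # Illustrative numbers only: about 1 mm of displacement at r = 60 mm, H = 1200 m.
    print(building_height((42.0, 43.0), (41.3, 42.3), 1200.0))
```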
Abstract:
This article proposes a method for 3D road extraction from a stereopair of aerial images. The dynamic programming (DP) algorithm is used to carry out the optimization process in object-space, instead of in image-space as in traditional DP methodologies. This means that road centerlines are traced directly in object-space, implying that a mathematical relationship is necessary to connect road points in object- and image-space. This allows the integration of radiometric information from the images into the associated mathematical road model. As the approach depends on an initial approximation of each road, a few seed points are needed to coarsely describe the road. The proposed method usually yields good results, but large anomalies along the road can disturb its performance. Therefore, the method can be used in practical applications, although some local manual editing of the extracted road centerline is to be expected.
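The optimization core can be illustrated with a generic dynamic-programming (Viterbi-style) search: given candidate road points at successive stations along the seed polyline, it selects the sequence minimizing a point cost plus a smoothness penalty. The cost terms, candidates and toy road below are placeholders; the object-space road model and the photogrammetric back-projection of the actual method are not reproduced.

```python
import numpy as np

def dp_polyline(candidates, point_cost, smooth_weight=1.0):
    """Dynamic programming over stages of candidate points.

    candidates : list of (k_i, 2) arrays, candidate positions at each station
    point_cost : function mapping a position to a scalar "how road-like" cost
    Returns the selected positions, one per station.
    """
    costs = [np.array([point_cost(p) for p in candidates[0]])]
    back = []
    for stage in candidates[1:]:
        prev_pts = candidates[len(back)]
        stage_cost = np.empty(len(stage))
        stage_back = np.empty(len(stage), dtype=int)
        for j, p in enumerate(stage):
            trans = costs[-1] + smooth_weight * np.linalg.norm(prev_pts - p, axis=1)
            stage_back[j] = np.argmin(trans)
            stage_cost[j] = trans[stage_back[j]] + point_cost(p)
        costs.append(stage_cost)
        back.append(stage_back)
    # Trace the optimal path backwards.
    idx = int(np.argmin(costs[-1]))
    path = [candidates[-1][idx]]
    for stage, stage_back in zip(reversed(candidates[:-1]), reversed(back)):
        idx = int(stage_back[idx])
        path.append(stage[idx])
    return path[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_road = lambda x: 0.1 * x                      # hypothetical road centerline
    stations = [np.column_stack([np.full(5, float(x)), true_road(x) + rng.normal(0, 1, 5)])
                for x in range(0, 50, 10)]
    cost = lambda p: abs(p[1] - true_road(p[0]))       # placeholder radiometric cost
    print(dp_polyline(stations, cost))
```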
Abstract:
In this paper, a fully automatic strategy is proposed to reduce the complexity of the patterns (vegetation, buildings, soils etc.) that interact with the object 'road' in color images, thereby reducing the difficulty of automatically extracting this object. The proposed methodology consists of three sequential steps. In the first step, a pointwise operator is applied to compute an artificiality index known as NandA (Natural and Artificial). The result is an image whose intensity attribute is the NandA response. The second step consists in automatically thresholding the image obtained in the previous step, resulting in a binary image. This image usually allows the separation of artificial from natural objects. The third step consists in applying a pre-existing road seed extraction methodology to the previously generated binary image. Several experiments carried out with real images made it possible to verify the potential of the proposed methodology. A comparison of the results with others obtained by a similar road seed extraction methodology applied to gray-level images showed that the main benefit was a drastic reduction of the computational effort.
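The first two steps of the pipeline can be sketched schematically. Since the exact NandA formulation is not reproduced in the abstract, the first step below uses a clearly labeled placeholder measure (gray-ish pixels score as "artificial"), and the automatic thresholding uses Otsu's method, which is one common choice and not necessarily the one used in the paper; the road seed extraction of the third step is omitted.

```python
import numpy as np

def artificiality_placeholder(rgb):
    """Stand-in pointwise measure (NOT the NandA operator, whose exact form is not
    given here): gray-ish pixels score high, strongly coloured pixels score low."""
    spread = rgb.max(axis=2).astype(float) - rgb.min(axis=2).astype(float)
    return 1.0 - spread / 255.0

def otsu_threshold(values, bins=256):
    """Classical Otsu threshold for a 1-D sample of values in [0, 1]."""
    hist, edges = np.histogram(values, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = 0.0, -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0
        m1 = (p[k:] * centers[k:]).sum() / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, edges[k]
    return best_t

if __name__ == "__main__":
    rgb = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)   # stand-in image
    index_img = artificiality_placeholder(rgb)                     # step 1 (placeholder)
    binary = index_img >= otsu_threshold(index_img.ravel())        # step 2 (automatic threshold)
    print(binary.mean())                                           # fraction flagged "artificial"
```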
Abstract:
A new technique for the automatic search of order parameters and critical properties is applied to several well-known physical systems, testing the efficiency of the procedure so that it can be applied to complex systems in general. The automatic-search method is combined with Monte Carlo simulations, which make use of a given dynamical rule for the time evolution of the system. In the problems investigated, the Metropolis and Glauber dynamics produced essentially equivalent results. We present a brief introduction to critical phenomena and phase transitions. We describe the automatic-search method and discuss some previous works where the method has been applied successfully. We apply the method to the ferromagnetic Ising model, computing the critical frontiers and the magnetization exponent β for several geometric lattices. We also apply the method to the site-diluted ferromagnetic Ising model on a square lattice, computing its critical frontier as well as the magnetization exponent β and the susceptibility exponent γ. We verify that the universality class of the system remains unchanged when site dilution is introduced. We study the problem of long-range bond percolation in a diluted linear chain and discuss the non-extensivity questions inherent to long-range-interaction systems. Finally, we present our conclusions and possible extensions of this work.
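The Monte Carlo machinery referred to above can be illustrated with a minimal Metropolis simulation of the two-dimensional ferromagnetic Ising model on a square lattice; the automatic-search procedure for locating the critical frontier itself is not reproduced, and the lattice size, temperature and sweep counts are arbitrary assumptions.

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep over an L x L Ising lattice with periodic boundaries (J = 1)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb               # energy change if spin (i, j) flips
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    L, T = 32, 2.0                                # T below T_c ~ 2.269, ordered phase
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(2000):
        metropolis_sweep(spins, 1.0 / T, rng)
    print(abs(spins.mean()))                      # magnetization per spin, close to 1 at T = 2.0
```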
Abstract:
The terminological performance of the descriptors representing the Information Science domain in the SIBi/USP Controlled Vocabulary was evaluated in manual, automatic and semi-automatic indexing processes. It can be concluded that, in order to achieve better performance (i.e., to adequately represent the content of the corpus), the current Information Science descriptors of the SIBi/USP Controlled Vocabulary must be extended and put into context by means of terminological definitions so that users' information needs are fulfilled.
Abstract:
This work presents an extension of the haRVey prover aimed at the verification of proof obligations generated according to the B method. The B method for software development covers the specification, design and implementation phases of the software life cycle. In the verification context, the proof tools Prioni, Z/EVES and Atelier-B/Click n Prove stand out. They handle formalisms that support checking the satisfiability of formulas of axiomatic set theory, that is, they can be applied to the B method. SMT checking consists of checking the satisfiability of quantifier-free first-order formulas with respect to a decidable theory. The SMT checking approach implemented by the automatic theorem prover haRVey is presented; it adopts a theory of arrays that does not allow all the constructions needed by set-based specifications to be expressed. Thus, to extend SMT checking to set theories, the Zermelo-Fraenkel (ZFC) and von Neumann-Bernays-Gödel (NBG) set theories stand out. Given that the SMT checking approach implemented in haRVey requires a finite theory and can be extended to non-decidable theories, the NBG theory is a suitable option for expanding haRVey's deductive capability to set theory. Thus, by mapping the set operators provided by the B language to classes of the NBG theory, an alternative approach for SMT checking applied to the B method is obtained.
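SMT checking as described above (satisfiability of quantifier-free first-order formulas modulo a decidable theory) can be illustrated with a small example. haRVey itself is not used here; as a clearly labeled stand-in, the sketch below uses the Z3 solver's Python API (assuming the z3-solver package is installed) to discharge an invented, B-flavoured proof obligation over integers. The formula and names are illustrative only, not obligations generated by the B method.

```python
# Toy quantifier-free satisfiability check, in the spirit of SMT-based discharge of
# proof obligations. Z3 is a stand-in solver here (not haRVey), and the "obligation"
# below is an invented example over linear integer arithmetic.
from z3 import Ints, Solver, And, Not, Implies, unsat

x, y = Ints("x y")
invariant = And(x >= 0, y >= 0, x + y <= 10)        # hypothetical machine invariant
guard = x + y < 10                                  # guard of a hypothetical operation x := x + 1
obligation = Implies(And(invariant, guard), And(x + 1 >= 0, (x + 1) + y <= 10))

s = Solver()
s.add(Not(obligation))                              # obligation valid <=> its negation is unsatisfiable
print("obligation valid:", s.check() == unsat)
```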
Abstract:
Some programs may have their input data specified by formalized context-free grammars. This formalization facilitates the use of tools to systematize and raise the quality of their testing process. Among this category of programs, compilers were the first to use this kind of tool to automate their tests. In this work we present an approach for defining tests from the formal description of a program's input. Sentence generation is performed taking into account the syntactic aspects defined by the input specification, i.e., the grammar. For optimization, coverage criteria are used to limit the number of tests without diminishing their quality. Our approach uses these criteria to drive generation so as to produce sentences that satisfy a specific coverage criterion. The approach is based on the Lua language, relying heavily on its coroutines and dynamic construction of functions. With these resources, we propose a simple and compact implementation that can be optimized and controlled in different ways in order to satisfy the different implemented coverage criteria. To make the tool simpler to use, the EBNF notation was adopted for the specification of the input; its parser was specified in the Meta-Environment tool for rapid prototyping.
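The generation idea can be sketched compactly. The original work relies on Lua coroutines; the sketch below uses plain Python recursion instead, over a toy grammar, and generates sentences until every production has been exercised at least once (a simple production-coverage criterion). The grammar, the depth bound and the coverage criterion chosen are illustrative assumptions, not the tool's actual implementation.

```python
import random

# Toy context-free grammar: nonterminals map to lists of alternative right-hand
# sides; symbols not in GRAMMAR are terminals.
GRAMMAR = {
    "expr": [["term", "+", "expr"], ["term"]],
    "term": [["factor", "*", "term"], ["factor"]],
    "factor": [["(", "expr", ")"], ["id"], ["num"]],
}

def derive(symbol, covered, depth=0, max_depth=6):
    """Expand one symbol, recording which (nonterminal, alternative) pairs were used."""
    if symbol not in GRAMMAR:
        return [symbol]                                    # terminal
    alts = GRAMMAR[symbol]
    # Prefer an alternative not yet covered; near the depth limit, fall back to short ones.
    uncovered = [i for i in range(len(alts)) if (symbol, i) not in covered]
    pool = uncovered or range(len(alts))
    i = min(pool, key=lambda k: len(alts[k])) if depth >= max_depth else random.choice(list(pool))
    covered.add((symbol, i))
    out = []
    for s in alts[i]:
        out += derive(s, covered, depth + 1, max_depth)
    return out

def generate_until_covered(start="expr", limit=100):
    """Generate sentences until every production of the grammar has been used once."""
    covered, sentences = set(), []
    total = sum(len(a) for a in GRAMMAR.values())
    while len(covered) < total and len(sentences) < limit:
        sentences.append(" ".join(derive(start, covered)))
    return sentences

if __name__ == "__main__":
    random.seed(0)
    for s in generate_until_covered():
        print(s)
```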
Abstract:
Removing inconsistencies from a project is less costly when done in the early stages of its conception. The use of formal methods improves the understanding of systems and offers several techniques, such as formal specification and verification, to identify these inconsistencies in the early stages of a project. However, transforming a formal specification into a programming language is a non-trivial task; when done manually, it is prone to the introduction of errors. The use of tools that assist this step can bring great benefits to the final product to be developed. This work proposes the extension of a tool whose focus is the automatic translation of CSPm specifications into Handel-C. CSP is a formal description language suitable for dealing with concurrent systems. Handel-C is a programming language whose output can be compiled directly to FPGAs. The extension consists in increasing the number of CSPm operators accepted by the tool, allowing the user to define local processes, rename channels and use Boolean guards on external choices. In addition, we also propose the implementation of a communication protocol that removes some restrictions on the parallel composition of processes in the translation to Handel-C, allowing communication among multiple processes to be mapped consistently and to occur only when authorized.
Abstract:
Removing inconsistencies in a project is a less expensive activity when done in the early steps of design. The use of formal methods improves the understanding of systems, and they provide various techniques, such as formal specification and verification, to identify these problems in the initial stages of a project. However, the transformation from a formal specification into a programming language is a non-trivial and error-prone task, especially when done manually. The aid of tools at this stage can bring great benefits to the final product to be developed. This paper proposes the extension of a tool whose focus is the automatic translation of specifications written in CSPm into Handel-C. CSP is a formal description language suitable for concurrent systems, and CSPm is the notation used by its supporting tools. Handel-C is a programming language whose result can be compiled directly into FPGAs. Our extension increases the number of CSPm operators accepted by the tool, allowing the user to define local processes, to rename channels in a process and to use Boolean guards on external choices. In addition, we also propose the implementation of a communication protocol that eliminates some restrictions on the parallel composition of processes in the translation into Handel-C, allowing communication on the same channel among multiple processes to be mapped in a consistent manner and ensuring that improper communication in a channel, i.e., communication not allowed by the system specification, does not occur in the generated code.
Abstract:
Typically, Web services contain only syntactic information that describes their interfaces. Because of this lack of semantic descriptions, service composition becomes a difficult task. To address this problem, Web services can exploit ontologies for the semantic definition of the service's interface, thus facilitating the automation of discovery, publication, mediation, invocation, and composition of services. However, ontology languages such as OWL-S have constructs that are not easy to understand, even for Web developers, and the existing tools that support their use contain many details that make them difficult to manipulate. This paper presents an MDD tool called AutoWebS (Automatic Generation of Semantic Web Services) for developing OWL-S semantic Web services. AutoWebS uses an approach based on UML profiles and model transformations for the automatic generation of Web services and their semantic descriptions. AutoWebS offers an environment that provides many of the features required to model, implement, compile, and deploy semantic Web services.