Abstract:
This work has two objectives: to evaluate the usability of three distance-education virtual environment interfaces through two evaluation techniques, and to identify the factors that influence the perceived usability of the evaluated environments. The distance-education systems chosen were AulaNet, E-Proinfo and Teleduc, because they were developed in Brazil and are freely distributed. The usability evaluation was carried out with two techniques documented in the literature. The first, a predictive (diagnostic) evaluation, was performed by the author and a senior student of the Information Systems course of the Federal Center of Technological Education of the state of Piauí (CEFET-PI), using a checklist called Ergolist. The second, a prospective evaluation, had the users themselves act as evaluators of the interfaces by answering a questionnaire. The sample comprised 15 teachers and 15 students from CEFET-PI. The collected results were analyzed with descriptive statistics and chi-square tests. The results showed that the environments present adaptability problems, since they are not flexible and do not take the user's experience into account. In the inferential analysis, it was found that time of Internet use did not significantly affect the users' usability evaluation of the three environments, and most usability variables were not influenced by user type, gender or education level. On the other hand, for several of the evaluated ergonomic criteria, the system variables (environment type and computer experience) and the demographic variable age group affected the perceived usability of the distance-education virtual environments
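A minimal sketch of the inferential step mentioned above: a chi-square test of independence on a hypothetical contingency table (user type versus rating of an ergonomic criterion). The counts and variable names are illustrative, not data from the study.

```python
# Chi-square test of independence, as used in the inferential analysis above.
# The contingency table below is hypothetical.
from scipy.stats import chi2_contingency

# Rows: user type (teachers, students); columns: rating of one ergonomic
# criterion (satisfied, neutral, unsatisfied). Counts are illustrative only.
table = [
    [9, 4, 2],   # teachers
    [7, 5, 3],   # students
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}, dof = {dof}")
# A p-value above the usual 0.05 threshold would indicate no significant
# association between user type and the usability rating.
```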
Abstract:
The progress of the Internet and of telecommunications has been changing the concepts of Information Technology (IT), especially with regard to outsourced services, through which organizations seek cost cutting and a sharper focus on their core business. Along with the development of such outsourcing, a new model named Cloud Computing (CC) has evolved, proposing to migrate both data processing and information storage to the Internet. Among the key points of Cloud Computing are cost cutting, benefits, risks and changes to IT paradigms. Nonetheless, the adoption of this model creates difficulties for decision-making by IT managers, mainly regarding which solutions may go to the cloud and which service providers are more appropriate to the organization's reality. The overall aim of this research is to apply the AHP (Analytic Hierarchy Process) method to decision-making in Cloud Computing. To that end, the methodology was exploratory, with a case study applied to a nationwide organization (the Federation of Industries of RN). Data collection was performed through two structured questionnaires answered electronically by IT technicians and by the company's Board of Directors. The data were analyzed in a qualitative and comparative way, using the AHP software Web-Hipre. The results confirmed the importance of applying the AHP method to decision-making on the adoption of Cloud Computing, mainly because, at the time the research was carried out, the company already showed interest in and need for adopting CC, given the internal problems with infrastructure and availability of information it currently faces. The organization sought to adopt CC but had doubts regarding the cloud model and which service provider would better meet its real needs. The application of AHP therefore worked as a guiding tool for choosing the best alternative, pointing to the Hybrid Cloud as the ideal choice for starting with Cloud Computing, under the following allocation: the Infrastructure as a Service (IaaS) layer (processing and storage) should stay partly on the Public Cloud and partly on the Private Cloud; the Platform as a Service (PaaS) layer (software development and testing) favoured the Private Cloud; and the Software as a Service (SaaS) layer was split, with e-mail going to the Public Cloud and applications to the Private Cloud. The research also identified the important factors for hiring a Cloud Computing provider
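To make the AHP step concrete, the sketch below derives priority weights from a pairwise comparison matrix by the eigenvector method and computes the consistency ratio. The criteria and judgments are hypothetical, not those elicited at the studied organization, and Web-Hipre itself is not used.

```python
# Minimal AHP sketch: priorities from a pairwise comparison matrix (eigenvector
# method) plus the consistency ratio. Criteria and judgments are hypothetical.
import numpy as np

criteria = ["cost", "security", "availability"]
# A[i, j] = how strongly criterion i is preferred over criterion j (Saaty scale).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # normalized priority vector

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)        # consistency index
ri = 0.58                                   # Saaty's random index for n = 3
print(dict(zip(criteria, weights.round(3))), "CR =", round(float(ci / ri), 3))
```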
Abstract:
In today's global society, companies have become even more competitive given the abundance of information available to prospective clients. In this context, tourism services have adopted electronic channels both to distribute information and to sell services. This work investigates whether the adoption of online tourism services can be explained by a set of factors influencing purchase decisions. The method used was a survey applied to tourists at the Augusto Severo international airport, regardless of the reason for their visit to the city of Natal, totalling 210 respondents. The main variables evaluated were the tourists' perception of ease of use, usefulness, benefits, comfort and enjoyment of Internet use, in light of their previous Web experience with purchases and searches. The results show that younger or less experienced tourists visiting Natal displayed a greater inclination to use the Internet than the others; women were more strongly represented among Web consumers; and, finally, people with higher education levels tend to adopt online services. The work contributes by giving tourism executives a better understanding of how the Internet can be used, and by presenting a scenario favourable to the diffusion of online services
Abstract:
This work presents an interval approach to deal with images that contain uncertainties, as well as to treat these uncertainties through morphological operations. Two interval models are presented. For the first, an algebraic space with three values is introduced, built upon the three-valued logic of Łukasiewicz. With this algebraic structure, the theory of interval binary images is introduced, extending the classical binary model with the inclusion of uncertainty information. It can be applied to represent binary images with uncertainty in pixels, originated, for example, during the image acquisition process. The lattice structure of these images allows the definition of morphological operators in which the uncertainties are treated locally. The second model extends the classical model to gray-level images, in which the functions representing the images map into a finite set of interval values. The algebraic structure belongs to the class of complete lattices, which also allows the definition of the elementary operators of mathematical morphology, dilation and erosion, for these images. Thus, an interval theory applied to mathematical morphology is established to deal with uncertainty problems in images
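A minimal sketch of the first model's idea, assuming an interval binary image is stored as two arrays (a lower and an upper membership value per pixel): dilation applied endpoint-wise propagates the local uncertainty, the lower result giving the guaranteed foreground and the upper result the possible foreground. The representation and names are illustrative, not the thesis's formalism.

```python
# Sketch of interval dilation on an "interval binary image": each pixel carries
# an interval [lo, hi] with lo, hi in {0, 1}; lo == hi means a certain pixel and
# lo < hi an uncertain one. Dilation is applied to each endpoint separately,
# which propagates the uncertainty locally. Representation is illustrative.
import numpy as np
from scipy.ndimage import binary_dilation

lo = np.array([[0, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
hi = np.array([[0, 0, 0, 0],
               [0, 1, 1, 0],      # pixel (1, 2) is uncertain: [0, 1]
               [0, 0, 0, 0],
               [0, 0, 0, 0]])

se = np.ones((3, 3), dtype=bool)  # 3x3 structuring element
dil_lo = binary_dilation(lo, structure=se).astype(int)  # guaranteed foreground
dil_hi = binary_dilation(hi, structure=se).astype(int)  # possible foreground
print(dil_lo)
print(dil_hi)
```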
Abstract:
This work deals with a mathematical foundation for digital signal processing from the point of view of interval mathematics. It intends to treat the open problem of precision and representation of data in digital systems with an interval version of signal representation. Signal processing is a rich and complex area, so this work focuses on linear time-invariant systems. A vast literature exists in the area, but some concepts of interval mathematics need to be redefined or elaborated for the construction of a solid theory of interval signal processing. We build the basic foundations of signal processing in the interval setting, such as the basic properties of linearity, stability and causality, and an interval version of linear systems and their properties. Interval versions of the convolution and of the Z-transform are presented. Convergence analyses of systems are carried out using the interval Z-transform, an essentially interval distance, interval complex numbers, and an application to an interval filter.
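As a hedged sketch of one building block named above, the code below implements a discrete convolution in which every sample is an interval [lo, hi], using standard interval addition and multiplication; the signal and filter values are illustrative.

```python
# Sketch: discrete convolution of interval-valued signals. Each sample is an
# interval (lo, hi); interval multiplication takes the min/max of the four
# endpoint products and interval addition is endpoint-wise. Data is illustrative.

def imul(a, b):
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def iconv(x, h):
    """Interval convolution y[n] = sum_k x[k] * h[n - k]."""
    y = [(0.0, 0.0)] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        acc = (0.0, 0.0)
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                acc = iadd(acc, imul(x[k], h[n - k]))
        y[n] = acc
    return y

# x is a signal whose samples carry measurement uncertainty; h is an FIR filter.
x = [(0.9, 1.1), (1.9, 2.1), (-0.1, 0.1)]
h = [(0.5, 0.5), (0.5, 0.5)]
print(iconv(x, h))
```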
Abstract:
With the rapid growth of databases of various types (text, multimedia, etc.), there is a need for methods to order, access and retrieve data in a simple and fast way. Image databases, in addition to these needs, require a representation of the images in which the characteristics of the semantic content are considered. Accordingly, several proposals have been made, such as retrieval based on textual annotations. In the annotation approach, retrieval is based on the comparison between the textual description a user gives of an image and the descriptions of the images stored in the database. Among its drawbacks, the textual description is highly dependent on the observer, in addition to the effort required to describe every image in the database. Another approach is content-based image retrieval (CBIR), where each image is represented by low-level features such as color, shape and texture. Results in the CBIR area have been very promising; however, representing the semantics of images by low-level features remains an open problem. New feature extraction algorithms as well as new indexing methods have been proposed in the literature, but these algorithms become increasingly complex. It is therefore natural to ask whether there is a relationship between the semantics and the low-level features extracted from an image; if so, which descriptors better represent the semantics; and, consequently, how descriptors should be used to represent the content of the images. The work presented in this thesis proposes a method to analyze the relationship between low-level descriptors and semantics in an attempt to answer these questions. It was also observed that there are three possibilities for indexing images: using composite feature vectors, using parallel and independent index structures (one for each descriptor or set of descriptors), and using feature vectors sorted in a sequential order. The first two forms have been widely studied and applied in the literature, but there was no record of the third having been explored. This thesis therefore also proposes indexing with a sequential structure of descriptors, where the order of these descriptors is based on the relationship between each descriptor and the users' semantics. Finally, the index proposed in this thesis proved better than the traditional approaches, and it was shown experimentally that the order of the sequence matters and that there is a direct relationship between this order and the relationship of the low-level descriptors with the users' semantics
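A toy illustration of the descriptor-based retrieval discussed above: images are reduced to a gray-level histogram, a query is ranked by descriptor distance, and a second descriptor re-ranks the shortlist, loosely mimicking the sequential, ordered use of descriptors proposed in the thesis. Images, descriptor choices and shortlist size are hypothetical.

```python
# Toy CBIR sketch: rank images by a first low-level descriptor (gray histogram)
# and then re-rank the shortlist with a second one (mean intensity), mimicking
# in spirit the sequential, ordered use of descriptors discussed above.
# Images, descriptors and thresholds are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
database = [rng.integers(0, 256, size=(32, 32)) for _ in range(20)]
query = database[7] + rng.integers(-5, 6, size=(32, 32))   # noisy copy of image 7

def histogram(img, bins=16):
    h, _ = np.histogram(img, bins=bins, range=(0, 256), density=True)
    return h

def mean_intensity(img):
    return np.array([img.mean()])

# Stage 1: shortlist by histogram distance.
d1 = [np.linalg.norm(histogram(query) - histogram(img)) for img in database]
shortlist = np.argsort(d1)[:5]

# Stage 2: re-rank the shortlist by the second descriptor.
d2 = [abs(mean_intensity(query) - mean_intensity(database[i]))[0] for i in shortlist]
ranking = [int(shortlist[j]) for j in np.argsort(d2)]
print("final ranking:", ranking)
```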
Abstract:
In this work we use interval mathematics to establish interval counterparts for the main tools used in digital signal processing. More specifically, the approach developed here is oriented to signals, systems, sampling, quantization, coding and Fourier transforms. A detailed study of some interval arithmetics that handle complex numbers is provided, namely: rectangular complex interval arithmetic, circular complex arithmetic, and interval arithmetic for polar sectors. This leads us to investigate properties that are relevant for the development of a theory of interval digital signal processing. It is shown that the sets IR and R(C), endowed with any correct arithmetic, are not algebraic fields, meaning that those sets do not behave like the real and complex numbers. An alternative to the notion of interval complex width is also provided, and the Kulisch-Miranker order is used to write complex numbers in interval form, enabling operations on endpoints. The use of interval signals and systems is made possible by the representation of real values in floating-point systems: if a number x ∈ R is not representable in a floating-point system F, it is mapped to an interval [x1; x2] such that x1 is the largest number in F which is smaller than x and x2 is the smallest number in F which is greater than x. This interval representation is the starting point for definitions such as interval signals and systems taking real or complex values. It provides the extension of notions such as causality, stability, time invariance, homogeneity, additivity and linearity to interval systems. The quantization process is extended to its interval counterpart, and interval versions of quantization levels, quantization error and encoded signal are then provided. It is shown that the interval quantization levels represent complex quantization levels and that the classical quantization error ranges over the interval quantization error. An estimate of the interval quantization error and an interval version of the Z-transform (and hence of the Fourier transform) are provided. Finally, the results of a Matlab implementation are given
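The floating-point enclosure described above can be sketched as follows; math.nextafter stands in for the directed roundings, giving a slightly wider (two-ulp) interval than the tightest enclosure, and the example value is arbitrary.

```python
# Sketch of the interval representation described above: a real value x that is
# not exactly representable is enclosed by an interval whose endpoints are
# neighbouring floating-point numbers. Stepping one ulp in each direction gives
# a slightly wider (two-ulp) interval than the tightest enclosure, but it is
# guaranteed to contain x.
import math

def enclose(x: float) -> tuple[float, float]:
    lo = math.nextafter(x, -math.inf)
    hi = math.nextafter(x, math.inf)
    return lo, hi

lo, hi = enclose(0.1)              # 0.1 is not exactly representable in binary
print(lo <= 0.1 <= hi, hi - lo)    # True, and the width is two ulps
```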
Abstract:
Support Vector Machines (SVM) have attracted increasing attention in the machine learning area, particularly for classification and pattern recognition. However, in some cases it is not easy to determine accurately the class to which a given pattern belongs. This thesis involves the construction of an interval pattern classifier using SVM in association with interval theory, in order to model with precision the separation of a pattern set into distinct classes, aiming at an optimized separation capable of handling the imprecision contained in the initial data and generated during the computational processing. The SVM is a linear machine; to allow it to solve real-world problems (usually nonlinear), it is necessary to transform the pattern set, known as the input set, from a nonlinear to a linear problem, and kernel machines are responsible for this mapping. To create the interval extension of SVM, for both linear and nonlinear problems, it was necessary to define an interval kernel and to extend Mercer's theorem (which characterizes a kernel function) to interval functions
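As a small, classical (non-interval) illustration of the kernel machinery referred to above, the sketch below builds the Gram matrix of an RBF kernel on random data and checks numerically that it is positive semidefinite, which is the finite-sample face of Mercer's condition; the interval extension itself is not reproduced here.

```python
# Classical (non-interval) sketch of the kernel step referred to above: build
# the Gram matrix of an RBF kernel and verify numerically that it is positive
# semidefinite (Mercer's condition on a finite sample). Random data; the
# interval extension of the kernel is not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))               # 30 patterns, 4 features

def rbf_kernel(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

n = len(X)
K = np.array([[rbf_kernel(X[i], X[j]) for j in range(n)] for i in range(n)])

eigvals = np.linalg.eigvalsh(K)            # K is symmetric
print("smallest eigenvalue:", eigvals.min())   # non-negative up to round-off
```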
Abstract:
In this work, a parallel cooperative genetic algorithm with different evolution behaviors was developed to train and to define architectures for Multilayer Perceptron neural networks. Multilayer Perceptron neural networks are very powerful tools whose use has been vastly extended due to their ability to provide great results for a broad range of applications. The combination of genetic algorithms and parallel processing can be very powerful when applied to the learning process of the neural network, as well as to the definition of its architecture, since this procedure can be very slow, usually requiring a lot of computational time. Also, research combining and applying evolutionary computation to the design of neural networks is very useful, since most learning algorithms developed to train neural networks only adjust their synaptic weights, without considering the design of the network's architecture. Furthermore, the use of cooperation in the genetic algorithm allows the interaction of different populations, avoiding local minima and helping the search for a promising solution, accelerating the evolutionary process. Finally, individuals and evolution behavior can be exclusive to each copy of the genetic algorithm running in each task, enhancing the diversity of the populations
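A compact sketch of the core idea, a genetic algorithm adjusting the weights of a small fixed-architecture MLP, without the parallelism, the cooperating populations or the architecture evolution described above; the task (XOR), the encoding and the hyperparameters are illustrative.

```python
# Minimal sketch of a GA adjusting the weights of a tiny fixed-architecture MLP
# (2-2-1, solving XOR). Parallelism, cooperating populations and architecture
# evolution are omitted; hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
N_WEIGHTS = 2 * 2 + 2 + 2 * 1 + 1          # weights + biases of a 2-2-1 MLP

def forward(w, x):
    W1 = w[:4].reshape(2, 2); b1 = w[4:6]
    W2 = w[6:8].reshape(2, 1); b2 = w[8]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2).ravel() - b2))

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)   # negative MSE (higher is better)

pop = rng.normal(scale=1.0, size=(60, N_WEIGHTS))
for gen in range(300):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]              # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(20, size=2)]
        cut = rng.integers(1, N_WEIGHTS)                  # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child += rng.normal(scale=0.1, size=N_WEIGHTS)    # Gaussian mutation
        children.append(child)
    pop = np.array(children)

best = max(pop, key=fitness)
print(np.round(forward(best, X)))     # ideally approximates [0, 1, 1, 0]
```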
Abstract:
The use of natural gas is growing year after year throughout the world and also in Brazil. In the last five years the profile of natural gas consumption has advanced greatly and investments have been made in this area. In the oil industry, natural gas has long been used as fuel to drive engines, to power equipment, or to generate electric power. Such engines are based on the Otto combustion cycle, which requires natural gas with a well-defined specification, conferring the anti-knock characteristics necessary for the performance of equipment designed for this cycle. In this work, process routes and thermodynamic conditions were selected and evaluated. Based on simulation runs carried out in commercial simulators, the methane index of the effluent gas was evaluated over various ranges of pressure, temperature, flow rate, molecular weight, and chemical nature and composition of the absorbent. As a final result, a route was established based on process efficiency and optimized consumption of energy and absorbent. It thus serves as the basis for the conception of a compact piece of equipment to be used in loco in the industry for the removal of hydrocarbons from the produced natural gas
Abstract:
The composition of petroleum may change from well to well, and the resulting characteristics significantly influence the refined products. Therefore, it is important to characterize the oil in order to know its properties and send it for adequate processing. Since petroleum is a multicomponent mixture, the use of synthetic mixtures representative of oil fractions provides a better understanding of the behavior of the real mixture. Characterization is usually obtained through correlations of easily measured physico-chemical properties, such as density, specific gravity, viscosity, and refractive index. In this work, new measurements were obtained for the density, specific gravity, viscosity, and refractive index of the following binary mixtures: n-heptane + hexadecane, cyclohexane + hexadecane, and benzene + hexadecane. These measurements were performed at low pressure and temperatures in the range 288.15 K to 310.95 K, and the data were applied in the development of a new oil characterization method. Furthermore, a series of density measurements at high pressure and temperature were performed for the binary mixture cyclohexane + n-hexadecane, over pressures from 6.895 to 62.053 MPa and temperatures from 318.15 to 413.15 K. Based on these experimental data for compressed liquid mixtures, a thermodynamic model was proposed using the Peng-Robinson equation of state (EOS). The EOS was modified with a volume scaling, employing a relatively small number of parameters. The results were satisfactory, demonstrating accuracy not only for density but also for the isobaric thermal expansion and isothermal compressibility coefficients. This thesis aims to contribute scientifically to the technological problem of refining heavy oil fractions. The problem was treated in two steps, i.e., characterization and the search for processes that can produce streams of economic interest, such as solvent extraction at high pressure and temperature. In order to determine phase equilibrium data under these conditions, conceptual designs of two new experimental apparatuses were developed, consisting of variable-volume cells together with a static analytical device. This thesis thus contributes to the characterization of hydrocarbon mixtures and to the development of equilibrium cells operating at high pressure and temperature, focused on the technological problem of refining heavy oil fractions
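A brief sketch of the modelling ingredient mentioned above: the Peng-Robinson EOS solved for the liquid molar volume of a pure fluid, with a simple constant volume shift standing in for the volume-scaling modification of the thesis. The critical constants are approximate literature values for cyclohexane, and the (T, P) point is one of the conditions listed above.

```python
# Sketch: Peng-Robinson EOS solved for the liquid molar volume of a pure fluid,
# with a constant volume shift standing in for the volume-scaling modification
# described above. Critical constants are approximate values for cyclohexane.
import numpy as np

R = 8.314                                # J/(mol K)
Tc, Pc, omega = 553.6, 4.073e6, 0.212    # cyclohexane (approximate)

def pr_liquid_volume(T, P, c=0.0):
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1 + kappa * (1 - np.sqrt(T / Tc)))**2
    A = a * alpha * P / (R * T)**2
    B = b * P / (R * T)
    # Cubic in the compressibility factor Z.
    coeffs = [1.0, -(1.0 - B), A - 3*B**2 - 2*B, -(A*B - B**2 - B**3)]
    roots = np.roots(coeffs)
    z_liq = min(r.real for r in roots if abs(r.imag) < 1e-8 and r.real > 0)
    return z_liq * R * T / P - c          # volume shift c in m^3/mol

v = pr_liquid_volume(318.15, 6.895e6)     # one of the (T, P) points above
print(f"molar volume ~ {v * 1e6:.1f} cm^3/mol")
```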
Abstract:
In the present work, the fundamental thermodynamic relationships governing phase equilibrium are first established, together with the models used to describe the non-ideal behavior of the liquid and vapor phases at low pressures. The work seeks the determination of vapor-liquid equilibrium (VLE) data for a series of multicomponent mixtures of saturated aliphatic hydrocarbons, prepared synthetically from analytical-grade substances, and the development of a new dynamic cell with circulation of the vapor phase. The apparatus and experimental procedures developed are described and applied to the determination of VLE data. Isobaric VLE data were obtained with a Fischer ebulliometer with circulation of both phases for the systems pentane + dodecane, heptane + dodecane and decane + dodecane. Using the two new dynamic cells, especially designed, of easy operation and low cost, with circulation of the vapor phase, data were measured for the systems heptane + decane + dodecane, acetone + water, Tween 20 + dodecane and phenol + water, along with distillation curves of a gasoline without additives. The compositions of the equilibrium phases were determined by densimetry, chromatography, and total organic carbon analysis. Calibration curves of density versus composition were prepared from synthetic mixtures and the excess volume behavior was evaluated. The VLE data obtained experimentally for the hydrocarbon and aqueous systems were submitted to thermodynamic consistency tests, as were literature data for other binary systems, mainly from the DDB (Dortmund Data Bank), using the Gibbs-Duhem equation to obtain a satisfactory database. The results of the thermodynamic consistency tests for the binary and ternary systems were evaluated in terms of deviations for applications such as model development. Later, these tested and approved data sets were used in the KijPoly program to determine the binary kij parameters of the original Peng-Robinson cubic equation of state and of its version with the expanded alpha function. The parameters obtained can be applied, through simulators, to the simulation of petroleum reservoir conditions and of the several distillation processes found in the petrochemical industry. The two dynamic cells designed for the determination of VLE data were built with national technology and proved successful, demonstrating efficiency and low cost. Multicomponent systems, mixtures of components of different molecular weights and also dilute solutions may be studied in these VLE cells
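A minimal numerical sketch of the area (Redlich-Kister) form of the consistency test mentioned above: for consistent isothermal data, the integral of ln(γ1/γ2) over x1 vanishes. The activity coefficients below are generated from a two-suffix Margules model purely to illustrate the computation; they are not measured data.

```python
# Sketch of the area (Redlich-Kister) consistency test: for a thermodynamically
# consistent isothermal data set, the integral of ln(gamma1/gamma2) over x1
# vanishes. The "data" below come from a two-suffix Margules model and serve
# only to illustrate the computation.
import numpy as np

A12 = 0.9                                   # Margules parameter (illustrative)
x1 = np.linspace(0.01, 0.99, 50)
ln_g1 = A12 * (1 - x1) ** 2                 # ln(gamma1)
ln_g2 = A12 * x1 ** 2                       # ln(gamma2)

integrand = ln_g1 - ln_g2                   # ln(gamma1 / gamma2)
dx = np.diff(x1)
area = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dx)          # net area
span = np.sum(0.5 * (np.abs(integrand)[1:] + np.abs(integrand)[:-1]) * dx)
print("area ratio |D| =", abs(area) / span)  # << 1 indicates consistency
```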
Abstract:
In this thesis we study some problems related to petroleum reservoirs using methods and concepts of Statistical Physics. The thesis can be divided into two parts. The first presents a study of the percolation problem on a random multifractal support, motivated by its potential application in modelling oil reservoirs. We developed a heterogeneous and anisotropic grid that follows a random multifractal distribution of its sites, and then determined the percolation threshold for this grid, the fractal dimension of the percolating cluster and the critical exponents β and ν. In the second part, we propose an alternative systematic approach for modelling and simulating oil reservoirs. We introduce a statistical model based on a stochastic formulation of Darcy's Law, in which the distribution of permeabilities is locally equivalent to the basic model of bond percolation
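A hedged sketch of the reference computation behind the first part: estimating the site-percolation threshold by checking for a spanning cluster, here on an ordinary square lattice rather than the multifractal grid of the thesis; lattice size and sample counts are illustrative.

```python
# Sketch: estimating the site-percolation threshold of a plain square lattice
# by checking for a top-to-bottom spanning cluster. The multifractal support
# itself is not reproduced; lattice size and number of samples are illustrative.
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(0)
L, samples = 64, 200

def spans(p):
    occupied = rng.random((L, L)) < p
    clusters, _ = label(occupied)                 # 4-connected clusters
    top, bottom = set(clusters[0]) - {0}, set(clusters[-1]) - {0}
    return bool(top & bottom)                     # a cluster touches both edges

for p in (0.45, 0.55, 0.59, 0.65):
    frac = sum(spans(p) for _ in range(samples)) / samples
    print(f"p = {p:.2f}: spanning probability ~ {frac:.2f}")
# The crossover happens near p_c ~ 0.5927 for 2D site percolation.
```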
Conventional percolation, correlated percolation and invasion percolation on a multifractal support
Abstract:
In this work we have studied the problem of percolation on a multifractal geometric support, in its different versions, and we have analysed the connection between this problem and standard percolation, as well as its connection with the critical phenomena formalism. The projection of the multifractal structure onto the underlying regular lattice allows us to map the problem of random percolation on the multifractal lattice onto the problem of correlated percolation on the regular lattice. We have also investigated the critical behavior of the invasion percolation model in this type of environment, and we have discussed the finite-size effects
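The sketch below illustrates the invasion-percolation dynamics mentioned above, on a plain square lattice rather than the multifractal support: sites receive random strengths, the invading cluster starts from the left edge and always absorbs the weakest perimeter site until it breaks through; sizes and seeds are arbitrary.

```python
# Sketch of invasion percolation on a plain square lattice (the multifractal
# support studied above is not reproduced): sites get random strengths, the
# invading cluster starts at the left edge and always absorbs the weakest
# perimeter site, stopping when it reaches the right edge.
import heapq
import numpy as np

rng = np.random.default_rng(3)
L = 64
strength = rng.random((L, L))

invaded = np.zeros((L, L), dtype=bool)
frontier = [(strength[i, 0], i, 0) for i in range(L)]   # left edge as the inlet
heapq.heapify(frontier)

while frontier:
    s, i, j = heapq.heappop(frontier)
    if invaded[i, j]:
        continue
    invaded[i, j] = True
    if j == L - 1:                                      # reached the right edge
        break
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < L and 0 <= nj < L and not invaded[ni, nj]:
            heapq.heappush(frontier, (strength[ni, nj], ni, nj))

print("invaded fraction at breakthrough:", invaded.mean().round(3))
```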
Abstract:
In this work we have studied, by Monte Carlo computer simulation, several properties that characterize damage spreading in the Ising model, defined on Bravais lattices (the square and triangular lattices) and on the Sierpinski gasket. First, we investigated the antiferromagnetic model on the triangular lattice with a uniform magnetic field, under Glauber dynamics; the chaotic-frozen critical frontier we obtained coincides, within error bars, with the paramagnetic-ferromagnetic frontier of the static transition. Using heat-bath dynamics, we studied the ferromagnetic model on the Sierpinski gasket and showed that there are two times characterizing the relaxation of the damage: one of them satisfies the generalized scaling theory proposed by Henley (critical exponent z ~ A/T at low temperatures), while the other does not obey any of the known scaling theories. Finally, we used time-series analysis methods to study, under Glauber dynamics, the damage in the ferromagnetic Ising model on a square lattice. We obtained a Hurst exponent with value 0.5 at high temperatures, growing to 1 close to the temperature TD that separates the chaotic and frozen phases
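A compact sketch of the damage-spreading measurement described above for the square-lattice case: two Ising configurations differing at a single site are evolved with Glauber dynamics driven by the same random numbers, and the damage is the fraction of sites at which they disagree; lattice size, temperature and number of sweeps are illustrative.

```python
# Sketch of the damage-spreading measurement described above: two square-lattice
# Ising configurations that differ at a single site are updated with Glauber
# dynamics driven by the same random numbers, and the damage is the fraction of
# sites where they disagree. Size, temperature and sweep count are illustrative.
import numpy as np

rng = np.random.default_rng(0)
L, T, sweeps = 24, 3.5, 100            # T above Tc ~ 2.269, i.e. the chaotic side

a = rng.choice([-1, 1], size=(L, L))
b = a.copy()
b[L // 2, L // 2] *= -1                # single-site initial damage

def glauber_sweep(sa, sb):
    for i in range(L):
        for j in range(L):
            r = rng.random()                         # shared random number
            for s in (sa, sb):
                h = (s[(i + 1) % L, j] + s[(i - 1) % L, j] +
                     s[i, (j + 1) % L] + s[i, (j - 1) % L])
                d_e = 2.0 * s[i, j] * h              # energy cost of flipping
                if r < 1.0 / (1.0 + np.exp(d_e / T)):
                    s[i, j] *= -1

for _ in range(sweeps):
    glauber_sweep(a, b)

print("damage (Hamming distance / N):", float(np.mean(a != b)))
```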