Abstract:
Identifying the order of an Autoregressive Moving Average (ARMA) model by the usual graphical method is subjective. There is therefore a need for a technique that identifies the order without graphical inspection of the series autocorrelations. To avoid this subjectivity, this thesis determines the order of the Autoregressive Moving Average model using Reversible Jump Markov Chain Monte Carlo (RJMCMC). RJMCMC selects a model from a set of candidates according to goodness of fit, the standard errors of the estimates, and the frequency with which each model is accepted. Alongside a detailed treatment of the classical Box-Jenkins modelling methodology, the integration of MCMC algorithms is examined through parameter estimation and model fitting of ARMA models. This makes it possible to verify how well MCMC algorithms handle ARMA models by comparing their results with the graphical method. The MCMC approach was found to produce better results than the classical time series approach.
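The order-selection idea can be illustrated with a small sketch. This is not the thesis's implementation: it is a simplified reversible-jump sampler for a pure AR(k) model on invented simulated data, assuming unit noise variance, standard normal priors on the coefficients, and birth/death moves that add or drop the highest-lag coefficient (so the Jacobian of the jump is 1); boundary move probabilities are not corrected.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated AR(2) series standing in for real data (true order = 2).
n, true_phi = 500, np.array([0.5, -0.3])
y = np.zeros(n)
for t in range(2, n):
    y[t] = true_phi @ y[t - 2:t][::-1] + rng.normal()

kmax = 5
Y = y[kmax:]                                     # condition on the first kmax values
X = np.column_stack([y[kmax - j:n - j] for j in range(1, kmax + 1)])

def log_post(phi):
    """Gaussian log-likelihood (sigma^2 fixed at 1) plus N(0,1) coefficient priors."""
    resid = Y - X[:, :len(phi)] @ phi
    return -0.5 * np.sum(resid**2) - 0.5 * np.sum(phi**2)

s = 0.1                                          # std of the birth proposal

def log_g(u):                                    # birth-proposal log-density
    return -0.5 * (u / s)**2 - np.log(s) - 0.5 * np.log(2 * np.pi)

phi = np.zeros(1)                                # start from AR(1)
lp = log_post(phi)
counts = np.zeros(kmax + 1, dtype=int)

for _ in range(5000):
    if rng.random() < 0.5:                       # within-model Metropolis step
        prop = phi + rng.normal(scale=0.05, size=len(phi))
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            phi, lp = prop, lp_prop
    elif rng.random() < 0.5 and len(phi) < kmax: # birth: append one coefficient
        u = rng.normal(scale=s)
        prop = np.append(phi, u)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp - log_g(u):
            phi, lp = prop, lp_prop
    elif len(phi) > 1:                           # death: drop the last coefficient
        prop = phi[:-1]
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp + log_g(phi[-1]):
            phi, lp = prop, lp_prop
    counts[len(phi)] += 1

print("visits per order 1..kmax:", counts[1:].tolist())
print("posterior-mode order:", int(np.argmax(counts)))
```

The proposal-density term `log_g` is what penalises jumps to higher orders; the chain then spends most of its time at the order the data support, which is the frequency-of-acceptance idea the abstract describes.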
Abstract:
The objective of this work was to develop a free, easy-to-install exploratory data analysis application for academic use that can be operated without user-level programming, motivated by the widespread use of chemometrics and its dependence on applications that require paid licences or routines. The developed software, called Chemostat, provides Hierarchical Cluster Analysis (HCA), Principal Component Analysis (PCA) and interval Principal Component Analysis (iPCA), as well as correction methods, data transformations and outlier detection. Data can be imported from the clipboard, from text or ASCII files, or from FT-IR Perkin-Elmer ".sp" files. The software generates a variety of charts and tables for analysing the results, which can be exported in several formats. Its main features were tested on mid-infrared and near-infrared spectra of vegetable oils and on digital images of different types of commercial diesel. To validate the results, the same data sets were analysed in Matlab©, and the results of the two applications matched in the various combinations tested. In addition to the desktop version, the reuse of the algorithms allowed an online version to be provided that offers a unique experience on the web. Both applications are available in English.
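To make the core technique concrete: the PCA step such software performs on spectra can be sketched in a few lines of numpy. This is not Chemostat's code; the "spectra" below are invented toy data, and the decomposition is the standard mean-centred PCA via SVD.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "spectra": 6 samples x 10 wavelengths (a stand-in for FT-IR data).
base = np.sin(np.linspace(0, np.pi, 10))
X = np.vstack([base * s + rng.normal(scale=0.05, size=10)
               for s in (1.0, 1.1, 0.9, 2.0, 2.1, 1.9)])

# Mean-centre (standard chemometrics preprocessing), then PCA via SVD.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S                    # sample coordinates on the principal components
explained = S**2 / np.sum(S**2)   # fraction of variance per component

print("variance explained by PC1: %.3f" % explained[0])
```

With one dominant source of variation (the overall intensity factor), PC1 captures nearly all the variance, and a score plot of the first two components would separate the two sample groups.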
Abstract:
Whether the digital book will become the dominant design for books and a widely accepted reading format is a question currently asked by every e-publisher, publishing industry worker and many book consumers. This study is the first to apply Christensen's disruptive innovation theory holistically as an instrument for measuring the phenomenon of the digital book. The disruptiveness of an innovation can be measured by its disruptive potential and by the disruption process it undergoes. The empirical part of the thesis first examines the digital book's features as an innovation with disruptive potential, and then monitors the current digital book market for disruption processes. Establishing that the digital book is a disruptive innovation may help in understanding its prospects and even in modelling the pattern of the innovation's future market infiltration. The framework created to answer the research question could also be used in a similar way to analyse other e-publishing products (e.g. e-newspapers, e-magazines).
Abstract:
Diabetes is a rapidly increasing worldwide problem characterised by defective glucose metabolism, which causes long-term dysfunction and failure of various organs. The most common complication of diabetes is diabetic retinopathy (DR), one of the primary causes of blindness and visual impairment in adults. The rapid increase of diabetes pushes the limits of current DR screening capacity, for which digital imaging of the eye fundus (retinal imaging) together with automatic or semi-automatic image analysis algorithms provides a potential solution. In this work, the use of colour in the detection of diabetic retinopathy is studied statistically using a supervised algorithm based on one-class classification and Gaussian mixture model estimation. The algorithm distinguishes a given diabetic lesion type from all other possible objects in eye fundus images by estimating only the probability density function of that lesion type. For training and ground truth estimation, the algorithm combines manual annotations from several experts, for which the best practices were selected experimentally. By assessing the algorithm's performance in experiments on colour space selection, illumination and colour correction, and background class information, the use of colour in the detection of diabetic retinopathy was evaluated quantitatively. Another contribution of this work is a benchmarking framework for the eye fundus image analysis algorithms needed in the development of automatic DR detection. The framework provides guidelines on how to construct a benchmarking database comprising true patient images, ground truth and an evaluation protocol. The evaluation is based on standard receiver operating characteristic analysis and follows medical decision-making practice, providing protocols for both image- and pixel-based evaluation.
During the work, two public medical image databases with ground truth were published: DIARETDB0 and DIARETDB1. The framework, the DR databases and the final algorithm are made publicly available on the web to set baseline results for automatic detection of diabetic retinopathy. Although it deviates from the general context of the thesis, a simple and effective optic disc localisation method is also presented, since normal eye fundus structures are fundamental to the characterisation of DR.
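The one-class idea — model only the density of the lesion class and reject everything unlikely under it — can be sketched as follows. This is not the thesis's algorithm: the "colour features" are invented 2-D toy data, the mixture is a minimal spherical-Gaussian EM written from scratch, and the rejection threshold is simply a low quantile of the training densities.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy colour features for one lesion class: a mixture of two colour clusters,
# standing in for expert-annotated lesion pixels.
lesion = np.vstack([rng.normal([0.8, 0.2], 0.05, size=(200, 2)),
                    rng.normal([0.6, 0.3], 0.05, size=(200, 2))])

def fit_gmm(X, k=2, iters=50):
    """Minimal EM for a spherical Gaussian mixture (one-class density model)."""
    n, d = X.shape
    mu = X[rng.choice(n, k, replace=False)]
    var = np.full(k, X.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample.
        logp = np.stack([-0.5 * np.sum((X - mu[j])**2, axis=1) / var[j]
                         - 0.5 * d * np.log(2 * np.pi * var[j]) + np.log(w[j])
                         for j in range(k)], axis=1)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means and (spherical) variances.
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r.T @ X) / nk[:, None]
        var = np.array([np.sum(r[:, j] * np.sum((X - mu[j])**2, axis=1))
                        / (d * nk[j]) for j in range(k)])
    return w, mu, var

def density(X, w, mu, var):
    d = X.shape[1]
    p = np.zeros(len(X))
    for j in range(len(w)):
        p += w[j] * np.exp(-0.5 * np.sum((X - mu[j])**2, axis=1) / var[j]) \
             / (2 * np.pi * var[j]) ** (d / 2)
    return p

w, mu, var = fit_gmm(lesion)
# One-class decision: pixels whose density falls below a threshold learned
# from the training data are rejected as "not this lesion type".
threshold = np.quantile(density(lesion, w, mu, var), 0.05)
background = rng.normal([0.2, 0.7], 0.05, size=(100, 2))
rejected = np.mean(density(background, w, mu, var) < threshold)
print("fraction of background pixels rejected: %.2f" % rejected)
```

The key property is that no background class needs to be modelled at all: anything far from the estimated lesion density is rejected, which matches the abstract's "only estimating the probability density function of that lesion type".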
Abstract:
Consistency and change in Finnish broadcasting policy: the implementation of digital television and a comparison with Canada. The thesis examines how the Finnish television system changed in the late 1990s from a national institution into a dualistic system marked by strong market orientation. The aim is to understand how such a rapid change could take place and to analyse the institutional factors behind the development. At the theoretical level, the thesis discusses the close relationship between state political institutions and the institutions of broadcasting. The thesis consists of two case studies. The first focuses on the early years of the digitalisation of Finnish television, a process launched with strong industrial-nationalist motivations. The analysis, based on public documents, extends to the autumn of 2001, when digital television broadcasts began and the government bill on the new Communications Market Act was submitted to Parliament. These policy processes are analysed as a "marketisation" of the traditional governance principles and ideas of Finnish broadcasting. A comparison between Finnish and Canadian national broadcasting policy makes it possible to connect the conclusions to international developments. The comparison shows how communications policy in the two countries has converged, even though their television systems and governance arrangements are very different. The Canadian example shows that the specific technology is not what matters, but rather the commercial interests behind it, which policymakers readily conceal in nationalist rhetoric. The study shows the importance of considering the weight policy gives to the two sides of broadcasting: transmission technology and broadcasting as a distinct cultural form.
The nation state's room for manoeuvre in this field shrinks if the goal is to compete successfully in the new international economy. Under the neoliberal principles that the political system at large has adopted, this is seen as desirable; yet it was also entirely domestic institutional traditions and practices, followed in the Finnish digitalisation process, that furthered a development in which almost all of the original national aims fell apart.
Abstract:
Paper presented at the 40th Annual Conference of LIBER (Ligue des Bibliothèques Européennes de Recherche - Association of European Research Libraries) on July 1st, 2011, together with the slides used at the presentation.
Abstract:
With the increasing use of digital media, methods for multimedia protection have become extremely important. The number of solutions to the problem, from encryption to watermarking, is large and grows every year. This work considers digital image watermarking, specifically a novel method for watermarking colour and spectral images. The paper gives an overview of existing methods for watermarking colour and greyscale images. Methods using independent component analysis (ICA) for detection, and methods using the discrete wavelet transform (DWT) and discrete cosine transform (DCT), are considered in more detail. The novel watermarking method proposed in this paper embeds a colour or spectral watermark image into a colour or spectral host image, respectively, and successfully extracts the watermark from the resulting watermarked image. A number of experiments were performed on the quality of extraction as a function of the parameters of the embedding procedure. Another set of experiments tested the robustness of the proposed algorithm using three techniques: the median filter, the low-pass filter (LPF) and the discrete cosine transform (DCT), which are part of the widely known StirMark image watermarking robustness test. The study shows that the proposed watermarking technique is fragile, i.e. the watermark is altered by simple image processing operations. Moreover, we found that the content of the image to be watermarked does not affect the quality of the extraction. The mixing coefficients, which determine the proportions of the key and watermark images in the result, should not exceed 1% of the original. The proposed algorithm proved successful in the task of watermark embedding and extraction.
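The role of a small mixing coefficient can be illustrated with the simplest possible scheme. This is not the paper's method (which works on colour and spectral images with a key image): it is a generic additive watermark on invented greyscale arrays, with non-blind extraction by subtracting the known host.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy host and binary watermark "images" (grayscale arrays in [0, 1]).
host = rng.random((64, 64))
mark = (rng.random((64, 64)) > 0.5).astype(float)

alpha = 0.01                  # mixing coefficient: watermark kept at ~1% of the host
marked = host + alpha * mark  # additive embedding; visually indistinguishable

# Non-blind extraction: with the original host available, the watermark
# is recovered by subtraction and thresholding.
recovered = ((marked - host) / alpha > 0.5).astype(float)

print("fraction of bits recovered:", np.mean(recovered == mark))
```

Because alpha is tiny, even mild processing of `marked` (median or low-pass filtering) swamps the embedded signal, which is exactly the fragility the study reports for its own, more elaborate, embedding.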
Abstract:
The use of raw material of forest origin has increased significantly in recent decades. The pursuit of high productivity was realised through the introduction of exotic species, mainly Eucalyptus sp. and Pinus sp. This study evaluated the accuracy of the digital classification obtained in a survey of planted and natural forest stands in the area of the Cachoeira do Sul (RS) map sheet, using geoprocessing, remote sensing, GIS (geographic information system) and GPS (global positioning system) techniques. The area was found to be occupied by natural vegetation (35.54%), Pinus sp. (1.89%) and Eucalyptus sp. (0.77%), and the accuracy of the supervised digital classification was: overall accuracy 85.23%, Kappa 84.90% and Tau 77.74%. It was concluded that all three accuracy indices can be used, although the Kappa and Tau indices proved more consistent.
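The three accuracy indices compared above are all computed from a confusion matrix. The sketch below uses an invented 3-class matrix (not the study's data) and the standard definitions: overall accuracy as the trace fraction, Kappa correcting for chance agreement from the marginals, and Tau correcting with equal a priori class probabilities 1/M.

```python
import numpy as np

# Hypothetical 3-class confusion matrix (rows = reference, cols = classified).
cm = np.array([[50,  3,  2],
               [ 4, 40,  1],
               [ 2,  2, 46]])

n = cm.sum()
po = np.trace(cm) / n                        # overall accuracy
pe = (cm.sum(0) * cm.sum(1)).sum() / n**2    # chance agreement from the marginals
kappa = (po - pe) / (1 - pe)
m = cm.shape[0]
tau = (po - 1 / m) / (1 - 1 / m)             # Tau with equal priors 1/m

print("overall=%.4f kappa=%.4f tau=%.4f" % (po, kappa, tau))
```

Because Tau's chance-agreement term does not depend on the observed marginals, it stays stable when class proportions are very unbalanced (as with the 0.77% Eucalyptus class here), which is one reason studies report it alongside Kappa.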
Abstract:
This study was conducted to learn how companies' revenue models are transformed by the digitalisation of their products and processes. Because only a limited number of studies focus solely on revenue models, and in particular on revenue model change driven by changes in the business environment, the topic was initially approached through the business model concept, which organises a company's value-creating operations and resources to produce profitable revenue streams. This served as the basis for the theoretical framework of the study, used to collect and analyse the information. The empirical section is based on a qualitative approach and a multiple-case analysis of companies operating in the learning materials publishing industry. Their operations are compared with companies in other industries that have undergone comparable transformations, in order to identify similarities and contrasts between the cases. The sources of evidence are a literature review of the essential dimensions researched earlier, and interviews with 29 managers and executives at 17 organisations representing six industries. Based on the earlier literature and the empirical findings of this study, a change in the revenue model is linked to changes in the other dimensions of the business model: when one dimension is altered, the others should be adjusted accordingly. At the case companies, the transformation is observed as the simultaneous use of several revenue models and as revenue creation processes becoming more complex.
Abstract:
Remote sensing data have been widely used for land use and land cover classification, thanks in particular to the periodic acquisition of satellite images and the spread of digital image processing systems, which offer a variety of image classification algorithms. This study aimed to evaluate some of the most common supervised and unsupervised classification methods for Landsat-5 TM images in three areas with different landscape patterns in Rondônia: (1) "medium-sized" farm areas, (2) "fish-bone" settlement schemes, and (3) contact zones between forest and "Cerrado". Comparison with a reference map based on the Kappa statistic yielded good or better performance indicators for the algorithms used (best results: K-means, k = 0.68, 0.77 and 0.64; MaxVer (maximum likelihood), k = 0.71, 0.89 and 0.70, respectively, for the three areas). The results indicated that the choice of an algorithm should consider both its ability to discriminate various spectral signatures under different landscape patterns and the cost-benefit ratio of the various stages of the operators' work in producing a land use and land cover map. The study pointed to the need for a more systematic effort to evaluate the implementation options of a specific project before starting to produce a land use and land cover map.
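Of the two classifier families compared, the unsupervised one is easy to show in miniature. The sketch below is a generic K-means on invented two-band "pixels" (not Landsat data): it illustrates why such methods need no training samples but leave the analyst to attach land-cover meanings to the resulting clusters.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical two-band pixel values drawn from three landscape classes.
pixels = np.vstack([rng.normal(c, 0.1, size=(100, 2))
                    for c in ([0.2, 0.8], [0.5, 0.4], [0.9, 0.7])])

def kmeans(X, k, iters=20):
    """Minimal K-means: the kind of unsupervised classifier compared above."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest spectral centre.
        labels = np.argmin(((X[:, None, :] - centers)**2).sum(-1), axis=1)
        # Recompute centres; keep the old centre if a cluster empties out.
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(pixels, 3)
print("pixels per cluster:", np.bincount(labels, minlength=3))
```

A maximum-likelihood (MaxVer) classifier would instead fit a Gaussian per class from operator-supplied training pixels, which is where the cost-benefit trade-off between operator effort and discrimination ability arises.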
Abstract:
This paper describes the cost-benefit analysis of digital long-term preservation (LTP) that was carried out in the context of the Finnish National Digital Library Project (NDL) in 2010. The analysis was based on the assumption that as many as 200 archives, libraries, and museums will share an LTP system. The term ‘system’ shall be understood as encompassing not only information technology, but also human resources, organizational structures, policies and funding mechanisms. The cost analysis shows that an LTP system will incur, over the first 12 years, cumulative costs of €42 million, i.e. an average of €3.5 million per annum. Human resources and investments in information technology are the major cost factors. After the initial stages, the analysis predicts annual costs of circa €4 million. The analysis compared scenarios with and without a shared LTP system. The results indicate that a shared system will have remarkable benefits. At the development and implementation stages, a shared system shows an advantage of €30 million against the alternative scenario consisting of five independent LTP solutions. During the later stages, the advantage is estimated at €10 million per annum. The cumulative cost benefit over the first 12 years would amount to circa €100 million.
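The quoted figures can be checked with simple arithmetic. The split of the 12-year horizon used below (a development stage followed by about 7 later years at the €10 million annual advantage) is an assumption made here to reproduce the stated cumulative benefit, not a figure taken from the analysis.

```python
# Back-of-envelope check of the cost and benefit figures quoted above.
total_cost = 42.0                 # EUR million, cumulative over 12 years
print("average annual cost:", total_cost / 12)          # 3.5

dev_benefit = 30.0                # advantage at development/implementation stages
later_benefit_per_year = 10.0     # advantage per annum in later stages
later_years = 7                   # assumed number of post-roll-out years
print("cumulative benefit:", dev_benefit + later_benefit_per_year * later_years)
```

The arithmetic is consistent: €42M / 12 years gives the stated €3.5M average, and €30M plus roughly seven years at €10M reaches the "circa €100 million" cumulative advantage.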
Abstract:
Complete stem analysis is widely applied in growth studies of forest species with distinguishable growth rings. Ring measurements are conventionally made by hand with a millimetre ruler. However, Rosot et al. (2003) developed procedures for making these measurements with computer programs, a significant advance for the application of the technique. The objective of this study was therefore to compare manual and digital measurements in stem analysis of Mimosa scabrella and Pinus taeda. Six trees of each species were used, and the differences between the two procedures for DBH (cm), total height (m), cross-sectional area (cm²) and volume (cm³) of each growth ring were compared with a paired t-test (α = 0.05). The differences between manual and digital measurements were significant only for the volume of P. taeda at some ages. The results indicate that digital stem analysis is a good alternative to the conventional procedure, though it requires personnel trained in Geographic Information System (GIS) tools.
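The paired t-test used to compare the two procedures is straightforward to reproduce. The six measurement pairs below are invented for illustration (not the study's data); with n = 6 trees there are 5 degrees of freedom, for which the two-sided 5% critical value of Student's t is 2.571.

```python
import numpy as np

# Hypothetical paired measurements (manual vs digital) for six trees.
manual  = np.array([12.4, 15.1, 9.8, 14.2, 11.5, 13.0])
digital = np.array([12.6, 15.0, 9.9, 14.5, 11.4, 13.2])

d = digital - manual
n = len(d)
t = d.mean() / (d.std(ddof=1) / np.sqrt(n))   # paired t statistic, df = n - 1

t_crit = 2.571   # two-sided critical value, alpha = 0.05, df = 5
print("t = %.3f, significant at 5%%: %s" % (t, abs(t) > t_crit))
```

Here |t| < 2.571, so these illustrative differences would not be significant, which mirrors the study's finding that most variables showed no difference between the manual and digital procedures.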
Abstract:
Streamflow regionalisation has been carried out to provide hydrological information at sites with little or no available data; however, determining the physical characteristics of the drainage basin for each site of interest considerably limits the application of the results of a conventional regionalisation study. One solution to this problem is to automate basin delineation using a hydrologically consistent digital elevation model (MDEHC). The main objective of this study was the regionalisation of maximum, minimum and long-term mean flows and of the flow duration curve for the Paraíba do Sul river basin upstream of the city of Volta Redonda, based on an MDEHC. The digital elevation model proved hydrologically consistent, enabling the automatic determination of the basin's physical characteristics. Four hydrologically homogeneous regions were identified, and regression equations were obtained in which drainage area and main watercourse length were the most significant variables for representing the various regionalised variables and functions.
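A regionalisation regression of the kind described, relating a flow variable to drainage area, is typically fitted as a power law Q = a·A^b, i.e. a straight line in log-log space. The gauged-site values below are invented for illustration; the study's actual equations also use main watercourse length.

```python
import numpy as np

# Hypothetical gauged sites: drainage area (km^2) and long-term mean flow (m^3/s).
A = np.array([120.0, 350.0, 800.0, 1500.0, 3200.0, 6100.0])
Q = np.array([2.1, 5.8, 12.0, 21.5, 44.0, 80.0])

# Fit log Q = log a + b log A by least squares (standard regionalisation form).
b, log_a = np.polyfit(np.log(A), np.log(Q), 1)
a = np.exp(log_a)

# Estimate the flow at an ungauged site from its automatically derived area.
A_new = 1000.0
Q_est = a * A_new ** b
print("b = %.2f, estimated Q at %.0f km^2 = %.1f m^3/s" % (b, A_new, Q_est))
```

This is exactly where the automated basin delineation pays off: once the MDEHC yields the drainage area of any point of interest, the regional equation converts it directly into a flow estimate.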
Abstract:
Presentation at a seminar of the Bulgarian Library Association, 15 March 2012.