927 results for Algorithmic Complexity


Relevance: 100.00%

Abstract:

High-level introduction for web science students, rather than for computer science students.

Relevance: 70.00%

Abstract:

In past decades, most efforts to quantify the complexity of systems with a general tool have relied on Shannon's classical information framework, addressing the disorder of a system through the Boltzmann-Gibbs-Shannon entropy or one of its extensions. In recent years, however, there have been attempts to quantify the complexity of quantum systems through Kolmogorov algorithmic complexity, with results that diverge from the classical approach. Here we propose a complexity measure that uses the quantum information formalism, retains the generality of the classical entropy-based complexities, and expresses the complexity of these systems in a framework other than the algorithmic one. To this end, the Shiner-Davison-Landsberg (SDL) complexity framework is combined with the linear entropy of the density operators representing the analyzed systems and with the tangle as the entanglement measure. The proposed measure is then applied to a family of maximally entangled mixed states.
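To make the ingredients concrete, a minimal numerical sketch is possible: take the normalized linear entropy S_L = d/(d-1) * (1 - Tr rho^2) as the disorder Delta, and the SDL complexity with unit exponents, C = Delta * (1 - Delta). The Werner-like family below is an illustrative assumption, not necessarily the family analyzed in the paper:

```python
import numpy as np

def linear_entropy(rho):
    """Normalized linear entropy S_L = d/(d-1) * (1 - Tr(rho^2)), in [0, 1]."""
    d = rho.shape[0]
    purity = np.real(np.trace(rho @ rho))
    return d / (d - 1) * (1.0 - purity)

def sdl_complexity(rho):
    """SDL complexity with unit exponents, C = Delta * (1 - Delta),
    taking the normalized linear entropy as the disorder Delta."""
    delta = linear_entropy(rho)
    return delta * (1.0 - delta)

# Illustrative family (an assumption): Werner-like two-qubit states
# p * |Phi+><Phi+| + (1 - p)/4 * I.
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
proj = np.outer(phi, phi)
for p in (0.0, 0.5, 1.0):
    rho = p * proj + (1.0 - p) / 4.0 * np.eye(4)
    # C vanishes for both the maximally mixed (p=0) and pure (p=1) states.
    print(p, round(float(sdl_complexity(rho)), 4))
```

Note that C peaks at intermediate mixedness, which is the point of SDL-style measures: both perfect order and perfect disorder count as simple.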

Relevance: 60.00%

Abstract:

With the growth of information available on the Web and in personal and professional archives, driven by increasing data storage capacity, the exponential growth of computer processing power, and easy access to that information, an enormous flow of production and distribution of audiovisual content has been generated. Although mechanisms exist for indexing such content so that it can be searched and accessed, they typically involve high algorithmic complexity or require highly qualified staff to verify and categorize the content. This dissertation studies solutions for collaborative content annotation and develops a tool that simplifies the annotation of an audiovisual archive. The implemented approach is based on the concept of Games With a Purpose (GWAP) and lets users create tags (metadata in the form of keywords) that assign meaning to the object being categorized. As a first objective, a game was developed whose purpose is not only entertainment but also the creation of audiovisual annotations for the videos presented to the player, thereby improving their indexing and categorization. The application also allows the categorized content and metadata to be viewed and, as one more informative element, lets users attach a like to a specific instant of a video. The main advantage of the application is that it attaches annotations to specific points of a video, namely to its time instants, a feature not available in other collaborative audiovisual annotation applications.
Access to the content thus becomes considerably more effective, since search can reach specific points inside a video.
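The time-anchored tagging described above can be sketched as a small data structure; the class and method names are hypothetical, for illustration only:

```python
from collections import defaultdict

class VideoAnnotations:
    """Time-anchored collaborative annotations for one video:
    each tag maps to the instants (seconds) at which it was applied."""

    def __init__(self):
        self._tags = defaultdict(list)
        self.likes = []  # instants at which users pressed 'like'

    def add_tag(self, tag, instant):
        self._tags[tag.lower()].append(instant)

    def add_like(self, instant):
        self.likes.append(instant)

    def search(self, tag):
        """Return the instants carrying a tag, so search can jump
        to specific points inside the video."""
        return sorted(self._tags.get(tag.lower(), []))

ann = VideoAnnotations()
ann.add_tag("goal", 73.5)
ann.add_tag("Goal", 74.0)
ann.add_like(74.2)
print(ann.search("goal"))  # -> [73.5, 74.0]
```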

Relevance: 60.00%

Abstract:

This paper discusses the implementation details of a child-friendly, good-quality English text-to-speech (TTS) system that is phoneme-based, concatenative, easy to set up, and usable with little memory. Direct waveform concatenation and linear predictive coding (LPC) are used. Most existing TTS systems are unit-selection based and use standard speech databases available in neutral adult voices. Here, reduced memory is achieved by concatenating phonemes and by replacing phonetic wave files with their LPC coefficients. Linguistic analysis, rather than signal processing techniques, was used to reduce the algorithmic complexity. A sufficient degree of customization and generalization for the child user was included through provisions for vocabulary and voice selection to suit the child's requirements. Prosody was also incorporated. This inexpensive TTS system was implemented in MATLAB, with the synthesis presented through a graphical user interface (GUI), making it child friendly. It can be used not only as an engaging language-learning aid for the typical child but also as a speech aid for the vocally disabled child. The quality of the synthesized speech was evaluated using the mean opinion score (MOS).
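The LPC step mentioned above can be sketched as follows; this is the standard autocorrelation method with Levinson-Durbin recursion, not the paper's exact implementation:

```python
import numpy as np

def lpc(frame, order):
    """LPC by the autocorrelation method (Levinson-Durbin recursion).
    Returns a[0..order] with a[0] = 1; the predictor is
    x[t] ~= -sum_{j>=1} a[j] * x[t-j]."""
    n = len(frame)
    r = np.array([frame[:n - k] @ frame[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + a[1:i] @ r[1:i][::-1]
        k = -acc / err          # reflection coefficient
        prev = a.copy()
        for j in range(1, i):
            a[j] = prev[j] + k * prev[i - j]
        a[i] = k
        err *= 1.0 - k * k      # prediction error shrinks each order
    return a

# A noiseless AR(1) signal x[t] = 0.9 * x[t-1] is recovered exactly:
x = 0.9 ** np.arange(200)
print(np.round(lpc(x, 1), 4))  # a[1] is approximately -0.9
```

Storing only `order` coefficients per phoneme frame, instead of the raw waveform, is what yields the memory reduction the abstract describes.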

Relevance: 60.00%

Abstract:

Despite many diverse theories that address closely related themes, e.g., probability theory, algorithmic complexity, cryptanalysis, and pseudorandom number generation, a near-void remains in constructive methods certified to yield the desired "random" output. Herein, we provide explicit techniques to produce broad sets of both highly irregular finite sequences and normal infinite sequences, based on constructions and properties derived from approximate entropy (ApEn), a computable formulation of sequential irregularity. Furthermore, for infinite sequences, we considerably refine normality by providing methods for constructing diverse classes of normal numbers, classified by the extent to which initial segments deviate from maximal irregularity.
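ApEn itself is directly computable; a compact sketch of the standard Pincus formulation (not code from the paper):

```python
import math

def approximate_entropy(seq, m, r):
    """ApEn(m, r), after Pincus: minus the average log conditional
    probability that length-m runs matching within tolerance r still
    match when extended by one more point."""
    def phi(k):
        n = len(seq) - k + 1
        templates = [seq[i:i + k] for i in range(n)]
        ratios = [
            sum(1 for u in templates
                if max(abs(p - q) for p, q in zip(t, u)) <= r) / n
            for t in templates
        ]
        return sum(math.log(c) for c in ratios) / n
    return phi(m) - phi(m + 1)

# A perfectly periodic 0/1 sequence is maximally regular: ApEn near 0.
print(round(approximate_entropy([0, 1] * 30, m=1, r=0.5), 4))
```

Maximally irregular sequences, in the abstract's sense, are those driving ApEn toward its maximum (log 2 for binary alphabets).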

Relevance: 60.00%

Abstract:

We consider the problem of estimating P(Y_1 + ... + Y_n > x) by importance sampling when the Y_i are i.i.d. and heavy-tailed. The idea is to exploit the cross-entropy method as a tool for choosing good parameters for the importance sampling distribution; in doing so, we use the asymptotic description that, given that Y_1 + ... + Y_n > x, n - 1 of the Y_i have distribution F and one has the conditional distribution of Y given Y > x. We show in some specific parametric examples (Pareto and Weibull) how this leads to precise answers which, as demonstrated numerically, are close to being variance-minimal within the parametric class under consideration. Related problems for M/G/1 and GI/G/1 queues are also discussed.
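A minimal sketch of the underlying importance-sampling idea, using a plain heavier-tailed Pareto proposal rather than the paper's cross-entropy-tuned parameters (the function and its parameter choices are illustrative):

```python
import random

def pareto_tail_prob(n, x, alpha, alpha_is, samples=200_000, seed=1):
    """Importance-sampling estimate of P(Y_1 + ... + Y_n > x) for i.i.d.
    Pareto(alpha) variables on [1, inf). The proposal draws from a
    heavier-tailed Pareto(alpha_is), alpha_is < alpha, so that the rare
    event is hit often; each hit is weighted by the likelihood ratio."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        ys = [rng.random() ** (-1.0 / alpha_is) for _ in range(n)]  # inverse CDF
        if sum(ys) > x:
            w = (alpha / alpha_is) ** n
            for y in ys:
                w *= y ** (alpha_is - alpha)  # f_alpha(y) / f_alpha_is(y)
            total += w
    return total / samples

# Sanity check on n = 1, where P(Y > x) = x**(-alpha) exactly:
print(pareto_tail_prob(1, 10.0, alpha=2.0, alpha_is=1.0))  # close to 0.01
```

Naive Monte Carlo would see the event {sum > x} only rarely; the tilted proposal concentrates samples there, which is the variance-reduction mechanism the cross-entropy method then optimizes over.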

Relevance: 30.00%

Abstract:

Dissertation submitted to obtain the degree of Master in Informatics Engineering.

Relevance: 30.00%

Abstract:

The Intel® Xeon Phi™ is the first processor based on Intel's MIC (Many Integrated Core) architecture. It is a co-processor specially tailored for data-parallel computations, whose basic architectural design is similar to that of GPUs (Graphics Processing Units), leveraging many integrated, simpler computational cores to perform parallel computations. The main novelty of the MIC architecture relative to GPUs is its compatibility with the Intel x86 architecture. This enables the use of many of the tools commonly available for parallel programming of x86-based architectures, which may lead to a gentler learning curve. However, programming the Xeon Phi still entails aspects intrinsic to accelerator-based computing in general, and to the MIC architecture in particular. In this thesis we advocate the use of algorithmic skeletons for programming the Xeon Phi. Algorithmic skeletons abstract the complexity inherent to parallel programming, hiding details such as resource management, parallel decomposition, and inter-execution-flow communication, thus removing these concerns from the programmer's mind. In this context, the goal of the thesis is to lay the foundations for a simple but powerful and efficient skeleton framework for programming the Xeon Phi processor. For this purpose we build upon Marrow, an existing framework for orchestrating OpenCL™ computations in multi-GPU and CPU environments. We extend Marrow to execute both OpenCL and C++ parallel computations on the Xeon Phi. We evaluate the newly developed framework with several well-known benchmarks, such as Saxpy and N-Body, comparing not only its performance on the co-processor with that of the existing framework, but also the performance of the Xeon Phi against a multi-GPU environment.
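The skeleton idea can be illustrated in miniature: the pattern owns decomposition and worker management, and the user supplies only the kernel. A hypothetical Python sketch (Marrow itself targets C++/OpenCL; this only mirrors the programming model):

```python
from concurrent.futures import ThreadPoolExecutor

def map_skeleton(kernel, data, workers=4):
    """A minimal 'map' algorithmic skeleton: the skeleton owns worker
    management and data decomposition; the caller supplies only the kernel."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(kernel, data))  # results keep input order

# A Saxpy-like element-wise kernel (z = a*x + y) expressed via the skeleton:
a = 2.0
pairs = [(1.0, 1.0), (2.0, 1.0), (3.0, 1.0)]
print(map_skeleton(lambda xy: a * xy[0] + xy[1], pairs))  # -> [3.0, 5.0, 7.0]
```

The caller never touches threads, queues, or scheduling, which is exactly the concern-hiding the abstract attributes to skeleton frameworks.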

Relevance: 30.00%

Abstract:

This thesis studies the use of heuristic algorithms in a number of combinatorial problems that occur in various resource-constrained environments. Such problems occur, for example, in manufacturing, where a restricted number of resources (tools, machines, feeder slots) is needed to perform some operations. Many of these problems turn out to be computationally intractable, and heuristic algorithms are used to provide efficient, yet sub-optimal, solutions. The main goal of the present study is to build upon existing methods to create new heuristics that provide improved solutions for some of these problems. All of these problems occur in practice, and one of the motivations of our study was the request for improvements from industrial sources. We approach three different resource-constrained problems. The first is the tool switching and loading problem, which occurs especially in the assembly of printed circuit boards. This problem has to be solved when an efficient yet small primary storage is used to access resources (tools) from a less efficient (but unlimited) secondary storage area. We study various forms of the problem and provide improved heuristics for its solution. Second, the nozzle assignment problem is concerned with selecting a suitable set of vacuum nozzles for the arms of a robotic assembly machine. It turns out to be a special case of the MINMAX resource allocation formulation of the apportionment problem, which can be solved efficiently and optimally. We construct an exact algorithm specialized for nozzle selection and provide a proof of its optimality. Third, the problem of feeder assignment and component tape construction arises when electronic components are inserted and certain component types cause tape movement delays that can significantly impact the efficiency of printed circuit board assembly. Here, careful selection of component slots in the feeder improves the tape movement speed.
We provide a formal proof that this problem has the same complexity as the turnpike problem (a well-studied geometric optimization problem) and give a heuristic algorithm for it.
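For the tool switching problem with a fixed job sequence, the classical baseline loading policy is Keep Tool Needed Soonest (Tang and Denardo); a sketch of that textbook policy, not of the thesis's improved heuristics:

```python
def ktns_switches(jobs, capacity):
    """Keep-Tool-Needed-Soonest (KTNS) loading for a fixed job sequence:
    when the magazine is full, evict the tool whose next use lies farthest
    in the future. Returns the number of tool switches (evictions).
    jobs: list of sets of tool ids, each of size <= capacity."""
    def next_use(tool, start):
        for j in range(start, len(jobs)):
            if tool in jobs[j]:
                return j
        return len(jobs)  # never needed again: best eviction candidate

    magazine, switches = set(), 0
    for i, need in enumerate(jobs):
        for tool in need - magazine:
            if len(magazine) >= capacity:
                evict = max(magazine - need, key=lambda t: next_use(t, i))
                magazine.remove(evict)
                switches += 1
            magazine.add(tool)
    return switches

print(ktns_switches([{1, 2}, {2, 3}, {1, 3}], capacity=2))  # -> 2
```

Improved heuristics for the problem typically optimize the job sequence itself, then apply KTNS to evaluate each candidate sequence.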

Relevance: 30.00%

Abstract:

In this paper we present algorithms that operate on pairs of 0,1-matrices whose product is again a matrix with zero and one entries. When applied to a pair, the algorithms change the number of non-zero entries present in the matrices while their product remains unchanged. We establish the conditions under which the number of 1s decreases. We also recursively define pairs of matrices whose product is a specific matrix and such that applying these algorithms to them minimizes the total number of non-zero entries present in both matrices. These matrices may be interpreted as solutions for a well-known information retrieval problem, in which case the number of 1 entries represents the complexity of the retrieval and information update operations.
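The objective can be illustrated with a toy instance: two pairs of 0,1-matrices with the same product but different total numbers of 1 entries (the pairs are hand-picked for illustration, not produced by the paper's algorithms):

```python
import numpy as np

def product_and_ones(a, b):
    """Return the product a @ b and the total number of 1 entries in the pair."""
    return a @ b, int(a.sum() + b.sum())

target = np.ones((2, 2), dtype=int)

# Trivial pair: the identity times the target itself (6 ones in total).
p1, ones1 = product_and_ones(np.eye(2, dtype=int), target)

# A sparser pair with the same 0/1 product (4 ones in total).
a = np.array([[1, 0], [1, 0]])
b = np.array([[1, 1], [0, 0]])
p2, ones2 = product_and_ones(a, b)

assert (p1 == target).all() and (p2 == target).all()
print(ones1, ones2)  # -> 6 4
```

Fewer 1 entries in the pair means cheaper retrieval and update operations under the interpretation given in the abstract.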

Relevance: 20.00%

Abstract:

During the early Holocene, two main Paleoamerican cultures thrived in Brazil: the Tradição Nordeste in the semi-desertic Sertão and the Tradição Itaparica in the high plains of the Planalto Central. Here we report on the paleodietary signals of a Paleoamerican found in a third Brazilian ecological setting, a riverine shellmound, or sambaqui, located in the Atlantic forest. Most sambaquis are found along the coast, and the peoples associated with them subsisted on marine resources. We report a different situation from the oldest recorded riverine sambaqui, called Capelinha. Capelinha is a relatively small sambaqui established along a river 60 km from the Atlantic coast. It contained the well-preserved remains of a Paleoamerican known as Luzio, dated to 9,945 +/- 235 years ago, the oldest sambaqui dweller so far. Luzio's bones were remarkably well preserved and allowed stable isotopic analysis of diet. Although artifacts found at this riverine site show connections with the Atlantic coast, we show that he represents a population that depended on inland rather than marine coastal resources. After comparing Luzio's paleodietary data with those of other extant and prehistoric groups, we discuss where his group could have come from, whether a terrestrial diet persisted in riverine sambaquis, and how Luzio fits within the discussion of the replacement of Paleoamerican by Amerindian morphology. This study adds to the evidence of a greater complexity in the prehistory of the colonization of, and adaptation to, the New World.

Relevance: 20.00%

Abstract:

An (n, d)-expander is a graph G = (V, E) such that for every X ⊆ V with |X| ≤ 2n − 2 we have |Γ_G(X)| ≥ (d + 1)|X|. A tree T is small if it has at most n vertices and maximum degree at most d. Friedman and Pippenger (1987) proved that any (n, d)-expander contains every small tree. However, their elegant proof does not seem to yield an efficient algorithm for obtaining the tree. In this paper, we give an alternative result that does admit a polynomial-time algorithm for finding the immersion of any small tree in subgraphs G of (N, D, λ)-graphs Λ, as long as G contains a positive fraction of the edges of Λ and λ/D is small enough. In several applications of the Friedman-Pippenger theorem, including the ones in the original paper of those authors, the (n, d)-expander G is a subgraph of an (N, D, λ)-graph as above. Therefore, our result suffices to provide efficient algorithms for such previously non-constructive applications. As an example, we discuss a recent result of Alon, Krivelevich, and Sudakov (2007) concerning the embedding of nearly spanning bounded-degree trees, the proof of which makes use of the Friedman-Pippenger theorem. We shall also show a construction, inspired by Wigderson-Zuckerman expander graphs, for which any sufficiently dense subgraph contains all trees of sizes and maximum degrees achieving essentially optimal parameters. Our algorithmic approach is based on a reduction of the tree embedding problem to a certain on-line matching problem for bipartite graphs, solved by Aggarwal et al. (1996).
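The expansion condition itself can be checked by brute force on toy graphs; a sketch (exponential in |V|, for illustration only):

```python
from itertools import combinations

def is_expander(adj, n, d):
    """Brute-force check of the (n, d)-expander condition: every X with
    |X| <= 2n - 2 must satisfy |Gamma(X)| >= (d + 1)|X|.
    adj maps each vertex to its neighbour set; toy sizes only."""
    vertices = list(adj)
    for size in range(1, min(2 * n - 2, len(vertices)) + 1):
        for X in combinations(vertices, size):
            gamma = set().union(*(adj[v] for v in X))
            if len(gamma) < (d + 1) * size:
                return False
    return True

# K_{4,4}: any pair of same-side vertices has only 4 neighbours in total,
# so the graph is a (2, 1)-expander but not a (2, 3)-expander.
k44 = {v: set(range(4, 8)) for v in range(4)}
k44.update({v: set(range(4)) for v in range(4, 8)})
print(is_expander(k44, n=2, d=1), is_expander(k44, n=2, d=3))  # -> True False
```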

Relevance: 20.00%

Abstract:

We evaluated the reliability and validity of a Brazilian-Portuguese version of the Epilepsy Medication Treatment Complexity Index (EMTCI). Interrater reliability was evaluated with the intraclass correlation coefficient (ICC), and validity was evaluated by correlating mean EMTCI scores with the following variables: number of antiepileptic drugs (AEDs), seizure control, patients' perception of seizure control, and adherence to the therapeutic regimen as measured with the Morisky scale. We studied patients with epilepsy followed in a tertiary university-based hospital outpatient clinic, aged 18 years or older, independent in activities of daily living, and without cognitive impairment or active psychiatric disease. ICCs ranged from 0.721 to 0.999. Mean EMTCI scores were significantly correlated with the variables assessed. Higher EMTCI scores were associated with an increasing number of AEDs, uncontrolled seizures, patients' perception of lack of seizure control, and poorer adherence to the therapeutic regimen. The results indicate that the Brazilian-Portuguese EMTCI is reliable and valid for clinical application in the country. The Brazilian-Portuguese EMTCI version may be a useful tool in developing strategies to minimize treatment complexity, possibly improving seizure control and quality of life in people with epilepsy in our milieu. (C) 2011 Elsevier Inc. All rights reserved.

Relevance: 20.00%

Abstract:

Aging is known to have a degrading influence on many structures and functions of the human sensorimotor system. The present work assessed aging-related changes in postural sway using fractal and complexity measures of the center of pressure (COP) dynamics, with the hypothesis that complexity and fractality decrease in older individuals. Older subjects (68 +/- 4 years) and young adult subjects (28 +/- 7 years) performed a quiet stance task (60 s) and a prolonged standing task (30 min) in which subjects were allowed to move freely. Long-range correlations (fractality) of the data were estimated by detrended fluctuation analysis (DFA); changes in entropy were estimated by the multi-scale entropy (MSE) measure. The DFA results showed that the fractal dimension was lower for the older subjects than for the young adults, but the fractal dimensions of both groups were not different from 1/f noise for time intervals between 10 and 600 s. The MSE analysis performed with the typically applied adjustment to the criterion distance showed a higher degree of complexity in the older subjects, which is inconsistent with the hypothesis that complexity in the human physiological system decreases with aging. The same MSE analysis performed without the adjustment showed no differences between the groups. Taken together, the decrease in total postural sway and in long-range correlations in older individuals are signs of an adaptation process reflecting a diminishing ability to generate adequate responses on a longer time scale.
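A compact sketch of first-order DFA as commonly defined (not the study's exact analysis pipeline):

```python
import numpy as np

def dfa_alpha(x, scales):
    """First-order DFA: integrate the mean-removed series, detrend it
    linearly in non-overlapping windows of each scale, and return the
    slope alpha of log F(s) vs log s (alpha ~ 0.5 for white noise,
    ~ 1.0 for 1/f noise, ~ 1.5 for Brownian motion)."""
    y = np.cumsum(x - np.mean(x))
    fluctuations = []
    for s in scales:
        t = np.arange(s)
        residuals = []
        for w in range(len(y) // s):
            seg = y[w * s:(w + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            residuals.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(residuals)))
    return np.polyfit(np.log(scales), np.log(fluctuations), 1)[0]

rng = np.random.default_rng(0)
white = rng.standard_normal(10_000)
print(round(dfa_alpha(white, [16, 32, 64, 128, 256]), 2))  # ~ 0.5
```

In a COP study the input series would be the COP displacement over time, and the reported long-range correlations correspond to the fitted exponent alpha.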

Relevance: 20.00%

Abstract:

The aim of this study was to investigate the effects of knowledge of results (KR) frequency and task complexity on motor skill acquisition. The task consisted of throwing a bocha ball so as to place it as close as possible to the target ball. 120 students aged 11 to 73 years were assigned to one of eight experimental groups according to knowledge of results frequency (25, 50, 75, and 100%) and task complexity (simple and complex). Subjects performed 90 trials in the acquisition phase and 10 trials in the transfer test. The results showed that knowledge of results given at a frequency of 25% produced a lower absolute error than 50% and a lower variable error than the 50, 75, and 100% frequencies, but no effect of task complexity was found.
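The two error measures reported are standard in motor learning: absolute error reflects accuracy, variable error reflects trial-to-trial consistency. A minimal sketch with made-up trial data:

```python
import math

def absolute_error(errors):
    """AE: mean unsigned deviation from the target (overall accuracy)."""
    return sum(abs(e) for e in errors) / len(errors)

def variable_error(errors):
    """VE: standard deviation of the signed errors (consistency)."""
    mean = sum(errors) / len(errors)
    return math.sqrt(sum((e - mean) ** 2 for e in errors) / len(errors))

# Hypothetical signed distances (cm) between the thrown ball and the target:
trials = [4.0, -2.0, 3.0, -1.0]
print(absolute_error(trials))             # -> 2.5
print(round(variable_error(trials), 4))   # -> 2.5495
```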