880 results for Iterative decoding
Abstract:
This text offers some reflections on the axioms of capitalism that make it possible to decode all the conventions of the so-called “savage” societies and to fulfill the promises of power of the despotic, “barbarian” societies. At the same time, capitalism provokes a multiplication of territorial fractures and of globalizations of every kind. In this decoding of all values, the flows (social, economic, political) released by capitalism hesitate between, on the one hand, a decoding that would lead to a radical deterritorialization of processes and, on the other, a conservative reappearance and re-actualization of these flows released by the very process of “destructive creation”.
Abstract:
This documentary study analyzes the conceptions of literacy, reading, and writing underlying the Provinha Brasil in the 2008-2012 period, as well as the context in which this assessment program was produced. It draws on the Bakhtinian framework and on Gontijo's (2008, 2013) concept of literacy. Taking the Provinha as a discourse genre, it discusses the preceding links within the context of production of this assessment, the authorship of the program, and its main addressees. It finds that the Provinha was created in response to demands for literacy assessment coming from international organizations such as the World Bank and the United Nations Educational, Scientific and Cultural Organization (Unesco). The assessment is developed by the Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira (Inep), the agency that coordinates assessments in the country, in collaboration with researchers from universities and civil society organizations, so as to demonstrate scientific reliability combined with democratic participation in the production process. Its main addressees are managers in Departments of Education and teachers. The former are expected to adhere to the assessment program and to take administrative measures for its implementation in the school networks. Teachers are assigned the central role of following the guidelines in the material and reorganizing their practice in order to improve children's performance on the test. The children, in turn, are disregarded as subjects of their own utterances, and a homogenizing discourse about their development is legitimized. Based on the administered tests and on the reference matrices and their axes, the research analyzes how the theoretical distinction between literacy acquisition (alfabetização) and literacy practices (letramento) materializes in the organization of the tests. Literacy acquisition, understood as appropriation of the writing system, is assessed in the first axis of the test mainly as identification of smaller units of language, such as letters, syllables, and phonemes. Reading skills, linked to letramento as conceived in the program's assumptions, are measured either as decoding of decontextualized words and sentences or as apprehension of a predetermined meaning of the text. Writing was assessed only in 2008, through items that asked children to encode words and sentences dictated by the test administrator. In this way, the Provinha Brasil contributes to stripping away the political and transformative potential of learning the mother tongue in the country.
Abstract:
A numerical algorithm was developed to solve the thermochemical conversion of a solid fuel. It was designed to be flexible and dependent on the reaction mechanism to be represented. To this end, the system of equations characteristic of this type of problem was solved by an iterative method combined with symbolic mathematics. Because of nonlinearities in the equations, and since small particles are considered, Newton's method is applied to reduce the system of partial differential equations (PDEs) to a system of ordinary differential equations (ODEs). This reduction is based on combining the iterative method with numerical differentiation, since it allows analytical functions to be incorporated into the resulting ODEs. The reduced model is solved numerically using the bi-conjugate gradient (BCG) technique. The model promises a high convergence rate with a small number of iterations, as well as fast computation of the solutions of the newly generated linear system. In addition, the algorithm is independent of the size of the underlying mesh. For validation, the normalized mass will be computed and compared with experimental thermogravimetry values found in the literature, and a test with a simplified reaction mechanism will be performed.
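To make the numerical ingredients concrete, the sketch below (an illustration only, not the authors' code) applies Newton's method to a small hypothetical nonlinear system, approximates the Jacobian by numerical differentiation, and solves each Newton correction with SciPy's bi-conjugate gradient routine.

```python
# Minimal sketch: Newton iteration with a finite-difference Jacobian and a
# BiCG linear solve at each step. The residual function is a hypothetical
# stand-in, not the thermochemical conversion model from the abstract.
import numpy as np
from scipy.sparse.linalg import bicg

def residual(u):
    # toy nonlinear system F(u) = 0, with exact root (1, 2)
    return np.array([u[0]**2 + u[1] - 3.0,
                     u[0] + u[1]**2 - 5.0])

def numerical_jacobian(f, u, eps=1e-7):
    # forward-difference approximation of dF/du
    n = u.size
    J = np.zeros((n, n))
    f0 = f(u)
    for j in range(n):
        du = np.zeros(n)
        du[j] = eps
        J[:, j] = (f(u + du) - f0) / eps
    return J

def newton_bicg(f, u0, tol=1e-10, max_iter=50):
    u = u0.astype(float)
    for _ in range(max_iter):
        F = f(u)
        if np.linalg.norm(F) < tol:
            break
        J = numerical_jacobian(f, u)
        delta, info = bicg(J, -F)      # bi-conjugate gradient linear solve
        if info != 0:
            raise RuntimeError("BiCG did not converge")
        u = u + delta
    return u

print(newton_bicg(residual, np.array([1.0, 1.0])))
```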
Abstract:
This study takes as its object the printed materials of the Projeto Trilhas, pedagogical material produced in partnership by the Instituto Natura, the Comunidade Educativa CEDAC, and the Ministério da Educação. It is a documentary analysis that draws on the notions of utterance, text, genre, and support, which grounded the methodological proposal, guided by dialogue, whose purpose was to problematize how this set of materials can contribute to the teaching and learning process of children enrolled in the first year of elementary school (Ensino Fundamental), focusing on the conceptions of literacy, reading, and writing embedded in the materials. The theoretical framework guiding these reflections was based on the contributions of the Bakhtinian perspective on language and was anchored in the concept of literacy critically proposed by Gontijo (2008). The analyses took shape as an arena, that is, a stage of alterity. They therefore sought to understand how the concept and the conceptions materialize in the activities produced by the subject-authors, and problematized how the printed materials of the Projeto Trilhas can contribute to improving the teaching and learning of children enrolled in the first year of elementary school. The analyses support the claim that the concept underpinning these materials is close to the contributions of Ferreiro and Teberosky (1999), that is, literacy is the process through which children assimilate the written code and come to understand the uses given to it in written culture. Reading is framed as the decoding of linguistic signs and the comprehension of meanings, and writing as encoding.
Abstract:
Polymers have become the reference material for high-reliability and high-performance applications. In this work, a multi-scale approach is proposed to investigate the mechanical properties of polymer-based materials under strain. To achieve a better understanding of phenomena occurring at the smaller scales, a coupling of a Finite Element Method (FEM) and Molecular Dynamics (MD) modeling in an iterative procedure was employed, enabling the prediction of the macroscopic constitutive response. As the mechanical response can be related to the local microstructure, which in turn depends on the nano-scale structure, the previously described multi-scale method computes the stress-strain relationship at every analysis point of the macro-structure by detailed modeling of the underlying micro- and meso-scale deformation phenomena. The proposed multi-scale approach can enable prediction of properties at the macroscale while taking into consideration phenomena that occur at the mesoscale, thus offering an increased potential accuracy compared to traditional methods.
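As a purely schematic illustration of such an iterative two-scale exchange (all functions and numbers below are invented stand-ins; a real workflow would call an MD code instead of the surrogate), consider a one-dimensional bar of two elements in series whose element-level stress comes from a lower-scale model and whose strain partition is iterated until equilibrium is satisfied.

```python
# Schematic two-scale loop for a 1-D bar of two elements in series under a
# prescribed average strain. The stress in each element comes from a lower-scale
# model (here a hypothetical nonlinear surrogate standing in for an MD run);
# equilibrium requires both elements to carry the same stress, so the strain
# partition is iterated until the stresses agree.
import numpy as np

def micro_scale_stress(strain, stiffness, hardening):
    # toy nonlinear constitutive response standing in for MD output
    return stiffness * strain + hardening * strain**3

def equilibrate(avg_strain, tol=1e-10, max_iter=200, step=0.1):
    e1 = avg_strain              # strain in element 1; element 2 gets the rest
    for _ in range(max_iter):
        e2 = 2.0 * avg_strain - e1
        s1 = micro_scale_stress(e1, stiffness=2.5, hardening=0.8)
        s2 = micro_scale_stress(e2, stiffness=1.5, hardening=0.4)
        if abs(s1 - s2) < tol:
            break
        e1 -= step * (s1 - s2)   # relax the partition toward equal stress
    return e1, e2, s1

for eps in np.linspace(0.05, 0.25, 5):
    e1, e2, s = equilibrate(eps)
    print(f"avg strain {eps:.2f}: e1={e1:.4f}, e2={e2:.4f}, stress={s:.4f}")
```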
Abstract:
In cameras with radial distortion, straight lines in space are in general mapped to curves in the image. Although the epipolar geometry also gets distorted, there is a set of special epipolar lines that remain straight, namely those that go through the distortion center. By finding these straight epipolar lines in camera pairs, we can obtain constraints on the distortion center(s) without any calibration object or plumb-line assumptions in the scene. Although this holds for all radial distortion models, we prove the concept using the division distortion model and the radial fundamental matrix, which allow for a very simple closed-form solution for the distortion center from two views (same distortion) or three views (different distortions). The non-iterative nature of our approach makes it immune to local minima and allows finding the distortion center also for cropped images or those for which no good prior exists. Besides this, we give comprehensive relations between different undistortion models and discuss their advantages and drawbacks.
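For reference, one common statement of the division model (written here under the assumption that image coordinates are expressed relative to the distortion center) maps a distorted point x_d to its undistorted counterpart as

```latex
\mathbf{x}_u \;=\; \frac{\mathbf{x}_d}{1 + \lambda\,\lVert \mathbf{x}_d \rVert^{2}}
```

Because x_u is a scalar multiple of x_d, any line through the distortion center maps onto itself, which is exactly the property that the straight epipolar lines exploit.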
Abstract:
This paper aims to describe the processes of teaching illustration and animation together in the context of a master's degree program. In Portugal, until very recently, illustration and animation higher-education courses were scarce and offered only by a few private universities, which provided separate programs in either illustration or animation. The MA in Illustration and Animation (MIA), based at the Instituto Politécnico do Cávado e Ave in Portugal, dared to join these two creative areas in a common learning model; it is already starting its third edition with encouraging results and will be supported by the first international conference on illustration and animation (CONFIA). This master's program combines several approaches and techniques (in illustration and animation) and encourages both creative writing and critical writing. This paper describes the iterative process of constructing and implementing the program, as well as the results obtained in its initial years in terms of pedagogical and learning conclusions. In summary, we aim to compare pedagogical models that teach animation or illustration separately in higher education with a more contemporary and multidisciplinary model that integrates the two at an earlier stage and allows them to be developed separately in the second part of the program. This is based on the differences and specificities of animation (from classic techniques to 3D) and illustration (drawing the illustration) and on the intersection of these two subjects within a program structure focused on the students' learning and on the competencies acquired for use in professional or authorial projects.
Abstract:
In this work, we consider the numerical solution of a large eigenvalue problem resulting from a finite rank discretization of an integral operator. We are interested in computing a few eigenpairs, with an iterative method, so a matrix representation that allows for fast matrix-vector products is required. Hierarchical matrices are appropriate for this setting, and also provide cheap LU decompositions required in the spectral transformation technique. We illustrate the use of freely available software tools to address the problem, in particular SLEPc for the eigensolvers and HLib for the construction of H-matrices. The numerical tests are performed using an astrophysics application. Results show the benefits of the data-sparse representation compared to standard storage schemes, in terms of computational cost as well as memory requirements.
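As a conceptual stand-in for the shift-and-invert spectral transformation (using SciPy and a random sparse test matrix rather than the SLEPc/HLib combination and the discretized integral operator used in the paper), the snippet below computes a few eigenpairs closest to a chosen shift; the factorization performed internally plays the role of the cheap LU decomposition mentioned above.

```python
# Illustrative shift-and-invert computation of a few eigenpairs of a large
# sparse matrix with SciPy; a stand-in for the SLEPc/HLib setup in the paper.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

n = 2000
# hypothetical test matrix: sparse, diagonally dominant, nonsymmetric
A = sp.diags(np.linspace(0.1, 10.0, n)) + sp.random(n, n, density=1e-3, random_state=0)
A = A.tocsc()

sigma = 0.5                        # shift: we want eigenvalues near this value
vals, vecs = eigs(A, k=4, sigma=sigma, which='LM')   # shift-and-invert mode
print(np.sort(vals.real))

# residual check ||A v - lambda v|| for the first computed eigenpair
v, lam = vecs[:, 0], vals[0]
print(np.linalg.norm(A @ v - lam * v))
```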
Abstract:
Polymeric materials have become the reference material for high-reliability and high-performance applications. However, their performance in service conditions is difficult to predict, due in large part to their inherently complex morphology, which leads to non-linear and anisotropic behavior, highly dependent on the thermomechanical environment under which they are processed. In this work, a multiscale approach is proposed to investigate the mechanical properties of polymer-based materials under strain. To achieve a better understanding of phenomena occurring at the smaller scales, the coupling of a finite element method (FEM) and molecular dynamics (MD) modeling, in an iterative procedure, was employed, enabling the prediction of the macroscopic constitutive response. As the mechanical response can be related to the local microstructure, which in turn depends on the nano-scale structure, this multiscale approach computes the stress-strain relationship at every analysis point of the macro-structure by detailed modeling of the underlying micro- and meso-scale deformation phenomena. The proposed multiscale approach can enable prediction of properties at the macroscale while taking into consideration phenomena that occur at the mesoscale, thus offering an increased potential accuracy compared to traditional methods.
Abstract:
Lossless compression algorithms of the Lempel-Ziv (LZ) family are widely used nowadays. Regarding time and memory requirements, LZ encoding is much more demanding than decoding. In order to speed up the encoding process, efficient data structures, like suffix trees, have been used. In this paper, we explore the use of suffix arrays to hold the dictionary of the LZ encoder, and propose an algorithm to search over it. We show that the resulting encoder attains roughly the same compression ratios as those based on suffix trees. However, the amount of memory required by the suffix array is fixed, and much lower than the variable amount of memory used by encoders based on suffix trees (which depends on the text to encode). We conclude that suffix arrays, when compared to suffix trees in terms of the trade-off among time, memory, and compression ratio, may be preferable in scenarios (e.g., embedded systems) where memory is at a premium and high speed is not critical.
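To make the dictionary search concrete, the sketch below (an illustration, not the authors' encoder) builds a suffix array over a small dictionary and uses binary search to find the longest match for a lookahead buffer, which is the core query an LZ encoder issues against its dictionary; the naive O(n² log n) construction is used for brevity, whereas a real encoder would use a linear-time construction.

```python
def build_suffix_array(text):
    # O(n^2 log n) construction for clarity; real encoders use faster algorithms
    return sorted(range(len(text)), key=lambda i: text[i:])

def common_prefix_len(a, b):
    n = min(len(a), len(b))
    i = 0
    while i < n and a[i] == b[i]:
        i += 1
    return i

def longest_match(text, sa, pattern):
    """Return (position, length) of the longest prefix of `pattern`
    occurring in `text`, via binary search over the suffix array `sa`."""
    lo, hi = 0, len(sa)
    while lo < hi:                      # find insertion point of pattern
        mid = (lo + hi) // 2
        if text[sa[mid]:] < pattern:
            lo = mid + 1
        else:
            hi = mid
    best_pos, best_len = -1, 0
    for idx in (lo - 1, lo):            # best match is adjacent to that point
        if 0 <= idx < len(sa):
            l = common_prefix_len(text[sa[idx]:], pattern)
            if l > best_len:
                best_pos, best_len = sa[idx], l
    return best_pos, best_len

if __name__ == "__main__":
    dictionary = "abracadabra"
    sa = build_suffix_array(dictionary)
    print(longest_match(dictionary, sa, "cadab"))   # -> (4, 5)
```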
Abstract:
Wyner-Ziv (WZ) video coding is a particular case of distributed video coding (DVC), the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems, which exploits the source temporal correlation at the decoder rather than at the encoder, as is done in predictive video coding. Although some progress has been made in recent years, WZ video coding is still far from the compression performance of predictive video coding, especially for high and complex motion content. The WZ video codec adopted in this study is based on a transform-domain WZ video coding architecture with feedback channel-driven rate control, whose modules have been improved with some recent coding tools. This study proposes a novel motion learning approach to successively improve the rate-distortion (RD) performance of the WZ video codec as the decoding proceeds, making use of the already decoded transform bands to improve the decoding process for the remaining transform bands. The results obtained reveal gains of up to 2.3 dB in the RD curves over the same codec without the proposed motion learning approach, for high-motion sequences and long group of pictures (GOP) sizes.
Abstract:
Treating the lateral vibration problem of a machine rotor as that of a beam in bending on elastic supports, the authors deal with the free vibration of elastically restrained Bernoulli-Euler beams carrying a finite number of concentrated elements along their length. Based on Rayleigh's quotient, an iterative strategy is developed to find approximate torsional stiffness coefficients, which allows the theoretical model results to be reconciled with the experimental ones obtained through impact tests. The algorithm treats the vibration of continuous beams under a given set of boundary and continuity conditions, accounting for different torsional stiffness coefficients and for the effect of attached concentrated masses and rotational inertias, not only in the energy terms of Rayleigh's quotient but also in the mode shapes, with the shape functions defined in branches. Several loading cases are examined and examples are given to illustrate the validity of the model and the accuracy of the obtained natural frequencies.
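For context, a generic Rayleigh quotient for a Bernoulli-Euler beam with elastically restrained ends and attached concentrated elements takes a form along the following lines (notation assumed here: EI bending stiffness, ρA mass per unit length, k_i and k_{Ti} translational and torsional spring constants, m_j and J_j concentrated masses and rotational inertias, φ the assumed mode shape):

```latex
\omega^{2} \;\approx\; R(\phi) \;=\;
\frac{\displaystyle \int_{0}^{L} EI(x)\,[\phi''(x)]^{2}\,dx
      + \sum_{i} k_{i}\,[\phi(x_{i})]^{2}
      + \sum_{i} k_{T i}\,[\phi'(x_{i})]^{2}}
     {\displaystyle \int_{0}^{L} \rho A(x)\,[\phi(x)]^{2}\,dx
      + \sum_{j} m_{j}\,[\phi(x_{j})]^{2}
      + \sum_{j} J_{j}\,[\phi'(x_{j})]^{2}}
```

The iterative strategy described in the abstract adjusts the torsional stiffness coefficients entering such a quotient until the predicted natural frequencies reconcile with the impact-test measurements.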
Abstract:
Deoxyribonucleic acid, or DNA, is the most fundamental aspect of life, but present-day scientific knowledge has merely scratched the surface of the problem posed by its decoding. While experimental methods provide insightful clues, the adoption of analysis tools supported by the formalism of mathematics will lead to a systematic and solid build-up of knowledge. This paper studies human DNA from the perspective of system dynamics. By associating entropy and the Fourier transform, several global properties of the code are revealed. The fractional-order characteristics emerge as a natural consequence of the information content. These properties constitute a small piece of scientific knowledge that will support further efforts towards the final aim of establishing a comprehensive theory of the phenomena involved in life.
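A minimal sketch of this kind of signal-level analysis is given below (assuming a toy random sequence and a simple indicator coding; this is not the paper's actual pipeline): symbol histograms give a Shannon entropy profile over windows, and the Fourier transform of an indicator sequence exposes periodic structure.

```python
# Toy illustration: Shannon entropy over windows of a DNA-like string and the
# power spectrum of an indicator sequence. Random data stands in for real
# genomic sequences.
import numpy as np

rng = np.random.default_rng(42)
seq = "".join(rng.choice(list("ACGT"), size=4096))

def window_entropy(s, win=256):
    out = []
    for start in range(0, len(s) - win + 1, win):
        chunk = s[start:start + win]
        counts = np.array([chunk.count(b) for b in "ACGT"], dtype=float)
        p = counts / counts.sum()
        p = p[p > 0]
        out.append(-(p * np.log2(p)).sum())   # Shannon entropy in bits/symbol
    return np.array(out)

print("entropy per window (bits/symbol):", window_entropy(seq)[:5])

# indicator sequence for one nucleotide and its power spectrum
indicator = np.array([1.0 if c == "A" else 0.0 for c in seq])
spectrum = np.abs(np.fft.rfft(indicator - indicator.mean())) ** 2
print("dominant spectral bins:", np.argsort(spectrum)[-3:])
```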
Abstract:
This paper proposes a new methodology to reduce the probability of occurrence of states that cause load curtailment, while minimizing the costs involved in achieving that reduction. The methodology is supported by a hybrid method based on Fuzzy Sets and Monte Carlo Simulation to capture both the randomness and the fuzziness of the component outage parameters of a transmission power system. The novelty of this research work consists in proposing two fundamental approaches: 1) a global steady approach, which builds the model of a faulted transmission power system with the aim of minimizing the unavailability corresponding to each faulted component; this results in the minimal global investment cost for the faulted components in a sample of system states of the transmission network; 2) a dynamic iterative approach, which checks individually the effect of each investment on the transmission network. A case study using the 1996 IEEE 24-bus Reliability Test System (RTS) is presented to illustrate in detail the application of the proposed methodology.
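A very rough sketch of how fuzziness and randomness can be combined in state sampling is given below; the triangular fuzzy unavailabilities, the α-cut sampling, and the two-outage threshold are all assumptions made for illustration and are not taken from the paper or from the RTS case study.

```python
# Very schematic hybrid fuzzy/Monte Carlo state sampling. Component
# unavailabilities are modelled as triangular fuzzy numbers; for each trial an
# alpha-cut is drawn, an unavailability is picked inside the resulting
# interval, and component up/down states are sampled from it.
import numpy as np

rng = np.random.default_rng(1)

# (low, mode, high) triangular fuzzy unavailability per component (hypothetical)
components = [(0.01, 0.02, 0.04), (0.02, 0.05, 0.08), (0.005, 0.01, 0.02)]

def alpha_cut(tri, alpha):
    low, mode, high = tri
    return low + alpha * (mode - low), high - alpha * (high - mode)

def sample_state():
    alpha = rng.uniform()                   # fuzziness: random alpha-cut
    state = []
    for tri in components:
        lo, hi = alpha_cut(tri, alpha)
        q = rng.uniform(lo, hi)             # unavailability inside the cut
        state.append(rng.uniform() < q)     # True = component on outage
    return state

trials = 100_000
heavy = sum(sum(s) >= 2 for s in (sample_state() for _ in range(trials)))
print("share of sampled states with >= 2 simultaneous outages:", heavy / trials)
```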
Abstract:
In this work, we solve Mathematical Programs with Complementarity Constraints using the hyperbolic smoothing strategy. Under this approach, the complementarity condition is relaxed through the use of the hyperbolic smoothing function, which involves a positive parameter that can be decreased to zero. An iterative algorithm is implemented in the MATLAB language, and a set of AMPL problems from the MacMPEC database was tested.
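As a point of reference (the exact formulation used in the paper may differ), a hyperbolic smoothing of this kind typically replaces a nonsmooth term such as max(y, 0) by the differentiable approximation

```latex
\phi_{\tau}(y) \;=\; \frac{y + \sqrt{y^{2} + \tau^{2}}}{2}, \qquad \tau > 0
```

so that a complementarity condition written, for instance, as min(x, y) = 0 with x, y ≥ 0 can be relaxed into a smooth constraint whose violation vanishes as the positive parameter τ is driven to zero.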