966 results for digital architecture
Abstract:
2-D Discrete Cosine Transform (DCT) is widely used as the core of digital image and video compression. In this paper, we present a novel DCT architecture that allows aggressive voltage scaling by exploiting the fact that not all intermediate computations are equally important in a DCT system for obtaining "good" image quality with Peak Signal to Noise Ratio (PSNR) > 30 dB. This observation has led us to propose a DCT architecture in which the signal paths that contribute less to PSNR improvement are designed to be longer than the paths that contribute more. It should also be noted that robustness with respect to parameter variations and low-power operation typically impose contradictory requirements on architecture design. However, the proposed architecture lends itself to aggressive voltage scaling for low-power dissipation even under process parameter variations. Under a scaled supply voltage and/or variations in process parameters, any possible delay errors would appear only in the long paths that contribute less to PSNR, providing a large improvement in power dissipation with small PSNR degradation. Results show that even under large process variation and supply voltage scaling (0.8 V), there is only a gradual degradation of image quality, with considerable power savings (62.8%) for the proposed architecture compared to existing implementations in 70 nm process technology.
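To make the quality target concrete, the sketch below (plain Python/NumPy, not the paper's hardware; the coefficient mask is only an illustration of "less contributive" computations) computes the 2-D DCT of an 8x8 block and the PSNR metric quoted above:

```python
# Illustrative sketch (not the authors' RTL): 2-D DCT of an 8x8 block and the
# PSNR quality metric used as the target (> 30 dB) in the abstract.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # separable 2-D DCT-II with orthonormal scaling
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)

coeffs = dct2(block)
# Emulate errors confined to "less contributive" paths: drop the
# high-frequency coefficients, which affect PSNR least (assumed mapping).
mask = np.add.outer(np.arange(8), np.arange(8)) < 8   # keep low-frequency half
degraded = idct2(coeffs * mask)
print(f"PSNR after dropping high-frequency terms: {psnr(block, degraded):.1f} dB")
```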
Abstract:
Digital pathology and the adoption of image analysis have grown rapidly in the last few years. This is largely due to the implementation of whole-slide scanning, advances in software and computer processing capacity, and the increasing importance of tissue-based research for biomarker discovery and stratified medicine. This review sets out the key application areas for digital pathology and image analysis, with a particular focus on research and biomarker discovery. A variety of image analysis applications are reviewed, including nuclear morphometry and tissue architecture analysis, with emphasis on immunohistochemistry and fluorescence analysis of tissue biomarkers. Digital pathology and image analysis have important roles across the drug/companion diagnostic development pipeline, including biobanking, molecular pathology, tissue microarray analysis and molecular profiling of tissue, and these developments are reviewed. Underpinning all of them is the need for high-quality tissue samples, so the impact of pre-analytical variables on tissue research is discussed, together with practical advice on setting up and running a digital pathology laboratory. Finally, we discuss the need to integrate digital image analysis data with epidemiological, clinical and genomic data in order to fully understand the relationship between genotype and phenotype and to drive discovery and the delivery of personalized medicine.
Abstract:
This paper presents a multi-agent system approach to address the difficulties encountered in traditional SCADA systems deployed in critical environments such as electrical power generation, transmission and distribution. The approach models uncertainty and combines multiple sources of uncertain information to deliver robust plan selection. We examine the approach in the context of a simplified power supply/demand scenario using a residential grid-connected solar system and consider the challenges of modelling and reasoning with uncertain sensor information in this environment. We discuss examples of plans and actions required for sensing, establish and discuss the effect of uncertainty on such systems, and investigate different uncertainty theories and how they can fuse uncertain information from multiple sources for effective decision making in such a complex system.
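As a minimal illustration of fusing uncertain information from multiple sources (a generic sketch, not the paper's agent or uncertainty framework; the sensor values are made up), independent estimates of the same quantity can be combined by inverse-variance weighting:

```python
# Minimal sketch: fusing uncertain sensor estimates of the same quantity by
# inverse-variance weighting, one simple way to combine multiple sources
# before plan selection. Not the paper's method; values are hypothetical.
def fuse(estimates):
    """estimates: list of (mean, variance) pairs from independent sensors."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    mean = sum(w * m for w, (m, _) in zip(weights, estimates)) / total
    return mean, 1.0 / total   # fused mean and (smaller) fused variance

# e.g. a PV output sensor and a smart-meter estimate, in kW (made-up numbers)
print(fuse([(2.4, 0.30), (2.9, 0.10)]))
```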
Abstract:
A digital directional modulation (DM) transmitter structure is proposed in this paper from a practical implementation point of view. This digital DM architecture is built with the help of several off-the-shelf physical-layer wireless experiment platform hardware boards. Compared with previous analogue DM transmitter architectures, the digital approach offers more precise and faster control over updates of the array excitations. More importantly, it is an ideal physical arrangement for implementing the most universal DM synthesis algorithm, i.e., the orthogonal vector approach. The practical issues in digital DM system calibration are described and solved. Bit error rates (BERs) are measured via real-time data transmissions to illustrate the DM advantages, in terms of secrecy performance, over conventional non-DM beam-steering transmitters.
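As a rough illustration of the orthogonal vector approach named above (a generic NumPy sketch, not the authors' board-level implementation; the array size, spacing and injected-noise level are assumptions), each transmitted excitation is the sum of an information-bearing component and a random component orthogonal to the steering vector of the intended direction, so the constellation stays clean only along that direction:

```python
# Hedged sketch of the orthogonal-vector DM idea (not the authors' hardware).
import numpy as np

N = 4                                    # array elements (assumed)
d, theta0 = 0.5, np.deg2rad(30)          # half-wavelength spacing, desired angle

def steering(theta):
    n = np.arange(N)
    return np.exp(1j * 2 * np.pi * d * n * np.sin(theta))

h0 = steering(theta0)
symbol = (1 + 1j) / np.sqrt(2)           # one QPSK symbol

rng = np.random.default_rng(1)
r = rng.standard_normal(N) + 1j * rng.standard_normal(N)
w = r - h0 * (h0.conj() @ r) / (h0.conj() @ h0)   # project out the h0 component

x = symbol * h0 / (h0.conj() @ h0) + 0.5 * w      # info term + orthogonal term

for deg in (30, 45):
    y = steering(np.deg2rad(deg)).conj() @ x      # received sample at this angle
    print(f"{deg:2d} deg: {complex(y):.3f}")      # clean symbol only at 30 deg
```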
Abstract:
The design of a high-performance IIR (infinite impulse response) digital filter is described. The chip architecture operates on 11-b parallel, two's complement input data with 12-b parallel two's complement coefficients to produce a 14-b two's complement output. The chip is implemented in 1.5-µm, double-layer-metal CMOS technology, consumes 0.5 W, and can operate up to 15 Msample/s. The main component of the system is a fine-grained systolic array that is internally based on a signed binary number representation (SBNR). Issues addressed include testing, clock distribution, and circuitry for conversion between two's complement and SBNR.
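For orientation, here is a behavioural first-order IIR section in Python that quantises inputs, coefficients and outputs to the word widths quoted above (the fractional-bit allocation is an assumption); it illustrates the two's complement fixed-point arithmetic only, not the chip's systolic SBNR datapath:

```python
# Behavioural sketch of fixed-point IIR filtering (assumed bit allocations).
def quantize(x, bits, frac):
    """Clamp and round x to a signed fixed-point value with `bits` total bits."""
    scale = 1 << frac
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, round(x * scale))) / scale

def iir_first_order(samples, b0=0.25, b1=0.25, a1=-0.5):
    # coefficients quantised to 12 bits, inputs to 11 bits, outputs to 14 bits
    b0, b1, a1 = (quantize(c, 12, 10) for c in (b0, b1, a1))
    x_prev = y_prev = 0.0
    out = []
    for x in samples:
        x = quantize(x, 11, 9)
        y = b0 * x + b1 * x_prev - a1 * y_prev   # y[n] = b0 x[n] + b1 x[n-1] - a1 y[n-1]
        y = quantize(y, 14, 10)
        out.append(y)
        x_prev, y_prev = x, y
    return out

print(iir_first_order([0.1, 0.2, 0.3, 0.0, 0.0]))
```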
Abstract:
A novel digital image correlation (DIC) technique has been developed to track changes in textile yarn orientations during shear characterisation experiments, requiring only low-cost digital imaging equipment. Fabric shear angles and effective yarn strains are calculated and visualised using this new DIC technique for bias extension testing of an aerospace-grade carbon-fibre reinforcement material with a plain-weave architecture. The DIC results are validated by direct measurement, and the use of a wide bias extension sample is evaluated against a more commonly used narrow sample. Wide samples exhibit a shear angle range 25% greater than narrow samples and peak loads which are 10 times higher. This is primarily due to excessive yarn slippage in the narrow samples; hence, the wide sample configuration is recommended for characterisation of shear properties, which are required for accurate modelling of textile draping.
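A common way to express the tracked quantity (an assumed formulation, not necessarily the authors' exact DIC pipeline) is to take the fabric shear angle as the departure of the warp and weft yarn directions from their initially orthogonal state:

```python
# Sketch: shear angle from two tracked yarn-direction vectors (assumed model).
import numpy as np

def shear_angle_deg(warp_vec, weft_vec):
    warp = np.asarray(warp_vec, float)
    weft = np.asarray(weft_vec, float)
    cos_t = np.dot(warp, weft) / (np.linalg.norm(warp) * np.linalg.norm(weft))
    inter_yarn = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
    return 90.0 - inter_yarn           # 0 deg when yarns are still orthogonal

# yarn directions tracked between two images of a bias-extension test (made up)
print(shear_angle_deg([1.0, 0.05], [0.35, 1.0]))
```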
Abstract:
This study introduces an inexact, but ultra-low power, computing architecture devoted to the embedded analysis of bio-signals. The platform operates at extremely low voltage supply levels to minimise energy consumption. In this scenario, the reliability of static RAM (SRAM) memories cannot be guaranteed when using conventional 6-transistor implementations. While error correction codes and dedicated SRAM implementations can ensure correct operation in this near-threshold regime, they incur significant area and energy overheads, and should therefore be employed judiciously. Herein, the authors propose a novel scheme for designing inexact computing architectures that selectively protects memory regions based on their significance, i.e. their impact on the end-to-end quality of service, as dictated by the characteristics of the bio-signal application. The authors illustrate their scheme on an industrial benchmark application performing the power spectrum analysis of electrocardiograms. Experimental evidence shows that a significance-based memory protection approach leads to a small degradation in output quality with respect to an exact implementation, while resulting in substantial energy gains, both in the memory and in the processing subsystem.
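A toy model of the significance-based protection idea (purely illustrative; the word widths, failure rate and significance rule are assumptions, not the authors' memory design) injects bit errors only into words outside the protected region and measures the resulting signal error:

```python
# Illustrative sketch: low-voltage bit flips hit only "insignificant" words.
import numpy as np

rng = np.random.default_rng(2)
signal = np.sin(2 * np.pi * np.arange(256) / 32)          # stand-in bio-signal
words = np.round(signal * 2**12).astype(np.int16)          # 16-bit memory words

# "significant" words (assumed rule: largest 25% by magnitude) are protected
significant = np.abs(words) > np.percentile(np.abs(words), 75)

faulty = words.copy()
for i in np.where(~significant)[0]:
    if rng.random() < 0.05:                                 # assumed failure rate
        faulty[i] ^= np.int16(1 << rng.integers(0, 8))      # flip a low-order bit

err = np.sqrt(np.mean((faulty - words).astype(float) ** 2)) / 2**12
print(f"Normalised RMS error with significance-based protection: {err:.4f}")
```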
Abstract:
This thesis investigates the characterisation (and modelling) of devices that interface between the digital and analogue domains, such as the output buffers of integrated circuits (ICs). Today's wireless terminals are being developed with the software-defined radio concept introduced by Mitola in mind. Ideally, this architecture takes advantage of powerful processors and extends the operation of the digital blocks as close as possible to the antenna. It is therefore not surprising that there is growing concern within the scientific community regarding the characterisation of the blocks that interface between the analogue and digital domains, with digital-to-analogue and analogue-to-digital converters being two good examples of such circuits. Within high-speed digital circuits, such as Flash memories, a similar role is played by the output buffers. These interface between the digital domain (the logic core) and the analogue domain (the IC package and the parasitics associated with the transmission lines), determining the integrity of the transmitted signal. In order to speed up signal integrity analysis during IC design, it is essential to have models that are simultaneously (computationally) efficient and accurate. Typically, the extraction/validation of output buffer models is done using data obtained from the simulation of a detailed (transistor-level) model or from experimental results. The latter approach does not involve intellectual property issues; however, it is rarely mentioned in the literature on output buffer characterisation. Accordingly, this PhD thesis focuses on the development of a new measurement setup for the characterisation and modelling of high-speed output buffers, with a natural extension to RF-CMOS switched-mode amplifier devices. Based on a well-defined experimental procedure, a state-of-the-art model is extracted and validated. The developed measurement setup addresses not only the integrity of the output signals but also that of the power supply bus. In order to determine the sensitivity of the estimated quantities (voltage and current) to the errors present in the various variables associated with the experimental procedure, an uncertainty analysis is also presented.
Digital Debris of Internet Art: An Allegorical and Entropic Resistance to the Epistemology of Search
Abstract:
This Ph.D. by thesis proposes a speculative lens for reading Internet Art via the concept of digital debris. In order to do so, the research explores the idea of digital debris in Internet Art from 1993 to 2011 through a series of nine case studies. Here, digital debris are understood as words typed into search engines which then disappear; bits of obsolete code lingering on the Internet; abandoned websites; broken links; or pieces of ephemeral information circulating on the Internet and used as a material by practitioners. In this context, the thesis asks: what are digital debris? The thesis argues that the digital debris of Internet Art represent an allegorical and entropic resistance to what art historian David Joselit calls the Epistemology of Search. The ambition of the research is to develop a language in between the agency of the artist and the autonomy of the algorithm, as a way of introducing Internet Art to a pluridisciplinary audience; hence the comparative studies unfolding throughout the thesis between Internet Art and pioneers in the recycling of waste in art, the use of instructions as a medium, and the programming of poetry. While many anthropological and ethnographical studies are concerned with the material object of the computer as debris once it becomes obsolete, very few studies have analysed waste as discarded data. The research shifts the focus from the industrial production of digital debris (such as pieces of hardware) to obsolete pieces of information in art practice. The research demonstrates that illustrations of such considerations can be found, for instance, in Cory Arcangel's work Data Diaries (2001), where QuickTime files are stolen, disassembled, and then re-used in new displays. The thesis also looks at Jodi's approach in Jodi.org (1993) and Asdfg (1998), where websites and hyperlinks are detourned, deconstructed, and presented in abstract collages that reveal the architecture of the Internet. The research starts in a typological manner and classifies the pieces of Internet Art according to the structure at play in the work: if some online works dealing with discarded documents offer a self-contained and closed system, others nurture the idea of openness and unpredictability. The thesis foregrounds the ideas generated through the artworks and interprets how the latter are visually constructed and displayed. Not only does the research question the status of digital debris once they are incorporated into art practice, but it also examines the method by which they are retrieved, manipulated and displayed, to submit that the digital debris of Internet Art are the result of both semantic and automated processes, rendering them both an object of discourse and a technical reality. Finally, in order to frame the serendipitous and process-based nature of digital debris, the Ph.D. concludes that digital debris are entropic; in other words, they are items of language to-be, paradoxically locked in a constant state of realisation.
Abstract:
In this paper a parallel implementation of an Adaptive Generalized Predictive Control (AGPC) algorithm is presented. Since the AGPC algorithm needs to be fed with knowledge of the plant transfer function, the parallelization of a standard Recursive Least Squares (RLS) estimator and of a GPC predictor is discussed here.
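For reference, the standard sequential RLS recursion that such an estimator parallelises looks as follows (a plain NumPy sketch of the textbook algorithm, not the parallel implementation described in the paper; the example plant is made up):

```python
# Textbook RLS identification of a simple FIR plant (reference sketch only).
import numpy as np

def rls_identify(u, y, order=2, lam=0.99, delta=100.0):
    """Estimate parameters theta so that y[n] ~ phi[n] . theta."""
    theta = np.zeros(order)
    P = delta * np.eye(order)                      # initial inverse correlation
    for n in range(order - 1, len(u)):
        phi = u[n - order + 1:n + 1][::-1]         # regressor [u[n], u[n-1], ...]
        k = P @ phi / (lam + phi @ P @ phi)        # gain vector
        theta = theta + k * (y[n] - phi @ theta)   # prediction-error update
        P = (P - np.outer(k, phi @ P)) / lam       # covariance update
    return theta

rng = np.random.default_rng(3)
u = rng.standard_normal(500)
y = np.convolve(u, [0.7, -0.2], mode='full')[:len(u)]  # made-up "plant"
print(rls_identify(u, y))                               # converges near [0.7, -0.2]
```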
Abstract:
We consider the architecture of the communication platform prototype "World Cultures in English(es)" (WCE) in relation to the interaction among different types of media and audiences. Such an architecture has required an interdisciplinary team of scholars, librarians, and Information Technology experts, who conceived the prototype. The prototype was developed using PHP and MySQL and is hosted on a University of Lisbon server. "World Cultures in English(es)" is an Open Access platform bringing together different types of documents (written, audio, visual, multimedia, and electronic) and aims at educational, cultural, social, and economic inclusiveness, in particular for users with special needs. The WCE platform strongly implies social commitment through reliable information and forms of communication suited to different kinds of audiences. The "World Cultures in English(es)" prototype will be tested by different audiences from different schools and universities, leading to the necessary adjustments.
Abstract:
The construction industry wants graduate employees skilled in relationship building and in information technology and communications (ITC). Much of the relationship building at universities has evolved through technology. Government and the ITC industry fund lobby groups to influence both educational establishments and Government to incorporate more ITC into education, and ultimately into the construction industry. This influencing ignores the technoskeptics' concerns about student disengagement through excessive online distraction. Construction studies students (n=64) and lecturers (n=16) at a construction university were surveyed to discover the impact of the use and applications of ITC. Contrary to Government and industry technopositivism, construction students and lecturers preferred hard-copy documents to online feedback for assignments and marking, more human interaction and less technological substitution, and being on campus for lectures and face-to-face meetings rather than viewing on-screen. ITC also distracted users from tasks, which, in the case of students, prevented the development of the concentration and deep thinking that a university education should deliver. The research findings run contrary to the promotions of Government, the ITC industry and ITC departments, and have implications for construction employers, where a renewed focus on human communication should mean less stress, fewer delays and fewer cost overruns.
Abstract:
Sparse matrix-vector multiplication (SMVM) is a fundamental operation in many scientific and engineering applications. In many cases sparse matrices have thousands of rows and columns in which most of the entries are zero, while the non-zero data are spread over the matrix. This poor data locality reduces the effectiveness of the data cache in general-purpose processors, considerably reducing their performance efficiency compared with dense matrix multiplication. In this paper, we propose a parallel processing solution for SMVM on a many-core architecture. The architecture is tested with known benchmarks using a ZYNQ-7020 FPGA. The architecture is scalable in the number of core elements and limited only by the available memory bandwidth. It achieves performance efficiencies of up to almost 70% and better performance than previous FPGA designs.
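For readers unfamiliar with the kernel, a scalar reference version of SMVM over a matrix stored in compressed sparse row (CSR) form is sketched below (software illustration only; the paper's contribution is a many-core FPGA implementation of this operation):

```python
# Scalar CSR reference for y = A @ x (illustrative sketch, not the FPGA design).
def spmv_csr(values, col_idx, row_ptr, x):
    """values/col_idx hold the non-zeros; row_ptr[i]:row_ptr[i+1] spans row i."""
    y = [0.0] * (len(row_ptr) - 1)
    for row in range(len(y)):
        acc = 0.0
        for k in range(row_ptr[row], row_ptr[row + 1]):
            acc += values[k] * x[col_idx[k]]
        y[row] = acc
    return y

# 3x3 example:  [[4, 0, 1], [0, 2, 0], [3, 0, 5]] times [1, 1, 1]
print(spmv_csr([4, 1, 2, 3, 5], [0, 2, 1, 0, 2], [0, 2, 3, 5], [1.0, 1.0, 1.0]))
```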