994 results for digital subtraction


Relevance: 20.00%

Abstract:

The importance of digital inclusion to Europe is obvious: as we move towards an ever more internet-communicating society, the lack of access to basic digital infrastructure for a significant segment of the population is problematic both for those individuals without access and for those providing services which should be efficient and fully utilised. The EU's 'Information Society' project has been the central plank of the European attempt to build a European digital marketplace, a concept which necessitates digital inclusion of the population at large. It is a project which prefers universal service obligations as the means of achieving inclusion. If that is to be the preferred solution, I suggest that we must also consider exclusion from the banking system, given that the Information Society is at root an economic community.

However, universal service obligations are not the only method whereby digital inclusion can be encouraged, and I posit that we may need to reconsider the role of the state as supplier of services through the concept of 'social solidarity'.

Relevance: 20.00%

Abstract:

Traditionally, education and training in pathology have been delivered using textbooks, glass slides and conventional microscopy. Over the last two decades, the number of web-based pathology resources has expanded dramatically, with centralized pathological resources being delivered to many students simultaneously. More recently, whole-slide imaging technology has allowed glass slides to be scanned and viewed on a computer screen via dedicated software. This technology, referred to as virtual microscopy, has created enormous opportunities in pathology training and education. Students are able to learn key histopathological skills, e.g. identifying areas of diagnostic relevance within an entire slide, via a web-based computer environment; students no longer need to be in the same room as the slides. New human–computer interfaces are also being developed using more natural touch technology to enhance the manipulation of digitized slides. Several major initiatives are also underway introducing online competency and diagnostic decision analysis using virtual microscopy, and these have important future roles in accreditation and recertification. Finally, researchers are investigating how pathological decision-making is achieved using virtual microscopy and modern eye-tracking devices. Virtual microscopy and digital pathology will continue to improve how pathology training and education are delivered.

Relevance: 20.00%

Abstract:

The value proposition concept, while forming a central foundational premise of service-dominant (S-D) logic, has nevertheless been treated somewhat ambiguously. Recent work attempting to address this has focused, through an S-D logic lens, on the reciprocal nature of value propositions. Important to this work has been a focus on communicative interactions and resource integration between network suppliers and customers. Overall, however, value proposition research has not studied in detail how value propositions are adopted and used in practice. Given the compelling notion of reciprocity, there have been recent calls for research to consider reciprocal value propositions in practice. The overall aim of this paper, therefore, was to explore how reciprocal value propositions are developed (or not) in practice at the network level. The study was set in the mobile television (TV) sector which, as an internet-driven sector, is viewed as particularly pertinent. To conduct the study, an S-D logic and Industrial Marketing and Purchasing (IMP) Group framework are integrated for the first time. A key finding is that while the reciprocal value proposition concept is theoretically intuitive, it is by no means inevitable in practice. Reciprocal value propositions were found to be simultaneously constrained and, potentially, enabled by those constraints in practice. At an overall level, this paper contributes to the ongoing collaborative process which aims to move S-D logic from a framework to a theory. More specifically, we provide new insights into the development of reciprocal value propositions in practice.

Relevance: 20.00%

Abstract:

Although a substantial corpus of digital materials is now available to scholarship across the disciplines, objective evidence of its use, impact, and value, based on robust assessment, is sparse. Traditional methods of assessing impact in the humanities, notably citation in scholarly publications, are not an effective way of assessing the impact of digital content. These issues are particularly problematic in the field of Digital Humanities, where impact must be assessed effectively to justify continued funding and existence. A number of qualitative and quantitative methods exist that can be used to monitor the use of digital resources in various contexts, although they have yet to be applied widely. These have been made available to the creators, managers, and funders of digital content in an accessible form through the TIDSR (Toolkit for the Impact of Digital Scholarly Resources) developed by the Oxford Internet Institute. In 2011, the authors of this article developed the SPHERE project (Stormont Parliamentary Hansards: Embedded in Research and Education) specifically to use TIDSR to evaluate the use and impact of The Stormont Papers, a digital collection of the Hansards of the Stormont Northern Irish Parliament from 1921 to 1972. This article presents the methodology, findings, and analysis of the project. The authors argue that TIDSR is a useful and, critically, transferable method for understanding and increasing the impact of digital resources. The findings of the project are developed into a series of wider recommendations on protecting the investment in digital resources by increasing their use, value, and impact. It is reasonable to suggest that effectively showing the impact of Digital Humanities is critical to its survival.

Relevance: 20.00%

Abstract:

This paper compares the applicability of three ground survey methods for modelling terrain: one-man electronic tachymetry (TPS), real-time kinematic GPS (GPS), and terrestrial laser scanning (TLS). The vertical accuracy of digital terrain models (DTMs) derived from GPS, TLS and airborne laser scanning (ALS) data is assessed. Point elevations acquired by the four methods represent two sections of a mountainous area in Cumbria, England, chosen so that the presence of non-terrain features was kept to a minimum. The vertical accuracy of the DTMs was assessed by subtracting each DTM from the TPS point elevations. The error was examined using exploratory measures including summary statistics, histograms, and normal probability plots. The results showed that the internal measurement accuracy of TPS, GPS, and TLS was below a centimetre. TPS and GPS can be considered equally applicable alternatives for sampling the terrain in areas accessible on foot. The highest DTM vertical accuracy was achieved with GPS data, both on sloped terrain (RMSE 0.16 m) and flat terrain (RMSE 0.02 m). TLS surveying was the most efficient overall, but the veracity of the terrain representation was affected by dense vegetation cover: DTM accuracy was therefore lowest for the sloped area with dense bracken (RMSE 0.52 m), although it was the second highest on the flat, unobscured terrain (RMSE 0.07 m). ALS data represented the sloped terrain more realistically (RMSE 0.23 m) than the TLS data did. However, due to a systematic bias identified on the flat terrain, the ALS DTM accuracy there was the lowest (RMSE 0.29 m), an error above the level stated by the data provider. Error distributions were more closely approximated by a normal distribution defined using the median and normalized median absolute deviation, which supports the use of robust measures in DEM error modelling and its propagation. © 2012 Elsevier Ltd.
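The two accuracy measures the abstract relies on, RMSE and the normalized median absolute deviation (NMAD), can be sketched in a few lines. This is a minimal illustration of the statistics, not the study's code, and the residual values below are invented for the example:

```python
import math
import statistics

def rmse(residuals):
    """Root mean square error of DTM-minus-checkpoint elevation residuals."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

def nmad(residuals):
    """Normalized median absolute deviation: 1.4826 * median(|r - median(r)|).
    A robust spread estimate that matches the standard deviation for normally
    distributed errors but resists outliers such as vegetation returns."""
    med = statistics.median(residuals)
    return 1.4826 * statistics.median(abs(r - med) for r in residuals)

# Illustrative residuals in metres (not data from the study): one outlier
# inflates the RMSE far more than it inflates the robust NMAD.
residuals = [0.01, -0.02, 0.03, 0.00, -0.01, 0.52]
print(rmse(residuals), nmad(residuals))
```

The contrast between the two numbers on contaminated residuals is what motivates the robust measures the authors recommend for DEM error modelling.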

Relevance: 20.00%

Abstract:

A 64-point Fourier transform chip is described that performs a forward or inverse 64-point Fourier transform on complex two's-complement data supplied at a rate of 13.5 MHz, and can operate at clock rates of up to 40 MHz under worst-case conditions. It uses a 0.6 µm double-level-metal CMOS technology, contains 535k transistors and uses an internal 3.3 V power supply. It has an area of 7.8 × 8 mm, dissipates 0.9 W, has 48 pins and is housed in an 84-pin PLCC plastic package. The chip is based on an FFT architecture developed from first principles through a detailed investigation of the structure of the relevant DFT matrix, and through mapping repetitive blocks within this matrix onto a regular silicon structure.
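As an illustration of the transform the chip computes (not of its silicon architecture), a direct O(N²) evaluation of the 64-point forward and inverse DFT can serve as a reference model; the input values are arbitrary:

```python
import cmath

N = 64  # transform length, matching the chip described above

def dft(x, inverse=False):
    """Direct evaluation of the length-64 DFT matrix: a slow but exact
    reference for the forward/inverse transform the chip implements."""
    sign = 1 if inverse else -1
    X = [sum(x[n] * cmath.exp(sign * 2j * cmath.pi * k * n / N)
             for n in range(N)) for k in range(N)]
    if inverse:
        X = [v / N for v in X]  # 1/N scaling on the inverse transform
    return X

# Round trip: inverse(forward(x)) recovers x up to floating-point rounding.
x = [complex(n % 7, -n % 5) for n in range(N)]
y = dft(dft(x), inverse=True)
assert max(abs(a - b) for a, b in zip(x, y)) < 1e-9
```

A fast architecture like the chip's reorganises exactly this matrix-vector product by exploiting the repetitive block structure of the DFT matrix.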

Relevance: 20.00%

Abstract:

The application of fine grain pipelining techniques in the design of high performance Wave Digital Filters (WDFs) is described. It is shown that significant increases in the sampling rate of bit parallel circuits can be achieved using most significant bit (msb) first arithmetic. A novel VLSI architecture for implementing two-port adaptor circuits is described which embodies these ideas. The circuit in question is highly regular, uses msb first arithmetic and is implemented using simple carry-save adders. © 1992 Kluwer Academic Publishers.
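The carry-save adders mentioned above are the building block that makes fine-grain pipelining possible: three addends are reduced to a sum word and a carry word with no carry ripple across bit positions. A bit-level sketch of one such step (an illustration of the principle, not the adaptor circuit itself):

```python
def carry_save_add(a, b, c, width=8):
    """One carry-save step: reduce three addends to a (sum, carry) pair
    without propagating carries across bit positions -- each position is
    computed independently, which is what keeps the pipeline stages short."""
    s, cy = 0, 0
    for i in range(width):
        bits = ((a >> i) & 1) + ((b >> i) & 1) + ((c >> i) & 1)
        s |= (bits & 1) << i          # sum bit stays at position i
        cy |= (bits >> 1) << (i + 1)  # carry bit is deferred to position i+1
    return s, cy

# The redundant (sum, carry) pair represents a + b + c; one ordinary
# addition at the end resolves it to the conventional result.
s, cy = carry_save_add(13, 22, 9)
assert s + cy == 13 + 22 + 9
```

Because no position waits on a carry from below, the critical path of each stage is a single full adder, independent of wordlength.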

Relevance: 20.00%

Abstract:

A number of high-performance VLSI architectures for real-time image coding applications are described. In particular, attention is focused on circuits for computing the 2-D DCT (discrete cosine transform) and for 2-D vector quantization. The former circuits are based on Winograd algorithms and comprise a number of bit-level systolic arrays with a bit-serial, word-parallel input. The latter circuits exhibit a similar data organization and consist of a number of inner product array circuits. Both circuits are highly regular and allow extremely high data rates to be achieved through extensive use of parallelism.
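The vector-quantization circuits above are described as inner product arrays, because a full nearest-codeword search under squared Euclidean distance reduces to inner products: the nearest codeword c maximizes ⟨x, c⟩ − |c|²/2. A small functional sketch of that reduction (the codebook and block below are invented for illustration):

```python
def quantize(block, codebook):
    """Full-search vector quantization of an image block. Minimizing
    |x - c|^2 over codewords c is equivalent to maximizing
    <x, c> - |c|^2 / 2, so the search needs only inner products --
    the operation the systolic array circuits parallelize."""
    def score(c):
        return sum(x * y for x, y in zip(block, c)) - sum(y * y for y in c) / 2
    return max(range(len(codebook)), key=lambda i: score(codebook[i]))

# Illustrative 2x2 image blocks flattened to length-4 vectors.
codebook = [[0, 0, 0, 0], [8, 8, 8, 8], [0, 8, 0, 8]]
assert quantize([7, 9, 8, 8], codebook) == 1  # nearest to the all-8s codeword
```

The |c|²/2 terms are fixed per codeword and can be precomputed, so the per-block work is purely the bank of inner products, one per codeword.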

Relevance: 20.00%

Abstract:

A systematic design methodology is described for the rapid derivation of VLSI architectures for implementing high-performance recursive digital filters, particularly ones based on most significant digit (msd) first arithmetic. The method has been derived by undertaking theoretical investigations of msd-first multiply-accumulate algorithms and by deriving important relationships governing the dependencies between circuit latency, levels of pipelining, and the range and number representations of filter operands. The techniques described are general and can be applied to both bit-parallel and bit-serial circuits, including those based on on-line arithmetic. The method is illustrated by applying it to the design of a number of highly pipelined bit-parallel IIR and wave digital filter circuits. It is shown that established architectures, which were previously designed using heuristic techniques, can be derived directly from the equations described.

Relevance: 20.00%

Abstract:

The application of fine-grain pipelining techniques in the design of high-performance wave digital filters (WDFs) is described. The problems of latency in feedback loops can be significantly reduced if computations are organized most significant, as opposed to least significant, bit first and if the results are fed back as soon as they are formed. The result is that chips can be designed which offer significantly higher sampling rates than otherwise can be obtained using conventional methods. How these concepts can be extended to the more challenging problem of WDFs is discussed. It is shown that significant increases in the sampling rate of bit-parallel circuits can be achieved using most significant bit first arithmetic.

Relevance: 20.00%

Abstract:

This paper describes how worst-case error analysis can be applied to solve some of the practical issues in the development and implementation of a low power, high performance radix-4 FFT chip for digital video applications. The chip has been fabricated using a 0.6 µm CMOS technology and can perform a 64-point complex forward or inverse FFT on real-time video at up to 18 Megasamples per second. It comprises 0.5 million transistors in a die area of 7.8 × 8 mm and dissipates 1 W, leading to a cost-effective silicon solution for high quality video processing applications. The analysis focuses on the effect that different radix-4 architectural configurations and finite wordlengths have on the FFT output dynamic range. These issues are addressed using both mathematical error models and extensive simulation.
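The simulation side of such an analysis can be sketched by inserting a rounding step after each butterfly stage and comparing the result against a full-precision reference. The sketch below uses a radix-2 FFT as a simple stand-in for the chip's radix-4 datapath, and the wordlength and input data are illustrative only:

```python
import cmath
import random

def fix(z, frac_bits):
    """Round a complex value to fixed point with the given number of
    fractional bits -- an idealized model of a finite internal wordlength."""
    q = 2 ** frac_bits
    return complex(round(z.real * q) / q, round(z.imag * q) / q)

def fft(x, frac_bits=None):
    """Radix-2 decimation-in-time FFT with optional per-stage rounding,
    standing in for the radix-4 datapath to show how a wordlength choice
    propagates into output error."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2], frac_bits)
    odd = fft(x[1::2], frac_bits)
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        a, b = even[k] + t, even[k] - t
        if frac_bits is not None:       # model the finite-wordlength datapath
            a, b = fix(a, frac_bits), fix(b, frac_bits)
        out[k], out[k + n // 2] = a, b
    return out

random.seed(0)
x = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(64)]
exact = fft(x)                  # double-precision reference
rough = fft(x, frac_bits=8)     # 8 fractional bits per stage (illustrative)
err = max(abs(a - b) for a, b in zip(exact, rough))
# err shrinks as frac_bits grows; sweeping it complements the worst-case
# mathematical error models described above.
```

Sweeping `frac_bits` and the stage ordering in a model like this is one way to explore the dynamic-range trade-offs the paper analyses formally.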