1000 results for Hipocratismo digital
Abstract:
This paper compares the applicability of three ground survey methods for modelling terrain: one-man electronic tachymetry (TPS), real-time kinematic GPS (GPS), and terrestrial laser scanning (TLS). The vertical accuracy of digital terrain models (DTMs) derived from GPS, TLS, and airborne laser scanning (ALS) data is assessed. Point elevations acquired by the four methods represent two sections of a mountainous area in Cumbria, England, chosen so as to minimize the presence of non-terrain features. The vertical accuracy of the DTMs was assessed by subtracting each DTM from the TPS point elevations, and the error was examined using exploratory measures including summary statistics, histograms, and normal probability plots. The results showed that the internal measurement accuracy of TPS, GPS, and TLS was below a centimetre. TPS and GPS can be considered equally applicable alternatives for sampling terrain in areas accessible on foot. The highest DTM vertical accuracy was achieved with GPS data, on both sloped terrain (RMSE 0.16 m) and flat terrain (RMSE 0.02 m). TLS surveying was the most efficient overall, but the fidelity of the terrain representation was limited by dense vegetation cover: DTM accuracy was therefore lowest for the sloped area with dense bracken (RMSE 0.52 m), although it was the second highest on the flat, unobscured terrain (RMSE 0.07 m). ALS data represented the sloped terrain more realistically (RMSE 0.23 m) than the TLS data; however, owing to a systematic bias identified on the flat terrain, the ALS DTM accuracy there was the lowest (RMSE 0.29 m), exceeding the error level stated by the data provider. The error distributions were more closely approximated by a normal distribution defined using the median and normalized median absolute deviation, which supports the use of robust measures in DEM error modelling and error propagation. © 2012 Elsevier Ltd.
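As a rough illustration of the accuracy measures discussed in this abstract (a minimal sketch, not the authors' processing code), the snippet below computes the conventional RMSE together with the robust median and normalized median absolute deviation (NMAD) from a hypothetical array of elevation residuals (DTM elevation minus TPS check-point elevation); the residual values shown are made up.

```python
# Illustrative sketch only: conventional and robust DTM accuracy measures
# computed from a hypothetical set of elevation residuals (in metres).
import numpy as np

def dtm_error_stats(residuals):
    r = np.asarray(residuals, dtype=float)
    rmse = np.sqrt(np.mean(r ** 2))                 # conventional RMSE
    median = np.median(r)                           # robust centre of the error distribution
    nmad = 1.4826 * np.median(np.abs(r - median))   # normalized median absolute deviation
    return {"rmse": rmse, "median": median, "nmad": nmad}

# Example with synthetic residuals (values are not from the study):
print(dtm_error_stats([0.03, -0.02, 0.05, 0.10, -0.04, 0.01]))
```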
Abstract:
A 64-point Fourier transform chip is described that performs a forward or inverse 64-point Fourier transform on complex two's complement data supplied at a rate of 13.5 MHz and can operate at clock rates of up to 40 MHz under worst-case conditions. It uses a 0.6 µm double-level-metal CMOS technology, contains 535k transistors, and uses an internal 3.3 V power supply. It has an area of 7.8 × 8 mm, dissipates 0.9 W, has 48 pins, and is housed in an 84-pin PLCC plastic package. The chip is based on an FFT architecture developed from first principles through a detailed investigation of the structure of the relevant DFT matrix and through mapping repetitive blocks within this matrix onto a regular silicon structure.
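For context on the "DFT matrix" view of the transform referred to above (a purely illustrative sketch, not the chip's fixed-point datapath), the snippet below builds the 64-point DFT matrix explicitly and checks it against a library FFT.

```python
# Illustrative only: the 64-point DFT expressed as a matrix-vector product,
# checked against a library FFT. This mirrors the matrix view of the transform,
# not the chip's architecture or arithmetic.
import numpy as np

N = 64
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)        # DFT matrix: W[k, m] = exp(-2*pi*j*k*m/N)

x = np.random.randn(N) + 1j * np.random.randn(N)    # complex test data
assert np.allclose(W @ x, np.fft.fft(x))            # matrix product matches the FFT
```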
Abstract:
The application of fine-grain pipelining techniques in the design of high-performance wave digital filters (WDFs) is described. It is shown that significant increases in the sampling rate of bit-parallel circuits can be achieved using most significant bit (msb) first arithmetic. A novel VLSI architecture for implementing two-port adaptor circuits, which embodies these ideas, is described. The circuit in question is highly regular, uses msb-first arithmetic, and is implemented using simple carry-save adders. © 1992 Kluwer Academic Publishers.
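For readers unfamiliar with the two-port adaptor, the sketch below shows the standard symmetric two-port adaptor wave equations in floating point. It is only a reference for what the block computes; the VLSI circuit described above realizes an equivalent computation with msb-first, carry-save arithmetic, which this sketch does not model.

```python
# Minimal floating-point sketch of a symmetric two-port adaptor as used in
# wave digital filters: incident waves (a1, a2) map to reflected waves (b1, b2)
# via a single adaptor coefficient alpha.
def two_port_adaptor(a1, a2, alpha):
    d = alpha * (a2 - a1)
    b1 = a2 + d
    b2 = a1 + d
    return b1, b2

print(two_port_adaptor(0.5, -0.25, 0.375))  # example values, chosen arbitrarily
```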
Abstract:
A number of high-performance VLSI architectures for real-time image coding applications are described. In particular, attention is focused on circuits for computing the 2-D DCT (discrete cosine transform) and on circuits for 2-D vector quantization. The former are based on Winograd algorithms and comprise a number of bit-level systolic arrays with a bit-serial, word-parallel input. The latter exhibit a similar data organization and consist of a number of inner-product array circuits. Both families of circuits are highly regular and achieve extremely high data rates through extensive use of parallelism.
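As a point of reference for the transform being computed (not the Winograd/systolic formulation used in the paper), an 8×8 2-D DCT can be expressed as two separable passes of the orthonormal 1-D DCT-II, as in the sketch below.

```python
# Reference-level sketch of a separable 2-D DCT-II on an 8x8 block.
# The circuits described above use Winograd-style algorithms and bit-level
# systolic arrays; this shows only the mathematical transform they compute.
import numpy as np

def dct_matrix(n):
    k = np.arange(n).reshape(-1, 1)
    m = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)      # orthonormal scaling of the DC row
    return c

def dct2(block):
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T          # row transform, then column transform

block = np.random.randn(8, 8)
coeffs = dct2(block)
```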
Abstract:
A systematic design methodology is described for the rapid derivation of VLSI architectures for implementing high-performance recursive digital filters, particularly ones based on most significant digit (msd) first arithmetic. The method has been derived by undertaking theoretical investigations of msd-first multiply-accumulate algorithms and by deriving important relationships governing the dependencies between circuit latency, levels of pipelining, and the range and number representations of filter operands. The techniques described are general and can be applied to both bit-parallel and bit-serial circuits, including those based on on-line arithmetic. The method is illustrated by applying it to the design of a number of highly pipelined bit-parallel IIR and wave digital filter circuits. It is shown that established architectures, which were previously designed using heuristic techniques, can be derived directly from the equations described.
Abstract:
The application of fine-grain pipelining techniques in the design of high-performance wave digital filters (WDFs) is described. The problems of latency in feedback loops can be significantly reduced if computations are organized most significant, as opposed to least significant, bit first and if the results are fed back as soon as they are formed. As a result, chips can be designed that offer significantly higher sampling rates than can otherwise be obtained using conventional methods. How these concepts can be extended to the more challenging problem of WDFs is discussed, and it is shown that significant increases in the sampling rate of bit-parallel circuits can be achieved using most significant bit first arithmetic.
Abstract:
This paper describes how worst-case error analysis can be applied to solve some of the practical issues in the development and implementation of a low-power, high-performance radix-4 FFT chip for digital video applications. The chip has been fabricated using a 0.6 µm CMOS technology and can perform a 64-point complex forward or inverse FFT on real-time video at up to 18 Megasamples per second. It comprises 0.5 million transistors in a die area of 7.8 × 8 mm and dissipates 1 W, providing a cost-effective silicon solution for high-quality video processing applications. The analysis focuses on the effect that different radix-4 architectural configurations and finite wordlengths have on the FFT output dynamic range. These issues are addressed using both mathematical error models and extensive simulation.
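As a rough illustration of the kind of finite-wordlength experiment described (an assumption-laden sketch, not the authors' error model), one can quantize the input samples and twiddle factors of a 64-point FFT to a chosen number of fractional bits and compare the result against a double-precision reference. The radix-2 structure and wordlengths below are illustrative assumptions, not the chip's radix-4 configuration.

```python
# Crude fixed-point simulation: quantize the input and twiddle factors of a
# 64-point radix-2 FFT to 'bits' fractional bits and measure the output error
# against a double-precision reference FFT.
import numpy as np

def quantize(x, bits):
    scale = 2.0 ** bits
    return np.round(x * scale) / scale

def fixed_point_fft(x, bits):
    x = quantize(x.real, bits) + 1j * quantize(x.imag, bits)
    n = len(x)
    if n == 1:
        return x
    even = fixed_point_fft(x[0::2], bits)
    odd = fixed_point_fft(x[1::2], bits)
    k = np.arange(n // 2)
    tw = quantize(np.cos(-2 * np.pi * k / n), bits) + \
         1j * quantize(np.sin(-2 * np.pi * k / n), bits)
    t = quantize((tw * odd).real, bits) + 1j * quantize((tw * odd).imag, bits)
    return np.concatenate([even + t, even - t])

x = (np.random.rand(64) - 0.5) + 1j * (np.random.rand(64) - 0.5)
err = fixed_point_fft(x, bits=10) - np.fft.fft(x)
print("peak output error:", np.max(np.abs(err)))
```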
Abstract:
The need to account for the effect of design decisions on manufacture, and the impact of manufacturing cost on the life cycle cost of any product, is well established. In this context, digital design and manufacturing solutions have to be developed further to facilitate and automate the integration of cost as one of the major drivers in product life cycle management. This article presents an integration methodology for implementing cost estimation capability within a digital manufacturing environment. A digital manufacturing structure of knowledge databases is set out, and an ontology of assembly and part costing that is consistent with this structure is provided. Although the methodology is currently used for recurring cost prediction, it can equally be applied to other functional developments, such as process planning. A prototype tool is developed to integrate both assembly time cost and part manufacturing costs within the same digital environment, and an industrial example is used to validate the approach.
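To make the integration idea concrete, a hypothetical and deliberately simplified data structure for rolling part manufacturing costs and assembly time cost up through a product structure might look like the sketch below. The class names, attributes, and rates are assumptions for illustration, not the ontology defined in the article.

```python
# Hypothetical sketch of a cost roll-up over a product structure, in the spirit
# of the assembly/part costing described above. Names, rates, and the
# time-to-cost conversion are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Part:
    name: str
    manufacturing_cost: float          # recurring cost of making the part

@dataclass
class Assembly:
    name: str
    parts: List[Part] = field(default_factory=list)
    sub_assemblies: List["Assembly"] = field(default_factory=list)
    assembly_time_h: float = 0.0       # estimated assembly time in hours
    labour_rate: float = 60.0          # assumed cost per hour of assembly labour

    def total_cost(self) -> float:
        cost = sum(p.manufacturing_cost for p in self.parts)
        cost += sum(s.total_cost() for s in self.sub_assemblies)
        cost += self.assembly_time_h * self.labour_rate
        return cost

panel = Assembly("panel", parts=[Part("skin", 120.0), Part("stiffener", 45.0)],
                 assembly_time_h=1.5)
print(panel.total_cost())
```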
Abstract:
This paper examines the applicability of a digital manufacturing framework to the implementation of a Value Driven Design (VDD) approach for the development of a stiffened composite panel. It presents a means by which environmental considerations can be integrated with conventional product and process design drivers within a customized digital environment. A composite forming process is used as an exemplar for the work, which creates a collaborative environment for integrating more traditional design drivers with parameters related to manufacturability as well as with more sustainable processes and products. The environmental stakeholder is introduced to the VDD process through a customized product/process/resource (PPR) environment in which application-specific power consumption and material waste data have been measured and characterised in the process design interface. This allows the manufacturing planner to consider power consumption as a concurrent design driver and enables the inclusion of energy as a parameter in a VDD approach to the development of efficiently manufactured, sustainable transport systems.
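A minimal sketch of how energy and waste might enter such a value assessment is shown below. This is an assumption on our part, not the article's value model: the linear form, the weights, and the parameter names are illustrative only.

```python
# Hypothetical sketch: scoring a process option by combining a conventional
# cost driver with measured power consumption and material waste, so that
# energy acts as a concurrent design driver alongside cost.
def process_value(revenue, manufacturing_cost, energy_kwh, waste_kg,
                  energy_price=0.15, waste_penalty=2.0):
    # Treat energy and waste as additional cost terms against expected revenue.
    return revenue - manufacturing_cost - energy_price * energy_kwh - waste_penalty * waste_kg

# Compare two hypothetical forming-process options for the stiffened panel:
print(process_value(revenue=900.0, manufacturing_cost=400.0, energy_kwh=120.0, waste_kg=8.0))
print(process_value(revenue=900.0, manufacturing_cost=430.0, energy_kwh=60.0, waste_kg=5.0))
```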
Abstract:
In this paper we seek to show how marketing activities inscribe value on business model innovation, understood as an act, or sequence of socially interconnecting acts. Theoretically, we ask two interlinked questions: (1) how can value inscriptions contribute to business model innovations? and (2) how can marketing activities support the inscription of value on business model innovations? Semi-structured, in-depth interviews were conducted with thirty-seven members drawn from four industrial projects commercializing disruptive digital innovations. Various individuals from a diverse range of firms are shown to cast relevant components of their agency and knowledge on business model innovations through negotiation as an ongoing social process. Value inscription is mutually constituted from the marketing activities, interactions, and negotiations of multiple project members across firms and functions, countering the destabilizing forces and tensions arising from the commercialization of disruptive digital innovations. This contributes to recent conceptual thinking in the industrial marketing literature, which views business models as situated within dynamic business networks and a context-led evolutionary process. A contribution is also made to the debate in the marketing literature around marketing's boundary-spanning role, with marketing activities shown to span and navigate across functions and firms in supporting value inscriptions on business model innovations.