988 results for Coding theory.


Relevance:

30.00%

Publisher:

Abstract:

Research into domain-specific ontologies is difficult to treat empirically, because it is hard to ground a domain ontology while simultaneously remaining true to its guiding philosophy or theory. Further, ontology generation is often introspective and reflective, or relies on experts; even expert-driven generation lacks rigour and tends to be ad hoc. We ask how Grounded Theory can be used to generate domain-specific ontologies where appropriate high-level theory and suitable textual data sources are available. We are generating a domain ontology for the discipline of information systems by applying the Grounded Theory method. Specifically, we use Roman Ingarden's theory of scientific works to seed a coding family and adapt the method to ask relevant questions when analysing rich textual data. We have found that a guiding ontological theory, such as Ingarden's, can be used to seed a coding family, giving rise to a viable method for generating ontologies for research. This is significant because Grounded Theory may be one of the key methods for generating ontologies where substantial uniform-quality text is available to the ontologist. We also present our partial analysis of information systems research.

Relevance:

30.00%

Publisher:

Abstract:

A person-centred approach to care in residential aged care facilities should uphold residents' rights to independence, choice, decision-making, participation, and control over their lifestyle. Little is known about how nurses and personal care assistants working in these facilities uphold these ideals when assisting residents to maintain continence and manage incontinence. The overall aim of the study was to develop a grounded theory to describe and explain how Australian residents of aged care facilities have their continence care needs determined, delivered and communicated. This paper presents and discusses a subset of the findings about the ethical challenges nurses and personal care assistants encountered whilst providing continence care. Grounded theory methodology was used for in-depth interviews with 18 nurses and personal care assistants who had experience of providing, supervising or assessing continence care in any Australian residential aged care facility, and to analyse 88 hours of field observations in two facilities. Data generation and analysis occurred simultaneously, using open coding, theoretical coding, and selective coding, until data were saturated. While addressing the day-to-day needs of residents who needed help to maintain continence and/or manage incontinence, nurses and personal care assistants struggled to enable residents to exercise choice and autonomy. The main factor contributing to this problem was the fact that nurses and personal care assistants had to respond to multiple, competing, and conflicting expectations about residents' care needs. This situation was compounded by workforce constraints, inadequate information about residents' care needs, and an unpredictable work environment. Providing continence care accentuated the ethical tensions associated with caregiving. Nurses' and personal care assistants' responses were mainly characterised by highly protective behaviours towards residents. Underlying structural factors that hinder high-quality continence care for residents of aged care facilities should be urgently addressed.

Relevance:

30.00%

Publisher:

Abstract:

Network coding has shown the promise of significant throughput improvement. In this paper, we study network throughput using network coding and explore how the maximum throughput can be achieved in a two-way relay wireless network. Unlike previous studies, we consider a more general network with an arbitrary structure of overhearing status between receivers and transmitters. To efficiently utilize coding opportunities, we introduce the concept of network coding cliques (NCCs), upon which a formal analysis of the network throughput using network coding is elaborated. In particular, we derive a closed-form expression for the network throughput under a given traffic load in a slotted ALOHA network with basic medium access control. Furthermore, the maximum throughput, as well as the optimal medium access probability at each node, is studied under various network settings. Our theoretical findings are also validated by simulation.
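The closed-form throughput expression itself is not reproduced in the abstract. As background only (the standard slotted-ALOHA success analysis, not the paper's NCC result), if N contending nodes each transmit in a slot with probability q, the expected number of collision-free transmissions per slot and its maximizer are

S(q) = N\,q\,(1-q)^{N-1}, \qquad q^{*} = \frac{1}{N}, \qquad S(q^{*}) = \left(1 - \frac{1}{N}\right)^{N-1} \to \frac{1}{e} \ \text{as } N \to \infty.

Network coding improves on this baseline because one successfully received coded transmission can serve several flows at once; the paper's NCC analysis quantifies that gain for arbitrary overhearing structures.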

Relevance:

30.00%

Publisher:

Abstract:

Multicast is an important mechanism in modern wireless networks and has attracted significant effort to improve its performance with respect to metrics including throughput, delay, and energy efficiency. Traditionally, an ideal loss-free channel model is widely used to facilitate routing protocol design. However, the quality of wireless links is degraded by many factors, such as collisions, fading, and environmental noise, resulting in transmission failures. In this paper, we propose CodePipe, a reliable multicast protocol with high energy efficiency, high throughput and fairness in lossy wireless networks. Building upon opportunistic routing and random linear network coding, CodePipe not only eliminates coordination between nodes, but also improves multicast throughput significantly by exploiting both intra-batch and inter-batch coding opportunities. In particular, four key techniques, namely an LP-based opportunistic routing structure, opportunistic feeding, fast batch moving and inter-batch coding, are proposed to offer significant improvements in throughput, energy efficiency and fairness. Moreover, we design an efficient online extension of CodePipe so that it can work in a dynamic network where nodes join and leave as time progresses. We evaluate CodePipe in the ns2 simulator by comparing it with two other state-of-the-art multicast protocols, MORE and Pacifier. Simulation results show that CodePipe significantly outperforms both of them.
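As a minimal, hedged sketch of the intra-batch random linear network coding that CodePipe builds on (a generic GF(2) illustration; the batch size, packet contents and function names are assumptions, not CodePipe's implementation), a sender transmits random XOR combinations of a batch of source packets, and a receiver recovers the batch by Gaussian elimination once it has collected enough linearly independent combinations:

import numpy as np

def rlnc_encode(batch, rng):
    # One coded packet: a random GF(2) combination of the batch packets,
    # returned together with its coefficient vector (the "coding header").
    k = len(batch)
    coeffs = rng.integers(0, 2, size=k, dtype=np.uint8)
    if not coeffs.any():
        coeffs[rng.integers(k)] = 1  # avoid the useless all-zero combination
    payload = np.bitwise_xor.reduce([p for p, c in zip(batch, coeffs) if c])
    return coeffs, payload

def rlnc_decode(coded, k):
    # Gaussian elimination over GF(2); recovers the k original packets once
    # k linearly independent coded packets are available, else raises.
    A = np.array([c for c, _ in coded], dtype=np.uint8)
    B = np.array([p for _, p in coded], dtype=np.uint8)
    row = 0
    for col in range(k):
        pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
        if pivot is None:
            raise ValueError("need more coded packets")
        A[[row, pivot]], B[[row, pivot]] = A[[pivot, row]], B[[pivot, row]]
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                B[r] ^= B[row]
        row += 1
    return [B[i] for i in range(k)]

rng = np.random.default_rng(0)
batch = [rng.integers(0, 256, size=8, dtype=np.uint8) for _ in range(4)]  # 4 source packets
received = []
while True:
    received.append(rlnc_encode(batch, rng))  # coded packets arriving one by one
    try:
        recovered = rlnc_decode(received, k=4)
        break
    except ValueError:
        continue  # rank still deficient; keep collecting
assert all((a == b).all() for a, b in zip(batch, recovered))

Exploiting inter-batch coding opportunities, as CodePipe also does, goes beyond this single-batch sketch.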

Relevance:

30.00%

Publisher:

Abstract:

Magnetic resonance (MR) images not only exhibit sparsity, but their sparsity takes a certain predictable shape that is common to all kinds of images. This region-based, localised sparsity can be used to remove random thermal noise from MR images. This paper presents a simple framework that exploits the sparsity of MR images for image de-noising. Because noise in MR images tends to change its shape with contrast level and with the signal itself, the proposed method is independent of noise shape and type, and it can be used in combination with other methods.
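The abstract does not detail the framework's steps. As a generic, hedged sketch of transform-domain sparsity de-noising (a standard DCT soft-thresholding baseline, not the paper's region-based method; the threshold and phantom are arbitrary), the idea is to keep the few large coefficients that carry the image and shrink the rest, which mostly carry noise:

import numpy as np
from scipy.fft import dctn, idctn

def sparse_denoise(image, threshold):
    # Transform, soft-threshold the (sparse) coefficients, transform back.
    coeffs = dctn(image, norm='ortho')
    shrunk = np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)
    return idctn(shrunk, norm='ortho')

rng = np.random.default_rng(1)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                                # piecewise-smooth phantom
noisy = clean + 0.1 * rng.standard_normal(clean.shape)   # additive noise stand-in
denoised = sparse_denoise(noisy, threshold=0.3)

Noise in MR images changes shape with contrast level and with the signal itself, as noted above; the additive Gaussian noise here is only a stand-in for that.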

Relevance:

30.00%

Publisher:

Abstract:

Wireless mesh networks are widely applied in many fields, such as industrial control, environmental monitoring, and military operations. Network coding is a promising technology that can improve the performance of wireless mesh networks. In particular, network coding suits wireless mesh networks because the fixed backbone of a wireless mesh is usually not energy-constrained. However, coding collision is a severe problem affecting network performance. To avoid it, routing should be designed with an optimal combination of coding opportunity and coding validity. In this paper, we propose a Connected Dominating Set (CDS)-based and Flow-oriented Coding-aware Routing (CFCR) mechanism to actively increase potential coding opportunities. Our work provides two major contributions. First, it effectively deals with the coding collision problem of flows by introducing the information conformation process, which effectively decreases the failure rate of decoding. Second, our routing process considers the benefit of the CDS and flow coding simultaneously. Through a formalized analysis of the routing parameters, CFCR can choose an optimized route with reliable transmission and small cost. Our evaluation shows that CFCR has a lower packet loss ratio and higher throughput than existing methods such as Adaptive Control of Packet Overhead in XOR Network Coding (ACPO) and Distributed Coding-Aware Routing (DCAR).
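Coding-aware routing hinges on detecting when packets of different flows can be XOR-coded at a relay while every next hop can still decode its own packet. Below is a minimal, hedged sketch of that classic test (packet identifiers and the overhearing state are illustrative; CFCR's actual decision additionally weighs CDS membership, coding validity and collision avoidance as described above):

def can_code_together(packets, overheard):
    # COPE-style test: a relay may XOR a set of packets into one broadcast if
    # every next hop already holds (has sent or overheard) all the other
    # packets in the set, so it can cancel them out and decode its own.
    #   packets:   list of (packet_id, next_hop) pairs queued at the relay
    #   overheard: dict next_hop -> set of packet_ids that node already has
    for pid, nxt in packets:
        others = {p for p, n in packets if p != pid}
        if not others <= overheard.get(nxt, set()):
            return False
    return True

# Two flows crossing at a relay: A -> B and B -> A.
queued = [("pA", "B"), ("pB", "A")]        # pA must reach B, pB must reach A
overheard = {"A": {"pA"}, "B": {"pB"}}     # each source still holds its own packet
print(can_code_together(queued, overheard))  # True: the relay can broadcast pA XOR pB

In CFCR, such opportunity checks are combined with coding-validity and collision considerations when routes are chosen.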

Relevance:

30.00%

Publisher:

Abstract:

Graduate Program in School Education (Pós-graduação em Educação Escolar) - FCLAR

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents a novel framework for the analysis and optimization of the encoding and decoding delay for multiview video. The objective of this framework is to provide a systematic methodology for the analysis of the delay in multiview encoders and decoders, and useful tools for the design of multiview encoders/decoders for applications with low delay requirements. The proposed framework first characterizes the elements that influence the delay performance: i) the multiview prediction structure, ii) the hardware model of the encoder/decoder, and iii) the frame processing times. Second, it provides algorithms for the computation of the encoding/decoding delay of any arbitrary multiview prediction structure. The core of the framework is a methodology for the analysis of the multiview encoding/decoding delay that is independent of the hardware architecture of the encoder/decoder, completed with a set of models that particularize this delay analysis to the characteristics of the hardware architecture of the encoder/decoder. Among these models, those based on graph theory are especially relevant because of their capacity to decouple the influence of the different elements on the delay performance of the encoder/decoder, by means of an abstraction of its processing capacity. To illustrate possible applications of this framework, the thesis presents examples of its use in design problems that affect multiview encoders and decoders. This application scenario covers the following cases: strategies for the design of prediction structures that take delay requirements into consideration in addition to rate-distortion performance; design of the number of processors and analysis of processor speed requirements in multiview encoders/decoders given a target delay; and comparative analysis of the encoding delay performance of multiview encoders with different processing capabilities and hardware implementations.
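As a minimal, hedged illustration of the graph-based view (not the thesis's models; the view labels, dependencies and processing times below are made up), a multiview prediction structure can be treated as a dependency DAG over frames. Assuming a fixed per-frame processing time and no processor constraints, the decoding latency of the structure is the longest dependency chain; the thesis's hardware models then refine this by accounting for the encoder/decoder's actual processing capacity:

from functools import lru_cache

# Hypothetical multiview prediction structure: frame -> frames it predicts from.
# Two views, two frames each; view 1 predicts from view 0 (inter-view), and
# the second frame in each view predicts from the first (temporal).
deps = {
    ("v0", 0): [],
    ("v1", 0): [("v0", 0)],
    ("v0", 1): [("v0", 0)],
    ("v1", 1): [("v1", 0), ("v0", 1)],
}
proc_time = {f: 10 for f in deps}   # assumed per-frame processing time (ms)

@lru_cache(maxsize=None)
def finish_time(frame):
    # Earliest time a frame can finish decoding: its own processing time after
    # every reference frame it depends on has finished (longest-path recursion).
    return proc_time[frame] + max((finish_time(d) for d in deps[frame]), default=0)

decoding_latency = max(finish_time(f) for f in deps)
print(decoding_latency)   # 30 ms along the longest dependency chain in this toy structure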

Relevance:

30.00%

Publisher:

Abstract:

The adoption of DRG coding may be seen as a central feature of the mechanisms of the health reforms in New Zealand. This paper presents a story of the use of DRG coding by describing the experience of one major health provider. The conventional literature portrays casemix accounting and medical coding systems as rational techniques for the collection and provision of information for management and contracting decisions/negotiations. This paper presents a different perspective on the implications and effects of the adoption of DRG technology; in particular, the part played by DRG coding technology as part of a casemix system is explicated from an actor-network theory perspective. Medical coding and the DRG methodology are argued to represent "black boxes". Such technological "knowledge objects" provide strong points in the networks which are so important to the processes of change in contemporary organisations.

Relevance:

30.00%

Publisher:

Abstract:

To make vision possible, the visual nervous system must represent the most informative features in the light pattern captured by the eye. Here we use Gaussian scale-space theory to derive a multiscale model for edge analysis, and we test it in perceptual experiments. At all scales there are two stages of spatial filtering. An odd-symmetric, Gaussian first-derivative filter provides the input to a Gaussian second-derivative filter. Crucially, the output at each stage is half-wave rectified before feeding forward to the next. This creates nonlinear channels selectively responsive to one edge polarity while suppressing spurious or "phantom" edges. The two stages have properties analogous to simple and complex cells in the visual cortex. Edges are found as peaks in a scale-space response map that is the output of the second stage. The position and scale of the peak response identify the location and blur of the edge. The model predicts our results on human perception of edge location and blur remarkably accurately for a wide range of luminance profiles, including the surprising finding that blurred edges look sharper when their length is made shorter. The model enhances our understanding of early vision by integrating computational, physiological, and psychophysical approaches. © ARVO.
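A minimal, hedged 1-D sketch of the two-stage scheme follows (standard Gaussian-derivative filters from scipy; the sign convention, blur and scale values are illustrative choices, not necessarily the paper's exact formulation). It computes one slice of the scale-space response map; the full model repeats this over many scales and reads edge location and blur from the peak of that map:

import numpy as np
from scipy.ndimage import gaussian_filter1d

def edge_response(luminance, scale):
    # Stage 1: odd-symmetric Gaussian first-derivative filter, half-wave rectified,
    # so the channel responds to one edge polarity only (here: dark-to-light).
    stage1 = np.maximum(gaussian_filter1d(luminance, sigma=scale, order=1), 0.0)
    # Stage 2: Gaussian second-derivative filter; the sign is flipped here (a
    # sketch choice) so that the rectified output peaks at the centre of the
    # stage-1 bump, i.e. at the edge location.
    return np.maximum(-gaussian_filter1d(stage1, sigma=scale, order=2), 0.0)

x = np.arange(501)                                   # pixel positions
profile = 1.0 / (1.0 + np.exp(-(x - 250) / 12.0))    # blurred dark-to-light edge at pixel 250

response = edge_response(profile, scale=8.0)         # one slice of the scale-space map
print("estimated edge location (pixels):", x[np.argmax(response)])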

Relevance:

30.00%

Publisher:

Abstract:

We present a mean field theory of code-division multiple access (CDMA) systems with error-control coding. On the basis of the relation between the free energy and mutual information, we obtain an analytical expression for the maximum spectral efficiency of the coded CDMA system, from which a mean field description of the coded CDMA system is provided in terms of a bank of scalar Gaussian channels whose variances in general vary across code symbol positions. Regular low-density parity-check (LDPC)-coded CDMA systems are also discussed as an example of coded CDMA systems.
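To spell out the "bank of scalar Gaussian channels" description (a hedged restatement of the standard decomposition, not the paper's specific mean-field expressions), each code symbol x_k is effectively observed through its own additive Gaussian channel, and for a Gaussian input of power P the per-symbol mutual information takes the familiar form

y_k = x_k + n_k, \qquad n_k \sim \mathcal{N}(0, \sigma_k^2), \qquad I_k = \tfrac{1}{2}\log_2\!\left(1 + \frac{P}{\sigma_k^2}\right),

where the effective variances \sigma_k^2 are what the mean-field analysis determines and, as noted above, generally differ across code symbol positions.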

Relevance:

30.00%

Publisher:

Abstract:

This research is funded by UK Medical Research Council grant number MR/L011115/1. We would like to thank the 105 experts in behaviour change who have committed their time and offered their expertise for study 2 of this research. We are also very grateful to all those who sent us peer-reviewed behaviour change intervention descriptions for study 1. Finally, we would like to thank Dr. Emma Beard and Dr. Dan Dediu for their statistical input, and all the researchers, particularly Holly Walton, who assisted in the coding of papers for study 1.

Relevance:

30.00%

Publisher:

Abstract:

Erasure control coding has been exploited in communication networks with the aim of improving the end-to-end performance of data delivery across the network. To address concerns over the strengths and constraints of erasure coding schemes in this application, we examine the performance limits of two erasure control coding strategies, forward erasure recovery and adaptive erasure recovery. Our investigation shows that the throughput of a network using an (n, k) forward erasure control code is capped by r = k/n when the packet loss rate p ≤ te/n, and by k(1-p)/(n-te) when p > te/n, where te is the erasure control capability of the code. It also shows that the lower bound of the residual loss rate of such a network is (np-te)/(n-te) for te/n < p ≤ 1. In particular, if the code used is maximum distance separable, the Shannon capacity of the erasure channel, i.e. 1-p, can be achieved, and the residual loss rate is lower bounded by (p+r-1)/r for (1-r) < p ≤ 1. To address the requirements of real-time applications, we also investigate the service completion time of the different schemes. It is revealed that the latency of the forward erasure recovery scheme is fractionally higher than that of a scheme without erasure control coding or retransmission mechanisms (using UDP), but much lower than that of the adaptive erasure scheme when the packet loss rate is high. Comparisons between the two erasure control schemes exhibit their advantages as well as disadvantages in the role of delivering end-to-end services. To show the impact of the derived bounds on the end-to-end performance of a TCP/IP network, a case study demonstrates how erasure control coding could be used to maximize the performance of practical systems. © 2010 IEEE.
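Writing T(p) for throughput and L(p) for residual loss rate (notation introduced here for readability; the quantities are those stated above for an (n, k) code with rate r = k/n and erasure control capability t_e), the bounds can be collected as

T(p) \le \begin{cases} k/n, & p \le t_e/n, \\ k(1-p)/(n - t_e), & p > t_e/n, \end{cases}
\qquad
L(p) \ge \frac{np - t_e}{n - t_e} \quad \text{for } \frac{t_e}{n} < p \le 1,

and, for a maximum distance separable code, the throughput cap becomes the erasure-channel capacity 1-p while L(p) \ge (p + r - 1)/r for 1 - r < p \le 1.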

Relevance:

30.00%

Publisher:

Abstract:

This dissertation studies coding strategies for computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others.

This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing-process modeling, and reconstruction algorithm of each sensing system.

Optical multidimensional imaging measures three or more dimensions of information in the optical signal. Traditional multidimensional imagers acquire the extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information onto a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level, while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can increase sensing capacity by hundreds of times.
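As a generic, hedged toy example of the compressive measurement model described above (the dimensions, sensing matrix and recovery algorithm are illustrative assumptions, not the dissertation's systems), a sparse signal is multiplexed into fewer measurements by a random sensing operator and recovered by iterative soft thresholding (ISTA):

import numpy as np

rng = np.random.default_rng(2)

# Sparse signal of length n observed through m < n multiplexed measurements.
n, m, k = 256, 64, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)      # random sensing (multiplexing) matrix
y = Phi @ x_true                                     # compressive measurements

# ISTA: gradient step on the data-fit term, then soft thresholding for sparsity.
step = 1.0 / np.linalg.norm(Phi, 2) ** 2
lam = 0.01
x = np.zeros(n)
for _ in range(500):
    grad = Phi.T @ (Phi @ x - y)
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

In the imaging systems described above, the sensing operator is realized by optics, electronic devices and designed modulation rather than a random matrix, but the reconstruction problem has the same form.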

The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information in a noisy environment. Accomplishing the same task with engineering usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate the abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor can localize multiple speakers in both stationary and dynamic auditory scenes, and distinguish mixed conversations from independent sources with a high audio recognition rate.