851 results for Coding Scheme


Relevance:

60.00%

Publisher:

Abstract:

Internet Protocol TV (IPTV) is predicted to be a key technology in the future. Efforts to accelerate its deployment centre on a centralized model that combines the video hub office (VHO), encoders, a controller, the access network, and the home network. Regardless of whether the network delivers live TV, VOD, or time-shift TV, all content and network traffic resulting from subscriber requests must traverse the entire network, from the super-headend all the way to each subscriber's set-top box (STB). IPTV services require very stringent QoS guarantees. When IPTV traffic shares network resources with other traffic, such as data and voice, ensuring its QoS while efficiently utilizing network resources is a key and challenging issue. QoS is measured in the network-centric terms of delay jitter, packet loss, and bounds on delay. The main focus of this thesis is optimized bandwidth allocation and smooth data transmission, through a proposed traffic model for smoothly delivering video services over an IPTV network, together with its QoS performance evaluation. Following Maglaris et al. [5], the coding bit rate of a single video source is analyzed first. Various statistical quantities are derived from bit-rate data collected with a conditional-replenishment interframe coding scheme. Two correlated Markov process models (one in discrete time and one in continuous time) are shown to fit the experimental data and are used to model the input rates of several independent sources feeding a statistical multiplexer. A preventive control mechanism comprising connection admission control (CAC) and traffic policing is used for traffic control. The QoS of a common bandwidth scheduler (FIFO) is evaluated using fluid models with a Markovian queuing method, and the results are analyzed both by simulation and analytically, measuring packet loss, overflow, and mean waiting time among the network users.
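As an illustrative sketch of the modelling style described above (not the thesis's exact fluid model), the snippet below simulates several independent two-state (on/off) Markov video sources feeding a FIFO buffer drained at a fixed capacity, and reports the fraction of offered traffic lost to overflow. All rates, transition probabilities, and buffer sizes are invented for illustration.

```python
import random

def simulate(n_sources=10, p_on=0.4, peak_rate=1.0, capacity=6.0,
             buffer_size=20.0, steps=5000, seed=1):
    """Discrete-time sketch of N independent on/off (two-state Markov)
    sources multiplexed into a FIFO buffer drained at a fixed capacity.
    Transition probabilities are chosen so the stationary on-probability
    is p_on. Returns the fraction of offered traffic lost to overflow."""
    rng = random.Random(seed)
    # Choose P(off->on)=a and P(on->off)=b so that a/(a+b) = p_on.
    b = 0.2
    a = b * p_on / (1.0 - p_on)
    state = [rng.random() < p_on for _ in range(n_sources)]
    backlog, offered, lost = 0.0, 0.0, 0.0
    for _ in range(steps):
        for i in range(n_sources):
            if state[i]:
                state[i] = rng.random() >= b   # stay on with prob 1-b
            else:
                state[i] = rng.random() < a    # turn on with prob a
        arrivals = peak_rate * sum(state)
        offered += arrivals
        backlog += arrivals
        if backlog > buffer_size + capacity:   # overflow: traffic is lost
            lost += backlog - (buffer_size + capacity)
            backlog = buffer_size + capacity
        backlog = max(0.0, backlog - capacity)  # drain at link capacity
    return lost / offered

loss = simulate()
```

With a mean aggregate rate (4.0) below the capacity (6.0), losses stem only from short-term bursts, which is exactly the effect the Markov source model is meant to capture.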


Relevance:

60.00%

Publisher:

Abstract:

This paper proposes a new approach and coding scheme for solving economic dispatch (ED) problems in power systems through an effortless hybrid method (EHM). The novel coding scheme effectively prevents futile searching and infeasible solutions during the application of stochastic search methods, and consequently improves search efficiency and solution quality dramatically. The dominant constraint of an economic dispatch problem is power balance. Operational constraints such as generation limits, ramp rate limits, prohibited operating zones (POZ), and network losses are considered for practical operation. In the EHM procedure, the generator outputs are first obtained with a lambda-iteration method without considering POZ; this constraint is then satisfied by a genetic-based algorithm. To demonstrate its efficiency, feasibility, and speed, the EHM algorithm was applied to constrained ED problems for power systems with 6 and 15 units. The simulation results obtained with the EHM were compared to those reported in the previous literature in terms of solution quality and computational efficiency. The results reveal the superiority of this method in terms of both cost and CPU time. (C) 2011 Elsevier Ltd. All rights reserved.
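The lambda-iteration step used in the EHM can be sketched for the simplest case of quadratic-cost units with output limits only (POZ, ramp rates, and network losses omitted); the unit coefficients and demand below are made-up illustrative values, not the paper's test systems.

```python
def lambda_dispatch(units, demand, tol=1e-6):
    """Lambda-iteration sketch for economic dispatch. Each unit is
    (b, c, pmin, pmax) with cost C(P) = a + b*P + c*P^2, so the
    incremental cost is b + 2*c*P. Bisection on lambda until the total
    clipped output matches the demand (power-balance constraint)."""
    def output(lmbda):
        total = 0.0
        for b, c, pmin, pmax in units:
            p = (lmbda - b) / (2.0 * c)        # unit output at this lambda
            total += min(max(p, pmin), pmax)   # clip to generation limits
        return total
    lo, hi = 0.0, 100.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if output(mid) < demand:
            lo = mid
        else:
            hi = mid
    lmbda = 0.5 * (lo + hi)
    return [min(max((lmbda - b) / (2 * c), pmin), pmax)
            for b, c, pmin, pmax in units]

units = [(2.0, 0.01, 10, 150), (2.5, 0.012, 10, 120), (3.0, 0.02, 10, 100)]
dispatch = lambda_dispatch(units, demand=250.0)
```

In the EHM as described, a genetic algorithm would then repair any outputs that fall inside prohibited operating zones.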

Relevance:

60.00%

Publisher:

Abstract:

This paper addresses biometric identification using large databases, in particular iris databases. In such applications it is critical to have a low response time while maintaining an acceptable recognition rate; thus, the trade-off between speed and accuracy must be evaluated for the processing and recognition parts of an identification system. In this paper, a graph-based framework for pattern recognition, called Optimum-Path Forest (OPF), is utilized as a classifier in a pre-developed iris recognition system. The aim of this paper is to verify the effectiveness of OPF in the field of iris recognition and its performance on iris databases of various scales. The existing Gauss-Laguerre wavelet-based coding scheme is used for iris encoding. The performance of the OPF and of two other classifiers, Hamming and Bayesian, is compared using small, medium, and large-scale databases. The comparison shows that the OPF has a faster response for large-scale databases, thus performing better than the more accurate, but slower, classifiers.

Relevance:

60.00%

Publisher:

Abstract:

The majority of biometric researchers focus on the accuracy of matching using biometric databases, including iris databases, while scalability and speed issues have been neglected. In applications such as identification at airports and borders, it is critical for the identification system to have a low response time. In this paper, a graph-based framework for pattern recognition, called Optimum-Path Forest (OPF), is utilized as a classifier in a pre-developed iris recognition system. The aim of this paper is to verify the effectiveness of OPF in the field of iris recognition and its performance on iris databases of various scales. This paper investigates several classifiers widely used in the iris recognition literature, considering response time along with accuracy. The existing Gauss-Laguerre wavelet-based iris coding scheme, which shows perfect discrimination with a rotary Hamming distance classifier, is used for iris coding. The performance of the classifiers is compared using small, medium, and large-scale databases. The comparison shows that the OPF has a faster response for the large-scale database, thus performing better than the more accurate but slower Bayesian classifier.
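The rotary Hamming distance mentioned above can be sketched in a few lines: the stored code is circularly shifted over a small range to absorb eye rotation, and the minimum normalized Hamming distance over all shifts is kept. The 8-bit codes below are toy values, far shorter than a real iris code.

```python
def rotary_hamming(code_a, code_b, max_shift=3):
    """Shift-compensated (rotary) Hamming distance between two binary
    iris codes: code_b is circularly rotated by up to max_shift positions
    in either direction, and the minimum normalized distance is kept."""
    n = len(code_a)
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        rotated = code_b[s:] + code_b[:s]          # circular shift
        dist = sum(a != b for a, b in zip(code_a, rotated)) / n
        best = min(best, dist)
    return best

probe   = [1, 0, 1, 1, 0, 0, 1, 0]
gallery = [0, 1, 0, 1, 1, 0, 0, 1]   # the probe rotated by one position
score = rotary_hamming(probe, gallery)
```

A score near zero indicates a match; a genuine comparison survives small rotations that would defeat a plain Hamming distance.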

Relevance:

60.00%

Publisher:

Abstract:

BACKGROUND: High intercoder reliability (ICR) is required in qualitative content analysis to assure quality when more than one coder is involved in data analysis. The literature offers few standardized procedures for ICR assessment in qualitative content analysis. OBJECTIVE: To illustrate how ICR assessment can be used to improve coding in qualitative content analysis. METHODS: Key steps of the procedure are presented, drawing on data from a qualitative study on patients' perspectives on low back pain. RESULTS: First, a coding scheme was developed using a comprehensive inductive and deductive approach. Second, 10 transcripts were coded independently by two researchers, and ICR was calculated. The resulting kappa value of .67 can be regarded as satisfactory to solid. Moreover, varying agreement rates helped to identify problems in the coding scheme: low agreement rates, for instance, indicated that the respective codes were defined too broadly and needed clarification. In a third step, the results of the analysis were used to improve the coding scheme, leading to consistent and high-quality results. DISCUSSION: The quantitative approach of ICR assessment is a viable instrument for quality assurance in qualitative content analysis. Kappa values and close inspection of agreement rates help to estimate and increase the quality of coding. This approach facilitates good practice in coding and enhances the credibility of the analysis, especially when large samples are interviewed, different coders are involved, and quantitative results are presented.
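The kappa computation underlying such an ICR assessment can be sketched as follows (standard Cohen's kappa for two coders on nominal codes; the category labels and codings below are invented for illustration).

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders' nominal codes:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e the agreement expected by chance from the coders' marginal
    code frequencies."""
    n = len(codes_a)
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two coders assign one of two (hypothetical) codes to six segments:
a = ["pain", "pain", "coping", "coping", "pain", "coping"]
b = ["pain", "coping", "coping", "coping", "pain", "coping"]
kappa = cohens_kappa(a, b)
```

Here five of six segments agree (p_o = 0.833) against a chance agreement of 0.5, giving kappa = 0.67, coincidentally the same value the study reports as satisfactory to solid.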

Relevance:

60.00%

Publisher:

Abstract:

BACKGROUND: Our objective was to analyze subjective explanations for unsuccessful weight loss among bariatric surgery candidates. METHODS: This was a retrospective analysis of 909 bariatric surgery candidates (78.2% female, average body mass index [BMI] 47.3) at a university center from 2001 to April 2007 who answered an open-ended question about why they were unable to lose weight. We generated a coding scheme for answers to the question and established inter-rater reliability of the coding process. Associations with demographic parameters and initial BMI were tested. RESULTS: The most common categories of answers were nonspecific explanations related to diet (25.3%), physical activity (21.0%), or motivation (19.7%), followed by diet-related motivation (12.7%) and medical conditions or medications affecting physical activity (12.7%). Categories related to time, financial cost, social support, physical environment, and knowledge occurred in less than 4% each. Men were more likely than women to cite a medical condition or medication affecting physical activity (19.2% vs 10.8%, P = 0.002, odds ratio [OR] = 1.96, 95% confidence interval [CI] = 1.28-2.99) but less likely to cite diet-related motivation (7.1% vs 14.2%, P = 0.008, OR = 0.46, 95% CI = 0.26-0.82). CONCLUSIONS: Our findings suggest that addressing diet, physical activity, and motivation in a comprehensive approach would meet the stated needs of obese patients. Raising patient awareness of under-recognized barriers to weight loss, such as the physical environment and lack of social support, should also be considered. Lastly, anticipating gender-specific attributions may facilitate tailoring of interventions.

Relevance:

60.00%

Publisher:

Abstract:

Research has shown that physical activity serves a preventive function against the development of several major chronic diseases. However, studying physical activity and its health benefits is difficult due to the complexity of measuring physical activity. The overall aim of this research is to contribute to the knowledge of both the correlates and the measurement of physical activity. Data from the Women On The Move study (n = 260) were used, and the results are presented in three papers. The first paper focuses on the measurement of physical activity and compares an alternative coding method with the standard coding method for calculating energy expenditure from a 7-day activity diary. Results indicate that the alternative coding scheme can produce results similar to the standard coding in terms of total activity expenditure. Even though agreement could not be achieved by dimension, the study lays the groundwork for a coding system that saves a considerable amount of time in coding activity and can estimate expenditure more accurately for activities that can be performed at varying intensity levels. The second paper investigates intra-day variability in physical activity by estimating the variation in energy expenditure for workers and non-workers and identifying the number of days of diary self-report necessary to reliably estimate activity. The results indicate that 8 days of activity are needed to reliably estimate total activity for individuals who do not work and 12 days for those who work. The days of diary self-report required by dimension range from 6 to 16 for those who do not work and from 6 to 113 for those who work. The final paper presents findings on the relationship between daily living activity and Type A behavior pattern. Significant findings are observed for total activity and leisure activity with the Temperament Scale summary score. Significant findings are also observed for total activity, household chores, work, leisure activity, exercise, and inactivity with one or more of the individual items on the Temperament Scale. However, even though some significant findings were observed, the overall models did not reveal meaningful associations.
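One common way to translate a single-day reliability estimate into a required number of diary days is the Spearman-Brown prophecy formula. The sketch below is illustrative only: the single-day reliability of 0.35 and the target of 0.80 are assumed values, and the study's own variance-components method may differ, though under these assumptions the formula happens to reproduce the 8-day figure.

```python
import math

def days_needed(single_day_r, target_r=0.8):
    """Spearman-Brown prophecy sketch: number of diary days k needed so
    that a k-day average reaches the target reliability, given the
    estimated reliability of a single day's self-report:
        k = target_r * (1 - r) / (r * (1 - target_r))."""
    k = target_r * (1 - single_day_r) / (single_day_r * (1 - target_r))
    return math.ceil(k)

# Hypothetical single-day reliability of total activity expenditure:
days = days_needed(single_day_r=0.35)
```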

Relevance:

60.00%

Publisher:

Abstract:

This research focused on end-of-life planning and decision-making for adults affected by mental retardation. Adults with mental retardation face unique challenges in this regard, including difficulty communicating their wishes without assistance and diminished decision-making skills. The primary research objective was to identify factors that can affect opportunities for adults with mental retardation in community-based service settings (and their advocates) to be involved in planning and deciding about their own end-of-life experience.

A descriptive qualitative inquiry was designed to explore issues related to death and dying, and the notion of end-of-life planning, from the perspective of adults with mental retardation who receive publicly funded community services ("clients") and of family members of individuals who receive such services. Study participants were recruited from a single mental retardation service provider in a large urban setting (the "Agency"). Sixteen clients and 14 families of Agency clients took part. Client data collection was accomplished through face-to-face interviews, focus group meetings, and record reviews; family members took part in a face-to-face interview only.

An initial coding scheme was developed based upon literature and policy reviews and themes related to the research questions. Analysis involved extracting data from transcripts and records and placing it into appropriate thematic categories, building support for each theme with the accumulated data. Coding themes were modified to accommodate new data when it challenged existing themes.

Findings suggest that adults with mental retardation do have the requisite knowledge, interest, and ability to participate in decisions about their end-of-life experience and the handling of their affairs. Siblings are overwhelmingly the chosen future surrogates, and they (or their children) will likely be the end-of-life advocates for their brothers and sisters affected by mental retardation. The findings further point to a need for increased awareness, accurate information, and improved communication about end-of-life issues, both in general and particular to adults affected by mental retardation. Also suggested by the findings is a need to focus on creating accommodations and adaptations that can best uncover a person's authentic views on life and death and related end-of-life preferences. Practical implications and suggestions for further research are also discussed.

Relevance:

60.00%

Publisher:

Abstract:

Transitions between dynamically stable activity patterns imposed on an associative neural network are shown to be induced by self-organized infinitesimal changes in synaptic connection strength and to constitute a kind of phase transition. A key event in the neural process of information processing under a population coding scheme is the transition between the activity patterns encoding usual entities. We propose that infinitesimal and short-term synaptic changes based on the Hebbian learning rule are the driving force for this transition. The phase transition between the following two dynamically stable states is studied in detail: the state where the firing pattern changes over time so as to itinerate among several patterns, and the state where the firing pattern is fixed to one of several patterns. The phase transition from the pattern-itinerant state to a pattern-fixed state may be induced by the Hebbian learning process under a weak input relevant to the fixed pattern; the reverse transition may be induced by the Hebbian unlearning process without input. The former transition is interpreted as recognition of the input stimulus, while the latter is interpreted as clearing of the used input data to get ready for new input. To ensure that information processing based on the phase transition can be carried out by infinitesimal and short-term synaptic changes, it is absolutely necessary that the network always stay near the critical state corresponding to the phase transition point.
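The basic ingredients of the abstract's setting, Hebbian synaptic increments storing activity patterns as attractors of an associative network, can be sketched with a minimal Hopfield-style model (this illustrates pattern storage and recall only, not the paper's itinerancy dynamics; patterns and sizes are toy values).

```python
def hebbian_weights(patterns):
    """Hebbian storage in a Hopfield-style associative net:
    w[i][j] = (1/P) * sum over patterns of x_i * x_j, zero diagonal.
    Each pattern contributes a small increment to every synapse."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for x in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += x[i] * x[j] / len(patterns)
    return w

def recall(w, state, steps=5):
    """Synchronous sign updates until the state settles into an attractor."""
    n = len(state)
    for _ in range(n and steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

p1 = [1, 1, 1, 1, -1, -1, -1, -1]
p2 = [1, -1, 1, -1, 1, -1, 1, -1]   # orthogonal to p1
w = hebbian_weights([p1, p2])
noisy = [1, 1, 1, -1, -1, -1, -1, -1]   # p1 with one unit flipped
recovered = recall(w, noisy)
```

The network pulls the corrupted state back into the stored pattern's basin of attraction, the "pattern-fixed" regime the abstract contrasts with itinerant dynamics.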

Relevance:

60.00%

Publisher:

Abstract:

The visual world is presented to the brain through patterns of action potentials in the population of optic nerve fibers. Single-neuron recordings show that each retinal ganglion cell has a spatially restricted receptive field, a limited integration time, and a characteristic spectral sensitivity. Collectively, these response properties define the visual message conveyed by that neuron's action potentials. Since the size of the optic nerve is strictly constrained, one expects the retina to generate a highly efficient representation of the visual scene. By contrast, the receptive fields of nearby ganglion cells often overlap, suggesting great redundancy among the retinal output signals. Recent multineuron recordings may help resolve this paradox. They reveal concerted firing patterns among ganglion cells, in which small groups of nearby neurons fire synchronously with delays of only a few milliseconds. As there are many more such firing patterns than ganglion cells, such a distributed code might allow the retina to compress a large number of distinct visual messages into a small number of optic nerve fibers. This paper will review the evidence for a distributed coding scheme in the retinal output. The performance limits of such codes are analyzed with simple examples, illustrating that they allow a powerful trade-off between spatial and temporal resolution.
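The combinatorial point in the final sentences, that there are many more multi-neuron firing patterns than ganglion cells, can be made concrete with a small counting sketch (the cell count and group size are illustrative, not values from the paper).

```python
from math import comb

def pattern_count(n_cells, max_group=2):
    """Counting sketch for the distributed-code argument: if single
    spikes and synchronous groups of up to max_group cells each carry a
    distinct symbol, the alphabet has sum_{k=1..max_group} C(n, k)
    entries, which quickly exceeds the number of cells n."""
    return sum(comb(n_cells, k) for k in range(1, max_group + 1))

n = 10
symbols = pattern_count(n)   # 10 single-cell symbols + 45 synchronous pairs
```

Even allowing only synchronous pairs, 10 cells yield 55 distinct symbols, which is the sense in which a distributed code can compress many visual messages into few optic nerve fibers.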

Relevance:

60.00%

Publisher:

Abstract:

A new method has been developed for the prediction of transmembrane helices using support vector machines. Different coding schemes for protein sequences were explored, and their performances were assessed by cross-validation tests. The best-performing method predicts transmembrane helices with a sensitivity of 93.4% and a precision of 92.0%. For each predicted transmembrane segment, a score is given to indicate the strength of the transmembrane signal and the prediction reliability. In particular, this method can distinguish transmembrane proteins from soluble proteins with an accuracy of ~99%. It can be used to complement current transmembrane helix prediction methods and for consensus analysis of entire proteomes. The predictor is located at http://genet.imb.uq.edu.au/predictors/SVMtm. (C) 2004 Wiley Periodicals, Inc.
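One plausible residue coding scheme of the kind the paper compares is a sliding window encoded as concatenated one-hot vectors over the 20 amino acids; the sketch below is illustrative only (the window width and the sequence are invented, and the paper's actual schemes may differ).

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def one_hot_window(seq, center, half_width=2):
    """Encode a (2*half_width + 1)-residue window around `center` as
    concatenated 20-dimensional one-hot vectors, a typical SVM input
    coding for per-residue prediction. Positions off either end of the
    sequence are encoded as all-zero vectors."""
    vec = []
    for pos in range(center - half_width, center + half_width + 1):
        bits = [0] * len(AMINO_ACIDS)
        if 0 <= pos < len(seq):
            idx = AMINO_ACIDS.find(seq[pos])
            if idx >= 0:          # unknown residues stay all-zero
                bits[idx] = 1
        vec.extend(bits)
    return vec

x = one_hot_window("MLLAVLYCL", center=4)   # hypothetical sequence
```

The resulting fixed-length vector (here 5 positions x 20 = 100 features) is what an SVM would consume, one vector per residue, when classifying residues as helix or non-helix.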

Relevance:

60.00%

Publisher:

Abstract:

Despite extensive progress on the theoretical aspects of spectrally efficient communication systems, hardware impairments such as phase noise are key bottlenecks in next-generation wireless communication systems. The presence of non-ideal oscillators at the transceiver introduces time-varying phase noise and degrades the performance of the communication system. A significant body of research focuses on joint synchronization and decoding based on the joint posterior distribution, which incorporates both the channel and the code graph. These joint synchronization and decoding approaches rely on well-designed sum-product algorithms, which involve probabilistic messages passed iteratively between the channel statistical information and the decoding information. Channel statistical information generally entails high computational complexity, because its probabilistic model may involve continuous random variables. The detailed knowledge of the channel statistics that these algorithms require makes them an inadequate choice for real-world applications due to power and computational limitations. In this thesis, novel phase-estimation strategies are proposed: soft decision-directed iterative receivers for separate a posteriori probability (APP)-based synchronization and decoding. These algorithms do not require any a priori statistical characterization of the phase noise process. The proposed approach relies on a maximum a posteriori (MAP)-based algorithm to perform phase noise estimation and does not depend on the modulation/coding scheme considered, as it exploits only the APPs of the transmitted symbols. Different variants of APP-based phase estimation are considered. The proposed algorithm has significantly lower computational complexity than joint synchronization/decoding approaches, at the cost of slight performance degradation.

With the aim of improving the robustness of the iterative receiver, we derive a new system model for an oversampled (more than one sample per symbol interval) phase noise channel. We extend the separate APP-based synchronization and decoding algorithm to a multi-sample receiver, which exploits the received information from the channel by exchanging information in an iterative fashion to achieve robust convergence. Two algorithms based on sliding block-wise processing with soft ISI cancellation and detection are proposed, based on the use of reliable information from the channel decoder. Dually polarized systems provide a cost- and space-effective solution to increase spectral efficiency and are competitive candidates for next-generation wireless communication systems. A novel soft decision-directed iterative receiver for separate APP-based synchronization and decoding is proposed. This algorithm relies on a minimum mean square error (MMSE)-based cancellation of the cross-polarization interference (XPI), followed by phase estimation on the polarization of interest. This iterative receiver structure is motivated by master/slave phase estimation (M/S-PE), where M-PE corresponds to the polarization of interest. The operating principle of an M/S-PE block is to improve the phase-tracking performance of both polarization branches: more precisely, the M-PE block tracks the co-polar phase, and the S-PE block reduces the residual phase error on the cross-polar branch. Two variants of MMSE-based phase estimation are considered: BW and PLP.
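To illustrate the decision-directed idea in its simplest form (hard decisions rather than the thesis's soft APP-based estimator, and a constant phase offset rather than a time-varying phase noise process), the sketch below estimates a common phase rotation on QPSK symbols from slicer decisions; all signal parameters are invented.

```python
import cmath
import math

def dd_phase_estimate(received, constellation):
    """Hard decision-directed phase estimation: each received sample is
    sliced to the nearest constellation point, and the common phase
    offset is the angle of the averaged rotation r * conj(decision)."""
    acc = 0j
    for r in received:
        decision = min(constellation, key=lambda s: abs(r - s))
        acc += r * decision.conjugate()   # rotation from decision to sample
    return cmath.phase(acc)

# QPSK constellation and a small, correctly-sliceable phase offset:
qpsk = [cmath.exp(1j * (math.pi / 4 + k * math.pi / 2)) for k in range(4)]
true_offset = 0.15                         # radians (hypothetical)
tx = [qpsk[k % 4] for k in range(32)]
rx = [s * cmath.exp(1j * true_offset) for s in tx]
est = dd_phase_estimate(rx, qpsk)
```

A soft (APP-based) variant would weight each hypothesis by the decoder's symbol posteriors instead of committing to the hard decision, which is what makes the thesis's receiver robust at low SNR.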

Relevance:

60.00%

Publisher:

Abstract:

Minimization of a sum-of-squares or cross-entropy error function leads to network outputs that approximate the conditional averages of the target data, conditioned on the input vector. For classification problems, with a suitably chosen target coding scheme, these averages represent the posterior probabilities of class membership and so can be regarded as optimal. For problems involving the prediction of continuous variables, however, the conditional averages provide only a very limited description of the properties of the target variables. This is particularly true for problems in which the mapping to be learned is multi-valued, as often arises in the solution of inverse problems, since the average of several correct target values is not necessarily itself a correct value. In order to obtain a complete description of the data, for the purposes of predicting the outputs corresponding to new input vectors, we must model the conditional probability distribution of the target data, again conditioned on the input vector. In this paper we introduce a new class of network models obtained by combining a conventional neural network with a mixture density model. The complete system is called a Mixture Density Network, and it can in principle represent arbitrary conditional probability distributions, in the same way that a conventional neural network can represent arbitrary functions. We demonstrate the effectiveness of Mixture Density Networks using both a toy problem and a problem involving robot inverse kinematics.
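The output model of a Mixture Density Network can be sketched by evaluating the Gaussian mixture that the network's final layer parameterizes; here the mixture parameters are given directly rather than produced by a network, and the bimodal example is invented to illustrate the multi-valued-mapping point from the abstract.

```python
import math

def mixture_density(y, weights, means, sigmas):
    """Conditional density p(y|x) of an MDN output head: a Gaussian
    mixture whose mixing coefficients, means, and spreads would normally
    be emitted (softmaxed / exponentiated) by the network for input x."""
    return sum(w * math.exp(-0.5 * ((y - m) / s) ** 2)
               / (s * math.sqrt(2 * math.pi))
               for w, m, s in zip(weights, means, sigmas))

# A bimodal conditional density, as arises in inverse problems where two
# target values are both correct for the same input:
weights, means, sigmas = [0.5, 0.5], [-1.0, 1.0], [0.3, 0.3]
p_at_mode = mixture_density(1.0, weights, means, sigmas)
p_at_mean = mixture_density(0.0, weights, means, sigmas)  # the naive average
```

The conditional average (y = 0) falls in a trough of the density, which is exactly why a sum-of-squares network, which predicts that average, fails on multi-valued mappings while the MDN does not.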

Relevance:

60.00%

Publisher:

Abstract:

This report presents and evaluates a novel idea for scalable lossy colour image coding with Matching Pursuit (MP) performed in a transform domain. The benefits of performing MP in the transform domain are analysed in detail. The main contribution of this work is extending MP with wavelets to colour coding and proposing a coding method. We exploit correlations between image subbands after wavelet transformation in RGB colour space. Then a new and simple quantisation and coding scheme for the colour MP decomposition, based on Run-Length Encoding (RLE) and inspired by the idea of coding indexes in relational databases, is applied. As a final coding step, arithmetic coding is used, assuming uniform distributions of the MP atom parameters. The target application is compression at low and medium bit-rates. Coding performance is compared to JPEG 2000, showing the potential to outperform the latter once data models more sophisticated than the uniform one are used for the arithmetic coder. Results are presented for grayscale and colour coding of 12 standard test images.
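The greedy core of Matching Pursuit can be sketched on a tiny 1-D signal: at each step, pick the unit-norm dictionary atom with the largest inner product with the residual, record its index and coefficient, and subtract its contribution. The wavelet transform domain, colour extension, and RLE/arithmetic coding stages from the report are omitted; the dictionary and signal are toy values.

```python
import math

def matching_pursuit(signal, dictionary, n_atoms=2):
    """Greedy MP decomposition: returns [(atom_index, coefficient), ...]
    and the final residual. Dictionary atoms must have unit norm."""
    residual = list(signal)
    picks = []
    for _ in range(n_atoms):
        best_i, best_c = 0, 0.0
        for i, atom in enumerate(dictionary):
            c = sum(r * a for r, a in zip(residual, atom))  # inner product
            if abs(c) > abs(best_c):
                best_i, best_c = i, c
        picks.append((best_i, best_c))
        residual = [r - best_c * a
                    for r, a in zip(residual, dictionary[best_i])]
    return picks, residual

h = 1 / math.sqrt(2)
dictionary = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1],
              [h, h, 0, 0], [0, 0, h, h]]          # unit-norm atoms
signal = [3.0, 3.0, 0.0, 2.0]
picks, residual = matching_pursuit(signal, dictionary, n_atoms=2)
```

The (index, coefficient) pairs produced here are exactly the kind of sparse stream that the report then quantises and run-length/arithmetic codes.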