992 results for Multiple description coding
Abstract:
Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and the decoder, promising to fulfill novel requirements of applications such as video surveillance, sensor networks and mobile camera phones. The quality of the side information at the decoder plays a critical role in determining the WZ video coding rate-distortion (RD) performance, notably in raising it as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion-compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has some decoded reference frames available. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper, along with a new frame interpolation framework able to generate higher-quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated in a transform-domain, turbo-coding-based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements of up to 2 dB; moreover, it outperforms H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
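To make the decoder-side interpolation idea concrete, here is a minimal sketch of bidirectional block matching between two decoded key frames followed by midpoint motion compensation. The block size and search range are hypothetical parameters, and the motion-field regularization the paper proposes is deliberately omitted:

```python
import numpy as np

def interpolate_side_info(prev, nxt, block=8, search=4):
    """Estimate the WZ frame halfway between two decoded key frames by
    block-based motion search and midpoint motion compensation.
    Illustrative only; real codecs also regularize the motion field."""
    h, w = prev.shape
    side = np.zeros((h, w), dtype=np.float64)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tgt = nxt[by:by + block, bx:bx + block].astype(np.float64)
            best, best_dy, best_dx = np.inf, 0, 0
            # exhaustive search for the motion vector with minimum SAD
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    ref = prev[y:y + block, x:x + block].astype(np.float64)
                    sad = np.abs(ref - tgt).sum()
                    if sad < best:
                        best, best_dy, best_dx = sad, dy, dx
            # split the motion vector in half: the interpolated frame sits
            # temporally midway between the two key frames
            hy = min(max(by + best_dy // 2, 0), h - block)
            hx = min(max(bx + best_dx // 2, 0), w - block)
            fwd = prev[hy:hy + block, hx:hx + block].astype(np.float64)
            side[by:by + block, bx:bx + block] = (fwd + tgt) / 2.0
    return side
```

In a real WZ codec the resulting frame feeds the correlation-noise model and the channel decoder; here it is only meant to show the "guess halfway" principle.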
Abstract:
The aim of this study is to examine the effect of a physical activity intervention program on the perception of fatigue in patients with multiple sclerosis.
Abstract:
The aim of this study is to examine the impact of implementing a physical activity program on the quality of life of patients with multiple sclerosis.
Abstract:
In this paper we illustrate different perspectives used to create Multiple-Choice questions and show how these can be improved in the construction of math tests. As is known, web technologies have a great influence on students' behaviour. Based on an online project, started in 2007, which has been helping students with their individual work, we would like to share our experience and thoughts with colleagues who face a common concern when tasked with constructing Multiple-Choice tests. We feel that Multiple-Choice tests play an important and very useful supporting role in the self-evaluation or self-examination of our students. Nonetheless, good Multiple-Choice test items are generally more complex and time-consuming to create than other types of test items, and require a certain amount of skill; this skill, however, may be increased through study, practice and experience. This paper discusses a number of issues related to the use of Multiple-Choice questions and lists the advantages and disadvantages of this question format, contrasting it with open questions. Some examples are given in this context.
Abstract:
Multiple-Choice items are used in many different kinds of tests in several areas of knowledge. They can be considered an interesting tool for self-assessment, or an alternative or complementary instrument to the traditional methods of assessing knowledge. The objectivity and accuracy of Multiple-Choice tests are important reasons to consider them, and they are especially useful when the number of students to evaluate is very large. Moodle (Modular Object-Oriented Dynamic Learning Environment) is an open-source course management system centered on learners' needs and designed to support collaborative approaches to teaching and learning. Moodle offers users a rich interface, context-specific help buttons, and a wide variety of tools, such as discussion forums, wikis, chat, surveys, quizzes, glossaries, journals, grade books and more, that allow them to learn and collaborate in a truly interactive space. By bringing together the interactivity of the Moodle platform and the objectivity of this kind of test, one can easily build manifold random tests. The purpose of this paper is to relate our journey in the construction of these tests and to share our experience in using the Moodle platform to create, take advantage of, and improve Multiple-Choice tests in the Mathematics area.
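As an illustration of how manifold random tests can be assembled from an item bank, here is a small sketch. The data layout (stem, option list, correct index) is a hypothetical simplification; Moodle's quiz module performs the equivalent random selection and option shuffling internally:

```python
import random

def build_random_test(item_bank, n_items, seed=None):
    """Draw a random subset of multiple-choice items and shuffle each
    item's options, tracking where the correct answer moved to.
    Each item is assumed to be (stem, [options], correct_index)."""
    rng = random.Random(seed)
    chosen = rng.sample(item_bank, n_items)  # no repeated items
    test = []
    for stem, options, correct in chosen:
        order = list(range(len(options)))
        rng.shuffle(order)
        shuffled = [options[i] for i in order]
        # the correct option now sits wherever its old index landed
        new_correct = order.index(correct)
        test.append((stem, shuffled, new_correct))
    return test
```

Because both the item draw and the option order are randomized per student, two generated tests rarely coincide, which is one of the anti-copying arguments for this format.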
Abstract:
The purpose of this paper is to analyse whether Multiple-Choice tests may be considered an interesting alternative for assessing knowledge, particularly in the Mathematics area, as opposed to traditional methods such as open-question exams. In this sense we illustrate some opinions of researchers in this area. People often perceive this kind of exam as easy to create, but this is not true: constructing well-written tests is hard work and demands writing ability from teachers. Our proposal is to analyse the construction difficulties of Multiple-Choice tests, as well as some advantages and limitations of this type of test. We also review the most frequent criticisms and concerns voiced since this objective format came into use. Finally, some examples of Multiple-Choice items in the Mathematics area are given in this context, and we illustrate how we can take advantage of and improve this kind of test.
Abstract:
A novel high-throughput and scalable unified architecture for the computation of the transform operations in video codecs for advanced standards is presented in this paper. This structure can be used as a hardware accelerator in modern embedded systems to efficiently compute all the two-dimensional 4 x 4 and 2 x 2 transforms of the H.264/AVC standard. Moreover, its highly flexible design and hardware efficiency allow it to be easily scaled in terms of performance and hardware cost to meet the specific requirements of any given video coding application. Experimental results obtained using a Xilinx Virtex-5 FPGA demonstrated the superior performance and hardware efficiency levels provided by the proposed structure, which presents a higher throughput per unit of area than other similar recently published designs targeting the H.264/AVC standard. Such results also showed that, when integrated in a multi-core embedded system, this architecture provides speedup factors of about 120x relative to pure software implementations of the transform algorithms, therefore allowing the real-time computation of all the above-mentioned transforms for Ultra High Definition Video (UHDV) sequences (7,680 x 4,320 @ 30 fps).
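For reference, the 4 x 4 forward integer core transform of H.264/AVC that such a structure accelerates can be stated in a few lines (the per-coefficient scaling and quantization stages are omitted here):

```python
# H.264/AVC 4x4 forward integer core transform matrix: Y = C * X * C^T.
# All entries are small integers, so hardware needs only adds and shifts.
C = [[1,  1,  1,  1],
     [2,  1, -1, -2],
     [1, -1, -1,  1],
     [1, -2,  2, -1]]

def matmul(A, B):
    """Plain integer matrix product for small matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def forward_4x4(X):
    """Y = C X C^T; the (0,0) coefficient equals the sum of all samples."""
    Ct = [list(r) for r in zip(*C)]
    return matmul(matmul(C, X), Ct)
```

A constant residual block produces a single nonzero DC coefficient, which is a quick sanity check for any hardware implementation of this kernel.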
Abstract:
The application of a-SiC:H/a-Si:H pinpin photodiodes as WDM demultiplexer devices has been shown to be useful in optical communications that use the WDM technique to encode multiple signals in the visible light range. This is required in short-range optical communication applications, where for cost reasons the link is provided by plastic optical fibers. Characterization of these devices has shown the presence of large photocapacitive effects. By superimposing background illumination on the pulsed channel, the device behaves as a filter, producing signal attenuation, or as an amplifier, producing signal gain, depending on the channel/background wavelength combination. We present here results, obtained by numerical simulations, on the internal electric configuration of the a-SiC:H/a-Si:H pinpin photodiode. These results attribute the frequency-domain behaviour of the device to a wavelength-tunable photo-capacitance caused by the accumulation of space charge localized at the bottom diode which, according to the Shockley-Read-Hall model, is mainly due to defect trapping. Experimental results on the measurement of the photodiode capacitance under different conditions of illumination and applied bias are also presented. The combination of these analyses permits the description of a wavelength-controlled photo-capacitance that, combined with the series and parallel resistances of the diodes, may result in the explicit definition of cut-off frequencies for capacitive frequency filters activated by the light background, or in an oscillatory resonance of photogenerated carriers between the two diodes. (C) 2013 Elsevier B.V. All rights reserved.
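As a rough illustration of how a wavelength-controlled capacitance together with a series resistance defines a cut-off frequency, the first-order RC relation can be sketched as follows. The component values in the test are hypothetical, not taken from the paper:

```python
import math

def cutoff_frequency(r_series_ohm, c_photo_farad):
    """First-order cut-off of a series-resistance/photo-capacitance
    low-pass filter: f_c = 1 / (2*pi*R*C). In the device described
    above, background light shifts C and therefore shifts f_c."""
    return 1.0 / (2.0 * math.pi * r_series_ohm * c_photo_farad)
```

Because the photo-capacitance rises or falls with the background wavelength, the same equation explains why a given channel frequency can be either passed or attenuated depending on the channel/background combination.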
Abstract:
The top velocity of high-speed trains is generally limited by the ability to supply the proper amount of energy through the pantograph-catenary interface. The deterioration of this interaction can lead to loss of contact, which interrupts the energy supply and originates arcing between the pantograph and the catenary, or to excessive contact forces that promote wear of the contacting elements. Another important issue is assessing how the front pantograph influences the dynamic performance of the rear one in trainsets with two pantographs. In this work, the influence of track and environmental conditions on the pantograph-catenary interaction is addressed, with particular emphasis on multiple-pantograph operations. These studies are performed for high-speed trains running at 300 km/h, as a function of the separation between pantographs. Such studies help identify the service conditions and the external factors influencing the contact quality on the overhead system.
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits well applications and services such as digital television and video storage, where decoder complexity is critical, but does not match the requirements of emerging applications such as visual sensor networks, where encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility of developing the so-called Wyner-Ziv video codecs, following a different coding paradigm in which it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty with regard to the more traditional predictive coding paradigm (at least under certain conditions). In the context of Wyner-Ziv video codecs, the so-called side information, a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide this review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified.
After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class, and the evaluation of some of them, leads to the important conclusion that which side information creation method provides the best rate-distortion (RD) performance depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solution for specific types of content.
Abstract:
In recent years it has become increasingly clear that the mammalian transcriptome is highly complex and includes a large number of small non-coding RNAs (sncRNAs) and long non-coding RNAs (lncRNAs). Here we review the biogenesis pathways of the three classes of sncRNAs, namely short interfering RNAs (siRNAs), microRNAs (miRNAs) and PIWI-interacting RNAs (piRNAs). These ncRNAs have been extensively studied and are involved in pathways leading to specific gene silencing and to the protection of genomes against viruses and transposons, for example. lncRNAs have also emerged as pivotal molecules in the transcriptional and post-transcriptional regulation of gene expression, which is supported by their tissue-specific expression patterns, subcellular distribution and developmental regulation; we therefore also focus our attention on their role in differentiation and development. SncRNAs and lncRNAs play critical roles in defining DNA methylation patterns, as well as in chromatin remodeling, thus having a substantial effect on epigenetics. The identification of some overlaps in their biogenesis pathways and functional roles raises the hypothesis that these molecules act in concert in vivo, creating complex regulatory networks in which cooperation with regulatory proteins is necessary. We also highlight the implications of deregulated biogenesis and gene expression of sncRNAs and lncRNAs in human diseases such as cancer.
Abstract:
A key aspect of decision-making in a disaster response scenario is the capability to evaluate multiple, simultaneously perceived goals. Current competing approaches to building decision-making agents are either mental-state based, such as BDI, or founded on decision-theoretic models, such as MDPs. BDI agents choose heuristically among several goals, while MDP agents search for a policy to achieve a specific goal. In this paper we develop a preferences model to decide among multiple simultaneous goals. We propose a pattern, following a decision-theoretic approach, to evaluate the expected causal effects of the observable and non-observable aspects that inform each decision. We focus on yes-or-no decisions (i.e., pursue or ignore a goal) and illustrate the proposal using the RoboCupRescue simulation environment.
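A toy version of such a yes-or-no (pursue or ignore) rule under a decision-theoretic reading can be sketched as follows. The scalar success-probability/reward/cost model is an assumption made for illustration, not the paper's full preferences pattern with non-observable causal effects:

```python
def pursue_goal(p_success, reward, cost, ignore_value=0.0):
    """Decision-theoretic yes-or-no rule: pursue a goal iff its
    expected utility (probability-weighted reward minus the cost of
    trying) exceeds the value of ignoring it. All quantities are
    hypothetical scalars standing in for the agent's estimates."""
    expected_pursue = p_success * reward - cost
    return expected_pursue > ignore_value
```

With several simultaneous goals, the same comparison can be applied per goal to produce the pursue/ignore subset, which is the style of decision the RoboCupRescue experiments evaluate.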
Abstract:
Low-density parity-check (LDPC) codes are nowadays one of the hottest topics in coding theory, notably due to their advantages in terms of bit error rate performance and low complexity. In order to exploit the potential of the Wyner-Ziv coding paradigm, practical distributed video coding (DVC) schemes should use powerful error-correcting codes with near-capacity performance. In this paper, new ways to design LDPC codes for the DVC paradigm are proposed and studied. The new LDPC solutions rely on merging parity-check nodes, which corresponds to reducing the number of rows in the parity-check matrix. This allows the compression ratio of the source (DCT coefficient bitplane) to change gracefully according to the correlation between the original and the side information. The proposed LDPC codes achieve good performance over a wide range of source correlations and a better RD performance when compared with the popular turbo codes.
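The row-merging idea can be sketched in a few lines: two parity-check equations are combined by addition over GF(2), so the parity-check matrix loses one row and the transmitted syndrome loses one bit, raising the compression ratio. Which rows should be merged to preserve decoding performance is the design question the paper studies and is not addressed here:

```python
def merge_check_nodes(H, i, j):
    """Merge parity checks i and j of a binary parity-check matrix H
    (list of 0/1 rows) by XOR, i.e. addition over GF(2). The merged
    check h_i + h_j is satisfied whenever both originals are, so the
    shorter syndrome is consistent with the longer one."""
    merged = [a ^ b for a, b in zip(H[i], H[j])]
    keep = [row for k, row in enumerate(H) if k not in (i, j)]
    return keep + [merged]
```

Applying this repeatedly yields a family of codes of decreasing rate from a single mother code, which is what lets the codec track the varying correlation between the source bitplane and the side information.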
Abstract:
The aim of this study is to examine the implications of the IPPA in the perception of illness and wellbeing in MS patients. Methods: This is a quasi-experimental, non-randomized study with 24 MS patients, diagnosed at least 1 year before, with an EDSS score under 7. We used the IPPA in 3 groups of eight people in 3 Portuguese hospitals (Lisbon, Coimbra and Porto). The sessions were held once a week for 90 minutes, over a period of 7 weeks. The instruments used were the question “Please classify the severity of your disease” and the Personal Wellbeing Scale (PWS), applied at the beginning (time A) and at the end (time B) of the IPPA. We used SPSS version 20; a non-parametric statistical hypothesis test (the Wilcoxon test) was used for the variable analysis. The intervention followed the recommendations of the Helsinki Declaration. Results: The results suggest that there are differences between times A and B: the perception of illness severity decreased (p<0.08), while wellbeing increased (p<0.01). Conclusions: The IPPA can play an important role in modifying the perception of disease severity and personal wellbeing.
Abstract:
A 9.9 kb DNA fragment from the right arm of chromosome VII of Saccharomyces cerevisiae has been sequenced and analysed. The sequence contains four open reading frames (ORFs) longer than 100 amino acids. One gene, PFK1, had already been cloned and sequenced, and another is probably the yeast gene coding for the beta-subunit of succinyl-CoA synthetase. The two remaining ORFs share homology with the deduced amino acid sequences of the YHR161c and YHR162w ORFs from chromosome VIII, and their physical arrangement is similar.
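A minimal sketch of the ORF criterion used in such sequence analyses, scanning the forward strand only and simplified relative to real annotation pipelines (which also check the reverse complement and handle edge cases):

```python
def find_orfs(seq, min_aa=100):
    """Scan the three forward reading frames for ORFs of at least
    `min_aa` codons: an ATG start followed by an in-frame stop codon.
    Returns (start, end) positions including the stop codon."""
    stops = {"TAA", "TAG", "TGA"}
    orfs = []
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i + 3] == "ATG":
                # extend codon by codon until the first in-frame stop
                j = i + 3
                while j + 3 <= len(seq) and seq[j:j + 3] not in stops:
                    j += 3
                if j + 3 <= len(seq) and (j - i) // 3 >= min_aa:
                    orfs.append((i, j + 3))
                i = j + 3
            else:
                i += 3
    return orfs
```

With `min_aa=100` this matches the "ORFs longer than 100 amino acids" threshold quoted in the abstract; the tiny test sequence below uses a lower threshold just to exercise the logic.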