958 results for information society, video conferencing
Abstract:
If the Internet could be used as a method of transmitting ultrasound images taken in the field quickly and effectively, it would bring tertiary consultation to even extremely remote centres. The aim of the study was to evaluate the maximum degree of compression of fetal ultrasound video-recordings that would not compromise signal quality. A digital fetal ultrasound video-recording of 90 s was produced, resulting in a file size of 512 MByte. The file was compressed to 2, 5 and 10 MByte. The recordings were viewed by a panel of four experienced observers who were blinded to the compression ratio used. Using a simple seven-point scoring system, the observers rated the quality of the clip on 17 items. The maximum compression ratio that was considered clinically acceptable was found to be 1:50-1:100. This produced final file sizes of 5-10 MByte, corresponding to a screen size of 320 x 240 pixels, running at 15 frames/s. This study expands the possibilities for providing tertiary perinatal services to the wider community.
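As a quick check of the figures quoted above, the compression ratios and resulting average bitrates follow directly from the file sizes and the 90 s duration (a minimal sketch; the function name is ours):

```python
def compression_stats(original_mb, compressed_mb, duration_s):
    """Compression ratio and average bitrate for a compressed clip."""
    ratio = original_mb / compressed_mb
    bitrate_mbps = compressed_mb * 8 / duration_s  # megabits per second
    return ratio, bitrate_mbps

# figures from the abstract: 90 s clip, 512 MByte raw, 5 and 10 MByte targets
for size_mb in (5, 10):
    ratio, mbps = compression_stats(512, size_mb, 90)
    print(f"{size_mb} MByte -> 1:{ratio:.0f}, {mbps:.2f} Mbit/s")
```

The 5 and 10 MByte targets land at ratios of roughly 1:102 and 1:51, matching the clinically acceptable 1:50-1:100 range reported.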
Abstract:
The aim of this experiment was to determine the effectiveness of two video-based perceptual training approaches designed to improve the anticipatory skills of junior tennis players. Players were assigned equally to an explicit learning group, an implicit learning group, a placebo group or a control group. A progressive temporal occlusion paradigm was used to examine, before and after training, the ability of the players to predict the direction of an opponent's service in an in-vivo on-court setting. The players responded either by hitting a return stroke or by making a verbal prediction of stroke direction. Results revealed that the implicit learning group, whose training required them to predict the speed of the serve while viewing temporally occluded video footage of the return-of-serve scenario, significantly improved their prediction accuracy after the training intervention. However, this training effect dissipated after a 32-day unfilled retention interval. The explicit learning group, who received instructions about the specific aspects of the pre-contact service kinematics that are informative with respect to service direction, did not demonstrate any significant performance improvements after the intervention. This, together with the absence of any significant improvements for the placebo and control groups, demonstrated that the improvement observed for the implicit learning group was not a consequence of either expectancy or familiarity effects.
Abstract:
Time motion analysis is extensively used to assess the demands of team sports. At present there is only limited information on the reliability of measurements made with this analysis tool. The aim of this study was to establish the reliability of an individual observer's time motion analysis of rugby union. Ten elite-level rugby players were individually tracked in Southern Hemisphere Super 12 matches using a digital video camera. The video footage was subsequently analysed by a single researcher on two occasions one month apart. The test-retest reliability was quantified as the typical error of measurement (TEM) and rated as either good (<5% TEM), moderate (5-10% TEM) or poor (>10% TEM). The total time spent in the individual movements of walking, jogging, striding, sprinting, static exertion and being stationary had moderate to poor reliability (5.8-11.1% TEM). The frequency of individual movements had good to poor reliability (4.3-13.6% TEM), while the mean duration of individual movements had moderate reliability (7.1-9.3% TEM). For the individual observer in the present investigation, time motion analysis was shown to be moderately reliable as an evaluation tool for examining the movement patterns of players in competitive rugby. These reliability values should be considered when assessing the movement patterns of rugby players within competition.
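The typical error of measurement used above is commonly computed as the standard deviation of the test-retest differences divided by the square root of 2, expressed as a percentage of the grand mean. A minimal sketch of that convention (the study may define it slightly differently; the sample times are invented):

```python
import math

def typical_error_pct(test, retest):
    """Percent typical error of measurement (TEM) for paired observations.

    TEM = sd(differences) / sqrt(2), expressed as a percentage of the
    grand mean of both trials (a common convention, assumed here).
    """
    diffs = [a - b for a, b in zip(test, retest)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    tem = sd_d / math.sqrt(2)
    grand_mean = (sum(test) + sum(retest)) / (2 * n)
    return 100 * tem / grand_mean

# hypothetical seconds spent sprinting, coded twice one month apart
t1 = [62, 55, 70, 48, 66]
t2 = [60, 58, 67, 50, 64]
print(f"{typical_error_pct(t1, t2):.1f}% TEM")
```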
Abstract:
Motion compensated frame interpolation (MCFI) is one of the most efficient solutions to generate side information (SI) in the context of distributed video coding. However, depending on the video content, it creates SI with rather significant motion compensation errors in some frame regions and rather small errors in others. In this paper, a low-complexity Intra mode selection algorithm is proposed to select the most 'critical' blocks in the WZ frame and provide the decoder with reliable data for those blocks. For each block, the novel coding mode selection algorithm estimates the encoding rate for the Intra-based and WZ coding modes and determines the best coding mode while maintaining a low encoder complexity. The proposed solution is evaluated in terms of rate-distortion performance, with improvements of up to 1.2 dB over a solution using the WZ coding mode only.
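The per-block mode decision described above can be sketched as a simple rate comparison: a block is 'critical' when its estimated WZ rate exceeds its estimated Intra rate. This is an illustrative heuristic under assumed rate estimates, not the paper's exact algorithm:

```python
def select_block_modes(intra_rates, wz_rates, budget_blocks):
    """Pick the blocks that benefit most from Intra coding.

    intra_rates / wz_rates: estimated bits per block for each mode.
    A block is 'critical' when WZ coding is estimated to cost more
    than Intra; at most budget_blocks of them are switched to Intra,
    keeping the extra encoder work bounded.
    """
    savings = [(wz - intra, i) for i, (intra, wz) in
               enumerate(zip(intra_rates, wz_rates))]
    # largest positive savings first
    critical = sorted((s for s in savings if s[0] > 0), reverse=True)
    intra_set = {i for _, i in critical[:budget_blocks]}
    return ["intra" if i in intra_set else "wz"
            for i in range(len(intra_rates))]

# block 1 is much cheaper in Intra mode, so it alone is switched
print(select_block_modes([100, 80, 120], [90, 150, 110], budget_blocks=1))
```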
Abstract:
Wyner-Ziv (WZ) video coding is a particular case of distributed video coding, the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems that exploits the source correlation at the decoder and not at the encoder as in predictive video coding. Although many improvements have been made in recent years, the performance of state-of-the-art WZ video codecs still does not reach that of state-of-the-art predictive video codecs, especially for high and complex motion video content. This is also true in terms of subjective image quality, mainly because of the considerable amount of blocking artefacts present in the decoded WZ video frames. This paper proposes an adaptive deblocking filter to improve both the subjective and objective quality of the WZ frames in a transform domain WZ video codec. The proposed filter is an adaptation of the advanced deblocking filter defined in the H.264/AVC (advanced video coding) standard to a WZ video codec. The results obtained confirm the subjective quality improvement and objective quality gains, which can go up to 0.63 dB overall for sequences with high motion content when large group of pictures (GOP) sizes are used.
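A deblocking filter of the kind adapted here smooths the pixels straddling block boundaries. The sketch below is a deliberately simplified 1-D stand-in (the real H.264/AVC filter also adapts its strength to boundary conditions and local gradients before filtering):

```python
def deblock_boundary(row, block_size=4, strength=0.5):
    """Smooth the two pixels straddling each block boundary of a 1-D row.

    Toy version of boundary filtering: pull the two boundary pixels
    towards each other by a fraction of their difference, reducing the
    visible step that blockwise coding leaves behind.
    """
    out = list(row)
    for b in range(block_size, len(row), block_size):
        p, q = row[b - 1], row[b]
        delta = strength * (q - p) / 2
        out[b - 1] = p + delta
        out[b] = q - delta
    return out

row = [10, 10, 10, 10, 50, 50, 50, 50]  # sharp blocking edge at index 4
print(deblock_boundary(row))
```

With strength 0.5, the 10/50 step at the boundary becomes 20/40, a softer transition; a real adaptive filter would first check that the step is an artefact and not a genuine image edge.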
Abstract:
Wyner-Ziv (WZ) video coding is a particular case of distributed video coding (DVC), the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems which exploits the source temporal correlation at the decoder and not at the encoder as in predictive video coding. Although some progress has been made in recent years, WZ video coding is still far from the compression performance of predictive video coding, especially for high and complex motion contents. The WZ video codec adopted in this study is based on a transform domain WZ video coding architecture with feedback channel-driven rate control, whose modules have been improved with some recent coding tools. This study proposes a novel motion learning approach to successively improve the rate-distortion (RD) performance of the WZ video codec as the decoding proceeds, making use of the already decoded transform bands to improve the decoding process for the remaining transform bands. The results obtained reveal gains of up to 2.3 dB in the RD curves against the performance of the same codec without the proposed motion learning approach, for high motion sequences and long group of pictures (GOP) sizes.
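The motion-learning idea (already-decoded data improving the side information for data still to be decoded) can be illustrated with a 1-D toy: re-estimate a motion shift from the samples decoded so far, then use it to predict the rest. The setting and names are ours, not the paper's:

```python
def refine_motion(reference, decoded_known, positions, max_shift=3):
    """Re-estimate a 1-D motion shift from already-decoded samples.

    reference: previous frame as a 1-D list.
    decoded_known: {position: value} for samples decoded so far.
    Returns the shift minimising the mismatch at those positions; the
    shifted reference then serves as side information for the samples
    still to be decoded (toy analogue of motion learning per band).
    """
    def cost(shift):
        return sum(abs(reference[p + shift] - decoded_known[p])
                   for p in positions)
    return min(range(-max_shift, max_shift + 1), key=cost)

# current frame is the reference shifted by 2; only 3 samples decoded yet
ref = [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
cur = {p: ref[p + 2] for p in (3, 4, 5)}
print(refine_motion(ref, cur, positions=(3, 4, 5)))
```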
Abstract:
Lifelong learning (LLL) has received increasing attention in recent years. It implies that learning should take place at all stages of the “life cycle and it should be life-wide, that is embedded in all life contexts from the school to the work place, the home and the community” (Green, 2002, p. 613). The 'learning society' is the vision of a society in which there are recognized opportunities for learning for every person, wherever they are and however old they happen to be. Globalization and the rise of new information technologies are some of the driving forces that cause the depreciation of specialised competences. This happens very quickly in terms of economic value; consequently, workers of all skill levels must, during their working life, have the opportunity to update “their technical skills and enhance general skills to keep pace with continuous technological change and new job requirements” (Fahr, 2005, p. 75). It is in this context that LLL tops the policy agenda of international bodies, national governments and non-governmental organizations in the field of education and training, justifying the need for LLL opportunities for the population as they face contemporary employability challenges. It is also in this context that the requirement and interest to analyse the behaviour patterns of adult learners has developed over the last few years.
Abstract:
IRMA International Conference under the theme Managing Worldwide Operations and Communications with Information Technology, May 19-23, Vancouver, British Columbia, Canada
Abstract:
7th Mediterranean Conference on Information Systems, MCIS 2012, Guimaraes, Portugal, September 8-10, 2012, Proceedings Series: Lecture Notes in Business Information Processing, Vol. 129
Abstract:
Starting from the assumption that the organizational learning process positively influences the innovative environment and has positive effects on individual, group and organizational performance, this paper analyses companies that provide knowledge-intensive products. Drawing on the initial information obtained through a survey that has been under way since May 2009, the main goal is to identify the paths to organizational learning, measure its importance and identify its effects on social and economic development. Our reflection handles two Portuguese knowledge-rich organizations based in the Lisbon metropolitan area. The paper has the following methodological structure: in the first chapter we make a theoretical contextualization of organizational learning; in the second chapter we analyse the data collected, using the SPSS statistics software, and then present the main results. Finally, starting from the main conclusions reached, we give clues for an ongoing reflection.
Abstract:
Knowledge is central to the modern economy and society. Indeed, the knowledge society has transformed the concept of knowledge and is more and more aware of the need to overcome the lack of knowledge when it has to make choices or address its problems and dilemmas. One’s knowledge is less based on exact facts and more on hypotheses, perceptions or indications. Even when we use new computational artefacts and novel methodologies for problem solving, like the use of Group Decision Support Systems (GDSSs), the question of incomplete information is marginalized in most situations. On the other hand, common sense tells us that when a decision is made it is impossible to have a perception of all the information involved and the nature of its intrinsic quality. Therefore, something has to be done in terms of the information available and the process of its evaluation. It is under this framework that a Multi-valued Extended Logic Programming language will be used for knowledge representation and reasoning, leading to a model that embodies the Quality-of-Information (QoI) and its quantification along the several stages of the decision-making process. In this way, it is possible to provide a measure of the value of the QoI that supports the decision itself. This model is presented here in the context of a GDSS for VirtualECare, a system aimed at sustaining online healthcare services.
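One common convention for quantifying QoI in multi-valued extended logic programming scores each item of knowledge in [0, 1]: known facts score 1, and unknown values score the reciprocal of the number of admissible alternatives. The sketch below assumes that convention (the paper's exact scoring may differ, and the example values are invented):

```python
def quality_of_information(known, candidates=None, domain_size=None):
    """Score the quality-of-information (QoI) of one item in [0, 1].

    Assumed convention (not necessarily the paper's):
      value known exactly            -> 1
      unknown, but restricted to a
      finite set of candidates       -> 1 / number of candidates
      completely unknown             -> 1 / size of the whole domain
    """
    if known:
        return 1.0
    if candidates:                # value is one of a known candidate set
        return 1.0 / len(candidates)
    return 1.0 / domain_size      # nothing known beyond the domain itself

# blood pressure known exactly; temperature one of three candidate readings
print(quality_of_information(True))
print(quality_of_information(False, candidates=[37.1, 37.8, 38.2]))
```

Aggregating such per-item scores over the predicates involved in a decision gives the decision-level QoI measure the abstract refers to.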
Abstract:
Low-density parity-check (LDPC) codes are nowadays one of the hottest topics in coding theory, notably due to their advantages in terms of bit error rate performance and low complexity. In order to exploit the potential of the Wyner-Ziv coding paradigm, practical distributed video coding (DVC) schemes should use powerful error correcting codes with near-capacity performance. In this paper, new ways to design LDPC codes for the DVC paradigm are proposed and studied. The new LDPC solutions rely on merging parity-check nodes, which corresponds to reducing the number of rows in the parity-check matrix. This makes it possible to change the compression ratio of the source (DCT coefficient bitplane) gracefully, according to the correlation between the original and the side information. The proposed LDPC codes achieve good performance for a wide range of source correlations and a better RD performance when compared to the popular turbo codes.
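Merging two parity-check nodes amounts to adding the corresponding rows of H modulo 2 and dropping one of them, which shortens the syndrome and so raises the compression ratio of the source. A minimal sketch over a toy matrix (illustrating the row-merging idea only, not the paper's code design):

```python
def merge_check_nodes(H, row_a, row_b):
    """Merge two parity-check rows of H by modulo-2 addition.

    Rows row_a and row_b are replaced by their GF(2) sum, so H has one
    row fewer: the syndrome gets one bit shorter (higher compression),
    and the merged constraint still holds because syndrome bits add
    over GF(2).
    """
    merged = [(a + b) % 2 for a, b in zip(H[row_a], H[row_b])]
    return [merged if i == row_a else row
            for i, row in enumerate(H) if i != row_b]

H = [[1, 1, 0, 1],
     [0, 1, 1, 0],
     [1, 0, 0, 1]]
print(merge_check_nodes(H, 0, 1))  # 3 checks -> 2 checks
```

Starting from a mother code and merging nodes step by step yields a family of codes whose rates can track the varying correlation between the source and the side information.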