9 results for Trial and error
in CORA - Cork Open Research Archive - University College Cork - Ireland
Abstract:
One problem with most three-dimensional (3D) scalar data visualization techniques is that they often fail to depict the uncertainty that accompanies the 3D scalar data; they therefore do not present the data faithfully and risk misleading users' interpretations, conclusions or even decisions. This thesis therefore focuses on uncertainty visualization for 3D scalar data: we seek to create better uncertainty visualization techniques and to establish the advantages and disadvantages of state-of-the-art techniques. To do this, we address three specific hypotheses: (1) the proposed Texture uncertainty visualization technique enables users to better identify scalar/error data, and provides reduced visual overload and more appropriate brightness than four state-of-the-art uncertainty visualization techniques, as demonstrated in a perceptual-effectiveness user study. (2) The proposed Linked Views and Interactive Specification (LVIS) uncertainty visualization technique enables users to better search for maximum/minimum scalar and error data than four state-of-the-art techniques, as demonstrated in a perceptual-effectiveness user study. (3) The proposed Probabilistic Query uncertainty visualization technique, in comparison to traditional Direct Volume Rendering (DVR) methods, enables radiologists/physicians to better identify possible alternative renderings relevant to a diagnosis and the classification probabilities associated with the materials appearing in those renderings; this leads to improved decision support for diagnosis, as demonstrated in the domain of medical imaging. Each hypothesis is tested within a unified framework consisting of three main steps. The first step is uncertainty data modeling, which clearly defines and generates certain types of uncertainty associated with given 3D scalar data. The second step is uncertainty visualization, which transforms the 3D scalar data and the associated uncertainty generated in the first step into two-dimensional (2D) images for insight, interpretation or communication. The third step is evaluation, which transforms the 2D images generated in the second step into quantitative scores according to specific user tasks and statistically analyzes the scores. As a result, the quality of each uncertainty visualization technique is determined.
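To make the three-step framework concrete, here is a minimal Python sketch. The ensemble-based uncertainty model, the maximum-intensity projection standing in for a renderer, and all function names are illustrative assumptions, not the thesis's actual implementation.

```python
# Minimal sketch of the three-step framework; all modelling choices
# here are illustrative assumptions, not the thesis's implementation.
import numpy as np

def model_uncertainty(ensemble):
    """Step 1: uncertainty data modeling -- derive a scalar field and an
    uncertainty field from an ensemble of 3D volumes (mean and per-voxel
    standard deviation)."""
    return ensemble.mean(axis=0), ensemble.std(axis=0)

def visualize(scalar, error, axis=2):
    """Step 2: uncertainty visualization -- map the 3D scalar and
    uncertainty volumes to 2D images; a maximum-intensity projection
    stands in for a real renderer here."""
    return scalar.max(axis=axis), error.max(axis=axis)

def evaluate(user_answers, ground_truth):
    """Step 3: evaluation -- turn user task responses into a
    quantitative score (here, simple accuracy)."""
    return float(np.mean(np.asarray(user_answers) == np.asarray(ground_truth)))

ensemble = np.random.rand(10, 32, 32, 32)       # 10 simulated volumes
scalar_img, error_img = visualize(*model_uncertainty(ensemble))
print(scalar_img.shape, error_img.shape)        # (32, 32) (32, 32)
print(evaluate([1, 0, 1, 1], [1, 0, 0, 1]))     # 0.75
```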
Abstract:
Recent years have witnessed rapid growth in the demand for streaming video over the Internet, exposing challenges in coping with heterogeneous device capabilities and varying network throughput. When we couple this rise in streaming with the growing number of portable devices (smartphones, tablets, laptops), we see an ever-increasing demand for high-definition video online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high loss rates, presenting a challenge for the efficient delivery of high-quality video. Additionally, mobile devices can support, and demand, a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and device heterogeneity, providing graceful changes in video quality while respecting viewer satisfaction. In this context the use of well-known scalable media streaming techniques, commonly known as scalable coding, is an attractive solution and the focus of this thesis. In this thesis we investigate the transmission of existing scalable video models over a lossy network and determine how the variation in viewable quality is affected by packet loss. This work focuses on leveraging the benefits of scalable media while reducing the effects of data loss on achievable video quality. The overall approach centres on the strategic packetisation of the underlying scalable video and on how best to utilise error resiliency to maximise viewable quality. In particular, we examine the manner in which scalable video is packetised for transmission over lossy networks and propose new techniques that reduce the impact of packet loss on scalable video by selectively choosing how to packetise the data and which data to transmit. We also exploit redundancy techniques, such as error resiliency, to enhance stream quality by ensuring smooth play-out with fewer changes in achievable video quality. The contributions of this thesis are new segmentation and encapsulation techniques that increase the viewable quality of existing scalable models by fragmenting and re-allocating the video sub-streams based on user requirements, available bandwidth and variations in loss rates. We offer new packetisation techniques that reduce the effects of packet loss on viewable quality by leveraging the increase in the number of frames per group of pictures (GOP) and by providing equality of data in every packet transmitted per GOP. These provide novel mechanisms for packetisation and error resiliency, as well as new applications for existing techniques such as Interleaving and Priority Encoded Transmission. We also introduce three new scalable coding models, which offer a balance between transmission cost and the consistency of viewable quality.
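A rough illustration of the "equality of data per packet" idea: each packet carries an equal slice of every frame in the GOP, so losing one packet degrades all frames slightly instead of dropping whole frames. The slicing scheme and names below are assumptions for illustration, not the thesis's actual packetisation.

```python
# Toy GOP packetiser: round-robin slices of every frame into every
# packet (an assumed scheme, sketching the equality-of-data principle).
def packetise_gop(frames, n_packets):
    packets = [bytearray() for _ in range(n_packets)]
    for frame in frames:                        # frame: bytes
        step = max(1, len(frame) // n_packets)
        for i in range(n_packets):
            packets[i] += frame[i * step:(i + 1) * step]
    return packets

gop = [bytes(100) for _ in range(8)]            # 8 dummy frames per GOP
pkts = packetise_gop(gop, n_packets=4)
print([len(p) for p in pkts])                   # four equal-sized packets
```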
Abstract:
Primarily, this is a thesis that studies an aspect of post-classical Irish literature. It concerns in particular the series of laments or elegies in verse form composed for Séamas Óg Mac Coitir (1689-1720), a Catholic gentleman from Carraig Tuathail (Carrigtwohill), Co. Cork, when he was convicted of the rape of Elizabeth Squibb, a woman of the Society of Friends; when he was sentenced to death; and when he was hanged in Cork City on 7 May 1720. From a historical perspective, the Cotter family (Clann Choitir) is examined as an example of a family that did not conceal its loyalty to the Stuart political cause and that bravely stood its ground as the Protestant Ascendancy consolidated its political grip from the end of the 17th century onwards. Reference is made to the sectarianism of contemporary society and to the tension between the Catholic and Protestant communities at the time. The verse is examined as a valuable source for the discontented outlook of the Catholic majority on the political structure of county Cork (and of Ireland) at the beginning of the 18th century. This concentration of verse is a literary phenomenon belonging particularly to the literary tradition of Cork. The poems have been edited and an English translation provided: this is the heart of the thesis. The edition is based on a comprehensive examination of the manuscript tradition; the editorial methodology is discussed. In the introductory commentary an attempt is made to situate the poems within the complex literary tradition; in the remainder of the scholarly apparatus, questions of language, vocabulary, metre and style are examined. Indexes and a list of sources are provided at the end of the thesis.
Abstract:
Video compression techniques enable adaptive media streaming over heterogeneous links to end-devices. Scalable Video Coding (SVC) and Multiple Description Coding (MDC) are well-known video compression techniques with distinct characteristics in terms of bandwidth efficiency and resiliency to packet loss. In this paper, we present Scalable Description Coding (SDC), a technique that balances the tradeoff between bandwidth efficiency and error resiliency without sacrificing user-perceived quality. Additionally, we propose a scheme that combines network coding with SDC to further improve error resiliency. SDC yields upwards of 25% bandwidth savings over MDC. Moreover, our scheme sustains higher quality for longer durations even at high packet loss rates.
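As a toy illustration of how network coding can add resiliency to coded descriptions, the sketch below sends an XOR "repair" packet alongside two descriptions, letting the receiver rebuild any one lost description. The XOR scheme is an illustrative assumption; the paper's actual SDC/network-coding design is more elaborate.

```python
# XOR-based repair packet: a minimal flavour of network coding
# (an assumed scheme, not the paper's actual construction).
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

d1, d2 = b"description-one!", b"description-two!"
repair = xor_bytes(d1, d2)          # transmitted alongside d1 and d2

# Suppose d2 is lost in transit: rebuild it from d1 and the repair packet.
recovered = xor_bytes(d1, repair)
assert recovered == d2
```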
Abstract:
Recent years have witnessed rapid growth in the demand for streaming video over the Internet and mobile networks, exposing challenges in coping with heterogeneous devices and varying network throughput. Adaptive schemes, such as scalable video coding, are an attractive solution but fare badly in the presence of packet loss. Techniques that use description-based streaming models, such as multiple description coding (MDC), are more suitable for lossy networks and can mitigate the effects of packet loss by increasing the error resilience of the encoded stream, but at an increased transmission byte cost. In this paper, we present our adaptive scalable streaming technique, Adaptive Layer Distribution (ALD). ALD is a novel scalable media delivery technique that optimises the tradeoff between streaming bandwidth and error resiliency. ALD is based on the principle of layer distribution, in which the critical stream data are spread amongst all packets, thus lessening the impact of network losses on quality. Additionally, ALD provides a parameterised mechanism for dynamic adaptation of the resiliency of the scalable video. Subjective testing results illustrate that our techniques and models provide consistently high-quality viewing at lower transmission cost, relative to MDC, irrespective of clip type. This highlights the benefits of selective packetisation in addition to intuitive encoding and transmission.
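The layer-distribution principle can be sketched in a few lines: each packet carries a slice of every layer (base plus enhancements), so one lost packet removes a fraction of each layer rather than an entire low layer. Layer sizes, names and the slicing below are illustrative assumptions, not ALD's actual packet format.

```python
# Assumed layer-distribution sketch: every packet gets a slice of
# every layer, so no single packet is critical on its own.
def distribute_layers(layers, n_packets):
    packets = [[] for _ in range(n_packets)]
    for layer_id, data in enumerate(layers):
        chunk = max(1, len(data) // n_packets)
        for i in range(n_packets):
            packets[i].append((layer_id, data[i * chunk:(i + 1) * chunk]))
    return packets

layers = [bytes(120), bytes(80), bytes(40)]     # base, enh 1, enh 2
for i, pkt in enumerate(distribute_layers(layers, 4)):
    print(f"packet {i}:", [(lid, len(d)) for lid, d in pkt])
```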
Abstract:
Bandwidth constriction and datagram loss are prominent issues that affect the perceived quality of streaming video over lossy networks, such as wireless networks. The use of layered video coding seems attractive as a means to alleviate these issues, but its adoption has been held back in large part by the inherent priority assigned to the critical lower layers and the consequences for quality that result from their loss. The proposed use of forward error correction (FEC) as a solution only further burdens the available bandwidth and can negate the perceived benefits of increased stream quality. In this paper, we propose Adaptive Layer Distribution (ALD), a novel scalable media delivery technique that optimises the tradeoff between streaming bandwidth and error resiliency. ALD is based on the principle of layer distribution, in which the critical stream data is spread amongst all datagrams, thus lessening the impact of network losses on quality. Additionally, ALD provides a parameterised mechanism for dynamic adaptation of the scalable video, while providing increased resilience for the highest-quality layers. Our experimental results show that ALD improves perceived quality and also reduces bandwidth demand by up to 36% in comparison with the well-known Multiple Description Coding (MDC) technique.
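A back-of-envelope simulation of why spreading layer data across all datagrams helps under loss: with one packet per layer, losing the base-layer packet destroys the whole GOP, whereas with distribution each loss only thins every layer. The loss model below (independent drops, quality proportional to data received) is a crude assumption purely for illustration.

```python
# Crude Monte Carlo comparison under an assumed independent-loss model.
import random

def layered_quality(loss_p, n_layers=4):
    # one packet per layer; layer l is useful only if layers 0..l arrive
    arrived = [random.random() > loss_p for _ in range(n_layers)]
    q = 0
    for ok in arrived:
        if not ok:
            break
        q += 1
    return q / n_layers

def distributed_quality(loss_p, n_packets=4):
    # every packet carries an equal slice of all layers
    got = sum(random.random() > loss_p for _ in range(n_packets))
    return got / n_packets

trials = 100_000
for p in (0.05, 0.2):
    a = sum(layered_quality(p) for _ in range(trials)) / trials
    b = sum(distributed_quality(p) for _ in range(trials)) / trials
    print(f"loss={p:.2f}  layered ~{a:.2f}  distributed ~{b:.2f}")
```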
Abstract:
New compensation methods are presented that can greatly reduce the slit errors (i.e. transition location errors) and interval errors induced by non-idealities in optical incremental (square-wave) encoders. An M/T-type, constant sample-time digital tachometer (CSDT) is selected for measuring the velocity of the sensor drives. Using this data, three encoder compensation techniques (two pseudoinverse-based methods and an iterative method) are presented that improve velocity measurement accuracy. The methods do not require precise knowledge of shaft velocity. During the initial learning stage of the compensation algorithm (possibly performed in situ), slit errors/interval errors are calculated through pseudoinverse-based solutions of simple approximate linear equations, which provide fast solutions, or through an iterative method that requires very little memory. Subsequent operation of the motion system uses the adjusted slit positions for more accurate velocity calculation. The theoretical analysis of encoder error compensation considers error sources such as random electrical noise and error in the estimated reference velocity. The proposed learning compensation techniques are first validated by implementing the algorithms in MATLAB, showing a 95% to 99% improvement in velocity measurement. However, the efficiency of the algorithm is observed to decrease with higher levels of non-repetitive random noise and/or with errors in the reference velocity calculation. The performance improvement in velocity measurement is also demonstrated experimentally using motor-drive systems, each of which includes a field-programmable gate array (FPGA) for CSDT counting/timing purposes and a digital signal processor (DSP). Results from open-loop velocity measurement and closed-loop servo-control applications, on three optical incremental square-wave encoders and two motor drives, are compiled. When these algorithms are implemented experimentally on different drives (with and without a flywheel) and on encoders of different resolutions, slit error reductions of 60% to 86% are obtained (typically around 80%).
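The pseudoinverse idea can be sketched as follows: stack the approximate linear equations relating measured interval deviations to the unknown slit-position errors, then solve in the least-squares sense. The difference-matrix model and constant-velocity assumption below are illustrative, not the thesis's exact formulation.

```python
# Assumed toy model: at near-constant velocity, each measured interval
# deviates from the nominal pitch by the difference of adjacent slit
# errors (plus noise). Solve for the slit errors via the pseudoinverse.
import numpy as np

rng = np.random.default_rng(0)
n_slits = 64
true_err = 0.01 * rng.standard_normal(n_slits)     # slit errors (rad)

A = np.eye(n_slits) - np.roll(np.eye(n_slits), -1, axis=1)
intervals = A @ true_err + 1e-4 * rng.standard_normal(n_slits)

est_err = np.linalg.pinv(A) @ intervals            # least-squares solve
est_err -= est_err.mean()                          # fix the free offset
print(np.corrcoef(true_err - true_err.mean(), est_err)[0, 1])  # ~1.0
```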
Abstract:
Extracting wave energy from the sea has proven very difficult, although various technologies have been developed since the 1970s. Of the proposed technologies, only a few have actually progressed to advanced stages such as sea trials or pre-commercial sea trials and engineering. One critical question is how to design an efficient wave energy converter, or how the efficiency of a wave energy converter can be improved using optimal and control technologies, since higher energy conversion efficiency is always pursued and largely decides the cost of wave energy production. In this first part of the investigation, some conventional optimal and control technologies for improving wave energy conversion are examined in terms of their physical meaning, rather than through purely complex mathematical expressions; the aim is to clarify some confusion in the development and terminology of these technologies and to aid understanding of the physics behind them. Building on this understanding of the physics and the principles of the optima, a new latching technology is proposed in which the latching duration is calculated simply from the wave period, rather than from future information or prediction; the technology could thus remove one of the technical barriers to implementing this control technology. As the examples in the text show, this new latching control technology can achieve a phase optimum in regular waves and hence significantly improve wave energy conversion. Further development of this latching control technology can be found in the second part of the investigation.
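One common textbook approximation of a period-based latching rule is sketched below: latch the device for roughly half the difference between the wave period and the device's natural period, so that velocity comes back into phase with the excitation force. This (T - Tn)/2 form is an assumption here, not necessarily the paper's exact expression.

```python
# Assumed period-based latching rule (a common approximation, hedged:
# not claimed to be the paper's exact formula).
def latching_duration(wave_period_s, natural_period_s):
    if wave_period_s <= natural_period_s:
        return 0.0          # at/below resonance, no latching is needed
    return (wave_period_s - natural_period_s) / 2.0

print(latching_duration(10.0, 4.0))   # e.g. 3.0 s of latching per half-cycle
```

The appeal of such a rule, as the abstract notes, is that it needs only the (easily measured) wave period rather than a forecast of future wave elevation.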
Abstract:
Objective: To estimate the absolute treatment effect of statin therapy on major adverse cardiovascular events (MACE; myocardial infarction, stroke and vascular death) for the individual patient aged ≥70 years. Methods: Prediction models for MACE were derived in patients aged ≥70 years with (n = 2550) and without (n = 3253) vascular disease from the "PROspective Study of Pravastatin in the Elderly at Risk" (PROSPER) trial and validated in the "Secondary Manifestations of ARTerial disease" (SMART) cohort study (n = 1442) and the "Anglo-Scandinavian Cardiac Outcomes Trial-Lipid Lowering Arm" (ASCOT-LLA) trial (n = 1893), respectively, using competing risk analysis. Prespecified predictors were various clinical characteristics including statin treatment. Individual absolute risk reductions (ARRs) for MACE at 5 and 10 years were estimated by subtracting on-treatment from off-treatment risk. Results: Individual ARRs were higher in elderly patients with vascular disease [5-year ARRs: median 5.1%, interquartile range (IQR) 4.0–6.2%; 10-year ARRs: median 7.8%, IQR 6.8–8.6%] than in patients without vascular disease (5-year ARRs: median 1.7%, IQR 1.3–2.1%; 10-year ARRs: median 2.9%, IQR 2.3–3.6%). Ninety-eight percent of patients with vascular disease had a 5-year ARR ≥2.0%, compared with 31% of patients without vascular disease. Conclusions: With a multivariable prediction model, the absolute treatment effect of a statin on MACE for individual elderly patients with and without vascular disease can be quantified. Because of the high ARRs, treating all patients is more beneficial than prediction-based treatment for secondary prevention of MACE. For primary prevention of MACE, the prediction model can be used to identify those patients who benefit meaningfully from statin therapy.
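The core calculation is simple once a risk model exists: predict the patient's risk with the statin term switched off and on, and subtract. The logistic form and coefficients below are toy assumptions purely to show the mechanics; the study itself used competing-risk models, not this.

```python
# Toy illustration of individual ARR = off-treatment risk - on-treatment
# risk. The model form and all coefficients are invented for the sketch.
import math

def five_year_mace_risk(age, sbp, vascular_disease, on_statin):
    lp = (-6.0 + 0.05 * age + 0.01 * sbp
          + 0.9 * vascular_disease - 0.35 * on_statin)
    return 1.0 / (1.0 + math.exp(-lp))      # toy logistic risk

def individual_arr(**patient):
    return (five_year_mace_risk(**patient, on_statin=0)
            - five_year_mace_risk(**patient, on_statin=1))

print(f"ARR = {individual_arr(age=76, sbp=150, vascular_disease=1):.1%}")
```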