2 results for Diagnosis and Recommendation Integrated System

in Glasgow Theses Service


Relevance:

100.00%

Abstract:

Pulmonary hypertension (PH) is a rare but serious condition that causes progressive right ventricular (RV) failure and death. PH may be idiopathic, associated with underlying connective-tissue disease or hypoxic lung disease, and is also increasingly observed in the setting of heart failure with preserved ejection fraction (HFpEF). The management of PH has been revolutionised by the recent development of disease-targeted therapies, which are beneficial in pulmonary arterial hypertension (PAH) but potentially harmful in PH due to left heart disease, so accurate diagnosis and classification of patients is essential. These PAH therapies improve exercise capacity and pulmonary haemodynamics, but their overall effect on the right ventricle remains unclear. Current practice in the UK is to assess treatment response with the 6-minute walk test and NYHA functional class, neither of which truly reflects RV function. Cardiac magnetic resonance (CMR) imaging is established as the gold standard for the evaluation of right ventricular structure and function, but it also allows non-invasive and accurate study of the left heart. The aims of this thesis were to investigate the use of CMR in the diagnosis of PH, in the assessment of treatment response, and in predicting survival in idiopathic and connective-tissue disease associated PAH. In Chapter 3, a left atrial volume (LAV) threshold of 43 ml/m² measured with CMR was able to distinguish idiopathic PAH from PH due to HFpEF (sensitivity 97%, specificity 100%). In Chapter 4, disease-targeted PAH therapy resulted in significant improvements in RV and left ventricular ejection fraction (p<0.001 and p=0.0007, respectively), RV stroke volume index (p<0.0001), and left ventricular end-diastolic volume index (p=0.0015). These corresponded to observed improvements in functional class and exercise capacity, although correlation coefficients between Δ6MWD and ΔRVEF or ΔLVEDV were low.
Finally, in Chapter 5, one-year and three-year survival were worse in CTD-PAH (75% and 53%) than in IPAH (83% and 74%), despite similar baseline clinical characteristics, lung function, pulmonary haemodynamics and treatment. Baseline right ventricular stroke volume index was an independent predictor of survival in both conditions. The presence of LV systolic dysfunction was of prognostic significance in CTD-PAH but not in IPAH, and a higher LAV was observed in CTD-PAH, suggesting a potential contribution from LV diastolic dysfunction in this group.
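The Chapter 3 result above is a single-threshold diagnostic test. As a minimal illustrative sketch (the cohort values, function names, and the direction of the cut-off are assumptions for illustration, not the thesis data), a LAV-based classifier and its sensitivity/specificity could be computed like this:

```python
# Hypothetical sketch of a LAV-threshold classifier, in the spirit of
# Chapter 3. The cohort below is invented toy data, not the study cohort.
LAV_THRESHOLD = 43.0  # ml/m^2; assume LAV <= threshold suggests idiopathic PAH

def classify(lav_ml_m2: float) -> str:
    """Label a patient 'IPAH' or 'PH-HFpEF' from indexed left atrial volume."""
    return "IPAH" if lav_ml_m2 <= LAV_THRESHOLD else "PH-HFpEF"

def sensitivity_specificity(records, positive="IPAH"):
    """records: iterable of (lav, true_label); `positive` is the positive class."""
    tp = fp = tn = fn = 0
    for lav, truth in records:
        pred = classify(lav)
        if truth == positive:
            tp += pred == positive   # correctly called positive
            fn += pred != positive   # missed positive
        else:
            fp += pred == positive   # false alarm
            tn += pred != positive   # correctly called negative
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative cohort: (indexed LAV in ml/m^2, true diagnosis)
cohort = [(30, "IPAH"), (38, "IPAH"), (55, "PH-HFpEF"), (61, "PH-HFpEF")]
sens, spec = sensitivity_specificity(cohort)
```

On this toy cohort the threshold separates the groups perfectly; the thesis reports sensitivity 97% and specificity 100% on the real data.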

Relevance:

100.00%

Abstract:

With the rise of smartphones, lifelogging devices (e.g. Google Glass) and the popularity of image-sharing websites (e.g. Flickr), users are capturing and sharing every aspect of their lives online, producing a wealth of visual content. Of these uploaded images, the majority are poorly annotated or exist in complete semantic isolation, making the process of building retrieval systems difficult, as one must first understand the meaning of an image in order to retrieve it. To alleviate this problem, many image-sharing websites offer manual annotation tools which allow the user to “tag” their photos; however, these techniques are laborious and as a result have been poorly adopted: Sigurbjörnsson and van Zwol (2008) showed that 64% of images uploaded to Flickr are annotated with < 4 tags. An entire body of research has therefore focused on the automatic annotation of images (Hanbury, 2008; Smeulders et al., 2000; Zhang et al., 2012a), where one attempts to bridge the semantic gap between an image’s appearance and its meaning, e.g. the objects present. Despite two decades of research, the semantic gap still largely exists, and as a result automatic annotation models often offer unsatisfactory performance for industrial implementation. Further, these techniques can only annotate what they see, thus ignoring the “bigger picture” surrounding an image (e.g. its location, the event, the people present, etc.). Much work has therefore focused on building photo tag recommendation (PTR) methods which aid the user in the annotation process by suggesting tags related to those already present. These works have mainly focused on computing relationships between tags based on historical images, e.g. that NY and timessquare co-exist in many images and are therefore highly correlated. However, tags are inherently noisy, sparse and ill-defined, often resulting in poor PTR accuracy: does NY refer to New York or New Year?
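The co-occurrence idea described above (NY and timessquare appearing together in many historical images) can be sketched minimally as follows; the toy corpus, tag names and ranking rule are illustrative assumptions, not the thesis method:

```python
# A minimal sketch of co-occurrence-based photo tag recommendation:
# tags that frequently appear alongside the tags already on a photo are
# suggested to the user. The historical corpus below is invented.
from collections import Counter
from itertools import combinations

historical_images = [
    {"NY", "timessquare", "night"},
    {"NY", "timessquare", "newyear"},
    {"NY", "brooklyn"},
]

# Count how often each unordered tag pair co-occurs in a single image.
cooccur = Counter()
for tags in historical_images:
    for a, b in combinations(sorted(tags), 2):
        cooccur[(a, b)] += 1

def recommend(present_tags, k=3):
    """Rank candidate tags by total co-occurrence with tags already present."""
    scores = Counter()
    for (a, b), n in cooccur.items():
        if a in present_tags and b not in present_tags:
            scores[b] += n
        elif b in present_tags and a not in present_tags:
            scores[a] += n
    return [t for t, _ in scores.most_common(k)]

print(recommend({"NY"}))  # 'timessquare' ranks first (co-occurs twice with NY)
```

The NY ambiguity the abstract highlights is visible even here: the counts alone cannot say whether a photo tagged NY needs timessquare or newyear, which is exactly the gap the contextual signals of this thesis aim to close.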
This thesis proposes the exploitation of an image’s context which, unlike textual evidence, is always present, in order to alleviate this ambiguity in the tag recommendation process. Specifically, we exploit the “what, who, where, when and how” of the image capture process in order to complement textual evidence in various photo tag recommendation and retrieval scenarios. In Part II, we combine textual, content-based (e.g. # of faces present) and contextual (e.g. day-of-the-week taken) signals for tag recommendation purposes, achieving up to a 75% improvement in precision@5 in comparison to a text-only TF-IDF baseline. We then consider external knowledge sources (i.e. Wikipedia & Twitter) as an alternative to the (slower-moving) Flickr on which to build recommendation models, showing that similar accuracy can be achieved on these faster-moving, yet entirely textual, datasets. In Part II, we also highlight the merits of diversifying tag recommendation lists, before discussing at length various problems with existing automatic image annotation and photo tag recommendation evaluation collections. In Part III, we propose three new image retrieval scenarios, namely “visual event summarisation”, “image popularity prediction” and “lifelog summarisation”. In the first scenario, we attempt to produce a ranking of relevant and diverse images for various news events by (i) removing irrelevant images such as memes and visual duplicates, before (ii) semantically clustering images based on the tweets in which they were originally posted. Using this approach, we were able to achieve over 50% precision for images in the top 5 ranks. In the second retrieval scenario, we show that by combining contextual and content-based features from images, we are able to predict whether an image will become “popular” (or not) with 74% accuracy, using an SVM classifier.
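Precision@5, the measure quoted above for both tag recommendation and event summarisation, is simple to state precisely; the tag lists below are invented for illustration:

```python
# Hedged sketch of precision@k: the fraction of the top-k ranked items
# that are relevant. The ranked list and gold set here are toy examples.
def precision_at_k(ranked, relevant, k=5):
    """ranked: ordered list of recommendations; relevant: gold-standard set."""
    top = ranked[:k]
    return sum(item in relevant for item in top) / len(top)

ranked_tags = ["timessquare", "night", "newyear", "parade", "brooklyn"]
gold_tags = {"timessquare", "newyear", "parade"}
print(precision_at_k(ranked_tags, gold_tags, k=5))  # 3 of the top 5 are relevant -> 0.6
```

A 75% improvement in precision@5 over a TF-IDF baseline therefore means the combined signals place substantially more relevant tags in those first five suggestion slots.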
Finally, in Chapter 9, we employ blur detection and perceptual-hash clustering to remove noisy images from lifelogs, before combining visual and geo-temporal signals to capture a user’s “key moments” within their day. We believe that the results of this thesis represent an important step towards building effective image retrieval models when sufficient textual content is lacking (i.e. a cold start).
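The perceptual-hash clustering mentioned for Chapter 9 rests on the idea that near-duplicate images yield hashes differing in only a few bits. A minimal average-hash ("aHash") sketch is shown below; real systems hash downsampled greyscale thumbnails, and the tiny pixel arrays and the distance threshold here are assumptions for illustration:

```python
# Illustrative average-hash sketch: bit i of the hash is 1 iff pixel i is
# brighter than the image's mean. Near-duplicates then have small Hamming
# distance. The 4x4 "images" below stand in for decoded photos.
def average_hash(pixels):
    """pixels: flat list of greyscale values; returns an integer hash."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

img_a = [10, 12, 200, 210, 11, 13, 205, 208, 9, 10, 199, 211, 12, 11, 201, 207]
img_b = [11, 12, 198, 212, 10, 14, 204, 209, 9, 11, 200, 210, 12, 10, 202, 206]
img_c = [200, 210, 10, 12, 205, 208, 11, 13, 199, 211, 9, 10, 201, 207, 12, 11]

print(hamming(average_hash(img_a), average_hash(img_b)))  # small: near-duplicate
print(hamming(average_hash(img_a), average_hash(img_c)))  # large: different scene
```

Clustering images whose pairwise hash distance falls under a small threshold then lets noisy visual duplicates be collapsed before the geo-temporal "key moment" selection step.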