925 results for Large-group methods
Abstract:
Background: Good blood pressure (BP) control reduces the risk of recurrence of stroke/transient ischaemic attack (TIA). Although there is strong evidence that BP telemonitoring helps achieve good control, none of the major trials have considered its effectiveness in stroke/TIA survivors. We therefore conducted a feasibility study for a trial of BP telemonitoring for stroke/TIA survivors with uncontrolled BP in primary care. Method: Phase 1 was a pilot trial involving 55 patients, stratified by stroke/TIA, randomised 3:1 to BP telemonitoring for 6 months or usual care. Phase 2 was a qualitative evaluation comprising semi-structured interviews with 16 trial participants who received telemonitoring and 3 focus groups with 23 members of stroke support groups and 7 carers. Results: Overall, 125 patients (60 stroke patients, 65 TIA patients) were approached and 55 (44%) were randomised, including 27 stroke patients and 28 TIA patients. Fifty-two participants (95%) attended the 6-month follow-up appointment, but one declined the second daytime ambulatory blood pressure monitoring (ABPM) measurement, resulting in a 93% completion rate for ABPM, the proposed primary outcome measure for a full trial. Adherence to telemonitoring was good: of the 40 participants who were telemonitoring, 38 continued to provide readings throughout the 6 months. There was a mean reduction of 10.1 mmHg in systolic ABPM in the telemonitoring group compared with 3.8 mmHg in the control group, which suggested the potential for a substantial effect from telemonitoring. Our qualitative analysis found that many stroke patients were concerned about their BP, and that telemonitoring increased their engagement and was easy, convenient and reassuring. Conclusions: A full-scale trial is feasible, is likely to recruit well, and should achieve good rates of compliance and follow-up.
Abstract:
Aims: Surgery for infective endocarditis (IE) is associated with high mortality. Our objectives were to describe the experience with surgical treatment of IE in Spain and to identify predictors of in-hospital mortality. Methods: Prospective cohort of 1000 consecutive patients with IE; data were collected in 26 Spanish hospitals. Results: Surgery was performed in 437 patients (43.7%). Patients treated with surgery were younger and predominantly male; they presented fewer comorbid conditions and more often had negative blood cultures and heart failure. In-hospital mortality after surgery was lower than in the medical therapy group (24.3% vs 30.7%, p = 0.02). In patients treated with surgery, endocarditis involved a native valve in 267 patients (61.1%), a prosthetic valve in 122 (27.9%), and a pacemaker lead with no clear further valve involvement in 48 (11.0%). The most common aetiologies were Staphylococcus (186, 42.6%), Streptococcus (97, 22.2%), and Enterococcus (49, 11.2%). The main indications for surgery were heart failure and severe valve regurgitation. A risk score for in-hospital mortality was developed using 7 prognostic variables with a similar predictive value (OR between 1.7 and 2.3): PALSUSE (prosthetic valve, age ≥ 70, large intracardiac destruction, Staphylococcus spp., urgent surgery, sex [female], EuroSCORE ≥ 10). In-hospital mortality ranged from 0% in patients with a PALSUSE score of 0 to 45.4% in patients with a PALSUSE score > 3. Conclusions: The prognosis of IE surgery is highly variable. The PALSUSE score could help to identify patients with higher in-hospital mortality.
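Since the abstract reports that the seven PALSUSE variables carry similar predictive value, the score can be read as a simple count of factors present. The sketch below illustrates that reading; the patient-record field names are invented for illustration, and the original paper may define the score in more detail.

```python
def palsuse_score(patient):
    """Count how many of the seven PALSUSE risk factors are present:
    Prosthetic valve, Age >= 70, Large intracardiac destruction,
    Staphylococcus spp., Urgent surgery, SEx (female), EuroSCORE >= 10."""
    factors = [
        patient["prosthetic_valve"],
        patient["age"] >= 70,
        patient["large_destruction"],
        patient["staphylococcus"],
        patient["urgent_surgery"],
        patient["sex"] == "female",
        patient["euroscore"] >= 10,
    ]
    return sum(factors)

# Hypothetical patient: prosthetic valve, age 74, Staphylococcus, EuroSCORE 11.
example = {
    "prosthetic_valve": True, "age": 74, "large_destruction": False,
    "staphylococcus": True, "urgent_surgery": False, "sex": "male",
    "euroscore": 11,
}
score = palsuse_score(example)  # 4 factors present, i.e. score > 3
```

Under this reading, the reported mortality gradient (0% at score 0 versus 45.4% at score > 3) would place this hypothetical patient in the highest-risk group.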
Abstract:
Lee, M.H. (1999). Qualitative circuit models in failure analysis reasoning. Artificial Intelligence, 111, 239-276.
Abstract:
Thomas, R., Spink, S., Durbin, J. & Urquhart, C. (2005). NHS Wales user needs study including knowledgebase tools report. Report for Informing Healthcare Strategy implementation programme. Aberystwyth: Department of Information Studies, University of Wales Aberystwyth. Sponsorship: Informing Healthcare, NHS Wales
Abstract:
Tedd, L.A., Dahl, K., Francis, S., Tetřevová, M. & Žihlavníková, E. (2002). Training for professional librarians in Slovakia by distance-learning methods: an overview of the PROLIB and EDULIB projects. Library Hi Tech, 20(3), 340-351. Sponsorship: European Union and the Open Society Institute
Abstract:
Morgan, H.; Habbal, S. R., An empirical 3D model of the large-scale coronal structure based on the distribution of Hα filaments on the solar disk, Astronomy and Astrophysics, Volume 464, Issue 1, March II 2007, pp. 357-365
Abstract:
R. Daly and Q. Shen. Methods to accelerate the learning of Bayesian network structures. In Proceedings of the 2007 UK Workshop on Computational Intelligence.
Abstract:
C. Shang and Q. Shen. Aiding classification of gene expression data with feature selection: a comparative study. Computational Intelligence Research, 1(1):68-76.
Abstract:
Oliver, A., Freixenet, J., Marti, R., Pont, J., Perez, E., Denton, E. R. E., Zwiggelaar, R. (2008). A novel breast tissue density classification framework. IEEE Transactions on Information Technology in Biomedicine, 12(1), 55-65
Abstract:
Jasimuddin, Sajjad, 'Exploring knowledge transfer mechanisms: The case of a UK-based group within a high-tech global corporation', International Journal of Information Management (2007) 27(4) pp.294-300 RAE2008
Abstract:
Douglas, Robert; Cullen, M.J.P., (2002) 'Large-Amplitude nonlinear stability results for atmospheric circulations', The Quarterly Journal of the Royal Meteorological Society 129 pp.1969-1988 RAE2008
Abstract:
BACKGROUND: The Framingham Heart Study (FHS), founded in 1948 to examine the epidemiology of cardiovascular disease, is among the most comprehensively characterized multi-generational studies in the world. Many collected phenotypes have substantial genetic contributors; yet most genetic determinants remain to be identified. Using single nucleotide polymorphisms (SNPs) from a 100K genome-wide scan, we examine the associations of common polymorphisms with phenotypic variation in this community-based cohort and provide a full-disclosure, web-based resource of results for future replication studies. METHODS: Adult participants (n = 1345) of the largest 310 pedigrees in the FHS, many biologically related, were genotyped with the 100K Affymetrix GeneChip. These genotypes were used to assess their contribution to 987 phenotypes collected in FHS over 56 years of follow-up, including: cardiovascular risk factors and biomarkers; subclinical and clinical cardiovascular disease; cancer and longevity traits; and traits in pulmonary, sleep, neurology, renal, and bone domains. We conducted genome-wide variance components linkage and population-based and family-based association tests. RESULTS: The participants were white of European descent and from the FHS Original and Offspring Cohorts (examination 1 Offspring mean age 32 ± 9 years, 54% women). This overview summarizes the methods, selected findings and limitations of the results presented in the accompanying series of 17 manuscripts. The presented association results are based on 70,897 autosomal SNPs meeting the following criteria: minor allele frequency ≥ 10%, genotype call rate ≥ 80%, Hardy-Weinberg equilibrium p-value ≥ 0.001, and satisfying Mendelian consistency. Linkage analyses are based on 11,200 SNPs and short-tandem repeats.
Results of phenotype-genotype linkages and associations for all autosomal SNPs are posted on the NCBI dbGaP website at http://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?id=phs000007. CONCLUSION: We have created a full-disclosure resource of results, posted on the dbGaP website, from a genome-wide association study in the FHS. Because we used three analytical approaches to examine the association and linkage of 987 phenotypes with thousands of SNPs, our results must be considered hypothesis-generating and need to be replicated. Results from the FHS 100K project, with NCBI web posting, provide a resource for investigators to identify high-priority findings for replication.
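The SNP inclusion criteria stated in the abstract (minor allele frequency ≥ 10%, genotype call rate ≥ 80%, Hardy-Weinberg p-value ≥ 0.001) amount to a simple per-SNP filter. A minimal sketch, with illustrative record fields that are not the actual FHS data format:

```python
def passes_qc(snp):
    """Return True if a SNP record meets the stated inclusion criteria:
    MAF >= 10%, call rate >= 80%, Hardy-Weinberg p-value >= 0.001."""
    return (snp["maf"] >= 0.10
            and snp["call_rate"] >= 0.80
            and snp["hwe_p"] >= 0.001)

# Hypothetical SNP records for illustration.
snps = [
    {"id": "rs0001", "maf": 0.25, "call_rate": 0.95, "hwe_p": 0.40},
    {"id": "rs0002", "maf": 0.05, "call_rate": 0.99, "hwe_p": 0.50},  # fails MAF
    {"id": "rs0003", "maf": 0.30, "call_rate": 0.70, "hwe_p": 0.20},  # fails call rate
]
kept = [s["id"] for s in snps if passes_qc(s)]  # only rs0001 survives
```

The abstract's fourth criterion, Mendelian consistency, depends on pedigree structure and is not expressible as a per-SNP threshold, so it is omitted here.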
Abstract:
Estimation of 3D hand pose is useful in many gesture recognition applications, ranging from human-computer interaction to automated recognition of sign languages. In this paper, 3D hand pose estimation is treated as a database indexing problem. Given an input image of a hand, the most similar images in a large database of hand images are retrieved. The hand pose parameters of the retrieved images are used as estimates for the hand pose in the input image. Lipschitz embeddings of edge images into a Euclidean space are used to improve the efficiency of database retrieval. In order to achieve interactive retrieval times, similarity queries are initially performed in this Euclidean space. The paper describes ongoing work that focuses on how to best choose reference images, in order to improve retrieval accuracy.
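The retrieval scheme the abstract describes can be sketched in miniature: embed every database item as its vector of distances to a few reference objects (a Lipschitz embedding), then answer similarity queries with cheap Euclidean comparisons in the embedded space. The numeric stand-in distance, reference choices, and data below are invented; the paper works with edge images and an image distance.

```python
def embed(x, references, dist):
    """Lipschitz embedding: the vector of distances to the references."""
    return [dist(x, r) for r in references]

def euclidean(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

dist = lambda a, b: abs(a - b)   # placeholder for an expensive image distance
database = [1.0, 4.0, 9.0, 16.0]  # stand-ins for database hand images
refs = [0.0, 10.0]                # hypothetical reference objects

# Offline: embed the whole database once.
index = [embed(x, refs, dist) for x in database]

# Online: embed the query, then rank by distance in the embedded space.
query = 8.5
q_emb = embed(query, refs, dist)
nearest = min(range(len(database)), key=lambda i: euclidean(index[i], q_emb))
best_match = database[nearest]  # 9.0, the item closest to the query
```

The speed-up comes from replacing the expensive original distance with a few coordinate comparisons at query time; as the paper notes, how the reference objects are chosen largely determines retrieval accuracy.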
Abstract:
Formal tools like finite-state model checkers have proven useful in verifying the correctness of systems of bounded size and for hardening single system components against arbitrary inputs. However, conventional applications of these techniques are not well suited to characterizing emergent behaviors of large compositions of processes. In this paper, we present a methodology by which arbitrarily large compositions of components can, if sufficient conditions are proven concerning properties of small compositions, be modeled and completely verified by performing formal verifications upon only a finite set of compositions. The sufficient conditions take the form of reductions, which are claims that particular sequences of components will be causally indistinguishable from other, shorter sequences of components. We show how this methodology can be applied to a variety of network protocol applications, including two features of the HTTP protocol, a simple active networking applet, and a proposed web cache consistency algorithm. We also discuss its applicability to framing protocol design goals and to representing systems which employ non-model-checking verification methodologies. Finally, we briefly discuss how we hope to broaden this methodology to more general topological compositions of network applications.
Abstract:
This thesis elaborates on the problem of preprocessing a large graph so that single-pair shortest-path queries can be answered quickly at runtime. Computing shortest paths is a well-studied problem, but exact algorithms do not scale well to huge real-world graphs in applications that require very short response times. The focus is on approximate methods for distance estimation, in particular on landmark-based distance indexing. This approach involves choosing some nodes as landmarks and computing (offline), for each node in the graph, its embedding, i.e., the vector of its distances from all the landmarks. At runtime, when the distance between a pair of nodes is queried, it can be quickly estimated by combining the embeddings of the two nodes. Choosing optimal landmarks is shown to be hard, and thus heuristic solutions are employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the techniques presented in this thesis is tested experimentally using five different real-world graphs with millions of edges; for a given accuracy, they require up to 250 times less space than the current approach, which selects landmarks at random. Finally, they are applied to two important problems arising naturally in large-scale graphs, namely social search and community detection.
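The landmark scheme the abstract describes can be sketched on a tiny unweighted graph: compute each node's distances to the landmarks offline by BFS, then at query time upper-bound d(u, v) by the best d(u, l) + d(l, v) over landmarks (valid by the triangle inequality). The graph and the landmark picks below are illustrative, not one of the thesis's selection strategies.

```python
from collections import deque

def bfs_distances(graph, source):
    """Single-source shortest-path distances in an unweighted graph."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def estimate(embeddings, u, v):
    """Triangle-inequality upper bound on d(u, v), combined over landmarks."""
    return min(d[u] + d[v] for d in embeddings)

graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}  # a 5-node path
landmarks = [0, 4]              # hypothetical picks (the endpoints)
embeddings = [bfs_distances(graph, l) for l in landmarks]  # offline step

approx = estimate(embeddings, 1, 3)  # upper bound 4; true distance is 2
```

The overestimate on this query shows why landmark placement matters: a landmark lying on (or near) the shortest path between the queried nodes makes the bound tight, which is what the central, well-spread selection strategies in the thesis aim for.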