924 results for NUDIST (Information retrieval system)
Abstract:
Context and objectives: Good clinical teaching is central to medical education but there is concern about maintaining it in contemporary, pressured health care environments. This paper aims to demonstrate that good clinical practice is at the heart of good clinical teaching. Methods: Seven roles are used as a framework for analysing good clinical teaching. The roles are medical expert, communicator, collaborator, manager, advocate, scholar and professional. Results: The analysis of clinical teaching and clinical practice demonstrates that they are closely linked. As experts, clinical teachers are involved in research, information retrieval and the sharing of knowledge, or teaching. Good communication with trainees, patients and colleagues defines teaching excellence. Clinicians can 'teach' collaboration by acting as role models and by encouraging learners to understand the responsibilities of other health professionals. As managers, clinicians can apply their skills to the effective management of learning resources. Similarly, skills as advocates at the individual, community and population levels can be passed on in educational encounters. The clinicians' responsibilities as scholars are most readily applied to teaching activities. Clinicians have clear roles in taking scholarly approaches to their practice and demonstrating them to others. Conclusion: Good clinical teaching is concerned with providing role models for good practice, making good practice visible and explaining it to trainees. This is the very basis of clinicians as professionals, the seventh role, and should be the foundation for the further development of clinicians as excellent clinical teachers.
Abstract:
Glioblastoma (GBM; grade IV astrocytoma) is a very aggressive form of brain cancer with poor survival and few qualified predictive markers. This study integrates experimentally validated genes that showed specific upregulation in GBM along with their protein-protein interaction information. A system-level analysis was used to construct a GBM-specific network. Computation of the networks' topological parameters showed a scale-free pattern and hierarchical organization. From the large network involving 1,447 proteins, we synthesized subnetworks and annotated them with highly enriched biological processes. A careful dissection of the functional modules, important nodes, and their connections identified two novel intermediary molecules, CSK21 and protein phosphatase 1 alpha (PP1A), connecting the two subnetworks CDC2-PTEN-TOP2A-CAV1-P53 and CDC2-CAV1-RB-P53-PTEN, respectively. Real-time quantitative reverse transcription-PCR analysis revealed CSK21 to be moderately upregulated and PP1A to be overexpressed by 20-fold in GBM tumor samples. Immunohistochemical staining revealed nuclear expression of PP1A only in GBM samples. Thus, CSK21 and PP1A, whose functions are intimately associated with cell cycle regulation, might play a key role in gliomagenesis. Cancer Res; 70(16); 6437-47. (C)2010 AACR.
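The scale-free check mentioned in this abstract can be illustrated with a minimal sketch: build a graph from a protein interaction edge list, compute the degree distribution, and estimate the power-law exponent on log-log axes. The edge list, the networkx/numpy usage and the least-squares fit below are illustrative assumptions, not the study's actual pipeline.

```python
import networkx as nx
import numpy as np

# Hypothetical PPI edge list; the paper's 1,447-protein GBM network is not reproduced here.
edges = [("CDC2", "PTEN"), ("PTEN", "TOP2A"), ("TOP2A", "CAV1"),
         ("CAV1", "P53"), ("CDC2", "CAV1"), ("RB", "P53"), ("CSK21", "CDC2")]
G = nx.Graph(edges)

# Degree distribution: a scale-free network follows P(k) ~ k^-gamma,
# i.e. an approximately straight line on log-log axes.
degrees = np.array([d for _, d in G.degree()])
ks, counts = np.unique(degrees, return_counts=True)
pk = counts / counts.sum()

# Crude power-law exponent estimate via least squares in log-log space.
slope, intercept = np.polyfit(np.log(ks), np.log(pk), 1)
print(f"estimated exponent gamma ~ {-slope:.2f}")
```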
Abstract:
The following topics were dealt with: document analysis and recognition; multimedia document processing; character recognition; document image processing; cheque processing; form processing; music processing; document segmentation; electronic documents; character classification; handwritten character recognition; information retrieval; postal automation; font recognition; Indian language OCR; handwriting recognition; performance evaluation; graphics recognition; oriental character recognition; and word recognition.
Abstract:
Ranking problems have become increasingly important in machine learning and data mining in recent years, with applications ranging from information retrieval and recommender systems to computational biology and drug discovery. In this paper, we describe a new ranking algorithm that directly maximizes the number of relevant objects retrieved at the absolute top of the list. The algorithm is a support vector style algorithm, but due to the different objective, it no longer leads to a quadratic programming problem. Instead, the dual optimization problem involves l1,∞ constraints; we solve this dual problem using the recent l1,∞ projection method of Quattoni et al. (2009). Our algorithm can be viewed as an l∞-norm extreme of the lp-norm based algorithm of Rudin (2009) (albeit in a support vector setting rather than a boosting setting); thus we refer to the algorithm as the 'Infinite Push'. Experiments on real-world data sets confirm the algorithm's focus on accuracy at the absolute top of the list.
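To make the 'push' objective concrete, the sketch below evaluates an l∞-style penalty for a linear scorer: for each irrelevant item, accumulate hinge losses against all relevant items, then take the maximum over irrelevant items, so the single worst-offending negative dominates the objective. This is a toy illustration of the objective's spirit on made-up data; the paper's actual support-vector dual and l1,∞ projection step are not reproduced.

```python
import numpy as np

# Hypothetical data: rows are feature vectors; positives are the relevant objects.
rng = np.random.default_rng(0)
X_pos = rng.normal(1.0, 1.0, size=(20, 5))
X_neg = rng.normal(-1.0, 1.0, size=(80, 5))

def infinite_push_loss(w, X_pos, X_neg):
    """l-infinity 'push' penalty: the largest hinge-accumulated loss that
    any single negative incurs against the positives. A sketch of the
    objective's spirit, not the paper's exact formulation."""
    s_pos = X_pos @ w                      # scores of relevant objects
    s_neg = X_neg @ w                      # scores of irrelevant objects
    margins = s_pos[:, None] - s_neg[None, :]
    pair_loss = np.maximum(0.0, 1.0 - margins)   # hinge on each pos/neg pair
    # Per-negative accumulated loss, then the max (l-infinity norm over negatives).
    return pair_loss.sum(axis=0).max()

w = rng.normal(size=5)
print(infinite_push_loss(w, X_pos, X_neg))
```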
Abstract:
In this paper we propose a postprocessing technique for a spectrogram diffusion based harmonic/percussion decomposition algorithm. The proposed technique removes harmonic instrument leakages in the percussion-enhanced outputs of the baseline algorithm. The technique uses median filtering and an adaptive detection of percussive segments in subbands, followed by piecewise signal reconstruction using envelope properties, to ensure that percussion is enhanced while harmonic leakages are suppressed. A new binary mask is created for the percussion signal which, when applied to the original signal, improves harmonic versus percussion separation. We compare our algorithm with two recent techniques and show that, on a database of polyphonic Indian music, the postprocessing algorithm improves the harmonic versus percussion decomposition significantly.
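The median-filtering and binary-masking ideas at the core of such decompositions can be sketched as follows: harmonic energy is smooth along time while percussive energy is smooth along frequency, so median filters in the two directions yield a binary percussion mask on the spectrogram. This is a generic Fitzgerald-style baseline assuming scipy; the paper's spectrogram-diffusion baseline and its subband postprocessing are more elaborate.

```python
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import median_filter

def hp_separate(x, fs, nperseg=1024, kernel=17):
    """Median-filtering harmonic/percussive split: a sketch of the masking
    idea the postprocessing refines, not the paper's exact algorithm."""
    f, t, Z = stft(x, fs, nperseg=nperseg)
    S = np.abs(Z)
    # Harmonic content is smooth across time frames; percussive across frequency bins.
    H = median_filter(S, size=(1, kernel))   # median along time
    P = median_filter(S, size=(kernel, 1))   # median along frequency
    mask_p = P > H                           # binary percussion mask
    _, x_perc = istft(Z * mask_p, fs, nperseg=nperseg)
    _, x_harm = istft(Z * ~mask_p, fs, nperseg=nperseg)
    return x_harm, x_perc
```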
Abstract:
We propose an iterative algorithm to detect transient segments in audio signals. The short-time Fourier transform (STFT) is used to detect rapid local changes in the audio signal. The algorithm iterates two steps: (a) calculating a function of the STFT and (b) building a transient signal. A dynamic thresholding scheme is used to locate the potential positions of transients in the signal. The iterative procedure ensures that genuine transients are built up while localised spectral noise is suppressed using an energy criterion. The extracted transient signal is then compared to a ground truth dataset. The algorithm performed well on two databases: on the EBU-SQAM database of monophonic sounds it achieved an F-measure of 90%, while on our database of polyphonic audio it achieved an F-measure of 91%. This technique is being used as a preprocessing step for a tempo analysis algorithm and a TSR (Transients + Sines + Residue) decomposition scheme.
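A common realisation of STFT-based transient detection with a dynamic threshold is spectral flux compared against a sliding local median, as sketched below. The flux function, window sizes and threshold rule are illustrative assumptions; the paper's iterative build-up and energy criterion are not reproduced.

```python
import numpy as np
from scipy.signal import stft

def detect_transients(x, fs, nperseg=512, win=31, bias=1.5):
    """Spectral-flux transient detector with a sliding-median dynamic
    threshold; a generic sketch, not the paper's iterative procedure."""
    _, t, Z = stft(x, fs, nperseg=nperseg)
    S = np.abs(Z)
    # Half-wave rectified frame-to-frame magnitude increase, summed over bins.
    flux = np.maximum(0.0, np.diff(S, axis=1)).sum(axis=0)
    # Dynamic threshold: local median over `win` frames, scaled by `bias`.
    half = win // 2
    pad = np.pad(flux, half, mode="edge")
    thresh = bias * np.array([np.median(pad[i:i + win])
                              for i in range(flux.size)])
    return t[1:][flux > thresh]   # times (s) of frames flagged as transient
```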
Abstract:
Functions are important in designing. However, several issues hinder progress with the understanding and usage of functions: the lack of a clear and overarching definition of function, the lack of overall justifications for the inevitability of the multiple views of function, and the scarcity of systematic attempts to relate these views to one another. To help resolve these, the objectives of this research are to propose a common definition of function that underlies the multiple views in the literature, and to identify and validate the views of function that are logically justified to be present in designing. Function is defined as a change intended by designers between two scenarios: before and after the introduction of the design. A framework is proposed that comprises the above definition of function and an empirically validated model of designing (an extended generate, evaluate, modify and select model of state change, together with an action, part, phenomenon, input, organ and effect model of causality, known as GEMS of SAPPhIRE), comprising the views of activity, outcome, requirement-solution-information, and system-environment. The framework is used to identify the logically possible views of function in the context of designing and is validated by comparing these with the views of function in the literature. Describing the different views of function using the proposed framework should enable comparisons and determine relationships among the various views, leading to better understanding and usage of functions in designing.
Abstract:
Learning from Positive and Unlabelled examples (LPU) has emerged as an important problem in data mining and information retrieval applications. Existing techniques are not ideally suited to real-world scenarios where the datasets are linearly inseparable: they either build linear classifiers, or their non-linear classifiers fail to achieve the desired performance. In this work, we propose to extend maximum margin clustering ideas and present an iterative procedure to design a non-linear classifier for LPU. In particular, we build a least squares support vector classifier, suitable for handling this problem due to the symmetry of its loss function. Further, we present techniques for appropriately initializing the labels of unlabelled examples and for enforcing the ratio of positive to negative examples while obtaining these labels. Experiments on real-world datasets demonstrate that the non-linear classifier designed using the proposed approach gives significantly better generalization performance than existing relevant approaches for LPU.
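The iterative label-assignment idea can be sketched as self-training with a kernel classifier while pinning the assumed positive fraction of the unlabelled set. The sketch below substitutes scikit-learn's RBF SVC for the paper's least squares support vector classifier, and pos_ratio and the stopping rule are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def lpu_iterate(X_pos, X_unl, pos_ratio=0.3, n_iter=10):
    """Iterative non-linear LPU sketch: initialize unlabelled labels as
    negative, then alternate fit/relabel while enforcing an assumed
    positive ratio. A stand-in for the paper's procedure."""
    y_unl = -np.ones(len(X_unl))           # start: all unlabelled treated as negative
    X = np.vstack([X_pos, X_unl])
    n_pos_target = int(pos_ratio * len(X_unl))
    for _ in range(n_iter):
        y = np.concatenate([np.ones(len(X_pos)), y_unl])
        clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
        # Relabel: the n_pos_target highest-scoring unlabelled points become
        # positive, the rest negative (this enforces the ratio constraint).
        scores = clf.decision_function(X_unl)
        order = np.argsort(-scores)
        new_y = -np.ones(len(X_unl))
        new_y[order[:n_pos_target]] = 1.0
        if np.array_equal(new_y, y_unl):   # labels stable: stop iterating
            break
        y_unl = new_y
    return clf
```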
Abstract:
The common and specific features of the most widely used personal bibliographic reference database managers are examined: Reference Manager, EndNote, ProCite, RefWorks and EndNote Web. The aspects analysed are: data entry, authority control, global editing commands, customization of certain aspects of the databases, export of references, display of records, insertion of bibliographic citations, and automatic generation of bibliographies.