877 results for pacs: equipment and software evaluation methods


Relevance:

100.00%

Publisher:

Abstract:

This thesis investigates how web search evaluation can be improved using historical interaction data. Modern search engines combine offline and online evaluation approaches in a sequence of steps that a tested change needs to pass through to be accepted as an improvement and subsequently deployed. We refer to such a sequence of steps as an evaluation pipeline. In this thesis, we consider the evaluation pipeline to contain three sequential steps: an offline evaluation step, an online evaluation scheduling step, and an online evaluation step. We show that historical user interaction data can aid in improving the accuracy or efficiency of each step of the web search evaluation pipeline, and that these improvements increase the overall efficiency of the entire pipeline.

Firstly, we investigate how user interaction data can be used to build accurate offline evaluation methods for query auto-completion mechanisms. We propose a family of offline evaluation metrics for query auto-completion that represent the effort the user has to spend in order to submit their query. The parameters of our proposed metrics are trained against a set of user interactions recorded in the search engine’s query logs. In our experimental study, we observe that our proposed metrics are significantly more correlated with an online user satisfaction indicator than the metrics proposed in the existing literature. Hence, fewer changes that pass the offline evaluation step will later be rejected at the online evaluation step, which increases the efficiency of the entire evaluation pipeline.

Secondly, we formulate the problem of the optimised scheduling of online experiments. We tackle this problem with a greedy scheduler that prioritises the evaluation queue according to the predicted likelihood of success of each experiment.
This predictor is trained on a set of online experiments, and uses a diverse set of features to represent an online experiment. Our study demonstrates that a higher number of successful experiments per unit of time can be achieved by deploying such a scheduler at the second step of the evaluation pipeline, and consequently that the efficiency of the evaluation pipeline can be increased.

Next, to improve the efficiency of the online evaluation step, we propose the Generalised Team Draft interleaving framework. Generalised Team Draft treats both the interleaving policy (how often a particular combination of results is shown) and the click scoring (how important each click is) as parameters in a data-driven optimisation of the interleaving sensitivity. Further, Generalised Team Draft is applicable beyond domains with a list-based representation of results, e.g. in domains with a grid-based representation, such as image search. Our study, using datasets of interleaving experiments performed in both the document and image search domains, demonstrates that Generalised Team Draft achieves the highest sensitivity. A higher sensitivity indicates that interleaving experiments can be deployed for a shorter period of time or on a smaller sample of users. Importantly, Generalised Team Draft optimises the interleaving parameters w.r.t. historical interaction data recorded in interleaving experiments.

Finally, we propose applying sequential testing methods to reduce the mean deployment time of interleaving experiments. We adapt two sequential tests for interleaving experimentation and demonstrate that they yield a significant decrease in experiment duration. The highest efficiency is achieved by the sequential tests that adjust their stopping thresholds using historical interaction data recorded in diagnostic experiments.
Our further experimental study demonstrates that cumulative gains in online experimentation efficiency can be achieved by combining the interleaving sensitivity optimisation approaches, including Generalised Team Draft, with the sequential testing approaches. Overall, the central contributions of this thesis are the proposed approaches to improving the accuracy or efficiency of the steps of the evaluation pipeline: offline evaluation frameworks for query auto-completion, an approach for the optimised scheduling of online experiments, a general framework for efficient online interleaving evaluation, and a sequential testing approach for online search evaluation. The experiments in this thesis are based on massive real-life datasets obtained from Yandex, a leading commercial search engine, and demonstrate the potential of the proposed approaches to improve the efficiency of the evaluation pipeline.
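The interleaving step described above builds on the classic Team Draft scheme, which Generalised Team Draft extends with learned policy and click-scoring parameters. A minimal sketch of the classic scheme follows; the document identifiers and the uniform click credit are illustrative assumptions, not the thesis's optimised variant:

```python
import random

def team_draft_interleave(ranking_a, ranking_b, rng=random):
    """Classic Team Draft interleaving: the two rankers alternate picks,
    with ties broken by a coin toss. Returns the interleaved list and,
    for each position, which ranker ('A' or 'B') contributed it."""
    interleaved, team = [], []
    all_docs = set(ranking_a) | set(ranking_b)
    while len(interleaved) < len(all_docs):
        a_turn = (team.count('A') < team.count('B') or
                  (team.count('A') == team.count('B') and rng.random() < 0.5))
        source = ranking_a if a_turn else ranking_b
        # pick the highest-ranked result not yet shown
        doc = next((d for d in source if d not in interleaved), None)
        if doc is None:  # this ranker is exhausted; take from the other
            source = ranking_b if a_turn else ranking_a
            doc = next(d for d in source if d not in interleaved)
            a_turn = not a_turn
        interleaved.append(doc)
        team.append('A' if a_turn else 'B')
    return interleaved, team

def credit_clicks(team, clicked_positions):
    """Uniform click scoring: each click credits the ranker that contributed
    the clicked result. Generalised Team Draft learns such weights instead."""
    a = sum(1 for p in clicked_positions if team[p] == 'A')
    b = sum(1 for p in clicked_positions if team[p] == 'B')
    return a, b
```

Over many sessions, the ranker whose results attract more credited clicks is declared the winner of the interleaving comparison.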

Relevance:

100.00%

Publisher:

Abstract:

Purpose: Research suggests that nurses and nursing students lack competence in basic electrocardiogram (ECG) interpretation. Self-efficacy is considered to be paramount in the development of one's competence. The aim of this study was to develop and psychometrically evaluate a scale to assess self-efficacy of nursing students in basic ECG interpretation. Materials and methods: Observational cross-sectional study with a convenience sample of 293 nursing students. The basic ECG interpretation self-efficacy scale (ECG-SES) was developed and psychometrically tested in terms of reliability (internal consistency and temporal stability) and validity (content, criterion and construct). The ECG-SES’ internal consistency was explored by calculating the Cronbach's alpha coefficient (α); its temporal stability was investigated by calculating the Pearson correlation coefficient (r) between the participants’ results on a test–retest separated by a 4-week interval. The content validity index of the items (I-CVI) and the scale (S-CVI) was calculated based on the reviews of a panel of 16 experts. Criterion validity was explored by correlating the participants’ results on the ECG-SES with their results on the New General Self-Efficacy Scale (NGSE). Construct validity was investigated by performing Principal Component Analysis (PCA) and known-groups analysis. Results: The excellent reliability of the ECG-SES was evidenced by its internal consistency (α = 0.98) and its temporal stability at the 4-week re-test (r = 0.81; p < 0.01). The ECG-SES’ content validity was also excellent (all items’ I-CVI = 0.94–1; S-CVI = 0.99). A strong, significant correlation between the NGSE and the ECG-SES (r = 0.70; p < 0.01) showed its criterion validity. Corroborating the ECG-SES’ construct validity, PCA revealed that all its items loaded on a single factor that explained 74.6% of the total variance found.
Furthermore, known-groups analysis showed the ECG-SES’ ability to detect expected differences in self-efficacy between groups with different training experiences (p < 0.01). Conclusion: The ECG-SES showed excellent psychometric properties for measuring the self-efficacy of nursing students in basic ECG interpretation.
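The internal consistency reported above (α = 0.98) is Cronbach's alpha, computed from item and total-score variances. A minimal sketch of the standard formula follows; the scores used here are invented, not the study's data:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for internal consistency.
    `scores`: one list of item scores per respondent.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))"""
    k = len(scores[0])                      # number of items
    def var(xs):                            # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

When items are perfectly correlated, the item variances sum to half of the total variance (for two items) and alpha reaches 1.0; values near 0.98, as in the study, indicate near-redundant items.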

Relevance:

100.00%

Publisher:

Abstract:

Purpose: To develop a high-performance liquid chromatography (HPLC) fingerprint method for the quality control and origin discrimination of Gastrodiae rhizoma. Methods: Twelve batches of G. rhizoma collected from Sichuan, Guizhou and Shanxi provinces in China were used to establish the fingerprint. The chromatographic peak of gastrodin was taken as the reference peak, and all sample separations were performed on an Agilent C18 (250 mm × 4.6 mm, 5 μm) column at a column temperature of 25 °C. The mobile phase was acetonitrile/0.8 % aqueous phosphate solution (in a gradient elution mode) at a flow rate of 1 mL/min. The detection wavelength was 270 nm. The method was validated as per the guidelines of the Chinese Pharmacopoeia. Results: The chromatograms of the samples showed 11 common peaks, of which no. 4 was identified as that of gastrodin. Data for the samples were analyzed statistically using similarity analysis and hierarchical cluster analysis (HCA). The similarity indices between the reference chromatogram and the samples’ chromatograms were all > 0.80: 0.854 - 0.885 for G. rhizoma from Guizhou, 0.915 - 0.930 from Shanxi, and 0.820 - 0.848 from Sichuan. The samples could be divided into three clusters at a rescaled distance of 7.5: S1 - S4 as cluster 1, S5 - S8 as cluster 2, and the others grouped into cluster 3. Conclusion: The findings indicate that HPLC fingerprinting is appropriate for quality control and origin discrimination of G. rhizoma.
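The similarity index in such fingerprint studies is commonly the cosine (congruence) coefficient between peak-area vectors, computed against a reference chromatogram. This is a minimal sketch of that common approach, assuming cosine similarity and a mean-vector reference; the paper does not specify its exact formula, and the peak areas below are invented:

```python
import math

def cosine_similarity(x, y):
    """Congruence/cosine coefficient between two fingerprint vectors,
    e.g. the areas of the 11 common peaks of two chromatograms."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

def similarity_to_reference(samples):
    """Build a reference fingerprint as the per-peak mean over all samples,
    then score each sample's similarity against it."""
    k = len(samples[0])
    ref = [sum(s[i] for s in samples) / len(samples) for i in range(k)]
    return [cosine_similarity(s, ref) for s in samples]
```

Samples whose peak-area profiles differ only in overall scale score 1.0; shape differences between batches pull the index below 1, which is what the reported per-province ranges (0.82–0.93) reflect.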

Relevance:

100.00%

Publisher:

Abstract:

The aim of this study was to characterise and quantify the fungal fragment propagules derived and released from several fungal species (Penicillium, Aspergillus niger and Cladosporium cladosporioides) using different generation methods and different air velocities over the colonies. Real-time fungal spore fragmentation was investigated using an Ultraviolet Aerodynamic Particle Sizer (UVAPS) and a Scanning Mobility Particle Sizer (SMPS). The study showed that there were significant differences (p < 0.01) in the fragmentation percentage between different air velocities for the three generation methods, namely the direct, the fan and the fungal spore source strength tester (FSSST) methods. The percentage of fragmentation also proved to be dependent on fungal species. The study found that there was no fragmentation for any of the fungal species at an air velocity ≤ 0.4 m/s for any method of generation. Fluorescent signals, as well as mathematical determination, also showed that the fungal fragments were derived from spores. Correlation analysis showed that the number of released fragments measured by the UVAPS under controlled conditions can be predicted on the basis of the number of spores, for Penicillium and Aspergillus niger, but not for Cladosporium cladosporioides. The fluorescence percentage of fragment samples was found to be significantly different to that of non-fragment samples (p < 0.0001) and the fragment sample fluorescence was always less than that of the non-fragment samples. Size distribution and concentration of fungal fragment particles were investigated qualitatively and quantitatively, by both UVAPS and SMPS, and it was found that the UVAPS was more sensitive than the SMPS for measuring small sample concentrations, and the results obtained from the UVAPS and SMPS were not identical for the same samples.
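The fragmentation percentage discussed above can be illustrated as a size-resolved count ratio over the particle sizer's bins: particles below a size cutoff are counted as fragments, the rest as intact spores. The bins and the 1.0 μm cutoff below are invented for illustration and are not the study's values:

```python
def fragmentation_percentage(size_bins_um, counts, cutoff_um=1.0):
    """Percentage of detected particles smaller than `cutoff_um` (treated
    as fragments) relative to all particles (fragments + intact spores).
    `size_bins_um`: bin midpoint diameters; `counts`: particles per bin."""
    fragments = sum(c for d, c in zip(size_bins_um, counts) if d < cutoff_um)
    total = sum(counts)
    return 100.0 * fragments / total if total else 0.0
```

With such a definition, a sample whose sub-cutoff bins hold 20 of 100 detected particles has a fragmentation percentage of 20%, and a sample with no sub-cutoff counts scores 0%, matching the study's observation of no fragmentation at low air velocities.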

Relevance:

100.00%

Publisher:

Abstract:

Although the service-oriented paradigm has been well established in the technical domain for quite some time now, service governance is still considered a research gap. To ensure adequate governance, there is a necessity to manage services as first-class assets throughout the lifecycle. Now that the concept of service-orientation is also increasingly applied on the business level to structure an organisation’s capabilities, the problem has become an even bigger challenge. This paper presents a generic business and software service lifecycle and aligns it with the common management layers in organisations. Using service analysis as an example, it moreover illustrates how activities in the service lifecycle may vary on lower levels of granularity depending on the focus on business or software services.

Relevance:

100.00%

Publisher:

Abstract:

Security-critical communications devices must be evaluated to the highest possible standards before they can be deployed. This process includes tracing potential information flow through the device's electronic circuitry, for each of the device's operating modes. Increasingly, however, security functionality is being entrusted to embedded software running on microprocessors within such devices, so new strategies are needed for integrating information flow analyses of embedded program code with hardware analyses. Here we show how standard compiler principles can augment high-integrity security evaluations to allow seamless tracing of information flow through both the hardware and software of embedded systems. This is done by unifying input/output statements in embedded program execution paths with the hardware pins they access, and by associating significant software states with corresponding operating modes of the surrounding electronic circuitry.
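The key move described above, unifying program I/O statements with the hardware pins they access, can be illustrated with a toy taint-propagation pass over a straight-line program. The statement forms and pin names here are invented for illustration and are not the paper's notation:

```python
# Toy information-flow trace: taint flows from input pins through
# assignments to output pins. 'read' binds a variable to an input pin,
# 'assign' propagates taint, 'write' sends a variable to an output pin.

def trace_information_flow(program, input_pins):
    """`program`: list of ('read', var, pin) | ('assign', dst, [srcs])
    | ('write', var, pin). Returns the set of output pins that receive
    data derived from the given input pins."""
    tainted = set()            # variables carrying data from input_pins
    leaked = set()             # output pins reached by tainted data
    for stmt in program:
        op = stmt[0]
        if op == 'read' and stmt[2] in input_pins:
            tainted.add(stmt[1])
        elif op == 'assign' and any(s in tainted for s in stmt[2]):
            tainted.add(stmt[1])
        elif op == 'write' and stmt[1] in tainted:
            leaked.add(stmt[2])
    return leaked

# Hypothetical embedded program: data read from a UART pin reaches the
# SPI output pin; the LED status write carries no sensitive data.
program = [
    ('read', 'key', 'PIN_UART_RX'),
    ('assign', 'buf', ['key']),
    ('assign', 'status', ['ok']),
    ('write', 'buf', 'PIN_SPI_TX'),
    ('write', 'status', 'PIN_LED'),
]
```

A per-operating-mode analysis, as the paper describes, would rerun such a trace for each mode's execution paths and the pin configuration active in that mode.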

Relevance:

100.00%

Publisher:

Abstract:

To meet the new challenges of Enterprise Systems, which essentially go beyond the initial implementation, contemporary organizations seek business process experts with software skills. Despite healthy demand from industry for such expertise, recent studies reveal that most Information Systems (IS) graduates are ill-equipped to meet the challenges of modern organizations. This paper shares insights and experiences from a course that is designed to provide a business-process-centric view of a market-leading Enterprise System. The course, designed for both undergraduate and graduate students, uses two common business processes in a case study that employs both sequential and explorative exercises. Student feedback, gained through two longitudinal surveys across two phases of the course, shows promising signs for the teaching approach.