Abstract:
N-gram analysis is an approach that investigates the structure of a program using bytes, characters, or text strings. A key issue with N-gram analysis is feature selection amidst the explosion of features that occurs as N is increased. The experiments in this paper represent programs as operational code (opcode) density histograms obtained through dynamic analysis. A support vector machine is used to create a reference model, against which two feature-reduction methods are evaluated: 'area of intersect' and 'subspace analysis using eigenvectors'. The findings show that the relationships between features are complex and that simple statistical filtering does not provide a viable approach. However, eigenvector subspace analysis produces a suitable filter.
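As an illustration of the eigenvector subspace idea described in this abstract, the following minimal sketch projects opcode density histograms onto their leading eigenvectors before training a support vector machine. It is not the authors' implementation; the data, the use of scikit-learn's PCA and SVC, and the 95% variance threshold are assumptions for illustration only.

```python
# Hypothetical sketch: eigenvector subspace filtering of opcode density
# histograms followed by an SVM reference model. All data are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 150))            # 200 programs x 150 opcode density bins (placeholder)
X /= X.sum(axis=1, keepdims=True)     # normalise each histogram to sum to 1
y = rng.integers(0, 2, size=200)      # placeholder class labels

# Project onto the eigenvectors of the feature covariance matrix (subspace analysis),
# keeping enough components to explain 95% of the variance, then classify with an SVM.
model = make_pipeline(PCA(n_components=0.95), SVC(kernel="rbf"))
print("Mean cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```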
Abstract:
A new approach to spectroscopy of laser-induced proton beams using radiochromic film (RCF) is presented. This approach allows primary standards of absorbed dose-to-water, as used in radiotherapy, to be transferred to the calibration of GafChromic HD-810 and EBT in a 29 MeV proton beam from the Birmingham cyclotron. These films were then irradiated in a common stack configuration using the TARANIS Nd:Glass multi-terawatt laser at Queen's University Belfast, which can accelerate protons to 10-12 MeV, and a depth-dose curve was measured from a collimated beam. Previous work characterizing the relative effectiveness (RE) of GafChromic film as a function of energy was incorporated into Monte Carlo depth-dose curves calculated using FLUKA. A Bragg peak (BP) "library" for proton energies 0-15 MeV was generated, both with and without the RE function. These depth-response curves were iteratively summed in a FORTRAN routine to solve for the measured RCF depth-dose using a simple direct search algorithm. By comparing the resultant spectra from the two BP libraries, it was found that including the RE function increased the total number of protons by about 50%. To account for the energy loss due to a 20 μm aluminum filter in front of the film stack, FLUKA was used to create a matrix containing the energy loss transformations for each individual energy bin. Multiplication by the pseudo-inverse of this matrix resulted in "up-shifting" protons to higher energies. Applying this correction to two laser shots gave further increases in the total number of protons, N, of 31% and 56%. Failure to consider the relative response of RCF to lower proton energies, and neglecting energy losses in a stack filter foil, can potentially lead to significant underestimates of the total number of protons in RCF spectroscopy of the low-energy protons produced by laser ablation of thin targets.
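A minimal sketch of the filter-foil correction step described above, assuming a hypothetical energy-loss matrix R whose columns describe how each incident energy bin is redistributed after the aluminum filter; the matrix values, binning, and spectrum below are placeholders, not FLUKA output or the authors' data.

```python
# Hypothetical sketch: "up-shifting" a measured proton spectrum to incident
# energies via the pseudo-inverse of an energy-loss transformation matrix.
import numpy as np

n_bins = 15                                     # e.g. 1 MeV bins spanning 0-15 MeV (assumed)

# R[i, j]: fraction of protons incident in energy bin j that are recorded in bin i
# after losing energy in the filter (placeholder values, not FLUKA output).
R = np.zeros((n_bins, n_bins))
for j in range(n_bins):
    R[max(j - 1, 0), j] += 0.8                  # most protons drop roughly one bin
    R[j, j] += 0.2                              # some remain in their incident bin
R[0, 0] = 0.3                                   # lowest-energy protons mostly stop in the foil

true_incident = 1e9 * np.exp(-np.arange(n_bins) / 4.0)   # assumed exponential spectrum
measured = R @ true_incident                              # what the film stack would record

# Multiply by the Moore-Penrose pseudo-inverse to recover the incident spectrum.
recovered = np.linalg.pinv(R) @ measured
print(f"Protons recorded behind filter: {measured.sum():.3e}")
print(f"Protons after correction:       {recovered.sum():.3e}")  # ~ true_incident.sum()
```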
Abstract:
Public health policy for arsenic needs to better reflect the ability to detect the risk(s).
Abstract:
Under EU legislation, total arsenic levels in drinking water should not exceed 10 µg l⁻¹, while in the US this figure is set at 10 µg l⁻¹ inorganic arsenic. All rice milk samples analysed in a supermarket survey (n = 19) would fail the EU limit, with up to 3 times this concentration recorded, while of the subset that had arsenic species determined (n = 15), 80% had inorganic arsenic levels above 10 µg l⁻¹, with the remaining 3 samples approaching this value. It is a point for discussion whether rice milk should be seen as a water substitute or as a food; there are no corresponding EU or US food standards, highlighting the disparity between water and food regulations in this respect.
Abstract:
Objectives
To explore the role of evidence of effectiveness when making decisions about over-the-counter (OTC) medication and to ascertain whether evidence-based medicine training raised awareness in decision-making. Additionally, this work aimed to complement the findings of a previous study because all participants in this current study had received training in evidence-based medicine (unlike the previous participants).
Methods
Following ethical approval and an e-mailed invitation, face-to-face, semi-structured interviews were conducted with newly registered pharmacists (who had received training in evidence-based medicine as part of their MPharm degree) to discuss the role of evidence of effectiveness with OTC medicines. Interviews were recorded and transcribed verbatim. Following transcription, all data were entered into the NVivo software package (version 8). Data were coded and analysed using a constant comparison approach.
Key findings
Twenty-five pharmacists (7 males and 18 females; registered for less than 4 months) were recruited and all participated in the study. Their primary focus with OTC medicines was safety; sales of products (including those lacking evidence of effectiveness) were justified provided they did no harm. Meeting patient expectations was also an important consideration and often superseded evidence. Despite knowledge of the concept, and an awareness of ethical requirements, an evidence-based approach was not routinely implemented by these pharmacists. Pharmacists did not routinely utilize evidence-based resources when making decisions about OTC medicines, and some felt uncomfortable discussing the evidence base for OTC products with patients.
Conclusions
The evidence-based medicine training that these pharmacists received appeared to have limited influence on OTC decision-making. More work could be conducted to ensure that an evidence-based approach is routinely implemented in practice.
Abstract:
Although cognitive therapy (CT) has a large empirical base, research is lacking for CT supervision and supervision training, which presents an obstacle for evidence-based practice. A pilot CT supervision training programme, based on Milne's (2007a, 2009) evidence-based supervision and Roth and Pilling's (2008) supervision competences, was developed by the Northern Ireland Centre for Trauma and Transformation (NICTT), an organisation specialising in CT provision and training. This study qualitatively explores CT supervisors' perceptions of the impact the training had on their practice. Semi-structured interviews were conducted with seven participants, transcribed verbatim and analysed using Burnard's (1991) thematic content analysis.
Findings illustrated that experienced CT supervisors perceived benefit from training and that the majority of supervisors had implemented contracts, used specific supervision models and paid more attention to supervisee learning as a result of the training. Obstacles to ensuring good supervision included the lack of reliable user-friendly evaluation tools and supervisor consultancy structures.
Recommendations are also made for future research to establish the long-term effects of supervision training and its impact on patient outcomes. Implications for future training based on adult learning principles are discussed.
Abstract:
OBJECTIVES: We assessed the effectiveness of transfer of training (ToT) from VR laparoscopic simulation training in 2 studies; in the second study, we also assessed the transfer effectiveness ratio (TER). ToT is a detectable performance improvement between equivalent groups, and TER is the observed percentage performance difference between 2 matched groups carrying out the same task, with 1 group pretrained on VR simulation. Concordance between simulated and in-vivo procedure performance was also assessed. DESIGN: Prospective, randomized, and blinded. PARTICIPANTS: In Study 1, experienced laparoscopic surgeons (n = 195) and, in Study 2, laparoscopic novices (n = 30) were randomized to either train on VR simulation before completing an equivalent real-world task or complete the real-world task only. RESULTS: Experienced laparoscopic surgeons and novices who trained on the simulator performed significantly better than their controls, thus demonstrating ToT. Their performance showed a TER of between 7% and 42% from the virtual to the real tasks. Simulation training had its greatest impact on procedural error reduction in both studies (32%-42%). The correlation observed between VR and real-world task performance was r > 0.96 (Study 2). CONCLUSIONS: VR simulation training offers a powerful and effective platform for training safer skills.
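As an illustration of how a transfer effectiveness ratio of this kind can be computed, the sketch below applies the percentage-difference definition given in the abstract to placeholder scores; the function name, the scores, and the equal weighting of participants are assumptions, not the authors' exact analysis.

```python
# Hypothetical sketch: percentage performance difference (TER) between a
# VR-pretrained group and a matched control group on the same real-world task.
# Higher score = better performance; all numbers are placeholders.
def transfer_effectiveness_ratio(pretrained_scores, control_scores):
    pretrained_mean = sum(pretrained_scores) / len(pretrained_scores)
    control_mean = sum(control_scores) / len(control_scores)
    return 100.0 * (pretrained_mean - control_mean) / control_mean

vr_group = [82, 90, 77, 88, 85]   # placeholder task scores, VR-trained group
control = [70, 66, 75, 72, 68]    # placeholder task scores, control group
print(f"TER = {transfer_effectiveness_ratio(vr_group, control):.1f}%")
```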
Abstract:
Consideration of the ethical, social, and policy implications of research has become increasingly important to scientists and scholars whose work focuses on brain and mind, but limited empirical data exist on the education in ethics available to them. We examined the current landscape of ethics training in neuroscience programs, beginning specifically with the Canadian context, to elucidate the perceived needs of mentors and trainees and to offer recommendations for resource development to meet those needs. We surveyed neuroscientists at all training levels and interviewed directors of neuroscience programs and training grants. A total of 88% of survey respondents reported a general interest in ethics, and 96% indicated a desire for more ethics content as it applies to brain research and clinical translation. Expert interviews revealed formal ethics education in over half of programs and in 90% of grants-based programs. Lack of time, resources, and expertise, however, are major barriers to expanding ethics content in neuroscience education. We conclude with an initial set of recommendations to address these barriers, which includes the development of flexible, tailored ethics education tools, increased financial support for ethics training, and strategies for fostering collaboration between ethics experts, neuroscience program directors, and funding agencies.
Abstract:
The aims of this article are to examine Lifetime Home Standards (LTHS) and Part M of the UK Building Regulations and to discuss how relevant and successful they are. The UK government expects all new homes to be built to LTHS by 2013, which is increasingly important with an ageing population: the home environment can enable independence and provide a therapeutic place for everyone. As Part M of the Building Regulations is compulsory for all housing and LTHS are mandatory for public sector housing, a review of research articles on these standards was undertaken. The paper begins with a brief background on accessibility regulations, followed by a critical review of the standards that takes into account the body of literature written around them. This review suggests that the standards should be improved and that designers and architects face challenges in creatively incorporating them into housing design.
Abstract:
This paper presents a novel method of audio-visual feature-level fusion for person identification where both the speech and facial modalities may be corrupted and there is a lack of prior knowledge about the corruption. Furthermore, we assume there is a limited amount of training data for each modality (e.g., a short training speech segment and a single training facial image for each person). A new multimodal feature representation and a modified cosine similarity are introduced to combine and compare bimodal features with limited training data, as well as vastly differing data rates and feature sizes. Optimal feature selection and multicondition training are used to reduce the mismatch between training and testing, thereby making the system robust to unknown bimodal corruption. Experiments have been carried out on a bimodal dataset created from the SPIDRE speaker recognition database and the AR face recognition database, with variable noise corruption of speech and occlusion in the face images. The system's speaker identification performance on the SPIDRE database and facial identification performance on the AR database are comparable with the literature. Combining both modalities using the new method of multimodal fusion leads to significantly improved accuracy over the unimodal systems, even when both modalities have been corrupted. The new method also shows improved identification accuracy compared with bimodal systems based on multicondition model training or missing-feature decoding alone.
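A minimal sketch of feature-level fusion with cosine-similarity scoring in the spirit of this abstract; the per-modality normalisation, fixed weighting, concatenation, feature sizes, and identity names below are illustrative assumptions, not the paper's exact multimodal representation or "modified cosine similarity".

```python
# Hypothetical sketch: fuse a speech feature vector and a face feature vector,
# then score a probe against enrolled identities with cosine similarity.
import numpy as np

def fuse(speech_vec, face_vec, speech_weight=0.5):
    s = speech_vec / (np.linalg.norm(speech_vec) + 1e-12)  # unit-normalise each modality so
    f = face_vec / (np.linalg.norm(face_vec) + 1e-12)      # differing feature sizes don't dominate
    return np.concatenate([speech_weight * s, (1.0 - speech_weight) * f])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(1)
# Placeholder enrolled templates: 60-dim speech features, 1024-dim face features per person.
enrolled = {p: fuse(rng.normal(size=60), rng.normal(size=1024)) for p in ("alice", "bob")}
probe = fuse(rng.normal(size=60), rng.normal(size=1024))    # placeholder probe features

scores = {person: cosine(probe, template) for person, template in enrolled.items()}
print("Best match:", max(scores, key=scores.get), scores)
```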