848 results for "usefulness of training"
Abstract:
Objective: The aim of the present study was to determine the relationship between the characteristics of general practices and GPs' perceptions of the psychological content of consultations in those practices. Methods: A cross-sectional survey was conducted of all GPs (22 GPs based in nine practices) serving a discrete inner-city community of 41 000 residents. GPs were asked to complete a log-diary over a period of five working days, rating their perception of the psychological content of each consultation on a 4-point Likert scale, ranging from 0 (no psychological content) to 3 (entirely psychological in content). The influence of GP and practice characteristics on psychological content scores was examined. Results: Data were available for every surgery-based consultation (n = 2206) conducted by all 22 participating GPs over the study period. The mean psychological content score was 0.58 (SD 0.33). Sixty-four percent of consultations were recorded as having no psychological content; 6% were entirely psychological in content. Higher psychological content scores were significantly associated with younger GPs, training practices (n = 3), group practices (n = 4), the presence of on-site mental health workers (n = 5), higher antidepressant prescribing volumes and the achievement of vaccine and smear targets. Training status had the greatest predictive power, explaining 51% of the variation in psychological content. Practice consultation rates, GP list size, annual psychiatric referral rates and volumes of benzodiazepine prescribing were not related to psychological content scores. Conclusion: Increased awareness by GPs of the psychological dimension within a consultation may be a feature of the educational environment of training practices.
Abstract:
This paper describes an attempt to increase the range of human sensory capabilities by means of implant technology. The key aim is to create an additional sense by feeding signals directly to the human brain, via the nervous system rather than via a presently operable human sense. Neural implant technology was used to directly interface a human nervous system with a computer in a one-off trial. The output from active ultrasonic sensors was then employed to directly stimulate the human nervous system. An experimental laboratory setup was used as a test bed to assess the usefulness of this sensory addition.
Abstract:
The usefulness of motor subtypes of delirium is unclear due to inconsistency in subtyping methods and a lack of validation with objective measures of activity. The activity of 40 patients was measured over 24 h with a commercial accelerometer-based activity monitor. Accelerometry data from patients with DSM-IV delirium that were readily divided into hyperactive, hypoactive and mixed motor subtypes were used to create classification trees that were subsequently applied to the remaining cohort to define motor subtypes. The classification trees used the periods of sitting/lying, standing and stepping and the number of postural transitions, as measured by the activity monitor, as determining factors from which to classify the delirious cohort. The use of a classification system shows how delirium subtypes can be categorised in relation to overall activity and postural changes, which was one of the most discriminating measures examined. The classification system was also implemented successfully to define the motor subtypes of other patients. Motor subtypes of delirium defined by observed ward behaviour differ in electronically measured activity levels.
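The tree-based subtyping described above is straightforward to prototype. The sketch below is a minimal illustration using scikit-learn with invented activity-monitor summaries (hours sitting/lying, standing, stepping, and postural-transition counts); it shows the general technique only, not the authors' fitted classification trees or thresholds.

```python
# A minimal sketch of accelerometry-based subtyping with a decision tree.
# Feature values and labels below are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical 24-h summaries per patient:
# [hours sitting/lying, hours standing, hours stepping, postural transitions]
X_train = np.array([
    [20.0, 2.0, 0.5, 10],   # hypoactive-like profile
    [12.0, 6.0, 3.0, 60],   # hyperactive-like profile
    [16.0, 4.0, 1.5, 35],   # mixed profile
])
y_train = ["hypoactive", "hyperactive", "mixed"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

# Apply the fitted tree to the remaining cohort (here a single new patient).
X_new = np.array([[18.0, 3.0, 1.0, 20]])
print(tree.predict(X_new))
```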
Abstract:
The correlated k-distribution (CKD) method is widely used in the radiative transfer schemes of atmospheric models, and involves dividing the spectrum into a number of bands and then reordering the gaseous absorption coefficients within each one. The fluxes and heating rates for each band may then be computed by discretizing the reordered spectrum into on the order of 10 quadrature points per major gas, and performing a pseudo-monochromatic radiation calculation for each point. In this paper it is first argued that for clear-sky longwave calculations, sufficient accuracy for most applications can be achieved without the need for bands: reordering may be performed on the entire longwave spectrum. The resulting full-spectrum correlated-k (FSCK) method requires significantly fewer pseudo-monochromatic calculations than standard CKD to achieve a given accuracy. The concept is first demonstrated by comparison with line-by-line calculations for an atmosphere containing only water vapor, in which it is shown that the accuracy of heating-rate calculations improves approximately in proportion to the square of the number of quadrature points. For more than around 20 points, the root-mean-squared error flattens out at around 0.015 K d⁻¹ due to the imperfect rank correlation of absorption spectra at different pressures in the profile. The spectral overlap of m different gases is treated by considering an m-dimensional hypercube where each axis corresponds to the reordered spectrum of one of the gases. This hypercube is then divided up into a number of volumes, each approximated by a single quadrature point, such that the total number of quadrature points is slightly fewer than the sum of the number that would be required to treat each of the gases separately. The gaseous absorptions for each quadrature point are optimized such that they minimize a cost function expressing the deviation of the heating rates and fluxes calculated by the FSCK method from line-by-line calculations for a number of training profiles. This approach is validated for atmospheres containing water vapor, carbon dioxide and ozone, in which it is found that in the troposphere and most of the stratosphere, heating-rate errors of less than 0.2 K d⁻¹ can be achieved using a total of 23 quadrature points, decreasing to less than 0.1 K d⁻¹ for 32 quadrature points. It would be relatively straightforward to extend the method to include other gases.
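For reference, the identity underlying both CKD and FSCK can be written compactly; the notation below is textbook-standard rather than taken from the paper. Reordering the absorption coefficient k(ν) into its cumulative distribution g turns a ragged spectral integral into a smooth one that a handful of quadrature points can capture:

```latex
% Band-mean transmittance over absorber amount u, rewritten via the
% cumulative probability g of the absorption coefficient k:
\bar{T}(u)
  = \frac{1}{\Delta\nu}\int_{\Delta\nu} e^{-k(\nu)\,u}\,\mathrm{d}\nu
  = \int_{0}^{1} e^{-k(g)\,u}\,\mathrm{d}g
  \approx \sum_{i=1}^{N} w_i\, e^{-k(g_i)\,u},
  \qquad \sum_{i=1}^{N} w_i = 1.
```

Here u is the absorber amount and the w_i are quadrature weights; in the FSCK method, Δν spans the entire longwave spectrum rather than a single band.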
Abstract:
The usefulness of any simulation of atmospheric tracers using low-resolution winds relies on both the dominance of large spatial scales in the strain and time dependence that results in a cascade in tracer scales. Here, a quantitative study on the accuracy of such tracer studies is made using the contour advection technique. It is shown that, although contour stretching rates are very insensitive to the spatial truncation of the wind field, the displacement errors in filament position are sensitive. A knowledge of displacement characteristics is essential if Lagrangian simulations are to be used for the inference of airmass origin. A quantitative lower estimate is obtained for the tracer scale factor (TSF): the ratio of the smallest resolved scale in the advecting wind field to the smallest “trustworthy” scale in the tracer field. For a baroclinic wave life cycle the TSF = 6.1 ± 0.3 while for the Northern Hemisphere wintertime lower stratosphere the TSF = 5.5 ± 0.5, when using the most stringent definition of the trustworthy scale. The similarity in the TSF for the two flows is striking and an explanation is discussed in terms of the activity of potential vorticity (PV) filaments. Uncertainty in contour initialization is investigated for the stratospheric case. The effect of smoothing initial contours is to introduce a spinup time, after which wind field truncation errors take over from initialization errors (2–3 days). It is also shown that false detail from the proliferation of finescale filaments limits the useful lifetime of such contour advection simulations to 3σ⁻¹ days, where σ is the filament thinning rate, unless filaments narrower than the trustworthy scale are removed by contour surgery. In addition, PV analysis error and diabatic effects are so strong that only PV filaments wider than 50 km are at all believable, even for very high-resolution winds. The minimum wind field resolution required to accurately simulate filaments down to the erosion scale in the stratosphere (given an initial contour) is estimated and the implications for the modeling of atmospheric chemistry are briefly discussed.
Abstract:
Current mathematical models in building research have been limited in most studies to linear dynamic systems. A literature review of past studies investigating chaos theory approaches in building simulation models suggests that the chaos model is valid as a basis and can handle the increasing complexity of building systems that have dynamic interactions among all the distributed and hierarchical systems on the one hand, and the environment and occupants on the other. The review also identifies the paucity of literature and the need for a suitable methodology for linking chaos theory to mathematical models in building design and management studies. This study is broadly divided into two parts and presented in two companion papers. Part (I) reviews the current state of chaos theory models as a starting point for establishing theories that can be effectively applied to building simulation models. Part (II) develops conceptual frameworks that approach current model methodologies from the theoretical perspective provided by chaos theory, with a focus on the key concepts and their potential to help to better understand the nonlinear dynamic nature of built environment systems. Case studies are also presented which demonstrate the potential usefulness of chaos theory driven models in a wide variety of leading areas of building research. This study distills the fundamental properties and the most relevant characteristics of chaos theory essential to building simulation scientists, initiates a dialogue and builds bridges between scientists and engineers, and stimulates future research about a wide range of issues on building environmental systems.
Abstract:
Current mathematical models in building research have been limited in most studies to linear dynamic systems. A literature review of past studies investigating chaos theory approaches in building simulation models suggests that the chaos model is valid as a basis and can handle the increasing complexity of building systems that have dynamic interactions among all the distributed and hierarchical systems on the one hand, and the environment and occupants on the other. The review also identifies the paucity of literature and the need for a suitable methodology for linking chaos theory to mathematical models in building design and management studies. This study is broadly divided into two parts and presented in two companion papers. Part (I), published in the previous issue, reviews the current state of chaos theory models as a starting point for establishing theories that can be effectively applied to building simulation models. Part (II) develops conceptual frameworks that approach current model methodologies from the theoretical perspective provided by chaos theory, with a focus on the key concepts and their potential to help to better understand the nonlinear dynamic nature of built environment systems. Case studies are also presented which demonstrate the potential usefulness of chaos theory driven models in a wide variety of leading areas of building research. This study distills the fundamental properties and the most relevant characteristics of chaos theory essential to (1) building simulation scientists and designers, (2) initiating a dialogue between scientists and engineers, and (3) stimulating future research on a wide range of issues involved in designing and managing building environmental systems.
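The sensitivity to initial conditions that motivates these chaos-theory frameworks is easy to demonstrate with a generic toy system. The logistic map below is a textbook illustration, not an example drawn from the companion papers.

```python
# A minimal, generic illustration of deterministic chaos: the logistic map.
def logistic_trajectory(x0, r=3.9, n=50):
    """Iterate x_{k+1} = r * x_k * (1 - x_k) from x0 for n steps."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two trajectories from nearly identical initial states diverge rapidly,
# the hallmark of chaotic, nonlinear dynamics.
a = logistic_trajectory(0.500000)
b = logistic_trajectory(0.500001)
print(max(abs(x - y) for x, y in zip(a, b)))
```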
Abstract:
We are developing computational tools supporting the detailed analysis of the dependence of neural electrophysiological response on dendritic morphology. We approach this problem by combining simulations of faithful models of neurons (experimental real-life morphological data with known models of channel kinetics) with algorithmic extraction of morphological and physiological parameters and statistical analysis. In this paper, we present a novel method for the automatic recognition of spike trains in voltage traces, which eliminates the need for human intervention. This enables classification of waveforms with consistent criteria across all the analyzed traces and so amounts to a reduction of the noise in the data. The method allows for an automatic extraction of the relevant physiological parameters necessary for further statistical analysis. To illustrate the usefulness of this procedure for analyzing voltage traces, we characterized the influence of the somatic current injection level on several electrophysiological parameters in a set of modeled neurons. This application suggests that such algorithmic processing of physiological data extracts parameters in a form suitable for further investigation of the structure-activity relationship in single neurons.
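A threshold-crossing detector is the simplest instance of this kind of automatic spike recognition. The sketch below is a hedged illustration with assumed threshold and refractory values; the authors' published criteria may differ.

```python
# A minimal sketch of spike detection by upward threshold crossing.
# Threshold and refractory window are illustrative assumptions.
import numpy as np

def detect_spikes(v, dt, threshold=-20.0, refractory=2.0):
    """Return spike times (ms) where v crosses `threshold` (mV) upward,
    ignoring crossings within `refractory` ms of the previous spike."""
    above = v >= threshold
    crossings = np.flatnonzero(~above[:-1] & above[1:]) + 1
    spikes, last = [], -np.inf
    for i in crossings:
        t = i * dt
        if t - last >= refractory:
            spikes.append(t)
            last = t
    return np.array(spikes)

# Example: a synthetic trace with two brief depolarizations 5 ms apart.
dt = 0.1  # ms per sample
v = np.full(200, -65.0)
v[50:53] = 10.0
v[100:103] = 10.0
print(detect_spikes(v, dt))  # -> [ 5. 10.]
```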
Abstract:
Adaptive filters used in code division multiple access (CDMA) receivers to counter interference have been formulated both with and without the assumption of training symbols being transmitted; they are known as training-based and blind detectors, respectively. We show that the convergence behaviour of the blind minimum-output-energy (MOE) detector can be derived quite easily, unlike what was implied by the procedure outlined in a previous paper. The simplification results from the observation that the correlation matrix determining convergence performance can be made symmetric, after which many standard results from the literature on least mean square (LMS) filters apply immediately.
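The "standard results" invoked here are those for the LMS recursion. In common textbook notation (not copied from the paper), the filter update and the step-size condition for convergence in the mean, once the governing correlation matrix is symmetric, read:

```latex
% LMS update, error signal, and mean-convergence step-size bound:
w_{n+1} = w_n + \mu\, e_n\, x_n,
\qquad e_n = d_n - w_n^{\mathsf{T}} x_n,
\qquad 0 < \mu < \frac{2}{\lambda_{\max}(\mathbf{R})},
\quad \mathbf{R} = \mathbb{E}\!\left[x_n x_n^{\mathsf{T}}\right].
```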
Abstract:
We examine whether a three-regime model that allows for dormant, explosive and collapsing speculative behaviour can explain the dynamics of the S&P 500. We extend existing models of speculative behaviour by including a third regime that allows a bubble to grow at a steady rate, and propose abnormal volume as an indicator of the probable time of bubble collapse. We also examine the financial usefulness of the three-regime model by studying a trading rule formed using inferences from it, whose use leads to higher Sharpe ratios and end-of-period wealth than those obtained from employing existing models or a buy-and-hold strategy.
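For readers unfamiliar with the evaluation criterion, the Sharpe ratio compares mean excess return to return volatility, in the usual form

```latex
S = \frac{\mathbb{E}[R_p] - R_f}{\sigma_p},
```

where R_p is the trading rule's return, R_f the risk-free rate, and σ_p the standard deviation of R_p; a higher S indicates more return per unit of risk taken.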
Abstract:
Identifying a periodic time-series model from environmental records, without imposing positivity of the growth rate, does not necessarily respect the time order of the data observations. Consequently, subsequent observations, sampled in the environmental archive, can be inverted on the time axis, resulting in a non-physical signal model. In this paper an optimization technique with linear constraints on the signal model parameters is proposed that prevents time inversions. The activation conditions for this constrained optimization are based upon the physical constraint on the growth rate, namely that it cannot take values smaller than zero. The actual constraints are defined for polynomials and first-order splines as basis functions for the nonlinear contribution in the distance-time relationship. The method is compared with an existing method that eliminates the time inversions, and its noise sensitivity is tested by means of Monte Carlo simulations. Finally, the usefulness of the method is demonstrated on measurements of vessel density in a mangrove tree, Rhizophora mucronata, and of Mg/Ca ratios in a bivalve, Mytilus trossulus.
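The essence of the constrained fit, enforcing a non-negative growth rate so the fitted distance-time curve never runs backwards in time, can be sketched with SciPy. The knot layout, synthetic data, and least-squares objective below are illustrative assumptions, not the paper's estimator.

```python
# A minimal sketch: fit a first-order-spline distance-time curve while
# forcing successive knot values to be non-decreasing (growth rate >= 0).
import numpy as np
from scipy.optimize import LinearConstraint, minimize

# Synthetic, roughly monotone distance observations along a time axis.
t_obs = np.linspace(0.0, 10.0, 40)
d_obs = np.maximum.accumulate(
    t_obs + 0.5 * np.random.default_rng(0).normal(size=40))

knots = np.linspace(0.0, 10.0, 11)

def spline_eval(coeffs, t):
    # First-order (piecewise-linear) spline through knot values `coeffs`.
    return np.interp(t, knots, coeffs)

def sse(coeffs):
    return np.sum((spline_eval(coeffs, t_obs) - d_obs) ** 2)

# Linear constraints c[k+1] - c[k] >= 0: no time inversions allowed.
D = np.diff(np.eye(len(knots)), axis=0)
constraint = LinearConstraint(D, lb=0.0, ub=np.inf)

res = minimize(sse, x0=np.linspace(d_obs[0], d_obs[-1], len(knots)),
               constraints=[constraint])
print(res.x)  # monotone knot values -> physically ordered signal model
```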
Abstract:
Background: Currently, all pharmacists and technicians registered with the Royal Pharmaceutical Society of Great Britain must complete a minimum of nine Continuing Professional Development (CPD) records (entries) each year. From September 2010 a new regulatory body, the General Pharmaceutical Council, will oversee the regulation (including revalidation) of all pharmacy registrants in Great Britain. CPD may provide part of the supporting evidence that a practitioner submits to the regulator as part of the revalidation process. Gaps in knowledge necessitated further research to examine the usefulness of CPD in pharmacy revalidation.

Project aims: The overall aims of this project were to summarise pharmacy professionals' past involvement in CPD, examine the usability of current CPD entries for the purpose of revalidation, and examine the impact of 'revalidation standards' and a bespoke Outcomes Framework on the conduct and construction of CPD entries for the future revalidation of pharmacy professionals. We completed a comprehensive review of the literature; devised, validated and tested the impact of a new CPD Outcomes Framework and related training material in an empirical investigation involving volunteer pharmacy professionals; and also spoke with our participants to bring meaning and understanding to the process of CPD conduct and recording and to gain feedback on the study itself.

Key findings: The comprehensive literature review identified perceived barriers to CPD and resulted in recommendations that could potentially rectify pharmacy professionals' perceptions and facilitate participation in CPD. The CPD Outcomes Framework can be used to score CPD entries. Compared to a control (CPD and 'revalidation standards' only), we found that training participants to apply the CPD Outcomes Framework resulted in entries that scored significantly higher in the context of a quantitative method of CPD assessment. Feedback from participants who had received the CPD Outcomes Framework was positive, and a number of useful suggestions were made about improvements to the Framework and related training. Entries scored higher because participants had consciously applied concepts linked to the CPD Outcomes Framework, whereas entries scored low where participants had been unable to apply the concepts of the Framework for a variety of reasons, including limitations posed by the 'Plan & Record' template. Feedback about the nature of the 'revalidation standards' and their application to CPD was not positive, and participants had not in the main sought to apply the standards to their CPD entries, although those in the intervention group were more likely to have referred to the revalidation standards for their CPD. As assessors, we too found the process of selecting and assigning 'revalidation standards' to individual CPD entries burdensome and somewhat unspecific. We believe that addressing the perceived barriers and drawing on the facilitators will help deal with the apparent lack of engagement with the revalidation standards, and we have been able to make a set of relevant recommendations. We devised a model to explain and tell the story of CPD behaviour. Based on the concepts of purpose, action and results, the model centres on explaining two types of CPD behaviour, one following the traditional CE pathway and the other a more genuine CPD pathway. Entries which scored higher when we applied the CPD Outcomes Framework were more likely to follow the CPD pathway in this model.
Significantly, while participants following both models of practice took part in this study, the CPD Outcomes Framework was able to change people's CPD behaviour to make it more in line with the CPD pathway. The CPD Outcomes Framework in defining the CPD criteria, the training pack in teaching the basis and use of the Framework, and the process of assessment in using the CPD Outcomes Framework would have interacted to improve participants' CPD through a collective process. Participants were keen to receive a curriculum against which CE-type activities, at the least, could be conducted; another important observation relates to whether CE has any role to play in pharmacy professionals' revalidation. We would recommend that the CPD Outcomes Framework be used in the revalidation of pharmacy professionals in the future, provided the requirement to submit nine CPD entries per annum is re-examined and expressed more clearly in relation to what specifically participants are being asked to submit, i.e. the ratio of CE to CPD entries. We can foresee a benefit in setting more regular intervals which would act as deadlines for CPD submission in the future. On the whole, there is value in using CPD for the purpose of pharmacy professionals' revalidation in the future.
Abstract:
International competitiveness ultimately depends upon the linkages between a firm's unique, idiosyncratic capabilities (firm-specific advantages, FSAs) and its home country assets (country-specific advantages, CSAs). In this paper, we present a modified FSA/CSA matrix building upon the original FSA/CSA matrix (Rugman 1981). We relate this to the diamond framework for national competitiveness (Porter 1990) and the double diamond model (Rugman and D'Cruz 1993). We provide empirical evidence to demonstrate the merits and usefulness of the modified FSA/CSA matrix using the Fortune Global 500 firms. We examine the FSAs based on the geographic scope of sales and the CSAs that can lead to national, home-region, and global competitiveness. Our empirical analysis suggests that the world's largest 500 firms have increased their firm-level international competitiveness. However, much of this is still being achieved within their home region. In other words, international competitiveness is a regional, not a global, phenomenon. Our findings have significant implications for research and practice. Future research in international marketing should take into account the multi-faceted nature of FSAs and CSAs across different levels. For MNE managers, our study provides useful insights for strategic marketing planning and implementation.
Abstract:
In this study two new measures of lexical diversity are tested for the first time on French. The usefulness of these measures, MTLD (McCarthy and Jarvis 2010, and this volume) and HD-D (McCarthy and Jarvis 2007), in predicting different aspects of language proficiency is assessed and compared with D (Malvern and Richards 1997; Malvern, Richards, Chipere and Durán 2004) and Maas (1972) in analyses of stories told by two groups of learners (n = 41) of two different proficiency levels and one group of native speakers of French (n = 23). The importance of careful lemmatization in studies of lexical diversity involving highly inflected languages is also demonstrated. The paper shows that the measures of lexical diversity under study are valid proxies for language ability in that they explain up to 62 percent of the variance in French C-test scores, and up to 33 percent of the variance in a measure of complexity. The paper also provides evidence that dependence on segment size continues to be a problem for the measures of lexical diversity discussed in this paper. The paper concludes that limiting the range of text lengths, or even keeping text length constant, is the safest option in analysing lexical diversity.
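For orientation, the forward pass of MTLD can be sketched in a few lines. This follows the published description (factor counting against a 0.72 type-token-ratio threshold) but omits the reverse pass and averaging of the full measure, and assumes tokens have already been lemmatized, which the paper shows matters for a highly inflected language like French.

```python
# A minimal sketch of the MTLD forward pass (McCarthy and Jarvis 2010):
# count how many segments ("factors") it takes for the running type-token
# ratio to fall to 0.72, then divide text length by the factor count.
def mtld_forward(tokens, ttr_threshold=0.72):
    factors, types, count = 0.0, set(), 0
    for tok in tokens:
        count += 1
        types.add(tok)
        ttr = len(types) / count
        if ttr <= ttr_threshold:
            factors += 1.0          # a full factor is complete; reset
            types, count = set(), 0
    if count > 0:                   # partial factor for the leftover segment
        ttr = len(types) / count
        factors += (1.0 - ttr) / (1.0 - ttr_threshold)
    return len(tokens) / factors if factors > 0 else float("inf")

# Toy example on pre-lemmatized French tokens.
text = "le chat noir dort et le chien noir court et le chat court".split()
print(round(mtld_forward(text), 2))
```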
Abstract:
Department of Health staff wished to use systems modelling to discuss acute patient flows with groups of NHS staff. The aim was to assess the usefulness of system dynamics (SD) in a healthcare context and to elicit proposals concerning ways of improving patient experience. Since time restrictions excluded simulation modelling, a hybrid approach using stock/flow symbols from SD was created. Initial interviews and hospital site visits generated a series of stock/flow maps. A ‘Conceptual Framework’ was then created to introduce the mapping symbols and to generate a series of questions about different patient paths and what might speed or slow patient flows. These materials formed the centre of three workshops for NHS staff. The participants were able to propose ideas for improving patient flows and the elicited data was subsequently employed to create a finalized suite of maps of a general acute hospital. The maps and ideas were communicated back to the Department of Health and subsequently assisted the work of the Modernization Agency.
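Although the workshops stopped short of simulation, stock/flow maps of this kind translate directly into runnable models. The sketch below is a minimal single-stock example with invented rates, not Department of Health data or the project's actual maps.

```python
# A minimal stock/flow sketch in the SD spirit: one "ward beds occupied"
# stock with an admission inflow and a discharge outflow, integrated by
# Euler stepping. All rates are illustrative assumptions.
def simulate_ward(days=30, dt=0.25, occupied=40.0,
                  admissions_per_day=12.0, mean_stay_days=4.0):
    history = []
    for _ in range(int(days / dt)):
        discharge_rate = occupied / mean_stay_days  # outflow ∝ stock
        occupied += (admissions_per_day - discharge_rate) * dt
        history.append(occupied)
    return history

trace = simulate_ward()
# Occupancy settles where inflow equals outflow: 12/day * 4 days = 48 beds.
print(f"occupancy settles near {trace[-1]:.1f} beds")
```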