836 results for Framework Model


Relevance: 30.00%

Abstract:

Background: Currently, all pharmacists and technicians registered with the Royal Pharmaceutical Society of Great Britain must complete a minimum of nine Continuing Professional Development (CPD) records (entries) each year. From September 2010 a new regulatory body, the General Pharmaceutical Council, will oversee the regulation (including revalidation) of all pharmacy registrants in Great Britain. CPD may provide part of the supporting evidence that a practitioner submits to the regulator as part of the revalidation process. Gaps in knowledge necessitated further research to examine the usefulness of CPD in pharmacy revalidation.

Project aims: The overall aims of this project were to summarise pharmacy professionals’ past involvement in CPD, to examine the usability of current CPD entries for the purpose of revalidation, and to examine the impact of ‘revalidation standards’ and a bespoke CPD Outcomes Framework on the conduct and construction of CPD entries for the future revalidation of pharmacy professionals. We completed a comprehensive review of the literature; devised, validated and tested the impact of a new CPD Outcomes Framework and related training material in an empirical investigation involving volunteer pharmacy professionals; and spoke with our participants to bring meaning and understanding to the process of CPD conduct and recording and to gain feedback on the study itself.

Key findings: The comprehensive literature review identified perceived barriers to CPD and resulted in recommendations that could potentially rectify pharmacy professionals’ perceptions and facilitate participation in CPD. The CPD Outcomes Framework can be used to score CPD entries. Compared to a control (CPD and ‘revalidation standards’ only), we found that training participants to apply the CPD Outcomes Framework resulted in entries that scored significantly higher in the context of a quantitative method of CPD assessment. Feedback from participants who had received the CPD Outcomes Framework was positive, and a number of useful suggestions were made about improvements to the Framework and related training. Entries scored higher where participants had consciously applied concepts linked to the CPD Outcomes Framework, and lower where participants had been unable to apply the concepts of the Framework for a variety of reasons, including limitations posed by the ‘Plan & Record’ template. Feedback about the nature of the ‘revalidation standards’ and their application to CPD was not positive, and participants had in the main not sought to apply the standards to their CPD entries, although those in the intervention group were more likely to have referred to the revalidation standards for their CPD. As assessors, we too found the process of selecting and assigning ‘revalidation standards’ to individual CPD entries burdensome and somewhat unspecific. We believe that addressing the perceived barriers and drawing on the facilitators will help deal with the apparent lack of engagement with the revalidation standards, and we have been able to make a set of relevant recommendations. We devised a model to explain and tell the story of CPD behaviour. Based on the concepts of purpose, action and results, the model centres on explaining two types of CPD behaviour: one following the traditional continuing education (CE) pathway and the other a more genuine CPD pathway. Entries that scored higher when we applied the CPD Outcomes Framework were more likely to follow the CPD pathway in this model.
Significantly, while participants following both models of practice took part in this study, the CPD Outcomes Framework was able to change people’s CPD behaviour to make it more in line with the CPD pathway. The CPD Outcomes Framework in defining the CPD criteria, the training pack in teaching the basis and use of the Framework, and the process of assessment in using the CPD Outcomes Framework would have interacted to improve participants’ CPD through a collective process. Participants were keen to receive a curriculum against which CE-type activities, at least, could be conducted; another important observation relates to whether CE has any role to play in pharmacy professionals’ revalidation. We would recommend that the CPD Outcomes Framework be used in the revalidation of pharmacy professionals in the future, provided the requirement to submit nine CPD entries per annum is re-examined and expressed more clearly in relation to what specifically participants are being asked to submit, i.e. the ratio of CE to CPD entries. We can foresee a benefit in setting more regular intervals to act as deadlines for CPD submission in the future. On the whole, there is value in using CPD for the purpose of pharmacy professionals’ revalidation in the future.

Relevance: 30.00%

Abstract:

What does the saving–investment (SI) relation really measure, and how should the SI relation be measured? These are two of the most discussed issues triggered by the so-called Feldstein–Horioka puzzle. Based on panel data, we introduce a new variant of functional coefficient models that allows us to separate long-run from short-to-medium-run parameter dependence. The new modeling framework is applied to uncover the determinants of the SI relation. Macroeconomic state variables such as openness, the age dependency ratio, and government current and consumption expenditures are found to affect the SI relation significantly in the long run.
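
As a loose illustration of the functional-coefficient idea, the sketch below lets the SI coefficient vary smoothly with a single state variable via kernel-weighted least squares. The data are synthetic and the estimator is a textbook local smoother; the paper's actual estimator, data and panel structure are not reproduced here.

```python
import numpy as np

# Toy functional-coefficient regression: saving_i = beta(z_i) * invest_i + e_i,
# where beta(.) varies smoothly with a macro state variable z (e.g. openness).
rng = np.random.default_rng(0)
n = 500
z = rng.uniform(0.0, 1.0, n)                 # state variable (e.g. openness)
invest = rng.normal(0.2, 0.05, n)            # investment/GDP ratio
beta_true = 0.9 - 0.6 * z                    # SI coefficient declines with openness
saving = beta_true * invest + rng.normal(0.0, 0.01, n)

def beta_hat(z0, h=0.1):
    """Kernel-weighted least-squares estimate of beta at state value z0."""
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)   # Gaussian kernel weights
    return np.sum(w * invest * saving) / np.sum(w * invest ** 2)

for z0 in (0.1, 0.5, 0.9):
    print(f"z={z0:.1f}  beta_hat={beta_hat(z0):.2f}  true={0.9 - 0.6 * z0:.2f}")
```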

Relevance: 30.00%

Abstract:

We present a novel kinetic multi-layer model for gas-particle interactions in aerosols and clouds (KM-GAP) that treats explicitly all steps of mass transport and chemical reaction of semi-volatile species partitioning between gas phase, particle surface and particle bulk. KM-GAP is based on the PRA model framework (Pöschl–Rudich–Ammann, 2007), and it includes gas-phase diffusion, reversible adsorption, surface reactions, bulk diffusion and reaction, as well as condensation, evaporation and heat transfer. The size change of atmospheric particles and the temporal evolution and spatial profile of the concentration of individual chemical species can be modeled along with gas uptake and accommodation coefficients. Depending on the complexity of the investigated system and the computational constraints, unlimited numbers of semi-volatile species, chemical reactions and physical processes can be treated, and the model should help to bridge gaps in the understanding and quantification of multiphase chemistry and microphysics in atmospheric aerosols and clouds. In this study we demonstrate how KM-GAP can be used to analyze, interpret and design experimental investigations of changes in particle size and chemical composition in response to condensation, evaporation and chemical reaction. For the condensational growth of water droplets, our kinetic model results provide a direct link between laboratory observations and molecular dynamics simulations, confirming that the accommodation coefficient of water at 270 K is close to unity (Winkler et al., 2006). Literature data on the evaporation of dioctyl phthalate as a function of particle size and time can be reproduced, and the model results suggest that changes in experimental conditions such as aerosol particle concentration and chamber geometry may influence the evaporation kinetics and can be optimized for efficient probing of specific physical effects and parameters. With regard to oxidative aging of organic aerosol particles, we illustrate how the formation and evaporation of volatile reaction products like nonanal can cause a decrease in the size of oleic acid particles exposed to ozone.
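
The sketch below illustrates the multi-layer idea in miniature: a gas reservoir, a sorption layer and a stack of bulk layers coupled by first-order transport and a bulk reaction, integrated with SciPy. All rate coefficients are hypothetical placeholders, not the published KM-GAP parameterisation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal kinetic multi-layer sketch: gas <-> sorption layer <-> N bulk layers.
N = 10                       # number of bulk layers
k_ads, k_des = 1e-2, 1e-3    # adsorption / desorption rate coefficients (1/s)
k_sb, k_bs = 5e-3, 5e-3      # surface <-> first bulk layer exchange (1/s)
k_bb = 1e-3                  # bulk layer-to-layer transport (1/s)
k_rx = 1e-4                  # first-order chemical loss in the bulk (1/s)

def rhs(t, y):
    g, s, b = y[0], y[1], y[2:]          # gas, surface, bulk layers
    dg = -k_ads * g + k_des * s
    ds = k_ads * g - k_des * s - k_sb * s + k_bs * b[0]
    db = np.zeros(N)
    db[0] = k_sb * s - k_bs * b[0] - k_bb * (b[0] - b[1]) - k_rx * b[0]
    for i in range(1, N - 1):
        db[i] = k_bb * (b[i - 1] - 2 * b[i] + b[i + 1]) - k_rx * b[i]
    db[-1] = k_bb * (b[-2] - b[-1]) - k_rx * b[-1]
    return np.concatenate(([dg, ds], db))

y0 = np.concatenate(([1.0, 0.0], np.zeros(N)))   # all mass starts in the gas phase
sol = solve_ivp(rhs, (0.0, 3600.0), y0, method="LSODA")
print("gas-phase fraction remaining after 1 h:", sol.y[0, -1])
```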

Relevance: 30.00%

Abstract:

Cloud imagery is not currently used in numerical weather prediction (NWP) to extract the type of dynamical information that experienced forecasters have extracted subjectively for many years. For example, rapidly developing mid-latitude cyclones have characteristic signatures in the cloud imagery that are most fully appreciated from a sequence of images rather than from a single image. The Met Office is currently developing a technique to extract dynamical development information from satellite imagery using their full incremental 4D-Var (four-dimensional variational data assimilation) system. We investigate a simplified form of this technique in a fully nonlinear framework. We convert information on the vertical wind field, w(z), and profiles of temperature, T(z, t), and total water content, qt(z, t), as functions of height, z, and time, t, to a single brightness temperature by defining a 2D (vertical and time) variational assimilation testbed. The profiles of w, T and qt are updated using a simple vertical advection scheme. We define a basic cloud scheme to obtain the fractional cloud amount and, combining this with the temperature field, we convert the information into a brightness temperature using a simple radiative transfer scheme that we developed. With the exception of some matrix inversion routines, all our code is developed from scratch. Throughout the development process we test all aspects of our 2D assimilation system, and we then run identical twin experiments to try to recover information on the vertical velocity from a sequence of observations of brightness temperature. This thesis contains a comprehensive description of our nonlinear models and assimilation system, and the first experimental results.
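
A minimal sketch of the variational idea underlying such a testbed is given below: a quadratic cost with a background term and an observation term, minimised with SciPy's L-BFGS-B. The observation operator, error covariances and data are synthetic stand-ins, not the thesis code.

```python
import numpy as np
from scipy.optimize import minimize

# Toy variational assimilation: find the state x minimising
# J(x) = 0.5 (x - x_b)' B^-1 (x - x_b) + 0.5 (Hx - y)' R^-1 (Hx - y).
n, m = 20, 5
rng = np.random.default_rng(1)
x_true = np.sin(np.linspace(0.0, np.pi, n))       # "true" vertical profile
H = rng.normal(size=(m, n)) / np.sqrt(n)          # linear observation operator
y = H @ x_true + 0.01 * rng.normal(size=m)        # noisy observations
x_b = np.zeros(n)                                  # background (prior) state
B_inv = np.eye(n) / 0.5**2                         # background error precision
R_inv = np.eye(m) / 0.01**2                        # observation error precision

def cost(x):
    db, do = x - x_b, H @ x - y
    return 0.5 * db @ B_inv @ db + 0.5 * do @ R_inv @ do

def grad(x):
    return B_inv @ (x - x_b) + H.T @ R_inv @ (H @ x - y)

res = minimize(cost, x_b, jac=grad, method="L-BFGS-B")
print("analysis-minus-truth RMSE:", np.sqrt(np.mean((res.x - x_true) ** 2)))
```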

Relevance: 30.00%

Abstract:

This project is concerned with the way that illustrations, photographs, diagrams and graphs, and typographic elements interact to convey ideas on the book page. A framework for graphic description is proposed to elucidate this graphic language of ‘complex texts’. The model is built up from three main areas of study, with reference to a corpus of contemporary children’s science books. First, a historical survey puts the subjects for study in context. Then a multidisciplinary discussion of graphic communication provides a theoretical underpinning for the model; this leads to various proposals, such as the central importance of ratios and relationships among parts in creating meaning in graphic communication. Lastly, a series of trials in description contributes to the structure of the model itself. At the heart of the framework is an organising principle that integrates descriptive models from the fields of design, literary criticism, art history, and linguistics, among others, as well as novel categories designed specifically for book design. Broadly, design features are described in terms of elemental component parts (micro-level), larger groupings of these (macro-level), and finally in terms of overarching, ‘whole book’ qualities (meta-level). Various features of book design emerge at different levels; for instance, the presence of nested discursive structures, a form of graphic recursion in editorial design, is proposed at the macro-level. Across these three levels are the intersecting categories of ‘rule’ and ‘context’, offering different perspectives with which to describe graphic characteristics. Context-based features are contingent on social and cultural environment, the reader’s previous knowledge, and the actual conditions of reading; rule-based features relate to the systematic or codified aspects of graphic language. The model aims to be a frame of reference for graphic description, of use in different forms of qualitative or quantitative research and as a heuristic tool in practice and teaching.

Relevance: 30.00%

Abstract:

The aim of this study was, within a sensitivity analysis framework, to determine whether additional model complexity gives a better capability to model the hydrology and nitrogen dynamics of a small Mediterranean forested catchment, or whether the additional parameters cause over-fitting. Three nitrogen models of varying hydrological complexity were considered. For each model, general sensitivity analysis (GSA) and Generalized Likelihood Uncertainty Estimation (GLUE) were applied, each based on 100,000 Monte Carlo simulations. The results highlighted the most complex structure as the most appropriate, providing the best representation of the non-linear patterns observed in the flow and streamwater nitrate concentrations between 1999 and 2002. Its 5% and 95% GLUE bounds, obtained using a multi-objective approach, provide the narrowest band for streamwater nitrogen, which suggests increased model robustness, though all models exhibit periods of inconsistently good and poor fits between simulated outcomes and observed data. The results confirm the importance of the riparian zone in controlling the short-term (daily) streamwater nitrogen dynamics in this catchment, but not the overall flux of nitrogen from the catchment. It was also shown that, as the complexity of a hydrological model increases, over-parameterisation occurs; the converse is true for a water quality model, where additional process representation leads to additional acceptable model simulations. Water quality data help constrain the hydrological representation in process-based models. Increased complexity was justifiable for modelling river-system hydrochemistry.
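
The sketch below illustrates the GLUE procedure on a toy one-parameter model: Monte Carlo parameter sampling, a Nash-Sutcliffe likelihood score per run, retention of "behavioural" runs above a threshold, and 5%/95% simulation bounds. The model, threshold and sample size are illustrative only, and GLUE proper weights the bounds by likelihood; plain quantiles are shown for brevity.

```python
import numpy as np

# Toy GLUE on a one-parameter streamflow recession model q(t) = exp(-k t).
rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 200)
obs = np.exp(-0.3 * t) + 0.02 * rng.normal(size=t.size)  # synthetic "observations"

n_runs = 10_000                                  # the study used 100,000 per model
ks = rng.uniform(0.05, 1.0, n_runs)              # Monte Carlo draws of the rate k
sims = np.exp(-np.outer(ks, t))                  # all model runs at once
nse = 1.0 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()

behavioural = nse > 0.7                          # acceptance threshold
lo = np.quantile(sims[behavioural], 0.05, axis=0)
hi = np.quantile(sims[behavioural], 0.95, axis=0)
print(f"{behavioural.sum()} behavioural runs; "
      f"5-95% bound width at t=5: {(hi - lo)[100]:.3f}")
```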

Relevance: 30.00%

Abstract:

As in any technology system, analysis and design issues are among the fundamental challenges in persuasive technology. Currently, the Persuasive Systems Development (PSD) framework is considered the most comprehensive framework for the design and evaluation of persuasive systems. However, the framework is limited in terms of providing detailed information that can lead to the selection of appropriate techniques depending on the variable nature of users or of use over time. In light of this, we propose a model for analysing and implementing behavioural change in persuasive technology, called the 3D-RAB model. The 3D-RAB model represents the three-dimensional relationships between attitude towards behaviour, attitude towards change or maintaining a change, and current behaviour, and distinguishes variable levels in a user’s cognitive state. As such, it provides a framework which could be used to select appropriate techniques for persuasive technology.
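
Purely to illustrate the structure, the sketch below encodes the three dimensions as binary levels, which yields eight cognitive states, and maps each to a crude technique-selection rule. The published model's levels, state labels and technique mappings are not claimed here; everything in this snippet is a simplified assumption.

```python
from dataclasses import dataclass
from itertools import product

# Illustrative encoding of the 3D-RAB idea: three dimensions, simplified here
# to binary levels, give 2**3 = 8 cognitive states.
@dataclass(frozen=True)
class CognitiveState:
    positive_attitude_to_behaviour: bool
    positive_attitude_to_change: bool
    performing_target_behaviour: bool

    def suggested_focus(self) -> str:
        """Crude, hypothetical technique-selection rule for illustration."""
        if self.performing_target_behaviour:
            return "reinforce and maintain the behaviour"
        if self.positive_attitude_to_change:
            return "reduce barriers to acting (e.g. reminders, tunnelling)"
        return "target attitudes first (e.g. social proof, framing)"

states = [CognitiveState(*combo) for combo in product([True, False], repeat=3)]
for s in states:
    print(s, "->", s.suggested_focus())
```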

Relevance: 30.00%

Abstract:

Decadal predictions have a high profile in the climate science community and beyond, yet very little is known about their skill. Nor is there any agreed protocol for estimating their skill. This paper proposes a sound and coordinated framework for verification of decadal hindcast experiments. The framework is illustrated for decadal hindcasts tailored to meet the requirements and specifications of CMIP5 (Coupled Model Intercomparison Project phase 5). The chosen metrics address key questions about the information content in initialized decadal hindcasts. These questions are: (1) Do the initial conditions in the hindcasts lead to more accurate predictions of the climate, compared to un-initialized climate change projections? and (2) Is the prediction model’s ensemble spread an appropriate representation of forecast uncertainty on average? The first question is addressed through deterministic metrics that compare the initialized and uninitialized hindcasts. The second question is addressed through a probabilistic metric applied to the initialized hindcasts and comparing different ways to ascribe forecast uncertainty. Verification is advocated at smoothed regional scales that can illuminate broad areas of predictability, as well as at the grid scale, since many users of the decadal prediction experiments who feed the climate data into applications or decision models will use the data at grid scale, or downscale it to even higher resolution. An overall statement on skill of CMIP5 decadal hindcasts is not the aim of this paper. The results presented are only illustrative of the framework, which would enable such studies. However, broad conclusions that are beginning to emerge from the CMIP5 results include (1) Most predictability at the interannual-to-decadal scale, relative to climatological averages, comes from external forcing, particularly for temperature; (2) though moderate, additional skill is added by the initial conditions over what is imparted by external forcing alone; however, the impact of initialization may result in overall worse predictions in some regions than provided by uninitialized climate change projections; (3) limited hindcast records and the dearth of climate-quality observational data impede our ability to quantify expected skill as well as model biases; and (4) as is common to seasonal-to-interannual model predictions, the spread of the ensemble members is not necessarily a good representation of forecast uncertainty. The authors recommend that this framework be adopted to serve as a starting point to compare prediction quality across prediction systems. The framework can provide a baseline against which future improvements can be quantified. The framework also provides guidance on the use of these model predictions, which differ in fundamental ways from the climate change projections that much of the community has become familiar with, including adjustment of mean and conditional biases, and consideration of how to best approach forecast uncertainty.
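
Two of the metric families discussed above can be sketched in a few lines: a mean squared skill score (MSSS) comparing the initialized ensemble mean against the uninitialized one, and the ensemble spread-to-error ratio. The data below are synthetic stand-ins, and the scores are illustrative rather than CMIP5 results.

```python
import numpy as np

# Sketch of two verification metrics for decadal hindcasts.
rng = np.random.default_rng(3)
n_years, n_members = 40, 10
truth = rng.normal(size=n_years)
init = truth + 0.5 * rng.normal(size=(n_members, n_years))    # initialized ensemble
uninit = 0.2 * truth + rng.normal(size=(n_members, n_years))   # uninitialized ensemble

def mse(forecast):
    """Mean squared error of the ensemble mean against the verification."""
    return np.mean((forecast.mean(axis=0) - truth) ** 2)

msss = 1.0 - mse(init) / mse(uninit)      # > 0: initialization adds skill
print(f"MSSS (initialized vs uninitialized): {msss:.2f}")

spread = np.sqrt(init.var(axis=0, ddof=1).mean())             # mean ensemble spread
rmse = np.sqrt(np.mean((init.mean(axis=0) - truth) ** 2))     # ensemble-mean RMSE
print(f"spread/error ratio: {spread / rmse:.2f} (near 1 suggests reliable spread)")
```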

Relevance: 30.00%

Abstract:

Sampling strategies for monitoring the status and trends in wildlife populations are often determined before the first survey is undertaken. However, there may be little information about the distribution of the population, so the sample design may be inefficient. Through time, as data are collected, more information about the distribution of animals in the survey region is obtained, but it can be difficult to incorporate this information in the survey design. This paper introduces a framework for monitoring motile wildlife populations within which the design of future surveys can be adapted using data from past surveys, whilst ensuring consistency in design-based estimates of status and trends through time. In each survey, part of the sample is selected from the previous survey sample using simple random sampling. The rest is selected with inclusion probability proportional to predicted abundance. Abundance is predicted using a model constructed from previous survey data and covariates for the whole survey region. Unbiased design-based estimators of status and trends and their variances are derived from two-phase sampling theory. Simulations over the short and long term indicate that, in general, more precise estimates of status and trends are obtained using this mixed strategy than a strategy in which all of the sample is retained or all is selected with probability proportional to predicted abundance. Furthermore, the mixed strategy is robust to poor predictions of abundance. Estimates of status are more precise than those obtained from a rotating panel design.
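
A rough sketch of the mixed selection step is given below, under simplifying assumptions (a fixed retention fraction, systematic probability-proportional-to-size sampling, and gamma-distributed predicted abundances); the paper's estimators and design details are not reproduced.

```python
import numpy as np

# Mixed strategy: keep part of the previous sample by simple random sampling,
# draw the rest with probability proportional to predicted abundance (PPS).
rng = np.random.default_rng(4)
n_units, n_sample, n_retain = 1000, 100, 50

prev_sample = rng.choice(n_units, size=n_sample, replace=False)
retained = rng.choice(prev_sample, size=n_retain, replace=False)  # SRS from old sample

pred = rng.gamma(2.0, 1.0, size=n_units)       # model-predicted abundance per unit
pred[retained] = 0.0                            # retained units cannot be drawn again
p = (n_sample - n_retain) * pred / pred.sum()   # target inclusion probabilities

cum = np.cumsum(p)                              # systematic PPS draw
points = rng.uniform(0.0, 1.0) + np.arange(n_sample - n_retain)
new_units = np.unique(np.searchsorted(cum, points))
sample = np.concatenate([retained, new_units])
print("new sample size:", sample.size)          # can dip below 100 if any p > 1
```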

Relevance: 30.00%

Abstract:

Undeniably, anticipation plays a crucial role in cognition. By what means it does so, to what extent, and what it achieves remain open questions. In a recent BBS target article, Clark (in press) depicts an integrative model of the brain that builds on hierarchical Bayesian models of neural processing (Rao and Ballard, 1999; Friston, 2005; Brown et al., 2011), and on their most recent formulation using the free-energy principle borrowed from thermodynamics (Feldman and Friston, 2010; Friston, 2010; Friston et al., 2010). Hierarchical generative models of cognition, such as those described by Clark, presuppose the manipulation of representations and internal models of the world, in as much detail as is perceptually available. Perhaps surprisingly, Clark acknowledges the existence of a “virtual version of the sensory data” (p. 4), but with no reference to some of the historical debates that shaped cognitive science, related to the storage, manipulation and retrieval of representations in a cognitive system (Shanahan, 1997), or to accounting for the emergence of intentionality within such a system (Searle, 1980; Preston and Bishop, 2002). Instead of demonstrating how this Bayesian framework responds to these foundational questions, Clark describes the structure and the functional properties of an action-oriented, multi-level system that is meant to combine perception, learning, and experience (Niedenthal, 2007).

Relevance: 30.00%

Abstract:

Cross-bred cow adoption is an important and potent policy variable precipitating subsistence-household entry into emerging milk markets. This paper focuses on the problem of designing policies that encourage and sustain milk-market expansion among a sample of subsistence households in the Ethiopian highlands. In this context it is desirable to measure households’ ‘proximity’ to market in terms of the level of deficiency of essential inputs. This problem is compounded by four factors. One is the existence of cross-bred cow numbers (count data) as an important, endogenous decision by the household; the second is the lack of a multivariate generalization of the Poisson regression model; the third is the censored nature of the milk-sales data (sales from non-participating households are, essentially, censored at zero); and the fourth is an important simultaneity among the decision to adopt a cross-bred cow, the decision about how much milk to produce, the decision about how much milk to consume, and the decision to market the milk that is produced but not consumed internally by the household. Routine application of Gibbs sampling and data augmentation overcomes these problems in a relatively straightforward manner. We model the count data from two sites close to Addis Ababa in a latent, categorical-variable setting with known bin boundaries. The single-equation model is then extended to a multivariate system that accommodates the covariance between the cross-bred-cow adoption, milk-output and milk-sales equations. The latent-variable procedure proves tractable in extension to the multivariate setting and provides important information for policy formation in emerging-market settings.
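
To illustrate the data-augmentation idea on one component of such a system, the sketch below runs a Gibbs sampler for a Tobit-style sales equation censored at zero, imputing latent sales for non-sellers and then redrawing the regression parameters. The priors, data and single-equation setup are simplified hypotheticals; the paper's full multivariate count system is not reproduced.

```python
import numpy as np
from scipy.stats import truncnorm

# Gibbs sampling with data augmentation for a sales equation censored at zero.
rng = np.random.default_rng(5)
n = 300
x = np.column_stack([np.ones(n), rng.poisson(1.5, n)])   # intercept, cow count
beta_true = np.array([-1.0, 1.2])
y_star = x @ beta_true + rng.normal(0.0, 1.0, n)          # latent desired sales
y = np.maximum(y_star, 0.0)                                # observed, censored at 0
cens = y == 0.0

beta, sigma2 = np.zeros(2), 1.0
draws = []
for it in range(2000):
    # 1) Data augmentation: impute latent sales for censored households from
    #    a normal distribution truncated above at zero.
    mu_c = x[cens] @ beta
    z = y.copy()
    z[cens] = truncnorm.rvs(-np.inf, (0.0 - mu_c) / np.sqrt(sigma2),
                            loc=mu_c, scale=np.sqrt(sigma2), random_state=rng)
    # 2) Conjugate draws for beta (flat prior) and sigma^2 (Jeffreys-style).
    xtx_inv = np.linalg.inv(x.T @ x)
    beta_hat = xtx_inv @ x.T @ z
    beta = rng.multivariate_normal(beta_hat, sigma2 * xtx_inv)
    resid = z - x @ beta
    sigma2 = (resid @ resid) / rng.chisquare(n)
    if it >= 500:                                          # discard burn-in
        draws.append(beta)
print("posterior mean beta:", np.array(draws).mean(axis=0))
```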

Relevance: 30.00%

Abstract:

A Lagrangian model of photochemistry and mixing is described (CiTTyCAT, stemming from the Cambridge Tropospheric Trajectory model of Chemistry And Transport), which is suitable for transport and chemistry studies throughout the troposphere. Over the last five years, the model has been developed in parallel at several different institutions and here those developments have been incorporated into one "community" model and documented for the first time. The key photochemical developments include a new scheme for biogenic volatile organic compounds and updated emissions schemes. The key physical development is to evolve composition following an ensemble of trajectories within neighbouring air-masses, including a simple scheme for mixing between them via an evolving "background profile", both within the boundary layer and free troposphere. The model runs along trajectories pre-calculated using winds and temperature from meteorological analyses. In addition, boundary layer height and precipitation rates, output from the analysis model, are interpolated to trajectory points and used as inputs to the mixing and wet deposition schemes. The model is most suitable in regimes when the effects of small-scale turbulent mixing are slow relative to advection by the resolved winds so that coherent air-masses form with distinct composition and strong gradients between them. Such air-masses can persist for many days while stretching, folding and thinning. Lagrangian models offer a useful framework for picking apart the processes of air-mass evolution over inter-continental distances, without being hindered by the numerical diffusion inherent to global Eulerian models. The model, including different box and trajectory modes, is described and some output for each of the modes is presented for evaluation. The model is available for download from a Subversion-controlled repository by contacting the corresponding authors.
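
The air-mass treatment can be caricatured as a box model following a trajectory and relaxed towards an evolving background, as in the sketch below. The species, rate constants and diurnal background are hypothetical and stand in for the model's full chemistry and mixing schemes.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy Lagrangian box: production, first-order chemical loss, and relaxation
# (mixing) towards an evolving background mixing ratio along a trajectory.
k_prod = 2.0e-2    # production rate (ppb per hour)
k_loss = 5.0e-2    # first-order chemical loss (1/hour)
k_mix = 1.0e-2     # mixing (relaxation) rate towards background (1/hour)

def background(t_hr):
    """Hypothetical diurnally varying background profile value (ppb)."""
    return 30.0 + 5.0 * np.sin(2.0 * np.pi * t_hr / 24.0)

def rhs(t, c):
    return k_prod - k_loss * c + k_mix * (background(t) - c)

sol = solve_ivp(rhs, (0.0, 120.0), [60.0])   # 5-day trajectory, 60 ppb initially
print("final mixing ratio (ppb):", float(sol.y[0, -1]))
```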

Relevance: 30.00%

Abstract:

It is well known that there is a dynamic relationship between cerebral blood flow (CBF) and cerebral blood volume (CBV). With increasing applications of functional MRI, where blood oxygen-level-dependent signals are recorded, the understanding and accurate modeling of the hemodynamic relationship between CBF and CBV become increasingly important. This study presents an empirical, data-based modeling framework for model identification from CBF and CBV experimental data. It is shown that the relationship between the changes in CBF and CBV can be described using a parsimonious autoregressive with exogenous input (ARX) model structure. It is observed that neither the ordinary least-squares (LS) method nor the classical total least-squares (TLS) method can produce accurate estimates from the original noisy CBF and CBV data. A regularized total least-squares (RTLS) method is thus introduced and extended to solve such an errors-in-variables problem. Quantitative results show that the RTLS method works very well on the noisy CBF and CBV data. Finally, combining RTLS with a filtering method leads to a parsimonious but very effective model that can characterize the relationship between the changes in CBF and CBV.
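
A sketch of the estimation issue is given below: a first-order ARX relation fitted both by ordinary least squares and by classical (unregularized) total least squares via the SVD, on synthetic noisy CBF/CBV changes. The regularization step of RTLS and the paper's filtering are omitted; all signals and coefficients here are made up.

```python
import numpy as np

# ARX sketch: dCBV(t) = a * dCBV(t-1) + b * dCBF(t) + e(t), both series noisy.
rng = np.random.default_rng(6)
T = 500
cbf = np.convolve(rng.normal(size=T), np.ones(20) / 20, mode="same")  # smooth CBF
cbv = np.zeros(T)
for t in range(1, T):
    cbv[t] = 0.8 * cbv[t - 1] + 0.3 * cbf[t]           # "true" hemodynamics
cbf_n = cbf + 0.05 * rng.normal(size=T)                 # noisy measurements
cbv_n = cbv + 0.05 * rng.normal(size=T)

A = np.column_stack([cbv_n[:-1], cbf_n[1:]])            # regressors (both noisy)
y = cbv_n[1:]

# TLS solution: smallest right singular vector of the augmented matrix [A | y].
_, _, Vt = np.linalg.svd(np.column_stack([A, y]))
v = Vt[-1]
theta_tls = -v[:2] / v[2]
theta_ls = np.linalg.lstsq(A, y, rcond=None)[0]
print("LS estimate :", theta_ls)    # attenuated by noise on the regressors
print("TLS estimate:", theta_tls)   # closer to (0.8, 0.3) in this toy setup
```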

Relevance: 30.00%

Abstract:

The pig is a single-stomached omnivorous mammal and an important model of human disease and nutrition. As such, it is necessary to establish a metabolic framework against which pathology-based variation can be compared. Here, a combination of one- and two-dimensional 1H and 13C nuclear magnetic resonance (NMR) spectroscopy and high-resolution magic angle spinning (HR-MAS) NMR was used to provide a systems overview of porcine metabolism via characterisation of the urine, serum, liver and kidney metabolomes. The metabolites observed in each of these biological compartments were found to be qualitatively comparable to the metabolic signature of the same biological matrices in humans and rodents. The data were modelled using a combination of principal components analysis and Venn diagram mapping. Urine represented the most metabolically distinct biological compartment studied, with a relatively greater number of NMR-detectable metabolites present, many of which are implicated in gut-microbial co-metabolic processes. The major interspecies differences observed were in the phase II conjugation of extra-genomic metabolites: the pig was observed to conjugate p-cresol, a gut-microbial metabolite of tyrosine, with glucuronide rather than with sulfate as seen in man. These observations are important to note when considering the translatability of experimental data derived from porcine models.
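
The pattern-recognition step can be sketched as principal components analysis on a samples-by-spectral-bins matrix, as below with scikit-learn on simulated stand-ins for normalised NMR spectra; the real spectra, preprocessing and Venn mapping are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA

# PCA sketch on simulated "spectra" from the four biological compartments.
rng = np.random.default_rng(7)
compartments = ["urine", "serum", "liver", "kidney"]
n_per, n_bins = 10, 200
X, labels = [], []
for c in compartments:
    centre = rng.normal(size=n_bins)                 # compartment-specific profile
    X.append(centre + 0.3 * rng.normal(size=(n_per, n_bins)))
    labels += [c] * n_per
X = np.vstack(X)

pca = PCA(n_components=2)
scores = pca.fit_transform(X)                        # PCA centres the data itself
for i, c in enumerate(compartments):                 # centroid of each compartment
    m = scores[i * n_per:(i + 1) * n_per].mean(axis=0)
    print(f"{c:>6}: PC1={m[0]:7.2f}  PC2={m[1]:7.2f}")
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 2))
```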

Relevance: 30.00%

Abstract:

In June 2009 the Sarychev volcano, located in the Kuril Islands to the northeast of Japan, erupted explosively, injecting ash and an estimated 1.2 ± 0.2 Tg of sulfur dioxide into the upper troposphere and lower stratosphere, making it arguably one of the 10 largest stratospheric injections in the last 50 years. During the period immediately after the eruption, we show that the sulfur dioxide (SO2) cloud was clearly detected by retrievals developed for the Infrared Atmospheric Sounding Interferometer (IASI) satellite instrument and that the resultant stratospheric sulfate aerosol was detected by the Optical Spectrograph and Infrared Imaging System (OSIRIS) limb sounder and the CALIPSO lidar. Additional surface-based instrumentation allows assessment of the impact of the eruption on the stratospheric aerosol optical depth. We use a nudged version of the HadGEM2 climate model to investigate how well this state-of-the-science climate model can replicate the distributions of SO2 and sulfate aerosol. The model simulations and OSIRIS measurements suggest that in the Northern Hemisphere the stratospheric aerosol optical depth was enhanced by around a factor of 3 (0.01 at 550 nm), with resultant impacts upon the radiation budget. The simulations indicate that, in the Northern Hemisphere for July 2009, the magnitude of the mean radiative impact from the volcanic aerosols is more than 60% of the direct radiative forcing of all anthropogenic aerosols put together. While the cooling induced by the eruption will likely not be detectable in the observational record, the combination of modeling and measurements would provide an ideal framework for simulating future larger volcanic eruptions.