42 results for Lecture capture

in CentAUR: Central Archive at the University of Reading - UK


Relevance:

20.00%

Publisher:

Abstract:

A quantitative model of wheat root systems is developed that links the size and distribution of the root system to the capture of water and nitrogen (both assumed to be evenly distributed with depth) during grain filling, and allows the economic consequences of this capture to be estimated. A particular feature of the model is its use of summarizing concepts and its reliance on a minimum number of parameters, each with a clear biological meaning. The model is then used to provide an economic sensitivity analysis of possible target characteristics for manipulating root systems. These characteristics were: root distribution with depth, proportional dry-matter partitioning to roots, resource capture coefficients, shoot dry weight at anthesis, specific root weight and water use efficiency. From the current parameter estimates it is concluded that a larger investment by the crop in fine roots at depth in the soil, and less proliferation of roots in surface layers, would improve yields by accessing extra resources. The economic return on investment in roots for water capture was twice that of the same investment for nitrogen capture.
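The abstract does not reproduce the model, but its core logic lends itself to a short illustration. The sketch below is entirely illustrative: the exponential root profile, the saturating capture fraction and all parameter values are my assumptions, not taken from the paper. It shows how shifting the same root dry matter deeper increases resource capture when water and nitrogen are uniform with depth.

```python
import numpy as np

# Illustrative sketch only: resource capture during grain filling as a
# function of root distribution with depth. Functional forms and numbers
# are placeholders, not the paper's parameterisation.

depths = np.linspace(0.05, 1.95, 20)        # soil layer midpoints (m)
water = np.full_like(depths, 10.0)          # available water per layer (mm), uniform
nitrogen = np.full_like(depths, 1.0)        # available N per layer (g m^-2), uniform

def capture(root_dm, decay):
    """Resources captured given total root dry matter (g m^-2)
    distributed exponentially with a depth decay rate."""
    density = np.exp(-decay * depths)
    density /= density.sum()
    layer_dm = root_dm * density
    # Saturating capture: more roots in a layer capture a larger fraction.
    frac = 1.0 - np.exp(-0.5 * layer_dm)
    return (frac * water).sum(), (frac * nitrogen).sum()

# Compare a shallow-weighted with a deep-weighted root system of equal cost.
for decay in (3.0, 1.0):
    w, n = capture(root_dm=50.0, decay=decay)
    print(f"decay={decay}: water={w:.1f} mm, nitrogen={n:.2f} g/m^2")
```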

Relevância:

20.00% 20.00%

Publicador:

Resumo:

With 6-man chess essentially solved, the available 6-man Endgame Tables (EGTs) have been scanned for zugzwang positions, where, unusually, having the move is a disadvantage. Review statistics, together with some highlights and positions, are provided here; the complete information is available on the ICGA website. An outcome of the review is the observation that the definition of zugzwang should be revisited, if only because the presence of en passant capture moves gives rise to three new, asymmetric types of zugzwang.
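As a concrete illustration of the test implied here, the following sketch classifies a position from its theoretical values with each side to move. It encodes the standard definition of zugzwang, not the authors' scanning code; the function and labels are mine.

```python
# Values are from White's perspective: 1 = White wins, 0 = draw, -1 = loss.
# White is in zugzwang when the position would score better for White with
# Black to move, i.e. when the obligation to move hurts White.

ZUGZWANG_TYPES = {
    (-1, 0): "loss-to-draw",  # White to move loses; Black to move, draw
    (0, 1):  "draw-to-win",   # White to move draws; Black to move, White wins
    (-1, 1): "loss-to-win",   # the full two-level swing
}

def zugzwang_type(v_wtm, v_btm):
    """Return the zugzwang type for White, or None if having the move does
    not hurt White. With an en passant capture available, the two
    'sides to move' are not the same diagram, which is what breaks the
    symmetry the abstract refers to."""
    return ZUGZWANG_TYPES.get((v_wtm, v_btm))

assert zugzwang_type(0, 1) == "draw-to-win"
assert zugzwang_type(1, 1) is None
```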

Relevance:

20.00%

Publisher:

Abstract:

In the 12th annual Broadbent Lecture at the Annual Conference, Dianne Berry outlined Broadbent's explicit and implicit influences on psychological science and scientists.

Relevance:

20.00%

Publisher:

Abstract:

Composites of wind speeds, equivalent potential temperature, mean sea level pressure, vertical velocity, and relative humidity have been produced for the 100 most intense extratropical cyclones in the Northern Hemisphere winter for the 40-yr ECMWF Re-Analysis (ERA-40) and the High-Resolution Global Environment Model (HiGEM). Features of conceptual models of cyclone structure—the warm conveyor belt, cold conveyor belt, and dry intrusion—have been identified in the composites from ERA-40 and compared to HiGEM. Such features can be identified in the composite fields despite the smoothing that occurs in the compositing process. The surface features and the three-dimensional structure of the cyclones in HiGEM compare very well with those from ERA-40. The warm conveyor belt is identified in the temperature and wind fields as a mass of warm air undergoing moist isentropic uplift and is very similar in ERA-40 and HiGEM. The rate of ascent is lower in HiGEM, associated with a shallower slope of the moist isentropes in the warm sector. There are also differences in the relative humidity fields in the warm conveyor belt. In ERA-40, the high values of relative humidity are strongly associated with the moist isentropic uplift, whereas in HiGEM this association is weaker. The cold conveyor belt is identified as rearward-flowing air that undercuts the warm conveyor belt and produces a low-level jet, and is very similar in HiGEM and ERA-40. The dry intrusion is identified in the 500-hPa vertical velocity and relative humidity. The structure of the dry intrusion compares well between HiGEM and ERA-40 but the descent is weaker in HiGEM because of weaker along-isentrope flow behind the composite cyclone. HiGEM's ability to represent the key features of extratropical cyclone structure can give confidence in future predictions from this model.
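The compositing step described here reduces, in essence, to averaging cyclone-centred subgrids of each field. The minimal numpy sketch below (synthetic data and an illustrative box size, none of it from the study) also makes plain why compositing smooths sharp features: each grid cell of the result averages over 100 storms.

```python
import numpy as np

def composite(field, centers, half_width):
    """field: 2-D array (lat, lon); centers: (i, j) grid indices of cyclone
    centres; half_width: box half-size in grid points."""
    boxes = []
    for i, j in centers:
        box = field[i - half_width:i + half_width + 1,
                    j - half_width:j + half_width + 1]
        if box.shape == (2 * half_width + 1, 2 * half_width + 1):
            boxes.append(box)        # skip centres too near the grid edge
    return np.mean(boxes, axis=0)    # cyclone-centred mean field

# Example: composite mean sea level pressure for 100 synthetic storm centres.
rng = np.random.default_rng(0)
mslp = 101325 + 500 * rng.standard_normal((181, 360))
centers = [(rng.integers(30, 150), rng.integers(30, 330)) for _ in range(100)]
mean_field = composite(mslp, centers, half_width=20)
print(mean_field.shape)  # (41, 41) box around the composite cyclone centre
```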

Relevance:

20.00%

Publisher:

Abstract:

The community pharmacy service medicines use review (MUR) was introduced in 2005 'to improve patient knowledge, concordance and use of medicines' through a private patient–pharmacist consultation. The MUR represents a fundamental change in community pharmacy service provision. While traditionally pharmacists are dispensers of medicines and providers of medicines advice, with patients as recipients, the MUR casts pharmacists as providers of consultation-type activities and patients as active participants. The MUR facilitates a two-way discussion about medicines use. Traditional patient–pharmacist behaviours transform into a new set of behaviours involving the booking of appointments, consultation processes and form completion, and the physical environment of the patient–pharmacist interaction moves from the traditional setting of the dispensary and medicines counter to a private consultation room. Thus, the new service challenges the traditional identities and behaviours of the patient and the pharmacist, as well as the environment in which the interaction takes place. In 2008, the UK government concluded that there is at present too much emphasis on the quantity of MURs rather than on their quality.[1] Among the plans to remedy the perceived imbalance was a suggestion to reward 'health outcomes' achieved, with calls for a more focussed and scientific approach to the evaluation of pharmacy services using outcomes research. Specifically, the UK government set out the principal research areas for the evaluation of pharmacy services to include 'patient and public perceptions and satisfaction' as well as 'impact on care and outcomes'. A limited number of 'patient satisfaction with pharmacy services' questionnaires are available, of varying quality, measuring dimensions relating to pharmacists' technical competence, behavioural impressions and general satisfaction. For example, an often-cited paper by Larson[2] uses two factors to measure satisfaction, namely 'friendly explanation' and 'managing therapy'; the factors are highly interrelated and the questions somewhat awkwardly phrased, but more importantly, we believe the questionnaire excludes some specific domains unique to the MUR. By conducting patient interviews with recent MUR recipients, we have been working to identify relevant concepts and develop a conceptual framework to inform item development for a Patient Reported Outcome Measure questionnaire bespoke to the MUR. We note with interest the recent launch of a multidisciplinary audit template by the Royal Pharmaceutical Society of Great Britain (RPSGB) in an attempt to review the effectiveness of MURs and improve their quality.[3] This template includes an MUR 'patient survey'. We will discuss this 'patient survey' in light of our work and existing patient satisfaction with pharmacy questionnaires, outlining a new conceptual framework as a basis for measuring patient satisfaction with the MUR. Ethical approval for the study was obtained from the NHS Surrey Research Ethics Committee on 2 June 2008.

References
1. Department of Health (2008). Pharmacy in England: Building on Strengths – Delivering the Future. London: HMSO. www.official-documents.gov.uk/document/cm73/7341/7341.pdf (accessed 29 September 2009).
2. Larson LN et al. Patient satisfaction with pharmaceutical care: update of a validated instrument. J Am Pharm Assoc 2002; 42: 44–50.
3. Royal Pharmaceutical Society of Great Britain (2009). Pharmacy Medicines Use Review – Patient Audit. London: RPSGB. http://qi4pd.org.uk/index.php/Medicines-Use-Review-Patient-Audit.html (accessed 29 September 2009).

Relevance:

20.00%

Publisher:

Abstract:

The development of high-throughput techniques ('chip' technology) for measuring gene expression and gene polymorphisms (genomics), and of techniques for measuring global protein expression (proteomics) and metabolite profiles (metabolomics), is revolutionising life science research, including research in human nutrition. In particular, the ability to undertake large-scale genotyping and to identify gene polymorphisms that determine risk of chronic disease (candidate genes) could enable an individual's risk to be defined at an early age. However, the search for candidate genes has proven more complex, and their identification more elusive, than previously thought. This is largely because much of the variability in risk results from interactions between the genome and environmental exposures. Whilst the former is now very well defined via the Human Genome Project, the latter (e.g. diet, toxins, physical activity) are poorly characterised, resulting in an inability to account for their confounding effects in most large-scale candidate gene studies. The polygenic nature of most chronic diseases adds further complexity, requiring very large studies to disentangle the relatively weak impacts of large numbers of potential 'risk' genes. The efficacy of diet as a preventative strategy could also be considerably increased by better information concerning gene polymorphisms that determine variability in responsiveness to specific diet and nutrient changes. Much of the limited available data is based on retrospective genotyping using stored samples from previously conducted intervention trials. Prospective studies are now needed to provide data that can serve as the basis for individualised dietary advice and for the development of food products that optimise disease prevention. Application of the new technologies in nutrition research offers considerable potential for the development of new knowledge and could greatly advance the role of diet as a preventative disease strategy in the 21st century. Given the potential economic and social benefits on offer, funding for research in this area needs greater recognition, and a stronger strategic focus, than is presently the case. Application of genomics to human health poses considerable ethical and societal, as well as scientific, challenges. Economic determinants of health care provision are more likely to resolve such issues than scientific developments or altruistic concerns for human health.

Relevance:

20.00%

Publisher:

Abstract:

1. Suction sampling is a popular method for the collection of quantitative data on grassland invertebrate populations, although there have been no detailed studies of the effectiveness of the method. 2. We investigate the effect of effort (duration and number of suction samples) and sward height on the efficiency of suction sampling of grassland beetle, true bug, planthopper and spider populations. We also compare suction sampling with an absolute sampling method based on the destructive removal of turfs. 3. Sampling for a duration of 16 seconds was sufficient to collect 90% of all individuals and species of grassland beetles, with less time required for the true bugs, spiders and planthoppers. The number of samples required to collect 90% of the species was more variable, although in general 55 sub-samples were sufficient for all groups except the true bugs. Increasing sward height had a negative effect on the capture efficiency of suction sampling. 4. The assemblage structure of beetles, planthoppers and spiders was independent of the sampling method (suction or absolute) used. 5. Synthesis and applications. In contrast to other sampling methods used in grassland habitats (e.g. sweep netting or pitfall trapping), suction sampling is an effective quantitative tool for the measurement of invertebrate diversity and assemblage structure provided sward height is included as a covariate. Effective sampling of beetles, true bugs, planthoppers and spiders together requires a minimum sampling effort of 110 sub-samples, each of 16 seconds' duration. Such sampling intensities can be adjusted depending on the taxa sampled, and we provide information to minimize sampling problems associated with this versatile technique. Suction sampling should remain an important component of the toolbox of techniques used in both experimental and management sampling regimes within agroecosystems, grasslands and other low-lying vegetation types.
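The effort figures in point 3 are the output of a species-accumulation calculation. Here is a minimal sketch of that kind of calculation on synthetic incidence data; the detection probabilities and matrix dimensions are invented for illustration, not the study's data.

```python
import numpy as np

# Synthetic sub-sample x species incidence matrix: rarer species have
# lower per-sub-sample detection probabilities.
rng = np.random.default_rng(1)
n_subsamples, n_species = 110, 60
p = rng.uniform(0.01, 0.4, size=n_species)
incidence = rng.random((n_subsamples, n_species)) < p

# How many pooled sub-samples are needed to recover 90% of the species
# observed with full effort?
total_species = incidence.any(axis=0).sum()
for k in range(1, n_subsamples + 1):
    seen = incidence[:k].any(axis=0).sum()
    if seen >= 0.9 * total_species:
        print(f"{k} sub-samples recover 90% of {total_species} species")
        break
```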

Relevance:

20.00%

Publisher:

Abstract:

This contribution investigates the problem of estimating the size of a population, also known as the missing cases problem. Suppose a registration system aims to identify all cases having a certain characteristic, such as a specific disease (cancer, heart disease, ...), a disease-related condition (HIV, heroin use, ...) or a specific behavior (driving a car without a license). Every case in such a registration system has a notification history, in that it may have been identified several times (at least once), which can be understood as a particular capture-recapture situation. Cases that have never been listed on any occasion are necessarily left out, and it is this frequency that one wants to estimate. In this paper, modelling concentrates on the counting distribution, i.e. the distribution of the variable that counts how often a given case has been identified by the registration system. Besides very simple models such as the binomial or Poisson distribution, finite (nonparametric) mixtures of these are considered, providing rather flexible modelling tools. Estimation is done by maximum likelihood by means of the EM algorithm. A case study on heroin users in Bangkok in the year 2001 completes the contribution.
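A minimal sketch of this estimation idea follows, using a plain zero-truncated Poisson rather than the paper's finite mixtures to keep the example short: the EM algorithm treats the never-identified cases as missing data. The frequency data below are invented, not the Bangkok data.

```python
import math

def em_ztpoisson(counts, iters=200):
    """counts[k] = number of cases identified exactly k times (k >= 1);
    cases never identified (k = 0) are missing by construction."""
    n = sum(counts.values())                       # observed cases
    total = sum(k * f for k, f in counts.items())  # total identifications
    lam = total / n                                # initial guess
    for _ in range(iters):
        p0 = math.exp(-lam)
        f0 = n * p0 / (1 - p0)   # E-step: expected never-identified cases
        lam = total / (n + f0)   # M-step: complete-data Poisson MLE
    return n + f0, lam           # population size estimate, rate

# Toy identification frequencies (hypothetical):
counts = {1: 1200, 2: 410, 3: 105, 4: 22, 5: 5}
N_hat, lam_hat = em_ztpoisson(counts)
print(f"estimated population size: {N_hat:.0f}, lambda: {lam_hat:.3f}")
```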

Relevance:

20.00%

Publisher:

Abstract:

This paper investigates applications of capture-recapture methods to human populations. Capture-recapture methods are commonly used to estimate the size of wildlife populations, but they can also be used in epidemiology and the social sciences, for estimating the prevalence of a particular disease or the size of the homeless population in a certain area. Here we focus on estimating the prevalence of infectious diseases. Several estimators of population size are considered: the Lincoln-Petersen estimator and its modified version, the Chapman estimator; Chao's lower bound estimator; Zelterman's estimator; McKendrick's moment estimator; and the maximum likelihood estimator. In order to evaluate these estimators, they are applied to real, three-source, capture-recapture data. By conditioning on each of the sources of the three-source data, we have been able to compare the estimators with the true value that they are estimating. The Chapman and Chao estimators were compared in terms of their relative bias. A variance formula derived through conditioning is suggested for Chao's estimator, and normal 95% confidence intervals are calculated for this and the Chapman estimator. We then compare the coverage of the respective confidence intervals. Furthermore, a simulation study is included to compare Chao's and Chapman's estimators. Results indicate that Chao's estimator is less biased than Chapman's estimator unless both sources are independent. Chao's estimator also has the smaller mean squared error. Finally, the implications and limitations of the above methods are discussed, with suggestions for further development.
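The simpler of these estimators have standard closed forms, sketched below for the two-source case using their usual textbook definitions; the numbers in the example are illustrative, not the study's data.

```python
import math

# n1, n2: cases listed by each source; m: cases listed by both;
# n: total distinct cases observed; f1, f2: cases identified exactly
# once / exactly twice across the sources.

def lincoln_petersen(n1, n2, m):
    return n1 * n2 / m

def chapman(n1, n2, m):
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

def chao_lower_bound(n, f1, f2):
    return n + f1 ** 2 / (2 * f2)

def zelterman(n, f1, f2):
    lam = 2 * f2 / f1
    return n / (1 - math.exp(-lam))

# Illustrative two-source data:
n1, n2, m = 400, 300, 120
n = n1 + n2 - m
f1 = (n1 - m) + (n2 - m)   # seen by exactly one source
f2 = m                     # seen by both sources
print(lincoln_petersen(n1, n2, m), chapman(n1, n2, m),
      chao_lower_bound(n, f1, f2), zelterman(n, f1, f2))
```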

Relevance:

20.00%

Publisher:

Abstract:

Population size estimation with discrete or nonparametric mixture models is considered, and reliable ways of constructing the nonparametric mixture model estimator are reviewed and set into perspective. The maximum likelihood estimator of the mixing distribution is constructed for any number of components up to the global nonparametric maximum likelihood bound using the EM algorithm. In addition, the estimators of Chao and Zelterman are considered, with some generalisations of Zelterman's estimator. All computations are done with CAMCR, special-purpose software developed for population size estimation with mixture models. Several examples and data sets are discussed and the estimators illustrated. Problems with using the mixture model-based estimators are highlighted.
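For orientation, the standard construction behind such mixture-model estimators (given here in textbook form, not quoted from the paper) corrects the observed count by the estimated probability of never being observed:

```latex
% Horvitz-Thompson-type correction for a Poisson mixture with
% nonparametric maximum likelihood estimate \hat{Q} of the mixing
% distribution (discrete, with at most k support points).
\[
  \hat{p}_0 \;=\; \int e^{-\lambda}\, d\hat{Q}(\lambda)
  \;=\; \sum_{j=1}^{k} \hat{q}_j \, e^{-\hat{\lambda}_j},
  \qquad
  \hat{N} \;=\; \frac{n}{1 - \hat{p}_0},
\]
% where n is the number of observed cases, and \hat{\lambda}_j, \hat{q}_j
% are the estimated support points and weights.
```

The Chao and Zelterman estimators mentioned above can be read as robust alternatives to this correction that use only the frequencies of cases seen once or twice.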

Relevance:

20.00%

Publisher:

Abstract:

The article considers screening human populations with two screening tests. If either of the two tests is positive, then a full evaluation of the disease status is undertaken; however, if both diagnostic tests are negative, then the disease status remains unknown. This procedure leads to a data constellation in which, for each disease status, the 2 x 2 table associated with the two diagnostic tests used in screening has exactly one empty, unknown cell. To estimate the unobserved cell counts, previous approaches assume independence of the two diagnostic tests and use specific models, including the special mixture model of Walter or unconstrained capture-recapture estimates. Often, as is also demonstrated in this article by means of a simple test, the independence of the two screening tests is not supported by the data. Two new estimators are suggested that allow association between the screening tests, although the form of the association must be assumed to be homogeneous over disease status. These estimators are modifications of the simple capture-recapture estimator and are easy to construct. The estimators are investigated for several screening studies with fully evaluated disease status, in which the new estimators can be shown to behave better than the previous conventional ones. Finally, the performance of the new estimators is compared with that of maximum likelihood estimators, which are more difficult to obtain in these models. The results indicate that the loss of efficiency is minor.
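A minimal sketch of the cell-filling idea follows. It is my reading of the abstract, not the paper's exact estimator: under independence the empty cell is filled by the simple capture-recapture estimate, and allowing a homogeneous association scales that estimate by an odds-ratio term.

```python
def fill_empty_cell(n_pp, n_pn, n_np, odds_ratio=1.0):
    """Estimate the empty cell (both screening tests negative, disease
    status unknown) of the 2 x 2 table for one disease status.
    n_pp: both tests positive; n_pn: test 1 positive only;
    n_np: test 2 positive only. odds_ratio=1.0 is the independence case;
    the single homogeneous odds-ratio term is an assumption made for
    this sketch."""
    return odds_ratio * n_pn * n_np / n_pp

# Independence-based estimate of the never-detected count, then the same
# estimate allowing a positive association between the tests:
print(fill_empty_cell(n_pp=50, n_pn=30, n_np=20))                  # 12.0
print(fill_empty_cell(n_pp=50, n_pn=30, n_np=20, odds_ratio=2.0))  # 24.0
```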