733 results for Missed appointments
Abstract:
Wydział Chemii: Pracownia Chemii Bioorganicznej (Faculty of Chemistry: Laboratory of Bioorganic Chemistry)
Abstract:
These notes have been issued on a small scale in 1983 and 1987 and on request at other times. This issue follows two items of news. First, Walter Colquitt and Luther Welsh found the 'missed' Mersenne prime M110503 and advanced the frontier of complete Mp-testing to 139,267. In so doing, they terminated Slowinski's significant string of four consecutive Mersenne primes. Secondly, a team of five established a non-Mersenne number as the largest known prime. This result terminated the 1952-89 reign of Mersenne primes. All the original Mersenne numbers with p < 258 were factorised some time ago. The Sandia Laboratories team of Davis, Holdridge & Simmons, with some little assistance from a CRAY machine, cracked M211 in 1983 and M251 in 1984. They contributed their results to the 'Cunningham Project', care of Sam Wagstaff. That project is now moving apace thanks to developments in technology, factorisation and primality testing. New levels of computer power and new computer architectures motivated by the open-ended promise of parallelism are now available. Once again, the suppliers may be offering free buildings with the computer. However, the Sandia '84 CRAY-1 implementation of the quadratic-sieve method is now outpowered by the number-field sieve technique. This is deployed either on purpose-built hardware or on large syndicates of collaborating standard processors, even distributed world-wide. New factorisation techniques of both special and general applicability have been defined and deployed. The elliptic-curve method finds large factors with helpful properties, while the number-field sieve approach is breaking down composites with over one hundred digits. The material is updated on an occasional basis to follow the latest developments in primality-testing large Mp and factorising smaller Mp; all dates derive from the published literature or referenced private communications. Minor corrections, additions and changes merely advance the issue number after the decimal point. The reader is invited to report any errors and omissions that have escaped the proof-reading, to answer the unresolved questions noted and to suggest additional material associated with this subject.
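As background to the complete Mp-testing mentioned above: the standard tool for testing a Mersenne number Mp = 2^p - 1 is the Lucas-Lehmer test. A minimal Python sketch (an illustration added here, not part of the original notes):

```python
def is_mersenne_prime(p: int) -> bool:
    """Lucas-Lehmer test: for an odd prime p, M_p = 2**p - 1 is prime
    iff s == 0 after p - 2 iterations of s -> s*s - 2 (mod M_p), s_0 = 4."""
    m = (1 << p) - 1          # M_p = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# M_7 = 127 is prime; M_11 = 2047 = 23 * 89 is not.
assert is_mersenne_prime(7) and not is_mersenne_prime(11)
```

Production runs such as those described apply this same recurrence, with fast (FFT-based) multiplication, to exponents in the hundred-thousand range.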
Abstract:
One of the primary goals of the Center for Integrated Space Weather Modeling (CISM) effort is to assess and improve prediction of the solar wind conditions in near‐Earth space, arising from both quasi‐steady and transient structures. We compare 8 years of L1 in situ observations to predictions of the solar wind speed made by the Wang‐Sheeley‐Arge (WSA) empirical model. The mean‐square error (MSE) between the observed and model predictions is used to reach a number of useful conclusions: there is no systematic lag in the WSA predictions, the MSE is found to be highest at solar minimum and lowest during the rise to solar maximum, and the optimal lead time for 1 AU solar wind speed predictions is found to be 3 days. However, MSE is shown to frequently be an inadequate “figure of merit” for assessing solar wind speed predictions. A complementary, event‐based analysis technique is developed in which high‐speed enhancements (HSEs) are systematically selected and associated from observed and model time series. The WSA model is validated using comparisons of the number of hit, missed, and false HSEs, along with the timing and speed magnitude errors between the forecasted and observed events. Morphological differences between the different HSE populations are investigated to aid interpretation of the results and improvements to the model. Finally, by defining discrete events in the time series, model predictions from above and below the ecliptic plane can be used to estimate an uncertainty in the predicted HSE arrival times.
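Since the MSE-versus-lag analysis is central here, a hedged sketch of how such a comparison can be set up (NumPy; the series below are synthetic stand-ins, not CISM/WSA data or code):

```python
import numpy as np

def mse_at_lag(observed: np.ndarray, predicted: np.ndarray, lag: int) -> float:
    """MSE between the observation and the prediction shifted by `lag` samples;
    scanning `lag` reveals any systematic timing offset in the predictions."""
    if lag > 0:
        o, p = observed[lag:], predicted[:-lag]
    elif lag < 0:
        o, p = observed[:lag], predicted[-lag:]
    else:
        o, p = observed, predicted
    return float(np.mean((o - p) ** 2))

# Synthetic daily solar wind speeds (km/s): prediction = observation + noise.
rng = np.random.default_rng(0)
obs = 400.0 + 50.0 * rng.standard_normal(365)
pred = obs + 30.0 * rng.standard_normal(365)
best_lag = min(range(-5, 6), key=lambda k: mse_at_lag(obs, pred, k))
print(best_lag)  # ~0 here, mirroring the "no systematic lag" conclusion
```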
Abstract:
The community pharmacy service medicines use review (MUR) was introduced in 2005 ‘to improve patient knowledge, concordance and use of medicines’ through a private patient–pharmacist consultation. The MUR presents a fundamental change in community pharmacy service provision. While traditionally pharmacists are dispensers of medicines and providers of medicines advice, with patients as recipients, the MUR casts pharmacists as providers of consultation-type activities and patients as active participants. The MUR facilitates a two-way discussion about medicines use. Traditional patient–pharmacist behaviours transform into a new set of behaviours involving the booking of appointments, consultation processes and form completion, and the physical environment of the patient–pharmacist interaction moves from the traditional setting of the dispensary and medicines counter to a private consultation room. Thus, the new service challenges traditional identities and behaviours of the patient and the pharmacist as well as the environment in which the interaction takes place. In 2008, the UK government concluded that there was at present too much emphasis on the quantity of MURs rather than on their quality.[1] A number of plans to remedy the perceived imbalance included a suggestion to reward ‘health outcomes’ achieved, with calls for a more focussed and scientific approach to the evaluation of pharmacy services using outcomes research. Specifically, the UK government set out the principal research areas for the evaluation of pharmacy services to include ‘patient and public perceptions and satisfaction’ as well as ‘impact on care and outcomes’. A limited number of ‘patient satisfaction with pharmacy services’ type questionnaires are available, of varying quality, measuring dimensions relating to pharmacists’ technical competence, behavioural impressions and general satisfaction. For example, an often cited paper by Larson[2] uses two factors to measure satisfaction, namely ‘friendly explanation’ and ‘managing therapy’; the factors are highly interrelated and the questions somewhat awkwardly phrased, but more importantly, we believe the questionnaire excludes some specific domains unique to the MUR. By conducting patient interviews with recent MUR recipients, we have been working to identify relevant concepts and develop a conceptual framework to inform item development for a Patient Reported Outcome Measure questionnaire bespoke to the MUR. We note with interest the recent launch of a multidisciplinary audit template by the Royal Pharmaceutical Society of Great Britain (RPSGB) in an attempt to review the effectiveness of MURs and improve their quality.[3] This template includes an MUR ‘patient survey’. We will discuss this ‘patient survey’ in light of our work and existing patient satisfaction with pharmacy questionnaires, outlining a new conceptual framework as a basis for measuring patient satisfaction with the MUR. Ethical approval for the study was obtained from the NHS Surrey Research Ethics Committee on 2 June 2008. References 1. Department of Health (2008). Pharmacy in England: Building on Strengths – Delivering the Future. London: HMSO. www.official-documents.gov.uk/document/cm73/7341/7341.pdf (accessed 29 September 2009). 2. Larson LN et al. Patient satisfaction with pharmaceutical care: update of a validated instrument. J Am Pharm Assoc 2002; 42: 44–50. 3. Royal Pharmaceutical Society of Great Britain (2009). Pharmacy Medicines Use Review – Patient Audit. London: RPSGB. http://qi4pd.org.uk/index.php/Medicines-Use-Review-Patient-Audit.html (accessed 29 September 2009).
Abstract:
Introduction The medicines use review (MUR), a new community pharmacy ‘service’, was launched in England and Wales to improve patients’ knowledge and use of medicines through a private, patient–pharmacist appointment. After 18 months, only 30% of pharmacies were providing MURs, at an average of 120 per annum (maximum 400 allowed).1 One reason linked to low delivery is patient recruitment.2 Our aim was to examine how the MUR is symbolised and given meaning via printed patient information, and the potential implications. Method The language of 10 MUR patient leaflets, including the NHS booklet,3 and leaflets from multiples and wholesalers, was evaluated by discourse analysis. Results and Discussion Before experiencing MURs, patients conceivably ‘categorise’ relationships with pharmacists based on traditional interactions.4 Yet none of the leaflets explicitly describes the MUR as ‘new’; they presuppose that patients will become involved in activities outside of their pre-existing relationship with pharmacists, such as appointments, self-completion of charts, and pharmacy action plans. The MUR process is described inconsistently, with interchangeable use of formal (‘review meeting’) and informal (‘friendly’) terminology, the latter presumably to portray an intended ‘negotiation model’ of interaction.5 Assumptions exist about attitudes (‘not understanding’; ‘problems’) that might lead patients to an appointment. However, research has identified a multitude of reasons why patients choose (or not) to consult practitioners,6 and marketing of MURs should also consider other barriers. For example, it may be prudent to remove time limits, to avoid implying that patients might not be listened to fully during what is, for them, an additional practitioner consultation.
Abstract:
In this paper we consider the estimation of population size from one-source capture–recapture data, that is, a list in which individuals can potentially be found repeatedly and where the question is how many individuals are missed by the list. As a typical example, we provide data from a drug user study in Bangkok from 2001, where the list consists of drug users who repeatedly contact treatment institutions. Drug users with 1, 2, 3, ... contacts occur, but drug users with zero contacts are not present, requiring the size of this group to be estimated. Statistically, these data can be considered as stemming from a zero-truncated count distribution. We revisit an estimator for the population size suggested by Zelterman that is known to be robust under potential unobserved heterogeneity. We demonstrate that the Zelterman estimator can be viewed as a maximum likelihood estimator for a locally truncated Poisson likelihood which is equivalent to a binomial likelihood. This result allows the extension of the Zelterman estimator by means of logistic regression to include observed heterogeneity in the form of covariates. We also review an estimator proposed by Chao and explain why we are not able to obtain similar results for this estimator. The Zelterman estimator is applied in two case studies, the first a drug user study from Bangkok, the second an illegal immigrant study in the Netherlands. Our results suggest the new estimator should be used, in particular, if substantial unobserved heterogeneity is present.
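For reference, the Zelterman estimator discussed here has a standard closed form (well known from the capture–recapture literature; the formula below is background, not reproduced from the abstract). With n observed individuals, and f_1 and f_2 the numbers of individuals appearing exactly once and exactly twice on the list:

```latex
\hat{\lambda} = \frac{2 f_2}{f_1}, \qquad
\hat{N}_{\mathrm{Z}} = \frac{n}{1 - e^{-\hat{\lambda}}}
```

The denominator estimates the probability of being observed at least once, so dividing the observed count n by it scales up to the full population, including the zero-contact group.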
Abstract:
Sixty cattle farmers in England were questioned about the costs associated with premovement testing for bovine tuberculosis (TB). On average, the farmers had premovement tested 2.45 times in the previous 12 months, but the majority had tested only once. An average of 28.6 animals were tested on each occasion, but there were wide variations. The average farm labour costs were £4.00 per animal tested, veterinary costs were £4.33 and other costs were £0.51, giving a total cost of £8.84, but there were wide variations between farms, and many incurred costs of more than £20 per animal. A majority of the farmers also cited disruption to the farm business or missed market opportunities as costs, but few could estimate their financial cost. Most of the farmers thought that premovement testing was a cost burden on their business, and over half thought it was not an effective policy to control bovine TB.
Abstract:
It is well established that crop production is inherently vulnerable to variations in the weather and climate. More recently the influence of vegetation on the state of the atmosphere has been recognized. The seasonal growth of crops can influence the atmosphere and have local impacts on the weather, which in turn affects the rate of seasonal crop growth and development. Considering the coupled nature of the crop-climate system, and the fact that a significant proportion of land is devoted to the cultivation of crops, important interactions may be missed when studying crops and the climate system in isolation, particularly in the context of land use and climate change. To represent the two-way interactions between seasonal crop growth and atmospheric variability, we integrate a crop model developed specifically to operate at large spatial scales (General Large Area Model for annual crops) into the land surface component of a global climate model (GCM; HadAM3). In the new coupled crop-climate model, the simulated environment (atmosphere and soil states) influences growth and development of the crop, while simultaneously the temporal variations in crop leaf area and height across its growing season alter the characteristics of the land surface that are important determinants of surface fluxes of heat and moisture, as well as other aspects of the land-surface hydrological cycle. The coupled model realistically simulates the seasonal growth of a summer annual crop in response to the GCM's simulated weather and climate. The model also reproduces the observed relationship between seasonal rainfall and crop yield. The integration of a large-scale single crop model into a GCM, as described here, represents a first step towards the development of fully coupled crop and climate models. Future development priorities and challenges related to coupling crop and climate models are discussed.
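Purely to make the two-way coupling loop concrete, here is a toy interaction in the spirit described; every quantity and update rule below is invented for illustration and bears no relation to the actual GLAM or HadAM3 equations:

```python
from dataclasses import dataclass

@dataclass
class CropState:
    lai: float = 0.1      # leaf area index
    height: float = 0.05  # canopy height (m)

def grow_crop(crop, temp_c, soil_moisture):
    # Crop growth responds to the simulated environment (invented rule).
    growth = 0.02 * max(temp_c - 8.0, 0.0) * soil_moisture
    return CropState(lai=crop.lai + growth, height=crop.height + 0.5 * growth)

def step_climate(temp_c, soil_moisture, crop):
    # Denser canopy -> more evapotranspiration -> cooler, drier surface (invented rule).
    evap = 0.01 * crop.lai * soil_moisture
    return temp_c - 2.0 * evap, min(max(soil_moisture - evap + 0.02, 0.0), 1.0)

temp_c, soil_moisture, crop = 15.0, 0.5, CropState()
for day in range(120):  # one idealised growing season, daily steps
    crop = grow_crop(crop, temp_c, soil_moisture)       # environment -> crop
    temp_c, soil_moisture = step_climate(temp_c, soil_moisture, crop)  # crop -> environment
print(round(crop.lai, 2), round(temp_c, 2))
```

The essential point, as in the coupled crop-climate model, is that neither component is driven by a fixed forcing: each step's crop state alters the surface that the climate step sees, and vice versa.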
Abstract:
Background A significant proportion of women who are vulnerable to postnatal depression refuse to engage in treatment programmes. Little is known about them, other than some general demographic characteristics. In particular, their access to health care and their own and their infants' health outcomes are uncharted. Methods We conducted a nested cohort case-control study, using data from computerized health systems, and general practitioner (GP) and maternity records, to identify the characteristics, health service contacts, and maternal and infant health outcomes for primiparous antenatal clinic attenders at high risk for postnatal depression who either refused (self-exclusion group) or else agreed (take-up group) to receive additional Health Visiting support in pregnancy and the first 2 months postpartum. Results Women excluding themselves from Health Visitor support were younger and less highly educated than women willing to take up the support. They were less likely to attend midwifery, GP and routine Health Visitor appointments, but were more likely to book in late and to attend the accident and emergency department (A&E). Their infants had poorer outcomes in terms of gestation, birthweight and breastfeeding. Differences between the groups persisted when age and education were taken into account for midwifery contacts, A&E attendance and gestation; the difference in the initiation of breastfeeding was attenuated, but not wholly explained, by age and education. Conclusion A subgroup of psychologically vulnerable childbearing women are at particular risk for poor access to health care and adverse infant outcomes. Barriers to take-up of services need to be understood in order better to deliver care.
Abstract:
Background: The government has proposed a 48-hour target for GP availability. Although many practices are moving towards delivering that goal, recent national patient surveys have reported a deterioration in patients' reports of doctor availability. What practice factors contribute to patients' perceptions of doctor availability? Method: A cross-sectional patient survey (11 000 patients from 54 inner London practices; 7247 (66%) respondents) using the General Practice Assessment Survey. We asked patients how soon they could be seen in their practice following non-urgent consultation requests and related their aggregated responses to the characteristics of their practice. Results: Three factors relating to practice administration and the operation of appointment systems independently predicted patients' reports of doctor availability: the proportion of patients asked to attend the surgery and wait to be seen, the proportion of patients seen using an emergency surgery arrangement, and the extent of practice computerization. Conclusion: Some practices may have difficulty in meeting the target for GP availability. Meeting the target will involve careful review of practice administrative procedures.
Abstract:
If people monitor a visual stimulus stream for targets, they often miss the second (T2) if it appears soon after the first (T1), the attentional blink. There is one exception: T2 is often not missed if it appears right after T1, i.e., at lag 1. This lag-1 sparing is commonly attributed to the possibility that T1 processing opens an attentional gate, which may be so sluggish that an early T2 can slip in before it closes. We investigated why the gate may close and exclude further stimuli from processing. We compared a control approach, which assumes that gate closing is exogenously triggered by the appearance of nontargets, and an integration approach, which assumes that gate closing is under endogenous control. As predicted by the latter but not the former, T2 performance and target reversals were strongly affected by the temporal distance between T1 and T2, whereas the presence or the absence of a nontarget intervening between T1 and T2 had little impact.
Abstract:
Increasingly, distributed systems are being used to host all manner of applications. While these platforms provide a relatively cheap and effective means of executing applications, there has so far been little work on developing tools and utilities that can help application developers understand problems with the supporting software or the executing applications. To fully understand why an application executing on a distributed system is not behaving as expected, it is important that not only the application but also the underlying middleware and the operating system are analysed; otherwise issues could be missed, and overall performance profiling and fault diagnosis would certainly be harder to understand. We believe that one approach to profiling and analysing distributed systems and their associated applications is via the plethora of log files generated at runtime. In this paper we report on a system (Slogger) that utilises various emerging Semantic Web technologies to gather the heterogeneous log files generated by the various layers in a distributed system and unify them in a common data store. Once unified, the log data can be queried and visualised in order to highlight potential problems or issues that may be occurring in the supporting software or the application itself.
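The abstract does not detail Slogger's internals; the following is only a hedged sketch of the general pattern it describes (log records from different layers lifted into RDF and queried across layers), written with rdflib and a made-up log schema and namespace:

```python
from rdflib import Graph, Literal, Namespace, RDF

LOG = Namespace("http://example.org/log#")  # hypothetical vocabulary
g = Graph()

# Hypothetical unified log records: (layer, timestamp, level, message).
records = [
    ("middleware", "2009-03-01T10:00:01", "ERROR", "broker timeout"),
    ("os",         "2009-03-01T10:00:02", "WARN",  "high load"),
]
for i, (layer, ts, level, msg) in enumerate(records):
    event = LOG[f"event{i}"]
    g.add((event, RDF.type, LOG.Event))
    g.add((event, LOG.layer, Literal(layer)))
    g.add((event, LOG.timestamp, Literal(ts)))
    g.add((event, LOG.level, Literal(level)))
    g.add((event, LOG.message, Literal(msg)))

# Once unified, one SPARQL query spans all layers of the stack.
q = """SELECT ?layer ?msg WHERE {
         ?e a log:Event ; log:level "ERROR" ; log:layer ?layer ; log:message ?msg .
       }"""
for row in g.query(q, initNs={"log": LOG}):
    print(row.layer, row.msg)
```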
Abstract:
Most of the dissolved organic carbon (DOC) exported from catchments is transported during storm events. Accurate assessments of DOC fluxes are essential to understand long-term trends in the transport of DOC from terrestrial to aquatic systems, and also the loss of carbon from peatlands, in order to determine changes in the source/sink status of peatland carbon stores. However, many long-term monitoring programmes collect water samples at a frequency (e.g. weekly/monthly) lower than the duration of a typical storm event (typically <1–2 days). As widespread observations in catchments dominated by organo-mineral soils have shown that both the concentration and flux of DOC increase during storm events, lower-frequency monitoring could result in substantial underestimation of DOC flux, as the most dynamic periods of transport are missed. However, our intensive monitoring study in a UK upland peatland catchment showed a contrasting response to these previous studies. Our results showed that (i) DOC concentrations decreased during autumn storm events and showed a poor relationship with flow during other seasons; and (ii) this decrease in concentrations during autumn storms caused DOC flux estimates based on weekly monitoring data to be over-estimated rather than under-estimated, because the flow-weighted mean concentration used in the flux calculations was over- rather than under-estimated. However, as DOC flux is ultimately controlled by discharge volume, and therefore rainfall, and the magnitude of the change in discharge was greater than the magnitude of the decline in concentrations, DOC flux increased during individual storm events. The implications for long-term DOC trends are therefore contradictory, as increased rainfall could increase flux but cause an overall decrease in DOC concentrations from peatland streams. Care needs to be taken when interpreting long-term trends in DOC flux rather than concentration: as flux is calculated from discharge estimates, and discharge is controlled by rainfall, DOC flux and rainfall/discharge will always be well correlated.
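As background to the flow-weighted mean concentration mentioned above, a minimal sketch of the standard flux calculation it refers to (my illustration with hypothetical sample values, not the study's code):

```python
import numpy as np

def doc_flux_kg(conc_mg_per_l: np.ndarray, q_m3_per_s: np.ndarray, dt_s: float) -> float:
    """DOC flux (kg) over a record: flow-weighted mean concentration times
    total discharge volume. Concentration in mg/L (= g/m^3), discharge in
    m^3/s, samples spaced dt_s seconds apart."""
    c_fw = np.sum(conc_mg_per_l * q_m3_per_s) / np.sum(q_m3_per_s)  # g/m^3
    volume = np.sum(q_m3_per_s) * dt_s                              # m^3
    return c_fw * volume / 1000.0                                   # g -> kg

# Hypothetical weekly sampling over a year: if concentrations fall during the
# storms that dominate discharge, weekly samples overweight high-DOC baseflow,
# inflating c_fw and hence the flux estimate.
conc = np.full(52, 10.0)  # mg/L
q = np.full(52, 0.5)      # m^3/s
print(doc_flux_kg(conc, q, 7 * 24 * 3600))
```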
Abstract:
This article focuses on the final report of Lord Butler’s review of British intelligence on weapons of mass destruction (WMD), specifically on its treatment of the accuracy of the use of intelligence on Iraqi WMD in a government dossier published in September 2002, ahead of the 2003 Iraq war. In the report, the demonstration of the accuracy of the “September Dossier” hinges on the insertion of tables that compare side-by-side quotations from this document and from intelligence assessments. Analysis of the textual and visual methods by which the report is written reveals how the Butler report misses the logic of its own comparative tables: that logic requires the quotations from the two documents to be compared at the level of their details, yet the report performs its comparison only at a broad and general level.
Abstract:
Records of Atlantic basin tropical cyclones (TCs) since the late nineteenth century indicate a very large upward trend in storm frequency. This increase in documented TCs has been previously interpreted as resulting from anthropogenic climate change. However, improvements in observing and recording practices provide an alternative interpretation for these changes: recent studies suggest that the number of potentially missed TCs is sufficient to explain a large part of the recorded increase in TC counts. This study explores the influence of another factor—TC duration—on observed changes in TC frequency, using a widely used Atlantic hurricane database (HURDAT). It is found that the occurrence of short-lived storms (duration of 2 days or less) in the database has increased dramatically, from less than one per year in the late nineteenth–early twentieth century to about five per year since about 2000, while medium- to long-lived storms have increased little, if at all. Thus, the previously documented increase in total TC frequency since the late nineteenth century in the database is primarily due to an increase in very short-lived TCs. The authors also undertake a sampling study based upon the distribution of ship observations, which provides quantitative estimates of the frequency of missed TCs, focusing just on the medium- to long-lived systems with durations exceeding 2 days in the raw HURDAT. Upon adding the estimated numbers of missed TCs, the time series of medium- to long-lived Atlantic TCs show substantial multidecadal variability, but neither time series exhibits a significant trend since the late nineteenth century, with a nominal decrease in the adjusted time series. Thus, to understand the source of the century-scale increase in Atlantic TC counts in HURDAT, one must explain the relatively monotonic increase in very short-duration storms since the late nineteenth century. While it is possible that the recorded increase in short-duration TCs represents a real climate signal, the authors consider that it is more plausible that the increase arises primarily from improvements in the quantity and quality of observations, along with enhanced interpretation techniques. These have allowed National Hurricane Center forecasters to better monitor and detect initial TC formation, and thus incorporate increasing numbers of very short-lived systems into the TC database.
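To make the duration-stratified counting concrete: a short hedged sketch of how one might tally short-lived (2 days or less) versus longer-lived storms per year from a tidied track table (column names and rows are invented; HURDAT itself ships in a fixed-width format that would need parsing first):

```python
import pandas as pd

# Hypothetical tidy table: one row per storm with first/last fix times.
tracks = pd.DataFrame({
    "storm_id":  ["AL011900", "AL021900", "AL012005"],
    "first_fix": pd.to_datetime(["1900-08-01", "1900-09-10", "2005-06-08"]),
    "last_fix":  pd.to_datetime(["1900-08-02", "1900-09-20", "2005-06-09"]),
})
tracks["duration_days"] = (
    (tracks["last_fix"] - tracks["first_fix"]).dt.total_seconds() / 86400
)
tracks["year"] = tracks["first_fix"].dt.year

# Annual counts of short-lived (<= 2 days) vs longer-lived storms:
# the study's trend comparison is between series like these two.
short = tracks[tracks["duration_days"] <= 2].groupby("year").size()
longer = tracks[tracks["duration_days"] > 2].groupby("year").size()
print(short, longer, sep="\n")
```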