911 results for second medical use
Abstract:
A manageable, relatively inexpensive model was constructed to predict the loss of nitrogen and phosphorus from a complex catchment to its drainage system. The model used an export coefficient approach, calculating the total nitrogen (N) and total phosphorus (P) load delivered annually to a water body as the sum of the individual loads exported from each nutrient source in its catchment. The export coefficient modelling approach permits scaling up from plot-scale experiments to the catchment scale, allowing application of findings from field experimental studies at a suitable scale for catchment management. The catchment of the River Windrush, a tributary of the River Thames, UK, was selected as the initial study site. The Windrush model predicted nitrogen and phosphorus loading within 2% of observed total nitrogen load and 0.5% of observed total phosphorus load in 1989. The export coefficient modelling approach was then validated by application in a second research basin, the catchment of Slapton Ley, south Devon, which has markedly different catchment hydrology and land use. The Slapton model was calibrated within 2% of observed total nitrogen load and 2.5% of observed total phosphorus load in 1986. Both models proved sensitive to the impact of temporal changes in land use and management on water quality, and were therefore used to evaluate the potential impact of proposed pollution control strategies on the nutrient loading delivered to the River Windrush and Slapton Ley.
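The export coefficient calculation at the heart of the model can be sketched in a few lines. The coefficients and areas below are hypothetical illustration values, not those calibrated for the Windrush or Slapton catchments:

```python
# Hedged sketch of an export coefficient nutrient-loading model.
# All coefficients and areas are hypothetical illustration values,
# not those calibrated for the Windrush or Slapton catchments.

def total_load(sources):
    """Annual nutrient load (kg/yr) = sum over sources of
    export coefficient (kg/ha/yr) x source area (ha)."""
    return sum(coeff * area for coeff, area in sources)

# Hypothetical catchment: (export coefficient in kg N/ha/yr, area in ha)
nitrogen_sources = [
    (30.0, 1200.0),   # arable land
    (10.0, 800.0),    # permanent pasture
    (5.0, 400.0),     # woodland
]

annual_n_load = total_load(nitrogen_sources)  # kg N delivered per year
```

Scaling up is then a matter of enumerating every nutrient source in the catchment and summing its export; temporal land-use change is modelled by changing the coefficient/area pairs between years.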
Abstract:
Traditional vaccines, such as inactivated or live attenuated vaccines, are gradually giving way to more biochemically defined vaccines that are most often based on a recombinant antigen known to possess neutralizing epitopes. Such vaccines can offer improvements in speed, safety and manufacturing process, but an inevitable consequence of their high degree of purification is that immunogenicity is reduced through the lack of the innate triggering molecules present in more complex preparations. Targeting recombinant vaccines to antigen-presenting cells (APCs) such as dendritic cells, however, can improve immunogenicity by ensuring that antigen processing is as efficient as possible. Immune complexes, one of a number of routes of APC targeting, are mimicked by a recombinant approach, crystallizable fragment (Fc) fusion proteins, in which the target immunogen is linked directly to an antibody effector domain capable of interacting with receptors (FcR) on the APC cell surface. A number of virus Fc fusion proteins have been expressed in insect cells using the baculovirus expression system and shown to be efficiently produced and purified. Their use for immunization alongside non-Fc-tagged equivalents shows that they are powerfully immunogenic in the absence of added adjuvant and that immune stimulation is the result of the Fc-FcR interaction.
Abstract:
The Technology Acceptance Model (TAM) posits that Perceived Ease of Use (PEOU) and Perceived Usefulness (PU) influence the ‘intention to use’. The Post-Acceptance Model (PAM) posits that continued use is influenced by prior experience. In order to study the factors that influence how professionals use complex systems, we create a tentative research model that builds on PAM and TAM. Specifically, we include PEOU and the construct ‘Professional Association Guidance’. We postulate that feature usage is enhanced when professional associations influence PU by highlighting additional benefits. We explore the theory in the context of post-adoption use of Electronic Medical Records (EMRs) by primary care physicians in Ontario. The methodology can be extended to other professional environments, and we suggest directions for future research.
Abstract:
A favoured method of assimilating information from state-of-the-art climate models into integrated assessment models of climate impacts is to use the transient climate response (TCR) of the climate models as an input, sometimes accompanied by a pattern-matching approach to provide spatial information. More recent approaches to the problem use TCR with another independent piece of climate model output: the land-sea surface warming ratio (φ). In this paper we show why the use of φ in addition to TCR has such utility. Multiple linear regressions of surface temperature change onto TCR and φ in 22 climate models from the CMIP3 multi-model database show that the inclusion of φ explains a much greater fraction of the inter-model variance than using TCR alone. The improvement is particularly pronounced in North America and Eurasia in the boreal summer season, and in the Amazon all year round. The use of φ as the second metric is beneficial for three reasons: firstly, it is uncorrelated with TCR in state-of-the-art climate models and can therefore be considered an independent metric; secondly, because of its projected time-invariance, the magnitude of φ is better constrained than TCR in the immediate future; thirdly, the use of two variables is much simpler than approaches such as pattern scaling from climate models. Finally, we show how using the latest estimates of φ from climate models, with a mean value of 1.6 (as opposed to previously reported values of 1.4), can significantly increase the mean time-integrated discounted damage projections in a state-of-the-art integrated assessment model by about 15%. When compared to damages calculated without the inclusion of the land-sea warming ratio, this figure rises to 65%, equivalent to almost 200 trillion dollars over 200 years.
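The regression step described above, surface temperature change regressed onto TCR and φ across an ensemble of models, can be illustrated with a minimal sketch. The "ensemble" below is synthetic data generated from known coefficients, not CMIP3 output; it only demonstrates the mechanics of a two-predictor ordinary least squares fit:

```python
# Hedged sketch of the paper's regression idea: regress each model's
# regional warming onto its TCR and land-sea warming ratio (phi).
# The rows below are synthetic, generated from known coefficients.

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def regress(tcr, phi, dT):
    """OLS fit of dT ~ a + b*TCR + c*phi via the normal equations X'X beta = X'y."""
    X = [[1.0, t, p] for t, p in zip(tcr, phi)]
    XtX = [[sum(row[r] * row[c] for row in X) for c in range(3)] for r in range(3)]
    Xty = [sum(row[r] * y for row, y in zip(X, dT)) for r in range(3)]
    return solve3(XtX, Xty)

# Synthetic "ensemble" built from known coefficients (a=0.2, b=1.1, c=0.8)
tcr = [1.4, 1.6, 1.8, 2.0, 2.2, 2.6]
phi = [1.3, 1.5, 1.6, 1.4, 1.7, 1.6]
dT = [0.2 + 1.1 * t + 0.8 * p for t, p in zip(tcr, phi)]
a, b, c = regress(tcr, phi, dT)
```

With noise-free synthetic data the fit recovers the generating coefficients exactly; the paper's point is that, on real model output, adding φ as a second predictor raises the explained inter-model variance well above that of TCR alone.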
Abstract:
The increasing use of drug combinations to treat disease states, such as cancer, calls for improved delivery systems that are able to deliver multiple agents. Herein, we report a series of novel Janus dendrimers with potential for use in combination therapy. Different generations (first and second) of PEG-based dendrons containing two different “model drugs”, benzyl alcohol (BA) and 3-phenylpropionic acid (PPA), were synthesized. BA and PPA were attached via two different linkers (carbonate and ester, respectively) to promote differential drug release. The four dendrons were coupled together via (3 + 2) cycloaddition chemistries to afford four Janus dendrimers, which contained varying amounts and different ratios of BA and PPA, namely, (BA)2-G1-G1-(PPA)2, (BA)4-G2-G1-(PPA)2, (BA)2-G1-G2-(PPA)4, and (BA)4-G2-G2-(PPA)4. Release studies in plasma showed that the dendrimers provided sequential release of the two model drugs, with BA being released faster than PPA from all of the dendrons. The different dendrimers allowed delivery of increasing amounts (0.15–0.30 mM) and in exact molecular ratios (1:1; 2:1; 1:2; 2:2) of the two model drug compounds. The dendrimers were noncytotoxic (100% viability at 1 mg/mL) toward human umbilical vein endothelial cells (HUVEC) and nontoxic toward red blood cells, as confirmed by hemolysis studies. These studies demonstrate that these Janus PEG-based dendrimers offer great potential for the delivery of drugs via combination therapy.
Abstract:
This research presents a novel multi-functional system for medical Imaging-enabled Assistive Diagnosis (IAD). Although the IAD demonstrator has focused on abdominal images and supports the clinical diagnosis of kidneys using CT/MRI imaging, it can be adapted to work on image delineation, annotation and 3D real-size volumetric modelling of other organ structures such as the brain, spine, etc. The IAD provides advanced real-time 3D visualisation and measurements with fully automated functionalities, as developed in two stages. In the first stage, via the clinically driven user interface, specialist clinicians use CT/MRI imaging datasets to accurately delineate and annotate the kidneys and their possible abnormalities, thus creating “3D Golden Standard Models”. Based on these models, in the second stage, clinical support staff, i.e. medical technicians, interactively define model-based rules and parameters for the integrated “Automatic Recognition Framework” to achieve results which are closest to those of the clinicians. These specific rules and parameters are stored in “Templates” and can later be used by any clinician to automatically identify organ structures, i.e. kidneys, and their possible abnormalities. The system also supports the transmission of these “Templates” to another expert for a second opinion. A 3D model of the body, the organs and their possible pathology with real metrics is also integrated. The automatic functionality was tested on eleven MRI datasets (comprising 286 images) and the 3D models were validated by comparing them with the metrics from the corresponding “3D Golden Standard Models”. The system provides metrics for the evaluation of the results, in terms of Accuracy, Precision, Sensitivity, Specificity and Dice Similarity Coefficient (DSC), so as to enable benchmarking of its performance.
The first IAD prototype has produced promising results: its performance based on the most widely deployed evaluation metric, DSC, yields 97% for the recognition of kidneys and 96% for their abnormalities, whilst across all the above evaluation metrics its performance ranges between 96% and 100%. Further development of the IAD system is in progress to extend and evaluate its clinical diagnostic support capability through the development and integration of additional algorithms to offer fully computer-aided identification of other organs and their abnormalities based on CT/MRI/Ultrasound imaging.
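The evaluation metrics named above (Accuracy, Precision, Sensitivity, Specificity, DSC) are all derived from a voxel-level confusion matrix between a segmentation and its reference mask. This is a hedged sketch over toy 1-D masks, not the IAD implementation:

```python
# Hedged sketch of the reported evaluation metrics, computed from the
# true/false positives/negatives of a hypothetical segmentation versus
# its "3D Golden Standard" mask. Masks here are toy 1-D binary lists
# standing in for flattened image voxels.

def confusion(pred, truth):
    tp = sum(p and t for p, t in zip(pred, truth))          # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))      # false positives
    fn = sum(not p and t for p, t in zip(pred, truth))      # false negatives
    tn = sum(not p and not t for p, t in zip(pred, truth))  # true negatives
    return tp, fp, fn, tn

def metrics(pred, truth):
    tp, fp, fn, tn = confusion(pred, truth)
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "precision":   tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "dsc":         2 * tp / (2 * tp + fp + fn),  # Dice Similarity Coefficient
    }

truth = [1, 1, 1, 1, 0, 0, 0, 0]  # reference mask
pred  = [1, 1, 1, 0, 1, 0, 0, 0]  # hypothetical automatic segmentation
m = metrics(pred, truth)
```

DSC equals 2|A∩B| / (|A| + |B|), so a DSC of 97% means the automatic kidney delineation and the golden-standard mask overlap almost completely.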
Abstract:
Background: Since their inception, Twitter and related microblogging systems have provided a rich source of information for researchers and have attracted interest in their affordances and use. Since 2009 PubMed has included 123 journal articles on medicine and Twitter, but no overview exists as to how the field uses Twitter in research. // Objective: This paper aims to identify published work relating to Twitter indexed by PubMed, and then to classify it. This classification will provide a framework in which future researchers will be able to position their work, and provide an understanding of the current reach of research using Twitter in medical disciplines. Limiting the study to papers indexed by PubMed ensures the work provides a reproducible benchmark. // Methods: Papers indexed by PubMed on Twitter and related topics were identified and reviewed. The papers were then qualitatively classified based on their titles and abstracts to determine their focus. The work that was Twitter-focused was studied in detail to determine what data, if any, it was based on, and from this a categorization of the data set sizes used in the studies was developed. Using open-coded content analysis, additional important categories were also identified, relating to the primary methodology, domain and aspect. // Results: As of 2012, PubMed comprises more than 21 million citations from biomedical literature, and from these a corpus of 134 potentially Twitter-related papers was identified, eleven of which were subsequently found not to be relevant. There were no papers prior to 2009 relating to microblogging, a term first used in 2006. Of the remaining 123 papers which mentioned Twitter, thirty were focused on Twitter (the others referring to it tangentially). The early Twitter-focused papers introduced the topic and highlighted the potential, without carrying out any form of data analysis.
The majority of published papers used analytic techniques to sort through thousands, if not millions, of individual tweets, often depending on automated tools to do so. Our analysis demonstrates that researchers are starting to use knowledge discovery methods and data mining techniques to understand vast quantities of tweets: the study of Twitter is becoming quantitative research. // Conclusions: This work is, to the best of our knowledge, the first overview study of medical-related research based on Twitter and related microblogging. We have used five dimensions to categorise published medical-related research on Twitter. This classification provides a framework within which researchers studying the development and use of Twitter within medical-related research, and those undertaking comparative studies of research relating to Twitter in the area of medicine and beyond, can position and ground their work.
Abstract:
Clinical pathways have been adopted for various diseases in clinical departments for quality improvement, as a result of the standardization of medical activities in the treatment process. Using knowledge-based decision support on the basis of clinical pathways is a promising strategy for improving medical quality effectively. However, clinical pathway knowledge has not been fully integrated into the treatment process and thus cannot provide comprehensive support to actual work practice. This paper therefore proposes a knowledge-based clinical pathway management method that makes use of clinical knowledge to support and optimize medical practice. We have developed a knowledge-based clinical pathway management system to demonstrate how clinical pathway knowledge can comprehensively support the treatment process. Experience from the use of this system shows that treatment quality can be effectively improved by the extracted and classified clinical pathway knowledge, the seamless integration of patient-specific clinical pathway recommendations with medical tasks, and the evaluation of pathway deviations for optimization.
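One element of the approach above, checking a patient's executed activities against a pathway template to surface deviations, can be sketched minimally. The template contents and data structures below are invented for illustration and are not those of the described system:

```python
# Hedged sketch of pathway-deviation evaluation: compare the activities
# actually performed for a patient against a clinical-pathway "template".
# All pathway steps and activity names here are hypothetical.

PATHWAY_TEMPLATE = {
    "day1": {"admission assessment", "baseline blood panel"},
    "day2": {"imaging", "specialist review"},
}

def pathway_deviations(template, executed):
    """Return, per pathway step, the missing and extra activities
    relative to the template."""
    report = {}
    for step, expected in template.items():
        done = executed.get(step, set())
        report[step] = {
            "missing": sorted(expected - done),  # planned but not performed
            "extra": sorted(done - expected),    # performed but not planned
        }
    return report

executed = {
    "day1": {"admission assessment"},
    "day2": {"imaging", "specialist review", "unplanned transfusion"},
}
report = pathway_deviations(PATHWAY_TEMPLATE, executed)
```

In a full system such a report would feed the optimization loop the abstract describes: recurring deviations signal that either practice or the pathway template itself needs revision.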
Abstract:
The main goal of all approaches to adult second language acquisition (SLA) is to accurately describe and explain the overall acquisition process. To accomplish this, SLA researchers must come to agree on some key issues. In this commentary, I defend the necessity of the competence/performance distinction and discuss how it relates to why an examination of morphological production presents challenges for SLA research. I suggest that such a methodology is meaningful only when it is dovetailed with procedures that test for related syntactic/semantic knowledge.
Abstract:
The experience of learning and using a second language (L2) has been shown to affect the grey matter (GM) structure of the brain. Importantly, GM density in several cortical and subcortical areas has been shown to be related to performance in L2 tasks. Here we show that bilingualism can lead to increased GM volume in the cerebellum, a structure that has been related to the processing of grammatical rules. Additionally, the cerebellar GM volume of highly proficient L2 speakers is correlated with their performance in a task tapping grammatical processing in an L2, demonstrating the importance of the cerebellum for the establishment and use of grammatical rules in an L2.
Abstract:
This review summarises the history of transgenic (GM) cereals, principally maize, and then focuses on the scientific literature published in the last two years. It describes the production of GM cereals with modified traits, divided into input traits and output traits. The first category includes herbicide tolerance, insect resistance, and resistance to abiotic and biotic stresses; the second includes altered grains for starch, protein or nutrient quality, the use of cereals for the production of high-value medical or other products, and the generation of plants with improved efficiency of biofuel production. Using data from field trial and patent databases, the review considers the diversity of GM lines being tested for possible future development. It also summarises the dichotomy of response to GM products in various countries, describes the basis for the varied public acceptability of such products, and assesses the development of novel breeding techniques in the light of current GM regulatory procedures.
Abstract:
As in any field of scientific inquiry, advancements in the field of second language acquisition (SLA) rely in part on the interpretation and generalizability of study findings using quantitative data analysis and inferential statistics. While statistical techniques such as ANOVA and t-tests are widely used in second language research, this article reviews a class of newer statistical models that have not yet been widely adopted in the field but have garnered interest in other fields of language research. The class of statistical models called mixed-effects models is introduced, and the potential benefits of these models for the second language researcher are discussed. A simple example of mixed-effects data analysis using the statistical software package R (R Development Core Team, 2011) is provided as an introduction to the use of these statistical techniques, and to exemplify how such analyses can be reported in research articles. It is concluded that mixed-effects models provide the second language researcher with a powerful tool for the analysis of a variety of types of second language acquisition data.
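The abstract's worked example uses R; as a language-neutral illustration of what a random-intercept mixed-effects model captures, the following Python sketch simulates learners with subject-level random intercepts and recovers the two variance components by the classic one-way ANOVA method of moments (a simplification of what mixed-effects software estimates by maximum likelihood). All data and parameters are synthetic:

```python
# Hedged sketch: simulate a balanced design of learners ("subjects"), each
# with a random intercept, responding to many items; then estimate the
# between-subject and residual variance components by method of moments.

import random
import statistics

random.seed(42)

n_subjects, n_items = 60, 40
sigma_subject, sigma_resid = 1.0, 2.0   # true random-intercept and residual SDs

data = []
for s in range(n_subjects):
    u = random.gauss(0, sigma_subject)  # subject-level random intercept
    data.append([5.0 + u + random.gauss(0, sigma_resid) for _ in range(n_items)])

subject_means = [statistics.fmean(row) for row in data]
grand_mean = statistics.fmean(subject_means)

# Between-subject and within-subject mean squares (balanced design)
ms_between = n_items * sum((m - grand_mean) ** 2 for m in subject_means) / (n_subjects - 1)
ms_within = sum((y - m) ** 2
                for row, m in zip(data, subject_means)
                for y in row) / (n_subjects * (n_items - 1))

var_resid = ms_within                              # estimates sigma_resid ** 2
var_subject = (ms_between - ms_within) / n_items   # estimates sigma_subject ** 2
```

Separating the variance attributable to subjects from residual noise is exactly the benefit the article attributes to mixed-effects models: unlike a plain ANOVA over aggregated means, it respects the grouped structure of subject-by-item SLA data.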
Abstract:
Learning to talk about motion in a second language is very difficult because it involves restructuring deeply entrenched patterns from the first language (Slobin 1996). In this paper we argue that statistical learning (Saffran et al. 1997) can explain why L2 learners are only partially successful in restructuring their second language grammars. We explore to what extent L2 learners make use of two mechanisms of statistical learning, entrenchment and pre-emption (Boyd and Goldberg 2011) to acquire target-like expressions of motion and retreat from overgeneralisation in this domain. Paying attention to the frequency of existing patterns in the input can help learners to adjust the frequency with which they use path and manner verbs in French but is insufficient to acquire the boundary crossing constraint (Slobin and Hoiting 1994) and learn what not to say. We also look at the role of language proficiency and exposure to French in explaining the findings.
Abstract:
Ethnobotanical relevance: Cancer patients commonly use traditional medicines (TM), and in Thailand these are popular both for self-medication and as prescribed by TM practitioners, yet are rarely monitored. A study was conducted at Wat Khampramong, a Thai Buddhist temple herbal medicine hospice, to document some of these practices as well as the hospice regime. Materials and methods: Cancer patients (n=286) were surveyed shortly after admission as to which TMs they had previously taken and their perceptions of the effects experienced. They were also asked to describe their current symptoms. Treatment at the hospice is built upon an 11-herb anti-cancer formula, yod-ya-mareng, prescribed for all patients, and ideally its effects would have been evaluated. However, other herbal medicines and holistic practices are integral to the regime, so instead we attempted to assess the value of the patients' stay at the hospice by measuring any change in symptom burden as they perceived it. Surviving patients (n=270) were therefore asked to describe their symptoms again just before leaving. Results: 42% of patients (120/286; 95% CI 36.4%, 47.8%) had used herbal medicines before their arrival, with 31.7% (38/120; 95% CI 24%, 40.4%) using several at once. Mixed effects were reported for these products. After taking the herbal regime at Khampramong, 77% (208/270; 95% CI 71.7%, 81.7%) reported benefit, and a comparison of the incidence of the most common symptoms (pain, dyspepsia, abdominal or visceral pain, insomnia, fatigue) showed statistical significance (χ² = 57.1, df = 7, p < 0.001). Conclusions: A wide range of TMs is taken by cancer patients in Thailand and considered to provide more benefit than harm, and this perception extends to the temple regime. Patients reported a significant reduction in symptoms after staying at Khampramong, indicating an improvement in quality of life, the aim of hospices everywhere.
Based on this evidence, it is not possible to justify the use of TM for cancer in general, but this study suggests that further research is warranted. The uncontrolled use of TMs, many of which are uncharacterised, raises concerns, and this work also highlights the fact that validated, robust methods of assessing holistic medical regimes are urgently needed.
Abstract:
This paper presents an approximate closed-form sample size formula for determining non-inferiority in active-control trials with binary data. We use the odds ratio as the measure of the relative treatment effect, derive the sample size formula based on the score test and compare it with a second, well-known formula based on the Wald test. Both closed-form formulae are compared with simulations based on the likelihood ratio test. Within the range of parameter values investigated, the score test closed-form formula is reasonably accurate when non-inferiority margins are based on odds ratios of about 0.5 or above and when the magnitude of the odds ratio under the alternative hypothesis lies between about 1 and 2.5. The accuracy generally decreases as the odds ratio under the alternative hypothesis moves upwards from 1. As the non-inferiority margin odds ratio decreases from 0.5, the score test closed-form formula increasingly overestimates the sample size, irrespective of the magnitude of the odds ratio under the alternative hypothesis. The Wald test closed-form formula is also reasonably accurate in the cases where the score test closed-form formula works well. Outside these scenarios, the Wald test closed-form formula can either underestimate or overestimate the sample size, depending on the magnitude of the non-inferiority margin odds ratio and the odds ratio under the alternative hypothesis. Although neither approximation is accurate for all cases, both approaches lead to satisfactory sample size calculations for non-inferiority trials with binary data where the odds ratio is the parameter of interest.
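For orientation, the Wald-test closed-form approach discussed above can be sketched as follows. This uses the standard large-sample variance approximation for the log odds ratio; it is an illustration of the general approach, not the paper's exact formulae:

```python
# Hedged sketch of a Wald-type closed-form sample size for non-inferiority
# on the odds-ratio scale, based on the usual large-sample variance of the
# log odds ratio. Parameter names and defaults are illustrative choices.

import math
from statistics import NormalDist

def n_per_group(p_control, or_alt, or_margin, alpha=0.025, power=0.9):
    """Sample size per arm to demonstrate log(OR) > log(or_margin) when the
    true odds ratio is or_alt and the control response rate is p_control."""
    # Treatment response rate implied by the alternative-hypothesis odds ratio
    odds_trt = or_alt * p_control / (1 - p_control)
    p_trt = odds_trt / (1 + odds_trt)
    z_a = NormalDist().inv_cdf(1 - alpha)  # one-sided significance level
    z_b = NormalDist().inv_cdf(power)
    # Per-subject contribution to Var(log OR-hat) under the alternative
    var1 = 1 / (p_control * (1 - p_control)) + 1 / (p_trt * (1 - p_trt))
    delta = math.log(or_alt) - math.log(or_margin)
    return math.ceil((z_a + z_b) ** 2 * var1 / delta ** 2)

# E.g. control rate 0.6, true OR of 1, non-inferiority margin OR of 0.5
n = n_per_group(0.6, 1.0, 0.5)
```

Consistent with the abstract, the required size grows rapidly as the margin odds ratio approaches 1 (delta shrinks) and depends on the alternative-hypothesis odds ratio through both delta and the variance term.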