839 results for PHARMACY-BASED MEASURES
Abstract:
Ready-to-administer cytostatic preparations are today compounded under the responsibility of a pharmacist in centralized preparation units. Because the prescribing of chemotherapy carries a high risk of error, consistent prescription monitoring is an essential sub-process of centralized cytostatic preparation.
The current implementation and results of prescription monitoring at German university hospitals were recorded in a prospective survey as part of this work. Dose-calculation errors (48%) were the most frequently reported prescribing errors and were judged to be of high relevance (78%) for patient safety. The incidence of prescribing errors averaged 0.77% for roughly 1,950 prescriptions per day. Consistent prescription monitoring by pharmacists is highly efficient and makes a major contribution to patient safety and medication safety in oncology.
Compounding ready-to-administer cytostatic preparations requires sound knowledge of their physicochemical stability. For newly approved cytostatics, and biologicals in particular, data on the stability of the ready-to-administer solutions are often not yet available. Determining their physicochemical stability was therefore a further subject of this work. The ready-to-administer infusion solutions of the purine analogues nelarabine and clofarabine (RP-HPLC) and of the monoclonal antibody trastuzumab (SEC, UV spectroscopy, SDS-PAGE) proved stable over a period of at least 28 days. The stability of two camptothecin derivatives (topotecan and irinotecan) loaded onto DC Beads™, as well as their loading capacity and compatibility with contrast media, was also demonstrated.
Abstract:
Area-based measures of socioeconomic position (SEP) suitable for epidemiological research are lacking in Switzerland. The authors developed the Swiss neighbourhood index of SEP (Swiss-SEP).
Abstract:
Data from 50 residents of a long-term care facility were used to examine the extent to which performance on a brief, objective inventory could predict a clinical psychologist's evaluation of competence to participate in decisions about medical care. Results indicate that, for two-thirds of the residents, competence to participate in medical decisions could be accurately assessed using scores on a mental status instrument and two vignette-based measures of medical decision-making. These procedures could enable nursing home staff to objectively assess the competence of residents to participate in important decisions about their medical care.
Abstract:
When it comes to platform sustainability, mitigating user privacy concerns and enhancing trust are two major tasks that providers of Social Networking Sites (SNSs) face today. State-of-the-art research advocates reliance on justice-based measures as a possible means to address these challenges. However, as providers increasingly expand into foreign markets, the effectiveness of these measures in a cross-cultural setting remains in question. To address this set of issues, in this study we build on an existing model to examine the impact of culture on the robustness of four justice-based means of mitigating privacy concerns and ensuring trust. Survey responses from German and Russian SNS members are used to evaluate two structural equation models, which are then compared. We find that perceptions regarding Procedural and Informational Justice are universally important and hence should be addressed as part of the SNS provider's basic strategy. When expanding to collectivistic countries such as Russia, measures enhancing perceptions of Distributive and Interpersonal Justice can additionally be applied. Beyond its practical implications, our study makes a significant contribution to the theoretical discourse on the role of culture in determining individual perceptions and behavior.
Abstract:
Biodiversity, a multidimensional property of natural systems, is difficult to quantify partly because of the multitude of indices proposed for this purpose. Indices aim to describe general properties of communities that allow us to compare different regions, taxa, and trophic levels. Therefore, they are of fundamental importance for environmental monitoring and conservation, although there is no consensus about which indices are more appropriate and informative. We tested several common diversity indices in a range of simple to complex statistical analyses in order to determine whether some were better suited for certain analyses than others. We used data collected around the focal plant Plantago lanceolata on 60 temperate grassland plots embedded in an agricultural landscape to explore relationships between the common diversity indices of species richness (S), Shannon's diversity (H'), Simpson's diversity (D1), Simpson's dominance (D2), Simpson's evenness (E), and Berger-Parker dominance (BP). We calculated each of these indices for herbaceous plants, arbuscular mycorrhizal fungi, aboveground arthropods, belowground insect larvae, and P. lanceolata molecular and chemical diversity. Including these trait-based measures of diversity allowed us to test whether or not they behaved similarly to the better-studied species diversity. We used path analysis to determine whether compound indices detected more relationships between the diversities of different organisms and traits than more basic indices. In the path models, more paths were significant when using H', even though all models except that with E were equally reliable. This demonstrates that, while common diversity indices may appear interchangeable in simple analyses, the choice of index can profoundly alter the interpretation of results when complex interactions are considered. Data mining in order to identify the index producing the most significant results should be avoided, but simultaneously considering analyses using multiple indices can provide greater insight into the interactions in a system.
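For orientation, all of these indices derive from a vector of (relative) species abundances. A minimal Python sketch is given below, using one common set of conventions (dominance as the sum of squared relative abundances, diversity as its inverse); the paper's exact definitions of D1, D2, and E may differ in detail.

```python
import numpy as np

def diversity_indices(abundances):
    """Common diversity indices from a vector of species abundances
    (one conventional set of definitions; illustrative only)."""
    n = np.asarray(abundances, dtype=float)
    n = n[n > 0]                       # drop absent species
    p = n / n.sum()                    # relative abundances

    S = len(n)                         # species richness
    H = -np.sum(p * np.log(p))         # Shannon's diversity H'
    D2 = np.sum(p ** 2)                # Simpson's dominance (sum of p_i^2)
    D1 = 1.0 / D2                      # Simpson's diversity (inverse Simpson)
    E = D1 / S                         # Simpson's evenness
    BP = p.max()                       # Berger-Parker dominance
    return {"S": S, "H'": H, "D1": D1, "D2": D2, "E": E, "BP": BP}

# Example: six species with strongly uneven abundances.
print(diversity_indices([50, 30, 10, 5, 3, 2]))
```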
Abstract:
Background: Disturbed interpersonal communication is a core problem in schizophrenia. Patients with schizophrenia often appear disconnected and "out of sync" when interacting with others. This may involve perception, cognition, motor behavior, and nonverbal expressiveness. Although these problems are well known from clinical observation, mainstream research has neglected this area, and corresponding theoretical concepts, statistical methods, and assessment tools were missing. Recent research, however, has shown that objective, video-based measures can be used to reliably quantify nonverbal behavior in schizophrenia. Newly developed algorithms allow for the calculation of movement synchrony. We found that the objective amount of movement of patients with schizophrenia during social interactions was closely related to the symptom profiles of these patients (Kupper et al., 2010). In addition to and above the mere amount of movement, the degree of synchrony between patients and healthy interactants may be indicative of various problems in the domain of interpersonal communication and social cognition. Methods: Based on our earlier study, head movement synchrony was assessed objectively (using Motion Energy Analysis, MEA) in 378 brief, videotaped role-play scenes involving 27 stabilized outpatients diagnosed with paranoid-type schizophrenia. Results: Lower head movement synchrony was indicative of symptoms (negative symptoms, but also conceptual disorganization and lack of insight) and was related to verbal memory, patients' self-evaluation of competence, and social functioning. Many of these relationships remained significant even when corrected for the amount of movement of the patients. Conclusion: The results suggest that nonverbal synchrony may be an objective and sensitive indicator of the severity of symptoms, cognition, and social functioning.
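The abstract does not spell out how synchrony is derived from MEA output. A rough, hypothetical sketch of the general idea (windowed, lagged cross-correlation between two motion-energy time series) is shown below; the window size, lag range, and function names are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def movement_synchrony(me_a, me_b, window=125, max_lag=25):
    """Rough synchrony estimate between two motion-energy time series:
    mean absolute Pearson correlation over sliding windows and lags.
    (Illustrative only; window and lag sizes are arbitrary choices.)"""
    me_a = np.asarray(me_a, dtype=float)
    me_b = np.asarray(me_b, dtype=float)
    corrs = []
    for start in range(0, len(me_a) - window - 2 * max_lag, window):
        a = me_a[start + max_lag: start + max_lag + window]
        for lag in range(-max_lag, max_lag + 1):
            b = me_b[start + max_lag + lag: start + max_lag + lag + window]
            if a.std() > 0 and b.std() > 0:
                corrs.append(abs(np.corrcoef(a, b)[0, 1]))
    return float(np.mean(corrs)) if corrs else float("nan")

# Two hypothetical motion-energy traces (e.g. 25 frames/s over one minute).
rng = np.random.default_rng(0)
patient = rng.random(1500)
interactant = 0.5 * patient + 0.5 * rng.random(1500)
print(movement_synchrony(patient, interactant))
```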
Abstract:
BACKGROUND Estimating the prevalence of comorbidities and their associated costs in patients with diabetes is fundamental to optimizing health care management. This study assesses the prevalence and health care costs of comorbid conditions among patients with diabetes compared with patients without diabetes. Distinguishing potentially diabetes- and nondiabetes-related comorbidities in patients with diabetes, we also determined the most frequent chronic conditions and estimated their effect on costs across different health care settings in Switzerland. METHODS Using health care claims data from 2011, we calculated the prevalence and average health care costs of comorbidities among patients with and without diabetes in inpatient and outpatient settings. Patients with diabetes and comorbid conditions were identified using pharmacy-based cost groups. Generalized linear models with a negative binomial distribution were used to analyze the effect of comorbidities on health care costs. RESULTS A total of 932,612 persons, including 50,751 patients with diabetes, were enrolled. The most frequent potentially diabetes- and nondiabetes-related comorbidities in patients older than 64 years were cardiovascular diseases (91%), rheumatologic conditions (55%), and hyperlipidemia (53%). The mean total health care costs for diabetes patients varied substantially by comorbidity status (US$3,203-$14,223). Patients with diabetes and more than two comorbidities incurred US$10,584 higher total costs than patients without comorbidity. Costs were significantly higher in patients with diabetes and comorbid cardiovascular disease (US$4,788), hyperlipidemia (US$2,163), hyperacidity disorders (US$8,753), and pain (US$8,324) compared with those without the given disease. CONCLUSION Comorbidities in patients with diabetes are highly prevalent and have substantial consequences for medical expenditures. Interestingly, hyperacidity disorders and pain were the most costly conditions. Our findings highlight the importance of developing strategies that meet the needs of patients with diabetes and comorbidities. Integrated diabetes care, such as that used in the Chronic Care Model, may represent a useful strategy.
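The cost analysis relies on generalized linear models with a negative binomial distribution. A minimal, hypothetical sketch of such a model in Python with statsmodels is shown below; the data frame and column names are invented for illustration and are not taken from the study's claims data.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical claims extract: annual cost per patient plus comorbidity indicators.
df = pd.DataFrame({
    "cost":       [2100, 5400, 12900, 800, 15200, 3300, 9800, 1500],
    "diabetes":   [1, 1, 1, 0, 1, 0, 1, 0],
    "n_comorbid": [0, 2, 3, 0, 4, 1, 2, 0],
    "age":        [61, 72, 68, 55, 80, 66, 74, 59],
})

# Negative binomial GLM with a log link: coefficients are interpretable as
# multiplicative changes in expected annual cost.
model = smf.glm(
    "cost ~ diabetes + n_comorbid + age",
    data=df,
    family=sm.families.NegativeBinomial(),
).fit()
print(model.summary())
```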
Abstract:
Background Disordered interpersonal communication can be a serious problem in schizophrenia. Recent advances in computer-based measures allow reliable and objective quantification of nonverbal behavior. Research using these novel measures has shown that objective amounts of body and head movement in patients with schizophrenia during social interactions are closely related to the symptom profiles of these patients. In addition to and above mere amounts of movement, the degree of synchrony, or imitation, between patients and normal interactants may be indicative of core deficits underlying various problems in domains related to interpersonal communication, such as symptoms, social competence, and social functioning. Methods Nonverbal synchrony was assessed objectively using Motion Energy Analysis (MEA) in 378 brief, videotaped role-play scenes involving 27 stabilized outpatients diagnosed with paranoid-type schizophrenia. Results Low nonverbal synchrony was indicative of symptoms, low social competence, impaired social functioning, and low self-evaluation of competence. These relationships remained largely significant when correcting for the amounts of patients’ movement. When patients showed reduced imitation of their interactants’ movements, negative symptoms were likely to be prominent. Conversely, positive symptoms were more prominent in patients when their interaction partners’ imitation of their movements was reduced. Conclusions Nonverbal synchrony can be an objective and sensitive indicator of the severity of patients’ problems. Furthermore, quantitative analysis of nonverbal synchrony may provide novel insights into specific relationships between symptoms, cognition, and core communicative problems in schizophrenia.
Abstract:
Tonal, textural and contextual properties are used in manual photointerpretation of remotely sensed data. This study used these three attributes to produce a lithological map of semi-arid northwest Argentina by semi-automatic computer classification of remotely sensed data. Three different types of satellite data were investigated: LANDSAT MSS, TM and SIR-A imagery. Supervised classification using tonal features only produced poor results. LANDSAT MSS produced classification accuracies in the range of 40 to 60%, while accuracies of 50 to 70% were achieved using LANDSAT TM data. The addition of SIR-A data increased the classification accuracy. The higher classification accuracy of TM over MSS is due to the better discrimination of geological materials afforded by the middle infrared bands of the TM sensor. The maximum likelihood classifier consistently produced classification accuracies 10 to 15% higher than either the minimum distance to means or the decision tree classifier; this improved accuracy was obtained at the cost of greatly increased processing time. A new type of classifier, the spectral shape classifier, which is computationally as fast as the minimum distance to means classifier, is described. However, the results for this classifier were disappointing, being lower in most cases than those of the minimum distance or decision tree procedures. The classification results using only tonal features were felt to be unacceptably poor, so textural attributes were investigated. Texture is an important attribute used by photogeologists to discriminate lithology. In the case of TM data, texture measures were found to increase the classification accuracy by up to 15%. However, in the case of the LANDSAT MSS data, the use of texture measures did not provide any significant increase in classification accuracy. For TM data, it was found that second-order texture, especially the SGLDM-based measures, produced the highest classification accuracy. Contextual post-processing was found to increase classification accuracy and improve the visual appearance of the classified output by removing isolated misclassified pixels, which tend to clutter classified images. Simple contextual features, such as mode filters, were found to outperform more complex features such as gravitational filters or minimal area replacement methods. Generally, the larger the filter, the greater the increase in accuracy. Production rules were used to build a knowledge-based system that used tonal and textural features to identify sedimentary lithologies in each of the two test sites. The knowledge-based system was able to identify six out of ten lithologies correctly.
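Second-order (SGLDM) texture features are derived from a grey-level co-occurrence matrix. A small illustrative Python sketch follows, computing a co-occurrence matrix and two commonly derived measures (contrast and homogeneity) for a single band window; the quantisation level and offset are arbitrary choices, not those used in the study.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Second-order (SGLDM/GLCM) texture: build a co-occurrence matrix for
    the offset (dy, dx), then derive contrast and homogeneity from it."""
    img = np.asarray(img, dtype=float)
    q = np.floor(img / img.max() * (levels - 1)).astype(int)   # quantise grey levels
    glcm = np.zeros((levels, levels), dtype=float)
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    glcm /= glcm.sum()                          # normalise to joint probabilities
    i, j = np.indices(glcm.shape)
    contrast = np.sum(glcm * (i - j) ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
    return contrast, homogeneity

# Stand-in for a window from one TM band.
band = np.random.randint(0, 255, size=(64, 64))
print(glcm_features(band))
```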
Abstract:
Research Question/Issue: In this paper, we empirically investigate whether US listed commercial banks with effective corporate governance structures engage in higher levels of conservative financial accounting and reporting. Research Findings/Insights: Using both market- and accrual-based measures of conservatism and both composite and disaggregated governance indices, we document convincing evidence that well-governed banks engage in significantly higher levels of conditional conservatism in their financial reporting practices. For example, we find that banks with effective governance structures, particularly those with effective board and audit governance structures, recognize loan loss provisions that are larger relative to changes in nonperforming loans compared to their counterparts with ineffective governance structures. Theoretical/Academic Implications: We contribute to the extant literature on the relationship between corporate governance and quality of accounting information by providing evidence that banks with effective governance structures practice higher levels of accounting conservatism. Practitioner/Policy Implications: The findings of this study would be useful to US bank regulators/supervisors in improving the existing regulatory framework by focusing on accounting conservatism as a complement to corporate governance in mitigating the opaqueness and intense information asymmetry that plague banks.
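The loan-loss-provision test described above is commonly specified as an interaction regression of provisions on changes in nonperforming loans and a governance indicator. A hypothetical sketch with simulated data is shown below; the variable names, scaling, and coefficients are illustrative assumptions, not the paper's specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# Simulated bank-year observations: governance indicator, change in
# nonperforming loans, and loan loss provisions (both scaled by total loans).
gov = rng.integers(0, 2, n)                     # 1 = effective governance structure
d_npl = rng.normal(0.0, 0.02, n)
llp = 0.001 + 0.3 * d_npl + 0.2 * gov * d_npl + rng.normal(0, 0.005, n)
df = pd.DataFrame({"llp": llp, "d_npl": d_npl, "gov": gov})

# A positive coefficient on d_npl:gov would indicate that well-governed banks
# recognise larger provisions per unit change in nonperforming loans, i.e.
# more timely (conservative) loss recognition.
print(smf.ols("llp ~ d_npl * gov", data=df).fit().summary())
```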
Abstract:
Guest editorial Ali Emrouznejad is a Senior Lecturer at the Aston Business School in Birmingham, UK. His areas of research interest include performance measurement and management, efficiency and productivity analysis, as well as data mining. He has published widely in various international journals. He is an Associate Editor of the IMA Journal of Management Mathematics and a Guest Editor of several special issues of journals including the Journal of the Operational Research Society, Annals of Operations Research, Journal of Medical Systems, and International Journal of Energy Sector Management. He is on the editorial board of several international journals and a co-founder of Performance Improvement Management Software. William Ho is a Senior Lecturer at the Aston University Business School. Before joining Aston in 2005, he worked as a Research Associate in the Department of Industrial and Systems Engineering at the Hong Kong Polytechnic University. His research interests include supply chain management, production and operations management, and operations research. He has published extensively in international journals such as Computers & Operations Research, Engineering Applications of Artificial Intelligence, European Journal of Operational Research, Expert Systems with Applications, International Journal of Production Economics, International Journal of Production Research, and Supply Chain Management: An International Journal. His first authored book was published in 2006. He is an Editorial Board member of the International Journal of Advanced Manufacturing Technology and an Associate Editor of the OR Insight Journal. Currently, he is a Scholar of the Advanced Institute of Management Research. Uses of frontier efficiency methodologies and multi-criteria decision making for performance measurement in the energy sector This special issue focuses on holistic, applied research on performance measurement in energy sector management and aims to publish relevant applied work that bridges the gap between industry and academia. After a rigorous refereeing process, seven papers were included in this special issue. The volume opens with five data envelopment analysis (DEA)-based papers. Wu et al. apply the DEA-based Malmquist index to evaluate the changes in relative efficiency and the total factor productivity of coal-fired electricity generation in 30 Chinese administrative regions from 1999 to 2007. Factors considered in the model include fuel consumption, labor, capital, sulphur dioxide emissions, and electricity generated. The authors reveal that the eastern provinces were relatively and technically more efficient, whereas the western provinces had the highest growth rate in the period studied. Ioannis E. Tsolas applies the DEA approach to assess the performance of Greek fossil fuel-fired power stations, taking undesirable outputs such as carbon dioxide and sulphur dioxide emissions into consideration. In addition, the bootstrapping approach is deployed to address the uncertainty surrounding DEA point estimates and to provide bias-corrected estimates and confidence intervals for them. The author finds that, in the sample, the non-lignite-fired stations are on average more efficient than the lignite-fired stations. Maethee Mekaroonreung and Andrew L. Johnson compare the relative performance of three DEA-based measures, which estimate production frontiers and evaluate the relative efficiency of 113 US petroleum refineries while considering undesirable outputs.
Three inputs (capital, energy consumption, and crude oil consumption), two desirable outputs (gasoline and distillate generation), and an undesirable output (toxic release) are considered in the DEA models. The authors discover that refineries in the Rocky Mountain region performed the best, and about 60 percent of the oil refineries in the sample could improve their efficiencies further. H. Omrani, A. Azadeh, S. F. Ghaderi, and S. Abdollahzadeh present an integrated approach, combining DEA, corrected ordinary least squares (COLS), and principal component analysis (PCA), to calculate the relative efficiency scores of 26 Iranian electricity distribution units from 2003 to 2006. Specifically, both DEA and COLS are used to check three internal consistency conditions, whereas PCA is used to verify and validate the final ranking results of either DEA (consistency) or DEA-COLS (non-consistency). Three inputs (network length, transformer capacity, and number of employees) and two outputs (number of customers and total electricity sales) are considered in the model. Virendra Ajodhia applies three DEA-based models to evaluate the relative performance of 20 electricity distribution firms from the UK and the Netherlands. The first model is a traditional DEA model for analyzing cost-only efficiency. The second model includes (inverse) quality by modelling total customer minutes lost as an input. The third model is based on the idea of using total social costs, including the firm’s private costs and the interruption costs incurred by consumers, as an input. Both energy delivered and the number of consumers are treated as outputs in the models. After the five DEA papers, Stelios Grafakos, Alexandros Flamos, Vlasis Oikonomou, and D. Zevgolis present a multiple-criteria weighting approach to evaluate energy and climate policy. The proposed approach is akin to the analytic hierarchy process, consisting of pairwise comparisons, consistency verification, and criteria prioritization. In the approach, stakeholders and experts in the energy policy field are incorporated into the evaluation process through an interactive means of expressing their preferences verbally, numerically, and visually. A total of 14 evaluation criteria were considered and classified under four objectives: climate change mitigation, energy effectiveness, socioeconomic considerations, and competitiveness and technology. Finally, Borge Hess applies the stochastic frontier analysis approach to analyze the impact of various business strategies, including acquisitions, holding structures, and joint ventures, on a firm’s efficiency within a sample of 47 natural gas transmission pipelines in the USA from 1996 to 2005. The author finds no significant change in a firm’s efficiency following an acquisition, and only weak evidence for efficiency improvements brought about by the new shareholder. The author also finds that parent companies do not appear to influence a subsidiary’s efficiency positively. In addition, the analysis shows a negative impact of joint ventures on the technical efficiency of the pipeline companies. To conclude, we are grateful to all the authors for their contributions and to all the reviewers for their constructive comments, which made this special issue possible. We hope that this issue will contribute significantly to performance improvement in the energy sector.
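As background to the DEA-based papers above, the input-oriented CCR efficiency of a single decision-making unit can be obtained from a small linear program. A minimal Python sketch with made-up input/output data (not taken from any of the studies in this issue) follows.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: inputs (m x n), Y: outputs (s x n), columns are DMUs.
    Solves: min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # variables: [theta, lambda_1..lambda_n]
    A_in = np.hstack([-X[:, [o]], X])           # X @ lam - theta * x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # -Y @ lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun                              # theta in (0, 1]; 1 = efficient

# Toy data: 2 inputs, 1 output, 4 DMUs (e.g. power stations).
X = np.array([[4.0, 7.0, 8.0, 4.0],
              [3.0, 3.0, 1.0, 2.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
print([round(ccr_efficiency(X, Y, j), 3) for j in range(4)])
```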
Abstract:
Models for the conditional joint distribution of the U.S. Dollar/Japanese Yen and Euro/Japanese Yen exchange rates, from November 2001 until June 2007, are evaluated and compared. The conditional dependency is allowed to vary across time, as a function of either historical returns or a combination of past return data and option-implied dependence estimates. Using prices of currency options that are available in the public domain, risk-neutral dependency expectations are extracted through a copula representation of the bivariate risk-neutral density. For this purpose, we employ either the one-parameter "Normal" or a two-parameter "Gumbel Mixture" specification. The latter provides forward-looking information regarding the overall degree of covariation, as well as the level and direction of asymmetric dependence. Specifications that include option-based measures in their information set are found to outperform, in-sample and out-of-sample, models that rely solely on historical returns.
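The Gumbel copula that underlies the "Gumbel Mixture" specification has a simple closed form. A brief illustrative sketch of its CDF and the Kendall's tau it implies is given below; this shows only the single-copula building block, not the paper's full mixture model.

```python
import numpy as np

def gumbel_copula_cdf(u, v, theta):
    """Gumbel copula C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)),
    theta >= 1; theta = 1 gives independence, larger theta gives stronger
    upper-tail dependence (joint extreme moves)."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    t = (-np.log(u)) ** theta + (-np.log(v)) ** theta
    return np.exp(-t ** (1.0 / theta))

def gumbel_kendall_tau(theta):
    """Kendall's tau implied by a Gumbel copula parameter."""
    return 1.0 - 1.0 / theta

# Joint probability that both uniform margins fall below their 5% quantile,
# under moderate dependence, compared with the independent case.
print(gumbel_copula_cdf(0.05, 0.05, theta=1.5), 0.05 * 0.05)
print(gumbel_kendall_tau(1.5))
```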
Abstract:
Analysing the molecular polymorphism and interactions of DNA, RNA and proteins is of fundamental importance in biology. Predicting the functions of polymorphic molecules is important in order to design more effective medicines. Analysing major histocompatibility complex (MHC) polymorphism is important for mate choice, epitope-based vaccine design, transplantation rejection, and so on. Most existing exploratory approaches cannot analyse these datasets because of the large number of molecules with a high number of descriptors per molecule. This thesis develops novel methods for data projection in order to explore high-dimensional biological datasets by visualising them in a low-dimensional space. With increasing dimensionality, some existing data visualisation methods, such as generative topographic mapping (GTM), become computationally intractable. We propose variants of these methods, where we use log-transformations at certain steps of the expectation maximisation (EM) based parameter learning process, to make them tractable for high-dimensional datasets. We demonstrate these proposed variants for both synthetic data and an electrostatic potential dataset of MHC class-I. We also propose to extend a latent trait model (LTM), suitable for visualising high-dimensional discrete data, to simultaneously estimate feature saliency as an integrated part of the parameter learning process of a visualisation model. This LTM variant not only gives a better visualisation by modifying the projection map based on feature relevance, but also helps users to assess the significance of each feature. Another problem which has not been addressed much in the literature is the visualisation of mixed-type data. We propose to combine GTM and LTM in a principled way, where appropriate noise models are used for each type of data, in order to visualise mixed-type data in a single plot. We call this model a generalised GTM (GGTM). We also propose to extend the GGTM model to estimate feature saliencies while training a visualisation model; this is called GGTM with feature saliency (GGTM-FS). We demonstrate the effectiveness of these proposed models for both synthetic and real datasets. We evaluate visualisation quality using metrics such as the distance distortion measure and rank-based measures: trustworthiness, continuity, and mean relative rank errors with respect to data space and latent space. In cases where the labels are known, we also use KL divergence and nearest-neighbour classification error to determine the separation between classes. We demonstrate the efficacy of these proposed models for both synthetic and real biological datasets, with a main focus on the MHC class-I dataset.
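Trustworthiness and continuity are rank-based measures of how well neighbourhoods are preserved between data space and the visualisation space. A minimal sketch using scikit-learn is shown below; PCA merely stands in for a visualisation model, and computing continuity by swapping the roles of the two spaces is a common convention assumed here, not a detail taken from the thesis.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))            # stand-in for a high-dimensional dataset

# Any 2-D visualisation model could be used here; PCA keeps the sketch simple.
X_2d = PCA(n_components=2).fit_transform(X)

# Trustworthiness: are neighbours in the 2-D map also neighbours in data space?
t = trustworthiness(X, X_2d, n_neighbors=10)

# Continuity: are data-space neighbours preserved in the map? A common way to
# obtain it is to swap the roles of the two spaces (assumption, not from the thesis).
c = trustworthiness(X_2d, X, n_neighbors=10)

print(f"trustworthiness = {t:.3f}, continuity = {c:.3f}")
```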
Abstract:
An assessment tool designed to measure a customer service orientation among RNs and LPNs was developed using a content-oriented approach. Critical incidents were first developed by asking two samples of healthcare managers (n = 52 and 25) to identify various customer-contact situations. The critical incidents were then used to formulate a 121-item instrument. Patient-contact workers from 3 hospitals (n = 102) completed the instrument along with the NEO-FFI, a measure of the Big Five personality factors. Concurrently, managers completed a performance evaluation scale on the employees participating in the study in order to determine the predictive validity of the instrument. Through a criterion-keying approach, the instrument was scaled down to 38 items. The correlation between HealthServe and the supervisory performance evaluation ratings supported the instrument's criterion-related validity (r = .66, p < .0001). Incremental validity of HealthServe over the Big Five was found, with HealthServe accounting for 46% of the variance. The NEO-FFI was used to assess the correlation between personality traits and HealthServe. A factor analysis of HealthServe suggested four factors, which were correlated with the NEO-FFI scores. Results indicated that HealthServe was related to Extraversion, Openness to Experience, Agreeableness, and Conscientiousness, and negatively related to Neuroticism. The benefits of the test construction procedure used here over the use of broad-based measures of personality were discussed, as well as the limitations of using a concurrent validation strategy. Recommendations for future studies were provided.
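Incremental validity of the kind reported here (HealthServe beyond the Big Five) is typically quantified as the change in R² between nested regression models. A hypothetical Python sketch with simulated scores is shown below; none of the variable names or values come from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 102

# Simulated stand-ins for the study's variables (not the actual data).
df = pd.DataFrame(rng.normal(size=(n, 6)),
                  columns=["neuroticism", "extraversion", "openness",
                           "agreeableness", "conscientiousness", "healthserve"])
df["performance"] = (0.7 * df["healthserve"] + 0.2 * df["conscientiousness"]
                     + rng.normal(scale=0.5, size=n))

big5 = "neuroticism + extraversion + openness + agreeableness + conscientiousness"
m_base = smf.ols(f"performance ~ {big5}", data=df).fit()
m_full = smf.ols(f"performance ~ {big5} + healthserve", data=df).fit()

# Incremental validity: variance in rated performance explained by the
# customer-service instrument beyond the Big Five.
print(f"delta R^2 = {m_full.rsquared - m_base.rsquared:.3f}")
```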