8 results for Performance evaluation. Competencies. Pharmaceutical industry. Strategy. Pharmaceutical sales representatives
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
Systems thinking allows companies to use indicators based on subjective constructs, such as recursiveness, cause-effect relationships, and autonomy, for performance evaluation. Thus, the question that motivates this paper is: are Brazilian companies searching for new performance measurement and evaluation models based on systems thinking? The study investigates such models, looking for systems-thinking roots in their frameworks. It was both exploratory and descriptive, based on a multiple-case strategy covering four companies in the chemical sector. The findings showed that the organizational models have some characteristics that can be related to systems thinking, such as system control and communication. Complexity and autonomy are deficiently formalized by the companies. Overall, the data suggest that, within this context, systems thinking seems adequate for organizational performance evaluation but remains distant from management practice.
Abstract:
We propose a new general Bayesian latent class model for evaluating the performance of multiple diagnostic tests in situations in which no gold standard test exists, based on a computationally intensive approach. The modeling represents an interesting and suitable alternative to models with complex structures that involve the general case of several conditionally independent diagnostic tests, covariates, and strata with different disease prevalences. The technique of stratifying the population according to different disease prevalence rates does not add marked complexity to the modeling, but it makes the model more flexible and interpretable. To illustrate the proposed general model, we evaluate the performance of six diagnostic screening tests for Chagas disease, considering some epidemiological variables. Serology at the time of donation (negative, positive, inconclusive) was considered as a stratification factor in the model. The general model with stratification of the population performed better than its counterparts without stratification. The group formed by the testing laboratory Biomanguinhos FIOCRUZ-kit (c-ELISA and rec-ELISA) is the best option in the confirmation process, presenting a false-negative rate of 0.0002% under the serial scheme. Under this scheme, a donor whose two tests are both negative can be declared healthy, and one whose two tests are both positive can be declared chagasic, with virtual certainty.
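As a toy illustration of the latent class idea in this abstract, the sketch below applies Bayes' rule to two conditionally independent tests to get the posterior probability of disease. The sensitivities, specificities, and prevalence are invented for illustration and are not the paper's estimates; the actual model is fitted with computationally intensive Bayesian methods rather than plugged-in values.

```python
# Hypothetical sketch: posterior disease probability for two conditionally
# independent diagnostic tests under assumed (invented) operating
# characteristics. Not the paper's fitted model.

def posterior_disease(results, sens, spec, prevalence):
    """P(diseased | test results); results is a list of booleans (True = positive)."""
    like_d = prevalence        # prior x likelihood, diseased branch
    like_h = 1.0 - prevalence  # prior x likelihood, healthy branch
    for r, se, sp in zip(results, sens, spec):
        like_d *= se if r else (1.0 - se)
        like_h *= (1.0 - sp) if r else sp
    return like_d / (like_d + like_h)

# Invented sensitivities/specificities for two ELISA-type tests
sens = [0.98, 0.97]
spec = [0.96, 0.95]
prev = 0.01  # assumed disease prevalence in the stratum

p_pos_pos = posterior_disease([True, True], sens, spec, prev)
p_neg_neg = posterior_disease([False, False], sens, spec, prev)
print(f"P(chagasic | both positive) = {p_pos_pos:.4f}")
print(f"P(chagasic | both negative) = {p_neg_neg:.6f}")
```

Even with two accurate tests, a low prevalence keeps the double-positive posterior well below 1, which is why stratifying by prevalence-relevant variables (such as serology at donation) changes the model's conclusions.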
Abstract:
In this work, we study the performance evaluation of resource-aware business process models. We define a new framework that allows the generation of analytical models for performance evaluation from business process models annotated with resource management information. This framework is composed of a new notation that allows the specification of resource management constraints and a method to convert a business process specification and its resource constraints into Stochastic Automata Networks (SANs). We show that the analysis of the generated SAN model provides several performance indices, such as the average throughput of the system, average waiting time, average queue size, and utilization rate of resources. Using the BP2SAN tool (our implementation of the proposed framework) and a SAN solver (such as the PEPS tool), we show through a simple use case how a business specialist with no skills in stochastic modeling can easily obtain performance indices that, in turn, can help to identify bottlenecks in the model, perform workload characterization, define the provisioning of resources, and study other performance-related aspects of the business process.
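The framework derives these indices from the steady-state solution of the generated SAN. As a rough stand-in (a simplification, not the BP2SAN method), the same four indices can be computed in closed form for a single resource modeled as an M/M/1 queue:

```python
# Toy illustration: the four performance indices named in the abstract,
# computed with closed-form M/M/1 formulas for one resource. The real
# framework obtains them from the steady-state distribution of a SAN.

def mm1_indices(arrival_rate, service_rate):
    rho = arrival_rate / service_rate        # utilization of the resource
    assert rho < 1, "queue is unstable"
    l_q = rho**2 / (1 - rho)                 # average queue size (waiting jobs)
    w_q = l_q / arrival_rate                 # average waiting time (Little's law)
    throughput = arrival_rate                # in steady state, all arrivals are served
    return {"utilization": rho, "avg_queue": l_q,
            "avg_wait": w_q, "throughput": throughput}

idx = mm1_indices(arrival_rate=2.0, service_rate=5.0)  # tasks per time unit
print(idx)
```

An analyst could use such numbers exactly as the abstract describes: a utilization close to 1 flags the resource as a bottleneck, and the waiting-time figure informs provisioning.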
Abstract:
Background: Genotyping of hepatitis C virus (HCV) has become an essential tool for prognosis and prediction of treatment duration. The aim of this study was to compare two HCV genotyping methods: the reverse hybridization line probe assay (LiPA v.1) and partial sequencing of the NS5B region. Methods: Plasma samples from 171 patients with chronic hepatitis C were screened using both a commercial method (LiPA HCV Versant, Siemens, Tarrytown, NY, USA) and different primers targeting the NS5B region for PCR amplification and sequencing analysis. Results: Comparison of the HCV genotyping methods showed no difference in classification at the genotype level. However, a total of 82/171 samples (47.9%), comprising misclassified, non-subtypable, discrepant, and inconclusive results, were not classified by LiPA at the subtype level but could be discriminated by NS5B sequencing. Of these samples, 34 of genotype 1a and 6 of genotype 1b were classified at the subtype level using NS5B sequencing. Conclusions: Sequence analysis of NS5B for HCV genotyping provides precise genotype and subtype identification and an accurate epidemiological representation of circulating viral strains.
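The genotype-versus-subtype comparison performed in this study can be sketched as a simple tally; the sample calls and helper function below are hypothetical, for illustration only:

```python
# Hypothetical sketch of the comparison logic: tally agreement between two
# genotyping methods at the genotype level (first character, e.g. "1") and
# at the subtype level (full call, e.g. "1a"). The calls below are invented.

def agreement(calls_lipa, calls_ns5b):
    geno = sum(a[0] == b[0] for a, b in zip(calls_lipa, calls_ns5b))
    # Count subtype-level agreement only for pairs where both methods
    # produced a full subtype call.
    sub = sum(a == b for a, b in zip(calls_lipa, calls_ns5b)
              if len(a) > 1 and len(b) > 1)
    return geno, sub

lipa = ["1a", "1", "3a", "1b", "2"]    # "1", "2": no subtype assigned by LiPA
ns5b = ["1a", "1a", "3a", "1b", "2b"]  # sequencing resolves the subtype
print(agreement(lipa, ns5b))
```

This mirrors the study's finding in miniature: the two methods agree at the genotype level, while some samples are subtyped only by NS5B sequencing.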
Abstract:
Drug discovery has moved toward more rational strategies based on our increasing understanding of the fundamental principles of protein-ligand interactions. Structure-based (SBDD) and ligand-based drug design (LBDD) approaches bring together the most powerful concepts in modern chemistry and biology, linking medicinal chemistry with structural biology. The definition and assessment of both chemical and biological space have revitalized the importance of exploring the intrinsically complementary nature of experimental and computational methods in drug design. Major challenges in this field include the identification of promising hits and the development of high-quality leads for further development into clinical candidates. This is particularly important in the case of neglected tropical diseases (NTDs), which disproportionately affect poor people living in rural and remote regions worldwide, and for which an insufficient number of new chemical entities is being evaluated owing to the lack of innovation and R&D investment by the pharmaceutical industry. This perspective paper outlines the utility and applications of SBDD and LBDD approaches for the identification and design of new small-molecule agents for NTDs.
Abstract:
Ceftazidime is a broad-spectrum antibiotic administered mainly by the parenteral route, and it is especially effective against Pseudomonas aeruginosa. The period of time during which serum levels exceed the Minimum Inhibitory Concentration (MIC) is an important pharmacodynamic parameter for its efficacy. One way to extend this period is to administer the antibiotic by continuous infusion, after prior dilution in a Parenteral Solution (PS). The present work assessed the stability of ceftazidime in 5% glucose PS for 24 hours, combined or not with aminophylline, through High Performance Liquid Chromatography (HPLC). The physicochemical evaluation was accompanied by a comparative in vitro antimicrobial activity assessment using MIC testing over the 24-hour period. Escherichia coli and Pseudomonas aeruginosa were the microorganisms chosen for the MIC comparison. The HPLC analysis confirmed the individual stability of ceftazidime and aminophylline in the PS, while the MIC values were slightly higher than the mean values described in the literature. When both drugs were combined in the same PS, the ceftazidime concentration measured by HPLC decreased by 25% after 24 hours. The MIC values not only showed a marked loss of antibiotic activity within the same period but were also altered immediately after preparation, which was not detected by HPLC. Our results indicate that this drug combination is not compatible, even if used immediately, and that PS might not be the best vehicle for ceftazidime, emphasizing the importance of MIC evaluation for drug interactions.
Abstract:
The web services (WS) technology provides a comprehensive solution for representing, discovering, and invoking services in a wide variety of environments, including Service Oriented Architectures (SOA) and grid computing systems. At the core of WS technology lie a number of XML-based standards, such as the Simple Object Access Protocol (SOAP), that have successfully ensured WS extensibility, transparency, and interoperability. Nonetheless, there is an increasing demand to enhance WS performance, which is severely impaired by XML's verbosity. SOAP communications produce considerable network traffic, making them unfit for distributed, loosely coupled, and heterogeneous computing environments such as the open Internet. Also, they introduce higher latency and processing delays than other technologies, like Java RMI and CORBA. WS research has recently focused on SOAP performance enhancement. Many approaches build on the observation that SOAP message exchange usually involves highly similar messages (those created by the same implementation usually have the same structure, and those sent from a server to multiple clients tend to show similarities in structure and content). Similarity evaluation and differential encoding have thus emerged as SOAP performance enhancement techniques. The main idea is to identify the common parts of SOAP messages, to be processed only once, avoiding a large amount of overhead. Other approaches investigate nontraditional processor architectures, including micro- and macro-level parallel processing solutions, so as to further increase the processing rates of SOAP/XML software toolkits. This survey paper provides a concise, yet comprehensive review of the research efforts aimed at SOAP performance enhancement. A unified view of the problem is provided, covering almost every phase of SOAP processing, ranging over message parsing, serialization, deserialization, compression, multicasting, security evaluation, and data/instruction-level processing.
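The similarity-evaluation and differential-encoding idea surveyed above can be sketched with a line-based diff; real systems use XML-aware differencing, and the two SOAP messages below are invented:

```python
# Minimal sketch of differential encoding for similar SOAP messages: the
# sender transmits only a delta against a shared reference message instead
# of the full payload. difflib's line-based diff is a stand-in for the
# XML-aware diffing used in practice; the messages are invented examples.
import difflib

reference = """<soap:Envelope>
 <soap:Body>
  <getQuote><symbol>IBM</symbol></getQuote>
 </soap:Body>
</soap:Envelope>""".splitlines(keepends=True)

message = """<soap:Envelope>
 <soap:Body>
  <getQuote><symbol>ACME</symbol></getQuote>
 </soap:Body>
</soap:Envelope>""".splitlines(keepends=True)

# Sender side: encode only the delta relative to the shared reference.
delta = list(difflib.unified_diff(reference, message, lineterm=""))

# The common parts need to be parsed/serialized only once; measure how
# much of the message is shared with the reference.
matcher = difflib.SequenceMatcher(None, reference, message)
common = sum(block.size for block in matcher.get_matching_blocks())
print(f"{common}/{len(message)} lines shared with the reference")
```

The larger the shared portion, the more parsing and serialization work can be skipped, which is precisely why this technique pays off for server-to-many-clients traffic with near-identical message structure.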
Abstract:
Background: In the analysis of the effects of cell treatment, such as drug dosing, identifying changes in gene network structure between normal and treated cells is a key task. A possible way to identify the changes is to compare the structures of networks estimated separately from data on normal and treated cells. However, this approach usually fails to estimate accurate gene networks due to the limited length of time series data and measurement noise. Thus, approaches that identify changes in regulations by using time series data from both conditions in an efficient manner are needed. Methods: We propose a new statistical approach that is based on the state space representation of the vector autoregressive model and estimates gene networks under two different conditions in order to identify changes in regulations between the conditions. In the mathematical model of our approach, hidden binary variables are newly introduced to indicate the presence of regulations under each condition. The use of the hidden binary variables enables efficient data usage: data from both conditions are used for commonly existing regulations, while for condition-specific regulations only the corresponding data are applied. Also, the similarity of the networks under the two conditions is automatically taken into account through the design of the potential function for the hidden binary variables. For the estimation of the hidden binary variables, we derive a new variational annealing method that searches for the configuration of the binary variables maximizing the marginal likelihood. Results: For the performance evaluation, we use time series data from two topologically similar synthetic networks and confirm that our proposed approach estimates commonly existing regulations, as well as changes in regulations, with higher coverage and precision than other existing approaches in almost all experimental settings.
For a real data application, our proposed approach is applied to time series data from normal human lung cells and human lung cells treated by stimulating EGF receptors and dosing an anticancer drug termed Gefitinib. In the treated lung cells, a cancer cell condition is simulated by the stimulation of EGF receptors, but the effect should be counteracted by the selective inhibition of EGF receptors by Gefitinib. However, gene expression profiles actually differ between the conditions, and the genes related to the identified changes are considered possible off-targets of Gefitinib. Conclusions: On the synthetically generated time series data, our proposed approach identifies changes in regulations more accurately than existing methods. By applying the proposed approach to the time series data on normal and treated human lung cells, candidate off-target genes of Gefitinib are found. According to published clinical information, one of these genes may be related to a factor in interstitial pneumonia, which is a known side effect of Gefitinib.
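A greatly simplified stand-in for the change-detection task in this abstract is sketched below: fit a VAR(1) model to synthetic time series from each condition by ordinary least squares and flag coefficients (regulations) that differ markedly. The paper's method instead shares data across conditions through hidden binary indicator variables with a variational annealing search; the networks and data here are invented.

```python
# Simplified illustration (not the paper's method): estimate a VAR(1)
# transition matrix per condition by least squares, then flag entries
# (candidate regulations) that changed between conditions. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)

def simulate_var1(A, T=500, noise=0.1):
    """Generate T samples from x[t] = A x[t-1] + Gaussian noise."""
    x = np.zeros((T, A.shape[0]))
    for t in range(1, T):
        x[t] = A @ x[t - 1] + noise * rng.standard_normal(A.shape[0])
    return x

def fit_var1(x):
    """Ordinary least-squares estimate of A in x[t] = A x[t-1] + e."""
    X, Y = x[:-1], x[1:]
    return np.linalg.lstsq(X, Y, rcond=None)[0].T

A_normal = np.array([[0.9, 0.0],
                     [0.5, 0.8]])
A_treated = np.array([[0.9, 0.0],
                      [0.0, 0.8]])  # regulation gene0 -> gene1 removed

A1 = fit_var1(simulate_var1(A_normal))
A2 = fit_var1(simulate_var1(A_treated))
changed = np.abs(A1 - A2) > 0.25  # crude change-detection threshold
print(changed)  # expect a change flagged only at entry (1, 0)
```

Estimating each condition separately like this wastes the samples that both conditions share for the unchanged regulations, which is exactly the inefficiency the hidden-binary-variable formulation is designed to avoid.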