908 results for "Verification and validation technology"
Abstract:
Chagas disease, a neglected illness, affects nearly 12-14 million people in endemic areas of Latin America. Although the occurrence of acute cases has declined sharply owing to the Southern Cone Initiative's efforts to control vector transmission, serious challenges remain, including the maintenance of sustainable public policies for Chagas disease control and the urgent need for better drugs to treat chagasic patients. Since the introduction of benznidazole and nifurtimox approximately 40 years ago, many natural and synthetic compounds have been assayed against Trypanosoma cruzi, yet only a few have advanced to clinical trials. This reflects, at least in part, the lack of consensus regarding appropriate in vitro and in vivo screening protocols, as well as the lack of biomarkers to assess parasitaemia during treatment. The development of more effective drugs requires (i) the identification and validation of parasite targets, (ii) compounds to be screened against those targets or the whole parasite and (iii) a panel of minimum standardised procedures to advance lead compounds to clinical trials. This third aim was the topic of the workshop entitled Experimental Models in Drug Screening and Development for Chagas Disease, held in Rio de Janeiro, Brazil, on 25-26 November 2008 by the Fiocruz Program for Research and Technological Development on Chagas Disease and the Drugs for Neglected Diseases Initiative. During the meeting, interdisciplinary experts evaluated the minimum steps, requirements and decision gates for determining the efficacy of novel drugs against T. cruzi, and an in vitro and in vivo flowchart was designed to serve as a general, standardised protocol for screening potential drugs for the treatment of Chagas disease.
Abstract:
The aim of the present study was to develop a short form of the Zuckerman-Kuhlman Personality Questionnaire (ZKPQ) with acceptable psychometric properties in four languages: English (United States), French (Switzerland), German (Germany), and Spanish (Spain). The total sample (N = 4,621) was randomly divided into calibration and validation samples. An exploratory factor analysis was conducted in the calibration sample. Eighty items with loadings equal to or higher than 0.30 on their own factor and lower on the remaining factors were retained. A confirmatory factor analysis was then performed on these surviving items in the validation sample in order to select the best 10 items for each scale. This short version (named ZKPQ-50-CC) presents psychometric properties very similar to those of the original version in the four countries. Moreover, the factor structure is nearly equivalent across the four countries, since the congruence indices were all higher than 0.90. It is concluded that the ZKPQ-50-CC shows high cross-language replicability and could be a useful questionnaire for personality research.
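As a hedged illustration of the congruence check reported above, the sketch below computes Tucker's congruence coefficient between two factor-loading matrices (for example, the loadings of one scale obtained in two language versions). The array shapes and values are illustrative assumptions, not the authors' data or code.

```python
import numpy as np

def tucker_congruence(load_a: np.ndarray, load_b: np.ndarray) -> np.ndarray:
    """Column-wise Tucker congruence between two loading matrices
    of shape (n_items, n_factors)."""
    num = (load_a * load_b).sum(axis=0)
    den = np.sqrt((load_a ** 2).sum(axis=0) * (load_b ** 2).sum(axis=0))
    return num / den

# Illustrative loadings for 10 items on one factor in two countries (invented).
rng = np.random.default_rng(0)
loadings_us = rng.uniform(0.30, 0.70, size=(10, 1))
loadings_es = loadings_us + rng.normal(0.0, 0.05, size=(10, 1))

phi = tucker_congruence(loadings_us, loadings_es)
print(f"Congruence coefficient: {phi[0]:.2f}")  # values > 0.90 suggest factor equivalence
```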
Abstract:
The accuracy of the MicroScan WalkAway, BD Phoenix, and Vitek-2 systems for susceptibility testing of quinolones and aminoglycosides against 68 enterobacteria containing qnrB, qnrS, and/or aac(6')-Ib-cr was evaluated using the reference microdilution method. Overall, one very major error (0.09%), 6 major errors (0.52%), and 45 minor errors (3.89%) were noted.
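For context, the three percentages appear to share a common denominator of organism-antimicrobial combinations; the back-of-the-envelope check below (an assumption, not stated in the abstract, as is the count of roughly 17 agents) recovers that denominator from the reported counts and rates.

```python
# Rough check (assumption): each error rate = error count / total organism-drug combinations.
errors = {"very major": (1, 0.0009), "major": (6, 0.0052), "minor": (45, 0.0389)}
for name, (count, rate) in errors.items():
    print(f"{name}: implied denominator ~ {count / rate:.0f} combinations")
# All three imply roughly 1,100-1,160 combinations, which would be consistent with
# 68 isolates each tested against about 17 quinolone/aminoglycoside agents.
```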
Abstract:
Colorectal cancer is a heterogeneous disease that manifests through diverse clinical scenarios. For many years, our knowledge of the variability of colorectal tumors was limited to histopathological analysis, from which generic classifications associated with different clinical expectations were derived. Currently, however, we are beginning to understand that beneath the marked pathological and clinical variability of these tumors lies strong genetic and biological heterogeneity. Thus, with the increasing information available on inter-tumor and intra-tumor heterogeneity, the classical pathological approach is being displaced in favor of novel molecular classifications. In the present article, we summarize the most relevant proposals for molecular classification obtained from the analysis of colorectal tumors using powerful high-throughput techniques and devices. We also discuss the role that cancer systems biology may play in the integration and interpretation of the large amount of data generated, and the challenges to be addressed in the future development of precision oncology. In addition, we review the current state of implementation of these novel tools in the pathology laboratory and in clinical practice.
Abstract:
In the first part of this research, three stages were defined for a program to increase the information extracted from ink evidence and maximise its usefulness to the criminal and civil justice system. These stages are (a) to develop a standard methodology for analysing ink samples by high-performance thin layer chromatography (HPTLC) in a reproducible way, even when ink samples are analysed at different times, in different locations and by different examiners; (b) to compare ink samples automatically and objectively; and (c) to define and evaluate a theoretical framework for the use of ink evidence in a forensic context. This report focuses on the second of the three stages. Using the calibration and acquisition process described in the previous report, mathematical algorithms are proposed to compare ink samples automatically and objectively. The performance of these algorithms is systematically studied under various chemical and forensic conditions using standard performance tests commonly used in biometric studies. The results show that different algorithms are best suited to different tasks. Finally, this report demonstrates how modern analytical and computer technology can be used in the field of ink examination, and how tools developed and successfully applied in other fields of forensic science can help maximise its impact within the field of questioned documents.
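As a hedged sketch of what an automatic, objective comparison of ink samples might look like (the abstract does not disclose the actual algorithms), the code below scores the similarity of two densitometric HPTLC profiles with two common metrics; the profile values and the evaluation note are illustrative assumptions.

```python
import numpy as np

def compare_profiles(profile_a: np.ndarray, profile_b: np.ndarray) -> dict:
    """Similarity scores between two intensity profiles of equal length."""
    a = (profile_a - profile_a.mean()) / profile_a.std()
    b = (profile_b - profile_b.mean()) / profile_b.std()
    pearson = float(np.corrcoef(a, b)[0, 1])
    euclidean = float(np.linalg.norm(a - b))
    return {"pearson": pearson, "euclidean": euclidean}

# Illustrative profiles: the same ink measured twice with small instrumental noise.
rng = np.random.default_rng(1)
base = np.convolve(rng.random(200), np.ones(5) / 5, mode="same")
scores = compare_profiles(base, base + rng.normal(0, 0.01, 200))
print(scores)
# In a biometrics-style evaluation, thresholds on such scores would be tuned on
# known same-ink / different-ink pairs to estimate false match and non-match rates.
```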
Abstract:
How do organizations cope with extreme uncertainty? The existing literature is divided on this issue: some argue that organizations deal best with uncertainty in the environment by reproducing it within the organization, whereas others contend that the organization should be protected from the environment. In this paper we study the case of a Wall Street investment bank that lost its entire office and trading technology in the terrorist attack of September 11th. The traders survived, but were forced to relocate to a makeshift trading room in New Jersey. During the six months the traders spent outside New York City, they had to deal with fears and insecurities inside the company as well as outside it: anxiety about additional attacks, questions of professional identity, doubts about the future of the firm, and ambiguities about the future relocation of the trading room. The firm overcame these uncertainties by protecting the traders' identities and their ability to engage in sensemaking. The organization held together through a leadership style that managed ambiguities and created the conditions for new solutions to emerge.
Abstract:
Proteomics has come a long way from the initial qualitative analysis of the proteins present in a given sample at a given time ("cataloguing") to the large-scale characterization of proteomes, their interactions and their dynamic behavior. Originally enabled by breakthroughs in protein separation and visualization (by two-dimensional gels) and protein identification (by mass spectrometry), the discipline now encompasses a large body of protein and peptide separation, labeling, detection and sequencing tools supported by computational data processing. The decisive mass spectrometric developments and the most recent instrumentation are briefly mentioned, accompanied by a short review of gel and chromatographic techniques for protein/peptide separation, depletion and enrichment. Special emphasis is placed on quantification techniques: gel-based and label-free techniques are briefly discussed, whereas stable-isotope coding and internal peptide standards are reviewed extensively. Another chapter is dedicated to software and computing tools for proteomic data processing and validation. A short assessment of the status quo and recommendations for future developments round off this journey through quantitative proteomics.
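As a hedged, minimal illustration of the stable-isotope coding idea reviewed above (not any specific tool or workflow named by the authors), the sketch below derives a protein-level heavy/light ratio from peptide ion intensities; the intensity values are invented for the example.

```python
import math

# Illustrative (heavy, light) intensity pairs for peptides mapped to one protein.
peptide_intensities = [
    (1.2e6, 6.1e5),
    (8.4e5, 4.0e5),
    (2.1e6, 1.1e6),
]

# Work in log2 space and summarise with the median to resist outlier peptides.
log_ratios = sorted(math.log2(h / l) for h, l in peptide_intensities)
median_log_ratio = log_ratios[len(log_ratios) // 2]
print(f"Protein heavy/light ratio ~ {2 ** median_log_ratio:.2f}")
```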
Abstract:
Using a sample of patients with coronary artery disease, this methodological study aimed to conduct a cross-cultural adaptation and validation of a questionnaire on knowledge of cardiovascular risk factors (Q-FARCS), lifestyle changes, and treatment adherence for use in Brazil. The questionnaire has three scales: general knowledge of risk factors (RFs); specific knowledge of these RFs; and lifestyle changes achieved. Cross-cultural adaptation included translation, synthesis, back-translation, expert committee review, and pretesting. Face and content validity, reliability, and construct validity were measured. Cronbach's alpha for the total sample (n = 240) was 0.75. Assessment of the psychometric properties revealed adequate face and content validity, and the construct analysis revealed seven components. It was concluded that the Brazilian version of the Q-FARCS had adequate reliability and validity for the assessment of knowledge of cardiovascular RFs.
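The reliability figure reported above (Cronbach's alpha = 0.75) follows the standard formula; as a hedged sketch, the code below computes it from a generic item-response matrix. The simulated data shape and values are assumptions, not the Q-FARCS data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an array of shape (n_respondents, n_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Illustrative responses: 240 respondents, 10 Likert-type items driven by one latent trait.
rng = np.random.default_rng(2)
latent = rng.normal(size=(240, 1))
responses = np.clip(np.round(3 + latent + rng.normal(0, 1, (240, 10))), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```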
Abstract:
This study aimed to evaluate the content validity of the nursing diagnosis of nausea in the immediate post-operative period using Fehring's model. Descriptive study with 52 expert nurses who responded to an instrument containing identification data and the validation of the nausea diagnosis. Most experts considered Domain 12 (Comfort), Class 1 (Physical Comfort) and the label (Nausea) adequate for the diagnosis. Modifications were suggested to the current definition of this nursing diagnosis. Four defining characteristics were considered primary (reported nausea, increased salivation, aversion to food and vomiting sensation) and eight secondary (increased swallowing, sour taste in the mouth, pallor, tachycardia, diaphoresis, sensation of hot and cold, changes in blood pressure and pupil dilation). The total score for the diagnosis of nausea was 0.79. Reports of nausea, vomiting sensation, increased salivation and aversion to food are strong predictors of the nursing diagnosis of nausea.
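As a hedged sketch of the scoring logic in Fehring's diagnostic content validity (DCV) model cited above, the code below averages weighted expert ratings per defining characteristic and classifies them as major (>= 0.80, the "primary" characteristics above) or secondary (0.50-0.79); the example ratings are invented, not the ratings of the 52 study experts.

```python
# Weights for the 1-5 Likert ratings in Fehring's DCV model.
WEIGHTS = {1: 0.0, 2: 0.25, 3: 0.50, 4: 0.75, 5: 1.0}

def dcv_score(ratings: list[int]) -> float:
    """Mean weighted rating of one defining characteristic across experts."""
    return sum(WEIGHTS[r] for r in ratings) / len(ratings)

def classify(score: float) -> str:
    if score >= 0.80:
        return "major"
    if score >= 0.50:
        return "secondary"
    return "discarded"

# Illustrative ratings from a handful of hypothetical experts.
characteristics = {
    "reported nausea": [5, 5, 4, 5, 4],
    "increased salivation": [4, 5, 4, 4, 5],
    "pallor": [3, 4, 3, 4, 3],
}
scores = {name: dcv_score(r) for name, r in characteristics.items()}
for name, s in scores.items():
    print(f"{name}: {s:.2f} ({classify(s)})")
# The total diagnosis score is the mean of the retained characteristics' scores.
print(f"total DCV = {sum(scores.values()) / len(scores):.2f}")
```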
Abstract:
The vast territories radioactively contaminated during the 1986 Chernobyl accident provide a substantial set of radioactive monitoring data, which can be used for the verification and testing of the different spatial estimation (prediction) methods involved in risk assessment studies. Using the Chernobyl data set for this purpose is motivated by its heterogeneous spatial structure (the data are characterized by large-scale correlations, short-scale variability, spotty features, etc.). The present work is concerned with the application of the Bayesian Maximum Entropy (BME) method to estimate the extent and the magnitude of the radioactive soil contamination by 137Cs due to the Chernobyl fallout. The powerful BME method allows rigorous incorporation of a wide variety of knowledge bases into the spatial estimation procedure, leading to informative contamination maps. Exact measurements ("hard" data) are combined with secondary information on local uncertainties (treated as "soft" data) to generate a science-based uncertainty assessment of soil contamination estimates at unsampled locations. BME describes uncertainty in terms of posterior probability distributions generated across space, while no assumption about the underlying distribution is made and non-linear estimators are automatically incorporated. Traditional estimation variances based on the assumption of an underlying Gaussian distribution (analogous, e.g., to the kriging variance) can be derived as a special case of the BME uncertainty analysis. The BME estimates obtained using hard and soft data are compared with the BME estimates obtained using only hard data. The comparison involves both the accuracy of the estimation maps based on the exact data and the assessment of the associated uncertainty using repeated measurements. Furthermore, a comparison of the spatial estimation accuracy obtained by the two methods was carried out using a validation data set of hard data. Finally, a separate uncertainty analysis was conducted to evaluate the ability of the posterior probabilities to reproduce the distribution of the raw repeated measurements available at certain populated sites. The analysis illustrates the improvement in mapping accuracy obtained by adding soft data to the existing hard data and, in general, demonstrates that the BME method performs well both in terms of estimation accuracy and in terms of estimation error assessment, both of which are useful features for the Chernobyl fallout study.
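As a heavily simplified, hedged illustration of the hard/soft data idea (a toy example, not the actual BME machinery), the sketch below updates a Gaussian prior at an unsampled location, taken as coming from nearby hard measurements, with a soft datum given as an interval, and obtains a non-Gaussian posterior by numerical integration; all numbers are invented.

```python
import numpy as np

# Prior at the unsampled location, e.g. from interpolation of nearby hard 137Cs data.
prior_mean, prior_sd = 40.0, 15.0          # kBq/m^2 (invented values)

# Soft datum: local secondary information says the true value lies in [30, 50] kBq/m^2.
soft_lo, soft_hi = 30.0, 50.0

x = np.linspace(0.0, 120.0, 4001)
prior = np.exp(-0.5 * ((x - prior_mean) / prior_sd) ** 2)
likelihood = ((x >= soft_lo) & (x <= soft_hi)).astype(float)

posterior = prior * likelihood
posterior /= np.trapz(posterior, x)        # normalise numerically

post_mean = np.trapz(x * posterior, x)
post_var = np.trapz((x - post_mean) ** 2 * posterior, x)
print(f"posterior mean = {post_mean:.1f}, sd = {post_var ** 0.5:.1f}")
# The interval information tightens the estimate relative to the hard-data-only prior.
```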
Abstract:
Huntington's disease (HD) is an autosomal dominant neurodegenerative disorder caused by an expansion of CAG repeats in the huntingtin (Htt) gene. Despite intensive efforts devoted to investigating the mechanisms of its pathogenesis, effective treatments for this devastating disease remain unavailable. The lack of suitable models recapitulating the entire spectrum of the degenerative process has severely hindered the identification and validation of therapeutic strategies. The discovery that the degeneration in HD is caused by a mutation in a single gene has offered new opportunities to develop experimental models of HD, ranging from in vitro models to transgenic primates. However, recent advances in viral-vector technology provide promising alternatives based on the direct transfer of genes to selected sub-regions of the brain. Rodent studies have shown that overexpression of mutant human Htt in the striatum using adeno-associated virus or lentivirus vectors induces progressive neurodegeneration resembling that seen in HD. This article highlights progress made in modeling HD using viral vector gene transfer. We describe data obtained with this highly flexible approach for the targeted overexpression of a disease-causing gene. The ability to deliver mutant Htt to specific tissues has opened pathological processes to experimental analysis and allowed targeted therapeutic development in rodent and primate pre-clinical models.
Abstract:
This paper analyzes the formation of Research Corporations as an alternative governance structure for performing R&D, compared with pursuing in-house R&D projects. Research Corporations are private for-profit research centers that bring together several firms with similar research goals. In a Research Corporation, formal authority over the choice of projects is jointly exercised by the top management of the member firms. A private for-profit organization cannot commit not to interfere with the project choices of its researchers. However, increasing the number of member firms of the Research Corporation reduces each member firm's incentive to meddle with researchers' projects, because exercising formal authority over the choice of research projects is a public good. The Research Corporation thus offers researchers greater autonomy than a single firm pursuing an identical research program in its in-house R&D department, and this attracts higher-ability researchers to the Research Corporation than to the internal R&D department. The paper uses the theoretical model to analyze the organization of the Microelectronics and Computer Technology Corporation (MCC). The facts of this case confirm the existence of a tension between control over the choice of research projects and the ability of the researchers that the organization is able to attract or retain.
Abstract:
With extremely diverse morphological and edapho-climatic characteristics, the island of Santo Antão in Cape Verde presents a recognized environmental vulnerability together with a marked shortage of scientific studies that address this reality and provide a basis for an integrated understanding of the phenomena. Digital cartography and geographic information technologies have brought technological advances to the collection, storage and processing of spatial data. Several tools currently available make it possible to model a multiplicity of factors, to locate and quantify phenomena, and to define the contribution of different factors to the final result. The present study, developed within the postgraduate and master's programme in Geographic Information Systems of the Universidade de Trás-os-Montes e Alto Douro, aims to help reduce the deficit of information on the biophysical characteristics of the island by applying geographic information technologies and remote sensing combined with multivariate statistical analysis. In this context, thematic maps were produced and analysed and a model for integrated data analysis was developed. The multiplicity of spatial variables produced, among them 29 continuous variables liable to influence the biophysical characteristics of the region, together with the possible occurrence of mutually antagonistic or synergistic effects, makes interpretation from the original data relatively complex. To overcome this problem, a systematic sampling network totalling 921 points (repetitions) was used to extract the values of the 29 variables at the sampling points, followed by multivariate statistical analysis, namely principal component analysis. The application of these techniques made it possible to simplify and interpret the original variables, standardising them and summarising the information contained in the set of mutually correlated original variables into a set of orthogonal (uncorrelated) variables of decreasing importance, the principal components. A target was set of concentrating 75% of the variance of the original data in the first three principal components, and an iterative process was carried out in successive stages, progressively eliminating the least representative variables. In the final stage, the first three PCs explained 74.54% of the variance of the original data, which later proved insufficient to portray reality. The fourth PC (PC4) was therefore included, with which 84% of the variance was explained, representing eight biophysical variables: altitude, drainage density, density of geological fracturing, precipitation, vegetation index, temperature, water resources and distance to the hydrographic network. The subsequent interpolation of the first principal component (PC1), with the main variables associated with PC2, PC3 and PC4 as auxiliary variables, using geostatistical techniques in the ArcGIS environment, produced a map representing 84% of the variation of the biophysical characteristics of the territory. Cluster analysis validated by Student's t-test allowed the territory to be reclassified into six homogeneous biophysical units.
It is concluded that currently available geographic information technologies, besides facilitating interactive and flexible analyses in which themes and criteria can be varied, new information integrated and improvements introduced into models built on the information available in a given context, make it possible, when combined with multivariate statistical techniques and on the basis of scientific criteria, to carry out an integrated analysis of multiple biophysical variables whose mutual correlation makes an integrated understanding of the phenomena complex.
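As a hedged sketch of the analysis pipeline described above (standardise the 29 variables at the sample points, retain principal components up to a target explained variance, then reclassify into homogeneous units), the code below uses scikit-learn; the random stand-in data, the 75% target as a selection rule and the choice of k-means are assumptions, since the study does not specify its clustering algorithm here.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Illustrative stand-in for the 921 sampling points x 29 biophysical variables.
rng = np.random.default_rng(3)
data = rng.normal(size=(921, 29))

# 1) Standardise, 2) keep enough components to reach >= 75% of the variance.
z = StandardScaler().fit_transform(data)
pca = PCA(n_components=0.75)               # fraction -> minimum explained variance
scores = pca.fit_transform(z)
print("components kept:", pca.n_components_,
      "explained:", round(float(pca.explained_variance_ratio_.sum()), 3))

# 3) Reclassify the territory into 6 homogeneous units (k-means as a stand-in).
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(scores)
print("points per unit:", np.bincount(labels))
```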
Abstract:
With the failure of the traditional mechanisms for distributing bibliographic materials to developing countries, digital libraries emerge as a strong alternative for accomplishing this task, despite the challenges of the digital divide. This paper discusses the challenges of building a digital library (DL) in a developing country. The case of Cape Verde as a digital-divide country is analyzed in terms of current digital library usage and its potential for overcoming the difficulties of accessing bibliographic resources in the country. The paper also introduces an ongoing project to build a digital library at the University Jean Piaget of Cape Verde.