979 results for Testing Strategies
Abstract:
Sodium and potassium are the common alkalis present in fly ash. Excessive amounts of fly ash alkalis can cause efflorescence problems in concrete products and raise concern about the effectiveness of the fly ash in mitigating alkali-silica reaction (ASR). The available alkali test, which is commonly used to measure fly ash alkali, takes approximately 35 days for execution and reporting. Hence, in many instances the fly ash has already been incorporated into concrete before the test results are available. This complicates the job of fly ash marketing agencies and leads to disputes with fly ash users, who are often reluctant to accept projects containing materials that fail to meet specification limits. The research project consisted of a lab study and a field study. The lab study focused on the available alkali test and on how fly ash alkali content impacts common performance tests (mortar-bar expansion tests). Twenty-one fly ash samples were evaluated during the testing. The field study focused on the inspection and testing of selected, well-documented pavement sites that contained moderately reactive fine aggregate and high-alkali fly ash. A total of nine pavement sites were evaluated, two of which were control sites that did not contain fly ash. The results of the lab study indicated that the available alkali test is prone to experimental errors that cause poor agreement between testing labs. A strong linear relationship was observed between the available alkali content and the total alkali content of Class C fly ash. This relationship can be used to provide a quicker, more precise method of estimating the available alkali content. The results of the field study failed to link the use of high-alkali fly ash with the occurrence of ASR at the various concrete sites. Petrographic examination of the pavement cores indicated that Wayland sand is an ASR-sensitive aggregate, in good agreement with Iowa DOT field service records.
It was recommended that preventive measures be taken when this source of sand is used in concrete mixtures.
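The reported linear relationship suggests a simple calibration approach. As a sketch (the alkali values below are entirely hypothetical, not the study's data), an ordinary least-squares fit could be used to estimate available alkali from the much quicker total alkali measurement:

```python
import numpy as np

# Hypothetical calibration data: total alkali vs. measured available alkali
# (% Na2O equivalent) for a set of Class C fly ash samples. The abstract
# reports a strong linear relationship; these numbers are illustrative only.
total_alkali = np.array([2.0, 3.1, 4.5, 5.2, 6.0, 7.3])
available_alkali = np.array([1.1, 1.7, 2.4, 2.8, 3.2, 3.9])

# Ordinary least-squares fit: available ~ slope * total + intercept
slope, intercept = np.polyfit(total_alkali, available_alkali, 1)

def estimate_available_alkali(total):
    """Estimate available alkali from a total alkali measurement."""
    return slope * total + intercept

print(round(estimate_available_alkali(5.0), 2))
```

Once calibrated against a lab's own samples, such a fit replaces a 35-day test with a same-day estimate, at the cost of being specific to the ash class it was fitted on.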
Abstract:
PRINCIPLES: Coeliac disease (gluten-sensitive enteropathy) is a genetically determined disorder with an incidence in the general population comparable to that of type 2 diabetes mellitus. Awareness of this fact, and of the often atypical and oligosymptomatic manifestations, is only now gaining ground in the medical profession. A high index of suspicion is important in order to minimise diagnostic and therapeutic delay. METHODS: Testing patterns and follow-up for coeliac disease in our institution were analysed retrospectively for the past five years. The current literature was reviewed with respect to recommendations for clinical practice. RESULTS: A total of 271 patients were tested for coeliac disease over a period of five years. Positive results were found in only 24 patients; after further work-up, the final number of cases with certain or presumed coeliac disease was four. Follow-up was often difficult, many patients being lost after a single visit. CONCLUSIONS: This study showed that the number of tests ordered in our institution, more often for abdominal than atypical symptoms, has started to increase in the past two years. It also showed that screening tests have found their place in general clinical practice, while the final choice of tests needs to be determined in accordance with available guidelines and local resources. Upper endoscopy with small bowel biopsy remains the gold standard for diagnosis, but its place in follow-up is less certain. Coeliac disease is a disorder for which there is a definite treatment (a gluten-free diet); if left untreated, diminished quality of life and potentially serious complications may ensue. Further education of the medical profession regarding coeliac disease, its incidence, presentation and treatment, is clearly indicated.
Abstract:
This report on The Potential of Mode of Action (MoA) Information Derived from Non-testing and Screening Methodologies to Support Informed Hazard Assessment resulted from a workshop organised within OSIRIS (Optimised Strategies for Risk Assessment of Industrial Chemicals through Integration of Non-test and Test Information), a project partly funded by the EU Commission within the Sixth Framework Programme. The workshop was held in Liverpool, UK, on 30 October 2008, with 35 attendees. The goal of the OSIRIS project is to develop integrated testing strategies (ITS) fit for use in the REACH system that would enable a significant increase in the use of non-testing information for regulatory decision making, and thus minimise the need for animal testing. One way to improve the evaluation of chemicals may be through categorisation by mechanisms or modes of toxic action. Defining such groups can enhance read-across possibilities and priority setting for certain toxic modes or for the chemical structures responsible for these toxic modes. Overall, this may result in a reduction of in vivo testing on organisms, by combining available data on mode of action with a focus on the potentially most toxic groups. In this report, the possibilities of a mechanistic approach to assist in and guide ITS are explored, and the differences between the human health and environmental areas are summarised.
Abstract:
Test templates and a test template framework are introduced as useful concepts in specification-based testing. The framework can be defined using any model-based specification notation and used to derive tests from model-based specifications; in this paper, it is demonstrated using the Z notation. The framework formally defines test data sets and their relation to the operations in a specification and to other test data sets, providing structure to the testing process. Flexibility is preserved, so that many testing strategies can be used. Important application areas of the framework are discussed, including refinement of test data, regression testing, and test oracles.
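To illustrate the core idea in miniature (the operation, templates and data below are hypothetical and far simpler than the Z-based framework), a test template can be thought of as a named predicate carving a subset out of an operation's valid input space, with concrete test data drawn from each subset and checked against the operation's postcondition:

```python
# Toy sketch of specification-based test derivation in the spirit of test
# templates. Everything here (operation, templates, data) is illustrative.

def abs_spec(x):
    """Model of the operation under test: absolute value."""
    return x if x >= 0 else -x

# Test templates: (name, membership predicate) pairs partitioning the inputs.
templates = [
    ("negative", lambda x: x < 0),
    ("zero", lambda x: x == 0),
    ("positive", lambda x: x > 0),
]

# Concrete test data drawn from each template.
test_data = {"negative": [-5, -1], "zero": [0], "positive": [1, 7]}

# Check each datum belongs to its template and the postcondition holds
# (result is non-negative and equals x or -x).
for name, pred in templates:
    for x in test_data[name]:
        assert pred(x), f"{x} not in template {name}"
        result = abs_spec(x)
        assert result >= 0 and result in (x, -x)
print("all templates satisfied")
```

The structure the framework provides is exactly this bookkeeping: which data set instantiates which template, and how templates relate to the operations they exercise.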
Abstract:
Purpose: To compare microsatellite instability (MSI) testing with immunohistochemical (IHC) detection of hMLH1 and hMSH2 in colorectal cancer. Patients and Methods: Colorectal cancers from 1,144 patients were assessed for DNA mismatch repair deficiency by two methods: MSI testing and IHC detection of hMLH1 and hMSH2 gene products. High-frequency MSI (MSI-H) was defined as more than 30% instability of at least five markers; low-level MSI (MSI-L) was defined as 1% to 29% of loci unstable. Results: Of 1,144 tumors tested, 818 showed intact expression of hMLH1 and hMSH2. Of these, 680 were microsatellite stable (MSS), 27 were MSI-H, and 111 were MSI-L. In all, 228 tumors showed absence of hMLH1 expression and 98 showed absence of hMSH2 expression: all were MSI-H. Conclusion: IHC in colorectal tumors for protein products hMLH1 and hMSH2 provides a rapid, cost-effective, sensitive (92.3%), and extremely specific (100%) method for screening for DNA mismatch repair defects. The predictive value of normal IHC for an MSS/MSI-L phenotype was 96.7%, and the predictive value of abnormal IHC was 100% for an MSI-H phenotype. Testing strategies must take into account acceptability of missing some cases of MSI-H tumors if only IHC is performed. (C) 2002 by American Society of Clinical Oncology.
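The screening metrics reported above can be recomputed directly from the abstract's own counts, which makes the derivation easy to check:

```python
# Counts from the abstract: 818 tumors with intact IHC (680 MSS, 111 MSI-L,
# 27 MSI-H) and 228 + 98 tumors with absent hMLH1/hMSH2 expression, all MSI-H.
normal_ihc_msi_h = 27
normal_ihc_other = 680 + 111     # MSS + MSI-L with intact expression
abnormal_ihc_msi_h = 228 + 98    # absent hMLH1 + absent hMSH2, all MSI-H
false_positives = 0              # no non-MSI-H tumor had abnormal IHC

total_msi_h = normal_ihc_msi_h + abnormal_ihc_msi_h              # 353
sensitivity = abnormal_ihc_msi_h / total_msi_h                   # ~0.923
specificity = normal_ihc_other / (normal_ihc_other + false_positives)  # 1.0
npv = normal_ihc_other / (normal_ihc_other + normal_ihc_msi_h)   # ~0.967
ppv = abnormal_ihc_msi_h / (abnormal_ihc_msi_h + false_positives)  # 1.0

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.0%} "
      f"NPV={npv:.3f} PPV={ppv:.0%}")
```

The 27 MSI-H tumors with normal IHC are exactly the "missed cases" the conclusion warns about when IHC is used alone.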
Abstract:
There is a need for more efficient methods giving insight into the complex mechanisms of neurotoxicity. Testing strategies including in vitro methods have been proposed to comply with this requirement. In the present study we aimed to develop a novel in vitro approach which mimics in vivo complexity, detects neurotoxicity comprehensively, and provides mechanistic insight. For this purpose we combined rat primary re-aggregating brain cell cultures with a mass spectrometry (MS)-based metabolomics approach. As a proof of principle, we treated developing re-aggregating brain cell cultures for 48 h with the neurotoxicant methyl mercury chloride (0.1-100 µM) and the brain stimulant caffeine (1-100 µM) and acquired cellular metabolic profiles. To detect toxicant-induced metabolic alterations, the profiles were analysed using commercial software which revealed patterns in the multi-parametric dataset by principal component analysis (PCA) and recognised the most significantly altered metabolites. PCA revealed concentration-dependent cluster formation for methyl mercury chloride (0.1-1 µM) and treatment-dependent cluster formation for caffeine (1-100 µM) at sub-cytotoxic concentrations. Five relevant metabolites responsible for the concentration-dependent alterations following methyl mercury chloride treatment could be identified using MS-MS fragmentation analysis: gamma-aminobutyric acid, choline, glutamine, creatine and spermine. Their respective mass ion intensities demonstrated metabolic alterations in line with the literature and suggest that these metabolites could be biomarkers for mechanisms of neurotoxicity or neuroprotection. In addition, we evaluated whether the approach could identify neurotoxic potential by testing eight compounds which have target organ toxicity in the liver, kidney or brain at sub-cytotoxic concentrations.
PCA revealed cluster formations largely dependent on target organ toxicity, indicating potential for the development of a neurotoxicity prediction model. Given these results, a validation study would be useful to determine the reliability, relevance and applicability of this approach to neurotoxicity screening. Thus, for the first time we show the benefits and utility of in vitro metabolomics to comprehensively detect neurotoxicity and to discover new biomarkers.
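The analysis pattern described above, projecting multi-metabolite profiles onto principal components and looking for treatment-dependent clusters, can be sketched with synthetic data (all values below are made up; the study used commercial software rather than this hand-rolled PCA):

```python
import numpy as np

# Synthetic metabolic profiles: rows = cultures, columns = metabolite
# intensities. Two hypothetical groups whose mean profiles differ in a few
# metabolites, mimicking a treatment-dependent shift.
rng = np.random.default_rng(0)
control = rng.normal(loc=0.0, scale=0.1, size=(10, 6))
treated = rng.normal(loc=0.0, scale=0.1, size=(10, 6))
treated[:, :2] += 1.0  # treatment shifts the first two "metabolites"

profiles = np.vstack([control, treated])
centered = profiles - profiles.mean(axis=0)

# PCA via SVD: the right singular vectors are the principal axes.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:2].T  # project onto the first two components

# The two groups should separate along PC1.
gap = abs(scores[:10, 0].mean() - scores[10:, 0].mean())
print(round(gap, 2))
```

In the study's setting, each cluster corresponds to a treatment or concentration, and the metabolites with large loadings on the separating component are the candidates for MS-MS identification.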
Abstract:
This master's thesis examines automated testing and ways of making user interface testing easier on the Symbian operating system. The thesis introduces Symbian and the challenges encountered in Symbian application development. It also discusses testing strategies and methods as well as automated testing. Finally, a tool is presented that makes it easier to create test cases for functional and system testing. Graphical user interfaces pose unique challenges for software testing. They are often built from complex components and are continually redesigned during software development. Capture-and-replay tools are often used to test graphical user interfaces. Designing and implementing test cases for user interface testing requires considerable effort. Because graphical user interfaces make up a large share of the code, considerable resources could be saved by making test case creation easier. The project implemented in the practical part pursues this goal by making the creation of test scripts visual. As a result, the test scripting language itself need not be understood, and the tests are also easier to comprehend.
Abstract:
In spite of recent advances in describing the health outcomes of exposure to nanoparticles (NPs), it still remains unclear how exactly NPs interact with their cellular targets. Size, surface, mass, geometry, and composition may all play a beneficial role as well as cause toxicity. Concerns of scientists, politicians and the public about potential health hazards associated with NPs need to be answered. With the variety of exposure routes available, there is potential for NPs to reach every organ in the body, but we know little about the impact this might have. The main objective of the FP7 NanoTEST project ( www.nanotest-fp7.eu ) was to gain a better understanding of the mechanisms by which NPs employed in nanomedicine interact with cells, tissues and organs, and to address critical issues relating to toxicity testing, especially with respect to alternatives to tests on animals. Here we describe an approach towards alternative testing strategies for hazard and risk assessment of nanomaterials, highlighting the adaptation of standard methods demanded by the special physicochemical features of nanomaterials, together with bioavailability studies. The work assessed a broad range of toxicity tests, cell models and NP types and concentrations, taking into account the inherent impact of NP properties and the effects of changes in experimental conditions using well-characterized NPs. The results of the studies have been used to generate recommendations for a suitable and robust testing strategy which can be applied to new medical NPs as they are developed.
Abstract:
A major problem in developmental neurotoxicity (DNT) risk assessment is the lack of toxicological hazard information for most compounds. Therefore, new approaches are being considered to provide adequate experimental data that allow regulatory decisions. This process requires a matching of regulatory needs on the one hand and the opportunities provided by new test systems and methods on the other. Alignment of academically and industrially driven assay development with regulatory needs in the field of DNT is a core mission of the International STakeholder NETwork (ISTNET) in DNT testing. The first meeting of ISTNET was held in Zurich on 23-24 January 2014 in order to explore the application of the adverse outcome pathway (AOP) concept to practical DNT testing. AOPs were considered promising tools to promote test systems development according to regulatory needs. Moreover, the AOP concept was identified as an important guiding principle for assembling predictive integrated testing strategies (ITSs) for DNT. The recommended road map towards AOP-based DNT testing is a stepwise approach, operating initially with incomplete AOPs for compound grouping and focussing on key events of neurodevelopment. Next steps to be considered in follow-up activities are the use of case studies to further apply the AOP concept in regulatory DNT testing, making use of AOP intersections (common key events) for the economic development of screening assays, and addressing the transition from qualitative descriptions to quantitative network modelling.
Abstract:
This paper tests the optimality of consumption decisions at the aggregate level, taking into account popular deviations from the canonical constant-relative-risk-aversion (CRRA) utility function model: rule-of-thumb behavior and habit. First, based on the critique in Carroll (2001) and Weber (2002) of the linearization and testing strategies using Euler equations for consumption, we provide extensive empirical evidence of their inappropriateness, a drawback for standard rule-of-thumb tests. Second, we propose a novel approach to test for consumption optimality in this context: nonlinear estimation coupled with return aggregation, where rule-of-thumb behavior and habit are special cases of an all-encompassing model. We estimated 48 Euler equations using GMM. At the 5% level, we rejected optimality only twice out of 48 times. Moreover, out of 24 regressions, we found the rule-of-thumb parameter to be statistically significant only twice. Hence, lack of optimality in consumption decisions represents the exception, not the rule. Finally, we found the habit parameter to be statistically significant on four occasions out of 24.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The field of "computer security" is often considered something between art and science. This is partly due to the lack of widely agreed, standardized methodologies for evaluating the degree of security of a system. This dissertation intends to contribute to this area by investigating the most common security testing strategies applied nowadays and by proposing an enhanced methodology that may be effectively applied to different threat scenarios with the same degree of effectiveness. Security testing methodologies are the first step towards standardized security evaluation processes and towards understanding how security threats evolve over time. This dissertation analyzes some of the most widely used methodologies, identifying differences and commonalities that are useful for comparing them and assessing their quality. The dissertation then proposes a new, enhanced methodology built by keeping the best of every analyzed methodology. The designed methodology is tested on different systems with very effective results, which is the main evidence that it could really be applied in practical cases. Much of the dissertation discusses and demonstrates how the presented testing methodology can be applied to such different systems, and even used to evade security measures by inverting its goals and scopes. Real cases are often hard to find in methodology documents; by contrast, this dissertation presents real and practical cases, offering technical details about how to apply the methodology. Electronic voting systems are the first field test considered, and Pvote and Scantegrity are the two electronic voting systems tested. The usability and effectiveness of the designed methodology for electronic voting systems is demonstrated through this field-case analysis. Furthermore, reputation and antivirus engines have also been analyzed, with similar results.
The dissertation concludes by presenting some general guidelines for building a coordination-based approach to electronic voting systems that improves security without decreasing system modularity.
Abstract:
REACH (registration, evaluation, authorisation and restriction of chemicals) regulation requires that all chemicals produced in or imported into Europe above 1 tonne/year be registered. To register a chemical, physicochemical, toxicological and ecotoxicological information needs to be reported in a dossier. REACH promotes the use of alternative methods to replace, refine and reduce the use of animal (eco)toxicity testing. Within the EU OSIRIS project, integrated testing strategies (ITSs) have been developed for the rational use of non-animal testing approaches in chemical hazard assessment. Here we present an ITS for evaluating the bioaccumulation potential of organic chemicals. The scheme includes the use of all available data (including non-optimal data), waiving schemes, analysis of physicochemical properties related to the endpoint, and alternative methods (both in silico and in vitro). In vivo methods are used only as a last resort. Using the ITS, in vivo testing could be waived for about 67% of the examined compounds, while their bioaccumulation potential could still be estimated on the basis of non-animal methods. The presented ITS is freely available through a web tool.
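The tiered logic of such a scheme might be sketched as follows (the thresholds, field names and waiving rule below are illustrative assumptions, not the actual OSIRIS criteria):

```python
# Hypothetical sketch of a tiered integrated testing strategy (ITS) for
# bioaccumulation: existing data first, then waiving on physicochemical
# grounds, then non-animal estimates, with in vivo testing as a last resort.

def its_bioaccumulation(chem):
    """Return the next step of the (illustrative) ITS for a chemical dossier."""
    if chem.get("measured_bcf") is not None:
        return "use existing in vivo/field data"
    # Waiving on physicochemical grounds: a low log Kow suggests negligible
    # bioaccumulation potential (the cut-off of 3.0 is illustrative).
    if chem.get("log_kow", 0.0) < 3.0:
        return "waive: low bioaccumulation potential"
    if chem.get("qsar_bcf") is not None:
        return "use in silico (QSAR) estimate"
    if chem.get("in_vitro_clearance") is not None:
        return "use in vitro biotransformation data"
    return "in vivo test (last resort)"

print(its_bioaccumulation({"log_kow": 2.1}))
# prints: waive: low bioaccumulation potential
```

Encoding the strategy as an explicit decision function is what makes the "67% of compounds waived" kind of statistic computable over a set of dossiers.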
Optimizing the aquatic toxicity assessment under REACH through an integrated testing strategy (ITS).
Abstract:
To satisfy REACH requirements, a large amount of data on the chemicals of interest must be supplied to the European Chemicals Agency. To organize the various kinds of information and help registrants choose the best strategy for obtaining the needed information while minimising the use of animal testing, integrated testing strategy (ITS) schemes can be used. The present work deals with regulatory data requirements for assessing the hazards of chemicals to the aquatic pelagic environment. We present an ITS scheme for organizing and using the complex existing data available for aquatic toxicity assessment. An ITS to optimize the choice of the correct prediction strategy for aquatic pelagic toxicity is described. All existing information (such as physico-chemical data) and all the alternative methods (such as in silico and in vitro approaches, or the acute-to-chronic ratio) are considered. Moreover, a weight-of-evidence approach to combining the available data is included.
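One simple form a weight-of-evidence combination can take is a reliability-weighted average over the available lines of evidence (the sources, toxicity values and weights below are purely illustrative, not the scheme actually used in the work):

```python
# Minimal, hypothetical weight-of-evidence sketch: each line of evidence for
# aquatic toxicity gets a reliability weight, and the combined estimate is a
# weighted average of the individual LC50 values. All numbers are made up.
evidence = [
    {"source": "QSAR prediction",      "lc50_mg_l": 4.0, "weight": 1.0},
    {"source": "read-across analogue", "lc50_mg_l": 2.5, "weight": 2.0},
    {"source": "in vitro assay",       "lc50_mg_l": 3.0, "weight": 1.5},
]

total_weight = sum(e["weight"] for e in evidence)
combined = sum(e["lc50_mg_l"] * e["weight"] for e in evidence) / total_weight

print(round(combined, 2))  # prints 3.0
```

In practice the weights would reflect study reliability scores rather than the arbitrary values used here, and concordant lines of evidence would strengthen the conclusion while discordant ones would trigger further testing.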