Abstract:
The present research project was designed to identify the typical Iowa material input values required by the Mechanistic-Empirical Pavement Design Guide (MEPDG) for Level 3 concrete pavement design. It was also designed to investigate existing equations that might be used to predict Iowa pavement concrete properties for Level 2 pavement design. In this project, over 20,000 data points were collected from the Iowa Department of Transportation (DOT) and other sources. These data, most of which were concrete compressive strength, slump, air content, and unit weight measurements, were synthesized, and their statistical parameters (such as mean values and standard deviations) were analyzed. Based on the analyses, the typical input values of Iowa pavement concrete, such as 28-day compressive strength (f'c), splitting tensile strength (fsp), elastic modulus (Ec), and modulus of rupture (MOR), were evaluated. The study indicates that the 28-day MOR of Iowa concrete is 646 ± 51 psi, very close to the MEPDG default value (650 psi). The 28-day Ec of Iowa concrete (based on only two available data points from the Iowa Curling and Warping project) is 4.82 ± 0.28 × 10^6 psi, which is quite different from the MEPDG default value (3.93 × 10^6 psi); therefore, the researchers recommend re-evaluating this value after more Iowa test data become available. The drying shrinkage (εc) of a typical Iowa concrete (C-3WR-C20 mix) was tested at the Concrete Technology Laboratory (CTL). The test results show that the ultimate shrinkage of the concrete is about 454 microstrain and that the concrete reaches 50% of its ultimate shrinkage at 32 days; both values are very close to the MEPDG defaults. The comparison of the Iowa test data and the MEPDG default values, as well as recommendations on the input values to be used in MEPDG for Iowa PCC pavement design, are summarized in Table 20 of this report. The available equations for predicting the above-mentioned concrete properties were also assembled.
The validity of these equations for Iowa concrete materials was examined. Multiple-parameter nonlinear regression analyses, along with the artificial neural network (ANN) method, were employed to investigate the relationships among Iowa concrete material properties and to modify the existing equations to suit Iowa concrete materials. However, due to a lack of the necessary data sets, the relationships between Iowa concrete properties were established based only on the limited data from the CP Tech Center's projects and ISU classes. The researchers suggest that the resulting relationships be used by Iowa pavement design engineers as references only. The present study furthermore indicates that properly documenting concrete properties, including flexural strength, elastic modulus, and concrete mix design information, is essential for updating the typical Iowa material input values and providing rational prediction equations for concrete pavement design in the future.
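The strength-based prediction equations assembled in the study follow the general pattern of relating fsp, Ec, and MOR to the square root of f'c. A minimal sketch using the widely cited ACI-type coefficients (illustrative defaults only; the Iowa-modified equations developed in the report may use different coefficients):

```python
import math

def aci_properties(fc_psi):
    """Estimate PCC properties from 28-day compressive strength (psi)
    using widely cited ACI-type relations (illustrative only; not the
    Iowa-calibrated equations from this report)."""
    return {
        "Ec":  57000.0 * math.sqrt(fc_psi),  # elastic modulus, psi
        "MOR": 7.5 * math.sqrt(fc_psi),      # modulus of rupture, psi
        "fsp": 6.7 * math.sqrt(fc_psi),      # splitting tensile strength, psi
    }

props = aci_properties(5000.0)  # e.g., a 5,000 psi paving mix
```

For a 5,000 psi mix these relations give Ec near 4.0 × 10^6 psi and MOR near 530 psi, which is why region-specific data (as collected here) matter: the Iowa Ec estimate above deviates noticeably from such defaults.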
Abstract:
Currently, no standard mix design procedure is available for CIR-emulsion in Iowa. The CIR-foam mix design process developed during the previous phase was applied to CIR-emulsion mixtures with varying emulsified asphalt contents. Dynamic modulus, dynamic creep, static creep, and raveling tests were conducted to evaluate the short- and long-term performance of CIR-emulsion mixtures at various testing temperatures and loading conditions. A potential benefit of this research is a better understanding of CIR-emulsion material properties in comparison with those of CIR-foam material, which would allow for the selection of the most appropriate CIR technology and the type and amount of the optimum stabilization material. Dynamic modulus, flow number, and flow time of CIR-emulsion mixtures using CSS-h were generally higher than those using HFMS-2p. Flow number and flow time of CIR-emulsion using RAP materials from Story County were higher than those using RAP from Clayton County. Flow number and flow time of CIR-emulsion with 0.5% emulsified asphalt were higher than those with 1.0% or 1.5%. Raveling loss of CIR-emulsion with 1.5% emulsified asphalt was significantly less than that with 0.5% or 1.0%. Test results in terms of dynamic modulus, flow number, flow time, and raveling loss of CIR-foam mixtures were generally better than those of CIR-emulsion mixtures. Given the limited RAP sources used in this study, it is recommended that the CIR-emulsion mix design procedure be validated against several RAP sources and emulsion types.
Abstract:
In response to the mandate on Load and Resistance Factor Design (LRFD) implementation by the Federal Highway Administration (FHWA) for all new bridge projects initiated after October 1, 2007, the Iowa Highway Research Board (IHRB) sponsored these research projects to develop regional LRFD recommendations. The LRFD development was performed using the Iowa Department of Transportation (DOT) Pile Load Test database (PILOT). To increase the data points for LRFD development, develop LRFD recommendations for dynamic methods, and validate the results of the LRFD calibration, 10 full-scale field tests on the most commonly used steel H-piles (e.g., HP 10 x 42) were conducted throughout Iowa. Detailed in situ soil investigations were carried out, push-in pressure cells were installed, and laboratory soil tests were performed. Pile responses during driving, at the end of driving (EOD), and at re-strikes were monitored using the Pile Driving Analyzer (PDA), followed by CAse Pile Wave Analysis Program (CAPWAP) analyses. The hammer blow counts were recorded for the Wave Equation Analysis Program (WEAP) and dynamic formulas. Static load tests (SLTs) were performed, and the pile capacities were determined based on Davisson's criterion. The extensive experimental research studies generated important data for analytical and computational investigations. The SLT-measured load-displacement curves were compared with simulated results obtained using the TZPILE program and the modified borehole shear test method. Two analytical pile setup quantification methods, expressed in terms of soil properties, were developed and validated. A new calibration procedure was developed to incorporate pile setup into LRFD.
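LRFD calibration of this kind ties measured and predicted capacities to a target reliability level. A minimal sketch of the standard closed-form reliability index for lognormally distributed resistance and load, with made-up bias factors and coefficients of variation (the study's actual calibration also incorporates pile setup and the PILOT statistics):

```python
import math

def reliability_index(lam_R, cov_R, lam_Q, cov_Q):
    """Closed-form reliability index beta for lognormal resistance R
    and load Q, given mean bias factors (lam_*) and coefficients of
    variation (cov_*). Illustrative of LRFD calibration mechanics only."""
    num = math.log((lam_R / lam_Q) *
                   math.sqrt((1 + cov_Q**2) / (1 + cov_R**2)))
    den = math.sqrt(math.log((1 + cov_R**2) * (1 + cov_Q**2)))
    return num / den

# Hypothetical statistics, not taken from the report:
beta = reliability_index(lam_R=1.1, cov_R=0.4, lam_Q=1.0, cov_Q=0.1)
```

A higher resistance bias or lower resistance scatter raises beta; resistance factors are then chosen so beta meets the target value for the pile group.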
Abstract:
In this paper we deal with the identification of dependencies between time series of equity returns. Marginal distribution functions are assumed to be known, and a bivariate chi-square test of fit is applied in a fully parametric copula approach. Several families of copulas are fitted and compared using Spanish stock market data. The results show that the t-copula generally outperforms other dependence structures, and they highlight the difficulty of fitting a significant number of bivariate data series.
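As a sketch of the copula-fitting workflow, the correlation parameter of an elliptical copula (Gaussian or t) can be estimated from ranks via Kendall's tau, rho = sin(pi·tau/2); the paper's full procedure additionally maximizes the copula likelihood and applies the bivariate chi-square test of fit. Synthetic data stand in for the Spanish equity returns:

```python
import numpy as np
from scipy.stats import kendalltau

def copula_rho_from_tau(x, y):
    """Rank-based estimate of an elliptical copula's correlation
    parameter via Kendall's tau (rho = sin(pi*tau/2)). A minimal
    sketch; it does not by itself distinguish Gaussian from t."""
    tau, _ = kendalltau(x, y)
    return np.sin(np.pi * tau / 2.0)

# Synthetic bivariate returns with true correlation 0.7:
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=2000)
rho_hat = copula_rho_from_tau(z[:, 0], z[:, 1])
```

Because the estimator uses only ranks, it is invariant to the assumed marginals, which is convenient when, as here, marginals are modeled separately from the dependence structure.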
Abstract:
Summary: The canonical Wnt signal transduction pathway is highly conserved throughout evolution. Wnts are secreted molecules that bind to the family of Frizzled receptors in a complex with the low-density lipoprotein receptor-related protein (LRP-5/6). This initial activation step leads to the stabilization and accumulation of β-catenin, first in the cytoplasm and subsequently in the nucleus, where it forms heterodimers with TCF/LEF transcription factor family members. Wnt signalling has been shown to be important during early lymphopoiesis and has more recently been suggested to be a key player in the self-renewal of haematopoietic stem cells (HSCs). Although mostly gain-of-function studies indicate that components of the Wnt signalling pathway can influence the haematopoietic system, the role of β-catenin has never been directly investigated. The aim of this thesis project is to investigate the putatively critical role of β-catenin in vivo using the Cre-loxP mediated conditional loss-of-function approach. Surprisingly, β-catenin deficient bone marrow (BM) progenitors are not impaired in their ability to self-renew and/or to reconstitute all haematopoietic lineages (myeloid, erythroid and lymphoid) in both mixed and straight bone marrow chimeras.
In addition, both thymocyte development and survival, and antigen-induced proliferation of peripheral T cells, are β-catenin independent. Our results do not necessarily exclude the possibility of an important function for β-catenin mediated Wnt signalling in the haematopoietic system; rather, they raise the possibility that β-catenin is compensated for by another protein. A prime candidate that may take over the function of β-catenin in its absence is its close relative plakoglobin, also known as γ-catenin, which shares multiple structural features with β-catenin. In order to investigate whether γ-catenin can compensate for the loss of β-catenin, we generated mice in which the haematopoietic compartment is deficient for both proteins. Combined deficiency of β-catenin and γ-catenin does not perturb Long-Term Haematopoietic Stem Cell (LT-HSC) self-renewal, but it affects an already lineage-committed progenitor population within the BM. Our results demonstrate that γ-catenin can indeed compensate for the loss of β-catenin within the haematopoietic system.
Abstract:
Introduction: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on measurements of blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. In recent decades, computer programs have been developed to assist clinicians in this task. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Method: Literature and Internet searches were performed to identify software. All programs were tested on a common personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: Twelve software tools were identified, tested, and ranked, representing a comprehensive review of the available software's characteristics. The number of drugs handled varies widely, and 8 programs offer the user the ability to add their own drug models. Ten programs are able to compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while 9 are also able to suggest an a priori dosage regimen (prior to any blood concentration measurement) based on individual patient covariates such as age, gender, and weight. Among those applying Bayesian analysis, one uses a non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. Other programs evaluated also have good potential but are less sophisticated (e.g., in terms of storage or report generation) or less user-friendly. Conclusion: Whereas two integrated programs are at the top of the ranked list, such complex tools might not fit all institutions, and each software tool must be considered with respect to the individual needs of hospitals or clinicians. Interest in computing tools to support therapeutic monitoring is still growing. Although developers have put effort into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, data storage capacity, and report generation.
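The a posteriori (Bayesian) adjustment these tools perform can be sketched as a maximum a posteriori update of an individual pharmacokinetic parameter given one measured concentration. The one-compartment model and all numbers below are hypothetical and are not taken from any of the benchmarked programs:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def map_clearance(c_obs, dose, V, t, cl_pop, omega, sigma):
    """MAP estimate of clearance CL from one observed concentration.
    Prior: log-normal around the population value cl_pop (sd omega on
    the log scale). Residual error: additive, sd sigma. One-compartment
    IV bolus model; all values are illustrative assumptions."""
    def neg_log_post(log_cl):
        cl = np.exp(log_cl)
        c_pred = (dose / V) * np.exp(-(cl / V) * t)  # 1-cpt IV bolus
        return ((c_obs - c_pred) ** 2 / (2 * sigma ** 2)
                + (log_cl - np.log(cl_pop)) ** 2 / (2 * omega ** 2))
    res = minimize_scalar(neg_log_post, bounds=(-3, 5), method="bounded")
    return float(np.exp(res.x))

# Observed level (4.0 mg/L) is higher than the population prediction,
# so the posterior pulls clearance below the population value of 2.0:
cl_hat = map_clearance(c_obs=4.0, dose=100.0, V=20.0, t=8.0,
                       cl_pop=2.0, omega=0.3, sigma=0.5)
```

The individualized clearance then feeds a dose suggestion (e.g., scaling the maintenance dose by cl_hat / cl_pop), which is the core of the a posteriori adjustment described above.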
Abstract:
Objectives: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on blood concentration measurements. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Methods: Literature and Internet searches were performed to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: Twelve software tools were identified, tested, and ranked, representing a comprehensive review of the available software characteristics. The number of drugs handled varies from 2 to more than 180, and integration of different population types is available in some programs. Eight programs offer the ability to add new drug models based on population PK data. Ten tools incorporate Bayesian computation to predict the dosage regimen (individual parameters are calculated based on population PK models). All of them are able to compute Bayesian a posteriori dosage adaptation based on a blood concentration, while 9 are also able to suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. Other programs evaluated also have good potential but are less sophisticated or less user-friendly. Conclusions: Whereas two software packages are ranked at the top of the list, such complex tools might not fit all institutions, and each program must be considered with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Although interest in TDM tools is growing and effort has been put into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, data storage capability, and automated report generation.
Abstract:
Drilled shafts have been used in the US for more than 100 years in bridges and buildings as a deep foundation alternative. For many of these applications, the drilled shafts were designed using the Working Stress Design (WSD) approach. Even though WSD has been used successfully in the past, a move toward Load and Resistance Factor Design (LRFD) for foundation applications began when the Federal Highway Administration (FHWA) issued a policy memorandum on June 28, 2000. The policy memorandum requires all new bridges initiated after October 1, 2007, to be designed according to the LRFD approach. This ensures compatibility between the superstructure and substructure designs and provides a means of consistently incorporating sources of uncertainty into each load and resistance component. Regionally calibrated LRFD resistance factors are permitted by the American Association of State Highway and Transportation Officials (AASHTO) to improve the economy and competitiveness of drilled shafts. To achieve this goal, a database for Drilled SHAft Foundation Testing (DSHAFT) has been developed. DSHAFT is aimed at assimilating high-quality drilled shaft test data from Iowa and the surrounding regions and at identifying the need for further tests in suitable soil profiles. This report introduces DSHAFT and demonstrates its features and capabilities, such as an easy-to-use storage and sharing tool providing access to key information (e.g., soil classification details and cross-hole sonic logging reports). DSHAFT embodies a model for effective, regional LRFD calibration procedures consistent with the PIle LOad Test (PILOT) database, which contains driven pile load tests accumulated from the state of Iowa. PILOT is now available for broader use at the project website: http://srg.cce.iastate.edu/lrfd/.
DSHAFT, available in electronic form at http://srg.cce.iastate.edu/dshaft/, currently comprises 32 separate load tests provided by the Illinois, Iowa, Minnesota, Missouri, and Nebraska state departments of transportation and/or departments of roads. In addition to serving as a manual for DSHAFT and providing a summary of the available data, this report provides a preliminary analysis of the load test data from Iowa and will open up opportunities for others to share their data through this quality-assured process, thereby providing a platform to improve the LRFD approach to drilled shafts, especially in the Midwest region.
Abstract:
There is a need for more efficient methods giving insight into the complex mechanisms of neurotoxicity. Testing strategies including in vitro methods have been proposed to meet this requirement. With the present study we aimed to develop a novel in vitro approach that mimics in vivo complexity, detects neurotoxicity comprehensively, and provides mechanistic insight. For this purpose we combined rat primary re-aggregating brain cell cultures with a mass spectrometry (MS)-based metabolomics approach. As a proof of principle, we treated developing re-aggregating brain cell cultures for 48 h with the neurotoxicant methyl mercury chloride (0.1-100 µM) and the brain stimulant caffeine (1-100 µM) and acquired cellular metabolic profiles. To detect toxicant-induced metabolic alterations, the profiles were analysed using commercial software, which revealed patterns in the multi-parametric dataset by principal component analysis (PCA) and recognised the most significantly altered metabolites. PCA revealed concentration-dependent cluster formation for methyl mercury chloride (0.1-1 µM) and treatment-dependent cluster formation for caffeine (1-100 µM) at sub-cytotoxic concentrations. Five relevant metabolites responsible for the concentration-dependent alterations following methyl mercury chloride treatment were identified using MS-MS fragmentation analysis: gamma-aminobutyric acid, choline, glutamine, creatine, and spermine. Their respective mass ion intensities demonstrated metabolic alterations in line with the literature and suggest that these metabolites could be biomarkers for mechanisms of neurotoxicity or neuroprotection. In addition, we evaluated whether the approach could identify neurotoxic potential by testing eight compounds that have target organ toxicity in the liver, kidney or brain at sub-cytotoxic concentrations.
PCA revealed cluster formation largely dependent on target organ toxicity, indicating potential for the development of a neurotoxicity prediction model. Based on these results, a validation study would be useful to determine the reliability, relevance, and applicability of this approach to neurotoxicity screening. Thus, for the first time, we show the benefits and utility of in vitro metabolomics for comprehensively detecting neurotoxicity and discovering new biomarkers.
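The pattern-detection step can be sketched with an ordinary PCA on a samples-by-metabolites intensity matrix; synthetic data stand in for the MS profiles, and the commercial software used in the study is not assumed:

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows = culture samples, columns = metabolite ion intensities
# (synthetic stand-in data, not the study's dataset).
rng = np.random.default_rng(1)
control = rng.normal(0.0, 1.0, size=(10, 50))
treated = rng.normal(0.0, 1.0, size=(10, 50))
treated[:, :5] += 3.0          # a few metabolites shift with treatment

X = np.vstack([control, treated])
scores = PCA(n_components=2).fit_transform(X)

# Treated and control samples separate along a leading component,
# the kind of cluster formation described above.
sep = max(abs(scores[:10, i].mean() - scores[10:, i].mean())
          for i in range(2))
```

In practice the loadings of the separating component point back to the metabolites driving the clusters, which is how candidate biomarkers such as those listed above are shortlisted before MS-MS confirmation.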
Abstract:
Understanding and anticipating biological invasions can focus either on traits that favour species invasiveness or on features of the receiving communities, habitats or landscapes that promote their invasibility. Here, we address invasibility at the regional scale, testing whether some habitats and landscapes are more invasible than others by fitting models that relate alien plant species richness to various environmental predictors. We use a multi-model information-theoretic approach to assess invasibility by modelling spatial and ecological patterns of alien invasion in landscape mosaics and testing competing hypotheses of environmental factors that may control invasibility. Because invasibility may be mediated by particular characteristics of invasiveness, we classified alien species according to their C-S-R plant strategies. We illustrate this approach with a set of 86 alien species in Northern Portugal. We first focus on predictors influencing species richness and expressing invasibility, and then evaluate whether distinct plant strategies respond to the same or different groups of environmental predictors. We confirmed climate as a primary determinant of alien invasions and as the primary environmental gradient determining landscape invasibility. The effects of secondary gradients were detected only when the area was sub-sampled according to predictions based on the primary gradient. Multiple predictor types then influenced patterns of alien species richness, with some (landscape composition, topography and fire regime) prevailing over others. Alien species richness responded most strongly to extreme land management regimes, suggesting that intermediate disturbance induces biotic resistance by favouring native species richness. Land-use intensification facilitated alien invasion, whereas conservation areas hosted few invaders, highlighting the importance of ecosystem stability in preventing invasions.
Plants with different strategies exhibited different responses to environmental gradients, particularly when the variations of the primary gradient were narrowed by sub-sampling. Such differential responses of plant strategies suggest using distinct control and eradication approaches for different areas and alien plant groups.
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine a critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that often are misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the observed difference, or one more extreme, given that the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
Abstract:
The objective of the investigation was the development of a test that would readily identify the potential of an aggregate to cause D-cracking because of its susceptibility to critical saturation. A Press-Ur-Meter was modified by replacing the air chamber with a one-inch diameter plastic tube calibrated in milli-. It was concluded that the pore index was sufficiently reliable to determine the D-cracking potential of limestone aggregates in all but a few cases where marginal results were obtained. Consistently poor or good results were always in agreement with established service records or concrete durability testing. In those instances where marginal results are obtained, the results of concrete durability testing should be considered when making the final determination of the D-cracking susceptibility of the aggregate in question. The following applications of the pore index test have been recommended for consideration: concrete durability testing should be discontinued in the evaluation of new aggregate sources with pore index results between 0 and 20 (Class 2 durability) or over 35 (Class 1 durability); composite aggregates with intermediate pore index results of 20-35 should have each stone type tested to facilitate the possible removal of low-durability stone from the production process; and additional investigation should be made to evaluate the possibility of using the test to monitor and upgrade the acceptance of aggregate from sources associated with D-cracking.
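The recommended handling of a new aggregate source by pore index can be encoded as a simple decision rule. This is an illustrative reading of the recommendations above, not an Iowa DOT specification; the class boundaries are those quoted in the abstract:

```python
def pore_index_action(pore_index):
    """Map a pore index result to the recommended course of action,
    following the boundaries quoted in the abstract (0-20 and over 35:
    discontinue concrete durability testing; 20-35: marginal)."""
    if pore_index <= 20 or pore_index > 35:
        return "classified: concrete durability testing may be discontinued"
    return "marginal: test each stone type; weigh concrete durability results"

action = pore_index_action(27)  # an intermediate, marginal result
```

Marginal (20-35) results are exactly the cases where the abstract says durability testing should still inform the final D-cracking determination.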
Abstract:
Cardiovascular risk assessment might be improved with the addition of emerging new tests derived from atherosclerosis imaging, laboratory tests, or functional tests. This article reviews relative risk, odds ratios, receiver-operating characteristic curves, post-test risk calculations based on likelihood ratios, the net reclassification improvement, and integrated discrimination. These serve to determine whether a new test has added clinical value on top of conventional risk testing and how this can be verified statistically. Two clinically meaningful examples illustrate the novel approaches. This work serves as a review and as groundwork for the development of new guidelines on cardiovascular risk prediction, taking emerging tests into account, to be proposed in the future by members of the 'Taskforce on Vascular Risk Prediction' under the auspices of the Working Group 'Swiss Atherosclerosis' of the Swiss Society of Cardiology.
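Of the measures reviewed, the net reclassification improvement has particularly simple arithmetic: events reclassified upward and non-events reclassified downward by the new test count as gains. A sketch with made-up reclassification counts:

```python
def net_reclassification_improvement(up_ev, down_ev, n_ev,
                                     up_ne, down_ne, n_ne):
    """Categorical NRI from reclassification counts: for people with
    events, upward moves are correct; for people without events,
    downward moves are correct. Counts below are illustrative only."""
    nri_events = (up_ev - down_ev) / n_ev
    nri_nonevents = (down_ne - up_ne) / n_ne
    return nri_events + nri_nonevents

# Hypothetical study: 100 events, 900 non-events.
nri = net_reclassification_improvement(up_ev=15, down_ev=5, n_ev=100,
                                       up_ne=40, down_ne=80, n_ne=900)
```

A positive NRI indicates the new test moves people into more appropriate risk categories on balance; here the event and non-event components contribute 0.10 and about 0.044 respectively.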
Abstract:
OBJECTIVES: To obtain information about the prevalence of, reasons for, and adequacy of HIV testing in the general population in Switzerland in 1992. DESIGN: Telephone survey (n = 2800). RESULTS: Some 47% of the sample had undergone an HIV test, performed through blood donation (24%), voluntary testing (17%), or both (6%). Of the sample, 46% considered themselves well or very well informed about the HIV test. Respondents reported unsystematic pre-test screening by doctors for the main HIV risks. People who had been in situations of potential exposure to risk were more likely to have had the test than others. Overall, 85% of those tested had a relevant, generally risk-related reason for having the test performed. CONCLUSIONS: HIV testing is widespread in Switzerland. Testing is mostly performed for relevant reasons. Pre-test counselling is poor, and an opportunity for prevention is thus lost.
Abstract:
Research has shown that one of the major contributing factors in early joint deterioration of portland cement concrete (PCC) pavement is the quality of the coarse aggregate. Conventional physical and freeze/thaw tests are slow and not satisfactory for evaluating aggregate quality. Over the last ten years, the Iowa DOT has been evaluating X-ray analysis and other new technologies to predict aggregate durability in PCC pavement. The objective of this research is to evaluate thermogravimetric analysis (TGA) of carbonate aggregate. The TGA testing has been conducted with a TA 2950 Thermogravimetric Analyzer controlled by an IBM-compatible computer. A "TA Hi-RES" (trademark) software package allows for rapid testing while retaining high resolution. The carbon dioxide is driven off the dolomite fraction between 705°C and 745°C and off the calcite fraction between 905°C and 940°C. Graphical plots of temperature and weight loss using the same sample size and test procedure demonstrate that the test is very accurate and repeatable. A substantial number of both dolomites and limestones (calcites) have been subjected to TGA testing. The slopes of the weight-loss plot prior to the dolomite and calcite transitions do correlate with field performance. The noncarbonate fraction, which correlates with the acid insolubles, can be determined by TGA for most calcites and some dolomites. TGA has provided information that can be used to help predict the quality of carbonate aggregate.
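The weight losses in the two transition windows can be converted to mineral fractions by CO2 stoichiometry. This is a sketch assuming, per the description above, that all of the dolomite's CO2 evolves in the 705-745°C window and all of the calcite's in the 905-940°C window; the Iowa DOT's actual data reduction may differ:

```python
# Molar masses (g/mol): CO2, calcite CaCO3, dolomite CaMg(CO3)2.
M_CO2, M_CALCITE, M_DOLOMITE = 44.01, 100.09, 184.40

def carbonate_fractions(loss_705_745, loss_905_940, sample_mass):
    """Convert TGA weight losses (same units as sample_mass) in the
    dolomite and calcite windows into mass fractions, assuming each
    window captures all CO2 from its mineral (two CO2 per dolomite
    formula unit, one per calcite)."""
    dolomite = loss_705_745 * M_DOLOMITE / (2 * M_CO2)
    calcite = loss_905_940 * M_CALCITE / M_CO2
    noncarb = sample_mass - dolomite - calcite  # ~ acid insolubles
    return (dolomite / sample_mass,
            calcite / sample_mass,
            noncarb / sample_mass)

# Hypothetical 100 mg sample losing 10 mg and 15 mg in the two windows:
frac_dol, frac_cal, frac_nc = carbonate_fractions(10.0, 15.0, 100.0)
```

The residual term illustrates how the noncarbonate fraction mentioned above falls out of the same measurement for mostly carbonate samples.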