993 results for Predictive Testing
Abstract:
The characterization and categorization of coarse aggregates for use in portland cement concrete (PCC) pavements is a highly refined process at the Iowa Department of Transportation. Over the past 10 to 15 years, much effort has been directed at pursuing direct testing schemes to supplement or replace existing physical testing schemes. Direct testing refers to the process of directly measuring the chemical and mineralogical properties of an aggregate and then attempting to correlate those measured properties to historical performance information (i.e., field service record). This is in contrast to indirect measurement techniques, which generally attempt to extrapolate the performance of laboratory test specimens to expected field performance. The purpose of this research project was to investigate and refine the use of direct testing methods, such as X-ray analysis techniques and thermal analysis techniques, to categorize carbonate aggregates for use in portland cement concrete. The results of this study indicated that the general testing methods that are currently used to obtain data for estimating service life tend to be very reliable and have good to excellent repeatability. Several changes in the current techniques were recommended to enhance the long-term reliability of the carbonate database. These changes can be summarized as follows: (a) Limits that are more stringent need to be set on the maximum particle size in the samples subjected to testing. This should help to improve the reliability of all three of the test methods studied during this project. (b) X-ray diffraction testing needs to be refined to incorporate the use of an internal standard. This will help to minimize the influence of sample positioning errors and it will also allow for the calculation of the concentration of the various minerals present in the samples. (c) Thermal analysis data needs to be corrected for moisture content and clay content prior to calculating the carbonate content of the sample.
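The moisture correction described in recommendation (c) is essentially an arithmetic step: express the thermogravimetric mass loss on a dry basis before converting the CO2 loss to an equivalent carbonate content. The snippet below is only an illustrative sketch of that calculation, not the Iowa DOT test procedure; the mass losses, the attribution of the low-temperature loss to free and clay-bound moisture, and the reporting as equivalent CaCO3 are assumptions made for the example.

```python
# Illustrative moisture-corrected carbonate calculation from thermal analysis data.
# Assumes the mass losses have already been read off the TGA curve; all values are hypothetical.

M_CO2_PER_CACO3 = 44.01 / 100.09  # mass fraction of CO2 released when CaCO3 decomposes


def carbonate_content(m_initial_mg, loss_moisture_mg, loss_co2_mg):
    """Return carbonate content (as % equivalent CaCO3) on a dry-sample basis.

    m_initial_mg     -- initial sample mass
    loss_moisture_mg -- mass loss attributed to free and clay-bound moisture
    loss_co2_mg      -- mass loss in the carbonate decomposition region
    """
    m_dry = m_initial_mg - loss_moisture_mg   # dry-basis reference mass
    caco3 = loss_co2_mg / M_CO2_PER_CACO3     # CO2 loss back-calculated to CaCO3
    return 100.0 * caco3 / m_dry


# Example: 1000 mg sample, 12 mg moisture loss, 380 mg CO2 loss -> about 87.5% CaCO3 (dry basis)
print(round(carbonate_content(1000.0, 12.0, 380.0), 1))
```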
Abstract:
The amount of asphalt cement in asphaltic concrete has a definite effect on its durability under adverse conditions. The expansion of the transportation system to more and heavier loads has also made the percentage of asphalt cement in a mix more critical. The laboratory mixer does not duplicate the mixing effect of the large pugmills; therefore, it is impossible to be completely sure of the asphalt cement needed for each mix. This percentage quite often must be varied in the field. With a central testing laboratory and the high production of mixing plants today, a large amount of asphaltic concrete is produced before a sample can be tested to determine whether the asphalt content is correct. If the asphalt content lowers the durability or stability of a mix, more maintenance will be required in the future. The purpose of this project was to determine the value of a mobile laboratory in the field, the feasibility of providing adequate, early testing in the field, and the degree of correlation with the central laboratory. The primary objective was to determine, as soon as possible, the best percentage of asphalt for each mix.
Abstract:
In a prospective study, total hip arthroplasty (THA) patients were assessed preoperatively and postoperatively (n = 95) to determine whether tender points (TPs) are associated with poor THA outcomes. Patients with high follow-up TP counts had higher visual analog scale (VAS) scores for pain and sleep, higher follow-up Western Ontario and McMaster Universities Arthritis Index (pain, stiffness, function) scores, and lower Health Assessment Questionnaire, Harris Hip, and Short Form 36 (physical functioning, bodily pain, physical component summary) scores. High follow-up TP counts were associated with increased pain, pain not relieved by surgery, poor function, and poor sleep. VAS pain and sleep, Short Form 36 (physical functioning, bodily pain), Western Ontario and McMaster Universities Arthritis Index, Health Assessment Questionnaire, and Harris Hip scores improved significantly after THA; TP counts did not. Higher preoperative TP counts were predictive of higher follow-up TP counts but were poorly predictive of poor outcome measures after surgery in individual patients, suggesting that preoperative TPs should not be regarded as a contraindication to THA.
Abstract:
Due to an equipment malfunction, too much sand was used in the concrete on the bridge floor placed on August 9, 1994, in Washington County, Project No. BRF-22-2(36)38-92. Freeze-thaw durability testing of cores taken from the concrete in question and the other two concretes not in question was performed. The experimental results indicate that the concrete in question is considered at least as durable and resistant to freeze-thaw damage as the concretes which are not in question. The concrete in question can be expected to function properly for the regular service life of the bridge.
Abstract:
In May 1950 a proposal for a research project was submitted to the newly formed Iowa Highway Research Board for consideration and action. This project, designated RPS1 by the Board, encompassed the study, development, and preparation of preliminary plans and specifications for the construction of a wheel track to be used in the accelerated testing of highway pavements. The device envisioned in the proposal was a circular track about seventy-five feet in diameter, equipped with a suitable automobile-tired device to test pavements about five feet in width laid into the track under regular construction practices by small-scale construction equipment. The Board, upon review, revised and expanded the basic concepts of the project. The project as revised by the Board included a study of the feasibility of developing, constructing, and operating an accelerated testing track in which pavements, bases, and subgrades may be laid one full lane, or at least ten feet, in width by full-size construction equipment in conformity with usual construction practices. The pavements so laid are to be subjected, during testing, to conditions simulating actual traffic as nearly as possible.
Abstract:
Due to frequent accidental damage to prestressed concrete (P/C) bridges caused by impact from overheight vehicles, a project was initiated to evaluate the strength and load distribution characteristics of damaged P/C bridges. A comprehensive literature review was conducted. It was concluded that only a few references pertain to the assessment and repair of damaged P/C beams. No reference was found that involves testing of a damaged bridge as well as the damaged beams following their removal. Structural testing of two bridges was conducted in the field. The first bridge tested, damaged by accidental impact, was the westbound (WB) I-680 bridge in Beebeetown, Iowa. This bridge had significant damage to the first and second beams consisting of extensive loss of section and the exposure of numerous strands. The second bridge, the adjacent eastbound (EB) structure, was used as a baseline of the behavior of an undamaged bridge. Load testing concluded that a redistribution of load away from the damaged beams of the WB bridge was occurring. Subsequent to these tests, the damaged beams in the WB bridge were replaced and the bridge retested. The repaired WB bridge behaved, for the most part, like the undamaged EB bridge, indicating that the beam replacement restored the original live load distribution patterns. A large-scale bridge model constructed for a previous project was tested to study the changes in behavior due to incrementally applied damage, consisting initially of concrete removal only and then of concrete removal and strand damage. A total of 180 tests were conducted, with the general conclusion that, for exterior beam damage, the bridge load distribution characteristics were relatively unchanged until significant portions of the bottom flange were removed along with several strands. A large amount of the total applied moment to the exterior beam was redistributed to the interior beam of the model. Four isolated P/C beams were tested, two removed from the Beebeetown bridge and two from the aforementioned bridge model. For the Beebeetown beams, the first beam, Beam 1W, was tested in an "as removed" condition to obtain the baseline characteristics of a damaged beam. The second beam, Beam 2W, was retrofitted with carbon fiber reinforced polymer (CFRP) longitudinal plates and transverse stirrups to strengthen the section. The strengthened beam was 12% stronger than Beam 1W. Beams 1 and 2 from the bridge model were also tested. Beam 1 was not damaged and served as the baseline for the behavior of a "new" beam, while Beam 2 was damaged and repaired, again using CFRP plates. Prior to debonding of the plates from the beam, the behavior of both Beams 1 and 2 was similar. The retrofitted beam attained a capacity greater than that of a theoretically undamaged beam prior to plate debonding. Analytical models were created for the undamaged and damaged center spans of the WB bridge; stiffened plate and refined grillage models were used. Both models were accurate at predicting the deflections in the tested bridge and should be similarly accurate in modeling other P/C bridges. The moment fractions per beam were computed using both models for the undamaged and damaged bridges. The damaged model indicates a significant decrease in moment in the damaged beams and a redistribution of load to the adjacent curb and rail as well as to the undamaged beam lines.
Abstract:
Glioblastomas are the most malignant gliomas, with median survival times of only 15 months despite modern therapies. All standard treatments are palliative. Pathogenetic factors are diverse; hence, stratified treatment plans are warranted considering the molecular heterogeneity among these tumors. However, most patients are treated with "one fits all" standard therapies, many of them with minor response and major toxicities. The integration of clinical and molecular information, now becoming available using new tools such as gene arrays, proteomics, and molecular imaging, will take us to an era where more targeted and effective treatments may be implemented. A first step towards the design of such therapies is the identification of relevant molecular mechanisms driving the aggressive biological behavior of glioblastoma. The accumulation of diverse aberrations in regulatory processes enables tumor cells to bypass the effects of most classical therapies available. Molecular alterations underlying such mechanisms comprise aberrations at the genetic level, such as point mutations of distinct genes, or amplifications and deletions, while others result from epigenetic modifications such as aberrant methylation of CpG islands in the regulatory sequences of genes. Epigenetic silencing of the MGMT gene, which encodes a DNA repair enzyme, was recently found to be of predictive value in a randomized clinical trial in newly diagnosed glioblastoma that tested the addition of the alkylating agent temozolomide to standard radiotherapy. Determination of the methylation status of the MGMT promoter may become the first molecular diagnostic tool to identify patients most likely to respond, allowing individually tailored therapy in glioblastoma. To date, the test for MGMT methylation status is the only tool available that may direct the choice of alkylating agents in glioblastoma patients, but many others may become part of an arsenal for stratifying patients to the respective targeted therapies within the next few years.
Abstract:
Ga³⁺ is a semimetal element that competes for the iron-binding sites of transporters and enzymes. We investigated the activity of gallium maltolate (GaM), an organic gallium salt with high solubility, against laboratory and clinical strains of methicillin-susceptible Staphylococcus aureus (MSSA), methicillin-resistant S. aureus (MRSA), methicillin-susceptible Staphylococcus epidermidis (MSSE), and methicillin-resistant S. epidermidis (MRSE) in logarithmic or stationary phase and in biofilms. The MICs of GaM were higher for S. aureus (375 to 2,000 µg/ml) than for S. epidermidis (94 to 200 µg/ml). Minimal biofilm inhibitory concentrations were 3,000 to ≥6,000 µg/ml (S. aureus) and 94 to 3,000 µg/ml (S. epidermidis). In time-kill studies, GaM exhibited slow, dose-dependent killing, with maximal action at 24 h of 1.9 log₁₀ CFU/ml (MSSA) and 3.3 log₁₀ CFU/ml (MRSA) against S. aureus at 3× MIC, and 2.9 log₁₀ CFU/ml (MSSE) and 4.0 log₁₀ CFU/ml (MRSE) against S. epidermidis at 10× MIC. In calorimetric studies, growth-related heat production was inhibited by GaM at subinhibitory concentrations, and the minimal heat inhibition concentrations were 188 to 4,500 µg/ml (MSSA), 94 to 1,500 µg/ml (MRSA), and 94 to 375 µg/ml (MSSE and MRSE), which correlated well with the MICs. Thus, calorimetry was a fast, accurate, and simple method useful for investigating antimicrobial activity at subinhibitory concentrations. In conclusion, GaM exhibited activity against staphylococci in different growth phases, including stationary phase and biofilms, but high concentrations were required. These data support the potential topical use of GaM, including its use for the treatment of wound infections, MRSA decolonization, and the coating of implants.
Abstract:
It is estimated that around 230 people die each year due to radon (²²²Rn) exposure in Switzerland. ²²²Rn occurs mainly in closed environments like buildings and originates primarily from the subjacent ground. It therefore depends strongly on geology and shows substantial regional variations. Correct identification of these regional variations would lead to a substantial reduction of the population's ²²²Rn exposure through appropriate construction of new buildings and mitigation of existing ones. Prediction of indoor ²²²Rn concentrations (IRC) and identification of ²²²Rn-prone areas is, however, difficult, since IRC depend on a variety of variables such as building characteristics, meteorology, geology and anthropogenic factors. The present work aims at developing predictive models of IRC in Switzerland and at improving the understanding of IRC, taking into account as much information as possible in order to minimize prediction uncertainty. The predictive maps will be used as a decision-support tool for ²²²Rn risk management. The construction of these models is based on different data-driven statistical methods, in combination with geographical information systems (GIS). In a first phase we performed univariate analyses of IRC for different variables, namely detector type, building category, foundation, year of construction, average outdoor temperature during measurement, altitude and lithology. All variables showed significant associations with IRC. Buildings constructed after 1900 showed significantly lower IRC compared with earlier constructions, and we observed a further drop in IRC after 1970. We also found an association of IRC with altitude. With regard to lithology, we observed the lowest IRC in sedimentary rocks (excluding carbonates) and sediments, and the highest IRC in the Jura carbonates and igneous rock. The IRC data were systematically analyzed for potential bias due to spatially unbalanced sampling of measurements. In order to facilitate the modeling and the interpretation of the influence of geology on IRC, we developed an algorithm based on k-medoids clustering which permits the definition of geological classes that are coherent in terms of IRC. We also performed a soil gas ²²²Rn concentration (SRC) measurement campaign in order to determine the predictive power of SRC with respect to IRC, and found that the use of SRC for IRC prediction is limited. The second part of the project was dedicated to predictive mapping of IRC using models which take into account the multidimensionality of the process of ²²²Rn entry into buildings. We used kernel regression and ensemble regression trees for this purpose. We could explain up to 33% of the variance of the log-transformed IRC across Switzerland, a good performance compared with former attempts at IRC modeling in Switzerland. As predictor variables we considered geographical coordinates, altitude, outdoor temperature, building type, foundation, year of construction and detector type. Ensemble regression trees such as random forests make it possible to determine the role of each IRC predictor in a multidimensional setting. We found spatial information such as geology, altitude and coordinates to have a stronger influence on IRC than building-related variables such as foundation type, building type and year of construction. Based on kernel estimation, we developed an approach to determine the local probability of IRC exceeding 300 Bq/m³. In addition, we developed a confidence index to provide an estimate of the uncertainty of the map.
All methods allow easy creation of tailor-made maps for different building characteristics. Our work is an essential step towards a ²²²Rn risk assessment which accounts simultaneously for different architectural situations as well as for geological and geographical conditions. For the communication of ²²²Rn hazard to the population, we recommend using the probability map based on kernel estimation. The communication of ²²²Rn hazard could, for example, be implemented via a web interface where users specify the characteristics and coordinates of their home in order to obtain the probability of exceeding a given IRC, together with a corresponding index of confidence. Taking into account the health effects of ²²²Rn, our results have the potential to substantially improve the estimation of the effective dose from ²²²Rn delivered to the Swiss population.
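As a rough illustration of the kernel-based exceedance-probability idea mentioned above (not the authors' implementation), a Nadaraya-Watson estimate of P(IRC > 300 Bq/m³) at a query location weights nearby measurements with a spatial kernel; the coordinates, bandwidth and synthetic data below are assumptions.

```python
import numpy as np

def exceedance_probability(xy_obs, irc_obs, xy_query, bandwidth_m=5000.0, threshold=300.0):
    """Kernel (Nadaraya-Watson) estimate of P(IRC > threshold) at query locations.

    xy_obs   -- (n, 2) measurement coordinates in metres
    irc_obs  -- (n,) indoor radon concentrations in Bq/m^3
    xy_query -- (m, 2) locations where the probability is wanted
    """
    exceed = (irc_obs > threshold).astype(float)             # 0/1 indicator per measurement
    d2 = ((xy_query[:, None, :] - xy_obs[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-0.5 * d2 / bandwidth_m**2)                   # Gaussian kernel weights
    return (w * exceed).sum(axis=1) / w.sum(axis=1)          # locally weighted exceedance rate

# Synthetic example: 500 measurements scattered over a 50 km x 50 km area
rng = np.random.default_rng(0)
xy = rng.uniform(0, 50_000, size=(500, 2))
irc = rng.lognormal(mean=4.5, sigma=0.9, size=500)           # roughly radon-like values in Bq/m^3
query = np.array([[25_000.0, 25_000.0]])
print(exceedance_probability(xy, irc, query))
```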
Abstract:
We propose new methods for evaluating predictive densities. The methods include Kolmogorov-Smirnov and Cramér-von Mises-type tests for the correct specification of predictive densities robust to dynamic mis-specification. The novelty is that the tests can detect mis-specification in the predictive densities even if it appears only over a fraction of the sample, due to the presence of instabilities. Our results indicate that our tests are well sized and have good power in detecting mis-specification in predictive densities, even when it is time-varying. An application to density forecasts of the Survey of Professional Forecasters demonstrates the usefulness of the proposed methodologies.
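For a sense of what a specification test for predictive densities involves, the classical full-sample version computes probability integral transforms (PITs) of the realizations under the predictive distributions and tests them for uniformity with a Kolmogorov-Smirnov statistic. The sketch below shows only that baseline idea with an assumed Gaussian predictive density; it does not reproduce the instability-robust tests proposed in the abstract.

```python
import numpy as np
from scipy import stats

def pit_ks_test(realizations, pred_means, pred_stds):
    """Kolmogorov-Smirnov test of correct specification of Gaussian predictive densities.

    Under correct specification, the PITs z_t = F_t(y_t) are i.i.d. Uniform(0, 1).
    """
    pits = stats.norm.cdf(realizations, loc=pred_means, scale=pred_stds)
    return stats.kstest(pits, "uniform")   # KS statistic and p-value against U(0, 1)

# Toy example: forecasts are correctly specified, so the test should not reject
rng = np.random.default_rng(1)
mu = rng.normal(size=200)                       # predictive means
y = mu + rng.normal(scale=1.0, size=200)        # realizations with unit forecast std
print(pit_ks_test(y, mu, np.ones(200)))
```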
Abstract:
This work extends previously developed research concerning the use of local model predictive control in differential-drive mobile robots. Hence, experimental results are presented as a way to improve the methodology by considering aspects such as trajectory accuracy and time performance. In this sense, the cost function and the prediction horizon are important aspects to be considered. The aim of the present work is to test the control method by measuring trajectory-tracking accuracy and time performance. Moreover, strategies for the integration with the perception system and path planning are briefly introduced. In this sense, monocular image data can be used to plan safe trajectories by using goal-attraction potential fields.
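A minimal sketch of the receding-horizon idea behind local model predictive control of a differential-drive (unicycle) robot is given below: at each step, a finite prediction horizon is simulated for candidate controls and the pair minimizing a quadratic tracking-plus-effort cost is applied. The kinematic model, cost weights, horizon and candidate grid are illustrative assumptions, not the controller evaluated in the abstract.

```python
import numpy as np

def simulate(state, v, w, dt, horizon):
    """Roll out a unicycle model (x, y, theta) under constant controls (v, w)."""
    x, y, th = state
    traj = []
    for _ in range(horizon):
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += w * dt
        traj.append((x, y))
    return np.array(traj)

def local_mpc_step(state, ref_traj, dt=0.1, horizon=10, q=1.0, r=0.05):
    """Pick the (v, w) pair minimizing a quadratic tracking + effort cost over the horizon."""
    best_cost, best_u = np.inf, (0.0, 0.0)
    for v in np.linspace(0.0, 0.5, 11):          # candidate linear velocities [m/s]
        for w in np.linspace(-1.0, 1.0, 21):     # candidate angular velocities [rad/s]
            pred = simulate(state, v, w, dt, horizon)
            cost = q * np.sum((pred - ref_traj[:horizon]) ** 2) + r * (v**2 + w**2)
            if cost < best_cost:
                best_cost, best_u = cost, (v, w)
    return best_u

# Toy usage: follow a straight-line reference along the x axis
ref = np.column_stack([np.linspace(0.1, 1.0, 10), np.zeros(10)])
print(local_mpc_step((0.0, 0.05, 0.0), ref))
```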
Abstract:
This research extends previously developed work concerning the use of local model predictive control in mobile robots. Hence, experimental results are presented as a way to improve the methodology by considering aspects such as trajectory accuracy and time performance. In this sense, the cost function and the prediction horizon are important aspects to be considered. The platform used is a differential-drive robot with a free rotating wheel. The aim of the present work is to test the control method by measuring trajectory-tracking accuracy and time performance. Moreover, strategies for the integration with the perception system and path planning are also introduced. In this sense, monocular image data provide an occupancy grid over which safe trajectories are computed by using goal-attraction potential fields.
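The goal-attraction potential-field planning mentioned above can be sketched as follows: an occupancy grid contributes a repulsive term around occupied cells, a quadratic attraction pulls toward the goal, and a path is obtained by descending the combined field. The grid size, gains and greedy descent below are assumptions for illustration only.

```python
import numpy as np

def potential_field(occupancy, goal, k_att=1.0, k_rep=40.0):
    """Combine a goal-attraction potential with a repulsive term around occupied cells."""
    h, w = occupancy.shape
    ys, xs = np.mgrid[0:h, 0:w]
    attraction = k_att * ((xs - goal[0]) ** 2 + (ys - goal[1]) ** 2)   # quadratic pull to goal
    repulsion = np.zeros_like(attraction, dtype=float)
    for oy, ox in zip(*np.nonzero(occupancy)):                         # occupied cells push away
        d2 = (xs - ox) ** 2 + (ys - oy) ** 2 + 1.0
        repulsion += k_rep / d2
    return attraction + repulsion

def descend(field, start, steps=200):
    """Greedy steepest-descent path over the potential field (8-connected grid)."""
    path, pos = [start], start
    for _ in range(steps):
        y, x = pos
        nbrs = [(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if 0 <= y + dy < field.shape[0] and 0 <= x + dx < field.shape[1]]
        pos = min(nbrs, key=lambda p: field[p])
        path.append(pos)
    return path

# Toy usage: 20x20 grid with a small obstacle block, start near one corner, goal near the other
grid = np.zeros((20, 20), dtype=int)
grid[8:12, 8:12] = 1
field = potential_field(grid, goal=(18, 18))
print(descend(field, start=(1, 1))[-1])     # should end at or near the goal cell
```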
Abstract:
Testing whether or not data could have been generated by a family of extreme value copulas is difficult. We generalize a test and prove that it can be applied regardless of the alternative hypothesis. We also study the effect of using different extreme value copulas in the context of risk estimation. To measure the risk we use a quantile. Our results are motivated by a bivariate sample of losses from a real database of auto insurance claims. Methods are implemented in R.
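One defining property such tests build on is max-stability: an extreme value copula satisfies C(u^t, v^t) = C(u, v)^t for every t > 0. The sketch below is only a naive empirical check of that property on pseudo-observations (no p-value; the generalized test of the abstract is a formal procedure implemented in R), with the evaluation grid, the value of t and the simulated data as assumptions.

```python
import numpy as np

def empirical_copula(u, v, grid):
    """Empirical copula C_n evaluated on a grid of points in (0, 1)."""
    return np.array([[np.mean((u <= a) & (v <= b)) for b in grid] for a in grid])

def max_stability_statistic(x, y, t=2.0, grid=None):
    """Sup-distance between C_n(u^t, v^t) and C_n(u, v)^t; small values are
    consistent with an extreme value copula (max-stability)."""
    if grid is None:
        grid = np.linspace(0.1, 0.9, 9)
    n = len(x)
    u = (np.argsort(np.argsort(x)) + 1.0) / (n + 1.0)   # pseudo-observations (scaled ranks)
    v = (np.argsort(np.argsort(y)) + 1.0) / (n + 1.0)
    c_t = empirical_copula(u, v, grid ** t)              # C_n evaluated at (u^t, v^t)
    c = empirical_copula(u, v, grid)
    return np.max(np.abs(c_t - c ** t))

# Toy usage on independent data (the independence copula is an extreme value copula)
rng = np.random.default_rng(2)
x, y = rng.normal(size=500), rng.normal(size=500)
print(max_stability_statistic(x, y))
```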
Abstract:
This paper proposes new methodologies for evaluating out-of-sample forecasting performance that are robust to the choice of the estimation window size. The methodologies involve evaluating the predictive ability of forecasting models over a wide range of window sizes. We show that the tests proposed in the literature may lack the power to detect predictive ability and might be subject to data snooping across different window sizes if used repeatedly. An empirical application shows the usefulness of the methodologies for evaluating exchange rate models' forecasting ability.
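The basic exercise behind such robust evaluation can be sketched as follows: compute the out-of-sample loss difference between a forecasting model and a benchmark for every estimation window size in a range, and inspect the whole profile rather than a single window. The snippet below only illustrates that loop with an assumed predictive regression and a zero-forecast benchmark; it does not implement the paper's test statistics or critical values.

```python
import numpy as np

def mspe_difference_profile(y, x, window_sizes):
    """Out-of-sample MSPE difference between a rolling-window predictive regression of
    y_{t+1} on x_t and a zero (no-predictability) benchmark, for each window size R."""
    n = len(y)
    profile = {}
    for R in window_sizes:
        errs_model, errs_bench = [], []
        for t in range(R, n - 1):
            xw = x[t - R:t]               # regressors x_s in the estimation window
            yw = y[t - R + 1:t + 1]       # targets y_{s+1}
            beta = np.dot(xw, yw) / np.dot(xw, xw)   # OLS slope estimated on the window
            errs_model.append(y[t + 1] - beta * x[t])  # one-step-ahead forecast error
            errs_bench.append(y[t + 1])                # benchmark forecasts zero
        profile[R] = np.mean(np.square(errs_bench)) - np.mean(np.square(errs_model))
    return profile   # positive values favour the predictive model at that window size

# Toy usage: y has a small predictable component driven by lagged x
rng = np.random.default_rng(3)
x = rng.normal(size=400)
y = 0.3 * np.roll(x, 1) + rng.normal(size=400)
y[0] = rng.normal()
print(mspe_difference_profile(y, x, window_sizes=[50, 100, 150, 200]))
```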