931 results for Rule-based techniques
Abstract:
RATIONALE: An objective and simple prognostic model for patients with pulmonary embolism could be helpful in guiding initial intensity of treatment. OBJECTIVES: To develop a clinical prediction rule that accurately classifies patients with pulmonary embolism into categories of increasing risk of mortality and other adverse medical outcomes. METHODS: We randomly allocated 15,531 inpatient discharges with pulmonary embolism from 186 Pennsylvania hospitals to derivation (67%) and internal validation (33%) samples. We derived our prediction rule using logistic regression with 30-day mortality as the primary outcome, and patient demographic and clinical data routinely available at presentation as potential predictor variables. We externally validated the rule in 221 inpatients with pulmonary embolism from Switzerland and France. MEASUREMENTS: We compared mortality and nonfatal adverse medical outcomes across the derivation and two validation samples. MAIN RESULTS: The prediction rule is based on 11 simple patient characteristics that were independently associated with mortality and stratifies patients with pulmonary embolism into five severity classes, with 30-day mortality rates of 0-1.6% in class I, 1.7-3.5% in class II, 3.2-7.1% in class III, 4.0-11.4% in class IV, and 10.0-24.5% in class V across the derivation and validation samples. Inpatient death and nonfatal complications were ≤1.1% among patients in class I and ≤1.9% among patients in class II. CONCLUSIONS: Our rule accurately classifies patients with pulmonary embolism into classes of increasing risk of mortality and other adverse medical outcomes. Further validation of the rule is important before its implementation as a decision aid to guide the initial management of patients with pulmonary embolism.
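The score-then-stratify idea behind such a rule can be sketched in a few lines. The predictor weights and class cut-offs below are hypothetical placeholders, not the published rule:

```python
# Illustrative point-score severity classifier in the style of the abstract's
# prediction rule. All weights and cut-offs are MADE-UP placeholders.

CUTOFFS = [(65, "I"), (85, "II"), (105, "III"), (125, "IV")]  # above last -> "V"

WEIGHTS = {"male": 10, "cancer": 30, "heart_failure": 10,
           "pulse_ge_110": 20, "sbp_lt_100": 30}

def score(patient):
    """Sum illustrative points: age in years plus points per risk factor."""
    pts = patient.get("age", 0)
    for key, w in WEIGHTS.items():
        if patient.get(key, False):
            pts += w
    return pts

def severity_class(points):
    """Map a total point score to one of five severity classes."""
    for upper, label in CUTOFFS:
        if points <= upper:
            return label
    return "V"

patient = {"age": 72, "male": True, "pulse_ge_110": True}
pts = score(patient)                 # 72 + 10 + 20 = 102
print(pts, severity_class(pts))
```

A real rule would derive the weights from the logistic-regression coefficients and calibrate the cut-offs against observed 30-day mortality.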
Abstract:
Doping with natural steroids can be detected by evaluating the urinary concentrations and ratios of several endogenous steroids. Since these biomarkers of steroid doping are known to present large inter-individual variations, monitoring of individual steroid profiles over time allows switching from population-based towards subject-based reference ranges for improved detection. In an Athlete Biological Passport (ABP), biomarkers data are collated throughout the athlete's sporting career and individual thresholds defined adaptively. For now, this approach has been validated on a limited number of markers of steroid doping, such as the testosterone (T) over epitestosterone (E) ratio to detect T misuse in athletes. Additional markers are required for other endogenous steroids like dihydrotestosterone (DHT) and dehydroepiandrosterone (DHEA). By combining comprehensive steroid profiles composed of 24 steroid concentrations with Bayesian inference techniques for longitudinal profiling, a selection of biomarkers was made for the detection of DHT and DHEA misuse. The biomarkers found were rated according to relative response, parameter stability, discriminative power, and maximal detection time. This analysis revealed DHT/E, DHT/5β-androstane-3α,17β-diol and 5α-androstane-3α,17β-diol/5β-androstane-3α,17β-diol as the best biomarkers for DHT administration and DHEA/E, 16α-hydroxydehydroepiandrosterone/E, 7β-hydroxydehydroepiandrosterone/E and 5β-androstane-3α,17β-diol/5α-androstane-3α,17β-diol for DHEA. The selected biomarkers were found suitable for individual referencing. A drastic overall increase in sensitivity was obtained. The use of multiple markers as formalized in an Athlete Steroidal Passport (ASP) can provide firm evidence of doping with endogenous steroids. Copyright © 2010 John Wiley & Sons, Ltd.
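The shift from population-based to subject-based reference ranges can be caricatured with a conjugate normal-normal Bayesian update, where each new measurement tightens the athlete's individual threshold. The prior, variances and data below are illustrative numbers, not values from the paper:

```python
import math

# Sketch of adaptive, subject-based reference ranges: conjugate normal-normal
# updating of an individual's mean marker value (e.g. log T/E). The population
# prior, measurement variance and observations are ILLUSTRATIVE only.

def update(prior_mean, prior_var, obs, obs_var):
    """One conjugate normal update; returns posterior mean and variance."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

mean, var = 0.0, 1.0      # population prior for the (log) marker
obs_var = 0.09            # within-subject measurement variance
for x in [-0.2, -0.15, -0.25, -0.18]:   # one athlete's longitudinal values
    mean, var = update(mean, var, x, obs_var)

# Individual upper threshold: ~99th percentile of the predictive distribution.
z99 = 2.326
threshold = mean + z99 * math.sqrt(var + obs_var)
print(round(mean, 3), round(threshold, 3))
```

As the series grows, `var` shrinks and the threshold contracts toward the athlete's own baseline, which is the mechanism that makes subject-based referencing more sensitive than population cut-offs.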
Abstract:
A statewide study was conducted to develop regression equations for estimating flood-frequency discharges for ungaged stream sites in Iowa. Thirty-eight selected basin characteristics were quantified and flood-frequency analyses were computed for 291 streamflow-gaging stations in Iowa and adjacent States. A generalized-skew-coefficient analysis was conducted to determine whether generalized skew coefficients could be improved for Iowa. Station skew coefficients were computed for 239 gaging stations in Iowa and adjacent States, and an isoline map of generalized-skew-coefficient values was developed for Iowa using variogram modeling and kriging methods. The skew map provided the lowest mean square error for the generalized-skew-coefficient analysis and was used to revise generalized skew coefficients for flood-frequency analyses for gaging stations in Iowa. Regional regression analysis, using generalized least-squares regression and data from 241 gaging stations, was used to develop equations for three hydrologic regions defined for the State. The regression equations can be used to estimate flood discharges that have recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years for ungaged stream sites in Iowa. One-variable equations were developed for each of the three regions and multi-variable equations were developed for two of the regions. Two sets of equations are presented for two of the regions because one-variable equations are considered easy for users to apply and the predictive accuracies of multi-variable equations are greater. Standard error of prediction for the one-variable equations ranges from about 34 to 45 percent and for the multi-variable equations ranges from about 31 to 42 percent. A region-of-influence regression method was also investigated for estimating flood-frequency discharges for ungaged stream sites in Iowa.
A comparison of regional and region-of-influence regression methods, based on ease of application and root mean square errors, determined the regional regression method to be the better estimation method for Iowa. Techniques for estimating flood-frequency discharges for streams in Iowa are presented for determining (1) regional regression estimates for ungaged sites on ungaged streams; (2) weighted estimates for gaged sites; and (3) weighted estimates for ungaged sites on gaged streams. The technique for determining regional regression estimates for ungaged sites on ungaged streams requires determining which of four possible examples applies to the location of the stream site and its basin. Illustrations for determining which example applies to an ungaged stream site and for applying both the one-variable and multi-variable regression equations are provided for the estimation techniques.
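One-variable regional equations of this kind typically take a power-law form in drainage area, Q_T = a·A^b. Applying one is a single evaluation; the coefficients below are illustrative stand-ins, not the published Iowa values:

```python
# Hypothetical one-variable regional flood-frequency equation Q_T = a * A^b,
# where A is drainage area in square miles. Coefficients are ILLUSTRATIVE,
# not the report's regression results.

COEFFS = {2: (40.0, 0.75), 100: (150.0, 0.70)}  # recurrence (yr) -> (a, b)

def flood_discharge(area_sq_mi, recurrence_yr):
    """Estimated peak discharge (cfs) for an ungaged site."""
    a, b = COEFFS[recurrence_yr]
    return a * area_sq_mi ** b

q100 = flood_discharge(50.0, 100)   # 100-year flood for a 50 sq mi basin
print(round(q100, 1))
```

Multi-variable equations simply add further basin-characteristic terms (e.g. slope, precipitation) as extra power-law factors.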
Abstract:
Remote sensing was utilized in the Phase II Cultural Resources Investigation for this project in lieu of extensive excavations. The purpose of the present report is to compare the costs and benefits of the use of remote sensing to the hypothetical use of traditional excavation methods for this project. Estimates for this hypothetical situation are based on the project archaeologist's considerable past experience in conducting similar investigations. Only that part of the Phase II investigation involving field investigations is addressed in this report. Costs for literature review, laboratory analysis, report preparation, etc., are not included. The project manager proposed the use of this technique for the following logistic, safety and budgetary reasons.
Abstract:
A discussion is presented of daytime sky imaging and techniques that may be applied to the analysis of full-color sky images to infer cloud macrophysical properties. Descriptions of two different types of sky-imaging systems developed by the authors are presented, one of which has been developed into a commercially available instrument. Retrievals of fractional sky cover from automated processing methods are compared to human retrievals, both from direct observations and visual analyses of sky images. Although some uncertainty exists in fractional sky cover retrievals from sky images, this uncertainty is no greater than that attached to human observations for the commercially available sky-imager retrievals. Thus, the application of automatic digital image processing techniques on sky images is a useful method to complement, or even replace, traditional human observations of sky cover and, potentially, cloud type. Additionally, the possibilities for inferring other cloud parameters such as cloud brokenness and solar obstruction further enhance the usefulness of sky imagers.
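A common automated approach to fractional sky cover from full-color images is red/blue-ratio thresholding: clear sky is strongly blue, cloud pixels are more neutral. A minimal sketch, with an illustrative threshold value:

```python
# Sketch of fractional-sky-cover retrieval by red/blue-ratio thresholding of
# full-color sky-image pixels. The threshold 0.6 is an illustrative value;
# operational systems tune it (or use per-pixel clear-sky references).

def fractional_sky_cover(pixels, threshold=0.6):
    """pixels: iterable of (R, G, B) tuples. Returns cloud fraction in [0, 1].
    A high red/blue ratio marks a cloudy pixel; a low ratio marks clear sky."""
    cloudy = sum(1 for r, g, b in pixels if b > 0 and r / b > threshold)
    return cloudy / len(pixels)

# Toy "image": 7 clear blue-sky pixels and 3 grey cloud pixels.
sky = [(60, 90, 200)] * 7 + [(180, 180, 190)] * 3
fsc = fractional_sky_cover(sky)
print(fsc)
```

In practice the same per-pixel classification, accumulated over the image, is what gets compared against human sky-cover observations.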
Abstract:
A study of four major concrete pavement joint rehabilitation techniques has been conducted, including: pressure relief joints, full-depth repairs, partial-depth repairs and joint resealing. The products of this research include the following for each technique: a summary of published research, detailed documentation of the design and performance of the 36 projects, conclusions and recommendations of the state highway engineers panel, "Design and Construction Guidelines" and "Guide Specifications." The latter two products are prepared for use by state highway agencies. The results of this study are based upon a review of literature, extensive field surveys and analysis of 36 rehabilitation projects, and the experience of an expert panel of state highway engineers.
Abstract:
The effects of diethylenetriaminepenta(methylenephosphonic acid) (DTPMP), a phosphonate inhibitor, on the growth of delayed ettringite have been evaluated using concrete in highway US 20 near Williams, Iowa, and the cores of six highways subject to moderate (built in 1992) or minor (built in 1997) deterioration. Application of 0.01 and 0.1 vol. % DTPMP to cores was made on a weekly or monthly basis for one year under controlled laboratory-based freeze-thaw and wet-dry conditions over a temperature range of -15 degrees to 58 degrees C to mimic extremes in Iowa roadway conditions. The same concentrations of phosphonate were also applied to cores left outside (roof of Science I at Iowa State University) over the same period of time. Nineteen applications of 0.1 vol. % DTPMP with added deicing salt solution (about 23 weight % NaCl) were made to US 20 during the winters of 2003 and 2004. In untreated samples, air voids, pores, and occasional cracks are lined with acicular ettringite crystals (up to 50 micrometers in length) whereas air voids, pores, and cracks in concrete from the westbound lane of US 20 are devoid of ettringite up to a depth of about 0.5 mm from the surface of the concrete. Ettringite is also absent in zones up to 6 mm from the surface of concrete slabs placed on the roof of Science I and cores subject to laboratory-based freeze-thaw experiments. In these zones, the relatively high concentration of DTPMP caused it to behave as a chelator. Stunted ettringite crystals 5 to 25 micrometers in length, occasionally coated with portlandite, form on the margins of these zones, indicating that in these areas DTPMP behaved as an inhibitor due to a reduction in the concentration of phosphonate. Analyses of mixes of ettringite and DTPMP using electrospray mass spectrometry suggest that the stunting of ettringite growth is caused by the adsorption of a Ca2+ ion and a water molecule to deprotonated DTPMP on the surface of the {0001} face of ettringite.
It is anticipated that by using a DTPMP concentration of between 0.001 and 0.01 vol. % for the extended life of a highway (i.e. >20 years), deterioration caused by the expansive growth of ettringite will be markedly reduced.
Abstract:
Wet pavement friction is known to be one of the most important roadway safety parameters. In this research, frictional properties of flexible (asphalt) pavements were investigated. As a part of this study, a laboratory device to polish asphalt specimens was refined and a procedure to evaluate mixture frictional properties was proposed. Following this procedure, 46 different Superpave mixtures, one stone matrix asphalt (SMA) mixture and one porous friction course (PFC) mixture were tested. In addition, 23 different asphalt and two concrete field sections were also tested for friction and noise. The results of both field and laboratory measurements were used to develop an International Friction Index (IFI)-based protocol for laboratory measurement of the frictional characteristics of asphalt pavements. Based on the results of the study, it appears the content of high friction aggregate should be 20% or more of the total aggregate blend when used with other, polish-susceptible coarse aggregates; the frictional properties increased substantially as the friction aggregate content increased above 20%. Both steel slag and quartzite were found to improve the frictional properties of the blend, though steel slag had a lower polishing rate. In general, mixes containing soft limestone demonstrated lower friction values than comparable mixes with hard limestone or dolomite. Larger nominal maximum aggregate size mixes had better overall frictional performance than smaller sized mixes. In addition, mixes with higher fineness moduli generally had higher macrotexture and friction.
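An IFI-style computation harmonizes friction readings taken at different slip speeds to a common 60 km/h reference, using a speed constant derived from macrotexture. A hedged sketch; the calibration constants below are illustrative, not values from this study or from any particular device:

```python
import math

# Sketch of an IFI-style friction harmonization. Friction FRS measured at
# slip speed S is adjusted to the 60 km/h reference via the speed constant
# Sp, itself estimated from mean profile depth (MPD). Constants are
# ILLUSTRATIVE placeholders, not device calibration values.

def speed_constant(mpd_mm, a=14.2, b=89.7):
    """Speed constant Sp (km/h) from mean profile depth (mm)."""
    return a + b * mpd_mm

def f60(frs, s_kmh, sp, A=0.045, B=0.925):
    """Adjust friction FRS measured at speed S to the common F60 index."""
    fr60 = frs * math.exp((s_kmh - 60.0) / sp)
    return A + B * fr60

sp = speed_constant(0.8)        # macrotexture: MPD of 0.8 mm
val = f60(0.5, 65.0, sp)        # friction 0.5 measured at 65 km/h
print(round(sp, 2), round(val, 3))
```

The point of the adjustment is that laboratory and field devices operating at different speeds can then report on one comparable scale.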
Abstract:
Simulated-annealing-based conditional simulations provide a flexible means of quantitatively integrating diverse types of subsurface data. Although such techniques are being increasingly used in hydrocarbon reservoir characterization studies, their potential in environmental, engineering and hydrological investigations is still largely unexploited. Here, we introduce a novel simulated annealing (SA) algorithm geared towards the integration of high-resolution geophysical and hydrological data which, compared to more conventional approaches, provides significant advancements in the way that large-scale structural information in the geophysical data is accounted for. Model perturbations in the annealing procedure are made by drawing from a probability distribution for the target parameter conditioned to the geophysical data. This is the only place where geophysical information is utilized in our algorithm, which is in marked contrast to other approaches where model perturbations are made through the swapping of values in the simulation grid and agreement with soft data is enforced through a correlation coefficient constraint. Another major feature of our algorithm is the way in which available geostatistical information is utilized. Instead of constraining realizations to match a parametric target covariance model over a wide range of spatial lags, we constrain the realizations only at smaller lags where the available geophysical data cannot provide enough information. Thus we allow the larger-scale subsurface features resolved by the geophysical data to have much more direct control on the output realizations. Further, since the only component of the SA objective function required in our approach is a covariance constraint at small lags, our method has improved convergence and computational efficiency over more traditional methods.
Here, we present the results of applying our algorithm to the integration of porosity log and tomographic crosshole georadar data to generate stochastic realizations of the local-scale porosity structure. Our procedure is first tested on a synthetic data set, and then applied to data collected at the Boise Hydrogeophysical Research Site.
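The small-lag covariance constraint at the heart of the method can be caricatured in one dimension: perturb one cell at a time by redrawing its value, and accept or reject moves against an objective that penalizes covariance mismatch only at the first few lags. Everything below is a toy stand-in (the unconditional Gaussian draw substitutes for the geophysics-conditioned distribution), not the paper's algorithm:

```python
import math
import random

# Toy 1-D sketch of SA with a small-lag covariance objective. The per-cell
# Gaussian redraw stands in for sampling a distribution conditioned to
# geophysical data; target covariances and schedule are ILLUSTRATIVE.

random.seed(1)
N, LAGS = 200, [1, 2, 3]
TARGET = {1: 0.5, 2: 0.25, 3: 0.125}      # target covariance at small lags

def cov(x, lag):
    m = sum(x) / len(x)
    return sum((x[i] - m) * (x[i + lag] - m)
               for i in range(len(x) - lag)) / (len(x) - lag)

def objective(x):
    return sum((cov(x, h) - TARGET[h]) ** 2 for h in LAGS)

x = [random.gauss(0, 1) for _ in range(N)]
cur = init = objective(x)
best = init
T = 1.0
for _ in range(3000):
    i = random.randrange(N)
    old = x[i]
    x[i] = random.gauss(0, 1)             # "conditioned" redraw (stand-in)
    new = objective(x)
    if new < cur or random.random() < math.exp((cur - new) / T):
        cur = new                         # accept move
        best = min(best, cur)
    else:
        x[i] = old                        # reject: restore previous value
    T *= 0.999                            # geometric cooling
print(round(init, 4), round(best, 4))
```

Because the objective ignores larger lags entirely, any large-scale structure present in the starting model (in the real algorithm, supplied by the geophysical data) is left under the data's control rather than forced toward a parametric covariance model.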
Abstract:
Screening people without symptoms of disease is an attractive idea. Screening allows early detection of disease or elevated risk of disease, and has the potential for improved treatment and reduction of mortality. The list of future screening opportunities is set to grow because of the refinement of screening techniques, the increasing frequency of degenerative and chronic diseases, and the steadily growing body of evidence on genetic predispositions for various diseases. But how should we decide on the diseases for which screening should be done and on recommendations for how it should be implemented? We use the examples of prostate cancer and genetic screening to show the importance of considering screening as an ongoing population-based intervention with beneficial and harmful effects, and not simply the use of a test. Assessing whether screening should be recommended and implemented for any named disease is therefore a multi-dimensional task in health technology assessment. There are several countries that already use established processes and criteria to assess the appropriateness of screening. We argue that the Swiss healthcare system needs a nationwide screening commission mandated to conduct appropriate evidence-based evaluation of the impact of proposed screening interventions, to issue evidence-based recommendations, and to monitor the performance of screening programmes introduced. Without explicit processes there is a danger that beneficial screening programmes could be neglected and that ineffective, and potentially harmful, screening procedures could be introduced.
Abstract:
Lasers are essential tools for cell isolation and monolithic interconnection in thin-film-silicon photovoltaic technologies. Laser ablation of transparent conductive oxides (TCOs), amorphous silicon structures and back contact removal are standard processes in industry for monolithic device interconnection. However, achieving material ablation with minimal debris and a small heat-affected zone is one of the main difficulties in reducing costs and improving device efficiency. In this paper we present recent results in laser ablation of photovoltaic materials using excimer and UV wavelengths of diode-pumped solid-state (DPSS) laser sources. We discuss results concerning UV ablation of different TCOs and thin-film silicon (a-Si:H and nc-Si:H), focusing our study on ablation threshold measurements and process-quality assessment using advanced optical microscopy techniques. In this way we show the advantages of using UV wavelengths for minimizing the thermal damage to the material that is characteristic of ns-regime laser irradiation at longer wavelengths. Additionally, we include preliminary results on selective ablation of film-on-film structures irradiated from the film side (direct-writing configuration), including the problem of selective ablation of ZnO films on a-Si:H layers. In this way we demonstrate the potential of UV wavelengths from fully commercial laser sources as an alternative to the standard back-scribing process in device fabrication.
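Ablation thresholds of this kind are commonly extracted with the D-squared (Liu) method: for a Gaussian beam, crater diameter obeys D² = 2·w₀²·ln(F/F_th), so a linear fit of D² against ln(F) gives the threshold fluence F_th at the D² = 0 crossing and the beam radius w₀ from the slope. The abstract does not state which method was used, so this is a sketch under that assumption, with synthetic data:

```python
import math

# D-squared (Liu) method sketch: fit D^2 vs ln(F) to recover the ablation
# threshold fluence F_th and Gaussian beam radius w0. Data is SYNTHETIC.

def fit_threshold(fluences, diameters_um):
    xs = [math.log(f) for f in fluences]
    ys = [d * d for d in diameters_um]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    f_th = math.exp(-intercept / slope)   # fluence where D^2 crosses zero
    w0 = math.sqrt(slope / 2.0)           # slope = 2 * w0^2
    return f_th, w0

# Synthetic craters generated with F_th = 0.5 J/cm^2 and w0 = 10 um.
fl = [0.8, 1.2, 2.0, 3.5]
dm = [math.sqrt(200.0 * math.log(f / 0.5)) for f in fl]
f_th, w0 = fit_threshold(fl, dm)
print(round(f_th, 3), round(w0, 2))
```

With real crater measurements the fit residuals also give a quick quality check on whether the spot is well described as Gaussian.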
Abstract:
Recently, several anonymization algorithms have appeared for privacy preservation on graphs. Some of them are based on randomization techniques and some on k-anonymity concepts; both can be used to obtain an anonymized graph with a given k-anonymity value. In this paper we compare algorithms based on both techniques in order to obtain an anonymized graph with a desired k-anonymity value, analyzing the complexity of these methods in generating anonymized graphs and the quality of the resulting graphs.
Abstract:
Rapid amplification of cDNA ends (RACE) is a widely used approach for transcript identification. Random clone selection from the RACE mixture, however, is an ineffective sampling strategy if the dynamic range of transcript abundances is large. To improve sampling efficiency of human transcripts, we hybridized the products of the RACE reaction onto tiling arrays and used the detected exons to delineate a series of reverse-transcriptase (RT)-PCRs, through which the original RACE transcript population was segregated into simpler transcript populations. We independently cloned the products and sequenced randomly selected clones. This approach, RACEarray, is superior to direct cloning and sequencing of RACE products because it specifically targets new transcripts and often results in overall normalization of transcript abundance. We show theoretically and experimentally that this strategy leads indeed to efficient sampling of new transcripts, and we investigated multiplexing the strategy by pooling RACE reactions from multiple interrogated loci before hybridization.
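The sampling argument is easy to illustrate numerically: random clone picking from a mixture with a large dynamic range keeps re-drawing the dominant transcripts, whereas picking from a normalized mixture (the effect of segregating the RACE population into simpler sub-populations) discovers far more distinct transcripts per clone. The abundance values below are made up for illustration:

```python
import random

# Toy simulation of clone sampling efficiency. Two dominant transcripts and
# 48 rare ones vs. a normalized (uniform) mixture; abundances ILLUSTRATIVE.

random.seed(7)

def distinct_after(abundances, n_picks):
    """Number of distinct transcripts seen after n random clone picks."""
    total = sum(abundances)
    weights = [a / total for a in abundances]
    picks = random.choices(range(len(abundances)), weights=weights, k=n_picks)
    return len(set(picks))

skewed = [1000] * 2 + [1] * 48    # raw RACE mixture: 2 dominant, 48 rare
uniform = [1] * 50                # normalized mixture of the same transcripts
d_skew = distinct_after(skewed, 100)
d_unif = distinct_after(uniform, 100)
print(d_skew, d_unif)
```

With 100 picks the skewed mixture yields only a handful of distinct transcripts while the normalized one recovers most of the 50, which is the efficiency gain the RACEarray segregation is after.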
Abstract:
This paper addresses the application of PCA to categorical data prior to diagnosing a patient data set with a Case-Based Reasoning (CBR) system. The particularity is that standard PCA techniques are designed to deal with numerical attributes, whereas our medical data set contains many categorical attributes, so alternative methods such as RS-PCA are required. Thus, we propose to hybridize RS-PCA (Regular Simplex PCA) with a simple CBR. Results show that the hybrid system produces diagnoses of the medical data set similar to those obtained using the original attributes. These results are promising, since they allow diagnosis with less computational effort and memory storage.
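The pipeline can be sketched end to end with plain PCA over a one-hot encoding, a simple stand-in for the Regular Simplex encoding used by RS-PCA: encode categorical attributes numerically, project cases onto the leading principal component, then retrieve the nearest stored case as the CBR step. Data, categories and diagnoses are all made up:

```python
import math

# Sketch: one-hot encoding (stand-in for RS-PCA's simplex encoding) + PCA via
# power iteration + nearest-case retrieval (the CBR step). Data ILLUSTRATIVE.

def one_hot(rows, categories):
    return [[1.0 if row[i] == c else 0.0
             for i, cats in enumerate(categories) for c in cats]
            for row in rows]

def top_component(X, iters=200):
    n, d = len(X), len(X[0])
    means = [sum(r[j] for r in X) / n for j in range(d)]
    Xc = [[r[j] - means[j] for j in range(d)] for r in X]
    v = [1.0] + [0.0] * (d - 1)
    for _ in range(iters):                      # power iteration on Xc^T Xc
        w = [sum(sum(r[j] * v[j] for j in range(d)) * r[k] for r in Xc)
             for k in range(d)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    return means, v

def project(row, means, v):
    return sum((x - m) * c for x, m, c in zip(row, means, v))

cases = [("flu", ["fever", "cough"]), ("flu", ["fever", "cough"]),
         ("cold", ["none", "cough"]), ("cold", ["none", "none"])]
cats = [["fever", "none"], ["cough", "none"]]
X = one_hot([c[1] for c in cases], cats)
means, v = top_component(X)
scores = [project(r, means, v) for r in X]

query = one_hot([["fever", "cough"]], cats)[0]  # new patient to diagnose
q = project(query, means, v)
nearest = min(range(len(cases)), key=lambda i: abs(scores[i] - q))
print(cases[nearest][0])
```

The memory saving the paper points to comes from storing only the low-dimensional projections (`scores`) rather than the full attribute vectors.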
Abstract:
This project was undertaken to study the relationships between the performance of locally available asphalts and their physicochemical properties under Iowa conditions, with the ultimate objective of developing a local, performance-based asphalt specification for durable pavements. Physical and physicochemical tests were performed on three sets of asphalt samples: (a) twelve samples from local asphalt suppliers and their TFOT residues, (b) six core samples with known service records, and (c) a total of 79 asphalts from 10 pavement projects, including original, lab-aged and recovered asphalts from field mixes, as well as from lab-aged mixes. Tests included standard rheological tests, HP-GPC and TMA. Some specific viscoelastic tests (at 5 deg C) were run on (b) samples and on some (a) samples. DSC and X-ray diffraction studies were performed on (a) and (b) samples. Furthermore, NMR techniques were applied to some (a), (b) and (c) samples. Efforts were made to identify physicochemical properties that correlate with the physical properties known to affect field performance. The significant physicochemical parameters were used as the basis for an improved performance-based trial specification for Iowa to ensure more durable pavements.