991 results for Software defect prediction
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies. The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical coordinates and additional relevant spatially referenced features. They are well suited to be implemented as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from the theoretical description of the concepts to the software implementation. The main algorithms and models considered are: the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography, which studies the relationships between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations; it helps to detect the presence of spatial patterns describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic, the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. These tools were developed during the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for carrying out fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; classification of soil types and hydro-geological units; decision-oriented mapping with uncertainties; and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well, with attention to a user-friendly, easy-to-use interface.
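A GRNN of the kind promoted here for automatic mapping is, at its core, Nadaraya-Watson kernel regression: each prediction is a distance-weighted average of the observed values. The following minimal Python sketch illustrates the idea on toy spatial data; it is an assumed illustration, not the Machine Learning Office implementation, and the bandwidth sigma would normally be tuned by cross-validation.

```python
import numpy as np

def grnn_predict(train_xy, train_z, query_xy, sigma=1.0):
    """GRNN / Nadaraya-Watson prediction: Gaussian-kernel weighted
    average of training values, with one smoothing parameter (sigma)."""
    # squared distances between every query point and every training point
    d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # kernel weights, shape (queries, train)
    return (w @ train_z) / w.sum(axis=1)   # weighted average per query point

# toy example: noisy measurements at 50 random 2D locations
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 10.0, size=(50, 2))
z = np.sin(xy[:, 0]) + 0.1 * rng.normal(size=50)
queries = np.array([[2.0, 3.0], [7.5, 1.0]])
print(grnn_predict(xy, z, queries, sigma=0.8))
```

Having a single smoothing parameter is what makes this family of models attractive for automatic mapping: model selection reduces to a one-dimensional search over sigma.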
Abstract:
OBJECTIVE: To determine the percent decussation of pupil input fibers in humans and to explain the size and range of the log unit relative afferent pupillary defect (RAPD) in patients with optic tract lesions. DESIGN: Experimental study. PARTICIPANTS AND CONTROLS: Five patients with a unilateral optic tract lesion. METHODS: The pupil response from light stimulation of the nasal hemifield, temporal hemifield, and full field of each eye of 5 patients with a unilateral optic tract lesion was recorded using computerized binocular infrared pupillography. Six stimulus light intensities, separated by 0.5-log unit steps, were used; 12 stimulus repetitions were given for each stimulus condition. MAIN OUTCOME MEASURES: For each stimulus condition, the pupil response of each eye was characterized by plotting the mean pupil contraction amplitude as a function of stimulus light intensity. The percentage of decussating afferent pupillomotor input fibers was calculated from the ratio of the maximal pupil contractions elicited from each eye. The RAPD was determined pupillographically from full-field stimulation of each eye. RESULTS: In all patients, the pupil response from the functioning temporal hemifield ipsilateral to the tract lesion was greater than that from the functioning contralateral nasal hemifield. This temporal-nasal asymmetry increased with increasing stimulus intensity, was similar for hemifield and full-field stimuli, and eventually saturated at maximal light intensity. The log unit RAPD did not correlate with the estimated percentage of decussating pupil fibers, which ranged from 54% to 67%. CONCLUSIONS: In patients with a unilateral optic tract lesion, the pupillary response asymmetry from full-field stimulation of each eye is the same as that obtained by comparing the functioning temporal hemifield with the functioning nasal hemifield. The percentage of decussating fibers is reflected in the ratio of the maximal pupil contraction amplitudes elicited from the two eyes. The RAPD that occurs in this setting reflects the difference in light sensitivity between the intact temporal and nasal hemifields. Its magnitude does not correlate with the difference in the number of crossed and uncrossed axons, but its sidedness, contralateral to the optic tract lesion, is consistent with the greater percentage of decussating pupillomotor input.
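As a purely arithmetic illustration of the outcome measure above: one plausible reading is that the decussation percentage is the larger maximal contraction expressed as a fraction of the two eyes' summed maximal contractions. The amplitudes below are invented for illustration; the study's exact formula and data are not reproduced here.

```python
# Invented maximal pupil contraction amplitudes (mm) from full-field
# stimulation of each eye; not the study's data.
amp_crossed = 1.6    # eye whose input reaches the midbrain mainly via crossed fibers
amp_uncrossed = 1.0  # fellow eye, mainly uncrossed input

percent_decussation = 100 * amp_crossed / (amp_crossed + amp_uncrossed)
print(f"estimated decussation: {percent_decussation:.0f}%")  # ~62%, inside the reported 54-67% range
```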
Abstract:
We will investigate how collaboration networks and free software make it possible to adapt the school to its environment, how they can help the school strengthen vocational training and guarantee the durability of its actions, with the aim that the knowledge and the collaboration network itself endure in support of educational improvement.
Abstract:
The pharmaceutical industry has been facing several challenges during the last years, and the optimization of its drug discovery pipeline is believed to be the only viable solution. High-throughput techniques participate actively in this optimization, especially when complemented by computational approaches aiming at rationalizing the enormous amount of information they can produce. In silico techniques, such as virtual screening or rational drug design, are now routinely used to guide drug discovery. Both rely heavily on the prediction of the molecular interaction (docking) occurring between drug-like molecules and a therapeutically relevant target. Several software packages are available to this end, but despite the very promising picture drawn in most benchmarks, they still hold several hidden weaknesses. As pointed out in several recent reviews, the docking problem is far from being solved, and there is now a need for methods able to identify binding modes with high accuracy, which is essential to reliably compute the binding free energy of the ligand. This quantity is directly linked to its affinity and can be related to its biological activity. Accurate docking algorithms are thus critical for both the discovery and the rational optimization of new drugs. In this thesis, a new docking software aiming at this goal is presented: EADock. It uses a hybrid evolutionary algorithm with two fitness functions, in combination with a sophisticated management of diversity. EADock is interfaced with the CHARMM package for energy calculations and coordinate handling. A validation was carried out on 37 crystallized protein-ligand complexes featuring 11 different proteins. The search space was defined as a sphere of 15 Å around the center of mass of the ligand position in the crystal structure, and contrary to other benchmarks, our algorithm was fed with optimized ligand positions up to 10 Å root mean square deviation (RMSD) from the crystal structure. This validation illustrates the efficiency of our sampling heuristic: correct binding modes, defined by an RMSD to the crystal structure lower than 2 Å, were identified and ranked first for 68% of the complexes. The success rate increases to 78% when considering the five best-ranked clusters, and to 92% when all clusters present in the last generation are taken into account. Most failures in this benchmark could be explained by the presence of crystal contacts in the experimental structure. EADock has been used to understand molecular interactions involved in the regulation of the Na,K-ATPase and in the activation of the nuclear hormone peroxisome proliferator-activated receptor α (PPARα). It also helped to understand the action of common pollutants (phthalates) on PPARγ, and the impact of biotransformations of the anticancer drug Imatinib (Gleevec®) on its binding mode to the Bcr-Abl tyrosine kinase. Finally, a fragment-based rational drug design approach using EADock was developed, which led to the successful design of new peptidic ligands for the α5β1 integrin and for human PPARα. In both cases, the designed peptides presented activities comparable to those of well-established ligands such as the anticancer drug Cilengitide and Wy14,643, respectively.
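To make the benchmark criterion concrete: a pose counts as a success when its RMSD to the crystal structure is below 2 Å, evaluated over the best-ranked pose or the top five clusters. A minimal sketch of that bookkeeping follows; the coordinate arrays and the ranking are stand-ins for illustration, not EADock's actual data structures.

```python
import numpy as np

def rmsd(a, b):
    """Root mean square deviation between two (N, 3) coordinate arrays."""
    return np.sqrt(((a - b) ** 2).sum(axis=1).mean())

def success(ranked_poses, crystal, top_n=1, cutoff=2.0):
    """True if any of the top_n ranked poses lies within cutoff (Å) of the crystal pose."""
    return any(rmsd(p, crystal) < cutoff for p in ranked_poses[:top_n])

# toy example: crystal pose and two candidate poses for a 10-atom ligand
rng = np.random.default_rng(1)
crystal = rng.normal(size=(10, 3))
near = crystal + rng.normal(scale=0.3, size=(10, 3))  # good pose, small deviation
far = crystal + 5.0                                   # shifted pose, large deviation
print(success([far, near], crystal, top_n=1))  # False: best-ranked pose is wrong
print(success([far, near], crystal, top_n=5))  # True: a success within the top five
```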
Abstract:
This work shows, using free technologies and building on open operating systems, how a company dedicated to implementing and developing free-software technologies can sustain a high level of work. It presents the setup of a development laboratory that allows us to understand the operation and deployment of both GNU/Linux and the software based on it within the company's infrastructure.
Abstract:
The cblC defect is the most common inborn error of vitamin B12 metabolism. Despite therapeutic measures, the long-term outcome is often unsatisfactory. This retrospective multicentre study evaluates clinical, biochemical and genetic findings in 88 cblC patients. The questionnaire designed for the study covers clinical and biochemical features both at initial presentation and during follow-up. The development of severity scores allows investigation of individual disease load, statistical evaluation of parameters between the different age-at-presentation groups, and a search for correlations between clinical endpoints and potential modifying factors. RESULTS: No major differences were found between neonatal and early-onset patients, so these groups were combined into an infantile-onset group representing 88% of all cases. Hypotonia, lethargy, feeding problems and developmental delay were predominant in this group, while late-onset patients frequently presented with psychiatric/behavioural problems and myelopathy. Plasma total homocysteine was higher and methionine lower in infantile-onset patients. Plasma methionine levels correlated with the "overall impression" as judged by treating physicians, and the physicians' impression of patients' well-being correlated with the assessed disease load. We confirmed the association between homozygosity for the c.271dupA mutation and infantile onset, but not between homozygosity for c.394C>T and late onset. Patients were treated with parenteral hydroxocobalamin, betaine, folate/folinic acid and carnitine, resulting in improvement of biochemical abnormalities and non-neurological signs and in reduced mortality. However, the long-term neurological and ophthalmological outcome is not significantly influenced. In summary, the survey points to the need for prospective studies in a large cohort using agreed treatment modalities and monitoring criteria.
Abstract:
INTRODUCTION: Hidradenitis suppurativa of the groin is a chronic, relapsing inflammatory disease of the skin and subcutaneous tissues. Radical surgical excision is the treatment of choice. Often split-skin grafting or wound healing by secondary intention is used for defect closure, sometimes with disfiguring results. We describe our experience with radical excision of localised inguinal hidradenitis suppurativa and immediate defect closure with a medial thigh lift. PATIENTS AND METHODS: Our hospital database was searched for all patients presenting to our institution for surgical treatment of hidradenitis suppurativa between 2001 and 2006. Only patients with hidradenitis confined to the groin were included. Exclusion criteria were simple abscess incisions, recurrence after previous grafting or flap surgery, extension of the disease outside the groin, and presence of clinical signs of infection at the time of surgery. We documented patient demographics, defect sizes, complications, follow-up time, recurrences and patient satisfaction. RESULTS: A total of 8 patients with localised inguinal hidradenitis suppurativa were identified, and 15 thigh lifts were performed. Defect size assessed on pathologic examination of the excised specimens averaged 15.9 cm x 4.3 cm x 1.3 cm (length x width x depth). All wounds but one healed primarily. Functional and aesthetic results were satisfactory. No major complications and no irritations of the genital area were observed, and no recurrences occurred. CONCLUSION: We propose that the medial thigh lift be considered for immediate defect closure after radical excision of localised inguinal hidradenitis suppurativa, provided that no perifocal signs of infection are present after debridement.
Abstract:
Validation is the main bottleneck preventing the adoption of many medical image processing algorithms in clinical practice. In the classical approach, a posteriori analysis is performed based on some objective metrics. In this work, a different approach based on Petri Nets (PN) is proposed. The basic idea consists in predicting the accuracy that will result from a given processing, based on the characterization of the sources of inaccuracy of the system. Here we propose a proof of concept in the scenario of a diffusion imaging analysis pipeline. A PN is built after the detection of the possible sources of inaccuracy. By integrating the first qualitative insights based on the PN with quantitative measures, it is possible to optimize the PN itself and to predict the inaccuracy of the system in a different setting. Results show that the proposed model provides good prediction performance and suggests the optimal processing approach.
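As a rough illustration of the idea, a Petri net can be walked forward to propagate inaccuracy estimates from their sources through the pipeline stages. The sketch below is an assumed toy model: the place names, transition rules, and the additive combination of errors are illustrative, not the paper's actual net.

```python
# Toy Petri-net-like propagation of inaccuracy through an imaging pipeline.
# A transition fires when all its input places hold tokens, and combines
# their inaccuracy estimates (summed here, purely as an assumption).

transitions = [
    # (input places, output place)
    (("raw_dwi", "eddy_current_model"), "corrected_dwi"),
    (("corrected_dwi", "tensor_fit"), "fa_map"),
]

# tokens: place -> accumulated inaccuracy estimate (arbitrary units)
tokens = {"raw_dwi": 0.10, "eddy_current_model": 0.05, "tensor_fit": 0.08}

for inputs, output in transitions:
    if all(p in tokens for p in inputs):            # transition enabled
        tokens[output] = sum(tokens[p] for p in inputs)

print(tokens["fa_map"])  # predicted inaccuracy of the final FA map: 0.23
```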
Abstract:
BACKGROUND: Prognostic models have been developed to predict survival of patients with newly diagnosed glioblastoma (GBM). To improve predictions, models should be updated with information available at recurrence. We performed a pooled analysis of European Organization for Research and Treatment of Cancer (EORTC) trials on recurrent glioblastoma to validate existing clinical prognostic factors, identify new markers, and derive new predictions for overall survival (OS) and progression-free survival (PFS). METHODS: Data from 300 patients with recurrent GBM recruited in eight phase I or II trials conducted by the EORTC Brain Tumour Group were used to evaluate patients' age, sex, World Health Organisation (WHO) performance status (PS), presence of neurological deficits, disease history, use of steroids or anti-epileptics, and disease characteristics as predictors of PFS and OS. Prognostic calculators were developed in patients initially treated by chemoradiation with temozolomide. RESULTS: Poor PS and more than one target lesion had a significant negative prognostic impact on both PFS and OS. Patients with large tumours, measured by the maximum diameter of the largest lesion (≥42 mm), and treated with steroids at baseline had shorter OS. Tumours with a predominantly frontal location had better survival. Age and sex did not show independent prognostic value for PFS or OS. CONCLUSIONS: This analysis confirms performance status, but not age, as a major prognostic factor for PFS and OS in recurrent GBM. Patients with multiple and large lesions have an increased risk of death. With these data, prognostic calculators with confidence intervals for both medians and fixed-time probabilities of survival were derived.
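Prognostic calculators of this kind are typically derived from a Cox proportional-hazards model over the validated factors. A hedged sketch using the open-source lifelines package follows; the cohort, the binary factor codings, and the penalizer are placeholders for illustration, not the EORTC calculator.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Placeholder cohort: overall survival (months), death indicator, and
# binary codings of factors the pooled analysis found prognostic.
df = pd.DataFrame({
    "os_months":    [4, 9, 12, 6, 15, 3, 20, 8, 11, 5],
    "died":         [1, 1, 1, 1, 0, 1, 0, 1, 1, 1],
    "poor_ps":      [1, 0, 0, 1, 0, 1, 0, 0, 0, 1],
    "multi_lesion": [1, 1, 0, 0, 0, 1, 0, 1, 0, 1],
    "steroids":     [1, 0, 1, 1, 0, 1, 0, 0, 0, 1],
})

# a small penalizer keeps the fit stable on tiny illustrative data
cph = CoxPHFitter(penalizer=0.5)
cph.fit(df, duration_col="os_months", event_col="died")

# predicted median OS for each covariate profile in the cohort
print(cph.predict_median(df[["poor_ps", "multi_lesion", "steroids"]]))
```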
Abstract:
BACKGROUND: Cytomegalovirus (CMV) disease remains an important problem in solid-organ transplant recipients, with the greatest risk among donor CMV-seropositive, recipient-seronegative (D(+)/R(-)) patients. CMV-specific cell-mediated immunity may be able to predict which patients will develop CMV disease. METHODS: We prospectively included D(+)/R(-) patients who received antiviral prophylaxis. We used the Quantiferon-CMV assay to measure interferon-γ levels following in vitro stimulation with CMV antigens. The test was performed at the end of prophylaxis and 1 and 2 months later. The primary outcome was the incidence of CMV disease at 12 months after transplant. We calculated positive and negative predictive values of the assay for protection from CMV disease. RESULTS: Overall, 28 of 127 (22%) patients developed CMV disease. Of 124 evaluable patients, 31 (25%) had a positive result, 81 (65.3%) had a negative result, and 12 (9.7%) had an indeterminate result (negative mitogen and CMV antigen) with the Quantiferon-CMV assay. At 12 months, patients with a positive result had a subsequent lower incidence of CMV disease than patients with a negative and an indeterminate result (6.4% vs 22.2% vs 58.3%, respectively; P < .001). Positive and negative predictive values of the assay for protection from CMV disease were 0.90 (95% confidence interval [CI], .74-.98) and 0.27 (95% CI, .18-.37), respectively. CONCLUSIONS: This assay may be useful to predict if patients are at low, intermediate, or high risk for the development of subsequent CMV disease after prophylaxis. CLINICAL TRIALS REGISTRATION: NCT00817908.
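The predictive values quoted above follow from simple 2x2 arithmetic once "protection" is taken as the condition being predicted. The counts below are illustrative numbers chosen to land near the reported values; they are not the study's actual table.

```python
# Illustrative counts (not the study's actual 2x2 table).
pos_disease_free, pos_disease = 28, 3    # assay-positive patients (n = 31)
neg_disease_free, neg_disease = 59, 22   # assay-negative patients (n = 81)

# PPV for protection: fraction of assay-positive patients staying disease-free
ppv = pos_disease_free / (pos_disease_free + pos_disease)
# NPV as used here: fraction of assay-negative patients who develop disease
npv = neg_disease / (neg_disease + neg_disease_free)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")   # PPV = 0.90, NPV = 0.27
```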
Abstract:
The objective of this work was to determine the contents of methylxanthines (caffeine and theobromine) and phenolic compounds (chlorogenic and caffeic acids) in 51 mate progenies (half-sib families) and to estimate genetic parameters, including heritability. The progenies came from five Brazilian municipalities, Pinhão, Ivaí, Barão de Cotegipe, Quedas do Iguaçu, and Cascavel, and were grown in the Ivaí locality. The contents of the compounds were obtained by high performance liquid chromatography (HPLC). The estimation of genetic parameters by restricted maximum likelihood (REML) and the prediction of genotypic values via best linear unbiased prediction (BLUP) were obtained with the Selegen - REML/BLUP software. Caffeine (0.248-1.663%) and theobromine (0.106-0.807%) contents differed significantly (p<0.05) depending on the region of origin, with high individual heritability (ĥ²>0.5). The two progeny groups determined for chlorogenic (1.365-2.281%) and caffeic (0.027-0.037%) acid contents did not differ significantly at the 5% level depending on the locality of origin. Individual heritability values were low to medium for chlorogenic (ĥ²<0.4) and caffeic acid (ĥ²<0.3). The contents of the compounds and the values of the genetic parameters could support breeding programs for mate.
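For half-sib families, a common route to individual heritabilities like those reported is a one-way random-effects analysis: the additive variance is approximately four times the among-family variance, so ĥ² = 4σ²_f / (σ²_f + σ²_w). A minimal ANOVA-based sketch under those textbook assumptions follows, with toy data; it is not the Selegen REML/BLUP pipeline.

```python
import numpy as np

def halfsib_heritability(values):
    """Individual narrow-sense heritability from balanced half-sib data.
    values: (families, offspring_per_family) array.
    h2 = 4 * sigma2_family / (sigma2_family + sigma2_within)."""
    f, n = values.shape
    family_means = values.mean(axis=1)
    msb = n * family_means.var(ddof=1)                  # between-family mean square
    msw = ((values - family_means[:, None]) ** 2).sum() / (f * (n - 1))
    sigma2_f = max((msb - msw) / n, 0.0)                # among-family component
    return 4 * sigma2_f / (sigma2_f + msw)

# toy caffeine contents (%) for 5 half-sib families, 6 offspring each
rng = np.random.default_rng(2)
fam_effects = rng.normal(0, 0.15, size=(5, 1))
data = 0.9 + fam_effects + rng.normal(0, 0.2, size=(5, 6))
print(round(halfsib_heritability(data), 2))
```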
Abstract:
The objective of this work was to build mock-ups of complete yerba mate plants at several stages of development, using the InterpolMate software, and to compute photosynthesis on the interpolated structure. The mock-ups of yerba mate were first built in the VPlants software for three growth stages. Male and female plants grown in two contrasting environments (monoculture and forest understory) were considered. To model the dynamic 3D architecture of yerba mate plants during the biennial growth interval between two subsequent prunings, data sets of branch development collected on 38 dates were used. The estimated values obtained from the mock-ups, including leaf photosynthesis and sexual dimorphism, are very close to those observed in the field. However, this similarity was limited to reconstructions that included growth units from the original data sets. The modeling of growth dynamics enables the estimation of photosynthesis for the entire yerba mate plant, which is not easily measurable in the field. The InterpolMate software is efficient for building yerba mate mock-ups.
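The whole-plant estimate described above is, in essence, a sum of leaf-level photosynthesis over every leaf of the reconstructed mock-up. A hedged sketch of that aggregation follows; the light-response function and the leaf records are assumptions for illustration, not InterpolMate's model.

```python
def leaf_photosynthesis(ppfd, a_max=10.0, alpha=0.05):
    """Assumed rectangular-hyperbola light response (umol CO2 m-2 s-1)."""
    return a_max * alpha * ppfd / (a_max + alpha * ppfd)

# each leaf of the mock-up: (leaf area in m2, intercepted PPFD in umol m-2 s-1)
leaves = [(0.0012, 900.0), (0.0010, 400.0), (0.0008, 120.0)]

# whole-plant photosynthesis: leaf rate times leaf area, summed over the crown
total = sum(area * leaf_photosynthesis(ppfd) for area, ppfd in leaves)
print(f"{total:.4f} umol CO2 s^-1")
```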
Abstract:
This report is concerned with the prediction of the long-time creep and shrinkage behavior of concrete. It is divided into three main areas: 1. The development of general prediction methods that can be used by a design engineer when specific experimental data are not available. 2. The development of prediction methods based on experimental data. These methods take advantage of equations developed in item 1, and can be used to accurately predict creep and shrinkage after only 28 days of data collection. 3. Experimental verification of items 1 and 2, and the development of specific prediction equations for four sand-lightweight aggregate concretes tested in the experimental program. The general prediction equations and methods are developed in Chapter II. Standard equations to estimate the creep of normal weight concrete (Eq. 9), sand-lightweight concrete (Eq. 12), and lightweight concrete (Eq. 15) are recommended. These equations are developed for standard conditions (see Sec. 2.1), and the correction factors required to convert creep coefficients obtained from Equations 9, 12, and 15 into valid predictions for other conditions are given in Equations 17 through 23. The correction factors are shown graphically in Figs. 6 through 13. Similar equations and methods are developed for the prediction of the shrinkage of moist cured normal weight concrete (Eq. 30), moist cured sand-lightweight concrete (Eq. 33), and moist cured lightweight concrete (Eq. 36). For steam cured concrete the equations are Eq. 42 for normal weight concrete and Eq. 45 for lightweight concrete. Correction factors are given in Equations 47 through 52 and Figs. 18 through 24. Chapter III summarizes and illustrates, by examples, the prediction methods developed in Chapter II. Chapters IV and V describe an experimental program in which specific prediction equations are developed for concretes made with Haydite manufactured by Hydraulic Press Brick Co. (Eqs. 53 and 54), Haydite manufactured by Buildex Inc. (Eqs. 55 and 56), Haydite manufactured by The Cater-Waters Corp. (Eqs. 57 and 58), and Idealite manufactured by Idealite Co. (Eqs. 59 and 60). General prediction equations are also developed from the data obtained in the experimental program (Eqs. 61 and 62) and are compared to similar equations developed in Chapter II. Creep and shrinkage prediction methods based on 28-day experimental data are developed in Chapter VI. The methods are verified by comparing predicted and measured values of the long-time creep and shrinkage of specimens tested at the University of Iowa (see Chapters IV and V) and elsewhere. The accuracy obtained is shown to be superior to that of other similar methods available to the design engineer.
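The report's own equations are not reproduced in this abstract, but prediction equations of this family typically take a hyperbolic time function scaled by an ultimate value and multiplied by correction factors. A hedged sketch of that general shape follows, using the widely cited ACI 209 forms that grew out of this line of work as an assumption; these are not the report's Eqs. 9 through 62.

```python
def creep_coefficient(t_days, ultimate=2.35, correction=1.0):
    """Creep coefficient t days after loading, ACI 209-style hyperbolic
    form (assumed here; the report's own equations are not reproduced)."""
    return t_days ** 0.6 / (10.0 + t_days ** 0.6) * ultimate * correction

def shrinkage_strain(t_days, ultimate=780e-6, correction=1.0):
    """Shrinkage strain after t days of drying, moist-cured concrete,
    same assumed ACI 209-style form."""
    return t_days / (35.0 + t_days) * ultimate * correction

# predicted values at 28 days and at 5 years under standard conditions
for t in (28, 5 * 365):
    print(f"t={t:>4} d: creep={creep_coefficient(t):.2f}, "
          f"shrinkage={shrinkage_strain(t) * 1e6:.0f}e-6")
```

The correction-factor arguments mirror the report's approach of converting standard-condition predictions to other humidities, member sizes, and loading ages.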