862 results for Algorithms, Properties, the KCube Graphs


Relevance:

100.00%

Publisher:

Abstract:

The aim of this study was to present a new methodology for evaluating the pelvic floor muscle (PFM) passive properties. The properties were assessed in 13 continent women using an intra-vaginal dynamometric speculum and EMG (to ensure the subjects were relaxed) in four different conditions: (1) forces recorded at minimal aperture (initial passive resistance); (2) passive resistance at maximal aperture; (3) forces and passive elastic stiffness (PES) evaluated during five lengthening and shortening cycles; and (4) percentage loss of resistance after 1 min of sustained stretch. The PFMs and surrounding tissues were stretched, at constant speed, by increasing the vaginal antero-posterior diameter; different apertures were considered. Hysteresis was also calculated. The procedure was deemed acceptable by all participants. The median passive forces recorded ranged from 0.54 N (interquartile range 1.52) at minimal aperture to 8.45 N (interquartile range 7.10) at maximal aperture, while the corresponding median PES values were 0.17 N/mm (interquartile range 0.28) and 0.67 N/mm (interquartile range 0.60). Median hysteresis was 17.24 N·mm (interquartile range 35.60) and the median percentage force loss was 11.17% (interquartile range 13.33). This original approach to evaluating the PFM passive properties is very promising for providing better insight into the pathophysiology of stress urinary incontinence and pinpointing conservative treatment mechanisms.

Relevance:

100.00%

Publisher:

Abstract:

In recent years, nanoscience and nanotechnology have emerged as one of the most important and exciting frontier areas of research interest in almost all fields of science and technology. This technology opens the path to many breakthrough changes in the near future in many areas of advanced technological application. Nanotechnology is an interdisciplinary area of research and development. The advent of nanotechnology in modern times, and the beginning of its systematic study, can be traced to a lecture by the famous physicist Richard Feynman. In 1959 he presented a visionary and prophetic lecture at a meeting of the American Physical Society, entitled "There's Plenty of Room at the Bottom", in which he speculated on the possibility and potential of nanosized materials. The synthesis of nanomaterials and nanostructures is an essential aspect of nanotechnology. Studies of the new physical properties and applications of nanomaterials are possible only when materials are made available with the desired size, morphology, crystal structure and chemical composition. Cerium oxide (ceria) is an important functional material with high mechanical strength, thermal stability, excellent optical properties, appreciable oxygen ion conductivity and oxygen storage capacity. Ceria finds a variety of applications in the mechanical polishing of microelectronic devices, as a catalyst in three-way automotive exhaust systems, and as an additive in ceramics and phosphors. Doped ceria usually has enhanced catalytic and electrical properties, which depend on a series of factors such as particle size, structural characteristics and morphology. Ceria-based solid solutions have been widely identified as promising electrolytes for intermediate-temperature solid oxide fuel cells (SOFCs). The success of many promising device technologies depends on suitable powder synthesis techniques. The challenge in introducing new nanopowder synthesis techniques is to preserve high material quality while attaining the desired composition. The method adopted should give reproducible powder properties and high yield, and must be time- and energy-efficient. The use of a variety of new materials in many technological applications has been realized through the use of thin films of these materials. Thus the development of any new material will have good application potential if it can be deposited in thin-film form with the same properties. The advantageous properties of thin films include the possibility of tailoring properties according to film thickness, the small mass of the materials involved and the high surface-to-volume ratio. The synthesis of polymer nanocomposites is an integral aspect of polymer nanotechnology. By inserting nanometric inorganic compounds, the properties of polymers can be improved, and this has many applications depending upon the inorganic filler material present in the polymer.

Relevance:

100.00%

Publisher:

Abstract:

Constraint programming is a powerful technique for solving, among other things, large-scale scheduling problems. Scheduling aims to allocate tasks to resources over time. While executing, a task consumes a resource at a constant rate. Generally, one seeks to optimize an objective function such as the total duration of a schedule (the makespan). Solving a scheduling problem means deciding when each task should start and which resource should execute it. Most scheduling problems are NP-hard; consequently, no known algorithm can solve them in polynomial time. However, there exist specializations of scheduling problems that are not NP-complete. These problems can be solved in polynomial time using algorithms dedicated to them. Our objective is to explore these scheduling algorithms in several varied contexts. Filtering techniques in constraint-based scheduling have evolved considerably in recent years. The prominence of filtering algorithms rests on their ability to shrink the search tree by excluding domain values that cannot participate in any solution of the problem. We propose improvements and present more efficient filtering algorithms for solving classic scheduling problems. In addition, we present adaptations of filtering techniques for the case where tasks can be delayed. We also consider various properties of industrial problems and solve more efficiently problems where the optimization criterion is not necessarily the completion time of the last task. For example, we present polynomial-time algorithms for the case where the amount of resource fluctuates over time, or where the cost of executing a task at time t depends on t.
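
To make the notion of filtering concrete, the sketch below shows a naive time-tabling propagator for a single cumulative resource: it builds the profile of compulsory parts and delays the earliest start time of any task that would overload the resource. This is a minimal illustration under assumed data structures (the Task fields, capacity and horizon are not from the thesis), not the improved algorithms proposed in it; a practical propagator would use a sweep line and iterate to a fixed point.

```python
# Minimal time-tabling filtering sketch for one cumulative resource.
# Assumptions (not from the thesis): tasks have integer start-time windows
# [est, lst], fixed durations and demands; `capacity` and `horizon` are given.

from dataclasses import dataclass

@dataclass
class Task:
    est: int   # earliest start time
    lst: int   # latest start time
    dur: int   # processing time
    dem: int   # resource demand

def timetable_filter(tasks, capacity, horizon):
    """Tighten earliest start times using compulsory parts.

    A task surely runs during [lst, est + dur) in every solution (its
    compulsory part).  If starting a task early enough to cover a time
    point would exceed the capacity there, that start date is pruned.
    """
    # Aggregate the compulsory demand profile over the horizon.
    profile = [0] * horizon
    for t in tasks:
        for u in range(max(t.lst, 0), min(t.est + t.dur, horizon)):
            profile[u] += t.dem

    changed = False
    for t in tasks:
        est = t.est
        u = est
        while u < est + t.dur:
            # Demand of the other tasks at time u, plus t's full demand
            # if t were scheduled to cover u.
            own = t.dem if t.lst <= u < t.est + t.dur else 0
            if u < horizon and profile[u] - own + t.dem > capacity:
                est = u + 1        # t cannot cover time point u: start later
                u = est
            else:
                u += 1
        if est > t.lst:
            raise ValueError("resource overload: no feasible start time")
        if est != t.est:
            t.est, changed = est, True
    return changed   # True if at least one domain was tightened

# Example: task B is fixed to start at 0, so task A must wait until B ends.
tasks = [Task(est=0, lst=6, dur=3, dem=2), Task(est=0, lst=0, dur=5, dem=2)]
timetable_filter(tasks, capacity=3, horizon=12)
print([t.est for t in tasks])   # -> [5, 0]
```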

Relevance:

100.00%

Publisher:

Abstract:

Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges, and each edge is labeled with a semantic annotation. Hence, a huge single graph can express many different relationships between entities. The Semantic Web represents each single fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with the predicate. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed. Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web, and the graph queries of other graph DBMSs, can also be viewed as subgraph matching over large graphs. Though subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms to find the top-k answers via a suite of intelligent pruning techniques. The suggested models cover a practically important subset of the SPARQL query language augmented with some additional useful features. The first model, called Substitution Importance Query (SIQ), identifies the top-k answers whose scores are calculated from the matched vertices' properties in each answer in accordance with a user-specified notion of importance. The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on top of various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them according to user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. The probability is calculated from various aspects of the answers, such as the number of mapped blocks and the vertices' properties in each block, and the top-k most probable answers are returned. An important distinguishing feature of our work is that we allow the user a huge amount of freedom in specifying: (i) what pattern and approximation he considers important, (ii) how to score answers, irrespective of whether they are vertices or substitutions, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask. The proposed algorithms for the first model can also be used for answering SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY and LIMIT. We test our algorithms on multiple real-world graph databases, showing that our algorithms are far more efficient than popular triple stores.
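
For intuition, here is a minimal brute-force sketch of top-k subgraph matching over an edge-labeled graph, ranked by a user-supplied scoring function and kept in a bounded heap. The toy graph, the query format and the vertex-importance scores are illustrative assumptions; the thesis' SIQ, VIQ, AIQ and PIQ models and their pruning techniques are far more elaborate.

```python
# Brute-force top-k subgraph matching over an edge-labeled graph.
# Illustrative only: the data layout, query format and `score` function are
# assumptions; real systems add indexing and intelligent pruning.

import heapq
from itertools import permutations

# Data graph: set of labeled edges (subject, predicate, object).
edges = {
    ("alice", "follows", "bob"),
    ("bob", "follows", "carol"),
    ("alice", "likes", "carol"),
    ("carol", "follows", "alice"),
}
vertex_score = {"alice": 3.0, "bob": 1.0, "carol": 2.0}   # assumed importance

# Query pattern: edges over variables (strings starting with '?').
query = [("?x", "follows", "?y")]

def matches(query, edges):
    """Yield every injective variable substitution that maps the pattern into the graph."""
    variables = sorted({v for e in query for v in (e[0], e[2]) if v.startswith("?")})
    vertices = {v for (s, _, o) in edges for v in (s, o)}
    for assignment in permutations(vertices, len(variables)):
        sub = dict(zip(variables, assignment))
        bind = lambda term: sub.get(term, term)
        if all((bind(s), p, bind(o)) in edges for (s, p, o) in query):
            yield sub

def top_k(query, edges, k, score):
    """Keep only the k best substitutions according to `score` (bounded min-heap)."""
    heap = []
    for sub in matches(query, edges):
        heapq.heappush(heap, (score(sub), tuple(sorted(sub.items()))))
        if len(heap) > k:
            heapq.heappop(heap)          # drop the current worst answer
    return sorted(heap, reverse=True)

# Example: rank answers by the summed importance of their matched vertices.
print(top_k(query, edges, k=2,
            score=lambda sub: sum(vertex_score[v] for v in sub.values())))
```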

Relevance:

100.00%

Publisher:

Abstract:

The objective of this study was to evaluate the chemical, color, textural and sensorial characteristics of Serra da Estrela cheese and also to identify the factors affecting these properties, namely thistle ecotype, place of production, dairy and maturation. The results demonstrated that the cheeses lost weight mostly during the first stage of maturation, which was negatively correlated with moisture content; the same was observed for fat and protein contents. During maturation the cheeses became darker and acquired a yellowish coloration. A strong correlation was found between ash and chloride contents, the latter being directly related to the salt added in the manufacturing process. The flesh firmness showed a strong positive correlation with the rind hardness and the firmness of the inner paste. Stickiness was strongly related to all the other textural properties, indicative of the creamy nature of the paste. Adhesiveness was positively correlated with moisture content and negatively correlated with maturation time. The trained panelists liked the cheeses, giving high overall assessment scores, but these were not significantly correlated with the physicochemical properties. The salt differences between cheeses were not evident to the panelists, which was corroborated by the absence of correlation between the perception of saltiness and the analyzed chloride contents. The factorial analysis of the chemical and physical properties showed that they could be explained by two factors, one associated with the texture and the color and the other associated with the chemical properties. Finally, there was a clear influence of the thistle ecotype, place of production and dairy factors on the analyzed properties.

Relevance:

100.00%

Publisher:

Abstract:

Harnessing the power of nuclear reactions has brought huge benefits in terms of nuclear energy, medicine and defence, as well as risks, including the management of nuclear wastes. One of the main issues for radioactive waste management is liquid radioactive waste (LRW). Different methods have been applied to remediate LRW, among them ion exchange and adsorption. Comparative studies have demonstrated that Na2Ti2O3SiO4·2H2O titanosilicate sorption materials are the most promising in terms of Cs+ and Sr2+ retention from LRW; therefore, these TiSi materials became the object of this study. The sol-gel method of synthesizing these materials, recently developed in Ukraine, was chosen among the other reported approaches since it allows obtaining the TiSi materials in the form of particles with size ≥ 4 mm, uses inexpensive and bulk stable inorganic precursors, and yields materials with the desired properties under comparatively mild synthesis conditions. The main aim of this study was to investigate the physico-chemical properties of sol-gel synthesized titanosilicates for radionuclide uptake from aqueous solutions. The effect of synthesis conditions on the structural and sorption parameters of TiSi xerogels was to be determined in order to obtain a highly efficient sorption material, and the ability of the obtained TiSi materials to retain Cs+, Sr2+ and other potentially toxic metal cations from synthetic and real aqueous solutions was to be assessed. We expect these studies to illustrate the efficiency and profitability of the chosen synthesis approach, synthesis conditions and the obtained materials. X-ray diffraction, low-temperature adsorption/desorption surface area analysis, X-ray photoelectron spectroscopy, infrared spectroscopy and scanning electron microscopy with energy dispersive X-ray spectroscopy were used for xerogel characterization. The sorption capability of the synthesized TiSi gels was studied as a function of pH, adsorbent mass, initial concentration of the target ion, contact time, temperature, and the composition and concentration of the background solution. It was found that the applied sol-gel approach yielded materials with a poorly crystalline sodium titanosilicate structure under relatively mild synthesis conditions. The temperature of HTT has the strongest influence on the structure of the materials and was consequently concluded to be the controlling factor for the preparation of gels with the desired properties. The obtained materials proved to be effective and selective for both Sr2+ and Cs+ decontamination of synthetic and real aqueous solutions such as drinking, ground, sea and mine waters, blood plasma and liquid radioactive wastes.

Relevance:

100.00%

Publisher:

Abstract:

In this work the effect of pre-treatments on the physical properties of fresh kiwi was studied. For that, a set of tests using chemical pre-treatments was carried out, in which the samples were subjected to aqueous solutions of ascorbic acid and potassium metabisulfite at concentrations of 0.25% and 1% (w/v) for periods of 30 and 60 minutes, in order to understand the effect of the treatments on the color and texture of the kiwi as compared to its original properties. The results showed that the kiwi treated with ascorbic acid changed its color very intensely compared to the fresh product, and this trend was intensified after storage. In contrast, when potassium metabisulfite was used, the changes in color were quite negligible right after the treatment and even smaller after the storage period of 6 days under refrigeration. After the treatments with both solutions, the kiwi texture was drastically changed, with hardness diminishing considerably and elasticity increasing for all treatments. The same was observed after six days of refrigeration.

Relevance:

100.00%

Publisher:

Abstract:

John Frazer's architectural work is inspired by living and generative processes. Both evolutionary and revolutionary, it explores information ecologies and the dynamics of the spaces between objects. Fuelled by an interest in the cybernetic work of Gordon Pask and Norbert Wiener, and the possibilities of the computer and the "new science" it has facilitated, Frazer and his team of collaborators have conducted a series of experiments that utilize genetic algorithms, cellular automata, emergent behaviour, complexity and feedback loops to create a truly dynamic architecture. Frazer studied at the Architectural Association (AA) in London from 1963 to 1969, and later became unit master of Diploma Unit 11 there. He was subsequently Director of Computer-Aided Design at the University of Ulster - a post he held while writing An Evolutionary Architecture in 1995 - and a lecturer at the University of Cambridge. In 1983 he co-founded Autographics Software Ltd, which pioneered microprocessor graphics. Frazer was awarded a personal chair at the University of Ulster in 1984. In Frazer's hands, architecture becomes machine-readable, formally open-ended and responsive. His work as computer consultant to Cedric Price's Generator Project of 1976 (see p84) led to the development of a series of tools and processes; these have resulted in projects such as the Calbuild Kit (1985) and the Universal Constructor (1990). These subsequent computer-orientated architectural machines are makers of architectural form beyond the full control of the architect-programmer. Frazer makes much reference to the multi-celled relationships found in nature, and their ongoing morphosis in response to continually changing contextual criteria. He defines the elements that describe his evolutionary architectural model thus: "A genetic code script, rules for the development of the code, mapping of the code to a virtual model, the nature of the environment for the development of the model and, most importantly, the criteria for selection." In setting out these parameters for designing evolutionary architectures, Frazer goes beyond the usual notions of architectural beauty and aesthetics. Nevertheless his work is not without an aesthetic: some pieces are a frenzy of mad wire, while others have a modularity that is reminiscent of biological form. Algorithms form the basis of Frazer's designs. These algorithms determine a variety of formal results dependent on the nature of the information they are given. His work, therefore, is always dynamic, always evolving and always different. Designing with algorithms is also critical to other architects featured in this book, such as Marcos Novak (see p150). Frazer has made an unparalleled contribution to defining architectural possibilities for the twenty-first century, and remains an inspiration to architects seeking to create responsive environments. Architects were initially slow to pick up on the opportunities that the computer provides. These opportunities are both representational and spatial: computers can help architects draw buildings and, more importantly, they can help architects create varied spaces, both virtual and actual. Frazer's work was groundbreaking in this respect, and well before its time.
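
Read as pseudocode, the five elements Frazer lists map onto a standard genetic-algorithm loop. The sketch below is a generic, illustrative GA under assumed names and parameters (genome length, mutation rate, a toy fitness and "environment"), not a reconstruction of Frazer's own software.

```python
# A generic genetic-algorithm loop mirroring the five elements Frazer lists:
# a code script (genome), development rules, mapping to a virtual model,
# an environment, and selection criteria.  All parameters are illustrative.

import random

GENOME_LENGTH = 16
POPULATION = 30
MUTATION_RATE = 0.05

def develop(genome):
    """Development rules + mapping: turn the code script into a 'virtual model'.
    Here the model is simply a list of cumulative segment heights."""
    return [sum(genome[:i + 1]) for i in range(len(genome))]

def fitness(model, environment):
    """Selection criteria: how well the developed form fits its environment.
    Toy criterion: heights should track a target profile."""
    return -sum(abs(h - t) for h, t in zip(model, environment))

def evolve(environment, generations=100):
    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(POPULATION)]
    for _ in range(generations):
        ranked = sorted(population,
                        key=lambda g: fitness(develop(g), environment),
                        reverse=True)
        parents = ranked[:POPULATION // 2]           # selection
        children = []
        while len(children) < POPULATION:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GENOME_LENGTH)
            child = a[:cut] + b[cut:]                # crossover
            child = [1 - bit if random.random() < MUTATION_RATE else bit
                     for bit in child]               # mutation
            children.append(child)
        population = children
    return max(population, key=lambda g: fitness(develop(g), environment))

target = [i // 2 for i in range(GENOME_LENGTH)]      # assumed 'environment'
best = evolve(target)
print(best, develop(best))
```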

Relevance:

100.00%

Publisher:

Abstract:

Despite the advances that have been made in relation to the valuation of commercial, industrial and retail property, there has not been the same progress in relation to the valuation of rural property. Although the majority of rural property valuations also require the valuer to carry out a full analysis of the economic performance of the farming operations, this information is rarely used to assess the value of the property, nor is it even used for a secondary valuation method. Over the past 20 years the nature of rural valuation practice has required rural valuers to undertake studies in both agriculture (farm management) and valuation, especially if carrying out valuation work for financial institutions. The additional farm financial information obtained by rural valuers exceeds the level of information required to value commercial, retail and industrial property by the capitalisation of net rent/profit method, and is very similar to the level of information required for the valuation of commercial and retail property by the Discounted Cash Flow valuation method. On this basis, valuers specialising in rural valuation practice should have the necessary skills and information to value rural properties by an income valuation method. Although the direct comparison method of valuation has been sufficient in the past to value rural properties, its future use as the main valuation method is limited, and valuers need to adopt an income valuation method at least as a secondary method to overcome the problems associated with using direct comparison as the only rural property valuation method. This paper will review the results of an extensive survey carried out by rural property valuers in New South Wales (NSW), Australia, in relation to the impact of farm management on rural property values and rural property income potential.
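
To make the income-method contrast concrete, the sketch below computes a value by direct capitalisation and by a simple discounted cash flow. The net income figures, capitalisation rate, discount rate and terminal-value treatment are all assumptions for illustration, not drawn from the survey.

```python
# Two income-based valuation sketches: direct capitalisation of a stabilised
# net income, and a simple discounted cash flow (DCF).  All figures assumed.

def capitalised_value(net_income, cap_rate):
    """Value = stabilised annual net income / capitalisation rate."""
    return net_income / cap_rate

def dcf_value(cash_flows, discount_rate, terminal_cap_rate):
    """Present value of forecast net cash flows plus a capitalised terminal value."""
    pv = sum(cf / (1 + discount_rate) ** year
             for year, cf in enumerate(cash_flows, start=1))
    terminal = cash_flows[-1] / terminal_cap_rate     # value at end of forecast
    return pv + terminal / (1 + discount_rate) ** len(cash_flows)

# Assumed farm net cash flows over a 5-year forecast (dollars per year).
flows = [120_000, 125_000, 118_000, 130_000, 135_000]

print(round(capitalised_value(125_000, cap_rate=0.06)))                       # ~2,083,333
print(round(dcf_value(flows, discount_rate=0.08, terminal_cap_rate=0.06)))
```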

Relevance:

100.00%

Publisher:

Abstract:

Microsphere systems with the ideal properties for bone regeneration need to be bioactive and at the same time possess the capacity for controlled protein/drug delivery; however, current microsphere systems fail to fulfill these requirements. The aim of this study was to develop a novel protein-delivery system of bioactive mesoporous glass (MBG) microspheres by a biomimetic method, through controlling the density of apatite on the surface of the microspheres, for potential bone tissue regeneration. MBG microspheres were prepared using the method of alginate cross-linking with Ca2+ ions. The cellular bioactivity of the MBG microspheres was evaluated by investigating the proliferation and attachment of bone marrow stromal cells (BMSCs). The loading efficiency and release kinetics of bovine serum albumin (BSA) on MBG microspheres were investigated after co-precipitation with biomimetic apatite in simulated body fluid (SBF). The results showed that MBG microspheres supported BMSC attachment and that the Si-containing ionic products from MBG microspheres stimulated BMSC proliferation. The density of apatite on the MBG microspheres increased with the length of soaking time in SBF. The BSA-loading efficiency of MBG was significantly enhanced by co-precipitation with apatite. Furthermore, the loading efficiency and release kinetics of BSA could be tuned by controlling the density of apatite formed on the MBG microspheres. Our results suggest that MBG microspheres are a promising protein-delivery system as a filling material for bone defect healing and regeneration.

Relevance:

100.00%

Publisher:

Abstract:

Web service composition is an important problem in web service based systems. It concerns how to build a new value-added web service using existing web services. A web service may have many implementations, all of which have the same functionality but may have different QoS values. Thus, a significant research problem in web service composition is how to select an implementation for each of the web services such that the composite web service gives the best overall performance. This is the so-called optimal web service selection problem. There may be mutual constraints between some web service implementations. Sometimes when an implementation is selected for one web service, a particular implementation for another web service must be selected; this is the so-called dependency constraint. Sometimes when an implementation for one web service is selected, a set of implementations for another web service must be excluded from the web service composition; this is the so-called conflict constraint. Thus, the optimal web service selection is a typical constrained combinatorial optimization problem from the computational point of view. This paper proposes a new hybrid genetic algorithm for the optimal web service selection problem. The hybrid genetic algorithm has been implemented and evaluated. The evaluation results have shown that the hybrid genetic algorithm outperforms two other existing genetic algorithms when the number of web services and the number of constraints are large.
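
As a concrete illustration of the problem encoding (not the paper's algorithm), a candidate solution can be represented as one implementation index per abstract service, with dependency and conflict constraints checked during fitness evaluation. The QoS data, weights and penalty scheme below are assumptions, and a simple random search stands in for the evolutionary loop.

```python
# Encoding sketch for optimal web service selection: one implementation index
# per abstract service, aggregate QoS as fitness, with penalties for violated
# dependency and conflict constraints.  Data and weights are illustrative.

import random

# qos[s][i] = (response_time, cost) of implementation i of service s.
qos = [
    [(120, 5.0), (90, 8.0), (150, 3.0)],
    [(200, 2.0), (180, 4.0)],
    [(60, 6.0), (75, 5.0), (50, 9.0)],
]

# Dependency: choosing impl 1 of service 0 forces impl 0 of service 1.
dependencies = [((0, 1), (1, 0))]
# Conflict: impl 2 of service 0 excludes impl 1 of service 2.
conflicts = [((0, 2), (2, 1))]

PENALTY = 1_000.0

def fitness(chromosome):
    """Lower is better: weighted response time and cost, plus constraint penalties."""
    total_rt = sum(qos[s][i][0] for s, i in enumerate(chromosome))
    total_cost = sum(qos[s][i][1] for s, i in enumerate(chromosome))
    value = total_rt + 10.0 * total_cost          # assumed QoS weights
    for (s1, i1), (s2, i2) in dependencies:
        if chromosome[s1] == i1 and chromosome[s2] != i2:
            value += PENALTY
    for (s1, i1), (s2, i2) in conflicts:
        if chromosome[s1] == i1 and chromosome[s2] == i2:
            value += PENALTY
    return value

def random_chromosome():
    return [random.randrange(len(impls)) for impls in qos]

# Tiny random search standing in for the genetic algorithm's evolutionary loop.
best = min((random_chromosome() for _ in range(2000)), key=fitness)
print(best, fitness(best))
```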

Relevance:

100.00%

Publisher:

Abstract:

In this paper, the optimal design of an active flow control device, the Shock Control Bump (SCB), on the suction and pressure sides of a transonic aerofoil to reduce total transonic drag is investigated. Two optimisation test cases are conducted using different advanced Evolutionary Algorithms (EAs): the first optimiser is the Hierarchical Asynchronous Parallel Evolutionary Algorithm (HAPMOEA), based on canonical Evolutionary Strategies (ES); the second optimiser is HAPMOEA hybridised with one of the well-known game strategies, the Nash game. Numerical results show that the SCB significantly reduces drag, by 30% when compared to the baseline design. In addition, the use of a Nash-game strategy as a pre-conditioner of global control saves up to 90% of the computational cost when compared to the first optimiser, HAPMOEA.

Relevance:

100.00%

Publisher:

Abstract:

The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm. Among these advantages are a modular structure, easily guaranteed stability, less sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing the average tracking model of these coefficients under the stochastic gradient lattice algorithm. The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using these analytical results, we show a new property, the polynomial-order reducing property of adaptive lattice filters, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Considering two examples, we show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that, using this technique, a better probability of detection is obtained for the reduced-order phase signals compared to that of the traditional energy detector. It is also shown empirically that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that, using this technique, a lower mean square error is achieved for the estimated frequencies at high signal-to-noise ratios in comparison to that of the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions.
The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which gives desirable results for finite-variance input signals (such as frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes (because it relies on the minimum mean-square error criterion). To deal with such problems, the concepts of the minimum dispersion criterion and fractional lower-order moments, as well as recently developed algorithms for stable processes, are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean P-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower-order moments. Simulation results show that, using the proposed algorithms, faster convergence speeds are achieved for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness in comparison to many other algorithms. We also discuss the effect of the impulsiveness of stable processes on the misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated only through extensive computer simulations.
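
For orientation, the sketch below shows a textbook single-stage-by-stage gradient adaptive lattice (GAL) update for real-valued signals. It is a generic illustration under assumed parameters, not the thesis' least-mean P-norm lattice algorithm or its normalized version, which replace the squared-error criterion with fractional lower-order moments.

```python
# Textbook gradient adaptive lattice (GAL) for real-valued signals: forward and
# backward prediction errors propagate through the stages and each reflection
# coefficient is adapted by a stochastic gradient step (a normalized step size
# is often used in practice).  Order, step size and the test signal are assumed.

import numpy as np

def gal_filter(x, order=2, mu=0.005):
    """Run an `order`-stage adaptive lattice over signal x.

    Returns the trajectory of the reflection coefficients (one row per sample).
    """
    k = np.zeros(order)            # reflection coefficients
    b_prev = np.zeros(order + 1)   # delayed backward errors b_{m-1}(n-1)
    history = np.zeros((len(x), order))

    for n, sample in enumerate(x):
        f = np.zeros(order + 1)    # forward errors f_m(n)
        b = np.zeros(order + 1)    # backward errors b_m(n)
        f[0] = b[0] = sample
        for m in range(1, order + 1):
            f[m] = f[m - 1] - k[m - 1] * b_prev[m - 1]
            b[m] = b_prev[m - 1] - k[m - 1] * f[m - 1]
            # Stochastic gradient update of the reflection coefficient.
            k[m - 1] += mu * (f[m] * b_prev[m - 1] + b[m] * f[m - 1])
        b_prev = b.copy()
        history[n] = k
    return history

# Example: adapt on a first-order autoregressive signal in white noise;
# the first reflection coefficient should approach the AR coefficient 0.8.
rng = np.random.default_rng(0)
x = np.zeros(5000)
for n in range(1, len(x)):
    x[n] = 0.8 * x[n - 1] + rng.standard_normal()
print(gal_filter(x, order=2, mu=0.005)[-1])   # final reflection coefficients
```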

Relevance:

100.00%

Publisher:

Abstract:

In Australia and many other countries worldwide, water used in the manufacture of concrete must be potable. It is currently thought that concrete properties are highly influenced by the type of water used and its proportion in the concrete mix, but there is actually little knowledge of the effects of different, alternative water sources used in concrete mix design. Therefore, the identification of the level and nature of contamination in available water sources, and their subsequent influence on concrete properties, is becoming increasingly important. Of most interest is the recycled washout water currently used by batch plants as mixing water for concrete. Recycled washout water is the water used onsite for a variety of purposes, including washing of truck agitator bowls, wetting down of aggregate, and run-off. This report presents current information on the quality of concrete mixing water in terms of mandatory limits and guidelines on impurities, as well as investigating the impact of recycled washout water on concrete performance. It also explores new sources of recycled water in terms of their quality and suitability for use in concrete production. The complete recycling of washout water has been considered for use in concrete mixing plants because of the great benefits in terms of reducing waste disposal costs and of environmental conservation. The objective of this study was to investigate the effects of using washout water on the properties of fresh and hardened concrete. This was carried out through a 10-week sampling program from three representative sites across South East Queensland. The sample sites chosen represented a cross-section of plant recycling methods, from most effective to least effective. The washout water samples collected from each site were then analysed in accordance with Standards Association of Australia AS/NZS 5667.1:1998. These tests revealed that, compared with tap water, the washout water was higher in alkalinity, pH and total dissolved solids content. However, washout water with a total dissolved solids content of less than 6% could be used in the production of concrete with acceptable strength and durability. These results were then interpreted using the chemometric techniques Principal Component Analysis and SIMCA, and the Multi-Criteria Decision Making methods PROMETHEE and GAIA were used to rank the samples from cleanest to least clean. It was found that even the simplest purifying processes provided washout water suitable for the manufacture of concrete. These results were compared to a series of alternative water sources, including treated effluent, sea water and dam water, which were subjected to the same testing parameters as the reference set. Analysis of these results also found that, despite having higher levels of both organic and inorganic constituents, these waters complied with the parameter thresholds given in ASTM C913-08. All of the alternative sources were found to be suitable sources of water for the manufacture of plain concrete.