889 results for Exposure-time


Relevance:

60.00%

Publisher:

Abstract:

Oxygen exposure has a large impact on lipid biomarker preservation in surface sediments and may affect the application of organic proxies used for reconstructing past environmental conditions. To determine its effect on long chain alkyl diol and keto-ol based proxies, the distributions of these lipids were studied in nine surface sediments from the Murray Ridge in the Arabian Sea, obtained from varying water depths (900 to 3000 m) but in close lateral proximity and, therefore, likely receiving a similar particle flux. Due to substantial differences in bottom water oxygen concentration (<3 to 77 µmol/L) and sedimentation rate, large differences exist in the time the biomarker lipids are exposed to oxygen in the sediment. Long chain alkyl diol and keto-ol concentrations in the surface sediments (0-0.5 cm) decreased progressively with increasing oxygen exposure time, suggesting increased oxic degradation. The 1,15-keto-ol/diol ratio (DOXI) increased slightly with oxygen exposure time, as diols apparently had slightly higher degradation rates than keto-ols. The ratios of 1,14- vs. 1,13- or 1,15-diols, used as upwelling proxies, did not show substantial changes. However, the C30 1,15-diol exhibited a slightly higher degradation rate than the C28 and C30 1,13-diols, and thus the Long chain Diol Index (LDI), used as a sea surface temperature proxy, showed a negative correlation with the maximum residence time in the oxic zone of the sediment, corresponding to a change of ca. 2-3.5 °C when translated to temperature. The UK'37 index did not show significant changes with increasing oxygen exposure. This suggests that oxic degradation may affect temperature reconstructions using the LDI in oxic settings and where oxygen concentrations have varied substantially over time.
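
The progressive loss and proxy bias described above can be sketched as simple first-order oxic degradation. The rate constants and initial abundances below are purely hypothetical illustrations (the abstract reports only that the C30 1,15-diol degrades slightly faster than the 1,13-diols), not values from the study:

```python
import math

def remaining_fraction(k_per_kyr: float, t_ox_kyr: float) -> float:
    """First-order loss: fraction of a lipid surviving oxygen exposure time t_ox."""
    return math.exp(-k_per_kyr * t_ox_kyr)

def ldi(c30_1_15: float, c28_1_13: float, c30_1_13: float) -> float:
    """Long chain Diol Index: C30 1,15-diol over the sum of the three diols."""
    return c30_1_15 / (c30_1_15 + c28_1_13 + c30_1_13)

# Hypothetical initial abundances and degradation rate constants (illustrative only).
init = {"c30_1_15": 50.0, "c28_1_13": 25.0, "c30_1_13": 25.0}
k = {"c30_1_15": 0.30, "c28_1_13": 0.25, "c30_1_13": 0.25}  # 1,15-diol decays faster

for t in (0.0, 2.0, 4.0):  # oxygen exposure time, kyr
    c = {name: init[name] * remaining_fraction(k[name], t) for name in init}
    print(f"t_ox = {t:.0f} kyr  LDI = {ldi(c['c30_1_15'], c['c28_1_13'], c['c30_1_13']):.3f}")
```

With the faster-decaying C30 1,15-diol, the LDI drifts downward as oxygen exposure time grows, mirroring the negative correlation reported above.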

Relevance:

60.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance:

60.00%

Publisher:

Abstract:

Atmospheric corrosion tests, according to ASTM G50, have been carried out in Queensland, Australia, at three different sites representing three different environmental conditions. A range of materials including primary copper (electrosheet) and electrolytic tough pitch (traditional cold rolled) copper have been exposed. Data are available for five exposure periods over a three-year time span. X-ray diffraction has been used to determine the composition of the corrosion products. Corrosion rates have been determined for each material at each of the exposure sites and are compared with corrosion rates obtained from other long-term atmospheric corrosion test programs. Primary copper sheet (electrosheet) behaves like traditionally produced cold rolled copper (C11000) sheet but with an increased corrosion rate. This difference between the rolled copper samples and the primary copper samples is probably due to a combination of factors related to the difference in crystallographic texture of the underlying copper, the morphology and texture of the cuprite layer, the surface roughness of the sheets, and the differences in mass. These factors combine to produce an increased oxidation rate and time of wetness (TOW) for the electrosheet material, an effect that is significantly greater at the more tropical sites. For a sulfate environment (Urban) the initial corrosion product is cuprite, with posnjakite and brochantite also occurring at longer exposures. Posnjakite is either washed away or converted to brochantite during further exposure. The amount of brochantite increases with exposure time and forms the blue-green patina layer. For a chloride environment (Marine) the initial corrosion product is cuprite, with atacamite also occurring at longer exposures.

Relevance:

60.00%

Publisher:

Abstract:

Ultem 1000 polyetherimide films prepared by a cast-evaporating technique were covered with a 1H,1H,2H-tridecafluoro-oct-1-ene (PFO) plasma-polymerized layer. The effects of the plasma exposure time on the surface composition were studied by X-ray photoelectron spectroscopy, Fourier transform infrared spectroscopy, and surface energy analysis. The surface topography of the plasma layer was deduced from scanning electron microscopy. The F/C ratio for plasma-polymerized PFO at an input RF power of 50 W can be as high as 1.30 for 480 s of exposure, and ~0.4-2 at.% of oxygen was detected, resulting from the reaction of long-lived radicals in the plasma polymer with atmospheric oxygen. The plasma deposition of the fluorocarbon coating from plasma PFO reduces the surface energy from 46 to 18.3 mJ m(-2). (c) 2006 Wiley Periodicals, Inc.
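
The F/C ratio quoted above is simply the ratio of fluorine to carbon atomic concentrations from the XPS survey scan. A minimal sketch, with hypothetical atomic percentages chosen only so the ratio matches the reported maximum of 1.30:

```python
def fc_ratio(atomic_percent: dict) -> float:
    """F/C atomic ratio from XPS survey-scan atomic concentrations (at.%)."""
    return atomic_percent["F"] / atomic_percent["C"]

# Hypothetical surface composition (illustrative only, sums to 100 at.%):
surface = {"F": 55.6, "C": 42.8, "O": 1.6}
print(f"F/C = {fc_ratio(surface):.2f}")
```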

Relevance:

60.00%

Publisher:

Abstract:

BACKGROUND. Regular physical activity is strongly advocated in children, with recommendations suggesting up to several hours of daily participation. However, an unintended consequence of physical activity is exposure to the risk of injury. To date, these risks have not been quantified in primary school-aged children despite injury being a leading cause for hospitalization and death in this population. OBJECTIVE. Our goal was to quantify the risk of injury associated with childhood physical activity both in and out of the school setting and calculate injury rates per exposure time for organized and non-organized activity outside of school. METHODS. The Childhood Injury Prevention Study prospectively followed a cohort of randomly selected Australian primary school- and preschool-aged children (4 to 12 years). Over 12 months, each injury that required first aid attention was registered with the study. Exposure to physical activity outside school hours was measured by using a parent-completed 7-day diary. The age and gender distribution of injury rates per 10 000 hours of exposure were calculated for all activity and for organized and non-organized activity occurring outside school hours. In addition, child-based injury rates were calculated for physical activity-related injuries both in and out of the school setting. RESULTS. Complete diary and injury data were available for 744 children. There were 504 injuries recorded over the study period, 396 (88.6%) of which were directly related to physical activity. Thirty-four percent of physical activity-related injuries required professional medical treatment. Analysis of injuries occurring outside of school revealed an overall injury rate of 5.7 injuries per 10 000 hours of exposure to physical activity and a medically treated injury rate of 1.7 per 10 000 hours. CONCLUSION. 
Injury rates per hours of exposure to physical activity were low in this cohort of primary school-aged children, with < 2 injuries requiring medical treatment occurring for every 10 000 hours of activity participation outside of school.
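
The exposure-based rates above are computed by normalizing injury counts to diary-recorded activity hours. A minimal sketch with hypothetical counts and hours (the study's raw exposure totals are not given in the abstract):

```python
def rate_per_10000h(injuries: int, exposure_hours: float) -> float:
    """Injury rate expressed per 10 000 person-hours of activity exposure."""
    return injuries / exposure_hours * 10_000

# Hypothetical example: 57 injuries over 100 000 summed diary hours yields the
# kind of overall rate reported in the study (5.7 per 10 000 h).
print(rate_per_10000h(57, 100_000))
```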

Relevance:

60.00%

Publisher:

Abstract:

Background: Children engage in various physical activities that pose different injury risks. However, the lack of adequate data on exposure has meant that these risks have not been quantified or compared in young children aged 5-12 years. Objectives: To measure exposure to popular activities among Australian primary school children and to quantify the associated injury risks. Method: The Childhood Injury Prevention Study prospectively followed up a cohort of randomly selected Australian primary and preschool children aged 5-12 years. Time (min) engaged in various physical activities was measured using a parent-completed 7-day diary. All injuries over 12 months were reported to the study. All data on exposure and injuries were coded using the International Classification of External Causes of Injury. Injury rates per 1000 h of exposure were calculated for the most popular activities. Results: Complete diaries and data on injuries were available for 744 children. Over 12 months, 314 injuries relating to physical activity outside of school were reported. The highest injury risks per exposure time occurred for tackle-style football (2.18/1000 h), wheeled activities (1.72/1000 h) and tennis (1.19/1000 h). Overall, boys were injured more often than girls; however, the differences were non-significant or reversed for some activities including soccer, trampolining and team ball sports. Conclusion: Although the overall injury rate was low in this prospective cohort, the safety of some popular childhood activities can be improved so that the benefits may be enjoyed with fewer negative consequences.

Relevance:

60.00%

Publisher:

Abstract:

This paper describes the results of atmospheric corrosion testing and of an examination of patina samples from Brisbane, Denmark, Sweden, France, USA and Austria. The aim was threefold: (1) to determine the structure of natural patinas and to relate their structure to their appearance in service and to the atmospheric corrosion of copper; (2) to understand why a brown rust coloured layer forms on the surface of some copper patinas; (3) to understand why some patinas are still black in colour despite being of significant age. During the atmospheric corrosion of copper, a two-layer patina forms on the copper surface. Cuprite is the initial corrosion product and is always the patina layer in contact with the copper. The growth laws describing patina formation indicate that the decreasing corrosion rate with increasing exposure time is due to the protective nature of the cuprite layer. The green patinas were typically characterised by an outer layer of brochantite, which forms as individual crystals on the surface of the cuprite layer, probably by a precipitation reaction from an aqueous surface layer on the cuprite layer. Natural patinas come in a variety of colours. The colour is controlled by the amount of the patina and its chemical composition. Thin patinas containing predominantly cuprite were black. If the patina was sufficiently thick, and the [Fe]/[Cu] ratio was low, then the patina was green, whereas if the [Fe]/[Cu] ratio was approximately 10 at%, then the patina was rust brown in colour. The iron was in solid solution in the brochantite, which might be designated as a (copper/iron) hydroxysulphate. In the brown patinas examined, the iron was distributed predominantly in the outermost part of the patina. (c) 2005 Elsevier Ltd. All rights reserved.
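
Growth laws of the kind mentioned above are commonly expressed as a power law in exposure time with an exponent below one, so that the corrosion rate falls as the protective cuprite layer thickens. A sketch with hypothetical constants (the paper's fitted parameters are not given in the abstract):

```python
def patina_thickness(k: float, t_years: float, n: float = 0.5) -> float:
    """Power-law film growth x = k * t**n; n < 1 gives a rate that falls with time."""
    return k * t_years ** n

def corrosion_rate(k: float, t_years: float, n: float = 0.5) -> float:
    """Instantaneous growth rate dx/dt = n * k * t**(n - 1)."""
    return n * k * t_years ** (n - 1)

# Hypothetical parabolic (n = 0.5) growth: k = 1 um/year**0.5 (illustrative only).
k = 1.0
for t in (1, 4, 9):
    print(f"t = {t} y  thickness = {patina_thickness(k, t):.1f} um  "
          f"rate = {corrosion_rate(k, t):.2f} um/y")
```

The thickness keeps rising while the instantaneous rate falls, which is the behaviour attributed above to the protective cuprite layer.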

Relevance:

60.00%

Publisher:

Abstract:

Freshwater is extremely precious, and clean freshwater even more so. Although two thirds of our planet is covered in water, industrial activity over the last century has contaminated it with chemicals on an unprecedented scale, causing harm to humans and wildlife. We have to adopt a new scientific mindset in order to face this problem and protect this important resource. The Water Framework Directive (European Parliament and the Council, 2000) is a milestone legislative document that transformed the way water quality monitoring is undertaken across all Member States by introducing the Ecological and Chemical Status. A “good or higher” Ecological Status is expected to be achieved for all waterbodies in Europe by 2015. Yet for most European waterbodies, which are determined to be at risk or of moderate to bad quality, further information will be required before adequate remediation strategies can be implemented. To date, water quality evaluation is based on five biological components (phytoplankton, macrophytes and benthic algae, macroinvertebrates and fish) and various hydromorphological and physicochemical elements. The evaluation of the chemical status is principally based on 33 priority substances and on 12 xenobiotics considered dangerous for the environment. This approach takes into account only a part of the numerous xenobiotics that can be present in surface waters and cannot reveal all the possible causes of ecotoxicological stress acting on a river section. Mixtures of toxic chemicals may constitute an ecological risk that is not predictable from the concentrations of the single components. To improve water quality, sources of contamination and causes of ecological alterations need to be identified. 
On the other hand, the analysis of community structure, which is the result of multiple processes including hydrological constraints and physico-chemical stress, gives only a “photograph” of the actual status of a site without revealing the causes and sources of the perturbation. A multidisciplinary approach, able to integrate the information obtained by different methods, such as community structure analysis and eco-genotoxicological studies, could help overcome some of the difficulties in properly identifying the different causes of stress in risk assessment. In summary, the river ecological status is the result of a combination of multiple pressures that, for management purposes and quality improvement, have to be disentangled from each other. To reduce the present uncertainty in risk assessment, methods that establish quantitative links between levels of contamination and community alterations are needed. The analysis of macrobenthic invertebrate community structure has been widely used to identify sites subjected to perturbation. Trait-based descriptors of community structure constitute a useful method in ecological risk assessment. The diagnostic capacity of freshwater biomonitoring could be improved by chronic sublethal toxicity testing of water and sediment samples. Because they require an exposure time that covers most of the species’ life cycle, chronic toxicity tests are able to reveal negative effects on life-history traits at contaminant concentrations well below the acute toxicity level. Furthermore, the responses of high-level endpoints (growth, fecundity, mortality) can be integrated in order to evaluate the impact on population dynamics, a highly relevant endpoint from the ecological point of view. To gain more accurate information about potential causes and consequences of environmental contamination, the evaluation of adverse effects at the physiological, biochemical and genetic levels is also needed. 
The use of different biomarkers and toxicity tests can give information about the sub-lethal and toxic load of environmental compartments. Biomarkers give essential information about exposure to toxicants, such as endocrine disrupting compounds and genotoxic substances, whose negative effects cannot be detected using only high-level toxicological endpoints. The increasing presence of genotoxic pollutants in the environment has caused concern regarding the potential harmful effects of xenobiotics on human health, and interest in the development of new and more sensitive methods for the assessment of mutagenic and carcinogenic risk. Within the WFD, biomarkers and bioassays are regarded as important tools for gaining lines of evidence for cause-effect relationships in ecological quality assessment. Although the scientific community clearly recognizes the advantages and necessity of an ecotoxicological approach within ecological quality assessment, a recent review reports that, more than a decade after the publication of the WFD, only a few studies have attempted to integrate ecological water status assessment and biological methods (namely biomarkers or bioassays). None of the fifteen reviewed studies included both biomarkers and bioassays. The integrated approach developed in this PhD Thesis comprises a set of laboratory bioassays (Daphnia magna acute and chronic toxicity tests, Comet Assay and FPG-Comet) that were newly developed, modified from existing standardized protocols, or applied to freshwater quality testing (ecotoxicological, genotoxicological and toxicogenomic assays), coupled with field investigations of macrobenthic community structure (SPEAR and EBI indices). Together with the development of new bioassays with Daphnia magna, the feasibility of eco-genotoxicological testing of freshwater and sediment quality with Heterocypris incongruens was evaluated (Comet Assay and a protocol for chronic toxicity). 
However, the Comet Assay, although standardized, was not applied to freshwater samples because of the lack of sensitivity of this species observed after 24 h of exposure to relatively high (and not environmentally relevant) concentrations of reference genotoxicants. Furthermore, this species also proved unsuitable for chronic toxicity testing, owing to the difficulty of evaluating fecundity as a sub-lethal endpoint of exposure and to complications arising from its biology and behaviour. The study was applied to a pilot hydrographic sub-basin, by selecting sections subjected to different levels of anthropogenic pressure: this allowed us to establish the reference conditions, to select the most significant endpoints and to evaluate the coherence of the responses of the different lines of evidence (alteration of community structure, eco-genotoxicological responses, alteration of gene expression profiles) and, finally, the diagnostic capacity of the monitoring strategy. Significant correlations were found between the genotoxicological parameter Tail Intensity % (TI%) and the macrobenthic community descriptors SPEAR (p<0.001) and EBI (p<0.05), between the genotoxicological parameter describing DNA oxidative stress (ΔTI%) and mean levels of nitrates (p<0.01), and between reproductive impairment (Failed Development % from D. magna chronic bioassays) and TI% (p<0.001) as well as EBI (p<0.001). While correlation among parameters demonstrates a general coherence in the response to increasing impacts, the concomitant ability of each single endpoint to respond to specific sources of stress is at the basis of the diagnostic capacity of the integrated approach, as demonstrated by stations presenting a mismatch among the different lines of evidence. 
The chosen set of bioassays, as well as the selected endpoints, does not provide redundant indications of the water quality status; on the contrary, each contributes complementary pieces of information about the several stressors acting simultaneously on a waterbody section, giving this monitoring strategy a solid diagnostic capacity. Our approach should provide opportunities for the integration of biological effects into monitoring programmes for surface water, especially in investigative monitoring. Moreover, it should provide a more realistic assessment of the impact and exposure of aquatic organisms to contaminants. Finally, this approach should provide an evaluation of the drivers of change in biodiversity and of their consequences for the provision of ecosystem functions and services, that is, the direct and indirect contributions to human well-being.
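
The reported correlations (e.g. between TI% and the SPEAR index) are standard bivariate correlations computed across sampling stations. A minimal sketch with invented station data, shown only to illustrate the computation:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Invented station data: genotoxicity (TI%) rising as the SPEAR index falls.
ti = [5, 12, 18, 25, 33]
spear = [62, 55, 41, 30, 22]
print(f"r = {pearson_r(ti, spear):.2f}")  # strongly negative, as in the thesis
```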

Relevance:

60.00%

Publisher:

Abstract:

The efficacy of a new skin disinfectant, 2% (w/v) chlorhexidine gluconate (CHG) in 70% (v/v) isopropyl alcohol (IPA) (ChloraPrep®), was compared with five commonly used skin disinfectants against Staphylococcus epidermidis RP62A in the presence or absence of protein, utilizing quantitative time-kill suspension and carrier tests. All six disinfectants [ChloraPrep® together with 70% (v/v) IPA, 0.5% (w/v) aqueous CHG, 2% (w/v) aqueous CHG, 0.5% (w/v) CHG in 70% (v/v) IPA and 10% (w/v) aqueous povidone iodine (PI)] achieved a log10 reduction factor of 5, in colony-forming units/mL, in a suspension test (exposure time 30 s) in the presence and absence of 10% human serum. Subsequent challenges of S. epidermidis RP62A in a biofilm (with and without human serum) demonstrated reduced bactericidal activity. Overall, the most effective skin disinfectants tested against S. epidermidis RP62A were 2% (w/v) CHG in 70% IPA and 10% (w/v) PI. These results suggest that enhanced skin antisepsis may be achieved with 2% (w/v) CHG in 70% (v/v) IPA compared with the three commonly used CHG preparations [0.5% (w/v) aqueous CHG, 2% (w/v) aqueous CHG and 0.5% (w/v) CHG in 70% (v/v) IPA]. © 2005 The Hospital Infection Society. Published by Elsevier Ltd. All rights reserved.
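
A log10 reduction factor of 5, as quoted above, means the surviving count is five orders of magnitude below the inoculum (a 99.999% kill). A minimal sketch:

```python
import math

def log10_reduction(cfu_before: float, cfu_after: float) -> float:
    """Log10 reduction factor between initial and surviving counts (CFU/mL)."""
    return math.log10(cfu_before / cfu_after)

# Example with round numbers: 10^7 CFU/mL reduced to 10^2 CFU/mL is a 5-log kill.
print(log10_reduction(1e7, 1e2))
```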

Relevance:

60.00%

Publisher:

Abstract:

The present study examines the effect of the goodness of view on the minimal exposure time required to recognize depth-rotated objects. In a previous study, Verfaillie and Boutsen (1995) derived scales of goodness of view, using a new corpus of images of depth-rotated objects. In the present experiment, a subset of this corpus (five views of 56 objects) is used to determine the recognition exposure time for each view, by increasing exposure time across successive presentations until the object is recognized. The results indicate that, for two thirds of the objects, good views are recognized more frequently and have lower recognition exposure times than bad views.
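
Threshold exposure time can be read off a psychometric function linking exposure duration to the proportion of correct recognitions. The logistic form and parameter values below are illustrative assumptions, not the fit used by the authors:

```python
import math

def logistic_pc(t_ms: float, threshold_ms: float, slope: float) -> float:
    """Psychometric function rising from 0.5 (guessing) to 1.0 with exposure time;
    by construction p = 0.75 exactly at t = threshold_ms."""
    return 0.5 + 0.5 / (1.0 + math.exp(-slope * (t_ms - threshold_ms)))

# Hypothetical thresholds: a "good" view recognized at shorter exposures than a
# "bad" view of the same object (values invented for illustration).
good, bad = 80.0, 140.0  # recognition exposure thresholds, ms
for t in (80.0, 140.0):
    print(f"t = {t:.0f} ms  good view p = {logistic_pc(t, good, 0.05):.2f}  "
          f"bad view p = {logistic_pc(t, bad, 0.05):.2f}")
```

At any fixed exposure, the good view yields a higher proportion correct, which is equivalent to the lower recognition exposure times reported above.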

Relevance:

60.00%

Publisher:

Abstract:

The process of astrogliosis, or reactive gliosis, is a typical response of astrocytes to a wide range of physical and chemical injuries. The up-regulation of the astrocyte specific glial fibrillary acidic protein (GFAP) is a hallmark of reactive gliosis and is widely used as a marker to identify the response. In order to develop a reliable, sensitive and high throughput astrocyte toxicity assay that is more relevant to the human response than existing animal cell based models, the U251-MG, U373-MG and CCF-STTG1 human astrocytoma cell lines were investigated for their ability to exhibit reactive-like changes following exposure to ethanol, chloroquine diphosphate, trimethyltin chloride and acrylamide. Cytotoxicity analysis showed that the astrocytic cells were generally more resistant to the cytotoxic effects of the agents than the SH-SY5Y neuroblastoma cells. Retinoic acid induced differentiation of the SH-SY5Y line was also seen to confer some degree of resistance to toxicant exposure, particularly in the case of ethanol. Using a cell based ELISA for GFAP together with concurrent assays for metabolic activity and cell number, each of the three cell lines responded to toxicant exposure by an increase in GFAP immunoreactivity (GFAP-IR), or by increased metabolic activity. Ethanol, chloroquine diphosphate, trimethyltin chloride and bacterial lipopolysaccharide all induced either GFAP or MTT increases depending upon the cell line, dose and exposure time. Preliminary investigations of additional aspects of astrocytic injury indicated that IL-6, but not TNF-α or nitric oxide, is released following exposure to each of the compounds, with the exception of acrylamide. It is clear that these human astrocytoma cell lines are capable of responding to toxicant exposure in a manner typical of reactive gliosis and are therefore a valuable cellular model in the assessment of in vitro neurotoxicity.

Relevance:

60.00%

Publisher:

Abstract:

This thesis is concerned with optimising hearing protector selection. A computer model was used to estimate the reduction in noise exposure and risk of occupational deafness provided by the wearing of hearing protectors in industrial noise spectra. The model was used to show that low attenuation hearing protectors can provide greater protection than high attenuation protectors if the high attenuation protectors are not worn for the total duration of noise exposure, or are not used by a small proportion of the population. The model was also used to show that high attenuation protectors will not necessarily provide a significantly greater reduction in risk than low attenuation protectors if the population has been exposed to the noise for many years prior to the provision of hearing protectors. The effects of earplugs and earmuffs on the localisation of sounds were studied to determine whether high attenuation earmuffs are likely to have greater potential than the lower attenuation earplugs for affecting personal safety. Laboratory studies and experiments at a foundry with normal-hearing office employees and noise-exposed foundrymen who had some experience of wearing hearing protectors showed that although earplugs reduced the ability of the wearer to determine the direction of warning sounds, earmuffs produced more total angular error and more confusions between left and right. It is concluded from the research findings that the key to the selection of hearing protectors lies in providing protectors that can be worn for a very high percentage of the exposure time by a high percentage of the exposed population with the minimum effect on the personal safety of the wearers: the attenuation provided should be adequate but need not be a maximum value.
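
The model's central point, that a high attenuation protector worn for less than the full exposure loses most of its benefit, follows from energy-averaging the protected and unprotected portions of the exposure. A sketch using this commonly used model (not necessarily the thesis's exact formulation):

```python
import math

def effective_attenuation(nominal_db: float, worn_fraction: float) -> float:
    """Effective attenuation when a protector of nominal attenuation (dB) is worn
    for only a fraction of the noise-exposure time (energy-average model)."""
    leak = (1.0 - worn_fraction) + worn_fraction * 10 ** (-nominal_db / 10.0)
    return -10.0 * math.log10(leak)

# Worn 90% of the time, a 30 dB protector delivers only about 10 dB of effective
# protection, roughly matching a 10 dB protector worn 100% of the time.
print(f"{effective_attenuation(30.0, 0.90):.1f} dB")
print(f"{effective_attenuation(10.0, 1.00):.1f} dB")
```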

Relevance:

60.00%

Publisher:

Abstract:

The first investigation of this study is concerned with the reasonableness of the assumptions related to diffusion of water vapour in concrete and with the development of a diffusivity equation for heated concrete. It has been demonstrated that diffusion of water vapour does occur in concrete at all temperatures and that the type of diffusion in concrete is Knudsen diffusion. Neglecting diffusion leads to underestimating the pressure: it results in a maximum pore pressure of less than 1 MPa. It has also been shown that the assumption that diffusion in concrete is molecular is unreasonable even when tortuosity is considered. Molecular diffusivity leads to overestimating the pressure: it results in a maximum pore pressure of 2.7 MPa, of which the vapour pressure is 1.5 MPa and the air pressure is 1.2 MPa. Also, the first diffusivity equation developed specifically for concrete, appropriately named 'concrete diffusivity', has been derived; it determines the effective diffusivity of any gas in concrete at any temperature. In thick walls and columns exposed to fire, concrete diffusivity leads to maximum pore pressures of 1.5 and 2.2 MPa (along diagonals), respectively, that are almost entirely due to water vapour pressure. Also, spalling is exacerbated, and thus higher pressures may occur, in thin heated sections, since there is less of a cool reservoir towards which vapour can migrate. Furthermore, the reduction of the cool reservoir is affected not only by the thickness, but also by the time of exposure to fire and by the type of exposure, i.e. whether the concrete member is exposed to fire from one or more sides. The second investigation is concerned with examining the effects of thickness and of exposure time and type. It has been demonstrated that the build-up of pore pressure is low in thick members, since there is a substantial cool zone towards which water vapour can migrate. 
Thus, if surface and/or explosive spalling occurs on a thick member, such spalling must be due to high thermal stresses, but corner spalling is likely to be pore pressure spalling. However, depending on the exposure time and type, the pore pressures in thin sections can be more than twice those occurring in thick members, which had been thought to be the maximum that can occur; thus the enhanced propensity for pore pressure spalling on thin sections heated on opposite sides has been conclusively demonstrated to be due to the lack of a cool zone towards which moisture can migrate. Expressions were developed for the determination of the maximum pore pressures that can occur in different concrete walls and columns exposed to fire and of the corresponding times of exposure.
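
The Knudsen regime identified in the first investigation has a standard diffusivity expression, D_K = (d/3)·sqrt(8RT/(πM)), depending only on pore diameter, temperature and molar mass. A sketch with a hypothetical pore size (the thesis's own 'concrete diffusivity' equation is not given in the abstract):

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def knudsen_diffusivity(pore_diameter_m: float, temp_k: float, molar_mass_kg: float) -> float:
    """Knudsen diffusion coefficient D_K = (d/3) * sqrt(8RT / (pi M)), applicable
    when pore diameters are small compared with the molecular mean free path."""
    mean_speed = math.sqrt(8.0 * R * temp_k / (math.pi * molar_mass_kg))
    return pore_diameter_m / 3.0 * mean_speed

# Water vapour (M = 0.018 kg/mol) in a hypothetical 100 nm pore, at room
# temperature and at a fire-relevant temperature:
for T in (293.0, 573.0):
    print(f"T = {T:.0f} K  D_K = {knudsen_diffusivity(100e-9, T, 0.018):.2e} m^2/s")
```

Because D_K scales with sqrt(T), heating increases the vapour's diffusive mobility, which feeds into the pore pressure estimates above.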

Relevance:

60.00%

Publisher:

Abstract:

Abstract: Loss of central vision caused by age-related macular degeneration (AMD) is a problem affecting increasingly large numbers of people within the ageing population. AMD is the leading cause of blindness in the developed world, with estimates of over 600,000 people affected in the UK. Central vision loss can be devastating for the sufferer, with vision loss impacting on the ability to carry out daily activities. In particular, inability to read is linked to higher rates of depression in AMD sufferers compared to age-matched controls. Methods to improve reading ability in the presence of central vision loss will help maintain independence and quality of life for those affected. Various attempts to improve reading with central vision loss have been made. Most textual manipulations, including font size, have led to only modest gains in reading speed. Previous experimental work and theoretical arguments on spatial integrative properties of the peripheral retina suggest that ‘visual crowding’ may be a major factor contributing to inefficient reading. Crowding refers to the phenomenon in which juxtaposed targets viewed eccentrically may be difficult to identify. Manipulating text spacing of reading material may be a simple method that reduces crowding and benefits reading ability in macular disease patients. In this thesis the effect of textual manipulation on reading speed was investigated, firstly for normally sighted observers using eccentric viewing, and secondly for observers with central vision loss. Test stimuli mimicked normal reading conditions by using whole sentences that required normal saccadic eye movements and observer comprehension. Preliminary measures on normally sighted observers (n = 2) used forced-choice procedures in conjunction with the method of constant stimuli. 
Psychometric functions relating the proportion of correct responses to exposure time were determined for text size, font type (Lucida Sans and Times New Roman) and text spacing, with threshold exposure time (75% correct responses) used as a measure of reading performance. The results of these initial measures were used to derive an appropriate search space, in terms of text spacing, for assessing reading performance in AMD patients. The main clinical measures were completed on a group of macular disease sufferers (n=24). Firstly, high and low contrast reading acuity and critical print size were measured using modified MNREAD test charts, and secondly, the effect of word and line spacing was investigated using a new test, designed specifically for this study, called the Equal Readability Passages (ERP) test. The results from normally sighted observers were in close agreement with those from the group of macular disease sufferers. Results show that: (i) optimum reading performance was achieved when using both double line and double word spacing; (ii) the effect of line spacing was greater than the effect of word spacing; (iii) a text size of approximately 0.85° is sufficiently large for reading at 5° eccentricity. In conclusion, the results suggest that crowding is detrimental to reading with peripheral vision, and its effects can be minimized with a modest increase in text spacing.

Relevance:

60.00%

Publisher:

Abstract:

Particle breakage due to fluid flow through various geometries can have a major influence on the performance of particle/fluid processes and on the product quality characteristics of particle/fluid products. In this study, whey protein precipitate dispersions were used as a case study to investigate the effect of flow intensity and exposure time on the breakage of these precipitate particles. Computational fluid dynamic (CFD) simulations were performed to evaluate the turbulent eddy dissipation rate (TED) and associated exposure time along various flow geometries. The focus of this work is on the predictive modelling of particle breakage in particle/fluid systems. A number of breakage models were developed to relate TED and exposure time to particle breakage. The suitability of these breakage models was evaluated for their ability to predict the experimentally determined breakage of the whey protein precipitate particles. A "power-law threshold" breakage model was found to provide a satisfactory capability for predicting the breakage of the whey protein precipitate particles. The whey protein precipitate dispersions were propelled through a number of different geometries such as bends, tees and elbows, and the model accurately predicted the mean particle size attained after flow through these geometries. © 2005 Elsevier Ltd. All rights reserved.
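
A "power-law threshold" breakage model of the kind evaluated here can be sketched as follows; the functional form and parameter values are illustrative assumptions, since the abstract does not give the fitted model:

```python
def breakage_fraction(ted: float, t_exposure: float, ted_crit: float,
                      k: float, n: float) -> float:
    """Illustrative power-law threshold breakage model: no breakage below a
    critical turbulent eddy dissipation rate (TED), and a power-law dependence
    on the excess dissipation and the exposure time above it."""
    if ted <= ted_crit:
        return 0.0
    return min(1.0, k * (ted - ted_crit) ** n * t_exposure)

# Hypothetical parameters: critical TED 0.5 W/kg, k = 0.02, n = 0.8, with the
# exposure time (0.1 s) typical of flow through a single fitting.
for ted in (0.3, 1.0, 5.0):
    frac = breakage_fraction(ted, 0.1, 0.5, 0.02, 0.8)
    print(f"TED = {ted} W/kg  broken fraction = {frac:.4f}")
```

Below the threshold no particles break regardless of exposure time; above it, breakage grows with both the excess dissipation rate and the time spent in the high-TED region, which is the coupling the CFD simulations were used to quantify along each geometry.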