935 results for Non-continuous Seepage Flow
Abstract:
This thesis presents an effective methodology for generating a simulation that can be used to increase the understanding of viscous fluid processing equipment and aid in its development, design and optimisation. The Hampden RAPRA Torque Rheometer internal batch twin rotor mixer has been simulated with a view to establishing model accuracies, limitations, practicalities and uses. As this research progressed, via several 'snap-shot' analyses of rotor configurations using the commercial code Polyflow, it was evident that the model was of some worth and that its predictions were in good agreement with the validation experiments; however, several major restrictions were identified. These included poor element form, high man-hour requirements for the construction of each geometry and the absence of the transient term in these models. All, or at least some, of these limitations apply to the numerous attempts to model internal mixers by other researchers, and it was clear that there was no generally accepted methodology providing a practical three-dimensional model that has been adequately validated. This research, unlike others, presents a full three-dimensional, transient, non-isothermal, generalised non-Newtonian simulation with wall slip which overcomes these limitations using unmatched gridding and sliding mesh technology adapted from CFX codes. This method yields good element form and, since only one geometry has to be constructed to represent the entire rotor cycle, is extremely beneficial for detailed flow field analysis when used in conjunction with user-defined programmes and automatic geometry parameterisation (AGP), and improves accuracy for investigating equipment design and operating conditions.
Model validation has been identified as an area neglected by other researchers in this field, especially for time-dependent geometries, and has been rigorously pursued here in terms of qualitative and quantitative velocity vector analysis of the isothermal, full-fill mixing of generalised non-Newtonian fluids, as well as torque comparison, with a relatively high degree of success. This indicates that CFD models of this type can be accurate, and perhaps have not been validated to this extent previously because of the inherent difficulties arising from most real processes.
Abstract:
This thesis applies a hierarchical latent trait model system to a large quantity of data. The motivation for it was the lack of viable approaches for analysing High Throughput Screening datasets, which may include thousands of data points with high dimensionality. High Throughput Screening (HTS) is an important tool in the pharmaceutical industry for discovering leads which can be optimised and further developed into candidate drugs. With the development of new robotic technologies, the capacity to test the activities of compounds has increased considerably in recent years. Traditional methods of analysing relationships between measured activities and the structure of compounds, based on tables and graphical plots, are not feasible for a large HTS dataset. Instead, data visualisation provides a method for analysing such large datasets, especially those with high dimensionality. So far, a few visualisation techniques for drug design have been developed, but most of them cope with only a few properties of compounds at a time. We believe that a latent variable model with a non-linear mapping from the latent space to the data space is a preferred choice for visualising a complex high-dimensional data set. As a type of latent variable model, the latent trait model (LTM) can deal with either continuous or discrete data, which makes it particularly useful in this domain. In addition, with the aid of differential geometry, we can visualise the distribution of the data through magnification factor and curvature plots. Rather than obtaining the useful information from a single plot, a hierarchical LTM arranges a set of LTMs and their corresponding plots in a tree structure. We model the whole data set with an LTM at the top level, which is broken down into clusters at deeper levels of the hierarchy. In this manner, refined visualisation plots can be displayed at deeper levels and sub-clusters may be found.
The hierarchy of LTMs is trained using the expectation-maximisation (EM) algorithm to maximise its likelihood with respect to the data sample. Training proceeds interactively in a recursive, top-down fashion: the user subjectively identifies interesting regions on the visualisation plot that they would like to model in greater detail. At each stage of hierarchical LTM construction, the EM algorithm alternates between the E- and M-steps. Another problem that can occur when visualising a large data set is that there may be significant overlaps between data clusters, making it very difficult for the user to judge where the centres of regions of interest should be placed. We address this problem by employing the minimum message length technique, which can help the user to decide the optimal structure of the model. In this thesis we also demonstrate the applicability of the hierarchy of latent trait models to the field of document data mining.
Influence of pretreatment on corrosion behaviour of duplex zinc/polymer coatings on steel substrates
Abstract:
An investigation has been undertaken to determine the major factors influencing the corrosion resistance of duplex zinc coatings on steel substrates. Premature failure of these systems has been attributed to the presence of defects, such as craters and pinholes, in the polymer film and to debonding of the polymer film from the zinc substrate. Defects found on commercially produced samples have been carefully characterised using metallographic and scanning electron microscopy techniques. The influence of zinc substrate surface roughness, polymer film thickness and degassing of conversion coating films on the incidence of defects has been determined. Pretreatments of the chromate, chromate-phosphate, non-chromate and alkali-oxide types were applied, and the conversion coatings produced were characterised with respect to their nature and composition. The effect of degassing on the properties of the films was also investigated. Electrochemical investigations were carried out to determine the effect of the presence of the eta or zeta phase as the outermost layer of the galvanized coating. Flow characteristics of polyester on zinc electroplated, continuous hot-dip galvanized, batch galvanized and zinc sprayed samples were investigated using hot-stage microscopy. The effects of different pretreatments, and of degassing after conversion coating formation, on flow characteristics were determined. Duplex coatings were subjected to the acetic acid salt spray test. The effect on adhesion was determined using an indentation debonding test and the results compared with those obtained using cross-cut/peel and pull-off tests. The locus of failure was determined using scanning electron microscopy and X-ray photoelectron spectroscopy techniques.
Abstract:
The relationship between accommodation and intraocular pressure (IOP) has not been addressed as a research question for over 20 years, when measurement of both of these parameters was less advanced than it is today. Hence the central aim of this thesis was to evaluate the effects of accommodation on IOP. The instrument of choice throughout this thesis was the Pulsair EasyEye non-contact tonometer (NCT), due principally to its slim-line design, which allowed the measurement of IOP in one eye and the simultaneous stimulation of accommodation in the other eye. A second reason for using the Pulsair EasyEye NCT was that, through collaboration with the manufacturer (Keeler, UK), the instrument's operational technology was made accessible. Hence, the principal components underpinning non-contact IOP measures of 0.1 mmHg resolution (an order of magnitude greater than other methods) were made available. The relationship between the pressure output and the corneal response has been termed the pressure-response relationship, aspects of which have been shown to be related to ocular biometric parameters. Further, analysis of the components of the pressure-response relationship, together with high-speed photography of the cornea during tonometry, has enhanced our understanding of the derivation of an IOP measure with the Pulsair EasyEye NCT. The NCT samples the corneal response to the pressure pulse photoelectronically over a 19 ms cycle, but computes the subject's IOP using the data collected in the first 2.34 ms. The relatively instantaneous nature of the IOP measurement renders the measures susceptible to variations in the steady-state IOP caused by the respiratory and cardiac cycles. As such, the variance associated with these cycles was minimised by synchronising the IOP measures with the cardiac trace and maintaining a constant-pace respiratory cycle at 15 breaths/minute.
It is apparent that synchronising the IOP measures with the peak, middle or trough of the cardiac trace significantly reduced the spread of consecutive measures. Of the three locations investigated, synchronisation with the middle location demonstrated the least variance (coefficient of variation = 9.1%) and a strong correlation (r = 0.90, p < 0.001) with IOP values obtained with Goldmann contact tonometry (n = 50). Accordingly, IOP measures synchronised with the middle location of the cardiac cycle were taken in the RE while the LE fixated low (L; zero D), intermediate (I; 1.50 D) and high (H; 4 D) accommodation targets. Quasi-continuous measures of accommodation responses were obtained during the IOP measurement period using the portable infrared Grand Seiko FR-5000 autorefractor. The IOP reduced between L and I accommodative levels by approximately 0.61 mmHg (p < 0.001). No significant reduction in IOP between L and H accommodation levels was elicited (p = 0.65) (n = 40). The relationship between accommodation and IOP was characterised by substantial inter-subject variation. Myopes demonstrated a tendency towards a reduction in IOP with accommodation, which was significant only at I accommodation levels when measured with the NCT (r = 0.50, p = 0.01). However, the relationship between myopia and IOP change with accommodation reached significance for both I (r = 0.61, p = 0.003) and H (r = 0.531, p = 0.01) accommodation levels when measured with the Ocular Blood Flow Analyser (OBFA). Investigation of the effects of accommodation on the parameters measured by the OBFA demonstrated that at H accommodation levels the pulse amplitude (PA) and pulse rate (PR) responses differed between myopes and emmetropes (PA: p = 0.03; PR: p = 0.004). As the axial length increased there was a tendency for the pulsatile ocular blood flow (POBF) to reduce with accommodation, which was significant only at H accommodation levels (r = 0.38, p = 0.02).
It is proposed that emmetropes are able to regulate the POBF responses to changes in ocular perfusion pressure caused by changes in IOP at I (r = 0.77, p < 0.001) and H (r = 0.73, p = 0.001) accommodation levels. However, the relationship between IOP and POBF changes in the myopes was not correlated for either I (r = 0.33, p = 0.20) or H (r = 0.05, p = 0.85) accommodation levels. The thesis presents new data on the relationships between accommodation, IOP and the parameters of the OBFA, and provides evidence for possible IOP and choroidal blood flow regulatory mechanisms. Further, the data highlight possible deficits in the vascular regulation of the myopic eye during accommodation, which may play a putative role in the aetiology of myopia development.
Abstract:
In this paper we examine the equilibrium states of periodic finite-amplitude flow in a horizontal channel with differential heating between the two rigid boundaries. The solutions to the Navier-Stokes equations are obtained by means of a perturbation method for evaluating the Landau coefficients, and through a Newton-Raphson iterative method applied to the Fourier expansion of the solutions that bifurcate above the linear stability threshold of infinitesimal disturbances. The results obtained from these two different methods of evaluating the convective flow are compared in the neighbourhood of the critical Rayleigh number. We find that for small Prandtl numbers the discrepancy between the two methods is noticeable.
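As a sketch of the Newton-Raphson side of such a calculation, the scheme below finds the equilibrium amplitude of a model cubic Landau equation; the paper applies the same iteration, in many variables, to the truncated Fourier system. The coefficients s and l and the starting guess are illustrative assumptions, not values from the paper.

```python
def newton_raphson(f, dfdx, x0, tol=1e-12, max_iter=50):
    """Generic Newton-Raphson iteration: x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Equilibrium amplitude of a model Landau equation dA/dt = s*A - l*A**3:
# the non-trivial equilibrium satisfies s*A - l*A**3 = 0, i.e. A = sqrt(s/l).
s, l = 2.0, 0.5
A = newton_raphson(lambda a: s * a - l * a ** 3,
                   lambda a: s - 3 * l * a ** 2,
                   x0=1.5)
```

For the channel problem the scalar update becomes a Jacobian solve over all retained Fourier coefficients, but the quadratic convergence near the bifurcated solution is the same.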
Abstract:
Common approaches to IP-traffic modelling have featured the use of stochastic models based on the Markov property, which can be classified into black-box and white-box models according to the approach used for modelling traffic. White-box models are simple to understand and transparent, and a physical meaning is attributed to each of the associated parameters. To exploit this key advantage, this thesis explores the use of simple classic continuous-time Markov models, based on a white-box approach, to model not only the network traffic statistics but also the source behaviour with respect to the network and application. The thesis is divided into two parts. The first part focuses on the use of simple Markov and semi-Markov traffic models, starting from the simplest two-state model and moving up to n-state models with Poisson and non-Poisson statistics. The thesis then introduces the convenient-to-use, mathematically derived Gaussian Markov models, which are used to model the measured network IP traffic statistics. As one of its most significant contributions, the thesis establishes the significance of the second-order density statistics, revealing that, in contrast to the first-order density, they carry much more unique information on traffic sources and behaviour. The thesis then exploits Gaussian Markov models to capture these unique features, and finally shows how simple classic Markov models coupled with second-order density statistics provide an excellent tool for capturing maximum traffic detail, which in itself is the essence of good traffic modelling. The second part of the thesis studies the ON-OFF characteristics of VoIP traffic with reference to accurate measurements of the ON and OFF periods, made from a large multi-lingual database of over 100 hours' worth of VoIP call recordings.
The impact of the language, prosodic structure and speech rate of the speaker on the statistics of the ON-OFF periods is analysed and relevant conclusions are presented. Finally, an ON-OFF VoIP source model with log-normal transitions is proposed as an ideal candidate for modelling VoIP traffic, and the results of this model are compared with those of previously published work.
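A minimal sketch of such a source is shown below, alternating ON (talkspurt) and OFF (silence) periods drawn from log-normal distributions. The (mu, sigma) parameters are illustrative placeholders, not the values fitted from the recorded call database.

```python
import random

def simulate_on_off(n_cycles, on_mu, on_sigma, off_mu, off_sigma, seed=0):
    """Simulate an ON-OFF source with log-normally distributed ON and
    OFF durations and return its long-run activity ratio
    (fraction of time spent in the ON state)."""
    rng = random.Random(seed)
    on_time = off_time = 0.0
    for _ in range(n_cycles):
        on_time += rng.lognormvariate(on_mu, on_sigma)    # talkspurt
        off_time += rng.lognormvariate(off_mu, off_sigma)  # silence
    return on_time / (on_time + off_time)
```

In a traffic study the simulated ON periods would drive packet generation at the codec rate, and the empirical ON/OFF histograms would be compared against the measured ones.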
Abstract:
A combination of experimental methods was applied at a clogged, horizontal subsurface flow (HSSF) municipal wastewater tertiary treatment wetland (TW) in the UK to quantify the extent of the surface and subsurface clogging which had resulted in undesirable surface flow. The three-dimensional hydraulic conductivity profile was determined using a purpose-made device which recreates the constant-head permeameter test in situ. The hydrodynamic pathways were investigated by performing dye tracing tests with Rhodamine WT and a novel multi-channel, data-logging, flow-through fluorimeter which allows synchronous measurements to be taken from a matrix of sampling points. Hydraulic conductivity varied in all planes, with the lowest measurement of 0.1 m/d corresponding to the surface layer at the inlet, and the maximum measurement of 1550 m/d located at 0.4 m depth at the outlet. According to the dye tracing results, the region where the overland flow ceased received five times the average flow, which then vertically short-circuited below the rhizosphere. The tracer breakthrough curve obtained from the outlet showed that this preferential flow path accounted for approximately 80% of the overall flow and arrived 8 h before a distinctly separate secondary flow path. The overall volumetric efficiency of the clogged system was 71%, and the hydrology was simulated using a dual-path, dead-zone storage model. It is concluded that uneven inlet distribution, continuous surface loading and high rhizosphere resistance are responsible for the clog formation observed in this system. The average inlet hydraulic conductivity was 2 m/d, suggesting that current European design guidelines, which predict that the system will reach an equilibrium hydraulic conductivity of 86 m/d, do not adequately describe the hydrology of mature systems.
Abstract:
Sensorimotor synchronization is hypothesized to arise through two different processes, associated with continuous or discontinuous rhythmic movements. This study investigated the synchronization of continuous and discontinuous movements to different pacing signals (auditory or visual) and pacing intervals (500, 650, 800, 950 ms), and across effectors (dominant vs. non-dominant hand). The results showed that the mean and variability of asynchronization errors were consistently smaller for discontinuous movements than for continuous movements. Furthermore, both movement types were timed more accurately with auditory pacing than with visual pacing, and were more accurate with the dominant hand. Shortening the pacing interval also improved sensorimotor synchronization accuracy in both continuous and discontinuous movements. These results show the dependency of the temporal control of movements on the nature of the motor task, the type and rate of extrinsic sensory information, and the efficiency of the motor actuators for sensory integration.
Abstract:
Horizontal Subsurface Flow Treatment Wetlands (HSSF TWs) are used by Severn Trent Water as a low-cost tertiary wastewater treatment for rural locations. Experience has shown that clogging is a major operational problem that reduces HSSF TW lifetime. Clogging is caused by an accumulation of secondary wastewater solids from upstream processes and decomposing leaf litter. Clogging occurs as a sludge layer where wastewater is loaded onto the surface of the bed at the inlet. Severn Trent systems receive relatively high hydraulic loading rates, which cause overland flow and reduce the ability to mineralise surface sludge accumulations. A novel apparatus and method, the Aston Permeameter, was created to measure hydraulic conductivity in situ. Its accuracy is ±30%, which was considered adequate given that conductivity in clogged systems varies by several orders of magnitude. The Aston Permeameter was used to perform 20 separate tests on 13 different HSSF TWs in the UK and the US. The minimum conductivity measured was 0.03 m/d at Fenny Compton (compared with a clean conductivity of 5,000 m/d), which was caused by an accumulation of construction fines in one part of the bed. Most systems displayed a two to three order of magnitude variation in conductivity in each dimension. Statistically significant transverse variations in conductivity were found in 70% of the systems. Clogging at the inlet and outlet was generally highest where flow enters the influent distribution system and exits the effluent collection system, respectively. Surface conductivity was lower in systems with dense vegetation because plant canopies reduce surface evapotranspiration and decelerate sludge mineralisation. An equation was derived to describe how the water table profile is influenced by overland flow, spatial variations in conductivity and clogging.
The equation is calibrated using a single parameter, the Clog Factor (CF), which represents the equivalent loss of porosity that would reproduce the measured conductivity according to the Kozeny-Carman equation. The CF varies from 0 for ideal conditions to 1 for completely clogged conditions. The minimum CF was 0.54, for a system that had recently been refurbished; this represents the deviation from ideal conditions due to characteristics of non-ideal media such as particle size distribution and morphology. The maximum CF was 0.90, for a 15-year-old system that exhibited sludge accumulation and overland flow across the majority of the bed. A finite element model of a 15 m long HSSF TW was used to indicate how the hydraulics and hydrodynamics vary as the CF increases. It was found that as the CF increases from 0.55 to 0.65 the subsurface wetted area increases, which causes the mean hydraulic residence time to increase from 0.16 days to 0.18 days. As the CF increases from 0.65 to 0.90, the extent of overland flow increases from 1.8 m to 13.1 m, which reduces the hydraulic efficiency from 37% to 12% and reduces the mean residence time to 0.08 days.
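The effect of the Clog Factor can be illustrated with the porosity term of the Kozeny-Carman equation, K proportional to phi**3 / (1 - phi)**2. The sketch below makes the simplifying assumption that a CF removes the corresponding fraction of the clean porosity, and shows how sharply conductivity falls as CF approaches 1; the exact calibration used in the thesis may differ.

```python
def kozeny_carman_ratio(porosity, clog_factor):
    """Ratio of clogged to clean hydraulic conductivity when a Clog
    Factor (CF) removes that fraction of the clean porosity, using
    the porosity term of the Kozeny-Carman equation:
        K ~ phi**3 / (1 - phi)**2
    Illustrative sketch only."""
    def k(phi):
        return phi ** 3 / (1.0 - phi) ** 2
    phi_clean = porosity
    phi_clogged = porosity * (1.0 - clog_factor)
    return k(phi_clogged) / k(phi_clean)
```

For example, with a clean porosity of 0.4 (an assumed value for gravel media), a CF of 0.90 reduces conductivity by more than three orders of magnitude, consistent with the several-orders-of-magnitude range reported in the field measurements.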
Abstract:
The stability characteristics of an incompressible viscous pressure-driven flow of an electrically conducting fluid between two parallel boundaries in the presence of a transverse magnetic field are compared and contrasted with those of plane Poiseuille flow (PPF). Assuming that the outer regions adjacent to the fluid layer are perfectly electrically insulating, the appropriate boundary conditions are applied. The eigenvalue problems are then solved numerically to obtain the critical Reynolds number Re_c and the critical wavenumber a_c over the small Hartmann number (M) range, producing the curves of marginal stability. The non-linear two-dimensional travelling waves that bifurcate by way of a Hopf bifurcation from the neutral curves are approximated by a truncated Fourier series in the streamwise direction. Two- and three-dimensional secondary disturbances are applied to both the constant-pressure and constant-flux equilibrium solutions using Floquet theory, as this is believed to be the generic mechanism of instability in shear flows. The change in the shape of the undisturbed velocity profile caused by the magnetic field is found to be the dominant factor. Consequently, the critical Reynolds number is found to increase rapidly with increasing M, so the transverse magnetic field has a powerful stabilising effect on this type of flow.
Abstract:
Purpose – To propose and investigate a stable numerical procedure for the reconstruction of the velocity of a viscous incompressible fluid flow in linear hydrodynamics from knowledge of the velocity and fluid stress force given on a part of the boundary of a bounded domain. Design/methodology/approach – Earlier works have considered a similar problem but for the stationary case (time-independent fluid flow). Extending these ideas, a procedure is proposed and investigated for the time-dependent case as well. Findings – The paper presents a novel variational method for the Cauchy problem. It proves convergence and also proposes a new boundary element method. Research limitations/implications – The fluid flow domain is limited to annular domains; this restriction can be removed by undertaking analyses in appropriate weighted spaces to incorporate singularities that can occur on general bounded domains. Future work involves numerical investigations and the consideration of Oseen-type flow. A challenging problem is to consider the non-linear Navier-Stokes equations. Practical implications – Fluid flow problems where data are known only on a part of the boundary occur in a range of engineering situations, such as colloidal suspensions and the swimming of microorganisms. For example, the solution domain can be the region between two spheres where only the outer sphere is accessible for measurements. Originality/value – A novel variational method for the Cauchy problem is proposed which preserves the unsteady Stokes operator; convergence is proved and, using recent results for the fundamental solution of the unsteady Stokes system, a new boundary element method for this system is also proposed.
Abstract:
The first demonstration of heterogeneous catalysis within an oscillatory baffled flow reactor (OBR) is reported, exemplified by the solid acid catalysed esterification of organic acids, an important prototypical reaction for fine chemicals and biofuel synthesis. Suspension of a PrSO3H-SBA-15 catalyst powder is readily achieved within the OBR under an oscillatory flow, facilitating the continuous esterification of hexanoic acid. Excellent semi-quantitative agreement is obtained between OBR and conventional stirred batch reaction kinetics, demonstrating efficient mixing and highlighting the potential of OBRs for continuous, heterogeneously catalysed liquid-phase transformations. Kinetic analysis highlights acid chain length (i.e. steric factors) as a key predictor of activity. Continuous esterification offers improved ester yields compared with batch operation, due to the removal of the water by-product from the catalyst, evidencing the versatility of the OBR for heterogeneous flow chemistry and its potential role as a new clean catalytic technology. © The Royal Society of Chemistry 2013.
Abstract:
This paper presents a simulation-based genetic algorithm (GA) model for scheduling a flow shop problem with re-entrant jobs. The objective of this research is to minimize the weighted tardiness and the makespan. The proposed model considers that jobs with non-identical due dates are processed on the machines in the same order. Furthermore, the re-entrant jobs are stochastic, as only some jobs are required to re-enter the flow shop. The tardiness weight is adjusted once the jobs re-enter the shop. The performance of the proposed GA model is verified by a number of numerical experiments using data from the case company. The results show that the proposed method achieves a higher order-satisfaction rate than current industrial practice.
Abstract:
A continuous multi-step synthesis of 1,2-diphenylethane was performed sequentially in a structured compact reactor. This process involved a Heck C-C coupling reaction followed by the addition of hydrogen to perform reduction of the intermediate obtained in the first step. Both of the reactions were catalysed by microspherical carbon-supported Pd catalysts. Due to the integration of the micro-heat exchanger, the static mixer and the mesoscale packed-bed reaction channel, the compact reactor was proven to be an intensified tool for promoting the reactions. In comparison with the batch reactor, this flow process in the compact reactor was more efficient as: (i) the reaction time was significantly reduced (ca. 7 min versus several hours), (ii) no additional ligands were used and (iii) the reaction was run at lower operational pressure and temperature. Pd leached in the Heck reaction step was shown to be effectively recovered in the following hydrogenation reaction section and the catalytic activity of the system can be mostly retained by reverse flow operation. © 2009 Elsevier Inc. All rights reserved.
Abstract:
Background/aim: The technique of photoretinoscopy is unique in being able to measure the dynamics of the oculomotor system (ocular accommodation, vergence and pupil size) remotely (working distance typically 1 metre) and objectively in both eyes simultaneously. The aim of this study was to evaluate clinically the measurement of refractive error by a recent commercial photoretinoscopic device, the PowerRefractor (PlusOptiX, Germany). Method: The validity and repeatability of the PowerRefractor were compared with subjective (non-cycloplegic) refraction on 100 adult subjects (mean age 23.8 (SD 5.7) years) and with objective autorefraction (Shin-Nippon SRW-5000, Japan) on 150 subjects (20.1 (4.2) years). Repeatability was assessed by examining the differences between autorefractor readings taken from each eye and by re-measuring the objective prescription of 100 eyes at a subsequent session. Results: On average the PowerRefractor prescription was not significantly different from the subjective refraction, although quite variable (difference -0.05 (0.63) D, p = 0.41), and was more negative than the SRW-5000 prescription (by -0.20 (0.72) D, p < 0.001). There was no significant bias in the accuracy of the instrument with regard to the type or magnitude of refractive error. The PowerRefractor was found to be repeatable over the prescription range examined, -8.75 D to +4.00 D (mean spherical equivalent). Conclusion: The PowerRefractor is a useful objective screening instrument and, because of its remote and rapid measurement of both eyes simultaneously, is able to assess the oculomotor response in a variety of unrestricted viewing conditions and patient types.