469 results for Probable Number Technique


Relevance: 20.00%

Publisher:

Abstract:

An improved scaling analysis and direct numerical simulations are performed for the unsteady natural convection boundary layer adjacent to a downward-facing inclined plate with uniform heat flux. The development of the thermal or viscous boundary layers may be classified into three distinct stages: a start-up stage, a transitional stage and a steady stage, which can be clearly identified in both the analytical and the numerical results. Previous work shows that the existing scaling laws for the boundary-layer thickness, velocity and steady-state time of natural convection flow on a heated plate with uniform heat flux predict the Prandtl number dependency of the flow very poorly, although they capture the Rayleigh number and aspect ratio dependencies well. In this study, a modified Prandtl number scaling is developed using a triple-layer integral approach for Pr > 1. Comparison with the direct numerical simulations shows that the modified scaling performs considerably better than the previous scaling.

Relevance: 20.00%

Publisher:

Abstract:

A four-cylinder Ford 2701C test engine was used in this study to explore the impact of ethanol fumigation on gaseous and particle emission concentrations. The fumigation technique delivered vaporised ethanol into the intake manifold of the engine using an injector, a pump and pressure regulator, a heat exchanger for vaporising the ethanol, and a separate fuel tank and lines. Gaseous emissions (nitric oxide (NO), carbon monoxide (CO) and hydrocarbons (HC)) and particulate emissions (particle mass (PM2.5) and particle number) were measured at intermediate speed (1700 rpm) using four load settings, with ethanol substitution percentages ranging from 10-40% (by energy). With ethanol fumigation, NO and PM2.5 emissions were reduced, whereas CO and HC emissions increased considerably and particle number emissions increased at most test settings. Ethanol fumigation reduced the excess air factor of the engine, which led to increased emissions of CO and HC but decreased emissions of NO. PM2.5 emissions were reduced because ethanol has a very low sooting tendency, owing to its higher hydrogen-to-carbon ratio and its lack of aromatics, which are known soot precursors. The use of a diesel oxidation catalyst (as an after-treatment device) is recommended to achieve a reduction in the four pollutants that are currently regulated for compression ignition engines. The increase in particle number emissions with ethanol fumigation was due to the formation of volatile (organic) particles; consequently, a diesel oxidation catalyst will also assist in reducing particle number emissions.
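The "substitution percentage by energy" mentioned above can be illustrated with a minimal sketch. The lower heating values below are typical literature figures, not values reported in the study, and the flow rates are purely illustrative:

```python
# Hedged sketch: energy-based fuel substitution percentage.
# LHV values (MJ/kg) are typical literature figures, not from the study.
LHV_DIESEL = 42.5
LHV_ETHANOL = 26.9

def ethanol_energy_fraction(m_dot_ethanol, m_dot_diesel):
    """Percentage of total fuel energy supplied by fumigated ethanol.

    m_dot_* are fuel mass flow rates (kg/h); any consistent unit works.
    """
    e_eth = m_dot_ethanol * LHV_ETHANOL
    e_die = m_dot_diesel * LHV_DIESEL
    return 100.0 * e_eth / (e_eth + e_die)

# Illustrative flow rates only: about 13.7% of fuel energy from ethanol.
substitution = ethanol_energy_fraction(2.0, 8.0)
```

Note that because ethanol's heating value is lower than diesel's, a given energy substitution requires a proportionally larger ethanol mass flow.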

Relevance: 20.00%

Publisher:

Abstract:

The current investigation reports on diesel particulate matter emissions, with special interest in fine particles from the combustion of two base fuels. The base fuels selected were diesel fuel and marine gas oil (MGO). The experiments were conducted with a four-stroke, six-cylinder, direct injection diesel engine. The results showed that the fine particle number emissions measured by both SMPS and ELPI were higher with MGO compared to diesel fuel. It was observed that the fine particle number emissions with the two base fuels were quantitatively different but qualitatively similar. The gravimetric (mass basis) measurement also showed higher total particulate matter (TPM) emissions with the MGO. The smoke emissions, which were part of TPM, were also higher for the MGO. No significant changes in the mass flow rate of fuel and the brake-specific fuel consumption (BSFC) were observed between the two base fuels.

Relevance: 20.00%

Publisher:

Abstract:

Introduction: In-depth investigation of crash risks informs prevention and safety promotion programmes. Traditionally, such investigations are conducted using exposure-controlled or case-control methodologies. However, these studies need either observational data for control cases or exogenous exposure data, such as vehicle-kilometres travelled, entry flow, or the product of conflicting flows, for a particular traffic location or site. These data are not readily available and often require extensive data collection on a system-wide basis. Aim: The objective of this research is to propose an alternative methodology for investigating the crash risks of a road user group in different circumstances using readily available traffic police crash data. Methods: This study employs a combination of a log-linear model and the quasi-induced exposure technique to estimate the crash risks of a road user group. While the log-linear model reveals the significant interactions, and thus the prevalence of crashes of a road user group under various sets of traffic, environmental and roadway factors, the quasi-induced exposure technique estimates the relative exposure of that road user group for the same set of explanatory variables. The combination of the two techniques therefore provides relative measures of crash risk under various roadway, environmental and traffic conditions. The proposed methodology is illustrated using five years of Brisbane motorcycle crash data. Results: Interpretation of the results for different combinations of interacting factors shows that the poor conspicuity of motorcycles is a predominant cause of motorcycle crashes. The inability of other drivers to judge correctly the speed and distance of an oncoming motorcyclist is also evident in right-of-way-violation motorcycle crashes at intersections. Discussion and conclusions: The combination of a log-linear model and the induced exposure technique is a promising methodology and can be applied to better estimate the crash risks of other road users. This study also highlights the importance of considering interaction effects to better understand hazardous situations. A further study comparing the proposed methodology with the case-control method would be useful.
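The core idea of quasi-induced exposure can be sketched in a few lines: the distribution of not-at-fault parties in two-unit crashes is taken as a proxy for exposure, so a group's relative risk is its at-fault share divided by its not-at-fault share. The counts below are hypothetical, not the Brisbane data:

```python
def relative_crash_risk(at_fault, not_at_fault):
    """Quasi-induced exposure sketch (hypothetical counts, not study data).

    at_fault / not_at_fault: dicts mapping a road-user group to its counts
    of at-fault and not-at-fault involvements in two-unit crashes.  The
    not-at-fault distribution proxies for exposure, so relative risk is
    the at-fault share divided by the not-at-fault share.
    """
    total_af = sum(at_fault.values())
    total_naf = sum(not_at_fault.values())
    return {g: (at_fault[g] / total_af) / (not_at_fault[g] / total_naf)
            for g in at_fault}

# Illustrative: motorcycles over-represented among at-fault parties.
risks = relative_crash_risk(
    at_fault={"motorcycle": 120, "car": 880},
    not_at_fault={"motorcycle": 60, "car": 940},
)
```

A ratio above 1 indicates over-involvement relative to exposure; in the paper this calculation is carried out within each stratum of the interacting roadway, environmental and traffic factors identified by the log-linear model.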

Relevance: 20.00%

Publisher:

Abstract:

Background: Evolutionary biologists are often misled by convergence of morphology, and this has been common in the study of bird evolution. However, molecular data sets have their own problems, and phylogenies based on short DNA sequences have the potential to mislead us too. The relationships among clades and the timing of the evolution of modern birds (Neoaves) have not yet been well resolved, and the evidence for convergence of morphology remains controversial. With six new bird mitochondrial genomes (hummingbird, swift, kagu, rail, flamingo and grebe) we test the proposed Metaves/Coronaves division within Neoaves and the parallel radiations in this primary avian clade. Results: Our mitochondrial trees did not return the Metaves clade that had been proposed based on one nuclear intron sequence. We suggest that the high number of indels within the seventh intron of the β-fibrinogen gene at this phylogenetic level, which left a dataset with not a single site across the alignment shared by all taxa, resulted in artifacts during analysis. With respect to the overall avian tree, we find that the flamingo and grebe are sister taxa and basal to the shorebirds (Charadriiformes); using a novel site-stripping technique for noise reduction we found this relationship to be stable. The hummingbird/swift clade falls outside the large and very diverse group of raptors, shore and sea birds. Unexpectedly, the kagu is not closely related to the rail in our analysis, but because neither the kagu nor the rail has close affinity to any taxa within this dataset of 41 birds, their placement is not yet resolved. Conclusion: Our phylogenetic hypothesis based on 41 avian mitochondrial genomes (13,229 bp) rejects monophyly of the seven Metaves species, and we therefore conclude that the members of Metaves do not share a common evolutionary history within the Neoaves.
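The general flavour of site-stripping can be shown with a toy sketch: alignment columns that appear fast-evolving ("noisy") are removed before the tree is re-estimated. The criterion below (number of distinct bases per column) and the threshold are illustrative stand-ins, not the paper's exact rule:

```python
def site_strip(alignment, max_states):
    """Noise-reduction sketch in the spirit of site-stripping: drop
    alignment columns showing more than `max_states` distinct bases.
    Columns with many states are treated as fast-evolving; the criterion
    and threshold here are illustrative, not the study's exact method.
    """
    length = len(alignment[0])
    keep = [i for i in range(length)
            if len({seq[i] for seq in alignment}) <= max_states]
    return ["".join(seq[i] for i in keep) for seq in alignment]

# Toy alignment of four taxa; column 3 (T/C/G) is the noisiest.
aln = ["ACGTA", "ACGCA", "ACTTA", "ACGGA"]
stripped = site_strip(aln, max_states=2)  # column 3 removed
```

Re-running the phylogenetic analysis on progressively stripped alignments and checking whether a grouping (such as flamingo + grebe) persists is what gives the stability argument its force.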

Relevance: 20.00%

Publisher:

Abstract:

In this study, magnetohydrodynamic natural convection boundary layer flow of an electrically conducting, viscous incompressible fluid along a heated vertical flat plate with uniform heat and mass flux in the presence of a strong cross magnetic field has been investigated. For smooth integration, the boundary layer equations are transformed into a convenient dimensionless form using both the stream function formulation and the free variable formulation. The nonsimilar parabolic partial differential equations are integrated numerically for Pr ≪ 1, appropriate for liquid metals, against the local Hartmann parameter ξ. Further, asymptotic solutions are obtained near the leading edge using a regular perturbation method for smaller values of ξ, and solutions for ξ ≫ 1 are obtained by employing the matched asymptotic technique. The results obtained for the small-ξ, large-ξ and all-ξ regimes are examined in terms of the shear stress, τw, rate of heat transfer, qw, and rate of mass transfer, mw, for the important physical parameters. Attention has been given to the influence of the Schmidt number, Sc, the buoyancy ratio parameter, N, and the local Hartmann parameter, ξ, on the velocity, temperature and concentration distributions; it is noted that the velocity and temperature of the fluid achieve their asymptotic profiles for Sc ≥ 10.0.

Relevance: 20.00%

Publisher:

Abstract:

Mesenchymal stem cells (MSCs) are undifferentiated, multipotent stem cells with the ability to self-renew. They can differentiate into many types of terminal cells, such as osteoblasts, chondrocytes, adipocytes, myocytes and neurons, and have been applied in tissue engineering as the main cell type for regenerating new tissues. However, a number of issues remain concerning the use of MSCs, such as their cell surface markers, the factors that determine their differentiation into terminal cells, and the mechanisms whereby growth factors stimulate MSCs. In this chapter, we discuss how proteomic techniques have contributed to our current knowledge and how they can be used to address the issues currently facing MSC research. The application of proteomics has led to the identification of a characteristic pattern of cell surface protein expression on MSCs. The technique has also contributed to the study of the regulatory network of MSC differentiation into terminally differentiated cells, including osteocytes, chondrocytes, adipocytes, neurons, cardiomyocytes, hepatocytes and pancreatic islet cells, and has helped elucidate the mechanisms of growth factor-stimulated differentiation of MSCs. Proteomics alone cannot, however, reveal the precise role of a specific pathway and must therefore be combined with other approaches for this purpose. A new generation of proteomic techniques has recently been developed, which will enable a more comprehensive study of MSCs.

Relevance: 20.00%

Publisher:

Abstract:

The effects of tumour motion during radiation therapy delivery have been widely investigated. Motion effects have become increasingly important with the introduction of dynamic delivery modalities such as enhanced dynamic wedges (EDWs) and intensity modulated radiation therapy (IMRT), in which a dynamically collimated radiation beam is delivered to a moving target, resulting in dose blurring and in interplay effects that are a consequence of the combined tumour and beam motion. Prior to this work, reported studies of EDW-based interplay effects were restricted to experimental methods for assessing single-field, non-fractionated treatments. In this work, the interplay effects have been investigated for EDW treatments; single- and multiple-field treatments have been studied using experimental and Monte Carlo (MC) methods.

Initially, this work experimentally studies interplay effects for single-field, non-fractionated EDW treatments, using radiation dosimetry systems placed on a sinusoidally moving platform. A number of wedge angles (60°, 45° and 15°), field sizes (20 × 20, 10 × 10 and 5 × 5 cm2), amplitudes (10-40 mm in steps of 10 mm) and periods (2 s, 3 s, 4.5 s and 6 s) of tumour motion are analysed (using gamma analysis) for parallel and perpendicular motion (where the tumour and jaw motions are either parallel or perpendicular to each other). For parallel motion it was found that both the amplitude and the period of tumour motion affect the interplay, which becomes more prominent as the collimator and tumour speeds become identical. For perpendicular motion the amplitude of tumour motion is the dominant factor, whereas varying the period of tumour motion has no observable effect on the dose distribution. The wedge-angle results suggest that a large wedge angle generates greater dose variation for both parallel and perpendicular motion. The use of a small field size with a large tumour motion results in the loss of the wedged dose distribution for both parallel and perpendicular motion. From these single-field measurements, a motion amplitude and period showing the poorest agreement between the target motion and the dynamic delivery were identified and used as the "worst-case motion parameters".

The experimental work is then extended to multiple-field fractionated treatments. A number of pre-existing, multiple-field, wedged lung plans are delivered to the radiation dosimetry systems, employing the worst-case motion parameters. Moreover, a four-field EDW lung plan (using a 4D CT data set) is delivered to the IMRT quality control phantom with a dummy tumour insert over four fractions using the worst-case parameters, i.e. 40 mm amplitude and 6 s period. Gamma analysis of the film doses at 3%/3 mm indicates non-averaging of the interplay effects for this particular study, with a gamma pass rate of 49%.

To enable Monte Carlo modelling of the problem, the DYNJAWS component module (CM) of the BEAMnrc user code, recently introduced to model the dynamic wedges, is validated and automated. DYNJAWS is commissioned for 6 MV and 10 MV photon energies, and it is shown that this CM can accurately model the EDWs for a number of wedge angles and field sizes. The dynamic and step-and-shoot modes of the CM are compared for their accuracy in modelling the EDW, and the dynamic mode is shown to be more accurate. The DYNJAWS-specific input file, which specifies the probability of selection of a subfield and the respective jaw coordinates, has been automated; this simplifies the generation of BEAMnrc input files for DYNJAWS.

The commissioned DYNJAWS model is then used to study multiple-field EDW treatments with MC methods. The 4D CT data of an IMRT phantom with the dummy tumour are used to produce a set of Monte Carlo simulation phantoms, onto which the delivery of single-field and multiple-field EDW treatments is simulated. A number of static and motion multiple-field EDW plans have been simulated. Comparison of dose volume histograms (DVHs) and gamma volume histograms (GVHs) for four-field EDW treatments (where the collimator and patient motion are in the same direction) using small (15°) and large (60°) wedge angles indicates a greater mismatch between the static and motion cases for the large wedge angle.

Finally, to use gel dosimetry as a validation tool, a new technique called the "zero-scan method" is developed for reading gel dosimeters with x-ray computed tomography (CT). It is shown that multiple scans of a gel dosimeter (in this case 360 scans) can be used to reconstruct a zero-scan image. This zero-scan image has a precision similar to that of an image obtained by averaging the CT images, but without the additional dose delivered by the CT scans.

In summary, the interplay effects have been studied for single- and multiple-field fractionated EDW treatments using experimental and Monte Carlo methods. The DYNJAWS component module of the BEAMnrc code has been validated, automated and used to study the interplay for multiple-field EDW treatments. The zero-scan method, a new gel dosimetry readout technique, has been developed for reading gel images using x-ray CT without loss of precision or accuracy.
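The zero-scan idea can be sketched per voxel: repeated CT scans deposit dose in the gel and can drift the measured signal, so the per-scan readings are fitted against scan number and extrapolated back to scan zero. The linear drift model and the synthetic numbers below are illustrative assumptions, not the study's actual reconstruction:

```python
# Hedged per-voxel sketch of the "zero-scan" readout: fit signal vs scan
# index with a least-squares line and report the intercept, i.e. the
# value extrapolated to zero scans.  A linear drift model is assumed.
def zero_scan_estimate(readings):
    n = len(readings)
    xs = range(1, n + 1)                      # scan numbers 1..n
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
             / sum((x - mean_x) ** 2 for x in xs))
    return mean_y - slope * mean_x            # intercept = zero-scan value

# Synthetic signal drifting by 0.05 per scan from a true value of 10.2.
readings = [10.2 + 0.05 * k for k in range(1, 6)]
estimate = zero_scan_estimate(readings)
```

Averaging many scans would reduce noise equally well, but only the extrapolated intercept removes the bias introduced by the scanning dose itself, which is the method's advantage.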

Relevance: 20.00%

Publisher:

Abstract:

Numerical simulations of mixed convection of a micropolar fluid in an open-ended arc-shaped cavity have been carried out in this study. Computation is performed using the Alternating Direction Implicit (ADI) method together with the Successive Over-Relaxation (SOR) technique for the solution of the governing partial differential equations. The flow phenomenon is examined for a range of values of the Rayleigh number, 10^2 ≤ Ra ≤ 10^6, Prandtl number, 7 ≤ Pr ≤ 50, and Reynolds number, 10 ≤ Re ≤ 100. The study focuses mainly on how the micropolar fluid parameters affect the fluid properties in the flow domain. It was found that, despite the reduction of flow in the core region, the heat transfer rate increases, whereas the skin friction and microrotation decrease, with increasing vortex viscosity parameter, Δ.
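The SOR technique mentioned above can be illustrated on a much simpler problem. The study's 2-D ADI/SOR solver for the micropolar equations is far more involved; this sketch only shows the over-relaxed update itself, applied to the 1-D Laplace equation u'' = 0 with boundary values u(0) = 0 and u(1) = 1:

```python
# Minimal SOR sketch: each sweep replaces u[i] with a weighted blend of
# its old value and the Gauss-Seidel value; omega > 1 over-relaxes and
# accelerates convergence.  Problem and parameters are illustrative.
def sor_laplace_1d(n=9, omega=1.5, iters=200):
    u = [0.0] * (n + 2)
    u[-1] = 1.0                                  # boundaries: u[0]=0, u[n+1]=1
    for _ in range(iters):
        for i in range(1, n + 1):
            gs = 0.5 * (u[i - 1] + u[i + 1])     # Gauss-Seidel value
            u[i] = (1 - omega) * u[i] + omega * gs  # over-relaxed update
    return u

u = sor_laplace_1d()   # converges to the exact linear solution u(x) = x
```

With omega = 1 this reduces to plain Gauss-Seidel; choosing omega between 1 and 2 can reduce the iteration count dramatically, which is why SOR is paired with ADI sweeps in solvers like the one described above.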

Relevance: 20.00%

Publisher:

Abstract:

The Six Sigma technique is a quality management strategy utilised for improving quality and productivity in the manufacturing process. Inspired by Deming's Plan-Do-Check-Act (PDCA) cycle, it comprises two major project methodologies, DMAIC and DMADV, each consisting of five phases. The DMAIC methodology, which is used for improving an existing manufacturing process through the phases Define, Measure, Analyse, Improve and Control, is used throughout this research. The mask industry has become significant since the outbreak of serious diseases such as Severe Acute Respiratory Syndrome (SARS), bird flu, influenza, swine flu and hay fever. Protecting the respiratory system has therefore become a fundamental requirement for preventing respiratory diseases, and masks are the most appropriate protective product inasmuch as they are effective in protecting the respiratory tract and resisting airborne virus infection. To satisfy various customer requirements, thousands of mask products are on the market, and masks are also widely used in industries including the medical, semiconductor, food, traditional manufacturing and metal industries. Because masks are used to prevent dangerous diseases and safeguard people, their quality is a priority, and quality improvement techniques are of very high significance in the mask industry. The purpose of this research project is firstly to investigate the current quality control practices in the mask industry, then to explore the feasibility of using the Six Sigma technique in that industry, and finally to implement the Six Sigma technique in the case company to develop and evaluate the product quality process.

This research investigates the quality problems of the mask industry and the effectiveness of the Six Sigma technique in that industry, with the United Excel Enterprise Corporation (UEE) Company as the case company. The DMAIC project methodology of the Six Sigma technique is adopted and developed in this research, which makes several contributions to knowledge. First, the main results contribute to discovering the root causes of quality problems in the mask industry. Second, the company was able to increase not only its acceptance rate but also its quality level by utilising the Six Sigma technique; hence, utilising the technique could increase the production capacity of the company. Third, the Six Sigma technique needs to be extensively modified to improve quality control in the mask industry. The impact of the Six Sigma technique on the overall performance of the business organisation should be further explored in future research.
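A standard quantitative metric used in the Measure phase of DMAIC is defects per million opportunities (DPMO) and the corresponding sigma level. The case company's figures are not reported here, so the inputs below are illustrative; the 1.5-sigma shift is the conventional Six Sigma convention, not a claim about this study:

```python
# Hedged sketch of common Six Sigma metrics: DPMO and the short-term
# sigma level using the conventional 1.5-sigma shift.  Inputs are
# illustrative, not the case company's data.
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return 1_000_000 * defects / (units * opportunities_per_unit)

def sigma_level(dpmo_value):
    """Short-term sigma level: z-value of the yield plus the 1.5 shift."""
    yield_frac = 1 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(yield_frac) + 1.5

# Illustrative: 35 defective masks in a lot of 10,000, one opportunity each.
d = dpmo(defects=35, units=10_000, opportunities_per_unit=1)  # 3500 DPMO
level = sigma_level(d)  # roughly 4.2 sigma
```

Tracking this metric before and after the Improve phase is one concrete way the "increase in quality level" reported above can be quantified.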

Relevance: 20.00%

Publisher:

Abstract:

Navigational collisions are one of the major safety concerns in many seaports. To address this concern, a comprehensive and structured method of collision risk management is necessary. Traditionally, management of port water collision risks has relied on historical collision data. However, this collision-data-based approach is hampered by several shortcomings: the randomness and rarity of collision occurrence leads to an insufficient number of samples for sound statistical analysis, collision causation is insufficiently explained, and the approach to safety is reactive. A promising alternative that overcomes these shortcomings is the navigational traffic conflict technique, which uses traffic conflicts as an alternative to collision data. This paper proposes a collision risk management method utilizing the principles of this technique. The method allows safety analysts to diagnose safety deficiencies in a proactive manner and consequently has great potential for managing collision risks in a fast, reliable and efficient manner.

Relevance: 20.00%

Publisher:

Abstract:

Navigational collisions are one of the major safety concerns for many seaports. Continuing growth of shipping traffic in both number and size is likely to result in increased traffic movements, which consequently could result in a higher risk of collisions in these restricted waters. This continually increasing safety concern warrants a comprehensive technique for modeling collision risk in port waters, particularly for modeling the probability of collision events and the associated consequences (i.e., injuries and fatalities). A number of techniques have been utilized for modeling the risk qualitatively, semi-quantitatively and quantitatively. These traditional techniques mostly rely on historical collision data, often in conjunction with expert judgment. However, they are hampered by several shortcomings: the randomness and rarity of collision occurrence leads to an insufficient number of collision counts for sound statistical analysis, collision causation is insufficiently explained, and the approach to safety is reactive. A promising alternative that overcomes these shortcomings is the navigational traffic conflict technique (NTCT), which uses traffic conflicts as an alternative to collisions for modeling the probability of collision events quantitatively. This article explores the existing techniques for modeling collision risk in port waters. In particular, it identifies the advantages and limitations of the traditional techniques and highlights the potential of the NTCT to overcome those limitations. In view of the principles of the NTCT, a structured method for managing collision risk is proposed. This risk management method allows safety analysts to diagnose safety deficiencies in a proactive manner and consequently has great potential for managing collision risk in a fast, reliable and efficient manner.

Relevance: 20.00%

Publisher:

Abstract:

A favourable scaffold for bone tissue engineering should have desirable characteristics, such as adequate mechanical strength and three-dimensional open porosity, which guarantee a suitable environment for tissue regeneration. The design of such complex structures as bone scaffolds is a challenge for investigators; one aim is to achieve the best possible ratio of mechanical strength to degradation rate. In this paper we use numerical modelling to evaluate material properties for designing bone tissue engineering scaffolds fabricated via the fused deposition modelling technique. For our studies the standard genetic algorithm, an efficient method of discrete optimization, was used. For the fused deposition modelling scaffold, each individual strut is scrutinized for its role in the architecture, the structural support it provides, and its contribution to the overall scaffold. The goal of the study was to create a numerical tool to help achieve the desired behaviour of tissue-engineered scaffolds, and our results showed that this could be done efficiently by using different materials for individual struts. To represent the many ways in which scaffold mechanical function loss could proceed, an exemplary set of different desirable scaffold stiffness loss functions was chosen. © 2012 John Wiley & Sons, Ltd.
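The discrete search described above can be sketched with a toy genetic algorithm: assign one of two candidate materials to each strut so that a crude total-stiffness measure hits a target. The moduli, the sum-of-stiffnesses model, and the GA settings are all illustrative assumptions; the paper uses a proper structural model of the scaffold, not a simple sum:

```python
import random

# Toy GA sketch: pick a material (hypothetical Young's moduli, GPa) per
# strut so total stiffness, crudely modelled as a sum, matches a target.
random.seed(0)
MATERIALS = [1.0, 3.5]          # candidate moduli (illustrative)
N_STRUTS, TARGET = 12, 27.0

def fitness(genome):
    return -abs(sum(MATERIALS[g] for g in genome) - TARGET)

def evolve(pop_size=40, generations=60):
    pop = [[random.randrange(2) for _ in range(N_STRUTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_STRUTS)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:            # point mutation
                i = random.randrange(N_STRUTS)
                child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Replacing the fitness function with the distance between a simulated stiffness-loss curve and a desired one is the step that would connect this skeleton to the scaffold-degradation objective described in the abstract.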

Relevance: 20.00%

Publisher:

Abstract:

The elevated particle number concentration (PNC) observed during nucleation events can make a significant contribution to the total particle load and therefore to air pollution in urban environments. A field measurement study was therefore conducted to investigate the temporal and spatial variations of PNC within the urban airshed of Brisbane, Australia. PNC was monitored at urban (QUT), roadside (WOO) and semi-urban (ROC) sites around the Brisbane region during 2009. During the morning traffic peak period, the highest relative fraction of PNC reached about 5% at QUT and WOO on weekdays. PNC peaks were observed around noon, correlating with the highest solar radiation levels at all three stations, suggesting that the high PNC levels were likely associated with new particle formation caused by photochemical reactions. Wind rose plots showed relatively higher PNC for the NE direction, associated with industrial pollution, accounting for 12%, 9% and 14% of overall PNC at QUT, WOO and ROC, respectively. Although there was no significant correlation between PNC at the individual stations, the variation of PNC was well correlated among the three stations during regional nucleation events. In addition, PNC at ROC was significantly influenced by upwind urban pollution during nucleation burst events, with an average enrichment factor of 15.4. This study provides an insight into the influence of regional nucleation events on PNC in the Brisbane region and is the first study to quantify the effect of urban pollution on semi-urban PNC through nucleation events. © 2012 Author(s).
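The enrichment factor cited above is, in essence, a ratio of event to background concentrations. A minimal sketch, with purely illustrative counts (the station data are not reproduced here):

```python
# Hedged sketch of an enrichment factor: mean PNC during a nucleation
# burst divided by the mean non-event background PNC.  Counts are
# illustrative (particles per cm^3), not the ROC station data.
def enrichment_factor(event_pnc, background_pnc):
    mean_event = sum(event_pnc) / len(event_pnc)
    mean_background = sum(background_pnc) / len(background_pnc)
    return mean_event / mean_background

# Illustrative burst vs background readings at a semi-urban site.
ef = enrichment_factor([46000, 50000, 48000], [3000, 3200, 3100, 3100])
```

An enrichment factor well above 1, as in the study's reported average of 15.4, indicates that burst-period concentrations far exceed the site's usual background.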