964 results for Application method


Relevance: 30.00%

Abstract:

Porosity is one of the key parameters of the macroscopic structure of porous media, generally defined as the ratio of the free space (the volume occupied by air) within the material to the total volume of the material. Porosity is determined by measuring the skeletal volume and the envelope volume. The solid displacement method is an inexpensive and simple way to determine the envelope volume of an irregularly shaped sample. In this method, glass beads are generally used as the displacement solid because of their uniform size, compactness and fluidity. However, the beads can enter any open pore whose diameter is larger than the bead size. Although extensive research has been carried out on porosity determination using the displacement method, no study adequately reports micro-level observation of the sample during measurement. This study therefore set out to assess, by micro-level observation, the accuracy of the solid displacement method for bulk density measurement of dried foods. The method was applied using a cylindrical plastic vial and 57 µm glass beads to measure the bulk density of apple slices at different moisture contents. A scanning electron microscope (SEM), a profilometer and ImageJ software were used to investigate the penetration of glass beads into the surface pores during the measurement, and a helium pycnometer was used to measure the particle density of the sample. Results show that a significant number of pores were large enough to admit the glass beads, causing erroneous results. It was also found that coating the dried sample with an appropriate coating material prior to measurement can resolve this problem.
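As a minimal sketch of the volume-to-porosity relationship this abstract relies on (the function name and sample values are illustrative, not taken from the study):

```python
def porosity(envelope_volume_cm3, skeletal_volume_cm3):
    """Pore fraction: phi = (V_envelope - V_skeletal) / V_envelope.

    The envelope volume would come from solid (glass bead) displacement;
    the skeletal volume from helium pycnometry.
    """
    return (envelope_volume_cm3 - skeletal_volume_cm3) / envelope_volume_cm3

# Equivalently, phi = 1 - rho_bulk / rho_particle, since both densities
# share the same sample mass.
phi = porosity(2.0, 0.8)  # illustrative volumes -> phi = 0.6
```

Bead penetration into surface pores inflates the measured envelope volume, which is why the uncoated samples gave erroneous porosity values.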

Relevance: 30.00%

Abstract:

In the health care industry, job satisfaction (JS) is linked with work performance, psychological well-being and employee turnover. Although research into JS among health professionals has a long history worldwide, there has been very little analysis in Vietnam. No study has addressed the JS of preventive medicine workers in Vietnam, and there is no reliable and valid instrument in the Vietnamese language and context for evaluating JS in this group. This project was conducted to fill these gaps. The findings contribute evidence on the factors that influence JS in this sector of the health industry, which should inform personnel management policies and practices in Vietnam.

Relevance: 30.00%

Abstract:

This paper examines the impact of the chosen bottle-point method on ion exchange equilibrium experiments. As an illustration, potassium ion exchange with a strong acid cation resin was investigated, owing to its relevance to the treatment of various industrial effluents and groundwater. The "constant mass" bottle-point method was shown to be problematic: depending upon the resin mass used, the equilibrium isotherm profiles differed. Indeed, application of common equilibrium isotherm models revealed that the optimal fit could be with either the Freundlich or the Temkin equation, depending upon the conditions employed. It could be inferred that the resin surface was heterogeneous in character, but precise conclusions regarding the variation in the heat of sorption were not possible. Estimation of the maximum potassium loading was also inconsistent when employing the "constant mass" method. The "constant concentration" bottle-point method showed that the Freundlich model represented the exchange process well, and the recorded isotherms were relatively consistent compared with those from the "constant mass" approach. All of the equilibrium isotherm data were unified by use of the Langmuir–Vageler expression, and the maximum loading of potassium ions was predicted to be at least 116.5 g/kg resin.
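Fitting the Freundlich isotherm named above is commonly done by log-linearisation. A minimal NumPy sketch, using synthetic illustrative concentrations and loadings rather than the paper's data:

```python
import numpy as np

# Freundlich isotherm: q = K * C^(1/n). Taking logs gives a straight line,
# ln q = ln K + (1/n) ln C, so a linear fit in log-log space recovers K and n.
c = np.array([0.5, 1.0, 2.0, 5.0, 10.0])      # equilibrium concentration (illustrative)
q = np.array([19.8, 28.0, 39.6, 62.6, 88.5])  # resin loading (illustrative)

slope, intercept = np.polyfit(np.log(c), np.log(q), 1)
K, n = np.exp(intercept), 1.0 / slope  # for these values, K ~ 28 and n ~ 2
```

A value of n greater than 1 is conventionally read as favourable sorption on a heterogeneous surface, which matches the inference drawn in the abstract.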

Relevance: 30.00%

Abstract:

Long-term measurements of particle number size distribution (PNSD) produce very large numbers of observations, so their analysis requires an efficient approach in order to produce results in the least possible time and with maximum accuracy. Clustering techniques are a family of sophisticated methods that have recently been employed to analyse PNSD data; however, very little information is available comparing the performance of different clustering techniques on such data. This study applied several clustering techniques (K-means, PAM, CLARA and SOM) to PNSD data measured at 25 sites across Brisbane, Australia, in order to identify and apply the optimum technique. A new method, based on the Generalised Additive Model (GAM) with a basis of penalised B-splines, was proposed to parameterise the PNSD data, and the temporal weight of each cluster was also estimated using the GAM. Each cluster was then associated with its possible source based on this parameterisation, together with the characteristics of the cluster. The performances of the four clustering techniques were compared using the Dunn index and silhouette width, and K-means performed best, with five clusters being the optimum. The diurnal occurrence of each cluster was used, together with other air quality parameters, temporal trends and the physical properties of each cluster, to attribute each cluster to its source and origin. The five clusters were attributed to three major sources and origins: regional background particles, photochemically induced nucleated particles and vehicle-generated particles. Overall, clustering was found to be an effective technique for attributing each particle size spectrum to its source, and the GAM was suitable for parameterising the PNSD data. These two techniques can greatly assist researchers in analysing PNSD data for characterisation and source apportionment purposes.
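The cluster-number selection step described above can be sketched with scikit-learn, scoring K-means by silhouette width; the synthetic spectra here are stand-ins for real PNSD data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Stand-in for PNSD data: each row is a (synthetic) number-size spectrum
# drawn around one of three distinct mean spectra.
spectra = np.vstack([rng.normal(loc=mu, scale=0.05, size=(100, 16))
                     for mu in (0.1, 0.4, 0.8)])

scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(spectra)
    scores[k] = silhouette_score(spectra, labels)  # higher is better

best_k = max(scores, key=scores.get)  # 3 for this synthetic example
```

The study additionally used the Dunn index for validation; silhouette width alone suffices to illustrate how an optimum cluster count is chosen.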

Relevance: 30.00%

Abstract:

Any government that invokes widespread change in its higher education sector through the implementation of new policies affects every institution and all staff and students, often in both the time consumed and the heightened emotions caused. The central phenomenon this study addresses is the process and consequences of policy change in higher education in Australia. The aim of this article is to record the research design through its perspective (evaluation research), theoretical framework (program evaluation) and methods (content analysis, descriptive statistical analysis and bibliometric analysis), as applied to the investigation of the 2003 federal government higher education reform package. This approach allows both the intended and unintended consequences of implementing three national initiatives focused on learning and teaching in Australian higher education to surface. As a result, this program evaluation, also known in some disciplines as policy implementation analysis, will demonstrate the applicability of illuminative evaluation as a methodology, reinforce how program evaluation can assist and advise future government reform and policy implementation, and serve as a legacy for future evaluative research.

Relevance: 30.00%

Abstract:

This paper presents a new metric, which we call the lighting variance ratio, for quantifying descriptors in terms of their variance to illumination changes. In many applications it is desirable to have descriptors that are robust to changes in illumination, especially in outdoor environments. The lighting variance ratio is useful for comparing descriptors and determining if a descriptor is lighting invariant enough for a given environment. The metric is analysed across a number of datasets, cameras and descriptors. The results show that the upright SIFT descriptor is typically the most lighting invariant descriptor.

Relevance: 30.00%

Abstract:

In images with low contrast-to-noise ratio (CNR), the information gain from the observed pixel values can be insufficient to distinguish foreground objects. A Bayesian approach to this problem is to incorporate prior information about the objects into a statistical model. A method for representing spatial prior information as an external field in a hidden Potts model is introduced. This prior distribution over the latent pixel labels is a mixture of Gaussian fields, centred on the positions of the objects at a previous point in time. It is particularly applicable in longitudinal imaging studies, where the manual segmentation of one image can be used as a prior for automatic segmentation of subsequent images. The method is demonstrated by application to cone-beam computed tomography (CT), an imaging modality that exhibits distortions in pixel values due to X-ray scatter. The external field prior results in a substantial improvement in segmentation accuracy, reducing the mean pixel misclassification rate for an electron density phantom from 87% to 6%. The method is also applied to radiotherapy patient data, demonstrating how to derive the external field prior in a clinical context.
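The core idea, adding an external field to the Potts smoothing prior, can be sketched in a toy two-label form: a Gaussian bump centred on the object's previous position pulls the single-site label probabilities toward the object label there. All names and parameter values below are illustrative, not the paper's implementation:

```python
import numpy as np

def external_field(shape, centre, sigma):
    """Gaussian field favouring the object label near its prior position."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return np.exp(-((yy - centre[0]) ** 2 + (xx - centre[1]) ** 2)
                  / (2.0 * sigma ** 2))

def label_probs(labels, i, j, field, beta=1.0, weight=2.0):
    """Single-site conditional for a 2-label hidden Potts model.

    Each label's energy combines the Potts term (agreement with the
    4-neighbourhood) with, for the object label 1, the external field.
    """
    h, w = labels.shape
    neigh = [labels[y, x]
             for y, x in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
             if 0 <= y < h and 0 <= x < w]
    energies = np.array([
        beta * sum(n == 0 for n in neigh),                        # background
        beta * sum(n == 1 for n in neigh) + weight * field[i, j]  # object
    ])
    p = np.exp(energies - energies.max())  # stabilised softmax
    return p / p.sum()
```

In a longitudinal setting, `centre` would come from the manual segmentation of the earlier image; a Gibbs sampler or similar would then use these conditionals to segment the new image.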

Relevance: 30.00%

Abstract:

Bearing faults are the most common cause of wind turbine failures, and with the rapid growth of wind power in electricity networks, turbine unavailability and maintenance costs are becoming critically important. Early fault detection can reduce outage time and costs. This paper proposes anomaly detection (AD) machine learning algorithms for fault diagnosis of wind turbine bearings and applies the method to a real data set. For validation and comparison, a set of baseline results is produced using the popular one-class SVM method to examine the ability of the proposed technique to detect incipient faults.
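The one-class SVM baseline mentioned above can be sketched as follows; the two features and the Gaussian samples are invented stand-ins for real vibration-signal features, not the paper's data:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
# Stand-in features (e.g. RMS, kurtosis) extracted from healthy-bearing signals.
healthy = rng.normal(0.0, 1.0, size=(200, 2))

# Train on healthy data only; nu bounds the expected fraction of outliers.
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(healthy)

# Shifted samples mimic a developing fault; predict() returns -1 for anomalies.
faulty = rng.normal(6.0, 1.0, size=(20, 2))
flags = model.predict(faulty)
```

Training on healthy data alone is what makes this an anomaly detection setup: no labelled fault examples are required, which suits incipient faults that are rare in operational records.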

Relevance: 30.00%

Abstract:

This paper develops a meshless approach based on the Point Interpolation Method (PIM) for the numerical simulation of a space fractional diffusion equation. Two fully discrete schemes for the one-dimensional space fractional diffusion equation are obtained by using the PIM and the strong form of the equation. Numerical examples with different nodal distributions are studied to validate the newly developed meshless approach and to investigate its accuracy and efficiency.

Relevance: 30.00%

Abstract:

A FitzHugh-Nagumo monodomain model has been used to describe the propagation of the electrical potential in heterogeneous cardiac tissue. In this paper, we consider a two-dimensional fractional FitzHugh-Nagumo monodomain model on an irregular domain. The model consists of a coupled Riesz space fractional nonlinear reaction-diffusion model and an ordinary differential equation describing the ionic fluxes as a function of the membrane potential. First, we use a decoupling technique and focus on solving the Riesz space fractional nonlinear reaction-diffusion model. Second, a novel spatially second-order accurate semi-implicit alternating direction method (SIADM) for this model on an approximate irregular domain is proposed. Third, stability and convergence of the SIADM are proved. Finally, numerical examples are given to support the theoretical analysis, and these numerical techniques are employed to simulate the two-dimensional fractional FitzHugh-Nagumo model on both an approximate circular and an approximate irregular domain.

Relevance: 30.00%

Abstract:

In this paper, we derive a new nonlinear two-sided space-fractional diffusion equation with variable coefficients from the fractional Fick's law. A semi-implicit difference method (SIDM) for this equation is proposed, and its stability and convergence are discussed. For the implementation, we develop a fast, accurate iterative method for the SIDM by decomposing the dense coefficient matrix into a combination of Toeplitz-like matrices. This fast iterative method reduces the storage requirement from O(n^2) to O(n) and the computational cost from O(n^3) to O(n log n), where n is the number of grid points, while retaining the same accuracy as the underlying SIDM solved with Gaussian elimination. Finally, numerical results are shown to verify the accuracy and efficiency of the new method.
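The O(n log n) cost cited above is characteristic of the classic circulant-embedding trick: a Toeplitz matrix-vector product can be computed with FFTs instead of a dense multiply. A minimal NumPy sketch of that trick (illustrative, not the paper's code):

```python
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Toeplitz matrix-vector product in O(n log n) via circulant embedding.

    The n x n Toeplitz matrix is embedded in a 2n x 2n circulant, whose
    action is diagonalised by the FFT; the first n entries of the result
    equal T @ x.
    """
    n = len(x)
    # First column of the circulant: [t_0 .. t_{n-1}, 0, t_{-(n-1)} .. t_{-1}]
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real
```

Only the first column and first row need to be stored, which is the source of the O(n) memory figure; an iterative solver (e.g. conjugate-gradient-type) built on this matvec then avoids Gaussian elimination entirely.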

Relevance: 30.00%

Abstract:

In this paper, a new alternating direction implicit Galerkin-Legendre spectral method for the two-dimensional Riesz space fractional nonlinear reaction-diffusion equation is developed. The temporal component is discretized by the Crank-Nicolson method, and the detailed implementation of the method is presented. The stability and convergence analysis is rigorously proven, showing that the derived method is stable and second-order convergent in time. An optimal error estimate in space is also obtained by introducing a new orthogonal projector. The present method is extended to solve the fractional FitzHugh-Nagumo model, and numerical results are provided to verify the theoretical analysis.

Relevance: 30.00%

Abstract:

The maximum principle for space and time-space fractional partial differential equations is still an open problem. In this paper, we consider a multi-term time-space Riesz-Caputo fractional differential equation over an open bounded domain. A maximum principle for the equation is proved, and the uniqueness and continuous dependence of the solution are derived. Using a fractional predictor-corrector method combining the L1 and L2 discrete schemes, we present a numerical method for the specified equation. Two examples are given to illustrate the obtained results.

Relevance: 30.00%

Abstract:

In this paper, we consider a two-sided space-fractional diffusion equation with variable coefficients on a finite domain. Firstly, based on nodal basis functions, we present a new fractional finite volume method for this equation, derive the implicit scheme and express it in matrix form. Secondly, we prove that the implicit fractional finite volume method is unconditionally stable and convergent. Finally, numerical examples are given to show the effectiveness of the new method, with results in excellent agreement with the theoretical analysis.

Relevance: 30.00%

Abstract:

The fractional Fokker-Planck equation is an important physical model for simulating anomalous diffusion with external forces. Because of the non-local property of the fractional derivative, an interesting problem is to explore high-accuracy numerical methods for fractional differential equations. In this paper, a space-time spectral method is presented for the numerical solution of the time fractional Fokker-Planck initial-boundary value problem. The proposed method employs Jacobi polynomials for the temporal discretization and Fourier-like basis functions for the spatial discretization. Because the Fourier-like basis functions are diagonalizable, the inner product in the Galerkin analysis admits a reduced representation. We prove that, with the present method, the time fractional Fokker-Planck equation attains the same approximation order as the time fractional diffusion equation treated in [23]. This indicates that exponential decay of the error may be achieved if the exact solution is sufficiently smooth. Finally, numerical results are given to demonstrate the high-order accuracy and efficiency of the new scheme: the errors of the numerical solutions obtained by the space-time spectral method decay exponentially.