166 results for Test method


Relevance: 30.00%

Abstract:

Although the tourism industry has been dramatically altered by the Internet, little research has been published about international entrepreneurial values and Internet use in tourism firms. The findings of this study point to a relationship between the values of Internet-enabled international entrepreneurs in small- to medium-sized enterprises and the inclination of the firm to develop and initiate international activity. The findings also suggest that Internet-enabled tourism entrepreneurs share similar construct values. Two effective but underutilized qualitative methods were used in this study: the first, the repertory test, is an efficient technique for exploring constructs in decision making; the second, laddering analysis, facilitates understanding of the perceived consequences and personal values guiding behaviour.

Relevance: 30.00%

Abstract:

We have developed a method to test the cytotoxicity of wound dressings, ointments, creams and gels used in our Burn Centre, by placing them on a permeable Nunc polycarbonate cell culture insert incubated with a monolayer of cells (HaCaTs and primary human keratinocytes). METHODS: We used two different methods to determine the relative toxicity to cells. (1) Photo visualisation: the dressings or compounds were positioned on the insert's membrane, which was placed onto the monolayer tissue culture plate. After 24 h, the surviving adherent cells were stained with Toluidine Blue and the plates were photographed. The acellular area of non-adherent dead cells, which had been washed off with buffer, was measured as a percentage of the total area of the plate. (2) Cell count of surviving cells: after 24 h incubation with the test material, the remaining cells were detached with trypsin, spun down and counted in a haemocytometer with Trypan Blue, which differentiates between live and dead cells. RESULTS: Seventeen products were tested. The least cytotoxic products were Melolite, White Soft Paraffin and Chlorsig 1% Ointment. Some cytotoxicity was shown with Jelonet, Mepitel®, PolyMem®, DuoDerm® and Xeroform. The most cytotoxic products included those containing silver or Chlorhexidine, and Paraffin Cream, a moisturizer containing the preservative Chlorocresol. CONCLUSION: This in vitro cell culture insert method allows testing of agents without direct cell contact. It is easy and quick to perform, and should help the clinician determine the relative cytotoxicity of various dressings and the optimal dressing for each individual wound.
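Both read-outs above reduce to simple proportions. As a minimal illustrative sketch (not code from the paper), the acellular-area percentage and the Trypan Blue viability could be computed as:

```python
def acellular_fraction(acellular_area_mm2, total_area_mm2):
    """Acellular (dead, washed-off) area as a percentage of the plate."""
    return 100.0 * acellular_area_mm2 / total_area_mm2

def viability(live_count, dead_count):
    """Trypan Blue viability: live cells as a fraction of all counted cells."""
    return live_count / (live_count + dead_count)
```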

Relevance: 30.00%

Abstract:

This study used a homogeneous water-equivalent model of an electronic portal imaging device (EPID), contoured as a structure in a radiotherapy treatment plan, to produce reference dose images for comparison with in vivo EPID dosimetry images. Head and neck treatments were chosen as the focus of this study, due to the heterogeneous anatomies involved and the consequent difficulty of rapidly obtaining reliable reference dose images by other means. A phantom approximating the size and heterogeneity of a typical neck, with a maximum radiological thickness of 8.5 cm, was constructed for use in this study. This phantom was CT scanned and a simple treatment including five square test fields and one off-axis IMRT field was planned. In order to allow the treatment planning system to calculate dose in a model EPID positioned downstream from the phantom at a source-to-detector distance (SDD) of 150 cm, the CT images were padded with air and the phantom's "body" contour was extended to encompass the EPID contour. Comparison of dose images obtained from treatment planning calculations and experimental irradiations showed good agreement, with more than 90% of points in all fields passing a gamma evaluation at γ(3%, 3 mm). Similar agreement was achieved when the phantom was overwritten with air in the treatment plan and removed from the experimental beam, suggesting that the water-equivalent EPID model at 150 cm SDD is capable of providing accurate reference images for comparison with clinical IMRT treatment images, for patient anatomies with radiological thicknesses ranging from 0 up to approximately 9 cm. This methodology therefore has the potential to be used for in vivo dosimetry during treatments to tissues in the neck as well as the oral and nasal cavities, in the head-and-neck region.
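The gamma evaluation combines a dose-difference criterion (3%) with a distance-to-agreement criterion (3 mm); a point passes when gamma ≤ 1. A simplified 1-D global gamma index, sketched here purely for illustration (the study would have used a full 2-D implementation on EPID images), looks like:

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, positions, dose_tol=0.03, dist_tol_mm=3.0):
    """Simplified 1-D global gamma index: for each reference point, the
    minimum over evaluated points of the combined dose/distance metric."""
    ref = np.asarray(ref_dose, float)
    ev = np.asarray(eval_dose, float)
    x = np.asarray(positions, float)
    d_max = ref.max()  # global normalisation dose
    gammas = []
    for i in range(len(ref)):
        dd = (ev - ref[i]) / (dose_tol * d_max)  # dose-difference term
        dr = (x - x[i]) / dist_tol_mm            # distance-to-agreement term
        gammas.append(np.min(np.sqrt(dd ** 2 + dr ** 2)))
    return np.array(gammas)
```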

Relevance: 30.00%

Abstract:

Many websites allow customers to rate items and then use those ratings to generate item reputations, which other users can later use for decision-making purposes. The aggregated value of the ratings per item represents the reputation of that item. The accuracy of reputation scores is important, as they are used to rank items. Most existing aggregation methods do not consider the frequency of distinct ratings, and their reputation scores have not been tested for accuracy over datasets with different sparsity. In this work we propose a new aggregation method, which can be described as a weighted average where the weights are generated using the normal distribution. The evaluation results show that the proposed method outperforms state-of-the-art methods over datasets of varying sparsity.
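The abstract does not give the exact weighting formula, so the sketch below is one plausible reading of "a weighted average where the weights are generated using the normal distribution": each rating is weighted by a Gaussian density centred on the sample mean. The actual scheme in the paper may differ.

```python
import math

def normal_pdf(x, mu, sigma):
    """Gaussian density, used here to generate rating weights."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def reputation(ratings):
    """Weighted-average reputation with normal-distribution weights
    (an illustrative interpretation, not the paper's exact method)."""
    n = len(ratings)
    mu = sum(ratings) / n
    sigma = max((sum((r - mu) ** 2 for r in ratings) / n) ** 0.5, 1e-9)
    weights = [normal_pdf(r, mu, sigma) for r in ratings]
    return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)
```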

Relevance: 30.00%

Abstract:

Low-voltage distribution networks feature a high degree of load unbalance, and the addition of rooftop photovoltaics is driving further unbalance in the network. Single-phase consumers are distributed across the phases, but even if the consumer distribution was well balanced when the network was constructed, changes will occur over time. Distribution transformer losses are increased by unbalanced loadings. Estimating transformer losses is a necessary part of the routine upgrading and replacement of transformers, and identifying the phase connection of each household allows a precise estimation of the phase loadings and total transformer loss. This paper presents a new technique, with preliminary test results, for automatically identifying the phase of each customer by correlating voltage information from the utility's transformer system with voltage information from customer smart meters. The technique is novel in that it is based purely on time series of electrical voltage measurements taken at the household and at the distribution transformer. Experimental results using electrical power and current from real smart-meter datasets demonstrate the performance of the technique.
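The core of the phase-identification step — correlating a customer's voltage time series against each transformer phase and picking the strongest match — can be sketched as follows (a simplified illustration, not the authors' implementation):

```python
import numpy as np

def identify_phase(customer_v, transformer_v):
    """Match a customer's voltage time series to the transformer phase
    whose voltage series it correlates with most strongly.

    transformer_v: dict mapping phase label ('A'/'B'/'C') -> 1-D array.
    Returns (best phase label, Pearson correlation with that phase)."""
    best_phase, best_r = None, -2.0
    for phase, series in transformer_v.items():
        r = np.corrcoef(customer_v, series)[0, 1]
        if r > best_r:
            best_phase, best_r = phase, r
    return best_phase, best_r
```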

Relevance: 30.00%

Abstract:

Nowadays, the integration of small-scale electricity generators, known as Distributed Generation (DG), into distribution networks has become increasingly popular. This tendency, together with the falling price of DG units, gives DG a better chance to participate in the voltage regulation process, in parallel with other regulating devices already available in distribution systems. The voltage control issue turns out to be a very challenging problem for distribution engineers, since existing control coordination schemes need to be reconsidered to take DG operation into account. In this paper, a control coordination approach is proposed which is able to utilize the ability of DG as a voltage regulator and, at the same time, minimize the interaction of DG with other DG units or other active devices, such as the On-load Tap Changing transformer (OLTC). The proposed technique has been developed based on protection principles (magnitude grading and time grading) for response coordination of DG and other regulating devices, and uses Advanced Line Drop Compensators (ALDCs) for implementation. A distribution feeder with a tap-changing transformer and DG units has been extracted from a practical system to test the proposed control technique. The results show that the proposed method provides an effective solution for the coordination of DG with other DG units or voltage regulating devices, and that integrating protection principles considerably reduces control interaction while achieving the desired voltage correction.

Relevance: 30.00%

Abstract:

Background: The capacity to diagnose, quantify and evaluate movement beyond the general confines of a clinical environment, under effectiveness conditions, may alleviate the rampant strain on limited, expensive and highly specialized medical resources. The iPhone 4® incorporates a three-dimensional accelerometer subsystem accessible to highly robust software applications. The present study aimed to evaluate the reliability and concurrent criterion-related validity of accelerations measured with an iPhone 4® in an Extended Timed Get Up and Go test, a clinical test in which the patient rises from a chair, walks ten meters, turns, and returns to the chair. Methods: A repeated-measures, cross-sectional, analytical study. Test-retest reliability of the kinematic measurements of the iPhone 4® was assessed against a standard validated laboratory device. We calculated the Coefficient of Multiple Correlation between the acceleration signals of the two sensors for each subject, in each sub-stage, in each of the three Extended Timed Get Up and Go test trials. To investigate statistical agreement between the two sensors, we used the Bland-Altman method. Results: The Coefficients of Multiple Correlation for the five subjects across their triplicated trials were as follows: in the Sit to Stand sub-phase, r ranged from 0.842 to 0.991; in Gait Go, from 0.852 to 0.967; in Turn, from 0.798 to 0.979; in Gait Come, from 0.887 to 0.964; and in Turn to Stand to Sit, from 0.877 to 0.992. All correlations between the sensors were significant (p < 0.001). The Bland-Altman plots showed a solid tendency to stay close to zero, especially on the y- and x-axes, during the five phases of the Extended Timed Get Up and Go test. Conclusions: The inertial sensor in the iPhone 4® is sufficiently reliable and accurate to evaluate and identify the kinematic patterns in an Extended Timed Get Up and Go test. While the analysis and interpretation of 3D kinematics data remain dauntingly complex, the iPhone 4® makes acquiring the data relatively inexpensive and easy.
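The Bland-Altman agreement analysis used in the study reduces to the bias (mean difference) between the two sensors and its 95% limits of agreement, which the plots visualise:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics between two measurement methods:
    mean difference (bias) and 95% limits of agreement (bias +/- 1.96 SD)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```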

Relevance: 30.00%

Abstract:

Purpose: To develop a signal processing paradigm for extracting ERG responses to temporal sinusoidal modulation with contrasts ranging from below perceptual threshold to suprathreshold, and to estimate the magnitude of intrinsic noise in ERG signals at different stimulus contrasts. Methods: Photopic test stimuli were generated using a 4-primary Maxwellian-view optical system. The 4 primary lights were sinusoidally temporally modulated in phase (36 Hz; 2.5-50% Michelson). The stimuli were presented in 1 s epochs separated by a 1 ms blank interval and repeated 160 times (160.16 s total duration) during recording of the continuous flicker ERG from the right eye using DTL fiber electrodes. After artefact rejection, the ERG signal was extracted using Fourier methods in each of the 1 s epochs where a stimulus was presented. The signal processing allows computation of the intrinsic noise distribution in addition to the signal-to-noise ratio (SNR). Results: We provide the initial report that the ERG intrinsic noise distribution is independent of stimulus contrast, whereas SNR decreases linearly with decreasing contrast until the noise limit at ~2.5%. The 1 ms blank intervals between epochs de-correlated the ERG signal at the line frequency (50 Hz) and thus increased the SNR of the averaged response. We confirm that response amplitude increases linearly with stimulus contrast. The phase response shows a shallow positive relationship with stimulus contrast. Conclusions: This new technique will enable recording of intrinsic noise in ERG signals above and below perceptual visual threshold, and is suitable for measurement of continuous rod and cone ERGs across a range of temporal frequencies, and of post-receptoral processing in the primary retinogeniculate pathways at low stimulus contrasts. The intrinsic noise distribution may have application as a biomarker for detecting changes in disease progression or treatment efficacy.
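The per-epoch Fourier extraction can be sketched as below. Estimating noise from the two FFT bins adjacent to the 36 Hz component is a common flicker-ERG convention and an assumption here, since the abstract does not specify the noise estimator:

```python
import numpy as np

def epoch_response(epoch, fs, f_stim):
    """Amplitude of the ERG at the stimulus frequency for one epoch,
    plus a noise estimate from the two neighbouring FFT bins (assumed
    estimator) and the resulting SNR."""
    n = len(epoch)
    spec = np.fft.rfft(epoch) / n * 2.0        # single-sided amplitude spectrum
    k = int(round(f_stim * n / fs))            # FFT bin of the stimulus frequency
    signal = abs(spec[k])
    noise = (abs(spec[k - 1]) + abs(spec[k + 1])) / 2.0
    return signal, noise, signal / noise
```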

Relevance: 30.00%

Abstract:

The ratcheting behaviour of high-strength rail steel (Australian Standard AS1085.1) is studied in this work for the purpose of predicting wear and damage to the rail surface. Historically, researchers have used circular test coupons obtained from the rail head to conduct cyclic load tests, but according to hardness profile data, considerable variation exists across the rail head section. For example, the induction-hardened rail (AS1085.1) shows high hardness (400-430 HV100) up to four millimeters below the rail head's surface, but hardness drops considerably beyond that. Given that cyclic test coupons five millimeters in diameter at the gauge area are usually taken from the rail sample, there is a high probability that the original surface properties of the rail do not apply across the entire test coupon and, therefore, that only average material properties are obtained. In the literature, disks (47 mm in diameter) for a twin-disk rolling contact test machine have been obtained directly from the rail sample and used to validate rolling contact fatigue wear models. The question arises: how accurate are such predictions? In this paper, the effect of rail sampling position on the ratcheting behaviour of AS1085.1 rail steel was investigated using rectangular specimens. Uniaxial stress-controlled tests were conducted with samples obtained at four different depths to observe the ratcheting behaviour of each. Micro-hardness measurements of the test coupons were carried out to obtain a constitutive relationship predicting the effect of depth on the ratcheting behaviour of the rail material. This work ultimately assists the selection of valid material parameters for constitutive models in the study of rail surface ratcheting.

Relevance: 30.00%

Abstract:

Membrane proteins play important roles in many biochemical processes and are also attractive targets of drug discovery for various diseases. The elucidation of membrane protein types provides clues for understanding the structure and function of proteins. Recently we developed a novel system for predicting protein subnuclear localizations. In this paper, we propose a simplified version of our system for predicting membrane protein types directly from primary protein structures, which incorporates amino acid classifications and physicochemical properties into a general form of pseudo-amino acid composition. In this simplified system, we design a two-stage multi-class support vector machine combined with a two-step optimal feature selection process, which proves very effective in our experiments. The performance of the present method is evaluated on two benchmark datasets consisting of five types of membrane proteins. The overall accuracies of prediction for the five types are 93.25% and 96.61% via the jackknife test and the independent dataset test, respectively. These results indicate that our method is effective and valuable for predicting membrane protein types. A web server for the proposed method is available at http://www.juemengt.com/jcc/memty_page.php
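As an illustration of the feature-extraction idea, the sketch below computes the classical 20-dimensional amino acid composition that pseudo-amino acid composition generalises; the paper's full descriptor additionally encodes physicochemical properties and sequence-order information, which are omitted here:

```python
# The 20 standard amino acids, in a fixed order that defines the feature vector.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(sequence):
    """Fraction of each of the 20 standard amino acids in a sequence."""
    sequence = sequence.upper()
    n = sum(sequence.count(a) for a in AMINO_ACIDS)
    return [sequence.count(a) / n for a in AMINO_ACIDS]
```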

Relevance: 30.00%

Abstract:

Purpose – Ideally, there is no wear in the hydrodynamic lubrication regime. A small amount of wear occurs during starting and stopping of the machines, and the amount is so small that it is difficult to measure accurately. Various wear measuring techniques have been used, of which out-of-roundness was found to be the most reliable for measuring small wear quantities in journal bearings. This technique was further developed to achieve higher accuracy in measuring small wear quantities, and the method proved to be reliable as well as inexpensive. The paper aims to discuss these issues. Design/methodology/approach – In an experimental study, the effect of antiwear additives was studied on journal bearings lubricated with oil containing solid contaminants. The test durations were very long and the wear quantities achieved were very small. To minimise the test duration, short tests of about 90 min were conducted, and wear was measured by recording changes in a variety of parameters related to weight, geometry and wear debris. Out-of-roundness was found to be the most effective method. This method was further refined by enlarging the out-of-roundness traces on a photocopier, and proved to be reliable and inexpensive. Findings – The study revealed that the most commonly used wear measurement techniques, such as weight loss, roughness changes and change in particle count, were not adequate for measuring small wear quantities in journal bearings. The out-of-roundness method, with some refinements, was found to be one of the most reliable methods for measuring small wear quantities in journal bearings working in the hydrodynamic lubrication regime. By enlarging the out-of-roundness traces and determining the worn area of the bearing cross-section, weight loss in bearings was calculated, which was repeatable and reliable.
Research limitations/implications – This research is basic in nature: a rudimentary solution has been developed for measuring small wear quantities in rotary devices such as journal bearings. The method requires enlarging traces on a photocopier and determining the shape of the worn area on an out-of-roundness trace on a transparency, which is simple but crude. An automated procedure may be needed to determine the weight loss from the out-of-roundness traces directly. The method can be very useful in reducing test duration and measuring wear with higher precision in situations where wear quantities are very small. Practical implications – This research provides a reliable method of measuring wear of circular geometry. The Talyrond equipment used for measuring the change in out-of-roundness due to bearing wear has high potential to be used as a wear measuring device as well. Measurement of weight loss from the traces is an enhanced capability of this equipment, and this research may lead to the development of a modified version of Talyrond-type equipment for wear measurement in circular machine components. Originality/value – Wear measurement in hydrodynamic bearings requires long-duration tests to achieve adequate wear quantities. Out-of-roundness is one of the geometrical parameters that changes as wear progresses in circular components, and it is thus an effective wear measuring parameter that relates to change in geometry. The method of increasing sensitivity by enlarging the out-of-roundness traces is original work, through which the area of the worn cross-section can be determined and weight loss derived, for materials of known density, with higher precision.
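The weight-loss calculation described above follows from elementary geometry: worn cross-sectional area times bearing length times material density. A minimal sketch, with illustrative figures that are not from the paper:

```python
def wear_mass_loss(worn_area_mm2, bearing_length_mm, density_g_cm3):
    """Weight loss implied by a worn cross-sectional area measured on an
    out-of-roundness trace, assuming the worn profile is uniform along
    the bearing length."""
    volume_cm3 = worn_area_mm2 * bearing_length_mm / 1000.0  # mm^3 -> cm^3
    return volume_cm3 * density_g_cm3                        # grams
```

For example, a 0.5 mm² worn area on a 20 mm long bearing in a material of density 8.9 g/cm³ (hypothetical values) implies a loss of 0.089 g.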

Relevance: 30.00%

Abstract:

In this paper, we aim at predicting protein structural classes for low-homology data sets based on predicted secondary structures. We propose a new and simple kernel method, named SSEAKSVM, to predict protein structural classes. The secondary structures of all protein sequences are obtained using the tool PSIPRED, and then a linear kernel based on secondary structure element alignment scores is constructed for training a support vector machine classifier without parameter adjustment. Our method SSEAKSVM was evaluated on two low-homology datasets, 25PDB and 1189, with sequence homology of 25% and 40%, respectively. The jackknife test is used to evaluate our method and compare it with existing methods. The overall accuracies on these two data sets are 86.3% and 84.5%, respectively, which are higher than those obtained by existing methods. In particular, our method achieves higher accuracies (88.1% and 88.5%) for differentiating the α + β class and the α/β class compared to other methods. This suggests that our method is valuable for predicting protein structural classes, particularly for low-homology protein sequences. The source code of the method in this paper can be downloaded at http://math.xtu.edu.cn/myphp/math/research/source/SSEAK_source_code.rar.
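The jackknife test used for evaluation is a leave-one-out procedure: each sequence is classified by a model trained on all the others. A minimal sketch, with a toy nearest-neighbour classifier standing in for the paper's SVM:

```python
def jackknife_accuracy(features, labels, classify):
    """Jackknife (leave-one-out) accuracy: each sample is predicted by a
    classifier trained on all remaining samples."""
    correct = 0
    for i in range(len(features)):
        train_x = features[:i] + features[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        if classify(train_x, train_y, features[i]) == labels[i]:
            correct += 1
    return correct / len(features)

def nearest_neighbour(train_x, train_y, query):
    """Toy 1-NN stand-in for the paper's SVM classifier."""
    dists = [sum((a - b) ** 2 for a, b in zip(x, query)) for x in train_x]
    return train_y[dists.index(min(dists))]
```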

Relevance: 30.00%

Abstract:

In this paper, a refined classic noise prediction method based on the VISSIM and FHWA noise prediction model is formulated to analyze the sound level contributed by traffic on the Nanjing Lukou airport connecting freeway before and after widening. The aim of this research is to (i) assess the traffic noise impact on the Nanjing University of Aeronautics and Astronautics (NUAA) campus before and after freeway widening, (ii) compare the prediction results with field data to test the accuracy of this method, (iii) analyze the relationship between traffic characteristics and sound level. The results indicate that the mean difference between model predictions and field measurements is acceptable. The traffic composition impact study indicates that buses (including mid-sized trucks) and heavy goods vehicles contribute a significant proportion of total noise power despite their low traffic volume. In addition, speed analysis offers an explanation for the minor differences in noise level across time periods. Future work will aim at reducing model error, by focusing on noise barrier analysis using the FEM/BEM method and modifying the vehicle noise emission equation by conducting field experimentation.
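When per-vehicle-class contributions (cars, buses, heavy goods vehicles) are combined into a total traffic level, sound levels in dB must be summed on an energy basis rather than arithmetically — which is why a small number of loud heavy vehicles can dominate the total despite their low volume:

```python
import math

def combined_level(levels_db):
    """Energy (logarithmic) sum of sound levels in dB: component levels
    are converted to relative power, added, and converted back."""
    power = sum(10 ** (l / 10.0) for l in levels_db)
    return 10.0 * math.log10(power)
```

For instance, two equal 60 dB sources combine to about 63 dB, while adding a 60 dB car to an 80 dB truck barely moves the total above 80 dB.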

Relevance: 30.00%

Abstract:

We have derived a versatile gene-based test for genome-wide association studies (GWAS). Our approach, called VEGAS (versatile gene-based association study), is applicable to all GWAS designs, including family-based GWAS, meta-analyses of GWAS on the basis of summary data, and DNA-pooling-based GWAS, where existing approaches based on permutation are not possible, as well as singleton data, where they are. The test incorporates information from a full set of markers (or a defined subset) within a gene and accounts for linkage disequilibrium between markers by using simulations from the multivariate normal distribution. We show that for an association study using singletons, our approach produces results equivalent to those obtained via permutation in a fraction of the computation time. We demonstrate proof-of-principle by using the gene-based test to replicate several genes known to be associated on the basis of results from a family-based GWAS for height in 11,536 individuals and a DNA-pooling-based GWAS for melanoma in approximately 1300 cases and controls. Our method has the potential to identify novel associated genes; provide a basis for selecting SNPs for replication; and be directly used in network (pathway) approaches that require per-gene association test statistics. We have implemented the approach in both an easy-to-use web interface, which only requires the uploading of markers with their association p-values, and a separate downloadable application.
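The simulation idea behind the test can be sketched as follows: the observed statistic is the sum of squared per-SNP z-scores, and its null distribution is obtained by drawing z-vectors from a multivariate normal whose covariance is the LD matrix, so correlated markers are not double-counted. This is a simplified illustration of the VEGAS approach, not the released implementation:

```python
import numpy as np

def gene_based_p(z_scores, ld, n_sims=20000, seed=1):
    """Gene-based p-value: sum of squared per-SNP z-scores, compared
    against simulations from a multivariate normal with the SNPs' LD
    (correlation) matrix as covariance."""
    z = np.asarray(z_scores, float)
    observed = float(np.sum(z ** 2))
    rng = np.random.default_rng(seed)
    sims = rng.multivariate_normal(np.zeros(len(z)), ld, size=n_sims)
    null = np.sum(sims ** 2, axis=1)
    # Add-one correction keeps the empirical p-value strictly positive.
    return (np.sum(null >= observed) + 1) / (n_sims + 1)
```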

Relevance: 30.00%

Abstract:

The impact of erroneous genotypes that have passed standard quality control (QC) can be severe in genome-wide association studies, genotype imputation, and estimation of heritability and prediction of genetic risk based on single nucleotide polymorphisms (SNPs). To detect such genotyping errors, a simple two-locus QC method, based on the difference in the test statistic of association between single SNPs and pairs of SNPs, was developed and applied. The proposed approach could detect many problematic SNPs with statistical significance in real data, even when standard single-SNP QC analyses failed to detect them. Depending on the data set, the number of erroneous SNPs that were not filtered out by standard single-SNP QC but were detected by the proposed approach varied from a few hundred to thousands. Using simulated data, it was shown that the proposed method was powerful and performed better than other tested existing methods. The power of the proposed approach to detect erroneous genotypes was approximately 80% for a 3% error rate per SNP. This novel QC approach is easy to implement and computationally efficient, and can lead to better-quality genotypes for subsequent genotype-phenotype investigations.