884 results for "methods of measurement"
Abstract:
This paper evaluates multiclass support vector machine (SVM) methods for effective use in distance relay coordination. It also describes a strategy of supportive systems that aids the conventional protection philosophy when protection systems have maloperated and/or information is missing, and that provides selective and secure coordination. SVMs have considerable potential as zone classifiers for distance relay coordination. This typically requires a multiclass SVM classifier that can learn the underlying relation between the reach of the different zones and the apparent impedance trajectory during a fault. Several methods have been proposed for multiclass classification, typically combining several binary SVM classifiers. Some authors have instead extended binary SVM classification to a single one-step optimization that considers all classes at once. In this paper, the one-step multiclass, one-against-all, and one-against-one methods are compared with respect to accuracy, number of iterations, number of support vectors, and training and testing time. The performance of these three methods is analyzed on three data sets containing the training and testing patterns of three supportive systems for a region and part of a network, an equivalent 526-bus system of the practical Indian Western grid.
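The multiclass strategies the abstract compares can be sketched with off-the-shelf tools. Below is a minimal illustration using scikit-learn's one-vs-rest and one-vs-one wrappers on synthetic three-class data (stand-ins for the three protection zones); it is not the paper's 526-bus grid data or its one-step formulation.

```python
# Sketch: comparing one-against-all and one-against-one multiclass SVM
# strategies on synthetic 3-class data (hypothetical stand-in for the
# paper's zone-classification patterns).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=4, n_informative=3,
                           n_redundant=0, n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("one-against-all", OneVsRestClassifier(SVC(kernel="rbf"))),
                  ("one-against-one", OneVsOneClassifier(SVC(kernel="rbf")))]:
    clf.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {clf.score(X_te, y_te):.3f}")
```

In practice the two strategies train different numbers of binary classifiers (k versus k(k-1)/2), which is why the paper also compares iteration counts, support-vector counts, and timing.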
Abstract:
A method is described for monitoring the concentration of endogenous receptor-bound gonadotropin in ovarian tissue. This involved development of a radioimmunoassay procedure, whose validity for measuring all of the tissue-bound hormone has been established. The specificity of the measurement method was indicated by the fact that high levels of FSH could be measured only in target tissue such as follicles, while non-target organs showed little FSH. Using this method, the amount of FSH in the non-luteal ovarian tissue of the hamster at different stages of the estrous cycle was quantitated and compared with the serum FSH levels found at these times. No correlation could be found between serum and tissue FSH levels at any time. On the morning of estrus, for example, when the serum level of FSH was high, the ovarian concentration was low, and on the evening of diestrus-2 the ovary exhibited a high concentration of FSH despite the serum FSH concentration being low at this time. The highest concentration of FSH in the ovary during the cycle was found on the evening of proestrus. Although a large amount of this was found in the Graafian follicles, a considerable amount could still be found in the "growing" follicles. Ovarian FSH concentration can be considered a reflection of FSH receptor content, since preventing the development of FSH receptors by blocking initiation of follicular development during the cycle resulted in a decrease in the concentration of FSH in the ovary. The high concentration of FSH in the ovary seen on the evening of diestrus-2 was not influenced either by varying the concentration of estrogen or by neutralization of LH. Neutralization of FSH on diestrus-2, on the other hand, caused a drastic reduction in the ovarian LH concentration on the next day (i.e. at proestrus), suggesting the importance of FSH in the induction of LH receptors.
Abstract:
Two methods of pre-harvest inventory were designed and tested on three cutting sites containing a total of 197 500 m3 of wood. These sites were located in flat-ground boreal forests in northwestern Quebec. Both methods involved scaling of trees harvested to clear the road path one year (or more) prior to harvest of adjacent cut-blocks. The first method (ROAD) divides the total road right-of-way volume by the total road area cleared. The resulting volume per hectare is then multiplied by the total cut-block area scheduled for harvest during the following year to obtain the total estimated cutting volume. The second method (STRATIFIED) also involves scaling of trees cleared from the road. However, in STRATIFIED, log scaling data are stratified by forest stand location. A volume per hectare is calculated for each stretch of road that crosses a single forest stand. This volume per hectare is then multiplied by the remaining area of the same forest stand scheduled for harvest one year later. The sum of all resulting estimated volumes per stand gives the total estimated cutting volume for all cut-blocks adjacent to the studied road. A third method (MNR) was also used to estimate cut volumes of the sites studied. This method represents the existing technique for estimating cutting volume in the province of Quebec. It involves summing the cut volume over all forest stands, where each stand's cut volume is estimated by multiplying its area by the volume per hectare obtained from standard stock tables provided by the government. The resulting total estimated volume per cut-block for all three methods was then compared with the actual measured cut-block volume (MEASURED). This analysis revealed a significant difference between the MEASURED and MNR methods, with the MNR volume estimate being 30% higher than MEASURED.
However, no significant difference from MEASURED was observed for the ROAD and STRATIFIED volume estimates, which were respectively 19% and 5% lower than MEASURED. Thus, the ROAD and STRATIFIED methods are good ways to estimate cut-block volumes after road right-of-way harvest under conditions similar to those examined in this study.
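The two estimators described above are simple area-weighted extrapolations; a minimal sketch with made-up numbers (not the study's data) makes the bookkeeping explicit:

```python
# Sketch of the ROAD and STRATIFIED estimators described above.
# All volumes in m3, areas in ha; the figures are illustrative only.

def road_estimate(row_volume, row_area, cutblock_area):
    """ROAD: one volume/ha from the whole right-of-way,
    applied to the full cut-block area."""
    return (row_volume / row_area) * cutblock_area

def stratified_estimate(stand_data):
    """STRATIFIED: a volume/ha per forest stand crossed by the road,
    each applied to the remaining area of that same stand.
    stand_data: list of (road_volume, road_area, remaining_area)."""
    return sum((v / a) * rem for v, a, rem in stand_data)

print(road_estimate(1200.0, 8.0, 150.0))              # 150 m3/ha over 150 ha
print(stratified_estimate([(500.0, 3.0, 60.0),
                           (700.0, 5.0, 90.0)]))      # per-stand sum
```

STRATIFIED differs from ROAD only in where the volume/ha ratio is computed: per stand rather than pooled, which is why it tracked MEASURED more closely in the study.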
Abstract:
1. Habitat selection is a universal aspect of animal ecology that has important fitness consequences and may drive patterns of spatial organisation in ecological communities. 2. Measurements of habitat selection have mostly been carried out on single species and at the landscape level. Quantitative studies examining microhabitat selection at the community level are scarce, especially in insects. 3. In this study, microhabitat selection in a natural assemblage of cricket species was examined for the first time using resource selection functions (RSF), an approach more commonly applied in studies of macrohabitat selection. 4. The availability and differential use of six microhabitats by 13 species of crickets inhabiting a tropical evergreen forest in southern India were examined. The six available microhabitats were leaf litter-covered ground, tree trunks, dead logs, brambles, understorey and canopy foliage. The area offered by the six microhabitats was estimated using standard methods of forest structure measurement. Of the six microhabitats, the understorey and canopy accounted for approximately 70% of the total available area. 5. The use of different microhabitats by the 13 species was investigated using acoustic sampling to locate calling individuals. Using RSF, it was found that 10 of the 13 cricket species examined showed 100% selection for a specific microhabitat. Of these, two species showed fairly high selection for brambles and dead logs, which were rare microhabitats, highlighting the importance of preserving all components of forest structure.
Abstract:
Objective identification and description of mimicked calls is a primary component of any study on avian vocal mimicry, but few studies have adopted a quantitative approach. We used spectral feature representations commonly used in human speech analysis, in combination with various distance metrics, to distinguish between mimicked and non-mimicked calls of the greater racket-tailed drongo, Dicrurus paradiseus, and cross-validated the results with human assessment of spectral similarity. We found that the automated method and human subjects performed similarly in terms of the overall number of correct matches of mimicked calls to putative model calls. However, the two methods misclassified different subsets of calls, and we achieved a maximum accuracy of 95% only when we combined the results of both methods. This study is the first to use Mel-frequency Cepstral Coefficients and Relative Spectral Amplitude-filtered Linear Predictive Coding coefficients to quantify vocal mimicry. Our findings also suggest that in spite of several advances in automated methods of song analysis, corresponding cross-validation by humans remains essential.
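The general pipeline described above, representing each call as a spectral feature vector and comparing calls with a distance metric, can be sketched briefly. For brevity this uses a plain log-power spectrum rather than the paper's MFCC and RASTA-filtered LPC features (which would normally come from a speech-analysis library), and synthetic tones stand in for drongo calls:

```python
# Sketch: spectral features + a distance metric for call matching.
# Synthetic sinusoids are hypothetical stand-ins for recorded calls.
import numpy as np

def spectral_features(signal, n_fft=256):
    """Log-magnitude spectrum as a crude spectral feature vector."""
    return np.log1p(np.abs(np.fft.rfft(signal, n=n_fft)))

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

sr = 8000
t = np.arange(sr) / sr
mimic = np.sin(2 * np.pi * 440 * t)           # "mimicked call"
model = np.sin(2 * np.pi * 440 * t + 0.1)     # putative model call
other = np.sin(2 * np.pi * 880 * t)           # unrelated call

f_m, f_mod, f_o = map(spectral_features, (mimic, model, other))
d_same = cosine_distance(f_m, f_mod)
d_diff = cosine_distance(f_m, f_o)
print(f"model-call distance {d_same:.4f}, unrelated-call distance {d_diff:.4f}")
```

A mimicked call is matched to the model call with the smallest feature distance; the paper's contribution is doing this with MFCC/RASTA-LPC features and cross-validating the matches against human judges.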
Abstract:
The properties of a crystal depend strongly on the solution concentration distribution near its growth surface, and these concentration distributions are in turn affected by diffusion and convection in the solution. In the present experiment, two optical measurement methods were used to obtain the velocity and concentration fields of a NaClO3 solution. The convection patterns during sodium chlorate (NaClO3) crystal growth were measured by Digital Particle Image Velocimetry (DPIV), yielding the two-dimensional velocity distributions in the solution. The concentration field was obtained with a Mach-Zehnder interferometer equipped with a phase-shift servo system; interference patterns were recorded directly by a computer via a CCD camera. The evolution of the velocity and concentration fields from dissolution to crystallization is visualized clearly, and the structures of the velocity fields are compared with those of the concentration field.
Abstract:
A turbulent boundary-layer flow over a rough wall generates a dipole sound field as the near-field hydrodynamic disturbances in the turbulent boundary-layer scatter into radiated sound at small surface irregularities. In this paper, phased microphone arrays are applied to the measurement and simulation of surface roughness noise. The radiated sound from two rough plates and one smooth plate in an open jet is measured at three streamwise locations, and the beamforming source maps demonstrate the dipole directivity. Higher source strengths can be observed on the rough plates which also enhance the trailing-edge noise. A prediction scheme in previous theoretical work is used to describe the strength of a distribution of incoherent dipoles and to simulate the sound detected by the microphone array. Source maps of measurement and simulation exhibit satisfactory similarities in both source pattern and source strength, which confirms the dipole nature and the predicted magnitude of roughness noise. However, the simulations underestimate the streamwise gradient of the source strengths and overestimate the source strengths at the highest frequency.
Abstract:
Gene microarray technology is highly effective in screening for differential gene expression and has hence become a popular tool in the molecular investigation of cancer. When applied to tumours, molecular characteristics may be correlated with clinical features such as response to chemotherapy. Exploitation of the huge amount of data generated by microarrays is difficult, however, and constitutes a major challenge in the advancement of this methodology. Independent component analysis (ICA), a modern statistical method, allows us to better understand data in such complex and noisy measurement environments. The technique has the potential to significantly increase the quality of the resulting data and improve the biological validity of subsequent analysis. We performed microarray experiments on 31 postmenopausal endometrial biopsies, comprising 11 benign and 20 malignant samples. We compared ICA to the established methods of principal component analysis (PCA), Cyber-T, and SAM. We show that ICA generated patterns that clearly characterized the malignant samples studied, in contrast to PCA. Moreover, ICA improved the biological validity of the genes identified as differentially expressed in endometrial carcinoma, compared to those found by Cyber-T and SAM. In particular, several genes involved in lipid metabolism that are differentially expressed in endometrial carcinoma were only found using this method. This report highlights the potential of ICA in the analysis of microarray data.
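The PCA-versus-ICA comparison described above can be sketched with standard tooling. The snippet below uses scikit-learn's FastICA (one common ICA implementation; the abstract does not name the algorithm used) on random data shaped like the 31-sample expression matrix, purely to show the workflow:

```python
# Sketch: PCA and ICA applied to an expression-like matrix.
# Random log-normal data is a hypothetical stand-in for the
# 31 endometrial biopsies (31 samples x 200 "genes").
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.RandomState(0)
X = np.log2(rng.lognormal(size=(31, 200)) + 1)   # typical log transform

pca = PCA(n_components=5).fit(X)
ica = FastICA(n_components=5, random_state=0, max_iter=1000)
S = ica.fit_transform(X)           # per-sample independent components

print("PCA explained variance:", pca.explained_variance_ratio_.round(3))
print("ICA component matrix shape:", S.shape)
```

PCA components are orthogonal directions of maximal variance, while ICA seeks statistically independent components; the paper's claim is that the latter better isolates the expression patterns that characterize the malignant samples.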
Abstract:
Groupers are important components of commercial and recreational fisheries. Current methods of diver-based grouper census surveys could potentially benefit from development of remotely sensed methods of seabed classification. The goal of the present study was to determine if areas of high grouper abundance have characteristic acoustic signatures. A commercial acoustic seabed mapping system, QTC View Series V, was used to survey an area near Carysfort Reef, Florida Keys. Acoustic data were clustered using QTC IMPACT software, resulting in three main acoustic classes covering 94% of the area surveyed. Diver-based data indicate that one of the acoustic classes corresponded to hard substrate and the other two represented sediment. A new measurement of seabed heterogeneity, designated acoustic variability, was also computed from the acoustic survey data in order to more fully characterize the acoustic response (i.e., the signature) of the seafloor. When compared with diver-based grouper census data, both acoustic classification and acoustic variability were significantly different at sites with and without groupers. Sites with groupers were characterized by hard bottom substrate and high acoustic variability. Thus, the acoustic signature of a site, as measured by acoustic classification or acoustic variability, is a potentially useful tool for stratifying diver sampling effort for grouper census.
Abstract:
The study presented here was carried out to obtain the actual solids flow rate by combining electrical resistance tomography and an electromagnetic flow meter. A new in-situ measurement method, based on combined Electromagnetic Flow Meter (EFM) and Electrical Resistance Tomography (ERT) measurements, is proposed for studying the flow rates of the individual phases in a vertical flow. The study was based on laboratory experiments carried out in a 50 mm vertical flow rig for a number of sand concentrations and mixture velocities. A range of sand slurries with median particle sizes from 212 μm to 355 μm was tested. The solids concentrations by volume were 5% and 15%, with corresponding densities of 1078 kg/m³ and 1238 kg/m³, respectively. The flow velocity was between 1.5 m/s and 3.0 m/s. A total of six experimental tests were conducted. The equivalent liquid model was adopted to validate the in-situ volumetric solids fraction and to calculate the slip velocity. The results show that the ERT technique can be used in conjunction with an electromagnetic flow meter to measure the slurry flow rate in a vertical pipe flow. However, it should be emphasized that the EFM results must be treated with reservation when the flow at the EFM mounting position is non-homogeneous; the flow rate obtained by the EFM should be corrected for the slip velocity and the flow pattern.
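The slurry densities quoted above follow from the usual volume-weighted mixture relation ρ_m = (1 − C)·ρ_w + C·ρ_s. The abstract does not state the solids density; a value around 2560 kg/m³ is inferred here from the quoted figures, so treat this sketch as illustrative bookkeeping only:

```python
# Sketch of the mixture-density relation implied by the quoted values.
# rho_s = 2560 kg/m3 is an assumption inferred from the abstract,
# not a stated property of the sand used.
def mixture_density(c_v, rho_s=2560.0, rho_w=1000.0):
    """Slurry density (kg/m3) for volumetric solids fraction c_v."""
    return (1.0 - c_v) * rho_w + c_v * rho_s

print(mixture_density(0.05))   # 1078.0, matching the quoted 5% density
print(mixture_density(0.15))   # 1234.0, close to the quoted 1238 kg/m3
```

The small mismatch at 15% suggests the study's sand density was slightly higher, or that the quoted densities were measured rather than computed.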
Abstract:
Most of the fish marketed throughout Nigeria are sold in either smoked or dried form, as the technological requirements for other forms of preservation, such as chilling and freezing, cannot be afforded by small-scale fisherfolk. Considerable quantities of fish processed for distant consumer markets are lost at the handling, processing, storage and marketing stages. Significant losses occur through infestation by mites and insects, fungal infestation, and fragmentation during transportation. This paper describes the effect of these losses on fish quality and suggests methods of protecting fish from agents of deterioration.
Abstract:
The study was carried out to assess the nutritional qualities of smoked O. niloticus and to identify the best methods of storage to minimize spoilage and infestation of smoked fish. Results showed that the protein contents of samples A and D decreased while those of B and C increased. The lipid content increased only in A and decreased in B, C and D. The moisture content generally increased over the storage period, and the ash content increased only in C while it decreased in A, B and D. The samples packed in polythene bags suffered about 35% mould infection, and a few were attacked by rodents, with some fouling. Samples packed in jute bags were in good condition but were slightly attacked by insects. All samples packed in cartons and baskets were still in good condition, although those packed in cartons showed some insect attack.
Abstract:
Part I
Particles are a key feature of planetary atmospheres. On Earth they represent the greatest source of uncertainty in the global energy budget. This uncertainty can be addressed by making more measurements, by improving the theoretical analysis of measurements, and by better modeling basic particle nucleation and initial particle growth within an atmosphere. This work focuses on the latter two methods of improvement.
Uncertainty in measurements is largely due to particle charging. Accurate descriptions of particle charging are challenging because one deals with particles in a gas as opposed to a vacuum, so different length scales come into play. Previous studies have considered the effects of the transition between the continuum and kinetic regimes and the effects of two- and three-body interactions within the kinetic regime. These studies, however, rely on questionable assumptions about the charging process, which have resulted in skewed observations and bias in the proposed dynamics of aerosol particles. These assumptions affect both the ions and the particles in the system. Ions are assumed to be point monopoles with a single characteristic speed rather than a speed distribution. Particles are assumed to be perfect conductors carrying up to five elementary charges. The effects of three-body (ion-molecule-particle) interactions are also overestimated. By revising this theory so that the basic physical attributes of both ions and particles, and their interactions, are better represented, we are able to make more accurate predictions of particle charging in both the kinetic and continuum regimes.
The same revised theory that was used above to model ion charging can also be applied to the flux of neutral vapor phase molecules to a particle or initial cluster. Using these results we can model the vapor flux to a neutral or charged particle due to diffusion and electromagnetic interactions. In many classical theories currently applied to these models, the finite size of the molecule and the electromagnetic interaction between the molecule and particle, especially for the neutral particle case, are completely ignored, or, as is often the case for a permanent dipole vapor species, strongly underestimated. Comparing our model to these classical models we determine an “enhancement factor” to characterize how important the addition of these physical parameters and processes is to the understanding of particle nucleation and growth.
Part II
Whispering gallery mode (WGM) optical biosensors are capable of extraordinarily sensitive specific and non-specific detection of species suspended in a gas or fluid. Recent experimental results suggest that these devices may attain single-molecule sensitivity to protein solutions in the form of stepwise shifts in their resonance wavelength, λ_R, but present sensor models predict much smaller steps than were reported. This study examines the physical interaction between a WGM sensor and a molecule adsorbed to its surface, exploring assumptions made in previous efforts to model WGM sensor behavior, and describing computational schemes that model the experiments for which single protein sensitivity was reported. The resulting model is used to simulate sensor performance, within constraints imposed by the limited material property data. On this basis, we conclude that nonlinear optical effects would be needed to attain the reported sensitivity, and that, in the experiments for which extreme sensitivity was reported, a bound protein experiences optical energy fluxes too high for such effects to be ignored.
Abstract:
The energy loss of protons and deuterons in D_2O ice has been measured over the energy range E_p = 18–541 keV. A double-focusing magnetic spectrometer was used to measure the energy of the particles after they had traversed a known thickness of the ice target. One method of measurement determines relative values of the stopping cross section as a function of energy; another measures absolute values. The results are in very good agreement with values calculated from Bethe's semi-empirical formula. Possible sources of error are considered, and the accuracy of the measurements is estimated to be ±4%.
The D(d,p)H^3 cross section has been measured by two methods. For E_D = 200–500 keV the spectrometer was used to obtain the momentum spectrum of the protons and tritons; from the yield and the stopping cross section, the reaction cross section at 90° was obtained.
For E_D = 35–550 keV the proton yield from a thick target was differentiated to obtain the cross section. Both thin- and thick-target methods were used to measure the yield at each of ten angles. The angular distribution is expressed in terms of a Legendre polynomial expansion. The various sources of experimental error are considered in detail, and the probable error of the cross-section measurements is estimated to be ±5%.
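Expressing an angular distribution as a Legendre polynomial expansion, as described above, amounts to a least-squares fit of σ(θ) against P_n(cos θ). A minimal sketch with synthetic yields (the thesis's actual ten-angle data are not reproduced in the abstract):

```python
# Sketch: fitting an angular distribution with a Legendre expansion,
# sigma(theta) = A0*P0 + A1*P1 + A2*P2 in cos(theta).
# The "measured" cross sections here are synthetic, generated from
# assumed coefficients and then recovered by the fit.
import numpy as np
from numpy.polynomial import legendre

angles_deg = np.linspace(0.0, 180.0, 10)      # ten angles, as in the text
x = np.cos(np.radians(angles_deg))            # Legendre argument
true_coeffs = [1.0, 0.0, 0.4]                 # assumed A0, A1, A2
sigma = legendre.legval(x, true_coeffs)       # synthetic "measured" yields

fit = legendre.legfit(x, sigma, deg=2)        # least-squares recovery
print(np.round(fit, 3))
```

With real data the fitted coefficients carry uncertainties propagated from the yield measurements, which is where the quoted ±5% probable error enters.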
Abstract:
In recent collaborative biological sampling exercises organised by the Nottingham Regional Laboratory of the Severn-Trent Water Authority, the effect of hand-net sampling variation on the quality and usefulness of the data obtained has been questioned, especially when these data are transcribed into one or more of the commonly used biological methods of water quality assessment. This study investigates whether this effect is constant at sites with similar topography but differing water quality when the sampling method is standardized and carried out by a single operator. An argument is made for a lowest-common-denominator approach, to give a more consistent result and to obviate the effect of sampling variation on these biological assessment methods.