45 results for Sample algorithms

at University of Queensland eSpace - Australia


Relevance: 30.00%

Abstract:

Most face recognition systems only work well under quite constrained environments. In particular, the illumination conditions, facial expressions and head pose must be tightly controlled for good recognition performance. In 2004, we proposed a new face recognition algorithm, Adaptive Principal Component Analysis (APCA) [4], which performs well against both lighting variation and expression change. But like other eigenface-derived face recognition algorithms, APCA only performs well with frontal face images. The work presented in this paper extends our previous work to also accommodate variations in head pose. Following the approach of Cootes et al., we develop a face model and a rotation model which can be used to interpret facial features and synthesize realistic frontal face images when given a single novel face image. We use a Viola-Jones based face detector to detect the face in real time and thus solve the initialization problem for our Active Appearance Model search. Experiments show that our approach can achieve good recognition rates on face images across a wide range of head poses. Indeed, recognition rates are improved by up to a factor of 5 compared to standard PCA.
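APCA itself is described in the cited paper [4]; as context, the standard eigenface-style PCA baseline it is compared against can be sketched as below. All function names and the nearest-neighbour matcher are illustrative assumptions, not the authors' code.

```python
import numpy as np

def fit_eigenfaces(images, n_components):
    """Fit a plain PCA (eigenface) model to row-vectorized face images.

    `images` is an (n_faces, n_pixels) array. Returns the mean face and
    the top principal axes, obtained via SVD of the centered data.
    """
    mean = images.mean(axis=0)
    _, _, vt = np.linalg.svd(images - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(image, mean, components):
    """Coordinates of one face in the eigenface subspace."""
    return components @ (image - mean)

def nearest_gallery_index(probe_coords, gallery_coords):
    """Recognition by smallest Euclidean distance in eigenface space."""
    return int(np.argmin(np.linalg.norm(gallery_coords - probe_coords, axis=1)))
```

Standard PCA of this kind is the baseline against which the abstract reports its factor-of-5 improvement under pose variation.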

Relevance: 20.00%

Abstract:

Despite many successes of conventional DNA sequencing methods, some DNAs remain difficult or impossible to sequence. Unsequenceable regions occur in the genomes of many biologically important organisms, including the human genome. Such regions range in length from tens to millions of bases, and may contain valuable information such as the sequences of important genes. The authors have recently developed a technique that renders a wide range of problematic DNAs amenable to sequencing. The technique is known as sequence analysis via mutagenesis (SAM). This paper presents a number of algorithms for analysing and interpreting data generated by this technique.

Relevance: 20.00%

Abstract:

The BR algorithm is a novel and efficient method to find all eigenvalues of upper Hessenberg matrices, and has never before been applied to eigenanalysis for power system small-signal stability. This paper analyzes differences between the BR and the QR algorithms, comparing performance in terms of CPU time (based on stopping criteria) and storage requirements. The BR algorithm uses accelerating strategies to improve its performance when computing eigenvalues of narrowly banded, nearly tridiagonal upper Hessenberg matrices. These strategies significantly reduce the computation time at a reasonable level of precision. Compared with the QR algorithm, the BR algorithm requires fewer iteration steps and less storage space without sacrificing precision in solving eigenvalue problems of large-scale power systems. Numerical examples demonstrate the efficiency of the BR algorithm in eigenanalysis of 39-, 68-, 115-, 300-, and 600-bus systems. The results suggest that the BR algorithm is a more efficient choice for large-scale power system small-signal stability eigenanalysis.
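The BR algorithm is not available in NumPy or SciPy; as a hedged illustration of the problem setting only, the sketch below builds the kind of narrowly banded, nearly tridiagonal upper Hessenberg matrix described above and solves it with the standard QR-based LAPACK routine behind `np.linalg.eigvals`. The generator function is an assumption for illustration.

```python
import numpy as np

def banded_hessenberg(n, bandwidth=2, seed=0):
    """Random upper Hessenberg matrix that is nonzero only on the first
    subdiagonal and the first `bandwidth` superdiagonals -- the nearly
    tridiagonal structure the BR algorithm is designed to exploit."""
    rng = np.random.default_rng(seed)
    a = np.zeros((n, n))
    for k in range(-1, bandwidth + 1):  # k = -1 is the subdiagonal
        idx = np.arange(max(0, -k), min(n, n - k))
        a[idx, idx + k] = rng.normal(size=idx.size)
    return a

h = banded_hessenberg(200)
# QR-based reference solution; per the abstract, a BR implementation
# would need fewer iterations and less storage on this structure.
eigs = np.linalg.eigvals(h)
```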

Relevance: 20.00%

Abstract:

A study was conducted to examine the relationships among eating pathology, weight dissatisfaction and dieting, and unwanted sexual experiences in childhood. An unselected community sample of 201 young and 268 middle-aged women were administered questionnaires assessing eating behaviors and attitudes, and past and current sexual abuse. Results showed differential relationships among these factors for the two age cohorts: for young women, past sexual abuse predicted weight dissatisfaction, but not dieting or disordered eating behaviors, whereas for middle-aged women, past abuse was predictive of disordered eating, but not dieting or weight dissatisfaction. Current physical or sexual abuse was also found to be predictive of disordered eating for the young women. These findings underscore the complexity of the relationships among unwanted sexual experiences and eating and weight pathology, and suggest that the timing of sexual abuse, and the age of the woman, are important mediating factors. (C) 1998 Elsevier Science Inc.

Relevance: 20.00%

Abstract:

Algorithms for explicit integration of structural dynamics problems with multiple time steps (subcycling) are investigated. Only one such algorithm, due to Smolinski and Sleith, has proved to be stable in a classical sense. A simplified version of this algorithm that retains its stability is presented. However, as with the original version, it can be shown to sacrifice accuracy to achieve stability. Another algorithm in use is shown to be only statistically stable, in that a probability of stability can be assigned if appropriate time step limits are observed. This probability improves rapidly with the number of degrees of freedom in a finite element model. The stability problems are shown to be a property of the central difference method itself, which is modified to give the subcycling algorithm. A related problem is shown to arise when a constraint equation in time is introduced into a time-continuous space-time finite element model. (C) 1998 Elsevier Science S.A.
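The subcycling schemes discussed are modifications of the central difference method; a minimal single-degree-of-freedom sketch of that base method, showing its classical stability limit dt < 2/omega, is given below. The start-up step and names are assumptions, not the paper's formulation.

```python
def central_difference(omega, dt, steps, u0=1.0, v0=0.0):
    """Explicit central-difference integration of u'' = -omega**2 * u.

    Recurrence: u[n+1] = 2*u[n] - u[n-1] - (omega*dt)**2 * u[n].
    Classically stable only for dt < 2/omega.
    """
    u_prev = u0
    # Taylor-series start-up step from the initial conditions.
    u = u0 + dt * v0 - 0.5 * (dt * omega) ** 2 * u0
    for _ in range(steps):
        u, u_prev = 2.0 * u - u_prev - (omega * dt) ** 2 * u, u
    return u
```

With omega = 2*pi the limit is dt of roughly 0.318: well below it the amplitude stays bounded near 1, while above it the solution grows without bound. The abstract's point is that subcycling inherits, and complicates, exactly this stability property.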

Relevance: 20.00%

Abstract:

Extended gcd calculation has a long history and plays an important role in computational number theory and linear algebra. Recent results have shown that finding optimal multipliers in extended gcd calculations is difficult. We present an algorithm which uses lattice basis reduction to produce small integer multipliers x_1, ..., x_m for the equation s = gcd(s_1, ..., s_m) = x_1 s_1 + ... + x_m s_m, where s_1, ..., s_m are given integers. The method generalises to produce small unimodular transformation matrices for computing the Hermite normal form of an integer matrix.
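The paper's contribution, using lattice basis reduction to make the multipliers small, is not reproduced here. As a baseline, a plain iterative extended Euclid already produces some valid multipliers for the stated equation; function names are illustrative.

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def multi_ext_gcd(nums):
    """Multipliers xs with sum(x * s for x, s in zip(xs, nums)) == gcd(nums).

    Folds pairwise extended Euclid over the list; unlike the paper's
    lattice-reduction method, no attempt is made to keep xs small.
    """
    g, xs = nums[0], [1]
    for n in nums[1:]:
        g, u, v = ext_gcd(g, n)
        xs = [u * x for x in xs] + [v]
    return g, xs
```

The multipliers this produces can grow quickly with m, which is precisely the problem the lattice-reduction approach addresses.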

Relevance: 20.00%

Abstract:

Objectives. To investigate the test-retest stability of a standardized version of Nelson's (1976) Modified Card Sorting Test (MCST) and its relationships with demographic variables in a sample of healthy older adults. Design. A standard card order and administration were devised for the MCST and administered to participants at an initial assessment, and again at a second session conducted a minimum of six months later in order to examine its test-retest stability. Participants were also administered the WAIS-R at initial assessment in order to provide a measure of psychometric intelligence. Methods. Thirty-six (24 female, 12 male) healthy older adults aged 52 to 77 years with mean education 12.42 years (SD = 3.53) completed the MCST on two occasions approximately 7.5 months (SD = 1.61) apart. Stability coefficients and test-retest differences were calculated for the range of scores. The effect of gender on MCST performance was examined. Correlations between MCST scores and age, education and WAIS-R IQs were also determined. Results. Stability coefficients ranged from .26 for the percent perseverative errors measure to .49 for the failure to maintain set measure. Several measures were significantly correlated with age, education and WAIS-R IQs, although no effect of gender on MCST performance was found. Conclusions. None of the stability coefficients reached the level required for clinical decision making. The results indicate that participants' age, education, and intelligence need to be considered when interpreting MCST performance. Normative studies of MCST performance as well as further studies with patients with executive dysfunction are needed.
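The stability coefficients quoted (.26 to .49) are test-retest correlations between the two sessions; a minimal sketch of that computation (not the study's data or scoring) is:

```python
import numpy as np

def stability_coefficient(session1_scores, session2_scores):
    """Test-retest stability as the Pearson correlation between the
    scores obtained at the two assessment sessions."""
    return float(np.corrcoef(session1_scores, session2_scores)[0, 1])
```

Coefficients near 1 would indicate scores stable enough for clinical decision making; the .26 to .49 range reported falls well short of that.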

Relevance: 20.00%

Abstract:

We investigate the X-ray properties of the Parkes sample of flat-spectrum radio sources using data from the ROSAT All-Sky Survey and archival pointed PSPC observations. In total, 163 of the 323 sources are detected. For the remaining 160 sources, 2σ upper limits to the X-ray flux are derived. We present power-law photon indices in the 0.1-2.4 keV energy band for 115 sources, which were determined either with a hardness ratio technique or from direct fits to pointed PSPC data if a sufficient number of photons were available. The average photon index is ⟨Γ⟩ = 1.95 (+0.13, −0.12) for flat-spectrum radio-loud quasars, ⟨Γ⟩ = 1.70 (+0.23, −0.24) for galaxies, and ⟨Γ⟩ = 2.40 (+0.12, −0.31) for BL Lac objects. The soft X-ray photon index is correlated with redshift and with radio spectral index, in the sense that sources at high redshift and/or with flat (or inverted) radio spectra have flatter X-ray spectra on average. The results are in accord with orientation-dependent unification schemes for radio-loud active galactic nuclei. Webster et al. discovered many sources with unusually red optical continua among the quasars of this sample, and interpreted this result in terms of extinction by dust. Although the X-ray spectra in general do not show excess absorption, we find that low-redshift optically red quasars have significantly lower soft X-ray luminosities on average than objects with blue optical continua. The difference disappears for higher redshifts, as is expected for intrinsic absorption by cold gas associated with the dust. In addition, the scatter in log(f_x/f_o) is consistent with the observed optical extinction, contrary to previous claims based on optically or X-ray selected samples.
Although alternative explanations for the red optical continua cannot be excluded with the present X-ray data, we note that the observed X-ray properties are consistent with the idea that dust plays an important role in some of the radio-loud quasars with red optical continua.

Relevance: 20.00%

Abstract:

Multiple sampling is widely used in vadose zone percolation experiments to investigate the extent to which soil structure heterogeneities influence the spatial and temporal distributions of water and solutes. In this note, a simple, robust mathematical model, based on the beta statistical distribution, is proposed as a method of quantifying the magnitude of heterogeneity in such experiments. The model relies on fitting two parameters, alpha and zeta, to the cumulative elution curves generated in multiple-sample percolation experiments, and does not require knowledge of the soil structure. A homogeneous or uniform distribution of a solute and/or soil-water is indicated by alpha = zeta = 1. Using these parameters, a heterogeneity index (HI) is defined as √3 times the ratio of the standard deviation to the mean. Uniform or homogeneous flow of water or solutes is indicated by HI = 1 and heterogeneity by HI > 1; a large value of the index may indicate preferential flow. The heterogeneity index relies only on knowledge of the elution curves generated from multiple-sample percolation experiments and is, therefore, easily calculated. The index may also be used to describe and compare the differences in solute and soil-water percolation from different experiments. The use of this index is discussed for several different leaching experiments. (C) 1999 Elsevier Science B.V. All rights reserved.
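Assuming alpha and zeta are the fitted parameters of a Beta distribution, the index defined above follows directly from the Beta mean and variance; the function below is a sketch of that definition rather than the authors' code.

```python
import math

def heterogeneity_index(alpha, zeta):
    """HI = sqrt(3) * (standard deviation / mean) of Beta(alpha, zeta).

    alpha = zeta = 1 is the uniform case, giving HI = 1 exactly;
    HI > 1 indicates heterogeneity (possibly preferential flow).
    """
    mean = alpha / (alpha + zeta)
    var = alpha * zeta / ((alpha + zeta) ** 2 * (alpha + zeta + 1))
    return math.sqrt(3.0) * math.sqrt(var) / mean
```

The √3 factor is what normalizes the uniform case to HI = 1: for Beta(1, 1) the standard deviation is 1/√12 and the mean is 1/2, so √3 · (1/√12)/(1/2) = 1.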

Relevance: 20.00%

Abstract:

We tested the effects of four data characteristics on the results of reserve selection algorithms. The data characteristics were nestedness of features (land types in this case), rarity of features, size variation of sites (potential reserves) and size of data sets (numbers of sites and features). We manipulated data sets to produce three levels, with replication, of each of these data characteristics while holding the other three characteristics constant. We then used an optimizing algorithm and three heuristic algorithms to select sites to solve several reservation problems. We measured efficiency as the number or total area of selected sites, indicating the relative cost of a reserve system. Higher nestedness increased the efficiency of all algorithms (reduced the total cost of new reserves). Higher rarity reduced the efficiency of all algorithms (increased the total cost of new reserves). More variation in site size increased the efficiency of all algorithms expressed in terms of total area of selected sites. We measured the suboptimality of heuristic algorithms as the percentage increase of their results over optimal (minimum possible) results. Suboptimality is a measure of the reliability of heuristics as indicative costing analyses. Higher rarity reduced the suboptimality of heuristics (increased their reliability) and there is some evidence that more size variation did the same for the total area of selected sites. We discuss the implications of these results for the use of reserve selection algorithms as indicative and real-world planning tools.
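The heuristics evaluated are not specified in the abstract; a typical greedy "richness" heuristic for this site-selection (set-cover-like) problem can be sketched as follows, with the site/feature encoding assumed for illustration.

```python
def greedy_reserve_selection(sites):
    """Greedy richness heuristic: repeatedly add the site contributing
    the most features not yet represented.

    `sites` maps a site id to the set of features (land types) it holds.
    Returns the chosen site ids; efficiency is their count (or area).
    """
    targets = set().union(*sites.values())
    chosen, covered = [], set()
    while covered != targets:
        best = max(sites, key=lambda s: len(sites[s] - covered))
        if not sites[best] - covered:
            break  # remaining features cannot be covered
        chosen.append(best)
        covered |= sites[best]
    return chosen
```

Heuristics like this are fast but can exceed the cost of the optimal (e.g. integer-programming) solution; that gap is the suboptimality the study measures as data characteristics vary.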

Relevance: 20.00%

Abstract:

This is the first paper in a study on the influence of the environment on the crack tip strain field for AISI 4340. A stressing stage for the environmental scanning electron microscope (ESEM) was constructed which was capable of applying loads up to 60 kN to fracture-mechanics samples. Measurement of the crack tip strain field required preparation (by electron lithography or chemical etching) of a system of reference points spaced at ~5 µm intervals on the sample surface, loading the sample inside an electron microscope, image processing procedures to measure the displacement at each reference point, and calculation of the strain field. Two algorithms to calculate strain were evaluated. Possible sources of error were calculation errors due to the algorithm, errors inherent in the image processing procedure, and errors due to the limited precision of the displacement measurements. The contribution of each source of error was estimated. The technique allows measurement of the crack tip strain field over an area of 50 × 40 µm with a strain precision better than ±0.02 at distances larger than 5 µm from the crack tip. (C) 1999 Kluwer Academic Publishers.
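The two strain algorithms evaluated are not given in the abstract; one standard approach, numerical differentiation of the measured displacements on a regular grid of reference points, can be sketched as follows. The grid layout and spacing are assumptions for illustration.

```python
import numpy as np

def strain_field(ux, uy, spacing):
    """Small-strain components from displacement fields sampled on a
    regular grid (arrays indexed [row=y, col=x]; spacing in microns)."""
    dux_dy, dux_dx = np.gradient(ux, spacing)
    duy_dy, duy_dx = np.gradient(uy, spacing)
    exx = dux_dx
    eyy = duy_dy
    exy = 0.5 * (dux_dy + duy_dx)  # tensorial shear strain
    return exx, eyy, exy
```

Each error source listed in the abstract (the algorithm, the image processing, the displacement precision) enters through the measured ux and uy before this differentiation step.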

Relevance: 20.00%

Abstract:

The Fornax Spectroscopic Survey will use the Two-degree Field spectrograph (2dF) of the Anglo-Australian Telescope to obtain spectra for a complete sample of all 14 000 objects with 16.5 ≤ b_j ≤ 19.7 in a 12 square degree area centred on the Fornax Cluster. The aims of this project include the study of dwarf galaxies in the cluster (both known low surface brightness objects and putative normal surface brightness dwarfs) and a comparison sample of background field galaxies. We will also measure quasars and other active galaxies, any previously unrecognised compact galaxies and a large sample of Galactic stars. By selecting all objects (both stars and galaxies) independent of morphology, we cover a much larger range of surface brightness and scale size than previous surveys. In this paper we first describe the design of the survey. Our targets are selected from UK Schmidt Telescope sky survey plates digitised by the Automated Plate Measuring (APM) facility. We then describe the photometric and astrometric calibration of these data and show that the APM astrometry is accurate enough for use with the 2dF. We also describe a general approach to object identification using cross-correlations which allows us to identify and classify both stellar and galaxy spectra. We present results from the first 2dF field. Redshift distributions and velocity structures are shown for all observed objects in the direction of Fornax, including Galactic stars, galaxies in and around the Fornax Cluster, and the background galaxy population. The velocity data for the stars show the contributions from the different Galactic components, plus a small tail to high velocities. We find no galaxies in the foreground to the cluster in our 2dF field. The Fornax Cluster is clearly defined kinematically. The mean velocity from the 26 cluster members having reliable redshifts is 1560 ± 80 km s^-1, with a velocity dispersion of 380 ± 50 km s^-1.
Large-scale structure can be traced behind the cluster to a redshift beyond z = 0.3. Background compact galaxies and low surface brightness galaxies are found to follow the general galaxy distribution.

Relevance: 20.00%

Abstract:

Rates of cell size increase are an important measure of success during the baculovirus infection process. Batch and fed-batch cultures sustain large fluctuations in osmolarity that can affect the measured cell volume if this parameter is not considered during the sizing protocol. Where osmolarity differences between the sizing diluent and the culture broth exist, biased measurements of size are obtained as a result of the cells' osmometric response. Spodoptera frugiperda (Sf9) cells are highly sensitive to volume change when subjected to a change in osmolarity. Use of a modified protocol, with culture supernatants for sample dilution prior to sizing, removed the observed measurement error.

Relevance: 20.00%

Abstract:

The Fornax Cluster Spectroscopic Survey (FCSS) project utilizes the Two-degree Field (2dF) multi-object spectrograph on the Anglo-Australian Telescope (AAT). Its aim is to obtain spectra for a complete sample of all 14 000 objects with 16.5 ≤ b_j ≤ 19.7, irrespective of their morphology, in a 12 deg^2 area centred on the Fornax cluster. A sample of 24 Fornax cluster members has been identified from the first 2dF field (3.1 deg^2 in area) to be completed. This is the first complete sample of cluster objects of known distance with well-defined selection limits. Nineteen of the galaxies (with -15.8 < M_B < -12.7) appear to be conventional dwarf elliptical (dE) or dwarf S0 (dS0) galaxies. The other five objects (with -13.6 < M_B < -11.3) are those galaxies which were described recently by Drinkwater et al. and labelled 'ultracompact dwarfs' (UCDs). A major result is that the conventional dwarfs all have scale sizes alpha ≳ 3 arcsec (≃ 300 pc). This apparent minimum scale size implies an equivalent minimum luminosity for a dwarf of a given surface brightness. This produces a limit on their distribution in the magnitude-surface brightness plane, such that we do not observe dEs with high surface brightnesses but faint absolute magnitudes. Above this observed minimum scale size of 3 arcsec, the dEs and dS0s fill the whole area of the magnitude-surface brightness plane sampled by our selection limits. The observed correlation between magnitude and surface brightness noted by several recent studies of brighter galaxies is not seen in our fainter cluster sample. A comparison of our results with the Fornax Cluster Catalog (FCC) of Ferguson illustrates that attempts to determine cluster membership solely on the basis of observed morphology can produce significant errors. The FCC identified 17 of the 24 FCSS sample (i.e.
71 per cent) as being 'cluster' members, in particular missing all five of the UCDs. The FCC also suffers from significant contamination: within the FCSS's field and selection limits, 23 per cent of those objects described as cluster members by the FCC are shown by the FCSS to be background objects.