803 results for Sample algorithms
Abstract:
Multiple sampling is widely used in vadose zone percolation experiments to investigate the extent to which soil structure heterogeneities influence the spatial and temporal distributions of water and solutes. In this note, a simple, robust mathematical model, based on the beta statistical distribution, is proposed as a method of quantifying the magnitude of heterogeneity in such experiments. The model relies on fitting two parameters, alpha and zeta, to the cumulative elution curves generated in multiple-sample percolation experiments. The model does not require knowledge of the soil structure. A homogeneous or uniform distribution of a solute and/or soil-water is indicated by alpha = zeta = 1. Using these parameters, a heterogeneity index (HI) is defined as root 3 times the ratio of the standard deviation to the mean. Uniform or homogeneous flow of water or solutes is indicated by HI = 1 and heterogeneity is indicated by HI > 1. A large value for this index may indicate preferential flow. The heterogeneity index relies only on knowledge of the elution curves generated from multiple-sample percolation experiments and is, therefore, easily calculated. The index may also be used to describe and compare the differences in solute and soil-water percolation from different experiments. The use of this index is discussed for several different leaching experiments. (C) 1999 Elsevier Science B.V. All rights reserved.
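Since the index depends only on the fitted beta parameters, it can be computed directly from the closed-form mean and variance of a Beta(alpha, zeta) distribution. A minimal sketch in Python (the function name is ours, not the paper's):

```python
import math

def beta_heterogeneity_index(alpha: float, zeta: float) -> float:
    """HI = sqrt(3) * (standard deviation / mean) of a Beta(alpha, zeta)
    distribution fitted to the cumulative elution curves."""
    mean = alpha / (alpha + zeta)
    var = (alpha * zeta) / ((alpha + zeta) ** 2 * (alpha + zeta + 1.0))
    return math.sqrt(3.0) * math.sqrt(var) / mean

# Uniform (homogeneous) flow: alpha = zeta = 1 gives HI = 1.
print(beta_heterogeneity_index(1.0, 1.0))  # → 1.0 (to floating-point precision)
```

Note that any U-shaped beta (alpha = zeta < 1) gives HI > 1, consistent with the index flagging heterogeneous flow.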
Abstract:
We tested the effects of four data characteristics on the results of reserve selection algorithms. The data characteristics were nestedness of features (land types in this case), rarity of features, size variation of sites (potential reserves) and size of data sets (numbers of sites and features). We manipulated data sets to produce three levels, with replication, of each of these data characteristics while holding the other three characteristics constant. We then used an optimizing algorithm and three heuristic algorithms to select sites to solve several reservation problems. We measured efficiency as the number or total area of selected sites, indicating the relative cost of a reserve system. Higher nestedness increased the efficiency of all algorithms (reduced the total cost of new reserves). Higher rarity reduced the efficiency of all algorithms (increased the total cost of new reserves). More variation in site size increased the efficiency of all algorithms when efficiency was expressed in terms of the total area of selected sites. We measured the suboptimality of heuristic algorithms as the percentage increase of their results over optimal (minimum possible) results. Suboptimality is a measure of the reliability of heuristics as indicative costing analyses. Higher rarity reduced the suboptimality of heuristics (increased their reliability) and there is some evidence that more size variation did the same for the total area of selected sites. We discuss the implications of these results for the use of reserve selection algorithms as indicative and real-world planning tools.
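As a rough illustration of how such heuristics work, here is a sketch of a simple greedy ("most new features first") selection rule; the site names, land types, and tie-breaking behaviour are hypothetical, and real reserve selection tools are considerably more elaborate:

```python
# Greedy reserve selection sketch: repeatedly pick the site that covers
# the largest number of still-unrepresented features (land types).
def greedy_select(sites: dict[str, set[str]], targets: set[str]) -> list[str]:
    chosen, uncovered = [], set(targets)
    while uncovered:
        best = max(sites, key=lambda s: len(sites[s] & uncovered))
        if not sites[best] & uncovered:
            break  # remaining features occur in no available site
        chosen.append(best)
        uncovered -= sites[best]
    return chosen

sites = {"A": {"wetland", "heath"}, "B": {"heath"}, "C": {"forest", "wetland"}}
print(greedy_select(sites, {"wetland", "heath", "forest"}))  # → ['A', 'C']
```

The abstract's suboptimality measure compares the cost of such a greedy solution against the optimal (minimum possible) cost from an exact algorithm.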
Abstract:
This is the first paper in a study on the influence of the environment on the crack tip strain field for AISI 4340. A stressing stage for the environmental scanning electron microscope (ESEM) was constructed, capable of applying loads up to 60 kN to fracture-mechanics samples. Measurement of the crack tip strain field required preparation (by electron lithography or chemical etching) of a system of reference points spaced at similar to 5 mu m intervals on the sample surface, loading the sample inside an electron microscope, image-processing procedures to measure the displacement at each reference point, and calculation of the strain field. Two algorithms to calculate strain were evaluated. Possible sources of error were calculation errors due to the algorithm, errors inherent in the image-processing procedure, and errors due to the limited precision of the displacement measurements. The contribution of each source of error was estimated. The technique allows measurement of the crack tip strain field over an area of 50 x 40 mu m with a strain precision better than +/- 0.02 at distances larger than 5 mu m from the crack tip. (C) 1999 Kluwer Academic Publishers.
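To illustrate the final step, here is a minimal sketch of estimating one strain component from a grid of measured displacements by central differences; the displacement values and grid spacing are hypothetical, and the paper's two evaluated algorithms are not reproduced here:

```python
import numpy as np

# Hypothetical x-displacements (micrometres) measured at reference points
# on a regular grid with ~5 micrometre spacing.
spacing_um = 5.0
u = np.array([[0.00, 0.02, 0.05],
              [0.01, 0.03, 0.06],
              [0.00, 0.02, 0.04]])

# Strain component exx = du/dx by central differences along x (axis 1);
# one-sided differences are used automatically at the grid edges.
exx = np.gradient(u, spacing_um, axis=1)
```

Errors in the measured displacements propagate directly into such a difference scheme, which is why the paper quantifies the precision of the displacement measurement separately from the strain-calculation algorithm.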
Abstract:
The Fornax Spectroscopic Survey will use the Two-degree Field (2dF) spectrograph of the Anglo-Australian Telescope to obtain spectra for a complete sample of all 14 000 objects with 16.5 less than or equal to b(j) less than or equal to 19.7 in a 12 square degree area centred on the Fornax Cluster. The aims of this project include the study of dwarf galaxies in the cluster (both known low surface brightness objects and putative normal surface brightness dwarfs) and a comparison sample of background field galaxies. We will also measure quasars and other active galaxies, any previously unrecognised compact galaxies and a large sample of Galactic stars. By selecting all objects, both stars and galaxies, independent of morphology, we cover a much larger range of surface brightness and scale size than previous surveys. In this paper we first describe the design of the survey. Our targets are selected from UK Schmidt Telescope sky survey plates digitised by the Automated Plate Measuring (APM) facility. We then describe the photometric and astrometric calibration of these data and show that the APM astrometry is accurate enough for use with the 2dF. We also describe a general approach to object identification using cross-correlations which allows us to identify and classify both stellar and galaxy spectra. We present results from the first 2dF field. Redshift distributions and velocity structures are shown for all observed objects in the direction of Fornax, including Galactic stars, galaxies in and around the Fornax Cluster, and the background galaxy population. The velocity data for the stars show the contributions from the different Galactic components, plus a small tail to high velocities. We find no galaxies in the foreground to the cluster in our 2dF field. The Fornax Cluster is clearly defined kinematically. The mean velocity from the 26 cluster members having reliable redshifts is 1560 +/- 80 km s(-1). They show a velocity dispersion of 380 +/- 50 km s(-1). Large-scale structure can be traced behind the cluster to a redshift beyond z = 0.3. Background compact galaxies and low surface brightness galaxies are found to follow the general galaxy distribution.
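The cross-correlation approach to identifying and classifying spectra can be sketched as follows, assuming (as is common, though not detailed here) that spectra are rebinned onto a common log-wavelength grid so that a redshift becomes a simple shift against a template; all names and values are illustrative:

```python
import numpy as np

def xcorr_shift(spectrum: np.ndarray, template: np.ndarray) -> int:
    """Return the log-wavelength bin shift that maximizes the
    cross-correlation of a spectrum against a template."""
    s = spectrum - spectrum.mean()
    t = template - template.mean()
    cc = np.correlate(s, t, mode="full")
    return int(np.argmax(cc)) - (len(t) - 1)

# Synthetic demonstration: a circular shift of 7 bins stands in for a redshift.
rng = np.random.default_rng(0)
template = rng.normal(size=256)
observed = np.roll(template, 7)
print(xcorr_shift(observed, template))  # → 7
```

In practice the best-matching template (star, galaxy, quasar) also classifies the object, and the peak height gives a confidence measure.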
Abstract:
Rates of cell size increase are an important measure of success during the baculovirus infection process. Batch and fed-batch cultures sustain large fluctuations in osmolarity that can affect the measured cell volume if this parameter is not considered during the sizing protocol. Where osmolarity differences between the sizing diluent and the culture broth exist, biased measurements of size are obtained as a result of the cells' osmometric response. Spodoptera frugiperda (Sf9) cells are highly sensitive to volume change when subjected to a change in osmolarity. Use of a modified protocol, in which culture supernatant is used to dilute samples prior to sizing, removed the observed measurement error.
Abstract:
The Fornax Cluster Spectroscopic Survey (FCSS) project utilizes the Two-degree Field (2dF) multi-object spectrograph on the Anglo-Australian Telescope (AAT). Its aim is to obtain spectra for a complete sample of all 14 000 objects with 16.5 less than or equal to b(j) less than or equal to 19.7, irrespective of their morphology, in a 12 deg(2) area centred on the Fornax cluster. A sample of 24 Fornax cluster members has been identified from the first 2dF field (3.1 deg(2) in area) to be completed. This is the first complete sample of cluster objects of known distance with well-defined selection limits. Nineteen of the galaxies (with -15.8 < M-B < -12.7) appear to be conventional dwarf elliptical (dE) or dwarf S0 (dS0) galaxies. The other five objects (with -13.6 < M-B < -11.3) are those galaxies which were described recently by Drinkwater et al. and labelled 'ultracompact dwarfs' (UCDs). A major result is that the conventional dwarfs all have scale sizes alpha greater than or similar to 3 arcsec (similar or equal to 300 pc). This apparent minimum scale size implies an equivalent minimum luminosity for a dwarf of a given surface brightness. This produces a limit on their distribution in the magnitude-surface brightness plane, such that we do not observe dEs with high surface brightnesses but faint absolute magnitudes. Above this observed minimum scale size of 3 arcsec, the dEs and dS0s fill the whole area of the magnitude-surface brightness plane sampled by our selection limits. The observed correlation between magnitude and surface brightness noted by several recent studies of brighter galaxies is not seen with our fainter cluster sample. A comparison of our results with the Fornax Cluster Catalog (FCC) of Ferguson illustrates that attempts to determine cluster membership solely on the basis of observed morphology can produce significant errors. The FCC identified 17 of the 24 FCSS sample (i.e. 71 per cent) as cluster members, in particular missing all five of the UCDs. The FCC also suffers from significant contamination: within the FCSS's field and selection limits, 23 per cent of those objects described as cluster members by the FCC are shown by the FCSS to be background objects.
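The link between a minimum scale size and a minimum luminosity can be illustrated under the common assumption of an exponential surface-brightness profile, for which the total apparent magnitude is m = mu0 - 2.5 log10(2 pi alpha^2) with central surface brightness mu0 (mag/arcsec^2) and scale size alpha (arcsec); the numbers below are hypothetical, not values from the survey:

```python
import math

def total_magnitude(mu0: float, alpha_arcsec: float) -> float:
    """Total apparent magnitude of an exponential profile with central
    surface brightness mu0 (mag/arcsec^2) and scale size alpha (arcsec)."""
    return mu0 - 2.5 * math.log10(2.0 * math.pi * alpha_arcsec ** 2)

# Hypothetical dwarf at the observed minimum scale size of 3 arcsec:
print(round(total_magnitude(23.0, 3.0), 2))  # → 18.62
```

At fixed mu0 the magnitude grows fainter as alpha shrinks, so a floor on alpha imposes a ceiling on how faint a galaxy of that surface brightness can be, which is the limit seen in the magnitude-surface brightness plane.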
Abstract:
In this paper, a genetic algorithm (GA) is applied to the optimum design of reinforced concrete liquid-retaining structures, which involves three discrete design variables: slab thickness, reinforcement diameter and reinforcement spacing. A GA, being a search technique based on the mechanics of natural genetics, couples a Darwinian survival-of-the-fittest principle with a random yet structured information exchange amongst a population of artificial chromosomes. As a first step, a penalty-based strategy is employed to transform the constrained design problem into an unconstrained problem, which is appropriate for GA application. A numerical example is then used to demonstrate the strength and capability of the GA in this problem domain. It is shown that near-optimal solutions are obtained at an extremely fast convergence rate after exploration of only a minute portion of the search space. The method can be extended to even more complex optimization problems in other domains.
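The penalty-based strategy can be sketched as follows; the toy objective, constraint, and crude GA loop below are illustrative, not the paper's formulation:

```python
import random

random.seed(42)  # deterministic toy run

def penalized_cost(cost: float, violations: list[float], penalty: float = 1e3) -> float:
    """Unconstrained fitness: design cost plus a penalty proportional to
    the amount of each constraint violation (violation > 0 means violated)."""
    return cost + penalty * sum(max(0.0, v) for v in violations)

# Toy problem: minimize x^2 subject to x >= 2 (violation = 2 - x).
def fitness(x: float) -> float:
    return penalized_cost(x * x, [2.0 - x])

# Crude steady-state GA: binary tournament selection plus Gaussian mutation.
pop = [random.uniform(0.0, 10.0) for _ in range(50)]
for _ in range(100):
    a, b = random.sample(pop, 2)
    child = min(a, b, key=fitness) + random.gauss(0.0, 0.1)
    pop.append(child)
    pop.remove(max(pop, key=fitness))  # discard the worst individual
best = min(pop, key=fitness)
```

Because infeasible designs are merely penalized rather than discarded, the GA can cross infeasible regions of the search space on its way to near-optimal feasible solutions.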
Abstract:
Aims: To estimate dementia prevalence and describe the etiology of dementia in a community sample from the city of Sao Paulo, Brazil. Methods: A sample of subjects older than 60 years was screened for dementia in the first phase. During the second phase, the diagnostic workup included a structured interview, physical and neurological examination, laboratory exams, a brain scan, and diagnosis by DSM-IV criteria. Results: Mean age was 71.5 years (n = 1,563), 68.7% were female, and 58.3% had up to 4 years of schooling. Dementia was diagnosed in 107 subjects, an observed prevalence of 6.8%. The estimate of dementia prevalence was 12.9% after accounting for design effect, nonresponse during the community phase, and positive and negative predictive values. Alzheimer's disease was the most frequent cause of dementia (59.8%), followed by vascular dementia (15.9%). Older age and illiteracy were significantly associated with dementia. Conclusions: The estimated dementia prevalence was higher than previously reported in Brazil, with Alzheimer's disease and vascular dementia being the most frequent causes of dementia. Dementia prevalence in Brazil and in other Latin American countries should be addressed by additional studies to confirm these higher dementia rates, which might have a sizable impact on countries' health services. Copyright (C) 2008 S. Karger AG, Basel
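One common way to correct an observed screening rate using positive and negative predictive values is shown below; it is illustrative only, with hypothetical numbers, and is not necessarily the study's exact adjustment (which also incorporated design effect and nonresponse):

```python
def adjusted_prevalence(p_screen_pos: float, ppv: float, npv: float) -> float:
    """Among screen-positives, a fraction PPV truly have the condition;
    among screen-negatives, a fraction (1 - NPV) were missed."""
    return p_screen_pos * ppv + (1.0 - p_screen_pos) * (1.0 - npv)

# Hypothetical example: 15% screened positive, PPV = 0.60, NPV = 0.96.
print(round(adjusted_prevalence(0.15, 0.60, 0.96), 3))  # → 0.124
```

The correction matters because two-phase designs only confirm diagnoses in a subset, so false negatives from the screening phase would otherwise be lost from the prevalence estimate.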
Abstract:
This paper proposes the use of the q-Gaussian mutation with self-adaptation of the shape of the mutation distribution in evolutionary algorithms. The shape of the q-Gaussian mutation distribution is controlled by a real parameter q. In the proposed method, the real parameter q of the q-Gaussian mutation is encoded in the chromosome of individuals and hence is allowed to evolve during the evolutionary process. In order to test the new mutation operator, evolution strategy and evolutionary programming algorithms with self-adapted q-Gaussian mutation generated from anisotropic and isotropic distributions are presented. The theoretical analysis of the q-Gaussian mutation is also provided. In the experimental study, the q-Gaussian mutation is compared to Gaussian and Cauchy mutations in the optimization of a set of test functions. Experimental results show the efficiency of the proposed method of self-adapting the mutation distribution in evolutionary algorithms.
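One published route to sampling a q-Gaussian deviate is a generalized Box-Mueller transform, which reduces to the ordinary Gaussian case at q = 1. The sketch below combines that construction with a simple self-adaptation step for q; the clamping bounds, step size, and chromosome encoding are our assumptions, not the paper's exact operator:

```python
import math
import random

def q_log(x: float, q: float) -> float:
    """q-logarithm: ln_q(x) = (x**(1-q) - 1) / (1 - q), with ln_1 = ln."""
    return math.log(x) if abs(q - 1.0) < 1e-12 else (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_gaussian(q: float) -> float:
    """Generalized Box-Mueller draw from a q-Gaussian (q < 3)."""
    qp = (1.0 + q) / (3.0 - q)  # transformed parameter used by the method
    u1, u2 = random.random(), random.random()
    return math.sqrt(-2.0 * q_log(u1, qp)) * math.cos(2.0 * math.pi * u2)

def mutate(x: list[float], q: float, sigma: float = 0.1) -> tuple[list[float], float]:
    """Self-adaptation sketch: q is carried in the chromosome, perturbed,
    then used to draw the q-Gaussian mutation of the solution variables."""
    q_new = min(2.9, max(-5.0, q + 0.05 * random.gauss(0.0, 1.0)))
    return [xi + sigma * q_gaussian(q_new) for xi in x], q_new
```

Larger q gives heavier-tailed mutations (Cauchy-like exploration), smaller q more compact ones, which is what evolving q per individual is meant to balance.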
Abstract:
We present a new sample of Parkes half-jansky flat-spectrum radio sources, having made a particular effort to find any previously unidentified sources. The sample contains 323 sources selected according to a flux limit of 0.5 Jy at 2.7 GHz, a spectral index measured between 2.7 and 5.0 GHz of alpha(2.7/5.0) > -0.5, where S(nu) proportional to nu(alpha), Galactic latitude |b| > 20 degrees and -45 degrees < declination (B1950) < +10 degrees. The sample was selected from a region 3.90 steradians in area. We have obtained accurate radio positions for all the unresolved sources in this sample, and combined these with accurate optical positions from digitized photographic sky survey data to check all the optical identifications. We report new identifications based on R- and Kn-band imaging and new spectroscopic measurements of many of the sources. We present a catalogue of the 323 sources, of which 321 now have identified optical counterparts and 277 have measured spectral redshifts.
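The stated selection criteria translate directly into code; the catalogue values below are hypothetical:

```python
import math

def spectral_index(s27: float, s50: float) -> float:
    """alpha in S(nu) proportional to nu**alpha, from flux densities (Jy)
    at 2.7 and 5.0 GHz."""
    return math.log(s50 / s27) / math.log(5.0 / 2.7)

def selected(s27: float, s50: float, b_deg: float, dec_deg: float) -> bool:
    """Apply the sample's cuts: S(2.7 GHz) >= 0.5 Jy, alpha > -0.5,
    |b| > 20 deg, -45 deg < declination (B1950) < +10 deg."""
    return (s27 >= 0.5
            and spectral_index(s27, s50) > -0.5
            and abs(b_deg) > 20.0
            and -45.0 < dec_deg < 10.0)

print(selected(0.6, 0.55, 35.0, -20.0))  # flat-spectrum source → True
```

The spectral-index cut is what isolates flat-spectrum (typically compact, core-dominated) sources from the steep-spectrum population.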
Abstract:
A robust semi-implicit central partial difference algorithm for the numerical solution of coupled stochastic parabolic partial differential equations (PDEs) is described. This can be used for calculating correlation functions of systems of interacting stochastic fields. Such field equations can arise in the description of Hamiltonian and open systems in the physics of nonlinear processes, and may include multiplicative noise sources. The algorithm can be used for studying the properties of nonlinear quantum or classical field theories. The general approach is outlined and applied to a specific example, namely the quantum statistical fluctuations of ultra-short optical pulses in chi((2)) parametric waveguides. This example uses a non-diagonal coherent state representation, and correctly predicts the sub-shot-noise spectral fluctuations observed in homodyne detection measurements. It is expected that the methods used will be applicable to higher-order correlation functions and other physical problems as well. A stochastic differencing technique for reducing sampling errors is also introduced. This involves solving nonlinear stochastic parabolic PDEs in combination with a reference process, which uses the Wigner representation in the example presented here. A computer implementation on MIMD parallel architectures is discussed. (C) 1997 Academic Press.
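The flavour of a semi-implicit central-difference scheme for a stochastic parabolic PDE can be conveyed with a toy stochastic heat equation with multiplicative noise, du = D u_xx dt + g u dW; this is not the paper's chi((2)) waveguide model, and the paper's semi-implicit iteration differs in detail:

```python
import numpy as np

def semi_implicit_step(u, dt, dx, D, g, rng):
    """One time step: treat the stiff diffusion term implicitly via a
    central second difference (periodic boundaries), the noise explicitly."""
    n = len(u)
    # Central-difference Laplacian as a dense matrix (fine for a small sketch).
    lap = (np.roll(np.eye(n), 1, axis=1) - 2 * np.eye(n)
           + np.roll(np.eye(n), -1, axis=1)) / dx**2
    noise = g * u * rng.normal(0.0, np.sqrt(dt), size=n)  # g * u * dW increment
    # Solve (I - dt*D*lap) u_new = u + noise.
    return np.linalg.solve(np.eye(n) - dt * D * lap, u + noise)

rng = np.random.default_rng(42)
u = np.ones(32)
for _ in range(100):
    u = semi_implicit_step(u, dt=0.01, dx=0.1, D=0.5, g=0.2, rng=rng)
```

Treating the diffusion implicitly keeps the scheme stable for time steps far larger than the explicit stability limit, which is the usual motivation for semi-implicit methods on stiff parabolic problems.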