852 results for robust estimator


Relevance:

10.00%

Publisher:

Abstract:

Experimental mechanical sieving methods are applied to samples of shellfish remains from three sites in southeast Queensland (Seven Mile Creek Mound, Sandstone Point and One-Tree) to test the efficacy of various recovery and quantification procedures commonly applied to shellfish assemblages in Australia. There has been considerable debate regarding the most appropriate sieve sizes and quantification methods for the recovery of vertebrate faunal remains. Few studies, however, have addressed the impact of recovery and quantification methods on the interpretation of invertebrates, specifically shellfish remains. In this study, five shellfish taxa common in eastern Australian midden assemblages, four bivalves (Anadara trapezia, Trichomya hirsutus, Saccostrea glomerata, Donax deltoides) and one gastropod (Pyrazus ebeninus), are sieved through 10 mm, 6.3 mm and 3.15 mm mesh. Results are quantified using MNI, NISP and weight. Analyses indicate that different structural properties and pre- and post-depositional factors affect recovery rates. Fragile taxa (T. hirsutus) or those with foliated structure (S. glomerata) tend to be overrepresented by NISP measures in smaller sieve fractions, while more robust taxa (A. trapezia and P. ebeninus) tend to be overrepresented by weight measures. Results demonstrate that, for all quantification methods tested, a 3 mm sieve should be used on all sites to allow for regional comparability and to effectively collect all available information about the shellfish remains.


The dimensionless spray flux Ψa characterises the three most important variables in liquid dispersion: flowrate, drop size and powder flux through the spray zone. In this paper, the Poisson distribution was used to generate analytical solutions for the proportion of nuclei formed from single drops (fsingle) and the fraction of the powder surface covered by drops (fcovered) as a function of Ψa. Monte-Carlo simulations were performed to simulate the spray zone and investigate how Ψa, fsingle and fcovered are related. The Monte-Carlo data matched the analytical solutions for fcovered and fsingle as a function of Ψa excellently. At low Ψa, the proportion of the surface covered by drops (fcovered) equals Ψa. As Ψa increases, drop overlap becomes more dominant and the powder surface coverage levels off. The proportion of nuclei formed from single drops (fsingle) falls exponentially with increasing Ψa. In the ranges covered, these results were independent of drop size, number of drops, drop size distribution (mono-sized, bimodal and trimodal) and the uniformity of the spray. Experimental nuclei size distributions as a function of spray flux were fitted to the analytical solution for fsingle by defining a cut size for single-drop nuclei. The fitted cut sizes followed the spray drop sizes, suggesting that the method is robust and that the cut size does mark the transition between single-drop and agglomerate nuclei. This demonstrates that the nuclei distribution is determined by the dimensionless spray flux, and that the fraction of drop-controlled nuclei can be calculated analytically in advance.
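The relationship between Ψa, fcovered and fsingle can be reproduced with a small Monte-Carlo sketch. The closed forms assumed below, fcovered = 1 − exp(−Ψa) and fsingle = exp(−4Ψa) for random non-interacting drops (so fcovered ≈ Ψa at low flux), are the standard Poisson-based expressions; the paper's exact solutions may differ in detail, and all drop counts and probe counts here are illustrative.

```python
import numpy as np

def spray_zone_mc(psi_a, n_drops=1000, r=1.0, n_probe=3000, seed=0):
    """Scatter drop centres on a periodic square sized so that the
    dimensionless spray flux (projected drop area / powder area) is psi_a."""
    rng = np.random.default_rng(seed)
    side = np.sqrt(n_drops * np.pi * r**2 / psi_a)
    c = rng.uniform(0, side, (n_drops, 2))

    def min_image(d):                       # periodic minimum-image offsets
        return d - side * np.round(d / side)

    # f_single: drops whose footprint overlaps no other drop (centre gap > 2r)
    diff = min_image(c[:, None, :] - c[None, :, :])
    dist = np.hypot(diff[..., 0], diff[..., 1])
    np.fill_diagonal(dist, np.inf)
    f_single = np.mean(dist.min(axis=1) > 2 * r)

    # f_covered: random probe points that land inside at least one drop
    p = rng.uniform(0, side, (n_probe, 2))
    pd = min_image(p[:, None, :] - c[None, :, :])
    f_covered = np.mean((np.hypot(pd[..., 0], pd[..., 1]) < r).any(axis=1))
    return f_single, f_covered
```

For Ψa = 0.2 this reproduces exp(−0.8) ≈ 0.45 and 1 − exp(−0.2) ≈ 0.18 to within sampling noise, independent of the drop radius chosen.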


We present the first dynamical analysis of a galaxy cluster to include a large fraction of dwarf galaxies. Our sample of 108 Fornax Cluster members, measured with the UK Schmidt Telescope FLAIR-II spectrograph, contains 55 dwarf galaxies (15.5 < b_J < 18.0, or -16 < M_B < -13.5). H-alpha emission shows that a substantial fraction of the dwarfs are star forming, twice the fraction implied by morphological classifications. The total sample has a mean velocity of 1493 +/- 36 km s^-1 and a velocity dispersion of 374 +/- 26 km s^-1. The dwarf galaxies form a distinct population: their velocity dispersion (429 +/- 41 km s^-1) is larger than that of the giants at the 98% confidence level. This suggests that the dwarf population is dominated by infalling objects, whereas the giants are virialized. The Fornax system has two components: the main Fornax Cluster, centered on NGC 1399 with cz = 1478 km s^-1 and sigma(cz) = 370 km s^-1, and a subcluster centered 3 degrees to the southwest, including NGC 1316, with cz = 1583 km s^-1 and sigma(cz) = 377 km s^-1. This partition is preferred over a single cluster at the 99% confidence level. The subcluster, a site of intense star formation, is bound to Fornax and probably infalling toward the cluster core for the first time. We discuss the implications of this substructure for distance estimates of the Fornax Cluster. We determine the cluster mass profile using the method of Diaferio, which does not assume a virialized sample. The mass within a projected radius of 1.4 Mpc is (7 +/- 2) x 10^13 solar masses, and the mass-to-light ratio is 300 +/- 100 in solar units. The mass is consistent with values derived from the projected mass virial estimator and with X-ray measurements at smaller radii.


Quantum computers promise to greatly increase the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, however, it has suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single-photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
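The photon interference that makes this scheme attractive can be illustrated numerically. The sketch below (a truncated-Fock-space simulation, not part of the paper) sends one photon into each port of a 50:50 beam splitter and recovers the Hong-Ou-Mandel effect: the two photons bunch, and the coincidence probability vanishes.

```python
import numpy as np

d = 3                                      # Fock truncation: 0, 1, 2 photons per mode
a = np.diag(np.sqrt(np.arange(1, d)), 1)   # single-mode annihilation operator
I = np.eye(d)
a1, a2 = np.kron(a, I), np.kron(I, a)

# Beam-splitter generator H = a1^dag a2 - a1 a2^dag: anti-Hermitian and
# photon-number conserving, so the d=3 truncation is exact for 2-photon inputs.
H = a1.conj().T @ a2 - a1 @ a2.conj().T
theta = np.pi / 4                          # 50:50 splitting ratio
w, v = np.linalg.eigh(1j * H)              # iH is Hermitian
U = v @ np.diag(np.exp(-1j * theta * w)) @ v.conj().T

psi_in = np.zeros(d * d, dtype=complex)
psi_in[1 * d + 1] = 1.0                    # |1,1>: one photon in each input mode
psi_out = U @ psi_in
p_coincidence = abs(psi_out[1 * d + 1]) ** 2   # both output detectors fire
```

The coincidence probability comes out at zero to machine precision, with all the probability in the bunched |2,0> and |0,2> outcomes.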


A major challenge in successfully implementing transit-oriented development (TOD) is having a robust process that ensures effective appraisal, initiation and delivery of multi-stakeholder TOD projects. A step-by-step project development process can assist in the methodical design, evaluation and initiation of TOD projects. Successful TOD requires attention to transit, mixed-use development and public space. Brisbane, Australia provides a case study where recent planning policies and infrastructure documents have laid a foundation for TOD, but where barriers lie in precinct-level planning and project implementation. In this context, and perhaps in others, the research effort needs to shift toward identification of appropriate project processes and strategies. This paper presents the outcomes of research conducted to date. Drawing on the mainstream approach to project development and financial evaluation for property projects, key steps for the successful delivery of TOD projects have been identified: establishing the framework; location selection; precinct context review; preliminary precinct design; an initial financial viability study; the decision stage; establishment of the project structure; land acquisition; development application; and project delivery. The appropriateness of this mainstream development and appraisal process will be tested through stakeholder research, and the proposed process will then be refined for adoption in TOD projects. It is suggested that the criteria for successful TOD should be broadened beyond financial concerns in order to secure public sector support for project initiation.


Modeling volcanic phenomena is complicated by free surfaces that often support large rheological gradients. Analytical solutions and analogue models explain fundamental characteristics of lava flows, but more sophisticated models, incorporating improved physics and rheology, are needed to capture realistic events. To advance our understanding of the flow dynamics of highly viscous lava in Peléean lava dome formation, axisymmetric Finite Element Method (FEM) models of generic endogenous dome growth have been developed. We use a novel technique, the level-set method, which tracks a moving interface while leaving the mesh unaltered. The model equations are formulated in an Eulerian framework. In this paper we test the quality of this technique in our numerical scheme against existing analytical and experimental models of lava dome growth which assume a constant Newtonian viscosity. We then compare our model, using an effective viscosity, against analytical solutions for real lava domes extruded on Soufrière, St. Vincent, W.I. in 1979 and Mount St. Helens, USA in October 1980. The level-set method is found to be computationally light and robust enough to model the free surface of a growing lava dome. Moreover, modeling the extruded lava with a constant pressure head naturally produces a drop in extrusion rate with increasing dome height, which explains lava dome growth observables better than a fixed extrusion rate. From the modeling point of view, the level-set method will ultimately provide an opportunity to capture more of the physics while benefiting from the numerical robustness of regular grids.
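The core idea of the level-set method, evolving an implicit interface on a fixed Eulerian grid, can be shown in a few lines. The sketch below is illustrative only: the uniform velocity field, grid and first-order upwind scheme are stand-ins for the paper's flow solver, but the key property carries over - the zero contour of phi moves while the mesh never changes.

```python
import numpy as np

# Level-set sketch: a circular interface advected by a uniform velocity.
# phi < 0 inside the "dome", phi = 0 on the free surface; the grid is fixed.
n, L = 64, 2.0
x = np.linspace(-L, L, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.hypot(X, Y) - 1.0            # signed distance to a unit circle
u, v = 0.5, 0.0                       # illustrative uniform velocity field
dx = x[1] - x[0]
dt = 0.4 * dx / max(abs(u), abs(v))   # CFL-limited time step

def step(phi):
    # first-order upwind advection of phi_t + u phi_x + v phi_y = 0
    dpx = (phi - np.roll(phi, 1, 0)) / dx if u > 0 else (np.roll(phi, -1, 0) - phi) / dx
    dpy = (phi - np.roll(phi, 1, 1)) / dx if v > 0 else (np.roll(phi, -1, 1) - phi) / dx
    return phi - dt * (u * dpx + v * dpy)

for _ in range(20):
    phi = step(phi)
# the zero contour has translated right by ~20*dt*u without any remeshing
```

After 20 steps the interface (and the minimum of phi, the dome "centre") has moved right by about 20·dt·u ≈ 0.5, while the grid is untouched.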


Recently, Adams and Bischof (1994) proposed a novel region growing algorithm for segmenting intensity images. The inputs to the algorithm are the intensity image and a set of seeds - individual points or connected components - that identify the individual regions to be segmented. The algorithm grows these seed regions until all of the image pixels have been assimilated. Unfortunately, the algorithm is inherently dependent on the order of pixel processing. This means, for example, that raster-order processing and anti-raster-order processing do not, in general, lead to the same tessellation. In this paper we propose an improved seeded region growing algorithm that retains the advantages of the Adams and Bischof algorithm - fast execution, robust segmentation, and no tuning parameters - but is pixel order independent. (C) 1997 Elsevier Science B.V.
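A minimal sketch of the original Adams-Bischof scheme makes the order dependence concrete: pixels adjacent to a region sit in a priority queue keyed by their intensity difference from the region mean, and stale priorities and ties are resolved by whatever order the queue happens to pop - exactly what the improved algorithm removes. The function and test image below are illustrative, not the paper's implementation.

```python
import heapq
import numpy as np

def seeded_region_growing(img, seeds):
    """Sketch of Adams & Bischof (1994) seeded region growing.

    img   : 2-D float array of intensities
    seeds : {label: [(row, col), ...]}, labels are positive integers
    Unlabelled pixels are assimilated in order of |intensity - region mean|.
    Priorities are not recomputed as means drift, and ties break by queue
    order - the source of the pixel-order dependence discussed above.
    """
    labels = np.zeros(img.shape, dtype=int)
    sums = {k: 0.0 for k in seeds}
    counts = {k: 0 for k in seeds}
    heap = []

    def push_neighbours(r, c, k):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1] \
                    and labels[rr, cc] == 0:
                delta = abs(img[rr, cc] - sums[k] / counts[k])
                heapq.heappush(heap, (delta, rr, cc, k))

    for k, pts in seeds.items():
        for r, c in pts:
            labels[r, c] = k
            sums[k] += img[r, c]
            counts[k] += 1
    for k, pts in seeds.items():
        for r, c in pts:
            push_neighbours(r, c, k)

    while heap:
        _, r, c, k = heapq.heappop(heap)
        if labels[r, c]:
            continue                      # already assimilated by another pop
        labels[r, c] = k
        sums[k] += img[r, c]
        counts[k] += 1
        push_neighbours(r, c, k)
    return labels
```

On a clear-cut two-region image this recovers the two halves regardless of ordering; in noisier images the pop order can shift the boundary, which is the defect the paper's order-independent variant fixes.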


The avian hippocampus plays a pivotal role in the memory required for spatial navigation and food storing. Here we have examined synaptic transmission and plasticity within the hippocampal formation of the domestic chicken using an in vitro slice preparation. Using sharp microelectrodes, we have shown that excitatory synaptic inputs in this structure are glutamatergic and activate both NMDA- and AMPA-type receptors on the postsynaptic membrane. In response to tetanic stimulation, the EPSP displayed a robust long-term potentiation (LTP) lasting >1 hr. This LTP was unaffected by blockade of NMDA receptors or chelation of postsynaptic calcium. Application of forskolin increased the EPSP and reduced paired-pulse facilitation (PPF), indicating an increase in release probability. In contrast, LTP was not associated with a change in the PPF ratio. Induction of LTP did not occlude the effects of forskolin. Thus, in contrast to NMDA receptor-independent LTP in the mammalian brain, LTP in the chicken hippocampus is not attributable to a change in the probability of transmitter release and does not require activation of adenylyl cyclase. These findings indicate that a novel form of synaptic plasticity might underlie learning in the avian hippocampus.


A significant problem in the collection of responses to potentially sensitive questions, such as those relating to illegal, immoral or embarrassing activities, is non-sampling error due to refusal to respond or false responses. Eichhorn & Hayre (1983) suggested the use of scrambled responses to reduce this form of bias. This paper considers a linear regression model in which the dependent variable is unobserved, but its sum or product with a scrambling random variable of known distribution is known. The performance of two likelihood-based estimators is investigated: a Bayesian estimator obtained through a Markov chain Monte Carlo (MCMC) sampling scheme, and a classical maximum-likelihood estimator. These two estimators are also compared with an estimator suggested by Singh, Joarder & King (1996). Monte Carlo results show that the Bayesian estimator outperforms the classical estimators in almost all cases, and that the relative performance of the Bayesian estimator improves as the responses become more scrambled.
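To see what the multiplicative-scrambling setup looks like, the sketch below simulates scrambled responses and undoes the scrambling with a simple moment correction. This naive estimator is an illustration only - it is not the Bayesian, maximum-likelihood or Singh-Joarder-King estimators studied in the paper - and all parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
beta = np.array([2.0, -1.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ beta + rng.normal(scale=0.5, size=n)   # true responses, never observed
s = rng.uniform(0.5, 1.5, size=n)              # scrambling variable, E[s] = 1 known
z = y * s                                      # respondent reports only the product

# Moment correction: E[z | X] = E[s] * X @ beta, so regressing z / E[s] on X
# gives a consistent (if inefficient) estimate of beta from scrambled data.
Es = 1.0                                       # known mean of the scrambling law
beta_hat = np.linalg.lstsq(X, z / Es, rcond=None)[0]
```

The recovered coefficients are close to (2, -1) despite no individual true response ever being seen; the extra scatter introduced by s is the efficiency price that the paper's likelihood-based estimators reduce.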


The problem of extracting pore size distributions from characterization data is solved here with particular reference to adsorption. The technique developed is based on a finite element collocation discretization of the adsorption integral, with the isotherm data fitted by least squares using regularization. A rapid and simple technique is also developed for ensuring non-negativity of the solutions, which modifies an original solution that has some negativity. The technique yields stable and converged solutions, and is implemented in the package RIDFEC. The package is demonstrated to be robust, yielding results that are less sensitive to experimental error than conventional methods, with fitting errors matching the known data error. It is shown that the choice of relative or absolute error norm in the least-squares analysis is best based on the kind of error in the data. (C) 1998 Elsevier Science Ltd. All rights reserved.
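The underlying computation is the regularized inversion of a first-kind integral equation. The sketch below uses a hypothetical Langmuir-type local isotherm as the kernel and a crude clip-to-zero repair standing in for the paper's non-negativity technique; the grids, noise level and regularization parameter are all illustrative.

```python
import numpy as np

# Toy inversion of g(p) = integral K(p, r) f(r) dr for a pore distribution f.
p = np.linspace(0.01, 1.0, 40)            # relative pressure grid (data)
r = np.linspace(0.5, 5.0, 30)             # pore size grid (unknowns)
K = (p[:, None] * r[None, :]) / (1.0 + p[:, None] * r[None, :])  # toy kernel
f_true = np.exp(-((r - 2.0) ** 2))        # assumed pore size distribution
g = K @ f_true + np.random.default_rng(1).normal(scale=1e-3, size=p.size)

lam = 1e-2                                # regularization parameter
A = np.vstack([K, np.sqrt(lam) * np.eye(r.size)])  # min ||Kf-g||^2 + lam||f||^2
b = np.concatenate([g, np.zeros(r.size)])
f = np.linalg.lstsq(A, b, rcond=None)[0]
f[f < 0] = 0.0                            # crude non-negativity repair
```

Without the penalty rows the ill-conditioned kernel would amplify the 0.1% data noise into wild oscillations in f; with them, the recovered distribution is stable and the fit residual stays at the level of the known data error.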


A sensitive high-performance liquid chromatographic assay has been developed for measuring plasma concentrations of methotrexate and its major metabolite, 7-hydroxymethotrexate. Methotrexate and the metabolite were extracted from plasma using solid-phase extraction, with aminopterin as the internal standard. Chromatographic separation was achieved using a 15-cm poly(styrene-divinylbenzene) (PRP-1) column, which is more robust than a silica-based stationary phase. Post-column, the eluent was irradiated with UV light, producing fluorescent photolytic degradation products of methotrexate and the metabolite. The excitation and emission wavelengths of fluorescence detection were 350 and 435 nm, respectively. The mobile phase consisted of 0.1 M phosphate buffer (pH 6.5) with 6% N,N-dimethylformamide and 0.2% of 30% hydrogen peroxide. The absolute recoveries of methotrexate and 7-hydroxymethotrexate were greater than 86%. Precision, expressed as a coefficient of variation (n=6), was


Krylov subspace techniques have been shown to yield robust methods for the numerical computation of large sparse matrix exponentials, and especially for the transient solutions of Markov chains. The attractiveness of these methods stems from the fact that they allow us to compute the action of a matrix exponential operator on an operand vector without having to compute the matrix exponential itself explicitly. In this paper we compare a Krylov-based method with some of the current approaches used for computing transient solutions of Markov chains. After a brief synthesis of the features of the methods used, wide-ranging numerical comparisons are performed on a Power Challenge Array supercomputer using three different models. (C) 1999 Elsevier Science B.V. All rights reserved. AMS Classification: 65F99; 65L05; 65U05.
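The central trick, approximating exp(A)v from a small Krylov subspace using only matrix-vector products with A, can be sketched as follows. This is a bare Arnoldi process without the error estimation and step-size control a production code would add, and the subspace dimension m is illustrative.

```python
import numpy as np

def krylov_expv(A, v, m=30):
    """Krylov approximation of exp(A) @ v via Arnoldi + a small dense exp.

    Only products A @ x are needed; exp(A) itself is never formed.
    """
    n = v.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:              # happy breakdown: exact subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    lam, Q = np.linalg.eig(Hm)               # exp of the small Hessenberg matrix
    expH = (Q * np.exp(lam)) @ np.linalg.inv(Q)
    return beta * V[:, :m] @ expH[:, 0].real
```

For a modest symmetric test matrix, 30 Arnoldi steps already reproduce exp(A) @ v to near machine precision, which is why the transient Markov-chain solutions discussed above can be computed without ever materializing a matrix exponential.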


Multiple sampling is widely used in vadose zone percolation experiments to investigate the extent to which soil structure heterogeneities influence the spatial and temporal distributions of water and solutes. In this note, a simple, robust mathematical model, based on the beta statistical distribution, is proposed as a method of quantifying the magnitude of heterogeneity in such experiments. The model relies on fitting two parameters, alpha and zeta, to the cumulative elution curves generated in multiple-sample percolation experiments, and does not require knowledge of the soil structure. A homogeneous or uniform distribution of a solute and/or soil water is indicated by alpha = zeta = 1. Using these parameters, a heterogeneity index (HI) is defined as the square root of 3 times the ratio of the standard deviation to the mean. Uniform or homogeneous flow of water or solutes is indicated by HI = 1, and heterogeneity is indicated by HI > 1; a large value of this index may indicate preferential flow. Because the heterogeneity index relies only on the elution curves generated from multiple-sample percolation experiments, it is easily calculated, and it may also be used to describe and compare differences in solute and soil-water percolation between experiments. The use of this index is discussed for several different leaching experiments. (C) 1999 Elsevier Science B.V. All rights reserved.
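For a fitted Beta(alpha, zeta) distribution the index is a one-liner: the mean and variance below are the standard beta-distribution moments, and the sqrt(3) factor makes the uniform case alpha = zeta = 1 give HI = 1 exactly (a Beta(1,1) has std/mean = 1/sqrt(3)).

```python
import math

def heterogeneity_index(alpha, zeta):
    """HI = sqrt(3) * (std / mean) of a Beta(alpha, zeta) distribution."""
    mean = alpha / (alpha + zeta)
    var = alpha * zeta / ((alpha + zeta) ** 2 * (alpha + zeta + 1.0))
    return math.sqrt(3.0) * math.sqrt(var) / mean
```

heterogeneity_index(1, 1) returns 1.0, the homogeneous-flow baseline, while a U-shaped Beta(0.5, 0.5), which concentrates elution in a few samples, gives HI > 1, consistent with the preferential-flow interpretation above.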


An automated method for extracting brain volumes from three commonly acquired three-dimensional (3D) MR images of the human head (proton-density-, T1- and T2-weighted) is described. The procedure is divided into four levels: preprocessing, segmentation, scalp removal, and postprocessing. A user-provided reference point is the sole operator-dependent input required. The method's parameters were first optimized, then fixed and applied to 30 repeat data sets from 15 normal older adult subjects to investigate reproducibility. Percent differences between total brain volumes (TBVs) for the subjects' repeated data sets ranged from 0.5% to 2.2%. We conclude that the method is both robust and reproducible and has the potential for wide application.


We use theoretical and numerical methods to investigate the general pore-fluid flow patterns near geological lenses in hydrodynamic and hydrothermal systems respectively. Analytical solutions have been rigorously derived for the pore-fluid velocity, stream function and excess pore-fluid pressure near a circular lens in a hydrodynamic system. These analytical solutions provide not only a better understanding of the physics behind the problem, but also a valuable benchmark solution for validating any numerical method. Since a geological lens is surrounded by a medium of large extent in nature and the finite element method is efficient at modelling only media of finite size, the determination of the size of the computational domain of a finite element model, which is often overlooked by numerical analysts, is very important in order to ensure both the efficiency of the method and the accuracy of the numerical solution obtained. To highlight this issue, we use the derived analytical solutions to deduce a rigorous mathematical formula for designing the computational domain size of a finite element model. The proposed mathematical formula has indicated that, no matter how fine the mesh or how high the order of elements, the desired accuracy of a finite element solution for pore-fluid flow near a geological lens cannot be achieved unless the size of the finite element model is determined appropriately. Once the finite element computational model has been appropriately designed and validated in a hydrodynamic system, it is used to examine general pore-fluid flow patterns near geological lenses in hydrothermal systems. Some interesting conclusions on the behaviour of geological lenses in hydrodynamic and hydrothermal systems have been reached through the analytical and numerical analyses carried out in this paper.
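The paper's validation strategy, checking a numerical solution against a closed-form result for a circular lens, can be mimicked with a small finite-difference model (finite differences rather than the paper's finite elements, and the permeabilities, grid and domain size below are illustrative). For an isolated circular lens in a uniform far-field flow, potential theory gives an interior pressure gradient of 2*k_matrix/(k_matrix + k_lens) times the background gradient; the sketch reproduces that ratio, and shrinking L shows how an undersized computational domain biases it, which is the domain-size point made above.

```python
import numpy as np

n, L, R = 41, 4.0, 1.0                    # grid points, half-width, lens radius
k_matrix, k_lens = 1.0, 5.0               # illustrative permeabilities
x = np.linspace(-L, L, n)
X, Y = np.meshgrid(x, x, indexing="xy")
k = np.where(X**2 + Y**2 < R**2, k_lens, k_matrix)

# Harmonic-mean face conductivities; absent faces at the top/bottom walls
# give the no-flow boundary condition there.
kx = 2 * k[:, :-1] * k[:, 1:] / (k[:, :-1] + k[:, 1:])
ky = 2 * k[:-1, :] * k[1:, :] / (k[:-1, :] + k[1:, :])

p = np.tile(np.linspace(1.0, -1.0, n), (n, 1))   # initial guess; ends are BCs
for _ in range(10000):                    # Jacobi iteration on div(k grad p) = 0
    num = np.zeros_like(p)
    den = np.zeros_like(p)
    num[:, 1:] += kx * p[:, :-1]; den[:, 1:] += kx
    num[:, :-1] += kx * p[:, 1:]; den[:, :-1] += kx
    num[1:, :] += ky * p[:-1, :]; den[1:, :] += ky
    num[:-1, :] += ky * p[1:, :]; den[:-1, :] += ky
    p = num / den
    p[:, 0], p[:, -1] = 1.0, -1.0         # fixed-pressure inflow/outflow

mid = n // 2
grad_in = (p[mid, mid + 1] - p[mid, mid - 1]) / (2 * (x[1] - x[0]))
ratio = grad_in / (-2.0 / (2 * L))        # analytic value: 2*1/(1+5) = 1/3
```

With the domain four lens radii wide the computed ratio sits near the analytic 1/3; no amount of mesh refinement alone fixes the bias from a too-small L, matching the paper's conclusion that domain size must be designed, not just mesh density.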