859 results for robust hedging
Abstract:
A CE method was developed and validated for the stereoselective determination of midodrine and desglymidodrine in Czapek culture medium to be applied to a stereoselective biotransformation study employing endophytic fungi. The electrophoretic analyses were performed using an uncoated fused-silica capillary and 70 mmol/L sodium acetate buffer solution (pH 5.0) containing 30 mmol/L heptakis(2,3,6-tri-O-methyl)-beta-CD as running electrolyte. The applied voltage and temperature used were 15 kV and 15°C, respectively. The UV detector was set at 200 nm. The sample preparation was carried out by liquid-liquid extraction using ethyl acetate as extractor solvent. The method was linear over the concentration range of 0.1-12 μg/mL for each enantiomer of midodrine and desglymidodrine (r ≥ 0.9975). Within-day and between-day precision and accuracy, evaluated by RSDs and relative errors, respectively, were lower than 15% for all analytes. The method proved to be robust by a fractional factorial design evaluation. The validated method was used to assess the midodrine biotransformation to desglymidodrine by the fungus Phomopsis sp. (TD2), which biotransformed 1.1% of (-)-midodrine to (-)-desglymidodrine and 6.1% of (+)-midodrine to (+)-desglymidodrine.
Abstract:
Rectangular dropshafts, commonly used in sewers and storm water systems, are characterised by significant flow aeration. New detailed air-water flow measurements were conducted in a near-full-scale dropshaft at large discharges. In the shaft pool and outflow channel, the results demonstrated the complexity of different competitive air entrainment mechanisms. Bubble size measurements showed a broad range of entrained bubble sizes. Analysis of streamwise distributions of bubbles further suggested some clustering process in the bubbly flow, although, in the outflow channel, bubble chords were on average smaller than in the shaft pool. A robust hydrophone was tested to measure bubble acoustic spectra and to assess its field application potential. The acoustic results characterised accurately the order of magnitude of entrained bubble sizes, but the transformation from acoustic frequencies to bubble radii did not predict correctly the probability distribution functions of bubble sizes.
Abstract:
There is now considerable evidence to suggest that non-demented people with Parkinson's disease (PD) experience difficulties using the morphosyntactic aspects of language. It remains unclear, however, at precisely which point in the processing of morphosyntax these difficulties emerge. The major objective of the present study was to examine the impact of PD on the processes involved in accessing morphosyntactic information in the lexicon. Nineteen people with PD and 19 matched control subjects participated in the study, which employed on-line word recognition tasks to examine morphosyntactic priming for local grammatical dependencies that occur both within (e.g. is going) and across (e.g. she gives) phrasal boundaries (Experiments 1 and 2, respectively). The control group evidenced robust morphosyntactic priming effects that were consistent with the involvement of both pre-lexical (Experiment 1) and post-lexical (Experiment 2) processing routines. Whilst the participants with PD also recorded priming for dependencies within phrasal boundaries (Experiment 1), priming effects were observed over an abnormally brief time course. Further, in contrast to the controls, the PD group failed to record morphosyntactic priming for constructions that crossed phrasal boundaries (Experiment 2). The results demonstrate that attentionally mediated mechanisms operating at both the pre- and post-lexical stages of processing are able to contribute to morphosyntactic priming effects. In addition, the findings support the notion that, whilst people with PD are able to access morphosyntactic information in a normal manner, the time frame in which this information remains available for processing is altered. Deficits may also be experienced at the post-lexical integrational stage of processing.
Abstract:
Experimental mechanical sieving methods are applied to samples of shellfish remains from three sites in southeast Queensland, Seven Mile Creek Mound, Sandstone Point and One-Tree, to test the efficacy of various recovery and quantification procedures commonly applied to shellfish assemblages in Australia. There has been considerable debate regarding the most appropriate sieve sizes and quantification methods that should be applied in the recovery of vertebrate faunal remains. Few studies, however, have addressed the impact of recovery and quantification methods on the interpretation of invertebrates, specifically shellfish remains. In this study, five shellfish taxa representing four bivalves (Anadara trapezia, Trichomya hirsutus, Saccostrea glomerata, Donax deltoides) and one gastropod (Pyrazus ebeninus) common in eastern Australian midden assemblages are sieved through 10 mm, 6.3 mm and 3.15 mm mesh. Results are quantified using MNI, NISP and weight. Analyses indicate that different structural properties and pre- and post-depositional factors affect recovery rates. Fragile taxa (T. hirsutus) or those with foliated structure (S. glomerata) tend to be overrepresented by NISP measures in smaller sieve fractions, while more robust taxa (A. trapezia and P. ebeninus) tend to be overrepresented by weight measures. Results demonstrate that for all quantification methods tested a 3 mm sieve should be used on all sites to allow for regional comparability and to effectively collect all available information about the shellfish remains.
Abstract:
The dimensionless spray flux Ψa is a dimensionless group characterising the three most important variables in liquid dispersion: flowrate, drop size and powder flux through the spray zone. In this paper, the Poisson distribution was used to generate analytical solutions for the proportion of nuclei formed from single drops (fsingle) and the fraction of the powder surface covered by drops (fcovered) as a function of Ψa. Monte-Carlo simulations were performed to simulate the spray zone and investigate how Ψa, fsingle and fcovered are related. The Monte-Carlo data were an excellent match to the analytical solutions for fcovered and fsingle as a function of Ψa. At low Ψa, the proportion of the surface covered by drops (fcovered) was equal to Ψa. As Ψa increases, drop overlap becomes more dominant and the powder surface coverage levels off. The proportion of nuclei formed from single drops (fsingle) falls exponentially with increasing Ψa. In the ranges covered, these results were independent of drop size, number of drops, drop size distribution (mono-sized, bimodal and trimodal distributions), and the uniformity of the spray. Experimental data of nuclei size distributions as a function of spray flux were fitted to the analytical solution for fsingle by defining a cutsize for single drop nuclei. The fitted cutsizes followed the spray drop sizes, suggesting that the method is robust and that the cutsize does indicate the transition size between single-drop and agglomerate nuclei. This demonstrates that the nuclei distribution is determined by the dimensionless spray flux and that the fraction of drop-controlled nuclei can be calculated analytically in advance.
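A minimal numerical sketch of the spray-zone picture described above, assuming the Poisson-based closed forms fcovered = 1 - exp(-Ψa) and fsingle = exp(-4Ψa) for mono-sized drops (these expressions come from the wider granulation literature rather than this abstract, and all names and parameter values are illustrative):

```python
import numpy as np

def simulate_spray_zone(psi_a, drop_radius=1.0, box=50.0, n_test=5000, seed=0):
    """Monte-Carlo sketch of a spray zone with mono-sized drops.

    The drop count is chosen so that total projected drop area divided by the
    powder area equals the dimensionless spray flux psi_a.  Edge effects are
    ignored (box >> drop_radius).
    """
    rng = np.random.default_rng(seed)
    area = box * box
    n_drops = int(round(psi_a * area / (np.pi * drop_radius ** 2)))
    centres = rng.uniform(0.0, box, size=(n_drops, 2))

    # f_covered: fraction of random test points lying under at least one drop
    pts = rng.uniform(0.0, box, size=(n_test, 2))
    d2 = ((pts[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    f_covered = np.mean((d2 <= drop_radius ** 2).any(axis=1))

    # f_single: fraction of drops whose footprint touches no other drop
    dd2 = ((centres[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    np.fill_diagonal(dd2, np.inf)
    f_single = np.mean(dd2.min(axis=1) > (2.0 * drop_radius) ** 2)
    return f_covered, f_single

for psi_a in (0.1, 0.3, 0.6):
    fc, fs = simulate_spray_zone(psi_a)
    # compare against the assumed Poisson-based analytical forms
    print(psi_a, fc, 1 - np.exp(-psi_a), fs, np.exp(-4 * psi_a))
```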
Abstract:
Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
Abstract:
A major challenge in successfully implementing transit-oriented development (TOD) is having a robust process that ensures effective appraisal, initiation and delivery of multi-stakeholder TOD projects. A step-by-step project development process can assist in the methodical design, evaluation, and initiation of TOD projects. Successful TOD requires attention to transit, mixed-use development and public space. Brisbane, Australia provides a case study where recent planning policies and infrastructure documents have laid a foundation for TOD, but where barriers lie in precinct-level planning and project implementation. In this context, and perhaps in others, the research effort needs to shift toward identification of appropriate project processes and strategies. This paper presents the outcomes of research conducted to date. Drawing on the mainstream approach to project development and financial evaluation for property projects, key steps for potential use in successful delivery of TOD projects have been identified, including: establish the framework; location selection; precinct context review; preliminary precinct design; the initial financial viability study; the decision stage; establishment of project structure; land acquisition; development application; and project delivery. The appropriateness of this mainstream development and appraisal process will be tested through stakeholder research, and the proposed process will then be refined for adoption in TOD projects. It is suggested that the criteria for successful TOD should be broadened beyond financial concerns in order to deliver public sector support for project initiation.
Abstract:
Modeling volcanic phenomena is complicated by free surfaces that often support large rheological gradients. Analytical solutions and analogue models provide explanations for fundamental characteristics of lava flows, but more sophisticated models are needed, incorporating improved physics and rheology, to capture realistic events. To advance our understanding of the flow dynamics of highly viscous lava in Peléean lava dome formation, axi-symmetrical Finite Element Method (FEM) models of generic endogenous dome growth have been developed. We use a novel technique, the level-set method, which tracks a moving interface while leaving the mesh unaltered. The model equations are formulated in an Eulerian framework. In this paper we test the quality of this technique in our numerical scheme by considering existing analytical and experimental models of lava dome growth which assume a constant Newtonian viscosity. We then compare our model against analytical solutions for real lava domes extruded on Soufrière, St. Vincent, W.I. in 1979 and Mount St. Helens, USA in October 1980 using an effective viscosity. The level-set method is found to be computationally light and robust enough to model the free surface of a growing lava dome. Also, modeling the extruded lava with a constant pressure head naturally results in a drop in extrusion rate with increasing dome height, which can explain lava dome growth observables more appropriately than using a fixed extrusion rate. From the modeling point of view, the level-set method will ultimately provide an opportunity to capture more of the physics while benefiting from the numerical robustness of regular grids.
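As a rough illustration of the level-set idea referred to above (not the paper's FEM formulation), the interface is carried as the zero contour of a scalar field phi that is advected on a fixed grid; a minimal first-order upwind update, with periodic boundaries purely for brevity, might look like this:

```python
import numpy as np

def advect_level_set(phi, u, v, dx, dt):
    """One first-order upwind update of a level-set field phi on a fixed grid.

    The interface (e.g. a free surface) is the zero contour of phi and is
    transported by the velocity field (u, v) without any remeshing:
        phi_t + u * phi_x + v * phi_y = 0.
    Periodic boundaries (np.roll) are used only to keep the sketch short.
    """
    dpx_m = (phi - np.roll(phi, 1, axis=1)) / dx    # backward difference in x
    dpx_p = (np.roll(phi, -1, axis=1) - phi) / dx   # forward difference in x
    dpy_m = (phi - np.roll(phi, 1, axis=0)) / dx    # backward difference in y
    dpy_p = (np.roll(phi, -1, axis=0) - phi) / dx   # forward difference in y
    phi_x = np.where(u > 0, dpx_m, dpx_p)           # upwind selection in x
    phi_y = np.where(v > 0, dpy_m, dpy_p)           # upwind selection in y
    return phi - dt * (u * phi_x + v * phi_y)
```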
Abstract:
Recently, Adams and Bischof (1994) proposed a novel region growing algorithm for segmenting intensity images. The inputs to the algorithm are the intensity image and a set of seeds - individual points or connected components - that identify the individual regions to be segmented. The algorithm grows these seed regions until all of the image pixels have been assimilated. Unfortunately, the algorithm is inherently dependent on the order of pixel processing. This means, for example, that raster order processing and anti-raster order processing do not, in general, lead to the same tessellation. In this paper we propose an improved seeded region growing algorithm that retains the advantages of the Adams and Bischof algorithm - fast execution, robust segmentation, and no tuning parameters - but is pixel order independent. (C) 1997 Elsevier Science B.V.
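For readers unfamiliar with the underlying algorithm, a compact sketch of seeded region growing in the Adams-Bischof style is given below: pixels are assimilated in order of their grey-level distance to the neighbouring region's running mean, using a priority queue. The refinements that make the improved algorithm strictly pixel-order independent (notably the handling of ties) are not reproduced here, and all names are illustrative.

```python
import heapq
import numpy as np

def seeded_region_growing(image, seeds):
    """Sketch of seeded region growing on a 2-D grey-level image.

    `seeds` maps a region label (int > 0) to a list of (row, col) seed pixels.
    Pixels are assimilated in order of the absolute difference between their
    grey value and the running mean of the neighbouring region.
    """
    labels = np.zeros(image.shape, dtype=int)          # 0 = unassigned
    sums, counts = {}, {}
    heap, counter = [], 0
    nbrs = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def push(r, c, label):
        nonlocal counter
        delta = abs(float(image[r, c]) - sums[label] / counts[label])
        heapq.heappush(heap, (delta, counter, r, c, label))
        counter += 1

    for label, pixels in seeds.items():                # initialise seed regions
        sums[label] = sum(float(image[r, c]) for r, c in pixels)
        counts[label] = len(pixels)
        for r, c in pixels:
            labels[r, c] = label
    for label, pixels in seeds.items():                # queue seed neighbours
        for r, c in pixels:
            for dr, dc in nbrs:
                rr, cc = r + dr, c + dc
                if 0 <= rr < image.shape[0] and 0 <= cc < image.shape[1] and labels[rr, cc] == 0:
                    push(rr, cc, label)

    while heap:                                        # grow until all pixels assigned
        _, _, r, c, label = heapq.heappop(heap)
        if labels[r, c] != 0:
            continue
        labels[r, c] = label
        sums[label] += float(image[r, c])
        counts[label] += 1
        for dr, dc in nbrs:
            rr, cc = r + dr, c + dc
            if 0 <= rr < image.shape[0] and 0 <= cc < image.shape[1] and labels[rr, cc] == 0:
                push(rr, cc, label)
    return labels
```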
Abstract:
The avian hippocampus plays a pivotal role in memory required for spatial navigation and food storing. Here we have examined synaptic transmission and plasticity within the hippocampal formation of the domestic chicken using an in vitro slice preparation. With the use of sharp microelectrodes we have shown that excitatory synaptic inputs in this structure are glutamatergic and activate both NMDA- and AMPA-type receptors on the postsynaptic membrane. In response to tetanic stimulation, the EPSP displayed a robust long-term potentiation (LTP) lasting >1 hr. This LTP was unaffected by blockade of NMDA receptors or chelation of postsynaptic calcium. Application of forskolin increased the EPSP and reduced paired-pulse facilitation (PPF), indicating an increase in release probability. In contrast, LTP was not associated with a change in the PPF ratio. Induction of LTP did not occlude the effects of forskolin. Thus, in contrast to NMDA receptor-independent LTP in the mammalian brain, LTP in the chicken hippocampus is not attributable to a change in the probability of transmitter release and does not require activation of adenylyl cyclase. These findings indicate that a novel form of synaptic plasticity might underlie learning in the avian hippocampus.
Abstract:
The problem of extracting pore size distributions from characterization data is solved here with particular reference to adsorption. The technique developed is based on a finite element collocation discretization of the adsorption integral, with fitting of the isotherm data by least squares using regularization. A rapid and simple technique for ensuring non-negativity of the solutions is also developed, which modifies an original solution containing negative values. The technique yields stable and converged solutions, and is implemented in a package RIDFEC. The package is demonstrated to be robust, yielding results which are less sensitive to experimental error than conventional methods, with fitting errors matching the known data error. It is shown that the choice of relative or absolute error norm in the least-squares analysis is best based on the kind of error in the data. (C) 1998 Elsevier Science Ltd. All rights reserved.
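A hedged sketch of the general inversion strategy the abstract describes - discretise the adsorption integral, fit by regularised least squares, and keep the solution non-negative - is shown below. It uses a plain Tikhonov penalty with NNLS rather than RIDFEC's collocation scheme and negativity-correction step, and `local_isotherm` is a stand-in for whatever single-pore kernel applies.

```python
import numpy as np
from scipy.optimize import nnls

def invert_isotherm(pressures, uptake, pore_radii, local_isotherm, lam=1e-2):
    """Sketch of pore size distribution recovery from adsorption data.

    Discretises the adsorption integral N(p) = integral of n(p, r) f(r) dr on a
    grid of pore radii, then solves a Tikhonov-regularised least-squares
    problem with non-negativity enforced via NNLS.
    """
    # Kernel matrix: uptake in a pore of radius r at pressure p
    A = np.array([[local_isotherm(p, r) for r in pore_radii] for p in pressures])
    # Stack the regularisation block sqrt(lam) * I under the kernel matrix
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(len(pore_radii))])
    y_aug = np.concatenate([uptake, np.zeros(len(pore_radii))])
    f, _ = nnls(A_aug, y_aug)        # non-negative pore size distribution
    return f
```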
Abstract:
A sensitive high-performance liquid chromatographic assay has been developed for measuring plasma concentrations of methotrexate and its major metabolite, 7-hydroxymethotrexate. Methotrexate and metabolite were extracted from plasma using solid-phase extraction. An internal standard, aminopterin, was used. Chromatographic separation was achieved using a 15-cm poly(styrene-divinylbenzene) (PRP-1(R)) column. This column is more robust than a silica-based stationary phase. Post-column, the eluent was irradiated with UV light, producing fluorescent photolytic degradation products of methotrexate and the metabolite. The excitation and emission wavelengths of fluorescence detection were 350 and 435 nm, respectively. The mobile phase consisted of 0.1 M phosphate buffer (pH 6.5), with 6% N,N-dimethylformamide and 0.2% of 30% hydrogen peroxide. The absolute recoveries for methotrexate and 7-hydroxymethotrexate were greater than 86%. Precision, expressed as a coefficient of variation (n=6), was
Abstract:
Krylov subspace techniques have been shown to yield robust methods for the numerical computation of large sparse matrix exponentials and especially the transient solutions of Markov chains. The attractiveness of these methods results from the fact that they allow us to compute the action of a matrix exponential operator on an operand vector without having to compute, explicitly, the matrix exponential in isolation. In this paper we compare a Krylov-based method with some of the current approaches used for computing transient solutions of Markov chains. After a brief synthesis of the features of the methods used, wide-ranging numerical comparisons are performed on a Power Challenge Array supercomputer on three different models. (C) 1999 Elsevier Science B.V. All rights reserved. AMS Classification: 65F99; 65L05; 65U05.
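The core Krylov idea mentioned above - approximating the action of the matrix exponential on a vector without forming the exponential - can be sketched with a basic Arnoldi projection. This is the generic textbook scheme, not the specific package benchmarked in the paper; for a Markov chain with generator Q and initial distribution p0, the transient solution p(t) = p0 exp(Qt) would be obtained by applying the routine below to Q transposed and p0.

```python
import numpy as np
from scipy.linalg import expm

def krylov_expm_action(A, v, t=1.0, m=30):
    """Approximate exp(t*A) @ v without forming exp(t*A).

    Builds an m-dimensional Krylov basis with the Arnoldi process and projects
    the exponential onto it:  exp(tA) v  ~  beta * V_m expm(t H_m) e1.
    Intended for large sparse A with modest m.
    """
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = w @ V[:, i]
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # happy breakdown: exact subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (expm(t * H[:m, :m]) @ e1)
```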
Abstract:
Multiple sampling is widely used in vadose zone percolation experiments to investigate the extent to which soil structure heterogeneities influence the spatial and temporal distributions of water and solutes. In this note, a simple, robust, mathematical model, based on the beta-statistical distribution, is proposed as a method of quantifying the magnitude of heterogeneity in such experiments. The model relies on fitting two parameters, alpha and zeta, to the cumulative elution curves generated in multiple-sample percolation experiments. The model does not require knowledge of the soil structure. A homogeneous or uniform distribution of a solute and/or soil-water is indicated by alpha = zeta = 1. Using these parameters, a heterogeneity index (HI) is defined as √3 times the ratio of the standard deviation to the mean. Uniform or homogeneous flow of water or solutes is indicated by HI = 1 and heterogeneity is indicated by HI > 1. A large value for this index may indicate preferential flow. The heterogeneity index relies only on knowledge of the elution curves generated from multiple-sample percolation experiments and is, therefore, easily calculated. The index may also be used to describe and compare the differences in solute and soil-water percolation from different experiments. The use of this index is discussed for several different leaching experiments. (C) 1999 Elsevier Science B.V. All rights reserved.
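A rough worked illustration of the index follows, assuming the cumulative elution curve is fitted with the CDF of a beta distribution in parameters alpha and zeta (an assumed form that is consistent with the stated property that alpha = zeta = 1 gives HI = 1; the paper's exact fitting procedure may differ, and all names are illustrative).

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import beta as beta_dist

def heterogeneity_index(sample_fraction, cumulative_elution):
    """Fit a beta-distribution model to a cumulative elution curve and return HI.

    `sample_fraction` is the normalised cumulative fraction of sampling cells
    (0..1) and `cumulative_elution` the normalised cumulative outflow.
    HI = sqrt(3) * (standard deviation / mean) of the fitted distribution,
    so alpha = zeta = 1 (uniform flow) gives HI = 1.
    """
    def model(x, a, z):
        return beta_dist.cdf(x, a, z)           # regularised incomplete beta

    (a, z), _ = curve_fit(model, sample_fraction, cumulative_elution,
                          p0=(1.0, 1.0), bounds=(1e-3, 1e3))
    mean = a / (a + z)
    var = a * z / ((a + z) ** 2 * (a + z + 1.0))
    return np.sqrt(3.0) * np.sqrt(var) / mean
```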
Abstract:
An automated method for extracting brain volumes from three commonly acquired three-dimensional (3D) MR images (proton density, T1-weighted, and T2-weighted) of the human head is described. The procedure is divided into four levels: preprocessing, segmentation, scalp removal, and postprocessing. A user-provided reference point is the sole operator-dependent input required. The method's parameters were first optimized, then fixed and applied to 30 repeat data sets from 15 normal older adult subjects to investigate its reproducibility. Percent differences between total brain volumes (TBVs) for the subjects' repeated data sets ranged from 0.5% to 2.2%. We conclude that the method is both robust and reproducible and has the potential for wide application.