870 results for Nonstandard scheme
Abstract:
Objective: The aim of this study was to assess the effect of repeated cycles of five chemical disinfectant solutions on the roughness and hardness of three hard chairside reliners. Methods: A total of 180 circular specimens (30 mm × 6 mm) were fabricated from three hard chairside reliners (Jet, n = 60; Kooliner, n = 60; Tokuyama Rebase II Fast, n = 60) and immersed in deionised water (control) or one of five disinfectant solutions (1%, 2% and 5.25% sodium hypochlorite; 2% glutaraldehyde; 4% chlorhexidine gluconate). Knoop hardness (KHN) and surface roughness (μm) were measured before and after 30 simulated disinfecting cycles. Data were analysed in a 6 × 2 factorial scheme by two-way analysis of variance (ANOVA), followed by Tukey's test. Results: A statistically significant decrease in hardness was observed for all materials irrespective of the solution used: Jet fell from 18.74 to 13.86 KHN, Kooliner from 14.09 to 8.72 KHN and Tokuyama from 12.57 to 8.28 KHN. For roughness, Jet showed a statistically significant increase (from 0.09 to 0.11 μm), Kooliner a statistically significant decrease (from 0.36 to 0.26 μm) and Tokuyama no statistically significant difference (from 0.15 to 0.11 μm) after 30 days. Conclusions: All disinfectant solutions promoted a statistically significant decrease in hardness, whereas the effect on roughness varied with the material. Although statistically significant, these changes could not be considered clinically significant.
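For readers who want to reproduce this kind of analysis, here is a minimal sketch of a 6 × 2 factorial ANOVA with Tukey's test in Python (statsmodels); the data frame below is simulated for illustration, not the study's data.

```python
# Minimal sketch of the 6 x 2 factorial analysis (solution x time) with
# statsmodels; the data are simulated stand-ins, not the study's values.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
solutions = ["water", "NaOCl 1%", "NaOCl 2%", "NaOCl 5.25%", "glut 2%", "CHX 4%"]
df = pd.DataFrame([
    {"solution": s, "time": t, "khn": 15 + rng.normal() - (3 if t == "after" else 0)}
    for s in solutions for t in ["before", "after"] for _ in range(10)
])

# Two-way ANOVA with interaction: 6 solutions x 2 time points.
model = smf.ols("khn ~ C(solution) * C(time)", data=df).fit()
print(anova_lm(model, typ=2))

# Tukey's HSD pairwise comparisons across solutions.
print(pairwise_tukeyhsd(df["khn"], df["solution"]))
```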
Abstract:
Medication data retrieved from Australian Repatriation Pharmaceutical Benefits Scheme (RPBS) claims for 44 veterans residing in nursing homes, and from Pharmaceutical Benefits Scheme (PBS) claims for 898 nursing home residents, were compared with medication data from nursing home records to determine the optimal time interval for retrieving claims data and to assess its validity. Optimal matching was achieved using 12 weeks of RPBS claims data: 60% of medications in the RPBS claims were located in nursing home administration records, and 78% of medications administered to nursing home residents were identified in RPBS claims. In comparison, 48% of medications administered to nursing home residents could be found in 12 weeks of PBS data, and 56% of medications present in PBS claims could be matched with nursing home administration records. RPBS claims data were superior to PBS data because of the larger number of scheduled items available to veterans and because the veteran's file number acts as a unique identifier. These findings should be taken into account when using prescription claims data for medication histories, prescriber feedback, drug utilisation, intervention or epidemiological studies.
Abstract:
The Cenozoic Victoria Land Basin (VLB) stratigraphic section penetrated by CRP-3 is mostly of Early Oligocene age. It contains an array of lithofacies comprising fine-grained mudrocks, interlaminated and interbedded mudrocks/sandstones, mud-rich and mud-poor sandstones, conglomerates and diamictites, together interpreted as the products of shallow marine to possibly non-marine environments of deposition affected by the periodic advance and retreat of tidewater glaciers. This lithofacies assemblage can be readily rationalised using the facies scheme designed originally for CRP-2/2A and published previously. The uppermost 330 metres below sea floor (mbsf) shows a cyclical arrangement of lithofacies similar to that recognised throughout CRP-2/2A, interpreted to reflect cyclical variations in relative sea-level driven by ice-volume fluctuations ("Motif A"). Between 330 and 480 mbsf, a series of less clearly cyclical units, generally fining upward but nonetheless incorporating a significant subset of the facies assemblage, has been identified and was noted in the Initial Report as "Motif B". Below 480 mbsf, the section is arranged into a repetitive succession of fining-upward units, each comprising dolerite-clast conglomerate at the base passing upward into relatively thick intervals of sandstone. The cycles present down to 480 mbsf are defined as sequences, each interpreted to record cyclical variation of relative sea-level. The thickness distribution of sequences in CRP-3 provides some insight into the geological variables controlling sediment accumulation in the Early Oligocene section. The uppermost part of the section in CRP-3 comprises two or three thick, complete sequences that show a broadly symmetrical arrangement of lithofacies (similar to Sequences 9-11 in CRP-2/2A). This suggests a period of relatively rapid tectonic subsidence, which allowed preservation of the complete facies cycle. Below Sequence 3, however, is a considerable interval of thin, incomplete and erosionally truncated sequences (4-23), which incorporates both the remainder of the Motif A sequences and all Motif B sequences recognised. The thinner and more truncated sequences suggest sediment accumulation under conditions of reduced accommodation and, given the lack of evidence for glacial conditions (see Powell et al., this volume), argue for a period of reduced tectonic subsidence. The section below 480 mbsf consists of a series of fining-upward, conglomerate-to-sandstone intervals that cannot be readily interpreted in terms of relative sea-level change. A relatively mudrock-rich interval above the basal conglomerate/breccia (782-762 mbsf) may record initial flooding of the basin during early rift subsidence. The lithostratigraphy summarised above has been linked to seismic reflection data using depth-conversion techniques (Henrys et al., this volume). The three uppermost reflectors ("o", "p" and "q") correlate with the package of thick Sequences 1-3, and several deeper reflectors can also be correlated with sequence boundaries. The package of thick Sequences 1-3 shows a sheet-like cross-sectional geometry on seismic reflection lines, unlike the similar package recognised in CRP-2/2A.
Abstract:
The step size determines the accuracy of a discrete element simulation. Because the position and velocity updates use a pre-calculated table, the step size cannot be controlled with the usual integration-formula error estimates. A step size control scheme for use with the table-driven velocity and position calculation instead uses the difference between the result of one big step and that of two small steps. This variable time step method automatically chooses a suitable step size for each particle at each step according to the conditions. Simulation with a fixed time step is compared against simulation with a variable time step. The difference in computation time for the same accuracy depends on the particular problem; for a simple test case the times are roughly similar. However, the variable step size gives the required accuracy on the first run, whereas a fixed step size may require several runs to check the simulation accuracy, or a conservative step size that results in longer run times.
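The big-step versus two-half-steps comparison is easy to sketch. The following Python toy uses a plain semi-implicit Euler update in place of the paper's table-driven calculation (which is not reproduced here); the tolerance and growth factor are illustrative choices.

```python
import numpy as np

def step_once(x, v, acc, dt):
    # One semi-implicit Euler step; a stand-in for the paper's
    # table-driven position/velocity update, which is not reproduced here.
    v_new = v + acc(x) * dt
    return x + v_new * dt, v_new

def adaptive_step(x, v, acc, dt, tol=1e-6, grow=1.5):
    # Compare one big step with two half steps; the difference is a local
    # error estimate. Shrink dt until it is within tolerance, as described.
    while True:
        xb, vb = step_once(x, v, acc, dt)
        xh, vh = step_once(x, v, acc, dt / 2)
        x2, v2 = step_once(xh, vh, acc, dt / 2)
        if max(abs(xb - x2), abs(vb - v2)) <= tol:
            return x2, v2, grow * dt   # accept; let dt grow for the next step
        dt /= 2                        # reject; retry with a smaller step

# Example: one particle in a linear restoring force field.
x, v, dt = 1.0, 0.0, 0.1
for _ in range(100):
    x, v, dt = adaptive_step(x, v, lambda q: -q, dt)
print(x, v, dt)
```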
Abstract:
Objective. To determine out-of-pocket expenditures related to osteoarthritis (OA) and to explore whether demographic details, health status scores (Medical Outcomes Study 36-item Short Form [SF-36] and Western Ontario and McMaster Universities Osteoarthritis Index [WOMAC]), or perception of social effect were expenditure determinants. Methods. In a prospective cohort study, community-dwelling subjects with OA completed 4 consecutive 3-month cost diaries. In addition, subjects completed the SF-36 and WOMAC at baseline and at 12 months. Data on social impact were collected at baseline. Four groups categorized by age and sex were compared. Patients undergoing joint replacement were excluded. Results. Differences in health status were defined more by age than by sex, especially for physical function. The costs to the patients were high, particularly for women, who spent more on medications and special equipment. Women also reported receiving more assistance from family and friends. Higher disease-related expenditures were associated with greater pain levels, poorer social function and mental health, and longer duration of disease. Significant independent predictors of total patient expenditures related to OA were being female and having joint stiffness. Conclusion. Despite having heavily subsidized health care and access to the Pharmaceutical Benefits Scheme, patients with OA in Australia face considerable out-of-pocket costs. Higher expenditures for patients with OA are related to more advanced disease, especially for women.
Abstract:
The phase estimation algorithm is so named because it allows an estimation of the eigenvalues associated with an operator. However, it has been proposed that the algorithm can also be used to generate eigenstates. Here we extend this proposal for small quantum systems, identifying the conditions under which the phase-estimation algorithm can successfully generate eigenstates. We then propose an implementation scheme based on an ion trap quantum computer. This scheme allows us to illustrate two simple examples, one in which the algorithm effectively generates eigenstates, and one in which it does not.
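The eigenstate-generating behaviour can be demonstrated with a plain statevector simulation; the numpy sketch below models a generic phase-estimation circuit (not the proposed ion-trap implementation), with the register size and example unitary chosen for illustration.

```python
import numpy as np

def qpe_collapse(U, psi, t=6, seed=0):
    # Statevector simulation of phase estimation with a t-qubit phase register.
    # Measuring the register projects the system onto (approximately) an
    # eigenstate of U.
    rng = np.random.default_rng(seed)
    T = 2 ** t
    # After Hadamards and the controlled-U^k ladder: state[k] = U^k |psi> / sqrt(T).
    state = np.stack([np.linalg.matrix_power(U, k) @ psi for k in range(T)]) / np.sqrt(T)
    k = np.arange(T)
    iqft = np.exp(-2j * np.pi * np.outer(k, k) / T) / np.sqrt(T)  # inverse QFT
    state = iqft @ state
    probs = np.sum(np.abs(state) ** 2, axis=1)
    m = rng.choice(T, p=probs / probs.sum())       # measure the phase register
    return m / T, state[m] / np.linalg.norm(state[m])

# Eigenphases 0.25 and 0.75 are exactly representable with 6 bits, so the
# collapse is exact; for generic phases it is only approximate (the two
# regimes the paper contrasts).
U = np.diag(np.exp(2j * np.pi * np.array([0.25, 0.75])))
phase, eigstate = qpe_collapse(U, np.array([1.0, 1.0]) / np.sqrt(2))
print(phase, np.round(eigstate, 3))
```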
Abstract:
Self- and peer-assessment are being used increasingly in higher education, both to assign grades to students' work and to help students learn more effectively. In spite of this trend, however, there is little in the published literature on how students view these methods. In this paper we present an analysis of the views of a large group of students (N = 233) who had just experienced self- and peer-feedback as part of one of their subjects. It is a rarely questioned commonplace in the literature that, in order to benefit from peer- and self-assessment schemes, students first need training in the specific scheme being used, and ideally will play a role in devising it. The intervention reported here included no such measures. The results show that students nonetheless felt they benefited from the intervention, while also providing prima facie evidence that training, or other measures to further involve students in the peer- and self-assessment scheme, might be beneficial. Our analysis of students' views revealed eight general dimensions, under which are grouped twenty higher-order themes. The results both support and extend previous research and give a more detailed picture than previously available. The general dimensions found were: Difficult; Gained Better Understanding of Marking; Discomfort; Productive (including learning benefits and improved work); Problems with Implementation; Read Others' Work; Develop Empathy (with assessing staff); and Motivation (especially motivation to impress peers). The practical implications of these findings are discussed.
Abstract:
We develop a new iterative filter diagonalization (FD) scheme based on Lanczos subspaces and demonstrate its application to the calculation of bound-state and resonance eigenvalues. The new scheme combines the Lanczos three-term vector recursion for the generation of a tridiagonal representation of the Hamiltonian with a three-term scalar recursion to generate filtered states within the Lanczos representation. Eigenstates in the energy windows of interest can then be obtained by solving a small generalized eigenvalue problem in the subspace spanned by the filtered states. The scalar filtering recursion is based on the homogeneous eigenvalue equation of the tridiagonal representation of the Hamiltonian, and is simpler and more efficient than our previous quasi-minimum-residual filter diagonalization (QMRFD) scheme (H. G. Yu and S. C. Smith, Chem. Phys. Lett., 1998, 283, 69), which was based on solving for the action of the Green operator via an inhomogeneous equation. A low-storage method for the construction of Hamiltonian and overlap matrix elements in the filtered-basis representation is devised, in which contributions to the matrix elements are computed simultaneously as the recursion proceeds, allowing coefficients of the filtered states to be discarded once their contribution has been evaluated. Application to the HO2 system shows that the new scheme is highly efficient and can generate eigenvalues with the same numerical accuracy as the basic Lanczos algorithm.
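For orientation, the sketch below shows only the Lanczos subspace step that the scheme builds on, i.e. the three-term vector recursion and the tridiagonal eigenproblem; the paper's scalar filtering recursion and low-storage matrix-element construction are not reproduced.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def lanczos(H, v0, m):
    # Three-term Lanczos recursion: returns the diagonals (alpha, beta) of the
    # tridiagonal representation of H in an m-dimensional Krylov subspace.
    # No reorthogonalization, so duplicate Ritz values can appear at large m.
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    v_prev, b = np.zeros_like(v), 0.0
    for j in range(m):
        w = H @ v - b * v_prev
        alpha[j] = v @ w
        w -= alpha[j] * v
        if j < m - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            v_prev, v = v, w / b
    return alpha, beta

# Example: extreme eigenvalues of a random symmetric H converge first.
rng = np.random.default_rng(1)
A = rng.standard_normal((400, 400))
H = (A + A.T) / 2
alpha, beta = lanczos(H, rng.standard_normal(400), m=150)
ritz = eigh_tridiagonal(alpha, beta, eigvals_only=True)
print(ritz[-3:], np.linalg.eigvalsh(H)[-3:])
```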
Abstract:
Time-dependent wavepacket evolution techniques demand the action of the propagator, exp(-iHt/ℏ), on a suitable initial wavepacket. When a complex absorbing potential is added to the Hamiltonian to combat unwanted reflection effects, polynomial expansions of the propagator are selected for their ability to cope with non-Hermiticity. An efficient subspace implementation of the Newton polynomial expansion scheme has been devised that requires fewer dense matrix-vector multiplications than its grid-based counterpart. Performance improvements are illustrated with some benchmark one- and two-dimensional examples.
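A bare-bones Newton polynomial expansion of the propagator might look as follows; this toy uses a Hermitian Hamiltonian and real Chebyshev interpolation nodes, whereas the paper's setting (complex absorbing potentials, hence non-Hermitian H) requires complex nodes, and its subspace implementation is not reproduced.

```python
import numpy as np
from scipy.linalg import expm

def divided_differences(nodes, f):
    # Divided-difference table: d[k] is the k-th Newton interpolation coefficient.
    d = f(nodes).astype(complex)
    for j in range(1, len(nodes)):
        d[j:] = (d[j:] - d[j - 1:-1]) / (nodes[j:] - nodes[:-j])
    return d

def newton_propagate(H, psi, t, nodes):
    # exp(-iHt)|psi> ~= sum_k d_k prod_{j<k} (H - z_j I) |psi>
    d = divided_differences(nodes, lambda z: np.exp(-1j * z * t))
    r = psi.astype(complex)
    out = d[0] * r
    for k in range(1, len(nodes)):
        r = H @ r - nodes[k - 1] * r
        out = out + d[k] * r
    return out

# Toy Hermitian H scaled so its spectrum lies in [-1, 1]; 24 Chebyshev nodes.
rng = np.random.default_rng(2)
S = rng.standard_normal((60, 60)); S = S + S.T
H = S / np.linalg.norm(S, 2)
nodes = np.cos((2 * np.arange(24) + 1) * np.pi / 48)
psi = rng.standard_normal(60); psi /= np.linalg.norm(psi)
print(np.linalg.norm(newton_propagate(H, psi, 1.0, nodes) - expm(-1j * H) @ psi))
```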
Abstract:
A scheme is presented to incorporate a mixed potential integral equation (MPIE) using Michalski's formulation C with the method of moments (MoM) for analyzing the scattering of a plane wave from conducting planar objects buried in a dielectric half-space. The robust complex image method with a two-level approximation is used for the calculation of the Green's functions for the half-space. To further speed up the computation, an interpolation technique for filling the matrix is employed. While the induced current distributions on the object's surface are obtained in the frequency domain, the corresponding time domain responses are calculated via the inverse fast Fourier transform (FFT). The complex natural resonances of targets are then extracted from the late-time response using the generalized pencil-of-function (GPOF) method. We investigate the pole trajectories as we vary the distance between strips and the depth and orientation of single buried strips. The variation from the pole position of a single strip in a homogeneous dielectric medium was only a few percent for most of these parameter variations.
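The final pole-extraction step can be illustrated on its own. The sketch below is a generic matrix-pencil formulation in the pencil-of-function family applied to a synthetic late-time signal; the signal, model order and pencil parameter are invented for the example, and details may differ from the GPOF variant used in the paper.

```python
import numpy as np

def matrix_pencil_poles(y, M, dt):
    # Estimate complex natural resonances s_i from a sampled late-time signal
    # y[n] = sum_i a_i * exp(s_i * n * dt) via the matrix-pencil method.
    N = len(y)
    L = N // 2                                            # pencil parameter
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])  # Hankel data matrix
    Y1, Y2 = Y[:, :-1], Y[:, 1:]
    # Rank-M truncated pseudo-inverse of Y1 suppresses noise directions.
    U, s, Vh = np.linalg.svd(Y1, full_matrices=False)
    pinv = (Vh[:M].conj().T / s[:M]) @ U[:, :M].conj().T
    z = np.linalg.eigvals(pinv @ Y2)
    z = z[np.argsort(-np.abs(z))][:M]   # keep the M dominant eigenvalues
    return np.log(z) / dt               # poles s_i = ln(z_i) / dt

# Example: recover two damped complex exponentials.
dt, n = 0.01, np.arange(200)
s_true = np.array([-0.5 + 2j * np.pi * 3, -1.0 + 2j * np.pi * 7])
y = sum(np.exp(s * n * dt) for s in s_true)
print(np.sort_complex(matrix_pencil_poles(y, M=2, dt=dt)))
```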
Abstract:
We derive optimal N-photon two-mode input states for interferometric phase measurements. Under canonical measurements the phase variance scales as N^-2 for these states, compared with N^-1 or N^-1/2 for states considered by previous authors. We prove that it is not possible to realize the canonical measurement by counting photons in the outputs of the interferometer, even if an adjustable auxiliary phase shift is allowed in the interferometer. However, we introduce a feedback algorithm based on Bayesian inference to control this auxiliary phase shift. This makes the measurement close to a canonical one, with a phase variance scaling slightly above N^-2. With no feedback, the best result (given that the phase to be measured is completely unknown) is a scaling of N^-1. For optimal input states having up to four photons, our feedback scheme is the best possible one, but for higher photon numbers more complicated schemes perform marginally better.
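As a much-simplified illustration of Bayesian-controlled feedback, the sketch below adaptively sets the auxiliary phase for single photons sent one at a time through an interferometer, maintaining a grid posterior; this toy does not use the optimal N-photon two-mode states derived in the paper, and the feedback rule shown is a standard simplification.

```python
import numpy as np

rng = np.random.default_rng(7)
phi_true = 1.234                          # unknown phase (simulation only)
grid = np.linspace(0, 2 * np.pi, 720, endpoint=False)
post = np.full_like(grid, 1 / len(grid))  # flat prior over the phase

def p_click(c, theta, phi):
    # Single photon in a Mach-Zehnder: probability of firing detector c in {0, 1}
    # given system phase phi and auxiliary (feedback) phase theta.
    return 0.5 * (1 + (2 * c - 1) * np.cos(phi - theta))

for _ in range(200):                      # photons sent one at a time
    est = np.angle(np.sum(post * np.exp(1j * grid)))  # circular posterior mean
    theta = est + np.pi / 2               # feedback: sit at the steepest fringe slope
    c = int(rng.random() < p_click(1, theta, phi_true))
    post *= p_click(c, theta, grid)       # Bayes update on the grid
    post /= post.sum()

print(np.angle(np.sum(post * np.exp(1j * grid))) % (2 * np.pi), phi_true)
```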
Abstract:
It is not possible to make measurements of the phase of an optical mode using linear optics without introducing an extra phase uncertainty. This extra phase variance is quite large for heterodyne measurements, but it can be reduced to the theoretical limit of log n̄/(4n̄²) using adaptive measurements. These measurements are quite sensitive to experimental inaccuracies, especially time delays and inefficient detectors. Here it is shown that the minimum introduced phase variance for a time delay of τ is τ/(8n̄). This result is verified numerically, showing that the phase variance introduced approaches this limit for most of the adaptive schemes using the best final phase estimate. The main exception is the adaptive mark II scheme with simplified feedback, which is extremely sensitive to time delays. The extra phase variance due to time delays is considered for the mark I case with simplified feedback, verifying the τ/2 result obtained by Wiseman and Killip both by a more rigorous analytic technique and numerically.
Abstract:
Motivation: This paper introduces the software EMMIX-GENE, developed specifically for a model-based approach to the clustering of microarray expression data, in particular of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant to the clustering of the tissue samples: mixtures of t distributions are fitted to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. Imposing a threshold on the likelihood ratio statistic, in conjunction with a threshold on the size of a cluster, allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, so mixtures of factor analyzers are exploited to effectively reduce the dimension of the feature space of genes. Results: The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes can be selected that reveal interesting clusterings of the tissues, consistent either with the external classification of the tissues or with background biological knowledge of these sets.
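The gene-selection step can be approximated in a few lines. The sketch below substitutes scikit-learn Gaussian mixtures for the t mixtures EMMIX-GENE actually fits, ranks genes by the one- versus two-component likelihood ratio statistic, and omits the cluster-size threshold and the subsequent factor-analyzer stage; the data and threshold are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_genes(X, threshold=8.0):
    # Rank genes by -2 log LR for one vs two mixture components across tissues.
    # Gaussian mixtures stand in for the t mixtures EMMIX-GENE actually uses.
    n_tissues, n_genes = X.shape
    lr = np.empty(n_genes)
    for g in range(n_genes):
        x = X[:, [g]]
        ll1 = GaussianMixture(1).fit(x).score(x) * n_tissues
        ll2 = GaussianMixture(2, n_init=3, random_state=0).fit(x).score(x) * n_tissues
        lr[g] = 2 * (ll2 - ll1)
    keep = np.where(lr > threshold)[0]
    return keep[np.argsort(-lr[keep])]   # most clearly bimodal genes first

# Example: 40 tissues x 500 genes, with the first 5 genes truly bimodal.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 500))
X[:20, :5] += 3.0
print(select_genes(X)[:10])
```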
Abstract:
Novel current density mapping (CDM) schemes are developed for the design of new actively shielded, clinical magnetic resonance imaging (MRI) magnets. This is an extended inverse method in which the entire potential solution space for the superconductors is considered, rather than single current density layers. The solution provides insight into the superconducting coil pattern required for a desired magnet configuration. This information is then used as an initial set of parameters for the magnet structure, and a previously developed hybrid numerical optimization technique is used to obtain the final geometry of the magnet. The CDM scheme is applied to the design of compact symmetric, asymmetric, and open-architecture 1.0-1.5 T MRI magnet systems of novel geometry and utility. A new symmetric 1.0-T system that is just 1 m in length with a full 50-cm diameter of the active, or sensitive, volume (DSV) is detailed, as well as an asymmetric system in which a 50-cm DSV begins just 14 cm from the end of the coil structure. Finally, a 1.0-T open magnet system with a full 50-cm DSV is presented. These new designs provide clinically useful homogeneous regions and have appropriately restricted stray fields but, in some of the designs, the DSV is much closer to the end of the magnet system than in conventional designs. These new designs have the potential to reduce patient claustrophobia and improve physician access to patients undergoing scans.
Abstract:
A finite-element method is used to study the elastic properties of random three-dimensional porous materials with highly interconnected pores. We show that Young's modulus, E, is practically independent of Poisson's ratio of the solid phase, ν_s, over the entire solid fraction range, and Poisson's ratio, ν, becomes independent of ν_s as the percolation threshold is approached. We represent this behaviour of ν in a flow diagram. This interesting but approximate behaviour is very similar to the exactly known behaviour in two-dimensional porous materials. In addition, the behaviour of ν versus ν_s appears to imply that information in the dilute porosity limit can affect behaviour in the percolation threshold limit. We summarize the finite-element results in terms of simple structure-property relations, instead of tables of data, to make it easier to apply the computational results. Without using accurate numerical computations, one is limited to various effective medium theories and rigorous approximations like bounds and expansions. The accuracy of these equations is unknown for general porous media. To verify a particular theory it is important to check that it predicts both isotropic elastic moduli, i.e. prediction of Young's modulus alone is necessary but not sufficient. The subtleties of Poisson's ratio behaviour actually provide a very effective method for showing differences between the theories and demonstrating their ranges of validity. We find that for moderate- to high-porosity materials, none of the analytical theories is accurate and, at present, numerical techniques must be relied upon.