Abstract:
Objectives: To assess the in situ color stability and the surface and tooth/restoration interface degradation of a silorane-based composite (P90, 3M ESPE) after accelerated artificial ageing (AAA), in comparison with dimethacrylate monomer-based composites (Z250/Z350, 3M ESPE, and Esthet-X, Dentsply). Methods: Class V cavities (25 mm² × 2 mm deep) were prepared in 48 bovine incisors, which were randomly allocated into 4 groups of 12 specimens each according to the restorative material used. After polishing, 10 specimens per group were subjected to initial color readings (Easyshade, Vita) and 2 to analysis by scanning electron microscopy (SEM). The teeth were then subjected to AAA for 384 h, corresponding to 1 year of clinical use, after which new color readings and microscopic images were obtained. The color data were analysed statistically (1-way ANOVA, Tukey, p < 0.05). Results: All the composites showed color alteration above the clinically acceptable level (ΔE ≥ 3.3), and the silorane-based composite showed the highest ΔE (18.6), a statistically significant difference from the other composites (p < 0.05). The SEM images showed only small alterations for the dimethacrylate-based composites after AAA, but extensive degradation for the silorane-based composite, with rupture at the matrix/particle interface. Conclusion: The silorane-based composite underwent greater color alteration and greater surface and tooth/restoration interface degradation after AAA. (C) 2011 Elsevier Ltd. All rights reserved.
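For reference, the ΔE values quoted above are CIELAB colour differences, computed as the Euclidean distance between two (L*, a*, b*) readings. A minimal sketch with invented readings (the abstract does not report the underlying colour coordinates):

    # CIELAB colour difference (Delta E*ab) between two readings;
    # the (L*, a*, b*) values below are invented for illustration.
    import math

    def delta_e_ab(lab1, lab2):
        # Euclidean distance in CIELAB space
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

    baseline = (72.0, 1.5, 18.0)   # (L*, a*, b*) before ageing (assumed)
    aged     = (65.0, 4.0, 35.0)   # after accelerated ageing (assumed)

    print(f"Delta E = {delta_e_ab(baseline, aged):.1f}")  # above 3.3 is clinically unacceptable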
Abstract:
Bond's method for ball mill scale-up gives only the mill power draw for a given duty; it is incompatible with computer modelling and simulation techniques and may not be applicable to the design of fine-grinding ball mills or ball mills preceded by autogenous and semi-autogenous grinding mills. Model-based ball mill scale-up methods have not been validated against a wide range of full-scale circuit data, so their accuracy is questionable, and some of them also require expensive pilot testing. A new ball mill scale-up procedure is developed that does not have these limitations. The procedure uses data from two laboratory tests to determine the parameters of a ball mill model; a set of scale-up criteria then scales up these parameters, and the scaled-up parameters are used to simulate the steady-state performance of full-scale mill circuits. The simulation yields the size distribution, volumetric flowrate and mass flowrate of all the streams in the circuit, together with the mill power draw.
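For context on the first limitation: Bond's approach reduces to a specific-energy relation, from which power draw follows for a given throughput, with no size-distribution or flowrate information. A minimal sketch, assuming the standard form of Bond's equation and invented values:

    # Bond's equation: W = 10 * Wi * (1/sqrt(P80) - 1/sqrt(F80)),
    # with Wi in kWh/t and sizes in micrometres; all values assumed.
    def bond_specific_energy(wi, f80, p80):
        return 10.0 * wi * (p80 ** -0.5 - f80 ** -0.5)

    Wi, F80, P80 = 14.0, 2000.0, 150.0   # work index, feed/product 80%-passing sizes
    throughput = 250.0                   # t/h (assumed)
    W = bond_specific_energy(Wi, F80, P80)
    print(f"{W:.2f} kWh/t -> mill power ~ {W * throughput:.0f} kW")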
Abstract:
A new method is presented for determining an accurate eigendecomposition of difficult low-temperature unimolecular master equation problems. Based on a generalisation of the Nesbet method, the new method achieves complete spectral resolution of the master equation matrix with relative accuracy in the eigenvectors. The method is applied to a test case, the decomposition of ethane at 300 K from a microcanonical initial population, with energy transfer modelled by both Ergodic Collision Theory and the exponential-down model. It is demonstrated that quadruple-precision (16-byte) arithmetic is required irrespective of the eigensolution method used. (C) 2001 Elsevier Science B.V. All rights reserved.
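The precision point can be reproduced with an arbitrary-precision linear algebra package. A sketch using mpmath at 113 significand bits (the IEEE quadruple-precision width); the matrix is a toy stand-in for a stiff master-equation rate matrix, not the ethane system itself:

    from mpmath import mp

    mp.prec = 113  # significand bits of IEEE quadruple precision

    # Toy rate matrix with widely separated scales (columns sum to zero).
    M = mp.matrix([[-1e-8,  1e-2,     0],
                   [ 1e-8, -2e-2,  1e+3],
                   [    0,  1e-2, -1e+3]])

    E, ER = mp.eig(M)  # eigenvalues and right eigenvectors
    for ev in E:
        print(mp.nstr(ev, 20))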
Abstract:
Problems associated with the stickiness of food in processing and storage, along with their causative factors, are outlined. Fundamental mechanisms that explain why and how food products become sticky are discussed. Methods currently in use for characterizing and overcoming stickiness problems in food processing and storage operations are described. The use of the glass transition temperature-based model, which provides a rational basis for understanding and characterizing the stickiness of many food products, is highlighted.
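One common instance of such a glass-transition-based model is the Gordon-Taylor equation for the Tg of a solid/water mixture; stickiness is often reported when a product is held roughly 10-20 K above its Tg. A hedged sketch with illustrative constants (not taken from the article):

    # Gordon-Taylor estimate of mixture Tg; the Tg values (kelvin) and
    # constant k below are illustrative assumptions.
    def gordon_taylor(w_solid, tg_solid, tg_water=138.0, k=5.0):
        # Tg = (w1*Tg1 + k*w2*Tg2) / (w1 + k*w2), water as component 2
        w_water = 1.0 - w_solid
        return (w_solid * tg_solid + k * w_water * tg_water) / (w_solid + k * w_water)

    tg = gordon_taylor(w_solid=0.96, tg_solid=335.0)  # e.g. a sugar-rich powder
    print(f"Estimated Tg: {tg:.0f} K; sticky region: ~{tg + 10:.0f}-{tg + 20:.0f} K")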
Abstract:
Dispersal, the distance between an individual's birthplace and that of its offspring, is of great importance in population biology, behavioural ecology and conservation; however, obtaining direct estimates from field data on natural populations can be problematic. The prickly forest skink, Gnypetoscincus queenslandiae, is a rainforest-endemic skink from the wet tropics of Australia. Because of its log-dwelling habits and lack of definite nesting sites, a demographic estimate of dispersal distance is difficult to obtain. Neighbourhood size, defined as 4πDσ² (where D is the population density and σ² the mean axial squared parent-offspring dispersal rate), dispersal and density were estimated directly and indirectly for this species using mark-recapture and microsatellite data, respectively, on lizards captured at a local geographical scale of 3 ha. Mark-recapture data gave a dispersal rate of 843 m²/generation (assuming a generation time of 6.5 years), a time-scaled density of 13,635 individuals·generation/km² and, hence, a neighbourhood size of 144 individuals. A genetic method based on the multilocus (10 loci) microsatellite genotypes of individuals and their geographical locations indicated a significant isolation-by-distance pattern and gave a neighbourhood size of 69 individuals, with a 95% confidence interval of 48-184. This translates into a dispersal rate of 404 m²/generation when using the mark-recapture density estimate, or a time-scaled population density of 6,520 individuals·generation/km² when using the mark-recapture dispersal rate estimate. The relationship between the two categories of neighbourhood size, dispersal and density estimates, and reasons for any disparities, are discussed.
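The figures quoted above are internally consistent with the definition Nb = 4πDσ², as a quick check shows:

    # Cross-check of the mark-recapture neighbourhood size, Nb = 4*pi*D*sigma^2,
    # using the numbers reported in the abstract.
    import math

    sigma_sq = 843.0          # axial dispersal, m^2/generation
    density = 13_635 / 1e6    # individuals*generation per m^2 (13,635 per km^2)
    print(f"Nb = {4 * math.pi * density * sigma_sq:.0f}")  # ~144, as reported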
Abstract:
This article describes a new test method for assessing the severity of environmental stress cracking of biomedical polyurethanes in a manner that minimizes the degree of subjectivity involved. The effect of applied strain and acetone pre-treatment on the degradation of Pellethane 2363 80A and Pellethane 2363 55D polyurethanes under in vitro and in vivo conditions is studied. The results are presented using a magnification-weighted image rating system that allows semi-quantitative rating of degradation based on the distribution and severity of surface damage. Devices for applying controlled strain to both flat sheet and tubing samples are described. The new rating system consistently discriminated between the effects of acetone pre-treatment, strain and exposure time in both in vitro and in vivo experiments. As expected, P80A underwent considerable stress cracking compared with P55D. P80A produced similar stress crack ratings in vivo and in vitro; however, P55D performed worse under in vitro conditions than in vivo. This result indicates that care must be taken when interpreting in vitro results in the absence of in vivo data. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
Lateral ventricular volumes based on segmented brain MR images can be significantly underestimated if partial volume effects are not considered, because a group of voxels in the neighborhood of the lateral ventricles is often misclassified as gray matter due to partial volume effects. These voxels are actually a mixture of ventricular cerebrospinal fluid and white matter, and a portion of them should therefore be included in the lateral ventricular structure. In this note, we describe an automated method for measuring lateral ventricular volumes on segmented brain MR images. Image segmentation was carried out using a combination of intensity correction and thresholding. The method features a procedure for handling misclassified voxels surrounding the lateral ventricles. A detailed analysis showed that lateral ventricular volumes could be underestimated by 10-30%, depending on the size of the lateral ventricular structure, if misclassified voxels were not included. The method was validated through comparison with averaged manually traced volumes. Finally, the merit of the method is demonstrated in the evaluation of the rate of lateral ventricular enlargement. (C) 2001 Elsevier Science Inc. All rights reserved.
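The correction for misclassified boundary voxels can be pictured with a two-compartment linear mixing model; a minimal sketch with assumed intensities (the paper's actual calibration is not reproduced here):

    # Partial-volume fraction of CSF in a boundary voxel, assuming the
    # voxel intensity mixes linearly between pure-CSF and pure-WM values.
    def csf_fraction(voxel, csf, wm):
        f = (voxel - wm) / (csf - wm)
        return min(max(f, 0.0), 1.0)   # clip to a physical fraction

    f = csf_fraction(voxel=95.0, csf=40.0, wm=110.0)  # intensities assumed
    voxel_volume_mm3 = 1.0 * 1.0 * 1.5                # assumed voxel size
    print(f"CSF fraction {f:.2f} -> {f * voxel_volume_mm3:.2f} mm^3 of ventricle")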
Abstract:
Germline mutations of the PTEN tumor-suppressor gene, on 10q23, cause Cowden syndrome, an inherited hamartoma syndrome with a high risk of breast, thyroid and endometrial carcinomas and, some suggest, melanoma. To date, most studies that strongly implicate PTEN in the etiology of sporadic melanoma have depended on cell lines, short-term tumor cultures and noncultured metastatic melanomas. The only study reporting PTEN protein expression in melanoma focuses on cytoplasmic expression, mainly in metastatic samples. To determine how PTEN contributes to the etiology or progression of primary cutaneous melanoma, we examined cytoplasmic and nuclear PTEN expression against clinical and pathologic features in a population-based sample of 150 individuals with incident primary cutaneous melanoma. Among 92 evaluable samples, 30 had absent or decreased cytoplasmic PTEN protein expression and the remaining 62 had normal PTEN expression. In contrast, 84 tumors had absent or decreased nuclear expression and only 8 had normal nuclear PTEN expression. None of the clinical and pathologic features studied, such as Clark's level, Breslow thickness or sun exposure, was associated with cytoplasmic PTEN expression levels. An association with loss of nuclear PTEN expression was indicated for anatomical site (p = 0.06) and mitotic index (p = 0.02). Melanomas also tended either to lack nuclear PTEN or to express p53 alone, rather than expressing both simultaneously (p = 0.02). In contrast with metastatic melanoma, where we have previously shown that almost two-thirds of tumors have some PTEN inactivation, only one-third of primary melanomas showed PTEN silencing. This suggests that PTEN inactivation is a late event, likely related to melanoma progression rather than initiation. Taken together with our previous observations in thyroid and islet cell tumors, our data suggest that nuclear-cytoplasmic partitioning of PTEN might also play a role in melanoma progression. (C) 2002 Wiley-Liss, Inc.
Abstract:
Motivation: This paper introduces the software EMMIX-GENE, developed specifically for a model-based approach to clustering microarray expression data, in particular tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach first selects a subset of genes relevant to the clustering of the tissue samples: mixtures of t distributions are fitted to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model, and a threshold on this statistic, used in conjunction with a threshold on cluster size, selects a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, so mixtures of factor analyzers are used to effectively reduce the dimension of the gene feature space. Results: The usefulness of the EMMIX-GENE approach for clustering tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of genes can be selected that reveal interesting clusterings of the tissues, consistent either with the external classification of the tissues or with background biological knowledge of these sets.
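The gene-selection step can be sketched as follows, with scikit-learn Gaussian mixtures standing in for the t mixtures that EMMIX-GENE actually fits (a plain swap for illustration):

    # Rank genes by the likelihood-ratio statistic for one vs two
    # mixture components fitted to each gene's expression profile.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def lr_statistic(expr, seed=0):
        x = expr.reshape(-1, 1)
        total_ll = []
        for g in (1, 2):
            gm = GaussianMixture(n_components=g, n_init=5, random_state=seed).fit(x)
            total_ll.append(gm.score(x) * len(x))   # score() is the per-sample mean
        return -2.0 * (total_ll[0] - total_ll[1])

    rng = np.random.default_rng(0)
    flat = rng.normal(0.0, 1.0, 60)                                  # one component
    bimodal = np.concatenate([rng.normal(-2, 1, 30), rng.normal(2, 1, 30)])
    print(lr_statistic(flat), lr_statistic(bimodal))  # second is much larger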
Abstract:
Reaction between 5-(4-amino-2-thiabutyl)-5-methyl-3,7-dithianonane-1,9-diamine (N3S3) and 5'-methyl-2,2'-bipyridine-5-carbaldehyde, followed by reduction of the resulting imine with sodium borohydride, gives a potentially ditopic ligand (L). Treatment of L with one equivalent of an iron(II) salt led to the monoprotonated complex [Fe(HL)]³⁺, isolated as the hexafluorophosphate salt. The presence of characteristic bands for the tris(bipyridyl)iron(II) chromophore in the UV/vis spectrum indicated that the iron(II) atom is coordinated octahedrally by the three bipyridyl (bipy) groups. The [Fe(bipy)₃] moiety encloses a cavity composed of the N3S3 portion of the ditopic ligand. The mononuclear and monomeric nature of the complex [Fe(HL)]³⁺ was also established by accurate mass analysis. [Fe(HL)]³⁺ displays reduced stability to base compared with [Fe(bipy)₃]²⁺. In aqueous solution [Fe(HL)]³⁺ exhibits irreversible electrochemical behaviour, with an oxidation wave ca. 60 mV more positive than that of [Fe(bipy)₃]²⁺. Investigations of the interaction of [Fe(L)]²⁺ with copper(II), iron(II) and mercury(II) using mass spectrometric and potentiometric methods suggested that, where complexation occurred, fewer than six of the N3S3 cavity donors were involved. The high affinity of the complex [Fe(L)]²⁺ for protons is suggested as one reason for its reluctance to coordinate a second metal ion.
Abstract:
The majority of the world's population now resides in urban environments, and information on the internal composition and dynamics of these environments is essential to enable preservation of certain standards of living. Remotely sensed data, especially the global coverage of moderate-spatial-resolution satellites such as Landsat, Indian Resource Satellite and Système Pour l'Observation de la Terre (SPOT), offer a highly useful data source for mapping the composition of cities and examining their changes over time. The utility and range of applications for remotely sensed data in urban environments could be improved with a more appropriate conceptual model relating urban environments to the sampling resolutions of imaging sensors and processing routines. Hence, the aim of this work was to take the Vegetation-Impervious surface-Soil (VIS) model of urban composition and match it with the most appropriate image-processing methodology to deliver information on VIS composition for urban environments. Several approaches were evaluated for mapping the urban composition of Brisbane (south-east Queensland, Australia) using Landsat 5 Thematic Mapper data and 1:5000 aerial photographs: image classification, interpretation of aerial photographs, and constrained linear mixture analysis. Over 900 reference sample points on four transects were extracted from the aerial photographs and used to check the output of the classification and mixture analysis. Distinctive zonations of VIS related to urban composition were found in the per-pixel classification and aggregated air-photo interpretation; however, significant spectral confusion also resulted between classes. In contrast, the VIS fraction images produced from the mixture analysis enabled distinctive densities of commercial, industrial and residential zones within the city to be clearly defined, based on their relative amounts of vegetation cover. The soil fraction image served as an index for areas being (re)developed. The logical match of a low (L)-resolution spectral mixture analysis approach with the moderate spatial resolution image data ensured that the processing model matched the spectrally heterogeneous nature of urban environments at the scale of Landsat Thematic Mapper data.
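The constrained linear mixture analysis amounts to solving, per pixel, for non-negative V-I-S fractions that sum to one. A sketch with invented endmember spectra (not the study's measured values):

    # Constrained unmixing: pixel = E @ f, f >= 0, sum(f) = 1 (enforced
    # softly via a heavily weighted extra row). Spectra are invented.
    import numpy as np
    from scipy.optimize import lsq_linear

    E = np.array([[0.05, 0.10, 0.12],   # columns: vegetation,
                  [0.08, 0.12, 0.15],   # impervious surface, soil
                  [0.04, 0.14, 0.20],   # (6 assumed TM-band reflectances)
                  [0.50, 0.18, 0.30],
                  [0.25, 0.22, 0.40],
                  [0.12, 0.20, 0.35]])
    pixel = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]

    w = 100.0  # weight on the sum-to-one row
    A = np.vstack([E, w * np.ones((1, 3))])
    b = np.append(pixel, w)
    print(np.round(lsq_linear(A, b, bounds=(0.0, 1.0)).x, 3))  # ~[0.6, 0.3, 0.1]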
Abstract:
We introduce a model of computation based on read-only memory (ROM), which allows us to compare the space efficiency of reversible, error-free classical computation with that of reversible, error-free quantum computation. We show that a ROM-based quantum computer with one writable qubit is universal, whilst two writable bits are required for a universal classical ROM-based computer. We also comment on the time-efficiency advantages of quantum computation within this model.
Abstract:
In this paper we construct predictor-corrector (PC) methods based on the trivial predictor and stochastic implicit Runge-Kutta (RK) correctors for solving stochastic differential equations. Using colored rooted tree theory and stochastic B-series, the order condition theorem for constructing stochastic RK methods based on PC implementations is derived. We also present detailed order conditions for PC methods using stochastic implicit RK correctors with strong global order 1.0 and 1.5. A two-stage implicit RK method with strong global order 1.0 and a four-stage implicit RK method with strong global order 1.5 are constructed and used as the correctors. The mean-square stability properties and numerical results of the PC methods based on these two implicit RK correctors are reported.
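The predictor-corrector mechanics can be illustrated on a scalar SDE dY = a(Y)dt + b(Y)dW: the trivial predictor takes Y_pred = Y_n, and a corrector is then iterated toward convergence. The sketch below uses a drift-implicit Euler-Maruyama corrector solved by fixed-point iteration; it shows the PC structure only, not the order-1.0/1.5 stochastic RK correctors constructed in the paper:

    import numpy as np

    def pc_step(y, h, dW, a, b, iters=3):
        y_new = y                                  # trivial predictor: Y_pred = Y_n
        for _ in range(iters):                     # fixed-point corrector iterations
            y_new = y + a(y_new) * h + b(y) * dW   # drift implicit, diffusion explicit
        return y_new

    a = lambda y: -4.0 * y    # linear test drift (assumed)
    b = lambda y: 0.5         # additive noise (assumed)
    rng = np.random.default_rng(1)
    y, h = 1.0, 0.01
    for _ in range(100):
        y = pc_step(y, h, rng.normal(0.0, np.sqrt(h)), a, b)
    print(y)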
Abstract:
This paper presents results on the simulation of the solid-state sintering of copper wires using Monte Carlo techniques based on elements of lattice theory and cellular automata. The initial structure is superimposed onto a triangular, two-dimensional lattice, where each lattice site corresponds to either an atom or a vacancy. The number of vacancies varies with the simulation temperature, and a cluster of vacancies represents a pore. To simulate sintering, lattice sites are picked at random and reoriented according to an atomistic model governing mass transport. The probability that an atom has sufficient energy to jump to a vacant lattice site is related to the jump frequency, and hence the diffusion coefficient, while the probability that an atomic jump will be accepted is related to the change in energy of the system as a result of the jump, as determined by the change in the number of nearest neighbours. The jump frequency is also used to relate model time, measured in Monte Carlo steps, to the actual sintering time. The model incorporates bulk, grain boundary and surface diffusion terms and includes vacancy annihilation on the grain boundaries. The predictions of the model were found to be consistent with experimental data, both in terms of the microstructural evolution and the sintering time. (C) 2002 Elsevier Science B.V. All rights reserved.
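The acceptance rule described above is the familiar Metropolis criterion applied to the bond-count change of a trial jump. A minimal sketch with illustrative parameters (the paper's energies and lattice bookkeeping are not reproduced):

    import math, random

    BOND_ENERGY = 1.0   # energy per nearest-neighbour bond (arbitrary units)
    KT = 0.8            # reduced temperature (assumed)

    def accept_jump(bonds_before, bonds_after):
        dE = BOND_ENERGY * (bonds_before - bonds_after)  # broken bonds cost energy
        return dE <= 0 or random.random() < math.exp(-dE / KT)

    random.seed(0)
    # An atom leaving a 5-neighbour site for a 3-neighbour site raises the
    # energy, so the jump is accepted only with probability exp(-2/0.8):
    print(sum(accept_jump(5, 3) for _ in range(1000)) / 1000)  # ~0.08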
Abstract:
A detailed analysis procedure is described for evaluating rates of volumetric change in brain structures based on structural magnetic resonance (MR) images. In this procedure, a series of image-processing tools is employed to address the problems encountered in measuring rates of change from structural MR images. These tools include an algorithm for intensity non-uniformity correction, a robust algorithm for three-dimensional image registration with sub-voxel precision, and an algorithm for brain tissue segmentation. A unique feature of the procedure is a fractional volume model developed to provide a quantitative measure of the partial volume effect. With this model, the fractional constituent tissue volumes are evaluated for voxels at tissue boundaries that manifest the partial volume effect, allowing tissue boundaries to be defined at a sub-voxel level and in an automated fashion. Validation studies are presented for key algorithms, including segmentation and registration. An overall assessment of the method is provided through the evaluation of rates of brain atrophy in a group of normal elderly subjects, for whom the rate of brain atrophy due to normal aging is predictably small. An application of the method is given in Part II, where rates of brain atrophy in various brain regions are studied in relation to normal aging and Alzheimer's disease. (C) 2002 Elsevier Science Inc. All rights reserved.
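The end product of such a procedure is a rate of change; as a trivial worked example with invented volumes, an annualised rate could be reported as:

    # Annualised percent volume change from two time points (values invented).
    v0, v1, years = 1150.0, 1138.0, 2.0       # mL, mL, scan interval
    print(f"{(v1 - v0) / v0 / years * 100:.2f}% per year")  # ~ -0.52 %/yr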