Abstract:
Seven hundred and nineteen samples from throughout the Cainozoic section in CRP-3 were analysed with a Malvern Mastersizer laser particle analyser in order to derive a stratigraphic distribution of grain-size parameters downhole. Entropy analysis of these data (using the method of Woolfe and Michibayashi, 1995) allowed recognition of four groups of samples, each group characterised by a distinctive grain-size distribution. Group 1, which shows a multi-modal distribution, corresponds to mudrocks, interbedded mudrock/sandstone facies, muddy sandstones and diamictites. Group 2, with a sand-grade mode but showing wide dispersion of particle size, corresponds to muddy sandstones, a few cleaner sandstones and some conglomerates. Group 3 and Group 4 are also sand-dominated, with better grain-size sorting, and correspond to clean, well-washed sandstones of varying mean grain-size (medium and fine modes, respectively). The downhole disappearance of Group 1 and the dominance of Groups 3 and 4 reflect a concomitant change from mudrock- and diamictite-rich lithology to a section dominated by clean, well-washed sandstones with minor conglomerates. Progressive downhole increases in percentage sand and principal mode also reflect these changes. Significant shifts in grain-size parameters and entropy group membership were noted across sequence boundaries and seismic reflectors, as recognised in other studies.
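As a rough illustration of the entropy-grouping idea (a sketch only, with placeholder grain-size data; the actual procedure of Woolfe and Michibayashi, 1995 is not reproduced here), samples can be grouped on their grain-size class proportions and characterised by the Shannon entropy of each distribution:

```python
# Illustrative sketch only: grouping grain-size distributions by their class
# proportions, in the spirit of (not a reproduction of) the entropy analysis
# of Woolfe and Michibayashi (1995). The data are random placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholder data: 20 samples x 8 grain-size classes, rows summing to 1.
counts = rng.random((20, 8)) + 1e-9
proportions = counts / counts.sum(axis=1, keepdims=True)

# Shannon entropy per sample: low entropy = one dominant mode (well sorted),
# high entropy = poorly sorted or multi-modal.
entropy = -(proportions * np.log(proportions)).sum(axis=1)

# Partition the samples into four groups on their full distributions.
groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(proportions)

for g in range(4):
    members = groups == g
    print(f"Group {g + 1}: n={members.sum()}, mean entropy={entropy[members].mean():.2f}")
```

In this picture, low-entropy samples correspond to well-sorted, uni-modal distributions (cf. Groups 3 and 4), while high-entropy samples are poorly sorted or multi-modal (cf. Group 1).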
Abstract:
Using NONMEM, the population pharmacokinetics of perhexiline were studied in 88 patients (34 F, 54 M) who were being treated for refractory angina. Their mean +/- SD (range) age was 75 +/- 9.9 years (46-92), and the length of perhexiline treatment was 56 +/- 77 weeks (0.3-416). The sampling time after a dose was 14.1 +/- 21.4 hours (0.5-200), and the perhexiline plasma concentrations were 0.39 +/- 0.32 mg/L (0.03-1.56). A one-compartment model with first-order absorption was fitted to the data using the first-order (FO) approximation. The best model contained 2 subpopulations (obtained via the $MIXTURE subroutine) of 77 subjects (subgroup A) and 11 subjects (subgroup B) that had typical values for clearance (CL/F) of 21.8 L/h and 2.06 L/h, respectively. The volumes of distribution (V/F) were 1470 L and 260 L, respectively, which suggested a reduction in presystemic metabolism in subgroup B. The interindividual variability (CV%) was modeled logarithmically and for CL/F ranged from 69.1% (subgroup A) to 86.3% (subgroup B). The interindividual variability in V/F was 111%. The residual variability unexplained by the population model was 28.2%. These results confirm and extend the existing pharmacokinetic data on perhexiline, especially the bimodal distribution of CL/F manifested via an inherited deficiency in hepatic and extrahepatic CYP2D6 activity.
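The fitted structural model has a standard closed form; the sketch below evaluates it with the reported typical values for the two subpopulations. The dose and the absorption rate constant ka are not given in the abstract and are assumed purely for illustration:

```python
# Sketch of the fitted structural model: one compartment, first-order absorption.
# CL/F and V/F are the typical values reported for subgroups A and B; the dose
# and the absorption rate constant ka are assumed here for illustration only.
import numpy as np

def concentration(t, dose, cl_f, v_f, ka):
    """Plasma concentration (mg/L) at time t (h) after a single oral dose (mg)."""
    ke = cl_f / v_f                                   # elimination rate constant, 1/h
    return dose * ka / (v_f * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0, 48, 9)                             # hours post-dose
ka = 1.0                                              # assumed absorption rate, 1/h

for label, cl_f, v_f in [("subgroup A", 21.8, 1470.0), ("subgroup B", 2.06, 260.0)]:
    print(label, np.round(concentration(t, dose=100.0, cl_f=cl_f, v_f=v_f, ka=ka), 3))
```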
Abstract:
When the data consist of certain attributes measured on the same set of items in different situations, they can be described as a three-mode three-way array. A mixture likelihood approach can be implemented to cluster the items (i.e., one of the modes) on the basis of both of the other modes simultaneously (i.e., the attributes measured in different situations). In this paper, it is shown that this approach can be extended to handle three-mode three-way arrays where some of the data values are missing at random in the sense of Little and Rubin (1987). The methodology is illustrated by clustering the genotypes in a three-way soybean data set where various attributes were measured on genotypes grown in several environments.
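A minimal sketch of the general idea, assuming a hypothetical genotypes-by-attributes-by-environments array with values deleted at random: each item's attribute-by-situation slice is flattened into one feature vector and a Gaussian mixture is fitted. The crude mean imputation used here merely stands in for the paper's proper treatment of the missing values within the mixture likelihood:

```python
# Rough sketch of mixture-model clustering of a three-mode three-way array
# (items x attributes x situations) with values missing at random. Simple mean
# imputation is used before fitting; the paper handles the missing data within
# the mixture likelihood itself.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

n_items, n_attr, n_sit = 50, 4, 3              # e.g. genotypes x attributes x environments
data = rng.normal(size=(n_items, n_attr, n_sit))
data[rng.random(data.shape) < 0.1] = np.nan    # ~10% of values missing at random

# Flatten the attribute-by-situation slice into one feature vector per item.
X = data.reshape(n_items, n_attr * n_sit)

# Crude column-wise mean imputation (placeholder for the EM treatment).
col_means = np.nanmean(X, axis=0)
X = np.where(np.isnan(X), col_means, X)

labels = GaussianMixture(n_components=3, random_state=0).fit_predict(X)
print(np.bincount(labels))                     # cluster sizes
```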
Abstract:
Regional planners, policy makers and policing agencies all recognize the importance of better understanding the dynamics of crime. Theoretical and application-oriented approaches which provide insights into why and where crimes take place are much sought after. Geographic information systems and spatial analysis techniques, in particular, are proving to be essential for studying criminal activity. However, the capabilities of these quantitative methods continue to evolve. This paper explores the use of geographic information systems and spatial analysis approaches for examining crime occurrence in Brisbane, Australia. The analysis highlights novel capabilities for the analysis of crime in urban regions.
Abstract:
The present paper addresses two major concerns that were identified when developing neural network based prediction models and which can limit their wider applicability in the industry. The first problem is that neural network models do not appear to be readily available to a corrosion engineer. Therefore the first part of this paper describes a neural network model of CO2 corrosion which was created using a standard commercial software package and simple modelling strategies. It was found that such a model was able to capture practically all of the trends noticed in the experimental data with acceptable accuracy. This exercise has proven that a corrosion engineer could readily develop a neural network model such as the one described below for any problem at hand, given that sufficient experimental data exist. This applies even in cases where the understanding of the underlying processes is poor. The second problem arises in cases where not all of the required inputs to a model are known, or where they can be estimated only with limited accuracy. It seems advantageous to have models that can take a range rather than a single value as input. One such model, based on the so-called Monte Carlo approach, is presented. A number of comparisons are shown which illustrate how a corrosion engineer might use this approach to rapidly test the sensitivity of a model to the uncertainties associated with the input parameters. (C) 2001 Elsevier Science Ltd. All rights reserved.
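A hedged sketch of the second idea, propagating an uncertain input through a trained model by Monte Carlo sampling. The network, the synthetic training data and the input ranges below are illustrative stand-ins, not the paper's model:

```python
# Sketch of the Monte Carlo idea: feed a trained model a *range* of inputs by
# sampling, and inspect the spread of predicted corrosion rates. The network
# and data are illustrative stand-ins, not the paper's CO2 corrosion model.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Synthetic training set: inputs (temperature degC, pH, CO2 partial pressure bar)
# and a made-up corrosion-rate response with noise.
X = rng.uniform([20, 3.5, 0.5], [90, 7.0, 20.0], size=(500, 3))
y = 0.05 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.2, 500)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                     random_state=0).fit(X, y)

# Monte Carlo over an uncertain input: temperature known only as 60 +/- 5 degC.
n = 2000
samples = np.column_stack([
    rng.normal(60, 5, n),                # uncertain temperature
    np.full(n, 5.5),                     # pH assumed known
    np.full(n, 10.0),                    # pCO2 assumed known
])
pred = model.predict(samples)
print(f"predicted rate: mean={pred.mean():.2f}, 5-95% range = "
      f"{np.percentile(pred, 5):.2f} to {np.percentile(pred, 95):.2f}")
```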
Abstract:
Research on the stability of flavours during high-temperature extrusion cooking is reviewed. The important factors that affect flavour and aroma retention during the process of extrusion are illustrated. A substantial number of flavour volatiles that are incorporated prior to extrusion are normally lost during expansion because of steam distillation. Therefore, a general practice has been to introduce a flavour mix after the extrusion process. This extra operation requires a binding agent (normally oil), and may also result in a non-uniform distribution of the flavour and low oxidative stability of the flavours exposed on the surface. Therefore, the importance of encapsulated flavours, particularly the beta-cyclodextrin-flavour complex, is highlighted in this paper.
Abstract:
The cost and risk associated with mineral exploration in Australia increase significantly as companies move into deeper regolith-covered terrain. The ability to map the bedrock and the depth of weathering within an area has the potential to decrease this risk and increase the effectiveness of exploration programs. This paper is the second in a trilogy concerning the Grant's Patch area of the Eastern Goldfields. The recent development of the VPmg potential field inversion program, in conjunction with the acquisition of high-resolution gravity data over an area with extensive drilling, provided an opportunity to evaluate three-dimensional gravity inversion as a bedrock and regolith mapping tool. An apparent density model of the study area was constructed, with the ground represented as adjoining 200 m by 200 m vertical rectangular prisms. During inversion, VPmg incrementally adjusted the density of each prism until the free-air gravity response of the model replicated the observed data. For the Grant's Patch study area, this image of the apparent density values proved easier to interpret than the Bouguer gravity image. A regolith layer was then introduced into the model and realistic fresh-rock densities assigned to each basement prism according to its interpreted lithology. With the basement and regolith densities fixed, the VPmg inversion algorithm adjusted the depth to fresh basement until the misfit between the calculated and observed gravity response was minimised. The resulting geometry of the bedrock/regolith contact largely replicated the base of weathering indicated by drilling, with predicted depth-of-weathering values from gravity inversion typically within 15% of those logged during RAB and RC drilling.
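A greatly simplified sketch of the apparent-density step, assuming each prism can be treated as an independent slab of assumed thickness (which VPmg does not do); the density of each prism is nudged until its calculated gravity matches an observed value. The observed values and thickness are placeholders:

```python
# Greatly simplified sketch of the apparent-density inversion idea: adjust the
# density of each vertical prism until its calculated gravity matches the
# observed value. Each prism is treated as an independent infinite slab
# (Bouguer approximation), unlike VPmg's full forward modelling.
import numpy as np

G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2
THICKNESS = 1000.0                 # assumed prism thickness, m

def forward(density):
    """Gravity response of a slab of given density (kg/m^3), in mGal."""
    return 2 * np.pi * G * density * THICKNESS * 1e5   # m/s^2 -> mGal

observed = np.array([110.0, 112.5, 108.7, 115.2])      # placeholder observations, mGal
density = np.full_like(observed, 2670.0)               # starting density, kg/m^3

# Incrementally adjust densities until the misfit is small.
for _ in range(200):
    misfit = observed - forward(density)
    if np.max(np.abs(misfit)) < 0.01:
        break
    density += 0.5 * misfit / (2 * np.pi * G * THICKNESS * 1e5)  # damped update

print(np.round(density, 1))        # apparent densities per prism
```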
Abstract:
Performance indicators in the public sector have often been criticised for being inadequate and not conducive to analysing efficiency. The main objective of this study is to use data envelopment analysis (DEA) to examine the relative efficiency of Australian universities. Three performance models are developed, namely, overall performance, performance on delivery of educational services, and performance on fee-paying enrolments. The findings based on 1995 data show that the university sector was performing well on technical and scale efficiency but there was room for improving performance on fee-paying enrolments. There were also small slacks in input utilisation. More universities were operating at decreasing returns to scale, indicating a potential to downsize. DEA helps in identifying the reference sets for inefficient institutions and objectively determines productivity improvements. As such, it can be a valuable benchmarking tool for educational administrators and assist in more efficient allocation of scarce resources. In the absence of market mechanisms to price educational outputs, which renders traditional production or cost functions inappropriate, universities are particularly obliged to seek alternative efficiency analysis methods such as DEA.
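For readers unfamiliar with DEA, the sketch below solves an input-oriented CCR efficiency score for each decision-making unit as a linear program, using made-up inputs and outputs rather than the study's 1995 data or its three performance models:

```python
# Sketch of input-oriented CCR DEA efficiency scores, solved as linear programs.
# Inputs/outputs are made-up illustrative numbers, not the study's data.
import numpy as np
from scipy.optimize import linprog

# 5 universities, 2 inputs (staff, expenditure) and 2 outputs (graduates, research).
X = np.array([[100, 80, 120, 90, 110],        # inputs: rows = inputs, cols = DMUs
              [50, 40, 70, 45, 60]], float)
Y = np.array([[900, 700, 1000, 850, 950],     # outputs: rows = outputs, cols = DMUs
              [60, 55, 80, 50, 70]], float)

def ccr_efficiency(o):
    """Efficiency of DMU o: minimise theta s.t. a reference mix dominates it."""
    n = X.shape[1]
    c = np.r_[1.0, np.zeros(n)]                          # variables: [theta, lambdas]
    A_in = np.hstack([-X[:, [o]], X])                    # sum(lambda*x) <= theta*x_o
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])   # sum(lambda*y) >= y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for o in range(X.shape[1]):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```

Efficient units score 1; an inefficient unit's score shows the proportional input reduction needed to reach its reference set, which is the benchmarking use described above.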
Abstract:
Large (>1600 µm), ingestively masticated particles of bermuda grass (Cynodon dactylon L. Pers.) leaf and stem, labelled with Yb-169 and Ce-144 respectively, were inserted into the rumen digesta raft of heifers grazing bermuda grass. The concentrations of markers in digesta sampled from the raft and ventral rumen were monitored at regular intervals over approximately 144 h. The data from the two sampling sites were simultaneously fitted to two-pool (raft and ventral rumen-reticulum) models with either reversible or sequential flow between the two pools. The sequential flow model fitted the data as well as the reversible flow model, but the reversible flow model was used because of its wider applicability. The reversible flow model, hereafter called the raft model, had the following features: a relatively slow age-dependent transfer rate from the raft (means for a gamma-2 distributed rate parameter of 0.0740 h^-1 for leaf v. 0.0478 h^-1 for stem), a very slow first-order reversible flow from the ventral rumen to the raft (mean for leaf and stem 0.010 h^-1) and a very rapid first-order exit from the ventral rumen (mean for leaf and stem 0.44 h^-1). The raft was calculated to occupy approximately 0.82 of the total rumen DM of the raft and ventral rumen pools. Fitting a sequential two-pool model or a single exponential model individually to values from each of the two sampling sites yielded similar parameter values for both sites and faster rate parameters for leaf than for stem, in agreement with the raft model. These results were interpreted as indicating that the raft forms a large, relatively inert pool within the rumen: particles generated within the raft have difficulty escaping, but once in the ventral rumen pool they escape quickly with a low probability of returning to the raft. It was concluded that the raft model gave a good interpretation of the data and emphasised escape from, and movement within, the raft as important components of the residence time of leaf and stem particles within the rumen digesta of cattle.
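A hedged sketch of the raft model's structure, with the gamma-2 age-dependent escape from the raft approximated by two sequential first-order sub-pools and the leaf parameter means reported above; the initial marker amount and the sub-pool representation are assumptions for illustration only:

```python
# Hedged sketch of the "raft" model: age-dependent (gamma-2) escape from the
# raft approximated by two sequential first-order sub-pools, a slow reversible
# flow back from the ventral rumen, and a fast first-order exit. Rate values
# are the leaf means reported above; the unit marker dose is assumed.
import numpy as np
from scipy.integrate import odeint

K_RAFT = 0.0740    # gamma-2 rate parameter, leaf (1/h)
K_BACK = 0.010     # ventral rumen -> raft (1/h)
K_EXIT = 0.44      # exit from ventral rumen (1/h)

def model(y, t):
    raft1, raft2, ventral = y
    d_raft1 = -K_RAFT * raft1 + K_BACK * ventral
    d_raft2 = K_RAFT * raft1 - K_RAFT * raft2
    d_ventral = K_RAFT * raft2 - (K_EXIT + K_BACK) * ventral
    return [d_raft1, d_raft2, d_ventral]

t = np.linspace(0, 144, 7)                    # hours, as in the sampling period
y = odeint(model, [1.0, 0.0, 0.0], t)         # marker dose placed in the raft

for ti, (r1, r2, v) in zip(t, y):
    print(f"t={ti:5.1f} h  raft={r1 + r2:.3f}  ventral rumen={v:.4f}")
```

The slow raft rate and fast ventral-rumen exit reproduce the qualitative behaviour described above: marker lingers in the raft but clears quickly once it reaches the ventral rumen.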