Abstract:
Using NONMEM, the population pharmacokinetics of perhexiline were studied in 88 patients (34 F, 54 M) who were being treated for refractory angina. Their mean +/- SD (range) age was 75 +/- 9.9 years (46-92), and the length of perhexiline treatment was 56 +/- 77 weeks (0.3-416). The sampling time after a dose was 14.1 +/- 21.4 hours (0.5-200), and the perhexiline plasma concentrations were 0.39 +/- 0.32 mg/L (0.03-1.56). A one-compartment model with first-order absorption was fitted to the data using the first-order (FO) approximation. The best model contained 2 subpopulations (obtained via the $MIXTURE subroutine) of 77 subjects (subgroup A) and 11 subjects (subgroup B) that had typical values for clearance (CL/F) of 21.8 L/h and 2.06 L/h, respectively. The volumes of distribution (V/F) were 1470 L and 260 L, respectively, which suggested a reduction in presystemic metabolism in subgroup B. The interindividual variability (CV%) was modeled logarithmically and for CL/F ranged from 69.1% (subgroup A) to 86.3% (subgroup B). The interindividual variability in V/F was 111%. The residual variability unexplained by the population model was 28.2%. These results confirm and extend the existing pharmacokinetic data on perhexiline, especially the bimodal distribution of CL/F manifested via an inherited deficiency in hepatic and extrahepatic CYP2D6 activity.
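As a rough illustration of the fitted structural model, the sketch below evaluates the standard one-compartment, first-order-absorption concentration equation at the subgroup A typical values (CL/F = 21.8 L/h, V/F = 1470 L). The dose and the absorption rate constant ka are illustrative assumptions, since the abstract does not report them, and only a single dose is modelled (no steady-state accumulation).

```python
import math

def conc_one_compartment_oral(dose_mg, t_h, cl_f_l_per_h, v_f_l, ka_per_h):
    """Plasma concentration (mg/L) at time t after a single oral dose for a
    one-compartment model with first-order absorption, parameterised
    directly in the apparent quantities CL/F and V/F."""
    ke = cl_f_l_per_h / v_f_l                       # elimination rate constant (1/h)
    coeff = dose_mg * ka_per_h / (v_f_l * (ka_per_h - ke))
    return coeff * (math.exp(-ke * t_h) - math.exp(-ka_per_h * t_h))

# Subgroup A typical values from the abstract; dose and ka are assumed.
print(conc_one_compartment_oral(dose_mg=100, t_h=14.1,
                                cl_f_l_per_h=21.8, v_f_l=1470, ka_per_h=1.0))
```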
Abstract:
When the data consist of certain attributes measured on the same set of items in different situations, they can be described as a three-mode three-way array. A mixture likelihood approach can be implemented to cluster the items (i.e., one of the modes) on the basis of both of the other modes simultaneously (i.e., the attributes measured in different situations). In this paper, it is shown that this approach can be extended to handle three-mode three-way arrays where some of the data values are missing at random in the sense of Little and Rubin (1987). The methodology is illustrated by clustering the genotypes in a three-way soybean data set where various attributes were measured on genotypes grown in several environments.
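The abstract does not spell out the estimator, so the following is only a minimal sketch of the idea, assuming the items x attributes x situations array has been flattened to an items x (attributes*situations) matrix with NaNs marking missing values, and that components have diagonal covariances:

```python
import numpy as np

def em_mixture_missing(X, k, n_iter=100, seed=0):
    """Diagonal-Gaussian mixture fitted by EM to an items x variables matrix
    containing NaNs. Entries missing at random are marginalised out of each
    component's likelihood, which is exact for a diagonal covariance."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    obs = ~np.isnan(X)                         # mask of observed entries
    Xf = np.where(obs, X, 0.0)                 # NaNs zeroed; masked below
    mu = Xf[rng.choice(n, size=k, replace=False)]
    var = np.ones((k, d))
    props = np.full(k, 1.0 / k)                # mixing proportions
    for _ in range(n_iter):
        # E-step: per-item log responsibilities over observed entries only.
        logp = np.empty((n, k))
        for j in range(k):
            ll = -0.5 * (np.log(2 * np.pi * var[j]) + (Xf - mu[j]) ** 2 / var[j])
            logp[:, j] = np.log(props[j]) + (ll * obs).sum(axis=1)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted updates using observed entries only.
        for j in range(k):
            w = r[:, j][:, None] * obs
            mu[j] = (w * Xf).sum(axis=0) / (w.sum(axis=0) + 1e-12)
            var[j] = (w * (Xf - mu[j]) ** 2).sum(axis=0) / (w.sum(axis=0) + 1e-12) + 1e-6
        props = r.mean(axis=0)
    return r.argmax(axis=1), mu
```

For the soybean example, the rows would be genotypes and the columns the attribute-environment combinations, so clustering uses both remaining modes at once.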
Abstract:
Regional planners, policy makers and policing agencies all recognize the importance of better understanding the dynamics of crime. Theoretical and application-oriented approaches which provide insights into why and where crimes take place are much sought after. Geographic information systems and spatial analysis techniques, in particular, are proving to be essential for studying criminal activity. However, the capabilities of these quantitative methods continue to evolve. This paper explores the use of geographic information systems and spatial analysis approaches for examining crime occurrence in Brisbane, Australia. The analysis highlights novel capabilities for the analysis of crime in urban regions.
Abstract:
The present paper addresses two major concerns that were identified when developing neural network-based prediction models and which can limit their wider applicability in the industry. The first problem is that neural network models do not appear to be readily available to a corrosion engineer. Therefore, the first part of this paper describes a neural network model of CO2 corrosion which was created using a standard commercial software package and simple modelling strategies. It was found that such a model was able to capture practically all of the trends noticed in the experimental data with acceptable accuracy. This exercise has proven that a corrosion engineer could readily develop a neural network model such as the one described here for any problem at hand, given that sufficient experimental data exist. This applies even in cases when the understanding of the underlying processes is poor. The second problem arises in cases when not all the required inputs for a model are known, or when they can be estimated only with a limited degree of accuracy. It seems advantageous to have models that can take a range rather than a single value as input. One such model, based on the so-called Monte Carlo approach, is presented. A number of comparisons are shown which illustrate how a corrosion engineer might use this approach to rapidly test the sensitivity of a model to the uncertainties associated with the input parameters.
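As a sketch of the second idea, the function below propagates (low, high) input ranges through any point-value predictor by sampling. The uniform sampling and the ranges are assumptions for illustration, and toy_model is a hypothetical stand-in for the trained neural network:

```python
import numpy as np

def monte_carlo_rate(model, input_ranges, n_samples=10_000, seed=0):
    """Propagate uncertain inputs through a point-value model. Each input is
    given as a (low, high) range and sampled uniformly; the output is a
    distribution of predicted rates rather than a single number."""
    rng = np.random.default_rng(seed)
    names = list(input_ranges)
    draws = {k: rng.uniform(*input_ranges[k], n_samples) for k in names}
    rates = np.array([model(**{k: draws[k][i] for k in names})
                      for i in range(n_samples)])
    return rates.mean(), rates.std(), np.percentile(rates, [5, 95])

# Hypothetical stand-in for a trained NN predictor of CO2 corrosion rate.
def toy_model(temperature_c, ph, co2_pressure_bar):
    return max(0.0, 0.1 * temperature_c - 2.0 * ph + 5.0 * co2_pressure_bar)

print(monte_carlo_rate(toy_model, {"temperature_c": (40, 60),
                                   "ph": (4.5, 5.5),
                                   "co2_pressure_bar": (0.5, 2.0)}))
```

Widening one input's range at a time and watching the spread of the output is the kind of quick sensitivity test the abstract describes.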
Abstract:
A software package that efficiently solves a comprehensive range of problems based on coupled complex nonlinear stochastic ODEs and PDEs is outlined. Its input and output syntax is formulated as a subset of XML, thus taking a step towards a standard for specifying numerical simulations.
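The package's XML syntax is not shown in the abstract; purely to illustrate the class of problems it targets, here is a minimal Euler-Maruyama integrator for a nonlinear stochastic ODE:

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t_end, dt, seed=0):
    """Integrate dX = drift(X, t) dt + diffusion(X, t) dW
    with the Euler-Maruyama scheme."""
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    x = np.empty(n + 1)
    x[0] = x0
    t = 0.0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))          # Wiener increment
        x[i + 1] = x[i] + drift(x[i], t) * dt + diffusion(x[i], t) * dw
        t += dt
    return x

# Nonlinear example: logistic growth with multiplicative noise.
path = euler_maruyama(lambda x, t: x * (1 - x),
                      lambda x, t: 0.1 * x, x0=0.1, t_end=10.0, dt=0.01)
```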
Abstract:
The cost and risk associated with mineral exploration in Australia increases significantly as companies move into deeper regolith-covered terrain. The ability to map the bedrock and the depth of weathering within an area has the potential to decrease this risk and increase the effectiveness of exploration programs. This paper is the second in a trilogy concerning the Grant's Patch area of the Eastern Goldfields. The recent development of the VPmg potential field inversion program in conjunction with the acquisition of high-resolution gravity data over an area with extensive drilling provided an opportunity to evaluate three-dimensional gravity inversion as a bedrock and regolith mapping tool. An apparent density model of the study area was constructed, with the ground represented as adjoining 200 m by 200 m vertical rectangular prisms. During inversion VPmg incrementally adjusted the density of each prism until the free-air gravity response of the model replicated the observed data. For the Grant's Patch study area, this image of the apparent density values proved easier to interpret than the Bouguer gravity image. A regolith layer was introduced into the model and realistic fresh-rock densities assigned to each basement prism according to its interpreted lithology. With the basement and regolith densities fixed, the VPmg inversion algorithm adjusted the depth to fresh basement until the misfit between the calculated and observed gravity response was minimised. The resulting geometry of the bedrock/regolith contact largely replicated the base of weathering indicated by drilling with predicted depth of weathering values from gravity inversion typically within 15% of those logged during RAB and RC drilling.
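The following toy sketch illustrates the incremental-adjustment idea in its simplest linear form, as steepest descent on the least-squares misfit between modelled and observed gravity. It is not VPmg's algorithm, and the linear station-by-prism sensitivity matrix G is an assumption of the sketch:

```python
import numpy as np

def invert_densities(G, g_obs, n_iter=500):
    """Toy apparent-density inversion. G (stations x prisms) maps prism
    density contrasts to free-air gravity; densities are adjusted
    incrementally until the modelled response fits the observations."""
    rho = np.zeros(G.shape[1])                 # start at zero density contrast
    lip = np.linalg.norm(G, 2) ** 2            # Lipschitz bound for a safe step
    for _ in range(n_iter):
        residual = g_obs - G @ rho             # observed minus modelled gravity
        rho += (G.T @ residual) / lip          # incremental density adjustment
    return rho
```

The second stage described in the abstract (fixing densities and adjusting the depth to fresh basement instead) is the same fit-the-misfit loop with geometry, rather than density, as the free parameter.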
Abstract:
Performance indicators in the public sector have often been criticised for being inadequate and not conducive to analysing efficiency. The main objective of this study is to use data envelopment analysis (DEA) to examine the relative efficiency of Australian universities. Three performance models are developed, namely, overall performance, performance on delivery of educational services, and performance on fee-paying enrolments. The findings based on 1995 data show that the university sector was performing well on technical and scale efficiency but there was room for improving performance on fee-paying enrolments. There were also small slacks in input utilisation. More universities were operating at decreasing returns to scale, indicating a potential to downsize. DEA helps in identifying the reference sets for inefficient institutions and objectively determines productivity improvements. As such, it can be a valuable benchmarking tool for educational administrators and assist in more efficient allocation of scarce resources. In the absence of market mechanisms to price educational outputs, which renders traditional production or cost functions inappropriate, universities are particularly obliged to seek alternative efficiency analysis methods such as DEA.
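As a concrete sketch of the underlying computation, the function below solves the input-oriented CCR envelopment program for each decision-making unit with scipy's linprog. The abstract does not state which DEA variant was used, so the CCR form and the tiny data set are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency for each DMU.
    X: (m inputs x n DMUs), Y: (s outputs x n DMUs). For each unit o,
    solve: min theta s.t. X@lam <= theta*x_o, Y@lam >= y_o, lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                  # variables: [theta, lam]
        A_in = np.hstack([-X[:, [o]], X])            # X@lam - theta*x_o <= 0
        A_out = np.hstack([np.zeros((s, 1)), -Y])    # -Y@lam <= -y_o
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[:, o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
        scores.append(res.fun)                       # theta = efficiency score
    return np.array(scores)

# Tiny illustration: 4 hypothetical universities, 2 inputs, 1 output.
X = np.array([[20., 30., 40., 25.], [5., 8., 6., 7.]])
Y = np.array([[100., 120., 150., 90.]])
print(dea_ccr_input(X, Y))
```

A score of 1 marks an efficient unit on the frontier; scores below 1 indicate the proportional input reduction needed to reach it, and the positive lam values identify the reference set for an inefficient unit.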
Abstract:
Understanding the genetic architecture of quantitative traits can greatly assist the design of strategies for their manipulation in plant-breeding programs. For a number of traits, genetic variation can be the result of segregation of a few major genes and many polygenes (minor genes). The joint segregation analysis (JSA) is a maximum-likelihood approach for fitting segregation models through the simultaneous use of phenotypic information from multiple generations. Our objective in this paper was to use computer simulation to quantify the power of the JSA method for testing the mixed-inheritance model for quantitative traits when it was applied to the six basic generations: both parents (P1 and P2), F1, F2, and both backcross generations (B1 and B2) derived from crossing the F1 to each parent. A total of 1968 genetic model-experiment scenarios were considered in the simulation study to quantify the power of the method. Factors that interacted to influence the power of the JSA method to correctly detect genetic models were: (1) whether there were one or two major genes in combination with polygenes, (2) the heritability of the major genes and polygenes, (3) the level of dispersion of the major genes and polygenes between the two parents, and (4) the number of individuals examined in each generation (population size). The greatest levels of power were observed for the genetic models defined with simple inheritance; e.g., the power was greater than 90% for the one-major-gene model, regardless of the population size and major-gene heritability. Lower levels of power were observed for the genetic models with complex inheritance (major genes and polygenes), low heritability, small population sizes and a large dispersion of favourable genes between the two parents; e.g., the power was less than 5% for the two-major-gene model with a heritability value of 0.3 and population sizes of 100 individuals. The JSA methodology was then applied to a previously studied sorghum data-set to investigate the genetic control of the putative drought-resistance trait osmotic adjustment in three crosses. The previous study concluded that there were two major genes segregating for osmotic adjustment in the three crosses. Application of the JSA method resulted in a change in the proposed genetic model: the presence of the two major genes was confirmed, with the addition of an unspecified number of polygenes.
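As a much-simplified sketch of the kind of simulation involved, the code below estimates the power to detect one major gene in a single F2 generation via a likelihood-ratio test of one normal versus a 1:2:1 three-component mixture. The full JSA fits all six generations jointly, so every modelling choice here (single generation, common variance, nominal chi-square cut-off) is an assumption of the sketch:

```python
import numpy as np

def lrt_major_gene(y, n_iter=200):
    """2*log likelihood ratio: single normal vs a 3-component normal mixture
    with Mendelian F2 proportions (1:2:1) and common variance, fitted by EM."""
    props = np.array([0.25, 0.5, 0.25])
    mu = np.quantile(y, [0.2, 0.5, 0.8])           # crude starting means
    var = y.var()
    for _ in range(n_iter):
        dens = np.exp(-0.5 * (y[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = props * dens
        r /= r.sum(axis=1, keepdims=True)
        mu = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)
        var = (r * (y[:, None] - mu) ** 2).sum() / len(y)
    dens = np.exp(-0.5 * (y[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    ll1 = np.log((props * dens).sum(axis=1)).sum()
    v0 = y.var()
    ll0 = (-0.5 * np.log(2 * np.pi * v0) - (y - y.mean()) ** 2 / (2 * v0)).sum()
    return 2 * (ll1 - ll0)

def detection_power(n=100, h2_major=0.3, reps=200, crit=5.99, seed=0):
    """Fraction of simulated F2 samples in which the mixture is preferred.
    crit is a nominal chi-square(2) cut-off; mixture LRTs do not follow a
    standard chi-square, so real analyses use corrected thresholds."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        g = rng.choice([-1.0, 0.0, 1.0], size=n, p=[0.25, 0.5, 0.25])
        a = np.sqrt(2 * h2_major / (1 - h2_major))   # additive major-gene effect
        y = a * g + rng.normal(0.0, 1.0, size=n)     # polygenes + environment
        hits += lrt_major_gene(y) > crit
    return hits / reps
```

Sweeping n and h2_major in such a loop reproduces, in miniature, the power surfaces over population size and heritability that the study tabulated.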
Abstract:
The 16S rRNA gene (16S rDNA) is currently the most widely used gene for estimating the evolutionary history of prokaryotes. To date, there are more than 30 000 16S rDNA sequences available from the core databases GenBank, EMBL and DDBJ. This great number may cause a dilemma when composing datasets for phylogenetic analysis, since the choice and number of reference organisms are known to affect the resulting tree topology. A group of sequences appearing monophyletic in one dataset may not be so in another. This can be especially problematic when establishing the relationships of distantly related sequences at the division (phylum) level. In this study, a multiple-outgroup approach to resolving division-level phylogenetic relationships is suggested, using 16S rDNA data. The approach is illustrated by two case studies concerning the monophyly of two recently proposed bacterial divisions, OP9 and OP10.
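A minimal sketch of the multiple-outgroup workflow, with infer_tree() and is_monophyletic() as hypothetical stand-ins for a real alignment-and-inference pipeline (they are not from any particular library):

```python
def monophyly_across_outgroups(ingroup, candidate_outgroups,
                               infer_tree, is_monophyletic):
    """ingroup: list of (id, sequence) pairs for the putative division;
    candidate_outgroups: dict mapping outgroup name -> list of such pairs.
    The tree is re-inferred once per outgroup set, and the ingroup's
    monophyly is recorded each time; stability across many outgroup
    choices is treated as stronger evidence than any single dataset."""
    ingroup_ids = [seq_id for seq_id, _ in ingroup]
    verdicts = {}
    for name, outgroup in candidate_outgroups.items():
        tree = infer_tree(ingroup + outgroup)        # one dataset per outgroup
        verdicts[name] = is_monophyletic(tree, ingroup_ids)
    return verdicts
```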