14 results for Maximum Degree Proximity algorithm (MAX-DPA)
in University of Queensland eSpace - Australia
Abstract:
Adsorption of different aromatic compounds (two of them electrolytes) onto an untreated activated carbon (F100) is investigated. The experimental isotherms are fitted with the homogeneous and heterogeneous Langmuir models. Theoretical maximum adsorption capacities based on the BET surface area of the adsorbent are not close to the real values. The affinity and the heterogeneity of the adsorption system are observed to be related to the pKa of the solutes. The maximum adsorption capacity (Qmax) of activated carbon for each solute depends on the molecular area, the type of functional group attached to the aromatic compound, and the pH of the solution. The arrangement of the molecules on the carbon surface is not face down; rather, the packing arrangement is most likely edge-to-face (sorbate-sorbent) with various tilt angles. The carbon was characterized by N2 and CO2 adsorption, and X-ray photoelectron spectroscopy (XPS) was used for surface elemental analysis of the activated carbon.
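The Langmuir fitting described above is straightforward to reproduce with nonlinear least squares. The sketch below is illustrative only: it assumes the standard homogeneous Langmuir form q = Qmax·K·C/(1 + K·C), and the concentration/loading data are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, K):
    """Homogeneous Langmuir isotherm: loading q at equilibrium
    concentration C, with capacity q_max and affinity constant K."""
    return q_max * K * C / (1.0 + K * C)

# Hypothetical isotherm data (mmol/L vs mmol/g); not from the paper.
C = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 4.0])
q = np.array([0.30, 0.52, 0.90, 1.25, 1.55, 1.75, 1.86])

(q_max, K), _ = curve_fit(langmuir, C, q, p0=[2.0, 1.0])
print(f"Q_max ~ {q_max:.2f} mmol/g, K ~ {K:.2f} L/mmol")
```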
Abstract:
Minimum/maximum autocorrelation factors (MAF) provide a suitable algorithm for orthogonalization of a vector random field. Orthogonalization avoids the use of multivariate geostatistics during joint stochastic modeling of geological attributes. This manuscript demonstrates in a practical way that computation of MAF is the same as discriminant analysis of the nested structures. Mathematica software is used to illustrate MAF calculations from a linear model of coregionalization (LMC). The limitation of two nested structures in the LMC for MAF is also discussed and linked to the effects of anisotropy and support. The analysis elucidates the matrix properties behind the approach and clarifies relationships that may be useful for model-based approaches. (C) 2003 Elsevier Science Ltd. All rights reserved.
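As a hedged illustration of the MAF transform the abstract discusses, the sketch below implements the common data-driven, two-stage version (PCA whitening followed by an eigen-decomposition of lag increments). It is a generic formulation, not the Mathematica/LMC derivation used in the paper, and it assumes observations ordered along a transect so that a shift by `lag` yields spatial increments.

```python
import numpy as np

def maf_factors(Z, lag=1):
    """Two-stage min/max autocorrelation factor (MAF) transform.

    Z : (n, k) array of collocated multivariate observations ordered
        along a transect. Returns a (k, k) matrix A such that Y = Z @ A
    has uncorrelated factors ordered from most to least spatially
    continuous.
    """
    Zc = Z - Z.mean(axis=0)                  # centre the variables
    # Stage 1: sphere the data (PCA whitening of the covariance).
    cov = np.cov(Zc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs / np.sqrt(vals)                 # whitening matrix
    X = Zc @ W                               # whitened scores, cov = I
    # Stage 2: eigen-decompose the covariance of lag-h increments;
    # small increment variance means high spatial autocorrelation.
    D = X[lag:] - X[:-lag]
    dvals, dvecs = np.linalg.eigh(np.cov(D, rowvar=False))
    return W @ dvecs
```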
Abstract:
The modelling of inpatient length of stay (LOS) has important implications in health care studies. Finite mixture distributions are usually used to model the heterogeneous LOS distribution, because a certain proportion of patients sustain a longer stay. However, since morbidity data are collected from hospitals, observations clustered within the same hospital are often correlated. The generalized linear mixed model approach is adopted to accommodate the inherent correlation via unobservable random effects. An EM algorithm is developed to obtain residual maximum quasi-likelihood estimation. The proposed hierarchical mixture regression approach enables the identification and assessment of factors influencing the long-stay proportion and the LOS for the long-stay patient subgroup. A neonatal LOS data set is used for illustration. (C) 2003 Elsevier Science Ltd. All rights reserved.
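A minimal sketch of the finite-mixture idea for LOS data, assuming a two-component normal mixture on the log scale; it deliberately ignores the paper's hospital-level random effects and quasi-likelihood machinery, and the second component plays the role of the long-stay subgroup.

```python
import numpy as np
from scipy.stats import norm

def em_two_component_los(los, n_iter=200):
    """EM for a two-component normal mixture on log(length of stay).
    Returns (pi_long, component means, component sds)."""
    x = np.log(los)
    mu = np.array([x.min(), x.max()])        # crude initial component means
    sd = np.array([x.std(), x.std()])
    pi = 0.5                                 # initial long-stay proportion
    for _ in range(n_iter):
        # E-step: posterior probability each stay belongs to the long group
        p_short = (1 - pi) * norm.pdf(x, mu[0], sd[0])
        p_long = pi * norm.pdf(x, mu[1], sd[1])
        r = p_long / (p_short + p_long)
        # M-step: responsibility-weighted updates of the mixture parameters
        pi = r.mean()
        mu = np.array([np.average(x, weights=1 - r),
                       np.average(x, weights=r)])
        sd = np.sqrt(np.array([np.average((x - mu[0])**2, weights=1 - r),
                               np.average((x - mu[1])**2, weights=r)]))
    return pi, mu, sd
```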
Abstract:
Mixture models implemented via the expectation-maximization (EM) algorithm are being increasingly used in a wide range of problems in pattern recognition such as image segmentation. However, the EM algorithm requires considerable computational time in its application to huge data sets such as a three-dimensional magnetic resonance (MR) image of over 10 million voxels. Recently, it was shown that a sparse, incremental version of the EM algorithm could improve its rate of convergence. In this paper, we show how this modified EM algorithm can be speeded up further by adopting a multiresolution kd-tree structure in performing the E-step. The proposed algorithm outperforms some other variants of the EM algorithm for segmenting MR images of the human brain. (C) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
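In the spirit of the speed-up described above (though not the paper's exact algorithm), the sketch below summarises 1-D voxel intensities in kd-tree-style leaves and evaluates Gaussian-mixture responsibilities once per leaf rather than once per voxel; every voxel in a leaf is treated as sharing the leaf mean, which is the coarsest version of a multiresolution E-step.

```python
import numpy as np
from scipy.stats import norm

def build_leaves(x, depth):
    """Recursively median-split 1-D intensities; return (mean, count) per leaf."""
    if depth == 0 or len(x) <= 1:
        return [(x.mean(), len(x))]
    m = np.median(x)
    left, right = x[x <= m], x[x > m]
    if len(left) == 0 or len(right) == 0:
        return [(x.mean(), len(x))]
    return build_leaves(left, depth - 1) + build_leaves(right, depth - 1)

def coarse_e_step(leaves, pi, mu, sd):
    """E-step evaluated once per leaf instead of once per voxel."""
    means = np.array([m for m, _ in leaves])
    counts = np.array([c for _, c in leaves], dtype=float)
    dens = pi * norm.pdf(means[:, None], mu, sd)   # (leaves, components)
    resp = dens / dens.sum(axis=1, keepdims=True)  # responsibilities
    Nk = (resp * counts[:, None]).sum(axis=0)      # population-weighted stats
    return resp, Nk
```

With millions of voxels collapsed into a few thousand leaves, each E-step touches far fewer points, which is where the reported speed-up comes from.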
Abstract:
The expectation-maximization (EM) algorithm has been of considerable interest in recent years as the basis for various algorithms in application areas of neural networks such as pattern recognition. However, there exist some misconceptions concerning its application to neural networks. In this paper, we clarify these misconceptions and consider how the EM algorithm can be adopted to train multilayer perceptron (MLP) and mixture of experts (ME) networks in applications to multiclass classification. We identify some situations where the application of the EM algorithm to train MLP networks may be of limited value and discuss some ways of handling the difficulties. For ME networks, it is reported in the literature that networks trained by the EM algorithm using the iteratively reweighted least squares (IRLS) algorithm in the inner loop of the M-step often performed poorly in multiclass classification. However, we found that the convergence of the IRLS algorithm is stable and that the log likelihood is monotonically increasing when a learning rate smaller than one is adopted. Also, we propose the use of an expectation-conditional maximization (ECM) algorithm to train ME networks. Its performance is demonstrated to be superior to the IRLS algorithm on some simulated and real data sets.
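The stabilised-IRLS finding is easy to demonstrate in miniature. The sketch below applies a damped Newton/IRLS step with a learning rate below one to binary logistic regression, a simplified stand-in for the multiclass inner loop discussed in the paper.

```python
import numpy as np

def damped_irls_logistic(X, y, lr=0.5, n_iter=50):
    """IRLS for binary logistic regression with a learning rate < 1.

    A damped Newton step, echoing the paper's finding that IRLS in the
    M-step is stable when the step length is shrunk below one.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))     # fitted probabilities
        W = p * (1 - p)                          # IRLS weights
        H = X.T @ (W[:, None] * X)               # Fisher information
        grad = X.T @ (y - p)                     # score vector
        beta = beta + lr * np.linalg.solve(H, grad)
    return beta
```

Setting lr=1.0 recovers the undamped IRLS step; lr < 1 trades speed for the monotone likelihood behaviour the abstract reports.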
Abstract:
One of the most important determinants of dermatological and systemic penetration after topical application is the delivery, or flux, of solutes into or through the skin. The maximum dose of solute that can be delivered over a given period of time and area of application is defined by its maximum flux (Jmax, mol per cm² per h) from a given vehicle. In this work, Jmax values from aqueous solution across human skin were acquired or estimated from experimental data and correlated with solute physicochemical properties. Whereas epidermal permeability coefficients (kp) are optimally correlated with the solute octanol-water partition coefficient (Kow) and molecular weight (MW), MW was found to be the dominant determinant of Jmax for this literature data set: log Jmax = -3.90 - 0.0190 MW (n = 87, r² = 0.847, p
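As a worked example of the fitted relation (for a hypothetical solute, not one from the data set): at MW = 200, log Jmax = -3.90 - 0.0190 × 200 = -7.70, giving Jmax of about 2.0 × 10⁻⁸ mol per cm² per h; doubling MW to 400 lowers the prediction by a further 3.8 log units, which illustrates how strongly molecular size limits maximum flux.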
Abstract:
A generic method for the estimation of parameters of Stochastic Ordinary Differential Equations (SODEs) is introduced and developed. This algorithm, called the GePERs method, utilises a genetic optimisation algorithm to minimise a stochastic objective function based on the Kolmogorov-Smirnov (KS) statistic, which is formed from numerical simulations. Some of the factors that improve the precision of the estimates are also examined. The method is used to estimate parameters of diffusion equations and jump-diffusion equations, and is applied to the problem of model selection for the Queensland electricity market. (C) 2003 Elsevier B.V. All rights reserved.
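A toy version of the simulate-and-compare idea, with scipy's differential evolution (a genetic-style optimiser) standing in for the paper's genetic algorithm and a geometric Brownian motion as the SODE; the observed data are synthetic and all parameter choices below are assumptions, not the GePERs implementation.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
observed = rng.lognormal(mean=0.05, sigma=0.2, size=2000)  # stand-in data

def simulate_terminal(mu, sigma, n=2000, T=1.0, steps=50):
    """Euler scheme for dX = mu*X dt + sigma*X dW; returns X_T samples."""
    dt = T / steps
    X = np.ones(n)
    for _ in range(steps):
        X += mu * X * dt + sigma * X * np.sqrt(dt) * rng.standard_normal(n)
    return X

def ks_objective(params):
    """Stochastic objective: KS distance between simulated and observed."""
    mu, sigma = params
    return ks_2samp(simulate_terminal(mu, sigma), observed).statistic

result = differential_evolution(ks_objective,
                                bounds=[(-0.5, 0.5), (0.01, 1.0)],
                                seed=1, maxiter=30, tol=1e-3)
print(result.x)   # estimated (mu, sigma)
```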
Abstract:
We have used the Two-Degree Field (2dF) instrument on the Anglo-Australian Telescope (AAT) to obtain redshifts of a sample of z < 3 and 18.0 < g < 21.85 quasars selected from Sloan Digital Sky Survey (SDSS) imaging. These data are part of a larger joint programme between the SDSS and 2dF communities to obtain spectra of faint quasars and luminous red galaxies, namely the 2dF-SDSS LRG and QSO (2SLAQ) Survey. We describe the quasar selection algorithm and present the resulting number counts and luminosity function of 5645 quasars in 105.7 deg². The bright-end number counts and luminosity functions agree well with determinations from the 2dF QSO Redshift Survey (2QZ) data to g ~ 20.2. However, at the faint end, the 2SLAQ number counts and luminosity functions are steeper (i.e. require more faint quasars) than the final 2QZ results from Croom et al., but are consistent with the preliminary 2QZ results from Boyle et al. Using the functional form adopted for the 2QZ analysis (a double power law with pure luminosity evolution characterized by a second-order polynomial in redshift), we find a faint-end slope of β = -1.78 ± 0.03 if we allow all of the parameters to vary, and β = -1.45 ± 0.03 if we allow only the faint-end slope and normalization to vary (holding all other parameters equal to the final 2QZ values). Over the magnitude range covered by the 2SLAQ survey, our maximum-likelihood fit to the data yields 32 per cent more quasars than the final 2QZ parametrization, but is not inconsistent with other g > 21 deep surveys for quasars. The 2SLAQ data exhibit no well-defined 'break' in the number counts or luminosity function, but do clearly flatten with increasing magnitude. Finally, we find that the shape of the quasar luminosity function derived from 2SLAQ is in good agreement with that derived from Type I quasars found in hard X-ray surveys.
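The functional form quoted above (a double power law with pure luminosity evolution governed by a second-order polynomial in redshift) can be written compactly. The sketch below encodes that generic form with illustrative parameter names; the values to plug in are the fitted 2SLAQ/2QZ parameters, which are not reproduced here.

```python
import numpy as np

def qlf_double_power_law(M, z, phi_star, M_star0, alpha, beta, k1, k2):
    """Double power-law quasar luminosity function with pure luminosity
    evolution: M*(z) = M*(0) - 2.5*(k1*z + k2*z**2). Returns Phi(M, z)."""
    M_star = M_star0 - 2.5 * (k1 * z + k2 * z**2)
    dM = M - M_star
    return phi_star / (10**(0.4 * (alpha + 1) * dM) +
                       10**(0.4 * (beta + 1) * dM))
```

At magnitudes much fainter than M*(z) the second term dominates, so beta sets the faint-end slope being constrained by the 2SLAQ data.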
Abstract:
Treatment of sepsis remains a significant challenge, with persisting high mortality and morbidity. Early and appropriate antibacterial therapy remains an important intervention for such patients. To optimise antibacterial therapy, the clinician must possess knowledge of the pharmacokinetic and pharmacodynamic properties of commonly used antibacterials and of how these parameters may be affected by the constellation of pathophysiological changes occurring during sepsis. Sepsis, and the treatment thereof, increases renal preload and, via capillary permeability, leads to 'third-spacing', both resulting in higher antibacterial clearances. Alternatively, sepsis can induce multiple organ dysfunction, including renal and/or hepatic dysfunction, causing a decrease in antibacterial clearance. Aminoglycosides are concentration-dependent antibacterials, and they display an increased volume of distribution (Vd) in sepsis, resulting in decreased peak serum concentrations. Reduced clearance from renal dysfunction would increase the likelihood of toxicity. Individualised dosing using extended-interval dosing, which maximises the ratio of peak serum drug concentration (Cmax) to minimum inhibitory concentration (MIC), is recommended. β-Lactams and carbapenems are time-dependent antibacterials. An increase in Vd and renal clearance will require increased dosing or administration by continuous infusion. If renal impairment occurs, a corresponding dose reduction may be required. Vancomycin displays predominantly time-dependent pharmacodynamic properties and probably requires higher than conventionally recommended doses because of an increased Vd and clearance during sepsis without organ dysfunction; however, optimal dosing regimens remain unresolved. The poor penetration of vancomycin into solid organs may require alternative therapies when sepsis involves solid organs (e.g. lung). Ciprofloxacin displays largely concentration-dependent kill characteristics, but also exerts some time-dependent effects. The Vd of ciprofloxacin is not altered by fluid shifts or over time, and thus no alterations of standard doses are required unless renal dysfunction occurs. In order to optimise antibacterial regimens in patients with sepsis, the pathophysiological effects of the systemic inflammatory response syndrome need consideration, in conjunction with knowledge of the different kill characteristics of the various antibacterial classes. In conclusion, certain antibacterials can have a very high Vd, leading to a low Cmax; if a high peak concentration is needed, this would result in underdosing. The Vd of certain antibacterials, namely aminoglycosides and vancomycin, changes over time, which means dosing may need to be altered over time. Some patients with serum creatinine values within the normal range can have very high drug clearances, thereby producing low serum drug concentrations and again leading to underdosing. Copyright © 2010 Elsevier Inc. All rights reserved.
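As a hedged numerical illustration of the Vd-Cmax point (one-compartment bolus approximation, hypothetical values): Cmax ≈ dose/Vd, so a 500 mg dose distributed into Vd = 17.5 L (0.25 L/kg × 70 kg) gives roughly 28.6 mg/L, while a sepsis-expanded Vd of 35 L halves that to about 14.3 mg/L; this is why 'third-spacing' can underdose concentration-dependent agents such as aminoglycosides.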
Abstract:
The degree to which Southern Hemisphere climatic changes during the end of the last glacial period and early Holocene (30-8 ka) were influenced or initiated by events occurring in the high latitudes of the Northern Hemisphere is a complex issue. There is conflicting evidence for the degree of hemispheric 'teleconnection' and an unresolved debate as to the principal forcing mechanism(s). The available hypotheses are difficult to test robustly, however, because the few detailed palaeoclimatic records in the Southern Hemisphere are widely dispersed and lack duplication. Here we present climatic and environmental reconstructions from across Australia, a key region of the Southern Hemisphere because of the range of environments it covers and the potentially important role regional atmospheric and oceanic controls play in global climate change. We identify a general scheme of events for the end of the last glacial period and early Holocene, but a detailed reconstruction proved problematic. Significant progress in climate quantification and geochronological control is now urgently required to robustly investigate change through this period. Copyright (c) 2006 John Wiley & Sons, Ltd.
Abstract:
We present a novel maximum-likelihood (ML) lattice-decoding algorithm for noncoherent block detection of QAM signals. The computational complexity is polynomial in the block length, making it feasible for implementation compared with the exhaustive-search ML detector. The algorithm works by enumerating the nearest-neighbour regions for a plane defined by the received vector, in a conceptually similar manner to sphere decoding. Simulations show that the new algorithm significantly outperforms existing approaches.
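For context, the exhaustive-search baseline the paper improves upon can be written directly. The sketch below assumes an unknown complex channel gain, for which the standard noncoherent ML/GLRT decision metric is |y^H s|² / ||s||²; the paper's contribution is enumerating candidates in polynomial time rather than scanning them all as done here.

```python
import numpy as np

def noncoherent_ml_exhaustive(y, candidates):
    """Exhaustive-search noncoherent ML detection baseline.

    y          : received complex vector.
    candidates : iterable of candidate QAM codeword vectors.
    Picks the codeword s maximising |y^H s|^2 / ||s||^2, the ML rule
    when the complex channel gain is unknown.
    """
    def metric(s):
        return abs(np.vdot(s, y))**2 / np.vdot(s, s).real
    return max(candidates, key=metric)
```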
Abstract:
Spatial data mining has recently emerged from a number of real applications, such as real-estate marketing, urban planning, weather forecasting, medical image analysis, and road traffic accident analysis. It demands efficient solutions for many new, expensive, and complicated problems. In this paper, we investigate the problem of evaluating the top k distinguished “features” for a “cluster” based on weighted proximity relationships between the cluster and the features. We measure proximity in an average fashion to address possibly nonuniform data distribution in a cluster. Combining a standard multi-step paradigm with new lower and upper proximity bounds, we present an efficient algorithm to solve the problem. The algorithm is implemented in several different modes. Our experimental results not only give a comparison among them but also illustrate the efficiency of the algorithm.
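A minimal sketch of the multi-step filter-and-refine paradigm with lower/upper proximity bounds, assuming a simple centroid-plus-radius bound derived from the triangle inequality; the paper's actual bounds are tighter and its proximity measure is weighted, so this is an illustration of the pruning idea, not the published algorithm. Here smaller average distance means higher proximity.

```python
import numpy as np

def topk_features_multistep(cluster, features, k):
    """Filter-and-refine top-k features by average distance to a cluster.

    cluster  : (n, d) array of cluster points.
    features : (m, d) array of feature points, m >= k.
    For any cluster point p, ||f - p|| lies within
    [d(f, centroid) - radius, d(f, centroid) + radius], so the average
    distance is bracketed by the same interval.
    """
    centroid = cluster.mean(axis=0)
    radius = np.linalg.norm(cluster - centroid, axis=1).max()
    d_cent = np.linalg.norm(features - centroid, axis=1)
    lower = np.maximum(d_cent - radius, 0.0)
    upper = d_cent + radius
    # Filter: a feature can make the top k only if its lower bound does
    # not exceed the k-th smallest upper bound.
    cutoff = np.partition(upper, k - 1)[k - 1]
    survivors = np.where(lower <= cutoff)[0]
    # Refine: exact average distance computed only for the survivors.
    exact = [np.linalg.norm(cluster - features[i], axis=1).mean()
             for i in survivors]
    order = np.argsort(exact)[:k]
    return survivors[order]
```

The filter step touches each feature once with O(1) work per feature; the expensive exact averages are computed only for the survivors, which is the source of the efficiency the abstract reports.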