58 results for Hypergraph Partitioning
Abstract:
The stable free radical 1,1,3,3-tetramethylisoindolin-2-yloxyl (TMIO) has proved to be very suitable for use as a spin probe in a number of applications. Because it is soluble mainly in non-polar liquids, there is a need for new derivatives that can be used in a variety of environments. This has been achieved by introducing substituents at the 5-position of the aromatic ring, namely carboxyl (CTMIO), trimethylamino (TMTMIOI) and sodium sulphonate (NaTMIOS). An accurate ESR method was developed for the measurement of partition coefficients in n-octanol–water. For comparison purposes the method was also applied to some TEMPO derivatives. The effect of temperature on the rotational correlation times and the nitrogen-14 hyperfine coupling constant of some of the spin probes was investigated. There is evidence for dimerization of CTMIO to form a biradical.
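As a rough illustration of how such a partition-coefficient measurement reduces to arithmetic (a minimal sketch, not the authors' procedure; the signal values are hypothetical and it is assumed that the doubly integrated ESR signal is proportional to probe concentration in each phase):

# Hypothetical illustration: estimating an n-octanol/water partition
# coefficient from ESR measurements, assuming the doubly integrated
# ESR signal is proportional to probe concentration in each phase.
import math

def log_p(signal_octanol: float, signal_water: float) -> float:
    """log10 partition coefficient, log P = log10(C_octanol / C_water)."""
    return math.log10(signal_octanol / signal_water)

# Example: a probe whose signal is 40x stronger in the octanol phase
print(round(log_p(40.0, 1.0), 2))  # -> 1.6, i.e. log P ~ 1.6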
Abstract:
Common diseases such as endometriosis (ED), Alzheimer's disease (AD) and multiple sclerosis (MS) account for a significant proportion of the health care burden in many countries. Genome-wide association studies (GWASs) for these diseases have identified a number of individual genetic variants contributing to disease risk. However, the effect size for most variants is small, and collectively the known variants explain only a small proportion of the estimated heritability. We used a linear mixed model to fit all single nucleotide polymorphisms (SNPs) simultaneously, and estimated genetic variances on the liability scale using SNPs from GWASs in unrelated individuals for these three diseases. For each of the three diseases, case and control samples were not all genotyped in the same laboratory. We demonstrate that a careful analysis can obtain robust estimates, but also that insufficient quality control (QC) of SNPs can lead to spurious results and that overly stringent QC is likely to remove real genetic signals. Our estimates show that common SNPs on commercially available genotyping chips capture significant variation contributing to liability for all three diseases. The estimated proportion of total variation tagged by all SNPs was 0.26 (SE 0.04) for ED, 0.24 (SE 0.03) for AD and 0.30 (SE 0.03) for MS. Further, we partitioned the explained genetic variance into five minor allele frequency (MAF) categories, by chromosome and by gene annotation. We provide strong evidence that a substantial proportion of variation in liability is explained by common SNPs, and thereby give insights into the genetic architecture of these diseases.
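A minimal sketch of the standard observed-to-liability-scale transformation that underlies estimates like those above (the prevalence K, case proportion P and heritability value below are illustrative assumptions, not the study's figures):

# Minimal sketch (not the authors' code): converting heritability estimated
# on the observed 0/1 case-control scale to the liability scale, using the
# standard transformation for ascertained case-control samples.
from scipy.stats import norm

def liability_h2(h2_obs: float, K: float, P: float) -> float:
    """h2 on the liability scale.
    K: population prevalence; P: proportion of cases in the sample."""
    t = norm.isf(K)          # liability threshold for prevalence K
    z = norm.pdf(t)          # standard normal density at the threshold
    return h2_obs * K**2 * (1 - K)**2 / (z**2 * P * (1 - P))

# Illustrative numbers only (not the study's estimates):
print(round(liability_h2(0.2, K=0.01, P=0.5), 3))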
Abstract:
Intensively managed pastures under dairy production in subtropical Australia are nitrogen (N) loaded agro-ecosystems, with an increased pool of N available for denitrification. The magnitude of denitrification losses and of N2:N2O partitioning in these agro-ecosystems is largely unknown, representing a major uncertainty when estimating total N loss and replacement. This study investigated the influence of different soil moisture contents on N2 and N2O emissions from a subtropical dairy pasture in Queensland, Australia. Intact soil cores were incubated over 15 days at 80% and 100% water-filled pore space (WFPS), after the application of 15N-labelled nitrate equivalent to 50 kg N ha−1. This setup enabled the direct quantification of N2 and N2O emissions following fertilisation using the 15N gas flux method. The main product of denitrification in both treatments was N2. N2 emissions exceeded N2O emissions by a factor of 8 ± 1 at 80% WFPS and by a factor of 17 ± 2 at 100% WFPS. The total amount of N2-N lost over the incubation period was 21.27 ± 2.10 kg N2-N ha−1 at 80% WFPS and 25.26 ± 2.79 kg N2-N ha−1 at 100% WFPS. N2 emissions remained high at 100% WFPS, while related N2O emissions decreased. At 80% WFPS, N2 emissions increased steadily over time while N2O fluxes declined. Consequently, N2/(N2 + N2O) product ratios increased over the incubation period in both treatments. The product ratios responded significantly to soil moisture, confirming WFPS as a key driver of denitrification. The substantial amount of fertiliser lost as N2 reveals the agronomic significance of denitrification as a major pathway of N loss for subtropical pastures at high WFPS, and may explain the low fertiliser N use efficiency observed for these agro-ecosystems.
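The reported N2:N2O factors translate directly into the N2/(N2 + N2O) product ratios discussed above; a quick arithmetic check:

# Quick arithmetic check (illustrative): converting the reported
# N2:N2O emission factors into N2/(N2 + N2O) product ratios.
for wfps, factor in [("80% WFPS", 8), ("100% WFPS", 17)]:
    ratio = factor / (factor + 1)   # N2/(N2+N2O) when N2 = factor * N2O
    print(f"{wfps}: product ratio ~ {ratio:.2f}")
# -> ~0.89 at 80% WFPS and ~0.94 at 100% WFPS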
Abstract:
This paper reports on a study of major issues across the ERP lifecycle, from the perspectives of individuals with substantial and diverse involvement with SAP Financials in Queensland Government. A survey was conducted of 117 ERP system project participants in five closely related state government agencies. A modified Delphi technique identified, rationalised and weighted perceived major issues in ongoing ERP lifecycle implementation, management and support. The five agencies each implemented SAP Financials simultaneously using a common implementation partner. The three survey rounds of the Delphi technique, together with coding and synthesising procedures, resulted in a set of 10 major-issue categories with 38 sub-issues. Relative scores of issue importance are compared across government agencies, roles (client vs implementation partner) and organisational levels (strategic, technical and operational). The findings confirm the importance of this finer partitioning of the data, and the distinctions identified reflect the circumstances of ERP lifecycle implementation, management and support among the stakeholder groups. The findings should also be of interest to stakeholders who seek to better understand the issues surrounding ERP systems and to better realise the benefits of ERP.
Abstract:
In the design of tissue engineering scaffolds, design parameters including pore size, shape and interconnectivity, mechanical properties and transport properties should be optimized to maximize successful inducement of bone ingrowth. In this paper we describe a 3D micro-CT and pore partitioning study to derive pore-scale parameters including pore radius distribution, accessible radius, throat radius, and connectivity over the pore space of the tissue-engineered constructs. These pore-scale descriptors are correlated with bone ingrowth into the scaffolds. Quantitative and visual comparisons show a strong correlation between the local accessible pore radius and bone ingrowth; for well-connected samples a cutoff accessible pore radius of approximately 100 μm is observed for ingrowth. The elastic properties of different types of scaffolds are simulated and can be described by standard cellular solids theory: E/E0 = (ρ/ρs)^n. Hydraulic conductance and diffusive properties are calculated; the results are consistent with the concept of a threshold conductance for bone ingrowth. Simple simulations of local flow velocity and local shear stress show no correlation with in vivo bone ingrowth patterns. These results demonstrate the potential of 3D imaging and analysis to define relevant pore-scale morphological and physical properties within scaffolds, and to provide evidence for correlations between pore-scale descriptors, physical properties and bone ingrowth.
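A minimal sketch of the cellular-solids scaling law quoted above; the exponent n = 2 used here is the classic Gibson–Ashby open-cell value, an assumption for illustration rather than the fitted value from the paper:

# Minimal sketch of the cellular-solids scaling law E/E0 = (rho/rho_s)^n.
# The exponent n = 2 is the classic Gibson-Ashby open-cell value, used
# here purely for illustration.
def relative_modulus(rel_density: float, n: float = 2.0) -> float:
    """E/E0 for a cellular solid at a given relative density rho/rho_s."""
    return rel_density ** n

# A scaffold at 30% relative density -> ~9% of the solid's stiffness
print(relative_modulus(0.3))  # 0.09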
Abstract:
The link between measured sub-saturated hygroscopicity and the cloud activation potential of secondary organic aerosol particles produced by chamber photo-oxidation of α-pinene, in the presence or absence of ammonium sulphate seed aerosol, was investigated using two models of varying complexity. A simple single-hygroscopicity-parameter model and a more complex model (incorporating surface effects) were used to assess the detail required to predict cloud condensation nucleus (CCN) activity from sub-saturated water uptake. Sub-saturated water uptake measured by three hygroscopicity tandem differential mobility analyser (HTDMA) instruments was used to determine the water activity for use in the models. The predicted CCN activity was compared to the activation potential measured with a continuous-flow CCN counter. Reconciliation of the more complex model formulation with measured cloud activation could be achieved with widely different assumed surface tension behaviours of the growing droplet; the outcome was entirely determined by the instrument used as the source of water activity data. This unreliable derivation of water activity as a function of solute concentration from sub-saturated hygroscopicity data indicates a limitation in the use of such data for predicting the cloud condensation nucleus behaviour of particles with a significant organic fraction. Similarly, the ability of the simpler single-parameter model to predict cloud activation behaviour depended on the instrument used to measure sub-saturated hygroscopicity and on the relative humidity used to provide the model input. However, agreement was observed for inorganic salt solution particles, which were measured by all instruments in agreement with theory. Given the differences in HTDMA data from validated and extensively used instruments, the detail required to predict CCN activity from sub-saturated hygroscopicity cannot be stated with certainty. In order to narrow the gap between measurements of hygroscopic growth and CCN activity, the processes involved must be understood and the instrumentation extensively quality-assured. Owing to the differences in HTDMA data, it is impossible to say from the results presented here whether: (i) surface tension suppression occurs; (ii) bulk-to-surface partitioning is important; or (iii) the water activity coefficient changes significantly as a function of solute concentration.
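A minimal sketch of the single-hygroscopicity-parameter (κ-Köhler) relation that such simple models use (the constants, the assumed pure-water surface tension, and the example κ and dry diameter are illustrative assumptions, not values from the study):

# Minimal sketch (illustrative, not the study's code) of kappa-Kohler
# theory: saturation ratio over a droplet of diameter D grown on a dry
# particle of diameter Dd with hygroscopicity parameter kappa.
import numpy as np

Mw, rho_w, R, T = 0.018, 1000.0, 8.314, 298.15  # SI units
sigma = 0.072  # N/m, pure-water surface tension (an assumption)

def saturation_ratio(D, Dd, kappa):
    kelvin = np.exp(4 * sigma * Mw / (R * T * rho_w * D))   # curvature term
    raoult = (D**3 - Dd**3) / (D**3 - Dd**3 * (1 - kappa))  # solute term
    return raoult * kelvin

# Critical supersaturation: maximum of S(D) over droplet diameters
Dd, kappa = 100e-9, 0.1                  # 100 nm dry particle, kappa = 0.1
D = np.linspace(Dd * 1.01, 50 * Dd, 100_000)
print(f"Sc ~ {(saturation_ratio(D, Dd, kappa).max() - 1) * 100:.2f}%")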
Abstract:
Eigen-based techniques and other monolithic approaches to face recognition have long been a cornerstone of the face recognition community, owing to the high dimensionality of face images. Eigen-face techniques provide minimal reconstruction error and limit high-frequency content, while linear discriminant-based techniques (fisher-faces) allow the construction of subspaces which preserve discriminatory information. This paper presents a frequency decomposition approach for improved face recognition performance, utilising three well-known techniques: wavelets, Gabor / log-Gabor filters, and the Discrete Cosine Transform. Experimentation illustrates that frequency-domain partitioning prior to dimensionality reduction increases the information available for classification and greatly improves face recognition performance for both eigen-face and fisher-face approaches.
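One plausible form of such a pipeline, sketched under assumptions (random arrays stand in for aligned face images; the paper's exact block sizes and classifier are not reproduced here):

# Minimal sketch of frequency-domain partitioning before dimensionality
# reduction: a 2D DCT per image, keep the low-frequency block, then PCA.
import numpy as np
from scipy.fft import dctn
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
faces = rng.random((100, 64, 64))  # stand-in for aligned face images

def dct_features(img, k=16):
    """Top-left k x k block of the 2D DCT: the low-frequency partition."""
    return dctn(img, norm="ortho")[:k, :k].ravel()

X = np.array([dct_features(f) for f in faces])
Z = PCA(n_components=20).fit_transform(X)  # eigen-decomposition on DCT features
print(Z.shape)                             # (100, 20)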
Abstract:
We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d = VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this paper is a density bound of n·choose(n−1, ≤d−1)/choose(n, ≤d) < d, which positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic topological property of maximum classes of VC-dimension d as being d-contractible simplicial complexes, extending the well-known characterization that d = 1 maximum classes are trees. We negatively resolve a minimum degree conjecture of Kuzmin and Warmuth, the second part of a conjectured proof of correctness for Peeling, that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to an O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.
Abstract:
We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d = VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this report is a density bound of n·choose(n−1, ≤d−1)/choose(n, ≤d) < d, which positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic topological property of maximum classes of VC-dimension d as being d-contractible simplicial complexes, extending the well-known characterization that d = 1 maximum classes are trees. We negatively resolve a minimum degree conjecture of Kuzmin and Warmuth, the second part of a conjectured proof of correctness for Peeling, that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to an O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.
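In standard notation, with choose(n, ≤d) denoting a partial sum of binomial coefficients, the density bound quoted in both records reads:

\[
\frac{n \binom{n-1}{\le d-1}}{\binom{n}{\le d}} < d,
\qquad \text{where} \qquad
\binom{n}{\le d} := \sum_{i=0}^{d} \binom{n}{i}.
\]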
Abstract:
The dynamic lateral segregation of signaling proteins into microdomains is proposed to facilitate signal transduction, but the constraints on microdomain size, mobility, and diffusion that might realize this function are undefined. Here we interrogate a stochastic spatial model of the plasma membrane to determine how microdomains affect protein dynamics. Taking lipid rafts as representative microdomains, we show that reduced protein mobility in rafts segregates dynamically partitioning proteins, but the equilibrium concentration is largely independent of raft size and mobility. Rafts weakly impede small-scale protein diffusion but more strongly impede long-range protein mobility. The long-range mobility of raft-partitioning and raft-excluded proteins, however, is reduced to a similar extent. Dynamic partitioning into rafts increases specific interprotein collision rates, but to maximize this critical, biologically relevant function, rafts must be small (diameter, 6 to 14 nm) and mobile. Intermolecular collisions can also be favored by the selective capture and exclusion of proteins by rafts, although this mechanism is generally less efficient than simple dynamic partitioning. Generalizing these results, we conclude that microdomains can readily operate as protein concentrators or isolators but there appear to be significant constraints on size and mobility if microdomains are also required to function as reaction chambers that facilitate nanoscale protein-protein interactions. These results may have significant implications for the many signaling cascades that are scaffolded or assembled in plasma membrane microdomains.
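A toy sketch of the modelling idea (a minimal random-walk simulation, not the authors' stochastic spatial model; all parameter values are illustrative assumptions):

# Toy Monte Carlo sketch: two proteins random-walk on a periodic 2D
# domain, diffuse more slowly inside small circular "rafts", and we
# count how often the pair comes within collision range.
import numpy as np

rng = np.random.default_rng(1)
L = 200.0                      # domain size (nm), periodic boundaries
rafts = rng.random((20, 2)) * L
raft_r = 7.0                   # raft radius (nm); small rafts, per the abstract
step_out, step_in = 2.0, 0.5   # slower diffusion inside rafts
collide_r = 2.0                # collision range (nm)

def in_raft(p):
    d = rafts - p
    d -= L * np.round(d / L)   # periodic minimum-image distances
    return (np.hypot(d[:, 0], d[:, 1]) < raft_r).any()

a, b = rng.random(2) * L, rng.random(2) * L
collisions = 0
for _ in range(200_000):
    for p in (a, b):
        step = step_in if in_raft(p) else step_out
        p += rng.normal(0.0, step, 2)
        p %= L
    d = a - b
    d -= L * np.round(d / L)
    collisions += np.hypot(d[0], d[1]) < collide_r
print("collision steps:", int(collisions))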
Abstract:
Web service technology is increasingly being used to build various e-Applications, in domains such as e-Business and e-Science. Characteristic benefits of web service technology are its interoperability, decoupling and just-in-time integration. Using web service technology, an e-Application can be implemented by web service composition, that is, by composing existing individual web services in accordance with the business process of the application. This means the application is provided to customers in the form of a value-added composite web service. An important and challenging issue in web service composition is how to meet Quality-of-Service (QoS) requirements. These include customer-focused attributes such as response time, price, throughput and reliability, as well as how best to provide QoS results for the composites, which in turn fulfils customers' expectations and achieves their satisfaction. Fulfilling these QoS requirements, i.e. addressing the QoS-aware web service composition problem, is the focus of this project. From a computational point of view, QoS-aware web service composition can be transformed into diverse optimisation problems. These problems are complex, large-scale, highly constrained and multi-objective. We therefore use genetic algorithms (GAs) to address QoS-based service composition problems. More precisely, this study addresses three important subproblems of QoS-aware web service composition: QoS-based web service selection for a composite web service, accommodating constraints on inter-service dependence and conflict; QoS-based resource allocation and scheduling for multiple composite services on hybrid clouds; and performance-driven composite service partitioning for decentralised execution. Based on operations research theory, we model the three problems as a constrained optimisation problem, a resource allocation and scheduling problem, and a graph partitioning problem, respectively. We then present novel GAs to address these problems, conduct experiments to evaluate their performance, and perform verification experiments to show their correctness. The major outcomes from the first problem are three novel GAs: a penalty-based GA, a min-conflict hill-climbing repairing GA, and a hybrid GA. These GAs adopt different constraint-handling strategies for constraints on inter-service dependence and conflict, an important factor that has been largely ignored by existing algorithms and that can lead to the generation of infeasible composite services. Experimental results demonstrate the effectiveness of our GAs for the QoS-based web service selection problem with constraints on inter-service dependence and conflict, as well as their better scalability than the existing integer-programming-based method for large-scale web service selection problems. The second problem resulted in two GAs: a random-key GA and a cooperative coevolutionary GA (CCGA). Experiments demonstrate the good scalability of the two algorithms; in particular, the CCGA scales well as the number of composite services in a problem increases, an ability no other algorithm demonstrates. The third problem resulted in a novel GA for composite service partitioning for decentralised execution. Compared with existing heuristic algorithms, the new GA is more suitable for large-scale composite web service program partitioning problems. In addition, the GA outperforms existing heuristic algorithms, generating a better deployment topology for a composite web service for decentralised execution. These effective and scalable GAs can be integrated into QoS-based management tools to facilitate the delivery of feasible, reliable and high-quality composite web services.
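A minimal sketch of the penalty-based GA idea for QoS-based service selection under a conflict constraint (illustrative data, operators and parameters, not the thesis implementation):

# Minimal sketch: choose one candidate service per task so that total
# response time is minimised; a penalty term handles an inter-service
# conflict constraint, as in a penalty-based GA.
import random

random.seed(0)
N_TASKS, N_CANDS = 8, 5
rt = [[random.uniform(10, 100) for _ in range(N_CANDS)] for _ in range(N_TASKS)]
conflict = {((0, 1), (1, 2))}   # (task, candidate) pairs that must not co-occur

def fitness(chrom):
    cost = sum(rt[t][c] for t, c in enumerate(chrom))
    # Penalty term: each violated conflict adds a large constant.
    pen = sum(1000 for (a, b) in conflict
              if chrom[a[0]] == a[1] and chrom[b[0]] == b[1])
    return cost + pen

def evolve(pop_size=40, gens=200):
    pop = [[random.randrange(N_CANDS) for _ in range(N_TASKS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            p, q = random.sample(parents, 2)
            cut = random.randrange(1, N_TASKS)  # one-point crossover
            child = p[:cut] + q[cut:]
            if random.random() < 0.2:           # mutation
                child[random.randrange(N_TASKS)] = random.randrange(N_CANDS)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(best, round(fitness(best), 1))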
Abstract:
As the editor, Gerard de Valence, points out in the preface, this book is neither a textbook nor a guide to what is done by construction managers and construction economists (read quantity surveyors and the like). Rather, de Valence notes, it comprises a collection of chapters, each of which focuses on matters at the industry level and, in doing so, illustrates that a substantially improved understanding of the building and construction industry can be gained beyond the economics of delivering projects. Before giving some thought to how far each of the chapters achieves this, it is worth reflecting on the virtues of developing construction economics as its own discipline or sub-discipline within general economics, and on the bold manner by which de Valence proposes we do this. That is, de Valence proposes partitioning industry and project economics, as explained in the preface and in Chapter 1. de Valence's view that "the time seems right" for these developments is also worthy of some consideration.
Abstract:
Electronic Health Record (EHR) retrieval processes are complex and place heavy demands on Information Technology (IT) resources, in particular memory. A Database-as-a-Service (DAS) model is proposed to meet the scalability requirements of EHR retrieval processes. A simulation study applying the DAS model to a range of EHR record volumes is presented. A bucket-indexing model, incorporating partitioning fields and Bloom filters within a Singleton design pattern, was used to implement a custom database encryption system. It provided faster responses for range queries than for the other query types tested, such as aggregation queries, across the DAS, built-in-encryption and plain-text DBMS configurations. The study also identifies constraints of the approach that should be considered in other practical applications.
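A speculative sketch of a bucket index with per-bucket Bloom filters of the general kind described (the paper's exact scheme and parameters are not reproduced; the bucket width and identifiers are hypothetical):

# Speculative sketch: a bucket index over an encrypted column. Values are
# coarsened into range buckets, and a per-bucket Bloom filter supports
# fast membership probes without exposing plaintext values to the server.
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0
    def _hashes(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, item):
        for h in self._hashes(item):
            self.bits |= 1 << h
    def __contains__(self, item):
        return all(self.bits >> h & 1 for h in self._hashes(item))

BUCKET = 10            # bucket width for range partitioning (illustrative)
index = {}             # bucket id -> Bloom filter of record ids

def insert(record_id, value):
    b = value // BUCKET
    index.setdefault(b, BloomFilter()).add(record_id)

def range_buckets(lo, hi):
    """Bucket ids the server must scan for a range query [lo, hi]."""
    return [b for b in index if lo // BUCKET <= b <= hi // BUCKET]

insert("rec1", 42); insert("rec2", 77)
print(range_buckets(40, 60), "rec1" in index[4])   # [4] True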