46 results for PDE-based parallel preconditioner
in CentAUR: Central Archive, University of Reading - UK
Abstract:
With the emerging prevalence of smartphones and 4G LTE networks, the demand for faster-better-cheaper mobile services anytime and anywhere is ever growing. The Dynamic Network Optimization (DNO) concept emerged as a solution that optimally and continuously tunes the network settings in response to varying network conditions and subscriber needs. Yet the realization of DNO is still in its infancy, largely hindered by the bottleneck of lengthy optimization runtimes. This paper presents the design and prototype of a novel cloud-based parallel solution that further enhances the scalability of our prior work on parallel solutions for accelerating network optimization algorithms. The solution aims to satisfy the high performance required by DNO, initially on a sub-hourly basis. The paper subsequently outlines a design and a full cycle of a DNO system. A set of potential solutions for large-network and real-time DNO is also proposed. Overall, this work marks a breakthrough towards the realization of DNO.
Abstract:
Inferring population admixture from genetic data and quantifying it is a difficult but crucial task in evolutionary and conservation biology. Unfortunately, state-of-the-art probabilistic approaches are computationally demanding. Effectively exploiting the computational power of modern multiprocessor systems can thus have a positive impact on Monte Carlo-based simulation of admixture modelling. A novel parallel approach is briefly described, and promising results from its message passing interface (MPI)-based C implementation are reported.
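As an illustration of the parallelisation pattern described above (not the authors' implementation, which is in C), the following sketch distributes independent Monte Carlo replicates over MPI ranks with mpi4py; the model in `run_replicate` is a hypothetical stand-in for an admixture-style simulation.

```python
# Minimal sketch: independent Monte Carlo replicates spread over MPI ranks.
# The replicate itself is a placeholder; only the parallel pattern is shown.
import numpy as np
from mpi4py import MPI

def run_replicate(rng, n_individuals=200, n_sweeps=1000):
    """Placeholder Monte Carlo replicate returning a mock admixture estimate."""
    q = rng.uniform(size=n_individuals)            # initial admixture proportions
    for _ in range(n_sweeps):
        q = np.clip(q + rng.normal(0, 0.01, n_individuals), 0.0, 1.0)
    return q.mean()

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_total = 64                                       # replicates across all ranks
my_reps = range(rank, n_total, size)               # round-robin assignment
rng = np.random.default_rng(seed=rank)
local = [run_replicate(rng) for _ in my_reps]

all_results = comm.gather(local, root=0)
if rank == 0:
    flat = [x for part in all_results for x in part]
    print(f"{len(flat)} replicates, mean admixture estimate = {np.mean(flat):.3f}")
```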
Abstract:
Recent research in multi-agent systems has incorporated fault tolerance concepts. However, this research does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm array computing approach, namely 'Intelligent Agents'. In the approach considered, a task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to successfully complete the task. The agents hence contribute towards fault tolerance and towards building reliable systems. The feasibility of the approach is validated by simulations on an FPGA using a multi-agent simulator and by the implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
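The parallel reduction mentioned as a validation case can be sketched as follows; this is a generic mpi4py reduction over synthetic data, not the agent-based implementation from the paper.

```python
# Minimal sketch of a parallel reduction: each rank sums its block of the data
# and the partial sums are combined with MPI_Reduce on the root rank.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 1_000_000
counts = [n // size + (1 if r < n % size else 0) for r in range(size)]
start = sum(counts[:rank])
local = np.arange(start, start + counts[rank], dtype=np.float64)

local_sum = np.array([local.sum()])
total = np.zeros(1)
comm.Reduce(local_sum, total, op=MPI.SUM, root=0)   # tree-structured reduction

if rank == 0:
    print(f"parallel sum = {total[0]:.0f}, expected = {n * (n - 1) / 2:.0f}")
```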
Abstract:
Partnership is increasingly espoused as the best relationship between members of the sustainable development aid chain, and implies a respect for the position of all and a desire to avoid a situation where one group dominates another. It also implies a form of relationship that is not just 'better' for its own sake but that is more able to help achieve sustainable development. However, given the inevitable inequalities in power between donors, which have the resources, and field partners, which do not, it can be hard to put this ideal into practice. This paper explores the function of partnership within a group of closely related institutions that comprise the Catholic Church development chain. The research focussed on three Catholic Church-based donors (one from the USA and two from Europe) and their partners in Abuja Ecclesiastical Province, Nigeria. Relationships between and within various strata of the Church in Nigeria were also examined. Relationships were 'patchy' at all levels. One of the donors had a significant operational presence in Nigeria, and this was regarded by some respondents as a parallel structure that seriously undermined local bodies. However, while problems existed, there was a sense of inter-dependence arising from a shared sense of values and Catholic Social Teaching, which allowed partners to work through their stresses and conflicts. It is the innate sustainability of the aid chain itself, founded upon a set of shared values, that provided the space and time for problems to be addressed. Copyright (C) 2008 John Wiley & Sons, Ltd and ERP Environment.
Abstract:
Clustering is defined as the grouping of similar items in a set and is an important process within the field of data mining. As the amount of data for various applications continues to increase, in terms of both size and dimensionality, efficient clustering methods become necessary. A popular clustering algorithm is K-Means, which adopts a greedy approach to produce a set of K clusters with associated centres of mass, and uses a squared-error distortion measure to determine convergence. Methods for improving the efficiency of K-Means have been explored largely in two main directions. The amount of computation can be significantly reduced by adopting a more efficient data structure, notably a multi-dimensional binary search tree (KD-Tree), to store either centroids or data points. A second direction is parallel processing, where data and computation loads are distributed over many processing nodes. However, little work has been done to provide a parallel formulation of the efficient sequential techniques based on KD-Trees. Such approaches are expected to have an irregular distribution of computation load and can suffer from load imbalance. This issue has so far limited the adoption of these efficient K-Means techniques in parallel computational environments. In this work, we provide a parallel formulation for the KD-Tree based K-Means algorithm and address its load balancing issues.
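To make the KD-Tree idea concrete, here is a minimal serial sketch of a KD-Tree accelerated assignment step for K-Means. It builds the tree over the current centroids, a simplification rather than the paper's formulation or its parallel version; the helper name `kdtree_kmeans` is illustrative.

```python
# Minimal serial sketch: K-Means with the assignment step accelerated by a
# KD-Tree built over the current centroids (scipy's cKDTree).
import numpy as np
from scipy.spatial import cKDTree

def kdtree_kmeans(X, k, n_iter=50, tol=1e-6, seed=0):
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    prev_sse = np.inf
    for _ in range(n_iter):
        tree = cKDTree(centroids)          # rebuild tree over the k centroids
        dist, labels = tree.query(X)       # nearest centroid for every point
        sse = float(np.sum(dist ** 2))     # squared-error distortion
        for j in range(k):                 # recompute centres of mass
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
        if prev_sse - sse < tol:           # stop when the distortion stabilises
            break
        prev_sse = sse
    return centroids, labels, sse
```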
Abstract:
One of the most influential and popular data mining methods is the k-Means algorithm for cluster analysis. Techniques for improving the efficiency of k-Means have been explored largely in two main directions. The amount of computation can be significantly reduced by adopting geometrical constraints and an efficient data structure, notably a multidimensional binary search tree (KD-Tree). These techniques reduce the number of distance computations the algorithm performs at each iteration. A second direction is parallel processing, where data and computation loads are distributed over many processing nodes. However, little work has been done to provide a parallel formulation of the efficient sequential techniques based on KD-Trees. Such approaches are expected to have an irregular distribution of computation load and can suffer from load imbalance. This issue has so far limited the adoption of these efficient k-Means variants in parallel computing environments. In this work, we provide a parallel formulation of the KD-Tree based k-Means algorithm for distributed memory systems and address its load balancing issue. Three solutions have been developed and tested: two are based on a static partitioning of the data set, and a third incorporates a dynamic load balancing policy.
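A hedged sketch of the static-partitioning idea follows: each MPI rank holds a block of the data set and the centroid update is formed by all-reducing per-cluster sums and counts with mpi4py. This omits both the KD-Tree acceleration and the dynamic load balancing policy developed in the paper, and uses synthetic local data.

```python
# Minimal sketch of distributed-memory k-Means with a static block partition.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(rank)
X_local = rng.normal(size=(10_000, 2))             # this rank's block of the data
k = 8
centroids = comm.bcast(rng.normal(size=(k, 2)) if rank == 0 else None, root=0)

for _ in range(20):
    # assignment step on local data only
    d2 = ((X_local[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)

    # local per-cluster sums and counts, combined globally
    sums, counts = np.zeros((k, 2)), np.zeros(k)
    for j in range(k):
        members = X_local[labels == j]
        sums[j], counts[j] = members.sum(axis=0), len(members)
    g_sums, g_counts = np.zeros_like(sums), np.zeros_like(counts)
    comm.Allreduce(sums, g_sums, op=MPI.SUM)
    comm.Allreduce(counts, g_counts, op=MPI.SUM)
    centroids = np.where(g_counts[:, None] > 0,
                         g_sums / np.maximum(g_counts, 1)[:, None],
                         centroids)
```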
Abstract:
Statistical approaches have been applied to examine amino acid pairing preferences within parallel beta-sheets. The main chain hydrogen bonding pattern in parallel beta-sheets means that, for each residue pair, only one of the residues is involved in main chain hydrogen bonding with the strand containing the partner residue. We call this the hydrogen bonded (HB) residue and the partner residue the non-hydrogen bonded (nHB) residue, and differentiate between the favourability of a pair and that of its reverse pair, e.g. Asn(HB)-Thr(nHB) versus Thr(HB)-Asn(nHB). Significantly (p ≤ 0.000001) favoured pairings were rationalised using stereochemical arguments. For instance, Asn(HB)-Thr(nHB) and Arg(HB)-Thr(nHB) were favoured pairs, where the residues adopted favoured chi1 rotamer positions that allowed side-chain interactions to occur. In contrast, Thr(HB)-Asn(nHB) and Thr(HB)-Arg(nHB) were not significantly favoured, and could only form side-chain interactions if the residues involved adopted less favourable chi1 conformations. The favourability of hydrophobic pairs, e.g. Ile(HB)-Ile(nHB), Val(HB)-Val(nHB) and Leu(HB)-Ile(nHB), was explained by the residues adopting their most preferred chi1 and chi2 conformations, which enabled them to form nested arrangements. Cysteine-cysteine pairs are significantly favoured, although these do not form intrasheet disulphide bridges. Interactions between positively and negatively charged residues were asymmetrically preferred: those with the negatively charged residue at the HB position were more favoured. This trend was accounted for by the presence of general electrostatic interactions, which, based on analysis of distances between charged atoms, were likely to be stronger when the negatively charged residue is the HB partner. The Arg(HB)-Asp(nHB) interaction was an exception to this trend and its favourability was rationalised by the formation of specific side-chain interactions. This research provides rules that could be applied to protein structure prediction, comparative modelling and protein engineering and design. The methods used to analyse the pairing preferences are automated and detailed results are available (http://www.rubic.rdg.ac.uk/betapairprefsparallel/).
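One simple way to score such ordered pairing preferences (a generic propensity calculation with hypothetical counts, not the statistical test used in the study) is the log-ratio of observed to expected pair frequencies:

```python
# Illustrative sketch only: propensity of an ordered HB/nHB residue pair as
# log(observed frequency / frequency expected from the marginal compositions).
import numpy as np

def pair_propensities(counts):
    """counts[i, j] = number of pairs with residue i at the HB position
    and residue j at the nHB position (assumes no zero counts)."""
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()
    p_obs = counts / total
    p_exp = np.outer(counts.sum(axis=1), counts.sum(axis=0)) / total**2
    return np.log(p_obs / p_exp)                 # > 0 means favoured pairing

# toy 2x2 example over {Asn, Thr}: entries (i, j) and (j, i) are scored
# separately, so a pair and its reverse pair receive different values
toy = [[40, 90],     # Asn(HB)-Asn(nHB), Asn(HB)-Thr(nHB)
       [50, 60]]     # Thr(HB)-Asn(nHB), Thr(HB)-Thr(nHB)
print(pair_propensities(toy))
```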
Abstract:
Statistical approaches have been applied to examine amino acid pairing preferences within parallel beta-sheets. The main chain hydrogen bonding pattern in parallel beta-sheets means that, for each residue pair, only one of the residues is involved in main chain hydrogen bonding with the strand containing the partner residue. We call this the hydrogen bonded (HB) residue and the partner residue the non-hydrogen bonded (nHB) residue, and differentiate between the favourability of a pair and that of its reverse pair, e.g. Asn(HB)-Thr(nHB) versus Thr(HB)-Asn(nHB). Significantly (p ≤ 0.000001) favoured pairings were rationalised using stereochemical arguments. For instance, Asn(HB)-Thr(nHB) and Arg(HB)-Thr(nHB) were favoured pairs, where the residues adopted favoured chi(1) rotamer positions that allowed side-chain interactions to occur. In contrast, Thr(HB)-Asn(nHB) and Thr(HB)-Arg(nHB) were not significantly favoured, and could only form side-chain interactions if the residues involved adopted less favourable chi(1) conformations. The favourability of hydrophobic pairs, e.g. Ile(HB)-Ile(nHB), Val(HB)-Val(nHB) and Leu(HB)-Ile(nHB), was explained by the residues adopting their most preferred chi(1) and chi(2) conformations, which enabled them to form nested arrangements. Cysteine-cysteine pairs are significantly favoured, although these do not form intrasheet disulphide bridges. Interactions between positively and negatively charged residues were asymmetrically preferred: those with the negatively charged residue at the HB position were more favoured. This trend was accounted for by the presence of general electrostatic interactions, which, based on analysis of distances between charged atoms, were likely to be stronger when the negatively charged residue is the HB partner. The Arg(HB)-Asp(nHB) interaction was an exception to this trend and its favourability was rationalised by the formation of specific side-chain interactions. This research provides rules that could be applied to protein structure prediction, comparative modelling and protein engineering and design. The methods used to analyse the pairing preferences are automated and detailed results are available (http://www.rubic.rdg.ac.uk/betapairprefsparallel/). (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
A number of strategies are emerging for the high throughput (HTP) expression of recombinant proteins to enable structural and functional study. Here we describe a workable HTP strategy based on parallel protein expression in E. coli and insect cells. Using this system we provide comparative expression data for five proteins derived from the Autographa californica polyhedrosis virus genome that vary in amino acid composition and in molecular weight. Although the proteins are part of a set of factors known to be required for viral late gene expression, the precise function of three of the five, late expression factors (lefs) 6, 7 and 10, is unknown. Rapid expression and characterisation have allowed the determination of their ability to bind DNA and shown a cellular location consistent with their properties. Our data point to the utility of a parallel expression strategy for rapidly obtaining workable protein expression levels from many open reading frames (ORFs).
Abstract:
WO3-based materials as sensors for monitoring environmental gases such as NOx (NO + NO2) have been rapidly developed for various potential applications (stationary and mobile uses). It has been reported that these materials are highly sensitive to NOx, with the sensitivity further enhanced by adding precious group metals (PGM such as Pt, Pd, Au, etc.). However, there has been limited work revealing the sensing mechanism for these gases over WO3-based sensors. In particular, the role of the promoter is not yet clear, though speculations on catalytic, electronic and structural effects have been made in the past. In parallel to these PGM promoters, here we report, for the first time, that Ag promotion can also enhance WO3 sensitivity significantly. In addition, this promotion decreases the optimum sensor temperature, 300 °C for most WO3-based sensors, to below 200 °C. Characterizations (XRD, TEM, and impedance measurement) reveal that there is no significant bulk structure change or particle size alteration in the WO3 phases during NO exposure. However, it is found that Ag doping creates a high concentration of oxygen vacancies in the form of coordinated crystallographic shear (CS) planes in the underlying WO3. It is thus proposed that the Ag particles facilitate the oxidative conversion of NO to NO2, followed by subsequent NO2 adsorption on the defective WO3 sites created at the Ag-WO3 interface, hence accounting for the high molecular sensitivity. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
Uncertainty plays a major part in the accuracy of a decision-making process, while its inconsistency is always difficult to resolve with existing decision-making tools. Entropy has been shown to be useful for evaluating the inconsistency of uncertainty among different respondents. This study demonstrates an entropy-based financial decision support system called e-FDSS. The integrated system provides decision support for evaluating the attributes (funding options and multiple risks) present in projects. Fuzzy logic theory is included in the system to deal with the qualitative aspect of these options and risks. An adaptive genetic algorithm (AGA) is also employed to solve the decision algorithm in the system in order to provide optimal and consistent rates for these attributes. Seven simplified and parallel projects from a Hong Kong construction small and medium enterprise (SME) were assessed to evaluate the system. The results show that the system calculates risk-adjusted discount rates (RADR) of projects in an objective way; these rates discount project cash flows impartially. Inconsistency of uncertainty is also successfully evaluated by the use of the entropy method. Finally, the system identifies the favourable funding options that are managed by a scheme called the SME Loan Guarantee Scheme (SGS). Based on these results, resource allocation can then be optimized and the best time to start a new project identified throughout the overall project life cycle.
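As a minimal illustration of the entropy idea (hypothetical ratings, not the e-FDSS implementation), the Shannon entropy of the distribution of respondents' ratings for an attribute can serve as an inconsistency measure: full agreement gives zero entropy, ratings spread across the scale give high entropy.

```python
# Illustrative sketch: Shannon entropy of ordinal ratings as an inconsistency score.
import numpy as np

def rating_entropy(ratings, levels=5):
    """Entropy (in bits) of the distribution of ordinal ratings 1..levels."""
    counts = np.bincount(ratings, minlength=levels + 1)[1:]
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

consistent   = [4, 4, 4, 4, 4, 4]          # all respondents agree
inconsistent = [1, 2, 3, 4, 5, 3]          # ratings spread across the scale
print(rating_entropy(consistent), rating_entropy(inconsistent))  # 0.0 vs ~2.25
```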
Abstract:
Background: Medication errors are an important cause of morbidity and mortality in primary care. The aims of this study are to determine the effectiveness, cost effectiveness and acceptability of a pharmacist-led, information-technology-based complex intervention compared with simple feedback in reducing the proportion of patients at risk from potentially hazardous prescribing and medicines management in general (family) practice. Methods: Research subject group: "at-risk" patients registered with computerised general practices in two geographical regions in England. Design: parallel group pragmatic cluster randomised trial. Interventions: practices will be randomised to either (i) computer-generated feedback, or (ii) a pharmacist-led intervention comprising computer-generated feedback, educational outreach and dedicated support. Primary outcome measures: the proportion of patients in each practice at six and 12 months post intervention (a) with a computer-recorded history of peptic ulcer being prescribed non-selective non-steroidal anti-inflammatory drugs; (b) with a computer-recorded diagnosis of asthma being prescribed beta-blockers; (c) aged 75 years and older receiving long-term prescriptions for angiotensin converting enzyme inhibitors or loop diuretics without a recorded assessment of renal function and electrolytes in the preceding 15 months. Secondary outcome measures: these relate to a number of other examples of potentially hazardous prescribing and medicines management. Economic analysis: an economic evaluation will be done of the cost per error avoided, from the perspective of the UK National Health Service (NHS), comparing the pharmacist-led intervention with simple feedback. Qualitative analysis: a qualitative study will be conducted to explore the views and experiences of health care professionals and NHS managers concerning the interventions, and to investigate possible reasons why the interventions prove effective or ineffective. Sample size: 34 practices in each of the two treatment arms would provide at least 80% power (two-tailed alpha of 0.05) to demonstrate a 50% reduction in error rates for each of the three primary outcome measures in the pharmacist-led intervention arm, compared with an 11% reduction in the simple feedback arm. Discussion: at the time of submission of this article, 72 general practices have been recruited (36 in each arm of the trial) and the interventions have been delivered. Analysis has not yet been undertaken.
Abstract:
The Java language first came to public attention in 1995. Within a year, it was being speculated that Java might be a good language for parallel and distributed computing. Its core features, including object orientation and platform independence, as well as built-in network support and threads, have encouraged this view. Today, Java is used in almost every type of computer-based system, ranging from sensor networks to high performance computing platforms, and from enterprise applications through to complex research-based simulations. In this paper, the key features that make Java a good language for parallel and distributed computing are first discussed. Two Java-based middleware systems are then discussed: MPJ Express, an MPI-like Java messaging system, and Tycho, a wide-area asynchronous messaging framework with an integrated virtual registry. The paper concludes by highlighting the advantages of using Java as middleware to support distributed applications.
Abstract:
The sampling of a certain solid angle is a fundamental operation in realistic image synthesis, where the rendering equation describing light propagation in closed domains is solved. Monte Carlo methods for solving the rendering equation sample the solid angle subtended by the unit hemisphere or unit sphere in order to perform the numerical integration of the rendering equation. In this work we consider the problem of generating uniformly distributed random samples over the hemisphere and sphere. Our aim is to construct and study a parallel sampling scheme for the hemisphere and sphere. First, we apply a symmetry property to partition the hemisphere and sphere: the domain of solid angle subtended by a hemisphere is divided into a number of equal sub-domains, each representing the solid angle subtended by an orthogonal spherical triangle with fixed vertices and computable parameters. We then introduce two new algorithms for sampling orthogonal spherical triangles. Both algorithms are based on a transformation of the unit square and, similarly to Arvo's algorithm for sampling an arbitrary spherical triangle, accommodate stratified sampling. We derive the necessary transformations for the algorithms. The first sampling algorithm generates a sample by mapping the unit square onto the orthogonal spherical triangle. The second algorithm directly computes the unit radius vector of a sampling point inside the orthogonal spherical triangle. The sampling of the whole hemisphere or sphere is performed in parallel for all sub-domains simultaneously, using the symmetry property of the partitioning. The applicability of the corresponding parallel sampling scheme to Monte Carlo and quasi-Monte Carlo solution of the rendering equation is discussed.
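The general unit-square-to-hemisphere mapping that underlies such schemes can be sketched as follows; this is the standard stratified uniform hemisphere sampling construction, not the paper's orthogonal-spherical-triangle algorithms.

```python
# Minimal sketch: stratified samples of the unit square mapped onto directions
# uniformly distributed over the unit hemisphere about the z-axis
# (cos(theta) uniform in [0, 1], phi uniform in [0, 2*pi)).
import numpy as np

def sample_hemisphere(n_u, n_v, rng):
    """Stratified uniform sampling of the upper hemisphere from an n_u x n_v grid."""
    i, j = np.meshgrid(np.arange(n_u), np.arange(n_v), indexing="ij")
    u = (i + rng.uniform(size=i.shape)) / n_u     # jittered stratum coordinates
    v = (j + rng.uniform(size=j.shape)) / n_v
    z = u                                         # cos(theta), uniform in [0, 1]
    r = np.sqrt(np.maximum(0.0, 1.0 - z * z))
    phi = 2.0 * np.pi * v
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=-1)

rng = np.random.default_rng(0)
dirs = sample_hemisphere(16, 16, rng).reshape(-1, 3)
print(dirs.shape, np.allclose(np.linalg.norm(dirs, axis=1), 1.0))
```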
Abstract:
A parallel interference cancellation (PIC) detection scheme is proposed to suppress the impact of imperfect synchronisation. By treating as interference the extra components in the received signal caused by timing misalignment, the PIC detector not only offers much improved performance but also retains a low structural and computational complexity.
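A textbook two-user PIC stage can be sketched as follows; the timing-misalignment terms that the proposed detector models explicitly are not reproduced here, and the amplitudes, spreading codes and noise level are arbitrary.

```python
# Illustrative sketch of one parallel interference cancellation (PIC) stage:
# initial matched-filter decisions are used to reconstruct and subtract each
# user's interference before both users are re-detected in parallel.
import numpy as np

rng = np.random.default_rng(1)
N, n_bits = 31, 1000
s1, s2 = (rng.choice([-1.0, 1.0], size=(2, N)) / np.sqrt(N))   # unit-energy codes
b1, b2 = rng.choice([-1.0, 1.0], size=(2, n_bits))             # data bits
r = np.outer(b1, s1) + 0.8 * np.outer(b2, s2) + 0.3 * rng.normal(size=(n_bits, N))

# stage 0: conventional matched-filter decisions
b1_hat0 = np.sign(r @ s1)
b2_hat0 = np.sign(r @ s2)

# stage 1: subtract the other user's reconstructed signal, re-detect in parallel
b1_hat1 = np.sign((r - 0.8 * np.outer(b2_hat0, s2)) @ s1)
b2_hat1 = np.sign((r - np.outer(b1_hat0, s1)) @ s2)

print("user-1 BER:", np.mean(b1_hat0 != b1), "->", np.mean(b1_hat1 != b1))
```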