435 results for distributed generation (DG)
Abstract:
Next Generation Sequencing (NGS) has revolutionised molecular biology, resulting in an explosion of data sets and an increasing role in clinical practice. Such applications necessarily require rapid identification of the organism as a prelude to annotation and further analysis. NGS data consist of a substantial number of short sequence reads, given context through downstream assembly and annotation, a process requiring reads consistent with the assumed species or species group. Highly accurate results have been obtained for restricted sets using SVM classifiers, but such methods are difficult to parallelise and their success depends on careful attention to feature selection. This work examines the problem at very large scale, using a mix of synthetic and real data with a view to determining the overall structure of the problem and the effectiveness of parallel ensembles of simpler classifiers (principally random forests) in addressing the challenges of large-scale genomics.
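As a concrete illustration of the feature-extraction step that such read classifiers depend on, the following sketch counts overlapping k-mers in a read and emits a fixed-length feature vector; the choice of k, the alphabet handling, and the encoding are illustrative assumptions, not the paper's exact pipeline:

```python
from collections import Counter
from itertools import product

def kmer_features(read, k=3):
    """Count overlapping k-mers in a read and return a fixed-length
    vector ordered lexicographically over A/C/G/T (illustrative
    encoding; real pipelines must also handle ambiguous bases)."""
    counts = Counter(read[i:i + k] for i in range(len(read) - k + 1))
    return [counts["".join(p)] for p in product("ACGT", repeat=k)]

# An 8-base read yields 6 overlapping 3-mers spread over 64 feature slots
features = kmer_features("ACGTACGT", k=3)
```

Vectors like this are what a random forest (or any per-read classifier) would consume, one row per read.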
Abstract:
Platelet-derived microparticles (PMPs), which are produced during platelet activation, contribute to coagulation [1] and bind to traumatized endothelium in an animal model [2]. Such endothelial injury occurs during percutaneous transluminal coronary angioplasty (PTCA), a procedure which restores the diameter of occluded coronary arteries using balloon inflations. However, re-occlusions subsequently develop in 20-25% of patients [3], although this is limited by treatment with anti-platelet glycoprotein IIb/IIIa receptor drugs such as abciximab [4]. However, abciximab only partially decreases the need for revascularisation [5], and therefore other mechanisms appear to be involved. As platelet activation occurs during PTCA, it is likely that PMPs may be produced and contribute to restenosis. The study population consisted of 113 PTCA patients, of whom 38 received abciximab. Paired peripheral arterial blood samples were obtained from the PTCA sheath: 1) following heparinisation (baseline); and 2) subsequent to all vessel manipulation (post-PTCA). Blood was prepared with an anti-CD61 (glycoprotein IIIa) fluorescence-conjugated antibody to identify PMPs using flow cytometry, and PMP results were expressed as a percentage of all CD61 events. The level of PMPs increased significantly from baseline following PTCA in the group without abciximab (paired t test, P=0.019). However, there was no significant change in the level of PMPs following PTCA in patients who received abciximab. Baseline clinical characteristics between patient groups were similar, although patients administered abciximab had more complex PTCA procedures, such as increased balloon inflation pressures (ANOVA, P=0.0219). In this study, we have clearly demonstrated that the level of CD61-positive PMPs increased during PTCA. This trend has been demonstrated previously, although a low sample size prevented statistical significance being attained [6].
The results of our work also demonstrate that there was no increase in PMPs after PTCA with abciximab treatment. The increased PMPs may adhere to traumatized endothelium, contributing to re-occlusion of the arteries, but this remains to be determined. References: (1) Holme PA, Brosstad F, Solum NO. Blood Coagulation and Fibrinolysis. 1995;6:302-310. (2) Merten M, Pakala R, Thiagarajan P, Benedict CR. Circulation. 1999;99:2577-2582. (3) Califf RM. American Heart Journal. 1995;130:680-684. (4) Coller BS, Scudder LE. Blood. 1985;66:1456-1459. (5) Topol EJ, Califf RM, Weisman HF, Ellis SG, Tcheng JE, Worley S, Ivanhoe R, George BS, Fintel D, Weston M, Sigmon K, Anderson KM, Lee KL, Willerson JT, on behalf of the EPIC investigators. Lancet. 1994;343:881-886. (6) Scharf RE, Tomer A, Marzec UM, Teirstein PS, Ruggeri ZM, Harker LA. Arteriosclerosis and Thrombosis. 1992;12:1475-1487.
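The paired t statistic used in the baseline-versus-post-PTCA comparison can be sketched in a few lines; the PMP percentages below are invented stand-ins, not the study's measurements:

```python
import math

def paired_t(before, after):
    """Paired t statistic: mean of per-subject differences divided by
    its standard error (p-value lookup against the t distribution with
    n-1 degrees of freedom is omitted here)."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Illustrative baseline vs post-PTCA PMP percentages (not study data)
t_stat = paired_t([3.1, 2.8, 3.5, 2.9], [4.0, 3.6, 4.4, 3.3])
```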
Abstract:
In this chapter we continue the exposition of crypto topics that was begun in the previous chapter. This chapter covers secret sharing, threshold cryptography, signature schemes, and finally quantum key distribution and quantum cryptography. As in the previous chapter, we have focused only on the essentials of each topic. We have selected in the bibliography a list of representative items, which can be consulted for further details. First we give a synopsis of the topics that are discussed in this chapter. Secret sharing is concerned with the problem of how to distribute a secret among a group of participating individuals, or entities, so that only predesignated collections of individuals are able to recreate the secret by collectively combining the parts of the secret that were allocated to them. There are numerous applications of secret-sharing schemes in practice. One example of secret sharing occurs in banking. For instance, the combination to a vault may be distributed in such a way that only specified collections of employees can open the vault by pooling their portions of the combination. In this way the authority to initiate an action, e.g., the opening of a bank vault, is divided for the purposes of providing security and for added functionality, such as auditing, if required. Threshold cryptography is a relatively recently studied area of cryptography. It deals with situations where the authority to initiate or perform cryptographic operations is distributed among a group of individuals. Many of the standard operations of single-user cryptography have counterparts in threshold cryptography. Signature schemes deal with the problem of generating and verifying (electronic) signatures for documents. A subclass of signature schemes is concerned with the shared generation and the shared verification of signatures, where a collaborating group of individuals is required to perform these actions.
A new paradigm of security has recently been introduced into cryptography with the emergence of the ideas of quantum key distribution and quantum cryptography. While classical cryptography employs various mathematical techniques to restrict eavesdroppers from learning the contents of encrypted messages, in quantum cryptography the information is protected by the laws of physics.
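The secret-sharing idea described above can be made concrete with a minimal (t, n) threshold sketch in the style of Shamir's scheme: the secret is the constant term of a random polynomial over a prime field, shares are evaluations of it, and any t shares reconstruct the secret by Lagrange interpolation at zero. The field modulus and parameters below are illustrative, not from the chapter:

```python
import random

P = 2087  # small prime field modulus (illustrative; real schemes use large primes)

def make_shares(secret, t, n):
    """Split `secret` into n shares such that any t of them reconstruct it:
    shares are points (x, f(x)) on a random degree t-1 polynomial with
    f(0) = secret, all arithmetic modulo P."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers f(0), the secret."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(1234, t=3, n=5)
recovered = reconstruct(shares[:3])  # any 3 of the 5 shares suffice
```

Fewer than t shares reveal nothing about the secret, which is what makes the bank-vault example workable: any approved trio of employees can open the vault, but no pair can.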
Abstract:
Given the increased importance of adaptation debates in global climate negotiations, pressure to achieve biodiversity, food and water security through managed landscape-scale adaptation will likely increase across the globe over the coming decade. In parallel, emerging market-based, terrestrial greenhouse gas abatement (GGA) programs present a real opportunity to secure such adaptation to climate change through enhanced landscape resilience. Australia has an opportunity to take advantage of such programs through the regional planning aspects of its governance arrangements for natural resource management (NRM). This paper explores necessary reforms to Australia's regional NRM planning systems to ensure that they will be better able to direct the nation's emerging GGA programs to secure enhanced landscape adaptation. © 2013 Planning Institute Australia.
Abstract:
MapReduce is a computation model for processing large data sets in parallel on large clusters of machines, in a reliable, fault-tolerant manner. A MapReduce computation is broken down into a number of map tasks and reduce tasks, which are performed by so-called mappers and reducers, respectively. The placement of the mappers and reducers on the machines directly affects the performance and cost of the MapReduce computation. From the computational point of view, the mappers/reducers placement problem is a generalisation of the classical bin packing problem, which is NP-complete. Thus, in this paper we propose a new grouping genetic algorithm for the mappers/reducers placement problem in cloud computing. Compared with the original one, our grouping genetic algorithm uses an innovative coding scheme and also eliminates the inversion operator, which is an essential operator in the original grouping genetic algorithm. The new grouping genetic algorithm is evaluated by experiments, and the experimental results show that it is much more efficient than four popular algorithms for the problem, including the original grouping genetic algorithm.
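To make the bin-packing framing concrete, here is the classical first-fit decreasing heuristic placing tasks (mappers/reducers) onto identical-capacity machines; it illustrates the problem the grouping genetic algorithm addresses, not the paper's algorithm, and the task names and capacities are invented:

```python
def first_fit_decreasing(task_demands, machine_capacity):
    """Assign each task to the first machine with enough remaining
    capacity, opening a new machine when none fits; tasks are taken
    in decreasing order of demand (classical bin-packing heuristic)."""
    machines = []    # remaining capacity of each opened machine
    placement = {}   # task -> machine index
    for task, demand in sorted(task_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(machines):
            if demand <= free:
                machines[i] -= demand
                placement[task] = i
                break
        else:
            machines.append(machine_capacity - demand)  # open a new machine
            placement[task] = len(machines) - 1
    return placement, len(machines)

# Four tasks with resource demands 4, 3, 3, 2 fit on two machines of capacity 6
placement, used = first_fit_decreasing(
    {"map1": 4, "map2": 3, "reduce1": 3, "reduce2": 2}, machine_capacity=6)
```

A genetic algorithm searches the same placement space but can escape the greedy heuristic's local optima, at higher computational cost.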
Abstract:
Motivated by the need for private set operations in a distributed environment, we extend the two-party private matching problem proposed by Freedman, Nissim and Pinkas (FNP) at Eurocrypt’04 to the distributed setting. By using a secret sharing scheme, we provide a distributed solution of the FNP private matching, called distributed private matching. In our distributed private matching scheme, we use a polynomial to represent one party’s dataset, as in FNP, and then distribute the polynomial to multiple servers. We extend our solution to the distributed set intersection and the cardinality of the intersection, and further we show how to apply the distributed private matching to compute the distributed subset relation. Our work extends the primitives of private matching and set intersection by Freedman et al. Our distributed construction might be of great value when the dataset is outsourced and its privacy is the main concern. In such cases, our distributed solutions keep the utility of those set operations while the dataset privacy is not compromised. Compared with previous works, we achieve a more efficient solution in terms of computation. All protocols constructed in this paper are provably secure against a semi-honest adversary under the Decisional Diffie-Hellman assumption.
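The polynomial representation at the heart of the FNP construction can be sketched in plain arithmetic: a party's dataset becomes the coefficient vector of a polynomial whose roots are its elements, so evaluating the polynomial at a candidate value tests membership. The homomorphic-encryption and secret-sharing layers of the actual protocol are omitted here:

```python
def set_polynomial(elements):
    """Return low-to-high coefficients of prod_(e in S) (x - e),
    the polynomial whose roots are exactly the set's elements."""
    coeffs = [1]
    for e in elements:
        nxt = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            nxt[i + 1] += c    # contribution of x * c x^i
            nxt[i] -= e * c    # contribution of -e * c x^i
        coeffs = nxt
    return coeffs

def evaluate(coeffs, y):
    """P(y) == 0 exactly when y is in the encoded set."""
    return sum(c * y ** i for i, c in enumerate(coeffs))

poly = set_polynomial([2, 5, 7])   # (x-2)(x-5)(x-7)
is_member = evaluate(poly, 5) == 0
```

In the real protocol the coefficients are encrypted (or, in the distributed variant, secret-shared across servers), so the evaluator learns only whether the result decrypts to zero, not the set itself.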
Abstract:
A comparison of relay power minimisation subject to a received signal-to-noise ratio (SNR) constraint at the receiver, and SNR maximisation subject to a constraint on the total transmitted power of the relays, is presented for a typical wireless network with distributed beamforming. It is desirable to maximise receiver quality-of-service (QoS) and also to minimise the cost of transmission in terms of power. Hence, these two optimisation problems are very common and have been addressed separately in the literature. It is shown that SNR maximisation subject to a power constraint and power minimisation subject to an SNR constraint yield the same results for a typical wireless network, proving that either optimisation approach is sufficient.
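Written side by side in illustrative notation (w the vector of relay beamforming weights, P_T the total relay transmit power, γ a target SNR, P_0 a power budget; these symbols are assumptions, not fixed by the abstract), the two problems are:

```latex
\min_{\mathbf{w}} \; P_T(\mathbf{w}) \quad \text{s.t.} \quad \mathrm{SNR}(\mathbf{w}) \ge \gamma
\qquad \text{versus} \qquad
\max_{\mathbf{w}} \; \mathrm{SNR}(\mathbf{w}) \quad \text{s.t.} \quad P_T(\mathbf{w}) \le P_0
```

The claimed equivalence is the usual inverse relationship between such problem pairs: solving either one with the other's optimal value as its constraint yields the same beamforming weights.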
Abstract:
Molecular orbital calculations have predicted the stability of a range of connectivities for the radical C5H potential surface. The most energetically favorable of these include the linear C4CH geometry and two ring-chain structures, HC2C3 and C2C3H. The corresponding anions are also shown to be theoretically stable, and furthermore, a fourth isomer, C2CHC2, is predicted to be the most stable anion connectivity. These results have motivated experimental efforts. Methodologies for the generation of the non-ring-containing isomeric anions C4CH and C2CHC2 have been developed utilizing negative ion mass spectrometry. The absolute connectivities of the anions have been established using deuterium labeling, charge reversal, and neutralization reionization techniques. The success of the latter experiment confirms theoretical predictions of stability of the corresponding neutral species. This is the first reported observation of the neutral C2CHC2 species that calculations predict to be substantially less stable than the C4CH connectivity but still bound relative to isomerization processes.
Abstract:
The dicoordinated borinium ion, dihydroxyborinium, B(OH)2+, is generated from methyl boronic acid CH3B(OH)2 by dissociative electron ionization, and its connectivity is confirmed by collisional activation. Neutralization-reionization (NR) experiments on this ion indicate that the neutral B(OH)2 radical is a viable species in the gas phase. Both vertical neutralization of B(OH)2+ and reionization of B(OH)2 in the NR experiment are, however, associated with particularly unfavorable Franck-Condon factors. The differences in adiabatic and vertical electron transfer behavior can be traced back to a particular pi stabilization of the cationic species compared to the sp2-type neutral radical. Thermochemical data on several neutral and cationic boron compounds are presented, based on calculations performed at the G2 level of theory.
Abstract:
In attempting to build intelligent litigation support tools, we have moved beyond first-generation, production-rule legal expert systems. Our work supplements rule-based reasoning with case-based reasoning and intelligent information retrieval. This research specifies an approach to the case-based retrieval problem which relies heavily on an extended object-oriented/rule-based system architecture that is supplemented with causal background information. Machine learning techniques and a distributed agent architecture are used to help simulate the reasoning process of lawyers. In this paper, we outline our implementation of the hybrid IKBALS II rule-based reasoning/case-based reasoning system. It makes extensive use of an automated case representation editor and background information.
Abstract:
In attempting to build intelligent litigation support tools, we have moved beyond first-generation, production-rule legal expert systems. Our work integrates rule-based and case-based reasoning with intelligent information retrieval. When using the case-based reasoning methodology, or in our case the specialisation of case-based retrieval, we need to be aware of how to retrieve relevant experience. Our research, in the legal domain, specifies an approach to the retrieval problem which relies heavily on an extended object-oriented/rule-based system architecture that is supplemented with causal background information. We use a distributed agent architecture to help support the reasoning process of lawyers. Our approach to integrating rule-based reasoning, case-based reasoning and case-based retrieval is contrasted to the CABARET and PROLEXS architectures, which rely on a centralised blackboard architecture. We discuss in detail how our various cooperating agents interact, and provide examples of the system at work. The IKBALS system uses a specialised induction algorithm to induce rules from cases. These rules are then used as indices during the case-based retrieval process. Because we aim to build legal support tools which can be modified to suit various domains rather than single-purpose legal expert systems, we focus on principles behind developing legal knowledge based systems. The original domain chosen was the Accident Compensation Act 1989 (Victoria, Australia), which relates to the provision of benefits for employees injured at work. For various reasons, which are indicated in the paper, we changed our domain to that of the Credit Act 1984 (Victoria, Australia). This Act regulates the provision of loans by financial institutions. The rule-based part of our system which provides advice on the Credit Act has been commercially developed in conjunction with a legal firm.
We indicate how this work has led to the development of a methodology for constructing rule-based legal knowledge based systems. We explain the process of integrating this existing commercial rule-based system with the case-based reasoning and retrieval architecture.
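The induce-rules-then-index idea described above can be sketched as follows; the 1R-style induction, the attributes, and the case data are invented for illustration and are not the IKBALS algorithm or its domain model:

```python
from collections import defaultdict

# Hypothetical past cases with categorical attributes and an outcome
cases = [
    {"id": 1, "loan_type": "personal", "amount": "small", "outcome": "regulated"},
    {"id": 2, "loan_type": "personal", "amount": "large", "outcome": "regulated"},
    {"id": 3, "loan_type": "business", "amount": "large", "outcome": "unregulated"},
]

def induce_rules(cases, attrs, target="outcome"):
    """1R-style induction: for each attribute value seen in the cases,
    record the majority outcome as a rule (attr, value) -> outcome."""
    rules = {}
    for attr in attrs:
        buckets = defaultdict(lambda: defaultdict(int))
        for c in cases:
            buckets[c[attr]][c[target]] += 1
        for value, counts in buckets.items():
            rules[(attr, value)] = max(counts, key=counts.get)
    return rules

def retrieve(cases, query, rules):
    """Use the induced rules as retrieval indices: return the cases
    that share an attribute value with a rule fired by the query."""
    fired = [(a, v) for (a, v) in rules if query.get(a) == v]
    return [c for c in cases if any(c[a] == v for a, v in fired)]

rules = induce_rules(cases, ["loan_type", "amount"])
similar = retrieve(cases, {"loan_type": "personal", "amount": "small"}, rules)
```

The point of indexing through induced rules rather than raw attribute matching is that retrieval then surfaces cases that are relevant to the same legal conclusion, not merely superficially similar ones.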
Abstract:
In recent years Australian law schools have implemented various forms of peer assisted learning or mentoring, including career mentoring of final year students by former students, and orientation mentoring or tutoring of incoming first year students by later year students. The focus of these programs is therefore on the transition into or out of law school. As part of this transition, however, there is not always as great an emphasis on law students belonging to the same unit cohort serving as a learning resource for each other within their degree. This is despite the claimed preference of Generation Y students for collaborative learning environments, authentic learning experiences and the development of marketable workplace skills. In the workplace, be it professional legal practice or otherwise, colleagues rely heavily on each other for information, support and guidance. In the undergraduate law degree at the Queensland University of Technology (‘QUT’) the Torts Student Peer Mentor Program aims to supplement a student’s understanding of the substantive law of torts with the development of life-long skills. As such it has the primary objective, albeit through discussion facilitated by more senior students, of encouraging first year students to develop for themselves the skills they need to be successful both as law students and as legal practitioners. Examples of such skills include those relevant to: preparation for assessment tasks; group work; problem solving, cognition and critical thinking; independent learning; and communication. Significantly, in this way, not only do the mentees benefit from involvement in the program, but the peer mentors, or program facilitators, themselves also benefit from their participation in the real-world learning environment the program provides. This paper outlines the development and implementation of the above program, the pedagogy which influenced it, and its impact on student learning experiences.
Abstract:
Three anion isomers of formula C7H have been synthesised in the mass spectrometer by unequivocal routes. The structures of the isomers are [HCCC(C2)2]-, C6CH- and C2CHC4-. One of these, [HCCC(C2)2]-, is formed in sufficient yield to allow it to be charge stripped to the corresponding neutral radical.
Abstract:
Design of a series-connected photovoltaic generator (SPVG) capable of enhancing power quality is investigated. Analysis of the SPVG operations under disturbance conditions shows explicitly how achievable network voltage quality is affected by the SPVG injected power and its apparent power rating, and that voltage quality can be significantly improved even with a modest level of energy storage capacity incorporated in the SPVG. A control system for the SPVG is also proposed. Both simulation and laboratory tests confirm the efficacy of the distributed generator system.