996 results for function sharing
Abstract:
Introducing function sharing into designs makes it possible to eliminate costly structure by adapting existing structure to perform its function. This can eliminate many inefficiencies of reusing general components in specific contexts. "Redistribution of intermediate results" focuses on instances where adaptation requires only the addition/deletion of data flow and the removal of unused code. I show that this approach unifies and extends several well-known optimization classes. The system performs search and screening by deriving, using a novel explanation-based generalization technique, operational filtering predicates from input teleological information. The key advantage is to focus the system's effort on optimizations that are easier to prove safe.
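To illustrate the idea, a minimal sketch (not taken from the paper) of redistributing an intermediate result: two consumers of the same value share one computation by adding a data-flow edge and deleting the now-redundant code.

```python
# Hypothetical illustration of "redistribution of intermediate results":
# two components each compute sum(xs); sharing the intermediate removes
# the duplicate computation.

def report_unshared(xs):
    total = sum(xs)              # first computation of the intermediate
    average = sum(xs) / len(xs)  # general component recomputes sum(xs)
    return total, average

def report_shared(xs):
    total = sum(xs)              # intermediate computed once...
    average = total / len(xs)    # ...and redistributed via a new data-flow edge
    return total, average
```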
Abstract:
Five models for human interleukin-7 (HIL-7), HIL-9, HIL-13, HIL-15 and HIL-17 have been generated with the SYBYL software package. The primary models were optimized using molecular dynamics and molecular mechanics methods. The final models were optimized using a steepest descent algorithm and a subsequent conjugate gradient method. Complexes of these interleukins with the common gamma chain of the interleukin-2 receptor (IL-2R) were constructed and subjected to energy minimization. We found that residues of the common gamma chain of IL-2R, such as Gln127 and Tyr103, are very important. Other residues, e.g. Lys70, Asn128 and Glu162, are also significant. Four hydrophobic grooves and two hydrophilic sites converge at the active site triad of the gamma chain. The binding sites for the interaction of these interleukins with the common gamma chain lie in the first and/or fourth helical domains. We therefore conclude that these interleukins bind to the common gamma chain of IL-2R through the first and fourth helical domains, in particular through residues such as lysine, arginine, asparagine, glutamic acid and aspartic acid that contact a discontinuous region of the common gamma chain of IL-2R (residues 103-210), termed the interleukin binding site. The study of these sites can be important for the development of new drugs. (C) 2000 Elsevier Science B.V. All rights reserved.
Abstract:
Even though today’s corporations recognize that they need to understand modern project management techniques (Schwalbe, 2002, p. 2), many researchers continue to provide evidence of poor IT project success. With Kotnour (2000) finding that project performance is positively associated with project knowledge, a better understanding of how to effectively manage knowledge in IT projects should have considerable practical significance for increasing the chances of project success. Using a combined qualitative/quantitative method of data collection in multiple case studies spanning four continents and comprising a variety of organizational types, this research centered on the question of why individuals working within IT project teams might be motivated towards, or inhibited from, sharing their knowledge and experience in their activities, procedures, and processes. The research concluded with the development of a new theoretical model of knowledge sharing behavior, ‘The Alignment Model of Motivational Focus’. This model suggests that an individual’s propensity to share knowledge and experience is a function of the perceived personal benefits and costs associated with the activity, balanced against the individual’s alignment to a group of ‘institutional’ factors. These factors are identified as alignments to the project team, to the organization, and, depending on the circumstances, to either the professional discipline or the community of practice to which the individual belongs.
Abstract:
Computational biology increasingly demands the sharing of sophisticated data and annotations between research groups. Web 2.0 style sharing and publication requires that biological systems be described in well-defined, yet flexible and extensible formats which enhance exchange and re-use. In contrast to many of the standards for exchange in the genomic sciences, descriptions of biological sequences show a great diversity in format and function, impeding the definition and exchange of sequence patterns. In this presentation, we introduce BioPatML, an XML-based pattern description language that supports a wide range of patterns and allows the construction of complex, hierarchically structured patterns and pattern libraries. BioPatML unifies the diversity of current pattern description languages and fills a gap in the set of XML-based description languages for biological systems. We discuss the structure and elements of the language, and demonstrate its advantages in a series of applications, showing lightweight integration between the BioPatML parser and search engine and the SilverGene genome browser. We conclude by describing our site for large-scale pattern sharing, and our efforts to seed this repository.
Abstract:
Secret-sharing schemes describe methods to securely share a secret among a group of participants. A properly constructed secret-sharing scheme guarantees that the share belonging to one participant does not reveal anything about the shares of others or even the secret itself. Besides the obvious feature which is to distribute a secret, secret-sharing schemes have also been used in secure multi-party computations and redundant residue number systems for error correction codes. In this paper, we propose that the secret-sharing scheme be used as a primitive in a Network-based Intrusion Detection System (NIDS) to detect attacks in encrypted networks. Encrypted networks such as Virtual Private Networks (VPNs) fully encrypt network traffic which can include both malicious and non-malicious traffic. Traditional NIDS cannot monitor encrypted traffic. Our work uses a combination of Shamir's secret-sharing scheme and randomised network proxies to enable a traditional NIDS to function normally in a VPN environment. In this paper, we introduce a novel protocol that utilises a secret-sharing scheme to detect attacks in encrypted networks.
Abstract:
Secret-sharing schemes describe methods to securely share a secret among a group of participants. A properly constructed secret-sharing scheme guarantees that the share belonging to one participant does not reveal anything about the shares of others or even the secret itself. Besides being used to distribute a secret, secret-sharing schemes have also been used in secure multi-party computations and redundant residue number systems for error correction codes. In this paper, we propose that the secret-sharing scheme be used as a primitive in a Network-based Intrusion Detection System (NIDS) to detect attacks in encrypted Networks. Encrypted networks such as Virtual Private Networks (VPNs) fully encrypt network traffic which can include both malicious and non-malicious traffic. Traditional NIDS cannot monitor such encrypted traffic. We therefore describe how our work uses a combination of Shamir's secret-sharing scheme and randomised network proxies to enable a traditional NIDS to function normally in a VPN environment.
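For readers unfamiliar with the primitive both of the preceding abstracts build on, a minimal sketch of Shamir's (t, n) threshold scheme over a prime field follows; the prime, threshold and helper names are illustrative and not taken from the papers.

```python
import random

P = 2**127 - 1  # a Mersenne prime; secrets and shares live in GF(P)

def split(secret, t, n):
    # Random polynomial of degree t-1 with constant term = secret;
    # share i is the point (i, f(i) mod P).
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(secret=42, t=3, n=5)
assert reconstruct(shares[:3]) == 42  # any 3 of the 5 shares recover the secret
```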
Abstract:
While undertaking the ANDS RDA Gold Standard Record Exemplars project, research data sharing was discussed with many QUT researchers. Our experiences provided rich insight into researcher attitudes towards their data and the sharing of such data. Generally, we found that traditional altruistic motivations for research data sharing did not inspire researchers, but an explanation of the more achievement-oriented benefits was more compelling.
Abstract:
Classical results in unconditionally secure multi-party computation (MPC) protocols with a passive adversary indicate that every n-variate function can be computed by n participants, such that no set of size t < n/2 participants learns any additional information other than what they could derive from their private inputs and the output of the protocol. We study unconditionally secure MPC protocols in the presence of a passive adversary in the trusted setup (‘semi-ideal’) model, in which the participants are supplied with some auxiliary information (which is random and independent from the participant inputs) ahead of the protocol execution (such information can be purchased as a “commodity” well before a run of the protocol). We present a new MPC protocol in the trusted setup model, which allows the adversary to corrupt an arbitrary number t < n of participants. Our protocol makes use of a novel subprotocol for converting an additive secret sharing over a field to a multiplicative secret sharing, and can be used to securely evaluate any n-variate polynomial G over a field F, with inputs restricted to non-zero elements of F. The communication complexity of our protocol is O(ℓ · n²) field elements, where ℓ is the number of non-linear monomials in G. Previous protocols in the trusted setup model require communication proportional to the number of multiplications in an arithmetic circuit for G; thus, our protocol may offer savings over previous protocols for functions with a small number of monomials but a large number of multiplications.
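As background for the conversion step mentioned above, the two standard sharing forms over a prime field are sketched below (definitions only; the paper's conversion subprotocol is not reproduced, and the field size is illustrative).

```python
import random

p = 101  # small prime field F_p, chosen only for illustration

def additive_shares(s, n):
    # s is split so that the shares sum to s mod p.
    shares = [random.randrange(p) for _ in range(n - 1)]
    shares.append((s - sum(shares)) % p)
    return shares

def multiplicative_shares(s, n):
    # Non-zero s is split so that the shares multiply to s mod p.
    shares = [random.randrange(1, p) for _ in range(n - 1)]
    prod = 1
    for x in shares:
        prod = (prod * x) % p
    shares.append((s * pow(prod, p - 2, p)) % p)  # modular inverse of prod
    return shares

s = 37
assert sum(additive_shares(s, 5)) % p == s
m = 1
for x in multiplicative_shares(s, 5):
    m = (m * x) % p
assert m == s
```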
Abstract:
The work addresses the problem of cheating prevention in secret sharing. Two cheating scenarios are considered. In the first, the cheaters always submit invalid shares to the combiner. In the second, the cheaters collectively decide which shares are to be modified, so the combiner receives a mixture of valid and invalid shares from the cheaters. The secret sharing scheme is said to be k-cheating immune if any group of k cheaters has no advantage over honest participants. The paper investigates the cryptographic properties that the defining function of the secret sharing scheme must satisfy for the scheme to be k-cheating immune. Constructions of secret sharing schemes immune against k cheaters are given.
Abstract:
This article examines a series of controversies within the life sciences over data sharing. Part 1 focuses upon the agricultural biotechnology firm Syngenta publishing data on the rice genome in the journal Science, and considers proposals to reform scientific publishing and funding to encourage data sharing. Part 2 examines the relationship between intellectual property rights and scientific publishing, in particular copyright protection of databases, and evaluates the declaration of the Human Genome Organisation that genomic databases should be global public goods. Part 3 looks at varying opinions on the information function of patent law, and then considers the proposals of Patrinos and Drell to provide incentives for private corporations to release data into the public domain.
Abstract:
In this thesis we study a series of multi-user resource-sharing problems for the Internet, which involve the distribution of a common resource among the participants of multi-user systems (servers or networks). We study concurrently accessible resources, which for end-users may be either exclusively or non-exclusively accessible. For each kind we suggest a separate algorithm or a modification of a common reputation scheme. Every algorithm or method is studied from different perspectives: optimality of the protocol, selfishness of end users, and fairness of the protocol for end users. On the one hand, this multifaceted analysis allows us to select the most suitable protocols among the various available ones based on trade-offs between optimality criteria. On the other hand, predictions about the future Internet dictate new rules for the optimality we should take into account and new properties of the networks that can no longer be neglected. In this thesis we have studied new protocols for such resource-sharing problems as the backoff protocol, defense mechanisms against denial-of-service attacks, and fairness and confidentiality for users in overlay networks. For the backoff protocol we present an analysis of a general backoff scheme, in which an optimization is applied to a general-view backoff function. This leads to an optimality condition for backoff protocols in both slotted-time and continuous-time models. Additionally, we present an extension of the backoff scheme that achieves fairness for the participants in an unfair environment, such as one with unequal wireless signal strengths. Finally, for the backoff algorithm we suggest a reputation scheme that deals with misbehaving nodes. For the next problem, denial-of-service attacks, we suggest two schemes that deal with malicious behavior under two conditions: forged identities and unspoofed identities. For the first we suggest a novel most-knocked-first-served algorithm, while for the latter we apply a reputation mechanism in order to restrict resource access for misbehaving nodes. Finally, we study a reputation scheme for overlay and peer-to-peer networks, where the resource is not placed on a common station but is spread across the network. The theoretical analysis suggests what behavior an end station will select under such a reputation mechanism.
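As a rough illustration of the kind of scheme analysed, a generic randomized backoff sketch follows; the window function, slot length and retry limit are placeholders (binary exponential backoff is the special case W(k) = 2^k), not the thesis's optimized protocol.

```python
import random
import time

def backoff_delay(attempt, window=lambda k: 2 ** k, slot=0.001, cap=1024):
    # After the k-th collision, wait a uniformly random number of slots
    # drawn from the current contention window W(k), capped at `cap`.
    w = min(window(attempt), cap)
    return random.randrange(w) * slot

def send(try_transmit, max_attempts=10):
    # try_transmit() returns True on success; on a collision, back off and retry.
    for attempt in range(max_attempts):
        if try_transmit():
            return True
        time.sleep(backoff_delay(attempt))
    return False
```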
Abstract:
The process cascade leading to the final accommodation of the carbohydrate ligand in the lectin’s binding site comprises enthalpic and entropic contributions of the binding partners and solvent molecules. With emphasis on lactose, N-acetyllactosamine, and thiodigalactoside as potent inhibitors of binding of galactoside-specific lectins, we addressed the question of the extent to which these parameters are affected as a function of the protein. The microcalorimetric study of carbohydrate association to the galectin from chicken liver (CG-16) and the agglutinin from Viscum album (VAA) revealed enthalpy–entropy compensation, with evident protein type-dependent changes for N-acetyllactosamine. Reduction of the entropic penalty by differential flexibility of loops or side chains and/or solvation properties of the protein will have to be reckoned with to assign a molecular cause to protein type-dependent changes in thermodynamic parameters for lectins sharing the same monosaccharide specificity.
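The compensation referred to is the standard thermodynamic relation (general background, not data from this study):

```latex
\Delta G^{\circ} \;=\; \Delta H^{\circ} - T\,\Delta S^{\circ} \;=\; -RT\ln K_a
```

so a more favourable (more negative) binding enthalpy is offset by a larger entropic penalty, leaving the free energy of binding, and hence the affinity, comparatively unchanged.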
Abstract:
In noncooperative cost sharing games, individually strategic agents choose resources based on how the welfare (cost or revenue) generated at each resource (which depends on the set of agents that choose the resource) is distributed. The focus is on finding distribution rules that lead to stable allocations, formalized by the concept of Nash equilibrium; examples include the Shapley value (budget-balanced) and marginal contribution (not budget-balanced) rules.
Recent work that seeks to characterize the space of all such rules shows that the only budget-balanced distribution rules guaranteeing equilibrium existence in all welfare sharing games are generalized weighted Shapley values (GWSVs); this is shown by exhibiting a specific 'worst-case' welfare function which requires that GWSV rules be used. Our work provides an exact characterization of the space of distribution rules (not necessarily budget-balanced) for any fixed local welfare functions, for a general class of scalable and separable games with well-known applications, e.g., facility location, routing, network formation, and coverage games.
We show that all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to GWSV rules on some 'ground' welfare functions. Therefore, it is neither the existence of some worst-case welfare function, nor the restriction of budget-balance, which limits the design to GWSVs. Also, in order to guarantee equilibrium existence, it is necessary to work within the class of potential games, since GWSVs result in (weighted) potential games.
We also provide an alternative characterization—all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to generalized weighted marginal contribution (GWMC) rules on some 'ground' welfare functions. This result is due to a deeper fundamental connection between Shapley values and marginal contributions that our proofs expose—they are equivalent given a transformation connecting their ground welfare functions. (This connection leads to novel closed-form expressions for the GWSV potential function.) Since GWMCs are more tractable than GWSVs, a designer can tradeoff budget-balance with computational tractability in deciding which rule to implement.
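For concreteness, a small sketch of the two classical (unweighted) rules contrasted above, for a single resource with local welfare function W defined on subsets of agents; the generalized weighted variants (GWSV, GWMC) studied in the paper are not reproduced here.

```python
from itertools import combinations
from math import factorial

def shapley(W, agents):
    # Standard (anonymous) Shapley value: average marginal contribution over
    # all orderings; budget-balanced, i.e. the shares sum to W(agents).
    n = len(agents)
    phi = {}
    for i in agents:
        others = [j for j in agents if j != i]
        total = 0.0
        for r in range(len(others) + 1):
            for T in combinations(others, r):
                weight = factorial(len(T)) * factorial(n - len(T) - 1) / factorial(n)
                total += weight * (W(set(T) | {i}) - W(set(T)))
        phi[i] = total
    return phi

def marginal_contribution(W, agents):
    # Marginal contribution rule; in general not budget-balanced.
    S = set(agents)
    return {i: W(S) - W(S - {i}) for i in agents}

# Example: a coverage-style welfare that saturates at 1 once any agent is present.
W = lambda S: 1.0 if S else 0.0
print(shapley(W, [1, 2, 3]))                # each agent receives W/3
print(marginal_contribution(W, [1, 2, 3]))  # each agent receives 0
```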
Abstract:
We characterize the solution to a model of consumption smoothing using financing under non-commitment and savings. We show that, under certain conditions, these two different instruments complement each other perfectly. If the rate of time preference is equal to the interest rate on savings, perfect smoothing can be achieved in finite time. We also show that, when random revenues are generated by periodic investments in capital through a concave production function, the level of smoothing achieved through financial contracts can influence the productive investment efficiency. As long as financial contracts cannot achieve perfect smoothing, productive investment will be used as a complementary smoothing device.
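The condition on the rate of time preference corresponds to the textbook consumption Euler equation (standard background, not the paper's contracting model):

```latex
u'(c_t) \;=\; \beta (1 + r)\, u'(c_{t+1})
```

where β is the discount factor and r the interest rate on savings; when β(1 + r) = 1, marginal utility, and hence consumption, is equalized across periods, consistent with the perfect smoothing the abstract describes.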