Abstract:
An encryption scheme is non-malleable if giving an encryption of a message to an adversary does not increase its chances of producing an encryption of a related message (under a given public key). Fischlin introduced a stronger notion, known as complete non-malleability, which requires attackers to have negligible advantage even if they are allowed to transform the public key under which the related message is encrypted. Ventre and Visconti later proposed a comparison-based definition of this security notion, which is more in line with the well-studied definitions proposed by Bellare et al. They also provided additional feasibility results via two constructions of completely non-malleable schemes, one in the common reference string model using non-interactive zero-knowledge (NIZK) proofs, and another using interactive encryption schemes. Consequently, the only previously known completely non-malleable (and non-interactive) scheme in the standard model is quite inefficient, as it relies on the generic NIZK approach. They left the existence of efficient schemes in the common reference string model as an open problem. Recently, two efficient public-key encryption schemes have been proposed by Libert and Yung, and by Barbosa and Farshim, both based on pairing-based identity-based encryption. At ACISP 2011, Sepahi et al. proposed a method to achieve completely non-malleable encryption in the public-key setting using lattices, but gave no security proof for the proposed scheme. In this paper we review that scheme and provide its security proof in the standard model. Our study shows that Sepahi's scheme remains secure even in a post-quantum world, since there are currently no known quantum algorithms for solving lattice problems that perform significantly better than the best known classical (i.e., non-quantum) algorithms.
Abstract:
Classical results on unconditionally secure multi-party computation (MPC) protocols with a passive adversary indicate that every n-variate function can be computed by n participants, such that no set of t < n/2 participants learns any additional information beyond what they could derive from their private inputs and the output of the protocol. We study unconditionally secure MPC protocols in the presence of a passive adversary in the trusted setup (‘semi-ideal’) model, in which the participants are supplied with some auxiliary information (which is random and independent of the participants’ inputs) ahead of the protocol execution (such information can be purchased as a “commodity” well before a run of the protocol). We present a new MPC protocol in the trusted setup model, which allows the adversary to corrupt an arbitrary number t < n of participants. Our protocol makes use of a novel subprotocol for converting an additive secret sharing over a field into a multiplicative secret sharing, and can be used to securely evaluate any n-variate polynomial G over a field F, with inputs restricted to non-zero elements of F. The communication complexity of our protocol is O(ℓ · n²) field elements, where ℓ is the number of non-linear monomials in G. Previous protocols in the trusted setup model require communication proportional to the number of multiplications in an arithmetic circuit for G; thus, our protocol may offer savings over previous protocols for functions with a small number of monomials but a large number of multiplications.
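The conversion subprotocol itself is not reproduced in the abstract, but the two sharing forms it connects are easy to sketch. The snippet below is a minimal illustration (the prime field, share counts, and function names are my own, not from the paper):

```python
import math
import random

P = 2**31 - 1  # a prime; all arithmetic is over the field GF(P)

def additive_shares(secret, n):
    """Split `secret` into n shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def multiplicative_shares(secret, n):
    """Split a non-zero `secret` into n shares whose product is it mod P."""
    assert secret % P != 0, "inputs are restricted to non-zero field elements"
    shares = [random.randrange(1, P) for _ in range(n - 1)]
    inv = pow(math.prod(shares), -1, P)  # modular inverse of the partial product
    shares.append(secret * inv % P)
    return shares
```

Reconstruction is the sum mod P in the first case and the product mod P in the second; the non-zero restriction on multiplicative shares mirrors the restriction on inputs stated in the abstract.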
Abstract:
The purpose of the present investigation was to examine relationships between coping strategies and competitive trait anxiety among ballet dancers. Participants were 104 classical ballet dancers from three professional ballet companies, two private dance schools, and two full-time university dance courses in Australia. Coping strategies were assessed using the Modified COPE scale (MCOPE: Crocker & Graham, 1995), while competitive trait anxiety was assessed using the Sport Anxiety Scale (SAS: Smith, Smoll, & Schutz, 1990). Standard multiple regression analyses showed that trait anxiety scores were significant predictors of seven of the 12 coping strategies, with moderate to large effect sizes. High trait anxious dancers reported more frequent use of all categories of coping strategies. A two-way MANOVA showed no main effects for gender or status (professionals versus students) and no significant interaction effect. The present results emphasize that the effectiveness of specific coping strategies should be considered when preparing young classical dancers for a career in professional ballet.
Abstract:
The vision of a digital earth (DE) is continuously evolving, and the next-generation infrastructures, platforms and applications are being implemented. In this article, we attempt to initiate a debate within the DE and affine communities about 'why' a digital earth curriculum (DEC) is needed, 'how' it should be developed, and 'what' it could look like. It is impossible to do justice to the Herculean effort of DEC development without extensive consultations with the broader community. We propose a frame for the debate (the what, why, and how of a DEC) and a rationale for and elements of a curriculum for educating the coming generations of digital natives, and we indicate possible realizations. We argue in particular that a DEC is not a déjà vu of the classical research and training agendas of geographic information science, remote sensing, and similar fields, and we emphasize its unique characteristics.
Abstract:
Disjoint top-view networked cameras are among the most commonly utilized networks in many applications. One of the open questions in the study of these cameras is the computation of their extrinsic parameters (positions and orientations), known as extrinsic calibration or camera localization. Current approaches either rely on strict assumptions about the object motion to obtain accurate results, or fail to provide results of high accuracy without such motion requirements. To address these shortcomings, we present a location-constrained maximum a posteriori (LMAP) approach that exploits known locations in the surveillance area, some of which the object will pass opportunistically. The LMAP approach formulates the problem as a joint inference of the extrinsic parameters and object trajectory based on the cameras' observations and the known locations. In addition, a new task-oriented evaluation metric, named MABR (the Maximum value of All image points' Back-projected localization errors' L2 norms Relative to the area of the field of view), is presented to assess the quality of the calibration results in an indoor object tracking context. Finally, results herein demonstrate the superior performance of the proposed method over the state-of-the-art algorithm under both the presented MABR and a classical evaluation metric, in simulations and real experiments.
Abstract:
Pyrido[1,2-a]benzimidazoles1,2a are interesting compounds both from the viewpoint of medicinal chemistry2–7 (solubility,7 DNA intercalation3) and materials chemistry8 (fluorescence). Of note among the former is the antibiotic drug Rifaximin,5 which contains this heteroaromatic core. The classical synthetic approach for the assembly of pyrido[1,2-a]benzimidazoles is by [3+3] cyclocondensation of benzimidazoles containing a methylene group at C2 with appropriate bielectrophiles.2a However, these procedures are often low-yielding, involve indirect/lengthy sequences, and/or provide access to a limited range of products, primarily providing derivatives with substituents located on the pyridine ring (A ring, Scheme 1).2–4 Theoretically, a good alternative synthetic method for the synthesis of pyrido[1,2-a]benzimidazoles with substituents in the benzene ring (C ring) should be accessible by intramolecular transition-metal-catalyzed C–N bond formation in N-(2-chloroaryl)pyridin-2-amines, based on chemistry recently developed in our research group.9 These substrates themselves are easily available through SNAr or selective Pd-catalyzed amination10 of 2-chloropyridine with 2-chloroanilines.11 If a synthetic procedure that eliminated the need for preactivation of the 2-position of the 2-chloroarylamino entity could be developed, this would be even more powerful, as anilines are more readily commercially available than 2-chloroanilines. Therefore the synthesis of pyrido[1,2-a]benzimidazoles (4) by a transition-metal-catalyzed intramolecular C–H amination approach from N-arylpyridin-2-amines (3) was explored (Scheme 1).
Abstract:
α-Carboxylate radical anions are potential reactive intermediates in the free radical oxidation of biological molecules (e.g., fatty acids, peptides and proteins). We have synthesised well-defined α-carboxylate radical anions in the gas phase by UV laser photolysis of halogenated precursors in an ion-trap mass spectrometer. Reactions of isolated acetate (•CH2CO2−) and 1-carboxylatobutyl (CH3CH2CH2•CHCO2−) radical anions with dioxygen yield carbonate (CO3•−) radical anions, and this chemistry is shown to be a hallmark of oxidation in simple and alkyl-substituted cross-conjugated species. Previous solution-phase studies have shown that Cα radicals in peptides, formed from free radical damage, combine with dioxygen to form peroxyl radicals that subsequently decompose into imine and keto acid products. Here, we demonstrate that a novel alternative pathway exists for two α-carboxylate Cα radical anions: the acetylglycinate radical anion (CH3C(O)NH•CHCO2−) and the model peptide radical anion YGGFG•−. Reaction of these radical anions with dioxygen results in concerted loss of carbon dioxide and hydroxyl radical. The reaction of the acetylglycinate radical anion with dioxygen reveals a two-stage process involving a slow, followed by a fast, kinetic regime. Computational modelling suggests that reversible formation of the Cα peroxyl radical facilitates proton transfer from the amide to the carboxylate group, a process reminiscent of, but distinct from, classical proton-transfer catalysis. Interestingly, inclusion of this isomerization step in the RRKM/ME modelling of a G3SX-level potential energy surface enables recapitulation of the experimentally observed two-stage kinetics.
Abstract:
In this chapter we continue the exposition of crypto topics that was begun in the previous chapter. This chapter covers secret sharing, threshold cryptography, signature schemes, and finally quantum key distribution and quantum cryptography. As in the previous chapter, we have focused only on the essentials of each topic. We have selected in the bibliography a list of representative items, which can be consulted for further details. First we give a synopsis of the topics that are discussed in this chapter. Secret sharing is concerned with the problem of how to distribute a secret among a group of participating individuals, or entities, so that only predesignated collections of individuals are able to recreate the secret by collectively combining the parts of the secret that were allocated to them. There are numerous applications of secret-sharing schemes in practice. One example of secret sharing occurs in banking. For instance, the combination to a vault may be distributed in such a way that only specified collections of employees can open the vault by pooling their portions of the combination. In this way the authority to initiate an action, e.g., the opening of a bank vault, is divided for the purposes of providing security and for added functionality, such as auditing, if required. Threshold cryptography is a relatively recently studied area of cryptography. It deals with situations where the authority to initiate or perform cryptographic operations is distributed among a group of individuals. Many of the standard operations of single-user cryptography have counterparts in threshold cryptography. Signature schemes deal with the problem of generating and verifying (electronic) signatures for documents. A subclass of signature schemes is concerned with the shared generation and shared verification of signatures, where a collaborating group of individuals are required to perform these actions.
A new paradigm of security has recently been introduced into cryptography with the emergence of the ideas of quantum key distribution and quantum cryptography. While classical cryptography employs various mathematical techniques to restrict eavesdroppers from learning the contents of encrypted messages, in quantum cryptography the information is protected by the laws of physics.
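The bank-vault example above is precisely a threshold secret-sharing scheme. As a concrete illustration (not taken from the chapter), a minimal Shamir (t, n) sketch over a prime field looks like this, where any t of the n shares recreate the secret and fewer reveal nothing:

```python
import random

P = 2**31 - 1  # prime modulus; all arithmetic is over GF(P)

def share(secret, t, n):
    """Encode `secret` as n points of a random degree-(t-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(points):
    """Lagrange-interpolate the polynomial at x = 0 from >= t points."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

In the vault analogy, `share(combination, t, n)` hands one point to each of n employees, and any t of them can pool their points to recover the combination.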
Abstract:
Secure multi-party computation (MPC) protocols enable a set of n mutually distrusting participants P_1, ..., P_n, each with their own private input x_i, to compute a function Y = F(x_1, ..., x_n), such that at the end of the protocol all participants learn the correct value of Y, while secrecy of the private inputs is maintained. Classical results in unconditionally secure MPC indicate that in the presence of an active adversary, every function can be computed if and only if the number of corrupted participants, t_a, is smaller than n/3. Relaxing the requirement of perfect secrecy and utilizing broadcast channels, one can improve this bound to t_a < n/2. All existing MPC protocols assume that uncorrupted participants are truly honest, i.e., they are not even curious about learning other participants' secret inputs. Based on this assumption, some MPC protocols are designed in such a way that, after elimination of all misbehaving participants, the remaining ones learn all information in the system. This is not consistent with maintaining the privacy of the participants' inputs. Furthermore, an improvement of the classical results given by Fitzi, Hirt, and Maurer indicates that in addition to t_a actively corrupted participants, the adversary may simultaneously corrupt some participants passively. This is in contrast to the assumption that participants who are not corrupted by an active adversary are truly honest. This paper examines the privacy of MPC protocols, and introduces the notion of an omnipresent adversary, which cannot be eliminated from the protocol. The omnipresent adversary can be passive, active, or mixed. We assume that up to a minority of the participants who are not corrupted by an active adversary can be corrupted passively, with the restriction that at any time the number of corrupted participants does not exceed a predetermined threshold.
We also show that the existence of a t-resilient protocol for a group of n participants implies the existence of a t′-private protocol for a group of n′ participants. That is, the elimination of misbehaving participants from a t-resilient protocol leads to the decomposition of the protocol. Our adversary model stipulates that an MPC protocol never operates with a set of truly honest participants (a more realistic scenario). Therefore, the privacy of all participants who properly follow the protocol will be maintained. We present a novel disqualification protocol to avoid a loss of privacy for participants who properly follow the protocol.
Abstract:
The placement of the mappers and reducers on the machines directly affects the performance and cost of a MapReduce computation in cloud computing. From the computational point of view, the mappers/reducers placement problem is a generalization of the classical bin packing problem, which is NP-complete. Thus, in this paper we propose a new heuristic algorithm for the mappers/reducers placement problem in cloud computing and evaluate it by comparing it with several other heuristics on solution quality and computation time, solving a set of test problems with various characteristics. The computational results show that our heuristic algorithm is much more efficient than the other heuristics. We also verify the effectiveness of our heuristic algorithm by comparing the mapper/reducer placement it generates for a benchmark problem with a conventional mapper/reducer placement. The comparison results show that the computation using our mapper/reducer placement is much cheaper while still satisfying the computation deadline.
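The abstract does not specify the paper's heuristic, but the bin packing view it invokes can be illustrated with the classic first-fit-decreasing rule, sketched below (task sizes and machine capacity are invented for illustration):

```python
def first_fit_decreasing(tasks, capacity):
    """Pack task sizes onto as few capacity-limited machines as the rule finds.

    Tasks are placed largest-first, each into the first machine with room,
    opening a new machine only when none fits.
    """
    bins = []  # each bin is the list of task sizes assigned to one machine
    for size in sorted(tasks, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])  # no existing machine fits: open a new one
    return bins
```

For example, `first_fit_decreasing([5, 4, 4, 3, 2, 2], 10)` yields three machines, `[[5, 4], [4, 3, 2], [2]]`; like the problem in the paper, even this simple relaxation is only a heuristic and can miss the optimal packing (two machines here).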
Abstract:
MapReduce is a computation model for processing large data sets in parallel on large clusters of machines, in a reliable, fault-tolerant manner. A MapReduce computation is broken down into a number of map tasks and reduce tasks, which are performed by so-called mappers and reducers, respectively. The placement of the mappers and reducers on the machines directly affects the performance and cost of the MapReduce computation. From the computational point of view, the mappers/reducers placement problem is a generalization of the classical bin packing problem, which is NP-complete. Thus, in this paper we propose a new grouping genetic algorithm for the mappers/reducers placement problem in cloud computing. Compared with the original one, our grouping genetic algorithm uses an innovative coding scheme and also eliminates the inversion operator, which is an essential operator in the original grouping genetic algorithm. The new grouping genetic algorithm is evaluated by experiments, and the experimental results show that it is much more efficient than four popular algorithms for the problem, including the original grouping genetic algorithm.
Abstract:
Unsaturated water flow in soil is commonly modelled using Richards’ equation, which requires the hydraulic properties of the soil (e.g., porosity, hydraulic conductivity, etc.) to be characterised. Naturally occurring soils, however, are heterogeneous in nature, that is, they are composed of a number of interwoven homogeneous soils, each with its own set of hydraulic properties. When the length scale of these soil heterogeneities is small, numerical solution of Richards’ equation is computationally impractical due to the immense effort and refinement required to mesh the actual heterogeneous geometry. A classic way forward is to use a macroscopic model, where the heterogeneous medium is replaced with a fictitious homogeneous medium that attempts to give the average flow behaviour at the macroscopic scale (i.e., at a scale much larger than the scale of the heterogeneities). Using homogenisation theory, a macroscopic equation can be derived that takes the form of Richards’ equation with effective parameters. A disadvantage of the macroscopic approach, however, is that it fails in cases when the assumption of local equilibrium does not hold. This limitation has seen the introduction of two-scale models that include, at each point in the macroscopic domain, an additional flow equation at the scale of the heterogeneities (the microscopic scale). This report outlines a well-known two-scale model and contributes to the literature a number of important advances in its numerical implementation. These include the use of an unstructured control volume finite element method and image-based meshing techniques, which allow irregular micro-scale geometries to be treated, and the use of an exponential time integration scheme that permits both scales to be resolved simultaneously in a completely coupled manner.
Numerical comparisons against a classical macroscopic model confirm that only the two-scale model correctly captures the important features of the flow for a range of parameter values.
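For reference, a standard mixed form of Richards' equation, which both the macroscopic and two-scale models discretise (symbols here are the conventional ones, not taken from the report: θ is volumetric water content, h pressure head, K(h) hydraulic conductivity, and z the vertical coordinate), is

```latex
\frac{\partial \theta(h)}{\partial t}
  = \nabla \cdot \bigl( K(h)\, \nabla (h + z) \bigr)
```

The nonlinear dependence of θ and K on h is what makes the effective-parameter (homogenised) form fail when local equilibrium between the interwoven soils breaks down.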
Abstract:
BACKGROUND Integrating plant genomics and classical breeding is a challenge for both plant breeders and molecular biologists. Marker-assisted selection (MAS) is a tool that can be used to accelerate the development of novel apple varieties, such as cultivars that have fruit with anthocyanin through to the core. In addition, determining the inheritance of novel alleles, such as the one responsible for red flesh, adds to our understanding of allelic variation. Our goal was to map candidate anthocyanin biosynthetic and regulatory genes in a population segregating for the red flesh phenotype. RESULTS We have identified the Rni locus, a major genetic determinant of red foliage and red colour in the core of apple fruit. In a population segregating for the red flesh and foliage phenotype, we have determined the inheritance of the Rni locus and DNA polymorphisms of candidate anthocyanin biosynthetic and regulatory genes. Simple Sequence Repeats (SSRs) and Single Nucleotide Polymorphisms (SNPs) in the candidate genes were also located on an apple genetic map. We have shown that the MdMYB10 gene co-segregates with the Rni locus and is on Linkage Group (LG) 09 of the apple genome. CONCLUSION We have performed candidate gene mapping in a fruit tree crop and have provided genetic evidence that red colouration in the fruit core as well as red foliage are both controlled by a single locus named Rni. We have shown that the transcription factor MdMYB10 may be the gene underlying Rni, as there were no recombinants between the marker for this gene and the red phenotype in a population of 516 individuals. Associating markers derived from candidate genes with a desirable phenotypic trait demonstrates the application of genomic tools in a breeding programme for a horticultural crop species.
Abstract:
Besides classical criteria such as cost and overall organizational efficiency, an organization’s ability to be creative and to innovate is of increasing importance in markets that are overwhelmed with commodity products and services. Business Process Management (BPM), as an approach to model, analyze, and improve business processes, has been successfully applied not only to enhance performance and reduce cost but also to facilitate business imperatives such as risk management and knowledge management. Can BPM also facilitate the management of creativity? We can find many examples where enterprises unintentionally reduced or even killed creativity and innovation for the sake of control, performance, and cost reduction. Based on the experiences we have gained within case studies with organizations from the creative industries (film industry, visual effects production, etc.), we believe that BPM can be a facilitator providing the glue between creativity management and well-established business principles. In this article we introduce the notions of creativity-intensive processes and pockets of creativity as new BPM concepts. We further propose a set of exemplary strategies that enable process owners and process managers to achieve control without sacrificing creativity. Our aim is to set the baseline for further discussions on what we call creativity-oriented BPM.
Abstract:
Anthocyanin concentration is an important determinant of the colour of many fruits. In apple (Malus × domestica), centuries of breeding have produced numerous varieties in which levels of anthocyanin pigment vary widely and change in response to environmental and developmental stimuli. The apple fruit cortex is usually colourless, although germplasm does exist where the cortex is highly pigmented due to the accumulation of either anthocyanins or carotenoids. From studies in a diverse array of plant species, it is apparent that anthocyanin biosynthesis is controlled at the level of transcription. Here we report the transcript levels of the anthocyanin biosynthetic genes in a red-fleshed apple compared with a white-fleshed cultivar. We also describe an apple MYB transcription factor, MdMYB10, that is similar in sequence to known anthocyanin regulators in other species. We further show that this transcription factor can induce anthocyanin accumulation in both heterologous and homologous systems, generating pigmented patches in transient assays in tobacco leaves and highly pigmented apple plants following stable transformation with constitutively expressed MdMYB10. Efficient induction of anthocyanin biosynthesis in transient assays by MdMYB10 was dependent on the co-expression of two distinct bHLH proteins from apple, MdbHLH3 and MdbHLH33. The strong correlation between the expression of MdMYB10 and apple anthocyanin levels during fruit development suggests that this transcription factor is responsible for controlling anthocyanin biosynthesis in apple fruit; in the red-fleshed cultivar and in the skin of other varieties, there is an induction of MdMYB10 expression concurrent with colour formation during development. Characterization of MdMYB10 has implications for the development of new varieties through classical breeding or a biotechnological approach.