372 results for Maximum independent set
Abstract:
Classical results on unconditionally secure multi-party computation (MPC) with a passive adversary show that every n-variate function can be computed by n participants such that no set of t < n/2 participants learns anything beyond what it can derive from its own private inputs and the output of the protocol. We study unconditionally secure MPC protocols in the presence of a passive adversary in the trusted setup (‘semi-ideal’) model, in which the participants are supplied with auxiliary information (random and independent of the participants’ inputs) ahead of the protocol execution; such information can be purchased as a “commodity” well before a run of the protocol. We present a new MPC protocol in the trusted setup model which allows the adversary to corrupt an arbitrary number t < n of participants. Our protocol makes use of a novel subprotocol for converting an additive secret sharing over a field into a multiplicative secret sharing, and can be used to securely evaluate any n-variate polynomial G over a field F, with inputs restricted to non-zero elements of F. The communication complexity of our protocol is O(ℓ · n²) field elements, where ℓ is the number of non-linear monomials in G. Previous protocols in the trusted setup model require communication proportional to the number of multiplications in an arithmetic circuit for G; thus, our protocol may offer savings over previous protocols for functions with a small number of monomials but a large number of multiplications.
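To make the two sharing representations concrete, here is a minimal Python sketch (the field modulus and function names are illustrative assumptions; the paper's actual conversion subprotocol is interactive and consumes the trusted-setup randomness, neither of which is shown). It also illustrates why multiplicative sharings suit monomials: parties can multiply their shares locally, with no interaction.

```python
import math
import random

p = 2**61 - 1  # prime field modulus (illustrative choice)

def additive_share(secret, n):
    """Split `secret` into n shares that sum to it mod p."""
    shares = [random.randrange(p) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % p)
    return shares

def multiplicative_share(secret, n):
    """Split a non-zero `secret` into n shares whose product is it mod p."""
    assert secret % p != 0  # mirrors the paper's non-zero input restriction
    shares = [random.randrange(1, p) for _ in range(n - 1)]
    inv = pow(math.prod(shares), p - 2, p)  # modular inverse via Fermat
    shares.append(secret * inv % p)
    return shares

# Share products are local: multiplying shares of 42 and 7 component-wise
# yields a multiplicative sharing of 42 * 7 without any communication.
xs = multiplicative_share(42, 5)
ys = multiplicative_share(7, 5)
zs = [a * b % p for a, b in zip(xs, ys)]
assert math.prod(zs) % p == 42 * 7
```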
Abstract:
Numeric set watermarking is a way to provide ownership proof for numerical data. Numerical data can be considered primitives for multimedia types such as images and videos, since these are organized forms of numeric information. Thus, the capability to watermark numerical data directly implies the capability to watermark multimedia objects and to discourage information theft on social networking sites and the Internet in general. Unfortunately, research on numeric set watermarking has been very limited, owing to inherent constraints on the number of items in the set and on the least significant bits (LSBs) available for watermarking in each item. In 2009, Gupta et al. proposed a numeric set watermarking model that embeds watermark bits in the items of the set based on a hash of the items’ most significant bits (MSBs). If an item is chosen for watermarking, a watermark bit is embedded in its least significant bits, and the replaced bit is inserted in the fractional value to provide reversibility. The authors show their scheme to be resilient against the traditional subset addition, deletion, and modification attacks, as well as secondary watermarking attacks. In this paper, we present a bucket attack on this watermarking model. The attack consists of creating buckets of items with the same MSBs and determining whether the items of a bucket carry watermark bits. Experimental results show that the bucket attack is very strong and destroys the entire watermark with a success rate close to 100%. We examine the inherent weaknesses in the watermarking model of Gupta et al. that leave it vulnerable to the bucket attack, and propose potential safeguards that can provide resilience against this attack.
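The bucketing step of the attack can be sketched in a few lines of Python (the bit widths and names are illustrative assumptions; the statistical test applied inside each bucket is specific to the paper and omitted):

```python
from collections import defaultdict

def msb_key(item: int, total_bits: int = 32, msb_bits: int = 8) -> int:
    """Return the top `msb_bits` of a `total_bits`-wide integer item."""
    return item >> (total_bits - msb_bits)

def bucketize(items):
    """Group items that share the same MSBs into buckets."""
    buckets = defaultdict(list)
    for item in items:
        buckets[msb_key(item)].append(item)
    return buckets
```

Because the embedding decision depends only on a hash of the MSBs, every item in a bucket was either watermarked or skipped together, so comparing LSB statistics within a bucket exposes, and lets an attacker strip, the watermark.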
Abstract:
We consider the problem of increasing the threshold parameter of a secret-sharing scheme after the setup (share distribution) phase, without further communication between the dealer and the shareholders. Previous solutions to this problem require one either to start off with a nonstandard scheme designed specifically for this purpose, or to allow communication between shareholders. In contrast, we show how to increase the threshold parameter of the standard Shamir secret-sharing scheme without communication between the shareholders. Our technique can thus be applied to existing Shamir schemes even if they were set up without consideration of future threshold increases. Our method is a new positive cryptographic application for lattice reduction algorithms, inspired by recent work on lattice-based list decoding of Reed-Solomon codes with noise bounded in the Lee norm. We use fundamental results from the theory of lattices (geometry of numbers) to prove quantitative statements about the information-theoretic security of our construction. These lattice-based security proof techniques may be of independent interest.
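For context, here is a minimal sketch of the standard (t, n) Shamir scheme whose threshold the paper increases; the prime modulus and helper names are illustrative, and the lattice-based threshold-increase technique itself is not reproduced here.

```python
import random

p = 2**31 - 1  # prime field modulus (illustrative choice)

def shamir_share(secret, t, n):
    """Random degree-(t-1) polynomial f with f(0) = secret; shares are f(i)."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, p - 2, p)) % p
    return secret

shares = shamir_share(12345, t=3, n=5)
assert reconstruct(shares[:3]) == 12345  # any 3 of the 5 shares suffice
```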
Abstract:
At Eurocrypt’04, Freedman, Nissim and Pinkas introduced the fuzzy private matching problem, defined as follows: given two parties, each holding a set of vectors with T integer components, fuzzy private matching securely tests whether each vector of one set matches some vector of the other set in at least t components, where t < T. In the conclusion of their paper, they asked whether it is possible to design a fuzzy private matching protocol whose communication complexity avoids the binomial factor (T choose t). We answer their question in the affirmative by presenting a protocol based on homomorphic encryption, combined with the novel notion of a share-hiding error-correcting secret sharing scheme, which we show how to implement with efficient decoding using interleaved Reed-Solomon codes. This scheme may be of independent interest. Our protocol is provably secure against passive adversaries, and is more efficient than previous protocols for certain parameter values.
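In the clear (ignoring privacy), the predicate the protocol computes obliviously is simply agreement on at least t of T components; a tiny sketch:

```python
def fuzzy_match(u, v, t):
    """True iff equal-length vectors u and v agree in at least t components."""
    assert len(u) == len(v)
    return sum(a == b for a, b in zip(u, v)) >= t

# Example: vectors of length T = 4 matching on 3 components.
assert fuzzy_match([1, 2, 3, 4], [1, 2, 9, 4], t=3)
```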
Abstract:
Digital learning has come a long way from the days of simple 'if-then' queries. It is now enabled by countless innovations that support knowledge sharing, openness, flexibility, and independent inquiry. Set against an evolutionary context, this study investigated innovations that directly support human inquiry. Specifically, it identified five activities that together are defined as the 'why dimension': asking, learning, understanding, knowing, and explaining why. Findings highlight deficiencies in mainstream search-based approaches to inquiry, which tend to privilege the retrieval of information as distinct from explanation. Instrumental to sense-making, the 'why dimension' provides a conceptual framework for the development of 'sense-making technologies'.
Abstract:
Motivated by the need for private set operations in a distributed environment, we extend the two-party private matching problem proposed by Freedman, Nissim and Pinkas (FNP) at Eurocrypt’04 to the distributed setting. Using a secret sharing scheme, we provide a distributed solution to the FNP private matching problem, called distributed private matching. In our distributed private matching scheme, we use a polynomial to represent one party’s dataset, as in FNP, and then distribute the polynomial to multiple servers. We extend our solution to distributed set intersection and the cardinality of the intersection, and we further show how to apply distributed private matching to compute the distributed subset relation. Our work extends the private matching and set intersection primitives of Freedman et al. Our distributed construction may be of particular value when the dataset is outsourced and its privacy is the main concern; in such cases, our distributed solutions preserve the utility of these set operations without compromising dataset privacy. Compared with previous work, we achieve a more efficient solution in terms of computation. All protocols constructed in this paper are provably secure against a semi-honest adversary under the Decisional Diffie-Hellman assumption.
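The FNP-style representation referred to above encodes a set as the roots of a polynomial. A minimal sketch in the clear follows (in the protocol the coefficients would be encrypted or secret-shared; the field modulus here is an illustrative assumption):

```python
p = 2**31 - 1  # prime field modulus (illustrative choice)

def set_polynomial(S):
    """Coefficients (low to high) of P(x) = prod_{s in S} (x - s) mod p."""
    coeffs = [1]
    for s in S:
        # Multiply the running polynomial by (x - s).
        coeffs = [(c_shift - s * c) % p
                  for c, c_shift in zip(coeffs + [0], [0] + coeffs)]
    return coeffs

def evaluate(coeffs, x):
    """Evaluate the polynomial at x via Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

S = [3, 7, 11]
P = set_polynomial(S)
assert evaluate(P, 7) == 0   # a member of S is a root of P
assert evaluate(P, 5) != 0   # a non-member evaluates to non-zero
```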
Abstract:
A dynamic accumulator is an algorithm that gathers a large set of elements into a constant-size value, such that for each accumulated element there is a witness confirming that the element was indeed included in the value, and with the property that elements can be dynamically added to and deleted from the original set at a cost independent of the number of accumulated elements. Although the first accumulator was presented ten years ago, there is still no standard formal definition of accumulators. In this paper, we generalize formal definitions for accumulators, formulate a security game for dynamic accumulators, the so-called Chosen Element Attack (CEA), and propose a new dynamic accumulator for batch updates based on the Paillier cryptosystem. Our construction performs a batch of update operations at unit cost. We prove its security under the extended strong RSA (es-RSA) assumption.
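To make the accumulator interface concrete (accumulate, witness, verify), here is a toy RSA-style sketch. Note that this is not the paper's Paillier-based batch-update construction, and the modulus is an insecurely small illustration:

```python
N = 53 * 61  # toy RSA modulus, far too small for real use
g = 2        # toy base

def accumulate(elements):
    """Accumulated value for a set of prime-valued elements."""
    acc = g
    for q in elements:
        acc = pow(acc, q, N)
    return acc

def witness(elements, q):
    """Witness for q: the accumulator of every element except q."""
    return accumulate([r for r in elements if r != q])

def verify(acc, q, w):
    """q is accumulated iff raising its witness to q recovers the value."""
    return pow(w, q, N) == acc

S = [3, 5, 7]
assert verify(accumulate(S), 5, witness(S, 5))
```

In this naive sketch every update costs one exponentiation per witness; the paper's contribution is a construction in which a whole batch of additions and deletions is absorbed at unit cost.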
Abstract:
The house advantage for Baccarat is known, hence the theoretical win can be determined. What is impractical to determine theoretically is the frequency and financial implications of extreme events, for example prolonged winning streaks coupled with various betting patterns. The simulation herein provides such granularity. We explore the effect of following the 'hot hand', that is, rapidly escalating bets when players are on a winning streak. To minimize their exposure, casino management sets a table bet maximum as well as a table differential; these figures can and do serve as a means to differentiate one casino from another. As the allowable bet maximum increases, so does the total amount bet, which increases the theoretical winnings, suggesting that a high bet limit and differential are beneficial for the house. However, the greater these amounts, the greater the number of shoes that end with players losing relative to a constant-betting scenario (the fraction of shoes in which a player comes out ahead can drop from roughly 47% to less than a quarter); but there will, on occasion, be more extreme payouts to players. This simulation is therefore intended to help casino managers set betting limits that maximize total winnings while bearing in mind both the likelihood and magnitude of negative outcomes to the casino.
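A minimal Monte Carlo sketch of the dynamic described above: press the bet after each win, capped by a table maximum, and reset on a loss. The per-hand win probability and even-money payout are simplifying assumptions (roughly a player bet with ties folded out), not the paper's calibrated model:

```python
import random

P_WIN = 0.4932            # assumed per-hand win probability (player bet, no ties)
BASE_BET, TABLE_MAX = 100, 10_000

def play_shoe(hands=80):
    """Net result of one shoe under 'hot hand' bet escalation."""
    bankroll, bet = 0, BASE_BET
    for _ in range(hands):
        if random.random() < P_WIN:
            bankroll += bet
            bet = min(bet * 2, TABLE_MAX)  # press the bet on a win
        else:
            bankroll -= bet
            bet = BASE_BET                 # streak over: reset to base
    return bankroll

results = [play_shoe() for _ in range(10_000)]
print(sum(r > 0 for r in results) / len(results))  # share of winning shoes
```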
Abstract:
This paper proposes a method for designing set-point regulation controllers for a class of underactuated mechanical systems in Port-Hamiltonian System (PHS) form. A new set of potential shape variables in closed loop is proposed, which can replace the set of open-loop shape variables (the configuration variables that appear in the kinetic energy). With this choice, the closed-loop potential energy contains free functions of the new variables. By expressing the regulation objective in terms of these new potential shape variables, the desired equilibrium can be assigned, and there is freedom to reshape the potential energy to achieve performance whilst maintaining the PHS form in closed loop. This complements contemporary results in the literature, which preserve the open-loop shape variables. As a case study, we consider a robotic manipulator mounted on a flexible base and compensate for the motion of the base while positioning the end effector with respect to the ground reference. We compare the proposed control strategy with special cases that correspond to other energy-shaping strategies previously proposed in the literature.
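For readers unfamiliar with the PHS form referred to above, the standard input-state-output port-Hamiltonian model (textbook background, not notation taken from this paper) can be written as:

```latex
\dot{x} = \bigl(J(x) - R(x)\bigr)\,\nabla H(x) + g(x)\,u,
\qquad
y = g(x)^{\top}\,\nabla H(x)
```

Here J(x) = -J(x)^T encodes the interconnection structure, R(x) = R(x)^T >= 0 the dissipation, H is the total (kinetic plus potential) energy, and (u, y) are the port variables. Energy-shaping designs of the kind described above choose u so that the closed loop retains this form with a reshaped energy H_d whose minimum lies at the desired equilibrium.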
Abstract:
Background: Flavonoids such as anthocyanins, flavonols and proanthocyanidins play a central role in fruit colour, flavour and health attributes. In peach and nectarine (Prunus persica) these compounds vary during fruit growth and ripening. Flavonoids are produced by a well-studied pathway that is transcriptionally regulated by members of the MYB and bHLH transcription factor families. We have isolated nectarine flavonoid-regulating genes and examined their expression patterns, which suggests a critical role in the regulation of flavonoid biosynthesis.
Results: In nectarine, expression of the genes encoding enzymes of the flavonoid pathway correlated with the concentration of proanthocyanidins, which increases strongly at mid-development. In contrast, the only gene whose expression followed a pattern similar to anthocyanin concentration was UDP-glucose-flavonoid-3-O-glucosyltransferase (UFGT), which was high at the beginning and end of fruit growth and remained low during the other developmental stages. Expression of flavonol synthase (FLS1) correlated with flavonol levels, both temporally and in a tissue-specific manner. The pattern of UFGT gene expression may be explained by the involvement of different transcription factors that either up-regulate (MYB10, MYB123, and bHLH3) or repress (MYB111 and MYB16) transcription of the biosynthetic genes. The expression of a potential proanthocyanidin-regulating transcription factor, MYBPA1, corresponded with proanthocyanidin levels. Functional assays of these transcription factors were used to test their specificity for flavonoid regulation.
Conclusions: MYB10 positively regulates the promoters of UFGT and dihydroflavonol 4-reductase (DFR) but not leucoanthocyanidin reductase (LAR). In contrast, MYBPA1 trans-activates the promoters of DFR and LAR, but not UFGT. This suggests exclusive roles for anthocyanin regulation by MYB10 and proanthocyanidin regulation by MYBPA1. Further, these transcription factors appeared to be responsive to both developmental and environmental stimuli.
Abstract:
Introduction: Since 1992, several articles have been published on research into plastic scintillators for use in radiotherapy. Plastic scintillators are said to be tissue equivalent, temperature independent and dose rate independent [1]. Although their properties were found to be promising for measurements in megavoltage X-ray beams, there were technical difficulties with regard to commercialisation. Standard Imaging has produced the first commercial system, which is now available for use in a clinical setting. The Exradin W1 scintillator device uses a dual-fibre system in which one fibre is connected to the plastic scintillator and the other fibre measures only Cerenkov radiation [2]. This paper presents results obtained during commissioning of this dosimeter system.
Methods: All tests were performed on a Novalis Tx linear accelerator equipped with a 6 MV SRS photon beam and conventional 6 and 18 MV X-ray beams. The following measurements were performed in a Virtual Water phantom at the depth of dose maximum. Linearity: the dose delivered was varied between 0.2 and 3.0 Gy for the same field conditions. Dose rate dependence: the repetition rate of the linac was varied between 100 and 1,000 MU/min, with a nominal dose of 1.0 Gy delivered at each rate. Reproducibility: a total of five irradiations for the same setup.
Results: The W1 detector gave a highly linear relationship between dose and the number of Monitor Units delivered for a 10 × 10 cm² field size at an SSD of 100 cm. The linearity was within 1 % at the high-dose end and about 2 % at the very low-dose end. For the dose rate dependence, the dose measured as a function of repetition rate (100–1,000 MU/min) gave a maximum deviation of 0.9 %. The reproducibility was found to be better than 0.5 %.
Discussion and conclusions: The results for this system, a new dosimetry system available for clinical use, look promising so far. However, further investigation is needed to produce a full characterisation prior to use in megavoltage X-ray beams.
Abstract:
Introduction: Due to their high spatial resolution, diodes are often used for small-field relative output factor measurements. However, a field-size-specific correction factor [1] is required, correcting for diode detector over-response at small field sizes. A recent Monte Carlo based study has shown that it is possible to design a diode detector that produces measured relative output factors equivalent to those in water, accomplished by introducing an air gap at the upstream end of the diode [2]. The aim of this study was to physically construct this diode by placing an 'air cap' on the end of a commercially available diode (the PTW 60016 electron diode). The output factors subsequently measured with the new diode design were compared to current benchmark small-field output factor measurements.
Methods: A water-tight 'cap' was constructed so that it could be placed over the upstream end of the diode. The cap could be offset from the end of the diode, thus creating an air gap. The air gap width was the same as the diode width (7 mm), and the thickness of the air gap could be varied. Output factor measurements were made using square field sizes of side length 5 to 50 mm, using a 6 MV photon beam. The set of output factor measurements was repeated with the air gap thickness set to 0, 0.5, 1.0 and 1.5 mm. The optimal air gap thickness was found in a similar manner to that proposed by Charles et al. [2]. An IBA stereotactic field diode, corrected using Monte Carlo calculated k_Qclin,Qmsr values [3], was used as the gold standard.
Results: The optimal air gap thickness required for the PTW 60016 electron diode was 1.0 mm. This was close to the Monte Carlo predicted value of 1.15 mm [2]. The sensitivity of the new diode design was independent of field size (k_Qclin,Qmsr = 1.000 at all field sizes) to within 1 %.
Discussion and conclusions: The work of Charles et al. [2] has been verified experimentally. An existing commercial diode has been converted into a correction-less small-field diode by the simple addition of an 'air cap'. Because the cap is removable, the diode is dual purpose: without the cap it remains an unmodified electron diode.
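For context, the k_Qclin,Qmsr factors mentioned above come from the standard small-field dosimetry formalism (Alfonso et al.), in which a ratio of detector readings is converted to a field output factor; a sketch of the defining relation, with the usual notation:

```latex
\Omega_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}
  = \frac{M_{Q_{clin}}^{f_{clin}}}{M_{Q_{msr}}^{f_{msr}}}
    \, k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}
```

Here M denotes the detector reading in the clinical field f_clin and the machine-specific reference field f_msr. A "correction-less" detector is one for which k = 1.000 across field sizes, which is what the air-capped diode achieves to within 1 %.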
Abstract:
Introduction: Given the known challenges of obtaining accurate measurements of small radiation fields, and the increasing use of small field segments in IMRT beams, this study examined the possible effects of referencing inaccurate field output factors in the planning of IMRT treatments.
Methods: This study used the Brainlab iPlan treatment planning system to devise IMRT treatment plans for delivery using the Brainlab m3 microMLC (Brainlab, Feldkirchen, Germany). Four pairs of sample IMRT treatments were planned using volumes, beams and prescriptions based on a set of test plans described in AAPM TG 119's recommendations for the commissioning of IMRT treatment planning systems [1]:
• C1, a set of three 4 cm volumes with different prescription doses, was modified to reduce the size of the PTV to 2 cm across and to include an OAR dose constraint for one of the other volumes.
• C2, a prostate treatment, was planned as described by the TG 119 report [1].
• C3, a head-and-neck treatment with a PTV larger than 10 cm across, was excluded from the study.
• C4, an 8 cm long C-shaped PTV surrounding a cylindrical OAR, was planned as described in the TG 119 report [1] and then replanned with the length of the PTV reduced to 4 cm.
Both plans in each pair used the same beam angles, collimator angles, dose reference points, prescriptions and constraints. However, one plan of each pair had its beam modulation optimisation and dose calculation completed with reference to existing iPlan beam data, and the other with reference to revised beam data. The beam data revisions consisted of increasing the field output factor for a 0.6 × 0.6 cm² field by 17 % and increasing the field output factor for a 1.2 × 1.2 cm² field by 3 %.
Results: The use of different beam data resulted in different optimisation results, with different microMLC apertures and segment weightings between the two plans for each treatment, which led to large differences (up to 30 %, with an average of 5 %) between reference point doses in each pair of plans. These point dose differences are more indicative of the modulation of the plans than of any clinically relevant changes to the overall PTV or OAR doses. By contrast, the differences in the maximum, minimum and mean doses to the PTVs and OARs were smaller (less than 1 % for all beams in three out of four pairs of treatment plans) but are more clinically important. Of the four test cases, only the shortened (4 cm) version of TG 119's C4 plan showed substantial differences between the overall doses calculated in the volumes of interest using the different sets of beam data, suggesting that treatment doses could be affected by changes to small field output factors. An analysis of the complexity of this pair of plans, using Crowe et al.'s TADA code [2], indicated that iPlan's optimiser had produced IMRT segments comprising larger numbers of small microMLC leaf separations than in the other three test cases.
Conclusion: The use of altered small field output factors can result in substantially altered doses when large numbers of small leaf apertures are used to modulate the beams, even when treating relatively large volumes.
Abstract:
Dietary fatty acids are known to influence the phospholipid composition of many tissues in the body, with lipid turnover occurring rapidly. The aim of this study was to investigate whether changes in the fatty acid composition of the diet can affect the phospholipid composition of the lens. Male Sprague-Dawley rats were fed three diets with distinct profiles in both essential and non-essential fatty acids. After 8 weeks, lenses and skeletal muscle were removed, and the lenses sectioned into nuclear and cortical regions. In these experiments, the lens cortex was synthesised during the course of the variable lipid diet. Phospholipids were then identified by electrospray ionisation tandem mass spectrometry, and quantified via the use of internal standards. The phospholipid compositions of the nuclear and cortical regions of the lens differed slightly between the two regions, but comparison of the equivalent regions across the diet groups showed remarkable similarity. In contrast, the phospholipid composition of skeletal muscle (medial gastrocnemius) in these rats varied significantly. This study provides the first direct evidence to show that the phospholipid composition of the lens is tightly regulated and thus appears to be independent of diet. As phospholipids determine membrane fluidity and influence the activity and function of integral membrane proteins, regulation of their composition may be important for the function of the lens.
Abstract:
There’s a diagram that does the rounds online, neatly summing up the difference between the quality of the equipment used in the studio to produce music and the quality of the listening equipment used by the consumer...