975 results for DEPENDENT QUANTUM PROBLEMS
Abstract:
Proxy re-encryption (PRE) is a highly useful cryptographic primitive whereby Alice and Bob can endow a proxy with the capacity to change ciphertext recipients from Alice to Bob, without the proxy itself being able to decrypt, thereby providing delegation of decryption authority. Key-private PRE (KP-PRE) specifies an additional level of confidentiality, requiring pseudo-random proxy keys that leak no information on the identity of the delegators and delegatees. In this paper, we propose a CPA-secure KP-PRE scheme in the standard model (which we then transform into a CCA-secure scheme in the random oracle model). Both schemes enjoy highly desirable properties such as uni-directionality and multi-hop delegation. Unlike (the few) prior constructions of PRE and KP-PRE that typically rely on bilinear maps under ad hoc assumptions, the security of our construction is based on the hardness of the standard Learning-With-Errors (LWE) problem, itself reducible from worst-case lattice problems that are conjectured immune to quantum cryptanalysis, or “post-quantum”. Of independent interest, we further examine the practical hardness of the LWE assumption, using Kannan’s exhaustive-search algorithm coupled with pruning techniques. This leads to state-of-the-art parameters not only for our scheme, but also for a number of other primitives based on LWE published in the literature.
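For orientation, the LWE assumption underlying the construction can be illustrated with a minimal sketch (all parameter choices below are hypothetical and for illustration only, not the paper's): an LWE sample is a uniform matrix A together with b = A·s + e (mod q) for a secret s and small error e, and the problem is to distinguish (A, b) from uniform.

```python
import numpy as np

def lwe_sample(n=256, m=512, q=4093, sigma=3.2, seed=None):
    """Generate one decision-LWE instance (illustrative parameters only)."""
    rng = np.random.default_rng(seed)
    A = rng.integers(0, q, size=(m, n))                      # public uniform matrix
    s = rng.integers(0, q, size=n)                           # secret vector
    e = np.rint(rng.normal(0.0, sigma, size=m)).astype(int)  # small Gaussian error
    b = (A @ s + e) % q                                      # noisy inner products
    return A, b   # decision-LWE: distinguish (A, b) from (A, uniform)

A, b = lwe_sample()
```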
Abstract:
Background: Less invasive methods of determining cardiac output are now readily available. The indicator dilution technique, for example, has made it easier to continuously measure cardiac output because it uses the existing intra-arterial line. This removes the need for a pulmonary artery flotation catheter, and with it the ability to measure left atrial and left ventricular work indices, as well as the ability to monitor and measure mixed venous saturation (SvO2). Purpose: The aim of this paper is to put forward the notion that SvO2 provides valuable information about oxygen consumption and venous reserve; important measures in the critically ill to ensure oxygen supply meets cellular demand. To illustrate this, a simplified example of the septic patient is offered to highlight the changing pathophysiological sequelae of the inflammatory process and the importance of monitoring SvO2. Relevance to clinical practice: SvO2 monitoring, it could be argued, provides the gold standard for assessing arterial and venous oxygen indices in the critically ill. For the bedside ICU nurse, the wealth of information inherent in SvO2 monitoring could provide important data to assist in averting potential problems with oxygen delivery and consumption. However, it has been suggested that central venous saturation (ScvO2) might be an attractive alternative to SvO2 because it is less invasive and a sample is easier to obtain for analysis. There are problems with this approach, relating to where the catheter tip is sited and the nature of the venous admixture at that site. Studies have shown that ScvO2 is less accurate than SvO2 and should not be used as the sole guiding variable for decision-making. These studies have demonstrated an unacceptably wide variance between ScvO2 and SvO2 that depends on the presenting disease; in some cases SvO2 will be significantly lower than ScvO2. Conclusion: Whilst newer technologies have been developed to continuously measure cardiac output, SvO2 monitoring remains an important adjunct to clinical decision-making in the ICU. Given the information that it provides, seeking alternatives such as ScvO2 or blood samples obtained from femorally placed central venous lines can lead to inappropriate treatment being given or withheld. Instead, when ScvO2 is used, trending of this variable should provide clinical determinants that are usable for the bedside ICU nurse, remembering that in most conditions SvO2 will be approximately 16% lower.
Abstract:
In this paper we model a quantum dot in close proximity to a gap plasmon waveguide to study quantum dot-plasmon interactions. Assuming that the waveguide is single mode, the paper is concerned with the dependence of the spontaneous emission rate of the quantum dot on waveguide dimensions such as width and height. We compare the coupling efficiency of a gap waveguide in symmetric and asymmetric configurations, showing that the symmetric waveguide couples more efficiently to the quantum dot. We also demonstrate that an optimally placed quantum dot near a symmetric waveguide with a 50 nm x 50 nm cross section can capture 80% of the spontaneous emission into a guided plasmon mode.
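The 80% figure quoted above corresponds to the coupling (spontaneous-emission β) factor. A standard definition of this quantity, stated here for orientation rather than taken from the paper itself, is

$$\beta \;=\; \frac{\Gamma_{\mathrm{pl}}}{\Gamma_{\mathrm{pl}} + \Gamma_{\mathrm{rad}} + \Gamma_{\mathrm{nr}}},$$

where $\Gamma_{\mathrm{pl}}$ is the quantum dot's decay rate into the guided plasmon mode, $\Gamma_{\mathrm{rad}}$ its decay into free-space radiation, and $\Gamma_{\mathrm{nr}}$ its non-radiative decay; capturing 80% of the spontaneous emission corresponds to $\beta \approx 0.8$.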
Abstract:
Significant attention has been given in the urban policy literature to the integration of land-use and transport planning and policies, with a view to curbing sprawling urban form and diminishing the externalities associated with car-dependent travel patterns. By taking land-use and transport interaction into account, this debate mainly focuses on how successful integration can contribute to societal well-being, providing efficient and balanced economic growth while accomplishing the goal of developing sustainable urban environments and communities. Integration is also a focal theme of contemporary urban development models, such as smart growth, liveable neighbourhoods, and new urbanism. Even though the available planning policy options for ameliorating urban form and transport-related externalities have matured, owing to growing research and practice worldwide, there remains a lack of suitable evaluation models to reflect on the current status of urban form and travel problems or on the success of implemented integration policies. In this study we explore the applicability of indicator-based spatial indexing to assess land-use and transport integration at the neighbourhood level. For this, a spatial index is developed from a number of indicators compiled from international studies and trialled in the Gold Coast, Queensland, Australia. The results of this modelling study reveal that composite indicator methodology can yield an effective metric for determining the success of city plans in terms of their sustainability performance. The model proved useful in demarcating areas where planning intervention is applicable, and in identifying the most suitable locations for future urban development and plan amendments. Lastly, we integrate variance-based sensitivity analysis with the spatial indexing method, and discuss the applicability of the model in other urban contexts.
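As a rough illustration of the indicator-based spatial indexing idea (the indicator names, weights and values below are hypothetical, not the study's actual set), a composite index can be formed by min-max normalising each indicator per neighbourhood and combining the results with weights:

```python
import numpy as np

def composite_index(X, weights):
    """X: rows = neighbourhoods, columns = raw indicator values; weights sum to 1."""
    X = np.asarray(X, dtype=float)
    mins, maxs = X.min(axis=0), X.max(axis=0)
    spans = np.where(maxs > mins, maxs - mins, 1.0)   # avoid division by zero
    normalised = (X - mins) / spans                   # min-max normalisation per indicator
    return normalised @ np.asarray(weights)           # weighted composite score per neighbourhood

# Hypothetical indicators: dwelling density, transit-stop coverage, land-use mix
scores = composite_index([[30, 0.6, 0.4],
                          [12, 0.2, 0.7],
                          [55, 0.9, 0.5]],
                         weights=[0.4, 0.35, 0.25])
```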
Abstract:
Cryptosystems based on the hardness of lattice problems have recently acquired much importance due to their average-case to worst-case equivalence, their conjectured resistance to quantum cryptanalysis, their ease of implementation and increasing practicality, and, lately, their promising potential as a platform for constructing advanced functionalities. In this work, we construct “Fuzzy” Identity-Based Encryption from the hardness of the Learning With Errors (LWE) problem. We note that for our parameters, the underlying lattice problems (such as gapSVP or SIVP) are assumed to be hard to approximate within subexponential factors for adversaries running in subexponential time. We give CPA- and CCA-secure variants of our construction, for small and large universes of attributes. All our constructions are secure against selective-identity attacks in the standard model. Our construction is made possible by observing certain special properties that secret sharing schemes need to satisfy in order to be useful for Fuzzy IBE. We also discuss some obstacles towards realizing lattice-based attribute-based encryption (ABE).
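For orientation, Fuzzy IBE decrypts when at least a threshold t of the key's attributes match the ciphertext's, which is why threshold secret sharing plays a central role. The sketch below shows plain Shamir sharing over a prime field; it is a generic illustration of threshold reconstruction, not the paper's construction, which requires secret sharing with additional properties compatible with LWE.

```python
import random

P = 2**61 - 1  # a Mersenne prime modulus (illustrative choice)

def share(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 using exactly t shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```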
Abstract:
Composites with carbon nanotubes are increasingly used in energy storage and electronic devices because they combine the excellent properties of carbon nanotubes and polymers. Although their properties make them more attractive than conventional smart materials, their electrical properties are temperature-dependent, which is important to consider in the design of devices. To study the effects of temperature on electrically conductive multi-wall carbon nanotube/epoxy composites, thin films were prepared and the effect of temperature on the resistivity, thermal properties and Raman spectral characteristics of the composite films was evaluated. Resistivity-temperature profiles showed three distinct regions in as-cured samples and only two regions in samples whose thermal histories had been erased. In the vicinity of the glass transition temperature, the as-cured composites exhibited pronounced resistivity and enthalpic relaxation peaks, both of which disappeared after the composites’ thermal histories were erased by temperature cycling. Combined DSC, Raman spectroscopy, and resistivity-temperature analyses indicated that this phenomenon can be attributed to the physical aging of the epoxy matrix and that, in the region of the observed thermal-history-dependent resistivity peaks, structural rearrangement of the conductive carbon nanotube network occurs through a volume expansion/relaxation process. These results have led to an overall greater understanding of the temperature-dependent behaviour of conductive carbon nanotube/epoxy composites, including the positive temperature coefficient effect.
Abstract:
We propose a simple and effective way to achieve secure quantum direct secret sharing. The proposed scheme uses the properties of fountain codes to realize the physical conditions necessary for the no-cloning principle to be used for eavesdropping checks and authentication. In our scheme, to achieve a variety of security purposes, nonorthogonal-state particles are inserted into the transmitted sequence carrying the secret shares in order to disorder it. However, the positions of the inserted nonorthogonal-state particles are not announced directly, but are obtained from the degrees and positions of a sequence that are pre-shared between Alice and each Bob. Moreover, the parties can confirm whether an eavesdropper exists without exchanging classical messages. Most importantly, the proposed scheme is shown to be secure against an attacker who does not know the positions of the inserted nonorthogonal-state particles and the sequence constituted by the first particles from every EPR pair.
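For context, in a classical fountain (LT-style) code each output symbol is the XOR of a randomly chosen set of source blocks, where the set size (its "degree") is drawn from a degree distribution and the receiver needs the degree and positions to decode. The sketch below illustrates that classical encoding step only; it is not the quantum protocol itself, and the names and distribution are hypothetical.

```python
import random

def lt_encode_symbol(source_blocks, degree_weights, rng=None):
    """Produce one fountain-code output symbol from integer source blocks."""
    rng = rng or random.Random()
    k = len(source_blocks)
    degree = rng.choices(range(1, k + 1), weights=degree_weights[:k])[0]
    positions = rng.sample(range(k), degree)
    symbol = 0
    for p in positions:
        symbol ^= source_blocks[p]           # XOR the selected source blocks
    return degree, positions, symbol         # decoder needs the degree and positions

blocks = [0b1011, 0b0110, 0b1110, 0b0001]
print(lt_encode_symbol(blocks, degree_weights=[0.5, 0.25, 0.15, 0.1]))
```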
Abstract:
In this paper we introduce a formalization of Logical Imaging applied to IR in terms of Quantum Theory, through an analogy between the states of a quantum system and the terms in text documents. Our formalization relies upon the Schrödinger picture, creating an analogy between the dynamics of a physical system and the kinematics of the probabilities generated by Logical Imaging. By using Quantum Theory, it is possible to model contextual information more precisely, in a seamless and principled fashion, within the Logical Imaging process. While further work is needed to empirically validate this, the foundations for doing so are provided.
Abstract:
Social tagging systems are shown to exhibit a well-known cognitive heuristic, the guppy effect, which arises from the combination of different concepts. We present some empirical evidence of this effect, drawn from a popular social tagging Web service. The guppy effect is then described using a quantum-inspired formalism that has already been successfully applied to model the conjunction fallacy and probability judgement errors. Key to the formalism is the concept of interference, which is able to capture and quantify the strength of the guppy effect.
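In quantum-inspired models of concept combination of this general kind (a standard formulation, not necessarily the exact one used in the paper), the membership weight of a combined concept deviates from the classical average of the component weights by an interference term:

$$\mu(A \text{ and } B) \;=\; \frac{\mu(A) + \mu(B)}{2} \;+\; \mathrm{Re}\,\langle \psi_A \mid \psi_B \rangle,$$

where $\psi_A$ and $\psi_B$ are the state vectors representing the two concepts; the interference term $\mathrm{Re}\,\langle \psi_A \mid \psi_B \rangle$ is what captures and quantifies the strength of the guppy effect.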
Abstract:
Quantum-inspired models have recently attracted increasing attention in Information Retrieval. An intriguing characteristic of the mathematical framework of quantum theory is the presence of complex numbers. However, it is unclear what such numbers would actually represent or mean in Information Retrieval. The goal of this paper is to discuss the role of complex numbers within the context of Information Retrieval. First, we introduce how complex numbers are used in quantum probability theory. Then, we examine van Rijsbergen’s proposal of evoking complex-valued representations of information objects. We empirically show that such a representation is unlikely to be effective in practice (calling into question its usefulness in Information Retrieval). We then explore alternative proposals that may be more successful at realising the power of complex numbers.
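As a minimal illustration of how complex numbers enter quantum probability (a generic textbook example, not drawn from the paper's experiments), probabilities are squared moduli of complex amplitudes, so two components with identical moduli can still yield very different outcome probabilities depending on their relative phase:

```python
import cmath
import math

def p_plus(theta):
    """P(+) for the state (|0> + e^{i*theta}|1>)/sqrt(2), measured in the |+> basis."""
    amplitude = (1 + cmath.exp(1j * theta)) / 2    # <+|psi> = (1 + e^{i*theta}) / 2
    return abs(amplitude) ** 2                     # Born rule: |amplitude|^2 = (1 + cos(theta)) / 2

print(p_plus(0.0))         # 1.0  -- fully constructive interference
print(p_plus(math.pi))     # 0.0  -- fully destructive interference
print(p_plus(math.pi / 2)) # 0.5  -- no net interference
```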
Abstract:
An encryption scheme is non-malleable if giving an adversary the encryption of a message does not increase its chances of producing an encryption of a related message (under a given public key). Fischlin introduced a stronger notion, known as complete non-malleability, which requires the attacker's advantage to remain negligible even if it is allowed to transform the public key under which the related message is encrypted. Ventre and Visconti later proposed a comparison-based definition of this security notion, which is more in line with the well-studied definitions proposed by Bellare et al. The same authors also provided additional feasibility results with two constructions of completely non-malleable schemes, one in the common reference string (CRS) model using non-interactive zero-knowledge (NIZK) proofs, and another using interactive encryption schemes. Thus, the only previously known completely non-malleable (and non-interactive) scheme in the standard model is quite inefficient, as it relies on the generic NIZK approach. They left the existence of efficient schemes in the common reference string model as an open problem. Recently, two efficient public-key encryption schemes have been proposed by Libert and Yung, and by Barbosa and Farshim, both based on pairing-based identity-based encryption. At ACISP 2011, Sepahi et al. proposed a method for achieving completely non-malleable encryption in the public-key setting using lattices, but no security proof was given for the proposed scheme. In this paper we review that scheme and provide its security proof in the standard model. Our study shows that Sepahi’s scheme remains secure even in a post-quantum world, since there are currently no known quantum algorithms for solving lattice problems that perform significantly better than the best known classical (i.e., non-quantum) algorithms.
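Informally, comparison-based definitions of (complete) non-malleability compare a real experiment, in which the adversary sees a challenge ciphertext, with an ideal one in which it does not, and require the difference in success probabilities to be negligible. The precise experiments are those of the cited works; the expression below is only an informal sketch:

$$\mathbf{Adv}_{\mathcal{A}}(\lambda) \;=\; \Bigl|\Pr\bigl[\mathrm{Expt}^{\mathrm{real}}_{\mathcal{A}}(\lambda) = 1\bigr] - \Pr\bigl[\mathrm{Expt}^{\mathrm{ideal}}_{\mathcal{A}}(\lambda) = 1\bigr]\Bigr| \;\le\; \mathrm{negl}(\lambda),$$

where, in the complete non-malleability setting, the adversary may additionally output its related ciphertexts under a public key of its own choosing.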
Abstract:
2,2′-Biphenols are a large and diverse group of compounds with exceptional properties both as ligands and as bioactive agents. Traditional methods for their synthesis by oxidative dimerisation are often problematic and lead to mixtures of ortho- and para-connected regioisomers. To compound these issues, an intermolecular dimerisation strategy is often inappropriate for the synthesis of heterodimers. The ‘acetal method’ provides a solution to these problems: stepwise tethering of two monomeric phenols enables heterodimer synthesis, enforces ortho regioselectivity and allows relatively facile and selective intramolecular reactions to take place. The resulting dibenzo[1,3]dioxepines have been analysed by quantum chemical calculations to obtain information about the activation barrier for the ring flip between the enantiomers. Hydrolytic removal of the dioxepine acetal unit revealed the 2,2′-biphenol target.
Abstract:
The complex supply chain relations of the construction industry, coupled with the substantial amount of information to be shared on a regular basis between the parties involved, make traditional paper-based data interchange methods inefficient, error-prone and expensive. Information technology (IT) applications that enable seamless data interchange, such as Electronic Data Interchange (EDI) systems, have generally failed to be successfully implemented in the construction industry. An alternative emerging technology, Extensible Markup Language (XML), is analysed for its applicability to streamlining business processes and improving data interchange methods within the construction industry; EDI technology is also analysed in order to identify the strategic advantages that XML provides in overcoming the barriers to implementation. In addition, the successful implementation of XML-based automated data interchange platforms in a large organization, and the proposed benefits thereof, are presented as a case study.
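As a small illustration of the kind of structured interchange XML enables (the element names and values here are hypothetical, not from any construction-industry schema), a document can be built, serialised and parsed with the Python standard library alone:

```python
import xml.etree.ElementTree as ET

# Build a hypothetical material-order document
order = ET.Element("materialOrder", attrib={"project": "site-42"})
item = ET.SubElement(order, "item", attrib={"code": "C25-concrete"})
ET.SubElement(item, "quantity", attrib={"unit": "m3"}).text = "120"

# Serialise, then parse it back as a receiving party would
xml_text = ET.tostring(order, encoding="unicode")
parsed = ET.fromstring(xml_text)
print(parsed.find("./item/quantity").text)  # -> 120
```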
Abstract:
A sub-domain smoothed Galerkin method is proposed to integrate the advantages of the mesh-free Galerkin method and the FEM. Arbitrarily shaped sub-domains are predefined in the problem domain with mesh-free nodes. In each sub-domain, based on the mesh-free Galerkin weak formulation, the local discrete equations are obtained using moving Kriging interpolation, in a manner similar to the discretization of high-order finite elements. A strain smoothing technique is subsequently applied to the nodal integration of each sub-domain by dividing the sub-domain into several smoothing cells. Moreover, condensation of degrees of freedom (DOFs) can be introduced into the local discrete equations to improve computational efficiency. The global governing equations of the present method are obtained, following the scheme of the FEM, by assembling all local discrete equations of the sub-domains. The mesh-free properties of the Galerkin method are retained within each sub-domain. Several 2D elastic problems have been solved with the newly proposed method to validate its computational performance. These numerical examples show that the proposed sub-domain smoothed Galerkin method is a robust technique for solving solid mechanics problems, offering high computational efficiency, good accuracy, and good convergence.
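For reference, the strain smoothing step mentioned above is usually written as an area average over each smoothing cell, which the divergence theorem converts into a boundary integral of the displacements; this is the generic form of the technique, not necessarily the paper's exact discretization:

$$\tilde{\varepsilon}_{ij}(\mathbf{x}_C) \;=\; \frac{1}{A_C}\int_{\Omega_C} \varepsilon_{ij}(\mathbf{x})\,\mathrm{d}\Omega \;=\; \frac{1}{2A_C}\oint_{\Gamma_C} \bigl(u_i n_j + u_j n_i\bigr)\,\mathrm{d}\Gamma,$$

where $\Omega_C$ is a smoothing cell of area $A_C$, $\Gamma_C$ its boundary, $\mathbf{n}$ the outward unit normal, and $u_i$ the displacement components interpolated by moving Kriging.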
Abstract:
The Comment by Mayers and Reiter criticizes our work on two counts. Firstly, it is claimed that the quantum decoherence effects that we report as a consequence of our experimental analysis of neutron Compton scattering from H in gaseous H2 are not, as we maintain, outside the framework of conventional neutron scattering theory. Secondly, it is claimed that we did not really observe such effects, owing to a faulty analysis of the experimental data, which are claimed to be in agreement with conventional theory. In this response we focus on the critical issue of the reliability of our experimental results and analysis. Using the same standard Vesuvio instrument programs used by Mayers et al., we show that, if the experimental results for H in gaseous H2 are in agreement with conventional theory, then those for D in gaseous D2 obtained in the same way cannot be, and vice versa. We expose a flaw in the calibration methodology used by Mayers et al. that leads to the present disagreement over the behaviour of H, namely the ad hoc adjustment of the measured H peak positions in time-of-flight (TOF) during the calibration of Vesuvio so that agreement is obtained with the expectation of conventional theory. We briefly address the question of the necessity of applying the theory of open quantum systems.