884 results for LWE practical hardness


Relevance: 20.00%

Abstract:

The author, Dean Shepherd, is interested in the psychology of entrepreneurship—how entrepreneurs think, decide to act, and feel. He recently realized that while his publications in academic journals have implications for entrepreneurs, those implications have remained relatively hidden in the text of the articles and hidden in articles published in journals largely inaccessible to those involved in the entrepreneurial process. This series is designed to bring the practical implications of his research to the forefront.

Relevance: 20.00%

Abstract:

Purpose: This work introduces the concept of very small field size. Output factor (OPF) measurements at these field sizes require extremely careful experimental methodology, including measurement of the dosimetric field size at the same time as each OPF measurement. Two quantifiable scientific definitions of the threshold of very small field size are presented.

Methods: A practical definition was established by quantifying the effect that a 1 mm error in field size or detector position has on OPFs, with the acceptable uncertainty in OPF set at 1%. For a theoretical definition of very small field size, the OPFs were separated into constituent factors to investigate the specific effects of lateral electronic disequilibrium, photon scatter in the phantom, and source occlusion; the dominant effect formed the basis of the theoretical definition. Each factor was obtained using Monte Carlo simulations of a Varian iX linear accelerator for square field sizes of side length from 4 mm to 100 mm, at a nominal photon energy of 6 MV.

Results: According to the practical definition established in this project, field sizes < 15 mm were considered very small for 6 MV beams at a maximal field size uncertainty of 1 mm. If the acceptable uncertainty in the OPF was increased from 1.0% to 2.0%, or the field size uncertainty reduced to 0.5 mm, field sizes < 12 mm were considered very small. Lateral electronic disequilibrium in the phantom was the dominant cause of change in OPF at very small field sizes, so the theoretical definition of very small field size coincided with the field size at which lateral electronic disequilibrium clearly caused a greater change in OPF than any other effect. This occurred at field sizes < 12 mm. Source occlusion also caused a large change in OPF for field sizes < 8 mm. Based on the results of this study, field sizes < 12 mm were considered theoretically very small for 6 MV beams.

Conclusions: At field sizes at least < 12 mm, and more conservatively < 15 mm, for 6 MV beams, extremely careful experimental methodology is required, including measurement of the dosimetric field size at the same time as the output factor measurement for each field size setting, together with very precise detector alignment. These recommendations should be applied in addition to all the usual considerations for small field dosimetry, including careful detector selection.
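To make the practical definition concrete, the following is a minimal Python sketch (ours, not the paper's) of the test it implies: a nominal field size is flagged as very small when a ±1 mm field size error shifts the interpolated OPF by more than the chosen tolerance. The OPF table is an illustrative placeholder, not measured data.

```python
# Minimal sketch of the practical "very small field" test, assuming an
# OPF-vs-field-size table. The OPF values below are illustrative
# placeholders, not measured data from the study.
import numpy as np

field_mm = np.array([4.0, 5, 6, 8, 10, 12, 15, 20, 30, 50, 100])  # side length
opf = np.array([0.55, 0.63, 0.69, 0.78, 0.84, 0.88,
                0.91, 0.94, 0.97, 0.99, 1.00])                    # placeholders

def is_very_small(s, field_mm, opf, delta_mm=1.0, tol=0.01):
    """True if a +/- delta_mm field size error changes the OPF by > tol."""
    ref = np.interp(s, field_mm, opf)
    lo = np.interp(s - delta_mm, field_mm, opf)
    hi = np.interp(s + delta_mm, field_mm, opf)
    return max(abs(hi - ref), abs(lo - ref)) / ref > tol

for s in (8, 12, 15, 20):
    print(s, "mm:", is_very_small(s, field_mm, opf))
```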

Relevance: 20.00%

Abstract:

Attempts by universities to provide an improved learning environment for students have led to an increase in team-teaching approaches in higher education. While definitions of team-teaching differ slightly, its benefits have been cited widely in the higher education literature. By tapping the specialist knowledge of a variety of staff members, students are exposed to current and emerging knowledge in different fields and topic areas, and are able to understand concepts from a variety of viewpoints. However, while there is some evidence of the usefulness of team-teaching, empirical support for how well students appreciate and adapt to team-teaching approaches remains patchy. This paper reports on the team-teaching approaches adopted in the delivery of an introductory journalism and communication course at the University of Queensland. The success of the approaches is examined against quantitative and qualitative data. The study found that team-teaching is generally very well received by undergraduate students because they value the diverse expertise and teaching styles they are exposed to. Despite the positive feedback, students also complained about problems of continuity and cohesiveness.

Relevance: 20.00%

Abstract:

Aim: The aim of this paper is to offer an alternative knowing-how/knowing-that framework of nursing knowledge, which in the past has been accepted as the provenance of advanced practice.

Background: The concept of advancing practice is central to the development of nursing practice and has taken on many different forms depending on its context of use. To many it has become synonymous with the work of the advanced or expert practitioner; others have viewed it as a process of continuing professional development and skills acquisition. Moreover, it is becoming closely linked with practice development. However, there is much discussion as to what constitutes the knowledge necessary for advancing and advanced practice, and it has been suggested that theoretical and practical knowledge form the cornerstone of advanced knowledge.

Design: This article takes a discursive approach to the meaning and integration of knowledge within the context of advancing nursing practice.

Method: A thematic analysis of the current discourse on knowledge integration models in the advancing and advanced practice arena was used to identify concurrent themes relating to the knowing-how/knowing-that framework that is commonly used to classify the knowledge necessary for advanced nursing practice.

Conclusion: There is a dichotomy as to what constitutes knowledge for advanced and advancing practice. Several authors have offered a variety of differing models, yet it is the application and integration of theoretical and practical knowledge that defines and develops the advancement of nursing practice. The alternative framework offered here may change the way that nursing knowledge important for advancing practice is perceived, developed and coordinated.

Relevance to clinical practice: What has been neglected is that there are various other variables which, when transposed into the existing knowing-how/knowing-that framework, allow advanced knowledge to be better defined. One of the more notable is pattern recognition, which became the focus of Benner's work on expert practice. If this is included in the knowing-how/knowing-that framework, knowing-how becomes the knowledge that contributes to advancing and advanced practice, and knowing-that becomes the governing action based on a deeper understanding of the problem or issue.

Relevance: 20.00%

Abstract:

Cryptosystems based on the hardness of lattice problems have recently acquired much importance due to their average-case to worst-case equivalence, their conjectured resistance to quantum cryptanalysis, their ease of implementation and increasing practicality, and, lately, their promising potential as a platform for constructing advanced functionalities. In this work, we construct “Fuzzy” Identity Based Encryption from the hardness of the Learning With Errors (LWE) problem. We note that for our parameters, the underlying lattice problems (such as gapSVP or SIVP) are assumed to be hard to approximate within subexponential factors for adversaries running in subexponential time. We give CPA and CCA secure variants of our construction, for small and large universes of attributes. All our constructions are secure against selective-identity attacks in the standard model. Our construction is made possible by observing certain special properties that secret sharing schemes need to satisfy in order to be useful for Fuzzy IBE. We also discuss some obstacles towards realizing lattice-based attribute-based encryption (ABE).
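For readers unfamiliar with the assumption named above, here is a minimal sketch of an LWE instance itself (not the Fuzzy IBE scheme); the parameters are toy values chosen for readability and are far too small for real security.

```python
# Toy sketch of a Learning With Errors (LWE) instance: given (A, b),
# recovering s (search-LWE) or distinguishing b from uniform
# (decision-LWE) is assumed hard for suitable n, q and noise width.
import numpy as np

rng = np.random.default_rng(0)
n, m, q, sigma = 16, 32, 4093, 3.2        # toy parameters, not secure

s = rng.integers(0, q, size=n)            # secret vector in Z_q^n
A = rng.integers(0, q, size=(m, n))       # uniformly random public matrix
e = np.rint(rng.normal(0.0, sigma, m)).astype(np.int64)  # small Gaussian error

b = (A @ s + e) % q                       # the published LWE samples (A, b)
print(b[:5])
```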

Relevance: 20.00%

Abstract:

This paper surveys the practical benefits and drawbacks of several identity-based encryption schemes based on bilinear pairings. After providing some background on identity-based cryptography, we classify the known constructions into a handful of general approaches. We then describe efficient and fully secure IBE and IBKEM instantiations of each approach, with reducibility to practice as the main design parameter. Finally, we catalogue the strengths and weaknesses of each construction according to a few theoretical and many applied comparison criteria.
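As background for the comparison, a minimal sketch of the generic IBE interface shared by the surveyed constructions may help; the class and method names below are our own illustrative choices, not taken from any scheme in the survey.

```python
# Generic IBE interface common to the surveyed constructions (our own
# illustrative naming; placeholder bodies, no particular scheme).
from abc import ABC, abstractmethod

class IBEScheme(ABC):
    @abstractmethod
    def setup(self):
        """Return (mpk, msk): global public parameters and master secret."""

    @abstractmethod
    def extract(self, msk, identity: str):
        """Derive the private key for an arbitrary identity string."""

    @abstractmethod
    def encrypt(self, mpk, identity: str, message: bytes):
        """Encrypt to an identity using only mpk -- no per-user certificate."""

    @abstractmethod
    def decrypt(self, sk_identity, ciphertext) -> bytes:
        """Recover the message with the extracted identity key."""
```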

Relevance: 20.00%

Abstract:

The sum of k mins protocol was proposed by Hopper and Blum for secure human identification. Its goal is to let an unaided human securely authenticate to a remote server. The main ingredient of the protocol is the sum of k mins problem, and the difficulty of solving this problem determines the security of the protocol. In this paper, we show that the sum of k mins problem is NP-complete and W[1]-hard; the latter notion relates to fixed-parameter intractability. We also discuss the use of the sum of k mins protocol in resource-constrained devices.
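To illustrate the human side of the computation, here is a toy sketch of the sum of k mins response as we read the Hopper-Blum protocol: the shared secret is k pairs of positions into a digit challenge, and the response is the sum of the k pairwise minima modulo 10. Parameters are illustrative only, not secure.

```python
# Toy sketch of the "sum of k mins" response computation (our reading
# of the Hopper-Blum protocol; illustrative parameters, not secure).
import random

n, k = 20, 4
rng = random.Random(42)

# Shared secret: k pairs of positions into an n-digit challenge.
secret = [tuple(rng.sample(range(n), 2)) for _ in range(k)]

def respond(challenge, secret):
    """Sum of the k pairwise minima of challenge digits, mod 10."""
    return sum(min(challenge[i], challenge[j]) for i, j in secret) % 10

challenge = [rng.randrange(10) for _ in range(n)]
print("challenge:", challenge)
print("response:", respond(challenge, secret))
```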

Relevance: 20.00%

Abstract:

NTRUEncrypt is a fast and practical lattice-based public-key encryption scheme that has been standardized by IEEE, but until recently its security analysis relied only on heuristic arguments. Stehlé and Steinfeld recently showed that a slight variant (which we call pNE) can be proven secure under chosen-plaintext attack (IND-CPA), assuming the hardness of worst-case problems in ideal lattices. We present a variant of pNE called NTRUCCA that is IND-CCA2 secure in the standard model assuming the hardness of worst-case problems in ideal lattices, and that incurs only a constant factor overhead in ciphertext and key length over the pNE scheme. To our knowledge, our result gives the first IND-CCA2 secure variant of NTRUEncrypt in the standard model based on standard cryptographic assumptions. As an intermediate step, we present a construction of an All-But-One (ABO) lossy trapdoor function from pNE, which may be of independent interest. Our scheme uses the lossy trapdoor function framework of Peikert and Waters, which we generalize to the case of (k − 1)-of-k-correlated input distributions.
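For orientation, here is a hedged sketch of the all-but-one lossy trapdoor function interface referred to above (the Peikert and Waters primitive named in the abstract, not the pNE-based construction itself); the naming is ours.

```python
# Sketch of the all-but-one (ABO) lossy trapdoor function interface
# (Peikert-Waters primitive); illustrative naming, placeholder bodies.
from abc import ABC, abstractmethod

class ABOLossyTDF(ABC):
    @abstractmethod
    def generate(self, lossy_branch):
        """Return (fk, td). The function indexed by fk is injective and
        invertible (via td) on every branch except lossy_branch, where
        its output statistically hides the input."""

    @abstractmethod
    def evaluate(self, fk, branch, x):
        """Apply the function on the given branch to input x."""

    @abstractmethod
    def invert(self, td, branch, y):
        """Recover x on any injective branch (branch != lossy_branch)."""
```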

Relevance: 20.00%

Abstract:

The motion response of marine structures in waves can be studied using finite-dimensional linear time-invariant approximating models. These models, obtained via system identification from data computed by hydrodynamic codes, find application in offshore training simulators, hardware-in-the-loop simulators for positioning control testing, and initial designs of wave-energy conversion devices. Different proposals have appeared in the literature to address the identification problem in both the time and frequency domains, and recent work has highlighted the superiority of the frequency-domain methods. This paper summarises practical frequency-domain estimation algorithms that use constraints on model structure and parameters to refine the search for approximating parametric models. Practical issues associated with the identification are discussed, including the influence of radiation model accuracy on force-to-motion models, which are usually the ultimate modelling objective. The illustrative examples in the paper are obtained using a freely available MATLAB toolbox developed by the authors, which implements the estimation algorithms described.
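As a flavour of the unconstrained starting point that such algorithms refine, here is a hedged Python sketch of a basic frequency-domain least-squares fit (Levy's linearisation) of a rational model B(s)/A(s) to frequency-response data. The toolbox mentioned above adds the model structure and parameter constraints discussed in the paper, which this sketch omits.

```python
# Levy linearisation: fit H(jw) ~ B(jw)/A(jw) by solving the linear
# least-squares problem min || B(jw) - H(jw) A(jw) || with A monic.
import numpy as np

def levy_fit(w, H, nb, na):
    """Return (b, a): numerator and monic denominator coefficients,
    highest power first, for a degree-(nb, na) rational model."""
    s = 1j * w
    # Unknowns: b_0..b_nb and a_1..a_na (A's leading coefficient is 1).
    M = np.hstack([np.vander(s, nb + 1),                     # B columns
                   -H[:, None] * np.vander(s, na + 1)[:, 1:]])  # A columns
    rhs = H * s**na
    # Stack real and imaginary parts to get a real least-squares problem.
    Mr = np.vstack([M.real, M.imag])
    rr = np.concatenate([rhs.real, rhs.imag])
    x, *_ = np.linalg.lstsq(Mr, rr, rcond=None)
    return x[:nb + 1], np.concatenate([[1.0], x[nb + 1:]])

# Sanity check with noiseless data from a known 2nd-order system:
w = np.linspace(0.1, 10, 200)
H = 1.0 / ((1j * w)**2 + 0.4 * (1j * w) + 4.0)
print(levy_fit(w, H, nb=0, na=2))   # recovers b0=1, a=(1, 0.4, 4)
```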

Relevance: 20.00%

Abstract:

One of the core values to be applied by a body reviewing the ethics of human research is justice. The inclusion of justice as a requirement in the ethical review of human research is relatively recent, and its utility had been largely unexamined until debates arose about the conduct of international biomedical research in the late 1990s. The subsequent amendment of authoritative documents in ways that appeared to shift the meaning of conceptions of justice generated a great deal of controversy. Another difficulty has been that both the theory and the substance of justice applied by researchers or reviewers can frequently be seen to be subjective: both the concept of justice (whether distributive or commutative) and what counts as a just distribution or exchange are given different weight and meanings by different people. In this paper, the origins of, and more recent debates about, the requirement to consider justice as a criterion in the ethical review of human research are traced; relevant conceptions of justice are distinguished; and the manner in which they can be applied meaningfully in the ethical review of all human research is identified. The way these concepts are articulated in specific paragraphs of the National Statement on Ethical Conduct in Human Research (NHMRC, ARC, UA, 2007) (National Statement), and the intent and function of those paragraphs, is explained. The National Statement identifies a number of issues that should be considered when a human research ethics committee is reviewing the justice aspects of an application. It also provides practical guidance to researchers on how to demonstrate a fair distribution of burdens and benefits in the participant experience and the research outcomes, and on how to think through issues of justice so that the design of their research projects can be shown to meet this ethical requirement.

Relevance: 20.00%

Abstract:

Introduction: The consistency of measuring small field output factors is greatly increased by reporting the measured dosimetric field size of each factor, as opposed to simply stating the nominal field size [1], and therefore requires the measurement of cross-axis profiles in a water tank. However, this makes output factor measurements time consuming. This project establishes at which field sizes the accuracy of output factors is not affected by the use of potentially inaccurate nominal field sizes, which we believe establishes a practical working definition of a ‘small’ field. The physical components of the radiation beam that contribute to the rapid change in output factor at small field sizes are examined in detail, and the physical interaction that dominates the rapid dose reduction is quantified, leading to a theoretical definition of a ‘small’ field.

Methods: Current recommendations suggest that radiation collimation systems and isocentre-defining lasers should both be calibrated to permit a maximum positioning uncertainty of 1 mm [2]. The proposed practical definition of small field size is as follows: if the output factor changes by ±1.0% given a change in either field size or detector position of up to ±1 mm, then the field should be considered small. Monte Carlo modelling was used to simulate output factors of a 6 MV photon beam for square fields with side lengths from 4.0 to 20.0 mm in 1.0 mm increments. The dose was scored in a 0.5 mm wide and 2.0 mm deep cylindrical volume of water within a cubic water phantom, at a depth of 5 cm and an SSD of 95 cm. The maximum difference due to a collimator error of ±1 mm was found by comparing the output factors of adjacent field sizes. The output factor simulations were repeated 1 mm off-axis to quantify the effect of detector misalignment. Further simulations separated the total output factor into a collimator scatter factor and a phantom scatter factor. The collimator scatter factor was further separated into primary source occlusion effects and ‘traditional’ effects (a combination of flattening filter and jaw scatter, etc.), and the phantom scatter factor into photon scatter and electronic disequilibrium. Each of these factors was plotted as a function of field size to quantify how each affected the change in output factor at small field sizes.

Results: The use of our practical definition resulted in field sizes of 15 mm or less being characterised as ‘small’. The change in field size had a greater effect than detector misalignment. For field sizes of 12 mm or less, electronic disequilibrium was found to cause the largest change in dose to the central axis (d = 5 cm). Source occlusion also caused a large change in output factor for field sizes less than 8 mm.

Discussion and conclusions: The measurement of cross-axis profiles is only required for output factor measurements at field sizes of 15 mm or less (for a 6 MV beam on a Varian iX linear accelerator); this is expected to depend on the linear accelerator spot size and photon energy. While some electronic disequilibrium was shown to occur at field sizes as large as 30 mm (the ‘traditional’ definition of a small field [3]), it does not cause a greater change than photon scatter until a field size of 12 mm, at which point it becomes by far the most dominant effect.
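The factor separation described in the Methods can be stated compactly; the following LaTeX sketch uses our own labels for the sub-factors, since the abstract does not fix a notation.

```latex
% Output factor separation (our notation, following the Methods text):
% total output factor = collimator scatter x phantom scatter,
% each split into its two stated components.
\[
  \mathrm{OPF}(s) \;=\; S_c(s)\, S_p(s), \qquad
  S_c(s) \;=\; S_c^{\mathrm{occ}}(s)\, S_c^{\mathrm{trad}}(s), \qquad
  S_p(s) \;=\; S_p^{\gamma}(s)\, S_p^{e}(s),
\]
% where $s$ is the square field side length, $S_c^{\mathrm{occ}}$ captures
% primary source occlusion, $S_c^{\mathrm{trad}}$ the flattening-filter and
% jaw scatter, $S_p^{\gamma}$ photon scatter in the phantom, and
% $S_p^{e}$ lateral electronic disequilibrium.
```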

Relevance: 20.00%

Abstract:

In order to continue to maintain public trust and confidence in human research, participants must be treated with respect. Researchers and Human Research Ethics Committee members need to be aware that modern considerations of this value include the need for a valid consenting process, the protection of participants whose capacity to consent is compromised, the promotion of dignity for participants, and the effects that human research may have on cultures and communities. This paper explains the prominence of respect as a value when considering the ethics of human research and provides practical advice to both researchers and Human Research Ethics Committee members on developing respectful research practices.

Relevance: 20.00%

Abstract:

I am interested in the psychology of entrepreneurship—how entrepreneurs think, decide to act, and feel. I recently realized that while my publications in academic journals have implications for entrepreneurs, those implications have remained relatively hidden in the text of the articles and hidden in articles published in journals largely inaccessible to those involved in the entrepreneurial process. This book is designed to bring the practical implications of my research to the forefront. I decided to take a different approach with this book and not write it for a publisher, because I wanted the ideas to be freely available: (1) I wanted those interested in practical advice for entrepreneurs to be able to freely download, distribute, and use this information (I only ask that the content be properly cited); (2) I wanted to release the chapters independently and make them available as they are finished; and (3) I wanted this work to be a dialogue rather than a one-way conversation—I hope readers email me feedback (positive and negative) so that I can use it to revise the book. In producing the journal articles underpinning this book, I have had the pleasure of working with many talented and wonderful colleagues—they are cited at the end of each chapter. I hope you find some of the advice in this book useful.

Relevance: 20.00%

Abstract:

Plasma Nanoscience is a multidisciplinary research field which aims to elucidate the specific roles, purposes, and benefits of the ionized gas environment in assembling and processing nanoscale objects in natural, laboratory and technological situations. Compared to neutral gas-based routes, low-temperature weakly-ionized plasmas add another level of complexity related to the need to create and sustain a suitable degree of ionization, and to the much larger number of species generated in the gas phase. The thinner the nanotubes, the stronger the quantum confinement of electrons and the more pronounced the size-dependent quantum effects that can emerge. Furthermore, owing to the very high mobility of electrons, surfaces sit at a negative potential relative to the plasma bulk; there are therefore non-uniform electric fields within the plasma sheath, with field lines that start in the plasma bulk and converge on the sharp tips of the developing one-dimensional nanostructures.