402 results for Order-preserving Functions
Abstract:
Discovering the means to prevent and cure schizophrenia is a vision that motivates many scientists. But in order to achieve this goal, we need to understand its neurobiological basis. The emergent metadiscipline of cognitive neuroscience fields an impressive array of tools that can be marshaled towards achieving this goal, including powerful new methods of imaging the brain (both structural and functional) as well as assessments of perceptual and cognitive capacities based on psychophysical procedures, experimental tasks and models developed by cognitive science. We believe that the integration of data from this array of tools offers the greatest possibilities and potential for advancing understanding of the neural basis of not only normal cognition but also the cognitive impairments that are fundamental to schizophrenia. Since sufficient expertise in the application of these tools and methods rarely resides in a single individual, or even a single laboratory, collaboration is a key element in this endeavor. Here, we review some of the products of our integrative efforts in collaboration with our colleagues on the East Coast of Australia and the Pacific Rim. This research focuses on the neural basis of executive function deficits and impairments in early auditory processing in patients, using various combinations of performance indices (from perceptual and cognitive paradigms), ERPs, fMRI and sMRI. In each case, integration of two or more sources of information provides more information than any one source alone by revealing new insights into structure-function relationships. Furthermore, the addition of other imaging methodologies (such as DTI) and approaches (such as computational models of cognition) offers new horizons in human brain imaging research and in understanding human behavior.
Abstract:
Purpose – This paper aims to recognise the importance of informal processes within corporate governance and complement existing research in this area by investigating factors associated with the existence of informal interactions between audit committees and internal audit functions, and by providing directions for future research. Design/methodology/approach – To examine the existence and drivers of informal interactions between audit committees and internal audit functions, this paper relies on a questionnaire survey of chief audit executives (CAEs) in the UK from listed and non-listed, as well as financial and non-financial, companies. While prior qualitative research suggests that informal interactions do take place, most of the evidence is based on a particular organisational setting or on a very small range of interviews. The use of a questionnaire enabled the examination of the existence of informal interactions across a relatively larger number of entities. Findings – The paper finds evidence of audit committees and internal audit functions engaging in informal interactions in addition to formal pre-scheduled regular meetings. Informal interactions complement formal meetings with the audit committee and as such represent additional opportunities for the audit committees to monitor internal audit functions. Audit committees’ informal interactions are significantly and positively associated with audit committee independence, the audit committee chair’s knowledge and experience, and internal audit quality. Originality/value – The results demonstrate the importance of the background of the audit committee chair for the effectiveness of the governance process. This is possibly the first paper to examine how audit committee and internal audit quality relate to the existence and drivers of informal interactions. Policy makers should recognise that, in addition to formal mechanisms, informal processes, such as communication outside of formal pre-scheduled meetings, play a significant role in corporate governance.
Abstract:
BACKGROUND: The evaluation of retinal image quality in cataract eyes has gained importance, and the clinical modulation transfer function (MTF) can be obtained with an aberrometer or a double-pass (DP) system. This study aimed to compare MTFs derived from a ray-tracing aberrometer and a DP system in early cataractous and normal eyes. METHODS: There were 128 subjects with 61 control eyes and 67 eyes with early cataract defined according to the Lens Opacities Classification System III. A laser ray-tracing wavefront aberrometer (iTrace) and a double-pass (DP) system (OQAS) assessed ocular MTF for 6.0 mm pupil diameters following dilation. Areas under the MTF (AUMTF) and their correlations were analyzed. Stepwise multiple regression analysis assessed factors affecting the differences between iTrace- and OQAS-derived AUMTF for the early cataract group. RESULTS: For both the early cataract and control groups, iTrace-derived MTFs were higher than OQAS-derived MTFs across a range of spatial frequencies (P < 0.01). No significant difference between the two groups occurred for iTrace-derived AUMTF, but the early cataract group had significantly smaller OQAS-derived AUMTF than did the control group (P < 0.01). AUMTF determined from both techniques demonstrated significant correlations with nuclear opacities, higher-order aberrations (HOAs), visual acuity, and contrast sensitivity functions, while the OQAS-derived AUMTF also demonstrated significant correlations with age and cortical opacity grade. The factors significantly affecting the difference between iTrace and OQAS AUMTF were root-mean-squared HOAs (standardized beta coefficient = -0.63, P < 0.01) and age (standardized beta coefficient = 0.26, P < 0.01). CONCLUSIONS: MTFs determined from an iTrace and a DP system (OQAS) differ significantly in early cataractous and normal subjects. Correlations with visual performance were higher for the DP system. OQAS-derived MTF may be useful as an indicator of visual performance in early cataract eyes.
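As a rough illustration of the AUMTF metric used above, the area under a sampled MTF curve can be approximated by trapezoidal integration. This is a generic sketch, not the study's analysis pipeline; the spatial-frequency grid and MTF samples below are made-up placeholders.

    # Hedged sketch: approximate the area under the MTF (AUMTF) by
    # trapezoidal integration.  All numbers are illustrative placeholders.
    import numpy as np

    spatial_freq = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])  # cycles/degree (assumed grid)
    mtf_samples = np.array([0.82, 0.61, 0.43, 0.29, 0.18, 0.11])  # assumed MTF values

    aumtf = np.trapz(mtf_samples, spatial_freq)  # area under the sampled MTF curve
    print(f"AUMTF = {aumtf:.2f}")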
Abstract:
Preneel, Govaerts and Vandewalle (PGV) analysed the security of single-block-length block cipher based compression functions assuming that the underlying block cipher has no weaknesses. They showed that 12 out of 64 possible compression functions are collision and (second) preimage resistant. Black, Rogaway and Shrimpton formally proved this result in the ideal cipher model. However, in the indifferentiability security framework introduced by Maurer, Renner and Holenstein, all these 12 schemes are easily differentiable from a fixed input-length random oracle (FIL-RO) even when their underlying block cipher is ideal. We address the problem of building indifferentiable compression functions from the PGV compression functions. We consider a general form of 64 PGV compression functions and replace the linear feed-forward operation in this generic PGV compression function with an ideal block cipher independent of the one used in the generic PGV construction. This modified construction is called a generic modified PGV (MPGV). We analyse indifferentiability of the generic MPGV construction in the ideal cipher model and show that 12 out of 64 MPGV compression functions in this framework are indifferentiable from a FIL-RO. To our knowledge, this is the first result showing that two independent block ciphers are sufficient to design indifferentiable single-block-length compression functions.
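To make the constructions above concrete, the following toy sketch shows Davies-Meyer, one of the 12 collision-resistant PGV compression functions: f(h, m) = E_m(h) XOR h, where the final XOR of h is the linear feed-forward that the MPGV construction replaces with a second, independent ideal cipher. The "cipher" here is a placeholder permutation, not a real block cipher, and the exact MPGV combination follows the paper rather than this sketch.

    # Hedged toy sketch of a single-block-length PGV compression function
    # (Davies-Meyer).  toy_cipher is a stand-in permutation, not an ideal cipher.
    def toy_cipher(key: int, block: int, bits: int = 64) -> int:
        # Placeholder keyed permutation standing in for E_key(block).
        mask = (1 << bits) - 1
        x = (block ^ (key * 0x9E3779B97F4A7C15)) & mask
        x = ((x << 13) | (x >> (bits - 13))) & mask
        return (x ^ key) & mask

    def davies_meyer(h: int, m: int) -> int:
        # f(h, m) = E_m(h) XOR h; the XOR of h is the linear feed-forward
        # that MPGV replaces with an independent second cipher.
        return toy_cipher(m, h) ^ h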
Abstract:
Summary form only given. Geometric simplicity, efficiency and polarization purity make slot antenna arrays ideal solutions for many radar, communications and navigation applications, especially when high power, light weight and limited scan volume are priorities. Resonant arrays of longitudinal slots have a slot spacing of one-half guide wavelength at the design frequency, so that the slots are located at the standing-wave peaks. Planar arrays are implemented using a number of rectangular waveguides (branch line guides) arranged side by side, while main-line waveguides located behind and at right angles to the branch lines excite the radiating waveguides via centered-inclined coupling slots. Planar slotted waveguide arrays radiate broadside beams, and all radiators are designed to be in phase.
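For reference, the half-guide-wavelength spacing mentioned above follows from the guide wavelength of the dominant TE10 mode in a rectangular waveguide of broad-wall width a; this relation is standard background and is not stated in the abstract itself:

    \lambda_g = \frac{\lambda_0}{\sqrt{1 - \left(\lambda_0 / 2a\right)^2}}, \qquad d = \frac{\lambda_g}{2},

where \lambda_0 is the free-space wavelength at the design frequency and d is the resonant slot spacing.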
Abstract:
Structural damage detection using measured dynamic data for pattern recognition is a promising approach. These pattern recognition techniques utilize artificial neural networks and genetic algorithms to match pattern features. In this study, an artificial neural network–based damage detection method using frequency response functions is presented, which can effectively detect nonlinear damage for a given level of excitation. The main objective of this article is to present a feasible method for structural vibration–based health monitoring, which reduces the dimension of the initial frequency response function data, transforms it into new damage indices, and employs artificial neural networks to detect different levels of nonlinearity using damage patterns recognized by the proposed algorithm. Experimental data from the three-story bookshelf structure at Los Alamos National Laboratory are used to validate the proposed method. Results showed that the levels of nonlinear damage can be identified precisely by the developed artificial neural networks. Moreover, artificial neural networks trained with summation frequency response functions give more precise damage detection results than artificial neural networks trained with individual frequency response functions. The proposed method is therefore a promising tool for structural assessment of real structures, because it shows reliable results with experimental data for nonlinear damage detection, which makes the frequency response function–based method convenient for structural health monitoring.
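As a minimal, generic sketch of the kind of preprocessing described above (not the paper's exact algorithm), the snippet below collapses raw frequency response function (FRF) magnitudes into a summation FRF and a handful of PCA-style damage indices; the array shapes, the SVD-based reduction and the number of indices are illustrative assumptions. The resulting indices would then be fed to an artificial neural network classifier trained on known damage levels.

    # Hedged sketch: reduce |FRF| data to low-dimensional damage indices.
    import numpy as np

    def damage_indices(frf_magnitudes, n_indices=5):
        """frf_magnitudes: (n_measurements, n_frequency_lines) array of |FRF| values."""
        # Summation FRF: sum over measurements, echoing the abstract's
        # 'summation frequency response functions'.
        summation_frf = frf_magnitudes.sum(axis=0)

        # PCA via SVD on mean-centred data: project each measurement onto the
        # leading principal directions to obtain compact damage indices.
        centred = frf_magnitudes - frf_magnitudes.mean(axis=0)
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        indices = centred @ vt[:n_indices].T   # shape (n_measurements, n_indices)
        return summation_frf, indices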
Abstract:
Cryptographic hash functions are an important tool of cryptography and play a fundamental role in efficient and secure information processing. A hash function processes an arbitrary finite-length input message to a fixed-length output referred to as the hash value. As a security requirement, a hash value should not serve as an image for two distinct input messages, and it should be difficult to find the input message from a given hash value. Secure hash functions serve data integrity, non-repudiation and authenticity of the source in conjunction with digital signature schemes. Keyed hash functions, also called message authentication codes (MACs), serve data integrity and data origin authentication in the secret key setting. The building blocks of hash functions can be designed using block ciphers, modular arithmetic or from scratch. The design principles of the popular Merkle–Damgård construction are followed in almost all widely used standard hash functions such as MD5 and SHA-1.
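The Merkle–Damgård design principle mentioned above amounts to padding the message, splitting it into fixed-size blocks and folding a compression function over the blocks from an initial value. The sketch below is a toy illustration under those assumptions, with a placeholder compression function and parameters; it is not MD5 or SHA-1.

    # Hedged toy sketch of Merkle-Damgard iteration with length-strengthening padding.
    def md_hash(message: bytes, compress, iv: int, block_size: int = 64) -> int:
        # Pad: append 0x80, zero bytes, then the 8-byte big-endian bit length,
        # so the padded length is a multiple of block_size.
        padded = message + b"\x80"
        padded += b"\x00" * ((-len(padded) - 8) % block_size)
        padded += (8 * len(message)).to_bytes(8, "big")

        h = iv  # chaining value starts at the initial value
        for i in range(0, len(padded), block_size):
            h = compress(h, padded[i:i + block_size])  # iterate the compression function
        return h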
Straightforward biodegradable nanoparticle generation through megahertz-order ultrasonic atomization
Abstract:
Simple and reliable formation of biodegradable nanoparticles formed from poly-ε-caprolactone was achieved using 1.645 MHz piston atomization of a source fluid of 0.5% w/v of the polymer dissolved in acetone; the particles were allowed to descend under gravity in air 8 cm into a 1 mM solution of sodium dodecyl sulfate. After centrifugation to remove surface agglomerations, a symmetric monodisperse distribution of particles φ 186 nm (SD=5.7, n=6) was obtained with a yield of 65.2%. © 2006 American Institute of Physics.
Abstract:
Non-use values (i.e. economic values assigned by individuals to ecosystem goods and services unrelated to current or future uses) provide one of the most compelling incentives for the preservation of ecosystems and biodiversity. Assessing the non-use values of non-users is relatively straightforward using stated preference methods, but the standard approaches for estimating non-use values of users (stated decomposition) have substantial shortcomings which undermine the robustness of their results. In this paper, we propose a pragmatic interpretation of non-use values to derive estimates that capture their main dimensions, based on the identification of a willingness to pay for ecosystem protection beyond one's expected life. We empirically test our approach using a choice experiment conducted on coral reef ecosystem protection in two coastal areas in New Caledonia with different institutional, cultural, environmental and socio-economic contexts. We compute individual willingness to pay estimates, and derive individual non-use value estimates using our interpretation. We find that, at a minimum, estimates of non-use values may comprise between 25% and 40% of the mean willingness to pay for ecosystem preservation, less than has been found in most studies.
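For context on the willingness to pay estimates mentioned above, choice experiments are commonly analysed with a random-utility model in which the marginal willingness to pay for an attribute is the ratio of its coefficient to the cost coefficient; this is standard background, not the paper's specific specification:

    U_{ij} = \beta_{\mathrm{cost}}\, c_{ij} + \sum_k \beta_k x_{ijk} + \varepsilon_{ij}, \qquad \mathrm{WTP}_k = -\frac{\beta_k}{\beta_{\mathrm{cost}}}.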
Abstract:
We analyse the security of iterated hash functions that compute an input-dependent checksum which is processed as part of the hash computation. We show that a large class of such schemes, including those using non-linear or even one-way checksum functions, is not secure against the second preimage attack of Kelsey and Schneier, the herding attack of Kelsey and Kohno and the multicollision attack of Joux. Our attacks also apply to a large class of cascaded hash functions. Our second preimage attacks on the cascaded hash functions improve the results of Joux presented at Crypto’04. We also apply our attacks to the MD2 and GOST hash functions. Our second preimage attacks on the MD2 and GOST hash functions improve the previous best known short-cut second preimage attacks on these hash functions by factors of at least 2^26 and 2^54, respectively. Our herding and multicollision attacks on the hash functions based on generic checksum functions (e.g., one-way) are a special case of the attacks on the cascaded iterated hash functions previously analysed by Dunkelman and Preneel and are not better than their attacks. On hash functions with easily invertible checksums, our multicollision and herding attacks (if the hash value is short, as in MD2) are more efficient than those of Dunkelman and Preneel.
Abstract:
In this paper we present concrete collision and preimage attacks on a large class of compression function constructions making two calls to the underlying ideal primitives. The complexity of the collision attack is above the theoretical lower bound for constructions of this type, but below the birthday complexity; the complexity of the preimage attack, however, is equal to the theoretical lower bound. We also present undesirable properties of some of Stam’s compression functions proposed at CRYPTO ’08. We show that when one of the n-bit to n-bit components of the proposed 2n-bit to n-bit compression function is replaced by a fixed-key cipher in the Davies-Meyer mode, the complexity of finding a preimage would be 2^(n/3). We also show that the complexity of finding a collision in a variant of the 3n-bit to 2n-bit scheme with its output truncated to 3n/2 bits is 2^(n/2). The complexity of our preimage attack on this hash function is about 2^n. Finally, we present a collision attack on a variant of the proposed (m+s)-bit to s-bit scheme, truncated to s − 1 bits, with a complexity of O(1). However, none of our results compromise Stam’s security claims.
Abstract:
In the modern era of information and communication technology, cryptographic hash functions play an important role in ensuring the authenticity, integrity, and nonrepudiation goals of information security as well as efficient information processing. This entry provides an overview of the role of hash functions in information security, popular hash function designs, some important analytical results, and recent advances in this field.
Abstract:
This book focuses on how evolutionary computing techniques benefit engineering research and development tasks by converting practical problems of growing complexities into simple formulations, thus largely reducing development efforts. This book begins with an overview of the optimization theory and modern evolutionary computing techniques, and goes on to cover specific applications of evolutionary computing to power system optimization and control problems.
Abstract:
We aim to design strategies for sequential decision making that adjust to the difficulty of the learning problem. We study this question both in the setting of prediction with expert advice, and for more general combinatorial decision tasks. We are not satisfied with just guaranteeing minimax regret rates: we want our algorithms to perform significantly better on easy data. Two popular ways to formalize such adaptivity are second-order regret bounds and quantile bounds. The underlying notions of 'easy data', which may be paraphrased as "the learning problem has small variance" and "multiple decisions are useful", are synergetic. But even though there are sophisticated algorithms that exploit one of the two, no existing algorithm is able to adapt to both. In this paper we outline a new method for obtaining such adaptive algorithms, based on a potential function that aggregates a range of learning rates (which are essential tuning parameters). By choosing the right prior we construct efficient algorithms and show that they reap both benefits by proving the first bounds that are both second-order and quantile bounds.
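As a loose illustration of aggregating a grid of learning rates with a prior (in the spirit of the second-order, quantile-style potential described above, but not the paper's actual algorithm), the sketch below weights each (learning rate, expert) pair by exp(eta*R - eta^2*V), where R is the cumulative instantaneous regret and V its cumulative square; the learning-rate grid, the [0, 1] loss range and the numerical stabilisation are assumptions of this sketch.

    # Hedged sketch: exponential weights over a grid of learning rates.
    import numpy as np

    def run_aggregation(loss_matrix, n_rates=10):
        """loss_matrix: (T, K) array of expert losses assumed to lie in [0, 1]."""
        T, K = loss_matrix.shape
        etas = np.geomspace(1.0 / np.sqrt(T), 0.5, n_rates)  # learning-rate grid (assumed)
        R = np.zeros(K)  # cumulative instantaneous regret per expert
        V = np.zeros(K)  # cumulative squared instantaneous regret per expert
        for t in range(T):
            expo = np.outer(etas, R) - np.outer(etas ** 2, V)
            expo -= expo.max()                        # numerical stabilisation
            w = etas[:, None] * np.exp(expo)          # weight of each (eta, expert) pair
            p = w.sum(axis=0)                         # marginalise out the learning rate
            p /= p.sum()
            losses = loss_matrix[t]
            inst_regret = p @ losses - losses         # our loss minus each expert's loss
            R += inst_regret
            V += inst_regret ** 2
        return R  # cumulative regret against each expert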