178 results for Many-valued logic
Abstract:
The non-covalent incorporation of responsive luminescent lanthanide, Ln(III), complexes with orthogonal outputs from Eu(III) and Tb(III) in a gel matrix allows for in situ logic operation with colorimetric outputs. Herein, we report an exemplar system with two inputs ([H⁺] and [F⁻]) within a p(HEMA-co-MMA) polymer organogel acting as a dual-responsive device and identify future potential for such systems.
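As a toy illustration only, the dual-responsive behaviour can be viewed as a two-input mapping from chemical inputs to emission outputs. The abstract does not give the device's truth table, so the assignment below is hypothetical:

```python
# Hypothetical two-input molecular logic device: the mapping of input states
# to Eu(III)/Tb(III) emission outputs is invented for illustration; the
# abstract specifies only the inputs ([H+] and [F-]) and that the two
# lanthanide channels respond orthogonally.

def gel_output(h_plus: bool, fluoride: bool) -> str:
    if h_plus and fluoride:
        return "both emission channels modulated"
    if h_plus:
        return "Eu(III) channel switched"   # assumed acid-responsive channel
    if fluoride:
        return "Tb(III) channel switched"   # assumed fluoride-responsive channel
    return "resting-state emission"

for h in (False, True):
    for f in (False, True):
        print(f"[H+]={int(h)} [F-]={int(f)} -> {gel_output(h, f)}")
```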
Abstract:
WHIRLBOB, also known as STRIBOBr2, is an AEAD (Authenticated Encryption with Associated Data) algorithm derived from STRIBOBr1 and the Whirlpool hash algorithm. WHIRLBOB/STRIBOBr2 is a second-round candidate in the CAESAR competition. As with STRIBOBr1, the reduced-size Sponge design has a strong provable security link with a standardized hash algorithm. The new design utilizes only the LPS or ρ component of Whirlpool in flexibly domain-separated BLNK Sponge mode. The number of rounds is increased from 10 to 12 as a countermeasure against Rebound Distinguishing attacks. The 8×8-bit S-Box used by Whirlpool and WHIRLBOB is constructed from 4×4-bit “MiniBoxes”. We report on fast constant-time Intel SSSE3 and ARM NEON SIMD WHIRLBOB implementations that keep the full miniboxes in registers and access them via SIMD shuffles. This is an efficient countermeasure against AES-style cache-timing side-channel attacks. Another main advantage of WHIRLBOB over STRIBOBr1 (and most other AEADs) is its greatly reduced implementation footprint on lightweight platforms. On many lower-end microcontrollers the total software footprint of π+BLNK = WHIRLBOB AEAD is less than half a kilobyte. We also report an FPGA implementation that requires 4,946 logic units for a single round of WHIRLBOB, which compares favorably to the 7,972 required for Keccak/Keyak on the same target platform. The relatively small S-Box gate count also enables efficient 64-bit bitsliced straight-line implementations. We finally present some discussion and analysis of the relationships between WHIRLBOB, Whirlpool, the Russian GOST Streebog hash, and the recent draft Russian Encryption Standard Kuznyechik.
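For readers unfamiliar with the mini-box idea, the sketch below shows one way an 8×8-bit S-Box can be composed from 4×4-bit tables in the Whirlpool style. The 4-bit tables are placeholders, not asserted to be Whirlpool's published E, E⁻¹, and R constants, and the wiring follows the general two-layer pattern with a middle mixing box:

```python
# Sketch of an 8x8-bit S-Box composed from 4x4-bit "MiniBoxes" in the
# Whirlpool style. The 4-bit tables below are illustrative placeholders,
# not asserted to match Whirlpool's published constants.

E = [0x1, 0xB, 0x9, 0xC, 0xD, 0x6, 0xF, 0x3,
     0xE, 0x8, 0x7, 0x4, 0xA, 0x2, 0x5, 0x0]   # 4-bit permutation
E_INV = [E.index(i) for i in range(16)]         # its inverse
R = [0x7, 0xC, 0xB, 0xD, 0xE, 0x4, 0x9, 0xF,
     0x6, 0x3, 0x8, 0xA, 0x2, 0x5, 0x1, 0x0]   # middle "mixing" mini-box

def sbox(x: int) -> int:
    """Map one byte through two mini-box layers joined by a mixing box."""
    hi, lo = x >> 4, x & 0xF
    hi, lo = E[hi], E_INV[lo]                  # first layer
    t = R[hi ^ lo]                             # middle box mixes the halves
    return (E[hi ^ t] << 4) | E_INV[lo ^ t]    # second layer

# Expanding to a 256-entry table shows the construction is a permutation;
# a bitsliced or SIMD-shuffle implementation keeps the mini-boxes in
# registers instead of memory, avoiding data-dependent table lookups.
SBOX = [sbox(x) for x in range(256)]
assert sorted(SBOX) == list(range(256))
```

Because each 16-entry mini-box fits in a single SIMD register, it can be evaluated with a byte-shuffle instruction (e.g., PSHUFB on SSSE3) in constant time, which is the cache-timing countermeasure the abstract describes.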
Abstract:
Structural and functional information encoded in DNA, combined with the unique properties of nanomaterials, could be of use for the construction of novel biocomputational circuits and intelligent biomedical nanodevices. However, at present their practical applications are still limited by low reproducibility of fabrication, modest sensitivity, or complicated handling procedures. Here, we demonstrate the construction of label-free and switchable molecular logic gates (AND, INHIBIT, and OR) that use specific conformation modulation of a guanine- and thymine-rich DNA, while the optical readout is enabled by tunable metamaterials which serve as a substrate for surface-enhanced Raman spectroscopy (MetaSERS). Our MetaSERS-based DNA logic is simple to operate, highly reproducible, and can be stimulated by ultra-low concentrations of the external inputs, enabling extremely sensitive detection of mercury ions down to 2 × 10⁻⁴ ppb, four orders of magnitude below the exposure limit allowed by the United States Environmental Protection Agency.
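For reference, the Boolean behaviour of the three reported gate types can be tabulated directly; the chemical inputs are abstracted to plain Booleans here, and the SERS readout to a 0/1 output:

```python
# Truth tables for the gate types reported in the abstract (AND, INHIBIT,
# OR). The chemical realization -- conformation switching of a G-/T-rich DNA
# strand read out by MetaSERS -- is abstracted to Booleans for illustration.

GATES = {
    "AND":     lambda a, b: a and b,
    "INHIBIT": lambda a, b: a and not b,   # output only when input B is absent
    "OR":      lambda a, b: a or b,
}

for name, gate in GATES.items():
    table = "  ".join(f"{int(a)}{int(b)}->{int(gate(a, b))}"
                      for a in (False, True) for b in (False, True))
    print(f"{name:8s} {table}")
```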
Abstract:
The second harmonic generation (SHG) intensity spectrum of SiC, ZnO, and GaN two-dimensional hexagonal crystals is calculated using a real-time first-principles approach based on Green's function theory [Attaccalite et al., Phys. Rev. B: Condens. Matter Mater. Phys., 2013, 88, 235113]. This approach goes beyond the independent-particle description used in standard first-principles nonlinear optics calculations by including quasiparticle corrections (by means of the GW approximation), crystal local field effects, and excitonic effects. Our results show that the SHG spectra obtained using the latter approach differ significantly from their independent-particle counterparts. In particular, they show strong excitonic resonances at which the SHG intensity is about two times stronger than within the independent-particle approximation. All the systems studied (whose stabilities have been predicted theoretically) are transparent and at the same time exhibit a remarkable SHG intensity in the range of frequencies at which Ti:sapphire and Nd:YAG lasers operate; thus they can be of interest for nanoscale nonlinear frequency-conversion devices. Specifically, the SHG intensity at 800 nm (1.55 eV) ranges from about 40–80 pm V⁻¹ in ZnO and GaN to 0.6 nm V⁻¹ in SiC. The latter value in particular is an order of magnitude larger than values in standard nonlinear crystals.
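As background (not from the paper), the quoted pm V⁻¹ values are magnitudes of the effective second-order susceptibility, which links the driving field at ω to the induced polarization at 2ω:

```latex
% Standard second-order response underlying SHG (textbook relation, not a
% result of the paper): the polarization induced at 2\omega scales with the
% square of the driving field, so the SHG intensity scales as I(\omega)^2.
\[
  P_i(2\omega) = \varepsilon_0 \sum_{jk} \chi^{(2)}_{ijk}\, E_j(\omega)\, E_k(\omega),
  \qquad
  I(2\omega) \propto \bigl|\chi^{(2)}\bigr|^{2}\, I(\omega)^{2}.
\]
% At the Ti:sapphire wavelength of 800 nm, \(\hbar\omega \approx 1.55\) eV,
% so the second harmonic appears at \(2\hbar\omega \approx 3.10\) eV (400 nm).
```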
Abstract:
Subjective risks of having contaminated apples, elicited via the Exchangeability Method (EM), are examined in this study. In particular, as the experimental design allows us to investigate the validity of elicited risk measures, we examine the magnitude of differences between valid and invalid observations. In addition, using an econometric model, we explore the effect of consumers' socioeconomic status and attitudes toward food safety on subjects' perceptions of pesticide residues in apples. Results suggest, first, that consumers do not expect an increase in the number of apples containing only one pesticide residue, but rather in the number of apples with traces of multiple residues. Second, we find that valid subjective risk measures do not significantly diverge from invalid ones, indicating that internal validity has little effect on the actual magnitude of subjective risks. Finally, we show that subjective risks depend on age, education, a subject's ties to the apple industry, and consumer association membership.
Abstract:
Hidden Markov models (HMMs) are widely used probabilistic models of sequential data. As with other probabilistic models, they require the specification of local conditional probability distributions, whose assessment can be too difficult and error-prone, especially when data are scarce or costly to acquire. The imprecise HMM (iHMM) generalizes HMMs by allowing the quantification to be done by sets of, instead of single, probability distributions. iHMMs have the ability to suspend judgment when there is not enough statistical evidence, and can serve as a sensitivity analysis tool for standard non-stationary HMMs. In this paper, we consider iHMMs under the strong independence interpretation, for which we develop efficient inference algorithms to address standard HMM usage such as the computation of likelihoods and most probable explanations, as well as performing filtering and predictive inference. Experiments with real data show that iHMMs produce more reliable inferences without compromising the computational efficiency.
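As a point of reference, a minimal version of the precise forward (likelihood) recursion that iHMMs generalize is sketched below; an iHMM would propagate bounds over sets of distributions rather than a single probability vector. The parameters are illustrative:

```python
import numpy as np

# Standard HMM forward recursion for the observation likelihood
# p(y_1, ..., y_T). All parameters below are illustrative.

def forward_likelihood(pi, A, B, obs):
    """pi: initial distribution (n,); A: transitions (n, n);
    B: emissions (n, m); obs: list of observation indices."""
    alpha = pi * B[:, obs[0]]           # joint of hidden state and first obs
    for y in obs[1:]:
        alpha = (alpha @ A) * B[:, y]   # propagate state, weight by emission
    return float(alpha.sum())           # marginalize out the final state

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(forward_likelihood(pi, A, B, obs=[0, 1, 0]))
```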
Abstract:
In the reinsurance market, the risks natural catastrophes pose to portfolios of properties must be quantified so that they can be priced and insurance offered. The analysis of such risks at a portfolio level requires a simulation of up to 800,000 trials with an average of 1,000 catastrophic events per trial. This is sufficient to capture risk for a global multi-peril reinsurance portfolio covering a range of perils including earthquake, hurricane, tornado, hail, severe thunderstorm, wind storm, storm surge, riverine flooding, and wildfire. Such simulations are both computation and data intensive, making the application of high-performance computing techniques desirable.
In this paper, we explore the design and implementation of portfolio risk analysis on both multi-core and many-core computing platforms. Given a portfolio of property catastrophe insurance treaties, key risk measures, such as probable maximum loss, are computed by taking both primary and secondary uncertainties into account. Primary uncertainty is associated with whether or not an event occurs in a simulated year, while secondary uncertainty captures the uncertainty in the level of loss due to the use of simplified physical models and limitations in the available data. A combination of fast lookup structures, multi-threading, and careful hand tuning of numerical operations is required to achieve good performance. Experimental results are reported for multi-core processors and for systems using NVIDIA graphics processing units and Intel Xeon Phi many-core accelerators.
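A minimal sketch of how primary and secondary uncertainty can be separated in such a simulation is shown below. The event set, occurrence rates, and loss distributions are invented for illustration; this is not the authors' engine:

```python
import numpy as np

# Toy year-loss simulation separating primary uncertainty (does an event
# occur in a simulated year?) from secondary uncertainty (how large is the
# loss, given simplified physical models and limited data).

rng = np.random.default_rng(0)
n_trials = 100_000                        # the paper uses up to 800,000 trials
occ_prob = np.array([0.02, 0.05, 0.01])   # illustrative annual event rates
loss_median = np.array([5e6, 1e6, 2e7])   # illustrative median event losses

year_loss = np.zeros(n_trials)
for p in range(len(occ_prob)):
    occurs = rng.random(n_trials) < occ_prob[p]              # primary
    severity = rng.lognormal(mean=np.log(loss_median[p]),    # secondary
                             sigma=1.0, size=n_trials)
    year_loss += occurs * severity

# Probable maximum loss (PML) is read off as a high quantile of the
# simulated annual loss distribution, here the 250-year return level.
pml = np.quantile(year_loss, 1.0 - 1.0 / 250.0)
print(f"250-year PML: {pml:,.0f}")
```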
Abstract:
A core activity in information systems development involves building a conceptual model of the domain that an information system is intended to support. Such models are created using a conceptual-modeling (CM) grammar. Just as high-quality conceptual models facilitate high-quality systems development, high-quality CM grammars facilitate high-quality conceptual modeling. This paper provides a new perspective on ways to improve the quality of the semantics of CM grammars. For many years, the leading approach to this topic has relied on ontological theory. We show, however, that the ontological approach captures only half the story. It needs to be coupled with a logical approach. We explain how the ontological quality and logical quality of CM grammars interrelate. Furthermore, we outline three contributions that a logical approach can make to evaluating the quality of CM grammars: a means of seeing some familiar conceptual-modeling problems in simpler ways; the illumination of new problems; and the ability to prove the benefit of modifying existing CM grammars in particular ways. We demonstrate these benefits in the context of the Entity-Relationship grammar. More generally, our paper opens up a new area with many opportunities for future research and practice.
Abstract:
A key assumption of dual-process theory is that reasoning is an explicit, effortful, deliberative process. The present study offers evidence for an implicit, possibly intuitive component of reasoning. Participants were shown sentences embedded in logically valid or invalid arguments. Participants were not asked to reason but instead rated the sentences for liking (Experiment 1) and physical brightness (Experiments 2-3). Sentences that followed logically from preceding sentences were judged to be more likable and brighter. Two other factors thought to be linked to implicit processing (sentence believability and facial expression) had similar effects on liking and brightness ratings. The authors conclude that sensitivity to logical structure was implicit, occurring potentially automatically and outside of awareness. They discuss the results within a fluency misattribution framework and make reference to the literature on discourse comprehension.
Abstract:
Background: Peer tutoring has been described as “people from similar social groupings who are not professional teachers helping each other to learn and learning themselves by teaching”. Peer tutoring is well accepted as a source of support in many medical curricula, where participation and learning involve a process of socialisation.
Peer tutoring can ease the transition of junior students from the university class environment to the hospital workplace. In this paper, we apply the Experience Based Learning (ExBL) model to explore medical students' perceptions of their experience of taking part in a newly established peer tutoring program at a hospital-based clinical school.
Methods: In 2014, all students at Sydney Medical School – Central, located at Royal Prince Alfred Hospital, were invited to voluntarily participate in the peer tutoring program. Year 3 students (n = 46) were invited to act as tutors for Year 1 students (n = 50), and Year 4 students (n = 60) were invited to act as tutors for Year 2 students (n = 51). Similarly, the 'tutees' were invited to take part on a voluntary basis. Students were invited to attend focus groups, which were held at the end of the program. Framework analysis was used to code and categorise data into themes.
Results: In total, 108/207 (52%) of students participated in the program. A total of 42/106 (40%) of Year 3 and 4 students took part as tutors, and 66/101 (65%) of Year 1 and 2 students took part as tutees. Five focus groups were held, with 50/108 (46%) of program participants taking part. Senior students (tutors) valued the opportunity to practice and improve their medical knowledge and teaching skills. Junior students (tutees) valued the opportunity for additional practice and patient interaction within a relaxed, small-group learning environment.
Conclusion: Students perceived the peer tutoring program as affording opportunities not otherwise available within the curriculum. The peer teaching program provided a framework within the medical curriculum for senior students to practice and improve their medical knowledge and teaching skills. Concurrently, junior students were provided with a valuable learning experience that they reported as being qualitatively different to traditional teaching by faculty.
Abstract:
With the rapid development of the internet-of-things (IoT), face scrambling has been proposed for privacy protection during IoT-targeted image/video distribution. Consequently, in these IoT applications, biometric verification needs to be carried out in the scrambled domain, presenting significant challenges for face recognition. Since face models become chaotic signals after scrambling/encryption, a typical solution is to utilize traditional data-driven face recognition algorithms. While chaotic pattern recognition is still a challenging task, in this paper we propose a new ensemble approach, Many-Kernel Random Discriminant Analysis (MK-RDA), to discover discriminative patterns from chaotic signals. We also incorporate a salience-aware strategy into the proposed ensemble method to handle chaotic facial patterns in the scrambled domain, where random selections of features are made on semantic components via salience modelling. In our experiments, the proposed MK-RDA was tested rigorously on three human face datasets: the ORL face dataset, the PIE face dataset, and the PubFig wild face dataset. The experimental results demonstrate that the proposed scheme can effectively handle chaotic signals and significantly improve recognition accuracy, making our method a promising candidate for secure biometric verification in emerging IoT applications.
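To make the ensemble idea concrete, here is a hedged sketch of a many-kernel, random-subspace ensemble in the spirit of MK-RDA: each member applies a randomly drawn kernel map to a random subset of features (a stand-in for the paper's salience-aware selection), followed by discriminant analysis, and predictions are majority-voted. This is an illustration using scikit-learn on a stock dataset, not the authors' algorithm:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.kernel_approximation import Nystroem
from sklearn.model_selection import train_test_split

# Illustrative many-kernel random-subspace ensemble: NOT the published
# MK-RDA, just the same structural idea (random feature subsets, a pool of
# kernels, discriminant analysis per member, majority vote).

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

members = []
for _ in range(15):
    feats = rng.choice(X.shape[1], size=32, replace=False)  # random subspace
    kernel = rng.choice(["rbf", "poly", "sigmoid"])          # "many kernels"
    fmap = Nystroem(kernel=kernel, n_components=64, random_state=0)
    lda = LinearDiscriminantAnalysis()
    lda.fit(fmap.fit_transform(Xtr[:, feats]), ytr)
    members.append((feats, fmap, lda))

# Majority vote across ensemble members.
votes = np.stack([lda.predict(fmap.transform(Xte[:, f]))
                  for f, fmap, lda in members])
pred = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("ensemble accuracy:", (pred == yte).mean())
```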