Abstract:
The southern industrial rivers (Aire, Calder, Don and Trent) feeding the Humber estuary were routinely monitored for a range of chlorinated micro-organic contaminants at least once a week over a 1.5-year period. Environmental Quality Standards (EQSs) for inland waters were set under the European Economic Community for a limited number (18) of problematic contaminants. The results of the monitoring program for seven classes of chlorinated pollutants on the EQS list are presented in this study. All compounds were detected frequently, with the exception of hexachlorobutadiene (only one detectable measurement out of 280 individual samples). In general, the rivers fell into two classes with respect to their contamination patterns. The Aire and Calder carried higher concentrations of micro-pollutants than the Don and Trent, with the exception of hexachlorobenzene (HCB). For Σ hexachlorocyclohexane (HCH) isomers (α + γ) and for dieldrin, a number of samples (~5%) exceeded their EQS for both the Aire and Calder. Often, ΣHCH concentrations were just below the EQS level. Levels of p,p'-DDT on occasion approached the EQS for these two rivers, but only one sample (out of 140) exceeded it. No compounds exceeded their EQS levels on the Don and Trent. Analysis of the γHCH/αHCH ratio indicated that the source of HCH for the Don and Trent catchments was primarily lindane (γHCH) and, to a lesser extent, technical HCH (a mixture of HCH isomers dominated by αHCH), while the source(s) for the Aire and Calder had a much higher contribution from technical HCH.
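As a rough illustration of the source-attribution logic (not the authors' code), the sketch below computes the γ/α isomer ratio and labels the likely source; the principle is that lindane is essentially pure γHCH while technical HCH is dominated by αHCH, but the numeric cutoffs here are illustrative assumptions, not values from the study.

```python
# Illustrative sketch: classify the likely HCH source from the gamma/alpha ratio.
# Cutoff values are assumptions for illustration only: technical HCH is dominated
# by the alpha isomer (low gamma/alpha ratio), while lindane is essentially pure
# gamma-HCH (high gamma/alpha ratio).

def hch_source(gamma_ng_l: float, alpha_ng_l: float) -> str:
    """Return a rough source label from gamma-HCH and alpha-HCH concentrations."""
    if alpha_ng_l <= 0:
        return "primarily lindane (no alpha-HCH detected)"
    ratio = gamma_ng_l / alpha_ng_l
    if ratio > 3.0:        # assumed cutoff: gamma strongly dominant
        return "primarily lindane"
    if ratio > 1.0:        # assumed cutoff: mixed signal
        return "lindane with a technical-HCH contribution"
    return "strong technical-HCH contribution"

# Hypothetical samples, not measurements from the monitoring program:
print(hch_source(gamma_ng_l=20.0, alpha_ng_l=2.0))   # primarily lindane
print(hch_source(gamma_ng_l=8.0, alpha_ng_l=10.0))   # technical-HCH dominated
```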
Abstract:
Introduction
Mild cognitive impairment (MCI) has clinical value in its ability to predict later dementia. A better understanding of cognitive profiles can further help delineate who is most at risk of conversion to dementia. We aimed to (1) examine to what extent the usual MCI subtyping using core criteria corresponds to empirically defined clusters of patients (latent profile analysis [LPA] of continuous neuropsychological data) and (2) compare the two methods of subtyping memory clinic participants in their prediction of conversion to dementia.
Methods
Memory clinic participants (MCI, n = 139) and age-matched controls (n = 98) were recruited. Participants had a full cognitive assessment, and results were grouped (1) according to traditional MCI subtypes and (2) using LPA. MCI participants were followed over approximately 2 years after their initial assessment to monitor for conversion to dementia.
Results
Groups were well matched for age and education. Controls performed significantly better than MCI participants on all cognitive measures. With the traditional analysis, most MCI participants were in the amnestic multidomain subgroup (46.8%) and this group was most at risk of conversion to dementia (63%). From the LPA, a three-profile solution fit the data best. Profile 3 was the largest group (40.3%), the most cognitively impaired, and most at risk of conversion to dementia (68% of the group).
Discussion
LPA provides a useful adjunct in delineating MCI participants most at risk of conversion to dementia and adds confidence to standard categories of clinical inference.
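The latent profile analysis above was presumably run in dedicated statistical software; as a minimal sketch of the same idea, a Gaussian mixture model can be fitted over standardized test scores for several candidate profile counts and the solution chosen by BIC. The synthetic data and the scikit-learn implementation below are illustrative assumptions, not the study's pipeline.

```python
# Sketch of LPA-style profiling: fit Gaussian mixtures with 1..5 profiles to
# standardized cognitive scores and pick the profile count by lowest BIC.
# Synthetic data stands in for the real neuropsychological battery.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# 139 hypothetical participants x 6 hypothetical cognitive measures (z-scores).
scores = rng.normal(size=(139, 6))

bics = {}
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, covariance_type="diag", random_state=0)
    gm.fit(scores)
    bics[k] = gm.bic(scores)

best_k = min(bics, key=bics.get)  # lowest BIC = preferred solution
labels = GaussianMixture(n_components=best_k, covariance_type="diag",
                         random_state=0).fit_predict(scores)
print(f"best profile count by BIC: {best_k}")
print("profile sizes:", np.bincount(labels))
```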
Abstract:
The area and power consumption of low-density parity-check (LDPC) decoders are typically dominated by embedded memories. To alleviate these high memory costs, this paper exploits the fact that all internal memories of an LDPC decoder are frequently updated with new data. These memory access statistics are taken advantage of by replacing all static standard-cell-based memories (SCMs) of a prior-art LDPC decoder implementation with dynamic SCMs (D-SCMs), which are designed to retain data just long enough to guarantee reliable operation. The use of D-SCMs leads to a 44% reduction in silicon area of the LDPC decoder compared to the use of static SCMs. The low-power LDPC decoder architecture with refresh-free D-SCMs was implemented in a 90 nm CMOS process, and silicon measurements show full functionality and an information-bit throughput of up to 600 Mbps (as required by the IEEE 802.11n standard).
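One way to see the refresh-free D-SCM argument: because every memory word is rewritten once per decoding iteration, data only needs to survive until its next scheduled write. The toy check below uses hypothetical retention and timing numbers (not the paper's measurements) to verify that the longest gap between writes stays inside the cell's retention time.

```python
# Toy model of the refresh-free D-SCM argument: a dynamic cell is safe without
# refresh if the longest gap between successive writes to any word is shorter
# than its retention time. All numbers are hypothetical placeholders.

RETENTION_NS = 1000.0   # assumed worst-case data retention of a dynamic cell
ITERATION_NS = 120.0    # assumed duration of one LDPC decoding iteration

def max_write_gap_ns(write_times_ns: list[float]) -> float:
    """Longest interval between consecutive writes to the same memory word."""
    return max(b - a for a, b in zip(write_times_ns, write_times_ns[1:]))

# Each word of a message memory is rewritten once per decoding iteration.
writes = [i * ITERATION_NS for i in range(10)]
gap = max_write_gap_ns(writes)
print(f"max write gap {gap:.0f} ns -> "
      f"{'refresh-free OK' if gap < RETENTION_NS else 'needs refresh'}")
```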
Abstract:
Lattice-based cryptography has gained credence recently as a replacement for current public-key cryptosystems, due to its quantum resilience, versatility, and relatively small key sizes. To date, encryption based on the learning with errors (LWE) problem has only been investigated from an ideal-lattice standpoint, due to its computation and size efficiencies. However, a thorough investigation of standard lattices in practice has yet to be considered. Standard lattices may be preferred to ideal lattices due to their stronger security assumptions and less restrictive parameter selection process. In this paper, an area-optimised hardware architecture of a standard lattice-based cryptographic scheme is proposed. The design is implemented on an FPGA, and both encryption and decryption are found to fit comfortably on a Spartan-6 device. This is the first hardware architecture for standard lattice-based cryptography reported in the literature to date, and thus serves as a benchmark for future implementations.
Additionally, a revised discrete Gaussian sampler is proposed, which is the fastest of its type to date and the first to investigate the cost savings of implementing with λ/2 bits of precision. Performance results are promising in comparison to the hardware designs of the equivalent ring-LWE scheme: in addition to providing a stronger security proof, the proposed design generates 1272 encryptions per second and 4395 decryptions per second.
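The abstract does not spell out the scheme itself; as a hedged illustration of what standard-lattice (as opposed to ring) LWE encryption looks like, the sketch below gives a Lindner-Peikert-style toy version over an unstructured n x n matrix, with parameters far too small to be secure and not taken from the paper.

```python
# Toy standard-LWE encryption over a full n x n matrix, i.e. a "standard"
# rather than ideal/ring lattice. Parameters are deliberately tiny and
# insecure; this sketches the scheme's shape, not the paper's design.
import numpy as np

n, q = 16, 257                   # toy dimension and modulus (assumptions)
rng = np.random.default_rng(1)

def small(shape):                # narrow "error" distribution stand-in
    return rng.integers(-1, 2, size=shape)

# Key generation: b = A s + e (mod q); A is an unstructured n x n matrix.
A = rng.integers(0, q, size=(n, n))
s, e = small(n), small(n)
b = (A @ s + e) % q

def encrypt(m_bit: int):
    r, e1, e2 = small(n), small(n), small(1)[0]
    c1 = (A.T @ r + e1) % q
    c2 = (b @ r + e2 + m_bit * (q // 2)) % q
    return c1, c2

def decrypt(c1, c2):
    v = (c2 - s @ c1) % q        # = m*(q/2) + small noise (mod q)
    return int(min(v, q - v) > q // 4)

for bit in (0, 1):
    assert decrypt(*encrypt(bit)) == bit
print("toy standard-LWE round-trip OK")
```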
Abstract:
The continuous demand for highly efficient wireless transmitter systems has triggered an increased interest in switching-mode techniques to handle the required power amplification. The RF carrier amplitude-burst transmitter, i.e., a wireless transmitter chain in which a phase-modulated carrier is modulated in amplitude in an on-off mode according to some prescribed envelope-to-time conversion, such as pulse-width or sigma-delta modulation, constitutes a promising architecture capable of efficiently transmitting signals of highly demanding complex modulation schemes. However, the tested practical implementations present results that fall far behind the theoretical promises (perfect linearity and efficiency). My original contribution to knowledge presented in this thesis is the first thorough study and model of the power efficiency and linearity characteristics that can actually be achieved with this architecture. The analysis starts with a brief review of the theoretical, idealized behavior of these switched-mode amplifier systems, followed by a study of the many sources of impairment that appear when the real system is implemented. In particular, special attention is paid to the dynamic load modulation caused by the often-ignored interaction between the narrowband signal reconstruction filter and the usual single-ended switched-mode power amplifier, which, among many other performance impairments, forces a two-transistor implementation. The performance of this architecture is explained based on the presented theory, which is supported by simulations and corresponding measured results of a fully working implementation. The conclusions drawn allow the development of a set of design rules for future improvements, one of which is proposed and verified in this thesis. It suggests a significant modification to the traditional architecture: the phase-modulated carrier is now always on (thus allowing a single-transistor implementation), and the amplitude information is impressed onto the carrier phase according to a bi-phase code.
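As a rough numerical illustration of the amplitude-burst idea (not the thesis's testbench), the sketch below pulse-width-modulates the envelope by comparing it against a triangular reference and gates a constant-envelope, phase-modulated carrier on and off; all frequencies and signal choices are assumptions.

```python
# Sketch of an RF carrier amplitude-burst chain: a constant-envelope,
# phase-modulated carrier is gated on/off by a PWM encoding of the envelope.
# Frequencies and signals below are illustrative assumptions.
import numpy as np

fs = 1e6                          # sample rate (assumed)
t = np.arange(0, 1e-3, 1 / fs)    # 1 ms of signal
fc, f_pwm = 100e3, 20e3           # carrier and PWM reference frequencies

envelope = 0.5 + 0.4 * np.sin(2 * np.pi * 1e3 * t)   # desired amplitude in [0, 1]
phase = 0.3 * np.sin(2 * np.pi * 2e3 * t)            # phase modulation
carrier = np.cos(2 * np.pi * fc * t + phase)         # constant-envelope PM carrier

# Triangular PWM reference in [0, 1]; envelope > reference => carrier "on".
tri = 2 * np.abs((t * f_pwm) % 1 - 0.5)
gate = (envelope > tri).astype(float)

burst = gate * carrier   # on-off bursts; the narrowband reconstruction filter
                         # would then recover the amplitude-modulated RF signal
print(f"duty cycle {gate.mean():.2f} tracks the mean envelope {envelope.mean():.2f}")
```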
Abstract:
One of the tasks of teaching (Ball, Thames, & Phelps, 2008) concerns the work of interpreting student error and evaluating alternative algorithms used by students. Teachers' abilities to understand nonstandard student work affect their instructional decisions, the explanations they provide in the classroom, the way they guide their students, and how they conduct mathematical discussions. However, their knowledge, or their perception of that knowledge, may not correspond to the actual level of knowledge needed to support flexibility and fluency in a mathematics classroom. In this paper, we focus on Norwegian and Portuguese teachers' reflections when trying to make sense of students' use of nonstandard subtraction algorithms and of the mathematics embedded in them. By discussing the mathematical knowledge teachers reveal in their reflections on these situations, we can perceive the difficulties teachers have in making sense of student solutions that differ from those most commonly reached.
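One concrete instance of the kind of nonstandard algorithm at issue (a hypothetical illustration, not an example from the paper's data) is column-wise "partial differences" subtraction: each place value is subtracted independently, negative partials are allowed, and the signed partials are summed.

```python
# A nonstandard subtraction algorithm of the kind teachers must interpret:
# subtract each place value independently (allowing negative partials), then
# add the signed partials. Illustrative example, not taken from the study.

def partial_differences(a: int, b: int) -> int:
    """Compute a - b by column-wise signed partial differences."""
    partials, place = [], 1
    while a > 0 or b > 0:
        partials.append((a % 10 - b % 10) * place)   # e.g. 20 - 80 = -60
        a, b, place = a // 10, b // 10, place * 10
    return sum(partials)

# 523 - 287: partials are 3-7 = -4, 20-80 = -60, 500-200 = +300
# so the result is 300 - 60 - 4 = 236, matching the standard algorithm.
print(partial_differences(523, 287))   # 236
```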
Abstract:
This speech was given while the United States Senate was considering a bill to authorize the free coinage of the standard silver dollar and to restore its legal-tender character. Mr. Bayard argues against the bill in this speech. He is interrupted multiple times during the speech and questioned on his points.