941 results for partial nullity
Abstract:
Levels of lignin and hydroxycinnamic acid wall components in three genera of forage grasses (Lolium, Festuca and Dactylis) have been accurately predicted by Fourier-transform infrared spectroscopy using partial least squares models correlated to analytical measurements. Different models were derived that predicted the concentrations of acid detergent lignin, total hydroxycinnamic acids, total ferulate monomers plus dimers, p-coumarate and ferulate dimers in independent spectral test data from methanol-extracted samples of perennial forage grass with accuracies of 92.8%, 86.5%, 86.1%, 59.7% and 84.7% respectively, and analysis of model projection scores showed that the models relied generally on spectral features that are known absorptions of these compounds. Acid detergent lignin was predicted in samples of two species of energy grass (Phalaris arundinacea and Panicum virgatum) with an accuracy of 84.5%.
Abstract:
We propose a new all-optical, all-fibre scheme for conversion of time-division multiplexed to wavelength-division multiplexed signals using cross-phase modulation with triangular pulses. Partial signal regeneration using this technique is also demonstrated.
Abstract:
Partial information leakage in deterministic public-key cryptosystems refers to a problem that arises when information about either the plaintext or the key is leaked in subtle ways. Quite a common case is where there is a small number of possible messages that may be sent. An attacker may be able to crack the scheme simply by enumerating all the possible ciphertexts. Two methods are proposed for addressing the partial information leakage problem in RSA that incorporate a random element into the encrypted message to increase the number of possible ciphertexts. The resulting scheme is, effectively, an RSA-like cryptosystem which exhibits probabilistic encryption. The first method involves encrypting several similar messages with RSA and then using the Quadratic Residuosity Problem (QRP) to mark the intended one. In this way, an adversary who has correctly guessed two or more of the plaintexts is still in doubt about which message is the intended one. The cryptographic strength of the combined system is equal to the computational difficulty of factorising a large integer; ideally, this should be infeasible. The second scheme uses error-correcting codes for accommodating the random component. The plaintext is processed with an error-correcting code and deliberately corrupted before encryption. The introduced corruption lies within the error-correcting ability of the code, so as to enable the recovery of the original message. The random corruption offers a vast number of possible ciphertexts corresponding to a given plaintext; hence an attacker cannot deduce any useful information from it. The proposed systems are compared to other cryptosystems sharing similar characteristics, in terms of execution time and ciphertext size, so as to determine their practical utility. Finally, parameters which determine the characteristics of the proposed schemes are also examined.
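The second scheme's encode-corrupt-encrypt idea can be illustrated with a toy example: a 3x repetition code (which corrects one flipped bit per triple) plays the role of the error-correcting code, and a textbook RSA instance with deliberately tiny, insecure parameters plays the role of the cryptosystem. None of these parameters or the repetition code are from the paper; they are illustrative assumptions only.

```python
# Toy encode-corrupt-encrypt sketch; parameters are insecure by design.
import random

p, q, e = 61, 53, 17                      # textbook RSA: n = 3233
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))         # private exponent

def encode(bits):                         # 3x repetition code
    return [b for b in bits for _ in range(3)]

def corrupt(coded):                       # flip one random bit in each triple
    out = coded[:]
    for i in range(0, len(out), 3):
        out[i + random.randrange(3)] ^= 1
    return out

def decode(coded):                        # majority vote per triple
    return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

msg = [1, 0, 1]                           # 3-bit plaintext
noisy = corrupt(encode(msg))              # corruption stays within correction ability
m = int("".join(map(str, noisy)), 2)      # pack 9 bits into an integer < n
c = pow(m, e, n)                          # same plaintext -> many possible ciphertexts
plain = format(pow(c, d, n), f"0{len(noisy)}b")
recovered = decode([int(b) for b in plain])
assert recovered == msg                   # legitimate receiver always recovers msg
```

Because each triple is corrupted at random, one 3-bit message maps to 27 distinct 9-bit codewords, so an enumerating attacker faces a far larger ciphertext space than with deterministic RSA — the effect, in miniature, that the abstract describes.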
Abstract:
The English writing system is notoriously irregular in its orthography at the phonemic level. It was therefore proposed that focusing beginner spellers’ attention on sound-letter relations at the sub-syllabic level might improve spelling performance. This hypothesis was tested in Experiments 1 and 2 using a ‘clue word’ paradigm to investigate the effect of an analogy-teaching intervention, versus no intervention, on the spelling performance of an experimental group and controls. The results overall showed the intervention to be effective in improving spelling, and this effect to be enduring. Experiment 3 demonstrated a greater application of analogy in spelling when the clue words that participants used to spell test words by analogy remained in view during testing. A series of regression analyses, with spelling entered as the criterion variable and age, analogy and phonological plausibility (PP) as predictors, showed both analogy and PP to be highly predictive of spelling. Experiment 4 showed that children could use analogy to improve their spelling, even without intervention, by comparing their performance in spelling words presented in analogous categories or in random lists. Consideration of children’s patterns of analogy use at different points of development showed three age groups to use similar patterns of analogy, but contrasting analogy patterns for spelling different words. This challenges stage theories of analogy use in literacy. Overall the most salient units used in analogy were the rime and, to a slightly lesser degree, the onset-vowel and vowel. Finally, Experiment 5 showed analogy and phonology to be fairly equally influential in spelling, but analogy to be more influential than phonology in reading. Five separate experiments therefore found analogy to be highly influential in spelling. Experiment 5 also considered the role of memory and attention in literacy attainment.
The important implications of this research are that analogy, rather than a purely phonics-based strategy, is instrumental in correct spelling in English.
Abstract:
In previous statnotes, the application of correlation and regression methods to the analysis of two variables (X, Y) was described. The most important statistic used to measure the degree of correlation between two variables is Pearson’s ‘product moment correlation coefficient’ (‘r’). The correlation between two variables may be due to their common relation to other variables. Hence, investigators using correlation studies need to be alert to the possibility of spurious correlation, and the method of ‘partial correlation’ is one way of taking this into account. This statnote applies the methods of partial correlation to three scenarios. First, to a fairly obvious example of a spurious correlation resulting from the ‘size effect’ involving the relationship between the number of general practitioners (GPs) and the number of deaths of patients in a town. Second, to the relationship between the abundance of the nitrogen-fixing bacterium Azotobacter in soil and three soil variables, and finally, to a more complex scenario, first introduced in Statnote 24, involving the relationship between the growth of lichens in the field and climate.
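The first-order partial correlation used in these scenarios can be computed directly from the three pairwise Pearson coefficients via r_xy.z = (r_xy − r_xz·r_yz) / √((1 − r_xz²)(1 − r_yz²)). The sketch below exercises the formula on invented data mimicking the GP/deaths ‘size effect’ example: both variables are driven by town size, so the raw correlation is large but the partial correlation controlling for size is near zero.

```python
# Partial correlation sketch on synthetic 'size effect' data.
import numpy as np

def partial_corr(x, y, z):
    """Pearson correlation of x and y with the linear effect of z removed."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

rng = np.random.default_rng(1)
size = rng.normal(size=200)                        # confounder: town size
gps = 2 * size + rng.normal(scale=0.5, size=200)   # GPs scale with town size
deaths = 3 * size + rng.normal(scale=0.5, size=200)  # so do deaths

print(np.corrcoef(gps, deaths)[0, 1])   # large, but spurious
print(partial_corr(gps, deaths, size))  # near zero once size is held constant
```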
Abstract:
We experimentally confirm the optimum combination of modulator delay and filter bandwidth to maximize the dispersion tolerance of partial DPSK.
Abstract:
This is the first paper to present findings evaluating the consequences for employees of full and partial privatization using difference-in-differences combined with propensity score matching. We find: (1) partial privatization causes job creation in contrast to full privatization, which destroys jobs, (2) full privatization causes higher labor productivity improvement than partial privatization, (3) wage increases occur only in partially privatized firms and (4) there are small increases in labor quality investment in both cases. The results suggest partial privatization exploits market discipline to raise labor productivity whilst simultaneously providing welfare improvements for labor. This is the ‘win-win’ outcome predicted by the ‘helping hand’ theory of government. Our results suggest that governments are likely to gain wider support for a program of partial privatization rather than full privatization.
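The difference-in-differences estimator that the study combines with propensity score matching nets out the common time trend by subtracting the matched control group's change from the treated group's change. A minimal numerical sketch, with hypothetical firm-level means (not the paper's data):

```python
# DiD = (treated_post - treated_pre) - (control_post - control_pre)
# Hypothetical mean employment figures, indexed to 100 pre-privatization:
treated_pre, treated_post = 100.0, 118.0   # partially privatized firms
control_pre, control_post = 100.0, 105.0   # matched never-privatized firms

did = (treated_post - treated_pre) - (control_post - control_pre)
print(did)  # 13.0: employment effect net of the common time trend
```

Propensity score matching supplies the control group here: firms are matched on their estimated probability of privatization, so the parallel-trends comparison is made between observably similar firms.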
Abstract:
In this paper, we consider analytical and numerical solutions to the Dirichlet boundary-value problem for the biharmonic partial differential equation on a disc of finite radius in the plane. The physical interpretation of these solutions is that of the harmonic oscillations of a thin, clamped plate. For the linear, fourth-order, biharmonic partial differential equation in the plane, it is well known that the solution method of separation in polar coordinates is not possible, in general. However, in this paper, for circular domains in the plane, it is shown that a method, here called quasi-separation of variables, does lead to solutions of the partial differential equation. These solutions are products of solutions of two ordinary linear differential equations: a fourth-order radial equation and a second-order angular differential equation. As is to be expected, without complete separation of the polar variables, there is some restriction on the range of these solutions in comparison with the corresponding separated solutions of the second-order harmonic differential equation in the plane. Notwithstanding these restrictions, the quasi-separation method leads to solutions of the Dirichlet boundary-value problem on a disc with centre at the origin, with boundary conditions determined by the solution and its inward-drawn normal derivative taking the value 0 on the edge of the disc. One significant feature of these biharmonic boundary-value problems, in general, follows from the form of the biharmonic differential expression when represented in polar coordinates. In this form, the differential expression has a singularity at the origin, in the radial variable. This singularity translates to a singularity at the origin of the fourth-order radial separated equation; this singularity necessitates the application of a third boundary condition in order to determine a self-adjoint solution to the Dirichlet boundary-value problem.
The penultimate section of the paper reports on numerical solutions to the Dirichlet boundary-value problem; these results are also presented graphically. Two specific cases are studied in detail and numerical values of the eigenvalues are compared with the results obtained in earlier studies.
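For reference, the clamped-plate problem described above is the standard Dirichlet problem for the biharmonic operator on a disc of radius a; writing the Laplacian in polar coordinates makes the singularity at the origin explicit (these are the standard textbook forms, stated here for orientation, not reproduced from the paper):

```latex
\nabla^4 u \;\equiv\; \left(\frac{\partial^2}{\partial r^2}
  + \frac{1}{r}\frac{\partial}{\partial r}
  + \frac{1}{r^2}\frac{\partial^2}{\partial\theta^2}\right)^{2} u = 0
  \quad (0 \le r < a),
\qquad
u = \frac{\partial u}{\partial n} = 0 \quad \text{on } r = a.
```

The quasi-separated solutions take the product form u(r, θ) = R(r)Θ(θ), with Θ satisfying a second-order angular equation and R a fourth-order radial equation; the 1/r and 1/r² coefficients above are the source of the radial singularity at r = 0 that forces the third boundary condition mentioned in the abstract.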
Abstract:
Although the majority of people with epilepsy have a good prognosis and their seizures can be well controlled with pharmacotherapy, up to one-third of patients can develop drug-resistant epilepsy, especially those patients with partial seizures. This unmet need has driven considerable efforts over the last few decades aimed at developing and testing newer antiepileptic agents to improve seizure control. One of the most promising antiepileptic drugs of the new generation is zonisamide, a benzisoxazole derivative chemically unrelated to other anticonvulsant agents. In this article, the authors present the results of a systematic literature review summarizing the current evidence on the efficacy and tolerability of zonisamide for the treatment of partial seizures. Of particular interest within this updated review are the recent data on the use of zonisamide as monotherapy, as they might open new therapeutic avenues. © 2014 Springer Healthcare.
Abstract:
The accurate identification of T-cell epitopes remains a principal goal of bioinformatics within immunology. As the immunogenicity of peptide epitopes is dependent on their binding to major histocompatibility complex (MHC) molecules, the prediction of binding affinity is a prerequisite to the reliable prediction of epitopes. The iterative self-consistent (ISC) partial-least-squares (PLS)-based additive method is a recently developed bioinformatic approach for predicting class II peptide-MHC binding affinity. The ISC-PLS method overcomes many of the conceptual difficulties inherent in the prediction of class II peptide-MHC affinity, such as the binding of a mixed population of peptide lengths due to the open-ended class II binding site. The method has applications in both the accurate prediction of class II epitopes and the manipulation of affinity for heteroclitic and competitor peptides. The method is applied here to six class II mouse alleles (I-Ab, I-Ad, I-Ak, I-As, I-Ed, and I-Ek) and included peptides up to 25 amino acids in length. A series of regression equations highlighting the quantitative contributions of individual amino acids at each peptide position was established. The initial model for each allele exhibited only moderate predictivity. Once the set of selected peptide subsequences had converged, the final models exhibited a satisfactory predictive power. Convergence was reached between the 4th and 17th iterations, and the leave-one-out cross-validation statistical terms (q², SEP, and NC) ranged between 0.732 and 0.925, 0.418 and 0.816, and 1 and 6, respectively. The non-cross-validated statistical terms r² and SEE ranged between 0.98 and 0.995 and 0.089 and 0.180, respectively. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package.
The resulting models, which can be used for accurate T-cell epitope prediction, will be made freely available online (http://www.jenner.ac.uk/MHCPred).