803 results for Counter-insurgency
Abstract:
Interference with time estimation from concurrent nontemporal processing has been shown to depend on the short-term memory requirements of the concurrent task (Fortin & Breton, 1995; Fortin, Rousseau, Bourque, & Kirouac, 1993). In particular, it has been claimed that active processing of information in short-term memory produces interference, whereas simply maintaining information does not. Here, four experiments are reported in which subjects were trained to produce a 2,500-msec interval and then perform concurrent memory tasks. Interference with timing was demonstrated for concurrent memory tasks involving only maintenance. In one experiment, increasing set size in a pitch memory task systematically lengthened temporal production. Two further experiments suggested that this was due to a specific interaction between the short-term memory requirements of the pitch task and those of temporal production. In the final experiment, subjects performed temporal production while concurrently remembering the durations of a set of tones. Interference with interval production was comparable to that produced by the pitch memory task. Results are discussed in terms of a pacemaker-counter model of temporal processing, in which the counter component is supported by short-term memory.
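The pacemaker-counter account mentioned in the closing sentence can be sketched in a few lines: a Poisson pacemaker emits pulses, a counter supported by short-term memory accumulates them, and production ends when a criterion count is reached. The pulse rate, criterion, and the "missed pulse" probability below are illustrative assumptions, not parameters from the paper.

```python
# Illustrative sketch of a pacemaker-counter model of temporal production:
# a Poisson pacemaker emits pulses, a counter accumulates them, and the
# produced interval ends at a criterion count. All numeric values are
# made-up illustrations, not estimates from the reported experiments.
import random

def produce_interval(rate_hz=50.0, criterion=125, p_miss=0.0, seed=0):
    """Return a produced interval in msec; p_miss models pulses lost
    from the counter under concurrent memory load."""
    rng = random.Random(seed)
    t, count = 0.0, 0
    while count < criterion:
        t += rng.expovariate(rate_hz)   # waiting time to next pulse (s)
        if rng.random() >= p_miss:      # did the pulse reach the counter?
            count += 1
    return t * 1000.0
```

With a 50 Hz rate and a criterion of 125 pulses the mean production is 2,500 msec, matching the target interval; raising `p_miss` lengthens productions, which is the direction of interference the experiments report for concurrent memory load.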
Abstract:
We introduce transreal analysis as a generalisation of real analysis. We find that the generalisation of the real exponential and logarithmic functions is well defined for all transreal numbers. Hence, we derive well defined values of all transreal powers of all non-negative transreal numbers. In particular, we find a well defined value for zero to the power of zero. We also note that the computation of products via the transreal logarithm is identical to the transreal product, as expected. We then generalise all of the common, real, trigonometric functions to transreal functions and show that transreal (sin x)/x is well defined everywhere. This raises the possibility that transreal analysis is total, in other words, that every function and every limit is everywhere well defined. If so, transreal analysis should be an adequate mathematical basis for analysing the perspex machine - a theoretical, super-Turing machine that operates on a total geometry. We go on to dispel all of the standard counter "proofs" that purport to show that division by zero is impossible. This is done simply by carrying the proof through in transreal arithmetic or transreal analysis. We find that either the supposed counter proof has no content or else that it supports the contention that division by zero is possible. The supposed counter proofs rely on extending the standard systems in arbitrary and inconsistent ways and then showing, tautologously, that the chosen extensions are not consistent. This shows only that the chosen extensions are inconsistent and does not bear on the question of whether division by zero is logically possible. By contrast, transreal arithmetic is total and consistent so it defeats any possible "straw man" argument. Finally, we show how to arrange that a function has finite or else unmeasurable (nullity) values, but no infinite values. This arithmetical arrangement might prove useful in mathematical physics because it outlaws naked singularities in all equations.
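The division rules the abstract describes can be mimicked concretely in a few lines of code. This is a hedged sketch that uses IEEE-754 NaN to stand in for nullity (Φ); it is not the authors' perspex-machine arithmetic, just the stated conventions: k/0 = ∞ for k > 0, k/0 = −∞ for k < 0, and 0/0 = Φ.

```python
# Minimal sketch of total (transreal-style) division, representing
# nullity (Phi) with IEEE-754 NaN purely for illustration.
import math

NULLITY = float("nan")  # stand-in for nullity (Phi)

def transreal_div(a, b):
    """Division defined for every pair of inputs, never raising."""
    if math.isnan(a) or math.isnan(b):
        return NULLITY                      # nullity propagates
    if b == 0:
        if a == 0:
            return NULLITY                  # 0/0 = Phi
        return math.copysign(math.inf, a)   # k/0 = +/- infinity
    return a / b
```

Under this encoding ∞/∞ also evaluates to NaN, agreeing with the transreal convention that it equals nullity, and no input combination raises an exception, which is the sense in which the arithmetic is total.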
Abstract:
As consumers demand more functionality from their electronic devices and manufacturers supply that demand, electrical power and clock requirements tend to increase; fortunately, reassessing the system architecture can counter this with suitable reductions. To maintain low clock rates and therefore reduce electrical power, this paper presents a parallel convolutional coder for the transmit side of many wireless consumer devices. The coder accepts a parallel data input and directly computes punctured convolutional codes without the need for a separate puncturing operation, while the coded bits are available at the output of the coder in parallel fashion. Because the computation is performed in parallel, the coder can be clocked 7 times slower than a conventional shift-register-based convolutional coder (using the DVB rate-7/8 code). The presented coder is directly relevant to the design of modern low-power consumer devices.
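The combination of convolutional encoding and puncturing that the coder computes in one step can be sketched serially (the paper's contribution is the parallel hardware, which is not reproduced here). The generators 171/133 octal with K = 7 are the usual DVB choice; the 7/8 puncturing masks below follow the common DVB convention but are an assumption to be checked against the standard.

```python
# Sketch of a rate-1/2 convolutional encoder (K = 7, generators 171/133
# octal, as commonly used in DVB) with rate-7/8 puncturing folded into
# the output stage. Puncturing masks are assumed, not taken from the paper.
G1, G2 = 0o171, 0o133            # generator polynomials (octal)
K = 7                            # constraint length
PUNCT_X = [1, 0, 0, 0, 1, 0, 1]  # keep/delete mask for X outputs
PUNCT_Y = [1, 1, 1, 1, 0, 1, 0]  # keep/delete mask for Y outputs

def parity(x):
    """Parity (mod-2 sum) of the set bits of x."""
    return bin(x).count("1") & 1

def encode_punctured(bits):
    """Encode a bit list, emitting only the unpunctured coded bits."""
    state, out = 0, []
    for i, b in enumerate(bits):
        state = ((state << 1) | b) & ((1 << K) - 1)  # shift register
        x, y = parity(state & G1), parity(state & G2)
        if PUNCT_X[i % 7]:
            out.append(x)
        if PUNCT_Y[i % 7]:
            out.append(y)
    return out
```

Every 7 input bits yield 3 surviving X bits and 5 surviving Y bits, i.e. 8 coded bits, giving the rate-7/8 code with no separate puncturing stage.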
Abstract:
Meteorological measurements from Lerwick Observatory, Shetland (60°09′N, 1°08′W), are compared with short-term changes in Climax neutron counter cosmic ray measurements. For transient neutron count reductions of 10–12%, broken cloud becomes at least 10% more frequent on the neutron minimum day, above expectations from sampling. This suggests a rapid timescale (1 day) cloud response to cosmic ray changes. However, larger or smaller neutron count reductions do not coincide with cloud responses exceeding sampling effects. Larger events are too rare to provide a robust signal above the sampling noise. Smaller events are too weak to be observed above the natural variability.
Abstract:
Two vertical cosmic ray telescopes for atmospheric cosmic ray ionization event detection are compared. Counter A, designed for low power remote use, was deployed in the Welsh mountains; its event rate increased with altitude as expected from atmospheric cosmic ray absorption. Independently, Counter B’s event rate was found to vary with incoming particle acceptance angle. Simultaneous colocated comparison of both telescopes exposed to atmospheric ionization showed a linear relationship between their event rates.
Abstract:
Typeface design: collaborative work commissioned by Adobe Inc. Published but unreleased. The Adobe Devanagari typefaces were commissioned from Tiro Typeworks and collaboratively designed by Tim Holloway, Fiona Ross and John Hudson, beginning in 2005. The types were officially released in 2009. The design brief was to produce a typeface for modern business communications in Hindi and other languages, to be legible both in print and on screen. Adobe Devanagari was designed to be highly readable in a range of situations including quite small sizes in spreadsheets and in continuous text setting, as well as at display sizes, where the full character of the typeface reveals itself. The construction of the letters is based on traditional penmanship but possesses less stroke contrast than many Devanagari types, in order to maintain strong, legible forms at smaller sizes. To achieve a dynamic, fluid style the design features a rounded treatment of distinguishing terminals and stroke reversals, open counters that also aid legibility at smaller sizes, and delicately flaring strokes. Together, these details reveal an original hand and provide a contemporary approach that is clean, clear and comfortable to read whether in short or long passages of text. This new approach to a traditional script is intended to counter the dominance of rigid, staccato-like effects of straight verticals and horizontals in earlier types and many existing fonts. OpenType Layout features in the fonts provide both automated and discretionary access to an extensive glyph set, enabling sophisticated typography. Many conjuncts preferred in classical literary texts and particularly in some North Indian languages are included; these literary conjuncts may be substituted by specially designed alternative linear forms and fitted half forms. The length of the ikars—ि and ी—varies automatically according to adjacent letter or conjunct width. Regional variants of characters and numerals (e.g. 
Marathi forms) are included as alternates. Careful attention has been given to the placement of all vowel signs and modifiers. The fonts include both proportional and tabular numerals in Indian and European styles. Extensive kerning covers several thousand possible combinations of half forms and full forms to anticipate arbitrary conjuncts in foreign loan words.
Abstract:
Around the time of Clausewitz’s writing, a new element was introduced into partisan warfare: ideology. Previously, under the ancien régime, partisans were what today we would call special forces, light infantry or cavalry, almost always mercenaries, carrying out special operations, while the main action in war took place between regular armies. Clausewitz lectured his students on such ‘small wars’. In the American War of Independence and the resistance against Napoleon and his allies, operations carried out by such partisans merged with counter-revolutionary, nationalist insurgencies, but these Clausewitz analysed in a distinct category, ‘people's war’. Small wars, people's war, etc. should thus not be thought of as a monopoly of either the political Right or the Left.
Abstract:
Adaptive filters used in code division multiple access (CDMA) receivers to counter interference have been formulated both with and without the assumption of training symbols being transmitted. They are known as training-based and blind detectors respectively. We show that the convergence behaviour of the blind minimum-output-energy (MOE) detector can be quite easily derived, unlike what was implied by the procedure outlined in a previous paper. The simplification results from the observation that the correlation matrix determining convergence performance can be made symmetric, after which many standard results from the literature on least mean square (LMS) filters apply immediately.
Abstract:
Adaptive least mean square (LMS) filters with or without training sequences, which are known as training-based and blind detectors respectively, have been formulated to counter interference in CDMA systems. The convergence characteristics of these two LMS detectors are analyzed and compared in this paper. We show that the blind detector is superior to the training-based detector with respect to convergence rate. On the other hand, the training-based detector performs better in the steady state, giving a lower excess mean-square error (MSE) for a given adaptation step size. A novel decision-directed LMS detector which achieves the low excess MSE of the training-based detector and the superior convergence performance of the blind detector is proposed.
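The detector variants compared above share the same LMS skeleton. A minimal sketch (illustrative variable names, not the papers' notation) shows how the proposed decision-directed update differs from the training-based one only in where the reference symbol comes from:

```python
# Minimal sketch of the LMS updates compared in the abstract: a
# training-based step driven by known symbols, and a decision-directed
# step that substitutes the detector's own hard decisions once the
# filter has roughly converged. Names are illustrative assumptions.
import numpy as np

def lms_step(w, r, d, mu):
    """One training-based LMS step: r is the received vector, d the
    known training symbol, mu the adaptation step size."""
    e = d - w @ r           # error against the training symbol
    return w + mu * e * r   # stochastic-gradient weight update

def dd_lms_step(w, r, mu):
    """Decision-directed step: the hard decision sign(w @ r) replaces
    the training symbol, so no training sequence is needed."""
    d_hat = np.sign(w @ r)
    e = d_hat - w @ r
    return w + mu * e * r
```

The step size `mu` controls the trade-off the abstract describes: a larger `mu` speeds convergence but raises the steady-state excess MSE, and the decision-directed variant keeps the training-based detector's low excess MSE once its decisions become reliable.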
Abstract:
Pregnant rats were given control (46 mg iron/kg, 61 mg zinc/kg), low-Zn (6.9 mg Zn/kg) or low-Zn plus Fe (168 mg Fe/kg) diets from day 1 of pregnancy. The animals were allowed to give birth and parturition times recorded. Exactly 24 h after the end of parturition the pups were killed and analysed for water, fat, protein, Fe and Zn contents and the mothers' haemoglobin (Hb) and packed cell volume (PCV) were measured. There were no differences in weight gain or food intakes throughout pregnancy. Parturition times were similar (mean time 123 (SE 15) min) and there were no differences in the number of pups born. Protein, water and fat contents of the pups were similar but the low-Zn Fe-supplemented group had higher pup Fe than the low-Zn unsupplemented group, and the control group had higher pup Zn than both the low-Zn groups. The low-Zn groups had a greater incidence of haemorrhaged or deformed pups, or both, than the controls. Pregnant rats were given diets of adequate Zn level (40 mg/kg) but with varying Fe:Zn (0.8, 1.7, 2.9, 3.7). Zn retention from the diet was measured using 65Zn as an extrinsic label on days 3, 10 and 17 of pregnancy with a whole-body gamma-counter. A group of non-pregnant rats was also included as controls. The 65Zn content of mothers and pups was measured 24-48 h after birth and at 14, 21 and 24 d of age. In all groups Zn retention was highest from the first meal, fell in the second meal and then rose in the third meal of the pregnant but not the non-pregnant rats. There were no differences between the groups given diets of varying Fe:Zn level. Approximately 25% of the 65Zn was transferred from the mothers to the pups by the time they were 48 h old, and a further 17% during the first 14 d of lactation. The pup 65Zn content did not significantly increase after the first 20 d of lactation but the maternal 65Zn level continued to fall gradually.
Abstract:
This paper explores principal‐agent issues in the stock selection processes of institutional property investors. Drawing upon an interview survey of fund managers and acquisition professionals, it focuses on the relationships between principals and external agents as they engage in property transactions. The research investigated the extent to which the presence of outcome‐based remuneration structures could lead to biased advice, overbidding and/or poor asset selection. It is concluded that institutional property buyers are aware of incentives for opportunistic behaviour by external agents, often have sufficient expertise to robustly evaluate agents’ advice and that these incentives are counter‐balanced by a number of important controls on potential opportunistic behaviour. There are strong counter‐incentives in the need for the agents to establish personal relationships and trust between themselves and institutional buyers, to generate repeat and related business and to preserve or generate a good reputation in the market.
Abstract:
DNA-strand exchange is a vital step in the recombination process, of which a key intermediate is the four-way DNA Holliday junction formed transiently in most living organisms. Here, the single-crystal structure at a resolution of 2.35 Å of such a DNA junction formed by d(CCGGTACCGG)2, which has crystallized in a more highly symmetrical packing mode than that previously observed for the same sequence, is presented. In this case, the structure is isomorphous to the mismatch sequence d(CCGGGACCGG)2, which reveals the roles of both lattice and DNA sequence in determining the junction geometry. The helices cross at the larger angle of 43.0° (the previously observed angle for this sequence was 41.4°) as a right-handed X. No metal cations were observed; the crystals were grown in the presence of only group I counter-cations.
Abstract:
Deception-detection is the crux of Turing’s experiment to examine machine thinking, conveyed through a capacity to respond with sustained and satisfactory answers to unrestricted questions put by a human interrogator. However, in the 60 years to the month since the publication of Computing Machinery and Intelligence, little agreement exists on a canonical format for Turing’s textual game of imitation, deception and machine intelligence. This research raises from the trapped mine of philosophical claims, counter-claims and rebuttals Turing’s own distinct five-minute question-answer imitation game, which he envisioned practicalised in two different ways: a) a two-participant interrogator-witness viva voce; b) a three-participant comparison of a machine with a human, both questioned simultaneously by a human interrogator. Using Loebner’s 18th Prize for Artificial Intelligence contest and Colby et al.’s 1972 transcript analysis paradigm, this research practicalised Turing’s imitation game with over 400 human participants and 13 machines across three original experiments. Results show that, at the current state of technology, a deception rate of 8.33% was achieved by machines in 60 human-machine simultaneous comparison tests. Results also show that more than 1 in 3 reviewers succumbed to hidden-interlocutor misidentification after reading transcripts from experiment 2. Deception-detection is essential to uncover the increasing number of malfeasant programmes, such as CyberLover, developed to steal identities and financially defraud users in chatrooms across the Internet. Practicalising Turing’s two tests can assist in understanding natural dialogue and mitigate the risk from cybercrime.
Abstract:
Classical counterinsurgency theory – written before the 19th century – has generally strongly opposed atrocities, as have theoreticians writing on how to conduct insurgencies. For a variety of reasons – ranging from pragmatic to religious or humanitarian – theoreticians of both groups have particularly argued for the lenient treatment of civilians associated with the enemy camp, although there is a marked pattern of exceptions, for example, where heretics or populations of cities refusing to surrender to besieging armies are concerned. And yet atrocities – defined here as acts of violence against the unarmed (non-combatants, or wounded or imprisoned enemy soldiers), or needlessly painful and/or humiliating treatment of enemy combatants, beyond any action needed to incapacitate or disarm them – occur frequently in small wars. Examples abound where these exhortations have been ignored, both by forces engaged in an insurgency and by forces trying to put down a rebellion. Why have so many atrocities been committed in war if so many arguments have been put forward against them? This is the basic puzzle for which the individual contributions to this special issue are seeking to find tentative answers, drawing on case studies.