78 results for false personation
Abstract:
Models which define fitness in terms of the per capita rate of increase of phenotypes are used to analyse patterns of individual growth. It is shown that sigmoid growth curves are an optimal strategy (i.e. they maximize fitness) if (Assumption 1a) mortality decreases with body size; (Assumption 2a) mortality is a convex function of specific growth rate, viewed from above; and (Assumption 3) there is a constraint on growth rate, which is attained in the first phase of growth. If the constraint is not attained, then size should increase at a progressively reducing rate. These predictions are biologically plausible. Catch-up growth, for growth-retarded individuals, is generally not an optimal strategy, though in special cases (e.g. seasonal breeding) it might be. Growth may be advantageous after first breeding if birth rate is a convex function of G (the fraction of production devoted to growth), viewed from above (Assumption 5a), or if mortality rate is a convex function of G, viewed from above (Assumption 6c). If Assumptions 5a and 6c are both false, growth should cease at the age of first reproduction. These predictions could be used to evaluate the incidence of indeterminate versus determinate growth in the animal kingdom, though the data currently available do not allow quantitative tests. For animals with invariant adult size, a method is given which allows one to calculate whether an increase in body size is favoured, given that fecundity and developmental time are thereby increased.
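As an illustrative sketch only (the notation here is not taken from the paper: W for body size, g for specific growth rate, μ for mortality rate, b for birth rate, t_m for age at first breeding), the kind of optimisation described above can be written as:

    \frac{dW}{dt} = g(t)\,W(t), \qquad
    \ell(t) = \exp\!\left(-\int_0^{t}\mu\bigl(W(s),g(s)\bigr)\,ds\right), \qquad
    g(t) \le g_{\max},

    1 = \int_{t_m}^{\infty} e^{-rt}\,\ell(t)\,b\bigl(W(t)\bigr)\,dt
    \quad\text{(Euler-Lotka; choose } g(\cdot) \text{ to maximise } r\text{)}.

Under this sketch, with μ decreasing in W (Assumption 1a) and convex in g (Assumption 2a), the optimal policy holds g at the constraint g_max early in life and lets it decline thereafter, which yields a sigmoid W(t).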
Abstract:
This paper argues that the direct, vertical toleration of certain types of citizen by the Rawlsian liberal state is appropriate and required in circumstances in which these types of citizen pose a threat to the stability of the state. By countering the claim that vertical toleration is redundant given a commitment to the Rawlsian version of the liberal democratic ideal, and by articulating a version of that ideal that shows this claim to be false, the paper reaffirms the centrality of vertical toleration in the Rawlsian liberal account of state-citizen relations.
Abstract:
Left inferior frontal gyrus (IFG) is a critical neural substrate for the resolution of proactive interference (PI) in working memory. We hypothesized that left IFG achieves this by controlling the influence of familiarity- versus recollection-based information about memory probes. Consistent with this idea, we observed evidence for an early-peaking (200 msec) signal corresponding to memory probe familiarity and a late-resolving (500 msec) signal corresponding to full accrual of trial-related contextual ("recollection-based") information. Next, we applied brief trains of repetitive transcranial magnetic stimulation (rTMS), time locked to these mnemonic signals, to left IFG and to a control region. Only early rTMS of left IFG produced a modulation of the false alarm rate for high-PI probes. Additionally, the magnitude of this effect was predicted by individual differences in susceptibility to PI. These results suggest that left IFG-based control may bias the influence of familiarity- and recollection-based signals on recognition decisions.
Abstract:
Issues pertaining to consumer understanding of food health claims are complex and difficult to disentangle because there is a surprising lack of multidisciplinary research aimed at evaluating how consumers are influenced by factors impacting on the evaluation process. In the EU, current legislation is designed to protect consumers from misleading and false claims but there is much debate about the concept of the ‘average consumer’ referred to in the legislation. This review provides an overview of the current legislative framework, discusses the concept of the ‘average consumer’ and brings together findings on consumer understanding from an international perspective. It examines factors related to the personal characteristics of individuals such as socio-demographic status, knowledge, and attitudes, and factors pertaining to food and food supplement products such as the wording of claims and the communication of the strength and consistency of the scientific evidence. As well as providing insights for future research, the conclusions highlight the importance of enhancing the communication of scientific evidence to improve consumer understanding of food health claims.
Abstract:
A polymerase chain reaction (PCR) for the specific detection of the gene sequence sefA, carried by all isolates of Salmonella enteritidis, was developed. The PCR could detect as few as four washed S enteritidis bacterial cells, but egg contents inhibited the PCR. Eggs spiked with 50 S enteritidis bacterial cells were homogenised, inoculated into buffered peptone water and grown at 37 degrees C for 16 hours, after which the PCR was successful. A positive internal control was developed to differentiate between true and false negative PCR results for the detection of S enteritidis. In a limited trial of the egg handling procedures and the PCR, one of 250 chickens' eggs from retail outlets was found to be contaminated with S enteritidis.
Abstract:
Our differences are three. The first arises from the belief that "... a nonzero value for the optimally chosen policy instrument implies that the instrument is efficient for redistribution" (Alston, Smith, and Vercammen, p. 543, paragraph 3). Consider the two equations: (1) τ* = f(β) and (2) π* = -f(β) + h(α, β), representing the solution to the problem of maximizing weighted Marshallian surplus using, simultaneously, a per-unit border intervention, τ, and a per-unit domestic intervention, π. In the solution, parameter α denotes the weight applied to producer surplus; parameter β denotes the weight applied to government revenues; consumer surplus is implicitly weighted one; and the country in question is small in the sense that it is unable to affect world price by any of its domestic adjustments (see the Appendix). Details of the forms of the functions f(β) and h(α, β) are easily derived, but what matters in the context of Alston, Smith, and Vercammen's Comment is: redistributive preferences that favor producers are consistent with higher values of α, and whereas the optimal domestic intervention, π*, has both "alpha and beta effects," the optimal border intervention, τ*, has only a "beta effect"; it does not have a redistributional role. [Author note: Garth Holloway is a reader in agricultural economics and statistics, Department of Agricultural and Food Economics, School of Agriculture, Policy, and Development, University of Reading. The author is very grateful to Xavier Irz, Bhavani Shankar, Chittur Srinivasan, Colin Thirtle, and Richard Tiffin for their comments and their wisdom; and to Mario Mazzochi, Marinos Tsigas, and Cal Turvey for their scholarship, including help in tracking down a fairly complete collection of the papers that cite Alston and Hurd. They are not responsible for any errors or omissions.] Note, in equation (1), that the border intervention is positive whenever a distortion exists, because δ > 0 implies β = 1 + δ > 1 and, thus, f(β) > 0 (see the Appendix). Using Alston, Smith, and Vercammen's definition, the instrument is now "efficient," and therefore has a redistributive role. But now suppose that the distortion is removed, so that β = 1 + δ = 1, δ = 0, and consequently the border intervention is zero. According to Alston, Smith, and Vercammen, the instrument is now "inefficient" and has no redistributive role. The reader will note that this thought experiment has said nothing about supporting farm incomes, and so has nothing whatsoever to do with efficient redistribution. Of course, the definition is false. It follows that a domestic distortion arising from the "excess-burden argument" (β = 1 + δ, δ > 0) does not make an export subsidy "efficient." The export subsidy, having only a "beta effect," does not have a redistributional role. The second disagreement emerges from the comment that Holloway "... uses an idiosyncratic definition of the relevant objective function of the government" (Alston, Smith, and Vercammen, p. 543, paragraph 2). The objective function that generates equations (1) and (2) (see the Appendix) is the same as the objective function used by Gardner (1995) when he first questioned Alston, Carter, and Smith's claim that a "domestic distortion can make a border intervention efficient in transferring surplus from consumers and taxpayers to farmers."
The objective function used by Gardner (1995) is the same objective function used in the contributions that precede it and thus defines the literature on the debate about border- versus domestic intervention (Streeten; Yeh; Paarlberg 1984, 1985; Orden; Gardner 1985). The objective function in the latter literature is the same as the one implied in another literature that originates from Wallace and includes most notably Gardner (1983), but also Alston and Hurd. The objective function in Holloway is this same objective function; it is, of course, Marshallian surplus. The third disagreement concerns scholarship. The Comment does not seem to be cognizant of several important papers, especially Bhagwati and Ramaswami, and Bhagwati, both of which precede Corden (1974, 1997); but also Lipsey and Lancaster, and Moschini and Sckokai; one important aspect of Alston and Hurd; and one extremely important result in Holloway. This oversight has some unfortunate repercussions. First, it misdirects to the wrong origins of intellectual property. Second, it misleads about the appropriateness of some welfare calculations. Third, it prevents Alston, Smith, and Vercammen from linking a finding in Holloway (pp. 242-43) with an old theorem (Lipsey and Lancaster) that settles the controversy (Alston, Carter, and Smith 1993, 1995; Gardner 1995; and, presently, Alston, Smith, and Vercammen) about the efficiency of border intervention in the presence of domestic distortions.
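For readability, the weighted Marshallian surplus objective described in this passage, and the solution quoted in equations (1) and (2), can be written schematically as follows (PS, CS and GR are shorthand introduced here for producer surplus, consumer surplus and government revenue; the exact functional forms of f and h are derived in the paper's Appendix and are not reproduced):

    \max_{\tau,\;\pi}\;\; \alpha\,PS(\tau,\pi) \;+\; CS(\tau,\pi) \;+\; \beta\,GR(\tau,\pi),
    \qquad \beta = 1 + \delta,\;\; \delta \ge 0,

    \tau^{*} = f(\beta), \qquad \pi^{*} = -f(\beta) + h(\alpha,\beta).

In this form the border instrument τ* carries only a "beta effect," while the domestic instrument π* carries both "alpha and beta effects," which is the point at issue in the first disagreement above.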
Abstract:
This paper presents a video surveillance framework that robustly and efficiently detects abandoned objects in surveillance scenes. The framework is based on a novel threat assessment algorithm which combines the concept of ownership with automatic understanding of social relations in order to infer abandonment of objects. Implementation is achieved through development of a logic-based inference engine based on Prolog. Threat detection performance is evaluated by testing against a range of datasets describing realistic situations, demonstrating a reduction in the number of false alarms generated. The proposed system represents the approach employed in the EU SUBITO project (Surveillance of Unattended Baggage and the Identification and Tracking of the Owner).
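The paper implements its inference engine in Prolog; purely as a minimal sketch of the kind of ownership-based abandonment rule such an engine might encode, the following Python fragment uses illustrative distance and time thresholds (is_abandoned, split_distance and abandonment_time are assumptions for this sketch, not names from the SUBITO system):

    def is_abandoned(object_pos, owner_pos, seconds_since_owner_left,
                     split_distance=3.0, abandonment_time=30.0):
        """Flag an object as abandoned when its inferred owner has moved beyond a
        distance threshold (metres) and has stayed away longer than a time threshold (s)."""
        dx = object_pos[0] - owner_pos[0]
        dy = object_pos[1] - owner_pos[1]
        separated = (dx * dx + dy * dy) ** 0.5 > split_distance
        return separated and seconds_since_owner_left > abandonment_time

    # Example: owner is 10 m away and has been gone for 45 s -> raise an alert
    print(is_abandoned((0.0, 0.0), (10.0, 0.0), seconds_since_owner_left=45.0))

The social-relation reasoning described in the abstract (e.g. an object handed to a companion is not abandoned) would add further conditions on top of this basic ownership rule.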
Abstract:
Recalling information involves the process of discriminating between relevant and irrelevant information stored in memory. Not infrequently, the relevant information needs to be selected from amongst a series of related possibilities. This is likely to be particularly problematic when the irrelevant possibilities are not only temporally or contextually appropriate but also overlap semantically with the target or targets. Here, we investigate the extent to which purely perceptual features which discriminate between irrelevant and target material can be used to overcome the negative impact of contextual and semantic relatedness. Adopting a distraction paradigm, it is demonstrated that when distracters are interleaved with targets presented either visually (Experiment 1) or auditorily (Experiment 2), a within-modality semantic distraction effect occurs; semantically related distracters impact upon recall more than unrelated distracters. In the semantically related condition, the number of intrusions in recall is reduced whilst the number of correctly recalled targets is simultaneously increased by the presence of perceptual cues to relevance (color features in Experiment 1 or the speaker's gender in Experiment 2). However, as demonstrated in Experiment 3, even presenting semantically related distracters in a language and a sensory modality (spoken Welsh) distinct from those of the targets (visual English) is insufficient to eliminate false recalls completely, or to restore correct recall to levels seen with unrelated distracters. Together, these experiments show how semantic and non-semantic discriminability shape patterns of both erroneous and correct recall.
Abstract:
Purpose: To quantify to what extent the new registration method, DARTEL (Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra), may reduce the required smoothing kernel width, and to investigate the minimum group size necessary for voxel-based morphometry (VBM) studies. Materials and Methods: A simulated atrophy approach was employed to explore the role of smoothing kernel, group size, and their interactions on VBM detection accuracy. Group sizes of 10, 15, 25, and 50 were compared for kernels between 0 and 12 mm. Results: A smoothing kernel of 6 mm achieved the highest atrophy detection accuracy for groups with 50 participants, and 8–10 mm for groups of 25, at P < 0.05 with familywise correction. The results further demonstrated that a group size of 25 was the lower limit when two different groups of participants were compared, whereas a group size of 15 was the minimum for longitudinal comparisons, but at P < 0.05 with false discovery rate correction. Conclusion: Our data confirm that DARTEL-based VBM generally benefits from smaller kernels and that different kernels perform best for different group sizes, with a tendency toward smaller kernels for larger groups. Importantly, the kernel selection was also affected by the threshold applied. This highlights that the choice of kernel in relation to group size should be considered with care.
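For context, VBM smoothing is typically an isotropic Gaussian kernel specified by its full width at half maximum (FWHM). A minimal sketch of applying the kernel widths explored above, assuming SciPy and an illustrative 1.5 mm isotropic voxel size (the voxel size and the random stand-in volume are assumptions, not details of the study):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def smooth(volume, fwhm_mm, voxel_mm=1.5):
        """Smooth a 3-D image with an isotropic Gaussian kernel given in mm FWHM."""
        if fwhm_mm == 0:
            return volume
        sigma_vox = fwhm_mm / (voxel_mm * np.sqrt(8.0 * np.log(2.0)))  # FWHM -> sigma, in voxels
        return gaussian_filter(volume, sigma=sigma_vox)

    gm = np.random.rand(60, 72, 60)        # stand-in for a grey-matter probability map
    for fwhm in (0, 6, 8, 10, 12):         # kernel widths in the range explored by the study
        smoothed = smooth(gm, fwhm)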
Abstract:
We propose and demonstrate a fully probabilistic (Bayesian) approach to the detection of cloudy pixels in thermal infrared (TIR) imagery observed from satellite over oceans. Using this approach, we show how to exploit the prior information and the fast forward-modelling capability that are typically available in the operational context to obtain improved cloud detection. The probability of clear sky for each pixel is estimated by applying Bayes' theorem, and we describe how to apply Bayes' theorem to this problem in general terms. Joint probability density functions (PDFs) of the observations in the TIR channels are needed; the PDFs for clear conditions are calculable from forward modelling and those for cloudy conditions have been obtained empirically. Using analysis fields from numerical weather prediction as prior information, we apply the approach to imagery representative of imagers on polar-orbiting platforms. In comparison with the established cloud-screening scheme, the new technique decreases both the rate of failure to detect cloud contamination and the false-alarm rate by one quarter. The rate of occurrence of cloud-screening-related errors of >1 K in area-averaged sea surface temperatures (SSTs) is reduced by 83%. Copyright © 2005 Royal Meteorological Society.
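A minimal per-pixel sketch of the Bayes' theorem step described above, assuming a prior probability of clear sky taken from NWP analysis fields, a forward-modelled clear-sky likelihood, and an empirical cloudy-sky likelihood (the array values and the 0.9 decision threshold are illustrative assumptions):

    import numpy as np

    def posterior_clear(prior_clear, likelihood_clear, likelihood_cloudy):
        """P(clear|y) = p(y|clear)P(clear) / [p(y|clear)P(clear) + p(y|cloudy)P(cloudy)]."""
        num = likelihood_clear * prior_clear
        den = num + likelihood_cloudy * (1.0 - prior_clear)
        return num / den

    # Toy example for a 2 x 2 block of pixels
    prior = np.full((2, 2), 0.8)                       # prior from NWP analysis fields
    p_y_clear = np.array([[0.9, 0.7], [0.05, 0.6]])    # from fast forward modelling
    p_y_cloudy = np.array([[0.2, 0.3], [0.8, 0.4]])    # from empirical PDFs
    mask_clear = posterior_clear(prior, p_y_clear, p_y_cloudy) > 0.9
    print(mask_clear)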
Abstract:
The quality control, validation and verification of the European Flood Alert System (EFAS) are described. EFAS is designed as a flood early warning system at pan-European scale, to complement national systems and provide flood warnings more than 2 days before a flood. On average 20–30 alerts per year are sent out to the EFAS partner network which consists of 24 National hydrological authorities responsible for transnational river basins. Quality control of the system includes the evaluation of the hits, misses and false alarms, showing that EFAS has more than 50% of the time hits. Furthermore, the skills of both the meteorological as well as the hydrological forecasts are evaluated, and are included here for a 10-year period. Next, end-user needs and feedback are systematically analysed. Suggested improvements, such as real-time river discharge updating, are currently implemented.
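The hits/misses/false-alarms evaluation mentioned above is a standard contingency-table calculation; a minimal sketch (the boolean series and counts are illustrative, not EFAS figures):

    def contingency_scores(forecast_alert, observed_flood):
        """Compute hit rate (POD) and false alarm ratio from paired boolean series."""
        hits = sum(f and o for f, o in zip(forecast_alert, observed_flood))
        misses = sum((not f) and o for f, o in zip(forecast_alert, observed_flood))
        false_alarms = sum(f and (not o) for f, o in zip(forecast_alert, observed_flood))
        pod = hits / (hits + misses) if hits + misses else float("nan")
        far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
        return pod, far

    # Toy example: six alert decisions checked against observed flooding
    print(contingency_scores([True, True, False, True, False, False],
                             [True, False, False, True, True, False]))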
Effects of temporal resolution of input precipitation on the performance of hydrological forecasting
Abstract:
Flood prediction systems rely on good-quality precipitation input data and forecasts to drive hydrological models. Most precipitation data come from daily stations with good spatial coverage. However, some flood events occur on sub-daily time scales, and flood prediction systems could benefit from using models calibrated on the same time scale. This study compares precipitation data aggregated from hourly stations (HP) and data disaggregated from daily stations (DP) with 6-hourly forecasts from ECMWF over the period 1 October 2006–31 December 2009. The HP and DP data sets were then used to calibrate two hydrological models, LISFLOOD-RR and HBV, and the latter was used in a flood case study. The HP data scored better than the DP data when evaluated against the forecast for lead times up to 4 days. However, this advantage did not carry over to the hydrological modelling, where the models gave similar scores for simulated runoff with the two datasets. The flood forecasting study showed that both datasets gave similar hit rates, whereas the HP data set gave much smaller false alarm rates (FAR). This indicates that using sub-daily precipitation in the calibration and initialisation of hydrological models can improve flood forecasting.
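A minimal sketch of how two 6-hourly precipitation series like HP and DP could be constructed, assuming hourly gauge data for HP and a simple uniform split of daily totals for DP (the study's actual disaggregation scheme may differ; the synthetic series below is purely illustrative):

    import numpy as np

    def hourly_to_6h(hourly):
        """Aggregate an hourly precipitation series (length divisible by 6) to 6-hourly totals."""
        return hourly.reshape(-1, 6).sum(axis=1)

    def daily_to_6h(daily):
        """Disaggregate daily totals to 6-hourly values by splitting each day uniformly."""
        return np.repeat(daily / 4.0, 4)

    hourly = np.random.gamma(shape=0.2, scale=2.0, size=48)   # two days of synthetic hourly rain
    hp = hourly_to_6h(hourly)                                  # 8 values, preserves sub-daily timing
    dp = daily_to_6h(hourly.reshape(2, 24).sum(axis=1))        # 8 values, timing smeared within days
    print(hp, dp)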
Abstract:
The incorporation of numerical weather predictions (NWP) into a flood forecasting system can increase forecast lead times from a few hours to a few days. A single NWP forecast from a single forecast centre, however, is insufficient, as it involves considerable non-predictable uncertainties and leads to a high number of false alarms. The availability of global ensemble numerical weather prediction systems through the THORPEX Interactive Grand Global Ensemble (TIGGE) offers a new opportunity for flood forecasting. The Grid-Xinanjiang distributed hydrological model, which is based on the Xinanjiang model theory and the topographical information of each grid cell extracted from the Digital Elevation Model (DEM), is coupled with ensemble weather predictions based on the TIGGE database (CMC, CMA, ECMWF, UKMO, NCEP) for flood forecasting. This paper presents a case study using the coupled flood forecasting model on the Xixian catchment (a drainage area of 8826 km2) located in Henan province, China. A probabilistic discharge forecast is provided as the end product. Results show that the combination of the Grid-Xinanjiang model and the TIGGE database provides a promising tool for early warning of flood events several days ahead.
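A probabilistic discharge product of this kind can be summarised, per lead time, as the fraction of ensemble members exceeding a warning threshold; a minimal sketch under assumed member counts and an illustrative threshold (none of these numbers come from the Xixian study):

    import numpy as np

    def exceedance_probability(ensemble_discharge, threshold):
        """ensemble_discharge has shape (members, lead_times); returns P(exceedance) per lead time."""
        return (ensemble_discharge > threshold).mean(axis=0)

    # Toy example: 51 members, 10 daily lead times, warning threshold of 800 m^3/s (illustrative)
    members = np.random.lognormal(mean=6.5, sigma=0.3, size=(51, 10))
    print(exceedance_probability(members, threshold=800.0))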
Abstract:
Following trends in operational weather forecasting, where ensemble prediction systems (EPS) are now increasingly the norm, flood forecasters are beginning to experiment with using similar ensemble methods. Most of the effort to date has focused on the substantial technical challenges of developing coupled rainfall-runoff systems to represent the full cascade of uncertainties involved in predicting future flooding. As a consequence, much less attention has been given to the communication and eventual use of EPS flood forecasts. Drawing on interviews and other research with operational flood forecasters from across Europe, this paper highlights a number of challenges to communicating and using ensemble flood forecasts operationally. It is shown that operational flood forecasters understand the skill, operational limitations, and informational value of EPS products in a variety of different and sometimes contradictory ways. Despite the efforts of forecasting agencies to design effective ways to communicate EPS forecasts to non-experts, operational flood forecasters were often skeptical about the ability of forecast recipients to understand or use them appropriately. It is argued that better training and closer contacts between operational flood forecasters and EPS system designers can help ensure that the uncertainty represented by EPS forecasts is communicated in ways that are most appropriate and meaningful for their intended consumers. Some fundamental political and institutional challenges to using ensembles, such as differing attitudes to false alarms and to responsibility for the management of blame in the event of poor or mistaken forecasts, are also highlighted. Copyright © 2010 Royal Meteorological Society.
Abstract:
A method of automatically identifying and tracking polar-cap plasma patches, utilising data inversion and feature-tracking methods, is presented. A well-established and widely used 4-D ionospheric imaging algorithm, the Multi-Instrument Data Assimilation System (MIDAS), inverts slant total electron content (TEC) data from ground-based Global Navigation Satellite System (GNSS) receivers to produce images of the free electron distribution in the polar-cap ionosphere. These are integrated to form vertical TEC maps. A flexible feature-tracking algorithm, TRACK, previously used extensively in meteorological storm-tracking studies is used to identify and track maxima in the resulting 2-D data fields. Various criteria are used to discriminate between genuine patches and "false-positive" maxima such as the continuously moving day-side maximum, which results from the Earth's rotation rather than plasma motion. Results for a 12-month period at solar minimum, when extensive validation data are available, are presented. The method identifies 71 separate structures consistent with patch motion during this time. The limitations of solar minimum and the consequent small number of patches make climatological inferences difficult, but the feasibility of the method for patches larger than approximately 500 km in scale is demonstrated and a larger study incorporating other parts of the solar cycle is warranted. Possible further optimisation of discrimination criteria, particularly regarding the definition of a patch in terms of its plasma concentration enhancement over the surrounding background, may improve results.
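A minimal sketch of the maximum-identification step that a tracker such as TRACK performs on each 2-D TEC map, assuming SciPy and an illustrative enhancement-over-background criterion (the actual discrimination criteria in the paper are more elaborate, and the window sizes, factor of 2 and synthetic map below are assumptions for this sketch):

    import numpy as np
    from scipy.ndimage import maximum_filter, uniform_filter

    def find_patch_candidates(tec_map, window=9, enhancement=2.0):
        """Return indices of local TEC maxima exceeding the local background by a given factor."""
        local_max = (tec_map == maximum_filter(tec_map, size=window))
        background = uniform_filter(tec_map, size=5 * window)   # smooth large-scale background
        enhanced = tec_map > enhancement * background
        return np.argwhere(local_max & enhanced)

    tec = np.random.rand(90, 180) * 5.0      # synthetic vertical TEC map (TECU)
    tec[40:45, 60:65] += 20.0                # implanted patch-like enhancement
    print(find_patch_candidates(tec))

Distinguishing genuine patches from persistent features such as the day-side maximum would then rest on the tracking stage, e.g. rejecting candidates whose motion simply follows the Earth's rotation.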