78 results for false personation
Abstract:
Glaciers occupy an area of ~1600 km² in the Caucasus Mountains. There is widespread evidence of retreat since the Little Ice Age, but an up-to-date regional assessment of glacier change is lacking. In this paper, satellite imagery (Landsat Thematic Mapper and Enhanced Thematic Mapper Plus) is used to obtain the terminus position of 113 glaciers in the central Caucasus in 1985 and 2000, using a manual delineation process based on a false-colour composite (bands 5, 4, 3). Measurements reveal that 94% of the glaciers have retreated, 4% exhibited no overall change and 2% advanced. The mean retreat rate equates to ~8 m a⁻¹, and maximum retreat rates approach ~38 m a⁻¹. The largest (>10 km²) glaciers retreated twice as much (~12 m a⁻¹) as the smallest (<1 km²) glaciers (~6 m a⁻¹), and glaciers at lower elevations generally retreated greater distances. Supraglacial debris cover has increased in association with glacier retreat, and the surface area of bare ice has reduced by ~10% between 1985 and 2000. Results are compared to declassified Corona imagery from the 1960s and 1970s and detailed field measurements and mass-balance data for Djankuat glacier, central Caucasus. It is concluded that the decrease in glacier area appears to be primarily driven by increasing temperatures since the 1970s and especially since the mid-1990s. Continued retreat could lead to considerable changes in glacier runoff, with implications for regional water resources.
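The quoted rates follow from simple arithmetic on the digitized terminus positions: frontal retreat between the two image dates divided by the 15-year interval. A minimal sketch of that conversion, using hypothetical retreat distances rather than the paper's measurements:

```python
# Minimal sketch of the rate calculation implied by the abstract: terminus
# retreat between two image dates divided by the elapsed time.
# Distances below are hypothetical, not the paper's data.

def retreat_rate(retreat_m, year_start=1985, year_end=2000):
    """Mean terminus retreat rate in metres per year."""
    return retreat_m / (year_end - year_start)

print(retreat_rate(120.0))  # 8.0 m/yr, comparable to the reported mean
print(retreat_rate(570.0))  # 38.0 m/yr, comparable to the reported maximum
```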
Abstract:
This paper reports changes in supraglacial debris cover and supra-/proglacial lake development associated with recent glacier retreat (1985-2000) in the central Caucasus Mountains, Russia. Satellite imagery (Landsat TM and ETM+) was used to map the surface area and supraglacial debris cover on six neighbouring glaciers in the Adylsu valley through a process of manual digitizing on a false-colour composite of bands 5, 4, 3 (red, green, blue). The distribution and surface area of supraglacial and proglacial lakes was digitized for a larger area, which extended to the whole Landsat scene. We also compare our satellite interpretations to field observations in the Adylsu valley. Supraglacial debris cover ranges from <5% to >25% on individual glaciers, but glacier retreat between 1985 and 2000 resulted in a 3-6% increase in the proportion of each glacier covered by debris. The only exception to this trend was a very small glacier where debris cover did not change significantly and remote mapping proved more difficult. The increase in debris cover is characterized by a progressive up-glacier migration, which we suggest is being driven by focused ablation (and therefore glacier thinning) at the up-glacier limit of the debris cover, resulting in the progressive exposure of englacial debris. Glacier retreat has also been accompanied by an increase in the number of proglacial and supraglacial lakes in our study area, from 16 in 1985 to 24 in 2000, representing a 57% increase in their cumulative surface area. These lakes appear to be impounded by relatively recent lateral and terminal moraines and by debris deposits on the surface of the glacier. The changes in glacier surface characteristics reported here are likely to exert a profound influence on glacier mass balance and the glaciers' future response to climate change. They may also increase the likelihood of glacier-related hazards (lake outbursts, debris slides), and future monitoring is recommended.
Abstract:
The Integrated Catchment Model of Nitrogen (INCA-N) was applied to the Lambourn and Pang river-systems to integrate current process knowledge and available data to test two hypotheses and thereby determine the key factors and processes controlling the movement of nitrate at the catchment scale in lowland, permeable river-systems: (i) that the in-stream nitrate concentrations were controlled by two end-members only, groundwater and soil-water, and (ii) that the groundwater was the key store of nitrate in these river-systems. Neither hypothesis was proved true or false. Due to equifinality in the model structure and parameters, at least two alternative models provided viable explanations for the observed in-stream nitrate concentrations. One model demonstrated that the seasonal pattern in the stream-water nitrate concentrations was controlled mainly by the mixing of ground- and soil-water inputs. An alternative model demonstrated that in-stream processes were important. It is hoped that further measurements of nitrate concentrations made in the catchment soil- and ground-water and in-stream may constrain the model and help determine the correct structure, though other recent studies suggest that these data may serve only to highlight the heterogeneity of the system. Thus, when making model-based assessments and forecasts, it is recommended that all possible models be used and the range of forecasts compared. In this study both models suggest that cereal production contributed approximately 50% of the simulated in-stream nitrate load in the two catchments, and that the point-source contribution to the in-stream load was minimal.
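As a reading aid for the first hypothesis, the sketch below shows a conservative two end-member mixing calculation of the kind described, with hypothetical concentrations and flow fractions; it is not the INCA-N model itself.

```python
# Minimal sketch (not INCA-N): conservative mixing of two end-members,
# groundwater and soil water, into an in-stream nitrate concentration.
# All values are hypothetical.

def mixed_nitrate(c_groundwater, c_soilwater, groundwater_fraction):
    """Stream nitrate concentration from conservative two-component mixing.

    groundwater_fraction is the share of streamflow supplied by groundwater
    (0..1); the remainder is assumed to come from soil water.
    """
    if not 0.0 <= groundwater_fraction <= 1.0:
        raise ValueError("groundwater_fraction must lie between 0 and 1")
    return (groundwater_fraction * c_groundwater
            + (1.0 - groundwater_fraction) * c_soilwater)

# Hypothetical example: 6 mg N/l groundwater, 11 mg N/l soil water,
# 70% of flow from groundwater.
print(mixed_nitrate(6.0, 11.0, 0.7))  # 7.5 mg N/l
```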
Abstract:
Research in construction management is diverse in content and in quality. There is much to be learned from more fundamental disciplines. Construction is a sub-set of human experience rather than a completely separate phenomenon. Therefore, it is likely that there are few problems in construction requiring the invention of a completely new theory. If construction researchers base their work only on that of other construction researchers, our academic community will become less relevant to the world at large. The theories that we develop or test must be of wider applicability to be of any real interest. In undertaking research, researchers learn a lot about themselves. Perhaps the only difference between research and education is that if we are learning about something which no-one else knows, then it is research, otherwise it is education. Self-awareness of this will help to reduce the chances of publishing work which only reveals a researcher’s own learning curve. Scientific method is not as simplistic as non-scientists claim and is the only real way of overcoming methodological weaknesses in our work. The reporting of research may convey the false impression that it is undertaken in the sequence in which it is written. Construction is not so unique and special as to require a completely different set of methods from other fields of enquiry. Until our research is reported in mainstream journals and conferences, there is little chance that we will influence the wider academic community and a concomitant danger that it will become irrelevant. The most useful insights will come from research which challenges the current orthodoxy rather than research which merely reports it.
Abstract:
M. R. Banaji and A. G. Greenwald (1995) demonstrated a gender bias in fame judgments: an increase in judged fame due to prior processing that was larger for male than for female names. They suggested that participants shift criteria between judging men and women, using the more liberal criterion for judging men. This "criterion-shift" account appeared problematic for a number of reasons. In this article, 3 experiments are reported that were designed to evaluate the criterion-shift account of the gender bias in the false-fame effect against a distribution-shift account. The results were consistent with the criterion-shift account, and they helped to define more precisely the situations in which people may be ready to shift their response criterion on an item-by-item basis. In addition, the results were incompatible with an interpretation of the criterion shift as an artifact of the experimental situation in the experiments reported by M. R. Banaji and A. G. Greenwald.
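For readers unfamiliar with the two accounts, the sketch below uses a generic equal-variance signal-detection framing (not the authors' analysis) to show that lowering the response criterion and shifting the familiarity distribution can both raise the false-fame rate, which is why purpose-built experiments are needed to separate them. All numbers are hypothetical.

```python
# Generic signal-detection sketch (not the authors' model) contrasting the two
# accounts: a more liberal (lower) criterion for male names versus a shifted
# familiarity distribution. All numbers are hypothetical.
from statistics import NormalDist

def famous_rate(mean_familiarity, criterion):
    """P('famous' judgement) for names whose familiarity is N(mean, 1)."""
    return 1.0 - NormalDist(mu=mean_familiarity, sigma=1.0).cdf(criterion)

baseline = famous_rate(mean_familiarity=0.5, criterion=1.5)
criterion_shift = famous_rate(mean_familiarity=0.5, criterion=1.0)      # lower criterion
distribution_shift = famous_rate(mean_familiarity=1.0, criterion=1.5)   # shifted familiarity

print(round(baseline, 3), round(criterion_shift, 3), round(distribution_shift, 3))
# 0.159 0.309 0.309: both shifts raise the false-fame rate by the same amount
# here, which is why the accounts are hard to tell apart from hit/false-alarm
# rates alone.
```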
Abstract:
Counterstreaming electrons (CSEs) are treated as signatures of closed magnetic flux, i.e., loops connected to the Sun at both ends. However, CSEs at 1 AU likely fade as the apex of a closed loop passes beyond some distance R, owing to scattering of the sunward beam along its continually increasing path length. The remaining antisunward beam at 1 AU would then give a false signature of open flux. Subsequent opening of a loop at the Sun by interchange reconnection with an open field line would produce an electron dropout (ED) at 1 AU, as if two open field lines were reconnecting to completely disconnect from the Sun. Thus EDs can be signatures of interchange reconnection as well as the commonly attributed disconnection. We incorporate CSE fadeout into a model that matches time-varying closed flux from interplanetary coronal mass ejections (ICMEs) to the solar cycle variation in heliospheric flux. Using the observed occurrence rate of CSEs at solar maximum, the model estimates R ∼ 8–10 AU. Hence we demonstrate that EDs should be much rarer than CSEs at 1 AU, as EDs can only be detected when the juncture points of reconnected field lines lie sunward of the detector, whereas CSEs continue to be detected in the legs of all loops that have expanded beyond the detector, out to R. We also demonstrate that if closed flux added to the heliosphere by ICMEs is instead balanced by disconnection elsewhere, then ED occurrence at 1 AU would still be rare, contrary to earlier expectations.
Abstract:
One of the primary goals of the Center for Integrated Space Weather Modeling (CISM) effort is to assess and improve prediction of the solar wind conditions in near‐Earth space, arising from both quasi‐steady and transient structures. We compare 8 years of L1 in situ observations to predictions of the solar wind speed made by the Wang‐Sheeley‐Arge (WSA) empirical model. The mean‐square error (MSE) between the observations and the model predictions is used to reach a number of useful conclusions: there is no systematic lag in the WSA predictions, the MSE is found to be highest at solar minimum and lowest during the rise to solar maximum, and the optimal lead time for 1 AU solar wind speed predictions is found to be 3 days. However, MSE is shown to frequently be an inadequate “figure of merit” for assessing solar wind speed predictions. A complementary, event‐based analysis technique is developed in which high‐speed enhancements (HSEs) are systematically selected and associated from observed and model time series. The WSA model is validated using comparisons of the number of hit, missed, and false HSEs, along with the timing and speed magnitude errors between the forecasted and observed events. Morphological differences between the different HSE populations are investigated to aid interpretation of the results and improvements to the model. Finally, by defining discrete events in the time series, model predictions from above and below the ecliptic plane can be used to estimate an uncertainty in the predicted HSE arrival times.
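The two kinds of verification mentioned, the MSE "figure of merit" and the event-based hit/miss/false-alarm counting, can be illustrated with a short sketch on hypothetical observed and forecast series; this is not the CISM or WSA analysis code.

```python
# Minimal sketch of the two verification approaches mentioned above, applied
# to hypothetical observed and forecast solar wind series; not the WSA model
# or the CISM pipeline.
import numpy as np

def mean_square_error(observed, predicted):
    """MSE between an observed and a predicted time series."""
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    return float(np.mean((observed - predicted) ** 2))

def event_counts(observed_events, forecast_events):
    """Hits, misses and false alarms from boolean event flags per interval."""
    obs = np.asarray(observed_events, dtype=bool)
    fcst = np.asarray(forecast_events, dtype=bool)
    hits = int(np.sum(obs & fcst))
    misses = int(np.sum(obs & ~fcst))
    false_alarms = int(np.sum(~obs & fcst))
    return hits, misses, false_alarms

# Hypothetical daily solar wind speeds (km/s) and high-speed-enhancement flags.
print(mean_square_error([420, 550, 610], [450, 500, 640]))  # ~1433 (km/s)^2
print(event_counts([0, 1, 1, 0], [1, 1, 0, 0]))             # (1, 1, 1)
```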
Abstract:
One of the largest uncertainties in quantifying the impact of aviation on climate concerns the formation and spreading of persistent contrails. The inclusion of a cloud scheme that allows for ice supersaturation into the Integrated Forecast System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF) can be a useful tool to help reduce these uncertainties. This study evaluates the quality of the ECMWF forecasts with respect to ice supersaturation in the upper troposphere by comparing them to visual observations of persistent contrails and radiosonde measurements of ice supersaturation over England. The performance of 1- to 3-day forecasts is compared, including also the vertical accuracy of the supersaturation forecasts. It is found that the operational forecasts from the ECMWF are able to predict cold ice-supersaturated regions very well. For the best cases Peirce skill scores of 0.7 are obtained, with hit rates at times exceeding 80% and false-alarm rates below 20%. Results are very similar for comparisons with visual observations and radiosonde measurements, the latter providing the better statistical significance.
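For reference, the quoted skill measures relate as follows: the Peirce skill score is the hit rate minus the false-alarm rate, computed from a 2x2 contingency table of forecast versus observed events. The sketch below uses illustrative counts only.

```python
# Illustrative counts only: the skill measures quoted above, computed from a
# 2x2 contingency table of forecast versus observed ice-supersaturation events.
def skill_scores(hits, misses, false_alarms, correct_negatives):
    """Hit rate, false-alarm rate and Peirce skill score (their difference)."""
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return hit_rate, false_alarm_rate, hit_rate - false_alarm_rate

h, f, pss = skill_scores(hits=40, misses=10, false_alarms=15, correct_negatives=85)
print(f"hit rate {h:.2f}, false-alarm rate {f:.2f}, Peirce skill score {pss:.2f}")
# hit rate 0.80, false-alarm rate 0.15, Peirce skill score 0.65
```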
Abstract:
It is generally accepted that genetics may be an important factor in explaining the variation between patients’ responses to certain drugs. However, identification and confirmation of the responsible genetic variants is proving to be a challenge in many cases. A number of difficulties that may be encountered in pursuit of these variants, such as non-replication of a true effect, population structure and selection bias, can be mitigated or at least reduced by appropriate statistical methodology. Another major statistical challenge facing pharmacogenetics studies is trying to detect possibly small polygenic effects using large volumes of genetic data, while controlling the number of false positive signals. Here we review statistical design and analysis options available for investigations of genetic resistance to anti-epileptic drugs.
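As one concrete example of controlling false positives across many association tests, the sketch below applies the standard Benjamini-Hochberg false-discovery-rate procedure to made-up p-values; the review itself does not prescribe this particular method.

```python
# Illustrative only: the Benjamini-Hochberg false-discovery-rate procedure,
# one standard way of limiting false positive signals across many association
# tests. P-values are made up.
def benjamini_hochberg(p_values, q=0.05):
    """Return indices of tests declared significant at FDR level q."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    threshold_rank = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank * q / m:
            threshold_rank = rank
    return sorted(order[:threshold_rank])

p_values = [0.001, 0.8, 0.02, 0.04, 0.3, 0.0005]
print(benjamini_hochberg(p_values))  # [0, 2, 5]
```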
Abstract:
When assessing hypotheses, the possibility and consequences of false-positive conclusions should be considered along with the avoidance of false-negative ones. A recent assessment of the system of rice intensification (SRI) by McDonald et al. [McDonald, A.J., Hobbs, P.R., Riha, S.J., 2006. Does the system of rice intensification outperform conventional best management? A synopsis of the empirical record. Field Crops Res. 96, 31-36] provides a good example where this was not done, as it was preoccupied only with avoiding false positives. It concluded, based on a desk study using secondary data assembled selectively from diverse sources and with a 95% level of confidence, that 'best management practices' (BMPs) on average produce 11% higher rice yields than SRI methods, and that, therefore, SRI has little to offer beyond what is already known by scientists.
Abstract:
High resolution descriptions of plant distribution have utility for many ecological applications but are especially useful for predictive modeling of gene flow from transgenic crops. Difficulty lies in the extrapolation errors that occur when limited ground survey data are scaled up to the landscape or national level. This problem is epitomized by the wide confidence limits generated in a previous attempt to describe the national abundance of riverside Brassica rapa (a wild relative of cultivated rapeseed) across the United Kingdom. Here, we assess the value of airborne remote sensing to locate B. rapa over large areas and so reduce the need for extrapolation. We describe results from flights over the river Nene in England acquired using Airborne Thematic Mapper (ATM) and Compact Airborne Spectrographic Imager (CASI) imagery, together with ground truth data. It proved possible to detect 97% of flowering B. rapa on the basis of spectral profiles. This included all stands of plants that occupied >2 m square (>5 plants), which were detected using single-pixel classification. It also included very small populations (<5 flowering plants, 1-2 m square) that generated mixed pixels, which were detected using spectral unmixing. The high detection accuracy for flowering B. rapa was coupled with a rather large false positive rate (43%). The latter could be reduced by using the image detections to target fieldwork to confirm species identity, or by acquiring additional remote sensing data such as laser altimetry or multitemporal imagery.
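The spectral unmixing step named here can be illustrated with a generic linear mixing model, in which a mixed pixel is decomposed into endmember fractions by least squares. The sketch below is not the authors' processing chain; the spectra and fractions are invented.

```python
# Generic linear spectral unmixing sketch (not the authors' workflow):
# recover endmember fractions of a mixed pixel by least squares.
# Reflectance values below are invented.
import numpy as np

# Rows: spectral bands; columns: endmember spectra
# (e.g. flowering B. rapa, soil, other vegetation).
endmembers = np.array([
    [0.60, 0.20, 0.30],
    [0.55, 0.25, 0.45],
    [0.30, 0.30, 0.50],
    [0.20, 0.35, 0.40],
])

# A mixed pixel built from known fractions, so the answer is recoverable.
mixed_pixel = 0.3 * endmembers[:, 0] + 0.6 * endmembers[:, 1] + 0.1 * endmembers[:, 2]

# Unconstrained least-squares abundances; operational workflows usually add
# sum-to-one and non-negativity constraints.
fractions, *_ = np.linalg.lstsq(endmembers, mixed_pixel, rcond=None)
print(fractions.round(2))  # approximately [0.3, 0.6, 0.1]
```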
Abstract:
Nested clade phylogeographic analysis (NCPA) is a popular method for reconstructing the demographic history of spatially distributed populations from genetic data. Although some parts of the analysis are automated, there is no unique and widely followed algorithm for doing this in its entirety, beginning with the data, and ending with the inferences drawn from the data. This article describes a method that automates NCPA, thereby providing a framework for replicating analyses in an objective way. To do so, a number of decisions need to be made so that the automated implementation is representative of previous analyses. We review how the NCPA procedure has evolved since its inception and conclude that there is scope for some variability in the manual application of NCPA. We apply the automated software to three published datasets previously analyzed manually and replicate many details of the manual analyses, suggesting that the current algorithm is representative of how a typical user will perform NCPA. We simulate a large number of replicate datasets for geographically distributed, but entirely random-mating, populations. These are then analyzed using the automated NCPA algorithm. Results indicate that NCPA tends to give a high frequency of false positives. In our simulations we observe that 14% of the clades give a conclusive inference that a demographic event has occurred, and that 75% of the datasets have at least one clade that gives such an inference. This is mainly due to the generation of multiple statistics per clade, of which only one is required to be significant to apply the inference key. We survey the inferences that have been made in recent publications and show that the most commonly inferred processes (restricted gene flow with isolation by distance and contiguous range expansion) are those that are commonly inferred in our simulations. However, published datasets typically yield a richer set of inferences with NCPA than obtained in our random-mating simulations, and further testing of NCPA with models of structured populations is necessary to examine its accuracy.
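The multiple-testing point in this abstract is easy to quantify: if several statistics are computed per clade and any one significant result at level alpha triggers an inference, the per-clade false-positive rate grows with the number of statistics. A minimal illustration, assuming independent tests (a simplification):

```python
# Minimal illustration of the multiple-testing point above: with several
# statistics per clade and any one significant result triggering an inference,
# the per-clade false-positive rate exceeds the nominal alpha. Assumes
# independent tests, which is a simplification.
def per_clade_false_positive_rate(n_statistics, alpha=0.05):
    return 1.0 - (1.0 - alpha) ** n_statistics

for k in (1, 3, 5, 10):
    print(k, round(per_clade_false_positive_rate(k), 3))
# 1 0.05, 3 0.143, 5 0.226, 10 0.401
```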
Abstract:
An important element of the developing field of proteomics is to understand protein-protein interactions and other functional links amongst genes. Across-species correlation methods for detecting functional links work on the premise that functionally linked proteins will tend to show a common pattern of presence and absence across a range of genomes. We describe a maximum likelihood statistical model for predicting functional gene linkages. The method detects independent instances of the correlated gain or loss of pairs of proteins on phylogenetic trees, reducing the high rates of false positives observed in conventional across-species methods that do not explicitly incorporate a phylogeny. We show, in a dataset of 10,551 protein pairs, that the phylogenetic method improves by up to 35% on across-species analyses at identifying known functionally linked proteins. The method shows that protein pairs with at least two to three correlated events of gain or loss are almost certainly functionally linked. Contingent evolution, in which one gene's presence or absence depends upon the presence of another, can also be detected phylogenetically, and may identify genes whose functional significance depends upon their interaction with other genes. Incorporating phylogenetic information improves the prediction of functional linkages. The improvement derives from having a lower rate of false positives and from detecting trends that across-species analyses miss. Phylogenetic methods can easily be incorporated into the screening of large-scale bioinformatics datasets to identify sets of protein links and to characterise gene networks.
Abstract:
Investigations of memory deficits in older individuals have concentrated on their increased likelihood of forgetting events or details of events that were actually encountered (errors of omission). However, mounting evidence demonstrates that normal cognitive aging is also associated with an increased propensity for errors of commission, shown in false alarms or false recognition. The present study examined the origins of this age difference. Older and younger adults each performed three types of memory tasks in which details of encountered items might influence performance. Although older adults showed greater false recognition of related lures on a standard (identical) old/new episodic recognition task, older and younger adults showed parallel effects of detail on repetition priming and meaning-based episodic recognition (decreased priming and decreased meaning-based recognition for different relative to same exemplars). The results suggest that the older adults encoded details but used them less effectively than the younger adults in the recognition context requiring their deliberate, controlled use.