929 results for "Incomplete diallel"
Abstract:
The optically stimulated luminescence (OSL) signal within quartz may be enhanced by thermal transfer during pre-heating. This may occur via a thermally induced charge transfer from low-temperature traps to the OSL traps. Thermal transfer may affect both natural and artificially irradiated samples. The effect, as empirically measured via recuperation tests, is typically observed to be negligible for old samples (<1% of natural signal). However, thermal transfer remains a major concern in the dating of young samples, as thermal decay and transfers of geologically unstable traps (typically in the TL range 160–280°C) may be incomplete. Upon pre-heating, such a sample might undergo thermal transfer to the dating trap, resulting in a De overestimate. As a result, there has been a tendency for workers to adopt less rigorous pre-heats for young samples. We have investigated the pre-heat dependence of 23 young quartz samples from various depositional environments using pre-heats between 170°C and 300°C, employing the single aliquot regeneration (SAR) protocol. SAR De's were also calculated for 25 additional young quartz samples of different depositional environments and compared with previous multiple aliquot additive dose (MAAD) data. Results demonstrate no significant De dependence upon pre-heat temperature. A close correspondence between MAAD data and the current SAR data for the samples tested is also illustrated.
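For readers unfamiliar with the equivalent-dose step mentioned above, the sketch below illustrates, under assumed numbers, how a SAR-style De can be interpolated from a fitted dose-response curve of sensitivity-corrected signals; the saturating-exponential form, the doses and the signal values are illustrative assumptions, not data from the study.

```python
# Illustrative sketch of a SAR-style equivalent-dose (De) interpolation.
# The dose-response form I(D) = I_max * (1 - exp(-D / D0)) and all numbers
# below are assumptions for demonstration, not data from the study.
import numpy as np
from scipy.optimize import curve_fit

def dose_response(dose, i_max, d0):
    """Single saturating-exponential growth curve."""
    return i_max * (1.0 - np.exp(-dose / d0))

# Hypothetical regenerative doses (Gy) and sensitivity-corrected signals (Lx/Tx)
reg_doses = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
reg_signals = np.array([0.02, 0.35, 0.63, 1.02, 1.45])
natural_signal = 0.55  # hypothetical sensitivity-corrected natural signal

# Fit the growth curve to the regenerated points
(i_max, d0), _ = curve_fit(dose_response, reg_doses, reg_signals, p0=[2.0, 5.0])

# Invert the fitted curve analytically to interpolate the natural signal
de = -d0 * np.log(1.0 - natural_signal / i_max)
print(f"interpolated De ~ {de:.2f} Gy")
```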
Abstract:
Aerosols affect the Earth's energy budget directly by scattering and absorbing radiation and indirectly by acting as cloud condensation nuclei and, thereby, affecting cloud properties. However, large uncertainties exist in current estimates of aerosol forcing because of incomplete knowledge concerning the distribution and the physical and chemical properties of aerosols as well as aerosol-cloud interactions. In recent years, a great deal of effort has gone into improving measurements and datasets. It is thus feasible to shift the estimates of aerosol forcing from largely model-based to increasingly measurement-based. Our goal is to assess current observational capabilities and identify uncertainties in the aerosol direct forcing through comparisons of different methods with independent sources of uncertainties. Here we assess the aerosol optical depth (τ), direct radiative effect (DRE) by natural and anthropogenic aerosols, and direct climate forcing (DCF) by anthropogenic aerosols, focusing on satellite and ground-based measurements supplemented by global chemical transport model (CTM) simulations. The multi-spectral MODIS measures global distributions of aerosol optical depth (τ) on a daily scale, with a high accuracy of ±0.03±0.05τ over ocean. The annual average τ is about 0.14 over the global ocean, of which about 21%±7% is contributed by human activities, as estimated by the MODIS fine-mode fraction. The multi-angle MISR derives an annual average AOD of 0.23 over global land with an uncertainty of ~20% or ±0.05. These high-accuracy aerosol products and broadband flux measurements from CERES make it feasible to obtain observational constraints for the aerosol direct effect, especially over the global ocean. A number of measurement-based approaches estimate the clear-sky DRE (on solar radiation) at the top-of-atmosphere (TOA) to be about -5.5±0.2 Wm-2 (median ± standard error from various methods) over the global ocean. Accounting for thin cirrus contamination of the satellite-derived aerosol field will reduce the TOA DRE to -5.0 Wm-2. Because of a lack of measurements of aerosol absorption and difficulty in characterizing land surface reflection, estimates of DRE over land and at the ocean surface are currently realized through a combination of satellite retrievals, surface measurements, and model simulations, and are less constrained. Over the oceans, the surface DRE is estimated to be -8.8±0.7 Wm-2. Over land, an integration of satellite retrievals and model simulations derives a DRE of -4.9±0.7 Wm-2 and -11.8±1.9 Wm-2 at the TOA and surface, respectively. CTM simulations derive a wide range of DRE estimates that on average are smaller than the measurement-based DRE by about 30-40%, even after accounting for thin cirrus and cloud contamination. A number of issues remain. Current estimates of the aerosol direct effect over land are poorly constrained. Uncertainties of DRE estimates are also larger on regional scales than on a global scale, and large discrepancies exist between different approaches. The characterization of aerosol absorption and vertical distribution remains challenging. The aerosol direct effect in the thermal infrared range and in cloudy conditions remains relatively unexplored and quite uncertain, because of a lack of global systematic aerosol vertical profile measurements. A coordinated research strategy needs to be developed for integration and assimilation of satellite measurements into models to constrain model simulations.
Enhanced measurement capabilities in the next few years and high-level scientific cooperation will further advance our knowledge.
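The headline figures quoted above can be combined with simple arithmetic; the sketch below reproduces only the calculations implied by the abstract (the anthropogenic share of ocean τ, the size of the thin-cirrus adjustment, and the implied model-average DRE), without introducing any data beyond the numbers given.

```python
# Back-of-the-envelope arithmetic using only the figures quoted in the abstract above.
tau_ocean = 0.14               # annual-mean aerosol optical depth over the global ocean
anthropogenic_fraction = 0.21  # MODIS fine-mode-based anthropogenic share (±7%)

tau_anthropogenic = tau_ocean * anthropogenic_fraction
print(f"anthropogenic ocean AOD ~ {tau_anthropogenic:.3f}")   # ~0.03

dre_toa = -5.5                 # clear-sky TOA direct radiative effect over ocean, Wm-2
dre_toa_cirrus = -5.0          # after the thin-cirrus correction, Wm-2
print(f"thin-cirrus adjustment ~ {dre_toa_cirrus - dre_toa:+.1f} Wm-2")  # +0.5 Wm-2

# CTMs are quoted as ~30-40% smaller on average than the measurement-based DRE
ctm_range = [dre_toa_cirrus * (1.0 - f) for f in (0.40, 0.30)]
print(f"implied CTM-average TOA DRE ~ {ctm_range[0]:.1f} to {ctm_range[1]:.1f} Wm-2")
```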
Abstract:
This study examines the evolution of prices in markets with Internet price-comparison search engines. The empirical study analyzes laboratory data on prices available to informed consumers, for two industry sizes and two conditions on the sample (complete and incomplete). Distributions are typically bimodal. One of the two modes, corresponding to monopoly pricing, attracts an increasing share of pricing strategies over time. The second, corresponding to interior pricing, follows a decreasing trend. Monopoly pricing can serve as a means of insurance against more competitive (but riskier) behavior. In fact, experimental subjects who initially earn low profits due to interior pricing are more likely to switch to monopoly pricing than subjects who experience good returns from the start.
Abstract:
The Lincoln–Petersen estimator is one of the most popular estimators used in capture–recapture studies. It was developed for a sampling situation in which two sources independently identify members of a target population. For each of the two sources, it is determined whether a unit of the target population is identified or not. This leads to a 2 × 2 table with frequencies f11, f10, f01, f00 indicating the number of units identified by both sources, by the first but not the second source, by the second but not the first source, and by neither source, respectively. However, f00 is unobserved, so that the 2 × 2 table is incomplete and the Lincoln–Petersen estimator provides an estimate for f00. In this paper, we consider a generalization of this situation in which one source provides not only a binary identification outcome but also a count of how many times a unit has been identified. Using a truncated Poisson count model, truncating identification counts larger than two, we propose a maximum likelihood estimator of the Poisson parameter and, ultimately, of the population size. This estimator shows benefits, in comparison with Lincoln–Petersen’s, in terms of bias and efficiency. It is also possible to test the homogeneity assumption, which is not testable in the Lincoln–Petersen framework. The approach is applied to surveillance data on syphilis from Izmir, Turkey.
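The estimator described above can be written down in a few lines. The sketch below shows the classical Lincoln–Petersen completion of the incomplete 2 × 2 table and, as a stand-in for the count-based generalization, a Zelterman-type zero-truncated Poisson estimator built from the one- and two-count frequencies; it does not reproduce the paper's exact maximum likelihood derivation, and all frequencies are invented.

```python
# Capture-recapture sketches: the classical Lincoln-Petersen completion of the
# incomplete 2x2 table, and a Zelterman-type zero-truncated Poisson estimator
# as a stand-in for count-based approaches (the paper's exact MLE is not
# reproduced here). All frequencies below are made up for illustration.
from math import exp

def lincoln_petersen(f11, f10, f01):
    """Estimate the unobserved cell f00 and the total population size."""
    f00_hat = f10 * f01 / f11           # classical estimate of the missing cell
    n_hat = f11 + f10 + f01 + f00_hat   # estimated population size
    return f00_hat, n_hat

def zelterman_truncated_poisson(f1, f2):
    """Population size from counts of units identified once (f1) and twice (f2)."""
    lam_hat = 2.0 * f2 / f1             # Poisson rate from the 1- and 2-count cells
    p0_hat = exp(-lam_hat)              # estimated probability of never being identified
    n_observed = f1 + f2                # counts above two are truncated
    return n_observed / (1.0 - p0_hat)  # Horvitz-Thompson-style size estimate

# Hypothetical frequencies
print(lincoln_petersen(f11=30, f10=70, f01=50))
print(zelterman_truncated_poisson(f1=120, f2=30))
```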
Abstract:
Insect pollination benefits over three quarters of the world's major crops. There is growing concern that observed declines in pollinators may impact on production and revenues from animal pollinated crops. Knowing the distribution of pollinators is therefore crucial for estimating their availability to pollinate crops; however, in general, we have an incomplete knowledge of where these pollinators occur. We propose a method to predict geographical patterns of pollination service to crops, novel in two elements: the use of pollinator records rather than expert knowledge to predict pollinator occurrence, and the inclusion of the managed pollinator supply. We integrated a maximum entropy species distribution model (SDM) with an existing pollination service model (PSM) to derive the availability of pollinators for crop pollination. We used nation-wide records of wild and managed pollinators (honey bees) as well as agricultural data from Great Britain. We first calibrated the SDM on a representative sample of bee and hoverfly crop pollinator species, evaluating the effects of different settings on model performance and on its capacity to identify the most important predictors. The importance of the different predictors was better resolved by SDM derived from simpler functions, with consistent results for bees and hoverflies. We then used the species distributions from the calibrated model to predict pollination service of wild and managed pollinators, using field beans as a test case. The PSM allowed us to spatially characterize the contribution of wild and managed pollinators and also identify areas potentially vulnerable to low pollination service provision, which can help direct local scale interventions. This approach can be extended to investigate geographical mismatches between crop pollination demand and the availability of pollinators, resulting from environmental change or policy scenarios.
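For illustration only, the sketch below shows one simple way an SDM-derived occurrence surface and a managed-hive layer could feed a distance-decay pollination service calculation; the grid, the exponential foraging kernel, the decay distance and the hive densities are invented, and the sketch does not reproduce the PSM actually used in the study.

```python
# Toy illustration of combining an SDM occurrence surface with a simple
# distance-decay pollination service model (PSM). The grid, foraging kernel,
# decay distance and managed-hive term are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
CELL_KM = 1.0   # grid resolution (km), assumed
DECAY_KM = 2.0  # typical foraging distance for the kernel, assumed

# Hypothetical 20x20 SDM occurrence probabilities for wild pollinators
wild_occurrence = rng.random((20, 20))
# Hypothetical managed honey-bee hive densities per cell
hive_density = np.zeros((20, 20))
hive_density[5, 5] = 10.0
hive_density[14, 8] = 4.0

def service_at(row, col, supply):
    """Distance-weighted pollinator supply reaching a focal crop cell."""
    rows, cols = np.indices(supply.shape)
    dist_km = np.hypot(rows - row, cols - col) * CELL_KM
    kernel = np.exp(-dist_km / DECAY_KM)
    return float((supply * kernel).sum() / kernel.sum())

crop_cell = (10, 10)
wild_service = service_at(*crop_cell, wild_occurrence)
managed_service = service_at(*crop_cell, hive_density)
print(f"wild service ~ {wild_service:.2f}, managed service ~ {managed_service:.2f}")
```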
Abstract:
Background. Current models of concomitant, intermittent strabismus, heterophoria, convergence and accommodation anomalies are either theoretically complex or incomplete. We propose an alternative and more practical way to conceptualize clinical patterns. Methods. In each of three hypothetical scenarios (normal; high AC/A and low CA/C ratios; low AC/A and high CA/C ratios) there can be a disparity-biased or blur-biased “style”, despite identical ratios. We calculated a disparity bias index (DBI) to reflect these biases. We suggest how clinical patterns fit these scenarios and provide early objective data from small illustrative clinical groups. Results. Normal adults and children showed disparity bias (adult DBI 0.43, 95% CI 0.50-0.36; child DBI 0.20, 95% CI 0.31-0.07; p=0.001). Accommodative esotropes showed less disparity bias (DBI 0.03). In the high AC/A and low CA/C scenario, early presbyopes had a mean DBI of 0.17 (95% CI 0.28-0.06), compared to a DBI of -0.31 in convergence excess esotropes. In the low AC/A and high CA/C scenario, near exotropes had a mean DBI of 0.27, while we predict that non-strabismic, non-amblyopic hyperopes with good vision without spectacles will show lower DBIs. Disparity bias ranged between 1.25 and -1.67. Conclusions. Establishing disparity or blur bias, together with knowing whether convergence to target demand exceeds accommodation or vice versa, explains clinical patterns more effectively than AC/A and CA/C ratios alone. Excessive bias or inflexibility in near-cue use increases the risk of clinical problems. We suggest clinicians look carefully at the details of accommodation and convergence changes induced by lenses, dissociation and prisms, and use these to plan treatment in relation to the model.
Abstract:
Early modern play-texts present numerous puzzles for scholars interested in ascertaining how plays were (or may have been) staged. The principal evidence, of course, for a notional "reconstruction" of practices is the apparatus of stage directions, augmented by indications in the dialogue. In conjunction, a joining-of-the-dots is often possible, at least in broad-brush terms. But as is well known, the problem is that stage directions tend to be incomplete, imprecise, inaccurate or missing altogether; more significantly, even when present they offer only slight and indirect evidence of actual stagecraft. Some stage directions are rather more "literary" than "theatrical" in provenance, and in any case, to the extent that they do serve the reader (early modern or modern), they cannot be regarded as providing a record of stage practice. After all, words can be no more than imperfect substitutes for (and of another order from) the things they represent. For the most part, directions serve as a guide that provides the basis for reasonable interpretation informed by our knowledge of theatre architecture, technology, and comparable play-situations, rather than concrete evidence of actual practice. Quite how some stage business was carried out remains uncertain, leaving the scholar little option but to hypothesize solutions. One such conundrum arises in Christopher Marlowe's The Jew of Malta. The scenario in question is hardly an obscure one, but it has not been examined in detail, even by modern editors. The purpose of this essay is to explore what sense might be made of the surviving textual evidence, in combination with our knowledge of theatre architecture and playmaking culture in the late sixteenth century.
Abstract:
Skillful and timely streamflow forecasts are critically important to water managers and emergency protection services. To provide these forecasts, hydrologists must predict the behavior of complex coupled human–natural systems using incomplete and uncertain information and imperfect models. Moreover, operational predictions often integrate anecdotal information and unmodeled factors. Forecasting agencies face four key challenges: 1) making the most of available data, 2) making accurate predictions using models, 3) turning hydrometeorological forecasts into effective warnings, and 4) administering an operational service. Each challenge presents a variety of research opportunities, including the development of automated quality-control algorithms for the myriad of data used in operational streamflow forecasts, data assimilation, and ensemble forecasting techniques that allow for forecaster input, methods for using human-generated weather forecasts quantitatively, and quantification of human interference in the hydrologic cycle. Furthermore, much can be done to improve the communication of probabilistic forecasts and to design a forecasting paradigm that effectively combines increasingly sophisticated forecasting technology with subjective forecaster expertise. These areas are described in detail to share a real-world perspective and focus for ongoing research endeavors.
Abstract:
This article examines a 14th-c. translation into Old Occitan prose of a late-antique life of Alexander the Great: Justin’s Epitome of the 'Historia Philippicae' of Pompeius Trogus. The article argues that it is the work of translators whose knowledge of pagan Latin materials was incomplete and whose use of their native tongue rested on non-literary bases. This text has not been edited before, and examining its uneven treatment of its source provides important new insights into the work of translators in the later Middle Ages. In conclusion, the article suggests some new approaches to the understanding of translation as a process of reconstruction and adaptation.
Abstract:
Incomplete understanding of three aspects of the climate system—equilibrium climate sensitivity, rate of ocean heat uptake and historical aerosol forcing—and the physical processes underlying them lead to uncertainties in our assessment of the global-mean temperature evolution in the twenty-first century1,2. Explorations of these uncertainties have so far relied on scaling approaches3,4, large ensembles of simplified climate models1,2, or small ensembles of complex coupled atmosphere–ocean general circulation models5,6 which under-represent uncertainties in key climate system properties derived from independent sources7–9. Here we present results from a multi-thousand-member perturbed-physics ensemble of transient coupled atmosphere–ocean general circulation model simulations. We find that model versions that reproduce observed surface temperature changes over the past 50 years show global-mean temperature increases of 1.4–3 K by 2050, relative to 1961–1990, under a mid-range forcing scenario. This range of warming is broadly consistent with the expert assessment provided by the Intergovernmental Panel on Climate Change Fourth Assessment Report10, but extends towards larger warming than observed in ensembles-of-opportunity5 typically used for climate impact assessments. From our simulations, we conclude that warming by the middle of the twenty-first century that is stronger than earlier estimates is consistent with recent observed temperature changes and a mid-range ‘no mitigation’ scenario for greenhouse-gas emissions.
Abstract:
One and One and One is a collaborative project organized by Tim Renshaw with Outside Architecture. The work in the exhibition explored the incomplete, addressing it as an active process that opens up architectural and spatial structure to new forms of experience.
Abstract:
Sexuality and Law scholarship is a new and developing field but, like most legal scholarship, it is dominated by masculine concerns and methodologies. This article explains why research that ignores feminist concerns and methodologies will be incomplete and inaccurate, and suggests questions that should be asked of resources to ensure a complete and accurate coverage of the topic.
Abstract:
Massive economic and population growth, and urbanization are expected to lead to a tripling of anthropogenic emissions in southern West Africa (SWA) between 2000 and 2030. However, the impacts of this on human health, ecosystems, food security, and the regional climate are largely unknown. An integrated assessment is challenging due to (a) a superposition of regional effects with global climate change, (b) a strong dependence on the variable West African monsoon, (c) incomplete scientific understanding of interactions between emissions, clouds, radiation, precipitation, and regional circulations, and (d) a lack of observations. This article provides an overview of the DACCIWA (Dynamics-Aerosol-Chemistry-Cloud Interactions in West Africa) project. DACCIWA will conduct extensive fieldwork in SWA to collect high-quality observations, spanning the entire process chain from surface-based natural and anthropogenic emissions to impacts on health, ecosystems, and climate. Combining the resulting benchmark dataset with a wide range of modeling activities will allow (a) assessment of relevant physical, chemical, and biological processes, (b) improvement of the monitoring of climate and atmospheric composition from space, and (c) development of the next generation of weather and climate models capable of representing coupled cloud-aerosol interactions. The latter will ultimately contribute to reducing uncertainties in climate predictions. DACCIWA collaborates closely with operational centers, international programs, policy-makers, and users to actively guide sustainable future planning for West Africa. It is hoped that some of DACCIWA’s scientific findings and technical developments will be applicable to other monsoon regions.
Abstract:
Introduction: Resistance to anticoagulants in Norway rats (Rattus norvegicus) and house mice (Mus domesticus) has been studied in the UK since the early 1960s. In no other country in the world is our understanding of resistance phenomena so extensive and profound. Almost every aspect of resistance in the key rodent target species has been examined in laboratory and field trials and results obtained by independent researchers have been published. It is the principal purpose of this document to present a short synopsis of this information. More recently, however, the development of genetical techniques has provided a definitive means of detection of resistant genotypes among pest rodent populations. Preliminary information from a number of such surveys will also be presented. Resistance in Norway rats: A total of nine different anticoagulant resistance mutations (single nucleotide polymorphisms or SNPs) are found among Norway rats in the UK. In no other country worldwide are so many different forms of Norway rat resistance present. Among these nine SNPs, five are known to confer on rats that carry them a significant degree of resistance to anticoagulant rodenticides. These mutations are: L128Q, Y139S, L120Q, Y139C and Y139F. The latter three mutations confer, to varying degrees, practical resistance to bromadiolone and difenacoum, the two second-generation anticoagulants in predominant use in the UK. It is the recommendation of RRAG that bromadiolone and difenacoum should not be used against rats carrying the L120Q, Y139C and Y139F mutations because this will promote the spread of resistance and jeopardise the long-term efficacy of anticoagulants. Brodifacoum, flocoumafen and difethialone are effective against these three genotypes but cannot presently be used because of the regulatory restriction that they can only be applied against rats that are living and feeding predominantly indoors. Our understanding of the geographical distribution of Norway rat resistance is incomplete but is rapidly increasing. In particular, the mapping of the focus of L120Q Norway rat resistance in central-southern England by DNA sequencing is well advanced. We now know that rats carrying this resistance mutation are present across a large part of the counties of Hampshire, Berkshire and Wiltshire, and the resistance spreads into Avon, Oxfordshire and Surrey. It is also found, perhaps as outlier foci, in south-west Scotland and East Sussex. L120Q is currently the most severe form of anticoagulant resistance found in Norway rats and is prevalent over a considerable part of central-southern England. A second form of advanced Norway rat resistance is conferred by the Y139C mutation. This is noteworthy because it occurs in at least four different foci that are widely geographically dispersed, namely in Dumfries and Galloway, Gloucestershire, Yorkshire and Norfolk. Once again, bromadiolone and difenacoum are not recommended for use against rats carrying this genotype and a concern of RRAG is that continued applications of resisted active substances may result in Y139C becoming more or less ubiquitous across much of the UK. Another type of advanced resistance, the Y139F mutation, is present in Kent and Sussex. This means that Norway rats, carrying some degree of resistance to bromadiolone and difenacoum, are now found from the south coast of Kent, west into the city of Bristol, to Yorkshire in the north-east and to the south-west of Scotland.
This difficult situation can only deteriorate further where these three genotypes exist and resisted anticoagulants are predominantly used against them. Resistance in house mice: The situation in house mice is not so well understood, but the presence in the UK of two resistant genotypes, L128S and Y139C, is confirmed. House mice are naturally tolerant to anticoagulants and such is the nature of this tolerance, and the presence of genetical resistance, that house mice resistant to the first-generation anticoagulants are considered to be widespread in the UK. Consequently, baits containing warfarin, sodium warfarin, chlorophacinone and coumatetralyl are not approved for use against mice. This regulatory position is endorsed by RRAG. Baits containing brodifacoum, flocoumafen and difethialone are effective against house mice and may be applied in practice because house mouse infestations are predominantly indoors. There are some reports of resistance among mice in some areas to the second-generation anticoagulant bromadiolone, while difenacoum remains largely efficacious. Alternatives to anticoagulants: The use of habitat manipulation, that is the removal of harbourage, denial of the availability of food and the prevention of ingress to structures, is an essential component of sustainable rodent pest management. All are of importance in the management of resistant rodents and have the advantage of not selecting for resistant genotypes. The use of these techniques may be particularly valuable in preventing the build-up of rat infestations. However, none can be used to remove any sizeable extant rat infestation and for practical reasons their use against house mice is problematic. Few alternative chemical interventions are available in the European Union because of the removal from the market of zinc phosphide, calciferol and bromethalin. Our virtually complete reliance on the use of anticoagulants for the chemical control of rodents in the UK, and more widely in the EU, calls for improved schemes for resistance management. Of course, these might involve the use of alternatives to anticoagulant rodenticides. Also important is an increasing knowledge of the distribution of resistance mutations in rats and mice and the use of only fully effective anticoagulants against them.
Abstract:
Key Performance Indicators (KPIs) are the main instruments of Business Performance Management. KPIs are the measures that are translated to both the strategy and the business process. These measures are often designed for an industry sector with assumptions about business processes in organizations. However, the assumptions can be too incomplete to guarantee the required properties of KPIs. This raises the need to validate the properties of KPIs prior to their application to performance measurement. This paper applies the method called EXecutable Requirements Engineering Management and Evolution (EXTREME) for validation of the KPI definitions. EXTREME semantically relates the goal modeling, conceptual modeling and protocol modeling techniques into one methodology. The synchronous composition built into protocol modeling enables traceability of goals in protocol models and constructive definitions of a KPI. The application of the method clarifies the meaning of KPI properties and procedures for their assessment and validation.
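As a generic illustration of the kind of check such validation makes possible (not the EXTREME or protocol-modeling machinery itself), the sketch below accepts a KPI definition only if every attribute it references is produced by some step of a modeled business process; the process steps, KPI names and attributes are invented.

```python
# Generic illustration of validating KPI definitions against a process model:
# a KPI is accepted only if every attribute it references is produced by some
# step of the modeled process. This is NOT the EXTREME/protocol-modeling
# method from the paper; process steps, KPIs, and attributes are invented.
from dataclasses import dataclass

@dataclass
class ProcessStep:
    name: str
    produces: set[str]          # attributes this step makes available

@dataclass
class KPI:
    name: str
    requires: set[str]          # attributes the KPI formula references

process_model = [
    ProcessStep("receive_order", {"order_id", "order_timestamp"}),
    ProcessStep("ship_order", {"ship_timestamp"}),
    ProcessStep("invoice", {"invoice_amount"}),
]

kpis = [
    KPI("avg_fulfilment_time", {"order_timestamp", "ship_timestamp"}),
    KPI("return_rate", {"order_id", "return_flag"}),  # 'return_flag' is never produced
]

available = set().union(*(step.produces for step in process_model))
for kpi in kpis:
    missing = kpi.requires - available
    status = "valid" if not missing else f"invalid (missing: {sorted(missing)})"
    print(f"{kpi.name}: {status}")
```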