899 results for Detector alignment and calibration methods (lasers, sources, particle-beams)
Abstract:
For two multinormal populations with equal covariance matrices, the likelihood ratio discriminant function, an alternative allocation rule to the sample linear discriminant function when n1 ≠ n2, is studied analytically. Under the assumption of a known covariance matrix, its distribution is derived, and the expectations of its actual and apparent error rates are evaluated and compared with those of the sample linear discriminant function. This comparison indicates that the likelihood ratio allocation rule is robust to unequal sample sizes. The quadratic discriminant function is studied, its distribution reviewed, and the evaluation of its probabilities of misclassification discussed. For known covariance matrices, the distribution of the sample quadratic discriminant function is derived. When the known covariance matrices are proportional, exact expressions for the expectations of its actual and apparent error rates are obtained and evaluated. The effectiveness of the sample linear discriminant function for this case is also considered. Estimation of the true log-odds for two multinormal populations with equal or unequal covariance matrices is studied. The estimative, Bayesian predictive, and kernel methods are compared by evaluating their biases and mean square errors, and algebraic expressions for these quantities are derived. With equal covariance matrices the predictive method is preferable. The source of this superiority is investigated by considering its performance at various levels of fixed true log-odds. It is also shown that the predictive method is sensitive to n1 ≠ n2. For unequal but proportional covariance matrices the unbiased estimative method is preferred. Product normal kernel density estimates are used to give a kernel estimator of the true log-odds. The effect of correlation between the variables with product kernels is considered. With equal covariance matrices the kernel and parametric estimators are compared by simulation. For moderately correlated variables and large dimensions, the product kernel method is a good estimator of the true log-odds.
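To make the comparison above concrete, here is a minimal sketch of the two competing log-odds estimators for the equal-covariance case: the plug-in (estimative) linear discriminant with a pooled covariance estimate, and a product normal kernel estimator. The bandwidth `h`, sample sizes, and toy data are illustrative assumptions, not choices made in the work summarised above.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimative_log_odds(x, X1, X2):
    """Plug-in (estimative) log-odds for two multinormal populations,
    using sample means and a pooled covariance estimate."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    n1, n2 = len(X1), len(X2)
    # pooled (equal-covariance) estimate
    S = ((X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)) / (n1 + n2 - 2)
    Sinv = np.linalg.inv(S)
    # linear discriminant: (m1-m2)' S^{-1} x - 0.5 (m1-m2)' S^{-1} (m1+m2)
    return (m1 - m2) @ Sinv @ x - 0.5 * (m1 - m2) @ Sinv @ (m1 + m2)

def kernel_log_odds(x, X1, X2, h=0.5):
    """Log-odds from product normal kernel density estimates."""
    def pkde(x, X):
        # product of univariate normal kernels, averaged over the sample;
        # the (2*pi)^(-d/2) constant is omitted because it cancels in the ratio
        z = (x - X) / h
        return np.mean(np.exp(-0.5 * (z ** 2).sum(axis=1))) / h ** X.shape[1]
    return np.log(pkde(x, X1) / pkde(x, X2))

# toy data: equal (identity) covariance, different means
X1 = rng.normal(loc=0.0, size=(50, 3))
X2 = rng.normal(loc=1.0, size=(50, 3))
x = np.zeros(3)
print(estimative_log_odds(x, X1, X2), kernel_log_odds(x, X1, X2))
```

At a point near the first population's mean both estimators should return a positive log-odds; how their biases and mean square errors compare as correlation and dimension grow is exactly the question the simulations above address.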
Abstract:
The primary aim of this thesis is to analyse legal and governance issues in the use of environmental NPR-PPMs, particularly those aiming to promote sustainable practices or to protect natural resources. NPR-PPMs have traditionally been thought of as incompatible with the rules of the World Trade Organization (WTO). However, the issue remains untouched by WTO adjudicatory bodies. One could suggest that WTO adjudicatory bodies may want to leave this issue to the Members, but the analysis of the case law also seems to indicate that the question of the legality of NPR-PPMs has not been brought 'as such' in dispute settlement. This thesis advances the argument that, although the legal status of NPR-PPMs remains unsettled, over the last decades adjudicatory bodies have been scrutinising environmental measures based on NPR-PPMs simply as another expression of the regulatory autonomy of the Members. Though NPR-PPMs are regulatory choices associated with a wide range of environmental concerns, trade disputes raising questions about the legality of process-based measures have mainly concerned the protection of marine wildlife (i.e., fishing techniques threatening or affecting animal species). This thesis argues that environmental objectives articulated as NPR-PPMs can indeed qualify as legitimate objectives under both the GATT and the TBT Agreement. However, an important challenge to their compatibility with WTO law relates to arbitrary or unjustifiable discrimination, in the assessment of which procedural issues play an important role. This thesis also elucidates other important dimensions of the issue from the perspective of global governance. One of the arguments advanced in this thesis is that a comprehensive analysis of environmental NPR-PPMs should consider not only their role in what are regarded as trade barriers (governmental and market-driven), but also their significance for global objectives such as the transition towards a green economy and sustainable patterns of consumption and production.
Abstract:
Aquifer denitrification is among the most poorly constrained fluxes in global and regional nitrogen budgets. The few direct measurements of denitrification in groundwaters provide limited information about its spatial and temporal variability, particularly at the scale of whole aquifers. Uncertainty in estimates of denitrification may also lead to underestimates of its effect on the isotopic signatures of inorganic N, and thereby confound the inference of N source from these data. In this study, our objectives are to quantify the magnitude and variability of denitrification in the Upper Floridan Aquifer (UFA) and evaluate its effect on N isotopic signatures at the regional scale. Using dual noble gas tracers (Ne, Ar) to generate physical predictions of N2 gas concentrations for 112 observations from 61 UFA springs, we show that excess (i.e. denitrification-derived) N2 is highly variable in space and inversely correlated with dissolved oxygen (O2). Negative relationships between O2 and δ15N-NO3 across a larger dataset of 113 springs, well-constrained isotopic fractionation coefficients, and strong 15N:18O covariation further support inferences of denitrification in this uniquely organic-matter-poor system. Despite relatively low average rates, denitrification accounted for 32% of estimated aquifer N inputs across all sampled UFA springs. Back-calculations of source δ15N-NO3 based on denitrification progression suggest that isotopically enriched nitrate (NO3-) in many springs of the UFA reflects groundwater denitrification rather than urban- or animal-derived inputs.
Abstract:
A survey of teaching and assessment methods employed in UK Higher Education programmes for Human-Computer Interaction (HCI) courses was conducted in April 2003. The findings from this survey are presented, and conclusions drawn.
Abstract:
The growth of computer power allows the solution of complex problems related to compressible flow, an important class of problems in modern-day CFD. Over the last 15 years or so, many review works on CFD have been published. This book concerns both mathematical and numerical methods for compressible flow. In particular, it provides a clear-cut introduction as well as an in-depth treatment of modern numerical methods in CFD. The book is organised in two parts. The first part consists of Chapters 1 and 2 and is mainly devoted to theoretical discussions and results. Chapter 1 concerns fundamental physical concepts and theoretical results in gas dynamics. Chapter 2 describes the basic mathematical theory of compressible flow using the inviscid Euler equations and the viscous Navier–Stokes equations; existence and uniqueness results are also included. The second part covers modern numerical methods for the Euler and Navier–Stokes equations. Chapter 3 is devoted entirely to the finite volume method for the numerical solution of the Euler equations and covers fundamental concepts such as the order of numerical schemes, stability, and high-order schemes. The finite volume method is illustrated for the 1-D as well as multidimensional Euler equations. Chapter 4 covers the theory of the finite element method and its application to compressible flow. A section is devoted to the combined finite volume–finite element method, and its background theory is also included. Throughout the book numerous examples are included to demonstrate the numerical methods. The book provides good insight into the numerical schemes, theoretical analysis, and validation on test problems. It is a very useful reference for applied mathematicians, numerical analysts, and practising engineers, as well as an important reference for postgraduate researchers in the field of scientific computing and CFD.
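As a flavour of the finite volume concepts treated in Chapter 3 (flux form, a stability-limited time step, first-order accuracy), the following sketch applies a Lax–Friedrichs finite volume scheme to the 1-D inviscid Burgers equation on a periodic domain. The scheme, grid, and initial data are illustrative assumptions and are not taken from the book.

```python
import numpy as np

def lax_friedrichs_burgers(u0, dx, t_end, cfl=0.9):
    """First-order finite volume (Lax-Friedrichs) scheme for the
    inviscid Burgers equation u_t + (u^2/2)_x = 0, periodic domain."""
    u = u0.copy()
    t = 0.0
    flux = lambda v: 0.5 * v ** 2
    while t < t_end:
        # CFL-limited time step: the wave speed for Burgers is |u|
        dt = cfl * dx / max(np.abs(u).max(), 1e-12)
        dt = min(dt, t_end - t)
        up = np.roll(u, -1)   # u_{i+1}
        um = np.roll(u, 1)    # u_{i-1}
        # Lax-Friedrichs numerical fluxes at the right and left cell faces
        f_right = 0.5 * (flux(u) + flux(up)) - 0.5 * dx / dt * (up - u)
        f_left = 0.5 * (flux(um) + flux(u)) - 0.5 * dx / dt * (u - um)
        # conservative update in flux form
        u = u - dt / dx * (f_right - f_left)
        t += dt
    return u

N = 200
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = 1.0 / N
u0 = np.sin(2 * np.pi * x)
u = lax_friedrichs_burgers(u0, dx, 0.1)
```

Because the update is in flux form and the face fluxes telescope over the periodic domain, the cell average of `u` is conserved to round-off, which is one simple sanity check on any finite volume implementation.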
Abstract:
The definitive paper by Stuiver and Polach (1977) established the conventions for reporting of 14C data for chronological and geophysical studies based on the radioactive decay of 14C in the sample since the year of sample death or formation. Several ways of reporting 14C activity levels relative to a standard were also established, but no specific instructions were given for reporting nuclear weapons testing (post-bomb) 14C levels in samples. Because the use of post-bomb 14C is becoming more prevalent in forensics, biology, and geosciences, a convention needs to be adopted. We advocate the use of fraction modern with a new symbol F14C to prevent confusion with the previously used Fm, which may or may not have been fractionation corrected. We also discuss the calibration of post-bomb 14C samples and the available datasets and compilations, but do not give a recommendation for a particular dataset.
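For orientation, the relationship between fraction modern and the conventional radiocarbon age of Stuiver and Polach (1977) can be written in a few lines; the sample F14C values below are hypothetical, and only the standard Libby mean-life convention is assumed. A measured F14C above unity immediately flags a post-bomb sample.

```python
import math

# Libby mean life fixed by the Stuiver & Polach (1977) convention
LIBBY_MEAN_LIFE = 8033.0  # years

def conventional_age(f14c):
    """Conventional radiocarbon age (yr BP) from fraction modern F14C."""
    return -LIBBY_MEAN_LIFE * math.log(f14c)

print(round(conventional_age(0.5)))   # one Libby half-life, ~5568 yr BP
print(round(conventional_age(1.15)))  # post-bomb sample: negative "age"
```

A negative conventional age is physically meaningless as an age, which is why post-bomb results are better reported directly as F14C and calibrated against a post-bomb atmospheric dataset, as the abstract recommends.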
Abstract:
Cyclododecane (CDD) is a waxy, solid cyclic hydrocarbon (C12H24) that sublimes at room temperature and possesses strong hydrophobicity. In paper conservation CDD is used principally as a temporary fixative of water-soluble media during aqueous treatments. Hydrophobicity, ease of reversibility, low toxicity, and absence of residues are reasons often cited for its use over alternative materials although the latter two claims continue to be debated in the literature. The sublimation rate has important implications for treatment planning as well as health and safety considerations given the dearth of reliable information on its toxicity and exposure limits. This study examined how the rate of sublimation is affected by fiber type, sizing, and surface finish as well as delivery in the molten phase and as a saturated solution in low boiling petroleum ether. The effect of warming the paper prior to application was also evaluated. Sublimation was monitored using gravimetric analysis after which samples were tested for residues with gas chromatography-flame ionization detection (GC-FID) to confirm complete sublimation. Water absorbency tests were conducted to determine whether this property is fully reestablished. Results suggested that the sublimation rate of CDD is affected minimally by all of the paper characteristics and application methods examined in this study. The main factors influencing the rate appear to be the thickness and mass of the CDD over a given surface area as well as temperature and ventilation. The GC-FID results showed that most of the CDD sublimed within several days of its disappearance from the paper surface regardless of the application method. Minimal changes occurred in the water absorbency of the samples following complete sublimation.
Abstract:
The turn within urban policy to address increasingly complex social, economic and environmental problems has exposed some of the fragility of traditional measurement models and their reliance on the rational paradigm. This article looks at the experiences of the European Union (EU) Programme for Peace and Reconciliation in Northern Ireland and its particular attempt to construct new District Partnerships to deliver area-based regeneration programmes. It highlights the need to combine instrumental and interpretative evaluation methods in an attempt to explain the wider contribution of governance to conflict resolution and participatory practice in local development. It concludes by highlighting the value of conceptual approaches that deal with the politics of evaluation and the distributional effects of policy interventions designed to create new relationships within and between multiple stakeholders.