955 results for theorem
Abstract:
The level of information provided by ink evidence to the criminal and civil justice system is limited. The limitations arise from the weakness of the interpretative framework currently used, as proposed in ASTM standards 1422-05 and 1789-04 on ink analysis. It is proposed to use the likelihood ratio from Bayes' theorem to interpret ink evidence. Unfortunately, when considering the analytical practices defined in the ASTM standards on ink analysis, it appears that current ink analytical practices do not allow for the level of reproducibility and accuracy required by a probabilistic framework. Such a framework relies on the evaluation of the statistics of ink characteristics using an ink reference database and on the objective measurement of similarities between ink samples. A complete research programme was designed to (a) develop a standard methodology for analysing ink samples in a more reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in a forensic context. This report focuses on the first of these three stages. A calibration process, based on a standard dye ladder, is proposed to improve the reproducibility of ink analysis by HPTLC when inks are analysed at different times and/or by different examiners. The impact of this process on the variability between repetitive analyses of ink samples under various conditions is studied. The results show significant improvements in the reproducibility of ink analysis compared to traditional calibration methods.
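For context, the likelihood-ratio framework referred to above is usually stated as follows (a standard formulation, not taken from this report), where E denotes the ink evidence and Hp, Hd the two competing propositions (e.g. prosecution and defence):

```latex
\mathrm{LR} = \frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)},
\qquad
\underbrace{\frac{\Pr(H_p \mid E)}{\Pr(H_d \mid E)}}_{\text{posterior odds}}
= \mathrm{LR}\times
\underbrace{\frac{\Pr(H_p)}{\Pr(H_d)}}_{\text{prior odds}}
```

Evaluating the denominator is what requires the ink reference database and reproducible, objective measurements discussed above.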
Abstract:
Many people regard the concept of hypothesis testing as fundamental to inferential statistics. Various schools of thought, in particular frequentist and Bayesian, have promoted radically different solutions for taking a decision about the plausibility of competing hypotheses. Comprehensive philosophical comparisons of their advantages and drawbacks are widely available and continue to fuel extensive debate in the literature. More recently, a controversial discussion was initiated by the editorial decision of a scientific journal [1] to refuse any paper submitted for publication that contains null hypothesis testing procedures. Since the large majority of papers published in forensic journals propose the evaluation of statistical evidence based on so-called p-values, it is of interest to bring the discussion of this journal's decision to the forensic science community. This paper aims to provide forensic science researchers with a primer on the main concepts and their implications for making informed methodological choices.
Abstract:
One of the global targets for non-communicable diseases is to halt, by 2025, the rise in the age-standardised adult prevalence of diabetes at its 2010 levels. We aimed to estimate worldwide trends in diabetes, how likely it is for countries to achieve the global target, and how changes in prevalence, together with population growth and ageing, are affecting the number of adults with diabetes. We pooled data from population-based studies that had collected data on diabetes through measurement of its biomarkers. We used a Bayesian hierarchical model to estimate trends in diabetes prevalence (defined as fasting plasma glucose of 7.0 mmol/L or higher, a history of diagnosis with diabetes, or use of insulin or oral hypoglycaemic drugs) in 200 countries and territories in 21 regions, by sex, from 1980 to 2014. We also calculated the posterior probability of meeting the global diabetes target if post-2000 trends continue. We used data from 751 studies including 4,372,000 adults from 146 of the 200 countries for which we make estimates. Global age-standardised diabetes prevalence increased from 4.3% (95% credible interval 2.4-7.0) in 1980 to 9.0% (7.2-11.1) in 2014 in men, and from 5.0% (2.9-7.9) to 7.9% (6.4-9.7) in women. The number of adults with diabetes in the world increased from 108 million in 1980 to 422 million in 2014 (28.5% due to the rise in prevalence, 39.7% due to population growth and ageing, and 31.8% due to the interaction of these two factors). Age-standardised adult diabetes prevalence in 2014 was lowest in northwestern Europe, and highest in Polynesia and Micronesia, at nearly 25%, followed by Melanesia and the Middle East and north Africa. Between 1980 and 2014 there was little change in age-standardised diabetes prevalence in adult women in continental western Europe, although crude prevalence rose because of ageing of the population. By contrast, age-standardised adult prevalence rose by 15 percentage points in men and women in Polynesia and Micronesia. In 2014, American Samoa had the highest national prevalence of diabetes (>30% in both sexes), with age-standardised adult prevalence also higher than 25% in some other islands in Polynesia and Micronesia. If post-2000 trends continue, the probability of meeting the global target of halting the rise in the prevalence of diabetes by 2025 at the 2010 level worldwide is lower than 1% for men and is 1% for women. Only nine countries for men and 29 countries for women, mostly in western Europe, have a 50% or higher probability of meeting the global target. Since 1980, age-standardised diabetes prevalence in adults has increased, or at best remained unchanged, in every country. Together with population growth and ageing, this rise has led to a near quadrupling of the number of adults with diabetes worldwide. The burden of diabetes, both in terms of prevalence and number of adults affected, has increased faster in low-income and middle-income countries than in high-income countries. Funding: Wellcome Trust.
Abstract:
Confocal and two-photon microscopy have become essential tools in biological research, and today many investigations are not possible without their help. The key advantage these two techniques offer is optical sectioning. Optical sectioning makes it possible to obtain 3D visualization of the structures and hence valuable information about the structural relationships and the geometrical and morphological aspects of the specimen. The lateral and axial resolutions achievable by confocal and two-photon microscopy are, as in other optical imaging systems, defined by the diffraction theorem. Any aberration or imperfection present during imaging broadens the calculated theoretical resolution, blurs and geometrically distorts the acquired images in ways that interfere with the analysis of the structures, and lowers the fluorescence collected from the specimen. Aberrations have different causes and can be classified by their sources: specimen-induced, optics-induced, illumination, and misalignment aberrations. This thesis presents an investigation of image enhancement, approached in two directions. Initially, we investigated the sources of the imperfections. We propose methods to eliminate or minimize aberrations introduced during image acquisition by optimizing the acquisition conditions. We showed the impact on resolution of using a coverslip whose thickness is mismatched with the one the objective lens is designed for, and introduced a novel technique to set the proper value on the correction collar of the lens. The amount of spherical aberration with regard to the numerical aperture of the objective lens was investigated, and it was shown that, depending on the purpose of the imaging task, different numerical apertures must be used. The deformed beam cross section of the single-photon excitation source was corrected, and the enhancement of the resolution and image quality was shown. Furthermore, the dependency of the scattered light on the excitation wavelength was shown empirically. In the second part, we continued the study of the image enhancement process using deconvolution techniques. Although deconvolution algorithms are widely used to improve image quality, how well a deconvolution algorithm performs depends strongly on the point spread function (PSF) of the imaging system supplied to the algorithm and on its accuracy. We investigated approaches for obtaining a more precise PSF. Novel methods to improve the pattern of the PSF and reduce the noise are proposed. Furthermore, multiple sources for extracting the PSFs of the imaging system are introduced, and the empirical deconvolution results obtained with each of these PSFs are compared. The results confirm that a greater improvement is attained by applying the in situ PSF during the deconvolution process.
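To illustrate the dependence on the PSF mentioned above, the sketch below shows a minimal Richardson-Lucy deconvolution (a common choice, not necessarily the algorithm used in the thesis); the 2-D image, the assumed PSF and the iteration count are placeholders.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    """Minimal Richardson-Lucy deconvolution: iteratively refine an
    estimate of the object given the blurred image and an assumed PSF."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full(image.shape, image.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + 1e-12)          # avoid division by zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# The quality of the restored image depends directly on how closely `psf`
# matches the true PSF of the microscope (e.g. a measured in situ PSF
# versus a purely theoretical one), which is the point made above.
```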
Abstract:
The frequency dependence of the electron-spin fluctuation spectrum, P(Q), is calculated in the finite bandwidth model. We find that for Pd, which has a nearly full d-band, the magnitude, the range, and the peak frequency of P(Q) are greatly reduced from those in the standard spin fluctuation theory. The electron self-energy due to spin fluctuations is calculated within the finite bandwidth model. Vertex corrections are examined, and we find that Migdal's theorem is valid for spin fluctuations in the nearly full band. The conductance of a normal metal-insulator-normal metal tunnel junction is examined when spin fluctuations are present in one electrode. We find that for the nearly full band, the momentum independent self-energy due to spin fluctuations enters the expression for the tunneling conductance with approximately the same weight as the self-energy due to phonons. The effect of spin fluctuations on the tunneling conductance is slight within the finite bandwidth model for Pd. The effect of spin fluctuations on the tunneling conductance of a metal with a less full d-band than Pd may be more pronounced. However, in this case the tunneling conductance is not simply proportional to the self-energy.
Abstract:
Root and root finding are concepts familiar to most branches of mathematics. In graph theory, H is a square root of G (and G is the square of H) if two vertices x, y are adjacent in G if and only if they are at distance at most two in H. Graph square is a basic operation with a number of results about its properties in the literature. We study the characterization and recognition problems of graph powers. There are algorithmic and computational approaches to answer the decision problem of whether a given graph is a certain power of any graph. There are polynomial-time algorithms to solve this problem for squares of graphs with girth at least six, while NP-completeness has been proven for squares of graphs with girth at most four. The girth-parameterized problem of root finding has been open for squares of graphs with girth five. We settle the conjecture that recognition of squares of graphs with girth 5 is NP-complete. This result provides the complete dichotomy theorem for the square root finding problem.
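A minimal sketch of the square operation defined above (illustrative only; the thesis concerns the much harder inverse problem of recognizing squares):

```python
import networkx as nx

def graph_square(H):
    """Return G = H^2: same vertices as H, with x, y adjacent in G
    iff they are at distance at most two in H."""
    G = nx.Graph()
    G.add_nodes_from(H)
    for u in H:
        for v in H[u]:            # neighbours: distance 1
            G.add_edge(u, v)
            for w in H[v]:        # neighbours of neighbours: distance <= 2
                if w != u:
                    G.add_edge(u, w)
    return G

H = nx.path_graph(5)              # 0-1-2-3-4
print(sorted(graph_square(H).edges))
```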
Abstract:
According to the List Colouring Conjecture, if G is a multigraph then χ'(G) = χ'_l(G). In this thesis, we discuss a relaxed version of this conjecture: every simple graph G is edge-(∆+1)-choosable, since by Vizing's Theorem ∆(G) ≤ χ'(G) ≤ ∆(G) + 1. We prove that if G is a planar graph without 7-cycles with ∆(G) ≠ 5, 6, or without adjacent 4-cycles with ∆(G) ≠ 5, or with no 3-cycles adjacent to 5-cycles, then G is edge-(∆+1)-choosable.
Abstract:
Heyting categories, a variant of Dedekind categories, and Arrow categories provide a convenient framework for expressing and reasoning about fuzzy relations and programs based on them. In this thesis we present an implementation of Heyting and Arrow categories suitable for reasoning and program execution using Coq, an interactive theorem prover based on Higher-Order Logic (HOL) with dependent types. This implementation can be used to specify and develop correct software based on L-fuzzy relations, such as fuzzy controllers. We give an overview of lattices, L-fuzzy relations, category theory, and dependent type theory before describing our implementation. In addition, we provide examples of program executions based on our framework.
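For readers unfamiliar with the basic objects involved, here is a minimal numerical sketch of [0,1]-valued fuzzy relations and their sup-min composition; it sits entirely outside the Coq development described above and only illustrates what an L-fuzzy relation is in the special case L = [0,1], with min as the meet.

```python
import numpy as np

def compose(R, S):
    """Sup-min composition of finite fuzzy relations given as membership
    matrices with entries in [0, 1]:
    (R ; S)(x, z) = max over y of min(R(x, y), S(y, z))."""
    return np.max(np.minimum(R[:, :, None], S[None, :, :]), axis=1)

R = np.array([[0.2, 0.8],
              [1.0, 0.4]])
S = np.array([[0.5, 0.9],
              [0.3, 0.7]])
print(compose(R, S))
```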
Abstract:
Let f(x) be a complex rational function. In this work, we study conditions under which f(x) cannot be written as the composition of two rational functions which are not units under the operation of function composition. In this case, we say that f(x) is prime. We give sufficient conditions for complex rational functions to be prime in terms of their degrees and their critical values, and we derive some conditions for the case of complex polynomials. We also consider the divisibility of integral polynomials, and we present a generalization of a theorem of Nieto. We show that if f(x) and g(x) are integral polynomials such that the content of g divides the content of f and g(n) divides f(n) for an integer n whose absolute value is larger than a certain bound, then g(x) divides f(x) in Z[x]. In addition, given an integral polynomial f(x), we provide a method to determine whether f is irreducible over Z and, if not, to find one of its divisors in Z[x].
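A small sympy sketch of the kind of divisibility statement described above; the polynomials and the test point are arbitrary examples, and the bound on |n| from the thesis is not reproduced here.

```python
from sympy import Poly, symbols

x = symbols("x")
f = Poly(2*x**3 + 6*x**2 + 6*x + 2, x, domain="ZZ")   # = 2*(x + 1)**3
g = Poly(x + 1, x, domain="ZZ")

# Hypotheses of the Nieto-type statement (illustrative check only):
n = 10
content_divides = f.content() % g.content() == 0   # content(g) | content(f)
value_divides = f.eval(n) % g.eval(n) == 0         # g(n) | f(n)

# Conclusion being illustrated: g divides f in Z[x], i.e. the polynomial
# remainder of f by g is zero.
q, r = f.div(g)
print(content_divides, value_divides, r.is_zero)
```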
Abstract:
Symmetry group methods are applied to obtain all explicit group-invariant radial solutions to a class of semilinear Schrödinger equations in dimensions n ≥ 1. Both focusing and defocusing cases of a power nonlinearity are considered, including the special case of the pseudo-conformal power p = 4/n relevant for critical dynamics. The methods involve, first, reduction of the Schrödinger equations to group-invariant semilinear complex 2nd-order ordinary differential equations (ODEs) with respect to an optimal set of one-dimensional point symmetry groups, and second, use of inherited symmetries, hidden symmetries, and conditional symmetries to solve each ODE by quadratures. Through Noether's theorem, all conservation laws arising from these point symmetry groups are listed. Some group-invariant solutions are found to exist for values of n other than just positive integers, and in such cases an alternative two-dimensional form of the Schrödinger equations involving an extra modulation term with a parameter m = 2 − n ≠ 0 is discussed.
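For reference, one standard form of the class of equations described above (the exact normalization used in the work may differ) is, in radial variables,

```latex
i\,u_t + u_{rr} + \frac{n-1}{r}\,u_r + \sigma\,|u|^{p}\,u = 0,
\qquad \sigma = +1 \ \text{(focusing)},\quad \sigma = -1 \ \text{(defocusing)},
```

with the pseudo-conformal (mass-critical) case corresponding to p = 4/n.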
Abstract:
For inviscid fluid flow in any n-dimensional Riemannian manifold, new conserved vorticity integrals generalizing helicity, enstrophy, and entropy circulation are derived for lower-dimensional surfaces that move along fluid streamlines. Conditions are determined for which the integrals yield constants of motion for the fluid. In the case when an inviscid fluid is isentropic, these new constants of motion generalize Kelvin’s circulation theorem from closed loops to closed surfaces of any dimension.
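For comparison, the classical statement being generalized is Kelvin's circulation theorem: for an inviscid, isentropic (barotropic) flow and a closed material loop C(t) advected by the fluid,

```latex
\frac{d}{dt}\oint_{C(t)} \mathbf{u}\cdot d\boldsymbol{\ell} = 0 .
```

The results above replace the closed loop by closed lower-dimensional surfaces moving along streamlines.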
Abstract:
Consider an undirected graph G and a subgraph H of G. A q-backbone k-colouring of (G,H) is a mapping f: V(G) → {1, 2, ..., k} such that G is properly coloured and, for each edge of H, the colours of its endpoints differ by at least q. The minimum number k for which there is a q-backbone k-colouring of (G,H) is the backbone chromatic number, BBCq(G,H). It has been proved that the backbone chromatic number of (G,T), where T is a spanning tree of G, is at most 4 if G is a connected C4-free planar graph, a non-bipartite C5-free planar graph, or a Cj-free (j ∈ {6,7,8}) planar graph without adjacent triangles. In this thesis we improve the results mentioned above and prove, using a discharging method, that the 2-backbone chromatic number of any connected planar graph without adjacent triangles is at most 4. In the second part of this thesis we further improve these results by proving that for any graph G with χ(G) ≥ 4, BBC(G,T) = χ(G). In fact, we prove the stronger result that a backbone tree T in G exists such that, for every edge uv ∈ T, |f(u)-f(v)| = 2 or |f(u)-f(v)| ≥ k-2, where k = χ(G). For the case that G is a planar graph with χ(G) = 4 (the maximum allowed by the Four Colour Theorem), BBC(G,T) = 4.
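A minimal sketch of the definition above as a verification routine (the graphs and colouring are illustrative only):

```python
import networkx as nx

def is_q_backbone_k_colouring(G, H, f, q, k):
    """Check that f: V(G) -> {1,...,k} properly colours G and that the
    colours of the endpoints of every backbone edge (edge of H) differ
    by at least q."""
    in_range = all(1 <= f[v] <= k for v in G)
    proper = all(f[u] != f[v] for u, v in G.edges)
    backbone_ok = all(abs(f[u] - f[v]) >= q for u, v in H.edges)
    return in_range and proper and backbone_ok

G = nx.cycle_graph(4)                     # 0-1-2-3-0
T = nx.Graph([(0, 1), (1, 2), (2, 3)])    # a spanning tree used as backbone
f = {0: 1, 1: 3, 2: 1, 3: 3}
print(is_q_backbone_k_colouring(G, T, f, q=2, k=4))   # True
```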
Abstract:
Suzumura shows that a binary relation has a weak order extension if and only if it is consistent. However, consistency is demonstrably not sufficient to extend an upper semicontinuous binary relation to an upper semicontinuous weak order. Jaffray proves that any asymmetric (or reflexive), transitive and upper semicontinuous binary relation has an upper semicontinuous strict (or weak) order extension. We provide sufficient conditions for the existence of upper semicontinuous extensions of consistent rather than transitive relations. For asymmetric relations, consistency and upper semicontinuity suffice. For more general relations, we prove one theorem using a further consistency property and another with an additional continuity requirement.
Abstract:
A desirable property of a voting procedure is that it be immune to the strategic withdrawal of a candidate for election. Dutta, Jackson, and Le Breton (Econometrica, 2001) have established a number of theorems that demonstrate that this condition is incompatible with some other desirable properties of voting procedures. This article shows that Grether and Plott's nonbinary generalization of Arrow's Theorem can be used to provide simple proofs of two of these impossibility theorems.
Abstract:
This note investigates the adequacy of the finite-sample approximation provided by the Functional Central Limit Theorem (FCLT) when the errors are allowed to be dependent. We compare the distribution of the scaled partial sums of some data with the distribution of the Wiener process to which it converges. Our setup is purposely very simple in that it considers data generated from an ARMA(1,1) process. Yet, this is sufficient to bring out interesting conclusions about the particular elements which cause the approximations to be inadequate in even quite large sample sizes.
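A minimal sketch of the kind of exercise described above (parameter values are arbitrary, not those used in the note): simulate ARMA(1,1) errors, form the scaled partial-sum process, and compare its behaviour with that of a standard Wiener process on [0,1].

```python
import numpy as np

rng = np.random.default_rng(0)

def arma11(n, phi=0.5, theta=0.3, sigma=1.0):
    """Simulate u_t = phi*u_{t-1} + e_t + theta*e_{t-1} with Gaussian e_t."""
    e = rng.normal(scale=sigma, size=n + 1)
    u = np.empty(n)
    u[0] = e[1] + theta * e[0]
    for t in range(1, n):
        u[t] = phi * u[t - 1] + e[t + 1] + theta * e[t]
    return u

n, phi, theta, sigma = 1000, 0.5, 0.3, 1.0
u = arma11(n, phi, theta, sigma)

# Long-run variance of the ARMA(1,1): sigma^2 * (1 + theta)^2 / (1 - phi)^2.
lrv = sigma**2 * (1 + theta) ** 2 / (1 - phi) ** 2

# Scaled partial-sum process; under the FCLT it converges weakly to a
# standard Wiener process W on [0, 1] as n grows.
W_hat = np.cumsum(u) / np.sqrt(n * lrv)

# Crude finite-sample check: across replications, the variance of the
# endpoint should be close to Var W(1) = 1 if the approximation is adequate.
reps = [np.sum(arma11(n, phi, theta, sigma)) / np.sqrt(n * lrv)
        for _ in range(2000)]
print(np.var(reps))
```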