960 results for Averaging Theorem
Abstract:
In proposing theories of how we should design and specify networks of processes, it is necessary to show that the semantics of any language we use to write down the intended behaviours of a system has several qualities: first, that the meaning of what is written on the page reflects the intention of the designer; second, that there are no unexpected behaviours that might arise in a specified system that are hidden from the unsuspecting specifier; and third, that the intention for the design of the behaviour of a network of processes can be communicated clearly and intuitively to others. In order to achieve this we have developed a variant of CSP, called CSPt, designed to solve the problems of termination of parallel processes present in the original formulation of CSP. In CSPt we introduced three parallel operators, each with a different kind of termination semantics, which we call synchronous, asynchronous and race. These operators provide specifiers with an expressive and flexible toolkit to define the intended behaviour of a system in such a way that unexpected or unwanted behaviours are guaranteed not to take place. In this paper we extend our analysis of CSPt and introduce the notion of an alphabet diagram, which illustrates the different categories of events that can arise in the parallel composition of processes. These alphabet diagrams are then used to analyse networks of three processes in parallel, with the aim of identifying sufficient constraints to ensure associativity of their parallel composition. Having achieved this, we then prove associativity laws for the three parallel operators of CSPt. Next, we illustrate how to design and construct a network of three processes that satisfies the associativity law, using the associativity theorem and alphabet diagrams. Finally, we outline how this could be achieved for more general networks of processes.
Abstract:
Master's dissertation in Communication Networks and Multimedia Engineering
Abstract:
Dissertation presented to obtain the Ph.D. degree in Chemistry.
Abstract:
C4 photosynthesis is an adaptation derived from the more common C3 photosynthetic pathway that confers a higher productivity under warm temperature and low atmospheric CO2 concentration [1, 2]. C4 evolution has been seen as a consequence of past atmospheric CO2 decline, such as the abrupt CO2 fall 32-25 million years ago (Mya) [3-6]. This relationship has never been tested rigorously, mainly because of a lack of accurate estimates of divergence times for the different C4 lineages [3]. In this study, we inferred a large phylogenetic tree for the grass family and estimated, through Bayesian molecular dating, the ages of the 17 to 18 independent grass C4 lineages. The first transition from C3 to C4 photosynthesis occurred in the Chloridoideae subfamily, 32.0-25.0 Mya. The link between CO2 decrease and the transition to C4 photosynthesis was tested by a novel maximum likelihood approach. We showed that the model incorporating atmospheric CO2 levels was significantly better than the null model, supporting the importance of CO2 decline for the evolvability of C4 photosynthesis. This finding is relevant for understanding the origin of C4 photosynthesis in grasses, which is one of the most successful ecological and evolutionary innovations in plant history.
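The model comparison described above is a nested-model likelihood ratio test. As a hedged illustration of the mechanics only (this is not the authors' code, and the log-likelihood values below are made up), the comparison between a null model and a CO2-dependent model looks like:

```python
# Hypothetical log-likelihoods for a null model (C3-to-C4 transition rate
# independent of CO2) and an alternative model that lets the rate depend on
# atmospheric CO2 levels. The numbers are illustrative, not the paper's results.
logL_null = -152.7
logL_co2 = -146.1

# Likelihood ratio statistic: 2 * (logL_alternative - logL_null).
lr_stat = 2.0 * (logL_co2 - logL_null)

# With one extra parameter, compare against the chi-square critical value
# for 1 degree of freedom at the 5% level (3.841).
CHI2_CRIT_DF1 = 3.841
co2_model_preferred = lr_stat > CHI2_CRIT_DF1

print(f"LR statistic = {lr_stat:.2f}, CO2 model preferred: {co2_model_preferred}")
```

With these illustrative values the statistic comfortably exceeds the critical value, which is the shape of evidence the abstract reports for the CO2-dependent model.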
Abstract:
The level of information provided by ink evidence to the criminal and civil justice system is limited. The limitations arise from the weakness of the interpretative framework currently used, as proposed in ASTM 1422-05 and 1789-04 on ink analysis. It is proposed to use the likelihood ratio from Bayes' theorem to interpret ink evidence. Unfortunately, when considering the analytical practices defined in the ASTM standards on ink analysis, it appears that current ink analytical practices do not allow for the level of reproducibility and accuracy required by a probabilistic framework. Such a framework relies on the evaluation of the statistics of ink characteristics using an ink reference database and on the objective measurement of similarities between ink samples. A complete research programme was designed to (a) develop a standard methodology for analysing ink samples in a more reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in a forensic context. This report focuses on the first of the three stages. A calibration process, based on a standard dye ladder, is proposed to improve the reproducibility of ink analysis by HPTLC when inks are analysed at different times and/or by different examiners. The impact of this process on the variability between repetitive analyses of ink samples in various conditions is studied. The results show significant improvements in the reproducibility of ink analysis compared to traditional calibration methods.
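The interpretative framework proposed above rests on the likelihood ratio, LR = P(E | Hp) / P(E | Hd), comparing the probability of the evidence under the prosecution and defence propositions. A minimal sketch, assuming a hypothetical scalar ink-similarity score and toy Gaussian score models (none of this comes from the ASTM standards or from the report itself):

```python
import math

def normal_pdf(x, mu, sigma):
    """Gaussian density, used here as a toy model for a similarity score."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical distributions of an ink-similarity score: under Hp (same source)
# scores cluster high; under Hd (different sources) they are lower and spread out.
score = 0.92
lr = normal_pdf(score, mu=0.95, sigma=0.05) / normal_pdf(score, mu=0.60, sigma=0.15)

# LR > 1 supports the same-source proposition; its magnitude conveys strength.
print(f"likelihood ratio = {lr:.1f}")
```

The report's point is precisely that such score models require reproducible, calibrated measurements before the LR is meaningful.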
Abstract:
Many people regard the concept of hypothesis testing as fundamental to inferential statistics. Various schools of thought, in particular frequentist and Bayesian, have promoted radically different solutions for taking a decision about the plausibility of competing hypotheses. Comprehensive philosophical comparisons of their advantages and drawbacks are widely available and continue to fuel large debates in the literature. More recently, controversial discussion was initiated by an editorial decision of a scientific journal [1] to refuse any paper submitted for publication containing null hypothesis testing procedures. Since the large majority of papers published in forensic journals propose the evaluation of statistical evidence based on so-called p-values, it is of interest to expose the discussion of this journal's decision within the forensic science community. This paper aims to provide forensic science researchers with a primer on the main concepts and their implications for making informed methodological choices.
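For readers unfamiliar with the p-value machinery under discussion, here is a minimal, self-contained sketch using a toy binomial example (not forensic data): the p-value is the probability, under the null hypothesis, of data at least as extreme as what was observed.

```python
from math import comb

def binom_upper_p_value(successes, n, p_null=0.5):
    """Upper-tail p-value sketch: P(X >= successes) under a binomial null."""
    return sum(comb(n, k) * p_null**k * (1 - p_null)**(n - k)
               for k in range(successes, n + 1))

# Toy example: 58 "matches" out of 80 trials against a chance null of p = 0.5.
p = binom_upper_p_value(58, 80)
reject_at_05 = p < 0.05
print(f"upper-tail p-value = {p:.3g}, reject at 0.05: {reject_at_05}")
```

A Bayesian treatment would instead compare the marginal likelihoods of the competing hypotheses, which is the contrast the paper's primer is about.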
Abstract:
One of the global targets for non-communicable diseases is to halt, by 2025, the rise in the age-standardised adult prevalence of diabetes at its 2010 levels. We aimed to estimate worldwide trends in diabetes, how likely it is for countries to achieve the global target, and how changes in prevalence, together with population growth and ageing, are affecting the number of adults with diabetes. We pooled data from population-based studies that had collected data on diabetes through measurement of its biomarkers. We used a Bayesian hierarchical model to estimate trends in diabetes prevalence-defined as fasting plasma glucose of 7.0 mmol/L or higher, or history of diagnosis with diabetes, or use of insulin or oral hypoglycaemic drugs-in 200 countries and territories in 21 regions, by sex and from 1980 to 2014. We also calculated the posterior probability of meeting the global diabetes target if post-2000 trends continue. We used data from 751 studies including 4,372,000 adults from 146 of the 200 countries we make estimates for. Global age-standardised diabetes prevalence increased from 4.3% (95% credible interval 2.4-7.0) in 1980 to 9.0% (7.2-11.1) in 2014 in men, and from 5.0% (2.9-7.9) to 7.9% (6.4-9.7) in women. The number of adults with diabetes in the world increased from 108 million in 1980 to 422 million in 2014 (28.5% due to the rise in prevalence, 39.7% due to population growth and ageing, and 31.8% due to interaction of these two factors). Age-standardised adult diabetes prevalence in 2014 was lowest in northwestern Europe, and highest in Polynesia and Micronesia, at nearly 25%, followed by Melanesia and the Middle East and north Africa. Between 1980 and 2014 there was little change in age-standardised diabetes prevalence in adult women in continental western Europe, although crude prevalence rose because of ageing of the population. By contrast, age-standardised adult prevalence rose by 15 percentage points in men and women in Polynesia and Micronesia. 
In 2014, American Samoa had the highest national prevalence of diabetes (>30% in both sexes), with age-standardised adult prevalence also higher than 25% in some other islands in Polynesia and Micronesia. If post-2000 trends continue, the probability of meeting the global target of halting the rise in the prevalence of diabetes by 2025 at the 2010 level worldwide is lower than 1% for men and is 1% for women. Only nine countries for men and 29 countries for women, mostly in western Europe, have a 50% or higher probability of meeting the global target. Since 1980, age-standardised diabetes prevalence in adults has increased, or at best remained unchanged, in every country. Together with population growth and ageing, this rise has led to a near quadrupling of the number of adults with diabetes worldwide. The burden of diabetes, both in terms of prevalence and number of adults affected, has increased faster in low-income and middle-income countries than in high-income countries. Wellcome Trust.
Abstract:
This study examined factors contributing to the differences in left ventricular mass as measured by Doppler echocardiography in children. Fourteen boys (10.3 ± 0.3 years of age) and 11 girls (10.5 ± 0.4 years of age) participated in the study. Height and weight were measured, and relative body fat was determined from the measurement of skinfold thickness according to Slaughter et al. (1988). Lean body mass was then calculated by subtracting the fat mass from the total body mass. Sexual maturation was self-assessed using the stages of sexual maturation by Tanner (1962). Both pubic hair development and genital (penis or breast for boys and girls, respectively) development were used to determine sexual maturation. Carotid pulse pressure was assessed by applanation tonometry in the left carotid artery. Cardiac mass was measured by Doppler echocardiography. Images of cardiac structures were taken using B-mode and were then translated to M-mode. The dimensions at end diastole were obtained at the onset of the QRS complex of the electrocardiogram in a plane through a standard position. Measurements included: (a) the diameter of the left ventricle at end diastole, measured from the septum edge to the endocardium mean border; (b) the posterior wall, measured as the distance from the anterior wall to the epicardium surface; and (c) the interventricular septum, quantified as the distance from the surface of the left ventricle border to the right ventricle septum surface. Systolic time measurements were taken at the peak of the T-wave of the electrocardiogram. Each measurement was taken three to five times before averaging. Average values were used to calculate cardiac mass using the equation of Devereux et al. (1986). Weekly physical activity metabolic equivalent was calculated using a standardized activity questionnaire (Godin and Shepard, 1985), and peak VO2 was measured on a cycle ergometer.
There were no significant differences in cardiovascular measurements between boys and girls. Left ventricular mass was correlated (p < 0.05) with size, maturation, peak VO2 and physical activity metabolic equivalent. In boys, lean body mass alone explained 36% of the variance in left ventricular mass, while weight was the single strongest predictor of left ventricular mass (R = 0.80) in girls. Lean body mass, genital development and physical activity metabolic equivalent together explained 46% and 81% of the variance in boys and girls, respectively. However, the combination of lean body mass, genital development and peak VO2 (ml·kgLBM⁻¹·min⁻¹) explained up to 84% of the variance in left ventricular mass in girls, but added nothing in boys. It is concluded that left ventricular mass was not statistically different between pre-adolescent boys and girls, suggesting that hormonal, and therefore body size, changes in adolescence have a main effect on cardiac development and its final outcome. Although body size parameters were the strongest correlates of left ventricular mass in this pre-adolescent group of children, to our knowledge this is the first study to report that sexual maturation, as well as physical activity and fitness, are also strongly associated with left ventricular mass in pre-adolescents, especially young females. Arterial variables, such as systolic blood pressure and carotid pulse pressure, are not strong determinants of left ventricular mass in this pre-adolescent group. In general, these data suggest that although there are no gender differences in the absolute values of left ventricular mass, as children grow, the factors that determine cardiac mass differ between the genders, even in the same pre-adolescent age.
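The Devereux et al. (1986) equation cited above is, in its commonly cited ASE-corrected form, LVM = 0.8 × 1.04 × [(IVS + LVID + PW)³ − LVID³] + 0.6 g. A sketch with illustrative paediatric-scale dimensions (these are not the study's data, and the exact formula variant used in the study is not stated in the abstract):

```python
def lv_mass_devereux(ivs_cm, lvid_cm, pw_cm):
    """Left ventricular mass (g) from end-diastolic interventricular septal
    thickness, cavity diameter, and posterior wall thickness (all in cm),
    using the commonly cited Devereux et al. (1986) ASE-corrected formula."""
    return 0.8 * (1.04 * ((ivs_cm + lvid_cm + pw_cm) ** 3 - lvid_cm ** 3)) + 0.6

# Illustrative dimensions for a pre-adolescent child (assumed, not measured).
mass_g = lv_mass_devereux(ivs_cm=0.6, lvid_cm=3.8, pw_cm=0.6)
print(f"estimated LV mass = {mass_g:.1f} g")
```

The cubic terms make the estimate highly sensitive to the cavity diameter, which is one reason each dimension was measured three to five times before averaging.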
Abstract:
Confocal and two-photon microscopy have become essential tools in biological research, and today many investigations are not possible without their help. The valuable advantage that these two techniques offer is the ability of optical sectioning. Optical sectioning makes it possible to obtain 3D visualization of structures, and hence valuable information about the structural relationships and the geometrical and morphological aspects of the specimen. The achievable lateral and axial resolutions of confocal and two-photon microscopy, as in other optical imaging systems, are both defined by the diffraction theorem. Any aberration and imperfection present during imaging results in broadening of the calculated theoretical resolution, blurring, and geometrical distortions in the acquired images that interfere with the analysis of the structures, and lowers the collected fluorescence from the specimen. The aberrations may have different causes and can be classified by their sources, such as specimen-induced aberrations, optics-induced aberrations, illumination aberrations, and misalignment aberrations. This thesis presents an investigation and study of image enhancement. The goal of this thesis was approached in two different directions. Initially, we investigated the sources of the imperfections. We propose methods to eliminate or minimize aberrations introduced during image acquisition by optimizing the acquisition conditions. The impact on the resolution of using a coverslip whose thickness is mismatched with the one the objective lens is designed for was shown, and a novel technique was introduced to define the proper value on the correction collar of the lens. The amount of spherical aberration with regard to the numerical aperture of the objective lens was investigated, and it was shown that, depending on the purpose of our imaging tasks, different numerical apertures must be used.
The deformed beam cross section of the single-photon excitation source was corrected, and the resulting enhancement of the resolution and image quality was shown. Furthermore, the dependency of the scattered light on the excitation wavelength was shown empirically. In the second part, we continued the study of the image enhancement process with deconvolution techniques. Although deconvolution algorithms are widely used to improve the quality of images, how well a deconvolution algorithm performs depends strongly on the point spread function (PSF) of the imaging system applied to the algorithm and on the level of its accuracy. We investigated approaches for obtaining a more precise PSF. Novel methods to improve the pattern of the PSF and reduce the noise are proposed. Furthermore, multiple sources from which to extract the PSFs of the imaging system are introduced, and the empirical deconvolution results obtained using each of these PSFs are compared. The results confirm that a greater improvement is attained by applying the in situ PSF during the deconvolution process.
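The PSF-dependent deconvolution discussed above can be illustrated with the classic Richardson-Lucy scheme. This is a generic 1-D sketch on toy data, not the algorithms, PSFs, or software used in the thesis:

```python
def convolve(signal, kernel):
    """Linear 'same'-length convolution with a centred kernel."""
    n, m = len(signal), len(kernel)
    half = m // 2
    out = []
    for i in range(n):
        s = 0.0
        for j in range(m):
            k = i + j - half
            if 0 <= k < n:
                s += signal[k] * kernel[j]
        out.append(s)
    return out

def richardson_lucy(observed, psf, iterations=50):
    """Richardson-Lucy deconvolution sketch: iteratively re-weight the
    estimate by the ratio of observed data to the re-blurred estimate."""
    psf_mirror = psf[::-1]
    estimate = [1.0] * len(observed)
    for _ in range(iterations):
        blurred = convolve(estimate, psf)
        ratio = [o / b if b > 1e-12 else 0.0 for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf_mirror)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

# Toy example: a single bright point blurred by a 3-tap PSF.
psf = [0.25, 0.5, 0.25]
truth = [0, 0, 0, 10, 0, 0, 0]
observed = convolve(truth, psf)          # [0, 0, 2.5, 5.0, 2.5, 0, 0]
restored = richardson_lucy(observed, psf, iterations=200)
print([round(x, 2) for x in restored])
```

The restored peak sharpens back toward the original point source; an inaccurate PSF would instead leave residual blur or introduce artefacts, which is the thesis's motivation for extracting a more precise, in situ PSF.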
Developmental variations in the peripheral erythrocytic system of the rainbow trout, Salmo gairdneri
Abstract:
The peripheral circulating erythrocytic system of the rainbow trout, Salmo gairdneri, was examined in vitro in relation to differences in the morphology and multiple hemoglobin system organization of adult and juvenile red cells. Cells were separated by velocity sedimentation under unit gravity, a procedure requiring red cell exposure to an incubation medium for periods of at least three hours. Therefore, this medium must provide an environment in which red cells remain in a condition approximating normalcy. Previous studies having demonstrated commonly employed media to be ineffective in this regard, a medium was developed through modification of Cortland saline. One of the principal additions to this medium, norepinephrine, altered cell regulation of intracellular calcium, magnesium and chloride concentrations. Catecholamine involvement was also suggested in the synthesis of hemoglobin. The procedure was found to separate cells primarily by density and, to a lesser extent, by shape. Characterization of red cells revealed the existence of two subpopulations. The first comprised the bulk of the cell population and was of greater length, width, volume and major:minor axis ratio than the smaller population; these were adult cells. The latter, juvenile cells, were of smaller overall size and were more spherical in shape. Juvenile cells also possessed fewer electrophoretically distinguishable isomorphs than did adults, with only eight of eleven hemoglobin components typically found. With maturation, the hemoglobin complement increased with the development of three more bands. The total complement of the adult cell contained seven cathodal and four anodal hemoglobin isomorphs. Bands acquired with maturation comprised the smallest percentage of the cell's hemoglobin, each averaging less than one percent of the total. Whether these additional bands are derived through degradation and reaggregation of existing components or are the product of de novo synthesis is not yet known.
Abstract:
The frequency dependence of the electron-spin fluctuation spectrum, P(Q), is calculated in the finite bandwidth model. We find that for Pd, which has a nearly full d-band, the magnitude, the range, and the peak frequency of P(Q) are greatly reduced from those in the standard spin fluctuation theory. The electron self-energy due to spin fluctuations is calculated within the finite bandwidth model. Vertex corrections are examined, and we find that Migdal's theorem is valid for spin fluctuations in the nearly full band. The conductance of a normal metal-insulator-normal metal tunnel junction is examined when spin fluctuations are present in one electrode. We find that for the nearly full band, the momentum independent self-energy due to spin fluctuations enters the expression for the tunneling conductance with approximately the same weight as the self-energy due to phonons. The effect of spin fluctuations on the tunneling conductance is slight within the finite bandwidth model for Pd. The effect of spin fluctuations on the tunneling conductance of a metal with a less full d-band than Pd may be more pronounced. However, in this case the tunneling conductance is not simply proportional to the self-energy.
Abstract:
Root and root finding are concepts familiar to most branches of mathematics. In graph theory, H is a square root of G, and G is the square of H, if two vertices x, y are adjacent in G if and only if x, y are at distance at most two in H. The graph square is a basic operation, with a number of results about its properties in the literature. We study the characterization and recognition problems of graph powers. There are algorithmic and computational approaches to answering the decision problem of whether a given graph is a certain power of any graph. There are polynomial time algorithms to solve this problem for squares of graphs with girth at least six, while NP-completeness has been proven for squares of graphs with girth at most four. The girth-parameterized problem of root finding had been open in the case of squares of graphs with girth five. We settle the conjecture that recognition of squares of graphs with girth five is NP-complete. This result provides the complete dichotomy theorem for the square root finding problem.
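While recognising whether a given G is the square of some H is NP-complete at girth five, as stated above, the forward operation, computing H² from H, is straightforward. A small sketch of that definition:

```python
from itertools import combinations

def graph_square(vertices, edges):
    """Square of a graph H: x and y are adjacent in H^2 iff their distance in H
    is at most two, i.e. they are adjacent or share a common neighbour."""
    adj = {v: set() for v in vertices}
    for x, y in edges:
        adj[x].add(y)
        adj[y].add(x)
    square = set()
    for x, y in combinations(vertices, 2):
        if y in adj[x] or adj[x] & adj[y]:
            square.add(frozenset((x, y)))
    return square

# The path a-b-c-d: its square gains the distance-two chords a-c and b-d,
# but not a-d, which is at distance three.
V = ["a", "b", "c", "d"]
E = [("a", "b"), ("b", "c"), ("c", "d")]
sq = graph_square(V, E)
print(sorted(tuple(sorted(e)) for e in sq))
```

The hardness result concerns the inverse direction: given only the output of this operation, deciding whether a suitable H of the required girth exists.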
Abstract:
According to the List Colouring Conjecture, if G is a multigraph then χ'(G) = χl'(G). In this thesis, we discuss a relaxed version of this conjecture, namely that every simple graph G is edge-(∆ + 1)-choosable, since by Vizing's Theorem ∆(G) ≤ χ'(G) ≤ ∆(G) + 1. We prove that if G is a planar graph without 7-cycles with ∆(G) ≠ 5, 6, or without adjacent 4-cycles with ∆(G) ≠ 5, or with no 3-cycles adjacent to 5-cycles, then G is edge-(∆ + 1)-choosable.
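Vizing's bound quoted above can be checked concretely on tiny graphs by brute force. A sketch (hypothetical helper, exponential in the number of edges, so toy inputs only):

```python
from itertools import product

def edge_chromatic_number(edges):
    """Smallest k such that the edges admit a proper k-edge-colouring
    (edges sharing a vertex receive distinct colours). Brute force."""
    m = len(edges)
    adjacent = [(i, j)
                for i in range(m)
                for j in range(i + 1, m)
                if set(edges[i]) & set(edges[j])]
    for k in range(1, m + 1):
        for colouring in product(range(k), repeat=m):
            if all(colouring[i] != colouring[j] for i, j in adjacent):
                return k
    return m

# Triangle: maximum degree Delta = 2, yet its three mutually adjacent edges
# force 3 colours, hitting the upper end of chi'(G) <= Delta(G) + 1.
triangle = [("a", "b"), ("b", "c"), ("a", "c")]
print(edge_chromatic_number(triangle))  # 3
```

List edge colouring replaces the shared palette with a per-edge list of allowed colours; the conjecture asserts the list version needs no more colours than the ordinary one.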
Abstract:
Heyting categories, a variant of Dedekind categories, and Arrow categories provide a convenient framework for expressing and reasoning about fuzzy relations and about programs based on those relations. In this thesis we present an implementation of Heyting and Arrow categories suitable for reasoning and program execution using Coq, an interactive theorem prover based on Higher-Order Logic (HOL) with dependent types. This implementation can be used to specify and develop correct software based on L-fuzzy relations, such as fuzzy controllers. We give an overview of lattices, L-fuzzy relations, category theory and dependent type theory before describing our implementation. In addition, we provide examples of program executions based on our framework.
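As a plain-language counterpart to the Coq development described above, here is a minimal Python sketch of sup-min composition, the basic operation on L-fuzzy relations, specialised to L = [0, 1] (illustrative only; the thesis works with general lattices inside Heyting and Arrow categories):

```python
def compose(R, S):
    """Sup-min composition of two L-fuzzy relations over L = [0, 1],
    represented as dicts mapping (x, y) pairs to membership degrees."""
    xs = {x for x, _ in R}
    zs = {z for _, z in S}
    mids = {y for _, y in R} | {y for y, _ in S}
    return {
        (x, z): max((min(R.get((x, y), 0.0), S.get((y, z), 0.0)) for y in mids),
                    default=0.0)
        for x in xs
        for z in zs
    }

# Toy fuzzy relations between labelled points: degrees are "how related".
R = {("a", "p"): 0.8, ("a", "q"): 0.3}
S = {("p", "z"): 0.5, ("q", "z"): 0.9}
T = compose(R, S)
print(T)  # {('a', 'z'): 0.5}
```

In the categorical setting this composition is the arrow composition, and the Coq implementation lets such equations be both proved and executed.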