17 results for Fisher exact test


Relevance:

90.00%

Publisher:

Abstract:

Analyzing statistical dependencies is a fundamental problem in all empirical science. Dependencies help us understand causes and effects, create new scientific theories, and devise cures for problems. Nowadays, large amounts of data are available, but efficient computational tools for analyzing the data are missing. In this research, we develop efficient algorithms for a commonly occurring search problem - searching for the statistically most significant dependency rules in binary data. We consider dependency rules of the form X->A or X->not A, where X is a set of positive-valued attributes and A is a single attribute. Such rules describe which factors either increase or decrease the probability of the consequent A. A classical example is genetic and environmental factors, which can either cause or prevent a disease. The emphasis in this research is that the discovered dependencies should be genuine - i.e. they should also hold in future data. This is an important distinction from traditional association rules, which - in spite of their name and a similar appearance to dependency rules - do not necessarily represent statistical dependencies at all, or represent only spurious connections that occur by chance. Therefore, the principal objective is to search for rules with statistical significance measures. Another important objective is to search only for non-redundant rules, which express the real causes of dependence without any incidental extra factors. The extra factors do not add any new information on the dependence, but can only blur it and make it less accurate in future data. The problem is computationally very demanding, because the number of all possible rules increases exponentially with the number of attributes. In addition, neither the statistical dependency nor the statistical significance is a monotonic property, which means that traditional pruning techniques do not work.
As a solution, we first derive the mathematical basis for pruning the search space with any well-behaving statistical significance measure. The mathematical theory is complemented by a new algorithmic invention, which enables an efficient search without any heuristic restrictions. The resulting algorithm can be used to search for both positive and negative dependencies with any commonly used statistical measure, such as Fisher's exact test, the chi-squared measure, mutual information, and z scores. According to our experiments, the algorithm scales well, especially with Fisher's exact test. It can easily handle even the densest data sets with 10,000-20,000 attributes. Still, the results are globally optimal, which is a remarkable improvement over existing solutions. In practice, this means that the user does not have to worry whether the dependencies hold in future data or whether the data still contains better, undiscovered dependencies.
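The significance evaluation of a single candidate rule X->A can be illustrated with Fisher's exact test on the rule's 2x2 contingency table. The counts below are hypothetical; the one-sided p-value is computed from the hypergeometric distribution using only the standard library:

```python
from math import comb

def fisher_exact_right(a, b, c, d):
    """One-sided (right-tail) Fisher's exact test for the 2x2 table
    [[a, b], [c, d]]: probability of observing a co-occurrence count
    of at least `a` under the hypergeometric null of independence."""
    row1, col1, n = a + b, a + c, a + b + c + d
    denom = comb(n, col1)
    return sum(comb(row1, k) * comb(n - row1, col1 - k)
               for k in range(a, min(row1, col1) + 1)) / denom

# Hypothetical counts in 100 binary records for a rule X -> A:
# a = |X and A|, b = |X and not A|, c = |not X and A|, d = |not X and not A|
p = fisher_exact_right(30, 10, 20, 40)   # strong positive dependency, p well below 0.001
```

For the negative rule X->not A, the same function is applied to the table with its columns swapped.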

Relevance:

90.00%

Publisher:

Abstract:

Background: Acute bacterial meningitis (BM) continues to be an important cause of childhood mortality and morbidity, especially in developing countries. Prognostic scales and the identification of risk factors for adverse outcome both aid in assessing disease severity. New antimicrobial agents or adjunctive treatments - except for oral glycerol - have essentially failed to improve BM prognosis. A retrospective observational analysis found paracetamol beneficial in adult bacteraemic patients, and some experts recommend slow β-lactam infusion. We examined these treatments in a prospective, double-blind, placebo-controlled clinical trial. Patients and methods: A retrospective analysis included 555 children treated for BM in 2004 in the infectious disease ward of the Paediatric Hospital of Luanda, Angola. Our prospective study randomised 723 children into four groups, to receive a combination of cefotaxime infusion or boluses every 6 hours for the first 24 hours and oral paracetamol or placebo for 48 hours. The primary endpoints were 1) death or severe neurological sequelae (SeNeSe), and 2) deafness. Results: In the retrospective study, the mortality of children with blood transfusion was 23% (30 of 128) vs. 39% (109 of 282) without blood transfusion (p=0.004). In the prospective study, 272 (38%) of the children died. Of the 451 survivors, 68 (15%) showed SeNeSe, and 12% (45 of 374) were deaf. Whereas no difference between treatment groups was observable in the primary endpoints, early mortality in the infusion-paracetamol group was lower, the difference (Fisher's exact test) from the other groups at 24, 48, and 72 hours being significant (p=0.041, 0.0005, and 0.005, respectively). Prognostic factors for adverse outcomes were impaired consciousness, dyspnoea, seizures, delayed presentation, and absence of electricity at home (Simple Luanda Scale, SLS); the Bayesian Luanda Scale (BLS) also included abnormally low or high blood glucose.
Conclusions: New studies are warranted on the possible beneficial effect of blood transfusion and on longer treatment with cefotaxime infusion and oral paracetamol, as is a study to validate our simple prognostic scales.
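The retrospective transfusion comparison (30 of 128 vs. 109 of 282 deaths) can be checked with a simple two-proportion z-test. The abstract's p=0.004 presumably comes from an exact test, but the normal approximation, sketched below with only the standard library, gives a similar result:

```python
from math import erf, sqrt

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    return z, 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Deaths with transfusion vs. without, as reported above
z, p = two_proportion_z_test(30, 128, 109, 282)   # p is on the order of 0.003
```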

Relevance:

20.00%

Publisher:

Abstract:

Multiple sclerosis (MS) is a chronic, inflammatory disease of the central nervous system, characterized especially by myelin and axon damage. Cognitive impairment in MS is common but difficult to detect without a neuropsychological examination. Valid and reliable methods are needed in clinical practice and research to detect deficits, follow their natural evolution, and verify treatment effects. The Paced Auditory Serial Addition Test (PASAT) is a measure of sustained and divided attention, working memory, and information processing speed, and it is widely used in MS patients' neuropsychological evaluation. Additionally, the PASAT is the sole cognitive measure in an assessment tool primarily designed for MS clinical trials, the Multiple Sclerosis Functional Composite (MSFC). The aims of the present study were to determine a) the frequency, characteristics, and evolution of cognitive impairment among relapsing-remitting MS patients, and b) the validity and reliability of the PASAT in measuring cognitive performance in MS patients. The subjects were 45 relapsing-remitting MS patients from the Department of Neurology, Seinäjoki Central Hospital, and 48 healthy controls. Both groups underwent comprehensive neuropsychological assessments, including the PASAT, twice in a one-year follow-up; additionally, a sample of 10 patients and controls was evaluated with the PASAT in serial assessments five times in one month. The frequency of cognitive dysfunction among relapsing-remitting MS patients in the present study was 42%. Impairments were characterized especially by slowed information processing speed and memory deficits. During the one-year follow-up, cognitive performance was relatively stable among MS patients on a group level. However, the practice effects in cognitive tests were less pronounced among MS patients than among healthy controls.
At an individual level, the spectrum of MS patients' cognitive deficits was wide with regard to their characteristics, severity, and evolution. The PASAT was moderately accurate in detecting MS-associated cognitive impairment: 69% of patients were correctly classified as cognitively impaired or unimpaired when the comprehensive neuropsychological assessment was used as a "gold standard". Self-reported nervousness and poor arithmetical skills seemed to explain misclassifications. MS-related fatigue was objectively demonstrated as fading performance towards the end of the test. Despite the observed practice effect, the reliability of the PASAT was excellent, and it was sensitive to the cognitive decline taking place during the follow-up in a subgroup of patients. The PASAT can be recommended for use in the neuropsychological assessment of MS patients. The test is fairly sensitive but less specific; consequently, the reasons for low scores have to be carefully identified before interpreting them as clinically significant.
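Classification accuracy against a gold standard, as reported above, reduces to the usual confusion-matrix quantities. The counts below are invented (they merely reproduce a 69% overall accuracy for 45 patients, 42% of whom are impaired); the thesis does not report the full confusion matrix here:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and overall accuracy of a screening test
    against a gold-standard classification."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Hypothetical counts: 19 impaired (tp + fn), 26 unimpaired (tn + fp)
sens, spec, acc = screening_metrics(tp=14, fp=9, fn=5, tn=17)   # acc = 31/45 ≈ 0.69
```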

Relevance:

20.00%

Publisher:

Abstract:

Matrix metalloproteinase (MMP) -8, collagenase-2, is a key mediator of irreversible tissue destruction in chronic periodontitis and is detectable in gingival crevicular fluid (GCF). MMP-8 mostly originates from neutrophil leukocytes, the first-line defence cells, which exist abundantly in GCF, especially in inflammation. MMP-8 is capable of degrading almost all extracellular matrix and basement membrane components and is especially efficient against type I collagen. Thus the expression of MMP-8 in GCF could be valuable in monitoring the activity of periodontitis and possibly offers a diagnostic means to predict progression of periodontitis. In this study, the value of MMP-8 detection from GCF in monitoring periodontal health and disease was evaluated, with special reference to its ability to differentiate periodontal health from different disease states of the periodontium and to recognise the progression of periodontitis, i.e. active sites. For chair-side detection of MMP-8 from GCF or peri-implant sulcus fluid (PISF) samples, a dipstick test based on immunochromatography involving two monoclonal antibodies was developed. The immunoassay for the detection of MMP-8 from GCF was found to be more suitable for monitoring periodontitis than detection of GCF elastase concentration or activity. Periodontally healthy subjects and individuals suffering from gingivitis or periodontitis could be differentiated by means of GCF MMP-8 levels and dipstick testing when the positive threshold value of the MMP-8 chair-side test was set at 1000 µg/l. MMP-8 dipstick test results from periodontally healthy subjects and from subjects with gingivitis were mainly negative, while periodontitis patients' sites with deep (≥ 5 mm) pockets that bled on probing were most often test-positive. Periodontitis patients' GCF MMP-8 levels decreased with hygiene-phase periodontal treatment (scaling and root planing, SRP) and decreased further during the three-month maintenance phase.
A decrease in GCF MMP-8 levels could be monitored with the MMP-8 test. Agreement between the test stick and the quantitative assay was very good (κ = 0.81), and the test provided a baseline sensitivity of 0.83 and specificity of 0.96. During the 12-month longitudinal maintenance phase, periodontitis patients' progressing sites (sites with an increase in attachment loss ≥ 2 mm during the maintenance phase) had elevated GCF MMP-8 levels compared with stable sites. In general, mean MMP-8 concentrations in smokers' (S) sites were lower than in non-smokers' (NS) sites, but in progressing S and NS sites concentrations were at an equal level. Sites with exceptionally and repeatedly elevated MMP-8 concentrations during the maintenance phase were clustered in smoking patients with a poor response to SRP (refractory patients). These sites especially were identified by the MMP-8 test. Subgingival plaque samples from periodontitis patients' deep periodontal pockets were examined by polymerase chain reaction (PCR) to find out whether periodontal lesions may serve as a niche for Chlamydia pneumoniae. Findings were compared with the clinical periodontal parameters and GCF MMP-8 levels to determine the correlation with periodontal status. Traces of C. pneumoniae were identified from one periodontitis patient's pooled subgingival plaque sample by means of PCR. After periodontal treatment (SRP) the sample was negative for C. pneumoniae. Clinical parameters or biomarkers (MMP-8) of the patient with the positive C. pneumoniae finding did not differ from those of the other study patients. In this study it was concluded that MMP-8 concentrations in GCF of sites from periodontally healthy individuals, subjects with gingivitis, and subjects with periodontitis are at different levels. The cut-off value of the developed MMP-8 test is at an optimal level to differentiate between these conditions and can possibly be utilised in identifying individuals at risk of the transition from gingivitis to periodontitis.
In periodontitis patients, repeatedly elevated GCF MMP-8 concentrations may indicate sites at risk of progression of periodontitis, as well as patients with a poor response to conventional periodontal treatment (SRP). This can be monitored by MMP-8 testing. Despite the lower mean GCF MMP-8 concentrations in smokers, a fraction of smokers' sites expressed very high MMP-8 concentrations together with enhanced periodontal activity and could be identified with the MMP-8-specific chair-side test. Deep periodontal lesions may be niches for non-periodontopathogenic micro-organisms with systemic effects, such as C. pneumoniae, and possibly play a role in transmission from one subject to another.
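The κ = 0.81 agreement between the dipstick and the quantitative assay is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. The agreement table below is invented for illustration; only the formula comes from standard practice:

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table, where table[i][j]
    counts cases rated i by method 1 and j by method 2."""
    n = sum(sum(row) for row in table)
    p_observed = sum(table[i][i] for i in range(len(table))) / n
    p_chance = sum((sum(table[i]) / n) *
                   (sum(row[i] for row in table) / n)
                   for i in range(len(table)))
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical 2x2 table: dipstick result (rows: negative/positive)
# vs. quantitative assay (columns: below/above the 1000 µg/l cut-off)
kappa = cohens_kappa([[40, 3], [5, 52]])   # ≈ 0.84, "very good" agreement
```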

Relevance:

20.00%

Publisher:

Abstract:

Drug Analysis without Primary Reference Standards: Application of LC-TOFMS and LC-CLND to Biofluids and Seized Material

Primary reference standards for new drugs, metabolites, designer drugs or rare substances may not be obtainable within a reasonable period of time, or their availability may be hindered by extensive administrative requirements. Standards are usually costly and may have a limited shelf life. Finally, many compounds are not available commercially and sometimes not at all. A new approach within forensic and clinical drug analysis involves substance identification based on accurate mass measurement by liquid chromatography coupled with time-of-flight mass spectrometry (LC-TOFMS) and quantification by LC coupled with chemiluminescence nitrogen detection (LC-CLND), which possesses an equimolar response to nitrogen. Formula-based identification relies on the fact that the accurate mass of an ion from a chemical compound corresponds to the elemental composition of that compound. Single-calibrant nitrogen-based quantification is feasible with a nitrogen-specific detector, since approximately 90% of drugs contain nitrogen. A method was developed for toxicological drug screening in 1 ml urine samples by LC-TOFMS. A large target database of exact monoisotopic masses was constructed, representing the elemental formulae of reference drugs and their metabolites. Identification was based on matching the sample component's measured parameters with those in the database, including accurate mass and retention time, if available. In addition, an algorithm for isotopic pattern match (SigmaFit) was applied. Differences in ion abundance in urine extracts did not affect the mass accuracy or the SigmaFit values. For routine screening practice, a mass tolerance of 10 ppm and a SigmaFit tolerance of 0.03 were established. Seized street drug samples were analysed instantly by LC-TOFMS and LC-CLND, using a dilute-and-shoot approach.
In the quantitative analysis of amphetamine, heroin and cocaine findings, the mean relative difference between the results of LC-CLND and the reference methods was only 11%. In blood specimens, liquid-liquid extraction recoveries for basic lipophilic drugs were first established and the validity of the generic extraction recovery-corrected single-calibrant LC-CLND was then verified with proficiency test samples. The mean accuracy was 24% and 17% for plasma and whole blood samples, respectively, all results falling within the confidence range of the reference concentrations. Further, metabolic ratios for the opioid drug tramadol were determined in a pharmacogenetic study setting. Extraction recovery estimation, based on model compounds with similar physicochemical characteristics, produced clinically feasible results without reference standards.
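The database matching step described above can be sketched as follows: a measured accurate mass is accepted as a hit when its relative error against a candidate formula's exact monoisotopic mass is within the 10 ppm tolerance. The formulae and masses below are illustrative values, not the thesis's actual database:

```python
def ppm_error(measured, theoretical):
    """Relative mass error in parts per million."""
    return (measured - theoretical) / theoretical * 1e6

def match_formulae(measured_mass, database, tol_ppm=10.0):
    """Return all formulae whose exact mass lies within tol_ppm of the measurement."""
    return sorted(f for f, m in database.items()
                  if abs(ppm_error(measured_mass, m)) <= tol_ppm)

# Illustrative monoisotopic masses of neutral molecules
database = {
    "C9H13N": 135.1048,    # e.g. amphetamine
    "C10H15N": 149.1204,   # e.g. methamphetamine
}
hits = match_formulae(135.1052, database)   # ~3 ppm error: a hit
```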

Relevance:

20.00%

Publisher:

Abstract:

Objectives: To evaluate the applicability of visual feedback posturography (VFP) for quantification of postural control, and to characterize the horizontal angular vestibulo-ocular reflex (AVOR) by use of a novel motorized head impulse test (MHIT). Methods: In VFP, subjects standing on a platform were instructed to move their center of gravity to symmetrically placed peripheral targets as fast and accurately as possible. The active postural control movements were measured in healthy subjects (n = 23), and in patients with vestibular schwannoma (VS) before surgery (n = 49), one month (n = 17), and three months (n = 36) after surgery. In MHIT we recorded head and eye position during motorized head impulses (mean velocity 170°/s, acceleration 1550°/s²) in healthy subjects (n = 22), and in patients with VS before surgery (n = 38) and about four months afterwards (n = 27). The gain, asymmetry and latency in MHIT were calculated. Results: The intraclass correlation coefficient for VFP parameters during repeated tests was significant (r = 0.78-0.96; p < 0.01), although two of the four VFP parameters improved slightly during five test sessions in controls. At least one VFP parameter was abnormal pre- and postoperatively in almost half the patients, and these abnormal preoperative VFP results correlated significantly with abnormal postoperative results. The mean accuracy of postural control in patients was reduced pre- and postoperatively. A significant side difference with VFP was evident in 10% of patients. In the MHIT, the normal gain was close to unity, the asymmetry in gain was within 10%, and the latency (mean ± standard deviation) was 3.4 ± 6.3 milliseconds. Ipsilateral gain or asymmetry in gain was preoperatively abnormal in 71% of patients, whereas it was abnormal in every patient after surgery. Preoperative gain (mean ± 95% confidence interval) was significantly lowered to 0.83 ± 0.08 on the ipsilateral side compared to 0.98 ± 0.06 on the contralateral side.
The ipsilateral postoperative mean gain of 0.53 ± 0.05 was significantly different from preoperative gain. Conclusion: The VFP is a repeatable, quantitative method to assess active postural control within individual subjects. The mean postural control in patients with VS was disturbed before and after surgery, although not severely. Side difference in postural control in the VFP was rare. The horizontal AVOR results in healthy subjects and in patients with VS, measured with MHIT, were in agreement with published data achieved using other techniques with head impulse stimuli. The MHIT is a non-invasive method which allows reliable clinical assessment of the horizontal AVOR.
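The MHIT gain and asymmetry can be computed as follows. The gain definition (compensatory eye velocity over head velocity) is standard; the percentage asymmetry formula below is a common Jongkees-style convention and is an assumption, since the abstract does not spell it out:

```python
def avor_gain(eye_velocity, head_velocity):
    """AVOR gain: magnitude of compensatory eye velocity divided by
    head velocity magnitude (both in degrees per second)."""
    return abs(eye_velocity) / abs(head_velocity)

def gain_asymmetry(gain_ipsi, gain_contra):
    """Percentage asymmetry between the two sides (assumed convention)."""
    return 100.0 * (gain_contra - gain_ipsi) / (gain_contra + gain_ipsi)

# With the reported preoperative mean gains, asymmetry is about 8%
asym = gain_asymmetry(0.83, 0.98)
```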

Relevance:

20.00%

Publisher:

Abstract:

Purpose: The aim of the present study was to develop and test new digital imaging equipment and methods for diagnosis and follow-up of ocular diseases. Methods: The whole material comprised 398 subjects (469 examined eyes), including 241 patients with melanocytic choroidal tumours, 56 patients with melanocytic iris tumours, 42 patients with diabetes, a 52-year-old patient with the chronic phase of VKH disease, a 30-year-old patient with an old blunt eye injury, and 57 normal healthy subjects. Digital 50° (Topcon TRC 50 IA) and 45° (Canon CR6-45NM) fundus cameras, a new handheld digital colour video camera for eye examinations (MediTell), a new subtraction method using the Topcon Image Net Program (Topcon Corporation, Tokyo, Japan), a new method for digital IRT imaging of the iris that we developed, and a Zeiss photo slit lamp with a digital camera body were used for digital imaging. Results: Digital 50° red-free imaging had a sensitivity of 97.7%, and two-field 45° and 50° colour imaging a sensitivity of 88.9-94%. The specificity of the digital 45°-50° imaging modalities was 98.9-100% versus the reference standard, with 1.2-1.6% of images ungradeable. By using the handheld digital colour video camera, only the optic disc and central fundus located inside 20° from the fovea could be recorded, with a sensitivity of 6.9% for detection of at least mild NPDR when compared with the reference standard. Comparative use of digital colour, red-free, and red light imaging showed 85.7% sensitivity, 99% specificity, and 98.2% exact agreement versus the reference standard in differentiation of small choroidal melanoma from pseudomelanoma. The new subtraction method showed growth in four of 94 melanocytic tumours (4.3%) during a mean ± SD follow-up of 23 ± 11 months.
The new digital IRT imaging of the iris showed the sphincter muscle and radial contraction folds of Schwalbe in the pupillary zone, and radial structural folds of Schwalbe and circular contraction furrows in the ciliary zone of the iris. The 52-year-old patient with a chronic phase of VKH disease showed extensive atrophy and occasional pigment clumps in the iris stroma, detachment of the ciliary body with severe ocular hypotony, and shallow retinal detachment of the posterior pole in both eyes. Infrared transillumination imaging and fluorescein angiographic findings of the iris showed that IR translucence (p=0.53), complete masking of fluorescence (p=0.69), presence of disorganized vessels (p=0.32), and fluorescein leakage (p=1.0) at the site of the lesion did not differentiate an iris nevus from a melanoma. Conclusions: Digital 50° red-free and two-field 50° or 45° colour imaging were suitable for DR screening, whereas the handheld digital video camera did not fulfill the needs of DR screening. Comparative use of digital colour, red-free and red light imaging was a suitable method for differentiating small choroidal melanoma from different pseudomelanomas. The subtraction method may reveal early growth of melanocytic choroidal tumours. Digital IRT imaging may be used to study changes of the stroma and posterior surface of the iris in various diseases of the uvea. It helped reveal iris atrophy and serous detachment of the ciliary body with ocular hypotony, together with the shallow retinal detachment of the posterior pole, as new findings of the chronic phase of VKH disease. Infrared translucence and angiographic findings are useful in differential diagnosis of melanocytic iris tumours, but they cannot be used to determine whether the lesion is benign or malignant.

Relevance:

20.00%

Publisher:

Abstract:

Topics in Spatial Econometrics — With Applications to House Prices

Spatial effects in data occur when geographical closeness of observations influences the relation between the observations. When two points on a map are close to each other, the observed values of a variable at those points tend to be similar. The further away the two points are from each other, the less similar the observed values tend to be. Recent technical developments, geographical information systems (GIS) and global positioning systems (GPS), have brought about a renewed interest in spatial matters. For instance, it is possible to observe the exact location of an observation and combine it with other characteristics. Spatial econometrics integrates spatial aspects into econometric models and analysis. The thesis concentrates mainly on methodological issues, but the findings are illustrated by empirical studies on house price data. The thesis consists of an introductory chapter and four essays. The introductory chapter presents an overview of topics and problems in spatial econometrics. It discusses spatial effects, spatial weights matrices, especially k-nearest-neighbours weights matrices, and various spatial econometric models, as well as estimation methods and inference. Further, the problem of omitted variables, a few computational and empirical aspects, the bootstrap procedure and the spatial J-test are presented. In addition, a discussion of hedonic house price models is included. In the first essay a comparison is made between spatial econometrics and time series analysis. By restricting attention to unilateral spatial autoregressive processes, it is shown that a unilateral spatial autoregression, which enjoys similar properties to an autoregression with time series, can be defined.
In the second essay, an empirical study on house price data shows that it is possible to form coordinate-based, spatially autoregressive variables which are, at least to some extent, able to replace the spatial structure in a spatial econometric model. In the third essay a strategy for specifying a k-nearest-neighbours weights matrix by applying the spatial J-test is suggested, studied and demonstrated. In the fourth and final essay the properties of the asymptotic spatial J-test are examined further. A simulation study shows that the spatial J-test can be used for distinguishing between general spatial models with different k-nearest-neighbours weights matrices. A bootstrap spatial J-test is suggested to correct the size of the asymptotic test in small samples.
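A k-nearest-neighbours spatial weights matrix of the kind discussed above can be built directly from point coordinates. The row-standardised form below is one common convention, sketched with the standard library only:

```python
from math import dist

def knn_weights(coords, k):
    """Row-standardised k-nearest-neighbour spatial weights matrix:
    W[i][j] = 1/k if j is among the k points closest to i, else 0."""
    n = len(coords)
    W = [[0.0] * n for _ in range(n)]
    for i, p in enumerate(coords):
        neighbours = sorted((j for j in range(n) if j != i),
                            key=lambda j: dist(p, coords[j]))[:k]
        for j in neighbours:
            W[i][j] = 1.0 / k   # each row sums to one
    return W

# Three nearby houses and one distant one; note W need not be symmetric
W = knn_weights([(0, 0), (0, 1), (0, 2), (5, 5)], k=2)
```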

Relevance:

20.00%

Publisher:

Abstract:

This paper investigates the clustering pattern in the Finnish stock market. Using trading volume and time as factors capturing the clustering pattern in the market, the Keim and Madhavan (1996) and Engle and Russell (1998) models provide the framework for the analysis. The descriptive and parametric analyses provide evidence that an important determinant of the famous U-shape pattern in the market is the rate of information arrivals, as measured by large trading volumes and durations at the market open and close. Specifically: 1) the larger the trading volume, the greater the impact on prices both in the short and the long run; thus prices will differ across quantities. 2) Large trading volume is a non-linear function of price changes in the long run. 3) Arrival times are positively autocorrelated, indicating a clustering pattern, and 4) information arrivals, as approximated by durations, are negatively related to trading flow.
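The positive autocorrelation of arrival times can be checked on a series of trade durations with a plain first-order sample autocorrelation; in the Engle-Russell ACD framework this dependence is modelled explicitly rather than merely measured. The duration series below is invented purely to illustrate clustering:

```python
def lag1_autocorr(x):
    """First-order sample autocorrelation of a series."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, n))
    den = sum((v - m) ** 2 for v in x)
    return num / den

# Invented durations (seconds between trades): short runs followed by long runs
durations = [1, 1, 1, 10, 10, 10, 1, 1, 1, 10, 10, 10]
r = lag1_autocorr(durations)   # clearly positive: clustering
```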

Relevance:

20.00%

Publisher:

Abstract:

This paper is concerned with using the bootstrap to obtain improved critical values for the error correction model (ECM) cointegration test in dynamic models. In the paper we investigate the effects of dynamic specification on the size and power of the ECM cointegration test with bootstrap critical values. The results from a Monte Carlo study show that the size of the bootstrap ECM cointegration test is close to the nominal significance level. We find that overspecification of the lag length results in a loss of power. Underspecification of the lag length results in size distortion. The performance of the bootstrap ECM cointegration test deteriorates if the correct lag length is not used in the ECM. The bootstrap ECM cointegration test is therefore not robust to model misspecification.
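The bootstrap critical values at the heart of the paper can be sketched generically: resample residuals, rebuild a data set under the null of no cointegration, recompute the test statistic, and read off the empirical quantile. The ECM t-statistic itself is left-tailed, so in practice the lower quantile is used; the sketch below is a bare-bones illustration of the resampling idea, not the paper's procedure:

```python
import random

def bootstrap_critical_value(residuals, build_sample, statistic,
                             n_boot=999, alpha=0.05):
    """Empirical (1 - alpha) critical value of `statistic` under the null.
    build_sample: rebuilds a null-model data set from resampled residuals.
    For a left-tailed test (e.g. the ECM t-statistic), take the alpha
    quantile of the sorted bootstrap statistics instead."""
    stats = sorted(
        statistic(build_sample([random.choice(residuals)
                                for _ in residuals]))
        for _ in range(n_boot)
    )
    return stats[int((1 - alpha) * n_boot)]
```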

Relevance:

20.00%

Publisher:

Abstract:

This study measures the development of productivity in piglet production on ProAgria pig-farm accounting farms in 2003-2008. Productivity is measured with a Fisher productivity index, which is decomposed into technical, allocative and scale efficiency as well as technological change and a price effect. Measured with a productivity index aggregated over the whole data, productivity grew by a total of 14.3% over five years, an annual growth of 2.7%. The producers' average productivity index gives nearly the same result: according to it, productivity grows by a total of 14.7%, i.e. 2.8% per year. Improved scale efficiency is found to be the most important source of productivity growth: it improves by 1.6% per year measured in aggregate and by 2.1% per year on average across farms. Improved technical efficiency is the second factor promoting productivity growth over the study period; by both measures the increase averages 1.4% per year. Allocative efficiency declines slightly: by 0.1% per year measured in aggregate and by 0.4% per year on average. Technological change over the study period is slightly negative, averaging -0.1% per year, although annual variation is strong. Price changes have had little effect on the level of productivity, as the annual changes in the price effect remain below half a percent in every year and the average annual change is -0.1%. A key driver of productivity growth appears to have been growth in farm size, which has improved structural efficiency. The fact that technological change remained negative, however, means that the best observed productivity level did not rise at all.
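The Fisher quantity index underlying such a productivity index is the geometric mean of the Laspeyres and Paasche indices, and a Fisher productivity index is then the ratio of the Fisher output index to the Fisher input index. A minimal sketch with invented prices and quantities:

```python
def fisher_quantity_index(p0, q0, p1, q1):
    """Fisher ideal quantity index between periods 0 and 1:
    the geometric mean of the Laspeyres and Paasche indices."""
    laspeyres = (sum(p * q for p, q in zip(p0, q1))
                 / sum(p * q for p, q in zip(p0, q0)))
    paasche = (sum(p * q for p, q in zip(p1, q1))
               / sum(p * q for p, q in zip(p1, q0)))
    return (laspeyres * paasche) ** 0.5

def fisher_productivity_index(out0, out1, in0, in1):
    """Productivity growth: Fisher output index / Fisher input index.
    Each argument is a (prices, quantities) pair for one period."""
    return (fisher_quantity_index(out0[0], out0[1], out1[0], out1[1])
            / fisher_quantity_index(in0[0], in0[1], in1[0], in1[1]))

# Invented example: two outputs grow 15% in quantity, the single input is unchanged
growth = fisher_productivity_index(([1, 2], [10, 5]),
                                   ([1.1, 2.2], [11.5, 5.75]),
                                   ([3], [4]), ([3], [4]))   # ≈ 1.15
```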

Relevance:

20.00%

Publisher:

Abstract:

A distributed system is a collection of networked autonomous processing units which must work in a cooperative manner. Currently, large-scale distributed systems, such as various telecommunication and computer networks, are abundant and used in a multitude of tasks. The field of distributed computing studies what can be computed efficiently in such systems. Distributed systems are usually modelled as graphs where nodes represent the processors and edges denote communication links between processors. This thesis concentrates on the computational complexity of the distributed graph colouring problem. The objective of the graph colouring problem is to assign a colour to each node in such a way that no two nodes connected by an edge share the same colour. In particular, it is often desirable to use only a small number of colours. This task is a fundamental symmetry-breaking primitive in various distributed algorithms. A graph that has been coloured in this manner using at most k different colours is said to be k-coloured. This work examines the synchronous message-passing model of distributed computation: every node runs the same algorithm, and the system operates in discrete synchronous communication rounds. During each round, a node can communicate with its neighbours and perform local computation. In this model, the time complexity of a problem is the number of synchronous communication rounds required to solve the problem. It is known that 3-colouring any k-coloured directed cycle requires at least ½(log* k - 3) communication rounds and is possible in ½(log* k + 7) communication rounds for all k ≥ 3. This work shows that for any k ≥ 3, colouring a k-coloured directed cycle with at most three colours is possible in ½(log* k + 3) rounds. In contrast, it is also shown that for some values of k, colouring a directed cycle with at most three colours requires at least ½(log* k + 1) communication rounds. 
Furthermore, in the case of directed rooted trees, reducing a k-colouring into a 3-colouring requires at least log* k + 1 rounds for some k, and is possible in log* k + 3 rounds for all k ≥ 3. The new positive and negative results are derived using computational methods, as the existence of distributed colouring algorithms corresponds to the colourability of so-called neighbourhood graphs. The colourability of these graphs is analysed using Boolean satisfiability (SAT) solvers. Finally, this thesis shows that similar methods are applicable in capturing the existence of distributed algorithms for other graph problems, such as the maximal matching problem.
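The iterated logarithm log* appearing in these bounds counts how many times the base-2 logarithm must be applied before the value drops to at most 1. It grows extremely slowly, which is why the round complexities above are nearly constant in practice:

```python
from math import log2

def log_star(k):
    """Iterated base-2 logarithm: the number of applications of log2
    needed to bring k down to at most 1."""
    count = 0
    while k > 1:
        k = log2(k)
        count += 1
    return count

# Even for k = 65536 = 2**16, only four applications are needed:
# 65536 -> 16 -> 4 -> 2 -> 1
```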

Relevance:

20.00%

Publisher:

Abstract:

Vegetation maps and bioclimatic zone classifications communicate the vegetation of an area and are used to explain how the environment regulates the occurrence of plants on large scales. Many practices and methods for dividing the world's vegetation into smaller entities have been presented. Climatic parameters, floristic characteristics, or edaphic features have been relied upon as decisive factors, and plant species have been used as indicators for vegetation types or zones. Systems depicting vegetation patterns that mainly reflect climatic variation are termed 'bioclimatic' vegetation maps. Based on these, it has been judged logical to deduce that plants moved between corresponding bioclimatic areas should thrive in the target location, whereas plants moved from a different zone should languish. This principle is routinely applied in forestry and horticulture, but actual tests of the validity of bioclimatic maps in this sense seem scanty. In this study I tested the Finnish bioclimatic vegetation zone system (BZS). Relying on the Kumpula collection of the Helsinki University Botanic Garden, which according to the BZS is situated at the northern limit of the hemiboreal zone, I aimed to test how the plants' survival depends on their provenance. My expectation was that plants from the hemiboreal or southern boreal zones should do best in Kumpula, whereas plants from more southern and more northern zones should show progressively lower survival probabilities. I estimated the probability of survival using collection database information on plant accessions of known wild origin grown in Kumpula since the mid-1990s, and logistic regression models. The total number of accessions I included in the analyses was 494. Because of problems with some accessions, I chose to separately analyse a subset of the complete data, which included 379 accessions.
I also analysed different growth forms separately in order to identify differences in probability of survival due to different life strategies. In most analyses, accessions of temperate and hemiarctic origin showed a lower survival probability than those originating from any of the boreal subzones, which among them exhibited rather evenly high probabilities. Exceptionally mild and wet winters during the study period may have killed off hemiarctic plants, while some winters may have been too harsh for temperate accessions. Trees behaved differently: they showed an almost steadily increasing survival probability from temperate to northern boreal origins. Various factors that could not be controlled for may have affected the results, some of which were difficult to interpret. This was the case in particular with herbs, for which the reliability of the analysis suffered because of difficulties in managing their curatorial data. In all, the results gave some support to the BZS, and especially to its hierarchical zonation. However, I question the validity of the formulation of the hypothesis I tested, since it may not be entirely justified by the BZS, which was designed for intercontinental comparison of vegetation zones, not specifically for transcontinental provenance trials. I conclude that botanic gardens should pay due attention to information management and curatorial practices to ensure the widest possible applicability of their plant collections.
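In a binary logistic regression of the kind used here, a fitted model converts a linear predictor into a survival probability through the inverse logit. The coefficients below are invented for illustration, not estimates from the thesis:

```python
from math import exp

def survival_probability(intercept, coefficients, predictors):
    """P(survival) from a fitted logistic regression:
    logit(p) = intercept + sum of coefficient * predictor terms."""
    eta = intercept + sum(b * x for b, x in zip(coefficients, predictors))
    return 1.0 / (1.0 + exp(-eta))

# Invented coefficients with a single indicator for boreal provenance:
# a positive coefficient means boreal accessions survive better
p_boreal = survival_probability(0.2, [0.9], [1])
p_temperate = survival_probability(0.2, [0.9], [0])
```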